ENCYCLOPEDIA OF
AMERICAN GOVERNMENT AND CIVICS
Encyclopedia of
American Government and Civics
Michael A. Genovese and Lori Cox Han
Encyclopedia of American Government and Civics
Copyright © 2009 by Michael A. Genovese and Lori Cox Han
All rights reserved. No part of this book may be reproduced or utilized in any form or by any means, electronic or mechanical, including photocopying, recording, or by any information storage or retrieval systems, without permission in writing from the publisher. For information contact:
Facts On File, Inc.
An imprint of Infobase Publishing
132 West 31st Street
New York, NY 10001
Genovese, Michael A.
Encyclopedia of American government and civics / Michael A. Genovese and Lori Cox Han.
p. cm.
Includes bibliographical references and index.
ISBN 978-0-8160-6616-2 (hc: alk. paper)
1. United States—Politics and government—Encyclopedias. 2. Civics—Encyclopedias. I. Han, Lori Cox. II. Title.
JK9.G46 2008
320.47303—dc22
2007043813
Facts On File books are available at special discounts when purchased in bulk quantities for businesses, associations, institutions, or sales promotions. Please call our Special Sales Department in New York at (212) 967-8800 or (800) 322-8755.
You can find Facts On File on the World Wide Web at http://www.factsonfile.com
Text design by Kerry Casey
Cover design by Salvatore Luongo
Illustrations by Jeremy Eagle
Printed in the United States of America
VB BVC 10 9 8 7 6 5 4 3 2 1
This book is printed on acid-free paper and contains 30 percent postconsumer recycled content.
Contents
List of Entries
Introduction
Contributor List
Foundations and Background of U.S. Government
Civil Rights and Civic Responsibilities
Political Participation
Legislative Branch
Executive Branch
Judicial Branch
Public Policy
State and Local Government
International Politics and Economics
Selected Bibliography
Appendices
LIST OF ENTRIES
FOUNDATIONS AND BACKGROUND OF U.S. GOVERNMENT
accountability Albany Plan of Union antifederalists Articles of Confederation Bill of Rights checks and balances colonial governments commerce clause common law concurrent powers Constitution, U.S. constitutional amendments constitutional Convention of 1787 Continental Congress Declaration of Independence democracy direct (participatory) democracy divine right of kings eminent domain English Bill of Rights (1688) federalism Federalist, The Great Compromise, The habeas corpus implied powers (elastic clause) Iroquois Confederacy Locke, John Magna Carta
Mayflower Compact monarchy natural rights New Jersey Plan parliamentary government representative democracy republic rule of law separation of powers slavery social contract state states’ rights supremacy clause totalitarianism Virginia Plan
CIVIL RIGHTS AND CIVIC RESPONSIBILITIES affirmative action asylum censorship citizenship civic responsibility civil disobedience civil liberties civil rights Civil Rights movement conscientious objector double jeopardy due process
equality equal protection freedom freedom of association freedom of religion freedom of speech freedom of the press gay and lesbian rights gender discrimination Jim Crow laws justice liberty Miranda warning naturalization right to privacy search and seizure sedition suffrage suffragist movement sunshine laws trial by jury voting voting regulations women’s rights
POLITICAL PARTICIPATION absentee and early voting campaign finance campaigning caucus coalition consensus conservative tradition corruption Democratic Party elections grassroots politics ideology interest groups liberal tradition lobbying media multiparty system negative campaigning party conventions party platform patronage
political action committees (PACs) political advertising political cartoons political culture, American political participation political parties political socialization political symbols politics polling primary system propaganda protest public opinion Republican Party third parties two-party system voter turnout
LEGISLATIVE BRANCH advice and consent appropriations bicameral legislature bill (acts of Congress) budget process casework caucus, legislative censure, legislative census code of legislative ethics committee system Congressional Budget Office congressional immunity congressional leadership congressional staffing constituency delegation of legislative powers districts and apportionment divided government filibuster floor debate franking gerrymandering (reapportionment) Government Accountability Office House of Representatives incumbency legislative branch
legislative process Library of Congress logrolling party politics pork-barrel expenditures private bills public bills representatives resolution rider rules committee Senate senators term, congressional (including special sessions) term limits (legislative) veto, legislative Ways and Means Committee
EXECUTIVE BRANCH administrative presidency appointment power assassinations Atomic Energy Commission attorney general, U.S. bully pulpit bureaucracy cabinet Central Intelligence Agency Chief of Staff, White House civil service system coattails commander in chief Council of Economic Advisers cozy triangles debates Department of Agriculture Department of Commerce Department of Defense Department of Education Department of Energy Department of Health and Human Services Department of Homeland Security Department of Housing and Urban Development Department of Justice Department of Labor Department of State Department of the Interior
Department of Transportation Department of Treasury Department of Veterans Affairs disability (presidential) electoral college emergency powers (presidential) Environmental Protection Agency evolution of presidential power executive agencies executive agreements executive branch Executive Office of the President executive orders executive privilege Federal Communications Commission Federal Emergency Management Agency (FEMA) Federal Energy Regulatory Commission Federal Reserve System Federal Trade Commission findings, presidential first ladies foreign policy power Hundred Days impeachment impoundment Interstate Commerce Commission Iran-contra scandal Joint Chiefs of Staff mandate National Labor Relations Board National Security Advisor National Security Council Nuclear Regulatory Commission Occupational Safety and Health Administration Office of Management and Budget pardon (executive) presidency presidential corruption presidential election presidential inaugurations presidential leadership presidential succession President’s Daily Briefing (PDB) regulation removal power renditions, extraordinary Securities and Exchange Commission selective service system
signing statements solicitor general, U.S. State of the Union Address transitions treaty making unitary executive United States Trade Representative veto, presidential vice president war powers Watergate
JUDICIAL BRANCH administrative law American Bar Association amicus curiae associate justice of the Supreme Court capital punishment chief justice of the United States constitutional law contract law district courts federal judges Foreign Intelligence Surveillance Act Court judicial branch judicial philosophy judicial review jurisdiction law clerks maritime law opinions, U.S. Supreme Court plea bargaining political question doctrine precedent tribunals, military U.S. Court of Appeals for the Armed Services U.S. Court of Federal Claims U.S. Courts of Appeals U.S. Supreme Court U.S. Tax Court writ of certiorari
PUBLIC POLICY aging policy arms control
collective bargaining Consumer Price Index defense policy diplomatic policy disability policy drug policy education policy energy policy entitlements environmental policy federal debt and deficit fiscal policy foreign policy Great Society gun control health-care policy housing policy immigration income taxes Keynesian economics labor policy minimum wage New Deal public assistance public debt public utilities reproductive and sexual health policy secrecy social security supply-side economics telecommunication policy transportation policy welfare policy
STATE AND LOCAL GOVERNMENT board of education board of elections bonds (local government) budgets, state and local campaign finance (state and local) charter, municipal and town city manager constitutions, state correction systems county government county manager (executive) governor
initiative (direct democracy) intergovernmental relations justices of the peace legislative process, state and local legislatures, state mayor militias, state municipal courts municipal government municipal home rule planning boards property rights recalls referendums special districts state courts state courts of appeals state government state judges state representative state senator taxation, state and local town meeting urban development zoning
INTERNATIONAL POLITICS AND ECONOMICS capitalism command economy
communism developed countries developing countries distributive justice European Union globalization international law International Monetary Fund (IMF) international trade liberal democracy liberalism market economy nationalization newly industrialized countries North American Free Trade Agreement (NAFTA) North Atlantic Treaty Organization (NATO) Organization of Petroleum Exporting Countries (OPEC) social democracy socialism ugly American United Nations United States Agency for International Development (USAID) welfare state World Bank World Trade Organization (WTO)
INTRODUCTION
How does one begin to explain the American system of government? Relying on the basic structure of government found within the U.S. Constitution and definitions of key terms, it might seem to some a simple and straightforward task. However, there is no simple explanation that can do justice to the complex and often chaotic political system in which the American government operates. In terms of its brevity and descriptive nature, perhaps the best definition that we have ever found to describe the American political system comes from one of our former graduate school professors at the University of Southern California, the late Herbert E. Alexander, who used to give the following definition to students on the first day in his American government courses: “The United States is governed under a system of separation of powers (checks and balances) by a President elected by the Electoral College but responsible to the people and subject to a co-equal Congress composed of two semi-autonomous bodies with two decentralized and non-ideological and undisciplined parties whose candidates mainly are nominated independently of the parties, then elected first past the post at fixed election dates by an unstable and relatively small electorate whose political interests are diverse, often volatile, and sometimes mobilized into interest organizations that seek to influence the outcome of elections and of public policy decisions. A co-equal judiciary has the power of judicial review of executive and legislative actions, and of state actions. Powers are further divided as between the national and state governments in a federal system.”
In the end, this definition seems to both avoid and embrace the many contradictions and complexities that make up the very essence of the American political and policymaking processes. Contradictions, both large and small, abound within American government; the U.S. Constitution itself grants power to the three branches of the federal government, while at the same time, the Bill of Rights limits the power of the government in relation to individual citizens. In addition, political power can emanate from many places, including formal institutions (such as a legislature) and formal political actors (such as a president or state governor) and nonformal institutions and political actors as well (such as a grassroots movement of citizens at the local level). It is important to remember that not only are the governing institutions themselves not static, but the political actors who participate in the governing process help to create a constantly evolving system of policymaking in an effort to meet the needs of a constantly evolving society. While we certainly acknowledge that providing the definitive explanation of how American government operates is perhaps an impossible task, we nonetheless offer this three-volume encyclopedia of American government and civics to provide nearly 400 essay-styled entries that cover a vast range of important topics and issues for any student of the American political system. The topics included cover the most fundamental concepts and terms relating to American government and civics; that is, the government and how it operates, as well as
citizens and how they participate within the political, electoral, and policymaking processes. The encyclopedia is divided into nine sections. The first section includes the fundamental concepts, as well as theories and terms related to them, Foundations and Background of U.S. Government, which covers the Founding Era and the evolution of the American system of constitutional government. The next two sections are devoted to Civil Rights and Civic Responsibilities, and Political Participation, which emphasize the many ways in which citizens act within and as part of the governing process, as well as how they react to government policies and decisions. The next three sections cover the three branches of the federal government, the Legislative Branch, the Executive Branch, and the Judicial Branch, and detail the unique institutional aspects and political actors found in each. The final three sections cover Public Policy, State and Local Government, and International Politics and Economics, which can be categorized as the most important outputs of the federal government in terms of what it produces (through policymaking and the distribution of resources to citizens at the federal, state, and local levels), and how the American government interacts and coexists with the remaining nations around the globe.
As the editors of this volume, we would like to thank all of our wonderful colleagues for sharing so generously with us their time and expertise. The cumulative intellectual power of this encyclopedia is amazing thanks to each and every contributor. We would particularly like to thank our respective department colleagues at Chapman University and Loyola Marymount University, many of whom went above and beyond in their willingness to contribute essential entries for each section. In addition, we would like to acknowledge the encouragement and support of our editor, Owen Lancer, at Facts On File, for helping to guide us from an initial list of entries to the finished product at hand. Finally, we are most grateful for the continued encouragement and support that we receive from our families, no matter how many projects we take on and no matter how many deadlines are looming. —Lori Cox Han Chapman University Michael A. Genovese Loyola Marymount University
CONTRIBUTOR LIST
Adams, Brian E., San Diego State University Adler, David Gray, Idaho State University Ali, Muna A., Purdue University Babst, Gordon A., Chapman University Baer, Susan E., San Diego State University Baker, Nancy V., New Mexico State University Becker, Lawrence, California State University, Northridge Beller, Marybeth D., Marshall University Belt, Todd, University of Hawaii at Hilo Benton, J. Edwin, University of South Florida Blakesley, Lance, Loyola Marymount University Blaser, Arthur W., Chapman University Borer, Douglas A., Naval Postgraduate School Borrelli, MaryAnne, Connecticut College Bose, Meena, Hofstra University Bow, Shannon L., University of Texas at Austin Boyea, Brent D., University of Texas at Arlington Brown, Antonio, Loyola Marymount University Brown, Lara M., Villanova University Cihasky, Carrie A., University of Wisconsin-Milwaukee Cohen, Jeffrey E., Fordham University Cohen-Marks, Mara A., Loyola Marymount University Conley, Patricia, University of Chicago Conley, Richard S., University of Florida Cooper, Christopher A., Western Carolina University Crockett, David A., Trinity University Cunion, William, Mount Union College Cusick, Roger J., University of Richmond
Dawson, Jill, University of Idaho Delja, Denis M., Independent Scholar Devine, James, Loyola Marymount University Dewhirst, Robert E., Northwest Missouri State University Dimino, Laura R., Independent Scholar Dimino, Michael Richard, Sr., Widener University School of Law Dodds, Graham G., Concordia University Dolan, Chris J., Lebanon Valley College Dow, Douglas C., University of Texas at Dallas Eshbaugh-Soha, Matthew, University of North Texas Farrar-Myers, Victoria A., University of Texas at Arlington Frederick, Brian, Bridgewater State College Freeman, David A., Washburn University Freeman, Michael, Naval Postgraduate School Frisch, Scott A., California State University, Channel Islands Genovese, Michael A., Loyola Marymount University Gerstmann, Evan, Loyola Marymount University Gizzi, Michael C., Mesa State College Gordon, Ann, Chapman University Gordon, Victoria, Western Kentucky University Gustaitis, Peter J. II, Naval Postgraduate School Han, Lori Cox, Chapman University Han, Tomislav, Independent Scholar Harris, John D., Rutgers University Heldman, Caroline, Occidental College Herian, Mitchel N., University of Nebraska, Lincoln
Henderson, Robert E., Independent Scholar Hodapp, Paul, University of Northern Colorado Hoff, Samuel B., Delaware State University Hoffman, Donna R., University of Northern Iowa Hoffman, Karen S., Wheeling Jesuit University Hogen-Esch, Tom, California State University, Northridge Holden, Hannah G., Rutgers University Holyoke, Thomas T., California State University, Fresno Janiskee, Brian P., California State University, San Bernardino Kassop, Nancy, State University of New York, New Paltz Kauneckis, Derek, University of Nevada, Reno Kelley, Christopher S., Miami University Kelly, Sean Q, California State University, Channel Islands Kim, Junseok, Dangkuk University, Seoul, Korea Kirchhoff, Carolyn, University of Wisconsin, Oshkosh Konow, James, Loyola Marymount University Kraft, Michael E., University of Wisconsin, Green Bay Kraus, Jeffrey, Wagner College Langran, Robert W., Villanova University Le Cheminant, Wayne, Loyola Marymount University LeLoup, Lance T., Washington State University Liu, Baodong, University of Wisconsin, Oshkosh Martin, Janet M., Bowdoin College Matthewson, Donald J., California State University, Fullerton Mayer, Kenneth R., University of Wisconsin, Madison Mazurana, Steve J., University of Northern Colorado Michelson, Melissa R., California State University, East Bay Miller, Mark C., Clark University Murray, Leah A., Weber State University Myers, Jason C., California State University, Stanislaus Neal, Melissa, University of West Florida Newman, Brian, Pepperdine University Offenberg, David, Loyola Marymount University Offenberg, Jennifer Pate, Loyola Marymount University Olsen, Norman L., Achievable Solutions
Osgood, Jeffery L., Jr., University of Louisville Palazzolo, Daniel J., University of Richmond Parrish, John M., Loyola Marymount University Parvin, Phil, Trinity Hall College Pelizzo, Riccardo, Griffith University Percival, Garrick L., University of Minnesota, Duluth Rasmussen, Amy Cabrera, California State University, Long Beach Renka, Russell D., Southeast Missouri State University Rice, Laurie L., Southern Illinois University, Edwardsville Riddlesperger, James W., Texas Christian University Rioux, Kristina L., Loyola Marymount University Rocca, Michael S., University of New Mexico Rose, Melody, Portland State University Rottinghaus, Brandon, University of Houston Rozzi, Alan, University of California, Los Angeles Routh, Stephen R., California State University, Stanislaus Rozell, Mark J., George Mason University Saarie, Kevin, Ohio University Saltzstein, Alan, California State University, Fullerton Savage, Sean J., Saint Mary’s College Savitch, Hank V., University of Louisville Schaal, Pamela M., University of Notre Dame Schuhmann, Robert A., University of Wyoming Scott, Kyle, Miami University Shafie, David M., Chapman University Singh, J. P., Georgetown University Singleton, Robert, Loyola Marymount University Sirgo, Henry B., McNeese State University Skidmore, Max J., University of Missouri, Kansas City Smith, Keith W., University of California, Davis Spitzer, Robert J., State University of New York, Cortland Spitzer, Scott J., California State University, Fullerton Steckenrider, Janie, Loyola Marymount University Steiner, Ronald L., Chapman University School of Law Stoutenborough, James W., University of Kansas Streb, Matthew, Northern Illinois University Strine, Harry C. “Neil” IV, Bloomsburg University Stuckey, Mary E., Georgia State University
Sussman, Glen, Old Dominion University Tadlock, Barry L., Ohio University Tatalovich, Raymond, Loyola University Chicago Teske, Paul, University of Colorado at Denver and Health Sciences Center Thompson, Peter, Loyola Marymount University Thompson, Seth, Loyola Marymount University Turner, Charles C., California State University, Chico Ward, Artemus, Northern Illinois University Warshaw, Shirley Anne, Gettysburg College
Wasby, Stephen L., University at Albany, State University of New York Wert, Justin J., University of Oklahoma Wilkerson, William R., College at Oneonta, State University of New York Will, Donald, Chapman University Williamson, Aimee, Suffolk University Wrighton, J. Mark, Millikin University Ye, Lin, Roosevelt University Yenerall, Kevan M., Clarion University
FOUNDATIONS AND BACKGROUND OF U.S. GOVERNMENT
accountability
The essence of democratic control of government by the governed is seen in the practice of accountability: being responsible for and answerable for one’s actions. In the United States, accountability consists of government officials being bound by and answerable to the will of the people as expressed in regularized, free elections, as well as in law and the U.S. Constitution. As Associate Justice Samuel F. Miller noted in United States v. Lee (1882), “All officers of the government, from the highest to the lowest, are creatures of law and are bound to obey it.” In one sense, accountability is a simple process: people are to be held to account for their actions. A baseball coach whose team loses year after year is fired because he did not live up to hopes or expectations—he is held accountable for the poor performance of his team. Does a similar type of accountability apply in the world of politics? Yes, through elections that allow the people to “fire” a politician who is deemed to be performing his or her duties ineffectively or irresponsibly. Regularized elections are believed to be the key method of keeping officials accountable and responsible to the will and wishes of the people. Institutionally, accountability stems from the separation of powers and system of checks and balances that such a separation implies. The framers of the U.S. Constitution embedded into the new system of government an institutional device that divided political power among the three branches of government. In this way each branch had a role in
checking and balancing the others. Not trusting power in the hands of any one person or any one branch, for fear that this might lead to tyranny, they spread power out and fragmented it among the executive branch, legislative branch, and judicial branch. Accountability was thus a key element in the creation of the separation of powers. In a democratic system, accountability is owed to the people and occurs via elections. In a republican form of government, the other branches demand accountability through the separation of powers and the rule of law. Thus, in the United States there are two forms of accountability to which government officials must be answerable: to the voters in periodic elections, and to the other branches in the system of checks and balances. Various forms of ongoing accountability can be seen in the impeachment process, which allows government officials to be removed from office for cause; elections that require government officials to go back to the voters for reelection; and structural accountability that can be seen in the day-to-day process of the checks and balances of the different institutions of government. Accountability is at the center of democratic control of government. Information and transparency are required if accountability is to be maintained. In an age of war against terrorism, the demands of accountability have often clashed with perceived needs to protect national security. Thus, the George W. Bush administration has been especially active in classifying
government documents as “secret,” in limiting access to information, and in the exercise of secrecy, even in nonnational security matters (e.g., with whom the vice president met in developing the energy policy for the administration). This propensity for secrecy has met with opposition from a variety of groups and from political opponents who wonder if the administration is engaged in the legitimate classification of sensitive information or if they are merely covering up mistakes or questionable behavior. One of the more controversial decisions of the Bush administration revolved around Executive Order 13233, drafted by then–White House counsel Alberto Gonzales, and issued by President George W. Bush on November 1, 2001. This executive order restricted access to the public records of all former presidents. The order also gave the current administration authority to withhold information that might otherwise be released from presidential documents of the former presidents. Many wondered why the current administration would be so interested in being a filter—after requests for information were already filtered by the former president or his designees—and speculation was widely and at times, wildly offered. Of what utility would it be for the Bush administration to have such a stranglehold on the papers and materials of former presidents? Were they engaged in the necessary curtailing of sensitive or dangerous information, or were they merely controlling the flow of information and engaging in secrecy and/or censorship? After the executive order was issued, a number of academic groups, among them historians, librarians, and political scientists, met with members of the administration in an effort to get them to modify the order, but after several years, no progress was made, and the order stood. In an effort to combat excessive secrecy the U.S. government, in 1966, passed the Freedom of Information Act (FOIA). President Lyndon B. Johnson signed the act into law July 4, 1966. It has been amended several times since 1966. The FOIA is based on the public’s “right to know” what its government is up to, a key ingredient in the formula for accountability. Most of the requests for information under the act are directed toward the executive branch, especially agencies of the federal government such as the Federal Bureau of Investigation (FBI), the Central Intelligence Agency (CIA), and the National Security Council (NSC). Many requests come
from journalists and scholars, but a significant number come from average citizens as well. The Freedom of Information Act applies only to federal government agencies. The law requires that wherever possible, they are to supply requested information to applicants. The Privacy Act (1974) also applies to information requested by citizens and is in some ways similar to the FOIA. Both the FOIA and the Privacy Act have exemptions for sensitive information. Critics charge that the government employs these exemptions with such regularity as to make the FOIA all but meaningless, a charge that is overstated perhaps, but not too far off the mark. For the government to be accountable, the public must have information. But in an age of terrorism, too much information, or the wrong kind of information in the wrong hands, can be dangerous. But who is to judge the judges? Who is to monitor the censors? If too much information is withheld, democracy becomes a sham. If too much is released, national security might be endangered. Balancing the needs for accountability with the right of the public to know is not always an easy call. But just as no person can be a judge in his or her own case, how can one trust the government to monitor itself? History is replete with examples of the government censoring materials, not for national security needs, but to avoid embarrassment or to cover up scandals or mistakes. The basis of the separation of powers and checks and balances system is that no one branch of government is the final arbiter in its own case and that applies to information as well. If we trust the government to monitor itself, to select what information it will and will not make public, the temptation to cover up mistakes and crimes may be too great to resist. Thus, allowing the government to monitor itself is not feasible. One of the most infamous such cases occurred during the presidency of Richard M. Nixon, when the president was accused of a series of criminal acts, and the “proof” was in the hands of the president himself: White House tapes that the president possessed that would determine whether the president was lying or not. Nixon fought hard to prevent the release of those tapes, citing separation of powers issues, the right to privacy, and national security needs, among others. Eventually the dispute went all the way to the U.S. Supreme Court, and in the case United States v. Nixon (1974), the Court
ordered President Nixon to turn over the tapes in a criminal case involving several high administration officials. It sealed the president’s fate, and shortly thereafter, as it became clear that the president had in fact violated the law, Nixon resigned from the presidency. A functional information access policy would withhold sensitive information while releasing that which would not endanger national security or be too sensitive for public release. Recently, the government has erred on the side of caution, not releasing information freely and not responding in a timely fashion to FOIA requests. In a democracy, access to information is essential if citizens are to make informed decisions about the performance of their government. Without adequate information, democracy is not viable. This issue is not easily resolved, nor can one expect it to be settled any time soon. It is always a work in progress with moving lines of demarcation. It is an ongoing debate that tries to balance the needs for information with the needs of national security. But should a democratic system, based as it must be in the concept of accountability, lean more toward openness, even if some degree of national security is threatened? A robust democracy may come at a high price. One may have to sacrifice some degree of security in order to maintain the integrity of democratic governance. Perhaps the most important public debate in the age of terrorism is to what degree the polity is willing to balance these two needs, or would the public prefer to trade some security for greater accountability, or vice versa? Further Reading Arnold, R. Douglas. Congress, the Press, and Political Accountability. Princeton, N.J.: Princeton University Press, 2006; Przeworski, Adam, Susan Stokes, and Bernard Manin. Democracy, Accountability, and Representation. New York: Cambridge University Press, 1999. —Michael A. Genovese
Albany Plan of Union The English and French rivalry over who would control the North American continent led, in the 1750s, to what became known as the French and Indian wars. These wars lasted from roughly 1754 until 1763
when the English defeated the French, thereby becoming the dominant power in the New World, controlling the eastern seaboard of what are today Canada and the United States. The colonies of the Americas were caught in the middle of this 18th-century superpower struggle between the French and English, and sought ways to resolve the conflict being fought in their backyards. In June 1754, an effort was made by the colonies to form a union with England that would have granted them a great deal of independence, while also closely aligning them to England. Delegates from most of the northern colonies along with representatives of the Iroquois Confederacy (a union of six Native American tribes from the upstate New York area) met in Albany, New York, for the purpose of ironing out details of this proposed union. At this meeting, the delegates adopted a “plan of union” drafted by Pennsylvania delegate Benjamin Franklin. According to Franklin’s plan, the states would form a union with an elected legislature, known as a Grand Council (elected for three-year terms), and an executive known as a president-general, who was to be appointed by the Crown in England. The genius of this plan was that it severed ties with England as it also strengthened ties with England. This was a paradox, to be sure, but an ingenious one to say the least. The plan was a proposed trade-off with England: Having some independence, the colonies would swear allegiance to the Crown; let the colonies self-govern (to a degree) and they would embrace the legitimacy of the motherland. One of the great ironies of the Albany Plan of Union is that it was modeled very closely after the Iroquois Confederacy. The confederacy was a union of the five (later six) Native American tribes who occupied upstate New York, in what is today known as the Finger Lakes area (near Cooperstown). The Iroquois Confederacy was a fully functioning, democratic-based union of these six tribes who maintained some independence, while existing under an umbrella organization known as the confederacy. Governed by “The Great Law,” which was the model for Franklin’s Plan for Union, this constitution created a union of tribes with a separation of powers, voting rights, democratic requirements, checks and balances, and the rule of law; in short, all the elements that would later make up the U.S. Constitution.
And while the Iroquois Confederacy served as a model for Franklin’s Plan for Union, the new government of the United States did not adopt it wholesale. The Iroquois Confederacy, for example, had several, and not one, leader. Leaders were selected to take charge of several functional areas (such as war), and there was no one central leader or authority. Likewise, women were given more rights in the Iroquois Confederacy than in the U.S. Constitution. And religious authorities, medicine men or shamans, were given greater authority over the life of the confederacy than would be the case in the new system later established in the United States. Nonetheless, it is clear that the Iroquois Confederacy had a profound impact on Benjamin Franklin and the writing of the Albany Plan of Union, and a less obvious, but still significant impact on the writing of the U.S. Constitution. When the colonists looked to Europe for a model for framing their new government, all they saw were monarchies, kings, and royalty. When they looked up the road to the Iroquois Confederacy, they saw a robust, functioning republican form of government, with checks and balances, separation of powers, voting rights, and a consensus, not a command-oriented leadership system. The U.S. Constitution, as revolutionary as it must have seemed to the British, was less democratic than the Iroquois Confederacy that had been operating in the new land for decades before the colonists began thinking of establishing a republic. While some historians suggest that the influence of the Iroquois Confederacy was insignificant and that the framers of the American republic looked primarily to Europe for inspiration, a growing number of historians, in reexamining the historical record, are drawn to the conclusion that indeed, the Iroquois Confederacy did have a significant impact on the creation of the American republic, and that they ought to be included in the panoramic story of the invention of the American system of constitutional democracy. While the colonists believed they had struck on a clever offer to the motherland, the British saw through this plan and rejected the overture of the colonies. The Crown believed it did not have to make such a deal and preferred total power over the colonies, rather than this power-sharing model. Thus, while the plan failed, it nonetheless served as a preliminary constitutional argument for an independent nation, a model of what would follow two decades later.
It would be another 20 years before the ferment of revolution would sweep the colonies, but revolution, when it came, did not occur suddenly. It was a step-by-step process with two steps forward, one step back. The actions of Benjamin Franklin and the development of the Albany Plan of Union were a significant, if often overlooked, part of the long and painful process. And the lost history of the role played by the Iroquois Confederacy in the development of the American republic, oft overlooked, lends greater depth to the story of the creation of the American system of government. Further Reading Berkin, Carol. A Brilliant Solution: Inventing the American Constitution. New York: Harcourt, 2002; Ferling, John. A Leap in the Dark: The Struggle to Create the American Republic. New York: Oxford University Press, 2003; Wood, Gordon S. The Creation of the American Republic. Chapel Hill, N.C.: University of North Carolina Press, 1969. —Michael A. Genovese
antifederalists Who were the antifederalists of the founding era of the United States? As a nation, the United States is accustomed to celebrating the work of the federalists, as they were the men who wrote and defended the U.S. Constitution. Usually ignored or given little attention are the “losers” in this debate, those who opposed, for various reasons, the adoption of the new Constitution. But these antifederalists were men of serious motives who deeply felt that the new Constitution was not fit for the post-revolutionary needs of the new nation. The debate on the true motives and goals of the antifederalists may never be fully conclusive because they were not one thing, but a collection of many different views and voices united behind a common enemy: the new Constitution of 1787. The glue that bound together these many disparate elements was that for a variety of reasons, they opposed the Constitution, and worked for its defeat in state ratifying conventions. In 1787, when the new Constitution was written, those favoring its adoption by the states became known as the federalists, or, those who supported a new federal constitution. The Federalist, a collec-
tion of newspaper articles written by Alexander Hamilton, James Madison, and John Jay, arguing in favor of the adoption of the Constitution by the state of New York, became the best-known and most cited defense of the new Constitution. The federalists were a “Who’s Who” of the American elite of the day. Numbered among their ranks were also Benjamin Franklin and George Washington. It was a truly impressive list of supporters. The antifederalists opposed adoption of the new Constitution. They were especially concerned that the new federal government was too powerful, too remote from the people, had a presidency that might produce a king, and did not contain a list of the rights of the people. Of course, in the long run, they lost the argument and the day, as the states did eventually ratify the Constitution, but not before the antifederalists were able to extract a major victory from the federalists: the promise that a Bill of Rights would be added to the new Constitution. Who were the antifederalists? Ranked among their numbers were some of the most vocal supporters of a more democratic political system, such as Patrick Henry. Others such as George Clinton, George Mason, and Melancton Smith, while vocal in opposition to the new Constitution, never were able to convince the states to withhold ratification. If this list is not quite as impressive as that of the supporters of the Constitution, the federalists, remember that these men were not lionized in American myth because they fought on the losing side of this battle. Remember, too, that in their day, these were some of the most powerful and persuasive men in the nation. In what did the antifederalists believe? While it is unfair to paint them with but one brush, in general the antifederalists opposed the adoption of the Constitution, favored a more democratic political system, feared a strong central government, looked to the states (government less remote and closer to the people), not the federal government as the primary political unit, feared the potential power of the presidency to become monarchical, and wanted rights guaranteed by law. They felt that the new Constitution failed to live up to the promise of the Revolution and the ideas embedded in Thomas Paine’s Common Sense, and Thomas Jefferson’s Declaration of Independence. In effect, the antifederalists believed that they were defending the true ideas of the Revolution;
ideas that had been abandoned if not turned upside down by the new federal Constitution. In the battle over ratification, the antifederalists were able to demand inclusion of a Bill of Rights attached to or amended onto the new Constitution. They were less able to chain the potential power of the presidency, which they feared could develop the trappings of monarchy. It is hard to imagine what the American experiment in constitutional democracy would have been like had the antifederalists not made the demands they did. An America without a Bill of Rights is today unthinkable. In fact, to many, it is the Bill of Rights, even more than the constitutional system of government established in 1787, that defines and animates the United States. Why and how did the federalists win? And why did the antifederalists lose the overall argument? After the Revolution had ended, the new government established by the nascent nation, the Articles of Confederation, created a very weak central government. So weak was this new government that there was no executive officer (no president) created to administer the task of governing the new nation. This is because the Revolution was essentially a revolution against executive authority as seen in the harsh hand of King George III of England. Thomas Paine called the king “the Royal brute of Britain,” and Thomas Jefferson’s brilliant and eloquent Declaration of Independence, after its magnificent prologue, was really just a laundry list of alleged offenses committed by the king against the colonies. Thus, when the revolution was over and it came time to build a new government, it was exceedingly difficult to reconstruct executive power out of the ashes of the revolutionary sentiment that was so violently antiexecutive. The failure of the Articles of Confederation to provide the new nation with a government adequate to its needs impelled many to see that they had gone too far in emasculating government, and they tried (sometimes quite reluctantly) to strike a new balance between liberty and governmental efficiency. Experience impelled them to reconstruct a stronger, although still quite limited government. The antifederalists were in the “old” camp, fearing a centralized government and hoping for a states-oriented system. But that ship had already sailed; it had been found wanting. Thus, a drive to develop a stronger central government seemed all but inevitable. In that sense, the time was
right for the federalist position, and the time had passed for the antifederalists. The antifederalists were fighting the old war, and the federalists were fighting the new war. In this context, it was all but inevitable that the federalists would win. But not before the antifederalists won their last, and most important, battle: inclusion of a Bill of Rights into the new Constitution. If the antifederalists did not win the big victory, they certainly won a significant victory, one that has shaped the character of the United States from the very beginning to today. Most of the significant battles across history have been waged over the meaning and extent of the Bill of Rights over the powers of the government. From free speech, to religious freedom, a free press to the rights of the accused, from states’ rights to the arming of citizens, it has been the Bill of Rights that has limited the authority of the federal government, and empowered the people and their rights. The antifederalists won a huge victory in the founding era, and while history has characterized them as the “losers” such a critique misses key points in the battle and key outcomes of the age. The antifederalist position was better suited to a small, marginal nation than to a superpower of today. Their vision of a small, state-oriented democracy in which government was closer to the people seems quaint by modern standards, but it was for just such a government that the antifederalists fought. Had the antifederalists won the argument in 1787, it is unlikely that the United States could have developed as it has. Yesterday’s republic has become today’s empire, and a system of government modeled after the antifederalist approach might not have allowed for the expansion of the nation westward, nor the development of the United States as a world power. Some would argue that such an outcome is welcome, and be that as it may, the federalist argument won the day and the eventual result was a stronger, more centralized federal system, and a less robust system of state government. The tensions of the founding era still haunt the modern system, as arguments over federalism, states’ rights, and local control continue to animate the debate and discussion, but in general, it is the federalists who have won the argument, and the United States of today is more a reflection of their vision than of the antifederalists.
In many ways, the United States is a two-tier system of government, the result of the antifederalists winning the concession of a Bill of Rights tacked onto the original Constitution. Some see the U.S. government pulled in two different directions: the Constitution in one direction, the Bill of Rights in another. This creates an inherent tension that is never fully resolved. After all, the Constitution empowers the government, while the Bill of Rights limits government. The Constitution gives the central government the power to act, while the Bill of Rights prevents the government from intruding on areas of personal liberty. Reconciling these two very different approaches has been a challenge for the federal government, especially in an age of terrorism. The rights we possess as citizens may interfere with the government’s efforts to detain and interrogate suspects in a terrorist investigation. Which valuable goal takes precedence? Liberty or security? Should the government, in an effort to protect the safety of the community, be allowed to trample on the rights and liberties of individuals who are seemingly under the protection of the Bill of Rights’s provisions? Every generation faces these challenges, and they are challenges built into the very fabric of our constitutional order. To empower the government and to limit the government—such paradoxes make for a less efficient government but one that is always reexamining itself, challenging its core principles, reestablishing itself to face new and different challenges in new and different eras. This is the dynamic, not the static, quality of government, where each new generation reinvents itself within the framework of the Constitution and the Bill of Rights. It is in the give-and-take of this struggle that a balance is often reached between the needs or demands of the day and the ongoing and universal rights for which the revolution against Great Britain was fought. This struggle will never be fully resolved, but it is in the ongoing struggle that each generation finds its place in the ongoing argument over the federalist and the antifederalist sentiments. To a large extent, American history has been a struggle between those calling for a stronger federal government, and those promoting greater state or local control. It is the argument that the framers were sensitive to in their development of a system of federalism that separated powers vertically and gave some powers to the new federal government, and
reserved others to the states. It is over this concept of federalism that we still have political debates today. These sides have been, until the post–cold war era, liberals and conservatives, respectively. Ironically, the great defenders of the Bill of Rights have been the liberals, while those more willing to promote order over individual rights have been modern conservatives. Thereby, each side—left and right—claims heritage linked to both the federalist and antifederalist camps. But in recent years, most clearly with the contemporary war against terrorism, defenders of the Bill of Rights and the antifederalist position of state or local rights have all but disappeared. To fight a war against terrorism, some believe, requires that the rights found in the Bill of Rights be curtailed to better promote safety and security. And the small-government conservatives have abandoned their claim that the federal government has grown too big and too intrusive and promoted an intrusive and domineering antiterrorist state. Thus, today, the legacy of the antifederalists is both in doubt and in jeopardy. With the Bill of Rights under attack and the small or local state seen as a threat to security, the modern, post-9/11 world has not been very hospitable to the ideas that animated the antifederalist cause. Further Reading Ketcham, Ralph. The Anti-Federalist Papers. New York: New American Library, 1986; Storing, Herbert J. What the Anti-Federalists Were For. Chicago: University of Chicago Press, 1981. —Michael A. Genovese
Articles of Confederation The ideas that animated the American Revolution are embodied in both Thomas Paine’s influential essay, Common Sense, and in Thomas Jefferson’s magisterial Declaration of Independence. For the American founding fathers, an important question became how to incorporate those magnificent ideas into an organized structure of government. The first effort by the newly independent United States to structure a government was the Articles of Confederation. The articles formed the first framework of government attempted in the United States; however, they were largely ineffective.
Front page of the Articles of Confederation (National Archives)
First put into effect in 1781, the articles asserted that the states were entering into a “firm league of friendship” as well as a “perpetual union for the common defense, the security of their liberties, and their mutual and general welfare.” This proved hardly the case. The articles created a weak, decentralized form of government, reserving most powers to the states and giving the federal government neither the power to tax nor any provision for an executive. Additionally, the new government could not regulate commerce or create a single currency, and thus could not pay the war debt or guarantee liberty and security. The federal government under the articles was too weak to govern the new nation. It was as
if there were 13 separate and semi-independent sovereign states, with a mere umbrella of a federal structure. Translating revolutionary sentiments into governing structures proved a daunting task. If the ideas that animated the revolution sounded good, incorporating them into a new government would be most difficult. At this time, the new nation—if a nation it truly was—began to split between the property-owning class and the common people. This cleavage, which was only just forming at the time the articles were written, would have a profound impact on the writing of the U.S. Constitution but would be only marginally important at this stage of history. At this time the framers, following America’s victory over the British, were imbued with the spirit of a democratic revolution. While small government seemed attractive at the time, it was not long before it became painfully clear that something was drastically wrong. As a reaction against the centralized rule of a king and strong government, the articles seemed to go too far in the other direction, creating too weak a government. A sense of crisis pervaded the new nation as many Americans, not to mention the French and British, began to have doubts whether selfgovernment was possible. To further complicate matters, the new economy was in trouble. Thousands of average citizens who fought in the Revolution were losing their farms, as they were unable to pay their mortgages due to the dire economic condition. In every state, mini-rebellions sprang up as farmers, though winning a war against Britain, were in jeopardy of losing their land to a new, ineffective government and began threatening the safety and stability of their communities. Daniel Shays led the most famous of these insurrections. In 1786, Shays, a former captain during the Revolution, led a group of roughly 2,000 farmers in a confrontation with the government of Massachusetts. Their goal was to close the government, thereby preventing foreclosure of their property. While the farmers drew sympathy from many, the propertied class saw these rebellions as a direct threat to their safety and wealth. Amid mounting pressure to strengthen the federal government, Virginia called for a meeting of the states in
Annapolis in 1786. Only five states sent delegates to the Annapolis Convention and it soon disbanded, but not before urging Congress to authorize another convention for 1787. Congress did so and instructed the state delegations to meet in Philadelphia for “the sole and express purpose of revising the Articles of Confederation.” While there was some resistance, over time many of the states, some quite grudgingly, relented, and the convention met in Philadelphia in 1787. Of course, it was at that convention that the articles were thrown out and a wholly new constitution written. The question remains: how could so smart a group develop so weak and unworkable a government as was created with the Articles of Confederation? After all, these were virtually the same men who invented the U.S. Constitution less than a decade later. How could they be so wrong at first and so right later on? Part of the answer to this perplexing puzzle can be found in the old adage that “politics is the art of the possible.” What was possible in 1787 was simply not possible in 1781. The Articles of Confederation were created in the midst of revolutionary fervor and a sense of great hope and optimism for the possibilities of being governed locally (at the state level) with few taxes and maximum freedom. The vision of the yeoman farmer, so Jeffersonian in sentiment, pervaded the hopes and aspirations of the framers; they felt confident that a government kept small, close to the people, and with very limited authority to interfere with the liberties of the people, would flourish in this new land of freedom. Imbued with the hope and expectations of a revolutionary sentiment that seemed boundless, the framers seemed more governed by optimism than realism. After all, the Revolution had just been fought to instill in the new government the ideas and ideals presented in Common Sense and the Declaration of Independence, ideas revolutionary, democratic, and radical for their day. It was easier to talk about such ideals, however, than to incorporate them into a new governing document.
Another reason for the failure of the Articles of Confederation is that they were in many ways a first and rough draft for a new government. Not as much deep thought went into the articles as one might have liked, and they were the product of speed as well as thought. Written by a people suspicious of central government, it took time for them to finally admit that indeed, a stronger central government might be necessary to produce good government. In this sense, the Articles of Confederation were an attempt to defy the reigning logic of modern government, replacing it with a radical new—and much less intrusive—government system. It seems understandable that the revolutionary fervor might overwhelm more prudent thinking, and the attempt to imbue the new government with as much of the revolutionary sentiment as possible seems quite logical. Today, with 20-20 hindsight, we are perplexed at the articles. How, we might ask, could they imagine that such a weak system could be at all workable? But if we try to put ourselves in the place of the framers, their attempt makes more sense. That this new government under the Articles of Confederation did not work well does not surprise the modern mind, and perhaps it did not surprise many of the framers. But the failure of the Articles of Confederation paved the way for a new, stronger, more centralized government, one not as closely linked to the revolutionary sentiment of Common Sense and the Declaration of Independence, but one that proved lasting, workable, and a better fit for the needs of the new nation. See also New Jersey Plan; Virginia Plan. Further Reading Hoffert, Robert W. A Politics of Tensions: The Articles of Confederation and American Political Ideas. Niwot: University Press of Colorado, 1992; Jensen, Merrill. The Articles of Confederation: An Interpretation of the Social-Constitutional History of the American Revolution. Madison: University of Wisconsin Press, 1970. —Michael A. Genovese
Bill of Rights The first 10 amendments to the U.S. Constitution make up the Bill of Rights. They were not included
in the original Constitution, and the absence of a listing of rights was one of the major stumbling blocks in the ratification of the Constitution. In order to ensure ratification, the framers of the Constitution were compelled to agree that a bill enumerating the rights of citizens would be added to the newly proposed Constitution. As Thomas Jefferson asserted in a December 20, 1787, letter to James Madison, “A bill of rights is what the people are entitled to against every government on earth, general or particular; and what no just government should refuse. . . .” The original Constitution proposed by the delegates of the Philadelphia Convention of 1787 did not contain a provision guaranteeing the rights of citizens of the United States. This caused a great controversy, as immediately two warring camps developed: the federalists and the antifederalists. This rift threatened to undermine ratification of the new Constitution, as opposition was especially vocal in two of the biggest and most important states in the union: Virginia and New York. The federalists were those who supported adoption (ratification) of the new Constitution. Led by such luminaries as George Washington, James Madison, Alexander Hamilton, and John Jay, these men favored the new Constitution and a stronger central government for the United States. But winning in the state ratifying conventions would be no easy task, so Madison, Hamilton, and Jay began writing broadsides in support of ratification. These essays, published in New York newspapers, became known as The Federalist, and are today the most eloquent and cited guides to the original meaning of the Constitution. Why New York? For two reasons: first, New York, along with Virginia, was essential to a successful new system—if New York failed to ratify, the new system would almost certainly fail; and second, the battle over ratification in New York was a tight one, with the opponents of ratification initially in the lead. Thus, New York became the key battleground in the struggle over adoption. The antifederalists could also count on some heavy hitters, such as Patrick Henry. In essence, the antifederalists opposed ratification of the new Constitution because it contained no provision for the rights of the citizens, because it transferred too many powers from the states to the federal government, and because they feared that the newly invented presidency
might create a monarch. As the debates over ratification heated up, it became clear that the key issue was the absence from the Constitution of a clear and inviolable set of rights that all citizens possessed. In the ratification battle, the federalists were winning in several of the states, but New York still held out. The federalists realized that they would have to pay a price, and it would be a heavy one: a Bill of Rights. As the debates and arguments raged, the federalists, led by James Madison, began to capitulate. Madison agreed that if the Constitution were ratified, he would lead a movement in the first Congress to have a series of rights added to the Constitution as amendments. The rights of citizens would be spelled out in clear language for all to see, but that would come only after the Constitution was adopted. It was a risky deal for the antifederalists, but they could see the writing on the wall; they knew that they could not hold out much longer, and eventually, they agreed to trust Madison. Madison did not let them down. In the first Congress he led the way in proposing 17 amendments to the new Constitution. Congress eventually approved a dozen amendments, 10 of which were ratified by the states, and these became known as the Bill of Rights, the first 10 amendments to the Constitution. They became the basis for the rights of American citizens against the power of the state or federal government. The Bill of Rights lists the basic liberties and rights of U.S. citizens. It was drafted in the first Congress, and these new rights became part of the Constitution in 1791. The Bill of Rights limits the government’s power over individuals, and guarantees certain basic rights (“unalienable rights,” in the words of the Declaration of Independence) to all citizens. They form the limit beyond which the government is not allowed to go when facing the citizenry. The First Amendment, today seen as one of the most controversial, guarantees freedom of thought, belief, and expression. It is here that one finds the citizens’ freedom of speech, religion, press, and of petition and assembly. The Second Amendment deals with the right of a state to maintain a militia, and has sometimes been viewed as guaranteeing individuals the right to bear arms. This amendment is open to interpretation, and has been the source of much controversy. The Third Amendment forbids the government,
during times of peace, from requiring citizens to house soldiers in their homes without consent. This may not seem like a significant right in our time, but during that period, it was often the case that when troops went into a new city, they could commandeer homes and property for military use, and after experiencing a great deal of this under the British Crown, the colonists and the citizens of the United States wanted no part of this practice. The Fourth Amendment protects individuals against unreasonable search and seizure, sets up rules for obtaining search warrants, and demands “probable cause” as a requirement for the issuing of a warrant. These “rights of the accused,” found in the Fourth, Fifth, and Sixth Amendments, have become very controversial at different times in U.S. history, especially when the crime rate is high. The Fifth Amendment requires indictment by a grand jury, outlaws double jeopardy, protects individuals from being compelled to testify against themselves, and prohibits the government from taking life, liberty, or property without first granting due process. The Sixth Amendment gives citizens the right to a speedy and fair trial, and assures them the right to be informed of the accusations against them, to confront and cross-examine witnesses, and to have legal counsel. The Seventh Amendment grants the right to a jury trial in certain cases, and the Eighth Amendment prohibits the government from requiring excessive bail or from inflicting cruel and unusual punishment. The Ninth Amendment is a catch-all amendment stating that the failure to list certain rights in the Constitution does not mean that those rights do not exist. And the Tenth Amendment guarantees that the state governments and the people retain any powers not delegated to the federal government by the Constitution. To some, there is an apparent contradiction built into the fabric of the U.S. government. The Constitution empowers the government; the Bill of Rights limits the government. In essence, the government is being pulled in two different directions. The former reflects the desires of the federalists to create a stronger central government; the latter reflects the desire of the antifederalists to limit the scope and power of the new federal government that they feared might become too big, too powerful, and too much of a threat to liberty. This has caused controversy and conflict over time. To be successful, the government must act; to keep within
the bounds of the Bill of Rights, the government can only go so far, and no further. It makes for a complicated and sometimes difficult road for the government, but one that seeks to balance the powers of the government with the rights of citizens. At times the pendulum swings toward government authority and power; at other times, it swings back toward the rights of the people. It is a dynamic, not a static situation. Different eras, different demands, and different issues can animate, empower, and limit the government. There is no one answer to the dilemma of governing with both a Constitution and a Bill of Rights. With the government pulled in two different directions, it should come as no surprise that Americans are often pulled in two different directions as well. And while at times these conflicts paralyze us, at other times they bring forth a heated but powerful debate that allows us to reimagine ourselves, reinvent ourselves, and adapt our government of 1787 to the demands of the modern era. It has not always been easy, and it has sometimes become violent, but it is a way of keeping the old government new, of adapting it to new demands and new times, and of attempting to balance the legitimate needs of the government with the rights and liberties of the citizens. Over the years, some of the most heated battles in American politics have been fought over the scope and meaning of the rights guaranteed in the Bill of Rights. Freedom of expression, of religion, the rights of the accused, citizens’ rights to bear arms, property rights, and a host of other issues have been tested, redefined, reexamined, and reinterpreted over the years by practice, legislation, and court decisions. As the United States is an especially litigious society, it should not surprise us that battles for political rights and liberties often make their way into the judicial branch for resolution. This has often drawn the United States Supreme Court into many a heated political as well as legal battle, giving the courts added political clout. As the French observer Alexis de Tocqueville noted in the early 1800s, hardly any political issue arose in the United States that was not soon turned into a legal dispute to be settled in the courts. This propensity for Americans to legalize or criminalize political disputes puts greater responsibility as well as greater power
into the hands of the courts, the unelected branch of the government. And while few of these issues are resolved with any finality, the ongoing debate and recurring struggles compel us to take a fresh look at old policies, and a new examination of set procedures. It allows for a renewal of the United States as times change and demands grow. Today, the Bill of Rights remains controversial and the point of much dispute and conflict. Many of the key political battles take place over the interpretation of the rights of Americans, especially in the war against terrorism that began after the September 11, 2001, attack upon the United States. How many of the rights of citizens should be curtailed in an effort to fight terrorism and protect the homeland from attack? What is the proper balance between rights and government power? Some would diminish the constitutional rights of Americans in order to fight the war against terrorism, while others believe that it is precisely those rights and guarantees that most need to be protected in times of strife. How high a price is the United States willing to pay in the battle against terrorism? Is the nation willing to cut constitutional corners or even abandon long-held constitutional guarantees for some unspecified amount of increased security? Is the Bill of Rights being held hostage by a band of angry terrorists? And will the terrorists be handed a victory in our political and judicial arenas they have neither earned nor won on the battlefield by the United States’s abandonment of the core values and rights guaranteed in the Bill of Rights? These questions are being answered every day in ways large and small as citizens, judges, and elected officials struggle with the response to terrorism in the modern world. How we answer these challenges will determine how strong a people we are and how committed we are to the rule of law, the Constitution, and the Bill of Rights. See also freedom; justice. Further Reading Bodenhamer, David J., and James W. Ely, Jr. The Bill of Rights in Modern America after Two-Hundred Years. Bloomingdale: Indiana University Press, 1993; Meltzer, Milton. The Bill of Rights: How We Got It and What It Means. New York: Crowell, 1990. —Michael A. Genovese
checks and balances A central feature of the American system known as the separation of powers holds that in separating power among the three core branches of government, each branch can check and balance the others. Thus, no one branch can dominate and control power, as each branch has independent powers at its disposal with which to counter the power claims of the others. This is designed to prevent tyranny as, theoretically, no one branch can accumulate too much power over the others. The checks and balances system is derived largely from the writings of the French philosopher Charles de Montesquieu (1689–1755), whose classic work, The Spirit of the Laws (1748), introduced the separation of powers notion that was later more fully developed by Thomas Jefferson in his Notes on the State of Virginia (1784), where he wrote that “the powers of government should be so divided and balanced among several bodies of magistracy, as that none could transcend their legal limits, without being effectively checked and restrained by the others.” This was institutionalized into the new U.S. Constitution by James Madison and the writers of the Constitution of 1787. Nowhere in the Constitution is a separation of powers mentioned, nor are the words “checks and balances” to be found anywhere in the document, but they are embedded into the fabric of the American constitutional system, and are the theoretical underpinnings of the American government. Allowing one branch to check another branch makes tyranny less likely; when one branch balances powers with another it encourages an equilibrium of power that thwarts abuses of power. And while the Constitution does not set up literally a separation of powers but what would more accurately be described as a sharing and overlapping of powers, the overall intent remains the same: the prevention of tyranny and abuse of power. Seen from the perspective of the 21st century, many critics argue that this 18th-century concept interferes with government efficiency and hampers the United States in its war against terrorism. Others argue that it is not the separation of powers and checks and balances that are at fault, but unwise leadership that undermines or threatens national security. If the United States worked to make the separation of powers and checks and balances work better, we might produce better policies.
In Federalist 51, James Madison argued that “Ambition must be made to counteract ambition,” embedding the very concept of checks and balances into the Constitution he was defending. This separation is in contrast to the fusion of power such as exists in Great Britain where the authority of the executive and legislature are fused together to better serve the achievement of power. Such a system of fusing power together makes for a more efficient government, one that has the power and authority to act with fewer restrictions and roadblocks. But power was not the primary goal of the framers, liberty was, and they saw the government as the greatest threat to liberty. It was the government that needed to be checked and controlled, as it was the government that posed the greatest threat to liberty. As ambition could not be made to go away, a method had to be found to set ambition against ambition, and power against power. The framers decided on an architectural device—a separating and overlapping of power—to interlink power by delinking the institutions of power. No one branch would have so much power that it could long pose a threat to liberty, for if one branch grabbed too much power, it was in the institutional self-interest of another branch to step in and block it. In doing this, ambition was put in the service of equilibrium. Checks and balances remain a key element in the scheme of American government. In one of his famous “fireside chats” (March 8, 1937), President Franklin D. Roosevelt described this type of government as “a three horse team provided by the Constitution to the American people so that their field might be plowed. . . . Those who have intimated that the President of the United States is trying to drive the team, overlook the simple fact that the President of the United States, as Chief Executive, is himself one of the horses.” But of course, the president is expected to drive or lead the team, and in the absence of presidential leadership, deadlock often occurs. The checks and balances, as mentioned, were instituted to prevent tyranny. The side effect of this is that it is difficult to get the three branches to work in harmony with each other: coupling what the framers have decoupled becomes the challenge of leadership in the American system. To move the government, the political branches must unify what the framers have divided. While this system has been
effective in preventing tyranny, it has not always been effective in promoting efficient government. But that was not the primary goal of the framers. Their main purpose was to prevent tyranny and promote freedom, not to facilitate power in the hands of the governing class. How has the system of checks and balances worked over time? It has been very successful in thwarting tyranny, its main objective. But it is often seen as the culprit in the deadlock or gridlock so characteristic of the American system (this is sometimes referred to as “Madison’s Curse,” named after James Madison, the father of the U.S. Constitution, and a leading advocate for the separation of powers). Presidents, frustrated by the slow, cumbersome, seemingly intractable checks upon their power, often search for extra-constitutional methods of gaining power. These methods sometimes lead presidents to violate the spirit and letter of the law, as was the case with the Watergate scandal during the Nixon presidency, and the Iran-contra scandal during the Reagan presidency. Overall, the check and bal-
ance system created by the framers has served the republic fairly well. It may frustrate and confound leaders, but it also protects liberty. There are times when citizens want the government to act forcefully, and other times that call for restraint. In the former case, the separation of powers is often seen as interfering with the government’s doing what is good and necessary; in the latter it is often seen as a protection against too large and powerful a government. In the modern era, many critics have argued that the system of checks and balances has been replaced by an imperial presidency. This has especially been the case in the aftermath of the terrorist attack against the United States on September 11, 2001, when the Bush presidency practiced a bolder and more aggressive brand of presidential leadership and asserted that it was not bound by the normal checks and balances that operate in times of calm and peace. That Congress has often willingly or at times rather meekly gone along with the president does not calm the fears of those who see the erosion of the checks and balances so vital to the proper functioning of the separation of
powers. Even those who do not see the emergence of an imperial presidency do concede that the presidency has grown in power and prestige as the role of Congress has receded. This phenomenon is not unique to the American system, as in virtually all industrial democracies executives have risen in power and legislatures have declined. In this way, many see modernization and the revolutions in communication, transportation, weaponry, and other trends making for a more fast-paced world, one better suited to the streamlined nature of executive decision making, and not as amenable to the slower, more deliberative nature of a legislative assembly. Thus, legislatures are sometimes left in the dark as executives act, putting a strain on the system of checks and balances. In modern American politics the strains of the checks and balances can be seen in a variety of policy areas such as the war powers, the war against terrorism, science policy, foreign relations, and a host of other arenas of power. Presidents act and Congresses react. The presidency is a “modern” institution, built for action, decision, and dispatch. Often presidents get the upper hand by acting and leaving it to the Congress to follow or try to block presidential initiatives. This places Congress at a great disadvantage as members are often reacting to presidential decisions or actions that have already taken place. How can a president be checked if his decision has already been implemented? How can Congress balance power if it is left on the sidelines? By structure, the executive is better suited to the modern era. Congress has thus been left behind and has not found a role or place in the modern world. Might this spell the end of the checks and balances as we know them? Can you have a check and balance system when the president acts and the Congress merely (or mostly) reacts? Few would argue that there is today a “balance of power” between the executive and the legislature. Has this shift in power been inevitable or is there a way for the Congress to reassert its powers and prerogatives? Often Congress has been a willing participant in its own decline, delegating powers (for example, the budget in 1921, and war powers in the post– World War II era), or turning a blind eye when the president grabs power (as in declaring war or establishing military tribunals), and at other times,
it has tried to stop a president but has failed (e.g., domestic wiretapping). And while Congress is not helpless in this battle (it still has the power of impeachment), it has often sat back while presidents grabbed power. This has been especially troubling in the age of terrorism, when the Congress, lacking information, lacking the speed to make quick decisions, fearing a public backlash, and intimidated by the bold actions and assertions of the executive, often merely allows the president to lead while it sits back and watches. If the Congress is a rubber stamp to the executive, the separation of powers and the checks and balances are useless. This raises the troubling question: can a separation of powers/checks and balances system work for a 21st-century superpower in an age of terrorism? Must we, for national security reasons, abandon the system that has served us so well for so many years, or are there ways to modernize and update this system to serve both the needs of checks and balances and the demands of security in a dangerous world? We must remember that the framers were less concerned with efficiency than they were with protecting against tyranny. For the framers, an efficient and powerful government was a dangerous government. They intentionally built into the system separation, checks, and balances, all of which made it more difficult for the government to act. We today seem to want, and perhaps need, a more modern, streamlined, and efficient government to meet the demands of the day. Does this mean the abandonment of the framers’ vision? Can we have a government that is both accountable and powerful, able both to meet the demands of the modern era and still protect the integrity of the separation of powers and checks and balances? These questions are not easily answered, but they center on the most important and basic issues that must be confronted in an age of terrorism, where the presidency has eclipsed the Congress in power and authority, threatening to become a permanently imperial office, above the laws, and disembodied from the separation of powers and checks and balances that have served us so well for so many years. How Americans confront these thorny issues will test the extent to which the United States remains a constitutional republic, or slips into an empire such as the framers feared.
Further Reading Goldwin, Robert A. and Art Kaufman. Separation of Powers—Does It Still Work? Washington, D.C.: American Enterprise Institute for Public Policy Research, 1986; Madison, James, Alexander Hamilton and John Jay. The Federalist Papers. New York: New American Library, 1961; Whittington, Keith E. Constitutional Construction: Divided Powers and Constitutional Meaning. Cambridge, Mass.: Harvard University Press, 2001. —Michael A. Genovese
colonial governments Although France, Spain, and Holland were also in the business of colonizing the New World in the 15th century, the American colonial government’s history comes more directly from England. Henry VII issued a charter authorizing the explorations of John Cabot in 1496 to subdue any land not held by a Christian power, which adhered to Pope Alexander’s 1493 principle that ownership and possession would be assured by right of discovery if there were no prior claims by European powers. Eventually England, which was primarily concerned with promoting markets for its woolen cloth, adjusted its principle under Elizabeth I in 1578 that occupation was necessary for recognition of ownership. In an effort to get colonists to occupy English territory, England allowed people to receive grants of land from the Crown that were considered private property and not held by some lord. Thus, colonists could more easily subdivide land than people could in England. As a result, English monarchs tended to give charters that enticed people to settle in the colonies. In 1600, there were no permanent English settlements, but by 1700 there were 17 distinct jurisdictions with a population totaling 400,000. During the 17th century, the first charter held that government was retained by the Crown through a council resident in London. A second charter gave investors direct ownership of the land and allowed them to govern as was necessary for the well-being of the colony. A third charter made the Virginia Company a corporation with stockholders making decisions about a governor and council. This corporation then made governing decisions. As it happened, the distance of the colonies from England made it difficult for the Crown to have
more direct control. Even though the distance made direct governance unwieldy, the people who settled in America brought with them English traditions and beliefs. Thus, while the colonists had vast leeway in establishing their own political and religious communities, they were steeped in English tradition. Regardless of the uniformity of handling colonies in England, there were three distinct sets of people who developed three distinct types of governments and political cultures. The first, the New England colonies, was established by the Pilgrims. These colonies did not begin with the democratic features conferred on the other charters and were able to secure extensive jurisdictional privileges to make them absolute proprietors. The merchants in London who backed these settlers gave them a patent but eventually sold out in 1626. The settlers arrived near Cape Cod and did not travel to Virginia as they wanted to guarantee a more exclusive area for their congregation to settle. The men, in an effort to keep anyone from questioning the congregation’s authority, formed a social contract known as the Mayflower Compact. Before any of them stepped off the boat, they signed the Mayflower Compact, which was a specific form of government. By early in the 17th century, the colony was a trading company with much control over its affairs. As a result of the absolute control by the Pilgrims in the Massachusetts Bay area, the colonies of Rhode Island and Connecticut were born. All of these governments erected in the New England colonies are traceable to the feudalist era of England, and these colonists considered themselves Englishmen with all the inherent rights. Their source of power was a direct grant from the Crown. They formed a government that had a governor and a council, in addition to an assembly of freemen that met once a year. Early in the colonial history, all adult males were considered stockholders and participated in elections. By 1660, suffrage was limited to those who had property, were able to engage in good conversation, and had taken an oath of fidelity. The Middle Atlantic colonies were conceived differently, as England’s gentry saw these colonies as personal estates and planned to transplant feudal forms of government and landholding to them. These proprietors were given power directly from the Crown. New York was originally established as New Netherlands by the Dutch West Indies Company,
which considered colonists to be servants of the company. In 1664, the colony was surrendered to the duke of York who was given all the land by Charles II. The colonists, however, did not see themselves as feudal serfs and agitated for democratic rights of Englishmen. They adhered to the concepts developed in the political and constitutional history of England. When the duke took over, he allowed the colonists to begin a general assembly, which first convened in 1683. Pennsylvania was established by William Penn as a Quaker experiment. Penn wrote a constitution, and a council of 72 members were elected by freemen living there. This difference in understanding led to clashes between the lords and legislatures established in the 17th century. Both sides agreed that there should be a strong government having full authority of the people and established bicameral legislative bodies, with the governor and council as the upper house and the general assembly as the lower house. The southern colonies developed as a duplication of English government and society. The first English influx happened in 1607 with the founding of Jamestown. They considered the monarch to be superior, and the Anglican religion in these colonies was more noticeable. The source of power in the southern colonies was the corporation; they were conceived as commercial ventures to make money for England. In Virginia especially, the commercialism meant that political life did not take shape as early as in the other colonies. Administrative control was handled by a privy council composed of more than 40 members. To be an elected official in Virginia, a person had to have known integrity, good conversation and be at least 21 years old. In the Carolinas, they had the same commercial aspirations as Virginia, but in order to lure people to the colony, they allowed participants to choose councilors and a governor, who would eventually be replaced by colonists. Thus, there were more political rules early in Carolina. Charles Town used the Fundamental Constitution written by John Locke and Anthony Cooper in 1669. In 1712, North Carolina was officially recognized as its own colony when it got a governor. Georgia was created as a trust that would revert to the Crown once the colony became profitable. In one area, all the colonies were very similar: the Puritan influence. Puritans were an offshoot of the
Anglican Church who wanted to purify the Church of England even more of the Catholic influence. This Puritan influence was felt throughout the colonies, as 85 percent of churches were Puritan in nature. Since this religion posited that any individual could believe and there was no ecclesiastical authority, religious freedom of a sort abounded. Also, this individualistic faith led to democracy prevailing. When the Puritan influence is coupled with English constitutional tradition, we have the makings of an early American political culture throughout the original thirteen colonies. The distinct colonies began to develop a unique and separate national interest at least as early as 1754 when the French and Indian War led many Americans to believe that life in the colonies was in danger. Reports indicated that the French and Native Americans were gathering military forces, so colonial leaders requested a meeting. In 1754, a meeting was held in Albany, New York, at which Benjamin Franklin proposed a plan for a union of the colonies. The delegates there agreed to a unified government, which would consist of a president general and a grand council elected by colonial assemblies, would have power over Native American relations, war declarations, peace treaties, and land acquisitions, and which could raise an army and navy and could levy taxes to pay for itself. When the plan arrived in the colonial assemblies, some ignored it and others viewed it as unnecessary and voted it down. This early attempt at a national government indicates an American political culture that became the cornerstone for the Revolution and then the U.S. Constitution. The most important features of colonial governments were their general assemblies, as they were the pieces of government that were not directly part of the Crown. The first colonial legislature was in 1619 in Virginia, and by 1700, all of the colonies had a representative assembly. Even though, as explained above, these governments were all distinct in the 17th century given the different conceptions of government in each area, by the 18th century they all came to resemble one another. They were all representative, as it became feasibly impossible for all men to participate, even with the limited qualifications. They were all, except Pennsylvania, bicameral, and they were highly complex. There were long sessions and elaborate committee systems to handle the work-
load. Rules and procedures were adopted, and staff and facilities were provided. In many ways colonial assemblies looked similar to modern-day state legislatures. What is most important to note in thinking about colonial governments is that the colonists considered themselves entitled to rights that had been traditionally given to Englishmen throughout England’s history. The ideas of natural rights and representative democracy had deep roots from which American government grew. Further Reading Bliss, Robert M. Revolution and Empire: English Politics and the American Colonies in the Seventeenth Century. New York: Manchester University Press, 1990; Copeland, David A. Debating the Issues in Colonial Newspapers: Primary Documents on Events of the Period. Westport, Conn.: Greenwood Press, 2000; Dougherty, Jude. “Puritan Aspiration, Puritan Legacy: An Historical/Philosophical Inquiry,” Journal of Law and Religion 5, no. 1 (1987): 109–123; Frohnen, Bruce, ed. The American Republic. Indianapolis: Liberty Fund, 2002; Kavenagh, W. Keith. Foundations of Colonial America: A Documentary History. New York: Chelsea House, 1973; Kolp, John Gilman. Gentlemen and Freeholders: Electoral Politics in Colonial Virginia. Baltimore, Md.: Johns Hopkins University Press, 1998; Labaree, Leonard Woods. Royal Government in America: A Study of the British Colonial System before 1783. New York: F. Ungar, 1958; Squire, Peverill. “The Evolution of American Colonial Assemblies as Legislative Organizations,” Congress and the Presidency 32, no. 2 (2005). —Leah A. Murray
commerce clause The first three articles of the U.S. Constitution are known as the distributive articles, because they discuss the three branches of the national government and distribute powers among them. Article I of the Constitution lays out the specific powers of Congress, which, as the legislative branch, has more specific authority than the other two branches. Not only does the Congress have the authority to make laws, but the framers also granted Congress a long list of enumerated powers (which means that certain
powers are specified). As a result, mostly through the powers to tax and spend, Congress received the largest grant of national authority in the new government. Most of these enumerated powers are found in Article I, Section 8, and are followed by a general clause permitting Congress to “make all laws which shall be necessary and proper for carrying into Execution the foregoing powers.” Enumerated powers include the ability to: lay and collect taxes, borrow money, regulate commerce among the states, control immigration and naturalization, regulate bankruptcy, coin money, fix standards of weights and measures, establish post offices and post roads, grant patents and copyrights, establish tribunals inferior to the United States Supreme Court, declare war, raise and support an army and a navy, and regulate the militia when called into service. The “necessary and proper” or “elastic” clause has at times allowed Congress to expand its powers over state governments in creating policy that is related to the enumerated powers listed above. Congress today exercises far more powers than are specifically enumerated in the Constitution. Yet it can also be argued that Congress has remained within the scope of powers delegated to it by the Constitution based on the necessary and proper clause and the related doctrine of implied powers. This, along with the commerce, taxing, and spending clauses, is one of the key sources of congressional power. According to legal scholars Lee Epstein and Thomas G. Walker, “Of all the powers granted to government, perhaps none has caused more controversies and resulted in more litigation than the power to regulate commerce.” The issue of commerce was one of the primary reasons for calling the Constitutional Convention in 1787. Under the Articles of Confederation, the government did not have the ability to effectively control commercial activity, and following the Revolutionary War, the new nation, as well as the states, was in debt. These problems grew as the nation moved from an agrarian to an industrial society. The starting point in the interpretation of the commerce clause by the United States Supreme Court is found in Chief Justice John Marshall’s opinion in Gibbons v. Ogden (1824), which was a classic statement of nationalism and became a source of extensive authority for Congress to address new problems in the
regulation of the national economy. At issue was the constitutionality of New York’s grant of a steamboat monopoly. Thomas Gibbons, an attorney from Georgia, challenged this exclusive grant on the ground that it interfered with the power of Congress to regulate commerce among the states. Marshall wrote that the power of Congress over commerce among the states was plenary and subject to no competing exercise of state power in the same area. The federal law under which Gibbons operated his steamboat business was a modest exercise of that plenary power, but it was enough to invalidate the state law because the monopoly interfered with the commercial privileges provided by the federal government. Marshall defined the phrase “commerce among the states” and the power of the Congress to regulate it in broad and sweeping terms. As a result, his opinion came to be read as an endorsement of regulatory authority on a large scale. In a concurring opinion, although never supported by a majority of justices, Associate Justice William Johnson stated that the power of Congress in this area was not only plenary but also exclusive, meaning that state legislation in the absence of federal law regulating interstate commerce would be unconstitutional. In most cases, states are limited in their ability to regulate commerce. Until the late 19th century, this ruling represented the viewpoint that the federal government had broad powers under the commerce clause, and state regulation was not allowed, which increased commercial and industrial competition. The Court in the late 19th and early 20th centuries developed a more restrictive conception of the commerce power and invalidated important congressional legislation. Underlying the Court’s interpretation during this era was the notion of dual federalism, which suggested that the framers reserved important powers, including the police power, to the states. (Police power is defined as the power of the states to protect the health, safety, welfare, and morals of their citizens.) During this time, the Supreme Court was using the commerce clause as a means to limit the ability of the federal government to regulate the national economy. This time period also corresponded with what is known as the Gilded Age of American politics, where industrialists and top business leaders equated freedom with absolute liberty in the economic realm, and judges zealously protected the rights of contract and
property in the courts. The federal judiciary, including the Supreme Court, was actively protecting laissez-faire capitalism and corporate autonomy in its rulings. By the end of the 19th century, many Americans supported an end to corporate influence over government policies and what many regarded as economic injustice and exploitation in the workplace due to unsafe working conditions, particularly for women and children, and little to no protection for workers’ rights in terms of wages, hours, or other benefits. It was in this political environment that Congress responded by passing its first two major pieces of legislation dealing with commerce: the Interstate Commerce Act of 1887, which set up a system to regulate railroads engaged in interstate commerce, and the Sherman Antitrust Act of 1890, which was intended to limit corporate monopolies. The Supreme Court was also active in its interpretation of the commerce clause during this era, and it was not ready to share the view of Congress that commerce needed regulation imposed upon it by the federal government. A major change in commerce clause interpretation (since the Gibbons v. Ogden ruling in 1824) by the Court came in United States v. E.C. Knight Company (1895), resulting from the federal government’s effort to break up a powerful sugar monopoly by invoking the Sherman Antitrust Act. The American Sugar Refining Company, which had recently acquired other sugar refineries, controlled 98 percent of sugar production in the United States by 1892. In this case, the Court distinguished commerce from production, stating that manufacturing was not considered interstate commerce. The Court ruled that the relationship between commerce and manufacturing was secondary in nature, and that the connection was incidental and indirect. Under their police power, the Court ruled, states were free to regulate monopolies, but the national government had no general police power under the commerce clause. This formal distinction between commerce and production temporarily gutted the Sherman Act, without declaring it unconstitutional outright. If the government could regulate only against the postmanufacturing phases of monopolistic activity, and not against the entire enterprise, its hands were effectively tied in this area of business regulation. In his dissenting opinion, Associate Justice John Marshall
Harlan stated that the federal government must be empowered to regulate economic evils that are injurious to the nation’s commerce and that a single state is incapable of eradicating. The Supreme Court followed similar reasoning in Hammer v. Dagenhart (1918), invalidating federal restrictions on child labor. The manufacture of goods by children, even when those goods were clearly destined for shipment in interstate commerce, was not a part of commerce and could not be regulated by Congress. However, the Court upheld several equally farreaching exercises of congressional power under the commerce clause during this era. For example, Congress enacted laws imposing fines and imprisonment for participation in lotteries and prostitution. Activities of this sort, unlike child labor and business monopoly, were widely regarded as immoral. While the Court did not support restrictions on most areas of commerce, when it came to punishing what most people believed to be sinful behavior (like gambling and prostitution), the Court viewed the commerce clause as an appropriate tool for regulation in these other areas. In the field of transportation, particularly the regulation of railroad freight rates, the scope of national power under the commerce clause developed in accordance with the broad language of the ruling in Gibbons v. Ogden (1824). The Court did, however, develop several concepts designed to assist it in defining the outer limits of the commerce power, including formal rules and tests derived from the distinction between interstate and intrastate commerce. Federal regulation relied on an “effect of commerce” rule; that is, whether an activity within a state had an obvious effect or impact on interstate commerce so as to justify the exercise of federal power. The stream of commerce doctrine, first articulated by Associate Justice Oliver Wendell Holmes in 1905, was one such test to implement the effects rule. In Swift & Co. v. United States (1905), the Court rejected the claim of Chicago stockyard firms, which were challenging federal prosecutions for conspiring to restrain trade, that the purchase and sale of cattle in Chicago stockyards was not commerce among the states. The corporations involved controlled the meat industry and the stockyards, which constituted a restraint of trade (including price controls and withholding meat). Even though local in nature, Holmes found that they
were in the “current of commerce” among the states—from the point of origin to the point of termination. In a number of cases during this period, the Supreme Court indicated that even though the activity in question might not be defined as commerce per se, it could still be regulated if it had a direct effect on interstate commerce. The National Industrial Recovery Act, a major piece of New Deal legislation that attempted to enact codes for fair competition, was declared unconstitutional in Schechter Poultry Corp. v. United States (1935), since regulation had only an indirect effect on interstate commerce. Carter v. Carter Coal Company (1936) also struck down New Deal legislation that attempted to regulate wages for the coal industry. This led to President Franklin D. Roosevelt’s court packing plan and what is considered to be a revolution in constitutional interpretation, where the commerce clause came to be recognized as a source of far-reaching national police power. President Roosevelt’s New Deal attempted to replace measures that the Supreme Court had invalidated prior to 1937, including regulation in areas such as labor-management relations, agriculture, social insurance, and national resource development. Roosevelt’s plan to “pack” the court came from his desire to place justices on the Supreme Court who would be willing to uphold his New Deal legislation as constitutional. After Roosevelt was elected in 1932, and began in 1933 to implement his New Deal legislation with the support of Congress, the President found himself facing a Supreme Court which had for many years been hostile to state and federal legislation designed to regulate the industrial economy in the United States. Faced with a hostile Court, Roosevelt and his fellow Democrats in Congress threatened mandated retirements and an increase in the Court’s size through a constitutional amendment. In 1937, however, the Court itself changed course when Associate Justice Owen Roberts began voting to uphold economic regulation, a shift remembered as the famous “switch in time that saved nine.” Later that year Roosevelt received his first appointment to the Court (Associate Justice Hugo Black), and as the Court began to uphold legislation that regulated commerce based on the commerce clause, Roosevelt no longer needed to “pack the court” in his favor. Beginning with its decision upholding the National Labor Relations Act in National Labor Relations Board v. Jones & Laughlin Steel Corporation (1937), the reoriented
Supreme Court swept away distinctions between commerce and manufacturing, between direct and indirect burdens on commerce, and between activities that directly or indirectly affected commerce. In Wickard v. Filburn (1942), the Court upheld the federal regulation of wheat production, even for a farmer who was growing the crop for his own use and consumption. Compared to E.C. Knight, this case illustrated the extent to which a single clause of the Constitution is subject to contrasting interpretations. The scope and reach of the commerce clause to regulate commercial activity expanded into new territory during the Warren Court of the 1960s. In the 1960s, Congress relied on the commerce clause to legislate outside of economic activities. In Heart of Atlanta Motel v. United States (1964), the Court unanimously upheld the public accommodations section of the Civil Rights Act of 1964 as a proper exercise of the commerce power. The motel in question did a substantial volume of business with persons from outside Georgia. The Court ruled that its racially restrictive practices could impede commerce among the states and could therefore be appropriately regulated by Congress. In the companion case of Katzenbach v. McClung (1964), the Court went even further by recognizing the power of Congress under the commerce clause to bar racial discrimination in a restaurant patronized almost entirely by local customers. The Court found a connection with interstate commerce in the purchase of food and equipment from sources outside Alabama. Between 1937 and 1995, the Supreme Court did not overturn even one piece of legislation that sought to regulate the economy under the provision of the commerce clause. However, in 1995, that trend changed, as the Rehnquist Court restricted congressional powers under the commerce clause. In United States v. Lopez (1995), a closely divided Court (in a 5-4 decision) invalidated the Gun-Free School Zones Act of 1990, a federal statute criminalizing the possession of a firearm in or within 1,000 feet of a school. Chief Justice William Rehnquist, writing for the majority, asserted that the act was “a criminal statute that by its terms [had] nothing to do with commerce or any sort of enterprise, however broadly one might define those terms.” Rehnquist observed that “if we were to accept the Government’s arguments, we are hard-pressed to posit any activity by an individual that
Congress is without power to regulate.” This decision served as an important reminder to Congress that “enumerated powers” suggests limitations. While the ruling did not suggest overturning previous cases, Rehnquist did suggest not granting further power to Congress in this area. This proved to be one of the most important decisions handed down by the Rehnquist Court (1986–2005). Further Reading Epstein, Lee, and Thomas G. Walker. Constitutional Law for a Changing America: Institutional Powers and Constraints. 5th ed. Washington, D.C.: Congressional Quarterly Press, 2004; Fisher, Louis. American Constitutional Law, 5th ed. Durham, N.C.: Carolina Academic Press, 2003; O’Brien, David M. Constitutional Law and Politics. Vol. 1, Struggles for Power and Governmental Accountability. 5th ed. New York: W.W. Norton, 2003; Stephens, Otis H., Jr., and John M. Scheb II. American Constitutional Law. 3rd ed. Belmont, Calif.: Thompson, 2003. —Lori Cox Han
common law Common law refers to a way of practicing law and adjudicating cases that is, to many, quintessentially English, in its reliance not only on legal precedent and historical tradition as the cornerstones of justice but also on custom as the ultimate source of judicial authority. In the United States, the English commonlaw heritage has become part of a system of justice that relies on the common law as its defining element, particularly in the development of American constitutional law. In addition to admiralty and equity, common law offers the primary venue through which disputes are resolved in the United States, and common-law doctrines regarding everything from contracts to torts provide the substantive criteria on which judicial opinions are predicated. Moreover, the political structure in which the American legal system is embedded depends on a common-law culture dedicated to the protection and maintenance of processes and guarantees that have accrued over the centuries. Common-law systems have been incorporated throughout the broader Anglo-American world, and their presence is especially conspicuous in Common-
wealth countries with close ties to the United Kingdom. (The Commonwealth defines itself as an association of 53 independent states consulting and cooperating in the common interests of their peoples and in the promotion of international understanding and world peace; it includes the United Kingdom, Canada, Australia, India, and South Africa, among others.) The numerous common-law variants throughout the broader Anglo-American community constitute one of two principal systems of jurisprudence dominant in the Western world today. Along with the civil-law model prevalent in continental Europe, our common-law standard represents a doctrinal evolution that stretches back to the Middle Ages. The common law has formed the basis of both private and public law in the United States since its founding, and its validity as the source of substantive and procedural legal authority was secured through centuries of usage and acceptance in England. Upon creation of the American federal republic in 1787, the common law was embraced as the framework for state legal systems, and, with the exception of Louisiana, it has continued to serve, with minor modification and hybridization, as the normative model throughout the United States. By the late 1780s, the common law enjoyed widespread legitimacy in the former British colonies of North America, so the establishment of commonlaw systems in the states was no surprise. The colonies in British North America had operated for several decades, and, in a few cases, almost a couple of centuries, along political and judicial patterns defined in England and developed through multigenerational processes that originated long before the European settlement of North America. During the 17th century, English colonization of the eastern seaboard involved the transplantation of English political habits and legal practices to the new settlements, and a common-law tradition that had thrived in England for more than 400 years provided authority and relevance in colonial environments that were often precarious. Despite the constitutional and political tensions that emerged between the colonies and the mother country during the 1760s and 1770s, the legitimacy and authority of the common law was not challenged. English common law evolved over centuries, with beginnings that lead back to the Middle Ages.
No one can be sure how this comparatively peculiar system of law actually started, but the nature of early English government and politics provides us with some clues. Due to its relatively amorphous constitution, which was marked by decentralization of governing power, England was not conducive to centralization of control or normalization of administration of any sort during the early Middle Ages, unlike many of its ultimate rivals. As a result, the various locally based practices and protocols that emerged during the centuries following the Germanic and Viking invasions became firmly entrenched in English society. A pattern of custom-centered dispute resolution and political administration was established throughout the future kingdom, so that potential conquerors or unifiers were continually plagued by the dissension and instability of such decentralization. When Henry II undertook some of his famous reforms in the 12th century, which have been credited with creating the formal structures within which a mature system of common law later developed, he formed a kingdomwide legal structure that endowed custom and tradition with the legal, if not constitutional, support it had lacked prior to that. During the ensuing centuries, the system Henry fostered helped institutionalize the foundations of common law and, more broadly, the common-law system implemented by the English affirmed the viability of underlying political principles upon which the common law was established, such as due process of law. As one of the hallmarks of English law and eventually also American jurisprudence, due process was incorporated into English common law during the Middle Ages and became a defining feature of the common law. Neither natural rights nor divine rights through God could offer the kind of legal viability that custom and tradition seemed to present. Through its crucial role as the internal logic that defined the evolution of English common law, custom enabled the marriage of natural right and legal precedent in a way that would, by the 17th century, firmly entrench due process of law in the English political consciousness. Already by the beginning of the 13th century, with the issuance of the Magna Carta, English insistence on the recognition and confirmation of specific procedures without which legal status, privilege, and benefits or claims arising thereon could not be suspended,
modified, or abolished was evident (what we would now call due process). In terms of the common law and its role as a guarantor of due process, and American constitutional norms more broadly, these historical developments enshrined inviolable procedures that protected property in its customary legal formulations of life, liberty, and estate. Guarantees against the suspension or abolition of privileges associated with habeas corpus, those establishing standards for the administration of justice, and particularly others concerned with the rights of the accused became hallmarks of an English legal tradition that was increasingly concerned with government’s ability to control life and political liberty. In the arena of colonial politics within British North America, the common-law customcentered heritage of Sir Edward Coke and Sir Matthew Hale (both noted English jurists whose writings helped to develop English common law) became wedded with Lockean sensibilities about natural law and natural rights to produce a unique strain of due-process and common-law doctrines. This is not meant to imply that the common-law tradition or custom-centered political culture generally had ceased to be relevant in the colonies; rather, it should highlight the fact that what eventually became an American system of jurisprudence was founded on a combination of influences that began to set it apart somewhat from the legal system in Great Britain. With the ratification of the U.S. Constitution and the subsequent shift toward legal positivism (which differs from natural law in that laws are made by human beings with no necessary connection to ethics and/or morality, while natural law assumes that there is an inherent connection between law and ethics, morality, and justice), the aforementioned naturallaw influence became less prominent, and, during the 19th century, American common-law doctrines reflected both their historical English moorings and the impact of local circumstances. Gradually, doctrinal changes sought to reconcile sociocultural innovations triggered by industrialization with inherited case law on property, contractual dynamics, commercial rights, and governmental police powers. Although industrialization caused the rethinking of various controlling legal doctrines, the procedural protocols of a common-law system of jurisprudence were never seriously challenged. The systematization of law through
codification that increasingly characterized continental jurisprudence during the 19th century did not attract a viable following in the United States. The United States retained its fidelity to a common-law structure originally imported from England and never seriously questioned its validity. Unlike civil law, or the civil code, which is based on codification through statutory enactment, regulatory provision, or executive decree, the common law exists through an adherence to the precedent, custom, and tradition established by judicial interpretation, which is why it has often been labeled, rather crudely and inaccurately, as judge-made law. Although the practical consequences of common-law decision-making may include what some have considered judicial lawmaking, the common law requires judges to interpret (or, according to some, discover) the law according to principles and rights that are found in controlling opinions and other relevant sources. Rather than a specific rule, decree, statute, or code, the binding authority in common-law cases is the legal precedent and accumulated tradition legitimized through legal interpretation of prior decisions and, to a lesser extent, pertinent legislative enactments. To many observers, the crucial and indispensable component of common-law decision-making is the concept of stare decisis. And, insofar as common law survives through the establishment of and adherence to relevant precedents, stare decisis is indeed the centerpiece of the common law. Stare decisis reflects the foundational idea that established precedents should have controlling authority in cases of common law. Consequently, if relevant precedents are available and applicable to particular cases, judges should honor existing precedents in those cases, and they should regard the doctrinal aspects of those precedents as normative and inviolable, at least until the precedents are overturned or substantially revised. Judges are obviously not bound by specific precedent when one does not exist, but, even in such cases, their decisions should be guided, if not determined, by the fundamental political and legal principles that animate precedents. Under stare decisis, compliance with precedent is not absolute, inasmuch as it accommodates the probability that some precedents will eventually be overturned, but such doctrinal transformations should be reserved for only those circumstances that offer clear proof of error or irrelevance.
Common law has usually been considered a type of positive law. Law can be envisioned as originating from one of two sources, or categories of sources; consequently, it is classified as either human (i.e., man-made) or nonhuman, which translates into the parallel observation that all law is either positive or nonpositive. Nonpositive law includes divine law, natural law, and other forms of nonhuman fundamental law, while positive law comprises statutory law, constitutional law (or human fundamental law), and common law. Among legal scholars, debate has arisen regarding the precise status of equity and customary law, since these concepts reflect the existence and authority of transcendent principles not explicitly created or stipulated by human acts, but, more often than not, equity and particularly customary law have been classified as positive law due to their existence as manifestations of human activity or sociocultural practice. On a more practical level, the common law has been treated as one of two primary types of positive law. For most practitioners and scholars, insofar as positive law is human law, then positive law is either precedent-based or code-based, so the conceptual dichotomy between common law and statutory, or codified, law has been widely accepted as valid. Of course, even these distinctions are merely approximate and largely utilitarian, since common law and statutory law frequently coexist and reinforce each other. American common-law courts habitually consult, heed, or accept statutory provisions, requirements, or guidelines that affect or control matters in question, and legal interpretation regularly entails the reconciliation of judicial and legislative acts on similar or identical topics. Therefore, especially as witnessed through decision-making in areas of constitutional law, common law does not simply refer to a specific venue or system, as distinguished from equity or admiralty, according to which specific legal procedures and actions are conducted, determined, and defined. Rather, common law refers more generally to a legal culture that conceptualizes the law as an institutional organism, so to speak, whose viability is a function not only of the evolution of the system itself but also of the stability and permanence that custom, tradition, and precedent provide. In that sense, the United States is a common-law culture, dedicated to the def-
inition of key political and legal principles through multigenerational consent, and our legal system, therefore, depends on the acceptance and continual acknowledgment of seminal doctrinal markers and precedents by which the law is defined. Further Reading Berman, Harold J. “The Origins of Historical Jurisprudence: Coke, Selden, Hale.” Yale Law Journal 103 (1994): 1652–1738; Friedman, Lawrence M. A History of American Law. New York: Simon and Schuster, 1985; Hall, Kermit L. The Magic Mirror: Law in American History. Oxford: Oxford University Press, 1989; Hogue, Arthur R. Origins of the Common Law. Indianapolis, Ind.: Liberty Press, 1966; Holmes, Oliver Wendell, Jr. The Common Law. New York: Dover Publications, 1991; Horwitz, Morton J. The Transformation of American Law, 1870–1960: The Crisis of Legal Orthodoxy. Oxford: Oxford University Press, 1992; Kelley, J. M. A Short History of Western Legal Theory. Oxford: Oxford University Press, 1992; Posner, Richard A. The Problems of Jurisprudence. Cambridge, Mass.: Harvard University Press, 2005; Tomlins, Christopher L. Law, Labor, and Ideology in the Early American Republic. Cambridge: Cambridge University Press, 1993. —Tomislav Han
concurrent powers The term “concurrent powers” refers to areas in which both the national and state governments can act simultaneously. Both the national and state governments have independent power because the United States has a federal form of government. This means that there is one national government and multiple state governments that exist separately from one another. Even more significantly, both the national and state governments have independent responsibilities, each exercising sovereignty in certain areas. Sometimes this sovereignty overlaps, leading to instances in which each level concurrently exercises power on the same issue. The most common areas in which powers are exercised concurrently are taxation and regulation. Because concurrent powers are not addressed directly in the U.S. Constitution, the meaning and application of the term have developed and changed over time.
The constitutional provisions with implications for concurrent powers are primarily Article VI and the Tenth Amendment. Article VI establishes the supremacy of the national government: “This Constitution, and the Laws of the United States which shall be made in Pursuance thereof; and all Treaties made, or which shall be made, under the Authority of the United States, shall be the supreme Law of the Land; and the Judges in every State shall be bound thereby, any Thing in the Constitution or Laws of any State to the Contrary notwithstanding.” This is significant for states in exercising concurrent power because it means that national laws always supersede state laws. While this may seem to practically eliminate any concurrent role for states, the Tenth Amendment says “the powers not delegated to the United States by the Constitution, nor prohibited by it to the States, are reserved to the States respectively, or to the people.” This provision clearly protects an independent role for the states, which has been interpreted to imply concurrent jurisdiction for the states in any matter not explicitly prohibited by the Constitution. According to Alexander Hamilton in Federalist 32, there are only three cases in which states would be absolutely excluded from exercising concurrent authority. The first is when the Constitution explicitly gives exclusive authority to the national government. The second is when a power is given to the national government and also prohibited to the states. Finally, states cannot act on issues where the national government has authority and an equal authority exercised by the states would be “totally contradictory and repugnant.” An example of the first case, of exclusive authority, can be found in Article I, Section 8 of the Constitution, where it says that the national legislature shall “. . . exercise exclusive legislation in all cases whatsoever . . .” over the seat of government. In practice this means that the national government is in charge of the capital, Washington, D.C. The provisions regarding taxes illustrate the second case. Article I, Section 8 also gives the national legislature the power to lay and collect taxes. Article I, Section 10 goes on to say that states cannot lay taxes on imports and exports. Here, then, is one specific area in which the states do not have concurrent power. The national government can tax imports and exports; the states cannot. The
third case is less clear and open to interpretation. Hamilton’s example is the national legislature’s power to establish a uniform rule of naturalization for new citizens, also found in Article I, Section 8. He points out that states cannot logically have concurrent authority in this matter, because if any state made a distinct naturalization policy the rule would no longer be uniform. In the full scope of the government’s authority there are relatively few powers that are clearly delineated as being either exclusive or concurrent in the Constitution. Thus, the interpretation of concurrent powers has evolved over time based on practice and the decisions of the courts. Because this evolution has occurred on a piecemeal basis, as individual court cases have been decided, there are still no fundamental principles to depend on in deciding when concurrent powers exist. It took almost a century for the country to firmly establish the supremacy of the national government. While Article VI contains a national supremacy clause, by tradition and practice the states were more involved in matters affecting the states than the national government. It was not until after the Civil War (1861–65) that the question was laid to rest. The Civil War established that the national government could not only nullify state policies that conflicted with the national government but that states did not possess the independence or authority to leave the Union. An important step along the way to the culminating Civil War was the United States Supreme Court case, McCulloch v. Maryland, decided in 1819. This case dealt with the ability of a state, Maryland, to tax the national Bank of the United States. One of the issues where concurrent powers had always been taken for granted was taxation. Even Hamilton maintained in Federalist 32 that both the states and the national government retained independent revenue raising authority. In his classic decision, however, Chief Justice John Marshall said that a state could not tax a national entity, for in his famous words, “the power to tax involves the power to destroy.” Since the national government was supreme, the states should not be allowed to pursue any policy that could potentially damage the national government. This was a blow for the states’ independent sovereignty, for it declared that they were subordinate to the national government. This decision did not take away
the states’ concurrent taxing authority in its entirety, but it was limited. The establishment of national supremacy had significant implications for the exercise of concurrent powers because it meant that the national government could always trump the states’ jurisdiction, effectively taking away concurrent authority whenever it wanted to. Alongside the legal precedent of national supremacy it was also the case that changes in the country elevated the role of the national government. New problems and issues faced the nation during and after industrialization at the beginning of the 20th century. As the economy grew more national, and even international, in scope, issues were not as easily addressed by individual states. For instance, public concern with issues such as worker’s rights and child labor could more consistently be dealt with on a national level. Also, the growing importance of the issue of individual civil rights made it hard to justify disparate rights between states. Once the national legislature set policy on such matters, the opportunity for states to exercise concurrent authority evaporated. The Great Depression simply highlighted the national scope of the economy and the inability of states to address the larger picture. Added to that, two World Wars brought the national government into matters that were previously left to local authorities. Perhaps the best example of how the growth of the national government’s role affected concurrent powers is the evolution of the commerce clause. Article I, Section 8 of the Constitution gives Congress the power “to regulate commerce . . . among the several states . . . ,” also known as interstate commerce. In the early days of the country both the states and the national government regulated commerce. As the country grew and transportation improved, commerce increasingly crossed state borders, which allowed the national government to regulate more. In the landmark court case, Gibbons v. Ogden, decided in 1824, John Marshall declared that the Constitution gave Congress the exclusive authority to regulate interstate commerce, and the states could not. This decision greatly expanded the scope of the national government’s authority over business matters, while simultaneously shrinking the states’ concurrent role. It also suggested that the national government had the ability to eliminate the states’ role whenever it wanted to. Because Marshall’s definition of com-
merce was not all-encompassing, there was still room for the states to act on some business-related matters, such as inspection laws and roads. It was an important step toward national dominance, though, and it is not an exaggeration to say that, by the end of World War II, the national government possessed the authority to make policy regarding virtually any aspect of commerce it wanted to, including working conditions, transportation, manufacturing, and employee benefits. The tide shifted somewhat in the 1980s as the Supreme Court decided against national control in some cases, but it is still true that states have concurrent power on commerce only when the national government decides not to get involved itself, or explicitly includes a role for the states. In other words, it is at the national government’s discretion. Interestingly, the only specific constitutional provision for concurrent power occurred in 1919 with the Eighteenth Amendment, the ban on alcoholic beverages. Section 2 of that amendment stated “the Congress and the several states shall have concurrent power to enforce this article by the appropriate legislation.” There was a flurry of scholarly attention to logistics of concurrent enforcement at this time, but this interest died with the amendment. While any discussion of concurrent powers necessarily includes a discussion of federalism, it is important to note that concurrent powers are not the same as federalism. Under federalism, the national and state governments each possess areas of independent authority. While federalism creates a government in which concurrent powers may be exercised, it is possible, in theory at least, to have a federal system of government in which each level has completely separate tasks, with no coordinate power. For instance, the Constitution gives each state the power to decide on the qualifications of state officials, such as state legislators. This is not a concurrent power; it is simply a state power. In order for a power to be concurrent, both levels of government have to be able to make decisions about the same issue at the same time. While states now have concurrent authority on fewer matters than perhaps envisioned by the founders, the areas that still overlap with the national government are significant. Taxation is a prime example. Most U.S. citizens pay both a state and national tax on their income (seven states currently have no income tax,
including Alaska, Florida, Nevada, South Dakota, Texas, Washington, and Wyoming). The state income tax is determined entirely by each state and thus varies from state to state. Thus, national and state governments exercise concurrent power over income taxation. Likewise, there are many other taxes that are concurrent. For instance, citizens pay both a state and a national tax on each gallon of gasoline they purchase. As of 2006, the national tax is approximately 18 cents per gallon, while the state tax varies widely: Alaskans pay 8 cents per gallon, and those in California pay approximately 26 cents per gallon. Concurrent state regulation also continues to occur throughout the country. While the courts have continued to monitor state regulation and are typically quick to uphold the supremacy of national policy, it is simply not logical or practical to take away the states’ role in this area. Because the courts have dealt with these issues on a case-by-case basis, practice varies widely and is sometimes inconsistent. For instance, the national government plays an active role in regulating pollution of the air, notably through the Clean Air Act, which regulates the amount of emissions that are allowed. Some states regulate pollution as well. California has passed state laws that are even more stringent than the national requirements. This is a case in which the national government and California have exercised concurrent authority over pollution. Auto manufacturers have sued California over its laws, saying that the national emission standards ought to be controlling. The case is still tied up in the courts, but the decision will have important implications for concurrent powers. Clearly, the national government has the dominant role in deciding when states can exercise concurrent powers. The absence of clear constitutional or legal rules governing the practice will ensure that debate continues, however. The fact that the courts have been responsible for evaluating practices of concurrent power also contributes to the uneven application of concurrent powers. For every case that comes to court challenging or fighting for concurrent power, there are likely numerous instances in which national and state laws work together satisfactorily. However powerful the national government is, it cannot make, and is not even interested in making, policy on every single matter that affects the states. Thus, the states
will retain significant authority that is bound to overlap with national authority on occasion. Further Reading Grant, J.A.C. “The Scope and Nature of Concurrent Power.” Columbia Law Review 34, no. 6 (June 1934), 995–1040; Hamilton, Alexander. Federalist 32, in The Federalist with Letters of “Brutus,” edited by Terrence Ball. Cambridge: Cambridge University Press, 2003; O’Brien, David M. Constitutional Law and Politics. Vol.1, Struggles for Power and Governmental Accountability. 5th ed. New York: W.W. Norton, 2003. —Karen S. Hoffman
Constitution, U.S. A constitution is a framework that legally describes the rules and procedures of a government, and legally binds that government to a set of guidelines as it limits and empowers the government to act in the name of the people or the state. Usually, though not always, in written form (the British, for example, have an “unwritten constitution”), a constitution is often seen as the supreme law of the land, and sets up the structure of government. One of the first “constitutions” was Magna Carta in 1215. While not a constitution in the modern sense, Magna Carta codified the rights and limits of the English king, and placed his power within the confines of a rudimentary rule of law. Of course, a few years after signing Magna Carta, the king went back on his word and, when his political strength had been refortified, grabbed all his power back, but this effort nonetheless marks one of the early efforts to put in writing the power and limits of the state. In The Rights of Man, (1792), Thomas Paine wrote that “A constitution is a thing antecedent to a government, and a government is only the creature of a constitution. The constitution of a country is not the act of its government, but of the people constituting a government.” What exactly did the framers of the U.S. Constitution create in 1787? What structure or skeleton of power and government did the founders of the U.S. system design? The chief mechanisms they established to control as well as to empower the government are as follows: (1) Limited Government, a
reaction against the arbitrary, expansive powers of the king or state, and a protection of personal liberty; (2) rule of law, so that only on the basis of legal or constitutional grounds could the government act; (3) separation of powers, so that the three branches of government each would have a defined sphere of powers; (4) checks and balances, so that each branch could limit or control the powers of the other branches of government; and (5) under a Written Constitution, in a system of federalism, under a republican form of government, a constitutional system of government that fragmented power. The constitutional structure of the government disperses or fragments power; with no recognized, authoritative vital center, power is fluid and floating; no one branch can very easily or freely act without the consent (formal or tacit) of another branch. Power was designed to counteract power; similarly, ambition would check ambition. This structure was developed by men whose memories of tyranny and the arbitrary exercise of power by the king of England was fresh in their minds. It was a structure designed to force a consensus before the government could act. The structure of government established by the framers created not a single leadership institution but several— three separate, semiautonomous institutions that shared power. The framers designed a system of shared powers within a system that is purposefully biased against dramatic political change. The forces of the status quo were given multiple veto opportunities, while the forces of change were forced to go into battle with a decidedly weaker hand. Because there are so many potential veto points, the American system generally alternates between stasis and crisis, and between paralysis and spasm. On occasion, the branches are able to cooperate and work together to promote change, but it is especially difficult for the president and Congress—deliberately disconnected by the framers—to forge a union. The resulting paralysis has many parents, but the separation of powers is clearly the most determinative. Only in a crisis, when the system breaks down and the president is given, and takes, additional and often extra-constitutional powers, does the system move quickly. The framers purposely created uncertainty about who holds power in the United States. Their goal was to limit government and the exercise of power. They
feared the tyranny of a king. Thus, a system of baroque and complex cross-powers and checked powers created a constitutional mechanism that was designed to prohibit one branch from exercising too much power on its own. Opportunities to check power abound; opportunities to exercise power are limited. Remember, the framers were not interested in establishing an efficient system of government but in preventing government tyranny. It was a system designed to thwart tyranny, not promote efficiency, and by its own standards, it has worked quite well. But the natural lethargy built into the system now threatens to grind it to a halt. The failure of government to govern, to act, and to solve problems in any but crisis situations now seems so overpowering that the fundamental legitimacy of the system is increasingly threatened. This fluidity and fragmentation of power creates a situation in which “the government” is controlled not by any single person or place or party but by different people in different places (if it exists at all) at different times seeking different ends. This ambiguity may well prevent tyranny, but it is also a very inefficient model of organizing power. After weeks and weeks of struggles, compromises, bargains, power plays, and frustration, the long, hot summer in Philadelphia in 1787 ended with agreement on a new Constitution. This new Constitution was brief, and contained only seven articles. The first related to the legislature. In the Constitution, Congress is the first and most powerful branch of government. Nearly all the key governmental powers belong to the Congress: the power to tax, declare war, make all legislation, regulate commerce, and others. On paper at least, Congress seems the most powerful of the branches. Article 2 created a presidency that appears to be more than a clerk but less than a powerful national leader. Compared to the Congress, the president’s power is quite limited. Virtually all of the president’s powers are shared with Congress. Anyone attempting to ascertain the dominant branch by reading the Constitution would not select the president. Article 3 deals with the judicial branch. While nowhere in the Constitution does it say that the courts have the power to declare acts of Congress or the president to be unconstitutional (judicial review), it would not take long for the Court to grab that important power for itself (Marbury v. Madison, 1803).
Article 4 deals with relations between and among the states. Article 5 discusses methods for amending the Constitution. Article 6 deals with the supremacy of the Constitution and national laws over the states. And Article 7 spells out that the Constitution shall become the law of the land only when approved by nine states. That, very briefly, is the Constitution of the United States. It is brief, almost skeleton-like. It leaves many questions about the distribution of power unanswered. But with all its alleged faults, Americans love their Constitution. They will do practically anything for the Constitution—except, of course, read it. Where in all this is the Bill of Rights? That comes later, as the first 10 amendments to the Constitution, passed by the first Congress and adopted in 1791. A reading of the Constitution reveals that there is not a great deal of “democracy” in the original text. The president is selected by an electoral college. The Senate was selected by state legislatures, and the judiciary was appointed by the unelected president with the consent of the unelected Senate. Only the House of Representatives was elected by the people, and at that time the definition of “the people” was very narrow and generally meant white male property owners only. While neither the Declaration of Independence nor the Constitution contains the word democracy, to a degree, the new government was founded on certain democratic principles and a democratic ethos. If democracy means, in Abraham Lincoln’s apt phrase from the Gettysburg Address, government “of the people, by the people and for the people,” then the new government failed the democracy test. The new Constitution is “of” the people—just read the opening of the Constitution: “We the People of the United States . . . establish this Constitution of the United States of America.” “By the people”? As demonstrated, the new government was a republic, or representative form of government, but not a direct, pure, or participatory democracy with mass or universal involvement. In fact, the new Constitution excluded more people than it included in governing. “For the people”? It certainly wasn’t for women, or minorities, or for white males who did not own property. Recognizing this democratic shortfall, James Madison, in Federalist 10, writes that “Such democracies [as the
Greek and Roman] . . . have ever been found incompatible with personal security, or the rights of property; and have in general been as short in their lives, as they have been violent in their deaths.” These democracies could not govern, and did not last, warned Madison. And so, the framers chose a republican form of government with the voice of the people filtered through representatives. The U.S. Constitution, written in 1787 in Philadelphia, Pennsylvania, by delegates from 12 of the thirteen colonies, ratified by the states in 1788, and first put into operation in 1789, is the oldest ongoing written constitution in the world. It begins with the words, “We the people,” and derives its authority from the consent of the governed. It is important to note that the Constitution derives its authority from “the people” and not from the states. The Constitution combines several features that make the United States a “constitutional republic based on democratic principles.” While that may seem a mouthful, it means that the United States operates under a supreme law—the Constitution— and sets up a republic—a representative system, not a pure democracy—based on broadly democratic ideals—political egalitarianism and inalienable political rights. Thus, the United States is not a democracy but a constitutional republic, a government based on the consent of the governed as articulated through representatives who are bound by a constitution, thereby guaranteeing individual rights (found in the Bill of Rights, the first 10 amendments to the Constitution). The Constitution is not a self-executing nor is it a self-enforcing document. Ideally, the Congress writes the nation’s laws, the executive enforces those laws, and the United States Supreme Court interprets the law. But of course, there is a great deal of overlap in each of these areas. For example, while the Constitution says that “all legislative power” belongs to the Congress, it also gives to the president a limited veto. And while the Supreme Court claims to be the final word on what the constitution means (Chief Justice Charles Evans Hughes noted on May 3, 1907, that the Constitution is “what the judges say it is”), presidents often claim their own interpretative powers over what the Constitution really means (Thomas Jefferson, in a September 6, 1819, letter to Spencer Roane thought the Constitution was “a mere thing of
wax in the hands of the judiciary, which they may twist and shape into any form they please”). Constitutions mark the culmination of a long and often bloody battle to wrest power from the hands of a single ruler (usually a king) and to bind the government to a system of laws, established by the people or their elected representatives, that even the ruler must follow. It is an imperfect system. No constitution can anticipate every contingency, nor can a constitutional system long endure if the people do not support and defend it. President John Adams described the American Constitution as “if not the greatest exertion of human understanding, [then] the greatest single effort of national deliberation that the world has ever seen.” Adams was not alone in his admiration for the Constitution. British prime minister William Gladstone concluded in 1887 that the U.S. Constitution “was the most wonderful work ever struck off at a given time by the brain and purpose of man.” Further Reading Beard, Charles A. An Economic Interpretation of the Constitution of the United States. New York: Free Press, 1913; Madison, James, Alexander Hamilton, and John Jay. The Federalist Papers. New York: New American Library, 1961; Rossiter, Clinton. 1787: The Grand Convention. New York: Macmillan, 1966; Wood, Gordon S. The Creation of the American Republic. Chapel Hill: University of North Carolina Press, 1998. —Michael A. Genovese
constitutional amendments While the U.S. Constitution is the supreme law of the United States, it is not set in stone and unchangeable. And while rarely changed, it is open to amendment. The Bill of Rights is the first 10 amendments to the U.S. Constitution. The framers, knowing that their new constitution might need to be changed, wisely included a means to amend the document. While today we take for granted the existence of the Bill of Rights, it is important to remember that those rights were not part of the original Constitution. In fact, the absence of a listing of the rights of citizens proved to be one of the primary impediments in the ratification process and two of the most important
states, Virginia and New York, were especially insistent that these rights be spelled out. In an 11th-hour “deal,” the supporters of the Constitution agreed to add to the Constitution a list of the rights of citizens, but only after the new Constitution became the law of the land. In the first Congress, James Madison and others led the drive to add the 10 amendments onto the Constitution in what is now known as the Bill of Rights. So amending the Constitution came early and dealt with some of the most important elements of what today represents the very essence of what it is to be American. However, Americans love their Constitution and change it rarely. Including the Bill of Rights, there have been only 27 amendments to the Constitution. Of course, nearly 10,000 amendments have been introduced, but very few proposals are taken seriously or can receive the two-thirds support necessary in each chamber of Congress. For example, in 2006, President George W. Bush led an effort to have same-sex marriages banned via a constitutional amendment. This proposal was debated on the floor of the U.S. Senate but failed to get the two-thirds vote necessary to send the proposed amendment on to the states for consideration. Part of the reason for this is that there is a very high hurdle set up to prevent the frivolous amending of the Constitution. Another reason is that while many proposals seem worthwhile or popular at the time, attention often fizzles as the newly proposed amendment fades from public view. This is especially true where proposed amendments to alter or abolish the electoral college are concerned. After every disputed or controversial presidential election, new amendments are offered to have direct popular election of the president, or to otherwise change or eliminate the electoral college. They attract attention and popular support—for a time. But with time, the drive for change wanes, and after a few months, the impetus for reform dies out. In some respects, proposals to amend the Constitution have become political footballs to be tossed around by one interest group or political party or another. A “hot-button” issue is used to motivate and excite single-issue or special-interest voters, and a proposed amendment is floated out with no real hope for passage but is used as a way to rally supporters, attract
political donations, and build movements. In this sense, the amending process becomes the tool of narrow special interest politics. Lamentably, the strategy often works, and while it does not lead to a change in the Constitution, that was usually not its intended purpose in the first place. Whenever a special interest loses in the political process—be it over flag-burning, prayer in schools, same-sex marriage, changing the electoral college, or gender equality issues—a new constitutional amendment is introduced to attract those voters truly committed to the issue, and the amending process becomes hostage to narrow political or partisan interests. That virtually all of these proposals have no chance whatsoever to succeed is irrelevant to the proposing interest—they want to use the process to build their constituency or attract donations. And as such, the business of amending the constitution is big business—at least for the political causes the process has now been distorted to serve. The framers of the U.S. Constitution did not believe their creation to be flawless, nor did they believe it would never need changing. In fact, most of
the framers were well aware of the fact that the Constitution that emerged from the Philadelphia convention was the result of a series of bargains, deals, compromises, educated guesses, and even wishful thinking. They did not think their constitution should be written in stone for the ages, but should be open to amendments as the times and citizenry dictated. But neither did they believe that the Constitution should be changed for light or transient reasons. They opened the door for changing the Constitution but did not make it especially easy to alter the document. Even so, one suspects that they would be surprised at just how infrequently their original Constitution has been altered, and how enduring the structure and system of government they created in 1787 has been. In essence, the skeleton of government created by the original Constitution remains virtually intact more than 220 years later. There are two methods of amending the Constitution. The first, a convention called by Congress at the request of the legislatures of two-thirds of the states, has never been used. The other more common method is to have an amendment proposed in
Congress and passed by two-thirds of both houses of Congress. Congress has proposed 33 amendments but only 27 have been ratified. For a congressionally approved amendment to be ratified, three-fourths of the states must then support the amendment. Just how flexible or rigid should a constitution be? This question can be examined by comparing the constitutions of what are considered the two leading models of constitutional government in the world today: the United States and Great Britain. The Constitution of the United States is a written constitution. It is thus a fixed, and some might say a rigid, set of rules and procedures. It is difficult to change. It is a document for all seasons and all generations. This constitution both empowers and limits the government of the United States. By contrast, the British constitution is usually referred to as an unwritten constitution. While it is in reality written down, it is just not written down in any one particular place. It exists in laws, statutes, traditions, norms, and common practices. It is, in short, whatever the Parliament says it is. Thus, the meaning of the term parliamentary sovereignty, wherein the will of Parliament is per se the constitution. There is no higher law. In this sense the British Constitution is quite flexible. It can change whenever the Parliament passes a new law. The new law becomes part of the British Constitution. As perceived needs arise, the Parliament makes changes; these changes are thereby constitutional. Which model is better? The British model can adapt and change quite easily to meet new needs or unexpected demands. The United States model is difficult to change and does not adapt well or easily. But just how flexible does a nation want its constitution to be? Should a constitution express deep, enduring values and practices, not susceptible to the whims of the day, or the current fashion? Should a constitution be more fixed and enduring, rather than flexible and changeable? Should a constitution contain specific policy guidelines (such as a ban on same-sex marriages, or an authorization of prayer in public schools) or should it be restricted to structuring government and the powers and limits of government? Both the British and United States models have worked fairly well, but very differently. The British model does adapt and change. But then, so does the United States system. However, in the United States,
the changes take place without the constitution changing. There is, thus, the Constitution as written, and the Constitution as practiced and lived. Practices often change but the words of the constitution do not. Take the war powers for example. The wording regarding war powers (Article I, Section 8 of the Constitution says that only Congress can declare war) has not changed over the history of the republic. But practice has changed dramatically. In effect, there has been a sort of reversal in the roles played by the president and Congress on the war powers, with the president declaring and starting wars, and the Congress serving as a potential (though rather weak) post-action veto on the president. The demands of modernity have compelled a more central source, the president, to assume greater authority over war declaring and making. This has been accomplished by political practice, not constitutional amendment. Defenders of the Constitution argue that this is sleight-of-hand, and that if we have a Constitution, we ought to live by it. But defenders of the revised application of the static Constitution argue that in the modern era, greater flexibility is necessary. Thus, the United States adapts to new circumstances and changing conditions. The words of the constitution are fixed, but their meaning is not. Over time, constitutional understanding has changed with new perspectives, Supreme Court decisions, presidential and congressional practice, and developing and changing social and cultural norms. But the formal method of amending the Constitution remains a viable, if difficult, and infrequently successful alternative. Too rigid a constitution, and the system may collapse and break; too flexible a constitution and it may become devoid of meaning. The British and American alternatives each pose difficulties for governing, and yet each model seems to suit the nations affected. What is crystal clear is that there must be means for adapting constitutions to meet new demands, and both the United States and the United Kingdom have struck on their own methods of adapting and updating their constitutions to new circumstances and modern demands. Over the course of U.S. history, virtually thousands of proposed constitutional amendments have been floated about, from banning flag burning, to allowing prayer in public schools, to banning abor-
tions, to eliminating the electoral college. Nearly all these efforts (some admittedly more legitimate than others) have failed. It is not easy amending the Constitution, and most argue that it should not be easy. The Constitution represents the highest law of the land and as such should represent enduring qualities and sentiments. To change it to fit the fashion and mood of the day might well cheapen its impact and demean the respect so many Americans have for the Constitution and for the rule of law in the United States. Further Reading Hall, Kermit L., Harold M. Hyman, and Leon V. Sigal, eds. The Constitutional Convention as an Amending Device. Washington, D.C.: American Historical Association and American Political Science Association, 1981; Kyvig, David E. Explicit and Authentic Acts: Amending the U.S. Constitution, 1776–1995. Lawrence: University Press of Kansas, 1996; Levinson, Sanford, ed. Responding to Imperfection: The Theory and Practice of Constitutional Amendment. Princeton, N.J.: Princeton University Press, 1995. —Michael A. Genovese
Constitutional Convention of 1787 When independence from Great Britain was declared by the American colonies on July 4, 1776, the ideas that animated the Revolution were largely democratic and egalitarian. Thomas Paine’s influential pamphlet, Common Sense, first published in January 1776, and, later that year, Thomas Jefferson’s Declaration of Independence were rousing calls to unseat the British monarchy and replace it with a more democratic system of government. Of course, the roots of this revolution can be traced back in history to the ideas originating in the ancient democracy of Athens, the Roman republic, the political writings of John Locke, Charles de Montesquieu, Jean-Jacques Rousseau, and others, to the framers’ direct observance of the Iroquois Confederacy of northern New York State, and to other influences. Although the war was fought largely to replace monarchy with democracy, the revolutionary fervor that captured the imagination of the colonists proved hard to translate into practice once the Revolutionary War had been successfully won. The first effort was the Articles of Confedera-
tion, which proved unworkable. The Articles created a very weak central government and maintained a system of strong and only marginally connected state governments, and after a few years, this experiment in decentralization was almost unanimously deemed a failure. But with what should the new nation replace the Articles? In 1786, a convention to revise the Articles was held in Annapolis, but very few states sent delegates, and the meeting was soon abandoned. But as frustration over the Articles grew, another convention, this one planned for the city of Philadelphia in the summer of 1787, gathered increased support among the states. In fact, every state but one, Rhode Island, sent delegates for the “sole and express purpose” of revising the Articles. But when the delegates arrived in Philadelphia, they soon decided on two strategically important matters: first, they would meet in secret so as not to potentially inflame the passions or feed the rumors of the citizenry; and second, they decided to throw out the Articles entirely and start from scratch to write a wholly new Constitution for the nation. In effect, they decided to invent a new system of government. Fifty-five of the new nation’s most prominent citizens gathered in Philadelphia. But as important as who was there is the list of prominent citizens who were not there. Conspicuously absent were such democratic firebrands as Thomas Paine and Thomas Jefferson. The absence of these men caused alarm and suspicion among those committed to the pursuit of a more democratic state. While the men who attended the Constitutional Convention in Philadelphia may have been drawn from among the elite of American society, and while they were decidedly not committed to establishing a democracy in America, they were, for their time, quite bold and revolutionary. Their goal was to establish a constitutional republic. To today’s reader, this may seem a mild and uncontroversial goal, but for the late 18th century, this was a fairly revolutionary aspiration. The Philadelphia Constitutional Convention lasted from May 25 until September 17, 1787. Presided over by George Washington, the most respected man in the nation, the convention went through proposals and counterproposals, decisions and reconsiderations, bargains and compromises. A record of the proceedings of the convention comes down to us today
as a result of the copious notes taken by James Madison of the daily business of the convention. From these notes and other scattered writings, what emerges is a portrait of the struggle to give some power to the people, but not too much; create a system based on the rule of law, but also to be administered by men of ambition; to endow this new government with republican principles but also empower it to govern; to separate power between the branches, but compel them to work together to develop policies; to limit the scope of government while also creating a truly national government with power over the states. The inventors of the U.S. Constitution met in Philadelphia in the summer of 1787. Seventy-four delegates were chosen by the states, but only 55 attended the convention, held in the State House (now Independence Hall) in the room where, more than a decade earlier, many of the same men met to sign the Declaration of Independence. The problem they faced was that the Articles of Confederation created too weak a central government. Under the Articles, the federal government could not pay the war debt to foreign nations or to the U.S. citizens, could not regulate commerce nor create a stable national currency, could not levy taxes or develop a military (needed to protect national security, expand westward and, increasingly, to protect private property from threats of debtor revolts). There was a consensus that the states were too strong and independent, and that national government had to be strengthened, but beyond that there was little agreement. The Revolutionary War had been fought by the average citizen who in general was committed to the democratic and egalitarian principles found in Paine’s Common Sense and Jefferson’s Declaration of Independence. But after the Revolution, a “new” voice came to the forefront: the property class. Represented by Alexander Hamilton and others, they wanted a government to protect private property, develop an infrastructure to promote commerce, and, of course, protect their political and economic interests. Most of the men at the Philadelphia convention were from and represented the goals of the property class. Unlike the average citizen, their goal was not to create a democracy but to establish order. A conflict, perhaps inevitable, between the haves and have-nots became the cleav-
age that characterized this era, which threatened to undermine the effort at writing and ratifying a new constitution. But how, after a war inspired largely by democratic and egalitarian sentiments, could the delegates to the Constitutional Convention establish a government that betrayed the principles of the Revolution? Were they to establish a new monarchy in America, they knew that a new revolution would surely follow. After all, waiting outside the convention were thousands of poor, armed, combat experienced democrats ready and willing to once again put their lives on the line for the cause in which they so passionately believed. The framers faced a difficult problem: how to establish order, protect property, and promote commerce, while giving “the people” enough say in the new government to make it acceptable to the democratic masses? Most of those attending the Constitutional Convention feared democracy (some called it mobocracy). Delegate Elbridge Gerry called it “the worst of all political evils” and said that “the evils that we experience flow from the excess of democracy.” Roger Sherman warned that “The people . . . should have as little to do as may be about the government.” William Livingston argued that “The people have ever been and ever will be unfit to retain the exercise of power in their own hands.” John Dickinson warned of what might happen if the poor had real political clout when he said that property qualifications should be erected for voting because they are “a necessary defense against the dangerous influence of those multitudes without property and without principle, with which our country like all others, will in time abound.” And these quotes are fairly representative of the majority of the delegates at the convention. But others, even if they were suspicious of democracy, recognized the political reality they were facing. Delegate George Mason warned the convention that “Not withstanding the oppression and injustice experienced among us from democracy, the genius of the people is in favor of it, and the genius of the people must be consulted.” And James Madison agreed: “It seems indispensable that the mass of citizens should not be without a voice in making the laws which they are to obey, and in choosing the magistrates who are to administer them.” The myth is that the framers were all committed to democracy and liberty and
that they established a mass democracy “for the people.” But the reality is not quite so rosy. In the end, the framers were pragmatists with class interests they wished to pursue but also with enough political sense to know that compromise would be required. At this time, three schools of thought began to emerge in the nation. For the sake of simplicity, let us understand these differing views by looking at the chief representative figures of the time. On what we might today call the “left,” promoting more democracy, was Thomas Jefferson. Embracing a generally optimistic view of humankind, Jefferson promoted a small agrarian democracy that was close to and responsive to the people. Jefferson’s goal was democracy. On the political “right,” representing the property class, was Alexander Hamilton. With a more jaundiced view of humankind and its capacity for self-government, Hamilton wanted a government that could impose order, one modeled on the British system. He sought a government strong enough to establish order out of chaos. Hamilton’s goal was to establish an oligarchy. Straddling the middle was James Madison. For Madison, who became the chief architect of the Constitution, a government with too much power was a dangerous government; and yet a government with too little power was, as the Articles of Confederation demonstrated, also a dangerous government. Seeing himself as a student of history, he believed that human nature drove men—at this time, only men were allowed to enter the public arena—to pursue self-interest, and therefore a system of government designed to have “ambition checked by ambition,” set within rather strict limits, was the only hope of establishing a stable government that did not endanger liberty—or property. Realizing that “enlightened statesmen” would not always guide the nation, Madison promoted a check-and-balance system of separate but overlapping and shared powers for the new government. Madison’s concern to have a government with controlled and limited powers is seen throughout his writings, but nowhere is it more visible than in Federalist 51, where he wrote: “You must first enable the government to control the governed; and in the next place, oblige it to control itself.” Madison, like most of the founders, feared government in the hands of the people, but he likewise feared too much power in the hands of any one man. Therefore, the Madisonian model called both for protections against mass democracy and limits on governmental
power. This is not to say that the founders wanted a weak and ineffective government; had that been their goal, they could surely have kept the Articles of Confederation. But they did want a government that could not too easily act. The theory of government that the Madisonian design necessitates is one of consensus, coalition, and cooperation on the one hand, and checks, vetoes, and balances on the other. In this new government, rough balance was sought between governmental power and individual liberty. By separating powers, forcing institutions to share powers, and limiting powers through the rule of law, the framers hoped both to allow power (ambition) to counter power and to decrease the opportunity for powers to be abused. Since the people could not be trusted to govern, and since as Madison wrote in Federalist 10, “Enlightened statesmen will not always be at the helm,” power had to be fragmented and dispersed. Thus, Madison’s goal was to create a constitutional republic. After a revolution against a government that was widely perceived as too strong, the new nation labored under a government that was seen as too weak. And after a revolution to promote democracy, the delegates at the convention were decidedly not interested in establishing a pure or direct democracy. The fear of mobocracy existed alongside an equally strong fear of monarchy. How then to form a new government amid such conflicting and contradictory goals? While the delegates were pulled in many different directions, there was agreement on several things: Clearly, there needed to be a stronger central government, but not one so strong as to threaten liberty; a government that was guided not by the whim of one king or strong central authority but the rule of law (a constitution), that protected states’ rights and separated the powers of the federal government so as not to lead to tyranny; where the people had some say in the government but where municipal governments and minority rights were protected. As the convention began and the framers started their discussions (held in strict secrecy), two competing plans emerged: the Virginia Plan (favored by the large states) and the New Jersey Plan (favored by the smaller states). The Virginia Plan called for the creation of a stronger central government with a single executive, and a judiciary (both to be appointed by the legislature), along with a two-house (bicameral) legislature,
with one house elected by the people, and the other by the state legislature. This legislature would have the power to override state laws. The number of representatives would be determined by the amount of taxes paid by each state. Under this plan, the three largest states (Virginia, Pennsylvania, and Massachusetts) would comprise a majority in the legislature and power would in effect be in their hands. The New Jersey Plan was an effort by the smaller states to defend their interests and power, and it called for a plural executive (so as to prevent one man from gaining too much power), and a strong single-house (unicameral) Congress in which each state got one vote. All members of Congress would be chosen by the state legislatures. From this point, the large and small states were engaged in a pitched battle. Each side had much to gain and much to lose. Finally, what became known as the "Great Compromise" was reached. The general structure of the Virginia Plan was maintained with a strong central government with the power to regulate commerce, tax, raise a military, conduct foreign policy, and set government policy. But the "compromise" part involved the makeup of the legislature. There would be, as the Virginia Plan called for, a two-house legislature, but the size of the House of Representatives was to be based on population (thus pleasing the large states) while the Senate would have two representatives per state regardless of population (thus pleasing the smaller states). Members of the House were to be elected by eligible white males (with standards established by the states, they were expected to be property owners) in the population; the Senate would be selected by the state legislatures. This compromise balanced the interests of large and small states, and as the legislature was to be the key policymaking institution of the new government, all sides came out with what they needed if not what they wanted. With the general structure of the new government settled, the delegates began the difficult task of assigning specific powers and responsibilities to each branch of the new government. For a nation fresh off a revolution against what they perceived as a repressive state with a strong central authority, reconstituting a central government with significant powers would be no easy task. And while self-interest played a role in the transition from the weak Articles of Confederation to the establishment of a stronger federal
authority, there were other influences that shaped the thinking of the framers. America was invented in the midst of what became known as the Enlightenment or the Age of Reason. The framers embraced a view suggesting that men (and at this time they meant only males) were capable of exercising “reason.” Such a view allowed them to take steps toward giving power to citizens, or democracy. The political writings of John Locke, the British political philosopher who promoted a form of popular sovereignty, and the works of the French philosopher Montesquieu, who pioneered the development of a separation of powers model wherein tyranny might be thwarted by separating the powers of government into three distinct yet related governing functions—executive, legislative, and judicial— influenced the framers. James Madison, known as the father of the Constitution, was influenced also by the writings of the pioneering physicist Sir Isaac Newton, whose revolutionary views transformed physics and were applied by Madison to the world of government, allowing the framers to develop what became known as a “new science of politics” based on the goal of reaching balance and equilibrium. Although less understood, the lessons the framers drew from the Native Americans also had an impact on the writing of the Constitution. While the framers looked across the Atlantic to Europe and saw hereditary monarchies, they looked to the North, and could see a sophisticated, democratic, and egalitarian government in action: the Iroquois Confederation. This confederation, made up of six tribes/nations, organized along lines similar to a separation-of-powers system, was the model for Benjamin Franklin’s 1754 Albany Plan of Union, and was much studied by several of the framers. On July 27, 1787, the drafting committee of the Constitutional Convention met at the Indian Queen Tavern in Philadelphia to agree on a “final” draft of the new Constitution to submit to the entire convention for their approval. The committee’s chair, John Rutledge of South Carolina, opened the meeting by reading aloud an English translation of the Iroquois’s tale of the founding of their confederacy. Rutledge’s purpose was to highlight the importance for the new charter they were to adopt of a concept deeply embedded in the tradition of the Iroquois Confederacy: “We” the people, from whence all power comes.
While this concept also had European roots, nowhere in the Old World was it being practiced. The Native American neighbors of the Constitution's founders, however, had for decades been living under just such a constitution that brought this concept to life, and one that had an impact on the delegates who met in Philadelphia during that summer of 1787. In the end, the framers offered to the nation a new Constitution, one that, after heated battle, was ratified by the states, and became the supreme law of the land. Today, the delegates to the Constitutional Convention are considered iconic figures in the American pantheon. Further Reading Farrand, Max, ed. The Records of the Federal Convention of 1787. New Haven, Conn.: Yale University Press, 1966; Madison, James. Notes of Debates in the Federal Convention of 1787. New York: W.W. Norton, 1966; Rossiter, Clinton. 1787: The Grand Convention. New York: Macmillan, 1966. —Michael A. Genovese
Continental Congress The Continental Congress is the name of consecutive representative bodies in the 1770s and 1780s that were instrumental in articulating colonial grievances with Great Britain, attaining American independence, and forming the United States of America. The impetus for the Continental Congress grew out of American dissatisfaction with British colonial policies. British subjects in America successfully resisted the 1765 Stamp Act, and they protested the 1773 Tea Act with the Boston Tea Party. From March to May 1774, the British parliament enacted several measures that sought to more tightly control the unruly colonies. Colonists called these laws the "Intolerable" or "Coercive" Acts, and they included the Quartering Act (which expanded a 1765 law requiring colonists to house British troops), the Boston Port Bill (which closed the port of Boston until colonists had paid for damages incurred during the Boston Tea Party), the Administration of Justice Act (which allowed British officials accused of crimes in the colonies to be tried in Britain rather than in colonial courts), the Massachusetts Government Act (which revoked the Massachusetts charter and placed the colony under direct British control), and the Quebec Act (which cut off western
lands that colonists desired for expansion). Antiloyalist groups in each of the colonies complained about these and other British policies and pushed for some sort of coordinated action by all thirteen colonies. Benjamin Franklin had suggested a meeting for this purpose in 1773, and by 1774 the colonies agreed to a joint meeting as the Continental Congress. The first Continental Congress met in Carpenters' Hall in Philadelphia from September 5 until October 26, 1774. It was the first representative body to formally discuss and articulate the collective concerns of the nascent American people. There were 55 delegates, coming from each of the thirteen colonies except Georgia, which was the newest and farthest colony and was disinclined to upset the British because it feared it would need assistance with conflicts with the Creek Indians. Most of the delegates at the Congress were selected by assemblies in their home colonies. Attendees included such political luminaries as John Adams, Samuel Adams, Roger Sherman, John Jay, Patrick Henry, and George Washington. Peyton Randolph of Virginia was elected as the president of the Continental Congress. The majority of delegates at the Congress wanted to preserve their rights as Englishmen and did not at first seek independence. The delegates considered but did not adopt loyalist Joseph Galloway's plan of union, which called for the formation of an American legislature to work with Britain's parliament. Paul Revere delivered to the Congress a copy of the Suffolk Resolves, which were a set of local responses to British policies in Massachusetts, and the Congress voted to endorse them in September. This entailed boycotting British goods as a form of protest, and the Congress voted to create a Continental Association with local chapters called Committees of Safety to ensure that the boycott was honored. In October, the Congress unanimously endorsed and sent to King George III a "Declaration of Rights and Grievances" that protested British policies. The delegates wrote the declaration in their capacity "as Englishmen" to their "fellow subjects in Great Britain" and appealed to the English Constitution and to longstanding legal arrangements. Thus, it sought to ensure that Americans enjoyed rights to which they were entitled under current political arrangements, as colonial subjects. Although the declaration did not threaten a break with Britain, some delegates began to openly advocate independence.
Title of a pamphlet summarizing the proceedings of the First Continental Congress (Library of Congress)
Shortly after the Congress ended, every colony’s legislature voted to endorse its actions. The first Congress had called for a second meeting to be held in May 1775, in the event that their grievances were not adequately addressed. In April 1775, hostilities broke out between militiamen and British troops in Lexington and Concord, British troops laid siege to Boston, and a rough American
military force began to gather outside the city. With circumstances more dire than before the first Congress, the colonies decided to assemble again. The delegates returned to Philadelphia and the Second Continental Congress began on May 10, 1775, at the Pennsylvania State House, now called Independence Hall. While the first Congress lasted only two months, the second Congress lasted six years. All thirteen colonies were represented in the second Congress, though the Georgia delegation arrived several months late. As with the first Congress, many of the delegates at the second Congress were selected by colonial assemblies, but others were selected by ad hoc provincial congresses that had replaced legislatures dissolved by royal governors. Most of the delegates from the first Congress also attended the second, and new delegates included Benjamin Franklin, Thomas Jefferson, and John Hancock, who was elected its president. When the second Congress started, it was not a foregone conclusion that the colonies would seek independence. Many Americans still hoped for reconciliation with Britain, despite the fact that the Revolutionary War was ongoing. Delegates such as John Adams and his cousin Samuel Adams and Richard Henry Lee pushed for independence, and their views gradually prevailed over more conservative delegates like John Dickinson. Much of the Congress’s attention was devoted to managing the war with Britain, though it prosecuted the war with powers that were severely limited and that had not been formally granted. In June 1775, the Congress named George Washington commander in chief of American forces. It later appointed military officers to assist Washington, funded the Revolutionary War with its own currency, authorized a preemptive invasion of Canada, and sent emissaries abroad to secure support from France and other European powers. As the war wore on, the Congress was forced on several occasions to temporarily move from Philadelphia to safer areas. In July 1775, the second Congress passed two statements that again articulated colonial dissatisfaction with Britain but that stopped short of declaring independence. Some delegates still hoped for reconciliation with Britain, while others openly pushed for a break with the mother country. In early 1776, Thomas Paine’s popular pamphlet Common Sense
helped to galvanize public sentiment for independence, and the pro-separatist voices in Congress became dominant. In May 1776, Congress directed the colonies to govern their own affairs and to ignore British authorities. It then charged a committee to draft a document in which the thirteen colonies would declare their independence from Great Britain, and the committee turned the task over to Thomas Jefferson. Jefferson’s draft was revised, and Congress adopted and publicly released the Declaration of Independence on July 4, 1776. The document is famous as a principled assertion of basic political rights, but it also contains a lengthy list of colonial complaints about their erstwhile British rulers. The second Continental Congress continued to conduct the Revolutionary War after the Declaration of Independence, but its ability to do so was hampered by the facts that it could not create taxes to fund the war and that it could not pass binding legislation to compel the states to assist the war effort. For these reasons, and also to gain more legitimacy for the government of the newly proclaimed country, the Congress brought forth the Articles of Confederation. By most accounts, the Articles did not so much set out a principled vision of a new country as they merely legitimated what the Congress was already doing. Congress formally adopted the Articles in late 1777, but they were not ratified until the last of the thirteen colonies, Maryland, formally endorsed them on March 1, 1781. (Seven months later, the Revolutionary War effectively came to a close with the Battle of Yorktown.) When the Articles were ratified, the second Continental Congress was instantly transformed en masse into the Congress of the Confederation, or “The United States in Congress Assembled.” In this way, the second Continental Congress became the first government of the new United States, even though it had not originally assembled for that purpose and as such had limited constitutional legitimacy. It played this role for eight years, governing the new country with each state having one vote. The Congress of Confederation had more legitimacy and power than the first two congresses, but it still had difficulty governing. When the U.S. Constitution became operative on March 4, 1789, the Congress was replaced by the U.S. Congress that continues to govern today.
Thus, a body that was initially formed to discuss colonial grievances ended up serving as the government of a new country for some 13 years. From the start of the first Continental Congress to the advent of the U.S. Congress, the several distinct but closely related bodies called the Congress evolved in several respects. Politically, they gradually shifted from asserting rights as English subjects to seeking and defending independence. Governmentally, they made a transition from merely collectively voicing shared concerns to fully managing their own affairs. And democratically and constitutionally, they were transformed from an ad hoc gathering of separate colonies into a legitimate, united government. The two Continental Congresses and the Congress of the Confederation that followed may now seem like ancient history, but they were instrumental in articulating colonial America’s grievances, in gaining America’s independence, and in establishing and administering the government of the new country. Further Reading Henderson, H. James. Party Politics in the Continental Congress. Lanham, Md.: University Press of America, 2002; Montross, Lynn. The Reluctant Rebels: The Story of the Continental Congress, 1774–1789. New York: Harper, 1950; Rakove, Jack. The Beginnings of National Politics: An Interpretative History of the Continental Congress. Baltimore, Md.: Johns Hopkins University Press, 1982. —Graham G. Dodds
Declaration of Independence Thomas Jefferson's Declaration of Independence is one of the most read, most quoted, and most admired works ever penned by an American author, and perhaps the most eloquent and powerful. Cited for its insight into the human condition, quoted for its grace and beauty of language, admired for its spirit and dignity, and the source of inspiration for struggling peoples everywhere, this declaration of revolution and freedom ranks among the most important documents in American and human history. As the American colonists gave up on their attempt to fully join with Great Britain and as they faced indignity and injustice from England's King George III, the stark choice facing the colonists
The Declaration of Independence
was: freedom (at great risk) or oppression (at the hands of the British). The stakes were high, the risks great, and the odds very much against them. How could this set of colonies take on the most powerful military in the world? As injustice was heaped upon injustice, the weight of argument in favor of revolution became more obvious as the grievances became more ominous. In January of 1776, Thomas Paine published his influential broadside, Common Sense, nudging the already angry colonists closer to open revolt. Paine’s words, while not wholly original, captured the revolutionary spirit and the revolutionary theses on which the nascent revolt would hinge. By the summer of 1776, declaring independence was all but inevitable. The Second Continental Congress met in 1776 and Virginia’s Richard Henry Lee made the motion (known as the “Lee Resolution”) that “these United Colonies are, and of right ought to be, free and independent states, that they are absolved from all allegiances to the British Crown, and that all political connections between them and the State of Great Britain is, and ought to be, totally dissolved.” It was a bold and dangerous step, but one that held the hope of freedom. On June 11, 1776, a committee of five was appointed to draw up a statement declaring independence from Great Britain. The committee consisted of John Adams of Massachusetts, Benjamin Franklin of Pennsylvania, Thomas Jefferson of Virginia, Roger Sherman of Connecticut, and Robert R. Livingston of New York. Almost immediately, the committee turned to the 32-year-old Virginian, Thomas Jefferson, to write the initial draft. That this body of more experienced and august men would turn to the youthful Jefferson to write the initial draft is a tribute to the high regard in which this junior member of the committee was held by his more senior colleagues. It turned out to be a wise move. The Declaration of Independence is made up of three parts: the preamble, a list of grievances, and the conclusion. By far the most important and most cited section is the preamble. It begins: “When in the Course of human events, it becomes necessary for one people to dissolve the political bands which have connected them with another, and to assume among the powers of the earth, the separate and equal station to which the Laws of Nature and of Nature’s God
entitle them, a decent respect to the opinions of mankind requires that they should declare the causes which impel them to the separation.” Thus Jefferson recognizes that, as men of reason, they are impelled to explain to mankind the reasons that compel them to revolution. It continues: “We hold these truths (italics added) to be self-evident, that all men are created equal.” There are truths that are self-evident to all if only reason will be our guide, and these self-evident truths reveal to us that all men are created equal. “That they are endowed by their Creator with certain unalienable rights, that among these are Life, Liberty and the pursuit of Happiness.” These God-given rights are unalienable, and they include the right to “life,” to “liberty,” and to the “pursuit of happiness.” And why form a government? Jefferson explains, “ . . . to secure these rights, Governments are instituted among Men, deriving their just powers from the consent of the governed.” Thus, the role of government is to secure rights, and the government is empowered and legitimized only with the consent of the governed. This turns upside down the basis of the government against which they are revolting. The people do not serve the government, the government serves the people, and only when the government has the consent of the people could it be considered legitimate. By turning upside down the relationship of citizen to state, Jefferson adds a right to revolution: “. . . whenever any Form of Government becomes destructive of these ends, it is the Right of the People to alter or to abolish it, and to institute a new Government, laying its foundation on such principles and organizing its powers in such form, as to them shall seem most likely to effect their Safety and Happiness.” Of course, Jefferson continues, “Prudence” dictates that people will not attempt to abolish a government for light or occasional usurpations of power. “But when a long train of abuses and usurpations, pursuing invariably the same Object evinces a design to reduce them under absolute Despotism, it is their right, it is their duty, to throw off such a Government, and to provide new Guards for their future security.” Jefferson then asserts that such is the state of the relationship of the colonies to the Crown, that this relationship has been violated, and the Crown is guilty of a “long train of abuses” leading necessarily
to revolution by the colonists. As the Declaration continues, “The history of the present King of Great Britain is a history of repeated injuries and usurpations, all having in direct object the establishment of an absolute Tyranny over these States.” Then, Jefferson concludes his preamble, “To prove this, let Facts be submitted to a candid world.” The brunt of the remainder of the Declaration is a laundry list of charges and accusations leveled against the British Crown. This list of grievances is long and it is designed to rationally make the case that indeed, the colonies have a right to revolution. The thrust of this list of grievances is a direct assault upon the person and the power of the king—the Crown did this, the king did that, and the list goes on and on. It was antiexecutive in nature and wide in scope. All revolutions need the face of an enemy at whom to direct anger and resentment. Jefferson (and the colonists) chose the most likely bogeyman, the king. In a relentless assault on the king, Jefferson made George III the source and focus of all that was evil and unjust. In dehumanizing King George III, Jefferson gave a face and a name to the enemy. Of course, the king was a convenient and easy, if not always the appropriate, target. Blaming the king distorted the political reality on the ground and operating in England, where the Parliament was as much to blame as the king for the woes of the colonies. But the king made such an effective target and such a tantalizing enemy that the fuller truth got lost in the political shuffle. Thus, blaming the king and making him the primary target of hatred served the needs of the Revolution, if not the dictates of historical accuracy. As an interesting side note, after the Revolution, when the new nation needed to establish a new system of government, so antiexecutive was the feeling among the men who designed it (partly as a result of the animosity focused on the British king) that the first system of government created, under the Articles of Confederation, did not even have an executive. And when this proved an unworkable system, they created, as part of the new U.S. Constitution, a president but were careful not to give this new office too much power. The Declaration of Independence ends with a conclusion and formal declaration of independence, approved and signed by virtually all the members of the Second Continental Congress. Interestingly, in Jefferson’s original version, he dissolves the slave
trade. But when his version reached the full Congress, that provision was taken out as it would have impelled many of the southern states to withdraw support for the Declaration. This would have brought an end to the nascent Revolution. At the signing, John Hancock took the liberty of signing his name in unusually large letters, and he is alleged to have said that he did so in order for King George III to be able to read his name without putting on his glasses. It is then said that he warned the delegates: “We must be unanimous; there must be no pulling different ways; we must all hang together.” To which Benjamin Franklin replied, “Yes, we must, indeed, all hang together, or most assuredly we shall all hang separately.” This exchange may be apocryphal, but it certainly captures the mood of the moment. The Declaration of Independence, written by Thomas Jefferson, and amended by the Continental Congress, remains one of the most lasting and important documents ever produced in America. That it is as relevant today as it was more than 200 years ago speaks to the “universal truths” that Jefferson wrote about; universal truths that have animated United States domestic and foreign policy and that even today inspire and motivate people around the world. They are a high standard to live up to, but Jefferson was convinced that the American people were capable of achieving a republic of reason for ourselves and for the ages. Further Reading Jayne, Allen. Jefferson’s Declaration of Independence. Louisville: University Press of Kentucky, 1998; Maier, Pauline. American Scripture: Making the Declaration of Independence. New York: Knopf, 1997; Padover, Saul K. Jefferson. New York: Signet, 1993; Wills, Garry. Inventing America: Jefferson’s Declaration of Independence. Boston: Houghton Mifflin, 2002. —Michael A. Genovese
democracy In the modern age, the word democracy has become so universally honorific a term that it has, for the most part, become almost devoid of meaning. If virtually all governments claim to be democratic or based on democratic principles, then what does democracy mean and what can it mean?
In becoming a universally admired concept, the term democracy has come to be used to describe all manner of different forms of government. Novelist George Orwell, in his famous essay “Politics and the English Language” (1946), wrote that “In the case of a word like democracy not only is there no agreed definition but the attempt to make one is resisted from all sides. . . . The defenders of any kind of regime claim that it is a democracy, and fear that they might have to stop using the word if it were tied down to any one meaning.” Thus, democracy has come to mean all manner of things to all sorts of people for all kinds of political ends. It was not always thus. After the collapse of the ancient Athenian democracy, the idea of democracy went into disrepute, with classical thinkers from Plato up to the 17th century arguing that democracy was a dangerous and unworkable system. It took a great deal to rehabilitate the concept of democracy and make it once again respectable. Today it is the “universal” system, embraced by most and claimed by many. The word democracy comes from the Greek, demokratia, and means, loosely, rule by the people. President Abraham Lincoln’s famous definition of democracy found in his Gettysburg Address, “government of the people, by the people, and for the people,” captures in a few words the essence of democratic sensibility. In general, democracy means a system of government in which the people rule either directly or through representatives (representative democracy or a republic). Democracies are designed to serve the needs of the people, not rule over them. This point was beautifully made by Thomas Paine in his revolutionary pamphlet, Common Sense, when he argues that the old monarchical system should be turned upside down and the government should serve the people, not the people the government. Democracies are usually characterized as having free, open, and competitive elections in which virtually all adult members of the community decide, at a minimum, which shall serve in governing positions. Procedural democracy usually refers to a minimal requirement of the citizens choosing leaders. Such a system gives the voters the opportunity to select leaders, but asks little more of them than the occasional vote every few years. Most students of this form of
democracy believe it does not merit the name “democracy” as it is so minimal a definition as to be without democratic meaning. Participatory democracy sees a much more expansive role for citizens in assuming responsibility for governing themselves. Here more is required than the occasional vote. In a participatory democracy, citizens have rights, but they also have responsibilities and opportunities. They are expected to more fully participate in the life and decision making of the community, and many find this an onerous burden. That is why so few systems embrace this expansive form of democracy. Participatory democracy takes time and it takes work, but its rewards are many. In the end, a fully participatory democracy is designed to create a democratic citizen, fully formed, fully aware, fully committed, and a “whole and robust person.” Liberal democracy, so commonly practiced in the Western industrialized world, implies voting for leaders as well as individual rights and liberties for citizens. Most of the liberal democracies of the West are constitutional democracies that include some sort of list of citizen rights such as the Bill of Rights in the United States. But most liberal democracies have a minimalist view of the role and responsibilities of citizens, and few allow for the more robust of participatory elements of democracy practiced in such formats as the New England town meeting context. The pedigree of democracy has many fathers. A form of democracy was practiced in the republics of Maha Janapadas in India in the sixth century b.c. Athens in the fifth century b.c. also had a form of democracy long before most even thought of allowing the people any measurable power. In the Americas, the Iroquois Confederacy had been practicing a form of democracy for many years by the time the European colonists came to the shores of the new nation. In northern Europe, a form of representative democracy had been practiced long before the United States was invented. If democracy has many fathers, it also has many forms. One size does not necessarily fit all. The inventors of the United States were by and large suspicious of democracy and instead chose a constitutional republic as their model for government. As John Adams wrote in a letter to John Taylor
dated April 15, 1814, “. . . democracy never lasts long. It soon wastes, exhausts, and murders itself. There never was a democracy yet that did not commit suicide.” James Madison, the father of the U.S. Constitution, wrote in Federalist 10, condemning pure democracies, that “. . . such democracies have ever been spectacles of turbulence and contention; have ever been found incompatible with personal security or the rights of property; and have in general been as short in their lives as they have been violent in their deaths.” Many of the framers, including Alexander Hamilton, saw democracy as dangerous and referred to it at times as mobocracy. They feared the people could easily become an impassioned mob, manipulated by fears and prejudices. Thus, they were careful to pay lip service to the icon that was embedded in the democratic elements of the revolutionary spirit, while institutionalizing a constitutional republic with checks and balances, a written constitution, the rule of law, as well as a system of minority rights. A democracy may take many forms. There are constitutional democracies with rights and systems of law. Thomas Jefferson had this type of system in mind when he wrote in 1798: “In questions of power . . . let no more be heard of confidence in man, but bind him down from mischief by the chains of the Constitution.” There is direct democracy, wherein decisions are made directly by the people, such as in a town meeting, or via referendum elections. There is participatory democracy where the direct involvement of individuals in the day-to-day life of the government calls upon citizens to actively become involved in a variety of governmental activities. And there is representative democracy, where the voters elect representatives to speak and vote on their behalf. The United States is a constitutional republic but has many of the attributes of a constitutional democracy. In 2002, the George W. Bush administration began a controversial effort to “bring democracy to the Middle East.” Part of this effort was the war in Iraq. But can democracy be imposed from without, and at the point of a gun? Clearly the American form of a constitutional republic came about in the aftermath of a bloody revolution, but were there antecedents that preceded this revolution that made a
democratic or constitutional republic possible? And are there preconditions that help make democracy last in one country but fail in another? Must there be a viable middle class, a commitment to rights, freedom, a willingness to compromise, and liberties and the rule of law for democracy to survive and flourish? Of those who study democracy, most believe that in order for a democracy to “take,” certain prerequisites are necessary, among them a public spiritedness, the acceptance of the rule of law, a large middle class, a strong civil society, stated rights and responsibilities, open and free elections, procedural guarantees, a willingness to compromise and bargain as well as share power, a willingness to accept outcomes that are politically viable or legitimate. Some also see a link between the free enterprise system (capitalism) and democracy, but such links are not necessary, as many of the preindustrial democracies demonstrated. There are two primary models of organizing democracies in the modern world, the Westminster Model (the British form of parliamentary democracy) and the United States Model (the separation of powers system). Most of the newly emergent democracies, when given the opportunity to choose which form of democracy they will have for their country, choose some variant of the British model. Why is this? Because few separation of powers systems have been stable over time; as a matter of fact, most have failed. In this, the United States is the exception. Variations of the parliamentary model are chosen because they seem to be more responsible, representative, powerful and accountable. They simply seem to work better than separation of power systems. This may be partly the result of the fusion of executive and legislative power that is characteristic of the British model. In fusing power, the government seems better able to act, decide, and lead. The separation of power model so often leads to gridlock and deadlock and proves unworkable in most nations. Thus, when given a chance to select a model of democracy for a new government, virtually all nations select some form of the parliamentary model of political democracy. Democracies can be messy, complicated, slow, and frustrating. Dictatorships are so much more efficient. But Winston Churchill, in a speech before the British House of Commons on November 11, 1947,
captured the essence of it when he said that "no one pretends that democracy is perfect or all-wise. Indeed, it has been said that democracy is the worst form of government except all those other forms that have been tried from time to time." Democracy remains a powerful symbol and attractive ideal. Revolutions are still fought to bring democracy to countries across the globe. It is unlikely that the attractiveness of democracy will fade any time soon. It is also unlikely that nations will embrace the more participatory forms of democracy available to them. Further Reading Carothers, Thomas. Aiding Democracy Abroad: The Learning Curve. Washington, D.C.: Carnegie Endowment for International Peace, 1999; Dahl, Robert A. How Democratic Is the American Constitution? New Haven, Conn.: Yale University Press, 2002; Dahl, Robert A. On Democracy. New Haven, Conn.: Yale University Press, 2000; Held, David. Models of Democracy. Stanford, Calif.: Stanford University Press, 1987; Woodruff, Paul. First Democracy: The Challenge of an Ancient Idea. New York: Oxford University Press, 2005. —Michael A. Genovese
direct (participatory) democracy Direct democracy, also known as pure democracy, and participatory democracy, its more thoroughgoing version, refer to a democratic political regime in which the people are empowered to exercise political power directly through voting on political matters or, in the case of participatory democracy, on issues with public implications whether they be political, economic, or otherwise. In both cases, political decision making is in the hands of the people, the value of public discussion is stressed, and the power of the people and their sovereignty is maintained and protected through their own political activities. As in some ancient Greek city-states and New England town meetings, the people vote on political matters rather than vote for representatives who then vote on such matters. The ideals of self-government and political equality are captured here, because there is no layer of specially assigned political agents who mediate between the people and the state. Direct democracy is justified on the basis of its leading to better political outcomes because of the
people’s unequalled knowledge of their own interests, and to better citizens because of the salutary effects of political participation, which include educating citizens about civic participation, enlarging their perspectives, socializing them in the art of self-rule in the company of their fellow citizens, and increasing their devotion to collective action on behalf of their community. According to a principal theorist of direct democracy, the Enlightenment thinker Jean-Jacques Rousseau, only a direct democracy is a true democracy, because to delegate the people’s power is to alienate it and risk losing control over one’s representative. He famously quipped that the British, with their parliamentary system, are a democracy whenever they assemble to vote on representatives, but on no other occasion. Indeed, many Americans participate in politics only once in a while when they choose their political leaders, who will make governmental decisions for them, though they may attend to politics more frequently owing to television and other mass media coverage, if only as spectators. If it is true that Americans are too absorbed in their own private interests to sustain meaningful participation in democratic politics, then a direct democracy would be an inappropriate form of rule and would fail to ascertain the common good. In a participatory democracy, reflecting a more thoroughgoing democratic society, the people vote not only on political matters, but also on other matters concerning the public, such as economic issues that cross the line dividing public and private. Here a parallel is attempted between the equality of political rights possessed, and those economic rights asserted to belong to each citizen as well. The bonds of the political community are strengthened through the greater participation of citizens across a variety of public spaces, better manifesting the interconnection of public issues and enhancing the political socialization and expertise of the people. Several European countries practice a form of participatory democracy, workplace democracy, and also conduct national and local economic planning through the use of councils composed of national and local public officials, corporate representatives, agents representing the workers, and others who stake a claim in the outcome of business decisions, such as environmental groups.
While there are differences between the two forms of democracy, in both cases a majority of citizens needs to be convinced before legislation or a policy decision is passed or implemented. These forms of democracy may best express the core meaning of democracy as rule of, by, and for the people, in Abraham Lincoln’s phrase, because there is no separation between the ruler and the ruled in terms of making law or public policy, and the full value of the widest possible public deliberation can be attained and enacted. Each citizen is present and free to participate directly in the political decision-making process, hopefully with as much motivation and sense of public responsibility as had citizens in the classical Athenian democracy. An additional attribute of direct democracy at the level of the city-state is that citizens could get to know one another or know of one another, identify common interests, and conscientiously strive to achieve the public good. In classical Athens, where democracy meant direct democracy, there were about 40,000 privileged male citizens who assembled many times a year to conduct public affairs in their demes, or districts. A quorum of 6,000 citizens was required for the Athenian Assembly or Ecclesia to legitimate decisions that governed this key city-state, meeting about 40 times per year. While consensus was always the aim, decisions could also be rendered legitimate through majority vote when there were entrenched, opposed positions. Athenian citizens enjoyed a great deal of leisure time owing to their wives, responsible for the domestic sphere, and the labor of their slaves (who numbered around 90,000), freeing them for the demanding activities of politics. Politics here chiefly meant the affairs of the city-state and extended, at most, to other city-states and kingdoms within the geographic area of Greece and the Aegean Sea, and citizenship was understood to refer only to the community of their residence, not to any larger political unit. Direct democracy on the model of classical Athens cannot work in a modern country as large and populated as the United States, which necessitates a complex governing structure. Nonetheless, some New England-style town hall meetings still occur, while the communal assemblies of the Swiss confederation might be the best working example of direct democracy at the level of a nation-state. The principal examples of direct democracy in use in the
United States today include provisions in many states for ballot initiatives, referenda, or recall of elected politicians. These provisions allow for the people to propose legislation, pronounce on legislation, or render their negative judgment of sitting officeholders. Future technological innovations might provide us with a teledemocracy, a direct democracy made possible through mass communications or the Internet. The risk remains, however, that such technical fixes will merely reflect and solidify unrefined popular opinion, and not promote seasoned political analysis or edifying discourse, much as Plato believed was the danger in Athens, where his mentor Socrates and the popular will had a memorable collision. Plato, perhaps democracy’s most formidable critic, believed that politics was an art, and that it requires a considerably long education to prepare oneself for political leadership, for making the intelligent and morally correct judgments that only the few philosophically inclined could ever hope to perform. For Plato, the value of citizen participation in collective decision making did not outweigh the cost, and the ship of state would quickly run aground owing to incorrect decisions and bad leadership. Rousseau sought to unite popular rule with the people’s capacity to form an enlarged sense of public-mindedness, hoping that in their process of public decision making they would identify and will their community’s common good. While Rousseau may have been the theorist of the French Revolution, he was not especially influential in the American colonies, where the device of representation was the preferred method of indirectly discerning and acting on the public will. Those founders who were students of the classics were skeptical of direct or pure democracy. With Plato and Aristotle, they were wary of democracy’s tendency to cater to the unrefined inclinations of the majority, a mob to whom an unscrupulous politician or demagogue could appeal with disastrous consequences for everyone. They were mindful of the old argument that because democracies are neither based in, nor particularly concerned to cultivate reason or virtue, they will degenerate into tyrannies. The founders, however, did not share Aristotle’s concern that popular rule in a democracy is insufficiently concerned with protecting the wealthy minority, and they rejected a property qualification for the franchise,
extending it to all free men. The Founders, then, established not a direct democracy, but a republican form of government that would hold the passions of the people in check, while yet granting them the franchise to elect representatives. The United States is a democratic republic, by which is meant a representative democracy, arguably the founders’ greatest political innovation. Regardless of any past skepticism regarding democratic governance, the 20th century saw a rise in democracies around the world commensurate with a political consensus that those countries that were not already democracies, or on their way to becoming democracies, were less legitimate if not outright illegitimate political regimes. Democracy with its political values of equality and liberty has inspired countless people in former African, Latin American, and Asian colonies. Though their political and revolutionary activities on behalf of their own freedom and independence have seldom resulted in the establishment of direct or participatory democracies, the aspiration to seize political power and return it to the people so that they are free to control their common fate stems from these models of self-rule. For many Americans, direct democracy remains the ideal form of self-rule because there would be no separation between the rulers and the ruled, and it would involve all the people taking part in governance, deliberating and adjudicating. Direct or participatory democracy holds out the promise of wrestling political power away from the nation’s two semi-distinct political parties and innumerable interest groups, a system where elites and money are perceived to dominate and the ordinary citizen may feel himself or herself less than equal. While pursuit of this ideal at the national level might be dismissed as romantic longing or foolish naïveté, it remains important to have a conception of direct and participatory democracy if just to be mindful of what has been sacrificed in order to make way for the gains of representative democracy. Engaging in the practice of direct democracy at the local level such as in the government of towns may serve to prepare and motivate citizens interested in public service, and begin the process whereby candidates for public office become known and judged by their peers as they run for election at ever higher levels of government.
Further Reading Barber, Benjamin. Strong Democracy. Berkeley: University of California Press, 1984; Held, David. Models of Democracy. 3rd ed. Palo Alto, Calif.: Stanford University Press, 2006; Hyland, James L. Democratic Theory: The Philosophical Foundations. Manchester: Manchester University Press, 1995; Pateman, Carole. Participation and Democratic Theory. Cambridge: Cambridge University Press, 1970. —Gordon A. Babst
divine right of kings The Europe from which the framers of the United States government fled in the 1600s and 1700s was being transformed in the transition from an age governed by the divine right of kings to an age of democracy, sometimes known as the divine right of the people. Power and legitimacy were shifting from the will of the king to the will of the people (usually expressed through representative assemblies). The medieval world of Europe was one in which kings claimed to possess authority handed down to them by God. They were, or so they claimed, God’s representatives on earth and were deserving of the respect that this link to divine connection merited. The belief that God spoke and acted through the king was widely accepted, and for the king, this gave authority and power to his rule. Thus, the will of the monarch was the will of God—or so the kings would have one believe. With power grounded in the divine, kings had almost limitless reach. After all, who would defy or question the word of God? Such boldness would be sacrilege and blasphemy. If the will of the king were equated with the will of God, none but the bold or insane would challenge the king, for to do so would be to challenge God. This is firm political footing on which a king can stand, and who was left to say that “the emperor had no clothes”? Such power was, as the poet Alexander Pope wrote in The Dunciad (1728), “the right divine of Kings to govern wrong.” Over time, it became increasingly difficult to maintain the fiction that the will of God and the will of the king were one. After all, kings were mere mortals, and some “more mortal than others.” An effective king could maintain the fiction, but sprinkled in between the benign and the effective, were brutes, bullies, and madmen. Some lusted after power, while
others lusted after pleasures of the flesh. Some were intent on expanding their empires while others were content to simply maintain the status quo. But while the more benign kings maintained their status and authority with relative ease, the less effective kings often undermined their own authority and status, thereby undermining the legitimacy of the governing myth that the king was linked to the divine. It was these kings who opened the door to the deterioration of the concept of the divine right of kings, and who were responsible for the downfall of divine right as a governing concept. The behaviors of the “bad” kings directly undermined the claims that God was represented on earth by these ineffective, and sometimes cruel, rulers. Thus, over time, the challenges to the ideology of the divine right of kings became more and more palatable, until the house of cards on which the divine right of kings was built came tumbling down. In this way, over time, the sanctity of the belief in the divine right of kings was challenged from a variety of sources: by the church, the landed barons, later by the nascent parliaments, and later still, by the people, usually as represented by elected legislatures. This challenge to the divine right of kings was a long, often bloody, difficult process, as understandably, kings did not want to give up an inch of authority. But over time, the sanctity of the king eroded, and the demands for greater forms of participation and democracy emerged and finally overtook and replaced the divine right of kings with the divine right of the people. Legitimacy was thus transformed and placed on a new and less solid ground: the will of the people. This secular base of power proved more difficult to harness and control, and created the need for rulers to “lead” and not just command. But how was this will to be translated into the world of politics? Direct democracy was one option, and representative democracy another. Most nation states opted for the latter as it was easier to manage, and cynics argue, easier to manipulate the will of the people and the political process if they have as little to do as possible with directly governing themselves. During the age of divine right, kings could command. They had power, authority, and were virtually unchallenged. In the new era, leaders had to lead. That is, the king could rule or command with nearly unquestioned authority, whereas presidents and prime ministers had to persuade, win elections,
and gain consent. With the transformation from the divine right of kings to the divine right of the people, leaders possessed less power and were on more fragile and tentative political ground. They had to continually earn the support of the people lest their governments fall. And the people could be fickle. They sometimes wanted contradictory or paradoxical things (for example, lower taxes but more government services). This made the leaders dependent on the people for their “power” while making them servants of the electoral will or whim of the masses. The framers of the U.S. Constitution soundly rejected the idea of the divine right of kings. The new nation emerged at a particular historical moment when the divine right of kings was being challenged from all sides, and thus, as revolutionary fervor heightened, the ideas that animated their break from the past included a more democratic, egalitarian ethos. The framers were grounded in a deeper appreciation for democratic theory, and in practical terms, a rejection of the rule of the British kings. For the colonists, rejecting the divine right of kings was relatively easy, as they were preconditioned to embrace a more democratic, less regal view of power and politics. This was made even easier because the rule of King George III was widely seen as arbitrary and capricious. The American Declaration of Independence, apart from being one of the most eloquent statements of revolutionary and democratic theory ever penned, is also a ferocious indictment of executive abuses as allegedly committed by the British monarch. “The royal brute of Britain” is how revolutionary writer Thomas Paine referred to the king, and most colonists saw him as such. Thus, when the time came to form a new government, the framers rejected not only the notion that the will of the king was the will of God, but also they went so far in their antiexecutive sentiments as to have no political executive in their first government under the Articles of Confederation. Over time, the impracticality of this arrangement was widely recognized, and when the time came to revise the Articles, the framers, starting from scratch, wrote a new Constitution in 1787 that included a president, but one with limited power, bound by the rule of law under a Constitution. The king claimed that his authority derived from the “fact” that God had anointed him king. To diso-
bey the king was tantamount to disobeying God. As long as the vast majority of the people were willing to buy into that myth, the king could rule, or command, perched on the shoulders of God and fully expect to be obeyed by the people. Over time, the divine right of kings gave way to a new myth: the divine right of the people, or democracy. The ground beneath the king’s authority collapsed and was replaced by a secular legitimacy based on the will or consent of the people. Few followed the commands of the ruler. Now people had to be persuaded to follow, or they believed that the “elected” leaders were to follow their will. The grounds of authority and legitimacy were weakened, and presidents had to practice a new form of politics that was centered on the new source of power: the voice of the people. If the divine right of kings strikes the modern mind as farfetched, it might be useful to remember that all societies are guided by popular myths, legends, and fictions that are accepted as truth. Such myths serve purposes, be it to socialize citizens into the dominant governing ethos, or to serve the interests of the state. The divine right of kings served the interests of royalty, order, and stability. Today, the dominant myth is democracy. Will there be a point, perhaps 200 years from now, when future generations will look back at that dominant paradigm and ask how such intelligent people could believe such a thing? In politics, the only thing certain is that change is inevitable. Further Reading Kantorowicz, Ernst H. The King’s Two Bodies. Princeton, N.J.: Princeton University Press, 1957; Williams, Ann. Kingship and Government in Pre-Conquest England. New York: St. Martin’s Press, 1999; Wootton, David, ed. Divine Right and Democracy: An Anthology of Political Writing in Stuart England. Indianapolis, Ind.: Hackett, 2003. —Michael A. Genovese
eminent domain The concept of eminent domain comes from the distinguished Dutch jurist Hugo Grotius, who in a 1625 legal treatise, wrote about the legal rights of the state to confiscate private property for some state pur-
pose. The word comes from the Latin dominium eminens meaning supreme lordship. Eminent domain made its way into United States law via English law, where the king possessed an inherent right of the sovereign to the property of his subjects. As Parliament wrestled power away from the king and codified the rights of citizens as well as the limits on government, a way had to be found to legally confiscate private property when the interests of the state were believed to supersede the rights of the citizen or subject. Over time, the concept of eminent domain served this state purpose and became deeply rooted in English law, and later in American law. Defined as the government’s power to take land or private property for public use, eminent domain is embedded in American law in the Fifth Amendment to the U.S. Constitution, where it is written that “No person shall . . . be deprived of life, liberty, or property, without due process of law; nor shall private property be taken for public use without just compensation.” The state’s right of eminent domain is also written into the constitutions of all 50 states. But over time, it has also been the source of a great deal of controversy. Why would the state need to confiscate such private property? In general, the state and public interest may at times be served by building public roads, or providing some kind of public access, service, or program that requires the purchase of private property that is to be put to public use. Of course, in eminent domain, much depends on the definition of “public use.” And in recent years, that definition has greatly expanded, causing a backlash by several public interest groups, antigovernment organizations, and others. Baltimore’s Inner Harbor and New York’s Times Square were remodeled and revamped as a result of eminent domain rules. And those who were dispossessed in the eminent domain cases felt that the state had wrongly confiscated their property. The courts have been instrumental in defining the scope and limits of eminent domain. In Calder v. Bull (1798), Associate Justice Samuel Chase expressed concern with the meaning of the term “public use,” and that debate has animated the controversy over eminent domain ever since. Later, the controversy reached fever pitch when the United States Supreme Court dramatically expanded the definition of eminent domain. In 1876, the Supreme Court,
in Kohl v. United States, affirmed the right of the federal government to confiscate privately held land, as long as a fair market value was paid for the property and as long as the property was to be put to some greater or public good. In a more contemporary context, the controversy over eminent domain grew even more heated when the Supreme Court again dramatically expanded the definition of eminent domain. In Kelo v. New London (2005), the city of New London, Connecticut, wanted to condemn 115 residences, take over the property owned by the residents, and turn that property over to private developers who would then put it to private use and profit. How, one might ask, is this "public use"? The city argued that by upgrading the property it would increase property values and thus increase the city's tax base. But is this public use or public benefit? And what of the rights and interests of those being dispossessed of their homes (even at fair market value)? In a 5-4 vote, the United States Supreme Court found in favor of the city of New London, thereby giving local governments wide latitude for property seizure for a "public purpose." The decision opened the door for many cities to pursue economic development, but it also undermined the concept that "a man's home is his castle," and it generated an anti–eminent domain movement that has been especially vitriolic on the Internet. Giving the state greater latitude to seize private property pits the state against the individual and calls into question the very meaning of the Fifth Amendment to the Constitution. Historically, the limit on the state's taking of private property centered on the "public use" question. As long as a legitimate public use was made of the property, the government had a strong argument for taking private property from a citizen, and courts across the nation usually sided with the government in disputes over the question of public use. But the definition of "public use" has proven a moving target, with no settled definition or understanding. After the controversy surrounding Kelo v. New London, more than 30 states have considered how to limit the government's reach in taking private property under the eminent domain clauses of the federal and state constitutions. As of this writing, five states have made minor changes to state law
to attempt to restrict confiscation of private property, but no new state law has frontally challenged Kelo v. New London. Several other states are considering changes in eminent domain as well. These state governments have been responding to citizen fears and outrage at the scope of eminent domain allowed by the Supreme Court in the Kelo case. While it seems likely that state governments will continue to attempt to chip away at Kelo, it seems equally likely that the Supreme Court will not significantly back away from its Kelo ruling. Will that create a deadlock or a victory for the Supreme Court? In all likelihood, the Supreme Court's view will prevail. There is no reason to presume that the Supreme Court will wish to change its mind in the near future, and in spite of state efforts, the federal rulings seem unlikely to change in the coming decade. Even if citizen pressure is brought to bear on the Court, and in spite of the narrow 5-4 margin of the Kelo victory, recent changes in the Court's personnel have, if anything, seemed to make Kelo even more likely to withstand challenges from the states. Some antigovernment extremist groups have, in the wake of the Kelo decision, tried to inflame the passions of the people by suggesting that Kelo is just the tip of the iceberg, and that it is the first step in a pattern of governmental power grabbing. They suggest that the liberty of the people is in danger from a tyrannical government that can come in and take away private property for no good "public use" reason. Thus far, the antigovernment movement has made little headway in changing public opinion on this issue. However, these groups are trying to use the issue to recruit converts to their antigovernment ideology, and to the extent that eminent domain disputes continue to make headlines, such stories might form a tipping point for disgruntled citizens to join these groups and swell the ranks of the antigovernment movement in the United States. While eminent domain strikes at the heart of the rights and liberties of individual citizens, the "balancing test" of public need versus public use makes this a complicated and by no means simple topic. Where the rights of citizens and the interests of the state collide, the Supreme Court often approaches the problem with a "balancing" effort: give neither position a clear victory, and try to balance the needs and interests of both parties. But there is no balance
when one’s land is confiscated, even if a fair market value is paid by the state to the property owner. In this way, it is unlikely that this controversy will soon be settled, nor will a balance be seen as an acceptable outcome by either side of the issue. See also property rights. Further Reading Epstein, Richard. Takings: Private Property and the Power of Eminent Domain. Cambridge, Mass.: Harvard University Press, 2005; Lieberman, Jethro Koller. A Practical Companion to the Constitution: How the Supreme Court Has Ruled on Issues from Abortion to Zoning. Berkeley: University of California Press, 1999; Raskin, Jamin B. Overruling Democracy: The Supreme Court Versus the American People, New York: Routledge, 2003. —Michael A. Genovese
English Bill of Rights (1688)
The English Bill of Rights emerged after decades of civil discord in England that resulted from a change in religion. In order to fully understand the import of the Bill of Rights, some historical context is necessary. Henry VIII wanted his Catholic marriage to Catherine of Aragon annulled. Pope Clement VII refused to grant the annulment, so Henry declared himself Supreme Head of the Church of England. When this happened, the Church of England, and the English people, broke from the Catholic Church in 1534. Although the Church of England remained relatively Catholic in form during Henry's reign, his son, Edward VI, moved the church theologically in a more Protestant direction. Queen Mary I linked the Church of England again with the Roman Catholic Church in 1555, but that did not last, as under Elizabeth I the Church of England became firmly Protestant. Elizabeth I reigned for nearly five decades without marrying and thus without securing an heir to the throne. When she died in 1603, there was no Tudor in line for the English throne, and James Stuart, King James VI of Scotland and a cousin of Elizabeth's, became king of England as James I. James I did not favor either major religion in England and Scotland, which angered both Catholics and Protestants, but it was his belief in absolutism, or the divine right of kings, that sowed the seeds for civil war. His son,
Charles I, who ascended to the throne in 1625, married a Catholic and pushed the theory of absolutism that his father believed in. Both of these events caused problems for Charles I and England, as many Protestants believed Charles I was bringing the Church of England too close to Catholic doctrine, and many Englishmen believed Charles I was taking on too much power as king. He was opposed by Parliament, which challenged his absolutism, and by the Puritans, who challenged his religious choices. This opposition resulted in the Civil War, during which Charles I was executed in 1649. England then attempted a commonwealth government, which quickly dissolved into a military dictatorship under Oliver Cromwell from 1653 to 1658, and then under his son, Richard Cromwell, from 1658 to 1659. Charles I’s son, Charles II, was king of England in law but was exiled from 1649 to 1660. In 1660, after a few years of tremendous worry among the citizens of England that the nation would descend into anarchy, the nation restored the monarchy and placed Charles II officially on the throne. Charles II ruled until 1685 without a legitimate Protestant heir because his wife could not bear him any children. This left the succession to his Catholic brother, James II, who also believed in absolutism and was very unpopular with the English people. He was deposed very quickly, fleeing his throne ahead of his son-in-law, William of Orange, who landed in England with his army in 1688. When William arrived in England, he called a convention of all those living who had served in Parliament to decide what would happen given James’s flight. As a result of a near century of religious disputes and a series of monarchs claiming absolutism, the English Convention decided to pass An Act Declaring the Rights and Liberties of the Subject and Settling the Succession of the Crown, later to be known as the English Bill of Rights. This is a part of the long constitutional and political history of England in which Englishmen do not have their rights granted to them by the throne, but declare in writing the existence of customary rights that the throne needs to recognize. This history began with Magna Carta, written in 1215 in opposition to king John, followed by the Petition of Right, written in 1628 in opposition to Charles I, and culminated in the English Bill of Rights written in opposition to James II. The English tradition of forcing the monarch to recognize
preexisting customary rights then continues into the U.S. Constitution and the Bill of Rights amended to that document. The people do not receive their rights because of these documents; instead, these documents guarantee that the monarch, or government in the case of the United States, recognizes that people have natural rights. The Glorious Revolution that surrounds the writing of the English Bill of Rights entailed settling three major issues that a century of dispute between religions, monarchs, and the people had given rise to. First, the convention needed to decide who would be king. James II had a legitimate son, but many worried that the Stuart monarchs had tended toward absolutism and Roman Catholicism, neither of which worked very well for the English Parliament and the people it represented. Second, the convention had to decide in what ways the government should be formed. This is a direct question about the divine right of kings and whether the monarch could rule absolutely. Where should the power in government lie and how should the government be formed to make sure that power was not abused? Finally, the Convention had to deal with the contentious issue of what the relationship was between the Church of England and other Protestant churches in the realm. William of Orange called the convention so there would be no question as to his legitimacy to rule with his wife, Mary (James II’s daughter), and also to settle these questions that had plagued England for nearly 100 years. It is during the Glorious Revolution that John Locke plays a major role in settling these political philosophical questions. Locke was an agent of the earl of Shaftesbury, who had been intimately involved with many of the events leading up to the Glorious Revolution. Shaftesbury was in favor of Charles I, until he decided that the king’s policies were too destructive, and then switched his support to Cromwell, until Cromwell became a dictator. Shaftesbury then took part in trying those who had participated in the trial and execution of Charles I. Finally, Shaftesbury’s downfall happened when he conspired with Charles II’s illegitimate son, the duke of Monmouth, to make him king instead of his Catholic uncle, James II. Shaftesbury fled to Holland when James II ascended to the throne, and John Locke went with him. That is how Locke became intimately
involved in the history of the Stuart monarchs. He did not want Catholic monarchs, but he was not opposed to a monarchy per se. Toward this end, Locke addressed the issue of whether the people have the right to resist a monarch and, if so, under what authority. This began Locke’s development of his social contract theory that stated people had the right to overthrow a monarch if that monarch broke the contract. Therefore, James II could be overthrown in favor of William and Mary because he broke the contract with his people by not recognizing their rights. The Convention found that the governing relationship was between a monarch and the people. Thus the power was situated in the relationship, not in the hands of the monarch alone. Thus, the monarch could abuse the relationship, resulting in his or her overthrow, and the people would then begin again with a new monarch. In this way, the Convention answered the question of how William and Mary could be legitimate monarchs without allowing for further unrest in the nation. The English Bill of Rights is the document that sets out the answer to those questions. First, the document attacked King James II and itemized what he did to violate the relationship of governance with the people of England. With his violation of the relationship established, it was clear that the people could start over. The royal succession was established from James II to his daughter, Mary, and her husband, William, who promised not to violate the relationship. Then the document turned to the formation of government by taking the reins out of the hands of an absolute monarch favored by the Stuart dynasty, and giving them to a mixed monarchy with a strong legislative government. Parliament’s role in legislation was firmly established, so that it was not subject to the whims of a monarch who chose to meddle in this area of governance. Certain checks were established on the monarch’s behavior, thus ensuring that no absolutism would be tolerated in the future. For example, the monarch was no longer allowed to raise and keep armies without Parliament’s approval. The English Bill of Rights was a revolution that restored continuity with English constitutional and political tradition, rather than one of change. The English Bill of Rights was written at an important time in American history as well, while
many of the colonists began to arrive and settle in the new American colonies throughout the 17th century. Thus, these events had an influence on how both people in the colonies and people in England would think about issues involving governing and politics. For example, the southern colonies were founded during a relatively smooth time for monarchs and had less of a problem with the king than did the New England colonies that were settled by Puritan dissenters. Another important element for American government found in the English Bill of Rights is the tradition of clearly enumerating the rights of the people and expecting the government to recognize those same rights. When contract theory was introduced to the English people, the colonists embraced it and started to push in their colonial assemblies for recognition of these rights as well. In fact, in comparing the Declaration of Independence alongside the English Bill of Rights, many major similarities exist between the two documents. An enumerated list of the ways a king has violated the governing contract with the people, a theory of how a government is to be formed and why, and indeed, Locke’s principles of people being allowed to overthrow a king and start over are written explicitly into the opening of the Declaration of Independence, which would become one of the founding documents of the United States. The English Bill of Rights also steeps American civics in a long tradition of people having rights that are recognized—not granted—by their governing agents. Americans believe civically in a specific natural rights order that they can quote quickly if asked in a civics lesson. Many of the ideas for these rights come from the English tradition of recognizing subjects’ rights throughout time. Further Reading Frohnen, Bruce. The American Republic. Indianapolis, Ind.: Liberty Fund, 2002; Roberts, Clayton. “The Constitutional Significance of the Financial Settlement of 1690.” The Historical Journal 20, no. 1 (March 1977), 59–76; Schwoerer, Lois G. “Locke, Lockean Ideas, and the Glorious Revolution.” Journal of the History of Ideas 51, no. 4 (Oct.–Dec., 1990), 531–548. —Leah A. Murray
federalism Federalism is a part of the U.S. Constitution, and refers to the sharing of power between the states and the federal government. Originally, federalism was seen as a check on governmental tyranny. The framers of the U.S. Constitution believed that if power were concentrated in one level of government, it might become tyrannical and abuse the rights of citizens. Therefore, they separated power both horizontally in the separation of powers, as well as vertically in federalism. This two-tiered form of separation reflected the framers’ concern not for promoting governmental efficiency but in creating a government that in many ways was incapable of exercising too much power over the people, so as better to guarantee liberty from the potential threat of government. A federal system thus was designed partially as a defense against tyranny. Separating governmental authority vertically and horizontally was an architectural or structural method of promoting freedom by limiting the scope and power of government. Such a system would also allow local differences to exist and for experimentation and local variety. Different regions had different interests, cultures, religions, and customs; forcing a “one-size-fits-all” model on these very different states seemed to smack of tyranny. However, over time it became clear that this local control not only promoted regional and state autonomy, but also the darker side: racial segregation. That dilemma would come back to haunt the system and end in a civil war in the 1860s. Federalism in the United States creates some confusion. Because it is impossible to decide in every instance what level of government shall have what powers, there can never be finality in the scope of federalism. It is a moving target, and always a work in progress with unclear boundaries and uncertain borders. For example, what level of government should be responsible for public education, environmental protection, or welfare? Is it possible in all cases to clearly draw the lines of authority, or are these and other policy areas the shared responsibility of local, state, and federal governments? And who is responsible and to what degree for which aspects of these political programs? Many of the problems of federalism were constitutionally sewn into the fabric at the invention of the
nation. In the abstract, federalism seems a promising idea; in operation, however, it is a difficult system to operate and administer with clarity. In public opinion polls, Americans consistently say they believe the federal government is too big. But Americans are less clear on just what functions and powers should be cut or devolved down to the state and local governments. This is especially true during a crisis when the local or state authorities may be unable to cope with the size or cost of major catastrophes. After the American Revolution, the framers, fresh off a revolt against central authority in the form of the king of England, were determined to find ways to both empower as well as limit the power of the government they were creating. Initially, the framers created the Articles of Confederation, a national charter that created a confederacy of the states that was high on state power and weak on federal authority. This dispersion of power reflected their fears concerning the potential for a central government to impose tyranny over the states and the people. Thus, a weak federal government was established. So weak was the new central authority that the national government had no taxing authority, no executive officer, no power to regulate commerce, a weak national judiciary, and very strong and independent states. It was a federal government in name only. The problem was that the states were really 13 separate independent nations with a weak umbrella of a federal government. This may have made sense initially, as the primary objective of the framers was to prevent tyranny, but over time the need for an effective government also became evident. After several years of weak and failed government, the framers finally concluded that they needed a stronger federal government. This did not mean that their fears of central authority had vanished. How then, to empower yet limit governmental authority? The framers accomplished this task by placing into the U.S. Constitution separated and federalized power. They separated the federal government’s institutions into three different branches, and federalized, or separated, the local, state and national governments into overlapping powers. The debate over ratification of the newly proposed Constitution highlighted the confusion over the true source of power in this new government. In
The Federalist, James Madison addressed the division of power between the states and the federal government and tried to reassure the fears in the states as he also established a new and more powerful role for the federal government. While sensitive to the concerns of the states, Madison was also determined to argue the case for greater federal power. He sought to reassure the states that the checks and balances built into the system would not allow the federal government to trample on the rights of the states, while also arguing that an extended republic (a large republic) best protected the rights of the people and the states. At the same time, he charted a course for a more robust federal government. However, not all of the states were persuaded. Federalism was seen as a check against government tyranny and against the mischief of faction. As Madison would write in Federalist 10, if “factious leaders . . . kindle a flame within their particular States,” national leaders can check the spread of the “conflagration” from reaching the other states. And as Madison noted elsewhere in Federalist 10, “The federal Constitution forms a happy combination . . . the great and aggregate interests being referred to the national, the local and particular to the State legislatures.” This federalism principle was thus a moderating influence that would both control the passions and selfishness of factions, while retaining the local character of government. Thus, “American government” today is something of a misnomer, as it really is “governments”, that is, many governments: one federal government (separated horizontally), 50 state governments, over 3,000 county governments, municipalities, and more townships, school districts, and special districts. For most of the nation’s history, local governments were closest to the daily lives of citizens (except for national defense). But over time several factors (industrialization, the rise of the United States as a world power, economic interdependence and globalization, etc.) joined together to give greater power to the federal government. But federalism is still very much a part of the American system, and states jealously guard their powers. In the early republic, when the Constitution was still young, the federal government exercised few powers. The state was closer to the lives of the citi-
zens, and most Americans identified more closely with their home state than with the distant federal government. The Tenth Amendment, ratified in 1791, helped clarify what powers belonged to the federal government and what powers rested with the states. It reads, in its entirety: "The powers not delegated to the United States by the Constitution, nor prohibited by it to the States, are reserved to the States respectively, or to the people." This clarified matters, but only for a time. As the nation grew, so too did the power of the federal government. But this occurred slowly. For roughly its first hundred years, the United States was not a major player in world politics and had limited involvement in the politics of Europe, and the federal government remained small and relatively weak. Most issues were handled at the local level, and the federal government had few major responsibilities apart from regulating commerce, imposing tariffs, and providing for the national defense. It took several decades for the federal government to impose its supremacy over the states. The federal courts were instrumental in establishing the supremacy of the federal government over the states. In 1819, the United States Supreme Court ruled in McCulloch v. Maryland that the powers of the federal government were not limited to those expressly granted in the Constitution. This opened the door for the future expansion of the powers and reach of the federal government. A few years later, in Gibbons v. Ogden (1824), the Marshall court again sided with the authority of the federal government at the expense of the states, when Chief Justice John Marshall broadly defined the national government's authority to regulate commerce. This decision not only expanded the scope of federal authority but also dealt a severe blow to the power of the states. As the federal government's reach expanded, some states reacted. The theory of nullification emerged as an antidote to the expansion of federal power. Its roots lay in the Kentucky and Virginia Resolutions of 1798, in which James Madison and Thomas Jefferson responded to what they perceived as the injustice of the Sedition Act of 1798 by ghostwriting state resolutions designed to allow states to void federal legislation they believed to be unconstitutional.
Supporters of the theory of nullification argued that the federal union was primarily a compact among the states, but that the states were sovereign and supreme. It was a government of the states, not of the people defined nationally. They thus declared that the ultimate authority was the states, and not the federal government. In 1831, states’ rights advocate John C. Calhoun led the fight for state power, delivering his famous Fort Hill address in which he traced the roots of his cause to Madison and Jefferson and drew the battle lines between the federal and state governments. In 1832, the state of South Carolina adopted the Ordinance of Nullification, declaring two federally imposed tariffs null and void, and threatening to secede from the union if the federal government attempted to collect the tariffs by force. Reacting to the South Carolina law, President Andrew Jackson issued his famous warning to the state that any action against the federal government would constitute treason, and would be dealt with swiftly and harshly. The threat worked, and South Carolina did not press the issue, but the embers of rebellion were being stoked. From the 1820s to the 1860s, increased sectionalism and rebellion against the power of the federal government led to tensions, which eventually led to disagreements over tariffs, slavery, and other issues, leading to civil war. The federalist cause of the civil war stemmed from the question of sovereignty: Was the federal government the creation of states or the people? And could a state exert its supremacy over the federal government by nullifying the legitimacy of a federal law? Slavery played a role, but it was power that truly led to the outbreak of war. The Civil War decided the issue (for a time at least) in favor of a federal perspective. In the aftermath of the Civil War, issues of federalism receded as the federal government imposed itself on the southern states and began to grow in power and influence as the United States grew in power and influence. The next major shift in federal/state relations took place as a result of the Great Depression of 1929. The Depression was devastating. Unemployment soared, starvation and homelessness were common, and the problem was too big for local charities, local governments and state governments to handle. It was truly a national crisis and the public demanded a national response. That response came in the election of 1932
in the form of President Franklin D. Roosevelt and in the policy known as the New Deal. The New Deal nationalized politics, power, and policy. From this point on, poverty, economic management, employment levels, and a variety of other issues came under the control of the federal government. Prior to the Depression, local governments, charities, churches, and the states had the primary responsibility for poverty and what we now call social welfare policies. But with millions unemployed, starving, and no hope in sight, public demands for federal support were too powerful to resist, and President Roosevelt supplied the nation with the policies and the politics to create a new ethos for the nation. Roosevelt and the Congress passed a series of landmark laws that became known as the New Deal: jobs programs, relief, aid to families with dependent children, social security, farm programs, and the list goes on and on. It was the beginning of a new relationship of the people to the government, and of the federal government to the states. Politics had now gone national. In the 1950s, a revival of the theory of nullification occurred. Reaction against the New Deal, the role of the United States in the new post–World War II world, the rise of the cold war, and the emergence of the Civil Rights Movement caused reevaluation and reappraisal. A split between liberals, who generally favored a stronger federal role, and conservatives, who tended to support local government, shaped the politics of this age, and conservatives championed federalism in opposition to the emerging welfare state they saw developing in the postwar era. The spark for the revival of the federalist debate again came primarily from the southern states, and the issue once again was race. In 1954, the liberal Warren court handed down one of its most controversial decisions: Brown v. Board of Education. In that case, which arose from the segregated public schools of Topeka, Kansas, the Court held that racially segregated public schools were unconstitutional and had to be desegregated. Many state and local officials, particularly in the South, saw this as an inappropriate federal intrusion into local politics and resisted the ruling. The decision also sparked other states to attempt to resist the power of the federal government. In 1956, Alabama passed a nullification resolution. Then, in 1957, President Dwight D. Eisenhower sent federal troops into Little Rock, Arkansas, to ensure compliance with a school desegregation order. The introduction of federal troops was necessary to
protect nine black children attempting to enroll in public schools. Several years later, President John F. Kennedy had to again order federal troops into the American South, this time to Mississippi and Alabama, again to force school desegregation. While the 1950s sparked a reaction against federal authority, the 1960s saw a revival of the national government. After the assassination of President John F. Kennedy in November 1963, President Lyndon B. Johnson led the government in pursuit of what became known as the Great Society. Johnson and the Congress passed several landmark laws aimed at reducing poverty and promoting racial equality, along with legislation on education, voting rights, food stamps, Medicare, Medicaid, and a variety of other social welfare programs. But the law of politics, as in physics, is that for every action there is a reaction, and the reaction against the 1960s was a rebirth of conservatism in America. The first president to embrace this trend was Richard Nixon. Nixon tried to navigate a rebirth of states' rights while governing in an age of expanded federal power. He expanded the federal role where necessary, and contracted it where possible. But if the federal role grew during his presidency (between 1969 and 1974), it also contracted as Nixon promoted block grant funding to the states. Where Nixon tried to return some powers to the states, it was President Ronald Reagan who frontally took on the federal government. In his first inaugural address in 1981, Reagan vowed "to curb the size and influence of the federal establishment," declaring that "government is not the solution to our problem; government is the problem." Reagan promised to balance the federal budget by scaling back federal domestic social programs such as Social Security, Medicare, and Medicaid, and he also wanted to dramatically increase defense spending and cut taxes by one-third. Reagan really sought a shift, not a reduction, in the power and scope of the federal government. He attempted to shift spending away from social programs and toward defense. Reagan talked about reducing the role of the federal government, but his proposals really served to reallocate, not cut, the size of the federal government. His policies had mixed results. While he was able to increase defense spending and cut taxes, he did not significantly cut domestic programs. His policies also led to the ballooning of the federal deficit.
While Reagan failed to achieve his policy goals, he did animate a corps of loyal followers who picked up the cause of federalism. They would take shape in the so-called Republican Revolution of 1994. In 1992, Bill Clinton won the presidency as a "new Democrat"; that is, a more moderate Democrat. He was in office only two years before his Democratic Party lost its majority in both houses of Congress. Calling for a reduced role for the federal government, Congressman Newt Gingrich, the Republican minority whip in the House of Representatives, spearheaded an election-year agenda called the "Contract With America," a list of campaign promises that rested on the antigovernment sentiment so popular at the time with voters. The Contract was a plan for devolution of power back to the states, and it helped the Republicans capture control of both the House and the Senate for the first time in 40 years. As the new Speaker of the House following the 1994 election, Gingrich made cutting the size of the federal government his top priority. President Clinton's response was to attempt to "triangulate" the policy agenda; that is, to put himself in between what he portrayed as the hard right and the extreme left. It was an effective political strategy, and while the Republicans made some early gains, in the end Gingrich imploded and Clinton, in spite of impeachment, regained control of the agenda and held off the charge of the Republican right. To gain political leverage, Clinton supported a welfare reform package that devolved power back to the states and cut welfare. He also made rhetorical gestures to the small-government conservatives by, for example, stating in his 1996 State of the Union address that "the era of big government is over." But of course, it was not. The government continued to grow, not least under Clinton's Republican successor, George W. Bush, who, after the 9/11 attacks against the United States, dramatically expanded the size and scope of the federal government's reach into the lives of citizens. Today, both political parties are big-government parties; they just want big government to serve different political ends. The primary drive for a more robust federalism now comes from the United States Supreme Court. Led by a conservative majority, the Court has rehabilitated federalism as a constitutional
doctrine. The Rehnquist court (1986–2005) was devoted to extending federalism, making a series of important decisions that promoted the devolution of power back to the states. In New York v. United States (1992), the Court sided with the states in a "commandeering" case, ruling against a federal attempt to require the states to enforce federal will, thereby commandeering a state to serve the dictates of Congress. In Printz v. U.S. (1997), the Court invalidated a significant provision of the Brady Handgun Violence Prevention Act designed to require local law enforcement officials to conduct background checks on gun purchases, arguing that the law violated the Tenth Amendment by forcing a state government to carry out a federal law. In U.S. v. Lopez (1995), the Court held that Congress exceeded its power under the commerce clause by prohibiting guns in school zones. These and other cases made the Supreme Court the leading proponent of modern federalism. Issues of federalism are ongoing and will not soon be resolved; they remain part of the continuing debate over, and evolution of, power within the United States government. Further Reading Elkins, Stanley, and Eric McKitrick. The Age of Federalism: The Early American Republic, 1788–1800. New York: Oxford University Press, 1993; Marbach, Joseph R., Ellis Katz, and Troy E. Smith, eds. Federalism in America: An Encyclopedia. Westport, Conn.: Greenwood Press, 2006; Nagel, Robert F. The Implosion of American Federalism. New York: Oxford University Press, 2002. —Michael A. Genovese
Federalist, The In order to understand the U.S. Constitution, one should, after reading the document itself, turn to The Federalist. Written by Alexander Hamilton, James Madison, and John Jay, The Federalist was originally intended as a series of newspaper articles (propaganda to some) designed to persuade the voters of the state of New York to vote in favor of ratifying the Constitution. At the time, New York and Virginia, arguably the two most important states in the new nation, were wavering and threatened to vote to reject the new Constitution. If either of these
states rejected the Constitution, it was doubtful that even if the required states voted to ratify, the experiment in a new form of government would succeed. The essays were published in 1787 and 1788, and are generally considered to be the finest explanation of the Constitution available. Several states had already voted to ratify the Constitution, but the two key states, Virginia and New York, without whose cooperation a union would not be politically possible, remained undecided. And early sentiment in New York appeared to be going against ratification. Hamilton, Madison, and Jay set out to answer the criticisms of the Constitution that were coming from a loose-knit group known as the antifederalists, to convince the voters of New York that the Constitution did not violate the spirit of the Revolution, and to explain the meaning of the document to an unconvinced citizenry. Today, the value of The Federalist is in the essays’ powerful and concise explanation of the thoughts of some of the key delegates to the Constitutional Convention concerning just what the Constitution means. The Federalist comprises 85 essays, all published under the name of “Publius.” It is believed that Hamilton wrote 56 of the entries, Madison authored 21, and Jay wrote five, and that Hamilton and Madison may have collaborated on three of the essays. The Federalist deals with the scope of government and attempts to explain both the roots of the new Constitution as well as the many parts that constituted the document. Careful to draw distinctions between this new system and the old monarchy of England, the authors attempted to reassure skeptical citizens of New York that this new constitutional system did not resemble the monarchy and sought to allay the fears that this new constitution could some day evolve into a new form of tyrannical government. It was an argument that had to be made, as the barrage of criticism coming from the antifederalists made a mark in New York. The Federalist does not always chart an easy to follow path. Written by three different authors, there is some overlap, and a few gaps in the work, but it remains the most important ingredient in a modern understanding of the work of the framers. The first 14 entries discuss the dangers the United States faced, arguing that there was a grave need for a new
constitution. Entries 15 through 22 discuss why the Articles of Confederation are not adequate to the challenge of governing. The authors then go into a fuller explanation of the contents of the new constitution they were espousing. Generally, The Federalist papers that deal with the Congress are numbered 52 through 66; the executive function can be primarily found in numbers 67 through 77; and the judiciary in numbers 78 through 83. Discussion of the separation of powers can be found in numbers 47 through 51. Federalism is discussed in numbers 41 through 46, the first four dealing with the power of the federal government, and the last two dealing with state powers. Public policy questions on taxes, foreign and domestic policy can be found in numbers 23 through 36. The theory of “original intent” attempts to discern what the framers really meant, and proponents of this constitutional view argue that the contemporary United States should go back to the intent of the framers and that the modern government should be guided by their wisdom. But even with The Federalist, original intent is often a slippery slope. Conservatives tend to highlight original intent when such an interpretation coincides with their preferred policy or political outcomes, as when they argue against activist judges who create new rights and new requirements for the contemporary age. Liberals do much the same thing, only they cite different segments of the original understanding of the Constitution, such as the limits of the president’s powers over war. Clearly much of this debate centers on whose ox is being gored, and to truly go back to the original intent of the framers would mean a dramatically different Constitution, and a dramatically different United States. Remember, the “original” Constitution treated slaves as three-fifths of a person, and did not allow women or racial/ethnic minorities the right to vote. In the first essay, Alexander Hamilton explains the purpose of the series of essays, writing that they would “discuss the following interesting particulars”: The utility of the union to the citizens’ political prosperity—the insufficiency of the present confederation to preserve that union—the necessity of a government at least equally energetic with the one proposed, to the attainment of this object—the conformity of the proposed constitution to the true principles of republican government—its analogy to state
constitutions—and last, the additional security that its adoption will afford to the preservation of that species of government, to liberty, and to property. Perhaps the two most cited and most important of the essays are Federalist 10 and 51, both attributed to James Madison. In Federalist 10, Madison writes of the dangers of faction, and of how a large republic, representative government, and majority rule can prevent factions from becoming tyrannical. In this way, factions, inevitable but dangerous, would be checked by the size and diversity of an extended republic, preventing any one faction from grabbing too much power. In Federalist 51, Madison discusses the need for a system of checks and balances embedded in a separation of powers system, arguing that "ambition must be made to counteract ambition." Possessing a fairly jaundiced view of human nature, Madison believed that checks and balances were necessary to curb the tendency inherent in human nature for individuals to grab for power: "If men were angels, no government would be necessary. If angels were to govern men, neither external nor internal controls on government would be necessary. In framing a government, which is to be administered by men over men, the great difficulty lies in this: you must first enable the government to control the governed; and in the next place oblige it to control itself." This language is a far cry from the optimism of Thomas Paine and Thomas Jefferson as they rallied the colonies to a democratic revolution, but it indicates the shift in thinking from 1776 to 1787. The optimistic language was replaced by a language grounded in the recognition of a darker side of human nature and the potential for tyranny and abuse of power. The Federalist, reassuring in tone, laid out the rationale for the new government the framers were trying to persuade the states to accept. For the states, it was a risky bargain. After all, the Constitution that was to replace the Articles of Confederation gave significant new powers to the new central government. Those who wished to keep government small and close to the people feared the powers being transferred to the new federal government. They especially feared the possibility of replacing one strong state (Great Britain) with another, more home-grown one.
Madison, Hamilton, and Jay had the difficult task of reassuring the state of New York that this new government did not threaten to become an empire such as the government they had recently rebelled against, and that it would protect the rights of citizens while also proving to be workable and functional. Given the fears of many of the citizens of New York, this was no easy task. The newly proposed government was a risk from the start, and the people of New York needed both reassurance that it could work and a roadmap demonstrating what all the pieces were designed to do and how they were to interact. The Federalist provides this. The essays remain an essential guide to the Constitution of the United States, and the finest explanation of the parts, as well as the unifying whole, of this new constitutional republic ever written. Today, The Federalist is often cited as a guide to understanding how power and politics should function in America. Harking back to what has become known as "original intent," many of today's political advocates ask: What would the framers do? To answer this question, The Federalist has become a much-cited authority on the will and intent of the framers. In this way, The Federalist has also become something of a political football, to be used and even manipulated to defend all manner of causes and policies. Of course, it is easy to take The Federalist out of context, but a full or holistic reading of it remains one of the essential ways for a modern audience to more fully understand what the framers envisioned for the nation, and how the new government was to be structured. Further Reading Cooke, Jacob E., ed. The Federalist. Middletown, Conn.: Wesleyan University Press, 1961; Epstein, David F. The Political Theory of the Federalist. Chicago: University of Chicago Press, 1984; Hamilton, Alexander, James Madison, and John Jay. The Federalist Papers, edited by Clinton Rossiter. New York: Mentor, 1999. —Michael A. Genovese
Great Compromise Also known as the “Connecticut Compromise,” the Great Compromise successfully resolved the dispute at the Constitutional Convention about the basis for representation in Congress between large states
and small states. The question of how states would be represented in Congress was the most important issue at the convention, and the issue that most threatened to prevent a successful drafting of a new constitution. Under the Articles of Confederation, each state enjoyed equal representation in a unicameral Congress. The actual number of representatives differed, with the Articles specifying anywhere from two to seven representatives per state, but each state had only one vote, so each delegation had to agree on how it would cast that single vote. Thus, both large states and small states were sensitive to how a new charter of government might alter this arrangement. It is important to understand that the perspectives of the large and small states were based on fundamental principles of government. The large states focused on principles associated with national union, whereas the small states focused on principles associated with states as states. The Articles of Confederation embodied this tension, for it styled itself as a confederation of independent states bound in a perpetual union and “league of friendship.” When the delegates arrived in Philadelphia in 1787, James Madison of Virginia was waiting for them with a draft proposal in hand, called the Virginia Plan. Edmund Randolph, also from Virginia, formally introduced the plan at the convention, and it became the template for discussion. The Virginia Plan struck precisely at the tension between state sovereignty and union seen in the Articles, calling for a much stronger national government that had real authority over the states. The Virginia Plan also proposed a bicameral legislature in which one chamber would be directly elected based on population, and the other chamber would be chosen by the first, more popular house. In each chamber, membership would be based on proportional representation—that is, larger states would enjoy greater representation, and thus greater political influence. The Virginia Plan won the initial round of debate, essentially tossing out the Articles and starting from scratch with a much more powerful national government, but the rules of convention debate, written to foster compromise and healthy deliberation, allowed any issue to be revisited, so the vote did not settle the question. The attempt to solidify the move toward a stronger national government prompted resistance from smaller states. William Paterson of New Jersey coun-
tered with the New Jersey Plan, which called for a unicameral legislature based on equal representation of states, much like the Congress under the Articles of Confederation. In fact, although the New Jersey Plan would have strengthened the powers of the national government under the Articles, it retained the basic principle of the equality of states, in direct contradiction to the move represented by the Virginia Plan. Paterson argued that the New Jersey Plan more closely fit the general authority of the convention, which was to make changes to and revise the Articles, not start from scratch. He also argued that, whether the Articles were retained or discarded, the result would be the same—the states would be in a status of equal sovereignty with respect to one another, a status that could only be surrendered by the consent of the states concerned. Proponents of the Virginia Plan responded that the basic question at hand was whether there was a single union at stake, or a “league of friendship” among several independent republics. This exchange demonstrates that the fundamental disagreement at the constitutional convention revolved around the true nature and character of the union. New Jersey Plan partisans argued that independence had created free and equal states, and that the national government under the Articles was their agent. Virginia Plan partisans argued that the states had never been separately independent, but instead that the Declaration of Independence created a national union whose current charter was defective and in need of revision. One of those defects was the equal representation of states regardless of size. Either the union was a genuinely national one, or it was not. It was precisely this conflict that Alexander Hamilton highlighted in Federalist 15, where he critiqued the system under the Articles of Confederation for creating a “league or alliance between independent nations” instead of a genuine national government. By late June, the two sides of the debate appeared to be hardening their positions. Roger Sherman, delegate from Connecticut, had twice proposed that different principles of representation be used in each chamber of the legislature. With the two sides seemingly intractable, his fellow Connecticut delegate, William Johnson, crystallized the basic problem: “Those on one side considering the States as districts of people composing one political Society; those on the other considering them as
so many political societies.” He then followed with the potential solution: “On the whole he thought that as in some respects the States are to be considered in their political capacity, and in others as districts of individual citizens, the two ideas embraced on different sides, instead of being opposed to each other, ought to be combined; that in one branch the people ought to be represented, and in the other the States.” The convention then decided that representation in the first chamber of the proposed Congress would not be based on equality of states, after which Oliver Ellsworth, also of Connecticut, proposed equal representation in the second chamber as a compromise between the two sides. The convention appointed a committee to make proposals, which met over Independence Day. The committee considered additional proposals, including the notion that all money bills originate in the first branch, and that representation in the first branch and direct taxes be based on population. The result was the Great Compromise, also called the Connecticut Compromise because of the central role of that state, particularly Roger Sherman, in forging the agreement. The convention voted on the compromise on July 16, and the proposal passed by one vote. The new constitution would have a bicameral legislature where representation in the first chamber, the House of Representatives, would be based on population, and representation in the second chamber, the Senate, would be based on equality of states. The compromise saved the convention, allowing the process of redrafting the founding charter to come to a successful conclusion. From the perspective of self-interest, the compromise was an ironic one, because the rivalry between big states and small states never appeared. Of much greater importance historically was the sectional rivalry between the North and the South. From a constitutional perspective, however, the compromise was essential in helping to define the nature of the new republic. Delegates from large states were successful in making the case for a stronger national government— a radical transformation from the system under the Articles of Confederation. Their case was strengthened by the inclusion in the new U.S. Constitution of such passages as the supremacy clause (Article VI), the full faith and credit clause (Article IV), and the privileges and immunities clause (Article IV). At
the same time, however, delegates from small states were successful in carving out space in the new government for the states as such—a formal position from which the principles of state sovereignty and federalism could be defended. The Senate today remains an institution that embodies the principle of a federal republic made up of states equal to one another in status. That principle continues to play an important role in the presidential election process, since Senate membership is mirrored in the electoral college, and in the constitutional amendment process, where states matter regardless of size. The Great Compromise also remains a model example of practical politics at work. In Federalist 62, Madison discusses the structure of the Senate, and it is very clear that he remains generally opposed to this constitutional feature. Nevertheless, he acknowledges that equal representation is the result not of abstract and idealistic theory, but of political compromise necessary for the survival of the political system. The mutual understanding by all parties that a decision had to be made for the convention to continue toward success made compromise possible. The result was not perfection for either camp, but the common goal of forming “a more perfect union” paved the way for an acceptable resolution. Further Reading Collier, Christopher, and James Lincoln Collier. Decision in Philadelphia: The Constitutional Convention of 1787. New York: Random House, 1986; Hamilton, Alexander, James Madison, and John Jay. The Federalist Papers, nos. 15, 62, edited by Clinton Rossiter. New York: New American Library, 1961; Madison, James. Notes of Debates in the Federal Convention of 1787. Athens: Ohio University Press, 1966; Storing, Herbert. “The Constitutional Convention: Toward a More Perfect Union,” in Toward a More Perfect Union: Writings of Herbert J. Storing, edited by Joseph M. Bessette. Washington, D.C.: AEI Press, 1995. —David A. Crockett
habeas corpus Habeas corpus, the Latin phrase for “you shall have the body,” has been historically celebrated as the
principal safeguard of freedom in Anglo-American jurisprudence. The “Great Writ of Liberty” plumbs the depths of English history. It is older than Magna Carta, and it has been on the frontlines of battles that advanced liberty on both sides of the Atlantic. The writ of habeas corpus constitutes a unique judicial remedy for preserving liberty against arbitrary and unwarranted detention by a government. Simply put, the writ is a court order that raises the question of legality of a person’s restraint and the justification for the detention. In essence, the government is required to deliver a prisoner—the body—before a court and provide legal justification for his imprisonment. In theory, a writ of habeas corpus can be obtained by a prisoner or by someone on his behalf. As such, the writ is a procedural device that initiates a judicial inquiry. It is the demand imposed upon the government to produce a legal rationale for the imprisonment of an individual that distinguishes its capacity to maintain liberty in the nation. Justice Felix Frankfurter observed, in Brown v. Allen (1953): “Its history and function in our legal system and the unavailability of the writ in totalitarian societies are naturally enough regarded as one of the decisively differentiating factors between our democracy and totalitarian governments.” Habeas corpus is located in Article I, section 9, of the U.S. Constitution, which provides: “The privilege of the Writ of Habeas Corpus shall not be suspended, unless when in Cases of Rebellion or Invasion the public safety may require it.” The particular language, and stipulated exception, is made comprehensible by an understanding of the role that habeas corpus played in England. As with other emerging rights and liberties in 17th-century England, the writ of habeas corpus frequently clashed with the royal prerogative through the efforts of Sir Edward Coke, lord chief justice of Common Pleas, among others. The writ of habeas corpus came to be viewed as a liberty document in much the same way that Magna Carta had come to hold that special status. Still, the writ could be overwhelmed by executive power. Thus, the effectiveness of the writ was questionable, particularly if governmental officials followed appropriate procedures in the course of incarceration. Darnel’s case (1627) proved pivotal in the development of habeas corpus law. Darnel’s case, also known as the Five Knights case, featured Thomas Darnel, one of five knights
who refused to comply with the forced loan that King Charles I attempted to exact after dissolving Parliament, a step that left him without any means of raising taxes. The knights had been imprisoned without trial by order of Charles after they had refused to contribute to the forced loan. While in prison, Darnel petitioned the King’s Bench for a writ of habeas corpus. The court issued the writ, but on its return remanded the prisoners to jail. The return stated that Darnel had been committed “by special command of his majesty.” The king’s attorney general had argued that the return was appropriate since the case involved matters of state, and thus the king could imprison anyone on his own authority and without explanation. Darnel’s counsel objected. The return had not offered any reason for his client’s imprisonment, and it conflicted with Magna Carta, which prohibited incarceration “unless by the lawful judgment of his peers or by the law of the land.” The very concept of freedom from arbitrary executive imprisonment was at stake. Darnel’s attorney argued that if the court considered the return to be valid, then the king might incarcerate a man forever and “by law there can be no remedy for the subject.” He added: “The Writ of Habeas Corpus is the only means the subject hath to obtain his liberty, and the end of the Writ is to return the cause of the imprisonment, that it may be examined in this Court, whether the parties ought to be discharged or not; but that cannot be done upon this return, for the cause of the imprisonment of this gentleman at first is so far from appearing particularly by it, that there is no cause at all expressed in it.” The attorney general’s invocation of the concept of “matter of state” carried the day. The King’s Bench agreed with him that in such a “matter” nobody could question the king’s judgment. In effect, the laws fall silent when national security is involved. Accordingly, Darnel and his fellow knights were remanded to prison. National crisis, whether framed as a “matter of state” or simply characterized as national security, explains the lone textual exception in the U.S. Constitution’s “guarantee” of habeas corpus. The writ may be suspended “when in cases of rebellion or invasion the public safety may require it.” The Civil War provided an occasion for the suspension of the writ.
The ruling by the King’s Bench in Darnel’s case compelled three House of Commons resolutions, as well as the Petition of Right (1628), which the king approved and which declared the availability of habeas corpus to examine the cause of a detention. If the cause was not legitimate, the court would order the release of the prisoner. However, if “matter of state” were invoked, the writ would be canceled. In subsequent years, however, kings found various ways of defying writs of habeas corpus, despite legislative efforts to provide greater protection. It was not until Parliament passed the Habeas Corpus Act of 1679 that the writ became a matter of routine procedure. “In America,” the distinguished historian Leonard Levy observed, “Little is heard about the writ until the later seventeenth century, perhaps because legislative, executive, and judicial powers were scarcely distinguished, and lawyers, even law books, were scarce. The early colonies, moreover, did not rely on imprisonment; they preferred the whipping post, the stocks, and fines. Thus, the writ of habeas corpus had no history for much of the 1600s in America.” The writ was gradually introduced in the colonies, beginning in 1664 in New York. By the time of the Revolution, it had become familiar throughout the country, if not always honored or invoked. The evolving role of habeas corpus in the postrevolutionary period reflected increased familiarity and usage. By 1791, all of the states embraced the use of the writ in practice, if not necessarily in their statutes or constitutions, since they adhered to English common law. In the Constitutional Convention, several of the framers expressed doubt that a suspension of the writ would ever be necessary. Others flirted with the idea of imposing a limit on the length of time during which the writ might be suspended. However, the framers adopted Gouverneur Morris’s motion to provide for suspension of the writ “where in cases of Rebellion or invasion the public safety may require it.” The Committee of Style located the power in Article I of the Constitution, which enumerates congressional powers. Until the Civil War, it had been assumed that the power to suspend the writ belonged to Congress. President Abraham Lincoln had only a single precedent on which to rely when, in 1861, he suspended it and imposed martial law in various places throughout the
nation. General Andrew Jackson imposed martial law in New Orleans and defied a writ of habeas corpus in 1815. Lincoln’s claim of authority to suspend the writ was rejected by Chief Justice Roger Taney, riding circuit in Ex parte Merryman (1861), who held that the president has no constitutional authority to suspend the writ and ordered the prisoner, John Merryman, to be freed. In 1863, Congress passed a Habeas Corpus Act which empowered the president to suspend the writ, and also retroactively authorized Lincoln’s previous suspensions. In 1866, in Ex parte Milligan, the U.S. Supreme Court condemned the use of military trials in areas in which courts were open. Historically, the authority to suspend the writ of habeas corpus has been rarely invoked. In addition to the suspension during the Civil War, the writ was suspended in 1871 to combat the Ku Klux Klan, in 1905 in the Philippines, and in Hawaii during World War II. The few instances in which the writ has been suspended are a reflection of the general stability in the United States, as well as the commitment to the principle that courts should remain open and possess the authority to examine the cause of imprisonment. It remains true, as Zechariah Chafee, Jr., has written, that the writ of habeas corpus is “the most important human rights provision in the Constitution.” Without it, other liberties will fall. Whether circumstances will arise that may justify a suspension of the writ is open to debate and speculation. In 2006, Congress enacted the Military Commissions Act, which banned 480 detainees held at the American naval base at Guantánamo Bay, and other enemy combatants, from filing petitions for writs of habeas corpus. Upon its adoption, the administration of George W. Bush moved swiftly to file court papers asking for the dismissal of all petitions for habeas corpus sought by detainees at Guantánamo. In 2008, the Supreme Court ruled 5-4 in Boumediene v. Bush that detainees at Guantánamo have the right to seek habeas corpus in U.S. federal court. Further Reading Chafee, Zechariah, Jr. How Human Rights Got into the Constitution. Boston: Boston University Press, 1952; Duker, William F. A Constitutional History of Habeas Corpus. Westport, Conn.: Greenwood Press, 1980; Levy, Leonard. Origins of the Bill of Rights. New Haven, Conn.: Yale University Press, 1999. —David Gray Adler
implied powers
(elastic clause)
The conceptual and analytical problems that arise in an examination of implied powers stem, manifestly, from the fact that they are not enumerated in the U.S. Constitution. As a consequence, the discussion of their origin, nature, and parameters encounters various difficulties. The very concept of implied powers itself is somewhat susceptible to different meanings and usages, but befitting its name, it is assumed to exist. For purposes of this discussion, implied powers are to be distinguished from inherent and extra constitutional powers. Civics courses commonly teach that the U.S. Constitution consists only of enumerated powers specially allocated to one of the three branches of government. This lesson is backed by some historical materials and United States Supreme Court rulings, even though they are of limited value. In 1907, in Kansas v. Colorado, for example, Justice David Brewer stated: “[T]he proposition that there are legislative powers [not] expressed in this grant of powers, is in direct conflict with the doctrine that this is a government of enumerated Powers.” But this crabbed and outdated teaching ignores the reality of the Constitution, and the system that is erected upon it. The Constitution, Chief Justice John Marshall pointed out in McCulloch v. Maryland (1819), is not a detailed legal code which spells out every power and the means by which it is to be executed, since such a document “could scarcely be embraced by the human mind.” There is nothing in it, he added, that “excludes incidental or implied powers; and which requires that everything granted shall be expressly and minutely described.” Whether enumerated or implied, constitutional powers must be conferred. That is precisely the point. All constitutional powers must be derived from the document, ultimately tethered to the document, whether expressed or not. Moreover, given the impossibility of detailing all powers, some doctrine of implied powers is functionally or instrumentally indispensable to the effectiveness of the political system. As James Madison explained in Federalist 44, the proposed Constitution would have been a dead letter if its framers had embraced Article II of the Articles of Confederation, and provided that the government possessed only those powers expressly delegated to it. Instead, delegates to the Constitu-
tional Convention adopted a more practical approach, as Madison observed in Federalist 44: “No axiom is more clearly established in law, or in reason, than that whenever the end is required, the means are authorized; whenever a general power to do a thing is given, every particular power necessary for doing it is included.” If each branch enjoys powers that are incidental to its enumerated powers, it is to be expected that controversy may surround the very claim of the existence, as well as the scope, of the power asserted. There is, in such a controversy, nothing different than the controversies or quarrels that arise out of the exercise of an enumerated power. As a consequence, there is no weight to the claim that implied powers are somehow less important or legitimate than express powers. What remains, of course, is the need to assess the legitimacy of the “means” invoked to execute enumerated powers. This issue applies to each of the three branches of government, and its application frequently engenders controversy and, occasionally, lawsuits. It may be supposed, however, that the need for order in the courtroom, and the incidental power of judges to exercise the contempt power, will not be seen as objectionable. And the nation, presumably, has moved beyond the objections and protests of the exercise of judicial review as incidental to, or part and parcel of, the duty of the Court, as charged by Chief Justice Marshall in Marbury v. Madison, “to say what the law is.” The rants, rather, are occasioned by the claims of implied powers adduced by the legislative branch and the executive branch. Executive claims to implied powers, particularly with respect to executive privilege, removal of officials from office, war making, and executive agreements, among others, have inspired great and continuing controversies. Resolution of these disputes, like others involving the concept of implied powers, will require consensus on the source and scope of these claimed powers. If these powers are, in some way, incidental to enumerated powers, what degree of proximity is required? Must the test be functional? For example, is it necessary to demonstrate the instrumental importance of executive privilege to the presidential performance of an enumerated power or duty? Is claimed presidential power to make executive agreements with other nations instrumental to the exercise of an enumerated power, or duty?
If so, how far reaching is the power? May it be extended to substitute for the treaty power? If the premise of the existence of an implied power is accepted, how are its parameters to be determined? As an answer, Alexander Hamilton observed in 1793 that the president’s implied powers may not encroach upon powers granted to another branch of government. If a functional test is applied, it seems reasonable that a president, pursuant to his constitutional duty under the Take Care Clause to ensure that the laws are “faithfully executed” may remove officials who obstruct his dutiful efforts. But what of the removal of other officials, whose duties do not involve the execution of the laws? On the foreign affairs front, where the Constitution conjoins the president and the Senate as the treaty making power, but where it also says nothing of the authority to terminate treaties, it is reasonable to ask where the implied power to terminate treaties is vested? Organic lacunae provide a compelling case for implied powers. While the existence of implied powers in the judicial and executive branches is assumed, Congress, by the text of the Constitution, is vested with implied powers. The necessary and proper clause, located in Article I, section 8, paragraph 18, states: “The Congress shall have Power to make all Laws which shall be necessary and proper for carrying into Execution the foregoing Powers, and all other Powers vested by this Constitution in the Government of the United States, or in any Department or Officer thereof.” This clause authorizes Congress to pass laws to exercise its own constitutional powers as well as those granted to the executive and judiciary. The breadth of authority vested in Congress by the “Sweeping Clause,” affirms its status as “first among equals,” with broad authority to shape and structure the judicial and executive branches. Consequently, Congress may, as the courts have held, conduct investigations and perform oversight responsibilities as a means of exercising its lawmaking and appropriations powers; pass laws establishing the term of the Supreme Court, and how many justices will sit on the Court; and provide legislation establishing executive offices and determining the powers, responsibilities and wages of officials in regulatory agencies. It may also pass legislation permitting the president to reorganize the executive branch.
In what is perhaps the most famous exercise of power under the necessary and proper clause, Congress passed legislation to create a national bank as a means of carrying out its power to “lay and collect taxes.” In McCulloch v. Maryland (1819), Chief Justice John Marshall upheld the legislation as a “useful and convenient” means of effectuating the enumerated powers vested in Congress. Marshall’s “liberal” or broad interpretation of the necessary and proper clause set the historical tone for the Court, which typically has sustained legislation under the clause if it is “reasonable.” Of course, a narrow construction of the provision would have placed Congress in a straitjacket and served to defeat the essential purpose of the clause and return the nation, for all intents and purposes, to the ineffectiveness of the Articles of Confederation, under which Congress possessed only those powers expressly delegated to it. The creation of the necessary and proper clause was enmeshed in controversy. Opponents of the provision feared that it would provide a means for Congress to enlarge its own powers beyond those granted by the Constitution (a point vigorously denied by the Supreme Court in McCulloch) and that it would thereby pose a risk to state powers and the liberty of the people. The creation of the Bill of Rights was, in part, a reaction to that threat. There is cause to wonder why the clause is referred to as a source of implied powers when the provision expressly states that Congress is authorized to pass laws necessary and proper to execute its constitutional powers. In addition, the Court’s development of the idea that each department possesses implied powers seems redundant in view of the fact that the necessary and proper clause specifically empowers Congress with authority to provide legislation for the purpose of facilitating the exercise of powers granted to the executive and the judiciary. There is potential for conflict between a congressional exercise of power under the necessary and proper clause, and the claim to incidental powers by either of the other branches. It appears that no case yet has come before the Court that raises such an issue. It is possible, however, to believe that Congress may pass legislation granting and delimiting an executive privilege that is narrower than the scope of executive privilege envisioned by the president. In
that event the Court might be required to settle the dispute by determining which of the claims is entitled to primary authority. Historically and constitutionally speaking, there is little doubt that the claim of implied powers has played an important, if not necessarily popular, role in American politics and law. While some claims to implied powers have led to abuses, it is fair to say, as well, that much good has flowed from their use. Further Reading Corwin, Edward S. The President: Office and Powers, 1787–1984. 5th rev. ed. Edited by Randall Bland, Theodore T. Hinson, and Jack W. Peltason. New York: New York University Press, 1984; Gunther, Gerald, ed. John Marshall’s Defense of McCulloch v. Maryland. Stanford, Calif.: Stanford University Press, 1969; Van Alstyne, William W. “The Role of Congress in Determining Incidental Powers of the President and of the Federal Courts: A Comment on the Horizontal Effect of ‘The Sweeping Clause.’ ” 36 Ohio St. L. J. 788 (1975). —David Gray Adler
Iroquois Confederacy An important but oft-neglected contribution to the invention of the U.S. Constitution can be found in the impact of the Iroquois Confederacy on the work of its framers. The Iroquois Confederation, or Confederacy, was a political coalition of six Native American tribes centered in the upstate New York area that acted in union in war, trade, peace, and other areas. In effect, this confederation made one nation out of six different tribes. Their confederation was studied by many of the influential framers of the American republic, and it was in some ways a model for the U.S. Constitution as drafted in 1787. The five nations that originally constituted the Iroquois Confederation were the Mohawk, Oneida, Cayuga, Seneca, and Onondaga tribes. Later, in 1722, the Tuscaroras also joined the confederation, which then became known to many as the Six Nations Confederacy. With union, these six tribes became known also as the haudenosaunee, or “people of the longhouse.” When the framers of the U.S. Constitution met in Philadelphia in 1787, they looked to Europe and saw hereditary monarchies. When they looked up the
road at the Iroquois Confederation, they saw a functioning democratic system with separation of powers and checks and balances. The Europe the framers left behind was governed by monarchs, whereas the Iroquois Confederation was governed by “the people.” The Iroquois Confederation was in existence long before the European settlers arrived on the shores of the eastern seaboard. The exact date of its creation remains uncertain, but a conservative estimate places it some time in the late 15th century. The creation of the union is steeped in myth and legend. The Iroquois oral tradition traces the roots of the Confederacy to the Great Law (Gayaneshakgowa), which was said to have been handed down by the spirit Deganawida. The Great Law is very much like a constitution and spells out rules, rights, duties, and responsibilities; it established a separation of powers system among the various tribes in the confederacy, with checks and balances and vetoes, and under it women held significant political rights. A great deal has been written about the European intellectual and political roots of the American system of government, but little has been written regarding the role of Native American tribes and nations in the development of the Constitution. This oversight neglects the important contribution made by Native Americans to the invention of the American republic. The framers of the Constitution drew on their knowledge of the Iroquois Confederacy for guidance in the development of a separation of powers system, as well as selected aspects of the new Constitution they were designing. The framers also looked to Greek democracy and Roman republican forms of government for guidance and inspiration. Ironically, nearly 2,000 years after the decline of Athens and Rome, Europeans practiced decidedly antidemocratic forms of governing. Yet the Iroquois had a very sophisticated form of representative democracy already in place when the framers contemplated how to animate the ideas of the Revolution into a new constitutional government. Indian democracies were working democracies that colonists could observe and emulate, if they so chose. Governed by Ne Gayaneshagowa (the Great Binding Law), the Iroquois League’s higher law or “constitution,” these nations/tribes already had a con-
stitutional system of government prior to the founding of the new government of the United States. The basis of governmental legitimacy came from the community and flowed upward to the chiefs and council, and was grounded in a concept of natural rights, consensus-oriented decision making, consent instead of coercion, a system of checks and balances, open public debate, discussion and deliberation, and the protection of individual rights and liberties (although the individual was secondary, and the tribe primary). In all important decisions, the Great Law required that chiefs of the league submit the issue to the entire tribe for approval. The Great Law also contained provisions for impeachment and removal of sachems (chiefs), and upon the death of a chief, the title reverted to the women of the clan, whose task it was to determine who should assume the title. Their nomination of a new chief then went to the entire clan for approval, then to a governing council for final approval. The Ne Gayaneshagowa describes the leadership selection process as follows: When a lordship title becomes vacant through death or other cause, the Royaneh women of the clan in which the title is hereditary shall hold a council and shall choose one from among their sons to fill the office made vacant. Such a candidate shall not be the father of any confederate lord. If the choice is unanimous the name is referred to the men relatives of the clan. If they should disapprove, it shall be their duty to select a candidate from among their number. If the men and the women are unable to decide which of the two candidates shall be named, then the matter shall be referred to the confederate lords in the clan. They shall decide which candidate shall be named. If the men and women agree to a candidate his name shall be referred to the sister clans for confirmation. If the sister clans confirm the choice, they shall then refer their action to their confederate lords, who shall ratify the choice and present it to their cousin lords, and if the cousin lords confirm the name then the candidate shall be installed by the proper ceremony for the conferring of lordship titles. Women thus played a surprisingly important role in leadership selection as well as in the daily life of the tribe. The lineal descent of the people of the five nations ran in the female line. Women were to be considered the progenitors of the nation, since they owned
the land and the soil. Men and women followed the status of the mother. In most native cultures, religion and spirituality play a highly significant role and are closely linked to politics and government. The concept of separation of church and state was inconceivable to the native communities, as they made little distinction between the spiritual and the political: The one fed into and was nourished by the other. While there were as many distinct religions as there were independent tribes, almost all Native American tribes shared broad common agreement on key religious fundamentals. Belief was usually linked to land, environment, and a founding myth. The supernatural played a significant role in shaping their beliefs, the keys of which include: a) belief in a universal force; b) the social imposition of taboos; c) the force of spirits on everyday life; d) visions as a guide to behavior; e) the significant role of the shaman as religious leader; f) communal ceremonies; and g) belief in an afterlife. Shamans held considerable authority over religious life and the interpretation of visions. As such they exerted a great deal of influence over the daily life of the tribe. But rarely did religious leadership overtly cross over into secular power. The shaman, or medicine man, was a functional area leader, widely respected and followed within his sphere of influence, but only marginally influential in nonspiritual concerns. Native spirituality and government aimed primarily at the development of harmony between humans, animals, the earth, and the spirit world. This holistic interconnectedness also embraced a belief in the equal dignity of all these elements. Humans were not thought to be superior to nature, but rather an equal part of a balanced universe. Harmony with nature led to community in politics. Individual rights gave way to community interests, and all rights came with responsibilities. Most Native American nations had not one but several chiefs. Determined by the consent of the people and based on a functional view of power, tribes had different chiefs for different tasks: one chief for war, another for diplomacy, another for planting. Unlike in European nations of the time, rights of birth were generally inconsequential. Chiefs were generally selected for ability in a given task. They were expected to devote themselves to the tribe, and govern by persuasion, not command.
Many Native American governments were democratic, decentralized, and egalitarian. Leaders lacked coercive authority, and their role depended upon maintaining the support of the tribe. Consensus, not individual rights, predominated; there was no inherent right to leadership. While leadership often fell to elders, even they governed only with the support of the community. When support was withdrawn, the leader fell from power. Most leaders were men, but on occasion, a woman would assume a chief’s position. While chiefs exercised power or influence in different ways, depending on the tribe and circumstances, several characteristics apply to almost all tribes. Chiefs were generally expected to practice self-denial, personify the traditions of the tribe, serve the community, practice persuasion rather than coercion, develop consensus, work collaboratively, and link spiritual life to governing. There were, of course, some regional differences in leadership and government. In the Southwest, especially among the Pueblo tribes, the chief was usually a religious leader. In the region from what is today Northern California up the coast to southern Alaska, wealthy ruling families governed in most tribes. They were expected to throw a huge religious celebration, part feast, part gift-giving ceremony, called a potlatch. In parts of the Great Lakes region, leadership was reserved for clans that traced their ancestry to the tribes’ common spiritual forbears, usually an animal or mythical beast. In the Great Plains region, military power was greatly honored. In the western interior, tribes tended to be more nomadic and often had several chiefs. Significant evidence exists to support the view that Native American forms of government, most especially the Iroquois Confederacy, had an impact on the views of the framers of the U.S. Constitution. While the Native American legacy is now disputed in some academic circles, many historians and anthropologists argue that, indeed, the framers drew a good deal from the Native peoples. Many of the framers were familiar with the styles of government practiced by the Native Americans. Benjamin Franklin was well versed in Native political traditions. Clearly the Native American nations/tribes had some impact on the framers of the U.S. Constitution. However, precisely how much influence they had is difficult to determine.
No scholarly consensus exists regarding the impact the Iroquois Confederacy may have had on the writing of the U.S. Constitution. It is clear that many of the framers, among them Benjamin Franklin, James Madison, Thomas Jefferson, and John Adams, were familiar with the intricacies of the Iroquois Confederacy and its Great Law, and many visited and observed the workings of the confederacy firsthand. It is also clear that a model of separation of powers and checks and balances was an essential part of the Great Law and found its way into the U.S. Constitution. But some scholars are reluctant to link these ideas to the writing of the Constitution itself. What is clear is that there were deep connections, that the framers drew much from the Confederacy, and that these contributions have not been fully recognized or respected in history. Further Reading Fenton, William N. The Great Law and the Longhouse: A Political History of the Iroquois Confederacy. Norman: University of Oklahoma Press, 1998; Johansen, Bruce Elliott, and Barbara Alice Mann, eds. Encyclopedia of the Haudenosaunee (Iroquois Confederacy). Westport, Conn.: Greenwood Press, 2000; Richter, Daniel K., and James H. Merrell. Beyond the Covenant Chain: The Iroquois and Their Neighbors in Indian North America, 1600–1800. Syracuse, N.Y.: Syracuse University Press, 1987; Schoolcraft, Henry Rowe. Notes on the Iroquois, or, Contributions to American History, Antiquities, and General Ethnology. East Lansing: Michigan State University Press, 2002. —Michael A. Genovese
Locke, John (1632–1704) English political philosopher
John Locke was a preeminent British philosopher and political theorist who exerted a substantial influence on the character of American government. Both Thomas Jefferson and James Madison have given credit to his works; those of primary importance in shaping American government include Locke’s A Letter Concerning Toleration (1689) and his Two Treatises of Government (1689). Locke’s thought has influenced America’s political ideology most markedly in the areas of the just acquisition of political author-
ity, fair entitlement to private property, and the free exercise of religion. John Locke was born on August 29, 1632, to Puritan parents. In 1647, Locke left his home in Somerset, England, to receive an education at the Westminster School in London. After completing his studies at Westminster, Locke gained admission to Christ Church, Oxford, in 1652. At Oxford, Locke received a classical education. Locke found the curriculum at Oxford to lag behind the progressive trends of the Enlightenment; however, he managed to receive a Bachelor of Arts degree in 1656. After receiving his degree, Locke was elected a senior student at Oxford. While pursuing advanced study, he rejected the customary route to a clerical career and instead sought training in the practice of medicine while holding various academic posts. While at Oxford in 1666, Locke was introduced to Lord Anthony Ashley Cooper, later the first earl of Shaftesbury. This encounter would later shape the course of Locke’s life. Shaftesbury was a founder of the Whig movement and a dominant figure in British politics. Upon meeting Locke, Shaftesbury was so impressed with the scholar that he invited him to London to serve as his personal physician. Locke agreed and, joining the Shaftesbury household, found himself immersed in the highly charged and often volatile revolutionary era of British politics. During this time, conflicts between the Crown and Parliament ran high and often overlapped with clashes among Protestants, Anglicans, and Catholics. Locke’s role in the Shaftesbury household soon extended beyond medical practice, as Shaftesbury appointed Locke to serve in various minor political positions. When Shaftesbury became lord chancellor in 1672, he made Locke secretary of the Board of Trade. Locke soon also became Secretary to the Lords Proprietors of the Carolinas. Under this title, Locke was involved in drafting a constitution for the Carolina Colony. In 1674, Shaftesbury left political office, allowing Locke to retreat to France, where he spent much of his time writing and traveling. While Locke was in France, Shaftesbury was imprisoned in the Tower of London on suspicion of political conspiracy. In 1679, Shaftesbury’s fortunes turned, and he reclaimed another short period in office. Locke returned to
England to assist his friend in his political duties. At the time, Lord Shaftesbury was a participant in the Exclusion Crisis, an attempt to secure the exclusion of James, duke of York (the future King James II), from the succession to the throne. The Whig Party, which was overwhelmingly Protestant, feared James because of his Catholicism. Due to his involvement in the exclusion plot, Shaftesbury was later tried on charges of treason. Shaftesbury was acquitted by a London grand jury; however, he saw it in his best interest to escape to Holland pending future accusations. He left for the Netherlands in November of 1682. Locke remained behind in London, though not for long. Due to his close relationship with Shaftesbury, Locke was suspected of involvement in the Rye House Plot, a Whig revolutionary plot against Charles II. There was little evidence of his participation. However, Locke followed Shaftesbury’s lead and fled to Holland in 1683. In 1685, while Locke was living in exile, Charles II died and was succeeded by his brother, James II, with the support of a Tory majority in Parliament. In 1688, with the birth of a son to James, it appeared that the throne would pass to a Catholic heir, not to one of his Protestant daughters. These circumstances set the stage for the English Revolution. The Whigs, using the political system to interrupt the divine line of kingly succession, helped give the throne to James’s daughter Mary, and her husband, William of Orange. In 1688, William and Mary overthrew James II. The event, known as the Glorious Revolution, marked the point at which the balance of power in the English government passed from the king to the Parliament. After the revolution, Locke left Holland and returned to England in the party escorting the princess of Orange, who was to be crowned Queen Mary II. Throughout his life, Locke studied and wrote on philosophical, scientific, and political topics. However, Locke waited for favorable political conditions before publishing much of his work. As a result, several of Locke’s publications appeared in quick succession upon his return from exile. A Letter Concerning Toleration, a work examining the relationship between religion and government, was published in 1689. Also appearing in 1689 was Locke’s major contribution to the understanding of politics, his Two Treatises of Government. Both the Letter Concerning Toleration and the Two Treatises were published
anonymously, given their controversial subject matter. In 1690, Locke published another highly influential work, his Essay Concerning Human Understanding, which established Locke’s claim as a founder of British empiricism, a body of philosophy claiming that all knowledge is based on experience. Under the new king, William III, Locke was once again appointed to the Board of Trade in 1696. He served on the board for four years until resigning due to illness in 1700. Locke died at Oates on October 28, 1704. John Locke’s political philosophy makes use of the theory of natural rights. Natural rights theory looks back to the state of nature, a prepolitical condition, and uses the rights claimed to exist prior to enforcement by government as a measure of legitimate political authority. The notion of natural rights was common to 17th- and 18th-century political philosophy. Locke’s political theory has most often been compared to the natural rights philosophy of Thomas Hobbes. Hobbes was the first philosopher who made natural rights the source of his political theory. Although Locke never mentioned Hobbes by name, many insist that Locke’s natural rights theory is a response to Hobbes. Unlike Thomas Hobbes, Locke believed that human nature is characterized by reason and tolerance. Observing that every individual shares in the faculty of reason, Locke claimed that the self-evident laws of nature bind the actions of every human agent. Locke’s First Treatise of Government aims at countering English political theorist Sir Robert Filmer. Filmer claimed that the authority of kings is secured by divine right. Locke objected to Filmer, stating that the presence of monarchy does not legitimize rule. Moreover, according to Locke, human beings are not naturally subject to a king; rather, they are naturally free. In his Second Treatise, Locke erects his positive account of government. In this treatise, Locke’s formulation of the natural state of human beings is made explicit. Locke’s view of the state of nature is closely bound up with his religious beliefs. According to Locke, human beings are essentially God’s property in that they are His creations. Under God, all human beings are equal and all have the liberty to act without interference from one another. However, although Locke claimed that human beings are natu-
rally free, he also stated, “liberty is not license”; there are restrictions to innate freedom. According to the law of nature, human beings are not free to deprive others of their natural rights to life, health, liberty or possessions. Locke’s view of the natural right to property is especially significant to his political theory. Originally, Locke supposed that the earth and everything on it belonged to all human beings in common and all had the same right to make use of whatever they could find. The only exception to this rule, however, is that each individual has exclusive rights to his or her own body and its actions. This right to selfhood makes the acquisition of private property possible from that which is held in common. Property becomes private whenever one employs one’s effort to improve the natural world and thus appropriates the goods of nature as an extension of his or her own person. Yet, despite the claim that human beings have the right to acquire, natural law restricts boundless accumulation. According to natural law, man must not infringe on the natural rights of others. Therefore, human beings have the freedom to acquire as long as others are left no worse off and as long as that which they acquire is not left to waste. Although Locke’s observations are characterized by reason and tolerance, he also observes that problems enter into the state of nature, which make the formation of political society desirable. Despite man’s natural reasonableness, natural law is not always obeyed. When noncompliance occurs, men have the right to enforce the natural law. Locke does not exclude the possibility that one may intervene in cases where one’s own interests are not at stake and thereby defend another; however, in a prepolitical state, victims are often left to enforce the natural law by their own resources. This causes most difficulties in the state of nature. If justice is to be achieved in enforcing the natural law, the sentencing of perpetrators must be proportionate to the crime committed. Yet, when victims are left to enforce their own cases, impartiality, a fundamental requirement of just arbitration, is difficult or impossible to achieve, for in cases where victims are made to sentence criminals, perpetrators are often punished much more harshly than fair retribution would allow. Political authority is therefore needed to correct the difficulties met in the state of nature, by securing an impartial judge to
ensure the just enforcement of the law of nature and to protect natural rights. Once the need for a legislative power is realized, unanimous consent must establish the legitimacy of political authority. This is achieved by the formation of a social contract. The aim of a social contract is to provide for social order and the common good by setting laws over the acquisition, preservation, and transfer of property. The political community formed by social contract, however, is not yet a government. As Locke points out, it would be enormously difficult to achieve unanimous consent with respect to the promulgation of particular laws. So, in practice, Locke supposed that the will expressed by the majority must be accepted as determinative over the conduct of each individual citizen who consents to enter into a political community. Locke expected that any form of government could be legitimate as long as it secures the rights of life, liberty and property. However, there was reason for Locke to favor a system of government possessing separation of powers. Observing the difficulties met in the state of nature, Locke recognized the danger of leaving unchecked power in the hands of one individual. Locke thought that government’s power was best limited by dividing government up into branches, with each branch having only as much power as is needed for its proper function. As any civil government depends on the consent of those who are governed, consent may be withdrawn at any time by rebellion. In the case that a political body rules without consent or infringes upon man’s natural rights, subjects have the right to rebel. In cases where it is seen that only rebellion may restore natural rights, rebellion is not only one’s liberty, but also one’s fundamental duty to uphold the rights of nature. Besides his natural rights theory, Locke also contributed to political ideology by his concern for religious toleration. As explained in his A Letter Concerning Toleration, religious matters lie outside of the legitimate concern of civil government. As Locke conceived it, no individual possesses rights over the soul of another. Therefore, individuals cannot delegate to a political body the authority to invest in the spiritual welfare of its citizens. John Locke may be credited as a major source of political ideology in the United States. Locke’s politi-
cal philosophy provided significant conceptual support to the cause of the American Revolution. His political theory has also exerted a great influence on the formation and continued assessment of the U.S. Constitution. Locke’s advocacy of faith in reason rather than blind obedience to authority resonated with many early American thinkers. In the revolutionary era in America, John Locke was a major source of political and religious theory, as evidenced by the widespread circulation of revolutionary pamphlets crediting Locke, as well as by sermons espousing Lockean principles. Locke’s advocacy of the right to rebellion inspired American revolutionaries and provided them firm justification for revolt as a legitimate recourse against the tyranny of the British Crown. Under Lockean theory, it was seen that the British exercised illegitimate political authority maintained by force and violence rather than by terms agreed upon by social contract. Locke’s conception of private property supported a fundamental grievance against British rule, often expressed in the revolutionary cry, “no taxation without representation.” According to Locke, human persons have a prepolitical right to property. Therefore a government without claim to consent may not tax the private assets of individuals. The Declaration of Independence is notably Lockean. Some have gone so far as to claim that Thomas Jefferson plagiarized Locke’s Second Treatise in its drafting. The preamble states, “We hold these truths to be self-evident, that all men are created equal, that they are endowed, by their Creator, with certain unalienable Rights, that among these are Life, Liberty, and the pursuit of Happiness.” Notably, Locke’s claim to the natural right to private property is replaced with a broader freedom, the pursuit of happiness; nonetheless, much of Locke’s natural rights phrasing, and many of his concepts, remain. The Declaration also lays hold of Locke’s concept of the social contract. The bulk of the Declaration, a list of grievances against King George III, charges the king with rupturing the social contract, thereby invalidating any legitimate right to political rule. John Locke’s influence on American constitutional thought was and remains significant. Locke’s political theory can be witnessed in key elements of the U.S.
Constitution. The document claims that all political power is vested in and derives from the “People.” Again, according to Lockean principles, government is only legitimate by the consent of the governed under the social contract. The Constitution goes on to provide for the separation of powers and the protection of civil liberties, including the protection of private property and freedom from encroachment on the exercise of religion. Today, Locke’s political philosophy continues to be referenced as a source for interpretation of the U.S. Constitution in academic discussions and in United States Supreme Court review. Further Reading Locke, John. A Letter Concerning Toleration in Focus. Edited by John Horton and Susan Mendus. New York: Routledge, 1991; Locke, John. Two Treatises of Government. Edited by Peter Laslett. Cambridge: Cambridge University Press, 1987. —Kristina L. Rioux
Magna Carta Magna Carta has served for centuries as a talismanic symbol of freedom, limited government, and the rule of law. Its legacy may be viewed in constitutional and legal documents that have embodied the magisterial concepts of the “law of the land” and “due process” and subjected all authority to the law. The sweeping, inspirational potential of Magna Carta could not have been glimpsed in its creation and origins. The Great Charter was forced upon a reluctant King John in the meadows of Runnymede in June of 1215 by irate and rebellious barons. Sir Edward Coke, the great 17th-century English jurist and champion of Parliament and the common law, described four ostensible reasons for the charter: honor of God, health of King John’s spirit and soul, the exaltation of the church, and the improvement of the kingdom. Yet, the simple and clear reason for John’s accession to the demands of the barons lay in his need for their loyalty. The Great Charter represented an agreement between the king and the barons; in return for their homage and fealty, the king would cease his abuse of power and confirm their liberties. King John had ascended to the throne in 1199, and in short order, he had alienated and repulsed the
barons to the point where they had renounced their allegiance to him. John’s costly and unsuccessful wars with France resulted in high taxes. Justice in the courts was fleeting, John was in conflict with Pope Innocent III, and he often seized property—horses, food and timber—without payment. Accordingly, the barons sought specific remedies for John’s train of abuses. Magna Carta is a feudal grant, reflecting the technical points of feudal law that dealt with mundane issues between the king and the barons. Its 63 short chapters lack eloquence and elegant language. It lacks lofty philosophical principles and statements, and it lacks the glittering generalities and political rhetoric that characterize the Declaration of Independence and other liberty documents. Neither is it a statement of constitutional rules or principles. It is, in short, a wholly pragmatic document aimed at curbing the king’s powers and restoring a line of division between rule based on law and arbitrary rule reflecting the will of the king. The barons, it should be noted, were concerned with their own grievances against King John. They were not at all egalitarian in their demands for remedies. Yet, the language they employed in this legal document was susceptible to exaggeration and adaptation by future generations of creative thinkers. The barons’ initial draft used the words “any baron,” but the words were subsequently changed to “any freeman.” The alteration certainly did not encompass great numbers of Englishmen, since the term had a restrictive meaning, but in it, there appeared the seeds of aspiration and inspiration. Key chapters of the Great Charter could be interpreted to advance and protect the rights of the people of the nation. For example, chapter 39 declares: “No free man shall be taken, imprisoned, disseised, outlawed, banished, or in any way destroyed, nor will We proceed against or prosecute him, except by the lawful judgment of his peers and by the law of the land.” There is no reason to suppose that “judgment of his peers” meant trial by jury. Rather, it meant, according to feudal custom, that barons would not be tried by inferiors. The breadth of the language, however, lent itself to some myth-making. Sir Edward Coke, in his analysis of Magna Carta, inflated chapter 39, and advanced it as a guarantee of trial by jury to all men, an absolute prohibition on arbitrary
arrest, and a brace of rights that protected the criminally accused. Coke, moreover, interpreted the “law of the land” clause as a synonym for “due process,” a construction that would, centuries later, provide a crucial link to the Bill of Rights of the U.S. Constitution. Coke’s treatment of due process as the equivalent of the law of the land represented an affirmation of a 1354 statute that had first interpreted the phrase in precisely that expansive manner. Magna Carta had been placed in the statute books in 1297, and soon became regarded as fundamental law. In fact, a statute during the reign of Edward III, four centuries before the landmark ruling by the United States Supreme Court in 1803, Marbury v. Madison, required that Magna Carta “be holden and kept in all Points; and if there be any Statute made to the contrary, it shall be holden for none.” There is, in this statute, the seed of the supremacy clause in the U.S. Constitution. The conversion of Magna Carta into a grand liberty document reflected some misconstruction and some historical misunderstandings. Nevertheless, the meaning and status accorded it by legislators, judges, and statesmen became its meaning. It greatly influenced the development of Parliament and served as an effective weapon in the parliamentarian resistance to Stuart kings in the crises and convulsions of the 17th century. It later helped to form the body of English rights, to which American colonists laid claim. The passage of Magna Carta across the Atlantic came early. English subjects in Massachusetts complained that their liberties were being violated. In response, the colony’s magistrates drew up the famous “parallels” of Massachusetts. One column was entitled “Magna Charta,” the other, “fundamentals of Massachusetts.” The aim, of course, was to demonstrate that rights guaranteed by Magna Carta were being enforced. Such was the status of Magna Carta in the New World that governmental officials felt compelled to account for their adherence to the Great Charter. In Pennsylvania, William Penn employed Magna Carta in drafting the colony’s frame of government. In 1687, Penn became the first to publish Magna Carta in America. Magna Carta well served the cause of liberty in America. The Massachusetts Assembly invoked it to declare the Stamp Act invalid. In its petition to the king,
the Stamp Act Congress asserted that the colonists’ right to tax themselves and the right of a jury trial were affirmed by Magna Carta. Its legacy may be seen as well in the Northwest Ordinance of 1787, which included an encapsulated provision from the Great Charter prohibiting the deprivation of liberty and property except “by the judgment of . . . peers, or the law of the land.” The same phrase had appeared in the Virginia Declaration of Rights of 1776 and the North Carolina Declaration of Rights, published in the same year. Magna Carta exercised influence as well on the draftsmen of state constitutions, the U.S. Constitution, and the Bill of Rights. The Fourth Amendment, for example, was in part a combination of appeals to Magna Carta and the assertion that “a man’s house is his castle.” The due process clause of the Fifth Amendment is only the most obvious sign of its influence. Judicial decisions throughout the 19th century invoked Magna Carta to promote various constitutional rights and curb excessive governmental acts. The historical embellishment of Magna Carta has provided inspiring imagery. As a powerful symbolic vehicle for the promotion of the rule of law, due process, and limited government, and as a guarantor of other rights, it perhaps has no rival in English and American iconography. Further Reading Howard, A. E. Dick. The Road from Runnymede: Magna Carta and Constitutionalism in America. Charlottesville: University Press of Virginia, 1968; Thorne, Samuel, ed. The Great Charter: Four Essays on Magna Carta and the History of Our Liberty. New York: New American Library, 1965. —David Gray Adler
Mayflower Compact While Magna Carta (1215) marked the beginning of a written, codified legal arrangement that described and distributed the powers of the state and the rights of the people, perhaps more germane to the development of the U.S. Constitution is the Mayflower Compact. In the 1600s, Europeans began to make the arduous voyage to the new world of the Americas, some in search of fortune, others fleeing religious persecution, and still others seeking political freedom.
In November of 1620, on a voyage to America, 41 of the 102 passengers on the Mayflower signed a pact, or covenant, as they approached the Plymouth Plantation in Massachusetts (37 of the passengers were Separatists, fleeing religious persecution in Europe). While not technically a constitution, the compact they signed was a precursor to later constitutional agreements. The Mayflower Compact created a “civil body politic” or a new government. As such, it was the first American constitution. And while John Quincy Adams, in an 1802 speech, called the Mayflower Compact the foundation of the U.S. Constitution, in reality, the Compact had little impact on the writing of the U.S. Constitution. This social contract or compact was short, but to the point. It reads in its entirety: In the name of God, Amen. We whose names are underwritten, the loyal subjects of our dread sovereign Lord, King James, by the grace of God, of Great Britain, France and Ireland king, defender of the faith, etc., having undertaken, for the glory of God, and advancement of the Christian faith, and honor of our king and country, a voyage to plant the first colony in the Northern parts of Virginia, do by these presents solemnly and mutually in the Presence of God, and one of another, covenant and combine ourselves together into a civil body politic, for our better ordering and preservation and furtherance of the ends aforesaid; and by virtue hereof to enact, constitute, and frame such just and equal laws, ordinances, acts, constitutions, and offices, from time to time, as shall be thought most meet and convenient for the general good of the colony, unto which we promise all due submission and obedience. In witness thereof we have hereunder subscribed our names at Cape-Cod the 11 of November, in the year of the reign of our sovereign lord, King James, of England, France, and Ireland the eighteenth, and of Scotland the fifty-fourth. Anno Domini 1620.
The compact remained in force for 10 years. Sadly, roughly half of the original colonists did not survive the first cold winter in the New World; their compact, however, endured. There are several things of note in the Mayflower Compact. First, it is in the tradition of the social contract. European political thinkers had been promoting the concept of a social contract, and this notion of a contract or agreement in which individuals freely
enter for some common purpose was a relatively new and even revolutionary concept at the time. Second, it is also in the tradition of covenants. While we think of covenants in religious terms, we must remember that most of those who sailed on the Mayflower were deeply religious, as their repeated references to God in the text of the Compact indicate. To the men and women of the Mayflower, devotion and service to God was the key, and we see, in the commingling of social contract and covenant language, the bridge between one era and another. Third, it creates a civil body politic, or political system. And this new political arrangement was built on the basis of common consent and agreement. Fourth, it does so for the better ordering of society. The purpose of government is to serve the interests of the people under its domain. Fifth, it does so for the successful attainment of the ends desired by the union. That is, this new compact was designed to make life better for the people under its control. Sixth, it authorizes the establishment of the rule of law. Government would not be based on the rule of one man, but on the rule of laws and agreements. And finally, it pledges submission to this new system of government under the law. The signers pledge to be bound by the decisions of the collective. Taken individually, any one of these elements would have signified a significant step forward, but taken together the sum total of all these elements marks a deeply progressive and, in some ways, new way of viewing both the citizen and the government. Here were free citizens entering willingly into an agreement to establish a new form of government in which they had rights, obligations, and a role. The mutual consent of the governed was the basis for the creation of this new system of government, and each citizen pledged support for this new arrangement. These were not subjects under the strong arm of the king’s state; these were citizens, with rights and responsibilities. The importance of the transformation from subject to citizen to the development of democratic government and the rule of law cannot be overemphasized. Perhaps it was this transformation, above all others, that marked the true significance of the Mayflower Compact. The significance of the Mayflower Compact can be seen in its impact on subsequent constitutions and compacts. At the time, social contract theory
This painting shows the Pilgrims signing the compact in one of the Mayflower’s cabins. (Library of Congress)
was emerging in Europe, and as was the case with the nascent Magna Carta, its slow and often violent rise to acceptance came in fits and starts. Step by step, the old order of the divine right of kings gave way to a new contract theory of government based on written agreements or contracts between the government and the people, and the Mayflower Compact was the earliest example of this in the New World. As such, it paved the way for the other colonies to enter into contracts between the state and the people that granted certain powers to the state, withheld others, and gave the people certain rights while imposing certain responsibilities upon them. Further Reading Cornerstones of American Democracy. Rev. ed. Washington, D.C.: National Archives Trust Fund Board, 1998; Donovan, Frank R. The Mayflower Compact.
New York: Grosset & Dunlap, 1968; The Rise of American Democracy, records assembled and annotated by Dr. Sydney Strong. New York: Wilson-Erickson, 1936. —Michael A. Genovese
monarchy The word monarchy comes from the Greek word monos, for “one” and archein for “to rule.” A monarchy is a form of government where the head of state (and sometimes the head of government as well) is the monarch, or king (and occasionally queen). A monarch usually holds office for life, and is able to hand down the crown to a descendant; usually the oldest male child is first in line for the crown, followed by the next male child, and so on. In the modern world there are very few true monarchies— that is, royal families that act as both the political and
symbolic head of government as well as head of state—and those nations that have maintained a monarchical government, Britain, for example, have stripped their royalty of political influence and power, leaving them to serve the very important symbolic and often shamanistic role of head of state. In the modern world, most monarchs are symbolic heads of state and not politically influential heads of government. A monarchy does not sit well with the democratic revolution that swept through the world in the post–cold war era. When the Soviet Union collapsed in 1991, many new nations were formed out of the ashes of the Soviet empire. Virtually all of these new nations chose a form of parliamentary democracy or a brand of constitutional republicanism, without a monarch. Republics and democracies do not mix very well with monarchies, and even the symbolic monarchy was generally unwelcome in this new age of democratic sensibility. Most monarchies are hereditary in nature; that is, title and power are handed down from generation to generation, first to the oldest son (this is known as primogeniture), then to the next oldest male, and so on. If no male heir is available, the crown goes to the oldest female child, and continues in order of age. Some of the most powerful and effective monarchs have been queens, such as Queen Victoria of England. Monarchs often serve as a symbol of nationhood, of the state, of continuity, and of national unity. For example, the queen of England is a uniting symbol of Britishness and as such is highly respected and even loved by many. She serves a national purpose as the representative figure identified with nationhood, and is very visible and important at state functions as the symbolic and unifying figure designed to evoke feelings of national pride as well as national unity. As a symbolic figure the queen is not politically active and thus does not engender the outrage or opposition that might come with a more partisan and overtly political figure such as the prime minister. At one time, monarchies were the norm in Europe. In the Middle Ages, monarchs ruled with near absolute power. This was the age of the divine right of kings, when the monarch claimed to be the embodiment of God on earth,
chosen, as it were, by God to rule. This was very firm political ground on which the monarch stood, as to defy the king was tantamount to defying God. Few were willing to risk such blasphemy, and therefore the king’s authority was nearly absolute. Over time, however, the veneer of divinity was slowly stripped from the monarch and replaced by a more democratic political order. A new secular basis of political power emerged in this new age of mass democracy, and slowly, and often violently, the political power of the monarch was transferred to elected representatives of the people in Parliaments. From that time on, the monarch became more a symbolic than a political office. In this new world, there was no place for a politically involved monarch. Monarchies can be of several varieties: an absolute monarch has power in symbolic and political terms; a limited monarch (as most are today) generally has symbolic but not political power. In other cases, the monarch appears to have, and constitutionally may actually possess, power, but true power is exercised by a military or other elite. Today, there are roughly 50 nations that, in the most technical sense, have monarchs. Some are constitutional principalities (e.g., Andorra, Liechtenstein, and Monaco); others are constitutional monarchies (e.g., Canada and Great Britain); others are constitutional kingdoms (e.g., Belgium, Cambodia, Denmark, Lesotho, the Netherlands, Norway, Samoa, Saudi Arabia, Spain, Sweden, and Tonga); some are absolute monarchies (e.g., Nepal); some are absolute theocracies (e.g., the Vatican); while still others are mixed monarchies that are not easily categorized (e.g., Malaysia, Luxembourg, Kuwait, Jordan, and Brunei). The presence of so many governmental systems with a monarch of some sort should not obscure the fact that in the past several decades, monarchies have been dramatically shrinking in number and power. Industrial nations that have maintained royal families and monarchies have already stripped them of virtually all political power, and even those monarchies that still have political power are on the defensive, as mass movements of democracy have challenged their legitimacy and viability. In a democratic age, monarchies stand out as anachronisms, and increasingly, as outdated relics.
The colonial revolution against Great Britain was largely, though not exclusively, a revolution against monarchical power. Thomas Paine labeled King George III of England “the Royal Brute of Britain,” and Thomas Jefferson’s Declaration of Independence, apart from being an eloquent defense of revolution and democracy, was, for the most part, a laundry list of charges and accusations against the king. All revolutions need an enemy, and it is especially useful to caricature your enemy, dehumanize him, and personalize him. That is precisely what the revolutionary propagandists did to the British monarchy in general, and King George III in particular. He became not only the convenient scapegoat for all the colonial troubles but also the necessary target of animosity and revolutionary fervor. Thus, the British monarchy became the focal point of revolutionary sentiments and anger. When the revolution was over and the victory won, reconstructing executive authority out of the ashes of a revolution specifically designed to denigrate and demean executive authority became a significant problem, one that was not solved by the weak and executive-less Articles of Confederation but would have to await the writing of the U.S. Constitution, in Philadelphia in 1787, for a workable return to executive power in America. If monarchy seems an anachronism in the modern world, it is because royal trappings seem so out of place in a world of egalitarian and democratic sensibilities. The leveling impact of the democratic ethos holds that no one, regardless of wealth, status, or birth, is above anyone else. In a democracy, all are “created equal.” In this way, royal sentiments truly are a relic of an age long past. They are handed down from the pre-American and French revolutionary eras, and these two revolutions, for very different reasons, have all but rendered respect for royalty passé. No true democrat in this age would defend royal power, even if they defended the symbolic and shamanistic attributes of a monarchy. In effect, there is virtually no constituency for a rebirth of royal power. And it would be hard to imagine a powerful rekindling of support for royalty in the modern, western world. The view that some should hold power over others by right of birth has long since been crushed by the tidal wave of democratic sentiment.
Today, many Americans hold a fascination with royalty and its trappings. There is a certain glamour and panache to royalty that some Americans seem to yearn for. And when American presidents conduct themselves like elected monarchs (John F. Kennedy and Ronald Reagan, for example), we shower popular approval on them and expect them to behave as our royal family. It is a paradox that democracies may yearn for the trappings of royalty even as they demand that the government leave them alone. Further Reading Everdell, William R. The End of Kings. Chicago: University of Chicago Press, 1971; Kishlansky, Mark. A Monarchy Transformed: Britain 1603–1714. New York: Penguin Books, 1996; Williams, Glyn, and John Ramsden. Ruling Britannia: A Political History of Britain 1688–1988. New York: Longman, 1990. —Michael A. Genovese
natural rights Anglo-American constitutional theories incorporate the assumption that rights can originate or arise from one of three sources: (a) custom, (b) positive law or affirmative political recognition, and (c) nature or another authority such as God. Customary rights acquire legitimacy and acceptance through traditional sociocultural practices over many generations and often have questionable legal or constitutional viability. Positive rights are arguably the least ambiguous, inasmuch as their constitutional or political validity is ensured through affirmative identification in a constitution, statute, or judicial decision. Natural rights appear to carry the unassailable approval of an absolute authority whose existence antedates any political entity, but their status is occasionally compromised through a political system, such as that in the United States, that derives its final legitimacy from a positive fundamental law. The constitutional legitimacy of certain rights has become a nagging problem in today’s constitutional jurisprudence. It is perhaps anticlimactic that, despite our continual avowals of the centrality of individual rights throughout American history, the function and purpose of rights to the framers of the
U.S. Constitution were not immediately clear. Because the Constitution, among other things, assumed the role of guarantor of the public good, the public character of rights and also the nature of the relationship between rights and the public good were thrown into question, and Americans as constitutional interpreters have been struggling to define that character and that relationship for more than 200 years. In addition, as Alexander Hamilton professed in The Federalist 84, the Constitution was “intended to regulate the general political interests of the nation” but not to involve itself in “the regulation of every species of personal and private concerns.” This statement was a reflection of the framers’ primary political and legal objectives in framing the Constitution, an objective that consisted of devising a means to secure and promote the public good but that did not extend to questions of private interests, except insofar as the resolution of those questions enabled the realization of that primary objective. Legal scholar Howard Gillman has demonstrated that the framers could not have foreseen the need, such as the one that arose from the consequences of the socioeconomic restructuring during the industrial transformations of the Gilded Age and Progressive Era, for government to participate in the regulation and the protection of “personal and private concerns.” The comparatively ambiguous place of rights in the newly dominant constitutional jurisprudence notwithstanding, the legacy of the states’ experiments in republican government during the 1780s rendered the concept of individual rights a conspicuous and indispensable feature of American constitutional discourse. With respect to rights, two significant lessons were learned from the states’ experiences during the 1780s. First, the most disturbing aspect of the clashes between private interests and the public good was the fact that the public good was vulnerable to incursions from particular interests, because a naturally defined unity between the public and private spheres did not exist. Second, and this is the point that is especially relevant to a discussion of rights, the framers were convinced that, since the private sphere is autonomous, it has no access to a naturally defined mechanism for its own supervision and regulation. Therefore, an uncorrupted and equitable competition among private interests would have to be ensured in order to
prevent unauthorized and illegitimate restrictions of that competition. As historian Gordon Wood has shown, the framers, whose principal political and constitutional goal included the protection of the public good from the corrupting influence of private interest, were suddenly concerned with the protection of particular interests and the preservation of the “private property and minority rights” that would secure minority interests “from the tyrannical wills of” selfish and partial majorities. The framers never clearly resolved the confusion regarding rights that resulted from their reconceptualization of constitutionalism. Despite Alexander Hamilton’s articulation of the reasons for not including a Bill of Rights in the original draft of the Constitution and the many eloquent affirmations of the sanctity, especially, of inherited British rights in the ratification debates, the framers failed to provide an unambiguous account of either the relationship between rights and the reconfigured constitutionalism or the ideological implications of that relationship. Furthermore, while the framers did not leave such an account of their constitutional theory per se and the broader political and legal context per se in which it was embedded, the records they did leave offer an abundance of information that has enabled us to interpret both that theory and its broader context with a considerable amount of confidence. However, the material we have concerning rights, though also abundant, is inconsistent and, in most cases, part of arguments that attempt to establish the importance of rights as rights, and not as defining elements of a broader theory of constitutionalism. Despite these interpretive limitations, one aspect of the framers’ thinking about rights as defining elements of their theory of constitutionalism is evident. Anyone who reads through the post-1780s writings of someone like James Wilson, who was intimately conversant with the law, or someone like Thomas Paine, who, though he may have lacked Wilson’s legal prowess, was a talented interlocutor and an inspiring defender of constitutional government, will notice numerous references to natural rights. At the same time, the framers’ devotion to an emergent legal positivism (the belief that rules and laws are manmade) was unquestionable, whereas their confidence in natural-law doctrines had abated considerably by
the late 1780s. So, how can we account for the framers’ frequent references to natural rights? The framers inherited an outlook on natural rights that was decreasingly relevant. That outlook was born of a concern for the public good and the trauma of political upheaval that shaped 17th-century English politics. This political turbulence compelled English political actors to examine closely the nature of the competing claims to political power among the factions that purportedly pursued England’s true political objectives. English republicans were determined to discover the source and limits of political power and to identify the universal propositions that authorized the use of that power. Political actors such as Sir Edward Coke and later John Locke believed that political power is, by the fact itself, circumscribed by the natural limits that define its authoritative use. They were convinced that Englishmen were entitled to exercise certain claims against the unauthorized use of power; Englishmen ostensibly possessed specific natural claims against the polity, or rights, to serve as intrinsic guarantees that the uncorrupted pursuit of the public good could not be thwarted by the unauthorized usurpation of political power. Therefore, as the Declaration of Breda (a 1660 proclamation by the exiled Charles II in which he outlined his demands for once again accepting the crown) affirmed, the “happiness of the country” was necessarily related to the citizenry’s success in identifying, preserving, and nurturing “the just, ancient, and fundamental rights” possessed by “king, peers, and people.” These rights were natural insofar as they were immanent manifestations of an Englishman’s existence as a member of a naturally ordered republic (constitutional monarchy). These were not rights whose affirmation was dependent on moral directives or natural laws. Rather, these rights were natural in an Aristotelian sense; an Englishman possessed these rights because he was a member of a natural political community. Specifically, as Sir Matthew Hale acknowledged, natural political rights included “those very liberties which, by the primitive and radical constitution of the English government, were of right belonging to” the English people. Hale argued that the rights of Englishmen should be regarded as “parts of the original and primitive institution of the English government” and that their existence had been
continually confirmed through “long usage . . . as if the authentic instrument of the first articles of the English government were extant.” Locke, whose formulations on rights most deeply affected the founding fathers during their initial political experiments of the 1760s and 1770s, further refined these ideas to include specific explanations of the sanctity of private property and the legitimacy of revolution. Nonetheless, even Locke’s deceptively relevant observations lost currency through the experiences of the 1780s. During this period of discursive transition, a period that was characterized by a rejection of inherited ways of understanding political circumstances, the framers were compelled to acquaint themselves with a rapidly changing set of practices whose manifestations, boundaries, and standards were comparatively unfamiliar, or, more appropriately, the framers had neither the opportunity nor the reason to consider and confront all of the possible ramifications of those practices. Certain political and legal questions, such as those dealing with rights, were not yet “ripe.” In other words, some issues, though foreseen by the framers, had not been developed to an extent that would have rendered them sufficiently interpretable in their new discursive context. This, above all, was the case with the framers’ conceptualization of rights and their role in the new political and legal discourse. Though the framers and subsequent generations of Americans continued to use a natural-rights terminology, the interpretation of that terminology was context-specific. The mere fact that the framers and later generations of American political actors still referred to natural rights does not mean that their conception of natural rights was still rooted in a preconstitutional awareness of those rights. Legal positivism as science did not reject the concept of the law of nature; it simply rejected metaphysical explanations of nature and epistemologies based on the definability of essences. Positivism accepts the existence of scientific laws, including the law of nature; the existence of those laws was a product of the classification of facts according to the criteria that are established by those who “posit” the existence of scientific laws. The laws of positive science simply represent the rubrics under which, and according to which, particular sets of material facts are categorized. What this means in the present context is that natural rights
were those that existed because material facts could be adduced as evidence of their existence. More to the point, natural rights were those that had been recognized through positive laws or some manifestation of positive laws. This explains why the framers continually iterated their devotion to their rights as Englishmen, a devotion that only makes sense if we acknowledge the fact that, without the statutes and common-law provisions that affirmed the existence of those rights, the framers would have no authoritative or legitimate claims to them. All in all, natural rights were natural not because of their grounding in a nature-centered metaphysics but because political actors had observed the fact that those rights were a characteristic feature of a particular society and had, thus, affirmed that fact through a positive law, or laws. Though it explains the many allusions to natural rights in contemporary writings, the determination that the framers interpreted the concept of rights through a legal positivist framework does not, in and of itself, account for Hamilton’s famous dictum in Federalist 84 concerning the superfluity of a bill of rights. In his oft-quoted appeal to antifederalist promoters of the idea that the Constitution should include a bill of rights, Hamilton declared that, in establishing a constitutional government, “the people surrender nothing; and, as they retain everything, they have no need of particular reservations.” In one of the few examples of academic unity, practically every scholar of the Constitution has agreed that Hamilton was aware of the fact that, in creating a limited government whose constitutional authority was confined to, at most, its enumerated and implied powers, the framers believed that those powers, rights, privileges, and immunities that were not stipulated in the Constitution were retained by the people. Alas, this only tells part of the story. The problem posed by Hamilton’s statement regarding the inadvisability of a bill of rights is that it cannot be completely reconciled with the framers’ positivist constitutional jurisprudence. Hamilton’s argument was predicated on the assumption that the government’s authority does not extend beyond its constitutionally mandated powers. From that, he concluded that the government was constitutionally prohibited from exercising its authority in areas that do not lie
within the purview of those powers. Furthermore, since the government is constitutionally enjoined from most activities and arenas, it therefore cannot violate, abridge, or encroach on the individual rights within those activities and arenas. In the end, then, what we have is not a constitutional recognition of rights, but a constitutional recognition of the fact that rights may exist in those areas that the government is enjoined from entering. These are not constitutional rights but are, at best, political rights by default. These cannot be constitutional rights because, if the framers’ jurisprudence was positivistic, which it was, those rights would have to be recognized through some constitutional, or extralegislative, mechanism in order to become constitutional rights. Otherwise, barring some subsequent constitutional affirmation of those rights, they could be repealed or abridged through an aggressive interpretation of the implied powers doctrine. Hamilton’s view of rights seems to be redolent of the notion of negative liberty. This poses quite a dilemma for interpreters of the framers’ constitutional jurisprudence, inasmuch as, of all the possible ways to conceptualize negative liberty, only one, a conceptualization based in rights-centered natural-law jurisprudence, would entail the elevation of that liberty to constitutional status. We have already seen that this is not a possibility, so we are left with the question of how Hamilton’s argument can be accommodated within a positivist constitutional jurisprudence. The answer for which we will have to settle is that Hamilton and the rest of the framers evidently did not have sufficient experience or familiarity with the various manifestations of their new, yet comparatively unfamiliar, discourse to formulate a consistent and unequivocal strategy regarding constitutional rights. It should be noted that, in one of those auspicious accidents of history, the antifederalists probably demanded a bill of rights for all the wrong reasons, but the inclusion of the Bill of Rights was necessary and logical, even though Hamilton and his adherents may not have realized it. The Bill of Rights, especially through the Ninth Amendment, provided the aforementioned mechanism that elevated rights to a constitutional level and endowed them with their status as constitutional rights. The Ninth Amendment was not conceived as some sort of rights grab-bag that
included everyone’s flavor of the day, nor was it a gateway for natural-law interpretations of the Constitution. Rather, it was an indispensable constitutional apparatus for positivist political actors, and it ensured that many of the inherited British rights and liberties, whose existence had theretofore only been confirmed through common-law doctrines, became recognized as constitutional rights. Further Reading Gillman, Howard. The Constitution Besieged: The Rise and Demise of Lochner Era Police Powers Jurisprudence. Durham, N.C.: Duke University Press, 1993; Gillman, Howard. “Preferred Freedoms: The Progressive Expansion of State Power and the Rise of Modern Civil Liberties Jurisprudence.” Political Research Quarterly 47 (1994): 623–653; Grey, Thomas C. “Do We Have an Unwritten Constitution?” Stanford Law Review 27 (1975): 703–718; Grey, Thomas C. “Origins of the Unwritten Constitution: Fundamental Law in American Revolutionary Thought.” Stanford Law Review 30 (1978): 843–893; Hale, Sir Matthew. The History of the Common Law of England. Edited by Charles M. Gray. Chicago: University of Chicago Press, 1971; Locke, John. Essays on the Law of Nature. Edited by W. von Leyden. Oxford: Clarendon Press, 1970; Locke, John. Two Treatises of Government. Edited by Peter Laslett. Cambridge: Cambridge University Press, 1987; Paine, Thomas. The Life and Major Writings of Thomas Paine. Edited by Philip S. Foner. Toronto: Citadel Press, 1974; Publius. The Federalist Papers. Edited by Isaac Kramnick. London: Penguin Books, 1987; Wood, Gordon S. Creation of the American Republic, 1776–1787. New York: W.W. Norton, 1969; Wood, Gordon S. The Radicalism of the American Revolution. New York: Vintage Books, 1991; Zuckert, Michael P. Natural Rights and the New Republicanism. Princeton, N.J.: Princeton University Press, 1994. —Tomislav Han
New Jersey Plan A decade after the signing of the Declaration of Independence, the newly minted United States was on the verge of collapse. The Revolution had cast off British tyranny and the revolutionaries estab-
lished a “league of friendship” among the 13 states in the Articles of Confederation. The Articles created a system designed to protect the liberties of individuals by granting state governments considerable power, while granting the common government virtually none. The Congress, the only national political body under the Articles, had no independent power to raise taxes, enforce laws on the states, or regulate commerce between the states. In addition, the states had equal power in the unicameral Congress, which required approval of nine states to do virtually anything significant. This often gave a small number of states an effective veto on proposed action. This arrangement led to a Congress powerless to pay even the interest on its war debts, protect the states from abusing each other economically, create a national economy that would bring common benefit, protect itself from internal insurrections like Shays’s Rebellion, or enforce the Treaty of Paris against a recalcitrant British army that refused to abandon several posts on U.S. territory. By 1787, it was clear to many that the Articles must be altered to grant Congress greater power to meet the formidable challenges that threatened the nation’s survival. To that end, delegates from the states met in Philadelphia in the summer of 1787. However, immediately upon the convention’s commencement, the delegates began to consider fundamental changes in the government structure. The first day of substantive action, Virginia governor Edmund Randolph presented the Virginia Plan, penned primarily by fellow Virginian James Madison, who sought a strong national government. The plan proposed a radical redefinition of government power and structure. In place of a unicameral legislature in which states had equal power, the Virginia Plan outlined a bicameral legislature in which power would be shared among the states by the proportionality principle: more populous states would hold more votes. The new legislature’s powers would be much greater than those Congress held under the Articles of Confederation. In addition, the plan called for an independent executive and judiciary that could check the legislature. All told, the revamped national government would be national in character, while the state governments would lose significant power. Not only would the national government have broader power
to oppose the states, the individuals who occupied these offices would be largely beyond the grasp of state legislatures. Under the Articles of Confederation, state legislatures determined the state’s delegation to the Congress and could recall a member of the delegation any time it deemed the member had worked against the state’s interest. Under the Virginia Plan, members of the first house of the legislature would be popularly elected. This popularly elected branch would then have final say over who would sit in the second house; the state legislature could merely nominate a slate of candidates from which the members of the first house would ultimately choose. Moreover, the legislature would elect the executive and judiciary. In short, the state legislature would no longer have much control over the national government. The delegates debated the Virginia Plan for two weeks. Before taking a final vote on the plan, William Paterson of New Jersey asked that the convention take time to consider the plan further and in light of a “purely federal,” as opposed to nationalist, alternative. The next day, June 15, Paterson offered nine resolutions that came to be called the New Jersey Plan. Unlike the Virginia Plan, which essentially proposed an altogether new government, the New Jersey Plan called for the Articles of Confederation to be “revised, corrected, and enlarged” to enable the government to meet the challenges of the day. It proposed that the basic structure of Congress remain the same: unicameral with power distributed equally among the states. However, the Congress would be granted additional powers, including the power to raise revenue by taxing imports and levying stamp taxes and postal fees, the power to regulate interstate commerce, and the power to force states that refused to pay their requisitions to Congress to fulfill those requisitions. Like the Virginia Plan, the New Jersey Plan would establish an executive and judiciary. The executive would be composed of more than one person, but the exact number of persons was left unclear; the resolution literally reads “a federal Executive to consist of persons” (with an intentional blank space within the document). The members of the executive would be elected by the national legislature, could not be reelected and could be removed by the legislature if enough state executives applied for removal. The
executive would have power to appoint federal officials and to “direct all military operations” as long as no member of the executive actually took command of troops. The executive would appoint members of the “supreme Tribunal,” which would eventually be called the United States Supreme Court. This tribunal would have the power to impeach federal officers and to hear a number of types of cases. As does the Virginia Plan, the New Jersey Plan provides for the admission of new states. It also establishes that naturalization rules be identical across states and provides for fair trials for crimes committed in one state by a citizen of another state. Curiously, the “purely federal” plan explicitly states “that all acts of the United States in Congress . . . shall be the supreme law of the respective states” and binding on state judiciaries. Furthermore, the national executive could “call forth the power of the Confederated States . . . to enforce and compel an obedience” to national laws and provisions. Clearly, the plan provided for a much stronger national government vis-à-vis the states than was the case under the Articles of Confederation. That the alternative to the nationalist Virginia Plan included strong clauses of national supremacy is a clear indication of the general feeling among the delegates that the national government’s hand must be strengthened. After about a week of debate, the delegates put the matter to a vote, with seven states voting for the Virginia Plan (Connecticut, Georgia, Massachusetts, North Carolina, Pennsylvania, South Carolina, and Virginia) and only three voting for the New Jersey Plan (Delaware, New Jersey, New York; Maryland did not cast a vote because its delegation was split, the New Hampshire delegation did not arrive until later, and Rhode Island did not send delegates to the convention). Although the New Jersey alternative was rather quickly dismissed, its proposal and the ensuing debate illuminated intense differences that remained among the delegates. These conflicts would arise time and again throughout the convention and ultimately alter significant elements of the Virginia Plan. The debate over the New Jersey Plan pointed to two main lines of conflict among the delegates: nationalism versus states’ rights, and large states (which favored representation based on population) versus small states (which favored equal representation). Although the large versus small
state conflict tends to dominate American history and government textbooks, it alone cannot account for the states’ votes, since a small state like Connecticut voted for the Virginia Plan, while the populous state of New York supported the New Jersey Plan. On one hand, nationalists like Alexander Hamilton, James Madison, George Washington, and Benjamin Franklin desired a national government that could stimulate national economic progress and establish and maintain equal rights in all areas. On the other, states’ rights advocates like Luther Martin and Elbridge Gerry feared concentrating power in the hands of a national government. They considered the states to be the best guarantors of liberty since state governments knew the local conditions and interests and would naturally be more concerned about them. These delegates feared the establishment of either a cabal of large states that would tyrannize the other states (the states were often fierce rivals at the time) or a far-off central government that would be unresponsive to or ignorant of local needs. Beyond these principled objections to the Virginia Plan, delegates warned practically that the convention had no authority to propose anything beyond revisions of the Articles of Confederation and that the states would refuse to ratify such a nationalist scheme. The tension between the nationalist and states’ rights visions highlighted in the debate over the Virginia and New Jersey plans would become central to the postconvention ratification debates between federalists and antifederalists in the states. The large state versus small state conflict was often bitter and led to frequent stalemate. This could hardly have been otherwise. As evidence of the small states’ devotion to equal representation, the Delaware legislature had bound the Delaware delegates to leave the convention if the equality principle were violated. Thus, the Delaware delegates, many of whom were nationalists, were forced to oppose the Virginia Plan because of its population-based representation scheme. The large states, for their part, so wanted an end to equal representation that rumors floated that if a population-based scheme were rejected, some of the large states would leave the union and form their own nation. As it happened, small states had the advantage of pro-
tecting the status quo. Under the Articles, states had equal power in Congress. The small states simply had to sidetrack any proposal to keep their preferred arrangement. Even after the vote to dismiss the New Jersey Plan, and therefore equal representation, the basis of representation in the legislature continued to be a major point of disagreement. Population-based representation was acceptable to the majority of delegates but was clearly unacceptable to the minority. Without a change, this significant minority would have bitterly opposed the convention’s final product and fought against its ratification. Ultimately, the convention took up the issue on June 27 and wrestled with it through weeks of difficult, impassioned debate. Finally, the delegates agreed on the terms of the Great Compromise on July 16, establishing a population-based scheme in the House of Representatives and an equal representation scheme in the Senate. Three features of the New Jersey Plan are worth noting. First, the plan reminds us of the broad range of options the framers had open to them as they considered the shape of government. Looking back after more than 200 years, it often seems that the structure of government could not be other than it is. However, the framers considered alternatives such as having more than one president at a time, a unicameral legislature, a system of radical equality among the states, and a president elected directly by Congress rather than the public or even the electoral college. Foreign as these arrangements feel to the average American today, several of the framers’ discarded ideas are employed by various nations around the world. For example, in parliamentary governments common in Europe, the legislature elects executive leaders. In a few nations, like Switzerland, a plural executive governs. The Swiss executive, called the Federal Council, consists of seven members. Some nondemocratic regimes also include plural executives, called juntas, in which a group of military leaders form the ruling executive. It is doubtful, however, that such leaders were inspired by the New Jersey Plan. Second, the nationalist elements in the New Jersey Plan indicate the groundswell in support of a stronger national government. The inclusion of a national supremacy provision in the states’ rights
alternative to the Virginia Plan signifies a clear and broadly supported move toward nationalism. In addition to the national supremacy provisions, the New Jersey Plan also would have granted Congress the power to regulate interstate commerce. This provision, also part of the Virginia Plan and ultimately the U.S. Constitution, has often been the vehicle for increasing national power vis-à-vis the states. Finally, it seems clear that the New Jersey Plan ultimately helped to force a compromise on the basis of representation, preserving equal representation in the Senate. Today, the Senate’s equal representation of the states results in significant inequality in a number of other ways. Just under one in three Americans live in only four states (California, Texas, New York, and Florida), meaning almost a third of the population is represented by just eight Senators out of 100. California’s population is 53 times that of seven other states (Alaska, Delaware, Montana, North Dakota, South Dakota, Vermont, and Wyoming). All of this makes the Senate “the most malapportioned legislature in the world.” Equal representation has implications for party politics as well, especially when one party does well in less populated states as the Republican Party currently does. For example, in the 109th Senate (2005–2006), 55 Republican senators represent fewer people than the 44 Democratic senators. Furthermore, since most members of racial and ethnic minority groups live in states with larger populations, these minority groups tend to be underrepresented in the Senate. Decisions and compromises reached centuries ago during a hot Philadelphia summer continue to shape the ways democracy works in the 21st-century United States. Further Reading For the text of the Articles of Confederation and the New Jersey Plan, see Yale University’s Avalon project at http://www.yale.edu/lawweb/avalon/compare/artfr.htm and http://www.yale.edu/lawweb/avalon/const/patexta. htm; Berkin, Carol. A Brilliant Solution. New York: Harcourt, 2002; Bowen, Catherine Drinker. Miracle at Philadelphia: The Story of the Constitutional Convention, May to September 1787. Boston: Little, Brown, 1966; Griffin, John D. “Senate Apportionment as a Source of Political Inequality.” Legislative Studies Quarterly, 2006;
Lee, Frances E., and Bruce I. Oppenheimer. Sizing up the Senate: The Unequal Consequences of Equal Representation. Chicago: University of Chicago Press, 1999; Lijphart, Arend. Democracies: Patterns of Majoritarian and Consensus Government in Twenty-One Countries. New Haven, Conn.: Yale University Press, 1984; Rossiter, Clinton. 1787: The Grand Convention. New York: The MacMillan Company, 1966; Smith, David G. The Convention and the Constitution: The Political Ideas of the Founding Fathers. New York: St. Martin’s Press, 1965. —Brian Newman
parliamentary government The two leading models for modern democratic governments are the British parliamentary model, often referred to as the Westminster model, and the United States separation of powers model. These two models of democracy have much in common, but there are also key differences. The most essential difference is between the fusion and the separation of legislative and executive power. In a parliamentary system, executive and legislative power is fused together; in a separation system, these powers are separated. Fusing power creates the opportunity for the government to act with greater dispatch and energy. For example, in the British parliamentary system, the prime minister and cabinet, the core executive, get over 90 percent of their legislative proposals through the Parliament. In the American separation system, a bit more than 50 percent of the president’s proposals become law. After the fall of the Soviet Union in 1991, many newly emerging nations chose to organize themselves along democratic lines. Virtually all of these new democracies in eastern Europe and elsewhere chose a variant of the British parliamentary model, and not the American separation model. When Iraq wrote its new constitution after the American-led overthrow of Saddam Hussein’s government, it too chose a type of parliamentary system of government. Why, when given a choice, did so many choose a parliamentary democracy and not a separation of powers model? Because it is widely believed that the parliamentary model is better suited to the needs of modern government; that it works better and more efficiently; that it is strongly democratic and responsive; and that it is accountable to the people.
The British parliamentary system has no formal written constitution. It does have a constitution, but it is contained in laws, traditions, and expert commentary, and it is not written down in any one particular place. The constitution is whatever Parliament says it is. In this sense, sovereignty is grounded in the Parliament. Each time the British Parliament passes a new law, that law becomes part of the evolving constitution; whatever Parliament enacts is, by that fact, part of the British constitution. It is thus a flexible constitution, written and rewritten each year, able to change and adapt to new needs. And while Great Britain is technically a constitutional monarchy, the Crown has very little real power and sovereignty emanates from the Parliament. In effect, what Parliament decides becomes constitutional doctrine. In the British system, a strong, disciplined party system allows the majority party to govern with strength. Real power, while technically residing in the Parliament, is in the hands of the prime minister and cabinet who control the parliamentary party. This is sometimes referred to as cabinet government, with collective responsibility, where a collegial decision-making process within the cabinet guides the government. More likely, a skilled prime minister can control, even manipulate, the cabinet and usually get his or her way. Two of the most effective prime ministers in recent years, Margaret Thatcher and Tony Blair, effectively controlled their cabinets, their parties, and their governments, and were thus—until the ends of their terms—very powerful prime ministers. By contrast, John Major, the Conservative prime minister who served between the Thatcher and Blair prime ministerships, did a less effective job of controlling his cabinet and thus a less effective job of controlling the party and power. The strong central core executive is a characteristic of many parliamentary systems, and these systems are considered more powerful and efficient than the American separation of powers model that disperses and fragments power across several key governing institutions. Most scholars who study democracy prefer the parliamentary model to the separation of powers model on the basis that parliamentary systems are both democratic and efficient, while also ensuring rights and accountability. The governments of western Europe are virtually all hybrids of the parliamentary model. Some, like
Great Britain, are unitary systems (with power controlled by the center, or capital); others, like Germany, have systems characterized by federalism. Still others, like Switzerland, are confederated systems that have weak central governments, and strong regional or state governments. There is no “one-size-fits-all” model of parliamentary government. And while the Westminster model (the British parliament is often referred to as the “mother of all parliaments”) is touted as the archetype for the parliamentary design, there are wide-ranging alternative models to follow. By contrast to the parliamentary systems, the United States’s separated system seems to many to be inefficient and plagued by deadlock (especially in domestic and economic policy). Clearly, the president has less power within the government than a prime minister who commands a majority in the legislature. If the United States’s system is so unattractive to the rest of the world’s democracies, how has the United States managed to become the leading power of the world with such a separated and deadlocked system of government? That is a complicated question, and there is no easy answer. Part of the answer, however, rests on the distinction between domestic and foreign policy. In the United States, the president is constrained in the domestic arena, but has a tremendous amount of power (far more than the U.S. Constitution suggests) in the areas of war and foreign affairs. In operation, the United States’s separation of powers system is thus not fully realized and the government manages, at times, to govern by extraconstitutional means. When reformers look for ways to improve the United States’s system, they invariably turn to parliamentary alternatives. But there is no way to replace the separation system with a fusion system of parliamentary form—Americans are just too wedded to their Constitution and system of government. If wholesale replacement is not in the cards, are there elements of the parliamentary model that might make the American system more effective? Can we pick and choose parliamentary devices to help make the American model more efficient and effective? Some reformers would have all U.S. elected officials elected at the same time for the same length of term; others would institute a leadership “question time,” wherein the president would, like the British prime minister,
occasionally go before the legislature to answer questions; still others would provide for a “no confidence” vote wherein the government would fall and a new election would ensue; and some call for a stronger, more disciplined party system to aid in leadership and governing. When the U.S. Constitutional Convention opened in Philadelphia in 1787, delegate Alexander Hamilton of New York rose and addressed the delegates. He gave an impassioned speech in which he argued that the new nation should model itself on the nascent parliamentary monarchy of Great Britain. It was, after all, the best government in the world . . . or so Hamilton argued. But after a revolution against just such a government, the new nation was in no mood—and the armed citizens of the nation waiting outside the convention would have no tolerance—for a rebirth of the British model. Thus, the United States rejected the parliamentary model of democracy, and embraced a new and quite revolutionary separation of powers model. The delegates had their chance to embrace a parliamentary design, but they were shaped by the revolutionary sentiments that animated the war against Great Britain, and immediately rejected anything that smacked of the system of government they had just jettisoned. One can only imagine how different the United States might be had Hamilton’s proposal been taken more seriously by the delegates to the Constitutional Convention. But hatred for the British was still fresh in the minds of most Americans, and Hamilton’s proposal was soundly rejected. Again, Americans are so committed to their system of government that it seems unlikely that these changes, many of which would require amending the Constitution, could be instituted. Parliamentary democracies are the preference of most of the world, but the United States with its separation of powers system has marched to a different democratic beat. It seems unlikely that the United States will adopt the parliamentary alternative at any time in the future.
James. Constitutional Reform and Effective Government. Washington, D.C.: Brookings Institution, 1986; Watts, Duncan. Understanding U.S./U.K. Government and Politics. Manchester: Manchester University Press, 2003. —Michael A. Genovese
representative democracy A representative democracy is a political regime in which the people rule through elected representatives, usually politicians with set terms of office and prescribed roles. This type of political regime is also known as indirect democracy, because the sovereignty of the people is not directly tapped in making law or public policy; indeed, popular will is usually subject to constitutional limitations that protect the fundamental interests of any given minority. These limitations may include any or all of the following designed to protect the sovereignty of the people: a system of checks and balances among separate institutions and/or levels of government housing different types of representatives of the people; a Bill of Rights, usually including some enumerated individual rights; and structural limitations on the scope and power of the state, should a majority capture it. A representative democracy is able to honor the liberty and equality of each individual citizen by filtering out majority biases that could affect the state’s calculus of the interests of the people. Representative democracy indicates a type of relationship between the ruler and the ruled, linking the two and presenting the people’s views through the political process and institutions of government. Representative democracy utilizes the concept of representation, which means to portray or make present something not there at the moment by way of offering a temporary substitution or institutionalizing the practice of substituting an elected officeholder for a set number of people, geographic area, or type of political interest. In a nation as sizable and populated as the United States, it would be impossible to convene all the citizens to discuss a matter of public importance. It is possible, however, to convene a representative group of people constitutionally empowered to speak for the people, enact policy, and make law. While electronic means have been suggested as a way to virtually convene the citizenry, the variety and
complexity of governmental tasks would still remain as a disincentive to their effective participation, even if well-funded and other powerful interests could be sufficiently restrained from dominating an unevenly interested citizenry. Nonetheless, in a plural society with a diversity of interests or factions, as James Madison argues in Federalist 10, a representative democracy is far superior to a direct or pure democracy because it will filter rather than reify the inevitable divisions within society, without losing sight of the public good. The concept of representation lends itself to various interpretations regarding how one person could possibly represent another or an entire group of people, for example, and what that means given that each individual may have several interests, and that different groups may form around similar interests, yet diverge over others. Contemporary issues that illustrate these debates in interpretation include whether only female representatives can represent women’s interests, and whether only people of color can represent the interests of their particular ethnic group. That the people are represented in a political institution of representation does not mean that all individuals are represented, though representation by population is the best approximation possible within the framework of representative democracy. Different countries have used representation not on behalf of enfranchised individuals, but on behalf of corporate bodies such as the recognized estates or classes in society. In prerevolutionary France, for example, each of the three French estates had a roughly equal number of representatives, despite the upper two being greatly outnumbered by the lower class. Representation does not necessarily imply democratic equality, as a monarch can be said to represent his or her subjects. Systems of proportional representation, where political parties have seats in the national assembly according to the proportion of the popular vote they acquired, may better represent the diversity of political viewpoints present in any large society than a winner-take-all system such as characterizes the American electoral system. However, proportional representation does not necessarily address the political inequalities members of unpopular minorities must often suffer in the absence of written guarantees of individual rights and liberties,
such as are provided for in the American Bill of Rights. Securing representation is not the same thing as securing rights or the franchise. The different ways to understand representation and the nature, duties, and motivations of the representative are reflected in different styles of representation. Three styles of representation have been important in the American context, the first because it was the style experienced under British colonial rule (virtual representation), and the other two (the representative as trustee or as delegate), because they have formed the poles around which the nature of the representative continues to be discussed. Fortysix of the 55 framers of the U.S. Constitution had experience as legislators in colonial assemblies and were familiar with the concept of representative democracy, a form of governance that arguably stands at the center of both the Revolution and the Constitution. While still under British rule, the American colonists were said to be virtually represented in the British Parliament or at least in the House of Commons, despite not actually electing anyone to represent them there. They were virtually represented in Parliament no less than were residents of the British Isles who could not vote. The idea behind virtual representation is that as the inhabitants of both Great Britain and its colonies are all Englishmen, they share a great deal of interests and can be adequately represented by any member of Parliament, with no special need to elect representatives from the disparate geographic areas of the British Empire, or by all the people in any one area such as England itself. Any man elected to the House of Commons could represent any British constituent, no matter where he lived, and was charged with being a custodian for the interests of all British inhabitants and the greater good of Great Britain and all its people. Members of the lower house of Parliament, though elected, were not obligated to their electoral districts, or accountable to their constituents. As the conservative defender of the British political regime, Edmund Burke, commented, Parliament is one deliberative assembly of the commons of all Britain with one interest, that of the united British subjects, as opposed to a divided body representing parts of the population, local prejudices, or special interests. Burke’s notion of a restricted suffrage that serves primarily to identify the natural aristocracy
who will represent the people in their common concern for the prosperity of their nation further shows the antidemocratic, elitist strain in representative democracy that faced the colonists. The problem here was that inhabitants of the American colonies could not be elected to Parliament, though in theory they could no less represent the interests and greater good of all Britain. A further problem was that no resident of the American colonies could vote for representatives in Parliament, even if the candidate pool were limited to inhabitants of the mother country. As regards public policy, the colonists noticed that Parliament often targeted laws at them to the benefit of Englishmen on the other side of the Atlantic, calling into question the commonality of interests said to be the focus of parliamentary attention. Hence, the colonists concluded that virtual representation was no representation at all, and that they were not being respected, but discriminated against as British citizens who ought all to enjoy the same rights and liberties regardless of where in the empire they resided. While for many colonists their identity as British subjects remained secure, for many others the thought that they were becoming increasingly estranged if not separate peoples began to loom large. That Parliament passed laws burdening the economic activities of the colonists, such as the Stamp Act and taxes on sugar and tea, understandably infuriated the colonists and prompted them not merely to reject virtual representation but to revolt. The theory of virtual representation fit well with republican sensibilities, because both prioritized the common good which some considered unitary, identifiable, and not subject to contestation; however, the American founders eventually concluded that the common good is better served through actual representation, where representatives are elected from and accountable to a large and plural citizenry living in apportioned districts. Today, there are two general models of representation or representative styles, the trustee and the delegate, which both purport to link constituent with representative in a meaningful way. When the representative is considered a trustee of the people, he is trusted to use his own insights and judgment to identify the right course of action, and may be regarded as possessing deliberative abilities the ordinary citizen is seen to lack. The representa-
tive as trustee distills the sense of the community and acts as any constituent would, were he or she also clearly to see the community’s best interests unfiltered through his or her own particular interests. As in a trusteeship, this style of representation is paternalistic and depends on the capacity of the people to identify those individuals with an enlarged public sense and similar political, social, or moral sensibilities whom they feel they can trust to discern and further the best interests of the community. The other general model, where the representative is considered a delegate of the people, entails the notion that the people instruct the representative to act for them, that the representative should act as if under instructions from constituents, mirroring the voters. As in the ambassador–home office relationship or interactions between spokesperson and principal, the representative is neither to act on his own, nor authorized to think independently, but only as directed by constituents. In practice, the representative responds to the will of the political majority in his constituent district but may on occasion calculate that exercising conscientious judgment independent of popular will or political party will best serve the public interest. In the American system of representative democracy, the people delegate some of their sovereign authority to representatives they elect to act for them and in their interests, which include their particular interests and their common interest in the public good. Thus, both styles of representation are reflected in the actual practice of representing the people, and it is up to the individual representative skillfully to manage the tension between reflecting the popular will of the people in deliberations on public policy, and judging when to go against it in consideration of their long-term interests. Ideally, the boundaries between the two styles will be fluid and negotiated across time and issues, with the representatives left to govern and the voters holding them accountable. Further Reading Pitkin, Hannah Fenichel. The Concept of Representation. Berkeley,: University of California Press, 1967; Reid, John Phillip. The Concept of Representation in the Age of the American Revolution. Chicago: University of Chicago Press, 1989; Rosenthal, Alan, Burdett
A. Loomis, John R. Hibbing, and Karl T. Kurtz. Republic on Trial: The Case for Representative Democracy. Washington, D.C.: Congressional Quarterly Press, 2003; Wood, Gordon S. The Creation of the American Republic, 1776–1787. New York: W.W. Norton, 1972. —Gordon A. Babst
republic A republic is a type of political regime where attention is focused on the common good, though it has come also to mean one wherein sovereign political authority resides in the people who elect representatives, rather than directly participate in governance themselves. The word republic—res publica in Latin—means “the public thing,” which should be the preeminent focus of citizens’ political attention, as opposed to their private pursuits or the good of a class within the public. The civic republican tradition places great emphasis on an active and engaged citizenry, where men lead public lives and seek the glory of doing great deeds on behalf of the community. By contrast, men who lead predominantly private lives experience privation, depriving themselves of the opportunity to enjoy a fully human life with its important public component. As with women and slaves, or so the ancient Greeks and Romans thought, such men must be compelled by necessity to focus their attention on their private material existence, and must not have the resources or wherewithal to devote time to the commonweal. The ultimate goal of a republic is each citizen’s enjoyment of liberty through political participation in a secure public space free from any fear of official or elite reprisal. Hence, the founders recommended republicanism out of their concerns to repudiate monarchy, arguing that no people is truly free if under a king, and to check the power of elites, for in the presence of courtiers and a nobility the ordinary man’s equal status is manifestly called into question. In fact, a republican form of government is guaranteed to the American people under Article IV, Section 4 of the U.S. Constitution, which reads: “The United States shall guarantee to every State in this Union a Republican Form of Government.” The founders were also concerned to avoid the democratic excesses of a direct or pure democracy,
wherein the people may be swayed for one reason or another to neglect the public interest or their future common good. James Madison and other founders were keen to neutralize impulsive popular opinion while safely allowing its expression within bounds that preserved the public good and prior commitments to republican constitutionalism, which surges of potentially dangerous populism threatened to obscure. While it is true that some of the founders preferred a republican form of government to a direct democracy because of a hesitation among them to allow the common man a role in governance, their dislike of royal prerogative was more powerful. Overall, they trusted that the common man could distinguish, among the persons who stood for election, those gentlemen who genuinely would focus their attention on the common good from those who sought power only to pursue their own interests. The founders believed that starting at the local level and rising through state politics to national politics, citizens would be able to identify and elect those persons to public office who would be the best custodians of the public weal. They also believed that it would be unlikely that unscrupulous, would-be despotic politicians could fool a sufficient number of people to repeatedly get elected and rise through the levels of government to cause real harm to the republic. In the past, republicanism had been attempted on a smaller scale in some city-states of ancient Greece with mixed constitutions and was self-consciously undertaken in the Roman Republic, some Renaissance Italian city-states, and Holland/the Dutch Republic. The main theorists of republicanism were the Roman lawyer/orator Marcus Tullius Cicero (106–43 b.c.) and the Renaissance Florentine political philosopher Niccolò Machiavelli (1469–1527). Cicero, who saw republican Rome turn into imperial Rome, elaborated a Stoic approach to government that included ideas from Plato's Republic, though that earlier work focused attention on justice and was in no way republican in spirit. Cicero believed that only in a republic could power reside in the people who ruled through their senators in the context of a balance among the orders of society that produced a harmony in which the civic virtues would come to reign and produce beneficial effects such as order and liberty. Cicero understood liberty as both freedom and liberality, or generosity, a preparedness to
be hospitable and humane toward fellow citizens. Ultimately, the city was to be ruled by natural law as comprehended by Roman Stoic philosophy, central to which was reason, a human faculty that promotes equality, and the notion of a humanity-wide community ruled by reason, as opposed to the unreason of a tyrant. Machiavelli revived the Roman tradition of republican government, and made it his ideal, albeit this has stood in contrast to the ideas advanced in his most famous work The Prince. Machiavelli was keenly aware that there are important differences between the situation of being in a republic and desiring to maintain it, and being in a different sort of political regime, desiring a republic and seeking somehow to found one. It is in the latter area that Machiavelli’s notion of an energetic great leader occurs, one who uses cunning, efficacy, and other political virtues and skills to establish and secure for a city its freedom and independence from outside powers, including the papacy in Rome. Thereafter, the best form of government to maintain and expand this freedom is a republic, wherein there is a secure public space in which citizens interact, contest public issues, and take charge of their collective fate. Machiavelli also revived the civic humanist tradition of public virtue based in the liberty and independence of each citizen, an independence the American founders initially located in property-holders, whose wealth enabled them to pursue public life and the common good, not being beholden to others for their sustenance. The republics of the past were smaller states by comparison, with many believing that the success of this form of political regime required a society with fairly homogeneous interests and customs, much as in the case of a direct democracy. It was Madison’s innovation to justify it as more viable for large nationstates, partly to combat the parochialism of the states he felt had doomed the Confederation, which itself had no mechanism to combat the inevitable factious majoritarianism of the democratic electoral process. Madison connected the idea of a republic to the presence of many factions, arguing that factions are best handled by a large republic wherein political officers are constitutionally charged with looking after the public interest despite the competition among them for shares of the electorate in which different views
compete for prevalence. Ideally, the notion of compound republicanism meant, out of many publics, one republic, and one not undone by any faction or combination of factions that pursued an unjust aim. In Federalist 10, Madison argues that the larger the territory, the greater the variety of interests and factions, and so the more stable the republic, by sharp contrast to earlier views of republicanism as workable only in small, fairly homogeneous settings where there would be relatively easy consensus on what is in the public interest. However, as Madison also recognized, a large state with a national government might have a lot of power that may be subject to misuse. Part of the argument for a republican form of government included separate governing institutions for checks and balances as well as different levels of government for exerting countervailing powers with respect to the national state. Often the terms republic and democracy are placed in juxtaposition, because republicanism does not mean democracy (many republics are not fullfledged democracies) and it provides a check on direct expression of the will of the people outside of regular general elections. Madison discussed this distinction in Federalist 14, writing that through their representatives, the people maintain their democracy while allowing it to spread over distances too vast for the people to assemble as in a direct democracy. Rather than restrict the sphere of territorial jurisdiction, or infringe on the people’s ability to self-govern, the founders chose to establish a republican form of government that provided pathways of political participation and the promotion of interests without curtailing the capacity of constitutional rule to see to the public interest, no matter how large or populous the country grew to be. In contemporary American practice, republicanism means constitutional government combined with representative democracy. Republicans believe in institutions restraining the state and any exercise of state power, and believe that constitutionalism— adherence to a written constitution—is an excellent way to do that. While the American innovation in republican practice was to combine republicanism with a presidential system (presidentialism), a republican form of government may also characterize a parliamentary system (parliamentarianism). The theory
and practice of republican government should not be confused with the platform or political aims of the Republican Party in the United States, whose seemingly central commitments to certain moralist political positions imply a state with more, not fewer, powers, and one that caters to majority will in the face of constitutional guarantees to each citizen. As contemporary theorist of republicanism Philip Pettit might argue, a nonarbitrary and constitutionally endorsed interference with one's liberty is acceptable, while an arbitrary interference or domination of one party by another in the pursuit of interests not held in common is a very different matter and contravenes republican principles because of the loss of liberty involved. Further Reading Pettit, Philip. Republicanism: A Theory of Freedom and Government. Oxford: Clarendon/Oxford University Press, 1997; Pocock, J. G. A. The Machiavellian Moment: Florentine Political Thought and the Atlantic Republican Tradition. Princeton, N.J.: Princeton University Press, 1975. —Gordon A. Babst
rule of law A handful of constitutional principles seems to stand above the rest in the American political consciousness and serves as the justification for beliefs about American exceptionalism. Among these is the idea, most famously promoted by the framers of the U.S. Constitution, that the American polity is a government of laws, not men. Seeking a legitimate solution to contemporary political dilemmas and impelled by memories of the British parliament’s supposed constitutional usurpations, the framers staunchly adhered to the notion that law should be king. They rejected prevailing jurisprudential doctrines that substantiated parliamentary sovereignty and instead argued for the emergence of an alternative system based on the sovereignty of law. Viewed from a modern perspective, the framers’ doctrinal innovations reflected the development of an American political culture devoted to the rule of law. Indeed, Americans appear convinced that one of the hallmarks of U.S. political ascendancy, if not superiority, is the centrality of law in their society.
An indispensable component of liberal democratic government and, by extension, liberal theories of government, has been the assumption that liberalism presupposes the existence of the rule of law. Although this link between liberalism and the rule of law seems self-evident to many, the concept of the rule of law is comparatively amorphous and, therefore, conceptually pliable. Of the myriad philosophers, politicians, jurists, and commentators that have considered this topic, few have been able to articulate a coherent conception of the rule of law. Despite a broad agreement among them that the rule of law necessarily entails constitutionalism, a consensus regarding other definitional characteristics has been conspicuous through its absence. Furthermore, while theoretical and empirical investigations of the rule of law frequently betray an eagerness and ability to identify those political systems in which the rule of law is present, a consistent or even recognizable set of criteria by which to make those identifications has yet to arise. This is not meant to imply that scholarly agreement exists regarding the definition of other fundamental concepts in political science or that most such concepts have been more precisely or adequately delineated. Rather, it is a reminder of the fact that certain rudimentary terms are such an omnipresent and pervasive part of the political lexicon that their utilization is often tautological and reflexive, i.e., an outgrowth of established assumptions that seem to possess a priori validity in political argumentation. Unfortunately, continued acceptance of this validity obviates the need for the sort of reflection and analysis that would clarify and properly contextualize concepts such as the rule of law. These interpretive limitations notwithstanding, the concept of the rule of law has, at least in part, become relatively trite because it is so significant within the Anglo-American political lexicon. As indicated, writers have normally associated the rule of law with constitutionalism. More precisely, they have customarily viewed constitutionalism as the sine qua non of political societies in which the rule of law is present. If constitutionalism reflects a belief that a fundamental law of some type animates and authorizes the use of political power and secures political justice through the protection of key substantive and procedural objectives, the rule of law
presupposes a political culture in which a core set of legal principles serves as the justification for, and ultimate restraint on, all consequent political activity. Accordingly, the rule of law depends not just on the idea that humans are subordinate to laws but also, and more significantly, on the related conviction that some laws must be endowed with an extraordinary purpose and status that renders them superior to all others. Within the setting of American politics the equation of the rule of law with constitutionalism translates into a requirement for written fundamental law, such as the U.S. Constitution. However, within the more general context of Anglo-American political thought, a written constitution is not an absolute requirement, as evidenced by the nature of the British constitution (which is unwritten). As crucial as a belief in constitutionalism is to the existence of the rule of law, it is not a sufficient criterion for that existence. The history of the 20th century and what little we have witnessed of the 21st century have amply demonstrated that constitutions and an associated allegiance to constitutionalism, particularly of the legal-positivist variety, cannot prevent, discourage, or even invalidate the illegitimate use of political power and authority. Adolf Hitler in Germany, Pol Pot in Cambodia, Joseph Stalin in the Soviet Union, Slobodan Miloševic´ in Serbia, Franjo Tudjman in Croatia, Janjaweed militia in the Sudan, AUC auxiliaries in Colombia, the ruling junta in Burma, and countless others have ably shown that laws, constitutions, and appeals to justice can be employed to authorize and affirm the use of constitutional or otherwise legal power to realize clearly illegitimate objectives. Even in the United States, often cited as the foremost example of a society dedicated to the rule of law, a culture of constitutionalism has occasionally allowed the pursuit of illegitimate political ends. From both a historical and a philosophical perspective, specific regimes, or kinds of regimes, have been justified in three different ways. First, justification for a regime and its corresponding right to exist may be a product of its monopoly on, or control of, physical power. The ageless adage “might makes right” has served as a platform for political validation for centuries and, sadly, continues to do so in today’s world. Second, the concept of authority has offered a viable and more equitable substantiation for the use
of political power, especially over the last 250 years. The idea that the utilization of political power should be duly authorized through law and its logical connection to foundational political principles is manifestly more defensible than the notion that power is self-authorizing because its viability is secured through a brute coercive potential. Nevertheless, as confirmed above, constitutional or legal mechanisms designed to authorize power cannot guarantee the legitimacy of such authority. The failure of this second category of justifications, centered on authority, for the existence of particular regimes gradually led to the realization that power and authority must coexist with legitimacy. The third and newest category of justifications for political society flows out of the conviction that the moral and, by extension, ethical viability of a political regime is a function of its legitimacy. Pursuantly, though the exercise of power in government must be duly authorized, that government and its foundational political principles or laws must be legitimate. As a result, it is no longer sufficient to ask whether governmental power can be justified through constitutional provisions that authorize it. Rather, the authorization itself must be subjected to scrutiny through an assessment of the legitimacy of the constitutional system and political culture that are the arbiters of such authorization. In the United States, the acknowledgment that legitimacy must become the ultimate test of political viability has its origins in the controversies between the British parliament and colonial legislatures during the 1760s and 1770s. Following the Glorious Revolution, a newly renegotiated constitutional landscape enabled steady and consistent parliamentary accretions of power and authority that eventually solidified not only parliamentary supremacy but also, and more significantly for the colonies, parliamentary sovereignty. Although this development did not produce the abuses of authority that colonial paranoia and American lore have immortalized through partisan renderings of events, it appeared to enshrine a dangerous constitutional precedent that could be utilized to justify illegitimate political goals. Rejecting inherited and dominant contemporary political doctrines that affirmed the inherent legitimacy of well-ordered, or naturally warranted, political regimes, America’s founding fathers concluded that regimes did not derive their legitimacy from a
correct structure justified through the laws of nature, on the one hand, or institutional custom, tradition, and evolution, on the other. Rather, they asserted that political legitimacy must be established and secured through a positive legal mechanism, i.e., a constitution, which authorizes and delimits the utilization of political power and a political culture that recognizes legitimacy as the ultimate criterion for political viability. In other words, the founders believed that a constitution—not nature, God, or custom—legitimates the political regime it creates because it reflects a wider conviction that the legitimacy of foundational laws and principles in a political society is paramount. The founders’ justification for political society imparts a subtle yet critical distinction to the standard equation of rule of law with constitutionalism. They evidently believed that a constitution plays an indispensable role in creating and maintaining the rule of law, not least because of its role in legitimating political power and authority, but they were equally convinced that a constitution and the system it supports were not intrinsically legitimate due just to the presence of that constitution. Men such as James Madison, Alexander Hamilton, Thomas Jefferson, and James Wilson, as much as their views may have differed on some matters, were all aware that constitutionally ordered regimes were not, ipso facto, legitimate. Legitimacy seemed to call for something more; it required a political culture that was willing to question the integrity of constitutional government in any guise and also the traditional or logically mandated justification of that government’s ethical viability. In many ways, the concept of legitimacy has been just as amorphous and pliable as the concept of the rule of law. Consensus exists even among those who uphold, promote, or lead illegitimate regimes that legitimacy constitutes the gold standard by which to judge governments. However, consensus begins to erode quite rapidly once the focus turns to the sources of legitimacy themselves. Despite comparatively widespread agreement that judgments concerning legitimacy are in some way related to overarching ethical criteria and, thus, moral objectives, ethical criteria can be based on one or a combination of so many distinct influences that a universally satisfactory test of legitimacy, insofar as it is even desirable, is not
achievable. Ethical systems derive their authority from theology, philosophy, ideology, culture, nationality, history, and numerous other sources, many of which are frequently incompatible, so assessments of legitimacy are fraught with problems of consistency and feasibility. Nonetheless, this should not discount the fact that legitimacy is an important, if not the most important, feature of the rule of law. Aside from legitimacy, legalism, and constitutionalism, additional norms by which the presence of the rule of law can be identified in a political society do exist. Lon Fuller and other legal scholars have claimed that transparency and consistency are central to the preservation of the rule of law. They have illustrated that the rule of law cannot cohabit with arbitrariness and secrecy, inasmuch as those conditions offer ample opportunity either for the subversion of existing laws or the promulgation of unjust and unconstitutional laws. For example, many current and former dictatorial regimes have secured and perpetuated their power by taking advantage of their subjects’ ignorance of the laws or their inability to predict and determine the legality of specific acts. On the other hand, countries in which the rule of law is thought to be present willingly publish their laws and constitutions and endeavor to maintain statutory consistency in order to inculcate and promote recognizable standards of right and wrong that secure rather than undermine individual rights and liberties. Another widely recognized feature of the rule of law is the concept of equality before the law. The principle that all citizens of a state should enjoy equal protection of the laws was incorporated into the Fourteenth Amendment of the Constitution and has been the motivation for a considerable amount of reformist legislation intended to redress the structural inequities and legal deficiencies that characterized much of American history. Equality before the law has often been subsumed under the more general rubric of political equality, which, according to most observers, is a fundamental prerequisite for the rule of law. In today’s world, free of 18th-century beliefs about natural sociobiological hierarchies, a political society in which equality of access and participation in the regime is not guaranteed seems inherently illegitimate. Moreover, to many people, political equality entails the related requirement of one person, one vote.
Political equality presupposes particular rights and liberties whose preservation is also necessary for the maintenance of the rule of law. Although the existence of the rule of law without individual rights may have been conceivable under some divine-right-of-kings theories of the late Reformation and early Enlightenment, today the belief that the rule of law can be established without simultaneously securing certain universally recognized political rights and personal liberties is unsupportable. The legitimacy of a regime is often judged by its ability to protect and ensure the dignity, privacy, integrity, and survival of its citizens through its willingness and determination to defend basic human rights. It is fascinating to note that, under this standard, even many of today's most progressive regimes do not uphold the rule of law. In fact, putative stalwarts of individual freedom such as the United States still withhold full access to privacy rights and tolerate expansive police powers that abridge liberties and rights acknowledged as essential by the United Nations and international human-rights organizations. Whereas it was possible to imagine a monarchical or oligarchic regime operating under the rule of law in the 18th century, such a formulation would be nonsensical today. Democratic government of one sort or another is another of those modern prerequisites for the existence of the rule of law that seem indispensable. The belief in self-determination and government by consent is regarded as one of those antidotes to despotism that could facilitate the spread of the rule of law to regions unfamiliar with it. More to the point, policy experts and media pundits have identified the proliferation of liberal democratic government in historically undemocratic regions as the engine of political legitimization and structural reform. Some writers, Francis Fukuyama among them, have even been so bold as to suggest that the inevitable rise of liberal democracy will eliminate the necessity for other types of regimes, thus inaugurating the end of history through the realization of self-legitimating governance. Although such a scenario appears impossible, it aptly underscores the common conviction that the rule of law is a trapping of liberal regimes, or vice versa. In the end, all of the traits described above can be found in a political culture that values law as an
end in itself. The rule of law cannot apply to people or societies that view the law as an epiphenomenon or an instrument of any type, even if the purpose of such an instrument is the realization of otherwise legitimate political objectives. This is why it is problematic to speak of regimes established or proposed prior to the 17th century as dedicated to the rule of law. The rule of law logically and necessarily involves an acknowledgment and acceptance of the law as sovereign and a corresponding belief that political legitimacy relies on the continued sovereignty of law and its supremacy over human action. Likewise, the rule of law exists among people whose respect for the supremacy of law flows from a core belief in the legitimacy of law itself and the desire to establish the law as a substantive objective that authorizes all subordinate political objectives. To paraphrase the framers of the Constitution, the rule of law is present in a political society governed by laws, not men. Further Reading Ackerman, Bruce A. We the People: Foundations. Cambridge, Mass.: Harvard University Press, 1991; Dworkin, Ronald. Law’s Empire. Cambridge, Mass.: Harvard University Press, 1986; Fuller, Lon L. The Morality of Law. New Haven, Conn.: Yale University Press, 1964; Kahn, Paul W. Legitimacy and History: Self-Government in American Constitutional Theory. New Haven, Conn.: Yale University Press, 1993; Levinson, Sanford. Constitutional Faith. Princeton, N.J.: Princeton University Press, 1988; Posner, Richard A. The Problems of Jurisprudence. Cambridge, Mass.: Harvard University Press, 2005. —Tomislav Han
separation of powers When the framers met in Philadelphia in May 1787 to draft the U.S. Constitution, they were not yet certain of the type of government they wanted to create, but they knew well what they wished to avoid. They rejected a system of government where all power was concentrated in the hands of one ruler. Their challenge was to craft a system where power would be sufficiently dispersed to prevent tyranny while, at the same time, providing enough coordination among the separate units to be “workable” and to operate effectively.
In their efforts, they were guided both by past practices and political theory. In colonial times, each colony had a governor, appointed by the king of England. Governors had power to create courts, to call legislative assemblies into session, and to nominate members with life tenure to a council that possessed judicial power. Thus, three types of power—executive, legislative and judicial—existed even during these early, preconstitutional times, although one person controlled the levers of all of them. When the thirteen colonies transformed into independent states, institutions exercising these powers emerged within states. Under the state constitutions, states had a governor, a legislature and a judicial system. It was the result of these state experiences with fledgling self-government, along with an appreciation of baron de Montesquieu’s theory of dividing power to protect liberty, expressed in his The Spirit of the Laws (1748), that guided and directed the framers as they sought to create similar structures on the national level. Efficiency, as United States Supreme Court Justice Louis Brandeis reminds us, however, was not their goal. His dissenting opinion in the Supreme Court case of Myers v. U.S. (1926) is notable for its candid and succinct explanation that “The doctrine of the separation of powers was adopted by the convention of 1787 not to promote efficiency but to preclude the exercise of arbitrary power. The purpose was not to avoid friction, but, by means of the inevitable friction incident to the distribution of the governmental powers among three departments, to save the people from autocracy.” Thus, in devising the structure of government, two primary motivations emerged for the framers: 1) dividing power to protect liberty, and 2) fostering deliberation by allowing for policies to gather consensus through a democratic process of bargaining, negotiation and compromise across the two policy-making branches (legislative and executive), insuring against actions based on the fleeting passions of the day or on any one dominant interest. There was little doubt, as Brandeis acknowledged, that this system would result in “friction” among the branches, and that the gradual process of assembling consensus would be slow, incremental, and, even at times, inefficient. But those were acceptable prices to pay for guarding against
despotic rule of the kind for which the colonists fled England and were determined to avoid in the new government they were creating. It was James Madison who adapted Montesquieu’s theory to fit an 18th-century America. In Federalist 47, he reasoned that when Montesquieu wrote that “there can be no liberty where the legislative and executive powers are united in the same person, or body of magistrates,” it did not mean that the branches were to be totally separate and distinct, but, rather, that the hands that exercise “the whole power of one department” should not be permitted to possess “the whole power of another department.” (emphasis in original) Moreover, Madison’s warning in Federalist 51 of the need for “auxiliary precautions” to control human nature and to keep government power limited was an argument for a flexible rather than a rigid approach to separation of powers. “Flexibility” allowed for each branch to play a partial role in the actions of the others, and for each to react to the official acts of the others. Thus was born the corollary concept of checks and balances, the essential flip side to separation of powers. While separation of powers allocates powers to the three branches, checks and balances monitors the relations among the branches to ensure that none usurps the powers of the others. Both concepts, though absolutely fundamental to the structure of government in the United States, appear nowhere, in explicit terms, in the U.S. Constitution. Instead, their meanings can be discerned from (1) the opening clause in the first three articles of the document, known as “the distributing clauses,” and (2) from a careful analysis of the powers granted to each branch in the remaining provisions of all three articles. Those opening clauses are as follows: Article I states that “All legislative Powers herein granted shall be vested in a Congress of the United States. . . . ;” Article II states that “The executive Power shall be vested in a President of the United States of America. . . . ;” and Article III states that “The judicial Power of the United States shall be vested in one supreme Court, and in such inferior courts as the Congress may from time to time ordain and establish.” These clauses establish the constitutional existence of the primary institutions in each branch, and announce the type of power each will exercise. It is then left to subsequent sections in each article to
flesh out more specifically the full range of constitutional responsibilities and authority allocated to each branch. Upon review, we see the overlapping and sharing of powers that was at the heart of Madison's vision, and upon which he placed his faith in the ability of government to keep its own power limited. Nowhere is this more eloquently expressed than in Federalist 51, when he says, "In framing a government which is to be administered by men over men, the great difficulty lies in this: you must first enable the government to control the governed; and in the next place oblige it to control itself." Thus, the president is part of the legislative process, proposing bills at the beginning of the process, negotiating the language and provisions with Congress, and signing or vetoing at the end. The Senate is part of the appointment process, with the power to grant or deny confirmation to the president's executive and judicial branch nominees. The chief justice presides over the Senate in an impeachment trial of the president. These are examples of how each branch plays some role in the sharing of powers with another branch, while also acquiring from that role an opportunity to register an input or "check" on the actions of another branch. From this operative description of separation of powers, scholars have questioned whether the term itself is misleading and inaccurate. Presidential scholar Richard Neustadt proclaimed, in his 1960 book, Presidential Power, that what we really have is "a government of separated institutions sharing powers." Political scientist Charles O. Jones builds upon Neustadt's description, and suggests that our system can best be described as one "where . . . separated institutions compete for shared powers." Legal scholar Louis Fisher introduces the notion that the branches engage in a "constitutional dialogue" with each other through a continuous series of actions and reactions. For example, Congress may pass a law, which the Court may declare unconstitutional. Congress may then go back and rework the law to address the Court's objections. Upon a new legal challenge, perhaps, the Court will find the revised law constitutionally acceptable. Thus, a "conversation" has occurred between the branches, and both reached agreement after an initial conflict.
Separation of powers is one of the two ways in which power is divided in the United States government. It is the “horizontal” division, where authority is spread laterally across the executive, legislative, and judicial branches. Federalism, the division of power between the national government and the states, is the “vertical,” or longitudinal, division. Both operate, as Madison intended, to disperse power as a method of guarding against concentration of authority in any one location. Both are constitutional concepts that function within a dynamic political context that, at any point in time, reflects the contemporary environment. This means that, throughout history, there have been and will continue to be periods of dominance by each branch (or, as in the case of federalism, by the national government or the states). It is at those times that the “balancing” function on which Madison relied to restore the necessary equilibrium becomes most critical, if we are to remain the government of limited powers that the Constitution created. Further Reading Fisher, Louis. Constitutional Dialogues: Interpretation as Political Process. Princeton, N.J.: Princeton University Press, 1988; Hamilton, Alexander, James Madison, and John Jay. The Federalist Papers. New York: The New American Library, 1961; Jones, Charles O. The Presidency in a Separated System. Washington, D.C.: The Brookings Institution, 1994; Myers v. United States, 272 U.S. 52 (1926) (Brandeis, dissenting); Neustadt, Richard E. Presidential Power: The Politics of Leadership. New York: John Wiley and Sons, 1964. —Nancy Kassop
slavery Slavery is the institution of human bondage, in which individuals are held against their will in the service of another. In the United States, this took the form of chattel slavery, in which human beings were the legal property of their owners, primarily for the purposes of providing labor. In various forms, slavery has existed for thousands of years, but the American case was unusually contentious. First, it presented a serious philosophical problem for the founders, who had established that individual liberty would be the defining principle of the new nation. But even more sig-
nificantly, slavery became a political quagmire at the very heart of American government for nearly a century, a conflict so profound and so intractable that it would be resolved only after a civil war that saw the deaths of more than 600,000 people. Slavery was common throughout the American colonies since the first slaves arrived at Jamestown in 1619. Because colonial America was so agricultural, the demand for labor far exceeded the supply, and slavery was an obvious solution to the problem. By the middle of the 18th century, slavery had become a significant part of American life; there were more than a quarter of a million slaves in the colonies by 1750, constituting approximately 20 percent of the entire population. Nearly 90 percent of those slaves lived in the southern colonies, where slavery was becoming a defining feature of the culture. But even in the northern states, slaves made up a fairly significant portion of the population. For example, slaves constituted approximately 15 percent of the population of New York. Even where there were few slaves, the institution itself was often vital to the economy; many New England shippers and merchants relied on the slave trade for their livelihood. By the time of the American Revolution, slavery had established deep roots in a place that would genuinely—if ironically—be a new nation “conceived in liberty.” Every American schoolchild learns about the monumental contradiction of the nation’s founding, best embodied by Thomas Jefferson. As the author of the Declaration of Independence, he announced in stirring passages that “all men are created equal,” with God-given rights to “life, liberty, and the pursuit of happiness.” Yet Jefferson owned more than 200 slaves, whom he viewed as grossly inferior, and not his equal in any sense. At the same time, like many slaveholders, Jefferson despised the institution, even arguing for its abolition early in his career. The initial draft of the Declaration contained an entire paragraph that charged the king with vetoing attempts by the colonists to end the slave trade, which Jefferson referred to as “execrable commerce.” Despite these apparent principles, however, he made no provisions to free his own slaves, even upon his death. Jefferson’s hypocrisy cannot be defended, but it is his contradictions that reveal so much about the relationship between slavery and liberty in early American history.
The rhetoric of the Revolution was filled with the language of despotism and liberty, and the rebels often spoke of their status as “slaves” of the tyrannical British Crown. The metaphor may have provided inspiration to the frustrated, overtaxed colonists, but it also brought into focus the irony of slaveholders demanding freedom. Such notables as Benjamin Franklin, Thomas Paine, and James Otis were early advocates of abolition as a matter of secular principle. Otis was clear about the implications of a doctrine of natural rights: “The colonists, black and white, born here, are free born British subjects, and entitled to all the essential civil rights as such.” Other Americans followed the lead of the Quakers, opposing slavery on religious and humanitarian grounds. But to be sure, very few Americans became true abolitionists in the late 1700s, mostly out of indifference, but partly out of prudence. As historian Peter Kolchin explains, “[The Founders] typically abjured hasty or radical measures that would disrupt society, preferring cautious acts that would induce sustained, long-term progress.” Most of them believed, as Jefferson did, that slavery would probably die a natural death within a generation, a seemingly safe prediction, since so many states were abolishing the slave trade or, in the case of the northernmost states, outlawing the institution altogether. Even in Virginia, the state legislature passed a law in 1782 that made it easier for slaves to be freed by removing all restrictions on manumission. Congress added to this momentum by prohibiting slavery in the territories covered by the Northwest Ordinance of 1787. But the full scope of the conflict would again emerge at the Constitutional Convention later that summer. There were numerous obstacles facing the delegates at the convention, but the conflict over slavery nearly derailed the entire process of creating a new U.S. Constitution. James Madison recognized the peril almost immediately: “The real difference of interests lay, not between the large and small but between the Northern and Southern States. The institution of slavery and its consequences formed the line of discrimination.” There is perhaps no better indication of the severity of the conflict than the omission of the very word slavery from the final document. All three references to slavery contain euphemisms, thereby avoiding any direct mention of the controversial practice. Consider the most contentious
[Illustration: Proclamation of Emancipation (Library of Congress)]
item, the international slave trade. Many of the delegates wanted to abolish the importation of slaves altogether, as even most supporters of slavery generally found the slave trade extremely repugnant. Moreover, there was a widely shared view that a centralized Congress would be empowered to regulate all international commerce. But some of the southern delegates flatly objected to any federal interference with slavery; Charles Pinckney of South Carolina warned his colleagues several times that his state would not ratify a document that allowed for any regulation of slavery. The compromise that emerged gives little hint of the controversy it addressed: "The migration or importation of such persons as any of the states now existing shall think proper to admit, shall not be prohibited by the Congress prior to the year one thousand eight hundred and eight. . . ." (Article I, Section 9). That postponed the conflict over the slave trade, but the moment it was constitutionally permissible, Congress did end the practice; the Slave Trade Act of 1807 prohibited the importation of "such persons" effective January 1, 1808. Twenty years later, the issue was still ripe. Of course, there were countless compromises at the Constitutional Convention. It could hardly have been otherwise, with representatives of states holding starkly competing interests. Reaching a consensus on the many issues that divided them was nothing short of miraculous. But compromises often look less noble in retrospect, particularly ones that cut at the core value of individual liberty. Nineteenth-century abolitionists frequently argued that the Constitution was, in practice, a pro-slavery document. William Lloyd Garrison referred to it as an "infamous bargain," not only because of the well-known features that directly reflected an accommodation of slavery, such as the Three-Fifths Compromise, but also because of the indirect support of slavery provided by the institutions of federalism. For example, the requirement that three-quarters of the states would have to approve a constitutional amendment effectively gave the southern states a veto over any measure they collectively found objectionable. The structure of the U.S. Senate and even the design of the electoral college added further strength to the less populous southern states. This view has remained quite popular among historians and legal scholars. In 1987, United States Supreme Court Justice Thurgood
Marshall marked the occasion of the Constitution’s bicentennial by calling it “defective from the start.” He argued that the bargains at the convention represented more than simple concessions to political necessity, they fundamentally sacrificed the principles of the American Revolution, with consequences still: “The effects of the Framers’ compromise have remained for generations. They arose from the contradiction between guaranteeing liberty and justice to all, and denying both to Negroes.” Historian Don Fehrenbacher sees those compromises differently, contending that the offending features of the Constitution were crucial to securing support for the document, not only at the convention itself, but in the states, which would have to ratify the proposal before it could become law. Whatever the delegates’ competing views on the morality of slavery—and those views ran the entire spectrum of opinion—the political realities required some de facto protection for slavery. At the Virginia Ratifying Convention, James Madison bluntly responded to an objection pertaining to the Slave Trade Clause (perhaps recalling Pinckney’s threat) by arguing that it was for the greater good: “The Southern States would not have entered into the Union of America without the temporary permission of that trade; and if they were excluded from the Union, the consequences might be dreadful to them and to us.” Given such limitations, Fehrenbacher concludes that the founders could actually be credited for the antislavery potential in a constitution that strengthens the power of the national government. Today, this debate is largely an academic one among historians, but in the antebellum period, appealing to the Constitution was common among both supporters and opponents of slavery. Statesmen such as Daniel Webster and John Calhoun advanced complex arguments about the meaning of liberty, and consequently the powers and limits of the federal government. But the escalating political tensions were not merely abstract matters of philosophical debate. As the nation expanded westward, the admission of new states to the union threatened the delicate sectional balance in the federal government. Since southerners were already outnumbered in the House of Representatives, it became a vital matter for defenders of slavery that the Senate continue to provide the southern states with a veto over federal efforts to interfere
with it. A series of carefully negotiated bargains, such as the Missouri Compromise of 1820 and the Compromise of 1850, successfully maintained that balance for most of the first half of the 19th century. But the measures were only temporary and could forestall the inevitable crisis only for a time. An aging Thomas Jefferson called the Missouri Compromise "a reprieve only," and he presciently saw where the conflict over slavery was ultimately heading, famously stating, "We have the wolf by the ears, and we can neither hold him, nor safely let him go." Indeed, the nation could not permanently survive as "half-slave, half-free," and the compromises that somehow held the country together for several decades would fail to settle the fundamental conflict. By the 1850s, acrimony over slavery had escalated into open hostility. Fifty-five people were killed in a border war that erupted between Kansas and Missouri in 1856 over the question of the expansion of slavery into the Kansas Territory. Violence even broke out on the floor of Congress, where Massachusetts senator Charles Sumner was beaten nearly to death following a passionate speech denouncing slavery. The fact that his attacker was another member of Congress only underscores that the issue had reached the boiling point. Unwisely, the United States Supreme Court attempted to resolve the conflict in its decision in Dred Scott v. Sandford (1857). Scott was a slave who sued for his freedom on the grounds that he had been taken to a free territory several years before. The Supreme Court rejected his claim, ruling that Scott had no standing even to file a case, as he was not, and could not be, a citizen of the United States: Whether free or slave, black people "were not intended to be included, under the word 'citizens' in the Constitution, and can therefore claim none of the rights and privileges which that instrument provides for and secures to citizens of the United States." It is worth noting that the contradictions of the founders were still quite relevant, and their actions were used to provide some justification for the decision in this case; the opinion states that "the men who framed this Declaration [of Independence] were great men . . . incapable of asserting principles inconsistent with those on which they were acting." Hence, if Washington and Jefferson owned slaves, clearly those slaves were not meant to be citizens. Whatever the merits of the
argument (and scholars have long criticized this decision for poor legal reasoning), the passage reveals that the contradictions of the founders remained unresolved. The Court also went on to invalidate major provisions of the Missouri Compromise, declaring that Congress could not prohibit slavery in the territories, as doing so would unconstitutionally infringe upon slaveholders' property rights. This part of the decision probably made war inevitable, as many northerners interpreted the ruling as an indication that the logical next step would be for the Supreme Court to determine that the states could not prohibit slavery, either. An editorial in the Chicago Tribune a few days after the decision was announced warned that "Illinois is no longer a free state," reasoning that "if the new doctrine applies to Territories it must apply with equal force to the States." One can only speculate whether this would have occurred, but it was clear that the "house divided" could not stand, and the hope that the country could survive as half-slave and half-free was gone. The causes of the Civil War were complex and are still debated, but Lincoln said it best in his Second Inaugural Address, in a brief but powerful allusion to slavery: "All knew that this interest was somehow the cause of the war." A fair examination of the war itself is far beyond the scope of this essay, but the Emancipation Proclamation warrants a special mention. Recent historians have scoffed that Abraham Lincoln's legalistic document "freed no one"; indeed, it was a military proclamation that announced the freedom of only those slaves in the Confederate states not already under Union control. But this reading may miss the broader purpose of the document. Peter Kolchin explains that "the decree had enormous symbolic significance, transforming a conservative war to restore the Union into a revolutionary war to reconstruct it." Following the war, the Thirteenth Amendment to the Constitution permanently outlawed slavery throughout the United States. Although the president has no formal role in the amendment process, Lincoln actively lobbied for its passage and personally signed the congressional resolution that was sent to the states. He did not live to see it ratified. Mississippi author William Faulkner once wrote that "The past is never dead. It's not even past." In 1963, with a calculating eye on the southern states that would have been crucial to his reelection bid,
President John F. Kennedy declined an invitation to give the keynote address to mark the centennial of the Emancipation Proclamation. A century was apparently not enough time for the wounds to heal. Even today, the legacy of slavery remains an extraordinarily emotional issue for many Americans. The very controversial reparations movement is premised on the claim that the lingering effects of the system of human bondage continue to disadvantage the descendants of those slaves. But at a deeper level, it may be that we are still grappling with the contradictions of the founders, trying to understand the meaning of a country founded on both liberty and slavery. Further Reading Fehrenbacher, Don E. The Slaveholding Republic. Oxford: Oxford University Press, 2001; Finkelman, Paul. Slavery and the Founders: Race and Liberty in the Age of Jefferson. Armonk, N.Y.: M.E. Sharpe, 1996; Horton, James Oliver, and Lois E. Horton. Slavery and the Making of America. Oxford: Oxford University Press, 2005; Kaminski, John P., ed. A Necessary Evil? Slavery and the Debate over the Constitution. Madison, Wis.: Madison House, 1995; Kolchin, Peter. American Slavery: 1619–1877. New York: Hill and Wang, 1993; Levine, Bruce. Half Slave and Half Free: The Roots of the Civil War. New York: Hill and Wang, 1992; Thurgood Marshall's 1987 speech can be found at http://www.thurgoodmarshall.com/speeches/constitutional_speech.htm. —William Cunion
social contract The social contract is not an actual contract but a way of conceiving the relationship between the ruler and the ruled, one characteristic of the modern social contract theorists Thomas Hobbes (1588–1679), John Locke (1632–1704), and Jean-Jacques Rousseau (1712–78). Each of these theorists postulates a scenario called the state of nature, in which people are equally situated and motivated to come together and collaborate to create a political agent such as the state, to do what they individually or in small communities are unable to do, or unable to do as well. The state of nature story provides a lens for critical reflection on human nature, enabling the theorist to establish a political regime fit for human beings, with a political authority they would
consent to had they ever existed in this hypothetical state of affairs. All three classical social contract thinkers were concerned to establish that legitimate government is instituted by the people and is always accountable to their sovereign will. In general, social contract theory challenged earlier notions of politics as divinely ordained or given in nature and substituted for them a conventionally generated ideal to justify political authority. In Hobbes's state of nature, mankind was free and equal, though overwhelmingly insecure from one day to the next because there was no law and no recognized, common authority to promulgate law. Hence, each person was a law unto him- or herself, equally vulnerable to becoming a victim of someone else's search for power or glory, a search in which all things were permitted, including the use of other people's possessions and bodies. Hobbes reasoned that in such a survival-of-the-fittest environment, the people would be miserable and in utter despair were it not for the hope held out by human reason: that people could come together and establish peace, and then a political authority whose primary task it would be to maintain it. For Hobbes, the social contract united all people who escaped from the state of nature, their reason having suggested peace as well as the furtherance of peace through an ordered political regime, ruled over by an unquestionable and unchallengeable political authority. Reason would further suggest that the political authority must be established with all necessary powers to enforce the peace through the laws it wills to that effect, thus making it possible for people to envision a future in which they could grow old relatively secure and comfortable. The Lockean social contract was grounded in the voluntary consent of the people of a society to form a government and transfer to it just enough power to execute their political will without infringing on their rights and liberties. Locke is often referred to as America's political philosopher, and his adage that the state is to protect life, liberty, and estate is reflected in the American Declaration of Independence, itself a statement of the principle that the people are sovereign and the state must do their bidding or risk their recall of the authority and power entrusted to it. Locke was concerned to show both how government is legitimate, and how revolt is legitimate when the government perpetrates a chain of abuses of the
people’s trust, liberty, right to property, or other things important to the pursuit of the common good. Locke deploys the notion of tacit consent to address how subsequent generations can be bound by the original social contract. For Rousseau, the social contract establishes a new form of direct democratic rule in which state sovereignty is identical with sovereign citizens acting on their general will, living under only those laws they themselves have written for their common good and thereby achieving civil liberty, a substitute for the natural liberty they once enjoyed in the state of nature. The social contract responds to the paradox of legitimate government rule alongside men remaining as free as they were before its establishment. Rousseau enabled men to achieve the republican ideal of civic virtue, whereas in the earlier state of nature men were naturally good, though naïve and not virtuous because only through obedience to a moral law is virtue possible. One strong criticism of the social contract tradition represented in Hobbes, Locke, and Rousseau, is that the social contract rests on a hypothetical event, not an actual contract, while the consent envisaged is tacit, not actual; hence, it cannot be binding on real people. A recent critique argues that it privileges men who, unlike women until relatively recently, can enter into contracts on their own. Carole Pateman’s pathbreaking feminist work argued that women, such as the black slave women of the American colonies, were regarded as the subjects of contracts by their masters, wives by their husbands, and that the mutuality of contractual relations such as in marriage was in reality one-way in favor of the man. The reason for this favor was the social contract thinkers’ views of women as understanding through their feelings, not as men do through the use of their putatively exclusive abstract, universal reason. Far from providing an avenue of liberation, the social contract tradition solidified women’s already attenuated membership in the citizenry and restricted their freedom to participate in the civic life of the community according to the social mores of the time. The ethical writings of Immanuel Kant (1724– 1804) and the later political theory of John Rawls (1921–2002) are in the social contract tradition. Immanuel Kant regarded the social contract as implicit in reason and morally unavoidable, given the ideal
perspective of reason, which dictates both leaving the state of nature and uniting with everyone else in a legal social compact so as to secure that normative organization of the state that will allow the law of freedom and reason to operate as universally as possible. Seen through the lens of reason unblemished by the passions, the state is a special form of social contract because it has universal significance and is an end in itself, whereas all lesser societies and contracts within it are regulated by its constitutional principles, to which the people are legally and morally obligated. Rawls, the 20th century's foremost social contract theorist, regarded the political inclinations of the social contract tradition as implicit in the ethos of modern liberal-democratic societies, such as the United States. Rawls's signature conception of justice as fairness is an example of a social contract theory, because he regards this conception and its accompanying political principles as those that would be chosen by rational persons and acceptable to all rational parties in a suitably framed discursive scenario in which each person is free and equal. Rawls terms this device of representation the original position, rather than the state of nature used by previous social contract theorists. In the original position, a group of rational choosers deliberates and arrives at a conception of justice by which to order society from behind a veil of ignorance, which occludes from view the participants' actual knowledge of their particular conditions or of which conceptions of the good they favor. In this way, individuals cannot know what position they will have in the society they establish, such as whether they will be advantaged or disadvantaged; hence, they will promote those principles that would be fair to any person in the light of this initial fair and equal starting position. Rawls further believes that persons so situated would, subsequent to their deliberations, agree to allow whatever conceptions of the good regarding human flourishing reasonably can be permitted in the resulting political regime, and to abstain from any that could not be permitted, such as attaching the political regime to a particular religious perspective. As a device for representing our intuitive understanding of justice in a modern liberal democracy, the original position allows Rawls to argue for a society wherein any inequality that works an advantage
to anyone does so in favor of the least advantaged in the first instance. Thus, no matter how lowly a position in society a person comes to occupy, society is guided by political principles operating under a conception of justice (justice as fairness) that provides him or her with a reason for belonging to it, taking an interest in its prosperity, and cooperating to maintain it. For the social contract theorist, then, the state is legitimated because it is what any rational person would agree to or could not reasonably reject. While social contract theory, also known as contractarianism, postulates an association of free and equal individuals voluntarily consenting to form a state, it is not in itself a democratic theory, though the two are mutually reinforcing. The social contract unites the authority to rule with the obligation to obey, principally because the political regime was formed through the voluntary cooperation of rational individuals who were free to choose otherwise, yet were led by their reason to consent to a preferred choice that they imbued with desirable political qualities. It would be irrational not to feel obligated, as if under a contract, to give one's political allegiance to a regime one freely chose, a regime operating in the light of political principles or other considerations deemed important to oneself and agreeable to any reasonable person. Another sense of the social contract is the obligation it implies on the part of citizens to be mindful of it and not to risk rupturing the social fabric of society through lawless behavior. Binding a population characterized by diversity into a society, the social contract can be regarded as a metaphor for obligations shared by citizens with each other, and between the citizenry and the state. The founders used the term social compact to indicate the coming together of the people in the thirteen original American colonies to form and legitimate a united, national constitutional order through the device of constitutional conventions that expressed the consent of the people, with provisions such as elections and an amendment procedure to ensure that the continuing authority of the people is maintained. The American social compact was Lockean in that it reflected the individuals of an already existing society contracting with each other to form a government, rather than a compact or bargain articulated between the ruler and the ruled.
Further Reading Boucher, David, and Paul Kelly, eds. The Social Contract from Hobbes to Rawls. London: Routledge, 1994; Herzog, Don. Happy Slaves: A Critique of Consent Theory. Chicago: University of Chicago Press, 1989; Pateman, Carole. The Sexual Contract. Palo Alto, Calif.: Stanford University Press, 1988; Rawls, John. A Theory of Justice, Rev. ed. Cambridge, Mass.: Belknap/Harvard University Press, 1999; Replogle, Ron. Recovering the Social Contract. Totowa, N.J.: Rowman & Littlefield, 1989. —Gordon A. Babst
state There are currently 50 states in the United States, each defined by specific geographic boundaries and varying in size, resources, and population. Their origin lies in colonial history, when parcels of land on North America were given or sold to individuals or groups by royal charter from King Charles I and his successors to the British throne. Thirteen separate colonies had formed by 1776. At the conclusion of the Revolutionary War, a confederation existed between the states in which state sovereignty prevailed over a weak central government. Many men regarded the national government as inadequate, particularly in the areas of economic and foreign policy. A convention to revise the Articles of Confederation was convened, and the result was the eventual adoption of the U.S. Constitution. Under the new national government, any additional western land claims held by the original colonies were ceded to the union by 1802. These allotments were turned into states one at a time by acts of the U.S. Congress, which was granted the power to do so by the Constitution (Article IV). Spain and France also had control of areas on the North American continent that were acquired through treaty or purchase by the U.S. government; these were divided and later given statehood by Congress once the population had grown to acceptable levels in each territory to support a state government and constitution. The Constitution established a federal structure for governing the United States, meaning that there is a national level of government (confusingly, this is often referred to as the “federal” government), and each of the states has its own government. The executive,
legislative, and judicial branches at the national level are replicated in the states, but operate independently from their state counterparts. For example, state executives (governors) do not serve "under" the national executive (president), but are elected for their own terms of office and have independent authority. This does not mean that the sovereignty of each state is unlimited, or that national law is irrelevant to the running of states. Both the state and national governments have the power to tax citizens, pass laws, and decide how to spend tax money, but the Constitution's supremacy clause in Article VI establishes the national Constitution and laws as controlling in the event of contrary state law. The founders knew that state leaders would not have approved of the U.S. Constitution unless it preserved a central role for the states. Ratification of the Constitution was by state convention, and the amendment process requires the consent of three-quarters of the states. States have much authority over election law, including establishment of qualifications for suffrage and the administration of elections (although Amendments Fifteen, Nineteen, Twenty-four, and Twenty-six limited states' power to discriminate on the basis of race, sex, wealth, and age in defining the electorate). New parties and independent candidates who want access to general election ballots have to comply with state statutes that are biased, to varying degrees, in favor of the two major political parties. Members of the electoral college, who choose a U.S. president every four years, are chosen at the state level under rules primarily determined by the state legislatures. Population shifts between the states greatly affect their relative political import; as states in the south and west grow, so does their representation in the House of Representatives and in the electoral college. The importance of the state role in elections can be seen in the 2000 presidential contest, during which state and federal courts intervened to deal with revelations about faulty voting machines and counting errors in Florida. Florida's electoral votes were cast for the Republican, George W. Bush, and were decisive in making him the winner. States always have retained autonomy in the organization of lower levels of government. Local structures (cities, towns, and counties) exist at the states' discretion, with local responsibilities and organization
spelled out in state constitutions or statutes. States can create special districts within their boundaries to deal with specific functions, such as education, water, or mass transportation. These are created to make the delivery of services more efficient but can be confusing to citizens who reside in many overlapping jurisdictions. States can impose taxes on income, sales, gas, liquor, and cigarettes, and control which taxes may be levied by local governments (e.g., property and excise taxes). Historically, it has not been easy to discern which powers ought to belong solely to the states. Some sections of the Constitution have been subject to opposing interpretations, and the views of judges and elected officials concerning government responsibility have changed over time. Whether this change is good or bad depends on one’s political perspective. The changing balance of power between state and national governments has provoked much political controversy. Originally, the dominant perspective was that states were to keep whatever powers were not specifically given to the national government, as articulated in the Tenth Amendment of the Constitution. This was a familiar point of view among Americans, because any powers not expressly delegated to the national government had been reserved to the states under the Articles of Confederation. Nevertheless, this interpretation is arguable, because the definition of national power in the Constitution was ambiguous enough to allow for its expansion. As early as 1819, the United States Supreme Court favored the national government, when Chief Justice John Marshall declared that Congress had the power to charter a national bank, an authority that is not enumerated in the Constitution. Furthermore, the opinion of the Supreme Court held that states cannot tax the national bank, even though the Constitution contains no such prohibition (McCulloch v. Maryland, 1819). It is difficult to say precisely which areas of public policy should be assigned to each level of government because the boundaries of state and national authority under federalism are unclear. Generally, the 19th century is regarded as a period of “dual federalism,” during which the national level dealt with defense, foreign policy, currency, and some interstate commerce, leaving everything else to the states, including education, social policy, wel-
fare, health, and roads. Over time, a complicated partnership between state and national governments has grown. This is because demands for action from the public prompted the national government to intervene in the wake of events, such as economic depression, poverty, and the Civil Rights movement in the 20th century. In addition, the Supreme Court has interpreted the “commerce clause” in Article I of the Constitution in a way that allows the national legislature much latitude; Congress has been allowed to regulate all economic activities throughout the country that, when their aggregate effect is considered, have a substantial effect on interstate commerce, even if those activities occur wholly within one state (see United States v. Lopez, 1995; Wickard v. Filburn, 1942). Thus, the national government not only has imposed minimum wage laws, but has stepped in to “help” states to pay for things such as highways, social welfare programs, and education with federal tax dollars. Today, roughly 25 percent of state revenues come from Washington, D.C. This assistance appears benign, and state leaders want the money because it means they can provide more services to their citizens. But with those handouts come rules and restrictions that limit the autonomy of state and local politicians in policy areas previously reserved to them. States do not always have to follow the rules imposed by Congress; however, the consequence is that they will not receive any appropriations. In addition, many federal outlays come with a requirement that states “match” the national money with a certain percentage from the states, placing a burden upon governors and state legislatures to raise state taxes. Once a state’s population is accustomed to certain programs, it becomes exceedingly difficult for state politicians to eliminate what some regard as national interference. It should be noted that not all states benefit from federal assistance to the same extent. Some states (Alaska, New Mexico, North Dakota, and Montana) receive about $2 in grants for every $1 their citizens pay in federal taxes. At the other end, states like Connecticut, New Jersey, and Virginia pay more in federal taxes than is returned to them through program grants, and so they subsidize other states. The federal government has intervened in previously local and state policies, yet today an American
citizen’s daily life is affected greatly by state and local government and laws. Local and state governments employ more civilians than the national government does, and their combined expenditures exceed total federal domestic outlays. Variations in demographics and partisan affiliations across states affect opinions on a number of issues. State politicians, in turn, regulate their citizens’ behaviors to different degrees, in keeping with local liberal or conservative tendencies. Such differences mean that a person’s access to quality education, handguns, and abortion, to offer a few examples, depend on her/his state of residence. Policies can vary widely, as some states can tax residents to provide more health and welfare benefits to the needy than federal compliance requires. Nothing prevents states from being more generous than the national government, whether that generosity refers to health care benefits, civil and criminal rights, or environmental protection. However, generosity comes at a price that some states cannot afford; in addition, excessive state regulation and taxation of business (or of individuals) can drive taxpayers out of the state. Finally, world events affect the policy obligations of governments at all levels, causing state and local governments to intrude in what might be regarded as federal domains. Citizens expect local and state governments to respond quickly and effectively to natural disasters, such as Hurricane Katrina, which devastated New Orleans and other parts of the Gulf Coast in 2005. The federal government provides some emergency relief, but primary responsibility for ensuring the well-being of the people rests with the states. Dealing with terrorist attacks clearly is a matter of national defense; nevertheless, coping with the aftermath of an attack requires a direct response from local government, police and fire departments, and hospitals. Capable responses require advance planning, a huge commitment of resources, and intergovernmental coordination. Further Reading Gray, Virginia, and Russell L. Hanson, eds. Politics in the American States. 8th ed. Washington, D.C.: Congressional Quarterly Press, 2004; M’Culloch v. Maryland, 17 U.S. (4 Wheat.) 316 (1819); Shearer, Benjamin F., ed. The Uniting States. The Story of Statehood for the Fifty United States. Vols. 1–3.
Westport, Conn.: Greenwood Press, 2004; United States v. Lopez, 514 U.S. 549 (1995); Wickard v. Filburn, 317 U.S. 111 (1942). —Laura R. Dimino
states’ rights Advocates of states’ rights emphasize the sovereignty of state governments over a national government, the latter of which is to have finite, limited authority. The roots of this stance date back to the Articles of Confederation, when the original American states were a set of independent and equal sovereigns. No amendments could be adopted to the Articles without the unanimous approval of all states, and the national legislature at the time could not tax the people directly. After the Revolution, the Treaty of Paris recognized the independence of each of the individual states, listing each one separately. When the Articles failed to provide enough effectiveness and stability in the areas of foreign policy and economics, many advocated a convention to revise them; the end result was, as we know, the U.S. Constitution, adopted in 1787. Controversy over the balance of power between states and the national government marked the debates over the Constitution. Those who came to be known as the antifederalists were actually the strongest defenders of federalist principles. They warned that the Constitution favored a centralized government but should have made the states primary, equal and more politically powerful than the U.S. Congress. Their insistence on the equal weight of each state, coupled with their contention that the largest states were the worst governed, implied a defense of the small states. In discussions of the appropriate mechanisms for electing the Congress, then, it followed that they advocated equal representation of each state, regardless of size. In a compromise, states were granted equal representation in the U.S. Senate, but population was to determine the weight of a state’s delegation in the House of Representatives. This compromise largely overlooks the antifederalists’ theoretical arguments about how only a small, homogeneous republic of virtuous citizens, in close, frequent contact with their representatives, was the way to preserve individual liberty. States’ rights advocates also pointed to the
ratification procedure as a harbinger of bad things to come. The authors of the Constitution required only that nine of the 13 states approve the document for its adoption, and that ratification bypass the existing state legislatures and occur instead within special conventions. This process ignored the fact that until a new government was adopted, the Articles of Confederation provided that any change require the unanimous consent of all states. Even if one were to argue that the Articles had effectively been suspended (although they were not), such a status would give equal sovereignty to all 13 states. On the other side of the political fray were the federalists, whose usurped nomenclature perhaps best conveyed that they were the defenders of a stronger federal authority than had existed previously. The prevailing argument at the time was that the Constitution established a governing system that was part federal and part national. There was no doubt, however, that it was designed to correct problems that followed from a national, centralized government that was too weak; the new Congress was given the power to tax and, arguably, every power that the states previously had, in one form or another. In the end, the antifederalists made ratification of the Constitution contingent upon the adoption of a Bill of Rights, as the first 10 amendments are known. Not surprisingly, modern advocates of the states' rights perspective bolster their position by pointing to the Tenth Amendment of the U.S. Constitution, which states that any powers not explicitly delegated to the United States (the national government), nor prohibited to the states, are reserved to the states. Although the Tenth Amendment may sound quite clear, ambiguities in the Constitution have allowed for expansion of federal authority. For example, Article I, Section 8, Clause 18 grants Congress power to make all laws necessary and proper for carrying into execution its other powers. Article VI, Clause 2 declares that the Constitution and national laws are the supreme laws of the United States. If history is any judge, national power has grown at the expense of states' sovereignty; here we consider the issues of race and of congressional regulation of commerce. It is easy to view the Civil War as a moral dispute over slavery, but fundamentally, it also concerned states' rights and the balance of power between
states’ rights 107
regions of the country. Men who were slave owners when the Constitution was ratified were assuaged by the argument that the new national government had not been given specific power to emancipate the slaves. As decades wore on, and new states were added to the union, the nation had to keep revisiting the issue of slavery. For states' rights advocates, the answer was clear and easy: allow each state to decide whether slavery would be allowed within its borders. For them, slavery was, along with family affairs, education, morality, and public health, a "domestic institution" under the exclusive domain of the states. Hence, the Missouri Compromise of 1820, which drew a line that established a northern border for slavery, was considered unconstitutional by southerners. Still, the country lived with it until 1854, when the passage of the Kansas-Nebraska Act in Congress precipitated heated public debates that tore at the political system. The act placed no restrictions on slavery in newly organized territories west of Missouri and Iowa, repealing the Missouri Compromise. Free states held a majority of seats in Congress and a majority of electoral votes, but northern abolitionists feared what seemed to be the growing power of southern states. Their fears were fed further by subsequent bloodshed over the issue of slavery in Kansas, physical fighting in the U.S. Senate, the 1856 election of the Democratic, pro-southern president James Buchanan, and the United States Supreme Court's decision in what is known as Dred Scott (1857) that Congress did not have authority to prohibit slavery in the territories. The abolitionists were up in arms as a result, both metaphorically and, sometimes, literally. When, in 1857, the South remained virtually unscathed during a northern banking crisis, the experience only bolstered growing southern proclivities toward isolation from the rest of the nation. During the 1860 presidential election, Democrats in seven southern states abandoned the Democratic nominee, Stephen A. Douglas, who would not protect slavery in the territories. They nominated their own candidate, John Cabell Breckinridge, and that split helped secure the election of the Republican, Abraham Lincoln. Those seven states threatened secession if Lincoln won; after his victory, they kept true to their word. State conventions were convened (as they were for the process of ratification) and each state repealed its support of the union under the
Constitution. They supported secession as a way to preserve a particular way of life and economy, social and economic matters that previously had been left to the states. As the battlefields emptied, conflict persisted in the country over what to do with the rebellious states and the newly freed slaves. Reconstruction, which was a gradual process of readmitting the South into the union, largely involved the imposition of presidential or congressional authority upon state governments. This process demonstrated that national power could reach only so far, because the realization of day-to-day freedoms would require action and implementation by people at the local and state levels. Thus, the abolition of slavery did not immediately bring black voting rights or equal access to education, nor did it mean that blacks would no longer work in southern agriculture. In many ways, states' rights advocates could sidestep the intended consequences of changes in national law and constitutional amendments because of a federal system that recognizes state sovereignty, however indefinite its definition. Race relations continued to be a basis of states' rights disputes into the 20th century. In 1947, the President's Committee on Civil Rights produced a pro-minority rights report; the following year, the Democratic Party convention endorsed a strong civil rights plank with support from the non-South. In an action reminiscent of 1860, many southerners left the event and abandoned Harry S. Truman's candidacy to campaign instead for South Carolina governor J. Strom Thurmond. The States' Rights Democratic Party, or the Dixiecrats, received 7.3 percent of the electoral vote in 1948, with only 2.4 percent of the national popular vote. The concentrated regional support for segregation allowed their candidate to achieve such a result. In the 1960s, voting rights and equal treatment under the law for African Americans were goals that were still thwarted by recalcitrant states. Acts passed by Congress and U.S. Supreme Court decisions later achieved some successes, but only when states' rights advocates were not left alone to enforce the laws they opposed. While the abolition of slavery had an important impact on southern state economies, it should be noted that throughout the 19th century, the U.S. Supreme Court otherwise defended states' rights to regulate local economic relations against congressional encroachments. The Supreme Court saw the
production of goods as a process that preceded and was separate from "commerce," and thus beyond the reach of national legislation. Later, industrialization and the growing interdependence of states made it more difficult to separate local interests from the national market, and this allowed Congress to expand its powers through its original authority to regulate interstate commerce. So, over the history of the nation, as in the case of race relations, states' rights in other areas have dwindled in their scope. Because states' rights to control policy inevitably lead to inequalities in the treatment of Americans, the power of states has often been viewed as unjust. For example, a strict application of the states' rights doctrine would not allow for national standards for a minimum wage, basic health care or education, safe water, or voting rights. It also should be noted that states voluntarily have surrendered some of their control over state policies in exchange for national money to support programs favored by governors and legislatures. A far-reaching economic depression led to the New Deal programs during Franklin D. Roosevelt's presidency; states welcomed economic relief for their residents, but these programs established a lasting role for the federal government in social welfare. The 1980s and 1990s saw growing political support among national Republican candidates for cutbacks in federal spending and a devolution of authority back to the states. Still, hundreds of programs persist, and the national government's role in public health is increasing. Republican appointments to the Supreme Court are having some effect, however, in defending states' rights in certain policy areas. When, in the 1990s, Congress attempted to outlaw guns within a thousand feet of a school, to require state and local police to run background checks on handgun purchasers, and to legislate that victims of rape and other gender-motivated violence could seek damages from perpetrators in federal court, the Supreme Court declared these actions beyond its authority. Continuing public disagreement over social issues such as gender relations, gay marriage, and abortion means that the exercise of states' rights will remain fraught with controversy. Further Reading Goldwin, Robert A., and William A. Schambra, eds. How Federal Is the Constitution? Washington, D.C.: American Enterprise Institute for Public Policy
Research, 1987; McDonald, Forrest. States’ Rights and The Union. Lawrence: University Press of Kansas, 2000; Storing, Herbert J. What the Anti-Federalists Were For. Chicago: The University of Chicago Press, 1981. —Laura R. Dimino
supremacy clause In the United States, the U.S. Constitution is considered “the supreme law of the land.” Article VI, Clause 2 of the Constitution established what is known as the supremacy clause, which makes national law supreme over state law when the national government is acting within its constitutional limits. It states: “This Constitution, and the Laws of the United States which shall be made in Pursuance thereof; and all Treaties made, or which shall be made, under the Authority of the United States, shall be the supreme Law of the Land; and the Judges in every State shall be bound thereby, any Thing in the Constitution or Laws of any state to the Contrary notwithstanding.” John Marshall, who served on the United States Supreme Court from 1801 to 1835 as chief justice, was one of the most influential proponents of a strong nationalist view for the federal government. Prior to Marshall’s appointment in 1801, the Supreme Court had held that the supremacy clause rendered null and void a state constitutional or statutory provision that was considered inconsistent with a treaty executed by the federal government. However, Marshall would more firmly define the doctrinal view of national supremacy as applied to acts of Congress in two landmark cases: McCulloch v. Maryland (1819) and Gibbons v. Ogden (1824). The decision in McCulloch v. Maryland (1819) ranks second only to Marbury v. Madison (1803) in importance in American constitutional law, not only in relation to the powers of Congress but also in terms of federalism. The case involved the constitutional question of whether or not the United States could charter a federal bank, and whether or not a state could levy a tax against it. The ruling in this case asserted national authority through both the necessary and proper clause and the supremacy clause. The charter for the First Bank created by Congress had lapsed in 1811. Congress then established the Second Bank in 1816. Several states, including Mary-
land, attempted to drive the bank out of existence by levying taxes, but the head cashier of the U.S. Bank in Maryland, James McCulloch, refused to pay the tax. The ruling in the case was a clear statement about implied powers; if the government had authority to tax, borrow money, and regulate commerce, it could establish a bank to exercise those powers properly (through the necessary and proper clause, also referred to as the elastic clause). In his majority opinion, Marshall resorted to a loose interpretation of the Constitution to justify Congress's authority to create the Second Bank of the United States. He rejected the idea of a strict interpretation of the Constitution then supported by states' rights advocates. Such a reading would make the document unworkable, and Marshall argued that the necessary and proper clause had been included among the powers of Congress, not among its limitations, and was meant to enlarge, not reduce, the ability of Congress to execute its enumerated powers. Marshall also invoked the supremacy clause, stating that the power to tax included the power to destroy. Therefore, if a state could tax the bank, then it could also attack other agencies of the federal government, which could allow the total defeat of all the ends of the federal government. In the opinion, Marshall stated that "the States have no power, by taxation or otherwise, to retard, impede, burden, or in any manner control, the operations of the constitutional laws enacted by Congress to carry into execution the powers vested in the general government. This is, we think, the unavoidable consequence of that supremacy which the Constitution has declared." In 1824, the Supreme Court also ruled in Gibbons v. Ogden that Congress had the right to regulate interstate commerce, stating that the commerce clause of the Constitution was a broad grant of national power to develop the nation as a nation, and not just as a collection of states. In this case, the state of New York had granted a monopoly of steamboat operations between New York and neighboring New Jersey to Robert Fulton and Robert Livingston, who then licensed Aaron Ogden to operate the ferry. Thomas Gibbons operated a competing ferry, which had been licensed under a 1793 act of Congress. Ogden obtained an injunction from a New York state court to keep Gibbons out of state waters, arguing that the state had legitimate authority to regulate this form of commerce.
Gibbons then sued for access to New York waters, and the case was appealed to the Supreme Court. The Court ruled in favor of Gibbons, declaring New York’s grant of the monopoly to be null and void based on federal supremacy. In his majority opinion, Marshall stated that “In argument, however, it has been contended, that if a law passed by a State, in the exercise of its acknowledged sovereignty, comes into conflict with a law passed by Congress in pursuance of the Constitution, they affect the subject, and each other, like equal opposing powers. But the framers of our Constitution foresaw this state of things, and provided for it, by declaring the supremacy not only of itself, but of the laws made in pursuance of it. The nullity of an act, inconsistent with the Constitution, is produced by the declaration, that the Constitution is the supreme law. The appropriate application of that part of the clause which confers the same supremacy on laws and treaties, is to such acts of the state legislatures as do not transcend their powers, but though enacted in the execution of acknowledged State powers, interfere with, or are contrary to the laws of Congress, made in pursuance of the Constitution, or some treaty made under the authority of the United States. In every such case, the act of Congress, or the treaty, is supreme; and the law of the State, though enacted in the exercise of powers not controverted, must yield to it.” In the aftermath of these two landmark rulings, a precedent was set that forced state and local laws and/or policies to yield in the face of legislation by Congress that was pursuant to its delegated powers (either those powers that are enumerated, or implied through the necessary and proper clause). This is known as preemption, and has happened frequently with issues arising under the commerce clause. In other cases, if a state is participating in a federal program (such as entitlement programs like social security), any state laws that are enacted which may be contrary to federal law are considered void. The theory of nullification, however, also played a role in the debate over national supremacy in the early part of the 19th century. Supported by many who were proponents of stronger states’ rights, the theory of nullification referred to the view that a state had the right to nullify, or invalidate, any federal law that the state has declared unconstitutional. This theory was based on the view that the states—as
sovereign entities—had originally formed the Union, and therefore, only the states should have the final say in determining the powers of the federal government. This question of whether or not a state can refuse to recognize a federal law passed by Congress and signed by the president caused what is known as the Nullification Crisis in 1832, during the presidency of Andrew Jackson. Congress had passed the Tariff of 1828 (also known as the Tariff of Abominations, because several southern states found the tariff to be an undue financial burden). In November 1832, South Carolina adopted an ordinance that nullified the act, stating that, if necessary, it would defend its decision against the U.S. government by military force or even by seceding from the Union. (South Carolina was also home to John C. Calhoun, who served as vice president and also represented his state in the U.S. Senate, and who had been a strong proponent of slavery and the protection of states' rights.) In December 1832, Jackson warned the state, through a proclamation, that it could not secede from the Union. The crisis was temporarily averted since no other state at that time was willing to follow the lead of South Carolina. However, many southerners had been sympathetic to the theory of nullification as tested in South Carolina, and the conflict helped to develop the theory of secession (which ultimately led to the start of the Civil War in 1861). In the end, Marshall's view of national supremacy marked an important era in the early days of the American government by asserting that national power took precedence over state authority. Of course, that period was followed by an era more supportive of states' rights while Roger B. Taney served as chief justice from 1836 to 1864 (Taney was appointed by Jackson following Marshall's death in 1835). While the debate between a stronger national government (often referred to as cooperative federalism) and a stronger states' rights view (often referred to as dual federalism) has shifted back and forth throughout American history, with differing interpretations in various majority opinions from the Supreme Court, the supremacy clause has been vital to maintaining a balance between the federal government and the states. Further Reading Epstein, Lee, and Thomas G. Walker. Constitutional Law for a Changing America: A Short Course. 3rd
ed. Washington, D.C.: Congressional Quarterly Press, 2005; Hall, Kermit, ed. The Oxford Guide to United States Supreme Court Decisions. New York: Oxford University Press, 1999; O’Brien, David M. Constitutional Law and Politics, Vol. 1, Struggles for Power and Governmental Accountability. 6th ed. New York: W.W. Norton, 2005; Stephens, Otis H. Jr., and John M. Scheb II. American Constitutional Law. 3rd ed. Belmont, Calif.: Thompson, 2003. —Lori Cox Han
totalitarianism Totalitarianism refers to a political ideology that has been totalized, or made to apply to everything within its purview, without exception, so as to effect a single-minded and total transformation of society. A totalitarian political regime is one in which the operating political ideology provides the rationalization to determine all public and private interactions, and in which the reach of the state is so extensive as to almost eradicate any private sphere. While totalitarianism appears to be similar to tyranny, which is a long-established form of political regime famously discussed in Plato's Republic, the latter turns on the figure of the tyrant, while the former turns on the regime's ideology. Totalitarianism has been compared to authoritarianism, fascism, and communism, because all of these political regimes are in practice run by a leader or ruling elite that brooks no opposition. Totalitarianism also has been identified with certain 20th-century failed states, though it would be a mistake to assume that the lure of totalitarian control or fashioning of a society is a thing of the past, never to return. The framers of the U.S. Constitution and other political thinkers of the modern period focused their attention on tyranny, and understood it as something to avoid because it conflicted markedly with classical liberalism's basis in the inherent equality of all men, with no one by nature qualified to rule over the rest of the citizenry. The figure of the tyrant obviously conflicted with the democratic aims of the founding generation. Tyranny, such as the personal rule of a despotic prince, also conflicted with republicanism's centrality of the rule of law, law that is already written and not capriciously man-made. In keeping with their liberal and republican political
ideas, the framers understood private property as a bulwark against tyranny, and against any overreach of the authority of the state in their scheme of limited government. While the avoidance of tyrannical government was a main factor in the design of the American constitutional regime, with its separation of powers among coordinate branches of government, the framers could not have imagined the extremes and concentration of power represented by modern totalitarianism. According to Hannah Arendt's classic study of totalitarianism, it differs from tyranny because this form of political rule tends to dominate all aspects of the political and social worlds of a nation-state, destroying civil society and leaving little or no private realm for people to escape into. In addition, while a tyranny is characterized by the imposition of the personal will of the tyrant, in the totalitarian regime all individuals are rendered superfluous to it, even to the extent that although the political ruler might exude a strong personality, he too must conform to the prevailing ideology, if not emblematize it. Arendt was especially concerned about totalitarianism's logical inclination and capacity to eradicate the pluralism that democratic societies encourage, because totalitarian regimes in practice had already acted in this manner and shrunk the space in which freedom can appear through people acting spontaneously and declining to conform. The single-mindedness of totalitarian regimes demanded a response from the rest of the world, either to defeat their imperial ambitions or to contain them. Unlike under fascism, where individuals are hierarchically organized under the state and ennobled through submission to the state and to the rule of the dictator, totalitarianism requires the control over a population necessary to subdue its members into carrying out the state's ideology and believing in their own helplessness to do anything but conform to it. Nonetheless, when dictator Benito Mussolini spoke of the fascist Italian state as lo stato totalitario, he introduced the term into popular parlance. Both fascist and totalitarian states are characterized by the brutality of their enforcement and policing methods in coercing their people into obedience. In both Mussolini's Italy and Adolf Hitler's Germany, many people already were inclined to conform to political messages that appealed to their sense of nationalism,
nostalgia for the greatness of their ancestors, or common ideas about racial superiority or purity. Totalitarianism is regarded as a 20th-century phenomenon because it relies on comprehensive ideological sloganeering to manufacture enthusiasm for the regime, and on technologies of surveillance and control previously unavailable to the state to maintain its organization of society and its control over it. These technologies include a mass party apparatus, communications methods utilizing mass media, widespread, effective, and systematic use of terror to intimidate, and, ultimately, tightly controlled use of instruments of violence monopolized by the state. For most everyone, life in a totalitarian society was highly regimented, and for the ordinary person it was filled with fear and distrust of everyone else, except for a few very close friends and family members with whom a person could dare to share their honest opinions, rather than the patriotic fervor expected of everyone at all times. The totalitarian regime might also have an established cult of personality or other form of leader worship, such as characterized the political rule of Joseph Stalin in the Soviet Union, though this was far less religiously inclined than was the prototypical cult of the Roman emperor. Indeed, the 20th century's totalitarian regimes were secular regimes that eschewed religion. The dictatorships of the ruling party in the communist regimes of the Soviet Union and mainland China may be identified as totalitarian, while the present-day communist regimes of North Korea and Cuba are better regarded as authoritarian, because authoritarianism, whether individual or collectivist, seeks merely to monopolize political power and usually turns on the personification of the basis of authority (e.g., a monarch or military leader) and lacks the comprehensive ambition and cohesive capacity of the totalitarian regime. Authoritarian regimes can be regarded as either benevolent, where constitutional democratic means do not obtain and a crisis must be faced with a unified front on behalf of the people, or malevolent, such as a dictatorship that regards the people as instrumentally useful at best. Benevolent authoritarianism might include a temporary regime of occupation following the ouster of a profoundly unjust regime, such as one under malevolent authoritarianism. One notable similarity between totalitarian and authoritarian regimes, however, is their tendency
to rely on a combination of loyalty and fear to effect the psychological manipulation of the people, to severely restrict the little freedom that is available to dissenters, and even to commit democide. The additional feature of a centrally directed economy is common to both totalitarian and communist regimes. In the 21st century, the question arises whether a fundamentalist theocratic regime is inherently totalitarian. The theocracies of past centuries did not have available to them the modern technologies necessary to effect the totalizing influence and dominance of the state, though they arguably had available to them a basis in ideas—here, religious—that could work a totalizing effect on society. In addition, bygone theocracies did not occur in political regimes that were also states. While the religious establishments of the past may not necessarily have aspired to political rule, the question can be raised about contemporary theocracies grounded in Islam, such as the Taliban’s former rule over Afghanistan and the cleric-dominated state of Iran. Given the present organization of most societies into states, a theocracy could substitute its religion for ideology and so bring into being a totalitarian regime, even in places where sophisticated technologies of surveillance and control are lacking. Both totalitarianism and the potential totalitarian theocratic regimes present the extreme danger of uniformity eradicating pluralism through the effective subordination of individuals to the reigning and all-encompassing ideology or religious doctrine. Both situations feature a combination of orthodoxy (unanimity in correct opinion) and orthopraxy (unity in correct practice) that leave little, if any space for dissent in belief or action. In practice, however, totalitarianism and related political regimes may be far more vulnerable to decline over time and/or overthrow because of the economic costs associated with maintaining the level of violence and seclusion from the outside necessary to keep a population in check and uninformed about alternative political systems, and the insecurity that comes with reliance on the faith of the people in the face of their lived reality of being in a society that is manifestly unconcerned about them and their material well-being. There has been speculation that contemporary globalization and the spread of human rights and democratic ideals are eroding the significance of the nation-state and, hence, reducing the threat that
totalitarian political regimes will reemerge. There are also commentators who believe either that ideology is no longer an important consideration, or that the historical evolution of great rival societies ended, in a sense, with the final collapse of the Soviet Union in 1991, and that humankind has therefore moved beyond brutal, all-encompassing visions such as were manifested in totalitarianism. While the increased diffusion and interpenetration of a great variety of cultural, economic, and political practices presently under way is bringing the pluralism of the world's peoples to all corners of the globe, totalitarianism has become unlikely, but not unthinkable. Wherever the broad social impacts of globalization are deemed unwelcome, such as in fundamentalist theocratic regimes, and people look to the state as having the capacity to resist pluralism in the name of nationalism or racial, ethnic, or religious purity, the potential is there for political leaders and followers who are attracted by the lure of total control or transformation of society to guide the state by the light of a locally crafted, logical, and compelling ideology. Further Reading Arendt, Hannah. The Origins of Totalitarianism. San Diego, Calif.: Harcourt Brace Jovanovich, 1951; Bell, Daniel. The End of Ideology: On the Exhaustion of Political Ideas in the Fifties. Cambridge, Mass.: Harvard University Press, 2000. —Gordon A. Babst
Virginia Plan On May 29, 1787, Edmund Randolph, governor of Virginia, presented to the delegates of the Constitutional Convention what came to be called the Virginia Plan. Also known as the Randolph Resolution, the Virginia Plan would serve as the basis of debate for much of the convention and, in large measure, was the blueprint of the U.S. Constitution. Understanding the significance of its call for a strong national government that would dominate state governments and its outline of three separate governing powers requires a working knowledge of the historical context, the structure of government under the Articles of Confederation, and the general state of the union a decade after the signing of the Declaration of Independence.
Having rebelled against what they saw as the tyrannical rule of British king George III, the colonists initially sought to institute a government incapable of repeating the abuses they had chafed under as British colonies. They wanted to avoid concentrating power in the hands of any individual or central, far-off government with little knowledge or care for local needs. They achieved their goal by establishing a central government that could do almost nothing at all. The Second Continental Congress proposed the Articles of Confederation, which would serve as the basis of the new government, in 1777. Ultimately ratified in 1781, the Articles centered power in the place the colonists trusted most: the state governments. The Articles established “a firm league of friendship” in which “each state retains its sovereignty, freedom, and independence.” It provided for no national executive or judiciary and the national Congress had no power to raise taxes, regulate commerce between the states, or enforce laws on the states. The only real source of national revenue consisted of Congress requisitioning funds from the state legislatures, essentially begging the states for money. The states could refuse to contribute without much trouble and regularly did so. This state-dominated government led to major problems since the states looked to their own interests at every turn. The states had long been the colonists’ primary political identity. When General George Washington requested a group of New Jersey militia to swear loyalty to the United States, they rebuffed him, claiming “New Jersey is our country.” The newly independent Americans continued to think of themselves primarily as New Yorkers, Virginians, or Pennsylvanians, rather than Americans. States often engaged in fierce rivalry, trying to advance their own economies by sending little money to Congress and raising stiff trade barriers against each other. Each state used its own currency, making any financial dealings across states a complicated financing headache. State rivalry and a Congress powerless to encourage cooperation or raise taxes left the fledgling nation in economic shambles. State economies faced severe deflation and Congress could not raise enough money from the states to pay the foreign debts it accumulated during the Revolutionary War, threatening the country’s future credit. Economic hardship agitated
class relations, leading to conflict among rural farmers and urban gentry. Farmers in South Carolina, Virginia, Maryland, Pennsylvania, New Jersey, and Massachusetts took up arms in reaction to economic policies, while an armed mob swarmed the New Hampshire legislature. Internationally, the United States was something of a joke. Great Britain brazenly maintained several military posts on U.S. territory in the northwest, defying the Treaty of Paris, with the U.S. military being far too weak to force the British into compliance. The British, along with the Spanish, provided military aid to various Ohio Valley tribes as they resisted American expansion and settlement. The American military could neither defend its citizens in the west nor prevent or punish the foreign powers for their role as agitators and suppliers. Meanwhile, American military weakness left shipping routes vulnerable. In 1785, Barbary Coast pirates captured an American merchant ship, stole the cargo, and held the crew for ransom. Congress did not have the money to pay the ransom or the price for which Tripoli had offered to secure the trade route. The United States had also become an embarrassment to the men who had created it. Men like James Madison wrote of the “vices,” “embarrassments,” and “mortal diseases” of the political system. In 1786, when armed farmers in western Massachusetts wreaked havoc during Shays’s Rebellion, the steady-handed George Washington confessed to being “mortified beyond expression.” More and more public figures were convinced that a stronger national government would be necessary to protect the states from internal and external threat, stimulate economic progress, and maintain the experiment begun in 1776. Eventually, the states sent delegates to Philadelphia in the summer of 1787 to consider altering the Articles of Confederation accordingly. James Madison, a Virginian set on establishing a strong national government, arrived in Philadelphia on May 3, 1787, well ahead of most delegates. It took several days for delegates from enough states to make the long and dangerous journey on ill-suited roads to make the convention legal. In the interim, Madison worked with other nationalists who had arrived in Philadelphia, including George Washington, Robert Morris, Gouverneur Morris, and Benjamin Franklin, to prepare a reform proposal that
the convention could work with. Although a collective effort, the Virginia Plan consisted mainly of Madison’s ideas. Once there were enough delegates to form a quorum and all the administrative details of the convention had been settled, Edmund Randolph presented the Virginia Plan on the first day of substantive action. Although the first of the plan’s 15 resolutions stated that the Articles of Confederation would merely be “corrected and enlarged,” the plan effectively proposed a new constitution granting far more power to the national government. The plan’s proposal was for a strong national government that could check the state rivalries that plagued the nation, build the economy, and guarantee rights and liberties to citizens of all states. To protect against abuse of these greater powers, this national government would be split into three separate parts, the executive, judiciary, and legislature, which itself was broken into two houses. Furthermore, the system was based in part on the principle of republicanism, which allowed citizens to elect their governing officials. The plan itself was somewhat vague regarding the actual powers of government, but clearer about its structure. In terms of specifics, the plan called for a bicameral legislature. The number of each state’s representatives to both houses of the legislature would depend on the state’s population, larger states having more representatives and thus more power in the legislature. Representatives to the legislature’s first branch would be popularly elected. The members of the second branch would be nominated by state legislatures and then elected by the members of the first branch. Each of these proposals marked a significant break from Congress under the Articles of Confederation, which was unicameral, gave each state the same power, and enabled state legislatures to appoint and recall members of its delegation. The plan granted the new legislature broad power to enact laws “in all cases to which the separate states are incompetent,” although the definition of incompetence was left unclear. The legislature could “negate” certain state laws and use force to compel states to fulfill their duties to the national government. All in all, the new legislature would have the power to counter the state legislatures that had dominated and incapacitated the Congress under the Articles of Confederation.
The plan also provided for an executive and a judiciary, the members of which would be chosen by the legislature. The plan left unclear whether the executive would consist of one person or a committee. The length of the executive’s term in office was also unspecified. The executive and some unspecified number of the judiciary would form a Council of Revision to examine acts passed by Congress. The council could reject acts before they were enacted, although such a rejection could be overcome if an unspecified number in each legislative branch determined to do so. For its part, the judiciary was to consist of “one or more supreme tribunals and of inferior tribunals,” which would have national jurisdiction. The plan guaranteed the members of the executive and the judiciary a salary that could not be altered during their tenure to limit the possibility of financial reward for those who did the legislature’s bidding or punishment for those who crossed the legislature. The specific powers of the executive and judiciary, other than the power to reject legislative acts, were left unclear. The remaining resolutions provided for the admission of new states to the union and required that new states be governed by “Republican Government,” that the new governing document could be amended (although the actual process of amending it was unspecified), that members of the legislature, executive, and judiciary swear “to support the articles of the union,” and that the changes ultimately proposed by the convention be ratified by state assemblies whose members were popularly elected by the citizens of each state. The delegates spent the next two weeks discussing specific provisions of the Virginia Plan. On June 13, a slightly modified and elaborated version of the plan was officially reported. Although Madison had moved the convention quickly toward a national government, his success was not yet to be realized. On June 14, William Paterson of New Jersey asked for more time to consider the plan before officially voting to accept or reject it and to propose an alternative, “purely federal” plan (as opposed to the nationalist Virginia Plan). Paterson’s suggestion to consider a plan less dramatically different from the Articles of Confederation, one that granted less power to the national government, was welcome even among delegates like Washington and Franklin, who desired a stronger national govern-
ment than the Articles afforded, but who feared Madison’s plan may be going too far. The combination of what political historian Clinton Rossiter termed “far out, almost militant nationalism” and the proposed system of population-based representation in both legislative branches worried both those who feared excessive central powers and delegates from small states that stood to lose substantial clout in the legislature. The next day, Paterson presented what would be called the New Jersey Plan, a moderate amending of the Articles of Confederation. Ultimately, the convention rejected the New Jersey Plan and moved forward with the modified Virginia Plan, but the next weeks of debate led to significant revisions. Most notably, the Great Compromise established a scheme of population-based representation in the House of Representatives and equal representation in the Senate, while the electoral college replaced the legislature as the body that would select the executive. Despite these and other significant revisions, it is clear that the major goals of the Virginia Plan—a stronger national government, separated powers, and republicanism— had largely been met. Two points bear noting for the student of American politics. First, the eventual adoption of many of the Virginia Plan’s guiding ideas speaks to the power of controlling the agenda. Madison, by all accounts an astute tactician during the convention, boldly seized the agenda by offering a proposal at the convention’s outset. By immediately proposing to alter fundamentally the Articles of Confederation, Madison opened the possibilities of reform in the delegate’s minds. His proposal made clear that the convention could “think big,” that dramatic changes were possible, an important step, given the convention’s limited mandate to tinker with the Articles of Confederation. In addition, by moving first, Madison defined the terms of debate. That the delegates spent the convention’s first two weeks entirely devoted to elements of the Virginia Plan and then used the plan as the baseline from which they would sometimes deviate had a great deal to do with the eventual acceptance of many of the plan’s main ideals. The importance of controlling the political agenda and the terms of debate continues to be seen in contemporary politics in the significance of the rules committee in the House of Representa-
tives, the House and Senate leadership, and congressional committee chairs, all of whom control various aspects of the legislative agenda. Likewise, the president’s ability to use the “bully pulpit,” the extensive media attention devoted to him, to set the nation’s political priorities and define the terms of political debate is one of the presidency’s most potent tools. Second, the system of separated powers that was at the center of Madison’s creative political genius has largely defined political dynamics throughout American history. The United States has seen ongoing struggles for dominance among the branches. Originally dominant, the legislature lost considerable power to the United States Supreme Court, which quickly claimed the power of judicial review, and to the presidency, which is now the central figure in American politics. In the era of the modern presidency, typically defined as the period from Franklin D. Roosevelt’s presidency onward, presidents have attempted to dominate the judiciary (most clearly seen in Franklin Roosevelt’s court packing proposal) and seize from Congress the reins of military and foreign policy decision making. Congress has not sat idly by, challenging the executive most notably by establishing the War Powers Resolution (over President Richard Nixon’s veto), which regulates the president’s powers as commander in chief, by challenging and increasingly often rejecting presidential appointees to the courts and executive posts, and ultimately by taking up impeachment proceedings against Nixon in 1974 and actually impeaching President Bill Clinton in 1998. Madison designed the separate branches to protect against concentrating power in the hands of a few. Perhaps the survival of the Constitution and the public order for well over two centuries is evidence of his success. Ironically, it is also the root of the American public’s disdain for government. Americans regularly complain about the slow process of getting different branches to agree and the squabbling among officeholders that inevitably arises in this process, especially in eras of divided government in which different parties control the presidency and Congress. This is precisely what Madison had in mind. As Barber Conable once quipped, the government is “functioning the way the Founding Fathers intended—not very well.”
Further Reading For the text of the Virginia Plan, see Yale University's Avalon Project at http://www.yale.edu/lawweb/avalon/const/vatexta.htm; Berkin, Carol. A Brilliant Solution. New York: Harcourt, 2002; Bowen, Catherine Drinker. Miracle at Philadelphia: The Story of the Constitutional Convention, May to September 1787. Boston: Little, Brown, 1966; Hibbing, John, and Elizabeth Theiss-Morse. Congress as Public Enemy: Public Attitudes Toward Political Institutions. New York: Cambridge University Press, 1995; Morris, Richard Brandon. The Forging of the Union, 1781–1789. New York: Harper & Row, 1987; Rossiter, Clinton. 1787: The Grand Convention. New York: The Macmillan Company, 1966; Smith, David G. The Convention and the Constitution: The Political Ideas of the Founding Fathers. New York: St. Martin's Press, 1965. —Brian Newman
CIVIL RIGHTS AND CIVIC RESPONSIBILITIES
affirmative action
First mentioned in the 1935 National Labor Relations Act, the term affirmative action implied that government agencies should prevent discrimination against African Americans. As a result, several states passed laws banning discrimination against African Americans in hiring practices, although little was actually done to uphold the intent of these laws. Affirmative action as public policy, which provided preferential treatment in hiring, promotions, and college admissions for African Americans, really took shape in the 1960s with the national prominence of the Civil Rights movement. Following passage of the historic Civil Rights Act of 1964 and the Voting Rights Act of 1965, President Lyndon Johnson established two federal agencies—the Equal Employment Opportunity Commission in 1964 and the Office of Federal Contract Compliance in 1965—that, along with an executive order that he signed in 1965, began to implement racial hiring quotas for businesses and government agencies. The goal of Executive Order 11246 was to "take affirmative action" toward prospective minority employees in all aspects of hiring and employment, and contractors were to take specific actions in this regard and document all affirmative action efforts. In 1967, the executive order was amended to include gender as a category deserving the same hiring and employment preferences. Affirmative action as a federal public policy continued during President Richard Nixon's administration. Nixon initiated what was known as the "Philadelphia Order," which guaranteed fair hiring practices in the construction industry through the use of specific goals and timetables. Philadelphia had been chosen as a test case city for the program due to the industries there being "among the most egregious offenders against equal opportunity laws" that were "openly hostile toward letting blacks into their closed circle." According to Nixon, the federal government would not impose quotas per se, but would require federal contractors to show "affirmative action" in meeting the goals of increased minority hiring and promotions. By 1978, the United States Supreme Court had weighed in on the issue of affirmative action. In Regents of the University of California v. Bakke, a 37-year-old white student who was twice denied entrance to the University of California Davis Medical School sued the university. He claimed discrimination, since his entrance exam (MCAT) score and grade point average were higher than those of 16 minority students who had been accepted under a set-aside policy (a quota system guaranteeing spots in the entering class to minority students). The California Supreme Court ruled that the set-aside program was a violation of equal protection. The university appealed to the U.S. Supreme Court, but the Court also ruled in favor of Bakke. In a 5-4 decision, the Court voted to end the quota system while still endorsing affirmative action in the abstract (in this case, a compelling interest for diversity in medical school admissions). In its ruling, the Supreme Court, under Chief Justice Warren Burger, ruled that achieving
diversity in student bodies is permissible and does meet the fourteenth Amendment standard for equal protection as long as quotas, which were ruled unconstitutional, were not used. Following the Bakke decision, three main points emerged for courts to consider in affirmative action cases: compelling interest in the government’s justification for such a program, the strict scrutiny test, and the need for educational diversity (which Associate Justice Lewis Powell argued for, a term that would be used by most defenders of affirmative action programs in university admissions). After the ruling, the court upheld affirmative action quotas in other areas, including federal public works contracts for minority-owned business. However, the Court during the 1980s, with several conservative appointments during the Reagan and Bush years, narrowed the scope of affirmative action, and in its 5-4 decision in Adarand Constructors, Inc. v. Pena (1995), ruled that the federal government could not use quotas in awarding federal contracts. Throughout the 1990s, affirmative action continued to be a controversial political issue with many groups trying to end the policy altogether. A large blow to affirmative action came in 1996 with the passage of Proposition 209 in California, a voter-sponsored initiative to amend the California Constitution to end all racial, ethnic, and gender preferences in college admissions, public jobs, and government contracts. The initiative, supported by University of California Regent Ward Connerly and the California Civil Rights Initiative Campaign and opposed by many affirmative-action advocacy groups, passed by a 54-46 percent margin. The constitutionality of the proposition was immediately challenged, and a U.S. District Court blocked the enforcement of the measure. A three-judge panel of the 9th Circuit Court of Appeals then overturned the ruling, but the U.S. Supreme Court refused to grant a writ of certiorari in 1997, which upheld the Circuit Court’s ruling to let the initiative go into effect. The Board of Regents of the University of California, along with the University of Texas Law School, had earlier voted an end to affirmative action in university admissions. Since then, there has been a dramatic decline in the number of minority admissions at each school, especially at University of California law schools. The U.S. Supreme Court had also denied a grant of certiorari in Hopwood v. Texas (1995),
which left intact a U.S. Court of Appeals for the Fifth Circuit decision declaring that the University of Texas Law School could not use race as a deciding factor in admissions. The Court denied the writ on the last day of the term, despite heavy lobbying by the Clinton administration, the District of Columbia, and nine other states that wanted to see the lower court decision overturned. The U.S. Supreme Court again revisited the issue of affirmative action in 2003 with two cases stemming from admissions policies at the University of Michigan. With his election in 2001, President George W. Bush and his administration had lobbied hard against affirmative actions policies. When the Supreme Court received for the 2002–03 term the University of Michigan cases that represented the legal challenges to racial preferences in the admissions process for both undergraduates and law students, Bush rejected a Justice Department brief that opposed the “diversity” justification and instead had the Department of Justice produce an amicus brief that opposed Michigan’s preferences on narrowly tailored mathematical grounds yet remained silent on the main issue of the case—the promotion of diversity as a compelling state interest. In a 6-3 vote in Gratz v. Bollinger (2003), for which Chief Justice William Rehnquist wrote the majority opinion, the Court struck down the undergraduate practice that awarded admission points for ethnicity as unconstitutional, thereby declaring that it did not meet the Fourteenth Amendment standard. However, in a surprise 5-4 decision in Grutter v. Bollinger (2003), with the majority opinion written by Associate Justice Sandra Day O’Connor, the Court upheld the more ambiguous use of race as one of several deciding factors in law school admissions based on the state’s compelling interest to achieve a “critical mass” of students from groups that had historically been discriminated against. O’Connor argued that the policy was narrowly tailored to meet the standard of a compelling interest, and in what received much public debate afterward, stated “We expect that 25 years from now, the use of racial preferences will no longer be necessary to further the interest approved today.” Most court watchers had predicted that the Rehnquist Court would strike down both admissions programs as unconstitutional, following the rejection of
affirmative action programs in government contracts by the same court in the 1995 Adarand v. Pena ruling. In spite of the urging of the Bush administration, along with numerous conservative political groups, to bar preferential college admissions for minorities, a more powerful force seemed to emerge from 65 Fortune 500 companies and a group of retired military officials that besieged the Court with amicus briefs stating the need to preserve affirmative action in an increasingly diverse nation and global economy. Justice O'Connor relied heavily on the briefs in her assertion that race-conscious admissions were constitutional as a path to educational diversity. In the end, despite its earlier opposition, the Bush administration quietly endorsed the O'Connor opinion. These two rulings in the Michigan cases show that the issue of affirmative action is far from resolved as a public policy matter in the United States. See also civil rights. Further Reading Anderson, Terry H. The Pursuit of Fairness: A History of Affirmative Action. New York: Oxford University Press, 2004; Cahn, Steven M., ed. The Affirmative Action Debate. New York: Routledge, 2002; Epstein, Lee, and Thomas G. Walker. Constitutional Law for a Changing America: Institutional Powers and Constraints. 5th ed. Washington, D.C.: Congressional Quarterly Press, 2004; Fisher, Louis. American Constitutional Law. 5th ed. Durham, N.C.: Carolina Academic Press, 2003; O'Brien, David M. Constitutional Law and Politics. Vol. 2, Civil Rights and Civil Liberties. 5th ed. New York: W.W. Norton, 2003; Shull, Steven A. American Civil Rights Policy from Truman to Clinton: The Role of Presidential Leadership. Armonk, N.Y.: M.E. Sharpe, 1999; Stephens, Otis H., Jr., and John M. Scheb II. American Constitutional Law. 3rd ed. Belmont, Calif.: Thompson, 2003. —Lori Cox Han
asylum Asylum is a form of political protection that allows individuals who are in the United States to remain there if they face a grave threat to their life or safety by returning to the country of their birth or residence. In addition, the persons can remain in the United States if they fear persecution on account of their
race, religion, nationality, or membership in a political organization. While the humanitarian implications of asylum are obvious, the political implications are often far more complex and contradictory. In 1980, the Refugee Act was passed to deal with refugees and asylum seekers (the terms asylum and refugee are often used interchangeably, but there are subtle differences between the two, the main difference relating to where one can apply for such status). There is no quota on the number of people who may be granted political asylum, but there is an annual limit on the total number of people who may obtain residency in the United States based on asylum claims. The initial step in applying for asylum is to make a formal request via application form I-589. This request is reviewed by the regional office of the United States Citizenship and Immigration Services (USCIS). If the application is approved, the individual is allowed to stay in the United States for one year, and may also apply for work. If the request is denied, the applicant may ask for reconsideration of the case; if it is denied again, an appeal can be filed. For the most part, the United States Citizenship and Immigration Services handles asylum requests. Its stated mission is "To implement U.S. asylum laws in a manner that is fair, timely, and consistent with international humanitarian principles." The agency defines asylum as "a form of protection that allows individuals who are in the United States to remain here, provided that they meet the definition of a refugee (emphasis in the original) and are not barred from either applying for or being granted asylum, and eventually to adjust their status to lawful permanent resident." In the post–World War II era, immigration and asylum were covered by the Immigration and Nationality Act (INA), passed in 1952. While the INA did not expressly contain provisions for dealing with refugees and displaced persons, it was the governing law for asylum seekers. In 1956, the U.S. attorney general's office began to take a more involved role in dealing with refugees and asylum seekers, and during the 1950s, Congress passed a series of laws designed and tailored to specific countries or regions. In 1965, Congress amended the INA to provide for the resettlement of refugees. This was the first time the United States had formally dealt with the
refugee issue and the term refugee was defined in both geographical and political terms as persons fleeing communist or communist-dominated countries. In the heated days of the cold war, refugees were political pawns in the bipolar battle between the United States and the Soviet Union, and thus merited political attention. Also in the 1960s, a series of United Nations treaties focused international attention on the plight of refugees and asylum seekers. The 1967 United Nations Protocol Relating to the Status of Refugees, which incorporated the 1951 United Nations Convention relating to the Status of Refugees drew a significant amount of worldwide attention on the status of refugees, and defined a refugee as any person who, “owing to a well-founded fear of being persecuted for reasons of race, religion, nationality, membership in a particular social group or political opinion, is outside the country of his nationality and is unable or, owing to such fear, unwilling to avail himself of the protection of that country . . .” (quoted from language in the 1951 United Nations Convention on Refugees). This definition broadened who was considered a refugee or asylum seeker beyond the cold war definition of the United States, and made a wider range of concerns or fears the basis of the claims of refugee status and asylum. This put considerable pressure on the United States to likewise broaden its definition and understanding of just who was considered a refugee and who might apply for asylum status in the United States. Under the 1951 Convention of Refugees and the 1967 Protocol, nations may not return individuals to the country of origin if there is substantial reason to believe that harm would come to them. The United Nations High Commissioner for Refugees (UNHCR) was established to monitor and promote the rights of refugees. Headquartered in Geneva, Switzerland, the UNHCR has been awarded the Nobel Peace Prize in 1954 and again in 1981. In the post–9-11 atmosphere of increased concern for national safety and security, the United States government has become more stringent in granting asylum status to applicants. There is a fear that asylum requests may mask ulterior motives, and that some asylum seekers may endanger the safety of the United States. Thus, applications for asylum have become more politically charged in recent years and
public opinion has begun to turn against immigration and asylum. As a political issue, asylum and immigration have been featured prominently on talk radio and cable television programs and mostly conservative commentators have fomented anti-immigrant and antiasylum sentiments among the general public. By spreading fear that immigrants were overrunning the nation and that they were bringing into the U.S. alien views and perhaps even terrorist intentions, those opposed to more lenient immigration and asylum laws have helped to turn opinion against immigration and asylum seekers, and made it politically difficult to rationally deal with this politically controversial issue. “Closing off the borders” became the rallying cry to many, and a variety of citizen militia groups began to patrol the U.S.-Mexican border. While border control is a very important and serious political and national security issue, since September 11, 2001, this issue has become so emotionally charged that serious debate has become difficult. In this atmosphere, serious asylum seekers have been caught in the middle of a political debate that has largely bypassed their concerns and needs. As immigration and border control have become hot-button political issues, legitimate asylum seekers have been lumped in with all others who attempt to come into the United States. And in this politically charged atmosphere, it has become difficult to make the case for more attention to asylum issues as they have become too closely linked to the issues of immigration and border control. There is no indication that this will change in the near future and thus, the real issue of asylum will almost certainly be subsumed into the larger and more heated debate centering on immigration and border control. Conflating asylum with immigration and border issues had done a great disservice to the very serious nature of the asylum issue. It has led to a “guilt by association” mentality that often sees asylum seekers as threats to the safety and security of the United States, and demeans the very serious problems that might lead one to ask for asylum. In the highly charged atmosphere of the war against terrorism, this might be understandable, but that does not make it any easier for those who legitimately need the protection that asylum may afford. Asylum and immigration became a problem for President George W. Bush, who was accused of
endangering the United States by not taking the border issue more seriously. The president had proposed a “guest worker” program for immigrants and offered asylum and amnesty for some who had already entered the United States illegally. This caused an uproar among conservatives in 2006 within his own Republican Party, and compelled the president to back away from his original proposal. The issue remains a prominent one on the U.S. political agenda, and is especially crucial among border states such as California, Arizona, New Mexico, and Texas. Further Reading Huysmans, Jef. The Politics of Insecurity: Security, Migration and Asylum. New York: Taylor & Francis, 2006; Willman, Sue. Support for Asylum Seekers: A Guide to Legal and Welfare Rights. Legal Action Group, 2004. —Michael A. Genovese
censorship The word censorship comes from the ancient Roman word “censor.” In Rome, a censor was responsible for supervising the morals of the public. Censorship generally refers to controlled, forbidden, punished, or prohibited speech or expression. It is usually accomplished by a government, but there are other forms of censorship, such as Tocqueville’s “tyranny of the majority.” Censorship can be either explicit, such as some rules or prohibitions embedded in law, or informal, such as norms and unstated cultural “oughts” that are enforced through social pressure and public expectations of the norm. They can be explicit and grounded in laws against publishing certain forms of opinion (as exists in the People’s Republic of China) or implicit, where intimidation instills fear in the public (as was the case in the United States during the McCarthy era of the 1950s, or the post–9-11 period where certain forms of criticism were met with outrage and intimidation). Explicit forms of censorship exist in democratic and nondemocratic regimes. In the United Kingdom, the Official Secrets Act prohibits a wide swath of material from being published if the government believes publication of such materials is against the public interest. In the United States, the Bill of
Rights (the first 10 amendments to the U.S. Constitution) protects citizens from the government and spells out certain rights that the government may not violate. The First Amendment (which became law in 1791) reads, in part, "Congress shall make no law . . . abridging the freedom of speech, or of the press; or the right of the people peaceably to assemble, and to petition the Government for a redress of grievances." Therein, granted to the citizens of the United States and to its press, is the right to speak and publish materials, even if the government finds such speech uncomfortable or embarrassing, or even if the material is critical of the government and/or its officials. Article 19 of the Universal Declaration of Human Rights states that "Everyone has the right to freedom of expression; this right includes freedom to hold opinions without interference and to seek, receive and impart information and ideas through any media and regardless of frontiers." Thus, there is a presumption of free speech. The burden on those wishing to limit free speech is high. In the United States, political speech almost always receives greater protection than nonpolitical speech. The theory behind this is that, in a deliberative democracy, American citizens must be free to engage in public dialogues on important political issues and thereby participate in government as informed citizens. In the United States, the elusive search for when to allow censorship of obscene and indecent material, as well as how to define those terms, has consumed an enormous amount of the courts' time and energy over the past several decades. A "slippery slope" can exist in navigating a compromise between First Amendment speech freedoms and controlling the flow of obscene and indecent material. From colonial times through World War II, so-called obscene speech, especially descriptions of sexual activity, was a frequent target of censorship. During the early 20th century, many authors, including James Joyce, D. H. Lawrence, and Norman Mailer, were victims of censorship—their books were either banned or written with a wary eye toward potential censorship due to explicit passages. Many powerful political groups, including the United States Supreme Court, have historically attempted to ban "obscene" materials that they deem harmful to society and therefore not protected by the First Amendment.
During the 19th and early 20th century, American courts upheld the power of both Congress and state governments to ban obscenity. To do so, judges borrowed a common-law test from English courts, known as the Hicklin test. This restrictive test defined obscenity as material that “depraves and corrupts those whose minds are open to such immoral influences and into whose hands a publication of this sort might fall.” Congress passed its first law regulating obscenity in 1873, the Comstock law (which is still in effect), which prohibits the mailing of “every obscene, lewd, lascivious, indecent, filthy or vile article, matter, thing, device or substance,” and includes a $5,000 fine or up to five years in jail. While attempting to be specific, this showed the difficulty in the vagueness of the language and the problem of determining the exact nature of communications sent through the mail. Each state also had additional laws that strengthened the Comstock law; as a result, pornography in most communities went underground, since few people were willing to argue for its First Amendment protection. The United States Supreme Court attempted to provide a clear legal definition for obscenity during the 1950s. In two companion cases, Roth v. United States and Alberts v. California (1957), the Court upheld separate convictions that involved publications including nude photographs and literary erotica. In the majority opinion written by Associate Justice William Brennan (the Court ruled 6-3 in Roth and 7-2 in Alberts), obscenity was viewed as not protected by the First Amendment. The opinion defined obscenity as material that portrays sex in such a way as to appeal to “prurient interest,” relying on a history that rejects obscenity as “utterly without redeeming social importance.” The test that emerged was “whether to the average person, applying contemporary community standards, the dominant theme of the material taken as a whole appeals to prurient interest.” Associate Justice William O. Douglas, who dissented and was joined by Associate Justice Hugo Black and in part by Associate Justice John Marshall Harlan, pointed out the difficulties of the Court in setting censorship standards. Holding a more absolutist view of the First Amendment, Douglas argued against censorship since the constitutional protection of free speech was designed to “preclude courts as well as legislatures from weighing the values of speech
against silence. The First Amendment puts free speech in the preferred position.” Later, in Miller v. California (1973), the U.S. Supreme Court ruled in a 5-4 decision to uphold obscenity laws that banned the distribution of certain materials. Chief Justice Warren Burger delivered the majority opinion, which was an attempt to give states a stronger definition of obscenity for obtaining convictions through a more specific test aimed at “works which depict or describe sexual conduct” with offenses limited to “works which, taken as a whole, appeal to the prurient interest in sex, which portray sexual conduct in a patently offensive way, and which, taken as a whole, do not have serious literary, artistic, political, or scientific value.” Burger also included guidelines for juries to apply to obscenity cases, including the application of “community standards” to determine if material appeals to a prurient interest, and if sexual conduct is described in a “patently offensive way.” Today, opinions at opposite ends of the spectrum on obscenity show either an argument that pornography and other explicit materials can cause antisocial and deviant behavior, or that pornography can benefit certain members of society as a “vicarious outlet or escape valve.” Unfortunately, a lack of research exists to pinpoint a cause-and-effect relationship to back up the claims of either side. Critics of antipornography laws ultimately argue that legislation or court rulings to control such material are impossible, since judges and politicians cannot agree on a definition of obscenity. This area of constitutional law is most dependent on subjective rulings by judges, and is the most “ill-defined body of law in American jurisprudence.” Censorship within a democratic context creates dilemmas and paradoxes. On the one hand, there is a consensus that certain very explicit and offensive things may merit censorship; and yet the First Amendment grants citizens the right to free speech and expression. At what point does expression become so offensive or dangerous that it merits censorship? The United States, with the First Amendment and the presumption of free and open expression, finds it especially difficult to constitutionally and politically defend censorship, and yet, it is done every day in a myriad of ways. There is no set bar that marks the end of free speech and the beginning of censorship. A society too willing to censor speech runs the risk of
being oppressive and closed. A society too open and willing to accept any and all forms of speech as legitimate may endanger the health of the polity. Thus, while censorship does and should strike us as a restriction on the freedoms of expression guaranteed in the Bill of Rights, many argue that, in a war against terrorism and a society so interconnected via the Internet, there may be times and types of expression that even an open and democratic society may find objectionable, dangerous, and susceptible to forms of censorship. See also freedom of speech; freedom of the press. Further Reading Adams, Thelma, and Anthony Lewis. Censorship and First Amendment Rights: A Primer. Tarrytown, N.Y.: American Booksellers Foundation for Free Expression, 1992; Coetzee, J. M. Giving Offense: Essays on Censorship. Chicago: University of Chicago Press, 1996; Long, Robert Emmet, ed. Censorship. New York: H.W. Wilson, 1990; Pember, Don R., and Clay Calvert. Mass Media Law. Boston: McGraw-Hill, 2005; Riley, Gail Blasser. Censorship. New York: Facts On File, 1998. —Michael A. Genovese and Lori Cox Han
citizenship Citizenship refers to membership in a political community and as such is a core concept in the discipline of political science. The concept of citizenship is applicable to many types of political regimes, though Aristotle’s famous distinction between the good man and the good citizen bears keeping in mind. Although citizenship may elude precise definition, the status of being a citizen distinguishes those who are inside from those who are outside of a particular political regime, and so specifies a public relationship between a person and his or her state. Although principles of citizenship have been construed to differentiate and exclude, or to maintain a traditional hierarchy, today the notion of citizenship connotes equal political status in the eyes of the law, and so can provide a potent tool for human rights. Different citizens’ recognition of each other’s citizenship provides one way for them to recognize each other and their mutual obligations across their differences, obligations which may be
socially negotiated but may not extend to those who are not citizens. Hence, citizenship provides vectors for both recognition of commonality despite differences, and demarcation of the boundaries of membership in any given political community. The classic contrast has been between citizen and slave, a person who, no matter where he is from, cannot as a slave ever achieve the requisite independence and self-reliance to become a citizen. While slaves were denied both economic and political rights, some would argue today that the best ideal of citizenship includes social rights, such as the right to an education that enables the meaningful exercise of citizenship, and also cultural rights, such as the right to access the country's cultural heritage. A citizen is different from a subject, one who is allowed to remain in the same country subject to the will of a monarch or ruling elite. Over time the subjects of European monarchies acquired more and more rights through their political activism, including the right to a political status that was not dependent on the monarch or ruling class, or on their confession of a particular faith. A citizen is also different from an alien, one who is from a different place than where he or she is residing. Most countries that base their citizenship on place of birth (the principle of jus soli) have provisions for a naturalization process that allows an alien to become a citizen, though in countries where citizenship is based on bloodline, such as membership in a particular ethno-national community (the principle of jus sanguinis), this may not be possible, and aliens may have to remain permanent resident aliens. There is only one type of American citizenship, and that is equal citizenship, with each and every citizen being entitled to 100 percent of the rights and liberties available to any other. All American citizens are citizens of both the United States and the state in which they reside. In contemporary parlance, corporations and other artificial persons are also citizens of the state in which they were legally created. The Fourteenth Amendment to the U.S. Constitution, ratified on July 9, 1868, provides the textual basis for the principle of equal citizenship; it reads in part: "All persons born or naturalized in the United States, and subject to the jurisdiction thereof, are citizens of the United States and of the State wherein they reside." Unfortunately, for much of the nation's history, a variety of segments of the American population
have felt they were less than equal to the iconic white, male, heterosexual, head-of-household, churched (preferably Protestant), property-owning citizen. It is a testament to the strength and soundness of the nation’s foundational documents, liberal-democratic principles, and political vision that over time restrictions on the exercise of full citizenship have been rolled back, at least in the eyes of the law. The American political regime has been successful in no small part because of its ability to provide avenues for its people to revise, enlarge, and make improvements, even if ultimately stateways (e.g., laws and public policies) may be unable to change folkways (e.g., traditional practices and old, prejudicial stereotypes) all the way down. Citizenship may be viewed as a principle of full inclusion that confers autonomy on the individual, rather than on the nation state. Citizenship emancipates individuals from the state and providing them as a whole with a basis from which to make political demands. The political theorist Judith Shklar presents American citizenship as constituted by four distinct elements. First is the legal category of citizenship as nationality, a category to be distinguished from being stateless, where a person can claim no political entity that is obligated to his or her well-being or rights. Second, there is the category of good citizenship as a political practice, referring to citizens who participate in public affairs beyond performing their duties in the workplace and being good neighbors. Thirdly, there is the civic republican notion of citizenship, which Shklar refers to as perfected citizens because of the high moral value placed on active engagement and the goal of a seemingly higher state of virtue to be fulfilled through single-minded devotion to the public good. Finally, there is citizenship as standing, the social meaning of citizenship in the American setting, one based in voting and earning. The right to suffrage and the right to engage in paid work so as to earn a living have been denied various segments of the American population who have nonetheless sought after them in order to achieve standing in the community as full and equal citizens. Initially, in ancient Greece, citizenship was a revered status restricted to the few, who were obligated to contribute to the welfare of society through their participation in democratic self-rule and martial obligations. This was the active citizenship of duty
and responsibility, and it was more intense than the modern, passive version of citizenship, which is based on universal criteria and stresses an individual’s rights and, more recently, entitlements. The earlier notion of citizenship survives in the civic republican tradition that traces through Renaissance Italian city-states back to the Roman Republic, and the term itself is derived from Latin (civitas). This tradition holds that active participation in the civic life of the community is an important part of a good human life, in no small part because such a life will require the cultivation of human virtues that a life spent in private pursuits cannot provide. The virtue of modern citizenship is its capacity to include potentially all members living under the same territorial sovereignty, without delimiting which individuals will receive public status or other benefits because of their being well-born, propertied, and educated in a certain way or place, or some other criterion, from those who will not. The downside to modern citizenship is that it does not require persons to develop their civic-mindedness, and may leave them to regard the focus of membership in society as being the rights to which they are entitled, with little or no expectations of them. Political scientist David Ricci suggests that good citizenship is understood by Americans today to mean a combination of obeying laws, participating in the public life of the community, and exercising virtue and an economic conscience. The latter considerations reflect historical republicanism’s concern that individuals strive to be good persons who manifest their concern for the quality of public life in their activities as citizens, and the traditional notion that being a good person has a broader social component, that of contributing to the welfare of their community. Given the time constraints many contemporary Americans experience, they may feel conflicted between devoting time to participate in their community’s public life, on the one hand, and gratifying their needs and advancing their economic interests, on the other hand. Wherever the balance is struck, it will have consequences for the freedom of the American people and the vibrancy of civil society. In the context of globalization, issues of citizenship have taken center-stage, starting with practices of multicitizenship in more than one country, and regional citizenship such as in the European Union. Another large factor in the recent interest in
the concept of citizenship is the movement of large numbers of individuals and families across national boundaries in search of economic opportunity. Regulations concerning citizenship in both the United States and several western European countries are being challenged by these large numbers, the countries desiring on the one hand a steady supply of workers, especially for low-paying manual labor, yet, on the other hand, concerned that the character of the country cannot help but change and then in directions that would otherwise not have occurred. The status of migrant workers, no less than refugees from natural disasters and civil strife-torn areas, looms large in current debates about the meaning and significance of citizenship. While in the modern period, citizenship has been a vehicle for liberal-democratic political principles, it has not adapted well to the recent and unanticipated influx of foreigners. It may be for this reason that concepts of global citizenship and the idea of being a citizen of the world are widely discussed, because these notions of citizenship are based in our common humanity and so might facilitate achieving a minimum political status for people who otherwise might be entirely at the mercy of a sometimes inhospitable host country. Citizenship has proven itself to be a dynamic institution in the past and likely will continue to be so as it gets reinvigorated with new issues and contexts. Further Reading Clarke, Paul Berry. Citizenship: A Reader. London: Pluto Press, 1994; Heater, Derek. A Brief History of Citizenship. New York: New York University Press, 2004; Ricci, David M. Good Citizenship in America. Cambridge: Cambridge University Press, 2004; Shklar, Judith N. American Citizenship: The Quest for Inclusion. Cambridge, Mass.: Harvard University Press, 1990. —Gordon A. Babst
civic responsibility In its various incarnations, the idea of civic responsibility stands at the center of liberal-democratic theory. Along with related conceptual cornerstones of modern liberal theory, such as justice, individual rights, limited government, and rule by consent, it is based in an Anglo-American political tradition whose
origins can ultimately be traced to classical Greece. The notion of civic responsibility reflects a continual desire by classical thinkers, humanists, Enlightenment republicans, and eventually American political writers to define the nature, objectives, and limits of collaborative political participation in republican polities. In the United States, particularly prior to the 20th century, a primary focus of inquiry has been the seemingly inherent conflict between private interest and the public good. American political efforts in this regard have concentrated more on constraints against public institutions than on affirmative substantive criteria that mandate or enhance specific civic duties and responsibilities among the citizenry. Recent American conceptions of civic responsibility have downplayed customary historical links to notions of civic virtue and positive (enumerated) civic goods, while emphasizing the centrality of civic privileges, implied liberties, and individual rights. As a result, the gulf between the practical examples of civic responsibility and their theoretical historical origins has widened, and, more significantly, the concept of civic responsibility itself has become less relevant and meaningful in the American political system. For centuries, Western conceptions of civic responsibility were inextricably tied to notions of civic virtue. In fact, in most settings, the concepts of civic responsibility and civic virtue were functionally synonymous. This conceptual linkage was a function of one of the oldest puzzles in Western political philosophy, i.e., how political actors can protect the common good from nonpurposive activities that subvert legitimate public objectives, thereby leading to the corruption and eventual demise of the republic. Whether or not it arrived through the progression of cycles unrelated to particular points in history, as many ancient philosophers claimed, most political thinkers were convinced that the corruption and subsequent demise of polities was an inevitability. The two millennia that witnessed the establishment of an Athenian "democracy," the founding of an American republic, and everything between were marked by an unrelenting desire to forestall, if not actually prevent, such corruption. For the thinkers relevant to this discussion, the answer to questions of political instability and corruption was civic virtue, a multifaceted concept whose inclusion in republican theories of government enabled
political actors to posit relevant, if not always practicable, procedural and substantive prescriptions for stable and legitimate government. Of the postclassical writers who later influenced Anglo-American philosophers and political actors, Machiavelli was perhaps the most consequential and pertinent interpreter of Aristotelian republican theories. Machiavelli rejected Aristotle’s preference for a contemplative life in favor of an active—political—life and, in so doing, made the idea of civic responsibility as important as that of civic virtue. Machiavelli’s insistence that a life of political participation enhanced public virtue as a whole and, thus, facilitated the citizen’s cultivation of civic virtue provided an impetus for 17th-century English republicans and 18th-century commonwealthmen to define the duties and responsibilities of Englishmen vis-à-vis the constitutional monarchy. One of the most misunderstood and misquoted philosophers of the era, John Locke, endeavored to accomplish this through his refutations of royalist theories and the analysis of the purposes of political existence. Locke attempted to address what he perceived as the weaknesses in contemporary republican theory by emphasizing the significance of the individual and the individual’s contribution to the pursuit of the public good. One of the most glaring inadequacies of contemporary political philosophy, especially in an England that was beset by a relentless string of political convulsions, was the fact that it could not accommodate the need for a quick political response to those troubles; that is, the pace of change that was possible under prevailing formulations was glacially slow. In addition, notions of institutional virtue (as those espoused by Sir Edward Coke) accorded relatively little ethical responsibility to persons per se and, thus, minimized the significance of the individual as a contributory member of a political community. Finally, because the principal political and legal objectives of most English thinkers consisted of the articulation of procedures through which political stability and the authorized use of political power are secured and maintained, they did not concern themselves with the question of how a polity is ordered from the beginning, or reordered if the situation requires it. Locke, on the other hand, recognized the natural origins of political forms and was interested in those origins as a subject of philosophical inquiry in and of itself, which
was an interest that he shared with his intellectual predecessor, Aristotle, and also an interest that the framers of the U.S. Constitution would later share. Locke contended that "men uniting into politic[al] societies" protect and promote the citizenry's "power of thinking well or ill [and] approving or disapproving of the actions of those whom they live amongst and converse with" for the purpose of "establish[ing] amongst themselves what they will call virtue," inasmuch as "[v]irtue is everywhere that which is thought praiseworthy, and nothing else but that which has the allowance of public esteem is called virtue." This description of virtue clearly shows Locke's debt to Machiavelli and Machiavellian notions of a politically active citizenry. Though a loyal Aristotelian dedicated to the tenets of philosophical realism, Locke was, nevertheless, profoundly aware of the humanist insistence that republican stability is dependent on unified civic activity. Locke may not have been persuaded by Machiavelli's epistemological conclusions regarding the stability and viability of universal propositions, but Locke understood Machiavelli's desire to provide philosophically reliable foundations for epistemological processes that did not have a discoverable link to essences. Locke believed that the universality of virtue is substantiated through its existence as a complex idea that is inductively tied to particular feelings of pleasure. Such pleasure is caused by a public good and is, therefore, the result of the successful application of a participatory civic activity toward the enhancement of the public interest. The epistemological validity of virtue as a demonstrable philosophical truth is affirmed through the discursive engagement of the citizenry in its effort to ensure the compatibility of a complex idea of virtue with the public welfare. Locke hoped that this definitional process would enable the citizenry to avoid or prevent philosophical instabilities; though he believed that an individual who conscientiously adhered to the prescribed inductive methodology could define virtue, specifically, and the law of nature, generally, Locke also believed that the above process of definition and subsequent public confirmation would greatly decrease the probability of error that existed at each step of the process. Consequently, aside from the obvious political reasons, Locke's republicanism demanded a steadfast devotion to philosophical integrity and civic
responsibility, both from the individual and the citizenry as a republican unit. It is in the above context of the relationship between the individual's determination of the public interest and the citizenry's confirmation of it that a discussion of Locke's notion of rights must begin. Locke inherited a notion of rights as naturally determined claims against encroachments of the public interest, and he expanded that notion from its role as a consequence of republicanism into its function as a seminal determinant of it. Locke's conception of rights makes sense only if its republican function is recognized and if Locke's focus on the individual is viewed as an Aristotelian effort to secure and promote the political and epistemological contributions to the collective pursuit of public goods in a well-ordered republic. Locke's theory of rights constitutes a defense not just of individual rights per se, especially as we would understand those rights today; it is also an affirmation of the rights that members of a well-ordered republic must and do possess because of their membership in that well-ordered republic. Nonetheless, Locke's emphatic avowals of the significance of the individual have offered most interpreters of his works enough evidence with which to depict Locke as the intellectual starting point of ideologies devoted to the gratification of man's acquisitive and individualistic nature. However, he warned that the law of "nature must be altogether negated before one can claim for himself absolute liberty." Quite clearly, what we would today call unrestrained liberalism would have been unthinkable for Locke; it would have been a scourge contrary to everything Locke's political thought represented. Locke's theory of rights established individual rights and liberties as definitional and functional subordinates of the dictates of the public pursuit of happiness. As Locke argued, "if the private interest of each person is the basis" of the law of nature, the law of nature "will inevitably be broken, because it is impossible to have regard for the interests of all at one and the same time" without violating the unified interest of the citizenry. Locke implored his contemporaries to remember that "a great number of virtues, and the best of them, consist only in this: that we do good to others at our own loss" and, in so doing, protect individual rights because of their role as the guarantors of the public interest.
None of this is intended as a denial of Locke’s devotion to individual rights, but that devotion must be recognized as something whose roots lay in an Aristotelian awareness of the need to define properly and accurately the individual’s role as a participatory member of a well-ordered republic. In addition, since Locke’s empiricism was linked to the belief that the individual must have the ability to contemplate freely the epistemological relationships between his experiences and the complex ideas that were definable through those experiences, it was doubly important to Locke that political societies secure the liberty whose existence allows the unfettered exploration of those relationships. Individual rights were considered to be an epistemological asset, inasmuch as those rights preserved the unobstructed discovery of the law of nature. Liberty was conceptualized as the sufficient quantity of freedom that would allow individual members of a well-ordered republic to pursue, as a collaborative unit, public happiness and, thus, to attain epistemological certitude through the induction of the law of nature. The American political founders were determined to uphold the ancient principles of republican government that were articulated by Aristotle and reinterpreted by Locke, whose focus was the achievement of political stability and the definition of the sources of English political authority. Since, as John Adams put it, the founders believed that “the divine science of politics is the science of social happiness,” the founders’ primary political objective was the uncorrupted pursuit of the common good that would enable the definition of those sources. Hence, their political activities were targeted toward the creation of a political environment that would allow a virtuous citizenry to survive. Their political thinking was tied to the Lockean conviction that the purpose of government is “to promote and secure the happiness of every member of society” and to preserve the viability of natural rights as the guarantors of the uncorrupted pursuit of the public good. Inasmuch as the founders were Lockeans, their epistemological objective consisted of the elucidation of the law of nature through the complex ideas that are the inductive products of an empiricist methodology (that is based on experience). More narrowly, those objectives included the wish, per John Adams, “to form and establish the wisest and happiest government
that human wisdom can contrive." The definition of the law of nature was especially important to Lockeans such as the founders because a knowledge of the law of nature was considered a prerequisite for the determination of public goods. Such a determination was thought to be the outcome of the rational consideration of ethical alternatives that was accomplished through the use of empirically guided reason. As Thomas Paine stated, the investigation of political truths and the identification of public goods "will establish a common interest," which is virtue, "with[in] every part of the community," and citizens "will mutually and naturally support each other" to uphold that virtue. Since virtue was thought to be the universal of which the individual goods are predicable and whose existence ultimately enables the public confirmation of the validity of individual goods, the founders were convinced that, as Moses Mather argued, "the only way to make men good subjects of a rational and free government is" to ensure that they are "virtuous." Theoretical considerations aside, events during the 1780s ostensibly demonstrated that an organic unity and an inherent bond between the public and private spheres did not exist. The framers were ready, albeit grudgingly, to accept the seemingly intrinsic competition between private interest and public goods as an ontological truth, so Aristotelian dreams of a unified sociopolitical structure based on a naturally determined order were becoming vitiated. As Gordon Wood has illustrated, the former "revolutionary leaders like James Madison were willing to confront the reality" of the immanent incompatibility of private and public "interests in America with a very cold eye." Madison's Federalist 10 was only the most famous and frank acknowledgment of the degree to which private interest had overwhelmed the newly established state governments. The framers concluded that some mechanism, such as a constitution, should act as a mediating authority between the public and private realms, a mediating authority that would define and legitimate a governmental structure free from private corruption. From a 21st-century perspective, the irony of the situation is glaring, especially to an audience steeped in tales of the immanence of liberalism in American political development. The framers' efforts to protect and promote the public good were reflective of a hope, perhaps quixotic, to embed republican principles in the new constitutional discourse. They hardly could have foreseen that the constitution they established as a supposedly neutral arbiter over the public and private realms would foster a context conducive to liberal hegemony. As Bernard Bailyn and Gordon Wood have ably demonstrated about the prenational period and Christopher Tomlins has cogently argued about the early national era, the ideological landscape was far from settled during these seminal years. Although liberalism was eventually able to conquer that landscape and suppress most competing discourses, that result was not inevitable to the framers, nor was it desirable. Their hope was to insulate government from the corrupting influence of private interest and to contain the impulse of individualism within modest limits. The new constitutional discourse accommodated elements of what we would today call liberalism, but it was hardly the type of untrammeled individualistic liberalism with which today's scholars are familiar. Once again, the irony is that a system designed to control and to fetter the corrupting influence of liberal individualism—and what in this century became pluralism—actually promoted it by, as David Montgomery and Jennifer Nedelsky have shown, insulating capitalist markets from democratic processes. Further Reading Bailyn, Bernard. The Ideological Origins of the American Revolution. Cambridge, Mass.: Harvard University Press, 1967; Gustafson, Thomas. Representative Words: Politics, Literature, and the American Language, 1776–1865. Cambridge: Cambridge University Press, 1992; Kraut, Richard. Aristotle on the Human Good. Princeton, N.J.: Princeton University Press, 1989; Macedo, Stephen. Liberal Virtues: Citizenship, Virtue, and Community in Liberal Constitutionalism. Oxford: Oxford University Press, 1991; Ober, Josiah. Mass and Elite in Democratic Athens: Rhetoric, Ideology, and the Power of the People. Princeton, N.J.: Princeton University Press, 1989; Pocock, J. G. A. The Machiavellian Moment: Florentine Political Thought and the Atlantic Republican Tradition. Princeton, N.J.: Princeton University Press, 1975; Robbins, Caroline A. The Eighteenth-Century Commonwealthman: Studies in the Transmission, Development, and Circumstances of English Liberal Thought from the Restoration of Charles II until the
War with the Thirteen Colonies. Cambridge, Mass.: Harvard University Press, 1959; Tomlins, Christopher L. Law, Labor, and Ideology in the Early American Republic. Cambridge: Cambridge University Press, 1993; Wood, Gordon S. The Creation of the American Republic, 1776–1787. New York: W.W. Norton, 1969. —Tomislav Han
civil disobedience Civil disobedience is the deliberate and conscientious breaking of a law or refusal to obey a law with the aim of changing that law. An act of civil disobedience draws attention to what is believed by some people to be a serious violation of justice, or a breach of high moral principle in the law. The locus classicus for theorizing civil disobedience is Henry David Thoreau’s lecture turned essay “Civil Disobedience,” while the canonical example of the practice of civil disobedience is Rosa Parks’s refusal in 1955 to give up her seat on the bus to a white man as was expected of blacks in the segregated South. Her iconic act breathed life into an already existing Civil Rights movement and demonstrated to all America the law-abiding nature of blacks and others concerned to align legal practice more closely to American ideals of freedom and equality under the law. Parks’s example set the tone for the American Civil Rights movement and the leadership of Martin Luther King, Jr., himself influenced by Thoreau and also the thought and action of India’s Mohandas Gandhi (1869–1948), who stressed nonviolent resistance to British authority in his campaign to help India gain its independence from Great Britain. Civil disobedience should be distinguished from political protest, resistance to political authority, and uncivil, lawless, or criminal disobedience. The First Amendment to the U.S. Constitution provides for a right to assemble and petition the government for grievances. Citizens may lawfully protest the government through mundane activities such as writing letters to their representatives in Washington, to participating in an organized protest rally or demonstration. While the time, manner, and place of organized gatherings may be regulated for the public interest, the right of the people to assemble and petition the government may not be abrogated. When, however, normal channels of communication and avenues of legal protest are exhausted without the desired
result, such as when citizens in the segregated South petitioned their prosegregation democratically elected senators, citizens may turn to passive resistance to political authority. Passive resistance includes acts of civil disobedience that peacefully resist or impede the work of the state (e.g., lying down in the street in front of the National Institutes of Health to protest the pace at which AIDS drugs are moving through clinical trials, thereby risking being taken away by the police). Civil disobedience is characterized not only by resistance to authority but also by disobedience of a law, which denotes an escalation in seriousness: one cannot obey the law without compromising one's own moral personality. Uncivil disobedience involves breaking a law unrelated to the law being protested (e.g., seizing an unguarded television set on display in a broken storefront window in the course of protesting a racially motivated incident involving the police). Criminal behavior such as firing weapons at the police, or other lawless behavior that harms persons or property outside the parameters of self-defense, is hard to connect to the imperative of promoting a national discussion to address a pervasive injustice, though the frustration that leads up to the incident may be quite understandable. Civil disobedience itself is paradoxical, because it involves breaking a law in fidelity to the rule of law. Hence, there are at least three conditions that must apply to any act of civil disobedience for it to break through this paradox and establish a better norm, or improve an existing one. An act of civil disobedience must be nonviolent for reasons of both morality and efficacy. Willfully causing harm to persons or property is not only illegal but generally regarded as immoral. The illegality and immorality of such an act combine with the violence to overshadow the issue putatively being protested, at least in the eyes of the reasonable person sitting at home watching it all unfold on televised news. The civilly disobedient should not be seen as a physical threat, but merely as threatening to call into question an accepted practice that is believed by some to violate the nation's political principles or contravene society's public morality. The use or threat of violence to force an issue can only be regarded as coercive, and likely will meet with resistance regardless of the righteousness of the cause. The civilly disobedient want to strike a different note, one that acts of violence are likely to obscure.
Persons who commit acts of civil disobedience must be willing to accept the consequences of their actions, including legal arrest and punishment. Those who express that they will not suffer any consequences hit the wrong note, as they suggest to the reasonable bystander that they feel themselves to be above the law. While the civilly disobedient may indeed feel themselves in the right and are absolutely convinced of this, they are not thereby empowered to break the law without suffering the consequences others would, even if at the end of the day a majority of Americans would agree that the law was unjust. Instead, the civilly disobedient should pose as if offering a critique, or a better idea, to the American public, an offering that the people are free to reject but on which they ultimately will be asked to decide. Not striking this pose suggests that the civilly disobedient respect neither the rule of law nor the people, even as they argue that the people are mistaken not to adopt their point of view. If the civilly disobedient are unconcerned to strike the right note, then they could just as well use violence or engage in flagrant lawless behavior, for all the good effect any of these could have (likely none). Finally, an act of civil disobedience must be capable of being justified, the most tendentious condition because society's verdict that the civilly disobedient were justified may not come for years, even decades. The perpetrator of an act of civil disobedience should understand that his or her offering to society, his or her perspective on the issue he or she wishes to call to the public's attention, might be rejected or not accepted except grudgingly, over time. The use of violence and the suggestion that one is above the law may both appear to be tempting shortcuts to the supplication and waiting that proper civil disobedience requires. Nonetheless, a message unclouded by violence and performed in public with apparent sincerity regarding both the issue being protested and the overall rule of law will more likely contribute to a genuine change in public opinion than any alternative. The civilly disobedient appeal to the sovereignty of the American people in a unique way, calling upon them to reconsider contemporary practices that are believed to be incongruent with the nation's foundational political principles. Given that the practices are ongoing, the civilly disobedient must accept that change may take place only slowly, no matter how morally right and politically faithful to our common values they
may happen to be. Acts of civil disobedience that meet the criteria of nonviolence, of perpetrators willing to accept punishment, and of a view being urged that seems to comport better with the political values we already share or claim to espouse, do the republic a great service by calling to our attention the times we fail to live up to our own principles. Civil disobedience, at its finest, points to a gap between the justification for a state to promote, say, equality and liberty, and the legitimacy of aspects of its political regime, say, legally allowing race-based discrimination. Civil disobedience, then, works as an improvement on our politics, which, prior to the revision in the law, were disingenuous given our foundational political principles. Civil disobedience calls the sovereign—in the American case, the people—back to first principles, a move political philosophers such as Machiavelli and Hannah Arendt would urge on any republican form of government in a time of crisis. Thoreau had the notion that one should not support a government by paying taxes if it sanctions policies one holds to be immoral. In his case, the issue was the likely admittance of new states into the Union as slave states owing to the War with Mexico of 1846–48, possibly tipping the delicate balance in the U.S. Congress between free and slave states in favor of the latter and so risking further entrenchment of this morally odious institution. He did not want any of his taxes supporting a war effort that would result in an even greater perpetration of injustice on the part of the United States than was already the case. The Fugitive Slave Act of 1850 stipulated that northern law enforcement officials were obliged to return the “property” (runaway slaves) of southern plantation owners to them, intolerably blurring the line between the free North and the slave-holding South in Thoreau’s mind. No matter that only a small proportion of his taxes would in fact support the war or any effort to return a slave to the South, Thoreau did not want to be personally complicit in the institution of slavery. Thoreau’s refusal to pay his poll taxes got him a night in the Concord town jail, a consequence he willingly, even cheerfully, suffered. For him, it was a matter of conscience, and he believed that if each citizen disobeyed the laws of what he believed was an unjust regime, given its legal sanction of slavery in the face of its foundational political principles, then all this moral energy would become an effective agent of change. In addition,
Thoreau's notion that bystanders who simply obeyed the law could nevertheless become complicit in an act of great evil is a powerful one. The political philosopher Hannah Arendt based her contributions to our understanding of civil disobedience partly on her reflections on the civil rights and anti–Vietnam War student protest movements. She reasoned that while Thoreau had expressed the courage of his convictions and offered initial considerations toward a theory of civil disobedience, he failed to hit the right note regarding the private/public divide. Thoreau's act of civil disobedience was conscience-based, and, while it no doubt allowed him to sleep easier believing he had finally done something that his conscience had been compelling him to do, it was primarily a private act about which the greater public would know nothing had he not talked about it in a later speech. For Arendt, genuine civil disobedience needs to connect to a wider public, a voluntary association of individuals drawn together by their common commitment to the nation's public values, seen as breached. There needs to be an articulated moral generalizability that intentionally extends beyond one's private conscience to incite action in the public space. On Arendt's account, civil disobedience strengthens democracy because, were there no option of appealing to the people in this special way, long-festering perceptions of injustice might just turn into revolution, a resolution that ruptures the social fabric and could overturn the rule of law. Further Reading Arendt, Hannah. "Civil Disobedience," in Hannah Arendt, Crises of the Republic. San Diego, Calif.: Harvest/Harcourt Brace Jovanovich, 1972; Hampton, Henry, and Steve Fayer. Voices of Freedom: An Oral History of the Civil Rights Movement from the 1950s through the 1980s. New York: Bantam Books, 1990; Thoreau, Henry David. "Civil Disobedience," originally "Resistance to Civil Government," 1849, reprinted in numerous locations. —Gordon A. Babst
civil liberties The rights that individuals have as citizens in a nation are difficult to define and often controversial. Civil liberties are such individual rights. In the United
States, they most often refer to the freedom of expression, the freedom of religion, the right to bear arms, and the rights of individuals accused of a crime. Under the U.S. Constitution, government authority over individuals’ civil liberties is limited. Those limitations are defined in the main text of the Constitution by limiting government authority regarding ex post facto laws and bills of attainder. But more significantly, civil liberties are protected in the first 10 amendments to the Constitution, those provisions known collectively as the Bill of Rights. The framers of the Constitution were primarily interested in outlining the duties of government— separating powers among the three branches of government and dividing powers between the national and state governments. Implicitly, these concerns affected the rights of individuals, since the framers feared that a strong central government could foster the ills that had been pointed out as reasons for the Declaration of Independence. However, the original document had specific provisions protecting civil liberties only in the limitation of government powers regarding ex post facto laws and bills of attainder. Ex post facto laws (laws making an act retroactively illegal) and bills of attainder (laws inflicting punishment on an individual by legislative act rather than through the judicial process) were specifically precluded in Article 1 of the original document, but have rarely been factors in American politics since then. However, during the ratification process for the Constitution, concerns that individual rights needed more explicit protection were raised. One of the strongest advocates for adopting such amendments was Thomas Jefferson, perhaps the leading American political figure who had not taken part in deliberations of the Constitution. Jefferson had been the American envoy to France at the time. In any event, several states made their ratification of the Constitution contingent on the passage of a Bill of Rights, and President George Washington felt constrained to promise that such a listing of rights would be added. Accordingly, the First Congress proposed and the states ratified the first 10 amendments to the Constitution in 1791. Originally, the Bill of Rights were intended to apply only to the national government since state constitutions limited the powers of state governments.
Over time, though, the variations in rights among the states became national issues. Beginning in 1925, in the case of Gitlow v. New York, the United States Supreme Court began to apply the Bill of Rights to the states as well, arguing that such an interpretation was compelled by the due process clause of the Fourteenth Amendment to the U.S. Constitution. That clause says that no state may deprive any person of "life, liberty, or property, without due process of law." As a practical matter, the Court came to hold that such language required nearly all of the rights in the Bill of Rights to apply to the states as well as the national government. Accordingly, under the incorporation doctrine, most of the Bill of Rights has been interpreted as limiting the powers of states in addition to limiting the power of the national government. The freedoms of religion and expression are protected under the Constitution by the language of the First Amendment. In the famous language beginning the Bill of Rights, "Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof." Thus, the government is precluded from either having an official state religion or keeping citizens from worshiping in their own ways. These two notions, which sometimes seem to conflict, have been the subject of many case opinions by the Supreme Court, and have been particularly controversial regarding religious practices in public schools. In 1962, in the famous case of Engel v. Vitale, the Supreme Court ruled that an official state prayer could not be required as a recitation at the beginning of each school day, because such a requirement violated the prohibition against the establishment of religion. The prayer was nondenominational in character, reading "Almighty God, we acknowledge our dependence upon Thee, and we beg Thy blessings upon us, our parents, our teachers, and our country." The court stated that the requirement to recite the prayer tended "to destroy government and to degrade religion." By 1971, after a number of other establishment cases, the Supreme Court wrote another decision, in Lemon v. Kurtzman, designed to create rules about the boundaries of government tolerance of religion in arenas where there was government sponsorship of events. In that case, the Supreme Court said that government policies "(1) must have a secular, as opposed to a religious purpose, (2) must have a primary effect which neither advances nor inhibits
religion, and (3) must not foster an excessive entanglement between church and state." In terms of free exercise, the Court has attempted to preserve individuals' right to believe what they wish while recognizing that there may be government limitations on how one might practice one's beliefs. For example, when the famous boxer Muhammad Ali refused induction into the army because of his deep Islamic beliefs, the lower courts did not uphold his claim as a protected free exercise of religion (although the Supreme Court later overturned his conviction on narrow grounds). There have been many rulings on the boundaries of free exercise over the years, and the Supreme Court, for example, has allowed Amish communities to withdraw their children from public education after the eighth grade and has said that states cannot require Jehovah's Witnesses to participate in flag-saluting ceremonies. But, on the other hand, people cannot legally engage in polygamy, and churches that discriminate on the basis of race can be denied tax-exempt status. Freedom of speech, freedom of the press, and the right of assembly are all collectively known as the freedoms of expression. Freedom of expression is often seen as the most basic liberty in the United States, in part because the health of a democracy depends on the unfettered marketplace of ideas, where political discourse can inform the public to make choices in the best interests of the nation. Yet even this freedom is not completely unregulated by government. For example, in the famous Supreme Court decision of Schenck v. United States, decided in 1919, the Supreme Court pointed out that falsely shouting "fire" in a crowded theater, an action almost certain to cause panic and risk lives, would not be a protected form of speech. But, the Supreme Court wrote, in order to be limited, speech must cause a "clear and present danger." In the years since that ruling, the Supreme Court has tried to explain the contexts under which speech can be limited, but not surprisingly, those rules are necessarily vague and unclear. The Court has said that symbolic speech, even to the point of burning an American flag as an act of political protest, is the kind of expression intended to be protected by the First Amendment. The most important thing to understand is that in the American polity, there are very few limits on freedom of expression.
The court has listed three specific exceptions to freedom of expression: obscenity, defamation of character (libel and slander), and "fighting words." These exceptions apply in important but limited circumstances. The Second Amendment of the Constitution says that "a well-regulated Militia, being necessary to the security of a free state, the right of the people to keep and bear Arms, shall not be infringed." This is a liberty that has been the source of constant debate over the last two decades, with some arguing that this amendment applies only to the right of states to raise militias and others arguing that the right to own guns is a basic American liberty. Supreme Court case law is inconclusive on the fundamental right to keep and bear arms, but the court has allowed extensive regulations by the states regarding limiting the types of weapons that are legal (sawed-off shotguns are normally illegal, as are assault weapons) and the types of people who can own them (felons are normally precluded from gun ownership). A major component of civil liberties stems from the adage that a person is "innocent until proven guilty." As a result, the Constitution, in Amendments 4 through 8, outlines a number of protections that are accorded to people accused of a crime. Specifically, those accused are assured of protections during criminal investigations and the bringing of formal charges, during trial, and if convicted, regarding punishment. When criminal investigations occur, police must use carefully constructed procedures that are designed to protect individuals. Specifically, the Fourth Amendment is designed to protect individuals from "unreasonable searches and seizures" unless the government has "probable cause" to conduct a search. In the case of Mapp v. Ohio (1961), the Supreme Court ruled that if those careful procedures, accompanied by a valid search warrant, are not used, evidence cannot be used against the accused in a court of law. This exclusionary rule is intended to place strong limits on the government's right to intrude into the private rights of citizens. Additionally, the court has ruled that the guarantees listed in the Constitution must not remain obscure; individuals have the right to be made aware of the rights they have during a criminal investigation. That is why in the case of Miranda v. Arizona (1966), the court ruled that as soon as an investigation begins to focus on the activities of an individual, that person
must be informed that there are rights guaranteed under the Constitution. The Miranda warning is as follows: “You have the right to remain silent. If you give up that right, evidence collected from your answers may be used against you in court. You have a right to have a lawyer as counsel with you during questioning. If you cannot afford a lawyer, one will be provided for you.” Finally, the government must have careful procedures to formally charge a person with a crime. In the Constitution, the required procedure is indictment by a grand jury. While the grand jury provision has not been applied to the states, some states use that procedure and others have similar formal processes in place to make certain that individuals are not charged with crimes frivolously. After suspects are formally accused of a crime, they have a right to a trial where the government must prove beyond a reasonable doubt that the individual is guilty of the crime with which he or she has been charged. Often, people accused of crimes “plea bargain” with government to plead guilty to a crime in exchange for a lesser punishment than the one they might receive in trial, but the right to trial is guaranteed. During the trial, the accused has the right to be represented by a competent lawyer, a right established in the case of Gideon v. Wainwright in 1963. Additionally, accused people have the right to a speedy and public trial so that they cannot be locked up indefinitely without conviction and so that they cannot be convicted in private settings. During trial, they also have a right to have a “jury of peers,” a protection that was intended to make sure that if someone is convicted, it is by average citizens just like them. Finally, during the trial, the accused have the right to issue subpoenas to require that people who could help them can testify in court and they have the right to be confronted face-to-face by those who would testify against them. All of these protections are intended to make sure that there is a fair trial. If the person accused is convicted during a trial (or in a plea bargain), there are also limits upon how they may be punished. The Eighth Amendment precludes “cruel and unusual punishment,” a provision that was intended to prevent torturous punishment. While cruel and unusual punishment has not been completely defined, it does mean that prisoners have certain rights in prison, including the rights to have proper nutrition,
adequate places to sleep, and other basic elements of human decency. What the phrase does not mean, however, is that individuals are protected from capital punishment. The death penalty has never been ruled cruel and unusual, as long as it is applied using careful procedures. In the case of Furman v. Georgia (1972), the court also said that the death penalty could not be applied in a "freakish" or "random" way. Another limitation on government is that it may not levy "excessive fines," though that limitation has not been defined carefully. One of the most interesting provisions in the Bill of Rights is the Ninth Amendment. When the first 10 amendments were added to the Constitution, there was a concern that public officials might see them as a complete listing of rights, therefore limiting other basic rights of individuals. Accordingly, that amendment says that the first eight amendments shall not be interpreted to "deny or disparage others retained by the people." This amendment has not been used often, but it is essential in considering one of the most controversial rights that the Supreme Court has declared to be protected by the Constitution, the right to privacy. In the case of Griswold v. Connecticut in 1965, the Court said that many provisions in the Bill of Rights pointed to the right to privacy, and it was just the sort of right that was intended to be protected by the Ninth Amendment. That ruling became most controversial when it was used as a justification for protecting the right of a woman to have an abortion in the case of Roe v. Wade in 1973. In the Roe case, the rights of the mother, at least in the early stages of pregnancy, were deemed to be protected by the right to privacy. In the American democracy, the civil liberties protected by the Constitution and the Bill of Rights are seen as fundamental to the basic purposes of government as delineated by Thomas Jefferson in the Declaration of Independence. In that document, Jefferson said that governments' powers were limited to protecting life, liberty and the pursuit of happiness. These basic rights, the founders believed, were given to the people by the "Laws of Nature and of Nature's God." The civil liberties protected by the Constitution were seen by Jefferson and others as essential to a healthy democracy. Their protection is fragile, and constant attention by each generation is required for their maintenance.
Further Reading Abraham, Henry J. and Barbara A. Perry. Freedom and the Court: Civil Rights and Liberties in the United States. 8th ed. Lawrence: University Press of Kansas, 2003; Garrow, David J. Liberty and Sexuality: The Right to Privacy and the Making of Roe v. Wade. New York: Macmillan, 1994; Lewis, Anthony. Gideon’s Trumpet. New York: Random House, 1964; Strossen, Nadine. Defending Pornography: Free Speech, Sex, and the Fight for Women’s Rights. New York: Scribner, 1995. —James W. Riddlesperger, Jr.
civil rights As opposed to civil liberties, which are personal freedoms and individual rights protected by the Bill of Rights, civil rights are the right of every citizen to equal protection under the law. The question of civil rights considers whether or not individuals of differing groups are granted the same opportunities and rights from the government. Legally speaking, all Americans are granted equal rights, and the ideal of equality dates back to the words of Thomas Jefferson, author of the Declaration of Independence, who wrote that “all men are created equal.” Of course, that phrase had a much more limited and narrow definition during the founding era than it does today. Yet, all minority groups within the United States have struggled to gain equal rights, including women, African Americans, Hispanic Americans, Native Americans, Asian Americans, gays and lesbians, the disabled, and numerous other groups throughout the nation’s history. While many of these groups have achieved important legal and political victories in terms of voting rights and laws pertaining to equal employment and housing, for example, much inequality still exists in the practical, day-to-day sense in that not all Americans have equal access to all opportunities. While discrimination in nearly all forms is now illegal, it still exists in subtle, and sometimes not so subtle, forms. Disadvantaged groups within the American political process share at least one thing in common—the struggle for equality has been a hard-fought battle that included intense political organization and action. However, the notion of equality may be one of America’s most important political ideals, yet equality in the true sense of the word is extremely difficult to
achieve. What does the U.S. Constitution say about equality, and what role should the government play in ensuring that all citizens are treated equally in regard to such issues as race, ethnicity, gender, and sexual orientation? The concept that all individuals are equal before the law is one of the core philosophical foundations of the American democratic system of government. As early as 1776, this ideal was expressed in the Declaration of Independence, which stated, “We hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the pursuit of Happiness.” This ideal is also expressed in the equal protection clause of the Fourteenth Amendment, ratified in 1868, which declares that no state shall “deny to any person within its jurisdiction the equal protection of the laws.” Equal protection means that states are prohibited from denying any person or class of persons the same protection and rights that the law extends to other similarly situated persons or classes of persons. The equal protection clause does not guarantee equality among individuals or classes, but only the equal application of the laws. By denying states the ability to discriminate, the equal protection clause is crucial to the protection of civil rights. For an individual to legitimately assert a claim that the clause has been violated, the aggrieved party must prove some form of unequal treatment or discrimination, and must prove that there had been state or local government action taken that resulted in the discrimination. There are two different types of discrimination that can exist within society. De jure discrimination is that which stems from the law. For example, racial segregation in public schools prior to its ban in the 1950s was a form of this type of discrimination. On the other hand, de facto discrimination is that which stems from human attitudes. For example, the existence of hate groups or a racist or sexist publication stems from social, economic, and/or cultural biases but is not necessarily illegal or punishable by law. There are also three tiers of review for determining whether laws or other public policies that are challenged in court are discriminatory and in violation of the equal protection clause. The traditional test used to decide discrimination cases is the rational basis test, which basically means, is the challenged
discrimination rational, or is it arbitrary and capricious? The second test is the suspect class, or strict scrutiny test, which is used when the state discriminates on the basis of a criterion that the United States Supreme Court has declared to be inherently suspect or when there is a claim that a fundamental right has been violated. Racial criteria are considered suspect since they are mostly arbitrary. The third test, known as an intermediate scrutiny (or heightened scrutiny) test, is applied for sex discrimination cases. In order to withstand judicial scrutiny, any law that seemingly discriminates based on sex or gender must be substantially related to the achievement of an important governmental objective. Perhaps no group of citizens has endured a greater struggle for equality than African Americans. Their ancestors first arrived in America in chains in the early 1600s, having been captured in Africa to be sold in open markets as slaves. Slavery in America would last nearly 250 years. The infamous "three-fifths compromise" within the U.S. Constitution, which declared that slaves would be counted as three-fifths of a person, helped to institutionalize slavery in the southern states until the end of the Civil War in 1865. Despite the congressional ban on the slave trade in 1808, the agrarian economy in the South remained dependent on cheap slave labor. During the years following the Civil War, many members of Congress were fearful that discriminatory laws against former slaves would be passed in the South. Even though the Thirteenth Amendment, ratified in 1865, had outlawed slavery, many southern states had adopted Black Codes, which made it difficult for blacks to own property or enter into contracts and which established criminal laws with much harsher penalties for blacks than for whites. In response, Congress passed the Civil Rights Act of 1866, but many feared that enforcement of the law at the state level would be difficult. As a result, the Fourteenth Amendment was introduced, which included the equal protection clause and the due process clause. Despite passage of the Civil War amendments to the Constitution, many issues of inequality remained for African Americans, particularly in the southern states. Reconstruction, which was the federal government's attempt to rebuild the infrastructure and economies of southern states following the Civil War, lasted until 1877. However, many white southerners
resented the presence of federal troops and resisted the integration of African Americans into political and social life in the South. And when the federal troops were no longer present to protect the voting and other rights of African Americans, many Jim Crow laws were passed in southern states, including those that required racial segregation in schools and public accommodations (such as restaurants, theaters, and forms of public transportation such as trains), and laws that banned interracial marriage. In 1875, Congress had passed the Civil Rights Act to protect all Americans, regardless of race, in their access to public accommodations and facilities. However, it was not enforced, and the Supreme Court declared the law unconstitutional in 1883 on the grounds that Congress could not ban discrimination by private, as opposed to state, actors. In one of its most infamous rulings ever, the Supreme Court helped to validate the discriminatory behavior towards African Americans in the South in Plessy v. Ferguson (1896). The Court ruled 7-1 that separate but equal public accommodations for black and white citizens were constitutional, providing a rationale for government-mandated segregation. In his famous dissent, Justice John Marshall Harlan argued that "our Constitution is color-blind, and neither knows nor tolerates classes among citizens." However, Harlan's view would not become the majority view of the Court until the "separate but equal" doctrine was overturned in 1954. The Plessy ruling, and with it the "separate but equal" doctrine, stood as the law of the land until the Supreme Court unanimously ruled in Brown v. Board of Education of Topeka (1954). In this landmark case, and under the direction of the new Chief Justice Earl Warren, the Court reversed its ruling in Plessy, declaring that segregated public schools in Kansas, South Carolina, Delaware, and Virginia were not equal. The case was brought by the NAACP, and argued before the Court by eventual Supreme Court Associate Justice Thurgood Marshall (who would be the first black to serve on the nation's highest court). In the Brown opinion, Warren wrote that segregation based on race "generates a feeling of inferiority as to [children's] status in the community that may affect their hearts and minds in a way very unlikely ever to be undone. . . . in the field of public education the doctrine of 'separate but equal' has no place." In a companion case, Bolling v. Sharpe, the Court held that the operation of segregated schools by the District of Columbia violated the due process clause of the Fifth Amendment, which states that "no person shall be deprived of life, liberty, or property without due process of law." The Court recognized the equal protection component in the Fifth Amendment due process requirement, indicating that uniform antidiscrimination mandates were to be applied to the federal government as well as the states (since the Fourteenth Amendment did not cover the District of Columbia, as it is not a state). After the Brown ruling, the fight for civil rights for African Americans became a national political movement. In 1963, the Civil Rights movement reached its dramatic high point with more than 1,000 desegregation protests in more than 100 cities across the southern states. The leader of the movement, the Reverend Martin Luther King, Jr., had first gained national attention for leading a boycott of local buses in Montgomery, Alabama, in 1956 after Rosa Parks, a black seamstress, was arrested in December 1955 for refusing to give up her seat on a municipal bus to a white passenger. King had founded the Southern Christian Leadership Conference (SCLC) in 1957 based on a philosophy of nonviolent protest. In 1960, the Student Nonviolent Coordinating Committee (SNCC) was also formed, a grassroots organization focused on recruiting young black and white citizens to protest segregation laws. The SNCC was also instrumental in sponsoring freedom rides, an effort in the early 1960s to force southern states to integrate their bus stations. Passed by Congress and signed into law by President Lyndon Johnson in 1964, the Civil Rights Act of 1964 provided all persons with equal access to public accommodations and banned discrimination in hiring, promotion, and wages. A civil rights bill banning discrimination in public accommodations had originally been suggested by President John F. Kennedy in 1963. Following his assassination in November of that year, Johnson, the former Senate majority leader from Texas who served as Kennedy's vice president, placed passage of the bill at the top of his priority list upon taking over the office of the presidency. Johnson's extensive knowledge and skill regarding the legislative process on Capitol Hill played a large role in the bill's passage, and many southern Democrats, including Senator Strom Thurmond
of South Carolina, attempted to filibuster the bill. (Thurmond would eventually switch to the Republican Party, as would many southern Democrats by the early 1990s; he already held the record for the longest individual filibuster in Senate history, a speech of more than 24 hours against the Civil Rights Act of 1957, and the southern bloc's filibuster of the 1964 bill lasted more than two months before the Senate voted to invoke cloture). The most comprehensive civil rights bill ever passed, the act gave the federal government the means to enforce desegregation in southern schools. In addition, the act specifically outlawed any discrimination in voter registration, prohibited segregation in public places, created the Equal Employment Opportunity Commission (EEOC), and prohibited discrimination in hiring practices based on race, color, religion, sex, or national origin. The following year, Congress would pass and Johnson would sign the Voting Rights Act of 1965, which ended racial barriers to voting. Specifically, the law outlawed literacy tests and poll taxes as means of determining whether or not someone was fit or eligible to vote; citizenship and valid voter registration were all that was needed to participate in an election. The impact of the law came in the rising number of African Americans registered to vote throughout the southern states, which also brought with it an increased number of African American politicians elected to public office. Women in America have also fought many legal and political battles in pursuit of equality. The women's rights movement in America is generally defined by three separate waves or eras. The first wave is generally considered the fight for women's suffrage, beginning in 1848 in Seneca Falls, New York, and culminating with passage of the Nineteenth Amendment to the U.S. Constitution granting women the right to vote in 1920. The second wave of the women's movement emerged in the politically turbulent decade of the 1960s and coincided in part with the Civil Rights movement, with major attention focused on breaking down the legal barriers to sexual equality and, toward the end of this period, on the failure of the Equal Rights Amendment to the U.S. Constitution to win ratification by its 1982 deadline. The third wave of the women's rights movement began in the early 1990s and has focused on increased political participation by women as well as a more inclusive and global notion of women's rights in both the United States and around the world.
Many other groups within American society have been disadvantaged in terms of legal and political equality. These groups have quite different and distinct histories from those of the civil rights and women's rights movements, yet all share in their hard-fought victories to end discrimination. According to the 2000 census, American demographics continue to change regarding race and ethnicity. So-called "minority" groups continue to make up a larger percentage of the U.S. population; approximately 28 percent of Americans identify themselves as either nonwhite or Hispanic. Two states, California and Hawaii, are majority-minority; that is, non-Hispanic whites no longer compose a majority of the state population. Many states have experienced a large population growth among minorities due to Latino and Asian immigration. Political minority status, however, also extends beyond race and ethnicity to include such categories as sexual orientation, physical capabilities, or even age. By 2003, more than 40 million Hispanic Americans were living in the United States, making up one of the most dynamic and diverse racial/ethnic groups in America. The number of Hispanic Americans, one of the nation's oldest ethnic groups, doubled between 1980 and 2000, and in 2003, they became the nation's largest minority group. The term Hispanic generally refers to people of Spanish-speaking backgrounds. The term "Latino" is often broadly used to include different ethnic backgrounds, including those citizens who emigrated from Mexico, Cuba, Puerto Rico, Central America, or Latin America. Most recently, however, the ethnic term Latino is used inclusively to refer to any person of Latin American ancestry residing in the United States (and also connotes identification with Indo-American heritage rather than Spanish European heritage). Projections suggest that by 2100, the U.S. Hispanic/Latino population could grow from 13 percent of the total U.S. population (in 2000) to 33 percent. Because of their growth in numbers, Hispanic Americans represent an important voting bloc for both major political parties, as well as growing buying power within the U.S. economy. Currently, roughly half of all Hispanic Americans were born in or trace their ancestry to Mexico; more than half of the population of Los Angeles—the nation's second most populous city behind New York—is of Hispanic descent.
Like Hispanic or Latino Americans, the grouping of Asian Americans for purposes of the U.S. Census includes citizens from a variety of national origins and cultural identities. Continuing immigration, combined with a growing number of U.S.-born Asian citizens, makes Asian Americans one of the fastest-growing groups within U.S. society, with more than 12 million citizens (about 4 percent of the total U.S. population). Asian Americans represent a diverse group with varying languages, cultural and religious practices, political systems, and economic conditions; they originate from East Asia (which includes China, Japan, and Korea), Southeast Asia (which includes Cambodia, Indonesia, Laos, Malaysia, the Philippines, Thailand, and Vietnam), and South Asia (which includes Bangladesh, India, Myanmar, Nepal, and Pakistan). Most arrived in the United States following the passage of the 1965 Immigration and Nationality Act, which adjusted discriminatory immigration quotas for groups previously admitted into the country in very small numbers; others, most notably from Vietnam, Laos, and Cambodia, arrived as refugees in the 1970s at the end of the Vietnam War. Today, more than 4 million Native Americans live in the United States, with four out of 10 living in the western United States. In terms of civil rights, the goals of Native American tribes have often differed from those of other groups, as Native Americans have sought sovereignty, self-government, preservation of languages and cultures, and economic self-determination. A tribe can be defined as a group of indigenous people connected by biology or blood, cultural practices, language, or territorial base, among other ties. From a political and legal standpoint in the United States, a tribe is a group that has received federal recognition in the form of a diplomatic agreement granting the tribe sovereignty, which means that the tribe has the right to form its own government, create and enforce its own laws, develop its own tax system, determine citizenship, and regulate and license activities. The only limitations placed on tribal sovereignty are the same limitations placed on states in the U.S. Constitution—neither a tribe nor a state has the authority to make war, coin money, or engage in foreign relations.
public life, have tried to keep their sexual orientation hidden. The gay rights movement in the United States first became prominent in the late 1960s and early 1970s, following the lead of the civil rights and women's rights movements at the time. By the 1990s, the gay rights movement was recognized as a well-organized political force capable of playing an important role in shaping the national political agenda. Two prominent gay rights organizations include the Lambda Legal Defense and Education Fund and the Human Rights Campaign. Lambda, founded in 1973, has focused on litigation, education, and lobbying for public policies that recognize the civil rights of gay men, lesbians, bisexuals, transgender people, and those with HIV. The Human Rights Campaign (HRC), founded in 1980, is the nation's largest civil rights organization working to achieve gay, lesbian, bisexual, and transgender equality. The group is recognized as an effective lobby for gay and lesbian rights in Congress, as well as for providing campaign support to what the group considers fair-minded candidates who support issues of equality and civil rights. The HRC also works to educate the public on various issues relevant to gays and lesbians, including relationship recognition, workplace, family, and health issues. Government policies regarding disabled citizens date back to the Revolutionary War, when assistance was provided for army veterans who could no longer provide for themselves due to physical disabilities resulting from the war. During World War II, extensive government programs were implemented to help rehabilitate veterans with disabilities. Military veterans have long been outspoken advocates for stronger laws protecting disabled Americans. By the early 1970s, a strong movement in favor of disability rights emerged in the United States to pass legislation that would ban discrimination in many areas. Disabled citizens have often experienced higher levels of poverty due to unemployment and formidable barriers to adequate housing and transportation, as well as exclusion or segregation in education. In 1973, Congress passed the Rehabilitation Act, which included an antidiscrimination clause modeled after provisions of the Civil Rights Act of 1964. The act prohibited discrimination against an otherwise qualified person with a disability, solely on the basis of the disability, in any program or activity receiving federal financial
assistance. The disability movement achieved its greatest legislative victory in 1990 with passage of the Americans with Disabilities Act (ADA). Age became a protected category in employment discrimination law in the years following the Civil Rights Act of 1964. An early example of age discrimination occurred among U.S. airlines, which up until 1968 had forced female flight attendants to retire at the age of 32 (the companies believed that the women would no longer be viewed as attractive by their predominantly male clientele at the time). Federal antidiscrimination law began to end this type of practice by providing that an employer cannot discriminate based on age unless it can be proved that the job cannot be performed by someone past the designated age. In most cases, age discrimination occurs at a much later stage in life. Two important age discrimination statutes have since become law. The Age Discrimination in Employment Act of 1967 protects certain applicants and employees 40 years of age and older from discrimination on the basis of age in hiring, promotion, compensation, or being fired from a job. Mandatory retirement ages for most jobs do not exist; however, forced retirement based on age can be allowed if age is a factor in the nature of a job or the performance of a particular employee. The Age Discrimination Act of 1975 prohibits discrimination on the basis of age in programs and activities receiving federal financial assistance. The American Association of Retired Persons (AARP) serves as a powerful lobby for older Americans and works to ensure that the rights of seniors are protected in the workforce. Equality for all American citizens has never existed. Being treated equally in the eyes of the law is a notion that has been developed throughout the nation's history, and equal protection as a constitutional provision did not exist until ratification of the Fourteenth Amendment in 1868. Even then, it took decades for the judicial branch to begin to interpret the equal protection clause as a guarantee of civil rights, and legal interpretations that expand our notion of equality continue today. Equality may mean something very different in the current political context than what it meant to the framers of the Constitution during the founding era, but it is clearly an ideal that Americans have accepted as part of their political culture. Yet attaining true equality for all citizens remains a difficult and sometimes elusive
task. An important dilemma also exists for the most politically disaffected groups in America—those who most need to have their voices heard within the political process often participate at the lowest rates. Further Reading Chang, Gordon H., ed. Asian Americans and Politics. Stanford, Calif.: Stanford University Press, 2001; Epstein, Lee, and Thomas G. Walker. Constitutional Law for a Changing America: Institutional Powers and Constraints. 5th ed. Washington, D.C.: Congressional Quarterly Press, 2004; Fisher, Louis. American Constitutional Law. 5th ed. Durham, N.C.: Carolina Academic Press, 2003; Marable, Manning. Race, Reform, and Rebellion: The Second Reconstruction in Black America, 1945–1990. Jackson: University Press of Mississippi, 1991; Mezey, Susan Gluck. Disabling Interpretations: The Americans with Disabilities Act in Federal Court. Pittsburgh, Pa.: University of Pittsburgh Press, 2005; Mohr, Richard D. The Long Arc of Justice: Lesbian and Gay Marriage, Equality, and Rights. New York: Columbia University Press, 2005; O'Brien, David M. Constitutional Law and Politics. Vol. 2, Civil Rights and Civil Liberties. 5th ed. New York: W.W. Norton, 2003; Rosen, Ruth. The World Split Open: How the Modern Women's Movement Changed America. New York: Penguin Books, 2000; Segura, Gary M., and Shaun Bowler, eds. Diversity in Democracy: Minority Representation in the United States. Charlottesville: University of Virginia Press, 2005; Stephens, Otis H., Jr., and John M. Scheb II. American Constitutional Law. 3rd ed. Belmont, Calif.: Thomson, 2003. —Lori Cox Han
Civil Rights movement The Civil Rights movement is generally considered to have started during the 1950s, although the fight to achieve racial equality can trace its roots to much earlier in American history. The creation of the National Association for the Advancement of Colored People (NAACP) in 1909, in response to a major race riot that had occurred in Springfield, Illinois, one year earlier, was a pioneering effort to address the disparate treatment received by African Americans in U.S. society. Despite the work of the NAACP to combat
discrimination through a concerted litigation strategy and other organizational tactics, the struggle for civil rights gained little headway throughout most of the first half of the 20th century. However, the attempts by the NAACP and other civil rights groups to address this mistreatment of African Americans finally bore fruit in 1954 when the United States Supreme Court declared school segregation unconstitutional in Brown v. Board of Education, thus providing one of the first successful challenges to the doctrine of "separate but equal" handed down by the Court in Plessy v. Ferguson (1896). The Civil Rights movement never really expanded its reach to the masses until 1955. On December 1 of that year, a black seamstress named Rosa Parks refused to relinquish her seat on a Montgomery, Alabama, bus to a fellow white passenger. Parks was arrested and ultimately convicted for this act of defiance. This arrest triggered a backlash in Montgomery's African-American community and led to the Montgomery bus boycott, which was organized by Dr. Martin Luther King, Jr., and other black leaders to protest racial segregation on the bus system. Lasting 381 days, this boycott served to publicize the injustice of segregation pervasive in the South at the time. The Montgomery bus boycott came to an end when the ordinance mandating the segregation of blacks and whites was finally rescinded more than a year later. While other cities, such as Baton Rouge, Louisiana, had instituted boycotts before 1955, many scholars consider the Montgomery boycott to be the birth of the Civil Rights movement. During his involvement in the Montgomery boycott, Dr. King was propelled into the spotlight as a national leader in the Civil Rights movement. He used his newfound prominence to advance the cause of African Americans by helping to mobilize a more broad-based grassroots movement that would
President Lyndon Johnson signs the Civil Rights Act of 1964 as Martin Luther King, Jr., looks on. (Johnson Library)
employ a wide-ranging set of tactics beyond litigation to achieve its objectives. In 1957, Dr. King, Reverend Ralph Abernathy, and other leaders of the Montgomery bus boycott formed the Southern Christian Leadership Conference (SCLC) to combat discrimination in accordance with the principles of nonviolent disobedience. These tactics included boycotts, public protests, and sit-ins to illuminate the plight of African Americans and eliminate racial injustice. The Civil Rights movement gained even more traction in 1960 when students in cities such as Greensboro, North Carolina, and Nashville, Tennessee, initiated a series of sit-ins at lunch counters in local establishments as a means to protest segregation. Many protesters were violently removed from the eating facilities, which generated tremendous sympathy among many Americans for the movement's goals. A number of the leaders of these sit-ins joined together to create the Student Nonviolent Coordinating Committee (SNCC) to translate the momentum they had gained into sustained action in favor of racial justice. SNCC collaborated with the more established Congress of Racial Equality (CORE) to organize what became known as the freedom rides. The freedom rides were designed to mobilize activists to travel by bus through the South to desegregate interstate bus travel. This tactic led to violent retribution toward many of the activists involved. In Birmingham, Alabama, freedom riders were brutally attacked by white mobs organized by the Ku Klux Klan, while in Anniston, Alabama, one bus was firebombed. Another violent incident that further underscored the hostile atmosphere toward African Americans in the South was the 1963 murder of Mississippi NAACP official Medgar Evers for his attempts to help black Mississippians register to vote. Meanwhile, despite the deep resistance in the South to integration and the pervasive discrimination exacted against African Americans, national legislative efforts to address these problems were slow in coming. In response, African-American civil rights leaders A. Philip Randolph and Bayard Rustin planned the March on Washington for Jobs and Freedom, which was proposed in 1962. Even though President John F. Kennedy had announced his support for sweeping civil rights legislation, he and members of his administration mounted an effort to
convince the leaders of the march to call it off. Nevertheless, the march took place on August 28, 1963. This event is most remembered for Dr. Martin Luther King's "I have a dream" speech, in which he urged that Americans be judged by the content of their character and not the color of their skin. These words captivated many Americans and helped intensify the pressure on Congress and the president to enact a landmark civil rights bill. Following the march, several civil rights leaders, including King, met with President Kennedy on the issue. Kennedy declared his commitment to enacting civil rights legislation but doubted there was enough support in Congress to achieve passage. For most of 1963, the bill languished on Capitol Hill and failed to gain any momentum until the assassination of President Kennedy in November of that year. Kennedy's successor, Lyndon Johnson, picked up the mantle on civil rights and called for passage of the legislation as a tribute to the fallen president. Finally, the intense opposition of southern members of Congress was overcome, and on July 2, 1964, President Johnson signed into law the 1964 Civil Rights Act. This law banned discrimination in employment and accommodations on the basis of race and created the Equal Employment Opportunity Commission (EEOC) to enforce these provisions. The Civil Rights Act of 1964 was a major step toward fulfilling the agenda of the movement, but it did not put an end to the reality that millions of African-American citizens in the South were still systematically prohibited from exercising their right to vote. For example, in many southern states (and some northern states), citizens were forced to pay a poll tax or pass a literacy test to register to vote. Because few African Americans were well off financially, the poll tax placed an undue burden on their ability to register. In 1964, the Twenty-fourth Amendment to the U.S. Constitution abolished the poll tax in federal elections. However, the problem of the literacy tests—which were often unfairly administered by a white registrar—remained. To highlight the cause of expanding voting rights and eliminating literacy tests, on March 7, 1965, Hosea Williams of SCLC and John Lewis of SNCC organized a march from Selma, Alabama, to the state capitol in Montgomery. While crossing the Edmund Pettus Bridge, the marchers were attacked by state police and local law enforcement
officials wielding tear gas, billy clubs, and bull whips. John Lewis was knocked unconscious and several protesters were hospitalized. This incident was shown widely on television news broadcasts and galvanized support in the nation for federal action to fully extend the franchise to African Americans in the South. President Johnson invoked the incident as he urged Congress to move forward on voting rights legislation. On August 6, 1965, President Johnson signed into law the Voting Rights Act, which prohibited nationwide the denial or abridgment of the right to vote through literacy tests and required southern states and other areas of the country with a history of discrimination to submit changes in their election laws to the U.S. Department of Justice for approval. Subsequent to the adoption of the Voting Rights Act of 1965, the number of African Americans registered to vote in the South increased dramatically, and discrimination in voting based on race became a much more infrequent occurrence throughout the nation. Passage of the Voting Rights Act marked the high point of the Civil Rights movement; a series of developments in the late 1960s then led to its decline. Support for civil rights began to wane in the wake of a number of urban riots that broke out in places like Watts and Detroit. For a substantial percentage of white Americans, including President Johnson, there was a sense of betrayal that so many African Americans could engage in such violent conduct after the strides that had been made on the civil rights front in the previous few years. The movement was now becoming associated in the minds of some Americans with the problem of rising crime and lawlessness in the streets, undercutting its mainstream appeal. The focus of civil rights leaders also broadened beyond the South toward issues like eradicating housing discrimination in all parts of the country, further eroding white support for the movement's goals. New issues like affirmative action generated a backlash among the U.S. public, because unlike previous civil rights policies, which were viewed in terms of granting equal rights to African Americans, affirmative action was seen as giving special preferences to members of minority races, a concept many Americans refused to accept. Court-imposed busing of students to achieve school desegregation antagonized a sizable portion of white citizens as well.
Internal divisions among various African-American leaders also weakened the prospect of achieving future progress toward the movement's goals. While established figures such as King wished to maintain a moderate course, younger African Americans became especially frustrated with the pace of change and started to press for more confrontational tactics in the fight to improve the condition of black people. The ascension of Stokely Carmichael as the leader of SNCC marked the rise of the black power movement as a direct challenge to the nonviolent strategy pursued by King. Coupled with the founding of the Black Panther Party in 1966, these developments illustrated that many African Americans were now embracing what was perceived as a much more militant approach toward black empowerment, thus further marginalizing the civil rights struggle in the eyes of white America. Another massive blow to the movement occurred on April 4, 1968. On that day, during a trip to Memphis, Tennessee, to protest the treatment of striking sanitation workers, Martin Luther King was assassinated by a white man named James Earl Ray. No African-American leader since his death has been able to effectively articulate the objectives of the Civil Rights movement as did King when he was alive. In contemporary times, while many organizations like the NAACP continue to flourish with hundreds of thousands of members, the Civil Rights movement is not the influential force that it was at its apex in the middle of the 1960s. The conservative thrust of the country since the Reagan years during the 1980s has brought major assaults on programs like affirmative action that form the core of the movement's modern agenda. In some sense, the Civil Rights movement is a victim of its own success. Legal institutionalized racial discrimination has been virtually eradicated in most areas of American society. No longer is it socially acceptable to openly espouse views that are hostile to members of different racial groups. There is now a thriving black middle class and numerous African Americans who have achieved unprecedented levels of success in the fields of law, politics, business, and entertainment. African Americans like Thurgood Marshall and Clarence Thomas were appointed to the U.S. Supreme Court; Barack Obama, Carol Moseley Braun, and Edward Brooke have been elected to the U.S. Senate; Colin Powell and Condoleezza Rice
have served as secretary of state; and Black Entertainment Television founder Robert Johnson became the first black billionaire in the United States, in part because of the groundwork laid by the pioneers of the civil rights struggle. The movement forever reshaped the character of American life through its accomplishments. Many observers would point out that African Americans and other minorities have yet to attain full equality in the United States and that there is still a great deal of progress to be made. However, the gains that have been achieved can largely be attributed to the Civil Rights movement. Further Reading Branch, Taylor. At Canaan's Edge: America in the King Years, 1965–1968. New York: Simon & Schuster, 2006; Branch, Taylor. Parting the Waters: America in the King Years, 1954–1963. New York: Simon & Schuster, 1988; Branch, Taylor. Pillar of Fire: America in the King Years, 1963–1965. New York: Simon & Schuster, 1998; Davis, Townsend. Weary Feet, Rested Souls: A Guided History of the Civil Rights Movement. New York: W.W. Norton, 1998; Graham, Hugh Davis. The Civil Rights Era: Origins and Development of National Policy. New York: Oxford University Press, 1990; Marable, Manning. Race, Reform and Rebellion: The Second Reconstruction in Black America, 1945–1982. Jackson: University Press of Mississippi, 1984; Young, Andrew. An Easy Burden: The Civil Rights Movement and the Transformation of America. New York: HarperCollins, 1996. —Matthew Streb and Brian Frederick
conscientious objector One of the most momentous decisions a nation can make, if not the most momentous, is the decision to go to war. In a diverse nation such as the United States, where there are many different religions, the state must be wary of compelling citizens to engage in acts forbidden by their faiths. One such potential area of controversy can be seen when the nation decides to go to war, but members of certain faith communities hold as a religious doctrine the belief that war and killing are forbidden by their faith. A conscientious objector is someone who claims that military service, particularly in a combat role and/or in a time of war, goes against
their religious, moral, or ethical beliefs. An individual may also claim conscientious objector status and not ground the claim in the doctrine of a particular religious faith, but in personal beliefs alone. Conscientious objectors find their beliefs incompatible with military service and, if granted conscientious objector status, are exempt from serving in the military. While most conscientious objectors oppose all war and violence and are pacifists, some argue that conscientious objector status need not be absolute but can be an objection to what are believed to be unjust wars. Thus, some conscientious objectors are against all wars, while others are against particular wars. In such cases, the government has a provision allowing a citizen to declare that he or she is a conscientious objector to war; if this declaration is allowed, that individual is exempt from service in the military but must serve the country in some form of alternative, nonmilitary service. Local draft boards have traditionally been the place where appeals to the status of conscientious objection are made and decided. During every American war, there have been conscientious objectors. Most of those objecting have done so for religious reasons, but some did so—or attempted to do so—for political reasons. The religious objectors claimed that their religions forbade killing and that, therefore, participation in the military was against their beliefs. Some mainstream religions have been supportive of members making such claims, but others have not. During the Civil War, conscientious objectors (and others as well) were allowed to "buy" their way out of military service. Grover Cleveland (who would later become the 22nd president of the United States) did just that. Two of Cleveland's brothers were serving in the Union Army, and as Grover Cleveland was the primary source of financial support for his mother and sister, he opted to hire a substitute for himself after he was drafted. Ironically, given today's standards, this did not become a significant campaign issue or political liability when Cleveland ran for the presidency. During the Vietnam War, many Catholics claimed that the roots of their religion were grounded in a pacifist orientation found in the message of Jesus Christ. However, some church leaders denied that such a claim was valid, making the appeal of Catholic
conscientious objectors difficult to sustain before draft boards. In such cases, individual draft boards often handled applications from Catholics for conscientious objector status very differently. Those who claimed to object on political grounds had an even harder time gaining acceptance. Most of those who sought conscientious objector status for political reasons did so because they objected to a particular war. Some felt that the war was unjust; others, that the war was a projection of imperialism; and still others, that the war was being waged against a nation or people against whom they had no grievance. According to the U.S. Selective Service System, the federal agency responsible for registering American men in case a draft is needed, "Beliefs which qualify a registrant for conscientious objector status may be religious in nature, but don't have to be. Beliefs may be moral or ethical; however, a man's reasons for not wanting to participate in a war must not be based on politics, expediency, or self-interest. In general, the man's lifestyle prior to making his claim must reflect his current claims." At times, conscientious objectors faced scorn, ridicule, and violence. Some were called cowards. Others were branded as traitors. It is difficult to stand up to the majority on grounds of conscience, and to take a principled stand often takes more courage than to go along with popular mass opinion. The French social commentator Alexis de Tocqueville observed in the 1800s that one of the greatest forces compelling conformity in the United States was what he referred to as the "tyranny of the majority." In a democracy, to defy the majority was to defy the will of the people, tantamount to political blasphemy. And while Americans prided themselves on promoting freedom and personal liberty, running at cross-purposes with freedom was the weight of majority opinion in a democracy. At times, the liberty of the individual was crushed by the overwhelming weight of majority opinion, as the tyranny of the majority made it politically dangerous to hold opinions at variance with the masses. At such times, those holding contrary views were sometimes ostracized and at other times faced retribution, even violence. It is in this sense that those who objected to war as a matter of conscience often found themselves at odds with popular opinion—especially when the drumbeat of war led to a popular passion in support of war and
against the defined enemy. Those who objected were painted as aiding the enemy and often faced harsh recriminations. While there may be no solution to the tyranny of the majority, a nation founded on personal freedom must find ways to allow for the wide-ranging views of a pluralist culture without punishing those who object to majority opinion. Embracing freedom means that some will march to a different drummer. If the coercive power of mass opinion compels everyone to march in lockstep behind the majority, then freedom of thought and speech will be crushed. The government thus faces the difficult task of protecting freedom while also pursuing the national interest as defined by the majority. That is one of the reasons why the Bill of Rights builds into the system a set of guaranteed rights that apply—or are supposed to apply—in times of war as well as times of peace. For those claiming conscientious objection, gaining acceptance, legally as well as socially, has been a long and arduous effort. Many conscientious objectors have been imprisoned for refusing to serve in the military. Several court cases have shaped the parameters of conscientious objector status in the United States. The United States Supreme Court, in United States v. Seeger (1965) and later in Welsh v. United States (1970), held that individuals without "traditional" religious beliefs can be considered conscientious objectors but, in Gillette v. United States (1971), held that an individual could not base the claim of conscientious objection on a particular or specific war (in this case, the legitimacy or morality of the war in Vietnam). Ironically, the noted military historian B. H. Liddell Hart, in his classic work Thoughts on War, published in 1944, observed that "there are only two classes who, as categories, show courage in war—the front-line soldier and the conscientious objector." Hart's understanding of the depth of courage needed to face up to mass public opinion and the power of the state in claiming to object to war speaks volumes to the difficulty a conscientious objector faces when attempting to speak truth to power. Further Reading Hanh, Thich Nhat. Love in Action: Writings on Nonviolent Social Change. Berkeley, Calif.: Parallax Press, 1993; Schell, Jonathan. The Unconquerable World:
Power, Nonviolence, and the Will of the People. New York: Henry Holt, 2003. —Michael A. Genovese
double jeopardy The prohibition against double jeopardy is not found in the main text of the U.S. Constitution but is instead found in the Bill of Rights. Specifically, the Fifth Amendment states, in part, "nor shall any person be subject for the same offence to be twice put in jeopardy of life or limb." It applies to both felonies and misdemeanors, no matter what the punishment, but it is separate for each level of government. The Fifth Amendment's prohibition was written for the national government, and in the 1969 case of Benton v. Maryland, the United States Supreme Court reversed an earlier decision (Palko v. Connecticut, 1937) and held that states are also bound by it as one of the liberties protected by the due process clause of the Fourteenth Amendment. Thus, neither the national government nor the states may try any person twice for the same crime, but each level of government may separately try a person for the same conduct. For purposes of double jeopardy, a trial begins once the jury has been empaneled and sworn in, or, in a case heard by a judge alone, once the first witness has been sworn in. It is not considered double jeopardy to return a case to a grand jury if it does not indict, and the government can refile charges in the following instances without violating double jeopardy: if the government seeks a new preliminary hearing after the magistrate dismisses the charges (such as for lack of probable cause); if the trial court dismisses the charges on some pretrial objection; or if the trial court dismisses the charges on grounds that would bar reprosecution (such as not having a speedy trial) but the state wins an appeal. Although double jeopardy seems quite clear, the Supreme Court has had to clarify its meaning with several cases. For example, in Ashe v. Swenson (1970), four men were charged with the armed robbery of six poker players plus theft of a car. One of the four was found not guilty of being one of the robbers, but because the state was conducting separate trials for each of the six victims, it tried him again. However, the Court held that when it was established in the first trial that he was not one of the robbers, the state could not litigate the issue again without violating double jeopardy.
After that decision, however, the Court became very restrictive in its double jeopardy interpretation. For example, in Illinois v. Somerville (1973), the trial judge found a procedural defect in the theft indictment and, over the defendant's objection, declared a mistrial so that the case could be retried under a valid indictment. The Court upheld the judge's decision, saying that if no mistrial was allowed, the state would have to conduct a second trial after the verdict was reversed on appeal. Therefore, the Court argued, why wait for that to occur? The defendant was not prejudiced by the judge's ruling, the delay was minimal, and the interests of public justice were served. Similarly, in United States v. Wilson (1975), a person was convicted of converting union funds to his own use, but the U.S. District Court reversed its earlier decision and dismissed the indictment. The U.S. Court of Appeals held that the District Court's dismissal constituted an acquittal and refused to look at the government's appeal, but the U.S. Supreme Court held that because there was no threat of either multiple punishment or successive prosecutions, the Court of Appeals should consider the government's appeal, and doing so would not constitute double jeopardy. Arizona v. Washington (1978) involved a new trial for a person convicted of murder, granted because the prosecution had withheld exculpatory evidence from the defense. In the second trial the defense lawyer told the jurors about it, and the judge declared a mistrial. The Court upheld the judge's decision, saying that he had good reason to declare the mistrial and there was no double jeopardy involved. Burks v. United States (1978) was a rare victory for a defendant. His guilty verdict for robbing a federally insured bank by use of a dangerous weapon had been reversed by a U.S. Court of Appeals on the ground that the government had failed to rebut his insanity defense, but that court left it to the U.S. District Court to decide whether to acquit him or order a new trial. The U.S. Supreme Court held that, had the Court of Appeals found a trial error, a new trial would be in order, but in this case the reversal was due to insufficiency of evidence, and a new trial for that would be double jeopardy. In United States v. Scott (1978), by contrast, the defendant police officer accused of distribution of narcotics obtained a termination of his trial before the verdict due to a preindictment delay, and a U.S. Court of Appeals said that there could be no further prosecution due to double jeopardy. However, the
U.S. Supreme Court ruled that the government can appeal such a mid-trial termination of proceedings favorable to the defendant without its being considered double jeopardy because the defendant had not been acquitted or convicted. In Oregon v. Kennedy (1982), the defendant in a theft trial obtained a mistrial after the state presented an expert witness who had earlier filed a criminal complaint against him. The state then wanted a new trial, and the U.S. Supreme Court allowed it, holding that although the prosecutorial conduct that led to the mistrial might amount to harassment or overreaching, a retrial was nevertheless allowable as long as the conduct was not intended to provoke the defendant into seeking the mistrial and thereby subvert the protections afforded by double jeopardy. Heath v. Alabama (1985) was a case where a person had hired two people to kill his wife. The meeting of the three persons took place in Alabama, but the murder took place in Georgia. The suspect pleaded guilty in Georgia and was given a life sentence, but then was tried in Alabama and given the death sentence. The U.S. Supreme Court upheld the decision of the Alabama courts, saying that it was not double jeopardy due to the dual sovereignty doctrine; i.e., both states undertook criminal prosecutions because they had separate and independent sources of power and authority before being admitted to the Union, and the Tenth Amendment to the Constitution preserves those sources (the Tenth Amendment says that powers not specifically delegated in the Constitution to the national government nor specifically denied to the states are reserved to the states or to the people). Many other cases involving the definition of double jeopardy have also come to the U.S. Supreme Court in recent years. For example, United States v. Dixon (1993) was a complicated case because it was actually two cases decided as one, and the results were mixed. In one case, a defendant was out on bond for second-degree murder and told to commit no criminal offense during that time. He was later arrested for possession of cocaine with intent to distribute, found guilty of criminal contempt and given 180 days in jail, and then indicted on the drug charge. In the other case, the defendant violated a civil protection order, was cited for contempt and given 600 days' imprisonment, and later the government indicted him on five assault charges. The U.S. Supreme Court in the first case dismissed the cocaine indictment as
double jeopardy, since he had already been punished for it by the contempt conviction. In the second case, the Court dismissed one indictment, since it (simple assault) had been the subject of his contempt conviction, but since the other four charges were for crimes different from violating the restraining order, the contempt conviction was considered inapplicable to them and he could be tried for them without its being double jeopardy. The case of Schiro v. Farley (1994) concerned a defendant found guilty of killing a woman while committing a rape. The jury returned no verdict on the count of knowingly killing the victim. The defendant argued that failure to convict him on that count acquitted him of intentional murder, yet that was the aggravating circumstance used in sentencing him to death. The U.S. Supreme Court said it was not double jeopardy because that provision was meant for trial and conviction, not punishment. Also, the trial court's instructions to the jury were ambiguous, which meant that the jury was not sure it could return more than one verdict; therefore, the verdict could have been grounded on an issue other than intent to kill. In Witte v. United States (1995), the U.S. Supreme Court held that a person can be charged with a crime even if that conduct had already been used to lengthen the sentence for another offense. The judge, while sentencing a defendant in a marijuana incident, almost doubled the maximum penalty because of relevant conduct involving cocaine. Later, the defendant was indicted on that same cocaine charge, but the Court held it was not double jeopardy because sentencing judges have traditionally been allowed to consider a defendant's past behavior even if there had been no conviction, and courts have been allowed to impose tougher sentences for repeat offenders. Hudson v. United States (1997) concerned several bank officers who violated federal banking statutes and regulations. They agreed to a consent order with the Office of the Comptroller of the Currency (OCC), under which they paid assessments and agreed not to participate in the affairs of any bank without OCC approval. Later they were indicted on criminal charges, and the U.S. Supreme Court held it was not double jeopardy because the consent order was a civil matter, not a criminal one, and the provision only applies to criminal cases. Monge v. California (1998)
looked at the issue of double jeopardy and sentencing. Under that state's three-strikes law, a convicted felon with one prior conviction for a serious felony could have the prison term doubled. For a prior assault conviction to count as a "strike," there had to be use of a dangerous weapon or great bodily injury to the victim. In this case, the defendant was convicted of selling marijuana. At the sentencing hearing, the prosecutor said the defendant had used a stick during an assault for which he had been convicted and had served a prison term, but introduced in evidence only that he had been convicted of assault with a deadly weapon. The judge then gave the defendant a five-year sentence, which he doubled to 10 because of the prior conviction, and then added an 11th year as enhancement because of the prior prison term. When the sentence was appealed, the state argued that it did not prove beyond a reasonable doubt that he had personally inflicted great bodily injury or used a deadly weapon, as required by the law, and asked to hold the sentencing hearing again. The U.S. Supreme Court allowed the state to hold a new sentencing hearing, saying that double jeopardy protections are inapplicable to sentencing proceedings in noncapital cases, as the defendant is not placed in jeopardy for an offense. Double jeopardy applies only to a determination of guilt or innocence, not sentencing. In a most interesting recent case, in August 2006, a federal District Court judge in Miami ruled that the government brought overlapping and redundant charges against José Padilla, a former "enemy combatant" linked to Al Qaeda, and two codefendants, and dismissed one charge that could have resulted in a life sentence: conspiracy to murder, kidnap, and maim people in a foreign country. The judge said that all three charges related to one conspiracy to commit terrorism overseas, and charging the defendants with a single offense multiple times violated double jeopardy. In sum, it is apparent that a seemingly uncomplicated provision in the Bill of Rights is indeed a very complicated one that needs the U.S. Supreme Court and other courts to interpret its meaning. In recent years, most of the decisions have narrowed the scope of the provision, consistent with the Court's generally restrictive approach to the rights of criminal defendants during that period. Future justices sitting on the Supreme Court
may interpret this most important civil liberty more expansively. See also due process. Further Reading Israel, Jerold H., Yale Kamisar, Wayne R. LaFave, and Nancy J. King. Criminal Procedure and the Constitution. St. Paul, Minn.: Thomson West, 2006; Weinreb, Lloyd L., ed. Leading Criminal Cases on Criminal Justice. New York: Foundation Press, 2006. —Robert W. Langran
due process The idea of due process, which refers to the concept more appropriately known as due process of law, can be viewed as the cornerstone of the American system of justice as defined by the original U.S. Constitution of 1787. Insofar as the Constitution, as initially ratified by the founding generation, represented a largely procedural framework, with obvious substantive guarantees, for the establishment of a comparatively limited government through a purposive process and structure, the concept of due process of law was its animating principle, both from a specific and a general perspective. At a specific level, due process secured the centrality of procedural constraints as guardians of political liberty. On the other hand, at a more general level, due process confined the exercise of governmental power to those particular authorities to whom it was granted and thus permitted by the processes and structures defined in the Constitution. Although the concept of due process had traditionally been interpreted as validating and necessitating a set of manifestly procedural rules and restraints, according to which the dictates of law would be served and duly authorized political power would be maintained, its application during the later 19th and early 20th centuries expanded to include substantive criteria through which the content and effects of legislation could be judged. At first, such substantive due process efforts largely served those seeking to invalidate efforts by progressive-era reformers to affirm the rights of disadvantaged groups and increase governmental police powers by, among other things, protecting the liberty of contract. Eventually, however, not least due to the doctrinal innovations of key members
of the Warren court (1953–1969), substantive due process approaches underpinned the extension of the Bill of Rights to the states and the confirmation, some would say creation, of theretofore nonpositive rights through the liberty and due process provisions of the Fourteenth Amendment. Nevertheless, particularly from the standpoint of criminal law, due process still presupposes a consistent set of procedures to which every citizen has a right and without which he cannot be deprived of his liberty as a citizen. As such, the notion of due process of law has its moorings in an Anglo-American common-law tradition that stresses the necessary integrity of duly acknowledged and knowable processes that establish, promote, and secure an official, or governmentally sanctioned, system of right and wrong. That tradition, though distinct from continental civil-law systems, betrays some influences of the Roman conception of jus naturale, especially in its denotation as natural right and not the more general, or universal, natural law. Indeed, the concept of natural right is probably the most appropriate starting point for a discussion of due process of law, Anglo-American or otherwise. By natural right, most Roman philosophers and what we would today call jurists meant the particular manifestation, specification, or application of universal natural law in order to demonstrate the inherent correctness, i.e., right, and necessary logic of an action or potential to act. Most significantly for us, this descriptive incarnation of jus naturale eventually required and validated the existence of correct, or legally right, procedures through which the benefits, privileges, and legal attributes of citizenship were recognized and protected. Some of the main features of jus naturale were ultimately incorporated into English common law during the Middle Ages, but that should not be construed to imply that a linear transmission of the Roman conception of natural right can be traced from late antiquity to medieval England. In the hands of church-based scholastic writers, who exerted considerably less influence over the development of English jurisprudence than their counterparts throughout Europe, the intrinsic linkage between natural right and nature as an ontological anchor was severed. Though these writers helped inspire a tradition outside the church that arguably culminated with James
I and finally Robert Filmer, their attempts to replace nature with God as the ontological and epistemological source of right proved untenable. In the end, neither nature nor God could offer the kind of viability that custom seemed to present. Through its crucial role as the internal logic that defined the evolution of English common law, the concept of custom enabled the marriage of natural right and legal precedent in a way that would, by the 17th century, firmly entrench the concept of due process of law in the English political consciousness. By the beginning of the 13th century, with the issuance of Magna Carta, English insistence on the recognition and confirmation of specific procedures without which legal status, privilege, and benefits or claims arising therefrom could not be suspended, modified, or abolished was evident. In fact, Magna Carta contains the roots of many of the procedural guarantees against the arbitrary abridgment of political liberty that became such a prominent aspect of the U.S. Constitution. Magna Carta is pivotal in another regard also. It represents a transition, which had probably been effected some centuries prior in England, from procedural manifestations of natural right as it applied to the exercise of power and the duties and privileges of citizenship generally to the narrower procedural constraints on government in its actions against and relationships with citizens or subjects. So, the focus of what eventually became known as due process gradually but conspicuously shifted from general precepts of right as determined by nature to those specific proscriptions against and limitations of government that abridged or had the tendency to abridge the scope of necessary and inherently allowable activity on the part of the subjects and citizens. Viewed from an ahistorical perspective, English common law became less interested in the descriptive propositions of right that directed political action and increasingly interested in those processes that prevented government from restricting the exercise of political rights and liberties. In terms of process per se, these developments enshrined as inviolable and practically inalienable those specific procedures that protected property in their customary legal formulations as life, liberty, and estate. Although pre-17th-century conceptions of liberty and, by association, rights differed from those of
the writers and political actors who later influenced the framers of the U.S. Constitution, the fundamental components of what would become the Lockean understanding of property as life, liberty, and estate were already in place. This is why Magna Carta and other contemporary documents emphatically affirmed certain due process criteria without which a subject could not be tried for criminal offenses or his property could not be seized, transferred, or otherwise modified. Guarantees against the suspension or abolition of privileges associated with habeas corpus, those establishing standards for the administration of justice, and particularly others concerned with the rights of the accused became hallmarks of an English legal tradition that was increasingly concerned with government’s ability to control life and political liberty. By the 18th century, after several generations of political change wrought by civil war, intrigue, and revolution, British legal doctrines had incorporated a conception of due process that was quite similar to our own. During the 17th century, jurists such as Sir Edward Coke and Sir Matthew Hale helped solidify a custom-centered system of jurisprudence that located due process of law at the center of a web of rights and liberties as the protector of those rights and liberties, through its ties to the so-called ancient constitution. The ancient constitution, despite its questionable historiographic viability, conferred an ontological imprimatur on the procedures that secured rights and liberties by establishing a “natural” origin and inherently legitimate source of authority for those procedures. As a result, the idea of due process had become not only constitutionally indispensable but also quintessentially English—and, by association, British. In the American context, i.e., in the arena of colonial politics within British North America, the common-law, custom-centered heritage of Coke and Hale became wedded with Lockean sensibilities about natural law and natural rights to produce a divergent strain of due-process doctrines. This is not meant to imply that the common-law tradition or custom-centered political culture generally had ceased to be relevant in the colonies; rather, it should highlight the fact that what eventually became an American system of jurisprudence was founded on an amalgam of influences that often seemed incompatible to jurists and political writers in Great Britain.
Indeed, as J. G. A. Pocock has shown, John Locke was an aberration with respect to the evolution of English—and later British—politics, not least because his nature-centered discourse could not be reconciled with the prevailing custom-centered rationalizations of the ancient constitution and incipient parliamentary sovereignty. Prior to the creation of the American republic, the Lockean view in colonial and state politics often predominated, as the dictates of natural law were used to substantiate the existence of procedures that secured and protected natural rights and political liberty. Even at its most Lockean, as exemplified in the writings of Richard Henry Lee and Thomas Paine, American political discourse never rejected or neglected its essential links to an English heritage that, in its own way, made those rights and that political liberty “natural” by confirming their necessarily English character. The import of colonial discussions about due process lay in the continued concentration of doctrinal focus on the procedures that protected the rights of accused persons, defined the extent of governmental authority with respect to rights and political liberty in civil and criminal matters, and distanced property from arbitrary interference by government. By the late 1780s, as the framers and critics of the Constitution struggled to build a new federal government, due process had become functionally equivalent with the prevention of tyranny, or corrupt government more broadly. The obsession with tyranny and its prevention reflects one of the most salient narratives in late 18th-century American history, inasmuch as colonial experiences with Parliament during the 1760s and 1770s and the subsequent excesses of democracy among state governments in the 1780s proved to the framers of the Constitution that tyranny and corrupt government represented the most immediate and tangible threats to the preservation of English rights and liberties. The avoidance and elimination of those conditions that foster tyrannical government and the related abridgment of property, as life, liberty, and estate, became the prerequisite for any system of jurisprudence in the new American republic. For reasons that are beyond the scope of this essay, the framers of the Constitution shed much of their traditional allegiance to Lockean principles of
constitutionalism by transferring the locus of political and, therefore, juridical legitimacy from nature to positive law via a positivist constitution that became the seminal source of due process. Due process as envisioned by the framers encompassed the specific procedural guarantees without which life, political liberty, and property as estate could not be suspended, abridged, or appropriated. This included the by now prominent provisions regarding the rights of accused persons, not least of which were access to the writ of habeas corpus, trial by jury, protections against self-incrimination, and all of the other elements that have underscored the concept of the presumption of innocence; stipulations against the arbitrary and unjustifiable abridgement of property as estate and restrictions of contracts; and the imposition of strict limits upon the exercise of the coercive capabilities of government. The last point is one that is easily missed, especially since we usually associate due process with its comparatively narrow application to criminal law and the inherent rights of accused persons. However, the idea of due process of law is equally relevant within the aforementioned context of the prevention of tyranny and the associated constraints on government. To the extent the Constitution creates a system of justice devoted to the establishment and maintenance of a constitutional process and related structure that confines the legitimate purview of governmental authority in a way that undermines tyranny, that process and structure defines authority in terms of procedural limits on the use of power against citizens. By the very logic of limited government, the framers built the concept of due process into the fabric of government itself, so that due process of law automatically and necessarily entails an adherence to the political processes that make power and authority self-limiting through their relationship to property. (Of course, this aspect has become almost irrelevant with the expansion of governmental power through the unprecedented growth of the federal government over the past 70 years.) As suggested at the beginning of this essay, the doctrinal status of due process has been somewhat ambiguous due to interpretive innovations over the last several decades. On the one hand, the procedural constraints and requirements that protect the rights of the accused have, if anything, been imbued with
even greater significance than they possessed in the past, not least through a more rigorous application of those constraints and requirements to cases of minorities and other disaffected groups. The definition of so-called Miranda rights, additional constitutional limitations on searches and seizures, restrictions concerning the utilization of the death penalty, and several other issues have enhanced the protections afforded Americans in this arena. On the other hand, the revival of substantive due process approaches during especially the 1960s and 1970s to support and validate frequently necessary and laudable, though constitutionally questionable, expansions of personal liberties has pushed notions of due process away from process-based criteria for the protection of existing rights toward substantive standards for the review of the content of legislation and the affirmation of “new” liberties and associated rights. The Burger and Rehnquist courts, despite their purported uneasiness with the resurrection of substantive due process arguments, also displayed a willingness to embrace substantive due process in their efforts to protect economic and religious liberties, so no one is free of blame here. Moreover, the reliance on differential standards for the review of legislation, from reasonableness to strict scrutiny, ultimately legitimizes the imposition of substantive criteria as a supplement to or, in some cases, substitute for strictly procedural norms. So we are left with a paradox that lies at the heart of current thinking about due process: the viability of due process depends, to a great extent, on the related viability of what can best be described as due substance of law. In the end, despite some of the manifestly beneficial consequences of such a situation, the doctrinal meaning and relevance of due process have been compromised. Further Reading Caenegem, R. C. van. An Historical Introduction to Western Constitutional Law. Cambridge: Cambridge University Press, 1995; Ely, John Hart. Democracy and Distrust: A Theory of Judicial Review. Cambridge, Mass.: Harvard University Press, 1980; Friedman, Lawrence M. Crime and Punishment in American History. New York: Basic Books, 1993; Gillman, Howard. The Constitution Besieged: The Rise and Demise of Lochner Era Police Powers Jurisprudence. Durham, N.C.: Duke University Press,
1993; Holt, J. C. Magna Carta. Cambridge: Cambridge University Press, 1992; Reid, John Phillip. Constitutional History of the American Revolution: The Authority of Law. Madison: University of Wisconsin Press, 1993. —Tomislav Han
equality In what is undoubtedly one of the boldest and most memorable examples of American political rhetoric, the Declaration of Independence affirms that “all men are created equal.” The importance of equality to American political culture is matched only by a handful of related political principles. As evidenced through its contribution to Americans’ sense of their own exceptionalism, a devotion to equality is considered quintessentially American and is believed to have a distinguished lineage that stretches back to our earliest colonial origins. That famous phrase in the Declaration supposedly acknowledges a belief in egalitarian principles that has changed very little over the centuries. Although some historians have viewed the Declaration’s pronouncement on equality as evidence of a nascent egalitarian sentiment among the founders, it is much more likely that egalitarianism, insofar as it even existed at that time, had nothing to do with Thomas Jefferson’s decision to include this provision in the Declaration. The founders’ notions of equality were not consistent with our ideas about equality of opportunity, rank, and treatment. Rather, those notions were tied to the Aristotelian belief in proportional equality and, in the words of legal scholar John Phillip Reid, to the desire “to secure a right [to equality] already possessed by the British.” Reid has illustrated that, according to contemporary political and legal doctrine, 18th-century Englishmen were “entitled . . . ‘to equal rights to unequal things.’ ” Equality meant that a person had a right only to as much of something as his natural station in life justified. As J. R. Pole has indicated, Anglo-American political and legal discourse “was not based on assumptions of social equality” and “did not rest [British liberties] on anything remotely resembling a society of equals.” Similarly, Reid has noted that “[c]olonial whigs seldom said that all individuals, as individuals, were equal.” In their allusions to equality, revolutionary-
era Americans wished to emphasize “that the American people were equal to the British people.” Accordingly, equality also entailed “a right to an equality of rights”; in the context of controversial argument regarding the extent of parliamentary authority, equality denoted the right “to be taxed as were the British living at home, or Protestants in Ireland, by constitutional consent.” In short, the founders’ conceptions of equality cannot be reconciled with today’s notions of egalitarianism. Instead, those conceptions manifest their Aristotelian roots. According to Aristotle, a well-ordered republic, whose constitution reflects the naturally occurring sociopolitical orders in a political community, remains stable as long as the balance among those orders is not disturbed. Such a balance, Aristotle writes in the Politics, is maintained through the establishment and promotion of political equality and through an understanding of the “three grounds on which men claim an equal share in government[s]: freedom, wealth, and virtue.” From what we know of Aristotelian political science, it may seem inconsistent that a political system based on natural hierarchies would be dedicated to equality, but Aristotle’s conception of equality was fundamentally different from those that would emerge during the 19th and 20th centuries. What Aristotle meant by equality can best be approximated by a term such as proportional equality. He believed that those who are inherently best equipped to govern, due to the wisdom and other intellectual virtues they possess, are, ipso facto, most deserving of the benefits that the pursuit of the common good will attain and are also, and perhaps more significantly, best qualified to utilize those benefits in a politically responsible fashion. This all makes eminent sense if we remember the centrality of the public good in Aristotelian political science; pursuantly, the Politics instructs, “what is equally right is to be considered with reference to the advantage of the state and the common good . . . and with a view to [a] life of virtue.” A well-ordered republic “exists for the sake of noble actions, and not [merely] of living together. Hence they who contribute most to such a society have [the] greate[st] share in it.” Equality as conceptualized by Aristotle is metaphysically warranted. It is an equitable and necessary means of preserving the balance of sociopolitical forces that enhances constitutional stability. A stable
polis successfully maintains a balance within and among the three fundamental sociopolitical spheres, those based on wealth, freedom, and virtue. Although individuals who possess the most developed, or advanced, intrinsic capacities within each sphere should enjoy the greatest benefits, the individuals who possess intellectual virtues should be accorded the largest overall share of control and influence in the polis. A stable polis must have a constitution that reflects the presence of all three elements—freedom, wealth, and virtue—but wise men fulfill the most significant function, for they are the ones that possess the capacities to discover the universal propositions that underpin the metaphysical blueprint for a stable political community. The founding fathers were guided by classical sources in many instances, and their political discourse betrayed conspicuous elements of Aristotelian political science. Although numerous philosophical innovations and historical developments separated the founders from the ancient Athenians, their conceptions of key theoretical concepts were strikingly similar to those of the Athenians. This was definitely the case with the founders’ thinking regarding equality. Consequently, the assumption that Jefferson conceptualized equality according to a modern egalitarian framework we would recognize today is groundless and misleading. Notions of equality have become an indispensable part of our national psyche, so it has been tempting to extrapolate our interpretations backward toward the 18th century, but such extrapolations are substantively meaningless. Furthermore, the men who penned the Declaration and the rest of our founding-era documents were lawyers intimately familiar with the doctrinal relevance of particular terms and the semantic distinctions that differentiated those terms from one another. These men used a specific political lexicon with care, precision, and deliberation. When Jefferson and his peers claimed that “all men are created equal,” they construed those words more narrowly than we have in the generations since they were authored. Every word in this famous phrase was chosen purposively to convey a targeted contemporary political and legal idea. Jefferson intended to confirm the common conviction among revolutionary leaders that men, which denoted white property-owning Englishmen of Protestant heritage, should have complete access to the
English rights conferred on them as Englishmen of a certain social standing. Despite the fact that subsequent generations of Americans have imputed a broadly egalitarian motive to Jefferson’s words, such an interpretation would have been outlandish during Jefferson’s time. The obvious inhumanity and immorality of it aside, the American legal lexicon of the late 18th century did not equate slaves or women with men. The term “man” logically and necessarily denoted “white freeman,” or even “white English Protestant freeman,” and not anything else. Within this context, we must also remind ourselves that the Declaration was—first, foremost, and always—a legal document deliberately crafted to provide a defensible rationale for severing the constitutional relationship with the king. Jefferson and his contemporaries were anxious about the constitutional status of their rights as Englishmen and especially wished to protect the rights denied them by the king and Parliament. Legal writings and political tracts from the latter half of the 18th century support the relatively narrow, classical view of equality outlined above. As viewed by the founding generation, and particularly as pursuant to contemporary Whig-influenced ideologies in the newly established United States, equality was a pivotal concept because it reflected that generation’s belief in the centrality of balance and stability. Equality ensured an equitable distribution of English rights and access to political privileges in proportion to a citizen’s station and rank. In a society that still embraced ideas of paternalism and deference and accepted the ostensible stratification of people according to intrinsic merit, equality was a tool for the preservation of order and justice. Equality was not only necessary but also just. Eventually, equality became something very different, and interpretations of equality revealed an underlying concept that had morphed into an extremely malleable and almost universally applicable political slogan. Veneration of equality and its elevation to an all-encompassing ideal fostered the conviction that America demanded equality in all realms, not simply the political. We cannot be sure why and how the founders’ confined legalistic conception of equality ultimately served as the inspiration for its application beyond their intentions. Nevertheless, we do know that industrialization,
urbanization, the emergence of a broad middle class, and other related factors encouraged a rethinking of inherited social and economic norms and promoted egalitarian ideologies. In addition, the popularization of politics initiated during the Jacksonian era (a time spanning approximately 1820 to 1840) emboldened critics of seemingly ancient notions of proportional equality. Also, the appearance and growth of abolitionism prior to the Civil War was a significant factor, inasmuch as it required a political justification for what many viewed as radical social remedies advocated by abolitionists. Lincoln was not the first, but perhaps the most famous, to invoke Jefferson’s famous words in order to validate the potential freeing of slaves and the eventual abolition of slavery. Ultimately, it was slavery and the Civil War that provided the impetus to legitimize and constitutionally affirm a more expansive conception of equality. The Reconstruction Amendments (which include the Thirteenth Amendment ratified in 1865, the Fourteenth Amendment ratified in 1868, and the Fifteenth Amendment ratified in 1870) finally offered the occasion to include an affirmative reference to equality in the U.S. Constitution. Although most Americans seem to believe that the Constitution inherently incorporated specific stipulations about equality, the equal protection clause of the Fourteenth Amendment was the first such provision. As has been true of other aspects of American history, constitutional change did not inexorably lead to required political reform to ensure the type of equality social reformers such as the abolitionists had promised. Although egalitarian political and economic ideas gained unprecedented currency during the late 19th and early 20th centuries, progress was largely confined to the improvement of the circumstances confronting white (male) workers as a result of mass industrialization. Women and particularly former slaves and their descendants benefited little, if at all, from the ostensibly significant constitutional changes after the Civil War. In fact, the equal protection clause of the Fourteenth Amendment lay dormant for almost 60 years following the adoption and ratification of the Reconstruction Amendments. Since then, most notably as exemplified by the many progressive opinions of the Warren Court, both the equal protection clause specifically and the notion
of political equality generally have been invoked in numerous cases dealing with civil rights and civil liberties. Equality has been a standard for the desegregation of schools and public facilities, the protection of voting rights, the equitable treatment of women and minorities in the workplace, and countless other accomplishments. As a result, the United States appears to be more egalitarian than at any point in its history. Equality, at least according to its proponents, now entails equal access, treatment, opportunity, and rank. It is as much an economic and social concept as a political one. However, the promise of equality still eludes large numbers of Americans. A semipermanent group of dislocated Americans, including the poor, disabled, and underprivileged are structurally prevented from pursuing the benefits of equality. Members of particular ethnic or religious groups are habitually denied access to the institutions and networks reserved for other Americans. Women, despite tangible economic and political gains over the last several decades, are rarely considered equal partners in any endeavor by their male counterparts, while millions of children in the United States go hungry every day. None of this means that equality is not an ideal worth pursuing or that the United States has failed in its pursuit of it. But equality is an elusive ideal, one that is as difficult to define as it is to achieve. Aside from the obvious structural and situational inequities in the American system that militate against equality, American visions of equality are often bipolar. Ironically, we continually tout the invaluable role, both as an unapproachable ideal and achievable goal, equality fulfills in our society, but we fail to realize that the system itself is intrinsically unequal. We live in a pluralist democracy that bestows economic and political advantages on groups whose continued survival partly depends on their abilities to deny equal access to their competitors. More to the point, the American polity has evolved from a republic governed by intellectual elites to a pluralist democracy in which privileged economic and political minorities enjoy influence that far outweighs their numerical strength. From an economic perspective, Americans have embraced the intrinsically unegalitarian characteristics of free enterprise and have also accepted the ostensibly inevitable dominance of large corporations
in the American economy. In other words, the American political and economic systems are both structurally biased against egalitarianism and favor outcomes that frequently maximize inequality. So, the pivotal question is whether equality is a realistic goal for the United States, and not whether it actually exists. And, if equality continues to serve as a desirable political, economic, or social objective for Americans, then the system it endeavors to animate may need to be reformed. Further Reading Aristotle. The Complete Works of Aristotle. Edited by Jonathan Barnes. 2 vols. Princeton, N.J.: Princeton University Press, 1984; Horwitz, Morton J. The Transformation of American Law, 1870–1960: The Crisis of Legal Orthodoxy. Oxford: Oxford University Press, 1992; Nedelsky, Jennifer. Private Property and the Limits of American Constitutionalism: The Madisonian Framework and Its Legacy. Chicago: University of Chicago Press, 1990; Pole, J. R. The Pursuit of Equality in American History. Berkeley: University of California Press, 1993; Reid, John Phillip. Constitutional History of the American Revolution: The Authority of Law. Madison: University of Wisconsin Press, 1993; Reid, John Phillip. Constitutional History of the American Revolution: The Authority to Legislate. Madison: University of Wisconsin Press, 1991; Reid, John Phillip. Constitutional History of the American Revolution: The Authority of Rights. Madison: University of Wisconsin Press, 1986; Reid, John Phillip. Constitutional History of the American Revolution: The Authority to Tax. Madison: University of Wisconsin Press, 1987. —Tomislav Han
equal protection The concept that all individuals are equal before the law is one of the core philosophical foundations of the American democratic system of government. As early as 1776, this ideal was expressed in the Declaration of Independence, which stated, “We hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the pursuit of Happiness.” This ideal is also expressed in the equal protection clause
of the Fourteenth Amendment, ratified in 1868, which declares that no state shall "deny to any person within its jurisdiction the equal protection of the laws." This basically means that states are prohibited from denying any person or class of persons the same protection and rights that the law extends to other similarly situated persons or classes of persons. The equal protection clause does not guarantee equality among individuals or classes, but only the equal application of the laws. By denying states the ability to discriminate, the equal protection clause is crucial to the protection of civil rights. During the years following the Civil War (1861–65), many members of Congress were fearful that discriminatory laws against former slaves would be passed in the South. Even though the Thirteenth Amendment, ratified in 1865, had outlawed slavery, many southern states had adopted Black Codes, which made it difficult for blacks to own property or enter into contracts, and established criminal laws with much harsher penalties than for whites. As a result, Congress passed the Civil Rights Act of 1866, but many feared that enforcement of the law at the state level would be difficult. Consequently, the Fourteenth Amendment was introduced, which included the equal protection clause. While most white southerners opposed the amendment, the Republican majority in the Congress (representing the northern states) made ratification a requirement for southern states' reentry into the union. The equal protection clause applies to anyone within a state's jurisdiction and not just former slaves. For an individual to legitimately assert a claim that the clause has been violated, the aggrieved party must prove some form of unequal treatment or discrimination, and must prove that there had been state or local government action taken that resulted in the discrimination. Much of modern equal protection jurisprudence originated in a footnote in the United States Supreme Court's decision in United States v. Carolene Products Co. (1938). This case, which dealt with the commerce clause and the role that Congress would play in regulating economic activities, led Associate Justice Harlan Fiske Stone to write in his opinion, "Prejudice against discrete and insular minorities may be a special condition . . . which may call for a correspondingly more searching judicial inquiry."
[Photograph: The Mochida family awaiting the evacuation bus to an internment camp, 1942. (National Archives)]
As a
result, the U.S. Supreme Court would develop different levels of judicial scrutiny to use while examining the constitutionality of legislation dealing with race and gender. There are now three tiers of review for determining whether laws or other public policies that are challenged in court violate the equal protection clause. The traditional test used to decide discrimination cases is the rational basis test, which basically asks, is the challenged discrimination rational, or is it arbitrary and capricious? The Court must decide if the state had acted reasonably to achieve a legitimate government objective. Under this test, the burden is on the party challenging the policy to show that its purpose is illegitimate and/or that the means employed are not rationally related to the achievement of the government’s objective. This level of scrutiny by the courts is used most often when dealing with economic interests. For example, this might include a state requirement to have a license to practice medicine, which is considered to be in the public’s interest. The second test is the suspect class, or strict scrutiny test, which is used when the state discriminates on the basis of a criterion that the Supreme Court has declared to be inherently suspect or when there is a claim that a fundamental right has been violated. Racial criteria are considered suspect since they are mostly arbitrary. The law must be the least restrictive means available to achieve a compelling state interest. The Supreme Court employs strict scrutiny in judging
policies that discriminate on the basis of race, religion, or national origin, classifications that are deemed to be "inherently suspect." In such cases the burden is on the government to show that its challenged policy is narrowly tailored to the achievement of a compelling governmental interest. For example, in Korematsu v. United States (1944), the Court embarked on a new approach to the equal protection clause by stating that all restrictions of civil rights for a single group may be immediately suspect, although not all restrictions are unconstitutional. This case dealt with the detainment of Japanese Americans during World War II, and while hardly a victory for civil rights, it did introduce the suspect classification doctrine. An intermediate or heightened scrutiny test is applied for sex discrimination cases. The law must be substantially related to the achievement of an important governmental objective. However, discrimination by private organizations is not actionable under the Fourteenth Amendment but can be challenged under a variety of other state and federal laws. The equal protection clause has a long history with the Civil Rights movement in the United States. In Plessy v. Ferguson (1896), the Court ruled 7-1 that separate but equal was constitutional, providing a rationale for government-mandated segregation. Associate Justice John Marshall Harlan, in his famous dissent, argued that "our Constitution is colorblind, and neither knows nor tolerates classes among citizens." That ruling stood until the Supreme Court unanimously ruled in Brown v. Board of Education of Topeka (1954), when the Court reversed its ruling in Plessy, declaring that segregated public schools in Kansas, South Carolina, Delaware, and Virginia were not equal. The case was brought by the NAACP, and argued before the Court by eventual Supreme Court Associate Justice Thurgood Marshall (who would be the first black to serve on the nation's highest court). In a companion case, Bolling v. Sharpe, the Court held that the operation of segregated schools by the District of Columbia violated the due process clause of the Fifth Amendment. The Court recognized an equal protection component in the Fifth Amendment due process requirement, indicating that uniform antidiscrimination mandates were to be applied to the federal government as well as the states. Because the Fourteenth Amendment applied only to the states and because the Bill of Rights
contained no explicit equal protection provision, it would otherwise have followed that the national government was under no constitutional obligation to provide equal protection of the laws. Hence the Court's finding of an equal protection component in the due process clause of the Fifth Amendment, which states that "no person shall be deprived of life, liberty, or property without due process of law." The equal protection clause has also been applied by the U.S. Supreme Court to settle disputes involving voting. A long debate has existed in the United States over the notion of "one person, one vote." Reapportionment (to maintain relatively equal numbers of citizens in each congressional or state district) did not always occur in each state between 1900 and 1950. Urban and rural districts were often unbalanced, which resulted in malapportionment. It was not uncommon for urban districts to be 10 times as populous as rural districts. Originally, the Supreme Court had refused to hear a case dealing with this issue in 1946 (Colegrove v. Green) by invoking the political questions doctrine (which means that the Supreme Court prefers to defer the controversy to a more appropriate branch of government to resolve). Until 1962, voters in urban districts continued to seek relief from federal courts, arguing that the equal protection clause of the Fourteenth Amendment was being violated by unequal districts. In Baker v. Carr (1962), the Court ruled that the apportionment of a state legislature was a justiciable question. Then, in Reynolds v. Sims (1964), the Court ruled that states had to follow the principle of "one person, one vote," and that the equal protection clause required states to make an honest and good faith effort to construct districts equally at the state level. In addition, in Wesberry v. Sanders (1964), the Court established that federal courts also have jurisdiction to enforce the constitutional requirement that representation at the federal level be based on equal-population districts. More recently, the equal protection clause took center stage in the disputed 2000 presidential election between Republican George W. Bush and Democrat Al Gore. The controversy stemmed from the different standards used to count ballots in the state of Florida during the necessary recount to determine which candidate had won the popular vote in the state (and thus would win the 25 electoral votes
that would decide the presidential contest in the electoral college). The U.S. Supreme Court decided that the different standards used to count the ballots violated the equal protection clause, a conclusion shared by seven justices. The decision was controversial in that, by a 5-4 vote on the remedy, the majority declared that in spite of the equal protection violation, there was not enough time to conduct a recount of the vote. The decision also caused controversy in that the five more conservative justices on the Court relied on unprecedented constitutional arguments (and mostly out of line with their own judicial philosophies regarding states' rights) to end the recount, which allowed Bush to claim victory. Further Reading Epstein, Lee, and Thomas G. Walker. Constitutional Law for a Changing America: Institutional Powers and Constraints. 5th ed. Washington, D.C.: Congressional Quarterly Press, 2004; Fisher, Louis. American Constitutional Law. 5th ed. Durham, N.C.: Carolina Academic Press, 2003; Gillman, Howard. The Votes That Counted: How the Court Decided the 2000 Presidential Election. Chicago: University of Chicago Press, 2001; Hasen, Richard L. The Supreme Court and Election Law: Judging Equality from Baker v. Carr to Bush v. Gore. New York: New York University Press, 2003; Lee, Francis Graham. Equal Protection: Rights and Liberties under the Law. Santa Barbara, Calif.: ABC-CLIO, 2003; O'Brien, David M. Constitutional Law and Politics. Vol. 2, Civil Rights and Civil Liberties. 5th ed. New York: W.W. Norton, 2003; Pole, J. R. The Pursuit of Equality in American History. Berkeley: University of California Press, 1978; Stephens, Otis H., Jr., and John M. Scheb II. American Constitutional Law. 3rd ed. Belmont, Calif.: Thompson, 2003. —Lori Cox Han
freedom Freedom means to be unrestrained or to be able to do with one’s life as one sees fit. In political thought, the extent to which humans are free or should be free has long been debated. The debate centers on the question of the extent to which each individual who lives in the polity is entitled to do as he or she sees fit. For example, if all are equally free then how can one
be assured that the freedom that others enjoy will not impinge on one's own freedom? In Western and American political thought, this question has been one of the driving questions since the Enlightenment. Most theorists have attempted to answer the question by devising various formulations of balancing state, group, and individual interests. In American politics, there is a long history and debate about the nature of freedom. The French thinker Alexis de Tocqueville (1805–1859) once wrote, "The great advantage of the American is that he has arrived at a state of democracy without having to endure a democratic revolution and that he is born free without having to become so." Americans see freedom, the ability to pursue one's path, as something with which they are born. The value and meaning of freedom is something that has been debated throughout the history of the republic but has always remained a central part of political debates. The relationship between the power of the state and the individual has long been a concern of American political thinkers. For example, the early debates between the federalists, advocates of a strong central government, and the antifederalists, advocates of states' rights, centered on the degree to which a strong central government might impinge on or interfere with the freedom of individual citizens. However, there was seemingly a set of assumptions that most of the early political thinkers agreed upon, assumptions often seen as articulated in the founding documents, though the extent of agreement and meaning of these documents was highly debated then and still is in current scholarship. In the Declaration of Independence, Thomas Jefferson identifies "certain unalienable rights." Among these rights are life, liberty, and the pursuit of happiness. Most scholars argue that these rights are predicated upon some idea of freedom, meaning one's ability to enjoy one's life, to be free from coercion, and that the ability that one has to pursue happiness should not be impeded by the government or others. There is a question as to whether the government should intercede in the lives of its citizens to help promote these unalienable rights. For example, if a person has an unalienable right to liberty and is constrained in his or her ability to do as he or she sees fit in order to achieve happiness
because he or she lacks financial resources, should the government be obligated to help that person? In contemporary politics, the philosophical question that underlies many policy debates is whether implementing a policy advances freedom or ultimately detracts from it. In answering the previous question, if it is determined that the government should help that person because he or she lacks financial resources, the government will have to take money from someone else and give it to that person. Some scholars, such as John Rawls (1921–2002), in his A Theory of Justice (1971), see this as appropriate because, as Rawls argues, all of us would require a minimum level of financial wherewithal in order to promote justice as fairness, which leads to freedom. Other thinkers, such as Robert Nozick (1938–2002), in his work Anarchy, State, and Utopia (1974), argue that having the government take anything from its citizens without consent is stealing and severely limits freedom. There are at least three ways to conceive of freedom that are all related to the possibilities with which social groups are generally concerned: (1) metaphysical freedom, which is more commonly known as free will, (2) ethical freedom, and (3) political freedom. Political theorists have identified and debated two concepts of political freedom that are related to positive and negative liberty. Negative liberty refers to freedom that an individual has in not being subjected to the authority of another person or a political organization. In short, negative liberty means to be free from coercion. Positive liberty refers to the ability that one has to achieve certain ends or one's ability to fulfill one's potential. Traditional philosophy has usually posited the necessity of free will as a prerequisite in making meaningful choices. Likewise, theorists have usually conceived of humans as having free will. Thinkers have seen free will as critical in order to account for agency and accountability within various social arrangements. In other words, because of free will, people can always be held responsible for their actions and appropriately rewarded or punished. Recently, challenges from cognitive scientists and evolutionary theorists have raised some doubt as to the nature of free will and whether something like a soul even exists. These questions have serious implications for how we conceive of the polity. For example, if one's
decisions and actions are somehow determined, contingent, or radically influenced by one's surroundings or the decisions or actions of others, some theorists propose that we will have to radically rethink how we dole out social rewards and punishments. Some theorists, such as Daniel Dennett (1942–  ), have argued that evolutionary theory, once properly understood, will radically alter how we conceive of freedom and the citizen's relationship to the polity. The most important thing to understand about this "new" way of looking at freedom is that it radically overturns the traditional ideas of individual agents acting according to free will and, thus, being solely responsible for their actions. If thinkers like Dennett are correct, it is not clear what impact this will have on political organization and the construction of laws. The concept of ethical freedom is closely related to the concept of free will but it brackets the deeper metaphysical questions—the nature of the soul—in order to ask the more practical question of how one ought to act. The nature of the soul and how one "ought" to behave has long been a concern of political theorists. For example, Plato argued that a virtuous life could only be lived properly in a state with the correct organization and with citizens who have a correct understanding of the soul. Aristotle was quite concerned with making sure that we only give praise to good actions that are voluntary or that are freely chosen. In other words, the only actions that can be considered just actions are those that are committed in a state of freedom. Utilitarians argue that people are free to choose as they see fit and ethical freedom consists of making sure that the greatest good for the greatest number of people is attained. Those who are concerned with freedom in this sense of the word look at the various possible outcomes among the several choices that an actor can take and then try to make judgments as to what the best possible course would be. The theories of utilitarianism and pragmatism are closely related to this concept of freedom. A pragmatist argues that one cannot know the ultimate good of any one action. One can only know the possible outcomes of one's actions. Therefore, one ought to make decisions based on what is the most desirable outcome to the actor and not concerns about final ends, which are merely products of one's desire anyway. In contemporary political usage there are several ways in which the term freedom is used, such as politi-
cal freedom, freedom of speech, economic freedom, freedom of thought, freedom from unjust government intrusion, individual freedom, freedom of expression, and freedom of religion. All of these concepts of freedom are closely aligned with the theory of political liberalism. At the very least, political liberalism posits that there are limits to government power such that individual freedoms cannot be abridged. However, in this contemporary sense of the use of freedom, it seems as if freedom deals more with limitations than it does with all the possible conceptions or manifestations of the term. For example, we have come to recognize that, although we have freedom of speech, that speech can be limited. We cannot, to use a well-known example, yell "fire" in a crowded theater. However, all the possible ways in which freedom can be expressed are not expressly elucidated. Thus, we understand, in a political community, that expressions of freedom are often curtailed by the rights of others to be free from potential harm. Further Reading Aristotle. Nicomachean Ethics. Cambridge: Cambridge University Press, 1909; Dennett, Daniel. Freedom Evolves. London: Allen Lane, 2003; Mill, John Stuart. On Liberty. London: Oxford University Press, 1963. —Wayne Le Cheminant
freedom of association The First Amendment does not specifically state that citizens have the right of association. However, the concept has evolved through the First Amendment's guarantee of a right to peaceably assemble and to petition the government. The First Amendment's guarantee of freedom of assembly—"the right of the people peaceably to assemble"—means that citizens have the right to gather in public to protest and demonstrate, to march and carry signs, or to otherwise express their views in a nonviolent manner. Citizens can also join and associate with groups and organizations without interference from the government. Regarding the First Amendment right to "petition the Government for a redress of grievances," this means citizens have the right to appeal to the government in favor of or against policies that impact them.
This includes the right to lobby Congress or other legislative bodies, or to gather signatures in support of a cause. The freedom of association protects a citizen's membership in any organization that is not involved in criminal activity. This fundamental right has its origins in the opposition of the American colonists during the 17th and 18th centuries to the British Crown's attempts to suppress a variety of political associations. This was also an important concept debated at the time of the adoption of the Bill of Rights. In The Rights of Man (1791), Thomas Paine wrote that "The end of all political associations is, the preservation of the rights of man, which rights are liberty, property, and security; that the nation is the source of all sovereignty derived from it." The United States Supreme Court first recognized the right to peacefully assemble and to freely associate with groups during the first half of the 20th century, as long as those groups were not subversive to government or advocating violence. In De Jonge v. State of Oregon (1937), the Court stated "the right to peaceable assembly is a right cognate to those of free speech and free press and is equally fundamental." The ruling reversed the conviction of Dirk De Jonge, who had been arrested for teaching a group of people about communism. In another important ruling, the Supreme Court recognized a First Amendment right of association in National Association for the Advancement of Colored People v. Alabama (1958), when it unanimously supported the right of the NAACP to not turn over its membership lists to the Alabama state attorney general. This issue arose due to the legal battle that the NAACP was waging to desegregate the South. Freedom of assembly has, throughout the nation's history, been granted to a wide variety of groups with diverse viewpoints, as long as the assembly is peaceful rather than violent. This has provided protection to civil rights advocates, antiwar demonstrators, labor unions, interest groups, political parties, and even the Ku Klux Klan, in allowing them to organize and support their causes. Regarding public protests or other acts, there must be a "clear and present danger" or an "imminent incitement of lawlessness" before the government has a right to restrict the right of free association and assembly. Government limitations on these types of activities
must be "content neutral," which means that activities cannot be banned due to the viewpoint expressed. However, "time, place and manner" restrictions can be imposed on public activities as long as they represent legitimate public interests, such as preventing traffic problems or securing the safety of citizens. Unless there is a serious danger of imminent harm, government officials cannot restrict the right of assembly, even if they do not like the message being espoused by the particular group. Several high-profile cases regarding freedom of assembly have involved groups espousing intolerant messages towards racial and ethnic groups. In 1977, a federal district court affirmed the right of the National Socialist Party of America—a neo-Nazi group—to march in Skokie, Illinois, a Chicago suburb with a large Jewish population that included many Holocaust survivors. On April 29, 1977, the Circuit Court of Cook County entered an injunction against the National Socialist Party, which prohibited them from performing certain actions within the village of Skokie, including "[m]arching, walking or parading in the uniform of the National Socialist Party of America; [m]arching, walking or parading or otherwise displaying the swastika on or off their person; [d]istributing pamphlets or displaying any materials which incite or promote hatred against persons of Jewish faith or ancestry or hatred against persons of any faith or ancestry, race or religion." The group challenged the injunction in state and federal courts. Initially, the Illinois Supreme Court, in a 6-to-1 ruling, held that displaying swastikas was a form of symbolic speech protected by the First Amendment, and that prior restraint of the event based on the "fighting words" doctrine developed by the Supreme Court in Chaplinsky v. New Hampshire (1942) was not possible since advance notice of the march gave citizens the option of avoiding face-to-face confrontations. One month later, a federal district judge ruled against the Village of Skokie, stating that the ordinances were unconstitutional. The judge held that not only did the ordinances censor certain kinds of speech, they also provided for censorship on the basis of what might be said as opposed to what had actually been said. The judge stated, "The ability of American society to tolerate the advocacy even of the hateful doctrines espoused by the plaintiffs without abandoning its commitment to freedom of speech
and assembly is perhaps the best protection we have against the establishment of any Nazi-type regime in this country.” This decision was upheld by a U.S. Court of Appeals, and that ruling stood when the U.S. Supreme Court declined to hear the case. A similar case occurred in 1998, when the Ku Klux Klan was also protected during a march in Jasper, Texas, the town where earlier that year a black man named James Byrd had been dragged to death behind a pickup truck by three white men (who were later convicted of his killing). In addition to public protests, the freedom of association gives people the right to discriminate by choosing with whom they wish to associate and in what context. However, this right is not absolute. In cases involving freedom of association, the U.S. Supreme Court has stated that requiring loyalty oaths for public employees or firing public employees for their political beliefs or organizational memberships is unconstitutional. However, federal employees are prohibited from active participation in political campaigns. Also, the Court has found no First Amendment protection for many private organizations, like the Jaycee or Rotary clubs, who seek to discriminate against women and racial minorities. In California Democratic Party v. Jones (2000), the U.S. Supreme Court struck down California’s blanket primary law in which all registered voters in a primary could vote for any candidate of any party. Political parties in California challenged the law based upon their rights of free association. That same year, the U.S. Supreme Court ruled on another high-profile case regarding freedom of association in Boy Scouts of America v. Dale (2000). In the 5-4 ruling, the Court stated that homosexuals could be excluded from membership in the Boy Scouts. The Boy Scouts, a private, not-for-profit organization engaged in instilling its system of values in young people, asserted that homosexual conduct is inconsistent with those values. James Dale had been a member of the Boy Scouts since 1978, joining when he was eight years old. He remained a member throughout high school and as an adult, and held the position of assistant scoutmaster of a New Jersey troop. His membership was revoked when the Boy Scouts learned that Dale was an avowed homosexual and gay rights activist. Dale then filed suit in the New Jersey Superior Court, alleging that the Boy Scouts
had violated the state statute prohibiting discrimination on the basis of sexual orientation in places of public accommodation. In the case, the U.S. Supreme Court considered whether the Boy Scouts had a First Amendment right to defy a New Jersey state law barring discrimination based on sexual orientation. In 1999, the New Jersey Supreme Court had ruled in favor of Dale. But the majority opinion, written by Chief Justice William Rehnquist, overturned the previous ruling, stating that the Boy Scouts’ right to express their views against lesbians and gay men would be hampered if the organization was forced to admit openly gay people as leaders: “Forcing a group to accept certain members may impair the ability of the group to express those views, and only those views, that it intends to express. The forced inclusion of an unwanted person in a group infringes the group’s freedom of expressive association if the presence of that person affects in a significant way the group’s ability to advocate public or private viewpoints.” Further Reading Abernathy, M. Glenn. The Right of Assembly and Association. Columbia: University of South Carolina Press, 1981; Brannen, Daniel E., and Richard Clay Hanes. Supreme Court Drama: Cases That Changed America. Detroit: U.X.L, 2001; Gutmann, Amy, ed. Freedom of Association. Princeton, N.J.: Princeton University Press, 1998; Hamlin, David. The Nazi/ Skokie Conflict: A Civil Liberties Battle. Boston: Beacon Press, 1980; Murphy, Paul L., ed. The Bill of Rights and American Legal History. New York: Garland Publishers, 1990. —Lori Cox Han
freedom of religion In addition to freedom of speech and freedom of the press, the First Amendment also states that Congress shall make no law respecting an establishment of religion (which means that the government does not favor one religion over another), or prohibiting the free exercise thereof (which means that there will be no government interference in religious practices). However, just like freedom of speech and freedom of the press, freedom of religion is not an absolute guarantee. Together, the two clauses guarantee freedom from and of religion; while the establish-
ment clause suggests the principle of the separation of government from religion, the free exercise clause suggests a voluntary approach for citizens in choosing a religion (or none at all). Yet, there is also an inherent tension between the two clauses as they often come into conflict, particularly when upholding the free exercise clause. Examples of this tension can be found in issues such as exemptions from the draft for conscientious objection to killing and war, exceptions in public education laws to allow Amish children to stop attending school past the eighth grade, or striking down a state law requiring that creationism be taught in public school science courses, all of which can be viewed as promoting one religion over another (which by most interpretations the First Amendment prohibits). Problems arise in this area of constitutional law due to the unclear intent of the authors of the Bill of Rights. When prohibiting laws respecting an establishment of religion, did they intend, in the words of Thomas Jefferson, to raise a wall of separation between church and state? Called the separationist position, it is not clear if the authors intended for the establishment clause to prohibit completely any kind of state support, direct or indirect, of religion, or whether they merely intended to forbid the creation of a state religion, as was a common European practice. The latter approach is known as the accommodationist, or nonpreferentialist, position, and supporters believe that the authors did not envision a nation that was hostile to religion in general, only one that did not favor one religion over another. Many believe that America's colonial heritage represented a struggle to break free from religious conformity in Europe, particularly England. However, despite the fact that many colonists had come to America to escape religious persecution, there was great discrimination in some colonies against Roman Catholics, Quakers, Jews, or "dissenting" Protestants. Massachusetts established the Congregational Church and taxed other churches, including those of the Quakers and Baptists. Five southern states established the Church of England. Most of the colonies had official, or established, religions, and most required loyalty to that religion to hold office or to vote (the state of Maryland required officeholders to profess a belief in God until 1961). Article 6 of the U.S. Constitution states that "no religious Test shall ever be
required as a Qualification to any Office or public Trust under the United States." This is an important guarantee of religious freedom, since most states did have this requirement at the time. Like many other areas of constitutional law, freedom of religion jurisprudence, especially throughout the 20th century, was confusing and conflicting at times. During the past century, the United States Supreme Court has largely constructed jurisprudence in this area that embraces the notion of privatization of religion, a view notably espoused by John Locke. During the founding era, this view was supported by Thomas Jefferson and Thomas Paine, who closely linked religious toleration with notions of free expression. The establishment clause has been interpreted to mean that government may not favor one religion over another, as in England, where there is an official state religion. This is where the concept of separation of church and state comes from, that a "wall of separation" is maintained between church and state, even if the government support is nondenominational. Some exceptions have been made by the Court, like allowing state governments to provide secular textbooks to religious schools, because that is not viewed as an excessive entanglement with religion. In Everson v. Board of Education of Ewing Township (1947), the establishment clause was ruled applicable to the states. In this case, Associate Justice Hugo Black wrote the high-wall theory of the separation of government from religion into constitutional law but upheld the reimbursement of the costs of transporting children to private religious schools. The constitutionality of a state program that used public funds to provide transportation of students to parochial schools was questioned. The plaintiffs argued that public aid to these students constituted aid to religion. The question in the case arose as to how to distinguish between acceptable and unacceptable cooperation between the state and religion. In this case, the Court upheld the cooperation on the grounds that its primary purpose was secular and intended to benefit the schoolchildren (a doctrinal approach to establishment issues called the "child benefit theory"). In McCollum v. Illinois (1948), the Court struck down a program permitting public school students to attend weekly religious classes on school premises and ruled that, unlike Everson, the primary purpose
of the released time program was not secular, and children were not the primary beneficiaries of the aid. In Zorach v. Clauson (1952), the Court upheld a released time program for students who attended religious programs off school premises, saying that Americans are a "religious people whose institutions presuppose a Supreme Being. . . . When the state encourages religious instruction or cooperates with religious authorities by adjusting the schedule of public events to sectarian needs, it follows the best of our traditions." These cases show the inconsistencies in the Court's rulings on establishment issues, and raise the question of how best to articulate a clear and consistent test for distinguishing between acceptable accommodations and aid to religion and unconstitutional establishment. Many of the cases stemming from the establishment clause have dealt with issues involving private and public schools. In Lemon v. Kurtzman (1971), Chief Justice Warren Burger wrote a three-part test that attempts to clarify law in this area. The issue in Lemon was a state program that contributed to the salaries of teachers in private schools who taught secular subjects. The test that emerged, which this case failed, suggested that governmental aid would pass constitutional muster if it had a valid secular purpose, its primary effect was neither to advance nor inhibit religion, and it did not lead to "excessive government entanglement" with religion. School prayer is another important constitutional issue when considering the establishment clause. In Engel v. Vitale (1962), the U.S. Supreme Court ruled that the reciting of prayers in public schools was unconstitutional. This has remained one of the most politically controversial decisions by the Supreme Court throughout its history. The case involved a practice in New York schools of reciting a short prayer at the start of the school day. The majority ruled that it was not the business of the government to compose state-sponsored prayers. One year later, in School District of Abington Township v. Schempp (1963), the Supreme Court also struck down Bible reading and the recitation of the Lord's Prayer in class. In both cases, the Supreme Court ruled that a secular purpose must be present for the practice to be acceptable, which means that the Constitution does not preclude the study of religion or the Bible "when presented objectively as part of a secular program of
education." Many conservative politicians and other interest groups have fought hard since then to overturn this ruling. Many schools ignored the Engel ruling, as obligatory prayer continued in schools for many years and still continues in some schools in the South. Ronald Reagan brought the issue to the political forefront while president, and almost every session of Congress since 1962 has witnessed an attempt to amend the Constitution to allow school prayer. Another ruling, Wallace v. Jaffree (1985), struck down a mandatory "moment of silence" in Alabama, suggesting that it would only be constitutional if it were a neutral "moment of silence." In this case, Associate Justice William Rehnquist, in dissent, articulated his belief that the separationist understanding about a wall of separation is a mistake, and that government does not have to remain neutral in aid to religion, only nondiscriminatory. In Lee v. Weisman (1992), the Court reaffirmed the Engel ruling by extending the ban on state-sponsored prayer to graduation ceremonies, even if the prayer was nondenominational. Other issues have also been considered by the U.S. Supreme Court concerning the establishment clause. As early as the Scopes "monkey trial" in 1925, school curriculum and the teaching of evolution versus creationism have been controversial issues. In recent years, a third theory has emerged in the political debate. Known as intelligent design, and suggesting that the complexity of the universe can only be explained through a supernatural intervention in the origins of life, the issue over whether to teach this theory in public schools became a hot political topic in 2005 with the election of school board members in numerous states. The core issue in this debate involves whether or not a majority of a community should be able to decide such an issue, or if the responsibility lies with the individual teacher, as well as whether or not all theories should be presented, or if the teaching of creationism actually promotes religion. The U.S. Supreme Court struck down the practice of teaching creationism to balance the teaching of evolution in Louisiana public schools in 1987 in Edwards v. Aguillard. Other important constitutional debates in recent years regarding the establishment clause have included the use of public funds for private schools through tax credits or school vouchers; whether or not a city whose residents are predominantly Christian can be permitted to erect a nativity scene on city
property during the Christmas holidays; and whether or not public school teachers should be allowed to lead students in a nondenominational prayer at the beginning of the school day. The free exercise clause restricts government from interfering with anyone's religious practices. This is based on the Constitution's commitment to individual autonomy and the influence in the writing of the Constitution of classical liberal beliefs and the demand for tolerance. Basically, the free exercise clause means that people are free to believe as they want, but cannot always act as they want upon those beliefs. But the Supreme Court has not always been strict with this guideline in some cases where a compelling government reason to interfere exists. The Supreme Court ruled in Cantwell v. Connecticut (1940) that the First Amendment "embraces two concepts—freedom to believe and freedom to act. The first is absolute, but in the nature of things, the second cannot be. Conduct remains subject to regulation of society." In this case, the free exercise clause was upheld as applicable to the states due to the Fourteenth Amendment. Also in 1940, the Supreme Court dealt with the issue of to what extent public schools should be required to respect the religious beliefs and practices of schoolchildren and their parents. In Minersville v. Gobitis (1940), the Court upheld the practice of requiring a flag salute in classrooms as an exercise of the police power to promote patriotism among the students. The Gobitis children had been expelled from public schools when they refused, based on the religious beliefs of the Jehovah's Witnesses, to salute the flag. The majority on the Court concluded that the purpose of the law was secular and therefore constitutional. The Court would overrule this decision three years later in West Virginia State Board of Education v. Barnette (1943) when it overturned a compulsory flag salute statute as unconstitutional. Prominent free exercise cases before the Supreme Court in recent decades have dealt with a variety of questions. For example, at what point should the community's interest in public order, be it protecting children, animals, or a shared moral sense, restrict an individual's freedom of belief, and under what conditions, if any, may the community regulate or prohibit the religious beliefs and practices of individual citizens or religious groups? More specifically, should a
society be permitted to prohibit polygamy if it offends the moral sensibilities of a majority? These have not been easy questions for the U.S. Supreme Court to resolve, and its rulings throughout the years on this complex issue have not always been consistent. In an early case during the 19th century, the Supreme Court ruled against the Mormon Church on this issue, saying that allowing polygamy would grant the church a special exception from an existing general law. In Reynolds v. United States (1879), the Court upheld a congressional statute banning polygamy over the objections of the Mormon Church. In Wisconsin v. Yoder (1972), the Court ruled that although secular, the law requiring children to attend school until age 16 would have a profoundly negative impact on the Amish community. Therefore, the Court ruled, Amish citizens are exempt from sending their children to school beyond the eighth grade. The Court has also ruled on whether or not the use of illegal drugs should be allowed if part of a religious ceremony. In Employment Division, Department of Human Resources of Oregon v. Smith (1990), the Court said that the free exercise clause does not prohibit the application of Oregon's drug laws forbidding the use of peyote during services of the Native American Church in Oregon. The Court in recent years has also ruled on the sacrifice of animals as part of religious ceremonies. In Church of the Lukumi Babalu Aye v. City of Hialeah (1993), the Court struck down a ban on animal sacrifice that it considered impermissibly religiously motivated and an infringement on the free exercise of religion. Freedom of religion continues to be a divisive political issue, especially in recent years, as some members of the U.S. Supreme Court have signaled a desire to move away from established doctrine about the high wall of separation. Conservative politicians in recent years also often talk about the need for prayer in school and reestablishing religion within society. Many Republicans running for Congress in 1994, in an attempt to appeal to conservative voters, promised changes in this area. In addition, the election in 2000 of President George W. Bush, a self-proclaimed born-again Christian, has helped to move the issue of religion in public life to the forefront of the political debate. However, the Supreme Court struck down the Religious Freedom Restoration Act of 1993 in City of Boerne v. Flores (1997), which had
been passed nearly unanimously by a Democratic-controlled Congress and signed into law by President Bill Clinton, saying that Congress did not have the right to redefine the meaning of the Constitution through a statute. The act, passed in response to the Court's narrowing of free exercise protections in the Smith peyote case, was also in part a political attempt by members of both parties and Clinton to court religious voters, especially in the South. The Boerne case itself dealt with a city that wanted to stop a church from enlarging its building; when the church did not prevail locally, it sued under the act, and the Court used the dispute to strike the statute down as applied to the states. Clearly, various political issues involving freedom of religion and the interpretation of the First Amendment, and whether or not the authors intended to create a high wall of separation between church and state, are far from resolved. Further Reading Epstein, Lee, and Thomas G. Walker. Constitutional Law for a Changing America: Institutional Powers and Constraints. 5th ed. Washington, D.C.: Congressional Quarterly Press, 2004; Fisher, Louis. American Constitutional Law. 5th ed. Durham, N.C.: Carolina Academic Press, 2003; Hammond, Phillip E. With Liberty For All: Freedom of Religion in the United States. Louisville, Ky.: Westminster John Knox Press, 1998; O'Brien, David M. Constitutional Law and Politics. Vol. 2, Civil Rights and Civil Liberties. 5th ed. New York: W.W. Norton, 2003; Segers, Mary C., and Ted G. Jelen. A Wall of Separation? Debating the Public Role of Religion. Lanham, Md.: Rowman & Littlefield Publishers, 1998; Stephens, Otis H., Jr., and John M. Scheb II. American Constitutional Law. 3rd ed. Belmont, Calif.: Thompson, 2003; Witte, John, Jr. Religion and the American Constitutional Experiment: Essential Rights and Liberties. Boulder, Colo.: Westview Press, 2006. —Lori Cox Han
freedom of speech
Citizens have struggled to win the right to speak freely about political, social, and religious issues since the time of ancient Greece. The notion of freedom of speech and its role in democratic forms of government have been tied together since the days of Socrates and Plato during the fifth and fourth centuries b.c. in Athens. In The Republic, Plato writes that truth is best reached through a process called dialectic, which is a form of
rigorous discussion from which no fact or argument is withheld. Plato believed that this type of deliberative and substantive discussion was necessary if a government was to serve the needs of its citizens. While the city-state of Athens and its early conception of democracy and government by the people did not survive, the ideas found in Plato's writings continue to be influential in terms of free speech. By the fall of the Roman Republic, about 27 b.c., the government system of autocracy was established and well recognized throughout Europe and the Middle East. Popular support was not needed for monarchs to govern, since their authority was said to come from a divine right to rule granted by God. In most cases, citizens who did speak out in ancient or medieval societies did so with the risk of punishment or even death by those in political power. With Johannes Gutenberg's invention of the printing press during the 15th century, among other advances in areas such as the sciences and religious reformations, autocratic rule began to be questioned and challenged. Suddenly, much more information was accessible to a broader range of citizens, and as a result, governing authorities came under greater scrutiny. Plato's view of free speech began to reemerge in the works of other philosophers during the 17th and 18th centuries, especially those of John Milton and John Locke, among others. The views of these philosophers, as well as the writings of Sir William Blackstone in Commentaries on the Laws of England (1765–69), were influential in how the framers of the U.S. Constitution viewed freedom of speech and freedom of the press. The American legacy of both freedom of speech and freedom of the press can be traced to this 17th-century notion of libertarianism—a no-prior-restraints doctrine that still allowed subsequent punishment, especially for seditious libel. Proposed in 1789 as part of the Bill of Rights, the First Amendment and the concept of freedom of speech in America were founded on the ideal that citizens need to be free to criticize their government and its officials. The U.S. Constitution, in effect since 1789, was silent on the issue of the civil liberties and civil rights of citizens, and freedom of speech, press, and religion were among those considered important concepts to spell out to make sure that the federal government did not infringe on the rights of
World War II poster advertising war bonds with one of Norman Rockwell's paintings from the Four Freedoms series. Published in 1943 in the Saturday Evening Post (Library of Congress)
citizens. The Bill of Rights, ratified in 1791, guaranteed that citizens could appeal to the federal judiciary if they believed their rights were being infringed upon by the federal government. However, no clear definition has ever emerged as to what the authors of the First Amendment really intended. Thomas Jefferson, author of the Declaration of Independence, and James Madison, who wrote the First Amendment, seemed to reject the English common law tradition of punishing seditious libel, instead favoring a healthy discussion of government. However, only seven years after the ratification of the Bill of Rights, Congress passed the Sedition Act of 1798, which prohibited criticism, mostly in newspapers, of public officials (particularly the president at the time, John Adams). This act emerged, in part, from fear that the United States would be drawn into war with France and Britain. The Sedition Act tested the government’s commitment to freedom of speech and press and showed
that neither were absolute rights. Several journalists were jailed but were released when President Thomas Jefferson took office in 1801. According to legal scholar Cass Sunstein, the American notion of freedom of speech comes from the concept of a Madisonian First Amendment, which is based on the notion that sovereignty in the United States lies with the people, not a specific ruler or government. In effect, the First Amendment created what Sunstein calls a “government by discussion,” which recognizes a commitment to equality and the use of public forums to make decisions and solve problems. However, both a literal reading of the First Amendment and the history surrounding its inception are vague and ambiguous and are probably not adequate for courts to rely upon in making contemporary freedom of speech and freedom of press decisions. If the First Amendment was really intended to serve a democratic society in protecting the rights of citizens to engage in political discussions, which involve the wellbeing of an organized and lawful society, then so many other forms of nonpolitical speech, like commercial speech, should not receive protection today under the First Amendment. Sunstein, like some other scholars, believes that government should control some speech that is harmful (like certain forms of advertising, pornography, and allowing political candidates to spend as much as they like during an election under the auspices of freedom of speech), otherwise the American democratic system may be undermined. When looking at the relevant case law, the U.S. Supreme Court has never put forth an absolute view on free speech rights. No significant cases dealing with freedom of speech reached the Supreme Court until the 20th century. Several cases have shaped both the understanding of and theories surrounding the concept of freedom of speech. The Court’s first significant ruling on freedom of speech came in 1919. Two years earlier, Congress had passed the Espionage Act, prohibiting any political dissent that would harm America’s effort in World War I. The Court upheld one of more than 2,000 convictions for encouraging antidraft sentiments in Schenck v. United States. The Court’s unanimous decision declared the Espionage Act constitutional, and even though the war had ended, declared that urging resistance to the draft would pose a threat to the nation’s efforts to win the war. The opinion, written
by Associate Justice Oliver Wendell Holmes, would introduce the clear-and-present-danger test, which gave freedom of speech low priority in legal decisions for the time being. Any speech with a tendency to lead to "substantial evil" or to cause harm to vital interests that Congress had the authority to protect could be banned. Holmes wrote that it is a question of "proximity and degree" as to whether or not the speech was dangerous. His famous example stated that a man would not be protected for falsely shouting "fire" in a crowded theater, which would cause a panic. Therefore, speech is most dangerous when it will cause immediate harm. The Schenck case demonstrated that political speech did not receive much protection. A time of war allowed for the suppression of free speech in this decision, since Congress had the right to protect the interest of the military's involvement in that war. The clear-and-present-danger test was meant to move beyond the bad tendency theory that for centuries had allowed for censorship of any speech that had a tendency to undermine government authority. Despite the majority decision and Holmes's opinion in Schenck, no evidence existed that the defendant's activities actually harmed America's war efforts. In Abrams v. United States (1919), Holmes more firmly defined his view. Five socialists who were sympathetic to the Bolshevik movement in Russia distributed pamphlets attacking Woodrow Wilson's use of U.S. troops to fight against the revolution. While the majority decision upheld the convictions, which included 20-year prison sentences, Holmes did not see a real threat. In his dissenting opinion, he wrote that if the threat is not so severe that immediate action is needed to save the country, then speech should not be censored just because it is unpopular. This is closer to defining the clear-and-present-danger test. To Holmes, a pamphlet criticizing Wilson's policies was less harmful than actively campaigning against the draft. The case of Gitlow v. People of the State of New York (1925) became extremely important in later decisions involving the issue of states' rights. Gitlow was a Socialist in New York who distributed a pamphlet calling for a general strike, which he believed would start the downfall of the capitalist system. His conviction under state law was upheld
by the Supreme Court, but the majority opinion also included the statement that a state's attempts to restrict freedoms of speech and press were subject to review under the First Amendment. This set the precedent that federal courts could review decisions by state courts, since freedoms of speech and press were considered a federal issue. The precedent came from the Fourteenth Amendment, which guarantees the equal protection of the laws and declares that no state shall deprive any person of life, liberty, or property without due process of law. During the 1950s, America's paranoia about the threat of communism led to the prohibition of many speech freedoms. In Dennis v. United States (1951), the Court upheld convictions of 11 Communist Party members for advocating the overthrow of the U.S. government, which had been outlawed under the Smith Act of 1940. The balancing test emerged in Dennis, where national security was deemed more important than free speech. With this test, competing rights are balanced to determine which should be given priority. However, by 1957, the Court had changed its view on a similar case. In Yates v. United States, the Court overturned similar convictions of Communists. The decision stated that since the overthrow of the government was only advocated in theoretical terms, it qualified as speech, which should be protected under the First Amendment. This included the rise of the "preferred position" doctrine, which is similar to balancing, but the First Amendment is favored. The Supreme Court decision in Brandenburg v. Ohio (1969) signaled the end of laws that allowed for suppression of speech which merely advocated the overthrow of the government, even if the threats were violent. This was also the last time the Supreme Court heard an appeal in a sedition case. A member of the Ku Klux Klan was arrested in southwestern Ohio for stating in an interview that he would take revenge against officials who were trying to bring about racial integration. The Court overturned the conviction, stating that the Ohio state law under which Brandenburg had been convicted was so broad that it would allow unconstitutional convictions for people who only talked about resorting to violence. The Court ruled that state laws had to be drawn narrowly enough to punish only speech directed to inciting, and likely to produce, imminent lawless action.
The view of absolute rights regarding free speech has never received support from a majority of Supreme Court justices. Absolutism was supported by Associate Justices William O. Douglas and Hugo Black during the 1950s who believed that speech should always be protected and a balancing test undermines the principles of democracy. Also, speech should not be judged in terms of good or bad, since those types of judgments are a form of censorship. This view, they believed, was especially true for the issue of pornography and defining obscenity. Absolutists argue that government is the enemy of free speech and that government should be neutral in its regulation of such matters. All speech, not just political, should be protected, because the distinction between political and nonpolitical is too difficult to make. If any form of speech, whether it is political, artistic, sexually explicit, or even symbolic, is banned, then a slippery slope is created; one ban will lead to others, and more will be allowed than originally intended. Opponents of the absolutist view support the notions of balancing, since the government should have the right to impose restrictions in areas such as hate speech, advocacy of crime or violent overthrow of the government, obscenity, and libel, which can be harmful to society. Different types of speech are also looked at with different levels of scrutiny to determine First Amendment protections. For example, any law that appears to ban speech on its face deserves strict scrutiny by the Court, and in such a case, the government must present a compelling state interest. Often, symbolic speech, which is not pure words, receives intermediate scrutiny when laws are passed to regulate such conduct. Usually, pure speech receives greater protection than actions. However, flag burning is protected as free speech, and has been considered political speech (as in Texas v. Johnson 1989). Other types of action, under the guise of political speech, are not protected, such as kidnapping, murder, or other various crimes. The government must show a compelling interest in banning speech or expression, and ordinances usually need to be narrowly tailored. If they are too broad, then they have a harder time withstanding scrutiny by the Court. In RAV v. St. Paul (1992), the Supreme Court ruled that the city ordinance that banned the display of symbols that would arouse
anger, alarm or resentment based on race, color, creed, religion or gender was too broad, and restricted an expression based on content. In doing so, the Court overturned a conviction stemming from the burning of a cross on the front lawn of an AfricanAmerican family’s home, reinforcing the notion that one view cannot be favored over another by the government. Also, time, place, and manner restrictions on speech in a public forum must be content neutral and must represent other compelling public interests, such as safety, traffic, or preserving order if a riot is likely. Traditional public forums would include public streets, sidewalks, or parks; other designated public forums include public auditoriums, schools, and universities, which, once opened up for the discussion of ideas, are considered traditional forums; other public properties can have reasonable restrictions, because they are not traditional open forums and are used for specific government purposes, like post office lobbies and stacks in public libraries. Government restrictions on such activities must be content neutral and reasonable. Further Reading Middleton, Kent R., and William E. Lee. The Law of Public Communication. Boston: Allyn & Bacon, 2006; Pember, Don R., and Clay Calvert. Mass Media Law. Boston: McGraw-Hill, 2005; Sunstein, Cass R. Democracy and the Problem of Free Speech. New York: Free Press, 1995. —Lori Cox Han
freedom of the press According to the First Amendment to the U.S. Constitution, “Congress shall make no law . . . abridging the freedom of speech, or of the press.” Throughout the nation’s history, a unique relationship has existed between the American government and the press, and the two often have competing interests. The original intent of the First Amendment is still debated today, since the terms “freedom” and “the press” can take on drastically different meanings in a contemporary context compared to what they meant during the Founding Era. Sir William Blackstone, whose work Commentaries on the Laws of England (1765–69) was definitive in establishing
many common laws in America, was also influential in how the framers of the U.S. Constitution viewed freedom of speech and freedom of the press. Blackstone held a limited view of freedom of the press, condemning government censorship, yet supporting punishment for publishers who promoted sedition, treason, or other kinds of libel. The American legacy of both freedom of speech and freedom of the press can be traced to this 17th-century notion of libertarianism—a no-prior-restraints doctrine that still allowed subsequent punishment, especially for seditious libel (criticism of the government). As early as the 16th century, the British government sought ways to minimize press freedoms. Many publications were restricted through licensing and prior restraint (censorship prior to publication), and seditious libel was outlawed. The British government attempted to use similar tactics to silence dissenting views among American colonists during the late 17th and early 18th centuries. However, despite the use of taxes, licensing, and sedition laws by the British government, printers in colonial America had more freedom in this regard than their British counterparts. No discussion of freedom of the press would be complete without mentioning the famous trial of John Peter Zenger during the period 1734–35. Zenger, the publisher of the New York Weekly Journal, was charged with seditious libel and jailed for nine months due to the publication of stories about the governor of New York, William Cosby. Zenger had printed the viewpoints of the opposition party, who had accused the governor of dishonesty and oppression. At the time, any seditious libel, true or false, was punishable, if it undermined the authority of the government official to govern. In reality, the bigger the truth (for example, government corruption), the more harm it would cause the official if it indeed undermined his authority. Even though Zenger had clearly broken the sedition law in place at the time, the jury in the case eventually acquitted Zenger on the libel charges. Following a convincing plea by his attorney, renowned criminal lawyer Andrew Hamilton, Zenger was found not guilty based on the notion of truth as a defense. In doing so, the jury ignored the sedition law and, for the first time, this concept was recognized. However, despite Zenger’s legacy as a true hero of American journalism, his case did not set a legal precedent, and the case had no real impact on freedom of the
press at the time. Colonial legislatures and assemblies simply used other legal means to punish printers and editors for seditious libel, and many were still jailed for publishing dissenting viewpoints. Nonetheless, the Zenger trial was considered an important event in rallying the colonists to fight against press censorship by British rulers and ultimately helped promote the concept of a free press in America. It is important to note, however, that the “press” and its role within the political process during the 18th century was very different from the role that the press plays in contemporary politics. Most publishers, like Zenger, were actually printers who specialized in the craft of operating a printing press. Newspapers were also not autonomous, in that, to stay in business, publishers relied on government printing contracts for other printing jobs such as pamphlets and handbills, and those writing political commentaries at the time did so through opinion and not through any type of investigative journalism as a check on the government. The press did play an important role in the American Revolution, and it was a highly partisan entity at that time. In addition, many newspapers engaged in the propaganda war that preceded the Revolution, and opposition to the cause of American colonists gaining their freedom from British rule was virtually silenced in the press. Newspapers would also play an important role in the debate over the eventual ratification of the U.S. Constitution following the Constitutional Convention in 1787 by reprinting the Federalist for broader distribution among the 13 states. Proposed in 1789 as part of the Bill of Rights, which were ratified in 1791, the free speech and press clauses of the First Amendment were founded on the ideal that citizens need to be free to criticize their government and its officials. Thomas Jefferson, author of the Declaration of Independence, and James Madison, who wrote the First Amendment, seemed to reject the English common law tradition of punishing seditious libel, instead favoring a healthy discussion of government. Many people mistakenly believe that the freedoms set out in the First Amendment (religion, speech, press, and assembly) were so important to the members present during the first session of Congress that they placed these rights first among the other proposed amendments. In reality, the First Amendment was originally the third of 12 proposed.
The first two amendments dealt instead with the procedural matters of apportionment in the House of Representatives and salary increases for members of Congress (the latter issue was finally approved by three-fourths of the states to become the 27th Amendment in 1992, prohibiting current members of Congress from giving themselves a pay raise during the same session of Congress). Nonetheless, Jefferson and Madison were not alone in their beliefs that certain guarantees for civil liberties (free expression among them) should be adopted. So the Bill of Rights, ratified in 1791, guaranteed that citizens could appeal to the federal judiciary if they believed their rights were being infringed upon by the federal government. However, only seven years after the ratification of the Bill of Rights, Congress passed the Sedition Act of 1798, which prohibited criticism, mostly in newspapers, of public officials (particularly the president at the time, John Adams). This act emerged, in part, from fear that the United States would be drawn into war with France and Britain. The Sedition Act tested the government’s commitment to freedom of speech and press, and showed that neither were absolute rights. Several journalists were jailed but were released when President Thomas Jefferson took office in 1801. The issue of prior restraint remained important in the evolving relationship throughout the 20th century between the U.S. government and the American news media. The United States Supreme Court decision in Near v. Minnesota (1931) is the landmark case for prior restraint, and pitted the First Amendment rights of the news media against states’ rights through the due process clause of the Fourteenth Amendment. The case involved the Saturday Press of Minneapolis, a local tabloid, and its publishers, Jay Near and Howard Guilford, who were defaming local politicians and other officials in an attempt to clean up city corruption. Near and Guilford had begun the paper as a means to publish the names of those public officials who were involved in bootlegging, many of whom took bribes. Despite being an ill-reputed paper published by an editorial staff who was admittedly racist, antiunion, and extremely intolerant, Near was most intolerant of government corruption and organized crime in the Minneapolis/St. Paul area. Critics of the paper claimed that Near was using his paper as
a form of blackmail. Those who did not want their names to appear in connection with the illegal trafficking of alcohol would either buy advertising or pay Near directly. His tactics so angered local officials that the paper was shut down in 1927 under the 1925 Minnesota "gag law," which allowed suppression of malicious and defamatory publications that were a public nuisance. Guilford had also been shot and wounded by unknown assailants. Near sought support from anyone who might help him in his appeal of the decision. Surprisingly, he gained the help of Robert R. McCormick, publisher of the Chicago Tribune. While McCormick was no fan of Near's paper or his tactics, he did believe that, left unchallenged, the gag law would impede the rights of a free press. The American Newspaper Publishers Association also joined in the case and paid for part of the appeal process. In a 5-4 decision, the Court ruled in favor of Near. The Court's ruling showed the belief among those in the majority that suppression was more dangerous than an irresponsible attack on government officials, and that the government carries a heavy burden of proof for a prior restraint. However, the decision did outline three exceptions for prior restraint: the publishing of military secrets, the overthrow of the government, and obscenity. Despite the apparent victory, the case remains a paradox for the claim of freedom of the press since the Court had laid out three exceptions to the rule for prior restraint. The issue of whether or not the government could rely on prior restraint to stop the publication of information it deemed dangerous to national security came before the Supreme Court 40 years after the Near decision, in New York Times v. United States (1971). As the Vietnam War continued to divide the nation, President Richard Nixon sent Attorney General John Mitchell to a federal district court to ask for the suspension of publication of the New York Times for its series of stories on the "Pentagon Papers," a 47-volume study on the "History of the U.S. Decision Making Process on Vietnam Policy." The Defense Department study had been leaked to the newspaper by Daniel Ellsberg, then a defense analyst who worked for the Rand Corporation. The contents were historical and nonmilitary in character but very political and diplomatic, and showed that the United States, as early as the 1940s, was more deeply involved
in Vietnam than the government had reported. The case fell to Federal District Judge Murray Gurfein, a Nixon appointee, on his first day on the job. The judge issued a temporary restraining order after only the third installment of the story in the Times. The Washington Post, also in possession of the “Pentagon Papers,” had also begun running stories and faced the same legal challenge. Under appeal, the Supreme Court agreed to hear the case and issued its decision within one week. The attorneys representing both papers had originally planned to make the case a landmark legal decision for First Amendment rights by arguing that prior restraints are unconstitutional under any circumstance. But after the Court’s decision to maintain a temporary prior restraint, the decision was made to instead win the case on the immediate grounds that the government could not prove a risk to national security. After suspending the Times’s series of stories for 15 days, the Court ruled 6-3 in favor of the press, stating that the government had not met the necessary burden of proof, national security was not involved, and prior restraint was unconstitutional. But the court was divided, with each justice writing an individual opinion. Chief Justice Warren Burger, in his dissenting opinion, raised questions involving the public’s so-called “right to know” when top secret documents were involved. He also criticized the Times for publishing the stories, knowing that the documents had been stolen. Many American journalists wrongly believe that prior restraints no longer exist, and that the issue was resolved in both the Near and the Pentagon Papers case. Prior restraints still exist in many areas, including government licensing of broadcast stations through the Federal Communication Commission (though the practice is widely accepted in the United States, it is still a form of state control). While federal taxation of publications is not allowed, many states have implemented a sales tax for newspapers and magazines. However, the most significant prior restraint that exists is military censorship during a war. Recent examples include the U.S. invasion of Grenada in 1983, of Panama in 1989, and the Gulf War in 1991. All three military actions were cases where the government strictly controlled the flow of news. A national press pool was created following complaints by the news media after the news blackout of the Grenada
invasion. The pool included a rotating list of 16 credentialed reporters preselected by the Pentagon, to remain on call for any emergency military action. The pool was supposed to be immediately transported to any military hot spots, but this worked poorly in Panama, with reporters showing up 4 to 5 hours after initial action, and even then coverage was still controlled. Disputes still remain about the actual number of casualties and how they occurred, and some government videotapes shot from military helicopters have never been released. Coverage in the first Gulf War in 1991 was heavily orchestrated by the Pentagon. Most news organizations agreed to the pool, but most coverage included military footage from bombings and military briefings. Only a few reporters tried independent tactics to get stories, but many were held in military detention or threatened with visa cancellations. Similar coverage, or denial of press access, occurred during the American-led NATO action in the Balkans in 1999, and in American military action in Afghanistan beginning in 2001 following the terrorist attacks of 9/11. This type of coverage differed substantially from that during the Vietnam War, when reporters had much greater access to military personnel on the battlefield. The tighter control of press coverage by the Pentagon since Vietnam stems from the realistic portrayals through the press, particularly television coverage, during the late 1960s that contrasted starkly with government reports that America was winning the war against communist aggression in Southeast Asia. Critical news coverage of the Vietnam War also contributed to a decline in public support for the war during both the Johnson and Nixon administrations. Coverage of the War in Iraq, which began with the American invasion in 2003, has also been controlled by the Pentagon. However, a new strategy emerged in an attempt to provide more information to the press through the practice of embedding reporters (who would travel with various military units in an effort to report directly from the field). Libel is also an important topic when considering press freedoms. Libel is an expression that damages a person’s standing in the community through words that attack an individual’s character or professional abilities. Common laws involving protection from defamation have been in use since the 13th century. Most favored the plaintiff, who only had to prove
publication, identification, and that the published remarks were defamatory. In more modern times, the defendant could only avoid paying damages by proving that the remarks were true, were a fair comment, or were from privileged information (such as official government or judicial documents). The plaintiff's burden of proof includes defamation, identification, publication, as well as timeliness and correct jurisdiction, and some plaintiffs must also prove falsity. Prior to 1964, libel was a form of communication not granted protection under the First Amendment. The Supreme Court case New York Times v. Sullivan (1964) changed that. In this case, the Court ruled that the First Amendment protects criticism of government officials even if the remarks are false and defamatory. Public officials can only sue for libel if they prove that the defamation was published with the knowledge that it was false, or with reckless disregard for the truth, also known as actual malice. The case dealt with an ad that ran in the New York Times in 1960, purchased by a group of civil rights activists, which discussed supposed abuses against black students by police in Montgomery, Alabama. The ad contained several inaccurate statements, and Sullivan, the police commissioner, and other officials all sued for libel after the Times would not print a retraction. Despite an award of $500,000 by a lower court, which was upheld by the Alabama Supreme Court, the U.S. Supreme Court ruled unanimously to overturn the judgment against the newspaper. Even though the Times could have checked the facts in the ad against stories they had run, the Court ruled that robust political debate was protected by the First Amendment, and that public officials must prove actual malice, either that the information was printed with knowledge of falsity, or that the paper had exercised a reckless disregard for the truth. Also, the ruling relied on previous rulings in the area of seditious libel by not allowing the government to punish the press for criticism of its policies. With the Sullivan ruling, four aspects of the law were changed: the protection of editorial advertising, which is given higher constitutional protection than commercial advertising; libel had to be considered in terms of public issues and concerns, protecting uninhibited and robust public debate; partial protection of some false statements, excusing some falsehoods
uttered in the heat of debate over the public conduct of public officials; and actual malice, meaning that the publisher of the libel acted either in the knowledge that the assertion was false or in reckless disregard of whether it was true or not. In a subsequent case, Gertz v. Welch (1974), the Court defined three types of public figures: all-purpose (those with special prominence in society and who exercise general power or influence and occupy a position of continuing news value); limited or vortex (people who have willingly injected themselves into a debate about a public controversy for the purpose of affecting the outcome, and as a result, must prove actual malice for defamation relating to the activity); and involuntary public figures. In addition to prior restraint and libel, many other areas involving the role of journalists and First Amendment protections (such as newsgathering practices, invasion of privacy, or weighing the rights of reporters to have access to information versus the rights of a defendant under the Sixth Amendment to a fair trial) continue to contribute to an ever-evolving definition of freedom of the press in contemporary American politics. See also sunshine laws. Further Reading Cook, Timothy E., ed. Freeing the Presses: The First Amendment in Action. Baton Rouge: Louisiana State University Press, 2005; Emery, Michael, and Edwin Emery. The Press and America: An Interpretive History of the Mass Media. 8th ed. Needham Heights, Mass.: Allyn & Bacon, 1996; Lewis, Anthony. Make No Law: The Sullivan Case and the First Amendment. New York: Vintage Books, 1991; Pember, Don R., and Clay Calvert. Mass Media Law. Boston: McGraw-Hill, 2005. —Lori Cox Han
gay and lesbian rights The rights of gay men and lesbians is an issue that arrived relatively recently on the national political scene. “Gay and lesbian rights” is a general term that stands for a broad variety of constitutional, legal, and policy issues. Issues of legal equality are at the forefront of the current debate over gay and lesbian rights.
The crux of the debate over gay and lesbian rights is whether or not this is a civil rights issue comparable to the battle over equal rights for racial minorities and for women. Gays and lesbians often argue that they are seeking the same sort of rights that minorities and women had to fight for, such as the right to serve in the military, which is seen by many as a basic requirement for full citizenship, and protections against job discrimination. Like women and minorities, gays and lesbians have often had to turn to the courts since the democratic process generally offers little protection for a minority group against a hostile majority. On the other side of the debate are those who argue that homosexuality is a moral choice and not comparable to race or gender. From this perspective, protecting gays and lesbians from, for example, workplace discrimination, would amount to government endorsement of homosexuality. Under the Clinton administration, the federal government leaned somewhat in the direction of protecting equal rights for gays and lesbians. President Clinton rescinded the ban on granting top-level security clearance to gay and lesbian Americans, and, as will be discussed below, attempted to end the ban on gays and lesbians serving in the military. Under the George W. Bush administration, the federal government has more clearly sided with those who consider homosexuality immoral. President Bush called for a constitutional amendment to prohibit same-sex marriage and his secretary of education went as far as to threaten to pull federal money from the Public Broadcasting Service if it aired a children’s television show in which the character Buster the Rabbit was given a tour of a maple syrup farm by a lesbian couple in Vermont. The federal courts have so far attempted to find a middle ground on these issues. They clearly have not treated discrimination on the basis of sexual orientation as the legal equivalent of race or gender discrimination. The United States Supreme Court has upheld restrictions on Immigration of gays and lesbians, as well as the exclusion of gays and lesbians from public parades and from the Boy Scouts. On the other hand, as discussed below, the Supreme Court has struck down state sodomy laws that target gays and lesbians and also struck down a state law that prohibited local governments from enacting laws against discrimination on the basis of sexual orientation.
Poster for a gay pride parade, 1977 (Library of Congress)
This is a complicated area because gay men and lesbians are seeking equal legal rights in all areas of public life. One such area is freedom from employment discrimination. While federal law prohibits discrimination on the basis of factors such as race, gender, religion and national origin, it offers no protection to a qualified job applicant who is denied employment solely on the basis of his or her sexual orientation. While Congress has considered passing protective legislation to remedy this gap in the form of the Employment Non-Discrimination Act, such legislation has not yet passed. Some state and local governments have gone further than the federal government and have enacted laws against discrimination on the basis of sexual orientation. However, in 1992, the voters of the State of Colorado passed a ballot initiative striking down all local civil rights laws protecting gay men and lesbians from discrimination and prohibiting any such civil rights laws in the future. In a major legal victory for gay rights advocates, the U.S. Supreme Court, in its decision in Romer v. Evans (1996), held that the ballot initiative deprived gay men and lesbians of the equal right to seek civil rights legislation through the normal democratic process. This was the first pro–gay rights decision in the history of the U.S. Supreme Court and appeared to represent something of a turning point in the legal debate. In 2003, the U.S. Supreme Court weighed in on another major issue pertaining to sexual orientation. In Lawrence v. Texas, the Court struck down a Texas law against same-sex sodomy, referring to the law as an affront to the "dignity of homosexual persons." This ruling was particularly noteworthy because in reaching this result, the Court overturned an earlier Supreme Court decision in 1986 (Bowers v. Hardwick) that was widely regarded as derogatory towards gay men and lesbians. The Court is notoriously reluctant to overrule its earlier decisions, especially a relatively recent decision, so its willingness to do so, together with the Romer decision, may indicate an increased sympathy by the Court to the equal rights claims of gay and lesbian plaintiffs. Another major issue in gay and lesbian rights is qualification for military service. When Bill Clinton ran for president in 1992, he promised to end the ban on gay men and women serving in the military. Shortly after his election, Clinton attempted to do so but encountered strong opposition in Congress, which has ultimate power to regulate the armed forces. Clinton and Congress compromised with an approach called "don't ask, don't tell," which allows gays and lesbians to be discharged if they tell anyone they are gay, but otherwise prohibits the military from discharging a gay or lesbian member of the armed forces. This was widely regarded as a setback for the gay and lesbian rights movement, especially because the number of discharges of gays and lesbians from the military actually increased after the new policy took effect. The military has interpreted the new policy loosely and has discharged, for example, servicemen who have told their psychologists that they are attracted to persons
of the same gender. A 2005 study by the Government Accountability Office found that in the first 10 years of the "don't ask, don't tell" policy, the military discharged 9,500 service members under the policy, 757 of whom had training in critical jobs and/or foreign languages, such as Arabic. President Clinton was successful, however, in rescinding the automatic ban on gays and lesbians qualifying for top-level national security clearance. This ban had prevented gays and lesbians from working in many defense industry jobs even as civilians. At the turn of the 21st century, the leading issue for gays and lesbians was same-sex marriage. Equal marriage rights became a prominent issue for gays and lesbians for at least two reasons. First, marriage is a requirement for many legal rights such as hospital visitation, automatic inheritance, and custody of a partner's children in the event of the partner's death. The federal government has estimated that there are more than 1,000 legal rights and privileges that automatically accompany marriage. The children of gay and lesbian couples also derive very important benefits from having married parents, such as being eligible for health insurance coverage by either adult partner. Second, many gays and lesbians see equal marriage rights as the most basic requirement of full citizenship. Various polls show that a happy marriage is among the most important goals of the great majority of people regardless of sexual orientation. The movement for equal marriage rights received little attention until various state courts began ruling that denying gays and lesbians the right to marry violates basic principles of legal equality. In 2003, the highest court of the State of Massachusetts ordered the state to allow same-sex couples to marry. This had an enormous impact on the public debate. On the one hand, voters in most states voted to ban same-sex marriage from their states. On the other hand, polls showed that public opinion moved in favor of marriage-like rights for same-sex couples, with a majority of Americans favoring either civil unions or full marriage rights for them. The issue of legal equality for gays and lesbians is likely to remain a complex and controversial area in American government for many years to come. Issues such as "don't ask, don't tell" and same-sex marriage affect core institutions of our society and lie at the intersection of conflicting American ideals such as
equal rights and majority rule. For opponents of civil rights for gays and lesbians, this issue amounts to an attempt by the courts and liberal elites to foist overly permissive values on the more conservative majority of the American people. For gays and lesbians and their supporters, the issue is fundamentally about equal rights for all people, with anti-gay policies seen as the modern-day equivalent of our old segregated military and bans on interracial marriage. This is a debate that will continue to embroil the courts, the Congress, and state governments, and is also likely to remain a high-profile issue in presidential elections. See also equal protection. Further Reading Eskridge, William. Equality Practice: Civil Unions and the Future of Gay Rights. New York: Routledge, 2002; Gerstmann, Evan. Same-Sex Marriage and the Constitution. New York: Cambridge University Press, 2004; Richards, David A. J. The Case for Gay Rights: From Bowers to Lawrence and Beyond. Lawrence: University Press of Kansas, 2005. —Evan Gerstmann
gender discrimination Gender discrimination, and its related term sex discrimination, is a pattern of bias against a group of individuals based upon their female or male traits. In popular culture, gender discrimination is sometimes used synonymously with the related term, sex discrimination. Feminist scholars of the 20th century were successful in separating the categories of sex and gender, however, in order to describe the human experience more fully and with greater nuance. For the purposes of feminist scholarship, sex refers to born characteristics; whether they are chemical differences such as hormones, or reproductive organs, the notion of sex is one generally understood to be relatively fixed through biology. Sex discrimination therefore is bias, exclusion, or maltreatment on the basis of one’s sex; a decision to hire only men into the field of teaching because they will not have babies and leave the workforce would be an example of sex discrimination. In contrast, gender is understood as a mutable characteristic; generally this term explains the social
characteristics, inclinations, and behaviors associated with one's sex. Oftentimes gender is described as a person's identity. Many feminist scholars argue that while sex is immutable, gender is highly influenced by social arrangements, understandings, expectations and institutions. Systems of parenting, religious norms, and other cultural traditions and expectations help mold the individual's gender traits. Because gender is more subtle than sex, gender discrimination can be more insidious than its partner attitude, sex discrimination. An example of gender bias would be a preference to grant custody rights automatically to women based on the assumption that women make better parents than men. Still, some would warn against drawing too bright a line between the categories of sex and gender. The scientific disciplines cannot definitively demonstrate the limits of biology: It is quite possible that hormonal differences and sex characteristics influence our patterns of behavior and social traits, giving sex and gender an interactive quality. It is also becoming more common to find humans born with both male and female sex characteristics, making the distinctions of sex, once thought to be definite, more complex and subtle. One is hard-pressed to find a country that does not demonstrate a historical pattern of gender and sex discrimination. In China, the one-child policy, combined with a cultural preference for male children, has led to the widespread abandonment of girl babies. In the United States, gender/sex discrimination is evident in the 21st century within subtle employment practices that allow certain professions, dominated by women, to be paid less than male-dominated fields that require roughly the same level of expertise, risk, or training. Still, while there remains gender and sex bias within the United States, much progress on this front was made in the 20th century. The observation of discrimination against women dates back to the founding era. Abigail Adams, in writing to her husband, John Adams, while he was at the Continental Congress, famously admonished him to "remember the ladies." Early examples of blatant sex-based discrimination in America abound: "coverture" laws extended citizenship protections to women through their relationship to men. Property laws, Suffrage laws, and other provisions related to citizenship responsibilities, like jury duty,
excluded women, forcing them into a second-class status throughout the entire 19th century. The first wave of the women’s movement toward equality officially began in 1848, when a group of women and men gathered at Seneca Falls, New York, to draft the Declaration of Sentiments. The document parallels the language of the Declaration of Independence, and calls for the equality of women in America. Some of these early feminists worked toward the abolition of slavery. While successful in that quest, feminist abolition suffragists were bitterly disappointed when, in 1870, the 15th amendment to the U.S. Constitution extended voting rights to freed slaves by inserting the word “male” into the Constitution, excluding freed females and female abolitionist activists from the franchise. The first wave of the women’s movement then focused on extending suffrage to women in order to allow them equal participation in society. Early suffragists themselves sometimes used sex and gender stereotypes to make their claim: some argued that women would make better citizens, while others argued for suffrage on the basis of abstract rights. Still others invoked racist charges, claiming the need for white women to “check” the votes of recent male immigrants and freed black male slaves. The first wave won the achievement of national women’s suffrage in 1919, when the 19th amendment to the Constitution added women’s suffrage to the Constitution. Still, women’s access to employment and control over their bodies languished through much of the 20th century. The second wave of the American women’s movement began in the 1960s, when women of diverse backgrounds clamored for equality in the workforce and for greater control over their reproduction. Liberal feminists focusing on employment opportunities and access gained tremendous ground in the mid-20 century through a wave of federal legislative changes. The Kennedy administration developed the first Women’s Commission, which produced a report documenting the barriers to women’s economic equality. Based on the observations of this report, as well as the emerging second-wave women’s movement, Congress began to take action. As voters, women pressed their representatives in Washington for the Equal Pay Act, which in 1963 made it illegal to pay men and women of equal skills differ-
different wages for the same job. This law proved limited, however, because it did not take into account other aspects of sex- and gender-based discrimination at work. Perhaps the most significant achievement for employment equality in the 20th century was Title VII of the Civil Rights Act of 1964. The inclusion of sex in Title VII was originally offered as an amendment intended to prevent the legislation from passing, yet Title VII has become the hallmark of women's economic parity. Title VII extends much greater protections to women, making workplace sex discrimination of all kinds illegal. Still, the United States Supreme Court had to interpret the legislation in order to breathe life into Congress's promise of equality. The Supreme Court began reviewing "sex discrimination" cases as a result of Title VII. Many feminists argued that sex discrimination should be treated as analogous to racial discrimination for purposes of the law. The Court extends "strict scrutiny" to allegations of racial discrimination, meaning that the Court presumes there to be no legitimate reason for employers or governments to treat individuals differently on the basis of race, placing a high burden on the discriminating party to legitimize its actions. This concept makes all race-based treatment inherently constitutionally suspect, allowing very few examples of race-based treatment to survive constitutional scrutiny. The Court's alternative to strict scrutiny in the wake of Title VII was a standard of "reasonableness," in which the Court would only require the defendant to prove that sex-based treatment was reasonably related to a stated goal. In 1973, a plurality of the Court argued in Frontiero v. Richardson that strict scrutiny should be applied to gender discrimination claims. Still, that view never garnered a majority of the Court, and in 1976, in Craig v. Boren, the Court concluded that sex-based discrimination fits neither strict scrutiny nor mere reasonableness and created a third, middle category of review: intermediate scrutiny. This level of analysis allows the Court to view some instances of sex discrimination as legitimate. In practice, the Court's intermediate scrutiny standard has allowed insidious forms of gender and sex discrimination to stand. Many feminists today argue that the only resolution to the intermediate scrutiny standard is to pass an Equal Rights Amendment (ERA) to the Constitution.
An ERA was introduced in Congress in 1923, but gained little momentum in the wake of the suffrage movement's recent success. The country nearly ratified the ERA decades later: advocates reintroduced the amendment in 1972, and Congress swiftly sent it to the states for ratification. The country fell three states short of ratification when, after one congressional extension, the amendment expired on June 30, 1982. Today feminists concerned with sex and gender discrimination argue that ratification of the ERA would close the gap between women and men by making all forms of unequal treatment illegal. Still, the law is limited in affecting major cultural norms, and some argue that the ERA would not have the impact advocates desire. Battles today around sex and gender discrimination often focus on reproductive rights, which have eroded since the 1970s, and the observation that women and men still receive different treatment at work. While women can be found in every institution of power today, they are rarely in places of authority within those institutions, and often face insidious barriers and sexual harassment. Another more recent use of the term gender discrimination is its application to gay Americans, whose sexual identity often challenges norms of gender behavior. The future holds potential for legal protections against gender discrimination to extend to homosexuals, extending the promise of American equality to another group that has faced historical discrimination. Moreover, it is important to recognize the intersectionality of various forms of discrimination: Racism, homophobia, and age-based discrimination, for example, often intersect with gender or sex discrimination. Recent feminist scholarship focuses on the nexus of systems of discrimination to more fully understand how individuals experience their identity, and how systems of oppression sustain one another. See also equal protection; sexual and reproductive health policy; women's rights. Further Reading Baer, Judith A. Our Lives Before the Law: Constructing a Feminist Jurisprudence. Princeton, N.J.: Princeton University Press, 1999; Burk, Martha. Cult of Power: Sex Discrimination in Corporate America and
What Can Be Done about It. New York: Scribner Books, 2005; Butler, Judith. Gender Trouble: Feminism and the Subversion of Identity. New York: Routledge, 1990; Crawford, Mary, and Rhoda Unger. Women and Gender: A Feminist Psychology. 4th ed. Boston: McGraw-Hill, 2004; Hoff, Joan. Law, Gender, and Injustice: A Legal History of U.S. Women. New York: New York University Press, 1991; Kerber, Linda K. No Constitutional Right to Be Ladies: Women and the Obligations of Citizenship. New York: Hill and Wang, 1998; MacKinnon, Catharine A. Toward a Feminist Theory of the State. Cambridge, Mass.: Harvard University Press, 1989. —Melody Rose
Jim Crow laws Originating in the antebellum world of popular entertainment and extending to the world of politics and public policy in post-Reconstruction America, the term Jim Crow carries a profound legal, political, and cultural legacy. The term simultaneously conjures up images of a cartoonish, stereotypical black male character from minstrel shows, yet also refers to the specific proliferation of state and local laws—predominantly in the South—that facilitated segregation and racial injustice in American life and institutions in the century following the Civil War. Emerging from minstrel shows in the 1830s–40s, the term "Jim Crow" refers to a stock black male character (portrayed by white men in blackface) depicted on stage and referenced in the popular songs of this era and subsequent ones. Jim (i.e., male) Crow (i.e., black) became a standard, often derogatory, phrase used by white Americans to reference and discuss African-American men specifically, and black heritage and culture in general. The largely derisive nature and frequently negative connotation of "Jim Crow" were perpetuated by the blackface portrayal of the stock minstrel character—an individual who was primarily depicted as a simpleton with exaggerated facial features and patterns of speech. The Jim Crow character generally mocked or embellished the perceived dialect, gesticulations, culinary preferences, and overall heritage of black Americans, while at the same time borrowing generously from their rich reservoir of cultural and musical history, interpreting and presenting their version
A drinking fountain on the county courthouse lawn, Halifax, North Carolina, 1938 (Library of Congress)
of black song and dance to white audiences across the country. Thus, having been firmly established as slang for black men and culture through public performance, and later further cemented in the American psyche via popular folk songs such as “Jump Jim Crow” and “Zip Coon,” among others, the term “Jim Crow” became a staple not only in minstrel performances and to their audiences, but increasingly in the American vernacular, making its way into the general American (often working-class) culture through the traveling white minstrel troupes in blackface. Critics of the minstrel shows’ exaggerated characterizations of black America, such as the prominent African-American author-orator-activist Frederick Douglass, were particularly offended by what they viewed as the pernicious effect of the Jim Crow image in American culture. Moreover, Douglass was outraged that the white performers benefited financially while appealing to the masses’ lurid tastes and
prejudices. Indeed, in 1848, Douglass referred to black-face imitators who perpetuated the Jim Crow image as “the filthy scum of white society, who have stolen from us a complexion denied to them by nature, in which to make money and to pander to the corrupt taste of their white fellow citizens.” While it is apparent that not all people who viewed and performed in minstrel shows were wholly contemptuous of, or antagonistic toward, African Americans and their culture—indeed, northerners, abolitionists, and many middle- and working-class Americans embraced the African rhythms, stories and melodies at the heart of the minstrel music (for a fascinating consideration of this paradox, see Eric Lott’s provocative book Love and Theft)—nonetheless, the pervasive image of the dim-witted Jim Crow, with his “Sambo” stereotype fully intact, no doubt perpetuated many of the most unfortunate, simplistic, and lasting negative images of African Americans.
The Jim Crow character, and the legacy of blackface minstrel shows that created him, continued to permeate American popular culture and political debates. The casting of Kentucky native D. W. Griffith's controversial, virulently racist silent film epic The Birth of a Nation (1915) was in one crucial aspect a virtual minstrel show: most of the actors portraying African Americans in the film were in fact white men in blackface. Moreover, contemporary African-American auteur Spike Lee—the creative force behind such films as Do the Right Thing, She's Gotta Have It, and Jungle Fever—provided a modern twist on the minstrel show in contemporary corporate America with his film Bamboozled (2000), a wicked satire of television, the entertainment industry, and American attitudes toward race. (In the film, the television show in question features a minstrel show of black actors in blackface . . . and becomes a huge hit.) While derived from the aforementioned character in songs performed in minstrel shows, in the realm of U.S. politics and law the term "Jim Crow laws" is a stock phrase meant to characterize any of the multitude of state and local laws designed to maintain the status quo in the South in the years following Reconstruction: namely, to enforce the segregation of African Americans in various facets of society. Thus, from the 1870s through the 1960s, myriad "Jim Crow" laws—the majority of which were passed and enforced in southern and border states—were established to express state sovereignty and circumvent federal attempts to tear down the legal and cultural vestiges of slavery and discrimination. Specifically, this meant passing laws that sought to thwart the spirit and enforcement of the Fourteenth and Fifteenth Amendments—Civil War era additions to the U.S. Constitution intended to guarantee freed slaves and their descendants equal protection and due process under the law, as well as the right to vote. Prominent examples of state and local laws and procedures designed to disenfranchise African Americans included poll taxes, the grandfather clause, and literacy tests. However, Jim Crow laws were not limited to stifling African-American suffrage, and over the next several decades a dizzying array of legal provisions extended the segregationist order into other areas of political and social life. For example, state and municipal laws emerged that prohibited racial integration and equality in a broad range of
activities, institutions, and conditions in American public life and private business, including marriage, libraries, lunch counters, cemeteries/burial rights, juvenile delinquents, education, fishing, boating, bathing, seating in theaters, telephone booths, prisons, bathrooms, child custody matters, transportation (trains and, later, buses), medical care, and hospital entrances. (The Martin Luther King, Jr., Historical Site maintains a Web page devoted to diverse examples of Jim Crow laws across the United States over many decades, providing a listing of state and municipal laws in alphabetical order by state, from Alabama to Wyoming; this resource is available online at http://www.nps.gov/malu/documents/jim_crow_laws.htm.) As state and municipal efforts to retard or outright forbid integration in major sectors of public and private life flourished throughout the South, the United States Supreme Court handed down key decisions that invalidated federal efforts to foster equality in public accommodations and provide for equal protection under the law. For example, the Supreme Court ruled in 1883 that the federal government could not outlaw private forms of discrimination, in effect invalidating much of the spirit of the 1875 Civil Rights Act passed by Congress. Moreover, in 1896, with the Court's decision in Plessy v. Ferguson, the Jim Crow era was provided with additional constitutional cover when the Supreme Court established the "separate but equal" doctrine in response to the equal protection (Fourteenth Amendment) argument of Homer Plessy, a Louisiana resident of one-eighth non-white descent (the rather pejorative term for this lineage was "octoroon") who was denied access to a railroad car reserved for Caucasians under the state's law segregating railroad cars. The Court rejected Plessy's Fourteenth Amendment equal protection claims, and the "separate but equal" doctrine became the reigning legal precedent for the next 58 years, until the Warren Court rejected the longstanding doctrine in the landmark cases Brown v. Board of Education (1954) and Brown II (1955), thereby calling for the desegregation of America's public schools with "all deliberate speed." The Brown decision (the plaintiff's case was spearheaded by the NAACP's Thurgood Marshall, later the first African American to serve on the U.S. Supreme Court) illustrated that direct legal, political,
and cultural challenges to segregationist Jim Crow laws could be successful. By the time of Brown, a number of key decisions and dynamics had attacked the very foundation of segregation and helped to facilitate national change and a questioning of the legal, political, and cultural Jim Crow order in the South. In 1948, President Harry Truman signed an executive order integrating the U.S. armed forces—the same year that he endorsed a civil rights plank in the Democratic Party platform, effectively pushing the southern segregationist wing of the party out the door (though this tumultuous political divorce would not become official at the national level until the 1964 presidential election). In 1957, the "Little Rock Nine" integrated the public schools of Little Rock, Arkansas—with assistance from President Dwight Eisenhower—and on June 11, 1963, President John F. Kennedy addressed the country on the matter of integration and civil rights from the Oval Office. Calling civil rights a "moral" issue as "old as the scriptures" and as "clear as the constitution," Kennedy provided a legal and moral framework for advancing federal legislation that would provide for equality in public accommodations and employment, legal protections desired by civil rights activists and organizations for decades. In addition, events on the ground in the South and elsewhere illustrated that many Americans were openly refusing to abide by Jim Crow barriers to equality. The 1955–56 Montgomery bus boycott—sparked by Rosa Parks and led by a young minister named Martin Luther King, Jr.—challenged the morality and constitutionality of segregation on Montgomery, Alabama's buses, and the U.S. Supreme Court eventually found such segregation to be unconstitutional. A few months prior to the bus boycott, the murder of a 14-year-old African-American boy from Chicago—Emmett Till—in Money, Mississippi (for whistling at a white woman), brought international and national attention to the ongoing plight of blacks in the Jim Crow South. Likewise, the 1964 "Freedom Summer" college student–led nonviolent voter registration drives for black citizens in Mississippi—which resulted in the murder of civil rights workers Andrew Goodman, James Chaney, and Mickey Schwerner—garnered international headlines and helped build moral support and political capital for eventual federal voting rights legislation. Similarly, the March 7,
1965 march for voting rights organized by the SCLC (Southern Christian Leadership Conference) and SNCC (Student Nonviolent Coordinating Committee)—known as "Bloody Sunday" due to the brutality that met John Lewis, Hosea Williams, and other civil rights marchers as they tried to cross the Edmund Pettus Bridge in Selma, Alabama—brought further attention to the need for comprehensive federal legislation to combat Jim Crow voting procedures still prevalent in much of the South. The Civil Rights Act of 1964—the "public accommodations" bill sought by civil rights leaders for nearly a century—finally was realized in the summer of that year. The landmark legislation—promoted by President John F. Kennedy in 1963 but ultimately passed and signed by President Lyndon Baines Johnson—provided for equal treatment under the law in employment and public accommodations, regardless of race, religion, or national origin. It also established the EEOC—the Equal Employment Opportunity Commission—to investigate claims of discrimination in the workplace. The following year, the Voting Rights Act of 1965 provided federal oversight of the right to vote via a guarantee that malfeasance at the state and county level would be actively resisted by the federal government, as the act empowered the attorney general of the United States to send federal supervisors to areas of the country where fewer than half of the eligible minority voters were registered. Voter education and outreach initiatives also accompanied this concerted effort to undo pernicious remnants of Jim Crow laws. Thus, taken collectively, the Civil Rights Act of 1964 and the Voting Rights Act of 1965—along with the Twenty-fourth Amendment's official ban of the poll tax in federal elections—in many ways represent the nail in the coffin of Jim Crow laws, also known as de jure segregation (segregation imposed by law). Southern and border states' attempts to deny African Americans full participation in American politics and public life through Jim Crow strategies had been dealt several death blows by the U.S. Supreme Court, the federal government, and an active Civil Rights Movement in the 1950s and 1960s. Within five years of passage of the historic Voting Rights Act of 1965, African-American voter registration in southern states had more than doubled, paving the way for increased political power that has—along with seminal court decisions, federal legislation, attitudinal shifts, and the
tremendous sacrifices of many—changed the social and political landscape of the once–Jim Crow South. Further Reading Chafe, William Henry, Raymond Gavins, and Robert Korstad, eds. Remembering Jim Crow: African Americans Tell About Life in the Segregated South. New York: New Press, 2001; Lott, Eric. Love and Theft: Blackface Minstrelsy and the American Working Class. New York: Oxford University Press, 1995. —Kevan M. Yenerall
justice “What is justice?” may very well be the oldest question that political thinkers have tried to answer. Most standard definitions of justice state that justice applies the principle of moral rightness to an action. Generally speaking, these definitions also include the idea that actions that are considered wrong need to be paid for by some sort of compensation. Others argue that justice can be related to particular situations or institutions. In politics, the discussion about justice covers a wide range of ideas and differences of opinions. From Socrates’ refutation of Thrasymachus’s assertion in the Republic that justice is the interest of the stronger to John Rawls’s argument for justice as fairness, there have been numerous arguments about the nature of justice. In the United States, the concept of justice is continually debated, and many factors have influenced how these debates have been shaped. In its most basic and earliest form, the concept of justice is concerned with balancing out wrong action with punishment and rewarding good or correct action. Two of the most ancient accounts of demarcating correct and incorrect action are found in the Code of Hammurabi, ruler of Babylon from 1795 to 1750 b.c. and in the Ten Commandments, as well as subsequent books found in the Hebrew Bible. The basis of justice in these two codes is to delineate what correct action looks like on the one hand and the corresponding punishments for breaking the law on the other, in order to set things right. Hence, the concept of justice that lies behind “an eye for an eye” is that if one harms another, balance must be restored by causing harm to the party that caused the initial injury. Not all concepts of justice argue that righting the scales is the end of justice. In Plato’s Republic,
one of Socrates’ interlocutors, Thrasymachus, holds that justice is the interest of the stronger. For Socrates, justice has something to do with superior moral character and intelligence; that is to say, virtuous action in accordance with one’s station in life. However, for Thrasymachus, justice is whatever those in power say it is. For Socrates, justice necessarily relates to knowing what virtue is and acting in accordance to it. Thomas Hobbes (1588–1679), an English philosopher, argued that justice is purely a matter of social convention. In a state of nature, or existence in the absence of any governmental entity, there is no morality, no law, and no justice. In a very real sense, without a government, all people have the right to all things and there are no wrong actions. The only thing that limits a person’s action is his or her wit and strength. In order to alleviate this condition, which is brutish and nasty, according to Hobbes, we join a state and give all our rights to a sovereign. The sovereign or monarch, who holds absolute power, in Hobbes’s political theory, creates or commands a system of justice that will suit the needs of that particular community. Justice has nothing to do with some sort of cosmic good or morality that exists beyond the here and now, it simply has to do with what is expedient so that security can be provided for the community. John Locke (1632–1704) argues that it is the role of the government to provide justice. Justice is the establishment of laws so that all can have one law under which to live. Governments are established both to rid citizens of the inconveniences of nature and to protect individual inalienable rights. In Locke’s estimation, a government that does not protect individual rights is no longer a government. In short, the very reason that governments exist is to improve the lives of their citizens. If a government cannot provide such conditions, then it is an unjust state and should be dissolved. In terms of justice in a Lockean state, the punishment that a person receives should be commensurate to the crime committed. One of the most important tasks that the state undertakes is the creation of an impartial system of judges who can assure the citizenry that they will be judged with the idea of proportionality in mind. Karl Marx (1818–1883), the influential German philosopher, argued that one cannot account for justice in terms of individual actions alone. For example,
in looking at Locke, we see that justice is a matter of a state executing punishment that is commensurate with the individual's crime against another individual. Marx argues that one cannot emphasize only individuals when looking at the concept of justice. For Marx, one must look at the community as a whole in order to understand justice. Justice, according to Marx, is a result of the emancipation from harsh and unfair conditions that constitute the unequal relationships between the various classes of society. While Marx is often unclear as to what he means by emancipation and how justice can be sought, it is important to note his work due to his influence on contemporary communitarian thought. Contemporary accounts of justice often take into account not only correct actions, but also the context in which those actions take place and the distribution of resources and wealth. Intention, and not merely the results of an action, is also considered to be an important factor in calculating whether a particular action is right or wrong. For example, an ambulance driver who kills someone in an accident while trying to rush a patient to the hospital would not be held liable for the accident, assuming that the driver followed all correct and proper procedures while driving. In other words, it was not the intention of the driver to harm someone else, so it would be unjust to punish the driver as if he or she were a murderer. In contemporary American thought, one of the best-known accounts of justice is that of John Rawls (1921–2002). In his work A Theory of Justice (1971), Rawls asks us to rethink what justice is using our rational capacity as individual thinkers. Unlike premodern accounts of justice that usually are imposed by some type of arbitrary power, Rawls asks us to conceive of what sort of social arrangement would constitute the just state. He asks that we all imagine ourselves in a state of existence that is prior to society, which Rawls calls the original position, one in which we know that we will exist in a polity that resembles our current polity and in which rewards are not distributed equally. However, we do not know what our station in life will be, nor what our gender, race, or individual talents will be once we form a society. Behind this veil of ignorance, as Rawls calls it, he imagines that all of us will require certain minimum standards of justice for the society that we are
about to form and join. He argues that all of us, being rational, will demand that certain rights be protected and that certain goods meet a minimum level of fairness, or it would not be rational for any of us to join the society. For Rawls, justice is fairness. So primary social goods, which include liberty, opportunity, and wealth, should be distributed fairly. According to Rawls, the distribution of these goods should only be unequal if this unequal distribution favors the least advantaged. It should be noted that for Rawls, liberty is the good that needs to be favored over all others, once a minimum level of wealth has been met for all citizens. While Rawls is the most important liberal thinker in contemporary political theory, his ideas have been criticized by many. Robert Nozick (1938–2002), an American political theorist, gives the best-known critique of Rawls's work. In Anarchy, State, and Utopia (1974), Nozick argues that goods can only be distributed, equally or unequally, through the free and willing exchanges of individuals. According to Nozick, Rawls seeks justice by forcing people to part with their resources for the sake of others, even to the point of coercion. In order for Rawls's theory of justice to work, certain minimum conditions of resource distribution will have to exist. This means that those who are wealthier than others will have to give more in order to create a society in which the distribution of goods is at least minimally fair. For Nozick, the amount that one has to give to the state for redistribution is not the point. Redistributing any goods without one's consent is not just. So, for Nozick, justice is recognizing the integrity of the individual's goods, which represent the work, skill, efforts, and even luck of an individual. Redistributing these goods without consent is stealing and is not just. Another critique of Rawls's A Theory of Justice comes from communitarian thinkers. The first critique communitarians give of Rawls's theory is that it purports to be universal in its scope. Communitarians argue that justice can only be thought of and judged given the particular communities from which theories of justice originate. For example, Michael Walzer (1935– ) argues that liberal theories of justice, such as Rawls's theory, are simply too abstract and ignore particular, and yet important, differences of various communities.
As we can see from a brief examination of several key thinkers, theories of justice are often not commensurable with one another. For example, is it possible for there to be a universal theory of justice on which all of us will agree because we are all rational? Is it not the case that particular cultural practices, differences in religion, regional or educational differences, and so forth, will all play a role in how any one individual conceives of justice? Much of the current work on thinking about justice looks at the ways of overcoming the problems posed by these two questions. There are many problems with how to conceive of justice and how it is to be interpreted in contemporary American political life. For example, one's station in life can influence how one conceives of justice because of the different roles and group interests that must be met based on one's position. It is quite likely, for example, that jurists will look at justice in a way that differs from that of religious leaders, who differ from law enforcement personnel, who differ from labor leaders, and so on. In the American political system, there is often a concern for what justice means in terms of both the various relationships between individuals that constitute the polity and the various groups that these individuals join. In order for justice to exist in a contemporary liberal state, many theorists surmise, there must be a correct balance between individual rights and group interests. In a liberal state, it is also assumed that the judiciary is impartial to the outcomes of the various cases that come before it. For many theorists, the law mediates the interests of individuals and groups as well as being a reflection of what society considers to be "moral" or correct. Further Reading Nozick, Robert. Anarchy, State, and Utopia. New York: Basic Books, 1974; Rawls, John. A Theory of Justice. Cambridge, Mass.: Belknap Press of Harvard University Press, 1999; Walzer, Michael. Spheres of Justice: A Defence of Pluralism and Equality. Oxford, England: M. Robertson, 1983. —Wayne Le Cheminant
liberty The concepts of liberty and freedom are closely related. While there is no clear-cut distinction between the two—some languages do not have
differing translations for the two different words in English—for many, the concept of liberty generally refers to a certain enjoyment of political rights and one's relationship with the government. A working definition is that liberty refers to one's ability to make meaningful decisions concerning important events in one's life—personal, political, and societal—without fear of retribution from the state due to those decisions. Most theorists distinguish between two kinds of liberty: negative liberty and positive liberty. Negative liberty refers to the absence of impediments and obstacles in the way of an individual as the individual attempts to live his or her life. Positive liberty generally refers to the ability to make meaningful choices or realize one's purpose in life. Isaiah Berlin (1909–1997) is the thinker most commonly associated with the study of liberty and the above distinction between positive and negative liberty. Some thinkers might try to establish a difference between liberty and freedom in terms of ontology. For some, free will, the philosophical belief that one has the ability to make free choices, can be distinguished from liberty, which refers to whether one actually has the freedom necessary to make free choices. Many thinkers argue that without freedom it is impossible to make morally meaningful choices; however, if one believes in philosophical determinism—the belief that all current actions are determined by prior causes—then it is impossible to make free and morally meaningful choices regardless of the type of political regime in which one lives. It is safe to say that most political theorists do not hold to philosophical determinism. Thus, it is important to examine the kinds of barriers and impediments the state might enact that prevent the exercise of one's liberty. If the state can intervene in all areas of life—family, economic choices, political preferences, religious worship, educational opportunities, travel destinations—and prevent the free enjoyment of one's life in these areas, then individuals are said to be without liberty. However, one could still make the case that these individuals have philosophical freedom in that they could choose to rebel against the state and put up a resistance to the totalitarian regime. If one enjoys political liberty, then one enjoys the ability to make meaningful choices in these various areas of one's life. For example, in a country with a great deal of liberty, an individual can choose with whom one associates; what
church, if any, one wants to attend; one's political affiliation; where one wants to live; what profession one chooses to follow; and so forth. Likewise, meaningful dissent against the actions of the state without fear of retribution from the state is also possible. Many thinkers have discussed, pondered, and argued about the concept of liberty. For example, Plato argued that it is more important that each person fulfill his or her particular role in the state than to have unlimited freedom. In fact, one of the problems of democracy, as both Plato and Aristotle saw it, is that individuals have too much freedom and are able to dictate the path the state should take based on the whims and passions that the mass of people hold at any particular time. In the Leviathan, Thomas Hobbes (1588–1679) argues that a single sovereign should hold power so that dissension and discontent in the state can be kept to a minimum. For Hobbes, it is the variety of opinions, beliefs, and values that leads to the destruction of the state. Therefore, a single sovereign who holds power can dictate what rights and beliefs are permissible in the state. In his Second Treatise of Government, John Locke (1632–1704) argues that the state must respect the natural rights of life, liberty, and property of its citizens and that the state can rule only by the consent of the governed. Therefore, one can conclude from this that individuals should have as much liberty as possible, assuming that they do not infringe on the rights of others. John Stuart Mill (1806–1873) is well known for, among many things, his work On Liberty. Mill is associated with the utilitarian school of thought. He argues that moral action should be in accordance with that which maximizes human welfare. This is sometimes known as the greatest good for the greatest number of people. There are difficulties with utilitarian thought. For example, one might ask whether it is the case that one ought to act so that one's actions maximize human welfare. There is nothing that necessarily compels us to believe this, though Mill certainly thought that humans ought to behave in such a way, and he sought to increase the sympathy humans had for others by expansive training in religion and literature. Mill argues in On Liberty that we should encourage maximum freedom in thought and discussion. If our ideas can survive rational criticism in the free exchange of ideas, then they can be considered justified. Mill
advocated the idea that one has the right to do as one wants so long as these actions do not interfere with the actions of another or cause harm. Mill considered harm to be actual physical harm since, in his opinion, being offended or shocked did not constitute an actual harm. Mill was an early advocate for full women's rights, which he discusses in his work The Subjection of Women. Isaiah Berlin famously argued for two ways of conceiving liberty in his essay "Two Concepts of Liberty" (1969). In this essay, Berlin asks to what degree an actor should be left alone to do as he or she sees fit. The way in which one answers this question says a great deal about what one thinks of negative liberty, or the absence of obstacles or barriers that stand before an actor. Berlin also asks to what degree someone should control what another person does. The answer to this question speaks to what one thinks about positive liberty. As one can see, there might be a paradox built into the two concepts of liberty and, at the very least, they are not always compatible with one another. If one seeks to have all impediments removed from one's path, then one seeks as much negative liberty as possible. However, in a community or a polity, it is quite likely that there will be community goals that necessarily put impediments up in front of others. For example, most people in a community want barriers put up in front of criminals. This means that the positive liberty exercised in this case, setting up impediments in front of criminals so that one can realize one's potential in the polity, impinges on the negative liberty of some, namely, criminals. Of course, most people do not have an issue with that. However, what should the polity do when decisions concerning positive liberty and negative liberty collide in areas that are not so easily decided? For example, implementing laws so that each individual is able to live as healthily and as long as possible (positive freedom) encroaches on various negative liberties by setting up obstacles. Such laws may include outlawing abortions (encroaching on what some see as reproductive rights), organizing state health care (taking away the individual's ability to choose a doctor or requiring the collection of additional taxes), or preventing assisted suicide; in each case, one sets up impediments that keep various individuals from living as they would choose, in the name of some positive liberty. If this is a paradox, it is rarely, if ever, decided by a lasting philosophical argument but
rather by political power, meaning that these issues are often decided by whoever is in control of the lawmaking apparatus of a state (the legislative body). All states debate the degree to which their citizens can enjoy liberty, as well as who gets to make those decisions. In the United States, the concept and practice of liberty are debated with several important factors in mind. Since part of the liberal tradition that predominates in American political thought states that individuals enjoy certain unalienable rights, that leaders are chosen through a vote by the majority of the electorate, and that these leaders have the power to write and enact legislation on our behalf that is binding on all, there are bound to be conflicts concerning what liberty means and which impediments, if any, to implement. Some thinkers argue that people should enjoy liberty to the degree that they can do whatever they want insofar as they do not harm others. Others argue that certain acts are inherently immoral and should be restricted whether these acts directly harm others or not. In other words, we see a conflict between the desires and wants of the individual and the purported common good. Further Reading Berlin, Isaiah. "Two Concepts of Liberty." In Liberty: Incorporating Four Essays on Liberty. 2nd ed., edited by Henry Hardy. New York: Oxford University Press, 2002; Locke, John. Second Treatise of Government. Edited with an introduction by C. B. Macpherson. Indianapolis, Ind.: Hackett Publishing, 1980; Mill, John Stuart. On Liberty. Reprint, London: Oxford University Press, 1963. —Wayne Le Cheminant
Miranda warning During the years that Earl Warren served as chief justice (1953–69), the United States Supreme Court contributed many significant rulings to the area of criminal due process. Building on some of its own decisions involving self-incrimination and the right to counsel, the Warren Court issued one of its most controversial rulings in Miranda v. Arizona (1966), which enlarged the protections afforded to criminal suspects subjected to custodial interrogation. In order to do their jobs properly, law enforcement officials need to question suspects to solve
crimes. However, this is not viewed as an absolute right in police investigations. A protection against self-incrimination was recognized as far back as 17th-century common law in England. James Madison, in drafting the Fifth Amendment, incorporated that common-law tradition. As generally interpreted, a coerced confession violates the self-incrimination clause of the Fifth Amendment, which states that no person "shall be compelled in any criminal case to be a witness against himself." Prior to the 1960s and the Miranda ruling, the Fifth Amendment prohibition against self-incrimination applied to judicial or other formal legal proceedings and not police interrogations while a suspect was in custody. Also, admissibility rested on a subjective test of whether the confession was voluntary. However, too many subjective variables existed in determining whether or not a confession was truly "voluntary," and judges often sided with law enforcement officials over the due process rights of the accused in determining what police tactics were acceptable during investigations (and thereby determining the admissibility of information during court proceedings). During the 1930s and throughout the 1950s, the U.S. Supreme Court heard several cases dealing with the rights of the accused under the due process clause of the Fourteenth Amendment. In general, the Court had determined that coerced confessions during police interrogations did not meet the constitutional standard of due process within the legal system. Often, the Court would look at the "totality of circumstances" in each case to determine whether a confession met the constitutional standard, which did not provide adequate guidance for police in their day-to-day practices or lower courts in their rulings. In precursor cases concerning police interrogations decided in 1964 under the Fifth and Sixth Amendments, the Court laid the groundwork for the Miranda ruling, which would replace the "totality of circumstances" approach with a more definitive set of rules for law enforcement officials to follow. The landmark Miranda ruling stemmed from the case of 23-year-old Ernesto Miranda, a resident of Arizona who was convicted of kidnapping and raping an 18-year-old girl outside of Phoenix. Miranda, an indigent who never completed his high school education, was arrested 10 days after the crime. The victim
identified Miranda in a police lineup, and during a custodial interrogation following his arrest, Miranda eventually confessed to the crimes. During the trial, the prosecutors relied on Miranda's confession to obtain a conviction, for which Miranda received 20 to 30 years for both charges. Appeals by Miranda's attorney to overturn the conviction, on the grounds that the confession had been coerced and that police had violated his Fifth Amendment rights, were denied in state appeals courts. In 1966, the U.S. Supreme Court agreed to hear the case and overturned the conviction. In a 5-4 majority opinion written by Chief Justice Warren, himself a former prosecutor, the Court held that the coercive nature of custodial interrogation by the police violated the Fifth Amendment self-incrimination clause and the Sixth Amendment right to an attorney unless the suspect had been made aware of his rights and had subsequently agreed to waive them. Therefore, the majority concluded that Miranda's confession was inadmissible during his trial because it had been obtained in an unlawful manner. According to Warren in the majority opinion, "[T]he prosecution may not use statements, whether exculpatory or inculpatory, stemming from custodial interrogation of the defendant unless it demonstrates the use of procedural safeguards effective to secure the privilege against self-incrimination." In contrast, the dissenting justices painted a rather bleak picture for the future of law enforcement effectiveness in interrogating suspects. According to the dissent by Associate Justice John Marshall Harlan, "What the Court largely ignores is that its rules impair, if they will not eventually serve wholly to frustrate, an instrument of law enforcement that has long and quite reasonably been thought worth the price paid for it. . . . Nothing in the letter or the spirit of the U.S. Constitution or in the precedents squares with the heavy-handed and one-sided action that is so precipitously taken by the Court in the name of fulfilling its constitutional responsibilities." Nonetheless, to safeguard the immunity against self-incrimination, the Court developed what are now known as Miranda warnings. Most Americans are familiar with these warnings through the mass media's portrayal of criminal arrests on television or in movies, where the police officer begins the process by informing the suspect that "you have the right to
remain silent." Specifically, unless police officers inform suspects of their rights to remain silent and have an attorney present during questioning, and unless police obtain voluntary waivers of these rights, suspects' confessions and other statements are inadmissible at trial. Basically, to prevent compulsion or coercion by law enforcement officials, a person in custody must be clearly informed of these rights prior to interrogation. The Warren Court was at first harshly criticized for this decision as being too soft on criminals. But the practice of "Mirandizing" suspects has become standard procedure, and many argue that it has helped to professionalize police conduct and protects confessions from being challenged later. The Court has, however, also recognized a number of exceptions to Miranda, including the public safety exception (questioning may proceed before the warnings are given when there is an immediate concern for public safety) and the inevitable discovery exception. The latter allows evidence to be used, even though the suspect had indicated his or her desire to remain silent until an attorney appears, if that evidence would inevitably have been discovered. Many politicians at the time of the Miranda ruling did not view favorably the Warren Court's expansion of due process rights to protect the accused within the criminal justice system. Not surprisingly, those politicians who favored a stricter view of law and order in regard to crime prevention did not support the Court's ruling in this particular case. The case became one of many during the Warren era expanding civil rights and civil liberties that prompted efforts to impeach Warren by those who believed the Court had ventured too far into policymaking with its rulings. The issue stemming from the Miranda ruling affected the electoral process as well. For example, while a presidential candidate in 1968, Richard Nixon harshly criticized the ruling, promising to uphold law and order throughout the nation by nominating strict constructionists to the U.S. Supreme Court if elected. Once elected, Nixon had four opportunities to nominate and appoint justices to the Supreme Court (including the nomination of Chief Justice Warren Burger in 1969). While many observers believed that Nixon's appointees would overturn the Miranda ruling, this did not happen. Instead, over the years, the Court recognized the limits of
the ruling since it allowed criminal suspects to waive their rights to have an attorney present during police interrogation, even while under the inherent pressure of being placed under arrest. While the practice of "Mirandizing" suspects had become accepted over the years, the legal issue was still not settled. A more recent case before the U.S. Supreme Court dealt with a Fourth Circuit Court of Appeals decision upholding a section of the Omnibus Crime Control and Safe Streets Act of 1968 that allowed confessions to be admitted in federal, as opposed to state, cases without adherence to the Miranda requirements. The Court had invited Congress to continue to find ways through legislative efforts to refine the laws protecting the rights of the accused. Congress had done just that in 1968 by stating that federal court confessions were admissible based on the totality of circumstances rule and not the Miranda decision, which was viewed by Congress and some of the justices as a procedural ruling as opposed to resting on firm constitutional grounds. However, the issue again came to the Court in 2000, providing an opportunity for Miranda to be overturned. In Dickerson v. United States (2000), the core issue was whether the Miranda decision was based on a constitutional interpretation of the Fifth Amendment's protection against compelled self-incrimination, in which case Congress had no authority to overrule it, or whether, as the Fourth Circuit Court held, it simply announced a procedural rule that need not be binding. At issue before the U.S. Supreme Court, in a pending appeal by an accused bank robber, was the validity of a law that Congress passed in 1968, two years after the Court decided Miranda v. Arizona, with the goal of overturning the decision in federal prosecutions. The Court, in a 7-2 decision, upheld Miranda as constitutional. So despite the opportunity to overturn a slim majority in a Warren Court ruling considered controversial at the time, the Rehnquist Court affirmed in Dickerson the basic premise that the Fifth Amendment protects those in police custody from a coerced confession. Further Reading Fisher, Louis. American Constitutional Law. 5th ed. Durham, N.C.: Carolina Academic Press, 2003; Garcia, Alfredo. The Fifth Amendment: A Comprehensive Approach. Westport, Conn.: Greenwood Press, 2002; O'Brien, David M. Constitutional Law and Politics.
Vol. 2, Civil Rights and Civil Liberties. New York: W.W. Norton, 2003; Stuart, Gary L. Miranda: The Story of America’s Right to Remain Silent. Tucson: University of Arizona Press, 2004. —Lori Cox Han
naturalization Naturalization, the process of becoming an American citizen, has been a contentious issue since the founding of the American republic. The first federal naturalization law, passed in 1790, established a residency requirement of two years and allowed for "free white persons" to become citizens in any American court. While this clearly excluded blacks and Asian immigrants, the law was less clear regarding native-born nonwhites. Until 1870, the states were left to decide whether these individuals were citizens or not. In 1795, the residency requirement was increased to five years, adding a three-year waiting period. In 1798, as part of the Alien and Sedition Acts, the residency requirement was further increased to 14 years and the waiting period to five, but these extreme requirements were soon repealed. The Naturalization Act of 1802 returned the residency requirement to five years and established the first basic requirements for naturalization, including good moral character and declared allegiance to the U.S. Constitution. Naturalization was a haphazard process, and often noncitizens who had declared their intention of naturalizing were given the right to vote. At first, Native Americans (American Indians) were not considered citizens. This was based on the United States Supreme Court's decision in Cherokee Nation v. Georgia (1831) that Indian tribes were "domestic dependent nations." Because they were not white, Native Americans could not naturalize. In practice, many states and local governments treated acculturated Native Americans as citizens, including allowing them to vote and hold public office. The Dawes Act of 1887 gave citizenship to acculturated Native Americans not living on reservations; citizenship was then granted to individual tribes in a piecemeal manner until 1924, when federal law extended citizenship to all Native Americans in the United States.
Naturalization ceremony (Department of Defense)
As the 1924 law did not clarify whether or not U.S.-born Native Americans had birthright citizenship, Congress passed another law in 1940 that gave birthright citizenship to all Indians, Eskimos, Aleutians, and other aboriginal tribe members. Anti-immigrant sentiment of the mid-19th century led to the rise of the American, or Know-Nothing, Party, which advocated a 21-year waiting period for citizenship, but no such laws were adopted. The first constitutional definition of citizenship came with the Fourteenth Amendment to the Constitution, adopted in 1868, which gave citizenship to "all persons born or naturalized in the United States." This gave citizenship not only to former slaves, but also to U.S.-born Asians. Congress responded to this unintended inclusiveness in 1870 by amending federal naturalization law to allow citizenship for "white persons and persons of African
descent,” deliberately excluding Asians. The intent of Congress was clear, although many court battles were fought over the definition of “white persons,” with representatives of various Asian races arguing (unsuccessfully) that their skin was white. The U.S. Supreme Court put the matter to rest in the early 1920s, first declaring that only Caucasians were white (Ozawa v. U.S., 1922), and then, noting that the earlier decision had included as Caucasian far more people than it had meant, restricting naturalization to those people that “the common man” would understand to be white. In the aftermath of the Spanish-American War of 1898, Puerto Ricans and Filipinos, their lands having been annexed by the United States, were considered nationals. Filipinos again became foreigners subject to immigration and naturalization laws with the
Philippine Independence Act of 1934; Puerto Ricans, whose homeland remains a commonwealth (since 1950), were made citizens with the 1917 Jones Act. In 1935, Hitler's Germany limited citizenship to members of the Aryan race, making Nazi Germany the only country other than the United States with a racially discriminatory naturalization policy. Congress noted this bad company and slowly began to liberalize the country's naturalization policies. In 1940, naturalization was opened to "descendants of races indigenous to the Western Hemisphere"; in 1943, the ban on Chinese was lifted. In 1946, individuals from the Philippines and India also became eligible for naturalization. This piecemeal retreat from racist policies was finally brought to a close in 1952, when naturalization was made available to all individuals regardless of race, sex, or marital status. The Immigration and Nationality Act of 1952 (McCarran-Walter) made all races eligible for naturalization, although it retained the quota system that limited immigration (and thus naturalization). The quota system was eliminated with the Immigration Act of 1965. At first, the citizenship of wives followed that of their husbands. In 1855, Congress granted automatic citizenship to alien women who married American men, if the woman was eligible (white). However, the Expatriation Act of 1907 revoked the citizenship of American women (naturalized or native-born) who married noncitizens. This was amended by the Cable Act of 1922 to apply only to women who married noncitizen Asians; the act also required women to naturalize separately from their husbands. In 1931, the provision regarding Asian husbands was repealed. Minor children have generally followed the citizenship of their parents; when their parents naturalized, they were automatically granted citizenship as well. However, this was limited to whites; the U.S. Supreme Court ruled in Dred Scott v. Sandford (1857) that blacks were not and could never be citizens. The Civil Rights Act of 1866 and the Fourteenth Amendment (1868) granted birthright citizenship to blacks, but the birthright citizenship of children of nonwhite noncitizen parents was left unclear until 1898, when the U.S. Supreme Court ruled that they had such rights (U.S. v. Wong Kim Ark). In 1945, Congress eased naturalization laws for spouses and minor children of U.S. citizens serving
in the military (the War Brides Act), but Japanese or Korean wives continued to be excluded until 1947. The law expired in 1952, but several similar statutes have followed in the wake of various overseas military operations. In 1966, Congress extended "derivative citizenship" to children of civilians living abroad while working for the U.S. government or certain international organizations. In 1982, citizenship was granted to children born of U.S. citizen fathers in Korea, Vietnam, Laos, Kampuchea, or Thailand after 1950. The Child Citizenship Act of 2000 eased naturalization for minor children (both foreign-born and adopted) with at least one citizen parent. Aliens have been permitted to enlist in the U.S. armed forces since 1957. Congress has eased naturalization rules for noncitizen veterans, including the waiving of fees, for applicants who have served honorably in one of America's major foreign conflicts: World War I, World War II, the Korean and Vietnam Wars, Operation Desert Shield/Desert Storm (the Persian Gulf War of 1990–91), and Operation Enduring Freedom (the War on Terrorism that began on September 11, 2001). Veterans of good moral character with three years' military service are eligible to apply for naturalization, even if they have never been lawfully admitted to the United States for permanent residence. While some hail such policies as a welcome opening for would-be immigrants, critics charge that they encourage noncitizens to die for a country otherwise unwilling to have them. Naturalization increased dramatically in the 1990s. Many individuals seeking citizenship at this time were former illegal immigrants whose status had been regularized by the Immigration Reform and Control Act (IRCA) of 1986 and who first became eligible for citizenship in 1994. Another factor in the spike in naturalization rates was anti-immigrant legislation of the 1990s. This included California's Proposition 187 (approved in 1994), which made illegal immigrants ineligible for public social services (including health care and education) and required various state and local officials to report suspected illegal aliens. At the federal level, various statutes approved in 1996 made life in the United States more difficult for noncitizens, including making legal resident aliens ineligible for public benefits such as welfare and food stamps. A third major factor was President Bill Clinton's Citi-
zenship USA program, which sped up the naturalization process. Today, naturalization is open to permanent resident aliens with five years of residence in the United States. Qualifications include good moral character, knowledge of the English language, knowledge of U.S. government and history, and an oath of allegiance to the U.S. Constitution. In some cases (e.g., for Hmong veterans of the Vietnam War), the English provision is waived, and applicants are given an easier version of the civics exam in a language of their choice. Further Reading Daniels, Roger. Asian America: Chinese and Japanese in the United States since 1850. Seattle: University of Washington Press, 1988; Daniels, Roger. Coming to America: A History of Immigration and Ethnicity in American Life. 2nd ed. Princeton, N.J.: Perennial, 2002; Haney-López, Ian F. White by Law: The Legal Construction of Race. New York: New York University Press, 1996; Johnson, Kevin R. The “Huddled Masses” Myth: Immigration and Civil Rights. Philadelphia: Temple University Press, 2004; Reimers, David M. Still the Golden Door: The Third World Comes to America. 2nd ed. New York: Columbia University Press, 1992; Schneider, Dorothee. “Naturalization and United States Citizenship in Two Periods of Mass Migration: 1894–1930, 1965–2000.” Journal of American Ethnic History 21, 1 (2001): 50–82; Weiner, Mark Stuart. Americans without Law: The Racial Boundaries of Citizenship. New York: New York University Press, 2006; Zolberg, Aristede R. “Reforming the Back Door: The Immigration Reform and Control Act of 1986 in Historical Perspective.” In Virginia Yans-McLaughlin, ed., Immigration Reconsidered: History, Sociology, and Politics. New York: Oxford University Press, 1990. —Melissa R. Michelson
right to privacy
Most Americans believe they have a right to privacy, even though no such right is explicitly stated in the U.S. Constitution. Through its rulings, however, the United States Supreme Court has recognized privacy as a constitutional right. In general, privacy rights are the basic rights of individual conduct and choice. Several constitutional amendments imply specific aspects of privacy rights, including the First Amendment (freedom of speech, freedom of religion, and freedom of association), the Third Amendment (prohibiting the quartering of troops within a private home), the Fourth Amendment (freedom from unreasonable searches and seizures), and the Fifth Amendment (freedom from self-incrimination). In addition, the Ninth Amendment states that the enumeration of certain rights does not deny others, and the due process and equal protection clauses of the Fourteenth Amendment have been interpreted to provide protection in regard to personal privacy. Many state constitutions also include privacy provisions. In general, the constitutional right of privacy protects the individual from unwarranted government interference in intimate personal relationships or activities. The concepts of individualism and morality are both deeply rooted in American traditions and cultural values, and both are often antagonistic in determining constitutional rights involving privacy. Individualism, as conceived during the politically liberal Age of Enlightenment of the 17th and 18th centuries, is most closely associated with the philosophy of libertarianism, which argues that individual freedom is the highest good and that law should be interpreted to maximize the scope of liberty. The countervailing position, which would be considered classical conservatism, holds that individuals must often be protected against their own vices. The classical conservative view defends not only traditional morality but also the embodiment of that same morality in the law. It is these distinct theoretical views that continue to be debated in the ongoing political dialogue involving privacy rights. In addition, the right of privacy must be balanced against the compelling interests of the government (mostly the state governments). Those compelling interests, usually accorded to the states under their police powers, include the promotion of public safety, public health, and morality, and the improvement of the quality of life. As defined in the U.S. Supreme Court decision Lawton v. Steele (1894), police powers include the powers of regulation as they are "universally conceded to include everything essential to the public safety, health, and morals, and to justify the
destruction or abatement, by summary proceedings, of whatever may be regarded as a public nuisance.” For more than a century, jurists have often relied on an 1890 Harvard Law Review article by Samuel Warren and Louis Brandeis for a basic understanding of the concept of privacy. The article, dealing with press intrusion into the lives of members of Boston social circles, articulated the “right to be let alone.” Warren and Brandeis argued that people should have the right to protect themselves from an invasion of privacy. This right was similar to the right to protection against invasion by trespassers, and the protection of writing and other creative expression by copyright, in that citizens have a right to be left alone and to control the extent to which others could pry into their private lives. The authors wrote: “Instantaneous photographs and newspaper enterprise have invaded the sacred precincts of private and domestic life. Numerous mechanical devices threaten to make good the prediction that ‘what is whispered in the closet shall be proclaimed from the house-tops.” The right to privacy began to be recognized in courts around the turn of the 20th century. In Lochner v. New York (1905), the U.S. Supreme Court held that the “liberty of contract” protected by the Fourteenth Amendment had been infringed when the State of New York adopted a law restricting the working hours of bakery employees. Although Lochner and related decisions were concerned exclusively with the protection of individual property rights, these cases paved the way for the creation of the right of privacy by giving a substantive (as distinct from a strictly procedural) interpretation of the due process clause. Under the substantive due process formula, courts can “discover” in the Fourteenth Amendment rights that are “fundamental” or “implicit in a scheme of ordered liberty.” While substantive due process is no longer applied in cases dealing with economic matters, it is still used in regard to personal matters. As jurisprudence in this area of case law has taken shape since the 1960s, the right of privacy includes the freedom of an individual to make fundamental choices involving sex, reproduction, family life, and other intimate personal relationships. The case that first addressed this issue was Griswold v. Connecticut (1965), when the U.S. Supreme Court voided a state law that made the sale or use of contraceptives, even by a married couple, a
criminal offense. The decision stated that "specific guarantees in the Bill of Rights have penumbras, which create zones of privacy." A similar case was decided in 1972 in Eisenstadt v. Baird, when the Court refused to accept Massachusetts's argument against use of contraceptives by unmarried persons. However, the U.S. Supreme Court has not always been willing to uphold privacy rights in all areas. In Bowers v. Hardwick (1986), the majority rejected the view that "any kind of private sexual conduct between consenting adults is constitutionally insulated from state proscription." This ruling upheld a Georgia statute banning sodomy. However, that decision was overturned in Lawrence v. Texas (2003), which struck down a similar statute in Texas that criminalized the act of sodomy. Writing for the majority, Associate Justice Anthony Kennedy wrote, "Liberty protects the person from unwarranted government intrusions into a dwelling or other private places . . . [and] presumes an autonomy of self that includes freedom of thought, belief, expression, and certain intimate conduct." The U.S. Supreme Court has also held that the right of privacy does not extend to the terminally ill who want medical help in ending their own lives, and the Court has ruled that the Fourteenth Amendment does not include a constitutional right to doctor-assisted suicide. The landmark abortion ruling Roe v. Wade (1973) was also decided on the issue of right to privacy. In this case, state laws that criminalized abortion were voided as a violation of the due process clause of the Fourteenth Amendment, which, according to the 7-2 majority of the Court, protects the right to privacy. The majority opinion stated, however, that states do have a legitimate interest in protecting the pregnant woman's health and the potentiality of human life, both of which interests grow and reach a "compelling point" at different stages of pregnancy. As a result, the ruling in Roe set out what is known as a trimester scheme for when restrictions on abortions can be viewed as constitutional due to a compelling state interest. Since then, other cases dealing with abortion have also come to the U.S. Supreme Court that have both strengthened and weakened the Roe ruling. In Webster v. Reproductive Health Services (1989), the Court reaffirmed the states' rights to regulate abortion within the broad confines of the guidelines laid down in Roe. Associate Justice Sandra Day O'Connor indicated that the concept of viability should replace Roe's trimester scheme and that state regulations were constitutional so long as they do "not impose an undue burden on a woman's abortion decision." In Planned Parenthood of Southeastern Pennsylvania v. Casey (1992), the Court upheld Roe but also upheld several restrictions put into place on abortion by Pennsylvania law. Those restrictions included the requirement that doctors discuss the risks and consequences of an abortion in order to obtain written consent for the procedure, a 24-hour waiting period prior to the procedure, the requirement that unmarried women under the age of 18 must have parental or a judge's permission to obtain an abortion, and that doctors must report abortions performed to public health authorities. However, the Court did strike down a spousal notification requirement as an undue burden on a woman. Privacy rights in regard to abortion and reproductive choices remain controversial political and legal issues. Some legal scholars, including Associate Justice Ruth Bader Ginsburg, have argued that the Supreme Court invoked the wrong part of the Constitution in deciding Roe v. Wade in 1973. The argument is that the Court would have been on firmer legal ground, and would have invited less academic criticism and public outrage, by relying on equal protection rather than due process, emphasizing a woman's ability to stand in relation to man, society, and the state as an independent, self-sustaining, and equal citizen. Other areas of privacy stemming from specific constitutional amendments raise many interesting legal questions as well. The Fourth Amendment recognizes a right of personal privacy against arbitrary intrusions by law enforcement officers. The framers of the Constitution were sensitive to this issue, since colonists had been subjected to general searches by police and customs officials under decree by the British Parliament. But like many other broad provisions of the Constitution, this amendment raises many questions, including what is meant by the terms "unreasonable" and "probable cause," and what constitutes a "search." In recent decades, the U.S. Supreme Court has heard many search and seizure cases in an attempt to more clearly define these terms and to provide better guidelines for law enforcement officials.
Regarding First Amendment concerns, privacy laws only began to be recognized as tort law in the early 20th century. This differed from libel laws, which date back to common law traditions from the 13th century. Privacy laws vary from state to state, but clear principles and guidelines have emerged through various cases and court rulings. Four distinct torts have traditionally been recognized: intrusion, disclosure of private facts, false light, and appropriation. They have little in common except that each can interfere with a person’s right to be left alone. Intrusion is similar to trespassing on someone’s property, but the intrusion violates the person instead, defined as entry without permission into someone’s personal space in a highly offensive manner. A good example of a case of intrusion would be a celebrity suing aggressive paparazzi for damages for invasion of privacy. Public disclosure occurs when personal information is published that a reasonable person would find highly offensive and not of legitimate public concern. False light is defined as the public portrayal of someone in a distorted or fictionalized way; the information can be neutral or flattering in content, as long as it portrays someone as something they are not to the point of embarrassment. And appropriation is defined as the unauthorized commercial exploitation of someone’s identity. Further Reading Alderman, Ellen, and Caroline Kennedy. The Right to Privacy. New York: Alfred A. Knopf, 1995; Fisher, Louis. American Constitutional Law. 5th ed. Durham, N.C.: Carolina Academic Press, 2003; O’Brien, David M. Constitutional Law and Politics. Vol. 2, Civil Rights and Civil Liberties. New York: W.W. Norton, 2003; Pember, Don R., and Clay Calvert. Mass Media Law. Boston: McGraw-Hill, 2005. —Lori Cox Han
search and seizure
The Fourth Amendment to the U.S. Constitution has provisions designed to protect the security of individual citizens. Specifically, it states that people have the right to be secure in their persons, houses, papers, and effects from unreasonable searches and seizures by government officials. The amendment then goes on to prescribe the narrow circumstances
under which this general rule can be abridged. The government may engage in legal searches if it procures a search warrant, signed by a judge, based upon probable cause and specifically describing the person or place to be searched and the things that might be seized. The purpose of this amendment is to protect citizens from intrusion from government. It is an essential element in the rights of people accused of crime and is a basic civil liberty protected under the Bill of Rights. A search warrant is a form that is filled out by law enforcement officials when they have reason to believe that someone is guilty of a crime and they wish to search that person’s home, possessions, or body. Once the form has been completed, detailing the search, it must be presented to a judge who then decides if the search is justifiable under the Constitution. The critical point concerns whether there is “probable cause” to conduct a search. Probable cause is more than just a suspicion that criminal behavior has occurred. It requires that there be strong evidence that a crime has been committed and some threshold of evidence that it was committed by the person to be searched. For example, when police officers entered a living unit from which shots had been fired, they found an expensive stereo, which the officers thought to be stolen, and they seized that stereo. Upon investigation, it was determined that the stereo was indeed stolen, but the United States Supreme Court invalidated the seizure, saying that it had been based only upon a “reasonable suspicion,” a standard that did not rise to the level of “probable cause” required by the Constitution (Arizona v. Hicks, 1987). Only when a judge is satisfied that the probable cause threshold has been met can a search take place. Until 1961, however, there was generally no consequence in potential convictions if the police force violated technical aspects of the Fourth Amendment. Not infrequently, evidence collected without a search warrant would be used to convict someone, something that could occur because, while the Constitution has clear language outlawing unwarranted searches, there was no sanction against police who violated the principle. As a result, there were few incentives for law enforcement officials to adhere to the Fourth Amendment. In that year, in the case of Mapp v. Ohio (1961), the U.S. Supreme Court created a rule
designed to give strong disincentives to policemen to keep them from conducting illegal searches. The exclusionary rule states that any evidence collected in violation of the Fourth Amendment cannot be introduced during trial in a court of law. Significantly, the Court stated that the purpose of the exclusionary rule was to deter law enforcement personnel from violating the Fourth Amendment. That finding has been important since that time, because the Court has “trimmed” the universal nature of the exclusionary rule when it has thought that narrowing the application of the rule would not diminish the deterrence effect. When it was handed down, it was very unpopular with law enforcement officials and required some number of convicted criminals to be released from prison because of the use of flawed evidence during their trials. Civil libertarians hailed the ruling as making the Fourth Amendment more meaningful than it had been up until that time. Over time, as police learned to use the rules carefully, the exclusionary rule had no negative impact on conviction rates. The exclusionary rule has been the subject of hundreds of U.S. Supreme Court cases, and a number of exceptions have been handed down. An illustrative group of those decisions, but by no means a complete listing, is discussed below. Since the exclusionary rule was implemented, the Supreme Court has handed down a number of decisions clarifying its use. In some cases, those rulings have defined some circumstances in which searches can be conducted without a warrant. In particular, warrants are not required when there is no reasonable expectation of privacy for a person or when no search would be necessary to discover contraband. For example, when police need to search an open field, the Court has said no warrant is necessary, even when the search is on private property. In those circumstances, a person has no reason to believe that activities of any nature would be private. As a result, since no privacy will be violated, no warrant is necessary. In Whren v. U.S. (1996), the Court ruled that if an automobile is stopped legally because of a traffic violation, contraband discovered during that stop can be introduced as evidence. In that case, an occupant of the vehicle was seen with illegal drugs in his hand when the policeman approached the car. Similarly, if a policeman makes a legal stop of an automobile, evi-
dence within plain view or that is within arm's reach of the car's driver is deemed searchable without a warrant. In fact, if a law enforcement official searches a vehicle without a warrant but with "probable cause," material discovered during the search might be allowed in evidence, an exception allowed because of the "ready mobility" of motor vehicles. No warrant is needed if police are in a legal hot pursuit of a suspect. Under those circumstances, policemen, in the normal conduct of their duties, might discover material that would incriminate a suspect during trial, but since it was discovered as a side effect of legal activity rather than as a purposeful illegal act, it is permissible. Finally, in the case of U.S. v. Verdugo-Urquidez (1990), the Court said that the Fourth Amendment is not applicable to nonresident aliens in the United States but is limited to covering the rights of citizens. In some cases, also, the Court has said that incomplete or flawed warrants might not be invalid. In the case of Arizona v. Evans (1995), the Court ruled that there was a "harmless error" in the warrant. That occurs, for example, when a date is entered incorrectly or two numbers are transposed in an address. Another reason for allowing a flawed search takes place when officers, acting in "good faith," violate the Fourth Amendment. The reasoning in this case, U.S. v. Peltier (1975), is that since the police were trying to comply with the exclusionary rule but simply made a mistake, the exclusionary rule would not apply. Since the exclusionary rule was developed to ban knowing violations of the Fourth Amendment, no purpose would be served by excluding evidence. The exclusionary rule might not apply when it is "inevitable" that evidence would have been discovered eventually by legal means, as the Court wrote in Nix v. Williams in 1984. Parolees are deemed not to have an "expectation of privacy." In the case of Samson v. California (2006), a man was stopped and frisked on the street because the policemen knew him to be on parole for possession of illegal drugs. His parole was revoked when the policeman found methamphetamines in his possession. The Court said that since he was still under state supervision as a parolee, the search was legal. In the period following September 11, 2001, Congress passed a law known as the USA PATRIOT Act to try to combat terrorist activity. That law gave the executive branch of government expanded powers to wiretap and use electronic surveillance against
suspected terrorists, in many cases without a warrant. This act has been seen by civil libertarians as a direct affront to the Fourth Amendment, while supporters of the law have argued that such executive latitude is essential to combat terrorism. Why the long litany of court cases defining the boundaries of the exclusionary rule? In part, it is because the issues posed by the Fourth Amendment are complex. In part, too, it is because of the political controversy associated with the exclusionary rule, where libertarians see the rule as a fundamental protection for citizens, while others see the rule as allowing criminals to get off because of mere technicalities. Finally, in part, it is because the issue is an important one, one that tries to balance the rights of individuals to have the full protection of the Constitution against the right of the public to be free of dangerous criminals who can threaten the security of society. These issues have been central to the political debate since the exclusionary rule was developed in 1961 and will doubtless remain in public debate. However, while there are controversies surrounding specific applications of the rule, the U.S Supreme Court has strongly endorsed it as a general principle. Most law enforcement officials take that endorsement seriously, and when they do, a fundamental liberty—the right of security—is enhanced. See also due process. Further Reading Hubbart, Phillip A. Making Sense of Search and Seizure Law: A Fourth Amendment Handbook. Durham, N.C.: Carolina Academic Press, 2005; Jackson, Donald W., and Riddlesperger, James W. “Whatever Happened to the Exclusionary Rule: The Burger Court and the Fourth Amendment?” Criminal Justice Policy Review 1 (May 1986): 156–168; Long, Carolyn N. Mapp v. Ohio: Guarding Against Unreasonable Searches and Seizures. Lawrence: University Press of Kansas, 2006. —James W. Riddlesperger, Jr
sedition Sedition (not to be confused with its close relation, treason) is a legal term that refers to “nonovert conduct” designed to undermine the authority of government. Treason is considered an overt act and thus
sedition tends to be a milder or lesser form of treason. However, different countries define sedition in different ways; therefore, both treason and sedition often get confused or intermingled depending on the exact legal definition that is applied. In general, sedition is defined as conduct or language inciting rebellion against the authority of the state, and can also be defined as an insurrection (an unorganized outbreak against an organized government). Normally, sedition involves speech, political organizing, subversion, or other forms of protest that criticize the government, attempt to undermine it or build up opposition to its actions, or incite riot or rebellion. While such behavior is often protected in constitutional democracies, even in democratic systems, not all opposition to the government is permitted or constitutionally protected. A long tradition exists in the United States of protecting national interests where the First Amendment is concerned. In most cases, if laws are enacted to protect government interests while not restricting speech unduly, they can be enforced. Usually, government censorship and prior restraint are not allowed; however, the United States Supreme Court has ruled in some instances that speech that may aid an enemy in times of war, or help to overthrow the government, can be censored. It is considered for the most part natural for a government to protect its existence. The infamous Alien and Sedition Acts of 1798 during the early days of the republic demonstrate how, even in a constitutional republic with guaranteed rights, political opposition may be defined as sedition. They also show how sedition charges may be imposed in a country that ostensibly protects free speech, the right to organize, and a free press. Passed by the Federalist-controlled government of President John Adams in 1798, the Sedition Act was a politically motivated effort to undermine the growing support for the Democratic-Republican Party headed by Thomas Jefferson. The party system in the new republic had split between supporters of George Washington, Adams, and Alexander Hamilton (who were all Federalists), and supporters of Thomas Jefferson and James Madison (leaders of the Democratic-Republicans, also known as Jeffersonians). The Federalists controlled the presidency, both houses of
Congress, as well as the judicial branch. In an effort to head off the rising influence of Jefferson, they also passed the Alien Act, which among other things, granted the authority to the president to deport any alien he deemed dangerous, and the Sedition Act, defining sedition and applying criminal penalties to acts of sedition. The Alien and Sedition Acts also emerged, in part, from fear that the United States would be drawn into war with France and Britain. The contents of the Sedition Act of 1798 were brief, and key elements of the act merit reprinting here: Section 1—Be it enacted . . . That if any persons shall unlawfully combine or conspire together, with intent to oppose any measure or measures of the government of the United States, which are or shall be directed by proper authority, or to impede the operation of any law of the United States, or to intimidate or prevent any person holding a place or office in or under the government of the United States, from undertaking, performing or executing his trust or duty; and if any person or persons, with intent as aforesaid, shall counsel, advise or attempt to procure any insurrection, riot, unlawful assembly, or combination, whether such conspiracy, threatening, counsel, advice, or attempt shall have the proposed effect or not, he or they shall be deemed guilty of a high misdemeanor, and on conviction, before any court of the United States having jurisdiction thereof, shall be punished by a fine not exceeding five thousand dollars, and by imprisonment during a term not less than six months nor exceeding five years; and further, at the discretion of the court may beholden to find sureties for his good behaviour in such sum, and for such time, as the said court may direct.
(Section 1, while also making more serious forms of treason a crime, made the lesser act of opposition to the government potentially a federal crime.) Section 2—That if any person shall write, print, utter or publish, or shall cause or procure to be written, printed, uttered or published, or shall knowingly and willingly assist or aid in writing, printing, uttering or publishing any false, scandalous and malicious writing or writings against the government of the United States, or either house
of the Congress of the United States, or the President of the United States, with intent to defame the said government, or either house of the said Congress, or the said President, or to bring them, or either of them, into contempt or disrepute; or to excite against them, or either or any of them, the hatred of the good people of the United States, or to excite any unlawful combinations therein, for opposing or resisting any law of the United States, or any act of the President of the United States, done in pursuance of any such law, or of the powers in him vested by the constitution of the United States, or to resist, oppose, or defeat any such law or act, or to aid, encourage or abet any hostile designs of any foreign nation against the United States, their people or government, then such person, being thereof convicted before any court of the United States having jurisdiction thereof, shall be punished by a fine not exceeding two thousand dollars, and by imprisonment not exceeding two years. Section 3—That if any person shall be prosecuted under this act, for the writing or publishing any libel aforesaid, it shall be lawful for the defendant, upon the trial of the cause, to give in evidence in his defence, the truth of the matter contained in the publication charged as a libel. And the jury who shall try the cause, shall have a right to determine the law and the fact, under the direction of the court, as in other cases. Section 4—That this act shall continue to be in force until March 3, 1801, and no longer. . . .
Critics of the government, newspaper editors and others, were convicted and imprisoned under this act. Opponents charged that the act violated the First Amendment of the U.S. Constitution. The Act caused a backlash, and in the election of 1800, the Democratic-Republicans swept into power with Jefferson winning the presidency. As a result, the Federalist Party began a decline. In 1801, the acts expired, and Jefferson pardoned all persons who had been convicted. Since that time, efforts to revive the charge of sedition have foundered. The U.S. Supreme Court has never put forth an absolute view on free speech rights, and no cases dealing with freedom of speech or press reached the U.S. Supreme Court until the 20th century. How-
ever, a few key cases, and their subsequent rulings, have shaped the legal definition of freedom of speech during the past century and helped to define case law dealing with sedition. The Supreme Court's first significant ruling on freedom of speech came in 1919. In 1917, Congress had passed the Espionage Act, which made it a crime to obstruct military recruiting or otherwise interfere with America's effort in World War I. In 1918, Congress added the Sedition Act, which broadened those prohibitions to cover disloyal or abusive speech about the government. In Schenck v. United States (1919), the Court upheld the conviction of socialist Charles Schenck for distributing pamphlets that encouraged antidraft sentiments. The Court's unanimous decision declared the Espionage Act constitutional, and even though the war had ended, declared that urging resistance to the draft would pose a threat to the nation's efforts to win the war. The opinion, written by Associate Justice Oliver Wendell Holmes, would introduce the clear-and-present-danger test, which gave freedom of speech low priority in legal decisions. Holmes argued that speech with a tendency to lead to "substantial evil" or to cause harm to vital interests that Congress has the authority to protect could be banned. Holmes wrote that it was a question of "proximity and degree" as to whether or not the speech was dangerous. His famous example stated that a man would not be protected from falsely shouting "fire" in a crowded theater, which would cause a panic. Therefore, speech was most harmful when it would cause immediate harm. Schenck spent six months in prison following the Court's decision. During the 1950s, America's paranoia about the threat of communism led to the prohibition of many speech freedoms. In Dennis v. United States (1951), the Court upheld convictions of 11 Communist Party members for advocating the overthrow of the U.S. government, which had been outlawed under the Smith Act of 1940. The balancing test emerged in Dennis, where national security was deemed more important than free speech. With this test, competing rights were balanced to determine which should be given priority. However, by 1957, the Court had changed its view on a similar case. In Yates v. United States, the Court overturned similar convictions of Communists. The decision stated that since the overthrow of the government was only advocated in theoretical terms, it qualified as speech, which should be
protected under the First Amendment. This reflected the rise of the "preferred position" doctrine, which is similar to balancing but gives the First Amendment favored status. The Supreme Court's ruling in Brandenburg v. Ohio (1969) signaled the end of laws that allowed for suppression of speech that merely advocated the overthrow of the government, even if the threats were violent. This was also the last time the Supreme Court heard an appeal in a sedition case. A member of the Ku Klux Klan was arrested in southwestern Ohio for stating in an interview that he would take revenge against officials who were trying to bring about racial integration. The Court overturned the conviction, stating that the Ohio state law under which Brandenburg had been convicted was so broad that it would allow unconstitutional convictions for people who only talked about resorting to violence. The Court ruled that state laws had to be drawn more narrowly, reaching only speech directed at inciting imminent lawless action. Finally, another form of sedition that is punishable is a threat to a public official, especially the president, which is a federal crime punishable by five years in prison and a $250,000 fine. However, only those threats considered serious enough to cause harm are prosecuted.
Further Reading
Magee, James J. Freedom of Expression. Westport, Conn.: Greenwood Press, 2002; Miller, John Chester. Crisis in Freedom: The Alien and Sedition Acts. Delanco, N.J.: Notable Trials Library, 2002; Pember, Don R., and Clay Calvert. Mass Media Law, 2007/2008. Boston: McGraw-Hill, 2007.
—Michael A. Genovese
suffrage Suffrage, otherwise known as political franchise, refers to the right to vote and the exercise of that right. Voting rights have presented a number of challenges to the development of civil rights in the United States. Questions regarding the equal right and equal access to political franchise have been related to issues of class, race, and gender throughout the history of the nation. Among the many groups excluded from suffrage at various points in American history have been poor
males who did not own property, African Americans (both during and after slavery), and women. As the United States developed into a democratic republic, many white males were excluded from the right to vote due to their socioeconomic status and lack of land ownership. The United States was formed in part based on concerns for the vagaries of a direct democracy. Fear of the tyranny of the majority or mob rule spurred the wealthier framers of the U.S. Constitution to exclude a variety of groups including white males who did not have the wealth that they perceived would validate their stake in the development of the nation. By the late 19th century, issues related to race and franchise were addressed. In 1870, the Fifteenth Amendment to the Constitution provided for men of all races, particularly African Americans, the legal right to vote. While white males typically had the right to vote by that time, few males of African descent were enfranchised prior to the Civil War. Ending pernicious and pervasive disfranchisement based on racial classification as related to African descent, men (no women, regardless of color, had the right to vote at that time) in the United States became the focus for abolitionists and their supporters during the antebellum and post–Civil War eras. The end of the Civil War saw relatively rapid movement to give blacks the vote. Enforcement Acts and the Freedman’s Bureau were designed and implemented to put teeth into the Fifteenth Amendment to assure African Americans access to the franchise. For a short time during the 1870s, African Americans were empowered at the ballot box, participated in elections, and held offices at municipal, state, and federal levels. Resistance from whites in the North and South, poll taxes, literacy tests, death threats, and violent massacres along with decreasing support from the United States Supreme Court and eventually the 1877 Compromise ended the rally at the ballot box for African Americans. It was not until the Twenty-fourth Amendment was ratified in 1964, which ended poll taxes, and the 1965 Voting Rights Act was passed by Congress and signed into law by President Lyndon Johnson, that the United States saw enforcement, empowerment, and unfettered encouragement to include African Americans and other minority groups in the practices of universal suffrage.
Similar to the struggle undertaken regarding the African American right to franchise, the path to women's suffrage was also fraught with resistance, activism, and controversy that resulted in the Nineteenth Amendment to the Constitution in 1920. American women were seeking sociopolitical reform that emphasized access to franchise as both a symbol and method to transform society and their roles in it. Prior to the Nineteenth Amendment, some women enjoyed the right to vote based on local legislation. In 1887, however, Congress passed the Edmunds-Tucker Act, which revoked the voting rights that women in Utah Territory had previously won. By 1890, the National American Woman Suffrage Association was formed. Its main goal was to garner a constitutional amendment granting women the right to vote. Key figures in the suffrage movement included Susan B. Anthony, Sojourner Truth, Ida B. Wells, and Elizabeth Cady Stanton. They were joined over the years by many other suffragists, sometimes known as suffragettes. Their political interests and involvements ranged from the abolition of slavery to the women's suffrage movement and the Civil Rights movement. The suffragists, along with their numerous male supporters, fought for political change through marches, speeches, and public education forums used to shift public opinion and influence legislators to empower women at the ballot box. Their struggle for women's rights, equal rights, and voting rights reshaped the American political landscape throughout the late 19th and early 20th centuries. Age has also been a determinant of access to the franchise in America. Youth suffrage is sometimes an overlooked aspect of voting rights. Historically, the question of voting age has been a source of political debate, activity, and engagement. Arguments regarding the voting age have always involved perspectives on participation in the armed forces, whether based on the draft or otherwise. The time line of youth suffrage in America illustrates the connection between support for lowering the voting age and the age at which one is allowed to serve, or is drafted to serve, in the military. Throughout history the voting age in the United States had been 21 years of age. However, historically and as is the case today, one may serve in the armed forces beginning at 18 years of age. The question arose regarding the incongruity involved in being seen as qualified to fight and die for one's coun-
try prior to being able to vote in one’s country. In 1941, during the World War II era, U.S. Representative Jennings Randolph of West Virginia introduced an amendment reducing the voting age to 18. Later both President Dwight D. Eisenhower and President Lyndon B. Johnson offered support for acts to lower the voting age. Finally, under the pressure of anti–Vietnam War protests, Congress was forced to act. President Lyndon B. Johnson urged that a constitutional amendment be proposed, which would lower the voting age to 18. In 1971, 30 years after his initial bid, Jennings Randolph reintroduced the amendment to Congress. The Twenty-sixth Amendment swiftly passed and was certified by President Richard M. Nixon in 1971. Eighteen years of age became the legal voting age throughout America and influenced the legal status of youths on many fronts. On the heels of youth suffrage, the age of consent was lowered regarding the right to marry, the right to enter into contracts without parental consent, and the right to participate in numerous other activities as a legal adult. Throughout the United States there remains some variation among the states regarding the legal gambling age and the legal age at which one may buy tobacco products. In 1984, however, the National Minimum Drinking Age Act established 21 years of age as the legal age at which one may purchase alcohol in the United States. Today, many youths argue that the drinking age should be lowered to 18 to match the voting age. Opponents of the national minimum drinking age argue that if one is entrusted with voting responsibly at 18 years of age one should be entrusted with drinking responsibly at that age as well. The various battles over access to the franchise illustrate the importance of voting rights to American democracy and political culture. Access to the ballot box has transformative value. Elections are among the events in the American governmental system that holds politicians accountable. The disfranchisement of groups based on their socioeconomic status, gender, age, race, or ethnicity, as well as other social or physical attributes such as disability, means that those groups are very likely to be ignored by our government and its leaders; without the vote, groups and individuals are not empowered to hold elected officials accountable nor can they readily participate in
decisions that are crucial to their lives and welfare. Furthermore, suffrage is symbolic of one’s social status, level of political influence, and connection to the political culture. Voting is one of the major forms of political participation in the United States. Exclusion from the vote signals exclusion from a legitimate place in society. While youth, African Americans, women, and others fought to have their say at the ballot box, they also fought to be seen as fully legitimate and respected citizens of the country. The power of the vote is a power worthy of the many battles undertaken. See also suffragist movement. Further Reading Grofman, Bernard, and Chandler Davidson, eds. Controversies in Minority Voting. Washington, D.C.: Brookings Institution, 1992; Kimmel, Michael S., and Thomas E. Mosmilller. Against the Tide: Pro-Feminist Men in the United States 1776–1990, A Documentary History. Boston: Beacon Press, 1992; Streb, Matthew J., ed. Law and Election Politics: The Rules of the Game. Boulder, Colo.: Lynne Rienner Publishers, 2005; Thernstrom, Abigail M. Whose Votes Count? Affirmative Action and Minority Voting Rights. Cambridge, Mass.: Harvard University Press, 1987. —Antonio Brown
suffragist movement The formal women’s rights movement began in 1848 at the Seneca Falls Convention, convened by Lucretia Mott and Elizabeth Cady Stanton, to talk about the “social, civil, and religious rights of women.” Most women who attended had been active in the abolitionist movement for years, even decades. The idea for the convention had been born following the 1840 World Anti-Slavery Convention in London, where female delegates, including Mott and Stanton, had not been allowed to participate and were even forced to sit behind a partition so as not to be seen. Prior to the Seneca Falls Convention, Stanton wrote her famous “Declaration of Sentiments and Resolutions,” a bold document declaring the rights of women modeled after the Declaration of Independence. Stanton’s “Declaration” demanded economic and property rights, and denounced slavery, discrimination in education, exploitation of women in the
workforce, the patriarchal family, divorce, and child custody laws, and organized religion as “perpetuating women’s oppression.” In general, the women’s rights movement in the United States is broken into three waves: the First Wave from 1848 to 1920, the Second Wave which begins in the 1960s and continued through the 1980s, and the Third Wave which began in the early 1990s and continues today. While suffrage would become the major issue of the latter stages of the first wave of the women’s movement, that was not the initial case of the claims that came out of Seneca Falls. Instead, suffrage was a last-minute issue that Stanton added to the list of demands, and it was the only resolution not unanimously supported at the Seneca Falls Convention. Yet, securing the right to vote did ultimately emerge as the major issue for the movement, since women’s activists like Stanton, Alice Paul, and Susan B. Anthony believed suffrage to be the most effective way to gain access to the political system and change the unjust way that women were viewed in the eyes of the law. Members of the women’s rights movement shared an important philosophical claim with abolitionists in their pursuit of equal rights. The Civil War, however, delayed the fight for women’s rights. In 1863, Stanton and Anthony formed the Women’s Loyal National League in the North to fight for a constitutional amendment for emancipation for slaves and universal suffrage for freed slaves and women. But fearing that adding the vote for women to the political mix would weaken the chances of the amendment’s passage, the Republican Party of President Abraham Lincoln pursued passage of the Thirteenth Amendment (adopted in 1865) without any mention of women’s voting rights. Many of the leaders of the women’s movement had gained leadership and organizational skills as activists in the abolitionist movement, so for many generations of suffragists, the strategy to achieve what at the time seemed like a radical change to the U.S. Constitution included protests, marches, lectures, writings, and various forms of civil disobedience. From the start, Stanton and Anthony remained prominent leaders within the suffrage movement. Both had been active in the American Equal Rights Association (AERA), which had been formed in 1866 to fight for universal suffrage. However, the organiza-
Suffragists marching in New York City, 1913 (Library of Congress)
tion disbanded in 1869 due to internal conflicts involving the political priorities of the group (whether or not woman’s suffrage should be a higher priority than black male suffrage). In May 1869, Stanton and Anthony formed the National Woman Suffrage Association (the NWSA would eventually become the League of Women Voters in the 1920s and is still in existence today). Led by Anthony, the NWSA preferred fighting for a constitutional amendment to give women the right to vote nationally. A second group, the American Woman Suffrage Association (AWSA), was formed in November 1869 by Lucy Stone and Henry Blackwell to fight for suffrage on a state-bystate basis. The Women’s Christian Temperance Union (WCTU) also joined in the fight for suffrage during the latter decades of the 19th century. Alcoholism was a leading cause of domestic abuse, abandonment, and poverty for women and children, so leaders within the WCTU supported giving women the right to vote since women would be more natural supporters of banning the sale and consumption of alcohol.
While numerous women are credited with the eventual success of the suffrage movement, Stanton and Anthony are perhaps the two most famous for their dedication to securing the right to vote for women. Through her many influential writings, Stanton, known as the “founding mother of feminism,” was the leading voice and philosopher of the women’s rights and suffrage movements. The wife of prominent abolitionist Henry Stanton and mother of seven, Stanton was 32 years old when she helped to convene the Seneca Falls Convention in 1848. A graduate of Troy Female Seminary, she refused to be merely what she called a “household drudge.” In 1866, Stanton ran for the House of Representatives, the first woman to ever do so, when she realized that while New York prohibited women from voting, the law did not prohibit them from running for or holding public office. Her election bid was unsuccessful. Anthony was the political strategist who organized the legions of women who struggled to win the ballot for American women. Prior to her years as an activist
force within the women’s rights movement, Anthony had become a teacher at the age of 17. After teaching for 15 years, she became active in the temperance movement, considered one of the first expressions of American feminism by dealing with the abuses of women and children who suffered from alcoholic husbands. As a woman, however, Anthony was not allowed to speak at public rallies. As a result, she helped to found the Woman’s State Temperance Society of New York, one of the first women’s associations of its kind. After meeting Stanton in 1850, she soon joined the women’s rights movement and dedicated her life to achieving suffrage for women. Unlike Stanton, Anthony never married and did not have the burden of raising children. As a result, she focused her attention on organization within the movement and was more often the one who traveled, lectured, and canvassed nationwide for suffrage. Anthony was arrested for attempting to vote on more than one occasion, but remained committed to her endless campaign for a constitutional amendment allowing women the right to vote. She gained national attention for the cause of adding a constitutional amendment to give women the vote, as well as much needed support, when she was arrested and tried for voting in the 1872 presidential election. The amendment to give women the right to vote, first introduced in Congress in 1878, would be presented to 40 consecutive sessions of Congress until it finally passed as a proposed amendment in 1919. Along the way, the suffrage movement faced fierce opposition from a variety of antisuffrage groups. Big business (particularly the liquor industry), the Catholic Church, and political machine bosses feared that women voters would support political reform. Women led many of the temperance efforts of the late 19th and early 20th centuries in an attempt to ban the sale of alcohol. Other organizations, like the National Consumer’s League, formed in 1899, and the National Women’s Trade Union League, formed in 1903, worked to change labor conditions for various corporations. Many southern states also opposed women’s suffrage because they did not want AfricanAmerican women to gain access to voting rights or argued that suffrage was a states’-rights, and not a federal, issue. Just as they did in the suffrage movement, women emerged as strong leaders in the antisuffrage movement as well. The women leaders
in both movements tended to be among the social elite—educated, with access to money, and having important social contacts. But many women did not support the breakdown of the public versus private sphere dichotomy, fearing that women would lose their power and influence within the domestic sphere and among social networks if forced to become participants in public life. As a result, the fight for women’s suffrage, or later political efforts within the women’s movement, did not universally represent all women. Between 1878 and August 1920, when the Nineteenth Amendment was ratified, activists for women’s voting rights relied on a variety of strategies to gain support for the proposed amendment. Legal strategies were used in an attempt to invalidate male-only voting laws, while others sought to pass suffrage laws at the state level. Some women fighting for the cause could not be deterred, enduring hunger strikes, staging rallies or vote-ins, or even being jailed for publicly campaigning for the amendment. The movement became revitalized with an influx of younger women joining the fight in 1910 due to immigration, urbanization, and an expanding female labor force; the cause also won a state referendum in Washington granting women the right to vote that same year. California would follow in 1911, and by 1912, a total of nine western states had passed legislation giving women the right to vote. As a territory, Wyoming had granted women full suffrage in 1869 and retained the law when it became a state in 1890. The other six western states included Colorado, Utah, Idaho, Arizona, Kansas, and Oregon. Another major turning point came in 1916 when a coalition of suffrage organizations, temperance groups, women’s social welfare organizations, and reform-minded politicians pooled their efforts and resources to wage a fiercer public battle. The political tide began to turn in the suffragists’ favor in 1917, when New York adopted women’s suffrage legislation. Then, in 1918, President Woodrow Wilson also changed his position and backed the constitutional amendment. On May 21, 1919, the House of Representatives passed the proposed amendment, followed by the Senate two weeks later. Tennessee became the 36th state to ratify the amendment on August 18, 1920, which gave the amendment the necessary three-fourths support from the states (it was officially
certified by Secretary of State Bainbridge Colby eight days later on August 26, 1920). Few of the early supporters for women’s suffrage, including Anthony and Stanton, lived to see the final political victory in 1920. Further Reading Ford, Lynne E. Women and Politics: The Pursuit of Equality. Boston: Houghton Mifflin, 2002; Han, Lori Cox. Women and American Politics: The Challenges of Political Leadership. Boston: McGraw-Hill, 2007; Hymowitz, Carol, and Michaele Weissman. A History of Women in America. New York: Bantam Books, 1978; Jeydel, Alana S. Political Women: The Women’s Movement, Political Institutions, the Battle for Women’s Suffrage and the ERA. New York: Routledge, 2004. —Lori Cox Han
sunshine laws Do American citizens have a right to know what government officials are doing, and do they also have a right to access government documents? This is an especially important question for journalists, who are guaranteed freedom of the press under the First Amendment but who often encounter regulatory hurdles in gaining access to information within the government. Since no complete and accurate record of debate at the Constitutional Convention in 1787 exists, it is unclear whether or not the framers of the U.S. Constitution intended to create a government where all official business was conducted in public. The Constitution itself does not mention “a right to know,” and the Constitutional Convention was conducted in secret. In addition, the Senate also met in private for its first five years of existence. The only mandated disclosures by the Constitution require that a journal of congressional proceedings, including official acts, be kept, as well as publication of the annual federal budget. Beginning in the 1950s, media organizations as well as public interest groups began to lobby Congress to pass openrecords and open-meeting laws. As a result, Congress began to consider the issue and attempted to set out guidelines in several laws passed beginning in the late 1960s. In 1966, Congress passed the Freedom of Information Act (FOIA), which was designed to open up
documents within the federal government for inspection by members of the public. Not surprisingly, journalists have made extensive use of this law. Prior to 1966, reporters who wanted certain types of government information, or access to documents, were left to cultivate sources around Washington who might be willing to leak the information. After the FOIA was passed as an attempt to make federal records available to any person, reporters can now request certain types of information under the guidelines set out by Congress. Each year, more than 600,000 FOIA requests are made to the government. The Freedom of Information Act applies to federal agencies to make all records available for inspection and copying. An agency must respond to a written request for a record within 10 working days. If the request is delayed, appeals to the head of the agency must be responded to within 20 working days. Extensions are sometimes granted, due to the large volume and backlog of requests. If time limits are not met, or the person requesting the documents is denied, they can appeal in a federal district court, and the process must be expedited. If the plaintiff wins, the government must pay all the legal costs associated with the appeal. Business firms have become major users of the FOIA, and as a result, some courts have refused to award costs in an appeal. Agencies covered by the act include departments within the executive branch, or independent agencies such as the Central Intelligence Agency (CIA) or the National Aeronautics and Space Administration (NASA). FOIA requests cannot be used for documents in the possession of the president and his immediate advisers, Congress, its committees and agencies under its direct control (such as the General Accounting Office and the Library of Congress), or the judicial branch. Nine FOIA exemptions exist to maintain some confidentiality of documents. The exemptions include: national security (usually the executive branch gets to determine what remains classified); agency management records (issues that are of little concern to the public, like parking records or sick leave requests); materials already kept secret by other laws (such as tax returns or patent applications); trade secrets or commercially viable items (for example, documents related to licensing of television or radio stations, drug manufacturers, or businesses seeking government contracts
that have to provide detailed information to federal agencies); inter- and intra-agency memos (those used in the deliberative policy-making process, and the exemption usually shields policy drafts, staff proposals, studies, and investigative reports); personnel, medical, and similar files (so as not to invade the privacy of federal employees on personal matters); material about ongoing civil and criminal investigations; reports by banks and financial institutions; and maps of oil and gas wells. In 1978, Congress also passed the Presidential Records Act (PRA), which governs the official records of presidents and vice presidents created or received after January 20, 1981. President Jimmy Carter signed this bill into law, yet it was his immediate successor, Ronald Reagan, who would first be governed by it. Basically, the PRA changed the legal ownership of presidential official records from private to public and established new regulations under which presidents must manage their records. Specifically, the PRA establishes public ownership of the records and requires that an incumbent president and his staff take care to manage and preserve the papers of the administration. The PRA also established a process for restricting public access to certain records that may be classified or considered worthy of restriction due to national security concerns. However, journalists, researchers, or other members of the public can gain access to presidential records through a FOIA request beginning five years after the end of the administration (with the president retaining certain restrictions on public access for up to 12 years). The PRA also requires that vice presidential records be treated in the same way as presidential records. In 2001, President George W. Bush signed Executive Order 13233, which provided more secrecy for presidential records and allowed an incumbent president to withhold a former president’s papers even if the former president wanted to make them public. Various scholarly organizations across the country protested the move, which went into effect just as new documents were about to be released at the Ronald Reagan Presidential Library. Similar to the open-document laws, Congress also attempted to mandate that government meetings and other functions be held publicly. As a result, Congress passed a federal open-meetings law in 1976 requiring some 50 federal agencies to meet
in public. Known as the “Government in the Sunshine Act,” the law was intended to provide information to the public on the decision-making process within government. An example of one of the agencies governed by this act is the Federal Communications Commission, which has to notify the public prior to a meeting where decisions will be made. Basically, these agencies are required to conduct all of their business in public, a notice of public meetings must be given at least one week in advance, and agencies must keep detailed records of any business done in a closed meeting under the exemptions that exist in the law. Several sunshine laws also exist at the state level; virtually every state has passed laws similar to those at the federal level. For example, in California, the Brown Act, first adopted in 1953, governs how local legislative bodies and other governing boards (such as city councils, county governing boards, or school boards) meet and conduct business. Open-meeting laws require at least a 48-hour notice of the agenda, and limit executive sessions (those closed to the public where only board members and other staff can be present) to matters concerning collective bargaining, pending legal actions, the purchase of land or buildings, and some personnel matters. While many news organizations and even individuals have benefited from these types of sunshine laws that attempt to make government more open and responsive to its citizens, the government agencies themselves have not always been overly cooperative in following the guidelines. However, the Freedom of Information Act and other similar laws aid journalists in their role as watchdogs against government abuse and corruption, and many investigative stories have alerted citizens to important issues such as environmental dangers from nuclear weapons plants or serious safety defects in the gas tank on the popular Ford Pinto during the 1970s. The increased use of computers within the federal government throughout the 1980s also created new questions about the documents governed under FOIA. In 1996, Congress adopted an amendment to the original legislation requiring all electronic documents, including e-mail messages, to fall under the same standards as paper documents. And in the days and months following the terrorist attacks on 9/11, the federal government began to pursue tighter control
over certain types of information, such as details about the infrastructure of highways, bridges, telecommunications, banking, and energy sources, fearing that any information related to these topics would be useful to terrorists planning another attack within the United States. Those seeking greater access to government information remain at odds with a government that is often reluctant to follow the rules originally set out by Congress more than four decades ago. Further Reading Overholser, Geneva, and Kathleen Hall Jamieson, eds. The Press. New York: Oxford University Press, 2005; Paletz, David L. The Media in American Politics: Contents and Consequences. 2nd ed. New York: Longman, 2002; Pember, Don R., and Clay Calvert. Mass Media Law. Boston: McGraw-Hill, 2005. —Lori Cox Han
trial by jury Trial by jury is considered a fundamental American right, and it was viewed as an important freedom and safeguard against government abuse by the framers of the U.S. Constitution. The purpose of a trial by jury is for jurors to act independently and to speak out against an unjust punishment. British courts, as early as 1670, guaranteed the independence of trial juries. That year, William Penn (who founded what is now the state of Pennsylvania) was put on trial for preaching about the Quaker religion. The British Parliament had previously outlawed public assemblies of members of nonconforming religions such as the Quaker faith (nonconforming meant any religion that differed from the Church of England). The jurors ignored the judge’s instructions for a guilty verdict and instead acquitted Penn. As a result, the jurors were jailed and then ordered to pay a fine for contempt against the British king. One of the jurors, Edward Bushell, refused to pay the fine, and he remained in jail until an appellate court released him. The appellate court, by doing so, established an important precedent that jurors could not be imprisoned or otherwise punished for a wrong or untrue verdict. The Bushell case set a powerful example in colonial America, as American jurors often
refused to enforce the acts of the British Parliament. The trial of John Peter Zenger in 1734–35 is a well-known example. Zenger, the publisher of the New York Weekly Journal, was charged with seditious libel and jailed for nine months due to the publication of stories about the governor of New York. Zenger had printed the viewpoints of the opposition party, which had accused the governor of dishonesty and oppression. Even though Zenger had clearly violated the sedition law, the jury in the case ignored that law and acquitted Zenger on the charges, finding him not guilty based on the notion of truth as a defense. According to the Sixth Amendment, “the accused shall enjoy the right to a speedy and public trial, by an impartial jury,” and according to the Seventh Amendment, “In Suits at common law, where the value in controversy shall exceed twenty dollars, the right of trial by jury shall be preserved.” As a result, the trial by jury system is recognized as a federal constitutional right in both criminal and civil cases. Since the United States Supreme Court’s ruling in Duncan v. Louisiana (1968), which incorporated the Sixth Amendment, defendants in state criminal cases have also had a constitutional right to trial by jury. In this case, the trial by jury provision of the Sixth Amendment was incorporated at the state level through the due process clause of the Fourteenth Amendment. The only exception to the right to jury trial involves misdemeanor trials where defendants face incarceration for less than six months. The size of juries has, at times, been a controversial issue that has made its way to the U.S. Supreme Court. Historically, trial juries in the United States have been composed of 12 members, all of whom had to agree in order to convict a defendant. Although this is still the case in most states, some jurisdictions allow for six-person juries in noncapital cases. And four states (Oregon, Oklahoma, Louisiana, and Texas) no longer require juries to be unanimous to convict defendants in some noncapital cases. In Williams v. Florida (1970), the Supreme Court approved Florida’s use of six-person juries in noncapital cases. Two years later, in Johnson v. Louisiana (1972), the Court surprised many observers by allowing state criminal trials to depart from the historic unanimity rule and permitting convictions by nine votes on 12-person juries in noncapital cases.
However, the Supreme Court has not allowed for a less-than-unanimous vote with a six-person jury. The U.S. Supreme Court has also considered constitutional issues stemming from citizen participation on juries. While it is assumed that Americans have a right to a trial with a jury made up of their peers, that statement appears nowhere within the U.S. Constitution (although it is a stated right in Magna Carta of 1215). Originally, only white men were allowed to serve on juries. The right to serve on a jury was not extended to minorities and women until the latter half of the 20th century. In 1880, the Supreme Court had ruled that based on the equal protection clause of the Fourteenth Amendment, a state could not ban black men from jury service. But in a second decision that same year, the Supreme Court distinguished between consideration for jury service and actually serving on a jury. This allowed states to continue to exclude blacks from jury duty for many decades. It would take the Supreme Court nearly 100 years to finally settle this issue. In Batson v. Kentucky (1986), the Supreme Court ruled that jurors of a particular race could not be purposefully excluded from jury service. A state law in Louisiana that barred women from serving on juries without a written declaration of their willingness to serve was struck down as unconstitutional in 1975. In Taylor v. Louisiana (1975), the Supreme Court barred systematic exclusion of women from jury service, stating that a cross section of the community was necessary for jury selection as part of the guarantee of an impartial jury as well as equal protection of the laws under the Fourteenth Amendment. Similarly, the Supreme Court ruled in Witherspoon v. Illinois (1968) that jurors opposed to the death penalty cannot automatically be excluded from jury service. In states that allow the death penalty, a juror, even if opposed to the sentence, can still serve on a jury that may result in that outcome if they can set aside their personal beliefs in deciding the case and applying the law. A person who cannot set aside their opposition to the death penalty, however, can be excluded from jury service in a capital case. The Sixth Amendment also guarantees “the assistance of counsel” for those on trial. Historically, the right to counsel in “all criminal prosecutions” meant no more than that the government could not prevent a person accused of a crime from hiring an attorney if
he or she could afford to do so. In Gideon v. Wainwright (1963), the Supreme Court overruled precedent and held that the Sixth Amendment right to counsel as applied to the states via the due process clause requires states to provide counsel to felony defendants who cannot afford to hire attorneys on their own. Because the decision in Gideon was made retroactive, several states had to retry hundreds of convicted felons who had not been represented by counsel at their first trials. In many cases, the key witnesses were not available and the states were forced to drop the charges. Now, states provide counsel through a public defender’s office or some other pro bono arrangement with the state bar association. Since the Gideon decision in 1963, the Supreme Court has for the most part upheld the ruling, and in 1979, the Court ruled that a misdemeanor defendant also had the right to counsel if imprisonment was a possibility. Ineffective representation can sometimes result in an appeal or retrial, and a defendant also has a constitutional right to self-representation, as long as the waiver for counsel is made knowingly and intelligently. A conflict often results between First and Sixth Amendment rights regarding the right to a free press versus the right to a fair trial by an impartial jury. The Sixth Amendment does guarantee a “public trial,” and a longstanding common law tradition exists for trials to be open to the public. Not only is this seen as an essential safeguard against judicial abuse of power, but it is also seen as an opportunity for the public to be educated about the judicial branch and its many processes and procedures. But in this media-dominated age, how are jurors to remain impartial in deciding the outcome of trials with high news value? In general, impartial jurors can still know something about the defendant, as long as they can keep an open mind and decide the case only on evidence they hear during the trial. However, the Supreme Court has overturned convictions after massive publicity from prejudicial news reporting, even if some of the evidence reported was inadmissible in court. For example, in Irvin v. Dowd (1961), the Supreme Court granted Leslie Irvin, who was accused of killing six people in Indiana, a new trial due to prejudicial news coverage that made a fair trial impossible. Newspapers gave the defendant the nickname “Mad Dog Irvin,” and local opinion polls showed that most
people in the area believed him to be guilty. The jury found him guilty and sentenced him to death. After the Supreme Court granted a new trial, Irvin was tried again, found guilty again, and sentenced to life in prison. Perhaps the most famous case dealing with the right to an impartial jury versus freedom of the press to cover a trial is Sheppard v. Maxwell (1966). Sam Sheppard was a prominent doctor in Cleveland, Ohio, who was convicted of murdering his wife. Sheppard maintained his innocence, claiming that a bushy-haired stranger had broken into his house and beaten his wife to death, and that Sheppard had come home while the crime was occurring. No one else had reported seeing this man, and when it was discovered that Sheppard and his wife were having marital problems, he quickly became the prime suspect within the news media. Local newspapers helped to get Sheppard arrested about one month after his wife was murdered. Headlines in local newspapers included: “Somebody Is Getting Away With Murder” and “Why Isn’t Sam Sheppard in Jail?” The coverage before the trial was extremely sensational, and included stories alleging extramarital affairs involving Sheppard and that Sheppard had cleaned up the blood from the crime scene and disposed of the murder weapon before calling the police. In addition, newspapers published the names and addresses of the jurors. During the trial, the press table was so close to the defendant’s table that Sheppard could not talk to his attorney without being overheard by the press. The judge was lax in keeping cameras out of the courtroom and in making sure the jury did not read or hear media reports on the case; reporters were also allowed to handle the physical evidence. Sheppard was convicted of murder, but after he had served 12 years in jail, the Supreme Court decided to hear his appeal. The Supreme Court overturned the conviction on the grounds that Sheppard had not received a fair trial due to the media coverage of the case. Sheppard was retried, more than 12 years after the crime, and acquitted. He died of liver disease in 1970, but family members continued for years to try to prove his innocence. The 1960s television show The Fugitive, and later a 1993 movie of the same title starring Harrison Ford, were based on the facts of this case. Various types of prejudicial pretrial publicity can make it difficult for a defendant’s case to be heard by
an impartial jury. For example, the confession to a crime may be considered newsworthy, but it can often be ruled inadmissible during the trial if various rules were not followed correctly by law enforcement officials. In addition, the results of lie detector tests, blood tests, ballistic tests, and other criminal investigatory procedures can be unreliable and not actually used in the trial, even though the results may be newsworthy. Prejudicial news coverage can also come from stories about a defendant’s prior criminal record, irrelevant information about the defendant’s lifestyle or personal character, or inflammatory statements in the press (as with both Irvin and Sheppard) stating a defendant’s guilt prior to the trial. A fair trial is one that presupposes courtroom decorum as well as an environment that is free from a carnival-like atmosphere. In its ruling in Sheppard, the Supreme Court criticized the lower court in the case for failing to protect against prejudicial pretrial publicity, failing to control the courtroom, and failing to restrict the release of prejudicial information during the trial. Judges have several tools to prevent prejudicial publicity, including such options as a change of venue, a change of venire (which changes the jury pool instead of the location of the trial), granting a continuance in the trial in the hopes that publicity will die down, voir dire (which is the questioning of potential jurors before a trial to determine if they have already formed an opinion about the case), judicial admonition (telling jurors to avoid media coverage of the case and to avoid discussing the case with anyone) and sequestration of the jury (which can allow no juror contact with family or friends and screening of phone calls and mail). Judges can also use gag orders, directed at the participants in the trial, including attorneys, or even the press. Gag orders on participants in the trial are usually upheld as constitutional, but gag orders on the media are usually found unconstitutional. In Nebraska Press Association v. Stuart (1976), the Supreme Court ruled the lower court’s restrictions on the reporting of the case violated the First Amendment. The case involved a man who had killed six members of a family in a small area of Nebraska, then confessed the crime to various members of the media. The case received substantial news coverage, and as a result, the judge issued a gag order in an attempt to ensure the selection of an impartial jury. Usually, First
Amendment rights receive more protection than Sixth Amendment rights where access to a public trial is concerned. Gag orders are usually used only as a last resort in an attempt to ensure a fair trial. In other cases, courts have ruled that states cannot punish the media for publishing information that is in the public record and dealing with the judicial system, or even information that is confidential and otherwise protected by law. Also, a series of cases during the late 1970s and early 1980s protected the right of both the public and the press to attend judicial proceedings, including the jury selection process, pretrial hearings, and the trial itself, unless the state can offer a compelling reason for closure. The identity of jurors can also be protected from the public, including the press, as was the case in both the O.J. Simpson murder trial in 1995 and the trial of Timothy McVeigh, convicted and sentenced to death for the bombing of the Oklahoma City federal building in 1995. The issue of cameras in the courtroom remains controversial. The Supreme Court first ruled on this issue in Estes v. Texas (1965). Billie Sol Estes (who had connections to President Lyndon Johnson) was convicted of fraud in a business deal involving fertilizer tanks, but the Supreme Court eventually overturned the conviction due to lack of a fair trial caused by broadcast coverage. Estes was later tried and convicted again, but the Supreme Court ruled that the press must be allowed to cover proceedings with as much freedom as possible, though the preservation of the atmosphere necessary for a fair trial “must be maintained at all costs.” Today, each state has different rules governing cameras in the courtroom, and in most states the judge, and sometimes the parties involved, decide whether or not the cameras are allowed. All 50 states now allow cameras in at least some courtrooms. Cameras can still be banned in federal courts, and are not allowed in the U.S. Supreme Court, but many experiments have been undertaken in the last few years in lower federal courts to allow the presence of cameras. Further Reading Abramson, Jeffrey. We, the Jury: The Jury System and the Ideal of Democracy. New York: Basic Books, 2000; Dialogue on the American Jury: We the People in Action. American Bar Association Division for Public Education, 2006; Jonakait, Randolph N. The
American Jury System. New Haven, Conn.: Yale University Press, 2003; Lehman, Godfrey D. We the Jury: The Impact of Jurors on Our Basic Freedoms: Great Jury Trials of History. Amherst, N.Y.: Prometheus Books, 1997; Middleton, Kent R., and William E. Lee. The Law of Public Communication. Boston: Allyn & Bacon, 2006; O’Brien, David M. Constitutional Law and Politics. Vol. 2, Civil Rights and Civil Liberties. New York: W.W. Norton, 2003; Pember, Don R., and Clay Calvert. Mass Media Law. Boston: McGraw-Hill, 2005. —Lori Cox Han
voting The right to vote in free and fair elections has often been thought to be the sine qua non of democratic citizenship; it is the core principle that, more than any other, distinguishes democratic regimes from nondemocratic ones. In reality, of course, the picture is much more complex. The right to vote is merely one of many basic rights and freedoms that we expect to be encouraged and protected in liberal democratic states. Nevertheless, while such things as freedom of speech, freedom of association, and freedom of religion are rightly held to provide important limits to the actions of institutions and elected representatives in democratic states, it is this idea that representatives must be elected by the people to whom they are ultimately responsible rather than appointed by arbitrary authorities, dictators, or rich elites that resonates most strongly among many citizens and commentators. The reason for this, of course, is the crucial link between freedom and democracy that runs through the history of Western political thought. The core democratic idea—that people are only truly free when they live under laws and institutions that they have had a hand in creating—can be found in the work of thinkers as diverse as Aristotle, Rousseau, Montesquieu, and Tocqueville, and most explicitly in the work of civic republican thinkers throughout history, from Cicero and Machiavelli, to the founding fathers of the American republic, who shared a conception of participatory governance and freedom variously embodied in the res publica of Rome, the city-states of Renaissance Italy, and the demos of ancient Greece. Democratic political systems, they
claimed, hold the promise of genuine freedom for their citizens by providing them with the ability to discuss, shape, and determine the laws under which they live. In so doing, they place political sovereignty in the hands of the people themselves and hence free them from the tyranny of arbitrary authority.

VOTER TURNOUT OF EUROPEAN DEMOCRACIES AND THE UNITED STATES

Country           Average
Italy             90%
Iceland           89%
Greece            85%
Belgium           84%
Sweden            84%
Denmark           83%
Argentina         81%
Turkey            80%
Portugal          79%
Spain             79%
Austria           78%
Norway            76%
Netherlands       75%
Germany           72%
United Kingdom    72%
Finland           71%
Ireland           71%
France            61%
Luxembourg        60%
United States     45%
Switzerland       38%

Data are based on Voting Age Population (VAP). Source: International Institute for Democracy and Electoral Assistance. URL: http://www.fairvote.org/turnout/intturnout.htm. Accessed August 10, 2005.

Representative democracy emerged as a response to the rigors and complexities of modern life: a solution to the problem of how best to safeguard the individual freedoms of a huge and diverse citizenry without requiring all of them to contribute to the democratic process at all times. Voting represents the fundamental process by which individual citizens entrust their sovereignty to another—or a group of others—to govern on their behalf; it is the crucial bridge between citizens and the representatives to whom they entrust political power. Participation provides the means by which citizens can confer legitimacy and authority onto the state by providing their consent to it. Given the formal and symbolic importance of the right to vote, then, it is crucial that every democratic state is clear about who should be allowed to exercise this right, and why. The United States’ struggle to answer this question in a way that is consistent with the principles of freedom and equality asserted in the Declaration of Independence has given rise to some of the most protracted, complex, and vehement social, political, legal, and civil upheaval in the nation’s history. The United States now has universal suffrage, which is to say that all citizens above the age of 18 are able to vote, subject to certain constraints. But the road to universal suffrage in America has been rocky and fraught with conflict. For a long time, only white, property-owning men were generally allowed to vote. However, over the years the franchise was progressively extended outward, with the consequence that political leaders became answerable to greater and greater numbers, each of whom possessed their own ideals, values, and individual perspectives on how political power should be appropriately exercised. The state-by-state abolition of the requirement that voters must own land (finally negated in 1850) and the subsequent abolition of the requirement that only people who pay taxes could vote in the Twenty-fourth Amendment (1964) gave political power to a much wider and more divergent constituency of men. The recognition in the Fifteenth Amendment (1870) that individuals should not be denied the vote on grounds of “race, color, or previous condition of servitude,” and the subsequent civil unrest, congressional amendments, and court decisions throughout the 1950s, ’60s, and ’70s, finally led not only to the formal right of black people to vote in elections but also to the substantive right to do so, largely free from intimidation or unfair discrimination. The extension of the franchise to women in the Nineteenth Amendment (1920), following a long period of protest and activism by the suffragist movement, dramatically expanded the number of people to whom the government became formally responsible, and this was increased even further with the decision to reduce the voting age from 21 to 18 (with the ratification of the Twenty-sixth Amendment in 1971).
With the widespread extension of the franchise throughout the U.S. population, more attention has been placed on the idea that the right to vote confers upon the individual a responsibility to exercise this right. The debate about rights and responsibilities extends much more widely than merely the right to vote; indeed, the idea that the possession of a right confers on the individual a responsibility to exercise it has become increasingly popular among political and democratic theorists, politicians, and commentators. But again, the extent to which this claim is seen as particularly important with regard to voting confirms the earlier claim that voting is seen as a different (and somehow more fundamental) kind of right than those others that citizens possess. For example, we talk about the importance of the right of free assembly, but on the whole we do not think that, in giving people this right, the state should force people to assemble or attend meetings. Similarly, the right to free exercise of religion does not, we think, compel people to be religious, just as the right to free speech does not confer a responsibility to speak publicly about controversial matters. We might want to argue that these rights confer a responsibility on all citizens to defend and respect them, but this is a slightly separate matter; freedom of speech, assembly, and religion act more as constitutional safeguards that protect people’s ability to act in a particular way should they wish to. However, votes collected via free and fair elections provide the legitimacy and authority of local, state, and national political institutions, and the people who work in them, and hence voting is seen as qualitatively different from these other rights. Decision making and policy formation in liberal democracies like the United States operate with the consent of the citizen body; hence, some feel that if people have the right to vote (and thereby to provide their consent to the prevailing system) then they are morally obliged to do so. Choosing not to vote, it has been suggested, in some way represents a rejection of democracy itself. Do citizens have a responsibility to vote? Many have thought so. Theodore Roosevelt, for example, claimed in 1883 that “the people who say they have not time to attend to politics are simply saying that they are unfit to live in a free community. Their place,” he said, “is under a despotism.” But on what
grounds might we say that they have a responsibility or a duty to participate? In countries like Australia in which it is compulsory for citizens to vote the answer is relatively straightforward: the responsibility to vote there is a legal one, and therefore one must vote or be subject to legal sanction. In democracies like the United States and Britain, which do not compel their citizens to vote, is it meaningful to speak of a duty or responsibility to vote? There are several reasons that we might offer for doing so, including the few that follow. First, on top of what has already been said about the important role of voting in providing consent to liberal democratic states and governments, we might want to say that people have a responsibility to vote given how many people have fought and died to secure for them the right to do so. For example, do the long and difficult struggles to secure the vote for black people and women in America that we mentioned earlier confer a moral duty on those women and black people currently possessing this hard-won right to exercise it? What should we think about women or black people in the United States who do not vote because they “cannot be bothered” or are “not interested in politics?” Is this insulting to those who were imprisoned or killed in order to give them the right to vote? Second—and relatedly—we might say that people have a moral responsibility to vote because in living in a democracy they enjoy rights and freedoms that millions of people around the world do not. A great many people around the world live in fear of their political leaders and the political systems that support them, and despite this, there exist brave and committed individuals and groups who struggle to secure the kinds of freedoms that many U.S. citizens take for granted. People in certain parts of Africa, eastern Europe, and Asia, for example, still have little or no meaningful say in who governs them, and many voters around the world continue to suffer intimidation and threats if they do try to exercise their right to vote. Are those in the United States and other democratic nations who choose not to vote because it is “too inconvenient” or “a waste of time” being too complacent about democratic values, and arrogantly dismissive of the difficulties of others? Third, we might want to say that U.S. citizens have a responsibility or duty to vote out of a sense of patriotism. U.S. national identity is largely “civic”
rather than “ethnic” in nature; patriotism in America is mainly concerned with a commitment to certain principles and ideals (liberty, meritocracy, democracy, etc.) and so to vote in a democratic election is in some way to register one’s support for the American way of life, and the principles through which it is defined. Less prosaically, of course, we might just suggest that democratic citizenship is premised upon a sense of give and take: you get a package of benefits from the political system, and in return the state can require you to do certain things, such as serve on juries and take part in elections. Or we might want to say that people have a responsibility to vote in order to protect this right from withering or being phased out altogether. If the current political system is premised upon the popular consent of the citizen body through the vote, then the widespread failure of people to vote could conceivably prompt a reform of the political system that does not rely on votes so heavily, and draws its legitimacy from elsewhere. It would be increasingly difficult to justify a set of political arrangements that draw their legitimacy from the fact that the public act in a particular way if people were not acting in that required fashion. We might suggest that the right to vote must be exercised, or it could wither and, ultimately, be marginalized by well-meaning governments who wish to make their system more legitimate given that people do not participate in the way the system requires. On the other hand, of course, we might want to argue that all this is missing the point. Critics of the “rights confer responsibilities” approach to voting point out that there is more to living in a democracy than voting; democratic governments should respect and encourage the freedom of their individual members, they argue, and this means protecting their right not to vote if they do not wish to. After all, deciding not to vote can represent an important form of protest. If people think that the political parties look too similar, or the system itself is somehow corrupt, or if there is party unity on a particular policy or issue that an individual is not happy with, then they may feel unable to endorse a system that they feel has failed to represent them. In this sense, the argument is similar in structure to the one we mentioned earlier with regard to the right to free speech, religion, and association: that the
right to vote is a constitutional safeguard that protects the ability of the citizen to do something should they wish to, rather than something that they must do. There is a sense in which, in questioning the link between the right to vote and the responsibility to vote, we are protecting the right of the citizen to take a stand against the current political system (or a set of policies) by withholding their consent. For those excluded or marginalized minorities in society, this may in fact be one of the only effective forms of protest they can make: Sometimes the person who speaks the loudest is the one who remains silent. It is not clear that it is a responsibility of a democratic system to force people to engage with it; whether or not people choose to engage with their political system is in fact a useful and important barometer by which we might measure the public’s support for their political system. By understanding voting to be a civic duty that they must perform regardless of their wider feelings about the political system (and, perhaps, believing that the state should compel people to vote by law, as some people have argued) we may in fact rob ourselves of the ability to gauge the public’s satisfaction with its political system, and the actions of their representatives. This is important because voter turnout in the United States is currently very low. If U.S. citizens do feel they have a responsibility to vote, it would seem that they are failing (in their millions) to discharge that responsibility. Indeed, despite the fact that the right to vote is now universally shared among adult citizens, a comparatively small number of people actually choose to exercise this right. As a recent study by the American Political Science Association has pointed out, voter turnout in America ranks very low among other democratic nations; only the Swiss have lower voter turnout. Notably, they point out, even when voter turnout in America was at its highest (in the 1960s), turnout lagged well behind many other democratic nations at that time. In 2000, on the day of what was to become one of the closest presidential elections in American history, the media excitedly showed queues of people waiting to cast their votes. In the end, 59 percent of those eligible to vote did so. By comparison, the news that in the United Kingdom, turnout in their last general election had reached 61 percent
prompted many commentators and academics to suggest that the United Kingdom was in the grip of a democratic crisis and that something serious needed to be done to reengage the public. Countless studies have attempted to find the cause of political disengagement among the American public. Perhaps the most famous body of work in this area has been led by Robert Putnam (Harvard University), who has argued that as a result of profound social change, citizens no longer feel a shared sense of solidarity with those around them. Consequently, he argues, people are more reluctant to engage in collective political debate and action. This breakdown of “social capital” among the U.S. citizen body has resulted not only in dramatically reduced rates of voting but also in declines in many other forms of political participation, such as getting involved in political campaigns, volunteering, and writing letters to the editor. Whether or not the decline in political participation is due to a breakdown of social capital, it does raise important questions about the opinions that U.S. citizens hold about their political system, and their own understanding of what it means to be a “citizen” of the United States. On the old civic republican model, which defined a citizen as someone who participated in politics, nonvoters and the wider disengaged would appear not to be citizens at all. In 21st-century America, citizenship is understood differently, as a much more diffuse and complex package of rights and responsibilities of the kind that we have mentioned in this section. Whether or not citizens actually have a responsibility to vote in elections in the United States is for the individual to decide. What is clear, however, is that one’s answer to that question will depend on one’s views on the prior question: what does it mean to be a citizen? Further Reading DiClerico, Robert E. Voting in America: A Reference Handbook (Contemporary World Issues). Santa Barbara, Calif.: ABC-CLIO, 2004; Macedo, Stephen, et al. Democracy at Risk: How Political Choices Undermine Citizen Participation, and What We Can Do About It. Washington, D.C.: Brookings Institution Press, 2005; Maisel, L. Sandy, and Kara Z. Buckley. Parties and Elections in America: The Electoral Process. New York: Rowman & Littlefield, 2005; Putnam, Robert. Bowling Alone: The Collapse and Revival of
American Community. New York: Simon & Schuster, 2000. —Phil Parvin
voting regulations Voting regulations determine who can vote, where, when, and how. With the exception of the date of national elections, the U.S. Constitution originally gave the states the exclusive power to determine their own voting regulations. However, constitutional amendments, laws passed by Congress, and United States Supreme Court decisions have placed restraints and requirements on state voting regulations. Additionally, voting regulations have been modified in many states at the behest of political parties and through citizen initiatives. One of the central regulations of voting is deciding who can vote and who cannot. A broad expansion of the right to vote, known as suffrage or the franchise, has taken place over the history of the United States. Following the American Revolution, property was a qualification for voting in 10 of the 13 newly independent states, with the other three requiring the payment of taxes in order to vote. Beyond the property qualification, most state constitutions stipulated that only white, Christian males could vote. Most states lifted the religious, property, and taxpaying requirements by 1830. At the time of the Civil War, even many nonslave states did not permit black males to vote. It took the war, plus legislation and amendments to the U.S. Constitution, to extend the franchise to black males. The 1867 Reconstruction Act forced the southern states to ratify the Fourteenth Amendment, which established national and state citizenship. Additionally, the Reconstruction Act required that each of the southern states draft a new state constitution that gave suffrage rights to all free men. These new constitutions were subject to congressional approval. The Fourteenth Amendment also provided equal protection and due process of law, which extends to voting laws. The Fifteenth Amendment prohibited the states from denying or abridging the right of citizens to vote on account of race, color, or previous condition of servitude. For the remainder of the 1800s, black men had the right to vote. But as the turn of the century approached, many southern states implemented voting regulations to limit voting by blacks.
Limitations on black voting were part of what were known as Jim Crow laws designed to segregate whites and blacks in the South. While the southern states could not outlaw voting for blacks, they designed several rules to disenfranchise many. One common rule was the literacy test. These tests were very subjective, and the government official giving the test had a tremendous amount of discretion in whom he passed. Moreover, the official was prohibited from giving the voting applicant a reason why he failed the exam—ostensibly to reduce cheating, but the real effect was to shield the official from accusations of racial bias. Other tests included a test of knowledge about the U.S. Constitution, and a test of “good moral character” where applicants were required to bring a registered voter who could vouch for them. The last state structural voting regulation that influenced black participation was the poll tax—a tax on voting. While these official state voting regulations were effective at reducing black voting, political parties had their own rules that further disenfranchised blacks. The most influential of these rules was the establishment of “whites-only” primaries. Because primary elections are used to determine a party’s nominee for the general election, the party itself can determine who should be included in the process (for instance, no members of other parties). Since the end of the Civil War, the South was dominated by the Democratic Party (President Abraham Lincoln had been a Republican). This meant that nearly all of the general elections would be won by Democratic candidates—Republicans did not stand much of a chance. Because of this, the most important election was the primary, in which the Democratic Party would choose which candidate would go on to the general election and, most likely, victory. By having their votes excluded from the primary elections, blacks were left with an insignificant vote in the general election (if they could even register for that). Beyond the official state and party restrictions on voting, blacks were kept away from the polls by concerted acts of intimidation, harassment, and violence. The efforts by southern states to keep blacks away from the polls were highly successful, with only about 3 percent of the black population registered to vote in the South at the beginning of the 1960s. Congress
stepped in during the 1960s with legislation to overturn the Jim Crow election laws. In 1965, following the televised beatings and gassing of voting rights activists by state troopers on a peaceful protest march in Selma, Alabama, Congress passed the Voting Rights Act. The act included a number of unique provisions. First, it prohibited the use of literacy tests. Second, it permitted the federal government to intervene in states and jurisdictions that had a history of using discriminatory registration methods and had a rate of voter registration less than 50 percent of the voting age population. In these areas, the federal government had the authority to appoint voting registrars for the purpose of conducting drives to register voters. Moreover, the problem areas covered by the act could not change any of their voting regulations unless they received permission from the U.S. attorney general or the U.S. District Court for the District of Columbia. The purpose of this last provision was to ensure that any new changes did not have any discriminatory consequences. Another restriction on voting, the poll tax, was not targeted specifically at blacks. The poll tax, by its nature, served to disenfranchise poor people in general, but many of these poor people in the South were black. The poll tax was abolished in 1966 when the U.S. Supreme Court ruled that a poll tax implemented by the state of Virginia was an unconstitutional violation of the Fourteenth Amendment. A national law providing the right to vote for women trailed the laws that enfranchised black men by more than half a century. In 1890, Wyoming was admitted to the Union and became the first state to give women the right to vote (women enjoyed the right to vote in Wyoming while it was still a territory). After many years of struggle and protest, women received the right to vote with the Nineteenth Amendment in 1920. The voting age in all states was 21 years until Georgia lowered its voting age to 18 in 1943. In 1970, Congress attempted to lower the voting age to 18 nationally, but the Supreme Court ruled that Congress could only lower the age for national elections (not state or local elections). In response, Congress introduced the Twenty-sixth Amendment, which lowered the voting age to 18 years old. The push for the lower voting age was a result of the Vietnam War, in which young men were sent off to battle without any
choice of the leaders who sent them. The amendment was ratified in 1971. Other than the age restriction, there remain two distinct classes of persons who are not permitted to vote: persons in prison and persons committed to mental institutions (although three states permit persons in prison to vote). In 13 states, convicted felons are not permitted to have their voting rights restored after they have served their sentence. Efforts in Florida to delete felons from voter registration files prior to the 2000 election caused a great deal of controversy when the computerized voter purge list proved to be wildly inaccurate. The last major restriction on voting is the registration process. While tests are illegal, states still have the right to determine when and how citizens can register to vote. This became an increasing concern toward the latter half of the 20th century, as political scientists were able to show the close correlation between voter turnout and the ease or difficulty of voter registration. In an effort to make voter registration more convenient, the 1993 National Voter Registration Act (commonly known as the Motor Voter law) required states to provide voter registration materials at their motor vehicle offices as well as other social services offices. The act also mandated that states permit citizens to register to vote by mail. Lastly, the act stipulates that the cut-off date for registering to vote in an election can be no more than 30 days before the election. States also control where a citizen may vote. The “where” aspect of voting regulations comprises two elements. The first element is the actual precinct in which the citizen votes. States control where citizens of different areas go to vote on election day by mapping residential areas into precincts. Precinct boundaries may need to be redrawn on occasion due to population growth and shifts. Related to the redrawing of precinct boundaries is the redrawing of congressional district boundaries, the second element of where a person votes. Every 10 years, the U.S. census is taken to determine how many people live in which areas across the country. Seats in Congress are then assigned to the states based on population. This process is called reapportionment, and some states may gain or lose seats in Congress. Because of this, and because of the natural population shifts within each state, the states are required to redraw the boundaries of the congressional
districts, which is called redistricting. This influences the choices voters have, in that some voters may find themselves in different districts from decade to decade. In addition to determining who can vote, another voting regulation deals with when citizens may vote. As noted above, states can schedule when their elections are held, with the exception of the requirement that national elections must be held on the first Tuesday following the first Monday in November. This has led to different dates for primary elections in different states. New Hampshire, the state that traditionally holds the first presidential primary in the country, has legislation that mandates that its primary be held at least a week earlier than any other state’s primary. This makes New Hampshire the most important primary for presidential aspirants, causing candidates to spend much time campaigning in the state—making promises that the New Hampshire voters want to hear. Other states have scheduled their primary elections earlier in the year so that presidential candidates would view their primaries as being more important, but New Hampshire has kept moving its primary earlier and earlier. The trend for more primaries to occur earlier in the presidential election year is known as “frontloading.” Since 1988, several states have grouped their primaries together on the same day, known as “Super Tuesday,” with the intent of making the interests of those states more important in the presidential nomination process. Last, voting regulations also deal with how people vote, which includes the mechanics of casting a vote and the types of elections in which people can vote. For instance, the state of Oregon has a system of voting by mail—no polling places on election day. The other states have polling places but varying rules to allow citizens to vote by mail if they wish. This is known as “absentee voting” and is becoming increasingly popular with voters. In 2002, Congress passed the Help America Vote Act (HAVA). Under HAVA, states receive funds from the federal government in order to establish electronic voting machines at each polling place that would facilitate voting for persons with disabilities. However, one of the complaints with the electronic voting system is that many of the systems purchased by the states do not offer a verifiable paper trail. In addition to electronic balloting, HAVA also required
that states collect “provisional ballots” from persons turned away at the polls. For example, if a person’s name did not appear on the list of eligible voters, the person may vote on a provisional ballot that could be counted if the election result is close and the voter can be verified as being eligible. In addition to the actual act of voting, states also have different regulations regarding the structure of the election in which the citizens vote. These regulations often reflect the influence of the state political parties, but they have also been altered in many states through citizen initiatives. Primary elections are one example of structural electoral differences among the states. In choosing party nominees for the general election, a little over half of the states (27) run a closed primary election. In this type of election, only members of a given political party can vote in that party’s primary election, and they may only vote for candidates of that political party. However, 10 of these states permit voters to change their party affiliation on election day in order to vote in the primary of their choice. Open primary elections are those in which the voter may vote a party’s ballot without being a member of that political party. Of the 21 states that run open primaries, about half separate the ballots by party, so the voter is forced to request one party’s ballot. The other states with open primaries list all of the party ballots on one grand ballot and instruct voters to vote only in one of the party ballots listed on the grand ballot (this method sometimes causes confusion among voters). The last two states, Alaska and Louisiana do things differently. In Alaska, Republicans use a closed primary, but the other parties use a blanket primary— where a voter can vote for candidates of more than one party on a single ballot that lists all of the candidates of all of the parties (excluding Republicans). Louisiana, in contrast to all of the other states, uses a “runoff primary” election. In a runoff primary, if no candidate receives more than 50 percent of the vote, the two candidates receiving the most votes for a given office advance to the general election, irrespective of party. Under this system, occasionally two candidates of the same party will be on the ballot for the same office in the general election. If a candidate receives more than 50 percent of the vote in the primary, that candidate wins outright, and there will be no general election for that office.
Another difference in primary elections deals with the selection of presidential candidates. In some states, party nominees for the presidential election are determined by caucuses instead of elections. Caucuses are party meetings conducted locally, and a voter’s presence is required to participate in the selection process. Local governments (counties and cities) also have different voting regulations. Many local governments run “nonpartisan” elections, in which the political party of the candidate is not identified on the ballot. Frequently, a runoff system is used in local elections in conjunction with nonpartisan candidate identification. In conclusion, voting regulations differ significantly among states. The reason for this is that the power to set voting regulations rests predominantly with the states themselves. While the federal government has placed several requirements on states and restricted discriminatory regulations, states still wield enormous power in determining who can vote, where, when, and how. Further Reading Dudley, Robert L., and Alan R. Gitelson. American Elections: The Rules Matter. New York: Longman, 2002; Keyssar, Alexander. The Right to Vote: The Contested History of Democracy in the United States. New York: Basic Books, 2000; Piven, Frances Fox, and Richard A. Cloward. Why Americans Don’t Vote. New York: Pantheon, 1988; Piven, Frances Fox, and Richard A. Cloward. Why Americans Still Don’t Vote and Why Politicians Want It That Way. Boston, Mass.: Beacon, 2000; Wayne, Stephen J. Is This Any Way to Run a Democratic Election? 2nd ed. Boston: Houghton Mifflin, 2003. —Todd Belt
women’s rights Women have had representation in the government of the United States in both indirect and direct ways since its founding, although the term founding fathers has been used for more than 200 years to refer to the all-male signers of the Declaration of Independence and formal attendees and drafters at the Constitutional Convention. The term has suggested for a long time that women were
absent from the earliest beginnings of the history of the United States, although history books have long alluded to the presence of women but without a full elaboration of their contribution to civic society. First ladies from Martha Washington and Abigail Adams on have been acknowledged, but recent scholarship has more fully demonstrated the role that these and other women have played in governing the United States. One of the first topics often addressed in looking at women’s rights is equality and the fight for equal rights. The most definitive method of guaranteeing equal and shared rights for women has been through amendments to the U.S. Constitution that guarantee those rights nationally. As the U.S. government is a federal system, individual states may have granted these same rights to women before statehood. Yet not all states have had the same set of laws applying to men and women, to men of different races, or to people of all ages as part of the United States. And so an amendment to the Constitution has been viewed as the most definitive way to ensure equal rights for women, although it is not the only means by which women’s rights have been defined. Amendments most closely associated with women’s rights are the Nineteenth Amendment, granting suffrage to women in all states; and the Equal Rights Amendment, which failed to be ratified by three-quarters of the states, either within the seven-year period viewed by some as standard for ratification, since that time period has been written into the ratification process for some amendments, or within the extension period Congress had given the ERA (10 years, from 1972 to 1982). The ERA fell short by three states of the 38 needed for ratification. It did have majority support in terms of the number of states supporting ratification. Of the remaining states, if, for example, Illinois had required a simple majority vote of its state legislature for approval instead of a three-fifths majority vote, the amendment might have been ratified. In addition, while women legislators gave support to the ERA, their male counterparts in the state legislatures failed to give the same support. At the time, some argued that the rights guaranteed through this amendment could be achieved through other means, including
legislation, and the use of the Fourteenth Amendment through the courts. These are the means primarily by which women's rights have been expanded since the 1970s. In terms of both suffrage and equal rights, there is no one defining moment when rights were granted. In terms of suffrage, while the Nineteenth Amendment will be discussed below, there is a long history of discrimination that prevented African-American women from voting, long after white women achieved suffrage. Not until the 1960s, with the passage of voting rights laws and, even more to the point, the eventual funding and full enforcement of those laws, did suffrage become a reality for many of them. The 2000 election, and the problems identified in some states regarding registration, are classic examples of the types of issues that have long surrounded suffrage and access to the polls. In spite of the addition of the Nineteenth Amendment to the U.S. Constitution, it is important to note that women had been voting in some states, including in federal elections, long before that amendment was ratified. States such as Wyoming demanded that women be allowed to continue voting as a condition of joining the Union. And women were elected to Congress from some of the newer western states, which reflected both their participation and representation in the election process, long before women would be elected to Congress from many other states. Provisions in the 1964 Civil Rights Act regarding sex discrimination were given teeth through enforcement powers, once Title IX of that act was finally revised during the Richard Nixon administration to add sex to the areas the U.S. attorney general could enforce. Confusingly, a different Title IX, in the 1972 Education Amendments, had to wait until another program, the Women's Educational Equity Act, received increased funding before any funds could be used to enforce it. This meant that not until at least the 1980s would any movement begin in the area of sports equity for women. While Title IX was passed in 1972, no meaningful improvement in the balancing of sports programs for men and women would come about for another 10 years. In time, the courts became a way by which the U.S. solicitor general in the executive branch
could recommend cases of discrimination to the Court, and the attorney general could bring suits against companies and institutions in the public and private sector that discriminated or allowed cases of sexual harassment. In terms of entry into legislative office, it was not only that women were mentored or had the right apparatus of support behind them to launch a campaign. Some were selected to fill a vacancy, often upon the death of a husband, starting as placeholders and then becoming incumbents seeking election in their own right. Others, as a number of the earliest women to run for political office were, were already quite prominent, whether as leaders in business, heads of organizations, or holders of a lower public office, so that their election was a natural stepping-stone upward to a higher position of office.
Many of the first women to serve in Congress were appointed to fill a vacancy, such as Rebecca Latimer Felton, a Democrat from Georgia, who served only two days in the U.S. Senate in 1922 but was actively supportive of woman suffrage and vocational training for women, and whose symbolic appointment, albeit short, would help the governor of Georgia, who had opposed suffrage but now had to respond to newly enfranchised women in future elections. Jeannette Rankin, the first woman elected to the House of Representatives, a Republican of Montana, helped women gain suffrage in Montana in 1914. The Equal Rights Amendment was seen by some early women activists as not necessary and, in fact, a threat to protections put in place since the turn of the 20th century, especially the creation of the Department of Labor and the creation
of a Children's Bureau. Labor in particular wanted to protect working women and to limit child labor, part of a movement that had gathered international attention. In fact, the protection of women workers was a secondary concern that evolved out of the desire to protect children. It was in the protecting of women of childbearing age that protective labor policy emerged. The Women's Bureau functioned as an agency that gathered data on the status of working conditions for women from the states and published annual bulletins and reports. Its decentralized structure helped to facilitate the development of parallel women's organizations that emerged, especially in the 1920s, and again in the 1960s, to push for two areas of concern to women: an equal rights amendment (once suffrage had been achieved) and equal pay. The year 1992 was designated by some political pundits as the "year of the woman," since there was a sharp increase in both the number and percentage of women elected to both the House and Senate. Often this is attributed to the choice of women to run in response to televised Senate Judiciary Committee hearings on the nomination of Clarence Thomas to be an associate justice of the United States Supreme Court and the sexual harassment Anita Hill said she had experienced from Thomas, her supervisor in a previous job. However, a number of variables common to congressional turnover led to the turnover seen in 1992, including incumbents choosing to retire in greater numbers, and thus creating open seats. Women ran for these open seats and were able to raise money at the same levels as male candidates. In that year, the percentage of seats in the House and Senate held by women jumped from nearly 6 percent (29 in the House; 2 in the Senate) to 10 percent (48 in the House; 6 in the Senate). By the end of 2006, in the 109th Congress, the number of women in the House reached an all-time high of 70, or 16 percent of the House, and there were 14 women in the Senate. The 108th Congress was the first to see a woman lead her party in Congress, as Nancy Pelosi was elected the Democratic minority leader of the House of Representatives in 2002. Following the 2006 midterm elections, in which Democrats won control of both houses, Pelosi became the first woman to hold the position of Speaker of the House. With the addition of a critical mass of women in the House and Senate,
and especially as women have been gaining seniority and positions of leadership, the agenda of Congress has changed to include such issues as pensions, family leave, educational opportunities, child care, and women's health issues. The history of women's rights in government is often viewed as a chronological listing of when women achieved certain goals or "firsts." For example: Frances Perkins, President Franklin D. Roosevelt's secretary of labor, the first woman to serve in a president's cabinet, from 1933 until 1945; Sandra Day O'Connor, the first woman to serve on the U.S. Supreme Court, appointed by President Ronald Reagan in 1981 and serving until her retirement in 2006; Madeleine Albright, secretary of state, the first woman to serve in the inner cabinet (perceived as the advisers closest to the president), appointed by President Bill Clinton and serving from 1997 until the end of his administration in 2001; and Geraldine Ferraro, the first woman to run on a major party's national ticket, joining Walter Mondale as his running mate on the Democratic presidential ticket in 1984. While "firsts" provide important historical markers, the process of gaining equal rights for women remains ongoing. Title IX is celebrated as a victory for women's sports, yet the goal of equal resources for women has been slow to be realized. In 1972, Congress passed legislation calling for an equalization of funds for women's sports, but enforcement would not be authorized by Congress for another decade. In the 1990s, the Supreme Court was hearing cases concerning the inequalities of sports facilities and budgets for men's and women's sports. Another decade later the Republican Speaker of the House, and former high school wrestling coach, Dennis Hastert, continued to offer amendments to remove funding for the implementation of Title IX legislation. Equal pay has been an agenda item for decades. The increase in the number of women elected to Congress, appointed to the courts and to executive branch positions, and eventually elected president will likely lead to a further expansion of women's rights in terms of their identification, passage into law, and enforcement by the executive branch. In addition, the national government's actions will serve as a role model for states and local governments to follow, as well as for employers and individuals in the private sector.
Further Reading Andersen, Kristi and Stuart Thorson. “Congressional Turnover and the Election of Women,” Western Political Quarterly 37 (1984): 143–156; Borrelli, MaryAnne, and Janet M. Martin, eds. The Other Elites: Women, Politics, and Power in the Executive Branch. Boulder, Colo.: Lynne Rienner Publishers, 1997; Caroli, Betty Boyd. First Ladies. New York: Oxford University Press, 1995; Graham, Sara Hunter. Woman Suffrage and the New Democracy. New Haven, Conn.: Yale University Press, 1996; Mansbridge, Jane J. Why We Lost the ERA. Chi-
cago: University of Chicago Press, 1986; Martin, Janet M. The Presidency and Women: Promise, Performance and Illusion. College Station: Texas A & M University Press, 2003; O’Connor, Karen, Bernadette Nye, and Laura VanAssendelft. “Wives in the White House: The Political Influence of First Ladies,” Presidential Studies Quarterly 26, no. 3 (Summer 1996): 835–853; Watson, Robert P. The Presidents’ Wives: Reassessing the Office of First Lady. Boulder, Colo.: Lynne Rienner Publishers, 1999. —Janet M. Martin
POLITICAL PARTICIPATION
absentee and early voting
Although voter turnout rates in the United States lag behind those of most industrialized democracies, many Americans perform their civic duty without setting foot in a polling place on election day. All states offer voters some alternative to traditional election day voting. Some states allow "absentee voting," enabling voters to return paper ballots by mail. A few states even pay the return postage. Other states allow "early voting," enabling voters to cast their ballots in person at the offices of county clerks or at other satellite voting locations without offering an excuse for not being able to vote on election day. During the 2004 general election, 12 percent (14,672,651) of votes were cast by absentee ballot, and 8.4 percent (10,189,379) were cast early or on election day at a voting location other than the voter's regular polling location. Absentee ballots have proven decisive in many close races and have overturned apparent victories in several elections, including the 1982 and 1990 California gubernatorial races, the 1988 Florida Senate race, and scores of local elections. This entry briefly reviews the history of absentee and early voting and examines the impact of these voting methods on campaigns and elections.
Petty Officer 3rd Class Candie Thompson assists fireman Paul Byrd in filling out an absentee ballot. (Photographed by Raul Quinones, U.S. Navy)
Liberalized use of absentee balloting and early voting is part of a larger trend since the 1980s of legislative efforts to increase and broaden electoral participation by making voting easier and facilitating voter registration. Absentee balloting actually has a long history in U.S. elections. The first large-scale use of absentee voting occurred during the Civil War when President Abraham Lincoln actively encouraged Union soldiers to participate in elections back home by casting absentee ballots. Men and women in military service remain one of the largest blocs of absentee voters. Their right to vote, and that of other American citizens living abroad, is protected by the Federal Voting Assistance Act of 1955 and the Overseas Citizens Voting Rights Act of 1975. Absentee voting got a boost when the federal Voting Rights Act of 1965 and its amendments defined more explicitly the right to vote as applying to a broad class of people who might be denied access to the ballot box by conditions that make it difficult to navigate polling
places, including the elderly and people with physical disabilities or language handicaps. In 1978, the state of California went further with its "no excuse" absentee voting law, making an absentee ballot available to any registered voter who requested one without the need to plead sickness, disability, or any other reason for wanting to vote before Election Day. By the 2004 general election, 26 states had followed California's lead. Voters in the other 24 states and the District of Columbia face stricter guidelines delineating who may vote by absentee ballot. One of these states, Oregon, did away with polling locations altogether, becoming the first state to conduct statewide elections entirely by mail during the 2000 election. Many local elections, typified by extremely low turnout, also rely exclusively on absentee voting. In addition, 23 states followed the lead of Texas when in 1993 that state began allowing any registered voter to report to an early voting station and vote before election day—without first applying for the privilege to do so. Located in a public place such as a government building or even a shopping mall, early voting stations often provide extended hours on weekdays and weekends. The growing use of absentee and early voting has captured the attention of political parties, campaign strategists, and political commentators and has important implications for American campaigns and elections—including turnout, campaign tactics, and the democratic process itself. To the extent that liberalized requirements for absentee voting and implementation of early voting were intended to increase and broaden electoral participation, those reforms are generally seen as failures. In years of high turnout nationally, states that allowed early voting, all-mail voting, and absentee ballots had the smallest increases in turnout. In years of low turnout nationally those states had the biggest decreases in turnout. Although some studies have detected boosts in turnout among older, well-educated, and highly interested citizens, these liberalized balloting systems have demonstrated limited potential for expanding the electorate by attracting the disadvantaged to the polls. Indeed, absentee voters surveyed shortly before California's 2003 gubernatorial recall election reported that they voted absentee for the sake of ease and convenience. However, to the extent that absentee voting was intended to help peo-
ple who might have difficulty casting a ballot at a polling place, it should come as little surprise that some absentee voting has proven more popular among the elderly, students, the disabled, and people with young children. Other than these demographic traits, analysts have detected important differences between absentee and early voters compared with traditional election day voters. Recent studies have found politically active people more likely to vote by absentee ballot and a lower likelihood of voting early among those voters with low political efficacy, little interest in the campaign, and no strong partisan attachment. Although early voting systems have so far had a negligible impact on turnout, they do seem to affect voters’ proclivity to complete all portions of their ballots. In high salience elections such as presidential elections and California’s gubernatorial recall election, preliminary evidence suggests that those who make up their minds and vote early may be more likely to cast votes in such high-profile races. During the 2004 general election, for example, jurisdictions allowing no-excuse absentee voting and early voting enjoyed a lower incidence of voters failing to mark choices for U.S. president, Senate, or House of Representatives. However, there is some evidence that voting before election day has meant a marked increase in “ballot roll-off” for low-information “down-ballot” races like those for local offices and some ballot measures. Before Nevada implemented no-excuse absentee voting, for example, approximately 6 percent of those casting ballots typically skipped state legislative races and matters further down the ballot. After Nevada liberalized its absentee voting requirement, 12 to 14 percent of that state’s absentee voters left those parts of their ballots blank. Analyses of absentee voters in Los Angeles County also revealed higher levels of “voter fatigue” in local races, bond measures, and ballot initiatives compared with election day precinct voters. Far from increasing turnout, some critics worry that the increased use of absentee and early voting may actually contribute to political apathy and disengagement as campaigns must start their advertising sooner, much of it negative. Others question whether absentee and early voting systems lead to sound decision making by voters. Limited political information may drive those who
vote far in advance of election day to rely more heavily on party labels compared with traditional polling place voters. Does one party or the other enjoy an advantage among absentee or early voters? The evidence is mixed. Among the very few systematic studies of absentee voters, some have found them slightly more likely to reside in areas where Republican registration runs high or in areas more likely to support Republicans including rural, suburban, and high-income areas. Other investigations, however, detect no partisan advantage among absentee voters. After California enacted universal eligibility for absentee voting in 1978, Republican-leaning counties had higher rates of absentee voting but the Republican advantage faded in subsequent elections. Likewise, the most recent surveys of voters have not detected partisan differences between absentee voters and traditional election day voters. In any case, as both parties target absentee and early voters, these voters may come to more closely resemble their counterparts who vote at the polls on election day in terms of their party identification and vote choices. That was certainly the case during California's bizarre and highly publicized 2003 gubernatorial recall election. With just 10 weeks' notice of that special election, the California Democratic Party targeted registered Democrats who regularly vote absentee and encouraged them to oppose the recall by voting "no" on the first ballot measure. Rescue California, the main pro-recall committee, targeted Republican and Independent voters in conservative-leaning counties. On the second ballot measure, voters were also asked to choose from among 135 replacement candidates in the event the recall passed. Under state law, voters were allowed to cast absentee ballots as early as 29 days before the election. Those who voted promptly were not exposed to information and campaign messages that emerged shortly before election day. Some who voted swiftly cast ballots for candidates who later dropped out of the race. The California Democratic Party did not even meet to decide whether to also endorse Lieutenant Governor Cruz Bustamante in the event of Governor Davis's recall until September 13—five days after county officials began mailing absentee ballots. Moreover, the majority of absentee ballots, almost 2 million, had already been mailed in by the time news reports surfaced concerning admiring comments front-runner
Arnold Schwarzenegger had made about Adolf Hitler and allegations of sexual harassment and assault against the actor and body builder by several women. Nonetheless, Schwarzenegger easily prevailed in that election, becoming California’s 38th governor. As alternatives to voting in person at polling places on election day become more common, both parties are vigorously courting absentee and early voters. However, the Republican Party is reputed to have the lead in these efforts. One state-by-state survey of party organizations found the Republican Party to enjoy a particular edge in local, special, and midterm elections where turnout is often a paltry 20 percent of eligible voters. Mailing absentee ballot applications to likely supporters constitutes a favorite tactic. During the 2000 campaign season in Florida, the Republican Party mailed applications for absentee ballots to approximately 2 million registered party members—complete with a letter from Governor Jeb Bush on what appeared to be official state stationery, urging the party faithful to vote for his brother, Republican presidential candidate George W. Bush, “from the comfort of your home.” The Florida Democratic Party mailed only approximately 150,000 absentee ballot applications during that election. Four years later, both presidential campaigns tailored their strategies to “bank the vote” early, especially in the handful of battleground states where the election was the tightest. Karl Rove, President Bush’s chief reelection strategist, commented at the time, “every one of our supporters, every Republican, is receiving an absentee ballot application and a phone call.” As in states allowing no-excuse absentee voting, the only requirement for participation in Oregon’s all-mail elections is being registered as a voter. In those races, mobilizing core voters is a game that two parties can play. Indeed, the Oregon AFL-CIO turned out a whopping 86 percent of its members in 2000 and 81 percent in 2002 under the state’s mail ballot system. “We followed the tried and true,” recalls union president Tim Nesbitt, “first distributing flyers in the work places, then direct mail from both their individual union and the state AFL-CIO. The key is intensive phone-banking. We make one or two calls before the ballots go out, then as many as three or four reminder calls. We check county records frequently to make sure we are not calling those who have already voted.”
Like absentee voting and all-mail voting, early voting enables parties and campaign organizations to harvest core supporters before moving on to cultivate support from swing voters. However, early absentee voting and early in-person voting call for different campaign strategies. Unlike absentee voting, no advance application is required to vote early. “Absentee ballots are often a matter of convenience for activists, while early in-person voting can be targeted to the less active,” notes Democratic consultant Rick Ridder. “For example, Jesse Jackson could hold a rally, then load people onto buses and take them to vote.” In fact, the 1992 Clinton-Gore campaign had some success in mobilizing supporters under Texas’s early voting system in counties with large Latino populations and increases in voter registration. Although such examples point to the potential of early voting to increase participation among demographic groups with historically lower voting rates, some have expressed concern that partisan county elections officials will approve requests for early voting stations in locations favorable to one party or the other such as union halls or conservative Christian churches. By all indications, campaign strategists have adapted to the rise of absentee and early voting by mobilizing their core supporters. Whether these innovations in balloting methods eventually increase turnout largely hinges on whether parties and political campaigns “activate more than the easiest and closest at hand.” For those casting absentee or early ballots, these voting methods undoubtedly prove convenient, albeit methods subject to abuse— including fraud and undue pressure by campaigns and organizations trying to influence the voter’s choice. Balloting outside the polling place may leave voters vulnerable to pressure from spouses, pastors, employers, labor leaders, and others. Absentee voting, warns political scientist Norman Ornstein, “is corroding the secret ballot, which is the cornerstone of an honest and lawful vote.” One unsavory if not blatantly illegal practice is that of “house-to-house electioneering.” A political strategist observes, “You can target a home, give the voters applications for absentee ballots, then talk to them again [when they’re filling out their ballots]. It’s all done in the privacy of a home. It’s electioneering and campaigning, but nobody sees it.” Reports surfaced during the 2004 presidential election of campaign opera-
tives monitoring requests for absentee ballots—matters of public record—and approaching voters with offers of assistance when ballots were due to arrive in recipients’ mailboxes. Absentee and early voting systems raise additional concerns. Some commentators note early voters lack the same information as those who vote on election day. For example, early voters may miss out on candidates’ performance in debates and will not factor other late-developing election events into their decisions. Others note that early and absentee voting systems magnify the advantages of incumbency because a longer voting period necessitates more resources— both money and organization. Perhaps the gravest concerns with expanded opportunities to vote before election day center on the way these practices have eroded participation in an important communal ritual. Turning out to the polls on Election Day is one of Americans’ only opportunities to do something important together as a nation: come together to govern themselves through the choice of their leaders. Some object that making participation “in this important civic rite a matter to be pursued at an individual’s convenience is to undermine the sense of our nationhood, our common experience in the government of, by, and for the people.” Notwithstanding such concerns, liberalized absentee voting laws and opportunities to vote early are likely to remain permanent fixtures of the American political landscape as states ease restrictions and promote the use of alternatives to traditional election day voting. See also voting; voting regulations. Further Reading Barreto, Matt A., Matthew J. Streb, Mara Marks, and Fernando Guerra. “Do Absentee Voters Differ from Polling Place Voters? New Evidence from California.” Public Opinion Quarterly 70, no. 6 (2006): 224– 234; Dubin, Jeffrey A., and Gretchen A. Kalsow. “Comparing Absentee and Precinct Voters: A View over Time,” Political Behavior 18, no. 4 (1996): 369– 392; Dubin, Jeffrey A., and Gretchen A. Kalsow. “Comparing Absentee and Precinct Voters: Voting on Direct Legislation,” Political Behavior 18, no. 4 (1996): 393–411; Frankovic, Kathleen A. “Election Reform: The U.S. News Media’s Response to the Mistakes of Election 2000” In Ann N. Crigler, Marion
R. Just, and Edward McCaffery, eds. Rethinking the Vote. New York: Oxford University Press, 2004; Karp, Jeffrey, and Susan Banducci. “Absentee Voting, Participation, and Mobilization,” American Politics Research 29, no. 2 (2001):183–195; Kershaw, Sarah. “Officials Warn of Absentee Vote Factor,” New York Times, 7 October 2003, A16; Kiely, Kathy, and Jim Drinkard. “Early Vote Growing in Size and Importance,” USA Today, 28 September 2004; Michels, Spencer. NewsHour with Jim Lehrer. Transcripts, October 7, 2003; National Conference of State Legislatures. “Absentee and Early Voting.” Available online. URL: http://www.ncsl.org/programs/legman/elect/ absentearly.htm. Downloaded June 26, 2006; Neeley, Grant W., and Lilliard E. Richardson, Jr. “Who Is Early Voting? An Individual Level Examination,” Social Science Journal 38 (2001): 381–392; Newton, Edmund. “Recall Vote Underscores Weight of Absentee Ballot,” Los Angeles Times, 22 June 1989, A1; Oliver, J. Eric. “The Effects of Eligibility Restrictions and Party Activity on Absentee Voting and Overall Turnout.” American Journal of Political Science 40, no. 2 (1996): 498–513; Ornstein, Norman. “The Risky Rise of Absentee Voting,” The Washington Post, 26 November 2000; Patterson, Samuel C., and Gregory A. Caldeira. “Mailing in the Vote: Correlates and Consequences of Absentee Voting,” American Journal of Political Science 29, no. 4 (1985): 766–788; Simon, Mark. “Mass mailings aimed at absentee voters,” San Francisco Chronicle. 8 September 2003; Stein, Robert. “Early Voting,” Public Opinion Quarterly 62 (1998): 57–69; United States Election Assistance Commission. Election Day Survey. Available online. URL: http://www.eac.gov/election_survey_ 2004/toc.htm. Downloaded June 21, 2006. —Mara A. Cohen-Marks
campaign finance The term “campaign finance reform” refers to periodic efforts to revise the nation’s laws and regulations governing how political candidates and others raise and spend money in political campaigns. Supporters of these efforts seek to limit the effect that money has on elections and the possible corrupting influence (or at least the appearance of corruption) that might be associated with large campaign donations. Those on the other side of the issue express concern
that additional campaign finance regulations might stifle legitimate political speech. Key issues worth noting in analyzing campaign finance reform include understanding the environment in which major changes in the nation’s campaign finance have been passed and understanding the intended and unintended effects of those changes as they are implemented. This last point is particularly important because when loopholes and weaknesses of any legislation or regulation emerge, they provide the impetus for the next round of discussions over campaign finance reform. Efforts to limit the effect of money on political elections can be found as early as during Theodore Roosevelt’s presidency when the Tillman Act prohibited corporations and nationally chartered banks from directly contributing to federal candidates’ campaigns. The current campaign finance system has its roots in the reform efforts of the early 1970s to make the workings of government more open and accessible. That system, established under the Federal Election Campaign Act (FECA) of 1971 and its amendments—the primary amendment passed in 1974 with others enacted in 1976 and 1979—and as modified following the United States Supreme Court decision in Buckley v. Valeo in 1976, created a set of rules that applied to elections involving all federal offices. With the original piece of legislation in 1971, Congress set in place a framework designed to limit the influence of special interests and wealthy individuals, to control spending in elections, and to establish a regime of public disclosure by candidates and parties of their campaign finances. The 1974 amendment to FECA broadened the scope of the initial act to establish limits on contributions by individuals, political parties, and political action committees (PACs), to set limits on campaign spending, to provide enhanced public funding in presidential elections (originally provided for in companion legislation to FECA in 1971), and to create the Federal Election Commission (FEC) to administer and enforce the laws governing the nation’s system of campaign financing. With Buckley v. Valeo, though, the Supreme Court invalidated certain portions of the new campaign finance laws while upholding others. Specifically, the Court permitted limitations on contributions
by individuals, parties, and PACs as a legitimate way to protect against the corrupting influence, or at least the appearance of corruption, stemming from large campaign contributions to candidates. The Court, however, struck down other portions of the new laws—most notably limits on independent expenditures (that is, expenditures that are not coordinated with a campaign) and limits on overall campaign expenditures by candidates. In striking down these provisions, the Court equated spending money with speech, and concluded that limiting such expenditures was an unconstitutional limitation on a person's First Amendment right to free speech. The framework of campaign finance regulation stemming from the 1970s reforms and related court challenges centered on the limit on the amounts that an individual could contribute to a candidate ($1,000 per election) and political parties ($5,000 per year to state and local parties; $20,000 per year to national parties). Other key components included limits on contributions of political parties ($5,000 per election) and PACs to candidates ($5,000 per election), disclosure requirements imposed on candidates and campaigns (for example, candidates must use best efforts to obtain information about any individual who contributed $200 or more), and empowering the FEC to resolve ambiguities by issuing new rules and to enforce the campaign finance laws against candidates and others who violated them. As noted above, in addition to the campaign finance regulations established by FECA and its amendments, which affected all candidates for federal office, an additional reform established a separate set of rules that applied only in the presidential context: public funding of presidential candidates tied to additional restrictions. Specifically, presidential candidates initially qualify to receive matching funds by raising $100,000 from contributions of $250 or less, with a minimum of $5,000 being raised in each of at least 20 states. Once qualified, a candidate receives (but not before January 1 of the election year) public funds matching dollar for dollar the first $250 of every individual contribution, up to one-half of the federal spending limit for that election cycle. (A brief worked example of this matching arithmetic appears at the end of this entry.) But with the benefit of receiving public financing came additional restrictions on candidates accepting such funding, most notably spending limits. The Supreme Court upheld such restrictions in Buckley since they were
voluntary—a candidate who did not accept public funding would not be subject to such restrictions. One effect of the campaign finance system stemming from the 1970s reforms was that presidential candidates could (and at least initially did) tailor their fundraising activities toward the goal of maximizing the funds received through public financing. Front-runners would use such funds to try to bolster their leads. Struggling candidates would remain in the race in the hope the influx of public funds would bolster their campaign. Recent election cycles, however, have shown that candidates may actually thrive better in a system in which they do not accept public financing and, as a result, are not subject to limitations that accompany it. H. Ross Perot (1992) and Steve Forbes (1996 and 2000) used their personal fortunes to run for president instead of relying on public funding. George W. Bush gambled in 2000 that he could raise more money being free of the restrictions accompanied by accepting public financing than he could receive in public funds— and he did. The fund-raising successes of Bush, John Kerry, and Howard Dean in 2004, and Barack Obama and Hillary Clinton in 2008, all of whom declined public funding, highlights the possibility that one of the core campaign finance reforms of the 1970s may no longer be relevant in the current political environment. The discussion regarding public funding of presidential elections highlights an important component of reform politics, namely that passing campaign finance reforms does not necessarily cure all that ails the nation’s electoral and campaign finance system. Further, just as old issues may be resolved, new problems may arise as loopholes and shortcomings in the system are discovered. Despite the 1970s reformers’ efforts to curb the influence of money in elections, during the post-reform era, more and more money is required to run for office. For example, average spending by congressional candidates has more than doubled between 1990 and 2004. Similarly, each presidential election cycle results in new record amounts being raised and spent during the primary season. In addition to the increased money spent on elections, candidates, PACs, interest groups, parties, and donors alike were finding ways to raise, contribute, and spend money that were not otherwise prohibited or regulated under the FECA system. Although Congress would occasionally explore additional changes to the nation’s system of campaign
finance laws throughout the post-reform era, proponents of change did not make much headway until the late 1990s. Led by Republican John McCain and Democrat Russ Feingold in the Senate and Republican Christopher Shays and Democrat Marty Meehan in the House of Representatives, reformers pursued major changes affecting a wide array of campaign finance issues. Occasionally they experienced some success, such as in 1998 when Shays and Meehan overcame numerous legislative obstacles in getting the House to pass a major overhaul of the campaign finance system; however, the Senate did not pass the bill. Nevertheless, proponents continued to press the issue, scaled back the scope of their proposed legislation, and in 2002 (in the wake of the scandal surrounding the collapse of corporate giant Enron) succeeded in passing the Bipartisan Campaign Reform Act (BCRA). BCRA primarily focused on the perceived (by the act's proponents) "twin evils" of the campaign finance system: soft money and issue ads. Soft money—that is, unregulated funds received by the political parties purportedly for party-building activities—had become an increasingly visible part of campaign finance. Similarly, issue ads run by someone other than the candidate or his campaign appeared very similar to campaign ads but were carefully crafted to avoid meeting the standard for "express advocacy" established by the Supreme Court. As a result, such ads had been out of the purview of the Federal Election Commission, the administrative agency created during the 1970s reforms to implement and oversee the nation's campaign finance regulations. BCRA also contained a wide range of other provisions designed to change various specific aspects of the nation's campaign finance system. Almost immediately after BCRA's passage, Republican Senator Mitch McConnell, the self-proclaimed "Darth Vader" of campaign finance, filed a legal challenge to the constitutionality of BCRA; indeed, the bill's sponsors anticipated such a challenge and included a provision in the act that would require expedited judicial review of any such lawsuit. As with Buckley, the Supreme Court struck down certain provisions of BCRA but upheld others—most significantly, upholding the core provisions of the act banning soft money and regulating issue ads. The Court was highly fractured in its decision, with dis-
sents filed by four of the justices that were highly critical of the decision. Nevertheless, BCRA's primary changes to the nation's campaign finance laws stayed in place, with the 2004 election being the first one held under BCRA's rules. The one change under BCRA that observers initially believed might have the most significant direct impact on the campaign finance system was the increase in the maximum contribution an individual could make to presidential and other federal candidates from $1,000 to $2,000 (indexed for inflation going forward) per election. The argument was that donors who previously gave $1,000 to a candidate conceivably might have given more if not for the cap; thus, by increasing the maximum contribution to $2,000, these donors would funnel more funds into the campaign system. However, a donor who gave $50 or $100 would not now give twice that simply because the maximum amount that an individual could contribute was doubled. Thus, a gap between the large contributors and small donors would increase. The 2004 election showed that, to some degree, these concerns regarding the impact of small donors were unfounded, at least at the presidential level. On a relative basis, the allocation of small, mid-level, and large donors in 2004 was nearly the same as it was in 2000. Candidates in 2004 were tapping all available sources for contributions. Certainly, their larger donors took advantage of the increased maximum contribution. George W. Bush, John Edwards, and several other candidates each got a majority of their contributions from maximum-level donors. But an increased number of small donors, resulting in part from candidates' use of the Internet as a fundraising tool, kept the campaign finance system from being further skewed in favor of the largest donors. What remains to be seen in future elections is whether small donors will continue to keep pace with those contributing the maximum amount to a candidate, as was the case for Obama in 2008. Possibly the more significant impact that BCRA had in the 2004 election was the increased influence of 527s—tax-exempt organizations that played a very vocal role in presidential electoral politics in 2004. Their rise may be tied back to BCRA's ban on soft money. In 1996, President Bill Clinton made soft money a key element of his fund-raising strategy,
bringing in funds to the Democratic Party to be used in ways that would help him get reelected. This approach brought a great deal of additional money into the Democratic Party from labor unions that would otherwise be prohibited from contributing to Clinton’s campaign under federal election laws. During the 2000 election season, the amount of soft money exploded—the total amount of soft money that the national party organizations spent or transferred to state parties that year was $279.7 million; in 1996, this number was $125.2 million. BCRA’s ban on soft money, however, did not stop interest groups and political action committees from wanting to influence presidential and other federal elections. The vehicle of choice for interest groups and political action committees during the 2004 presidential election was the 527 and other tax-exempt entities. One may ask, “why would someone be opposed to reforming the campaign finance system and limiting corruption?” Those who opposed BCRA and similar efforts to revise the nation’s campaign finance laws are not opposed to eliminating known corruption from the political system. Instead, they believe that other values need to be protected, most significantly the First Amendment right to free speech. The ability of groups and individuals to express their political views is a fundamental hallmark of our political system. Further, they contend, the exchange of ideas and debate of positions can only help to inform and improve governmental policy. Campaign finance regulation has the effect of squelching such vigorous debate. In addition, BCRA opponents believe that mechanisms other than extensive governmental regulation are more effective means of ensuring a democratic electoral system. Their preferred method would be enhanced disclosure requirements. Political corruption takes place in the shadows and out of sight of the public eye. Enhanced disclosure would make campaign financing more transparent and thus remove the ability of anybody to engage in any activities of corruption. The rules and regulations governing the financing of elections have a very real and direct political impact. If a group or set of interests is to raise and spend more effectively than their opponents within the current rules, they will have a distinct advantage in elections. As a result, they also would be less likely to want to change those rules and lose their advan-
tage. Conversely, if another group thinks that the campaign finance system works to their disadvantage, they are more likely to want to see the rules changed. The issue of campaign finance reform is complex enough that all sides can bolster their arguments with themes of democracy and the U.S. Constitution. But much of any debate over reforming the campaign finance system may come down to who would gain an advantage in doing so. See also campaigning; lobbying. Further Reading Corrado, Anthony, Thomas E. Mann, Daniel Ortiz, and Trevor Potter, eds. The New Campaign Finance Sourcebook. Washington, D.C.: Brookings Institution Press, 2005; Malbin, Michael J., ed. The Election after Reform: Money, Politics, and the Bipartisan Campaign Reform Act. Lanham, Md.: Rowman & Littlefield, 2006; Smith, Bradley A. Unfree Speech: The Folly of Campaign Finance Reform. Princeton, N.J.: Princeton University Press, 2003; Dwyre, Diana, and Victoria A. Farrar-Myers. Legislative Labyrinth: Congress and Campaign Finance Reform. Washington, D.C.: Congressional Quarterly Press, 2001. —Victoria A. Farrar-Myers
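To make the public financing rules described in this entry more concrete, the short illustration below works through the matching-funds arithmetic. The $250 match cap, the 20-state qualification threshold, and the cap at one-half of the federal spending limit come from the entry itself; the individual contribution amounts are hypothetical and chosen only for illustration.
% Hypothetical contribution amounts; the $250 cap and the qualification threshold come from the entry above.
\[
\text{match}(c) = \min(c,\ \$250) \quad \text{for each individual contribution } c
\]
\[
\text{Qualification: } \$5{,}000 \times 20 \text{ states} = \$100{,}000 \text{ raised in amounts of } \$250 \text{ or less}
\]
\[
\text{Example: contributions of } \$1{,}000,\ \$250,\ \$100 \;\Rightarrow\; \$250 + \$250 + \$100 = \$600 \text{ in matching funds}
\]
\[
\text{Total public match received} \;\le\; \tfrac{1}{2} \times \text{federal spending limit for the cycle}
\]
Because only the first $250 of any single contribution is matched, a $1,000 check and a $250 check generate the same public match; under these rules, breadth of support counts for more than the size of any one gift.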
campaigning
Campaigning refers to all activities undertaken with the purpose of winning a political contest. Most often, campaigns are waged on behalf of a particular person or candidate for political office but also can focus on the election of a party or slate or the passage of a policy. Former Speaker of the House of Representatives Tip O'Neill (D-MA) is known best for stating that all politics is local (something his father told him after Tip lost his first race). O'Neill later met with much electoral success, and reduced a campaign for office to four elements: the candidate, the organization, the money, and the issues. Of course, this simplification glosses over many of the details, but O'Neill's elements provide a good launching point for discussion. When a campaign is for a particular office, the candidate is clearly the center of the universe. If the area covered by an election is small enough (for example, a city council race or some state legislative districts), the candidate may literally go door to
Ronald Reagan campaigning with Nancy Reagan in Columbia, South Carolina, 1980 (Ronald Reagan Presidential Library)
door to meet and talk to voters. Personal contact is viewed as a very valuable way to sway voters but is exhausting, time-consuming, and not always feasible. Thus, candidates often meet with organized groups to deliver speeches, perhaps over a dinner, in order to contact more people all at once. It is hoped that the support of some members of a group will lead others in that group to go along. Candidates can also be seen shaking hands and chatting with voters at just about any public venue when Election Day is near: shopping malls, subways, county fairs, and so on. Clearly, a candidate cannot manage a campaign single-handedly. Organizations include everyone from close advisers, to paid managers and assistants, to envelope-stuffing volunteers in the neighborhood where the candidate lives. In national campaigns, there are so many meetings and so much traveling, multiple persons are needed to handle the practical details and make sure the candidate is fed, clothed, and shows up on time. In contemporary politics, money is an essential element of any campaign. Candidates, groups, and
parties need money to pursue their goals. How much money is needed depends on the level of the contest, but it is no surprise that serious contenders exert a lot of effort in raising as much funding as possible. Politicians are involved in a great juggling act, forever trying to keep the correct (i.e., winning) balance between fund-raising and other activities. A typical incumbent candidate for the House of Representatives raised just under $1 million in 2002; most candidates for statewide offices (U.S. Senate or governor) will need more. Depending on whether a campaign is state or federal, different laws regulate campaign contributions. Candidates without great personal wealth to squander must seek contributions from many sources, albeit within legal limits. Direct contributions come from individuals, political action committees and parties. Because the political system protects freedom of speech, parties and other groups also can spend unlimited millions of dollars independently to transmit information to voters. Money is used to buy many things—anything from bumper stickers, billboards, and buttons to
radio, newspaper and television commercials (and the best consultants to make the commercials) and direct mailings to registered voters. Money also has a way of attracting more money; candidates who look like winners by virtue of their full war chests are easily able to convince other contributors to bet on what looks like a sure winner. Furthermore, when an incumbent (current office-holder) seeks reelection, his/her ability to attract support can deter prospective challengers. Some observers comment that American election campaigns are lacking in issue content, but candidates and parties do concern themselves with issues, the last element of campaigns in O’Neill’s list. In particular, candidates want to control which issues voters are thinking about when they cast their ballots. For example, if the Republican Party is perceived by the public to be better at reducing government spending, the party would prefer to use campaign messages that make voters think about the drawbacks of big government. Ironically, a party out of power often hopes for legislative defeats on its strong issues prior to an electoral contest. In such a situation, the losing side can blame its opponents; it then has an enemy against which to rally its supporters, and a strong cause for encouraging its partisans to vote. This can have the side effect of producing a negative campaign; but to some extent, all candidates out of office must criticize the incumbents or there would be no reason to elect someone new. Of course, in some years, the issues at the forefront of the political agenda are beyond individual politicians’ immediate control; terrorist attacks, war, and unexpected economic turns can affect turnout and direction of the vote. In 2006, for example, exit polls showed that opposition to the continuing war in Iraq was one motivation for voting against the president’s party. The day after the November 7 election, Republican Representative Tom Reynolds argued that too many Republican members of the House of Representatives did not “localize” their races. In other words, because incumbents could not focus enough voters on local, district concerns, overall negative trends in public opinion on national issues affected outcomes. Whether campaigns for office are national, state or local, American voters are typically faced with a choice between two major political parties, Republicans and Democrats. One criticism of Ameri-
can campaigns is that the candidates of the opposing parties sound the same, making it difficult for voters to differentiate between the two or care about who wins. In part, this is true because of one key electoral rule that is used most widely in the United States: the winner is the candidate with the most votes, or the plurality. In such a system, any party that secures just over 50 percent of the vote can guarantee victory. Thus, two parties are encouraged, with each just trying to edge out the other. When a society is organized politically into only two groups, each risks “sounding like the other” because both are trying to obtain the support of voters in the moderate middle. Note that this assumes public opinion is not polarized on the major policy issues. In other words, most voters’ opinions are not consistently on the extreme left (liberal) or right (conservative) side. Third parties or unaffiliated candidates with a geographically dispersed, small percentage of the vote win no seats in government under a plurality rule. Campaigns at the national level attract the attention of the media and voters more readily than do local or state events. This is reflected in the fact that voter turnout in presidential election years is higher than participation in any other year. This is somewhat ironic because originally, it was seen as inappropriate for presidential candidates to campaign for office at all; it was judged to be beneath a worthy individual to grovel for votes for such an esteemed position. Now, of course, presidential hopefuls plan their strategies years in advance and are subject to intense personal scrutiny. The quadrennial November election is the culmination of a public campaign that starts at least a year in advance because the first stage of a serious presidential campaign involves securing the nomination of the Democratic or Republican Party. How does one become a nominee? He or she needs to win the support of the delegates at the party’s national convention, where there are delegates from each state. How does one do that? Nowadays, most delegates are committed to certain candidates before they arrive at the convention. How does a candidate get committed delegates from a state? He or she has to win lots of support from members of the party in state contests, the most common of which are primary elections or caucuses. Because all 50 states have some freedom in choosing the date of their primaries or caucuses, each candidate must campaign
throughout a sequence of state contests that lasts from February through early summer of the election year. To put it bluntly, this kind of campaign is not for the faint of heart. Candidates need lots of money, just the right amount of media coverage, and state-by-state organization before the year starts. If a candidate wins in a state with an early contest where such a victory was unexpected, positive news coverage can propel him forward toward more contributions and votes. Early losses, or victories that are not as large as polls predicted, cause many a decent candidate to drop out of the race. After securing a nomination, candidates campaign in the fall prior to the November balloting in hopes of winning the presidency by getting a majority of the electoral college vote. Each state is assigned a certain number of electoral votes; the larger a state's population the greater its number of electoral votes (which equal the number of the state's representatives in Congress). Presidential candidates, therefore, campaign by state; they should not waste time in states their advisers know will be lost. Competitive, large states like California are most likely to see active campaigning and advertising. While independent or third party candidates appear on general election ballots, campaign, and sometimes appear in televised presidential debates, their chances of winning the presidency are virtually nil. This is because each state's electoral votes all go to the one candidate with a plurality of the popular vote; second and third place finishers get zero. Since 1952, television has played a role in presidential campaigns, although measuring the impact of advertising, news, or debates is an imperfect science. Nevertheless, major candidates buy television time, and it is believed that its impact is crucial in some years. A case in point is 1988, when Democratic nominee Michael Dukakis lost despite early leads in opinion polls. The former Massachusetts governor was relatively liberal on crime, although this was not a focus early in the campaign. Then, Republicans learned that a Massachusetts prison furlough program had allowed a convicted murderer out of prison, and that he had committed rape and assault while released. Republicans repeatedly aired a powerful visual ad that showed criminals walking through a revolving door to instill fear of a Dukakis presidency. This attack was bolstered by the Democrat's poor performance in a subsequent, televised debate. The first question put to Dukakis asked if he would support the death penalty if his own
wife was raped and murdered. Without a moment’s hesitation or a show of emotion, he said “No, I wouldn’t.” Commentary following the event evaluated his performance negatively. While it is unknown whether Dukakis’s issue position itself would have been enough to defeat him, negative advertising coupled with negative news coverage following his weak handling of the debate allowed the opposition to focus the electorate on an issue that worked in its favor. Campaigns for the U.S. Congress have been studied extensively, and like presidential races, are very expensive, professionally organized enterprises. Members of the House of Representatives, whose terms of office are only two years long, arguably are campaigning all the time. Some have argued convincingly that all members’ behavior is consistent with the pursuit of reelection. In the process of holding on to their jobs, all representatives do “represent” as well (even if coincidentally). Each legislator develops a “home style,” by presenting him/herself in the local constituency to not only report on Washington activities, but more significantly build trust and empathy by fostering a unique identification with his or her people. Members assign staff to district offices and make trips home themselves in order to provide constituents with personal access. Voters who perceive they have access to a legislator and staff who will listen and help out when necessary, in turn provide political support that allows a savvy and responsive incumbent to get reelected. Indeed, incumbency is a House candidate’s biggest asset; in 2002, only eight incumbents were defeated in the general election, and 78 House members faced no major-party opposition. The Congressional budget covers mailings to constituents, phone lines, offices and staff salaries, resources that directly benefit anyone already in office. Members of the U.S. Senate receive even larger allocations for staff and communications, with the largest shares going to legislators from the most populous states. One of the differences between Senate and House campaigns worth noting concerns the impact of campaign expenditures. Spending by challengers in all races has a negative impact on how an incumbent does in an election; a challenger with lots of cash to spend already has been judged by contributors to be a viable threat and then spends the money to prove them right. However, while expenditures by Senate incumbents positively affect the number of votes received by them, this is not the case for House incumbents. In other words, because House incumbents
spend so much of their time attending to their constituencies, by the time the election rolls around every two years they are already in a solid position to win, regardless of how high their campaign expenditures rise. (This finding by political scientists has not caused House incumbents to stop spending money! Politicians are risk averse and, at a minimum, will do whatever worked in the last race.) In contrast, senators are elected for six-year terms, have more commitments in Washington, D.C., and serve large, diverse constituencies with whom they cannot stay in close touch. An expensive campaign every six years is needed to mobilize their supporters. Where do the political parties fit into all of this? On one hand, a lot of campaign literature does not mention the partisanship of the candidate. On the other hand, no candidate runs for office without first assessing how many Democrats and Republicans are in her or his state or district. Partisanship in the electorate affects candidates' chances, and party organizations can choose to play a role in fund-raising and election strategy. Candidates of both parties pay attention to the president's approval rating; a popular president can help his fellow partisans, while a weak one helps his opponents' campaigns. Overall, however, modern election campaigns are candidate-centered. See also elections; lobbying; negative campaigning. Further Reading Fenno, Richard F., Jr. Home Style: House Members in Their Districts. Scott, Foresman, 1978; Herrnson, Paul S. Congressional Elections: Campaigning at Home and in Washington. 4th ed. Washington, D.C.: Congressional Quarterly Press, 2004; Mayhew, David R. Congress: The Electoral Connection. New Haven, Conn.: Yale University Press, 1974; O'Neill, Tip. All Politics Is Local and Other Rules of the Game. New York: Times Books, 1994; Selecting the President From 1789 to 1996. Washington, D.C.: Congressional Quarterly Press, 1997; Troy, Gil. See How They Ran: The Changing Role of the Presidential Candidate. New York: The Free Press, 1991. —Laura R. Dimino
caucus
Most broadly, a caucus is a gathering of group members who meet to discuss issues of the group's
governance and/or public policy issues. Two main types of caucuses exist in the United States: partisan caucuses that are part of a political party's nomination process, and legislative caucuses. Legislative caucuses may be formal or informal, partisan or bipartisan. Caucuses in the United States have existed since at least 1763, when John Adams wrote, "This day learned that the Caucus Club meets at certain times, in the garret of Tom Dawes, the adjutant of the Boston Regiment. He has a large house, and he has a movable partition in his garret which he takes down, and the whole club meets in one room. There they smoke tobacco till you cannot see from one end of the room to the other. There they drink flip, I suppose, and there they choose a moderator, who puts questions to the vote regularly; and selectmen, assessors, collectors, wardens, fire-wards and representatives, are regularly chosen before they are chosen in the town" (quoted in Charles S. Thompson). Thus, the caucus functioned as both a forum for discussion and a kind of nominating system for public office. Once the new republic was established and political parties emerged, the congressional nominating caucus came into being. Party members in Congress would meet to decide on the presidential candidate the party would support. Typically referred to as "King Caucus," this method of selecting a party's presidential candidate fell out of use after the 1824 election. It was highly criticized for being secretive and undemocratic, and it also raised issues relating to the separation of powers. After the demise of "King Caucus," legislative caucuses continued to be a factor in Congress, but they were no longer responsible for nominating the party's presidential candidate. Each party's delegation in each chamber would meet to discuss matters of policy and decide on leadership positions. Today, the formal caucuses of Congress consist of the House Democratic Caucus, the Senate Democratic Conference, the House Republican Conference, and the Senate Republican Conference. These groups meet regularly to decide party governance issues within each chamber, including the election of the party's leadership. Furthermore, they are forums for discussion of policy and legislative issues, and they serve as strategy sessions for advancing the party's policy positions. The formal caucuses also help foster party unity.
In the latter half of the 20th century, informal legislative caucuses began to arise in Congress. These groups began to form around various policy issues and/or characteristics shared by members, and they advocated policies supported by group members. Susan Webb Hammond attributes the proliferation of these informal groups, especially during the 1970s and 1980s, to the decline of effective party leadership in Congress due to the increasingly complicated and cross-cutting policy issues arising during this time. A major purpose of this type of caucus is to serve as a way to exchange and disseminate information both among its members and with others outside the organization. Some of these informal organizations will be smaller, partisan groups that form within the larger party organization. For example, the Conservative Opportunity Society (COS) was formed in the 1980s by young, conservative Republican House members who wanted the Republican leadership to be more combative in their dealings with the majority Democrats. The COS was particularly important in advancing the congressional career of Newt Gingrich (R-GA), who would become Speaker of the House after Republicans captured control of that body in the 1994 elections. Similarly, the Democratic Study Group was formed by moderate and liberal Democrats in the late 1950s who were dissatisfied with the influence wielded by conservative members in the party. Other informal caucuses are bipartisan and may also be bicameral. The Congressional Caucus on Women’s Issues contains senators and representatives from both parties. The Congressional Black Caucus is similarly bicameral, and is officially bipartisan, although it currently has no Republican members. Some caucuses will form on the basis of members’ personal interests, such as the Congressional Arts Caucus. Other groups will form along regional lines or around a specific industry important to a member’s constituency, such as the Alcohol Fuels Caucus. Beyond the legislative variety, one will find caucuses used in some states as part of the party’s governance and nominating system for public office. Caucuses are used in conjunction with a series of conventions. A nonlegislative caucus is a meeting of party members, usually at the precinct level, who will choose delegates to the party’s next level of conven-
tion (typically county), which is then often followed by district and state conventions where national convention delegates will be chosen. Participants in caucuses may give speeches regarding the candidate they support, and delegates are often selected on the basis of candidate support (true of Democratic caucuses, not necessarily true of Republican ones). In addition, the caucus may discuss party governance issues, especially platform planks. Because of differences in the two parties' rules, as well as differences in state practices, the precise functioning of caucuses will vary. In presidential election years, states that do not use a presidential primary will utilize the caucus-convention method for the selection of national party convention delegates. Since the McGovern-Fraser reforms of the Democratic Party's delegate selection rules in the early 1970s, a minority of states now use caucuses for selecting convention delegates. The Republican Party was affected by these rule changes when states controlled by the Democratic Party changed state laws to comply, most adopting primaries. Caucuses gain attention in presidential election years (even though they may also occur in midterm election years), and the Iowa precinct caucuses are particularly prominent because they are the event that begins the official presidential nomination season. As such, potential presidential candidates flock to the state as early as two years before the presidential election, engaging in retail politics as they seek to advance their candidacy for their party's nomination by doing well in the Iowa caucuses. Other states that use caucuses do not garner as much attention as Iowa, or even as much attention as states that use primaries. This is partly because Iowa begins the presidential nominating season and partly because caucuses are a more complex process than a primary, serving as the first stage in a multistage process. Caucuses differ from primaries in several ways, even though the end result is the same: the selection of convention delegates. Caucuses are meetings held in designated places at a designated hour that will take substantially more time and effort from voters than participating in a primary, where one simply casts a ballot at a designated polling place during polling hours. Consequently, voter turnout at caucuses is lower than in primaries. Whereas the ballot cast in a primary is secret, this is often not true of a
caucus, where candidates’ views may be heard, and they may be viewed in candidate preference groups. Regardless of the type of caucus, all serve as forums for group members to deliberate about particular issues before the group. These issues may be about the selection of delegates, the election of party officials, other issues about the way the group governs itself, or matters of public policy. See also elections; political parties; primary system. Further Reading Hammond, Susan Webb. Congressional Caucuses in National Policy Making. Baltimore, Md.: Johns Hopkins University Press, 1998; Mayer, William G. “Caucuses: How They Work, What Difference They Make.” In In Pursuit of the White House, edited by William G. Mayer. Chatham, N.J.: Chatham House, 1996; Thompson, Charles S. “An Essay on the Rise and Fall of the Congressional Caucus as a Machine for Nominating Candidates for the Presidency” (1902). Reprinted in Leon Stein, ed., The Caucus System in American Politics. New York: Arno Press, 1974; Whitridge, Frederick W. “Caucus System” (1883). Reprinted in Leon Stein, ed., The Caucus System in American Politics. New York: Arno Press, 1974. —Donna R. Hoffman
coalition
In the world of politics and government, the term "coalition" has many meanings. In general, a coalition is a union or alliance in which different groups or individuals agree to work together for the purpose of a mutually agreed-upon end. Most coalitions are temporary, even fleeting. They tend to emerge on an issue-by-issue basis. In this sense, a coalition tends to be less binding and less permanent than a covenant, treaty, or alliance. Thus a coalition tends to be a union or bond, but a fairly weak one based on self-interest in a particular area. And coalitions come in all shapes and sizes, based on permanent as well as transitory reasoning and interests. Politically, a coalition may refer to a "coalition government," such as often occurs in parliamentary democracies with multiparty systems. In such governments, when no single party emerges from an
election with a majority of the votes or a majority of the members (seats) of the legislature, the leading party must form a coalition with other parties in order to make up a majority of the legislature and thus rule. This occurred in September 2005 when, after the German national election, no single party could claim control, and Angela Merkel's Christian Democratic Union, the leading vote-getting party, had to form a coalition government that included the Social Democratic Party. These coalitions can be quite fragile and tend to break up over policy or personal differences. After all, joining separate and different political parties together into a coalition may not work well, as there are policy differences as well as historic cleavages between and among parties. Such alliances, when they do work, often require a highly skilled politician at the helm or a unifying issue or crisis that compels the different parties to stick together in spite of their core differences. Italy has had a very difficult time holding its many different post–World War II coalition governments together, and with so many parties and the need to bring several of them together in order to form a government, any split may cause the government to collapse. In international affairs, a coalition often refers to a temporary or ad hoc union of different nations that come together to accomplish a specific goal. In 1991, U.S. president George H. W. Bush put together a large international coalition of nations to remove Iraqi military troops from Kuwait. The coalition engaged in a short and successful military operation, Saddam Hussein's troops were expelled from Kuwait, and a series of sanctions were imposed on Hussein's government in Iraq. Over a decade later, his son, President George W. Bush, attempted to put together another coalition to overthrow Saddam Hussein's government, but very few large nations were willing to join in the invasion, and the grand coalition that George H. W. Bush had successfully put together never materialized. Instead, the younger Bush was compelled to call his invading force a "coalition of the willing." In the politics of the United States, coalitions are common, necessary, and usually short-lived. Coalitions are often transitory and fleeting, based on short-term political and/or policy interests and the expectation of gain. Often a president will reach out "across the aisle" to the political opposition, and
attempt to forge a bipartisan coalition in Congress. At other times, the president will stick to his own party and attempt to unify the members in Congress to follow his program or proposals. The separation of powers system necessitates fluid and evolving coalitions where sometimes a coalition is based on partisan ties of political parties, sometimes based on regional interests, sometimes on ideological interests, and at other times on personal connections, relationships and interests. The separation of powers literally separates the executive from the legislature and in order to get the business of government accomplished, ways must be found to reconnect what the framers had separated. This is often accomplished via coalitions. For example, one of the key elements in a successful presidency is how capable the incumbent is at coalition building. Given that the president’s constitutional power is limited and shared, in order to govern effectively, presidents must develop coalitions to build congressional support for their proposals. One type of coalition is a partisan coalition, where the president tries to build support from his own party in Congress. This is possible when the president’s party controls a majority of the seats in the House and Senate. Often presidents will try to develop another type of coalition, a bipartisan or nonpartisan coalition. This occurs when the president reaches across the aisle and tries to join together members of both parties in a coalition not based solely on party loyalty. Presidents are most likely to reach out in this way when there is divided government (the White House is controlled by one party and at least one house of Congress is in the hands of the opposition party). Even when the president’s party controls both houses of the Congress, presidents may seek out bipartisan coalitions. A president may wish to develop a moderate solution to a problem, and the more extreme wing of his party may balk. In such cases, it is common for a president to reach out to moderates in the opposition party in an effort to develop bipartisan legislation. This may also make the president appear more moderate and thus, more palatable to the population in general. Some presidents have been able to rely on a partisan coalition. Democrats Franklin D. Roosevelt and Lyndon Johnson each had huge legislative majorities in Congress and did not have to reach out to develop
cross-partisan coalitions. Others, such as Republican Ronald Reagan, with a slim Republican majority in the Senate and a Democratic-controlled House, often reached out to conservative Democrats, and often with some success (particularly in 1981–82, with key legislative victories in the areas of tax cuts, decreases in social spending, and increases in defense spending). Still others, such as George W. Bush, relied almost exclusively on a partisan approach to governing during his first six years in office when the Republican Party controlled both houses of Congress (with the exception of mid-2001 through the end of the congressional term in 2002, when Democrats gained a one-seat majority in the Senate with the defection of Senator Jim Jeffords, a Republican from Vermont, who left the party to become an Independent and caucused with Senate Democrats). Bush was able to do this because his party had narrow control over both houses of Congress, and in the aftermath of the 9/11 tragedy, he was able to assert a bold and often unilateral brand of presidential leadership. The American political system is grounded in the building of coalitions, consensus, deal making, bargaining, and compromise. The point was not to create an efficient form of government but one that was representative and responsible and that would not trample on the rights of the citizen. The framers thus sought coalitional politics in order to moderate and filter the passions of politics and of the people. Creating a system that required coalition building was an important ingredient in this effort. Another type of coalition is a military coalition, in which the military forces of several nations are unified under the command of a single nation. Again, the 1991 political coalition put together by Republican president George H. W. Bush led to the establishment of a military coalition among many nations, and that coalition defeated Saddam Hussein's Iraqi army in Kuwait. See also interest groups; lobbying. Further Reading Harcourt, Alexander H., and Frans B. M. de Waal. Coalitions and Alliances in Humans and Other Animals. New York: Oxford Science Publications, 1992; Riker, William H. The Theory of Political Coalitions. New Haven, Conn.: Yale University Press, 1962;
Rochon, Thomas R., and David S. Meyer. Coalitions and Political Movements: The Lessons of the Nuclear Freeze. Boulder, Colo.: Lynne Rienner, 1997; Seligman, Lester G., and Cary R. Covington. The Coalitional Presidency. Chicago: Dorsey Press, 1989. —Michael A. Genovese
consensus
Consensus is an agreement on a particular issue or matter. In a command-oriented political system such as a military dictatorship, a societal consensus on an issue may not be necessary, as the junta leader often has the power and authority to command or require compliance. But in societies organized around more democratic principles, some modicum of consensus is often needed in order to reach a general agreement on how society should progress. A democratic society without a general consensus on operating procedures as well as means and goals is often a divided and thus a divisive society. This was the case in the United States in the post-Watergate era, when politics degenerated into a slash-and-burn hyper-partisanship, in which the cold war consensus that had animated political interaction for roughly 30 years broke down, and no new consensus replaced it. From that point on, a form of dissensus was the main operating style that characterized American politics as well as parts of America's political culture. In the absence of a consensus on which to base political discourse, American politics became more personalized, and attack politics became more prominent. This was primarily because the stakes were so high, with each side attempting to forge a new consensus and often believing that the first step in doing so was to destroy the opposition. Political opponents became enemies and politics became a war. Clearly a political system is more civil when a consensus exists, but often a consensus is a luxury that is politically and culturally unattainable. Where no consensus exists, politics often degenerates into petty wrangling and partisan sniping. Democratic societies reach a consensus on operating procedures through an agreed-upon constitutional structure. As long as this system is open and amenable to amendment, the society can usually reach agreement that the system is legitimate and worthy of compliance, even on issues with which
members may disagree. If the method of decisionmaking is deemed fair, the outcome is usually accepted as legitimate, even by those who were on the losing side of the debate. John Rawls, considered by many as the most influential political philosopher of the 20th century, and author of the influential book, A Theory of Justice (1971), discussed the importance of what he referred to as “overlapping consensus” for the achievement of justice and fairness in society. To Rawls, “Justice as Fairness” is a key principle in achieving a just community, and the development of a social and political consensus around rights and fairness is the key to sustaining a just and fair society. Rawls begins by questioning when and why citizens have obligations to the state, and what the state owes citizens. He writes about hypothetical agreements which are made under conditions of equality, so that no disparities in bargaining power place any citizen below another. This allows the state to make rules that apply to all and to which all are bound. This social contract view is grounded in reaching a consensus on what is to be done. To Rawls, a just social contract is one we would agree to if we did not know in advance who would be advantaged by the outcome. If we have a consensus on process, then the outcome may be seen as just, and therefore binding on all in the community. That Rawls’s entire theory of justice hinges on consensus stems from the deep and rich social contract tradition, so much a part of American political thinking. The government does not impose its will on the people, but seeks to develop a consensus, or agreement on rules and procedures that are widely considered to be just and fair. Rawls’s theory of justice had a profound impact on political theory in the late 20th century, and it continues today to influence democratic theory and political analysis. In pragmatic terms, consensus usually refers to the attempt to forge agreement out of disagreement. That is usually the job of the leaders of society. In a society where there is fundamental disagreement about who we are as citizens and where we want to go with policies in the future, reaching such a consensus can be difficult. Most difficult to achieve is a consensus on where society should go, how best to get it there, and the means and policies to be put to that end. Societies are often divided along religious, ethnic, economic, and other lines, and finding common
ground can be difficult. The task of leadership in a democratic society is to try to achieve a workable consensus on policy where different interests clash. This often leads to pursuing the "lowest common denominator," or the policy that is most acceptable to the most people and groups. This usually leads to watered-down policies, sometimes termed "satisficing." For such a system to work, collaboration, accommodation, bargaining, compromise, and deal making are required. Some good government advocates see this bargaining model as unbecoming and/or corrupt. How, they ask, can one make compromises and bargains over good policy issues? But this model is the "political" model of consensus achievement in a democratic system, and many see it as messy but effective. To be successful, this model of bargaining and consensus attainment requires the different sides to work with, accommodate, and respect one another. A skilled leader negotiates with these different groups in an effort to bring as many on board as possible, to try to reach an agreement or consensus that gives no one group a total victory but is a result that everyone can live with even if no one is totally pleased with the outcome. The observant French writer Alexis de Tocqueville warned that in such systems, a "tyranny of the majority" can emerge and the consensus can break down, leading to some groups becoming disaffected, as the excluded groups see the majority opinion dominating the policy making process and thus excluding many voices. Another problem in this consensus model is that, because so many different groups must be brought into the system, there is a danger that the government will try to buy some of them off in order to bring them into the process. This is sometimes characterized as the special interest state, where many groups have sufficient power to get the government to give them resources from the public budget, and before long, there is a huge bill to pay. Here, the government cannot say no, and thus there is a long and continuous drain on the budget as the interest group claims remain and even get stronger (institutionalized) over time. Still another problem with this model may be that, by being successful, the consensus system ends up consumed with the achievement of consensus at the expense of problem solving. If the consensus model requires watering solutions down, then problems
that are large may only get incremental reforms, thereby allowing a big problem to grow to crisis proportions. Can a consensus government also be a problem-solving government? Many critics are skeptical. A president who can present to the public a compelling vision of the future may be able, for a time, to break the stranglehold of interest groups on the policy making process. But few presidents can achieve this goal, and most become bargaining, or what are referred to as transactional, leaders. Further Reading Greenberg, Edward S., and Benjamin I. Page. The Struggle for Democracy. New York: Longman, 2003; Patterson, Thomas E. The American Democracy. Boston: McGraw-Hill, 2003; Rawls, John. A Theory of Justice. New York: Oxford University Press, 1971. —Michael A. Genovese
conservative tradition The celebrated historian Richard Hofstadter once observed that the American political landscape has been shaped by the opposing impulses of reaction and reform. Because Hofstadter harbored progressive sympathies even after his disillusionment with the political left, he probably overstated the significance or impact of reformist influences on the development of American political culture. Indeed, despite the obvious contributions of American progressives, radicals, and reformers to that development, especially during the Jacksonian and Progressive eras, the New Deal years, and the 1960s, American political culture has been characterized by the kind of incrementalism and institutional traditionalism that have formed the bedrock of Anglo-American conservatism for the last three centuries. At the risk of overstating the case, the so-called conservative tradition has been the American tradition, though popular lore has often obscured this conspicuous fact. Such popular lore is deeply rooted in foundation myths that seek to validate the very principles upon which most Americans believe the republic was founded. Foundation myths are as old as the earliest civilizations. Resilient, steadfast, and famously impervious to empirical refutation, they are rooted in an unimpeachable appeal and inexorable political utility. Their legitimacy has been affirmed by generations of
political writers who have endowed these “noble lies,” as Plato referred to them, with the intellectual support they would otherwise lack, and even rationalist theories of government have tolerated the propagation of stories intended to substantiate the ostensible superiority and uniqueness of certain political cultures through the selective and purposive exploitation of historiography. Most important, foundation myths do not discriminate among regimes; they exist and thrive in authoritarian environments as well as democratic ones. The United States is no exception in this regard. Americans have been reared on epic stories about the origins of the republic and compelling accounts of their political heritage rooted in notions of what it means to be American. A vast and persistent academic effort over the past 40 years has unearthed convincing historical evidence that many of these stories are, in fact, false. Nevertheless, false stories about some of the most conspicuous and, by now, copiously documented phenomena in American history refuse to die. Regardless of how much effort Americans invest in their attempts to dispel them, they only grow stronger. Such has been the case with arguably the greatest story of all—the American foundation myth. The American foundation myth portrays the establishment of an American constitutional republic as the culmination of a revolutionary movement that was imbued with a radical spirit whose objective was the construction of a democratic, if not egalitarian, society upon the principles that defined the revolutionary cause. Unfortunately, like many of the rest, this foundation myth, though it appeals to patriotic sentiment and consumer demand, is untrue. Stories of revolutionary radicalism and incipient egalitarianism have become part of our national fabric, but their conspicuous presence should not obscure their inaccuracy. The constitutional republic founded in 1787 may have been the product of a fundamentally transformed political and legal discourse, i.e., the product of an apparently radical discursive shift, but that shift was neither planned nor welcomed, and it emerged as the framers groped for a conservative solution to an unimaginable crisis. The romantic and vastly popular, yet mistaken and ahistoricist, view of the radical origins of the American polity centers, among other things, on a misreading of the Declaration of Independence
and the related efforts by scholars to utilize history as an instrument for the perpetuation of current ideological and methodological debates. Marxist, neo-Marxist, and anti-liberalist writers of various persuasions have frequently transformed founding-era ideologies into an intellectual monolith without texture or complexity to illustrate the pervasively degenerative sociocultural effects of industrialization and a liberal ethos that robbed the country of egalitarian ideals. Regardless of the personal reasons and methodological motivations, the results have been clear. Foundation myths about the centrality of radicalism as a seminal discursive construct and a pertinent, if not dominant, political ideal during the latter half of the 18th century persist even among those who should know better. The fact that the founders did everything in their power to negotiate a peaceful and practicable solution to the problems with the British government has been confirmed by scores of responsible scholars. The fact that practically every contemporary document reflects the founders' wish to reach such a solution is evident to anyone who reads those documents. Indeed, our Declaration of Independence submits that, during "every stage of the oppressions" that were imposed on the colonies, the colonists "petitioned for redress in the most humble terms." Moreover, the fact that the colonists' fear of a ministerial conspiracy was based in their desire to maintain the status they had theretofore enjoyed as subjects of the British Crown is a secret to no one. And finally, the fact that the founders believed the way to realize these wishes and desires was through a restoration of the ancient principles that were recaptured through the Glorious Revolution of 1688–89 should be obvious to all students of the American Revolution. The intent of this essay is not to desecrate the hallowed tale of America's struggle for independence, but independence was not the primary objective of that struggle. Rather, independence was the byproduct of the founding fathers' essentially conservative efforts to restore a traditional order on an increasingly unfamiliar political landscape, or, in the parlance of contemporary civic humanism, independence was a political expedient employed to impose virtue on the disorder created by British ministerial corruption. As such, the American colonists' prosecution of the war was a manifestation of an impulse that
impelled them to search for a conservative solution to the crisis with Britain based on custom-centered precedent. As had been the case almost 100 years prior during the vitriolic English controversies that marked the Exclusion Crisis and contributed to the ouster of James II, the founders wished to define the true origins of political authority and to restore the polity to its natural form. Although differences between the two situations existed, to the founders the similarities were manifest, and the significance of the Glorious Revolution as an ideological precedent could not be overstated. The founders were convinced that, as had been true of the participants in the Revolution of 1688, the purpose of their struggle for the restoration of the British polity was, according to James Wilson, to identify the universal propositions that "are naturally and necessarily implied in the very idea of the first institution of a state." Wilson demanded that, like the revolutionaries of 1688, the North American colonists "be reinstated in the enjoyment of those rights to which" they were "entitled by the supreme and uncontrollable laws of nature." He and his peers hoped to demonstrate anew the validity of the "principles on which the British nation acted, as a body, in deposing King James the second, that tyrannical oppressive prince, when pursuing measures tending to their destruction." In other words, the American revolutionaries were no band of radicals devoted to the nullification of an inherited political system. Rather, they were conservatives whose classical and humanist influences were obvious to everyone. Above all, those influences supported a tradition-bound conception of politics and an underlying wish to restore colonial government, as Wilson stated, to "the fundamental maxims" that defined "the British constitution" and "upon which, as upon a rock, our wise ancestors erected that stable fabric" of English political society. Contemporary documents such as the Fairfax County Resolves confirm these conservative aspirations, inasmuch as they embody the American revolutionaries' wish to reestablish their government according to the political priorities their ancestors had recognized. This important document declares that the original English settlers possessed "privileges, immunities, and advantages which have descended to us their posterity, and ought of
right to be as fully enjoyed, as if we had still continued within the realm of England.” The colonists’ political objectives were shaped by the same conservative impulse that had driven previous generations of Englishmen to take refuge from political instability in the comforts of English constitutional certitude and traditionalism. Had the American revolutionaries truly been radicals, they would not have been troubled by allegations of British conspiracies against them, but they would have welcomed those allegations and used them as a pretext for the supposedly radical political and legal transformations they envisioned. Instead, reports of ministerial complicity against the American colonies instilled terror in the hearts and minds of colonists who wished to return British politics to the political and legal course that had been outlined by political actors such as John Locke during the constitutional crises of the 17th century. For those interested in tracing American conservatism to its ideological origins and formative principles, the English common-law tradition, particularly as interpreted by Sir Edward Coke, serves as a useful starting point. By the beginning of the discontent in the North American colonies during the mid to late 1760s, English (and also British) politics had long been dominated by a custom-centered, institutionally defined constitutionalism whose intellectual lineage stretched, via Coke and Sir John Fortescue (both influential jurists of their time), to the judicial and legal changes of the 12th and 13th centuries. It was a lineage anchored in the conviction that the political authority and legal stability of England is a function of the multigenerational definition of English institutions that is possible only through an adherence to custom, tradition, and legal precedent. Coke argued that the aforementioned multigenerational definition produces “an artificial perfection of reason, gotten by long study, observation, and experience.” Coke observed that natural reason is “dispersed into so many several heads” that universal propositions of law are not definable by any one individual but that, “by succession of ages” law can be “refined by an infinite number of grave and learned men and by long experience.” The ancient constitution, as it is called, was an umbrella concept that referred to a unified collection of unwritten, codified, or implied customs, standards, and traditions, or,
more broadly, to the laws, institutions, and political principles whose function was, according to Lord Fairfax, "the ordering, preservation and government of the whole" body politic of England. It would be difficult to overstate the significance of tradition, custom, and precedent as the anchors of American political culture during its seminal stages of development. As already demonstrated, most of the founders of the American republic were proud of their English heritage and only reluctantly renounced their imperial ties. They broke the political bonds that bound them to Britain not in an attempt to reject their Englishness but in an effort to preserve it. Furthermore, since the newly established United States lacked credibility and stature in a Eurocentric world that largely viewed the American experiment in republican government with great skepticism and even condescension, the founding generation of Americans was especially eager to embrace, or reaffirm, those aspects of English tradition, such as a common-law bias toward incrementalist institutionalism, that apparently improved its credibility and stature. Of course, despite the cultural biases toward traditionalism and incrementalism, what most people have come to know as American conservatism has not enjoyed an unchallenged or inevitable status as a dominant force in American politics. From a long-term perspective, some of our earliest proponents of English-centered traditionalism, particularly of a kind devoted to institutionalist elitism and sociocultural stratification, such as John Adams and Alexander Hamilton, were eventually overshadowed by the association of conservatism with proslavery manorialism and rationalizations of antebellum southern racism. John Calhoun, John Taylor, and even seemingly less legitimate spokesmen for the southern cause such as George Fitzhugh were eventually lumped together as exponents of a failed and bankrupt effort to impede social progress and political reform in the name of invalid conservative principles. From a historical standpoint, American conservatism was further undermined by various antireform politicians and jurists of the Progressive and New Deal eras, who opposed the ostensibly unconstitutional expansion of governmental authority into economic and social arenas. In addition, conservatism has often been linked with the politics of segregation
and continued discrimination in the South, which frequently enabled progressives and reformists to delegitimize conservatism through guilt by association. More broadly, during much of the 20th century, conservatism served as a convenient scapegoat for intellectuals and academics who wished to highlight the deficiencies of liberal democracy and capitalism in the West. These developments notwithstanding, it can be argued that, unlike academics and intellectuals, most Americans have remained fundamentally incrementalist and dedicated to the maintenance of mainstream sociocultural traditions and the protection of time-honored political precedents. More to the point, though it would be misleading to claim that most Americans have been political conservatives, they have been devoted to the kind of political centrism and institutional stability that conservatism espouses as its core. Consequently, the American public never became disenchanted with conservatism en masse, nor did it ever reject its principles in any significant way. In other words, despite several periods over the past 100 years or so when conservatism confronted serious intellectual setbacks, in its various guises, it has enjoyed consistent support from a significant portion of the citizenry. During the late 1940s and throughout the 1950s, conservatism in the United States acquired a wholly American character that has provided it with a unique political legitimacy among practitioners and scholars alike. As a result, the conservative tradition, in its most popular guises, has remained one of the mainstays of American political culture since World War II. Not least because of the obvious ideological manifestations of the cold war, American conservatism became a veritable cause célèbre during the early stages of the geopolitical struggle with the Soviet Union and its satellites. A considerable intellectual effort endeavored to substantiate the American way of life and to prove the superiority of American values and political ideals over those of its geopolitical rivals. Much of that effort was inspired by an unshakeable belief in American exceptionalism and the inevitable ascendancy of liberal democracy in the United States. Leo Strauss was arguably the most famous political scientist associated with these consensus-era views, but he was hardly the only one. Strauss located
the inherent predominance of American political culture in a Western intellectual genealogy that stretched, through the humanists, back to classical thinkers and civilizations. Through his writings, he hoped to provide a justification not only for the assertive defense of American interests but also for the support and promotion of liberal democratic principles across the globe. Over the last several years, Strauss's ideas have constituted the foundation of so-called neoconservative policies dedicated to the propagation and protection of liberal governments and capitalist economies in key geopolitical theaters. Political historians such as Clinton Rossiter and, later, Forrest McDonald became the standard-bearers of the consensus school, whose purpose was to illustrate the historical triumph of Anglo-American ideals devoted to incrementalism, stability, tradition, and individual rights. Stories of American radicalism and revolutionary-era upheaval gave way to depictions of intra-imperial constitutional struggles intended to restore a sense of order and English political propriety to colonial and newly established state governments. All in all, consensus history strove to identify those cultural traits that enabled liberal government and free markets to survive in the United States and eventually to dominate much of the rest of the world. Furthermore, this intellectual effort on behalf of a uniquely American political culture that ostensibly ensured the triumph of American ideals over those of rival cultures found enthusiastic support among nonacademic commentators. Journalists and authors such as William F. Buckley, Jr., and George Will offered conservatism a popular approval that intellectual movements often lacked through an appreciation, if not veneration, of the common man. Their discussion and recognition of the centrality of the average American, i.e., the prototypical denizen of middle America, elevated conservatism by embracing a populist character that the elitism of Alexander Hamilton and John Adams had so conspicuously shunned. Of course, this mixture of populism and conservatism was occasionally vulnerable to what some have seen as intolerance and exclusion, particularly in the capable hands of someone like Pat Buchanan. More than anything else, however, it was this popularization and, therefore, populist legitimization of conservatism that allowed it to transcend its
customarily academic roots and to transform itself into a viable political force. The popularization of conservatism and its evident utility in combating communism and autocratic government rendered it particularly malleable as an ideological weapon in the hands of conservative politicians. During the 1950s and even the 1960s, self-proclaimed conservatives such as Barry Goldwater and Ronald Reagan provided American conservatism with an appeal and political validity that would ultimately inspire some of the most powerful politicians of the 1980s and 1990s. So-called paleo-conservatives seemed to have the upper hand during the 1970s, particularly as evidenced by Richard Nixon's policies of détente and the support of the sobering, yet more practical, objectives of political realism. By the 1980s, after the perceived failures of the Jimmy Carter years and the apparent retreat of American greatness, both economically and diplomatically, idealism would eventually trump realism, and the defense of liberal democratic principles at home and abroad acquired new vigor under conservatives such as Ronald Reagan. Although conservatives such as Goldwater and, to a lesser extent, Reagan were largely opposed to the institutionalization of religious dogma within the conservative movement, during the last 20 years American conservatism has become especially susceptible to manipulation and usurpation by religious dogma. It would be historically inaccurate to credit George W. Bush and his advisers with this phenomenon, but, on the other hand, it would be just as wrong not to recognize the significant role Bush and his team have played in the institutionalization and legitimization of religion within American conservatism. As a result, though much of middle America is undoubtedly pious and even supportive of the growing influence of religion in American politics, conservatism risks losing its appeal, especially among those who have primarily been drawn to conservatism due to its promotion and protection of the political ideals upon which this country was seemingly founded. See also liberal tradition. Further Reading Arendt, Hannah. The Human Condition. Chicago: University of Chicago Press, 1959; Bloom, Allan. The Closing of the American Mind. New York: Simon and
Schuster, 1987; Burgess, Glenn. Absolute Monarchy and the Stuart Constitution. New Haven, Conn.: Yale University Press, 1996; Diggins, John Patrick. “Comrades and Citizens: New Mythologies in American Historiography,” American Historical Review 90 (1985): 614–638; Hartz, Louis. The Liberal Tradition in America: An Interpretation of American Political Thought since the Revolution. New York: Harcourt Brace, 1955; Pangle, Thomas L. The Spirit of Modern Republicanism: The Moral Vision of the American Founders and the Philosophy of Locke. Chicago: University of Chicago Press, 1988; Pocock, J. G. A. The Ancient Constitution and the Feudal Law: A Study of English Historical Thought in the Seventeenth Century. Cambridge: Cambridge University Press, 1987; Strauss, Leo. The Rebirth of Classical Political Rationalism, an Introduction to the Thought of Leo Strauss: Essays and Lectures by Leo Strauss. Edited by Thomas L. Pangle. Chicago: University of Chicago Press, 1989. —Tomislav Han
corruption
It will come as no surprise that politics has always had its seamy side. The lure of power, position, status, and money is sometimes far too tempting for mere mortals to resist. In the most general terms, political corruption refers to the illegal or unauthorized use of a public office for some private or personal gain. The most common forms of political corruption are bribery, extortion, the abuse of power, and the misuse of government authority or information. During the 20th century, presidents have often been subjected to investigation based on allegations of corruption. According to organization scholar Chester Barnard, the chief role of the executive is to manage the values of the organization. Presidents, no less than corporate executives or college presidents, set the moral tone for their administrations, give clues as to acceptable and unacceptable behavior, and establish norms and limits. Presidents demonstrate by both words and deeds the kind of behavior that will be tolerated, and the standards that will be applicable to the administration. Presidents may also have an impact on the moral climate of the society beyond their administrations. They can, at times, use the bully pulpit to speak to the values of society. They
may also serve as moral symbols for the nation. And on occasion, they can influence the cultural norms of the society. Thus, presidential corruption is an important yet understudied subject of inquiry. (Note that the presidencies considered the most corrupt in U.S. history are the administrations of Ulysses S. Grant, Warren G. Harding, Richard M. Nixon, Ronald Reagan, and Bill Clinton). A great deal of scholarly attention has been devoted to the problem of governmental corruption. Likewise, attention has focused on the individual cases of presidential corruption. But nowhere in the literature is there a work that brings these two areas of inquiry together in the hope of developing a framework for understanding presidential corruption. Sadly, politics has become a dirty word in the American political lexicon. The use of the word politics as a pejorative has become commonplace in the United States, and to say "Oh, he is political" or "That's just politics" expresses a resignation that politics is somehow a corrupt world or beneath contempt. To say "he is just being political" is to mock the accused, as well as taint the process. Such an attitude reflects, in part, a healthy skepticism of governmental power. But this scorn and cynicism is so widespread and runs so deep in American culture that it may be counterproductive. After Vietnam, Watergate, the Iran-contra scandal, Whitewater, and other more recent scandals, it may be understandable that the public has become cynical, but such an attitude has consequences. If politics is, or appears to be, corrupt and scandal-ridden, and if self-interest and not pursuit of the public interest appears to be the chief motivating force in American politics, then it should not surprise us that the public sees "politics" in a negative light. But such an attitude is likely to undermine democracy and respect for the rule of law, as well as drive good (i.e., not corrupt) people out of the political process. Political corruption is a difficult subject to understand because so much remains hidden from view. That is in fact the whole point of political corruption: to get away with not following the rules that govern the political process. Additionally, there is no commonly accepted definition of what constitutes corruption. There are many definitions but no consensus on
just what constitutes political corruption. One such definition comes from international relations scholar Joseph Nye: corruption, to Nye, is behavior that deviates from the normal duties of a public role because of private-regarding, pecuniary, or status gains, or that violates rules against the exercise of certain types of private-regarding influence. This includes such behavior as bribery, nepotism, and/or misappropriation of resources. This definition covers a great deal of territory and is fairly broad, covering, for example, nepotism (the granting of government positions to family members) as a form of political corruption. Another scholar of corruption, Peter DeLeon, narrows Nye's definition a bit: to DeLeon, political corruption is a cooperative form of unsanctioned, usually condemned policy influence for significant personal gain, in which the gain could be economic, social, political, or ideological rewards. Several elements seem relevant when thinking about political corruption. First, it involves the conduct of a public official in his or her public capacity. This excludes private or personal forms of scandal arising from one's private life (including personal and/or sexual relationships). Thus, when President Bill Clinton had an inappropriate relationship with a White House intern, that was a private form of corruption, but not a public one. When he subsequently lied about it under oath, it became a public problem. Second, political corruption involves behavior that violates law or accepted norms. In this sense, when Congressman Randy "Duke" Cunningham (R-CA) was indicted for taking bribes while in office and forced to resign his House seat, he had violated the law. Third, gain of some sort is involved. Why else would anyone engage in such risky behavior? This may involve money, power, positions, etc. Fourth, it thus involves a violation of the public trust. In short, the public has a right to expect better or, at least, different behavior from public officials. The sources of corruption are many and varied: the quest for power, the drive for money, the hunger for status, and/or the political and ethical climate of the times. Essentially, one could reduce the sources or causes of corruption to three categories: personalistic, institutional, and societal. First, personalistic/psychological/individual explanations trace the root
causes of corruption to weaknesses within the individual. This “rotten apples” view of corruption sees the causes of corruption buried deep in the psyche of the person. The culprit here is “human nature,” and our inability to control our natures when temptation arises. But why does one person crumble in the face of temptation, and another resist its force? While the personalistic explanation is appealing, in the long run it may be unsatisfying. Second, the institutional explanation posits that the “rotten barrels” view best explains political corruption. In presidential terms, such factors as the bloated size of the presidency, the increased power and responsibilities of the office, and the absence of effective checks and balances, may all contribute to corruption. This view is more useful because it goes beyond individual behavior, but its drawback is that remedies look to legalistic cures in isolation from politics and culture. The third view, the societal/systemic explanation looks to the larger forces, the cultural climate, the temper of the times, or the capitalist system, suggesting that corruption is caused by the interaction between the individual in government and these social forces. Here, corruption is seen, not as the failure of individuals, or institutions, but as reflecting attitudes, pressures, or demands being placed upon the government. The weakness of this view is that it is somewhat soft; that is, it is so all encompassing and comprehensive as to lose some of its analytical meaning and clarity. Regardless of the dispute over causes, there is little dispute over the consequences of corruption. Widespread political corruption leads to apathy, cynicism, alienation, and instability within the political and governing process. The more a society sees its politics and politicians as corrupt, the more likely will individuals be to drop out of the system, refrain from voting, disengage from political debate and dialogue, and cease to be fully functioning citizens. Such consequences of corruption can be devastating to the fulfillment of a robust democratic political culture. Corruption drives a wedge between the citizen and the political process, and as such, no democracy can long exist when the people see politics as corrupt and self-serving. A democratic political system requires a democratic culture, and high levels of political corruption undermine efforts to
create that viable and robust democratic political culture. Further Reading Nye, Joseph. “Corruption and Political Development,” American Political Science Review, LXI, 2 June 1967 417–27; DeLeon, Peter. Thinking About Political Corruption. Armonk, N.Y.: M.E. Sharpe, 1993. —Michael A. Genovese
Democratic Party The Democratic Party is the oldest continuously existing political party in the world. The Democratic Party traces its earliest origins to the antifederalist delegates at the Constitutional Convention in Philadelphia in 1787. Led by Alexander Hamilton, a New York delegate, the federalists wanted to replace the Articles of Confederation with a more powerful national government exerting supremacy over the states through their proposed U.S. Constitution. In order to attract the support of the more moderate, compromising antifederalists at the convention and during the ratification process among the states, the federalists promised to submit a Bill of Rights to the states for ratification as the first 10 amendments. The Bill of Rights limited the power of the national, or federal, government by protecting civil liberties and states' rights from excessive federal power. Other antifederalists, such as Governor George Clinton of New York and Patrick Henry of Virginia, actively opposed ratification of the Constitution. They feared that the Constitution would eventually lead to a tyrannical presidency, a corrupt and antidemocratic Congress, and the destruction of individual liberty and states' rights. Although Thomas Jefferson was skeptical of the Constitution and shared many antifederalist concerns, he was in Paris serving as American minister to France and did not campaign against ratification. Shortly after George Washington became president in 1789, he appointed Jefferson secretary of state and Hamilton secretary of the treasury. Jefferson suspected that Hamilton, a founder and leader of the Federalist Party, excessively influenced Washington's
decisions. Although Washington tried to be nonpartisan, the president increasingly sided with federalist policies, including a pro-British, anti-French foreign policy, a national bank, a tight money supply, and high protective tariffs. Jefferson perceived these policies to be detrimental to American relations with the new government of France, agricultural exports, states' rights, and civil liberties. Although James Madison participated with Hamilton in writing and advocating ratification of the Constitution, Madison, then a congressman from Virginia, joined Jefferson in opposing federalist policies. At the beginning of Washington's second term in 1793, Jefferson and Madison established the Democratic-Republican Party. Also known as the Jeffersonian Republicans or simply as the Republicans, the Democratic-Republican Party sought to organize and formalize opposition to Federalist policies in Congress and within the Washington administration. It also wanted to provide an alternative ideology and policy agenda to Americans, especially regarding the interpretation and application of the federal government's powers in the Constitution. Before he left the presidency, George Washington warned his fellow Americans about the divisive, corrupting influence of partisanship in his farewell address. Conflict between the Federalist and Democratic-Republican Parties intensified, however, during the presidency of John Adams, a Federalist and Washington's vice president. Thomas Jefferson was elected vice president under Adams in 1796 because he received the second largest number of electoral college votes. Adams and a Federalist-controlled Congress enacted the Alien and Sedition Acts of 1798 as they prepared for war with France. In particular, the Sedition Act criminalized the publication of "false, scandalous, and malicious writing" against the federal government and its officials. Jefferson, Madison, and other Democratic-Republicans perceived this law as a Federalist attempt to suppress and punish any criticism of Federalist policies and officials, especially President Adams, by Democratic-Republican newspapers and politicians. The Democratic-Republican Party used its principled opposition to this law as a major issue position in the presidential and congressional elections of 1800.
DEMOCRATIC TICKET FOR THE PRESIDENCY

Election year | Result | Nominees: President | Vice President
1792 | lost | (none) | George Clinton
1796 | lost (a) | Thomas Jefferson | Aaron Burr
1800 | won (b) | Thomas Jefferson | Aaron Burr
1804 | won | Thomas Jefferson | George Clinton
1808 | won | James Madison | George Clinton
1812 | won | James Madison | Elbridge Gerry
1816 | won | James Monroe | Daniel Tompkins
1820 | won | James Monroe | Daniel Tompkins
1824 | lost (c) | William H. Crawford | Albert Gallatin
1828 | won | Andrew Jackson | John Caldwell Calhoun[1]
1832 | won | Andrew Jackson | Martin Van Buren
1836 | won | Martin Van Buren | Richard Mentor Johnson
1840 | lost | Martin Van Buren | Richard Mentor Johnson
1844 | won | James Knox Polk | George Mifflin Dallas
1848 | lost | Lewis Cass | William Orlando Butler
1852 | won | Franklin Pierce | William Rufus de Vane King[2]
1856 | won | James Buchanan | John Cabell Breckinridge
1860 | lost | Stephen Arnold Douglas (Northern); John Cabell Breckinridge (Southern) | Herschel Vespasian Johnson (Northern); Joseph Lane (Southern)
1864 | lost | George Brinton McClellan | George Hunt Pendleton
1868 | lost | Horatio Seymour | Francis Preston Blair, Jr.
1872 | lost | Horace Greeley[3] | Benjamin Gratz Brown
1876 | lost | Samuel Jones Tilden | Thomas Andrews Hendricks
1880 | lost | Winfield Scott Hancock | William Hayden English
1884 | won | Stephen Grover Cleveland | Thomas Andrews Hendricks[2]
1888 | lost | Stephen Grover Cleveland | Allen Granberry Thurman
1892 | won | Stephen Grover Cleveland | Adlai Ewing Stevenson
1896 | lost | William Jennings Bryan | Arthur Sewall
1900 | lost | William Jennings Bryan | Adlai Ewing Stevenson
1904 | lost | Alton Brooks Parker | Henry Gassaway Davis
1908 | lost | William Jennings Bryan | John Worth Kern
1912 | won | Thomas Woodrow Wilson | Thomas Riley Marshall
1916 | won | Thomas Woodrow Wilson | Thomas Riley Marshall
1920 | lost | James Middleton Cox | Franklin Delano Roosevelt
1924 | lost | John William Davis | Charles Wayland Bryan
1928 | lost | Alfred Emmanuel Smith | Joseph Taylor Robinson
1932 | won | Franklin Delano Roosevelt | John Nance Garner
1936 | won | Franklin Delano Roosevelt | John Nance Garner
1940 | won | Franklin Delano Roosevelt | Henry Agard Wallace
1944 | won | Franklin Delano Roosevelt[2] | Harry S. Truman
1948 | won | Harry S. Truman | Alben William Barkley
1952 | lost | Adlai Ewing Stevenson II | John Jackson Sparkman
1956 | lost | Adlai Ewing Stevenson II | Carey Estes Kefauver
1960 | won | John Fitzgerald Kennedy[2] | Lyndon Baines Johnson
1964 | won | Lyndon Baines Johnson | Hubert Horatio Humphrey
1968 | lost | Hubert Horatio Humphrey | Edmund Sixtus Muskie
1972 | lost | George Stanley McGovern | Thomas Francis Eagleton; Robert Sargent Shriver[4]
1976 | won | James Earl Carter, Jr. | Walter Frederick Mondale
1980 | lost | James Earl Carter, Jr. | Walter Frederick Mondale
1984 | lost | Walter Frederick Mondale | Geraldine Anne Ferraro
1988 | lost | Michael Stanley Dukakis | Lloyd Millard Bentsen Jr.
1992 | won | William Jefferson Clinton | Albert Arnold Gore, Jr.
1996 | won | William Jefferson Clinton | Albert Arnold Gore, Jr.
2000 | lost | Albert Arnold Gore, Jr. | Joseph Isadore Lieberman
2004 | lost | John Forbes Kerry | John Reid Edwards

(a) Jefferson did not win the presidency, and Burr did not win the vice presidency. However, under the pre–Twelfth Amendment election rules, Jefferson won the vice presidency due to dissension among Federalist electors. (b) Jefferson and Burr received the same total of electoral votes. Jefferson was subsequently chosen as president by the House of Representatives. (c) Crawford and Gallatin were nominated by a small group of their congressional supporters, which called itself the Democratic members of Congress. Gallatin later withdrew from the contest. Andrew Jackson, John Quincy Adams, and Henry Clay also ran as Republicans, although they were not nominated by a national body. While Jackson won a plurality in the electoral college and popular vote, he did not win the constitutionally required majority of electoral votes to be elected president. The contest was thrown to the House of Representatives, where Adams won with Clay's support. The electoral college chose John C. Calhoun for vice president.
[1] Resigned from office. [2] Died in office. [3] Died before the electoral votes were cast. [4] Thomas Eagleton was the original vice presidential nominee but withdrew his nomination less than a month after receiving it.
Further weakened in his reelection effort by the active opposition of fellow Federalist Alexander Hamilton, Adams was succeeded in the presidency by Jefferson in 1801. With his party now controlling Congress, Jefferson changed American foreign policy by imposing a trade embargo against Great Britain. He and his party’s members of Congress also tried to remove Federalists from the army and judiciary. Although he and his party ideologically rejected heavy federal spending and discretionary presidential powers, Jefferson agreed to the Louisiana Purchase from France in 1803. He partially justified this major expansion of American territory by claiming that it would promote a more democratic society, both politically and economically, by providing free or affordable farmland to American settlers. The Democratic-Republican Party became known as simply the Democratic Party by 1828 and was formally renamed the Democratic Party in 1840. This signified more than merely a name change. By the 1820s, the Democratic-Republican
Party intensified its rhetorical and ideological identity as the party of the "common man" and the only truly "democratic" party in its ideas, policies, and intraparty decision-making processes. For example, following the practice pioneered by the short-lived Anti-Masonic Party, the Democrats adopted the use of national conventions with delegates elected from the states to write and pass party platforms and nominate their presidential and vice presidential candidates. The Democrats also promoted the enactment of universal white male suffrage, i.e., the end of property requirements for voting rights by the states. This issue position was especially attractive to urban laborers, frontier settlers, and the growing number of Irish and German immigrants. First elected president in 1828, Democratic war hero Andrew Jackson further encouraged widespread participation in party politics and government service by expanding the spoils system and rewarding successful party service with patronage jobs and federal contracts. Until the election of the first Whig president in 1840, the Jacksonian Democrats had
benefited from the absence of a strong, united opposition party since the demise of the Federalist Party shortly after the War of 1812. From 1840 until 1860, the Democratic and Whig Parties were equally competitive in presidential elections. During this period, however, both parties were internally, regionally, and ideologically divided on the issue of slavery. The results and consequences of the 1860 presidential election signified the end of the Whig Party, the rise of the Republican Party, and a sharp, prolonged decline for the Democratic Party. During the Civil War (1861–65), northern Democrats were divided between War Democrats who generally supported the Union war policies of Abraham Lincoln and the Republican Party, and Peace Democrats, or Copperheads, who wanted to negotiate and compromise with the Confederacy to end the war. Despite heavy Union casualties and riots against draft laws, Lincoln was reelected in 1864 with Andrew Johnson, a War Democrat from Tennessee, as his running mate. The controversy of Johnson’s impeachment and narrow acquittal by Republicans in Congress dramatized the close, bitter competition between the Democratic and Republican Parties from 1865 until the Republican realignment of 1896. Democratic electoral strength in presidential and congressional elections gradually increased after 1865 as southern states were readmitted to the United States and immigration sharply increased the populations of northern industrial cities. Although the Democrats won only two presidential elections from 1868 until 1912, Democratic presidential nominees won more popular votes than their Republican opponents in 1876 and 1888, but the Republicans narrowly won these elections in the electoral college. Potential Democratic voting strength was reduced by the proliferation of minor parties that opposed Republican economic policies, especially high tariffs and the gold standard. In an effort to co-opt members of the Populist Party, the largest of these protest parties, the Democratic Party gave its 1896 presidential nomination to William Jennings Bryan, a Nebraska congressman who belonged to the Democratic and Populist Parties. With an impassioned, evangelical tone in his speeches, Bryan zealously denounced Republican economic positions for enriching bankers, railroads,
and industrialists while exploiting and impoverishing farmers, laborers, and consumers. Nominating Ohio governor William McKinley for president, the Republican campaign shrewdly and effectively promoted its economic platform as equally benefiting business, labor, and agriculture, and portrayed Bryan as a demagogic economic radical and a rural, religious fundamentalist who was intolerant of immigrants and big cities. Consequently, many previously or potentially Democratic immigrants and urban laborers voted Republican in 1896. The Republican realignment of 1896 exerted an enduring influence on the results of American presidential and congressional elections until the Democratic realignment of 1932–36. Woodrow Wilson, the Democratic governor of New Jersey, won the 1912 presidential election with 42 percent of the popular votes because the Republican electoral majority divided its popular votes between incumbent Republican president William H. Taft and former Republican president Theodore Roosevelt, who was the nominee of the Progressive Party. After adopting some of the Progressive platform for his legislation, Wilson was narrowly reelected in 1916. Toward the end of World War I, Democratic foreign and domestic policies under Wilson became unpopular and controversial. The Republicans won control of the Senate in 1918, partially because of Wilson's uncompromising advocacy of the League of Nations. The national prohibition of alcohol divided Democrats between mostly northern "wets" and mostly southern "drys." In addition, high inflation, labor unrest, a Communist scare, and universal suffrage for women contributed to a Republican landslide in the presidential and congressional elections of 1920. Until the Great Depression began in 1929, many Americans associated the Republican administrations of the 1920s with peace and prosperity. The Democratic Party's lack of intraparty cohesion and national voter appeal was further aggravated by the Ku Klux Klan's rising political influence, intraparty differences over prohibition, and the alienation of northern, urban Catholic Democrats from the southern domination of the party. The Democrats needed 103 ballots at their raucous 1924 national convention to nominate a presidential candidate, John W. Davis. Republican candidate Calvin Coolidge easily
won the 1924 presidential election as Robert LaFollette, the presidential nominee of the National Progressive Party, won 16 percent of the popular vote. LaFollette’s candidacy was attractive to nonsouthern farmers and laborers who believed that their economic grievances were ignored by the two major parties. In 1928, Republican presidential nominee Herbert Hoover easily defeated the Democratic nominee, Governor Alfred E. Smith of New York. Especially in the South, Smith’s candidacy was weakened by his Catholicism, opposition to prohibition, and relationship with Tammany Hall, a notorious political machine. After the stock market crash of 1929, however, many Americans blamed Republican policies for the widespread economic suffering of the Great Depression. The Democrats won control of the House of Representatives in 1930. Reelected governor of New York by a landslide in 1930, Franklin D. Roosevelt emerged as the leading candidate for the 1932 Democratic presidential nomination. Assuming that Smith, his main rival, would be endorsed by most northern urban delegates at the Democratic national convention, Roosevelt first secured the support of most southern and western delegates. Roosevelt’s economic and agricultural proposals for lower tariffs, rural electrification, and soil conservation were especially appealing to the South and West. After Speaker of the House John Garner of Texas urged his delegates to support Roosevelt, the New York Democrat was nominated for president and chose Garner as his running mate. With the economy worsening under Hoover, the Democrats easily won control of the presidency and Congress in 1932. Promising the nation vigorous presidential leadership and bold experimentation in policies to combat the Great Depression, Roosevelt called his domestic policy agenda the New Deal. The New Deal tried to stabilize and stimulate the economy by emphasizing economic cooperation and planning among government, business, labor, and agriculture through new laws and agencies, especially the National Recovery Administration (NRA). The New Deal also reduced unemployment and poverty through public works projects, new social welfare benefits, agricultural subsidies, legal rights for labor unions, and a national minimum wage.
Roosevelt was reelected in 1936 with over 60 percent of the popular vote and the electoral votes of all but two states. The 1936 election results also signified a long-term Democratic realignment of American voters. For the first time since 1856, most voters were Democrats. In addition to the overwhelming electoral support of the traditional Democratic base of southern whites and Irish Catholics, Roosevelt received more than 70 percent of the votes of non-Irish Catholics, Jews, African Americans, and labor union members. The emergence of this broad, diverse New Deal coalition enabled Roosevelt to be reelected in 1940 and 1944 and his party to continue to control Congress until 1946. It also helped Harry S. Truman, Roosevelt's successor, to win an upset victory and his party to regain control of Congress in the 1948 elections. The enduring ideological, coalitional, and programmatic influence of Roosevelt's presidency and the Democratic realignment was also evident in bipartisan foreign and defense policies during the first 20 years of the cold war, the acceptance of major New Deal policies by President Dwight D. Eisenhower and other moderate Republicans, and Democratic control of Congress during most of Eisenhower's two-term presidency. The unfinished policy priorities of New Deal liberalism influenced liberal domestic policy agendas under Democratic presidents Harry S. Truman, John F. Kennedy, and Lyndon B. Johnson, respectively named the Fair Deal, New Frontier, and Great Society. These liberal Democratic policy goals and accomplishments included national medical coverage for the elderly and poor, civil rights for African Americans, antipoverty programs, urban renewal, and more federal aid to education. However, these liberal Democratic policies, especially on civil rights, alienated southern white voters who increasingly voted for either Republicans or southern-based minor party candidates, such as Strom Thurmond in 1948 and George Wallace in 1968, in presidential elections. Also, by the late 1960s, many liberal Democrats opposed the continuation of the bipartisan containment policy in the cold war, especially in Vietnam. These intraparty conflicts contributed to the fact that the Democratic Party won only one presidential election from 1968 until 1992, lost the 1972 and 1984 presidential elections by landslide margins, and lost control of the Senate in 1980.
Aware of and influenced by a more conservative public opinion toward crime, taxes, and welfare dependency, Governor William J. Clinton of Arkansas positioned himself as a moderate New Democrat and won the 1992 presidential election. Clinton's victory was also facilitated by a brief recession, the strong independent presidential candidacy of H. Ross Perot, and greater public attention to domestic issues because of the end of the cold war. Although Republicans won control of Congress in 1994, Clinton was reelected in 1996, becoming the first Democrat to be reelected president since Roosevelt. Despite the controversy pertaining to a sex scandal and his impeachment resulting from it, Clinton left his party a moderate, favorable public image regarding crime, welfare reform, and economics. This more attractive policy image of the Democratic Party helped Albert Gore, Clinton's vice president, to receive more than 500,000 more popular votes than George W. Bush, his Republican opponent, but a United States Supreme Court decision secured Bush's victory in the electoral college. During and shortly after the 2004 presidential election, in which Bush was reelected, Democrats were divided over whether to express an outspoken antiwar position toward Bush's foreign policy in Iraq or a moderate position accepting Bush's commitment in Iraq while criticizing Bush's competence and credibility. See also political parties; two-party system. Further Reading Goldman, Ralph M. The Democratic Party in American Politics. New York: Macmillan, 1966; Savage, Sean J. JFK, LBJ, and the Democratic Party. Albany: State University of New York Press, 2004. —Sean J. Savage
elections In a democratic political system, elections operate as a mechanism for the aggregation of voter interests, a means to keep elected officials accountable, and an affirmation of the continued legitimacy of the governing system. While elections differ around the world, these three central components are at the heart of democratic elections. The United States, with its unique history and institutional structure, has a peculiar
election system that differentiates it from much of the democratic world. The United States differs from other industrialized democracies in the frequency with which elections are held and in the number of offices that are chosen by elections. American presidential elections take place every four years, while "midterm" congressional elections take place every two years. Most states elect governors in midterm years, although a few schedule gubernatorial elections in "odd" years. State legislatures generally schedule elections for state assemblies and state senates in even-numbered years, while local elections take place almost continuously. School board elections, town council elections, and elections for county offices can be and are scheduled as needed throughout the year. Very often school board elections are deliberately scheduled at times that differ from legislative or presidential elections in order to de-emphasize the partisan nature of the selection process. American elections are most often "first-past-the-post" elections, meaning that the candidate with the most votes wins, regardless of whether that total constitutes a majority. For example, if there are three candidates for a town council position, the candidate with the largest number of votes wins the office, even if they garner only 40 percent of the votes cast. This "first-past-the-post" system is in contrast to a simple majority system that requires a run-off election to select the successful candidate if no one candidate receives more than 50 percent of the vote in the general election. American election districts are geographically based single-member districts, which means that they elect one representative per district. Single-member districts contrast with multimember districts, which elect more than one person per district to the post under consideration. In the United States, multimember districts may exist at the municipal level, but rarely are in place for higher offices. Multimember districts result in higher numbers of successful women candidates for office but often decrease the minority representation of the district. State legislatures are responsible for drawing district boundaries. As a result of the decennial American census, at least every 10 years the district boundaries must be redrawn to account for population shifts and to guarantee "one person-one vote" by
ensuring that each congressional district has roughly equal population. Historically, these changes have happened only once every 10 years, but recently state legislatures have redrawn district lines when the partisan balance of the state legislature shifts. Gerrymandering occurs when districts are drawn deliberately to benefit one party at the expense of another. Gerrymandering can project the partisan balance of the state legislature onto the state's congressional delegation for years to come. Partly as a result of gerrymandering, the Pew Research Center estimates that over 90 percent of congressional seats in 2004 were "safe," meaning that they are almost guaranteed to elect a member of the same party year after year. Political parties play an essential role in American elections. Despite some concern about the "decline" of political parties in recent years, the endorsement of a major political party remains a virtual prerequisite to election to office. In the electorate, approximately 33 percent of voters identify as "Independent," that is, supporting neither major party. In contrast, all but 2 percent of the U.S. senators identify as members of one of the two major parties, and all 435 members of the U.S. House of Representatives identify with a major party. The two-party system in the United States arises as a result of electoral rules. Maurice Duverger, a French political scientist, theorized that two major political parties would develop in first-past-the-post systems organized into single-member districts. His theory has been more than borne out in the American experiment, with two parties dominating politics since the founding. Even as a major party such as the Federalist or Whig Party dies, a new party rises to take its place. This phoenixlike process culminated in the modern Democratic and Republican Parties. Duverger predicted that in a two-party system the positions represented by the political parties would tend to cluster around the position represented by the median voter. The median voter represents the center point on a single dimension along which all American voters and their issue positions are arrayed. By converging on the median voter, political parties ensure that they are appealing to the largest subsection of the electorate, thereby ensuring their electoral success. Appeals to the median voter result in a lack of substantial differences in the platforms of
either party. In contrast, political systems with more political parties, such as systems that use proportional representation and multimember districts, have more substantial differences and represent a broader range of political interests. American party organizations structure elections through the recruitment of candidates, mobilization and education of the electorate, and the facilitation of electoral accountability. Parties recruit candidates to win elections and to advance the party's substantive interests once in office. They also manage popular ambition by providing an outlet whereby the ambitious can seek various levels of office and gain manageable amounts of power. Parties mobilize the electorate in ways as diverse as holding voter registration drives, sending campaign workers door-to-door, and holding political rallies. These mobilization mechanisms also serve to educate voters, first by informing them of the existence of an election, and second by giving the party's imprimatur to particular candidates. Parties in this sense serve as branding devices, allowing voters to simplify the process through which they choose candidates. In fact, more vote choices are made on the basis of partisan identification than on any other single factor. Political parties affect who participates in politics and which political interests are communicated to officeholders by selectively mobilizing particular groups in society. Although voting is not an information-rich means of communicating political preferences to elected representatives, it remains the most fundamental way to participate in the American political process. Voting provides the basis upon which officeholders gauge support for their policies and their continued service in office. It is generally assumed that politicians are more responsive to politically active and engaged groups than they are to groups that do not participate in the political process. Changes in suffrage rules have increased the numbers of citizens who are eligible to vote. When the U.S. Constitution was ratified, many states restricted the right to vote to white male property owners. Property requirements were the first restrictions on suffrage to disappear, thus extending the franchise to lower-class white men. The Civil War resulted in the expansion of suffrage to African-American men through the Fifteenth Amendment.
However, the ability of African Americans to vote was largely unrealized until the Voting Rights Act of 1965. This act prohibited discriminatory practices, such as poll taxes and literacy tests, which were designed to prevent African Americans from voting. The passage of the Nineteenth Amendment extended the right to vote to American women, and the Twenty-sixth Amendment lowered the voting age from 21 to 18. Historical exclusions from, and modern expansions of, the right to vote are visible today in differential turnout levels among historically marginalized groups. The percentages of African Americans and young people voting in modern elections are lower than those of comparable groups. Only in 1980 did women surpass men in turnout rates. Part of the reason that some groups come to the polls at higher rates than others is the selective mobilization practiced by political parties. When resources are scarce, parties will maximize their chances of electoral success by investing their mobilization resources in the groups who are the most likely to vote. Because mobilization is key to turnout, lack of mobilization perpetuates the problem of low turnout among marginalized groups. Election watchers concerned by the unequal distribution of participation rates across various groups in society argue that those who vote differ in substantive ways from those who do not. They argue that the interests of minority Americans are not congruent with the interests of white Americans. Similarly, the issues of most concern to young Americans may differ greatly from those of most interest to their grandparents. This lack of representativeness undermines the ability of elections to serve as a means of aggregating public interests. Therefore, many election watchers look to political parties to address deficits in voting rates through the mobilization of previously marginalized groups. Party influence, though great, is not as strong as it once was due to changes in electoral procedures and media coverage of campaigns. In the early days of the Republic, political parties provided their supporters with preprinted ballots that contained the names of all the candidates endorsed by that party. With the introduction of the Australian ballot, voters were able to both cast secret votes and split their support between candidates of the two parties, should they wish to do so. This procedural change
weakened the power of political parties, especially urban machines. Media coverage of elections has given rise to modern "candidate-centered" elections. Rather than voting a party line, voters make decisions among candidates based more on personality, evaluations of candidate traits, and issue positions than on the candidate's party affiliation. The use of television advertising to communicate with potential voters lessens the importance of political party organizational support while dramatically increasing the costs of campaigns. This shift to candidate-centered rather than party-driven decision making has weakened the accountability mechanisms available to voters. The founders presumed that one check on the behavior of officeholders was the possibility of being voted out of office in the next election. The rise of political parties made possible wholesale retribution against the entire party caucus when voters were dissatisfied with the direction taken by government. However, the diminution of the importance of political party identification makes it less likely that voters will vote against an entire party slate when they are dissatisfied with government, instead taking their dissatisfaction out on select officeholders. What party influence still exists differs by type of election. Ideological influence is at its zenith in primaries, while branding is ascendant in general elections. Primary elections or party caucuses, which determine who will represent the party in the general election, are more explicitly partisan and less representative than general elections. Primary elections choose one out of a slate of candidates who are seeking their party's endorsement for the general election. Primaries are generally limited to voters explicitly registered with the party holding the election. Primary elections attract only the most motivated voters, who tend to cluster at the extremes of the American political spectrum. More moderate voters often sit out these races, perhaps deeming them less important than the general election that will actually determine who will hold elected office. However, given that general election voters will choose among candidates selected through more partisan and more ideological primaries, the candidates available on general election ballots poorly represent the interests of moderate voters. As elections do not ensure the complete representation and aggregation of national interests,
they are at best weak instruments of objective legitimacy. When voters make informed choices among candidates to represent their interests, the very act of casting a ballot expresses confidence in and support for the current system of governance. To the extent that some groups are systematically and consistently less likely to take part in organized elections, the legitimacy of the governing system may come into question. Legitimacy can be of two kinds: objective legitimacy and subjective legitimacy. Objective legitimacy is whether the political system does accurately represent the interests of the citizens, while subjective legitimacy is whether or not the people think that it is an acceptable system of government, no matter the validity of the claim. Elections are a way to achieve subjective, if not objective, legitimacy. The limited selection between candidates of the two major parties, and the fact that votes are not information-rich means of participation, limit the capacity for objective legitimacy of American elections. Further Reading Cox, Gary W., and Jonathan N. Katz. Elbridge Gerry's Salamander: The Electoral Consequences of the Reapportionment Revolution. Cambridge: Cambridge University Press, 2002; Duverger, Maurice. Party Politics and Pressure Groups: A Comparative Introduction. New York: Crowell, 1972; Jacobson, Gary C. The Politics of Congressional Elections. New York: Longman, 2003; Maisel, L. Sandy, and Kara Z. Buckley. Parties and Elections in America: The Electoral Process. 4th ed. New York: Rowman and Littlefield, 2005. —Hannah G. Holden and John D. Harris
grassroots politics With its origins dating back to the Progressive era of U.S. politics, the term “grassroots politics” denotes citizen- and community-based political action—a “ground-up” (or “bottom-up”) form of direct political action and, in some cases, long-term social movements—that seeks reform of the established political order. Emanating from the “roots” of a community (i.e., the everyday people and their immediate concerns) rather than the interests of a hierarchy of elected officials or other political elites—grassroots
politics is the antithesis of elite-driven, top-down, custodial politics that supports the status quo and discourages change. Thus, much like grass grows from its roots in the soil, "grassroots politics" has its foundation—its roots—firmly planted in the aspirations and struggles of everyday people. At the core of grassroots politics is the belief in the necessity of individual action and empowerment at all levels of government—local, state, and national. Like moralistic political culture, which promotes mass participation in the political system, grassroots politics embraces the notion that individual citizens can and should participate in politics in order to bring about preferred sociopolitical goals in their communities and, depending on the issue, the country at large. Given this mission, civic engagement and issue advocacy from rank-and-file citizens are needed to realize change unlikely to be engineered—or, perhaps even understood—by political elites. Grassroots political action is expressed in a wide array of activities—from door knocking and person-to-person interaction (often referred to as canvassing), to writing letters to the editors of local newspapers, holding public forums and rallies, organizing phone banks and petition drives, meeting directly with elected officials, mass e-mailing, establishing Web sites, executing "Get Out the Vote" (GOTV) efforts (including providing transportation for the disabled, elderly and poor to polling stations), and raising money in order to finance the aforementioned operations and elect preferred candidates. In addition to promoting specific issues and candidates, these citizen-based actions are also executed in order to engender support for broad causes and, on occasion, an entire slate of candidates (i.e., school boards, county commissioners, etc.). Moreover, these myriad grassroots activities may involve strategic coalition building—building partnerships with other grassroots movements—in order to garner media attention, affect public opinion, gain political power, and influence elected officials. To varying degrees, the temperance, suffrage, environmental, and labor movements—as well as the anti-ERA (Equal Rights Amendment) movement of the 1970s, among many other efforts—are examples of grassroots politics. Thus, whether it is a school board election focused on mobilizing conservative Christian voters or liberal efforts to promote more stringent
environmental regulations in their community—citizen-driven "grassroots" political action is not the sole province of either the left (liberal) or right (conservative)—or, for that matter, socialists, libertarians, or other ideologically driven individuals. Rather, a sense of populism—favoring the interests of common people over the elites—tends to pervade most grassroots action. The key characteristic of grassroots politics, therefore, is not necessarily a strict adherence to an ideology per se, but rather that it originates among and is engineered by everyday people who wish to implement change. Left or right or neither, citizen-driven political actions aimed at establishing cohesive, effective networks of individuals who, through direct and collective action, can influence political and social change, all embody the spirit of grassroots efforts. Given the moral and political legitimacy often afforded to grassroots efforts, lobbyists fronting well-established organized interests frequently use grassroots tactics. For example, the practice of "astroturfing"—elite-engineered phone calls, postcards, faxes, and e-mails masquerading as "bottom-up" citizen action—has become a staple of sophisticated elites aiming to raise money, attract media attention, influence elected officials and shape public policy. In terms of presidential campaigns, in many ways the presidential candidacies of Democrat Jesse Jackson and Republican Pat Robertson in 1988, Republican John McCain in 2000, and Democrat Howard Dean in 2004 embodied aspects of citizen-based, ground-up missions that frequently worked outside of their respective parties' elites and institutional hierarchy. In the realm of contemporary U.S. politics, grassroots political action has evolved in unique ways, affected by technology (namely, the Internet) and Americans' changing lifestyles, increasingly cluttered lives, and new approaches to civic engagement. One of the most significant aspects of the new Internet-driven grassroots politics has been the advent of "meet ups"—the phenomenon of using Web sites and e-mail to schedule person-to-person meetings with like-minded citizens. This can mean working to address issues or, most famously in the 2004 election cycle, organizing on behalf of presidential candidates. Such meetings take place at people's homes, coffeehouses, diners, and restaurants, and seek to rally local supporters—often uninvolved
in party politics—behind a candidate. Another grassroots technique that has developed in the Internet age—blogs, short for "Web logs"—involves citizens writing about political issues from their own perspective, often welcoming responses via Internet chat rooms. The 2004 presidential campaign of Vermont governor Howard Dean illustrates the promise and peril of Internet-driven grassroots politics. While Dean was not the first presidential candidate to raise significant amounts of money and mobilize citizen action via cyberspace (Dean campaign strategist Joe Trippi cites John McCain's 2000 bid as an example of successful Internet usage), his vast primary war chest—he ended up refusing public financing in the primaries—was fueled primarily by a deluge of Internet-based donations, with the average donation totaling about $75. Beyond the money, however, was the swell of grassroots support reflected in crowds ranging from 5,000 to 35,000 that greeted Dean as he crisscrossed the country in the summer of 2003 on what was dubbed the "Sleepless Summer Tour." Shirts sold and distributed at these rallies touted "people-powered Howard" as the voice of those left behind by the machinations of the Bush administration as well as the Democratic Party elite. Yet, as Dean's candidacy sputtered and failed early in the primaries, his insurgent campaign revealed the limits of grassroots politics at the ballot box. At the same time, strong grassroots support in 2003–04 catapulted Dean to the chairmanship of the Democratic National Committee in 2005. Nonetheless, for Dean's chief strategist Trippi, who finds existing party bureaucracies tired, ineffective and disempowering, the new Internet-fueled "wired" politics can be transformational, with the capacity to engage everyday citizens in meaningful ways that promote their consistent participation. For Trippi and others, this new cyberspace-based grassroots politics is a rational way to reach average Americans otherwise uninterested in—or wholly divorced from—community and other civic-minded endeavors. At the same time, while access to the Internet has become dramatically more democratized—expanding exponentially in a relatively short period of time—its prevalence in political movements does raise legitimate questions about its truly "common" or grassroots nature, with Internet access still tilted
toward more middle-class citizens and upwardly mobile college graduates. Supporting the candidacy of Progressive (or "Bull Moose") party candidate Theodore Roosevelt in 1912, Indiana senator Albert Jeremiah Beveridge famously asserted at the party's convention that this new political coalition had "come from the grass roots" and had ". . . grown from the soil of people's hard necessities." As necessities—or people's perception of their necessities—continue to evolve, so, too, will the nature and scope of grassroots action in the U.S. political system. Further Reading Critchlow, Donald T. Phyllis Schlafly and Grassroots Conservatism: A Woman's Crusade. Princeton, N.J.: Princeton University Press, 2005; Trippi, Joe. The Revolution Will Not Be Televised: Democracy, the Internet, and the Overthrow of Everything. New York: Regan Books, 2004. —Kevan M. Yenerall
ideology Ideology refers to the ideas or beliefs that groups or individuals hold that reflect the aspirations, needs, or ideals of these groups or individuals. The term ideology has meant many things to many different thinkers. It is a term of recent origin, coined sometime in the late 18th century. In the simplest terms, ideology refers to a set of beliefs. For many theorists, an ideology is based on certain assumptions held as true by those who adhere to the ideology. The ideology is then used to analyze and to frame political action, social movements, and even the actions of individuals. Thinkers who see ideology in this light try to construct and maintain a cogent ideological position. Some thinkers argue that one cannot help but look at the world through an ideological lens, even if the ideology is not clearly articulated. These thinkers argue that all thought is inherently ideological, meaning, in this case, biased towards a certain worldview and that it is impossible to be nonideological. In current American politics, ideology is often used as a pejorative to deride or belittle an opponent whose political aspirations reflect a “personal ideology.” A basic understanding of ideology shows that ideologies serve the following purposes: (1) to articulate
the goals of a group or individual; (2) to articulate the proper manner or method by which the goals can be attained; (3) to identify allies and enemies of the group or individual; (4) to establish guidelines for proper conduct; and (5) to establish incentives and punishments for following or going against the articulated goals and methods of the group. While it is difficult to determine a singular American “ideology,” it is most likely the case that any competitive American ideology will be based on what are often called core values: freedom, liberty, and equality. Likewise, nearly all Americans would hold that there must be a democratic element to the political ideology as well. Most theorists contend that the most likely candidate for the primary political ideology in the United States is classical political liberalism, which emphasizes individual liberties, private property, and limitations on government power. However, in recent years there has been a more overt discussion among theorists, scholars, political activists, pundits, and politicians about the nature and role of ideology. The ideologies that have been debated and, at times, used to critique the status quo include feminism, postmodernism, communitarianism, political liberalism, conservatism, libertarianism, socialism, religious ideologies, and others. Although there has been a proliferation of the types and number of ideologies in the American political discourse, it is still not clear whether these new ideologies consist of anything more than a critique of the predominant classical liberal ideology to which most Americans either consciously or unconsciously subscribe or whether they are serious competitors to become a mainstream ideology. On an individual basis, ideologies act as a lens through which individuals perceive the world. A person who is liberal, meaning aligned with the left of the political landscape, is likely to have differing explanations for various events than a person who is conservative, meaning aligned with the right side of the political landscape. For example, in analyzing why a particular individual is poor, the liberal and the conservative are likely to have different explanations. The liberal is likely to say that we must look to social conditions or a person’s family background in order to understand why this person is poor. A conservative is likely to say that we must look at the individual’s actions and ascribe individual responsibility as to why
the individual is poor. An individual’s particular ideology will shape what counts as evidence for particular claims, what values must be ranked as our most important, and what goals society should seek. The role that ideas play in shaping our political beliefs has long been recognized. Plato, for example, argued that the best or most correct form of government, a republic headed by a philosopher king, should exile the poets and artists because the ideas that they might teach to the youth are potentially corrupting. Aristotle understood that there must be common goals and a common understanding of virtue in order for a polity to survive. Some theorists, such as Thomas Hobbes (1588–1679), argued that the sovereign has the right to determine the official ideology of the state insofar as that ideology provides for the continued survival of the state. John Locke (1632–1704) argued that the political system that should prevail is based on the laws of nature and should consist of minimal government, the protection of individual rights, and the protection of property. Perhaps the thinker that is best-known for making ideology a central aspect in analyzing political culture, the role of the state, and the status of the individual, is Karl Marx (1818–83). Marx was committed to the study of history and how ideas have helped shape the polity and how the polity helps to shape which ideas are given precedence. Marx argued that “the ideas of the ruling class are in every epoch the ruling ideas.” Marx held that the ideas by which we analyze and understand the world are the ideas of the ruling class since the ruling class has the power to implement and to teach their ideas as being true. Since the bourgeoisie, the ruling class, holds the means of production, their ideas will be the filter through which all of us see the world. This means that the working class will have a false consciousness or a misunderstanding about the true nature of the world. Thus, Marx advocates for a revolution in order to overthrow the ideology to which all are subjected and which is to the detriment of the proletariat. Marx’s analysis of ideology played an influential role for many thinkers throughout the 20th century. For example, Louis Althusser (1918–90) argued that one of the primary differences between the natural and social sciences is the role of ideology. In the natural sciences, knowledge is produced by the problems at hand in the physical world. Ideology, on the other
hand, is knowledge that is produced by the relationship that individuals have to the social setting in which they exist. Science gives us empirical knowledge of the facts that make up the world. In addition, ideology gives us knowledge that is always a reflection of human consciousness in a particular lived situation. Antonio Gramsci (1891–1937) used a Marxist paradigm to argue that the bourgeoisie is able to develop a cultural hegemony, which then leads the working class to misunderstand its own interests. Michel Foucault (1926–84), while not strictly a Marxist thinker, was still influenced greatly by Marx's thought. Foucault argued that power in many ways resides in the dominant ideology of a society. One of the primary reasons why the dominant ideology exercises the power that it does is that it appears to be neutral or not have a stake in the outcomes of various political battles. In the United States of America, the influence of Marx's discussion on ideology has come mostly through the academy and the various professors who have taken Marxist thought to heart. The average American is more than likely to be rather suspicious of Marxist thought. In fact, the most obvious case of ideological differences of which most Americans are probably aware is the difference between the American style of government, which emphasizes individual freedoms, and the old Soviet style of government, which emphasized collectivism. These two ideologies clashed during the period known as the cold war. The collapse of the Soviet Union is an example, for most Americans, of the triumph of liberal ideology over communist ideology. Many American thinkers have taken a non-Marxist approach in looking at ideology. Several American thinkers have advocated that ideologies are at an end. For example, Daniel Bell (1919– ) argued in his The End of Ideology (1960) that the Marxist emphasis on history and ideology had been made obsolete because of the triumph of Western democracies and economic capitalism. This idea was echoed in Francis Fukuyama's work The End of History and the Last Man (1992). Fukuyama argued that all forms of governance, except for liberal capitalism, have become obsolete and undesirable to everyone because these ways of looking at the world are incapable of meeting the needs of the citizens of the world.
In current American politics, the term ideology is often used to insult a political opponent or to deride an opposing point of view. The study of negative campaigning carefully examines this phenomenon. For example, if Jane Smith disagrees with her opponent Robert Jones, then she might dismiss his ideas or position out of hand by claiming that Jones is an ideologue. By doing this, Smith is claiming that Jones does one or more of the following: (1) holds beliefs or positions that he does not have good reason to hold, (2) holds positions without thinking about the consequences or implications of those positions, (3) holds various positions merely to curry favor either with those in power, or with voters, or both, or (4) actually holds to a position sincerely but is not intelligent enough to understand it. In the end, Jones’s beliefs are based “merely” on ideology or an unreflective position concerning the problems at hand. The desired end of such an attack is to get people to start associating the term ideology with a particular candidate, which is a highly undesirable association for most Americans since most Americans see ideologues as people who do not carefully consider the issues at hand and are likely in the service of someone else. The importance of ideology has taken on new life in recent years, especially with the seemingly increasing polarization of the American polity into more trenchant divisions between right and left. Ideologues from both sides claim that the other is out of touch with the American mainstream. In this sense, ideology means something like a correct worldview (ours) opposed to an incorrect worldview (theirs). Many Americans have migrated toward a position of having no affiliation with either the Republican or Democratic Parties. It is possible that this is because of the strident manner in which ideologues often demand adherence to the so-called party line. These ideologues are often not inclined to broaden the tent of their respective parties and seek those who will hold to most of the party platform. This has made politicians devise a strategy of seeking the votes of the middle or nonaligned, since the ideologically devoted will vote the party line more or less consistently. The concept of ideology has also taken an increasingly more important role during the war on terrorism. How to describe the enemies of the United States, the causes of the conflict, and the potential outcomes are often described in terms of ideology.
See also conservative tradition; liberal tradition. Further Reading Dahl, Robert. Democracy and Its Critics. New Haven, Conn.: Yale University Press, 1989; Edelman, Murray. Constructing the Political Spectacle. Chicago: University of Chicago Press, 1988; Macridis, Roy C. Contemporary Political Ideologies: Movements and Regimes. 5th ed. New York: HarperCollins, 1992. —Wayne Le Cheminant
interest groups From the time of the founding of the United States, organized interests have been a central feature of American politics. In fact, in 1787, James Madison raised the issue of factions in Federalist 10. Although organized interests have been an important element in American politics for two centuries, interest group formation and activities increased considerably as government became more involved in the social,
TOP LOBBY SPENDING ON CONGRESS, 1998–2007

Client | Total
U.S. Chamber of Commerce | $338,324,680
American Medical Assn. | $156,695,500
General Electric | $138,540,000
American Hospital Assn. | $129,982,035
AARP | $112,732,064
Edison Electric Institute | $110,842,878
Pharmaceutical Rsrch & Mfrs of America | $105,294,500
National Assn of Realtors | $103,890,000
Business Roundtable | $97,480,000
Northrop Grumman | $95,882,374
Freddie Mac | $93,370,648
Lockheed Martin | $87,529,965
Blue Cross/Blue Shield | $85,071,317
Boeing Co. | $82,038,310
Verizon Communications | $78,006,522
General Motors | $77,400,483
Philip Morris | $75,500,000
Fannie Mae | $71,292,000
Natl Cmte to Preserve Social Security | $69,260,000
Ford Motor Co. | $67,910,808
political and economic life of the American people. In the 20th century, group formation was enhanced as President Franklin D. Roosevelt increased the role of the federal government during the New Deal era, but the number of interest groups actually exploded beginning in the 1960s with the continued increase in federal government programs. In short, over the last 40 years or so, as the size of government increased and played a more active role in public affairs, citizens became politicized and more active in interest groups in an effort to shape public policy. Interest group activism has been the subject of debate over the years. Where some have argued that interest groups function within a competitive system with multiple access points to the political process, others have argued that government has actually been captured by organized interests. Where some observers of organized interests argue that the competition of interest group activity results in collective benefits, others argue that in the process of pursuing narrow concerns, the larger societal interest might be neglected as better-organized and well-financed groups maintain an advantage in securing group benefits. Three important factors are relevant to the study of interest groups and the political and electoral process. First, individuals with similar interests have a constitutional right to organize in an effort to achieve their common goals and have been active at all levels of government in an attempt to influence the public policy process. Second, due to the separation of powers that divides the legislative, executive, and judicial branches of government, interest groups, through the work of their lobbyists (individuals who work on behalf of organized interests), build relationships with members of Congress, work with respective executive agencies, departments, and bureaus, and engage in litigation in the court system. Interest groups also reach out to the public through the broadcast and print media and the Internet. Finally, the system of federalism upon which the United States is built provides numerous opportunities (points of access) for interest groups as they can exercise political power through lobbying in the 50 states and thousands of localities around the country. Consequently, interest groups reflect an organized effort among citizens to exert pressure on governmental institutions and the policy makers within these
institutional arrangements. In doing so, interest groups are attempting to gain benefits for their members. However, while some groups are able to exercise considerable influence, others might not have the same capacity to achieve access or preferred policy outcomes. For instance, the National Rifle Association (NRA), acting on behalf of gun owners, and the American Association of Retired Persons (AARP), which represents the interests of senior citizens, are considered very powerful interest groups. In addition to group activism in the political process, organized interests are very active in the electoral process as well, where political action committees (PACs) are very important players. PACs are organizations (representing a variety of sociopolitical and economic interests) that raise and distribute campaign funds to candidates for public office. Thousands of PACs are involved in presidential and congressional elections and tend to support incumbents over challengers. Moreover, where some observers argue that PACs represent democracy in action, others remain concerned about corruption and abuses arising from the connection between campaign funding, the electoral process, and policy making. Membership, money, and the communication process play a most important role in the interest group process. The recruitment of members and a sufficient source of revenue provide the foundation upon which organized interests can function. While a large, mass membership group would appear to be the goal of any group, research has shown that smaller, well-integrated, and cohesive groups can be very effective. A steady stream of funding is needed in order to provide the resources for a group to pursue its goals. From staffing and office expenses to advertising and lobbying, money is a very important source of interest group strength. The communication process, represented by the flow of information from interest groups to policymakers, and from policymakers to interest groups, is integral to interest group politics. What motivates citizens to become involved in interest group activity? Do economic or nonmaterial concerns encourage individuals to join others and engage in group action? On the one hand, individuals participate in organized behavior in order to maintain or enhance their economic benefits. On the other hand, some observers of group behavior have argued
that individuals are also concerned about nonmaterial causes (e.g., the environment or human rights) and personal benefits including power, prestige, or ideology. If citizens stand to gain collective benefits from interest group activities, why do some individuals nevertheless fail to join and engage in political action? One explanation, suggested by Mancur Olson in his study of collective action, is the "free rider" problem: if a citizen will gain the benefits of a group's success whether he or she acts or not, why bother to join? Many groups respond by offering "selective benefits," advantages such as publications or member discounts that are available only to those who join (a simple numerical illustration of the free-rider logic appears at the end of this entry). Interest groups represent a variety of social, economic, and ideological concerns in the United States. For instance, social concerns involving women and minorities include the National Organization for Women (NOW) and the National Association for the Advancement of Colored People (NAACP), while the pro-choice National Abortion Rights Action League (NARAL) competes with the pro-life National Right to Life Committee (NRLC) over the abortion issue. Economic interests include business and industry (e.g., the U.S. Chamber of Commerce, the National Association of Manufacturers, the American Petroleum Institute), organized labor (e.g., the AFL-CIO, the Teamsters, the United Auto Workers), and agriculture (e.g., the American Farm Bureau Federation, the National Farmers Union, the National Cotton Council of America). Ideological issues have been represented by People for the American Way, MoveOn.org, and Americans for Democratic Action on the liberal side, while conservative interests have been represented by the American Conservative Union, the Christian Coalition, and the Conservative Caucus. Professional organizations have also been active in lobbying for their members' interests and include the American Political Science Association, the American Medical Association, and the American Bar Association. Finally, public interest groups that argue that they work for the larger collective (public) interest include Common Cause, the Union of Concerned Scientists, and Public Citizen. Groups differ in terms of size, resources, the scope of their interests, legitimacy, strategy and tactics, and political clout. For the purpose of description and analysis, groups concerned with environmental policy will be discussed briefly in order to show the similarities and differences in the character
and behavior of organized interests. Organized interests involved in the issue of the environment can be divided into pro-environmental (green) groups, business and industry groups, and property rights groups. While environmental groups are linked by their common interest in ensuring the protection of the environment, they also exhibit shared and diverse characteristics. The environmental movement is represented by large membership groups including the National Wildlife Federation as well as smaller groups such as the Wilderness Society and the Natural Resources Defense Council. Resources remain a most important factor in the ability of organized interests to pursue their goals, and environmental groups, like groups in other sectors in American society, differ in their ability to recruit members and secure funding in an effort to shape environmental policy. As membership has expanded over the years, so has these groups' ability to attract resources. An active, committed membership with resources is very important in interest group politics. For instance, the National Wildlife Federation (NWF), one of the largest environmental groups in the country, has the membership and resources to engage in myriad activities on a range of issues. The NWF has employed lobbying, litigation, education programs, product merchandising, and broadcasts in its effort to shape environmental policy. The NWF has been involved in a range of issues including air and water quality, hazardous and toxic wastes, biodiversity, stratospheric ozone depletion, and global warming. Smaller groups with fewer resources, including the Wilderness Society and Defenders of Wildlife, have promoted more specific interests, in this case the protection of public lands and the protection of endangered and wild species, respectively. The legitimacy of groups also plays a role in their effort to promote their cause. The National Wildlife Federation, the Sierra Club, the Friends of the Earth, the Defenders of Wildlife, the Natural Resources Defense Council, the Wilderness Society, and the National Audubon Society are examples of groups that are considered legitimate within American political culture. In contrast, groups including Greenpeace, Earth First! and the Earth Liberation Front are considered by some as outside the mainstream of American politics, due to their philosophy and the types
of activities in which they engage. These groups employ direct action techniques to pursue their political agenda but alienate parts of the larger society as they do so. However, while Greenpeace remains within the sphere of legitimate, nonviolent organized action, Earth First! and the Earth Liberation Front are considered radical and/or violent groups. For instance, where Greenpeace will place its members between whaling ships and whales or hang banners from tall buildings, Earth First! and the Earth Liberation Front engage in what some call “eco-terrorism” that involves illegal acts that result in property damage. In addition to organized interests that promote the environment, business and industry as well as property rights groups also engage in political action in an effort to influence environmental policy that is favorable to their respective interests. Business and industry, in particular, have ample resources to draw upon, as this constituency lobbies members of Congress, works with personnel in executive agencies, participates in litigation and runs broadcast and print ads to promote their cause to the public. Although there are examples of environmentalists and business and industry working together (e.g., supporting the Montreal Protocol in 1987 that was established to deal with the issue of stratospheric ozone depletion), in many other cases, values and interests divide these two constituencies. Property rights groups that have felt threatened by the environmental agenda pushed by the Congress and several presidents since the 1970s have pursued efforts to protect their interests. Proponents of this constituency prefer state government over federal intrusion into public policy issues, especially as they relate to the environment. This constituency supports an array of actions including opening up the public lands to timber, mining, and grazing interests and allowing more access to drilling for oil and natural gas (e.g., in Alaska’s Arctic National Wildlife Refuge). Proponents of the property rights movement would rather have federal control over public lands replaced by state governments, since subnational government is closer to citizens and is perhaps viewed by this constituency as more attentive to their interests. The discussion above about organized interests working on behalf of specific causes shows the variety rather than the uniformity of political practices within
the American political setting. While most groups employ legitimate, conventional actions in order to shape public policy, they differ across a wide array of factors as they take action in support of their public policy priorities. Public preferences can be aggregated and directed toward government through public opinion, interest groups, and political parties. As far as interest groups are concerned, there has been an explosion in the number of groups since the 1960s. To what extent have organized interests been successful in the public policy domain? The answer is mixed, since we need to assess whether the larger, public interest (in addition to narrow, organized interests) is being well served. On the one hand, an increasing number of interest groups has enhanced the representation of a variety of interests in the United States. On the other hand, the increase in organized interests can also be viewed as making the political process more complex as government is forced to respond to a greater number of demands. Still others have argued that the playing field is unequal, since interest groups are divided by leadership qualities, requisite social skills among their members, and material resources that give some groups an undue advantage over other groups in the policy making process. In short, the issue for contemporary American politics is how to ensure that the larger collective interest receives proper attention by policy makers, while at the same time organized interests, some with tremendous resources at their disposal, engage in political action that protects and enhances their narrow interests. Further Reading Cigler, Allan J., and Burdett A. Loomis, eds. Interest Group Politics. 3rd ed. Washington, D.C.: Congressional Quarterly Press, 1991; Duffy, Robert J. The Green Agenda in American Politics: New Strategies for the Twenty-First Century. Lawrence: University Press of Kansas, 2003; Lowery, David, and Holly Brasher. Organized Interests and American Government. Boston: McGraw-Hill, 2004; Lowi, Theodore. The End of Liberalism. 2nd ed. New York: W.W. Norton, 1979; Moe, Terry M. The Organization of Interests: Incentives and the Internal Dynamics of Political Interest Groups. Chicago: The University of Chicago Press, 1980; Olson, Mancur. The Logic of Collective
Action: Public Goods and the Theory of Groups. Cambridge, Mass.: Harvard University Press, 1971; Rozell, Mark J., and Clyde Wilcox. Interest Groups in American Campaigns. Washington, D.C.: Congressional Quarterly Press, 1999; Schattschneider, E. E. The Semi-Sovereign People. New York: Holt, Rinehart & Winston, 1960; Switzer, Jacqueline Vaughn. Green Backlash: The History and Politics of Environmental Opposition in the United States. Boulder, Colo.: Lynne Rienner Publishers, 1997; Truman, David. The Governmental Process. 2nd ed. New York: Alfred A. Knopf, 1971. —Glen Sussman
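The free-rider logic described in this entry can be illustrated with a small, purely hypothetical calculation. The sketch below is not drawn from any of the works cited above; the group size, benefit, and cost figures are invented assumptions chosen only to show why an individually rational citizen might decline to join even when the group's goal would benefit him or her.

```python
# Hypothetical illustration of the free-rider problem in collective action.
# All values are invented for demonstration purposes.

GROUP_SIZE = 10_000          # citizens who would enjoy the collective good
BENEFIT_PER_PERSON = 200.0   # value each person places on the policy outcome
COST_OF_JOINING = 50.0       # dues/time required to support the lobbying effort

def chance_of_success(contributors: int) -> float:
    """Probability the group wins its policy goal, rising with participation."""
    return min(1.0, contributors / GROUP_SIZE)

def expected_payoff(i_join: bool, others_joining: int) -> float:
    """One citizen's expected payoff given how many others contribute."""
    contributors = others_joining + (1 if i_join else 0)
    payoff = chance_of_success(contributors) * BENEFIT_PER_PERSON
    return payoff - (COST_OF_JOINING if i_join else 0.0)

others = 6_000  # suppose 6,000 others are already contributing
print("Join:      ", expected_payoff(True, others))   # 0.6001 * 200 - 50 = 70.02
print("Free ride: ", expected_payoff(False, others))  # 0.6000 * 200      = 120.00
```

Because one member's contribution barely changes the group's chances while the cost falls entirely on that member, abstaining yields the higher expected payoff. This is the logic that leads many groups to offer selective, members-only benefits in addition to the collective goods they pursue.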
liberal tradition

The liberal tradition refers to a political ideology that originated in Western Europe as a critique of the power of the state. It can also be seen as a discussion of the rights of the individual and the limitation of the state's power. It refers to the values held by many people who live in Western-style democratic states. At its heart, liberalism advocates liberty or freedom. Traditionally, the liberal tradition holds that (1) individual rights are inalienable, (2) the government's power is to be limited to the security of the state and the protection of individual rights, (3) individual property is to be respected, and (4) the government rules only by the consent of the governed. Another important aspect of the liberal tradition is that the government is representative of the people, meaning that it is democratically elected and the actions of the leaders should reflect the wishes of the people. Two other key elements generally coincide with the liberal tradition: the economy is left to what is called the free market, otherwise known as capitalism, and, most theorists agree, there must be a flourishing civil society in order for the government and economy to survive. This liberal tradition is the political foundation for what are called Western-style democracies. There are many influences that led to the possibility of the liberal tradition, and it is difficult to point to only one writer or thinker as the founder of the liberal tradition. The liberal tradition is a result of both the movements and events of history and the
work of various political philosophers. Some events, such as the signing of Magna Carta (1215), which required the king to give up certain rights and acknowledge that he was bound by law, certainly played an important role in the advent of the liberal tradition. The argument can be made that the liberal tradition finds its origins as a response to the questions that arose from the various political arrangements in which kings argued that they held a divine right to rule. One of the key ideas that explained and justified the social order during the Middle Ages was what is referred to as the Great Chain of Being. This worldview presents God at the top of creation, as all creation is his, followed by angels, then humans, and then beasts. Humans are further divided into the king at the top (King James I of England quipped that kings are "little Gods on Earth"), the nobles, then serfs. Men are considered the head of the household, with women below them, followed by the children. The liberal tradition, among other things, replaced this way of thinking in the West and assigned individuals rights rather than only the obligations that were found in the Great Chain of Being. The concept of the liberal tradition has taken centuries to develop into the tradition that we now know. Political theorists have long pondered the relationship between the state and the individual. How free should an individual be? What is the best form of government in which the individual can flourish? What is the best form of government in order to make certain the state will survive? As one might expect, there have been many answers to these questions. Plato, for example, at various points in his work expressed distrust of democracy. Likewise, Aristotle classified democracy among the deviant or corrupt forms of government. We should note that both Plato and Aristotle had different ideas of what democracy meant than we do today. They both understood democracy to be direct self-rule by the entire polity. One reason both thinkers distrusted democracy is the excessive freedom it granted to the polity to follow the whims of emotion. It should be noted that the work of the ancients, in particular that of Aristotle, was very important for the founders of the United States of America. For example, the limitations on who can vote (in the case of early America this was limited to property-holding white males)
and buffering representatives from the emotional polity (consider that U.S. senators were not elected directly until the passage of the Seventeenth Amendment in 1913) are ideas that are found in Aristotle's Politics. Thomas Hobbes (1588–1679) occupies an important position at the beginning of the liberal tradition. Some might think it peculiar to include Hobbes in the liberal tradition given that his masterpiece, Leviathan, argues for a sovereign who has absolute authority to run the state as he sees fit in order to ensure the state's survival. However, there is much in Hobbes's work that contributes greatly to the liberal tradition. Hobbes analyzed and understood the radical freedom that each individual has in a state of nature, which is a situation in which there is no government. Certainly, Hobbes was influenced by the fact that he lived through the chaos of the English Civil War (1642–51). The absence of a government leads to a war of all against all, since there is no power to hold all in awe. Because of this situation, there is no security, no prosperity, and no peace. The remedy for this situation is to have a sovereign with absolute authority so that each can enjoy whatever freedom the sovereign grants. All citizens of a state enter a social contract with the sovereign, in which the sovereign guarantees safety and order. In return, the citizens are to abide by the law set forth by the sovereign. All citizens enter into this contract out of self-interest. All realize that they are better off in the social contract, where they are provided safety, than in the state of nature, where they may all have rights to do whatever they choose, but their lives are likely to be very short. The idea that citizens act in their own best interests is one of Hobbes's most important ideas. John Locke (1632–1704) and Adam Smith (1723–1790) are two of the most important thinkers in the liberal tradition. John Locke's seminal work, Two Treatises of Government, outlines the basis of the liberal tradition: (1) each individual bears certain inalienable rights to life, liberty, and property and (2) governments rule by the consent of the governed. Locke argued that inalienable rights cannot be severed from the individual without cause and that these rights are not granted by any government but are part of the natural law. According to Locke, the social
contract into which the citizens enter with government is one in which the government guarantees protection of these rights, and the government is to be dissolved if it fails to do so. Citizens are under no obligation to obey a government that fails to protect these rights. Locke's influence on the American founders was enormous. Smith took the ideas of liberalism, which emphasize limited government interference in the lives of individuals and a respect for private property, and applied them to the economy. Smith's ideas, found in The Wealth of Nations, can be seen as a critique of what was known as mercantilism, a theory that states become strong if they control the economy by holding precious metals and emphasizing exports over imports. In today's terms, this would be known as protectionism. Smith argued that effective use of labor, not the artificial constriction of the economy, is the key to providing wealth. The best way to utilize labor is to let people decide for themselves areas of specialty in the free market. Without government interference, we do not get chaos; instead, we get the goods that people want at prices they are willing to pay. The invisible hand meets the needs of society: individuals, pursuing profit for themselves, find ways to satisfy the demands of others, and this self-interested exchange is enough to provide for society as a whole. In the end, a nation is better off letting its citizens work out economic arrangements in the free market, since each person's pursuit of personal advancement ends up serving the collective needs of others. The liberal tradition reached new heights with the founding of the United States of America. Despite struggles to get a government off the ground, the founders sought to implement the various principles of Lockean liberalism. The framers of the U.S. Constitution sought to limit government power and make the government responsive to the will of the people by dividing it into various branches that could check and limit one another's power. There was vigorous debate at the Constitutional Convention in 1787 concerning how the new government could make certain that no particular faction or group took power to the detriment of others. Madison thought that interests competing against other interests in a system of checks and balances would provide for a
limited government while ensuring the maximum amount of freedom necessary for a liberal democracy. These discussions can be found in the important Federalist. Other founders, such as Thomas Jefferson, were more suspicious of government power, and they sought to have the rights of the citizens formalized in a Bill of Rights. The committee that drafted the Declaration of Independence sought to embody Locke's central ideas in practice by stating that "we hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain unalienable rights, that among these are Life, Liberty, and the Pursuit of Happiness." At the time of the ratification of the Constitution, the implementation of liberal principles as we understand them today was not perfect. However, the founding era was an important step toward making liberal democracies an attractive option for other nations. Classical liberalism is equated with Lockean liberalism. There have been advances, adjustments, and arguments concerning the nature of liberalism. Among the many issues that have been debated are the following: Are rights natural, meaning that they are not issued or granted by the state? What rights should be included in any liberal system? Is it enough to have the right to life, liberty, and property? What about the right to an education, a job, and a place to live? Does one have a right to an assisted suicide? Does one have a right to an abortion? Different theorists have answered these questions in numerous ways, and usually the answer depends on how one conceives of the liberal tradition. Among the most important thinkers in the contemporary liberal tradition is John Rawls (1921–2002). In his seminal work, A Theory of Justice, Rawls argued that a just society needs to adopt the idea of justice as fairness. For a society to be just, it needs to express the following ideals: (1) everyone has equal claim to the entire set of basic rights and liberties of society; and (2) all inequalities, social or economic, must be of the greatest benefit to the least advantaged members of society. In Rawls's next major work, Political Liberalism, he ponders the question of whether such a society can survive and thrive. Rawls argues that such a society is possible because of the concept of public reason—the common reason that all share—
and the belief that in liberal societies there is an overlapping consensus concerning the validity of the claims of justice outlined above. There have been many other contributors to the debate on liberalism during the 20th century. For example, Martha Nussbaum (1947– ) makes an argument for what she calls the capabilities approach to reinterpret Rawls's theory of justice. Michael Sandel (1953– ) levels a communitarian critique at Rawls's theory. He argues that we are all constituted by the goals and values that we hold and that we cannot abstract ourselves from them in some type of disembodied perspective as Rawls asks us to do. Recent reinterpretations of liberalism attempt to include a variety of social goods such as education, affirmative action, and progressive taxation. Thinkers differ as to the reinterpretation of classical liberalism, and they are generally divided between those who tend toward the libertarian side of the debate, meaning they favor as little government intervention in the lives of individuals as possible, and those on the socially progressive side of the debate, meaning they favor government programs and policies that seek to alleviate injustice by redistributing wealth. See also conservative tradition; ideology. Further Reading Locke, John. Two Treatises of Government. Cambridge: Cambridge University Press, 1960; Rawls, John. A Theory of Justice. Cambridge, Mass.: Belknap Press of Harvard University Press, 1999; Smith, Adam. The Wealth of Nations. New York: Penguin Books, 1986. —Wayne Le Cheminant
lobbying

Lobbying occurs when an individual representing a group of citizens attempts to persuade elected members of government to adopt a specific point of view, vote a particular way, or come to the assistance of an individual or group. Lobbyists are generally paid professionals who represent the interests of public and private special interest organizations before the institutions where government decisions are made. The term lobbyist originated during the 19th century, when representatives of interest groups would wait in the lobby of the
Capitol building for a member of Congress to appear so that they could try to influence his or her vote on legislation. Public interest groups, such as the National Rifle Association (NRA), seek to protect the collective interests of all Americans. Private interest groups, such as General Electric, request favorable treatment from the government to protect their own business or financial interests, rather than the interests of the larger society. Many interest groups contract with professional lobbying firms whose job it is to press the concerns of the specific group to the elected members of government. Most lobbying firms maintain permanent offices and
staffs in Washington, D.C. A large percentage of these offices are located along K Street; hence, the lobby industry at the federal level is frequently referred to as "K Street," or the "K Street lobby." There are approximately 35,000 registered lobbyists working in Washington, D.C., up from 16,000 in 2000. In 2005, lobbying firms spent more than $2.3 billion on lobbying the legislative and executive branches of the federal government, up from $1.6 billion in 2000. The health care industry as a whole spent the most money on lobbying activities
in 2005, over $183 million. In 2005 the U.S. Chamber of Commerce, General Electric, AT&T, and Southwestern Bell Corporation (SBC), respectively, were the top spenders on lobbying in Washington, D.C., among private interest groups, paying out more than $10.3 million each on lobbying activities. Lobby groups spend money in order to gain access to government leaders. The Public Citizen Congress Watch, a public interest watchdog group, summarized the role of money in lobbying: "Lobbyists are plainly expected to make campaign contributions in exchange for the access and favors they seek." As government spending and the size of the federal government increase, the government encroaches on more and more policy areas. As a result, the affected interests in society organize in reaction to the activities of the federal government. Growth in the size and spending of the federal government over the past several years is the main reason why the number of registered lobbyists has also increased. Interest groups will also spring into existence when there are other groups working against their interests. David Truman refers to this idea as "disturbance theory." While financial contributions do provide access for lobbyists, interest groups adopt a variety of techniques to influence government. Groups based in Washington, D.C., and at the state levels most frequently testify at legislative hearings, contact government officials directly, and help draft legislation, in that order. Lobbyists will also have influential constituents contact the legislator's office, initiate letter-writing campaigns among constituents, make donations to election campaigns, and distribute information to the media. Some interest groups also lobby the United States Supreme Court through litigation or amicus curiae ("friend of the court") briefs. Organizations attempting to influence the Court will directly challenge a law by sponsoring a lawsuit for an aggrieved party or will submit an amicus brief to the Court, which presents legal arguments the group wants the Court to adopt in its ruling. The American Civil Liberties Union (ACLU) is an example of an interest group that employs this lobby technique. Many interest groups and lobbyists frequently sponsor luxurious travel packages for members of Congress as a lobbying tool. Lobbying is increasingly being associated with providing legislators with travel accommo-
dations and other perks. The Center for Public Integrity reported that in five and a half years, ending in 2005, “Members of Congress and their aides took at least 23,000 trips—valued at more than $50 million—financed by private sponsors, many of them corporations, trade associations and nonprofit groups with businesses on Capitol Hill.” Lobbying is frequently used in conjunction with the term political action committee or “PACs.” While lobbying refers to the act of meeting with a government official, political action committees are legal entities associated with interest groups whose job it is to raise and distribute campaign funds to members of Congress and other candidates for political office. Lobbying by interest groups is frequently employed in combination with campaign donations from political action committees as a more effective way of persuading members of Congress. The 1974 and 1976 Federal Election Campaign Acts prohibited corporations, unions, and other interest groups from donating to candidates for federal elections. This prohibition of interest group donations led to the creation of Political Action Committees (PACs). The Federal Election Campaign Acts of 1974 and 1976 limited political action committees to donations of $5,000 per candidate per election. The Bipartisan Campaign Reform Act of 2002 maintained the $5,000 limit established in 1974. The use of political action committees as a lobbying tool is perhaps one of the most effective methods of influencing government decision makers. Many lobbyists are former members of Congress or former administrators from the executive branch of government. Lobbying firms tend to hire retired government officials or former members of Congress to work as lobbyists because of their familiarity with specific public policy issues and their relationships with the current lawmakers and their staffs. The lobbying organizations anticipate that these former government officials will use their previous professional relationships to the interest group’s own advantage and win the support of members of Congress. The practice of former members of Congress and other officials leaving office and then working for a lobbying company is known as the “revolving door.” Lobbying firms often compete with each other to hire these former government officials. According to the Washington Post, “For retiring members of Congress and
senior administration aides, the bidding from lobbying firms and trade associations can get even more fevered. Well-regarded top officials are in high demand and lately have commanded employment packages worth upward of $2 million a year." Current law, passed by Congress in 1995 to regulate lobbying in Washington, D.C., prohibits former lawmakers from working as lobbyists for one year after they leave their positions in government. President Bill Clinton issued an executive order in 1993 prohibiting former executive branch officials from having contacts with their former agencies or departments for five years and imposing a lifetime ban on working as representatives of foreign governments or political parties. Congress has acted to make lobbying as transparent and ethical as possible so as to avoid the appearance of quid pro quo. In 1946, Congress adopted the Federal Regulation of Lobbying Act, the first real attempt by Congress to limit the power of lobbyists. The 1946 act essentially required hired lobbyists to register with the House of Representatives and Senate and to file quarterly financial reports. However, many of these requirements were voluntary, and the act failed to legally define the term lobbyist. Congress again attempted to curb the perceived influence of lobbyists with the passage of the Lobbying Disclosure Act of 1995. The 1995 act defined a lobbyist as "any individual who is employed or retained by a client for financial or other compensation for services that include more than one lobbying contact, other than an individual whose lobbying activities constitute less than 20 percent of the time engaged in the services provided by such individual to that client over a six month period." The 1995 law also requires lobbyists to register with the clerk of the U.S. House of Representatives and with the secretary of the Senate within 45 days of an initial lobbying contact. Furthermore, the Lobbying Disclosure Act of 1995 compels lobbyists to submit semiannual reports to the House and Senate regarding the lobbying firm, the clients of the lobbying firm, money earned from clients, lobbying expenses, the house of Congress or federal agency contacted, the specific lobby issue (including bill numbers), and the name of the individual in government contacted about the public policy. The House of Representatives and the Senate have instituted their own in-house rules for receiving gifts from lobbyists in addition to these laws.
Representatives and senators are prohibited from soliciting gifts but may accept gifts from private sources in a limited number of circumstances, usually if the value of the gift (including meals) is $50 or less; they may not accept more than $100 in gifts in a year from any one source. All items received worth at least $10 must be reported in both chambers and count toward the $100 limit. (A simplified illustration of how these limits interact appears at the end of this entry.) In 2006 a high-powered lobbyist, Jack Abramoff, pleaded guilty to defrauding clients and to bribing public officials in one of the most spectacular lobbying scandals in American history. Abramoff was guilty of providing campaign contributions, seats in his skybox at Washington's MCI Center, home of the National Hockey League's Capitals and the National Basketball Association's Wizards, and free meals at the Washington, D.C., restaurant he owned in exchange for support of his clients. He also conspired to defraud the Indian tribes who had hired him as their professional lobbyist of more than $82 million. A dozen lawmakers were thought to be involved in the scandal, including the former spokesman for House Majority Leader Tom DeLay (R-TX). While not found guilty of committing a crime himself, Representative DeLay resigned from Congress in the wake of this episode. In response to this scandal, which affected more Republicans than Democrats, both the House of Representatives and the Senate passed separate versions of another lobbying reform measure. The Senate passed the Lobbying Transparency and Accountability Act of 2006 in April 2006. First, the bill banned lobbyists from providing gifts and meals to legislators, although the companies that hire lobbyists were still permitted to hand out gifts and meals to members of Congress. Second, the bill prohibited lawmakers and their senior advisers from lobbying Congress for two years after leaving office, instead of one as under current law. Third, privately funded trips for members of Congress would be allowed as long as the legislator received prior approval from the chamber's ethics committee. Finally, the bill would require quarterly reports from lobbyists and make this information available on the Internet. The House of Representatives needed to pass an identical bill in order for these limitations on lobbying to take effect. However, this legislative effort failed. While lobbying plays an important role in our democratic system of government, the interest groups
who participate in lobbying activities are sometimes referred to as pressure groups or special interests, labels with a negative connotation. Lobbyists are perceived by many to have an undue influence on the decisions of government officials by providing gifts, travel, campaign contributions, and other honoraria to members of Congress and executive branch officials in exchange for favorable legislation. However, lobbying is the hallmark of the American pluralistic system of government where the most organized and motivated interests prevail in the government decision-making process. Further Reading Berry, Jeffrey M. Lobbying for the People: The Political Behavior of Public Interest Groups. Princeton, N.J.: Princeton University Press, 1977; Birnbaum, Jeffrey H. “The Road to Riches Is Called K Street.” Washington Post, 22 June 2005, p. A1; Herrnson, Paul R., Ronald G. Shaiko, and Clyde Wilcox, eds. The Interest Group Connection: Electioneering, Lobbying, and Policymaking in Washington. Chatham, N.J.: Chatham House, 1998; Holtzman, Abraham. Interest Groups and Lobbying. New York: Macmillan, 1966; Kollman, Ken. Outside Lobbying: Public Opinion and Interest Group Strategies. Princeton, N.J.: Princeton University Press, 1998; Lowi, Theodore J. The End of Liberalism. New York: Norton, 1969; Mack, Charles S. Lobbying and Government Relations: A Guide for Executives. New York: Quorum Books, 1989; Mahood, H. R. Pressure Groups in American Politics. New York: Charles Scribner’s Sons, 1967; Nownes, Anthony J., and Patricia Freeman. “Interest Group Activity in the States,” Journal of Politics 60 (1998): 92; Schlozman, Kay Lehman, and John Tierney. “More of the Same: Washington Pressure Group Activity in a Decade of Change,” Journal of Politics 45 (1983): 358; Truman, David M. The Governmental Process. New York: Alfred A. Knopf, 1960; Zeigler, L. Harmon, and G. Wayne Peak. Interest Groups in American Society. 2nd ed. Englewood Cliffs, N.J.: Prentice Hall, 1972. —Harry C. “Neil” Strine IV
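As a rough illustration of how the gift thresholds described in this entry interact, the sketch below encodes the summary given above (a $50 ceiling per gift, a $100 annual ceiling per source, and a $10 reporting floor that counts toward the annual ceiling). It is a simplification for illustration only, not a restatement of the actual House or Senate rule text, and the gift values are invented.

```python
# Simplified illustration of the congressional gift limits described above.
# Thresholds and gift values are for demonstration only.

PER_GIFT_LIMIT = 50.0            # a single gift (including a meal) must be $50 or less
ANNUAL_PER_SOURCE_LIMIT = 100.0  # no more than $100 per year from any one source
REPORTING_FLOOR = 10.0           # items worth $10 or more are reported and counted

def review_gifts(gifts_from_one_source):
    """Check a year's worth of gifts from a single source against the limits."""
    counted_total = 0.0
    for value in gifts_from_one_source:
        if value > PER_GIFT_LIMIT:
            print(f"${value:.2f} gift exceeds the ${PER_GIFT_LIMIT:.0f} per-gift limit")
        if value >= REPORTING_FLOOR:
            counted_total += value  # reportable and counted toward the annual cap
    if counted_total > ANNUAL_PER_SOURCE_LIMIT:
        print(f"${counted_total:.2f} from this source exceeds the "
              f"${ANNUAL_PER_SOURCE_LIMIT:.0f} annual limit")
    return counted_total

# Three $40 meals from the same source: each is under $50, but together they
# exceed the $100 annual cap; the $8 item falls below the $10 reporting floor.
review_gifts([40.0, 40.0, 40.0, 8.0])
```

In this example, each $40 meal is individually permissible, but three of them from the same source exceed the $100 annual ceiling, while the $8 item is not counted at all.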
media

While the term media technically encompasses all forms of communication that transmit information
and/or entertainment to an audience, it often serves as shorthand in American politics just for those that deliver the news through a variety of communications mediums including newspapers, radio, television, and the Internet. The news media plays several important roles in American politics. As people’s primary source of political information, it educates people about their political leaders, the problems facing their society and government, and the attempts of those leaders to solve them. People often rely on information provided by the media in assessing whether their elected representatives are doing a good job in office. The media also serves as an important intermediary for communication between elected officials and their constituents. These important functions lead some to label it the unofficial “fourth branch” of American politics. Largely free from government intervention and regulation, the news media in the United States developed as a commercial enterprise. This sometimes stands at odds with its supporting role in American democracy as political information provider. The information citizens want does not always match what they need. This can create a tension between the pursuit of profits and the reporting of information-laden political news. Today’s journalists generally abide by norms of objectivity in reporting the news but this was not always the case. Faced with production costs too high for subscribers to support, the nation’s early newspapers were typically financed by political parties. As a result, the news they reported was slanted in support of the political party that financed them and biased against the opposing party. This allowed the newspapers to keep the financial support of their sponsoring political party but limited their audience to that party’s faithful. When new printing technology in the late 1800s allowed newspapers to be produced much more cheaply, editors eschewed political party support in favor of objective political reporting that would appeal to as wide an audience as possible. This competition for audiences helped produce the norms of objectivity that reporters for a variety of media continue to abide by today. Once people’s primary source of political information, newspapers have since been eclipsed by the development of new media. The first challenge to
newspapers came with radio but radio’s heyday as a news source was short-lived, as television quickly took its place. While few today rely on the radio as their main source of news, radio remains a politically relevant medium. Call-in political talk-radio shows offer a venue for citizens to engage in political dialogue and debate. Television took off quickly in the 1950s and became an integral part of many of the nation’s households. News was now delivered by newscasters that audiences could both see and hear. Perhaps because of this personable aspect of television news, audiences consider television news more trustworthy than other news sources. Television news, in its typical half-hour format, cannot offer nearly as detailed coverage as newspapers. In fact, the transcript of a half-hour network evening newscast would not even fill the front page of the typical newspaper. Both to fit in the allotted time and to hold the audience’s attention, news stories must be kept brief. Even a “detailed” report on the evening news rarely lasts more than a few minutes. Television news also requires compelling images. These requirements may limit the amount and type of news about government shown on newscasts. Complex political stories that cannot be adequately covered in a few hundred words or less or be easily depicted visually may not make the evening news. As most people’s primary source of political information, television only provides a limited picture of political events. With the emergence of cable and then satellite technology, came channels devoted solely to delivering news. Except during national emergencies, they typically draw far smaller audiences than network newscasts. These 24-hour news networks offer far greater carrying capacity than a half hour network newscast and allowed for the development of news shows devoted solely to politics and the pursuit of niche markets of people intensely interested in politics. Rather than simply reporting news, these shows often include commentary and lively debate to help draw an audience and may, in the process, blur lines of journalistic objectivity. In this vein, Rupert Murdoch launched a new news network, Fox News. While claiming to offer “fair and balanced” reporting as “the news you can trust,” it quickly drew a decidedly conservative audience dis-
illusioned by what they viewed as the liberal news media. Despite strong norms of objectivity, charges of a liberal news media persist. This alleged bias could manifest itself in the content of stories, the selection of stories, or both. Surveys of reporters show that they are more liberal than the general population, and anecdotes can be marshaled to bolster a charge of liberal bias, but systematic scholarly studies of television news content generally find little to no evidence of such a bias. The possibility of selection bias is more difficult to assess since scholars can rarely identify the full universe of stories the media could have covered. Surveys of news consumers suggest that bias may largely be in the eye of the beholder. Conservatives are more likely to perceive a liberal bias than liberals, and those who place themselves as extremely liberal sometimes perceive a conservative bias in the news media. While television remains the most used news source overall, the Internet is rapidly catching up and may soon surpass television, especially among younger generations. The Internet offers a plenitude of consumer choices and lacks the limited carrying capacity of television. Its low production costs allow competing news providers to easily enter the market and to go after niche markets rather than aiming for the widest audience possible. As a result, reminiscent in some ways of early newspapers, these providers may offer only stories that appeal to those with a particular political slant. Conservatives, for example, often turn to the Drudge Report Web site as an alternative source of news. To help them stay relevant amid new technology, newspapers and television news organizations also maintain their own Web sites that provide access to their news stories and often offer additional content. Journalists and politicians often appear to have a love/hate relationship with each other. Both need each other and yet neither fully trusts the other, nor do they have fully compatible goals. Politicians seek favorable coverage while journalists seek stories and need information from politicians to get them. The juiciest stories, however, may be those that portray the politicians in the least favorable light. Further, elected officials often find themselves frustrated when trying to get their message out through the news media. After all, the media involves, as its name
suggests, mediated communication, and reporters often filter politicians’ messages. This practice has increased over time on television news with journalists talking more and more in their stories and politicians talking less and less. To hope to make it onto the news, politicians must now attempt to craft their messages into five-to-10-second sound bites. Politicians also use the media in its broader sense to help get their messages out, bypassing the news media and seeking to use a variety of communications technologies to reach people directly. Presidents use the radio as one means of reaching the public. While President Franklin Delano Roosevelt did this best through his weekly fireside chats that helped soothe a nation troubled by economic depression, subsequent presidents including Ronald Reagan, Bill Clinton, and George W. Bush have continued the practice of radio addresses to the nation, only to much smaller audiences. In Reagan’s most infamous address, he failed to realize the microphone was on and joked about bombing the Soviet Union, illustrating by mistake the importance of watching one’s words. Modern politicians rely far more on television than radio to get their messages out and were quick to see its potential as the new technology emerged. President Dwight D. Eisenhower gave his version of “fireside chats” on television, was the first to hire a presidential television consultant, created a television studio in the White House, and used television advertisements in his reelection campaign. President Kennedy expanded the use of television further, holding the first live, televised press conference in 1961. Presidents also make live televised prime-time addresses to promote their policy agendas, but the effectiveness of these appeals has declined with changes in technology that shrank the audience. When viewers only had a few channels to choose from, it was easy for presidents to get airtime from the networks and pull in large audiences. People faced a choice of watching the president or turning off their televisions and in the early years of television, most people left their sets on. With the advent of cable and satellite and the growth of viewing choices, people increasingly change the channel when the president appears. The drop in audience and loss in revenue makes networks less willing to give up airtime, particularly in prime time, to presidents. Mod-
ern presidents must make a strong case for airtime and must make their requests wisely and sparingly for networks to grant their requests. The television era requires politicians to pay more attention to image. They not only need to worry about what they say but how they say it, and how they look saying it. The importance of this was illustrated in the 1960 debates between John F. Kennedy and Richard Nixon. Kennedy, especially in the first debate, in his dark blue suit and makeup for the television cameras came across as far more telegenic than Nixon, who was recovering from the flu and wore no makeup and a pale gray suit that blended with the background. One study suggests that Kennedy was the clear winner among television viewers, while radio listeners declared the debate a draw. Candidates considered this a lesson to learn from and now receive coaching on image and even have help picking wardrobes that will look best on television. The media plays an important role in political campaigns. Candidates rely heavily on the media to get their messages out. Scholars often make a distinction between free media (news coverage) and paid media (commercials). The choice between these methods of communication involves a tradeoff between control over the message and the cost of its transmittal. Running advertisements offers complete control over the message expressed but carries a hefty price tag. News coverage provides free publicity for candidates but candidates have little control over the content of that coverage. The news media’s coverage of campaigns receives criticism for focusing on the “horse race” aspect (who is in first, who might come from behind to win) rather than the issue positions of candidates. Most candidates choose to pursue both avenues and the percentage of campaign budgets devoted to buying airtime for advertising continues to grow. The effect mass communication through the media has on political behavior remains a subject of study and debate. The emergence of television fanned fears of the potential power of propaganda but these fears were quickly put to rest by findings that people’s attitudes were hard to change and that the media had “minimal effects.” Subsequent scholars have found that, while minimal compared to early fears, the media can still have substantial and important effects. While a single campaign advertisement is
extremely unlikely to change vote choice, advertisements contribute to what viewers know about politicians and how they view them. The news also shapes how people view political issues. People depend on news for their political information. What the news covers and how it covers it affects what people know about politics and how they think about political figures and issues. Due to limited space and resources, the news media does not and cannot report on every potential story. It acts, instead, as a gatekeeper, covering some stories and ignoring others. By focusing on a certain issue, the news media can bring it to the forefront of people’s minds. This may lead to calls for government action. In this respect, the media serves as an agenda setter. For example, in what was labeled “the CNN effect,” some attributed U.S. intervention in Somalia to the media’s focus on the crisis there. (Hard evidence generally fails to support a direct link between coverage and intervention.) Still, the more people are exposed to news coverage of an issue, the more likely they are to begin to see it as a serious concern. At the same time that the crime rate was declining, more Americans were naming crime as the most important problem the nation faced. Studies suggested that the heavy, exaggerated coverage of violent crime on local news was to blame. Another effect closely related to agenda setting is priming. By calling attention to one issue as opposed to another, the news may prime people to use that issue to evaluate politicians. When the media focuses heavily on the economy, people are more likely to base their evaluations of the president on economic conditions. The media can also affect public perceptions by the way it frames a story. There are often many different ways to tell the same basic story and each different way constitutes a different frame. A story on homelessness, for example, may focus on either individual causes or societal causes. Which frame is employed can affect how viewers attribute responsibility for homelessness and what they think politicians should do about it. Studies have shown that when newscasts attribute responsibility for a problem to the president, viewers are more likely to do the same. See also elections; freedom of the press; political advertising; polling; presidential elections.
Further Reading Bennett, W. Lance. News: The Politics of Illusion. 6th ed. New York: Pearson-Longman, 2005; Hamilton, James T. All the News That’s Fit to Sell: How the Market Transforms Information Into News. Princeton, N.J.: Princeton University Press, 2004; Iyengar, Shanto, and Richard Reeves, eds. Do the Media Govern? Politicians, Voters, and Reporters in America. Thousand Oaks, Calif.: Sage Publications, 1997; Kernell, Samuel. Going Public: New Strategies of Presidential Leadership. 3rd ed. Washington, D.C.: Congressional Quarterly Press, 1997; West, Darrell M. Air Wars: Television Advertising in Election Campaigns, 1952–2004. 4th ed. Washington, D.C.: Congressional Quarterly Press, 2005. —Laurie L. Rice
multiparty system

Democracies depend on political parties, and for a democracy to be legitimate, there must be more than one political party. One-party states, while they may claim to embrace democratic principles and may in fact hold open elections, cannot long maintain a claim of being democracies if elections are not challenged by alternative views, parties, and candidates. One-party states such as the old Soviet Union engaged in the pantomime of democracy but lacked the substance that would have required open, competitive elections between competing political parties. Thus, one can see the importance of multiparty systems for democratic government, where each party must compete for the votes of citizens. Having two or more political parties at least guarantees a choice (even in political systems such as the United States, where the parties, on some issues, often seem to be mirror images of each other and not distinct choices). When voters have a choice, the parties will attempt to be more responsive and answerable to constituent and citizen interests, desires, and demands. Where there is no choice, the single dominant party can act without concern for the wishes of the majority because it feels confident it can act without serious challenge and without the threat of dire consequence. Single-party systems may get complacent and not bother to take the interests or desires of the voters into consideration, because there may not be a pressing need for them to do so.
Single-party states can often afford to be arrogant and out of touch with voters, because there is really nowhere for disaffected voters to go. But in a multiparty system, voters can "vote with their feet," that is, they can go over to the opposition party and, as the old political saying goes, "throw the bums out!" A system with more than one party can be a two-party system, such as exists in the United States, or a multiparty system (with five or more competing parties) such as exists in Italy and a variety of other democracies around the globe. What leads to a two-party versus a multiparty system? The method of election has a great deal to do with the number of viable political parties that can compete for power. In the United States there is a single-member-district, or "first-past-the-post," method of electing political officials. This method gives the victory in a defined electoral district to whoever gets the most votes, even if the "winner" is unable to attract a majority of the votes cast. Granting victory for getting at least one more vote than any opponent leads parties to compete for the center of the political spectrum, since there can be only one winner and each party tries to attract a majority in the district. This usually leads to a two-party and occasionally a three-party system. The United States, with a single-member-district electoral system, has a two-party system, with Democrats and Republicans competing for power, and third or minor parties virtually locked out of the system. Similarly, the electoral college, which awards all of each state's electoral votes on a winner-take-all basis (except in Nebraska and Maine, which award electoral votes to the winning candidate in each congressional district), also perpetuates the two-party system. For example, even though independent presidential candidate H. Ross Perot won nearly 20 percent of the popular vote during the 1992 presidential election, he won zero electoral votes. This shows the difficulty of a third party gaining any real political foothold within the governing process. Great Britain also has a single-member-district electoral system and has a three-party system, with Labour, the Liberal Democrats, and the Conservatives competing for power (the Liberal Democrats have not seriously challenged the Conservatives or Labour for control of government, but they have maintained electoral viability in spite of their inability to gain a majority). Countries such as Canada and India, which
both have winner-take-all, single member district electoral systems, have multiple parties, but these nations are parliamentary democracies and, like Great Britain, have disciplined parties. The most common alternative type of electoral system is a proportional representation model. This system allocates legislative seats according to the overall proportion of votes a party gets in the nation as a whole. Such a system tends to be more open to alternative political parties and usually is characterized by having three or more viable parties. This often means that elections do not grant any one party majority status, and then the party with the highest number of elected legislators must attempt to form a coalition with other lesser parties in order to form a workable majority. After the national elections in Germany in September of 2005, no party had a majority of legislators, and the leading votegetter, the Christian Democratic Union, led by Angela Merkel received a slim 1 percent more of the vote than the runner-up party, the Social Democratic Party. Merkel ended up forming a coalition to form a majority and take control of the government. These coalition governments are sometimes less stable than governments where a majority party controls the legislature, as cleavages more easily come between the divergent parties that comprise the coalition, and the bonds that hold them together are less adhesive, leading to the collapse of the government. If two-party systems seem the most stable, why do not more countries attempt to “rig” their political systems to promote just two major parties? After all, such systems are often more stable, and with two parties, there may be less competition for votes. Part of the answer to this question is to be found in history; political systems develop over time with different traditions, norms, and trajectories, and one size does not fit all. Likewise, where societies have multiple forms of expression and belief, it is inevitable for several factions or parties to form—also, where there are several key religious cleavages, this may lead to different religious-based parties forming. Some societies are divided along regional lines, or linguistic lines, and there may be ethnic cleavages that split the polity. All of these differences may lead to different historical and cultural paths that influence the development of the party systems.
In democratic terms, both the two- and multiparty systems meet the requirements of democratic electoral viability. But multiparty systems may be more diverse, more representative, and more open to various viewpoints. A two-party system tends to be divided along a left and right axis; a multiparty system tends to have a greater range and variety of viewpoints represented. A two-party system often finds each party trying—for purely electoral reasons—to move toward the center of the political spectrum, hoping to gain a majority of votes and thus win power. A multiparty system is not pressured to march toward the middle, and can be more truly representative of the various and differing viewpoints on the political spectrum. In a perfect world, the architects of political systems may wish to create clean, clear political party systems. Political representation may clash with political stability, and who is to say which value should take precedence? The system that sacrifices representation for stability gains something, no doubt, but loses something as well. Politics is often about hard choices to be made between equally compelling and attractive alternatives, where one must choose one or the other, and not both. Multiparty systems emerge for a variety of reasons. They evolve rather than are created anew. To stifle their development for the sake of greater stability or a more streamlined process often means throwing the baby out with the bathwater, something a political system does at great risk. See also third parties. Further Reading Beizen, Ingrid Van. Political Parties in New Democracies. New York: Palgrave Macmillan, 2003; Epstein, Leon D. Political Parties in Western Democracies. New York: Praeger, 1967; Michels, Robert. Political Parties: A Sociological Study of the Oligarchical Tendencies of Modern Democracy. New York: Free Press, 1968; Patterson, Kelly D. Political Parties and the Maintenance of Liberal Democracy. New York: Columbia University Press, 1996. —Michael A. Genovese
negative campaigning Negative television advertisements began with the 1956 series of spots in which an announcer says,
“How’s that again, General?” The advertisement used footage from President Dwight D. Eisenhower’s previous campaign in 1952. In that year, Eisenhower’s ads showed him answering questions from citizens. His answers were used against him to make it appear that the president had not lived up to his promises while in office. This technique has been used countless times since then and can be quite effective, as Vice President Al Gore found in 2000. He was lampooned mercilessly for a statement he made about his role in the creation of the Internet. Footage of Gore making that statement was used against him in an attack advertisement created by the George W. Bush campaign. The voiceover said, “There’s Al Gore, reinventing himself on television again. Like I’m not going to notice? Who’s he going to be today? The Al Gore who raises campaign money at a Buddhist temple? Or the one who now promises campaign finance reform? Really. Al Gore, claiming credit for things he didn’t even do.” The advertisement then shows Gore making a statement about the Internet. The advertisement concludes with another voice saying, “Yeah, and I invented the remote control, too. Another round of this and I’ll sell my television.” Although the term “negative advertisement” has been used by many observers to describe a broad variety of advertisements, they can be divided into two categories: true “attack” advertisements and contrast advertisements. Some ads offer a contrast between a candidate and the opponent, without resorting to fear appeals or mudslinging. Attack advertising appears to be considered less acceptable by voters than advertisements that offer a comparison. Perhaps the most famous negative advertisement was the 1964 “Daisy Girl” advertisement for President Lyndon B. Johnson, in which voters were threatened with the possibility of nuclear war if they did not support Johnson. The advertisement opens with a little girl in a field, picking the petals off of a daisy. She counts each petal. Her voice is overtaken by a man’s voice giving the countdown for a missile launch followed by the sound of an explosion. The picture switches to a close-up of her eye and the viewers see a mushroom cloud. The advertisement ends with an announcer saying, “Vote for President Johnson on November 3rd. The stakes are too high for you to stay home.” This is a classic example of an advertisement that appeals to fear. More recent campaigns have echoed the Daisy
advertisement. For example, in 1996 Republican senator Robert Dole’s campaign actually used footage of the “Daisy Girl” in the opening of an advertisement attacking President Bill Clinton’s handling of the war on drugs. The spot told voters that drugs rather than nuclear war were a threat to “her” now and that Clinton’s policies were ineffective. In 2004, supporters of President George W. Bush sought to use fear of terrorism in their attack on Democratic nominee Senator John Kerry. Using images of Osama bin Laden and other terrorists, as well as footage of the World Trade Center on September 11, 2001, the advertisement warns voters, “These people want to kill us. They kill hundreds of innocent children in Russia and killed 200 innocent commuters in Spain, and 3,000 innocent Americans. John Kerry has a 30year record of supporting cuts in defense and intelligence and endlessly changing positions on Iraq. Would you trust Kerry against these fanatic killers? President Bush didn’t start this war, but he will finish it.” Challengers are more likely to utilize negative advertisements than incumbents. Because challengers want to convince voters to unseat an incumbent, their campaigns typically emphasize the need for change. Candidates that possess large leads in the polls generally do not turn to negative advertising. Instead, leading candidates tend to rely on positive ads that tout their strengths, while shying away from mentioning their opponents. In close elections, such as recent presidential campaigns, both challengers and incumbents have engaged in substantial negative campaigning. Indeed, several studies have documented the increase in negative or attack advertisements over the past 20 years. Candidates must be cautious in choosing to run attack advertisements. Candidates that run too many negative advertisements face the possibility of a backlash from both voters and media outlets. Nevertheless, most professional consultants believe that negative advertisements are an essential component of an overall strategy. However, the scholarly literature on the effectiveness of negative advertisements is mixed. It appears that negative advertisements may be more easily recalled by voters. Negative advertisements are especially effective in helping voters remember the challenger’s name. These advertisements also appear to improve
the ability of voters to recall the issues in the campaign. Negative advertising may be a necessary strategy for women and minority candidates, who tend to be challengers. Although women candidates have been told by consultants to avoid negative advertising, and many do, it may actually hurt their candidacies. As noted above, research has shown that the more negative ads in a race, the better voters are able to recall a challenger’s name and the issues in the campaign. This was particularly true for women candidates. Women candidates may be afraid of a backlash against them if they attack an opponent because it runs counter to sex stereotypes. Due to stereotypes, women are expected to be honest and “pure.” Women are also thought to be strong on some issues, such as health care and the environment, and weaker than men on issues ranging from the economy to the military. A woman who attacks her opponent might be seen as a shrew—or worse. However, the conventional wisdom that warns women to avoid going negative may be wrong. Experimental research has demonstrated that women are no more likely to suffer a backlash than their male counterparts. In addition, when women attack on the basis of those issues where they are stereotypically considered weak, they close the gap and are seen as more competent than their male counterparts. Finally, when female candidates do attack, they are less likely to engage in mudslinging and more likely to attack opponents on issues than male candidates. This strategy may work to their advantage. As we have seen, negative advertising has been increasing. One area of inquiry has been the impact of the growing negativity on the electorate. The concern is that the prevalence of negative advertisements could cause voters to grow weary of the whole process, keeping them away from the polls. Some have argued that attack advertisements reduce positive sentiments toward both candidates. They may also have an impact on voter motivation. In their landmark study, political scientists Shanto Iyengar and Stephen Ansolabehere estimate that negative advertising could depress turnout by as much as 5 percent. They point out that negative campaigning may have discouraged as many as 6 million voters from going to the polls in the 1992 election. Thus, the decline in voter participation can be partially attributed to the increase in negative campaigns over time. However,
other scholars have treated the charge that negative advertising leads to lower turnout with skepticism. Some studies have found that advertising may actually increase the likelihood of voters going to the polls. Among those who believe that attack advertisements motivate viewers to vote, a common theory is that negative advertising intensifies people’s involvement in campaigns. If a candidate runs a smear campaign against his or her opponent, it may motivate the candidate’s supporters; however, it is also possible that the opponent’s camp will be outraged at the attacks and be motivated as well. In addition to concerns over the possibility that negative advertisements could depress turnout, critics also point out that some negative advertisements are simply unethical. Some advertisements that cross the line are those that appeal to racial prejudice. The “Willie Horton” and “Revolving Door” advertisements of the 1988 presidential campaign are a case in point. Both advertisements accused Massachusetts governor Michael Dukakis of being soft on crime. But more than that, they attempted to appeal to racial prejudice, as Horton was a black man who raped a white woman. The Horton advertisement was paid for by an outside group. The advertisement opened with images of President George H. W. Bush and Dukakis and then shows pictures of the convicted murderer as the voiceover says, “Bush and Dukakis on crime. Bush supports the death penalty for first degree murderers. Dukakis not only opposes the death penalty, he allowed first degree murderers to have weekend passes from prison. One was Willie Horton, who murdered a boy in a robbery, stabbing him 19 times. Despite a life sentence, Horton received 10 weekend passes from prison. Horton fled, kidnapped a young couple, stabbing the man and repeatedly raping his girlfriend. Weekend prison passes. Dukakis on crime.” Similarly, an advertisement paid for by the Bush campaign depicted prisoners going through a revolving door to illustrate the advertisement’s contention that the weekend furlough program in Massachusetts has had dire consequences. The voice-over said, “As Governor Michael Dukakis vetoed mandatory sentences for drug dealers he vetoed the death penalty. His revolving door prison policy gave weekend furloughs to first degree murderers not eligible for parole. While out, many com-
mitted other crimes like kidnapping and rape, and many are still at large. Now Michael Dukakis says he wants to do for America what he’s done for Massachusetts. America can’t afford that risk.” Other types of appeals are ethically suspect as well. As Lynda Lee Kaid and Anne Johnston have observed, the use of technology to create misleading advertisements is particularly troubling. They have drawn attention to four of the most common misuses of technology: editing, special effects, dramatizations and computerized alterations. For example, many advertisements manipulate video to show the opponent in an unflattering light. Some even go so far as to “morph” the opponent into another (presumably unpopular or reviled) person. See also media; presidential elections. Further Reading Ansolabehere, Stephen, and Shanto Iyengar. Going Negative: How Attack Ads Shrink and Polarize the Electorate. New York: Free Press, 1995; Diamond, Edwin, and Stephen Bates. The Spot: The Rise of Political Advertising on Television. Cambridge, Mass.: MIT Press, 1992; Goldstein, Ken, and Paul Freedman. “Campaign Advertising and Voter Turnout: New Evidence for a Stimulation Effect.” The Journal of Politics 64, no.3 (2002): 721–740; Kaid, Lynda Lee, ed. Handbook of Political Communication Research. Mahwah, N.J.: Lawrence Erlbaum Associates, 2004; Kaid, Lynda Lee, and Anne Johnston. Videostyle in Presidential Campaigns. Westport, Conn.: Praeger, 2001; Thurber, James A., Candice J. Nelson, and David Dulio, eds. Crowded Airwaves: Campaign Advertising in Elections. Washington, D.C.: Brookings Institution Press, 2000. —Ann Gordon and Kevin Saarie
party conventions Every summer during presidential election years, the two major parties (and often minor parties as well) gather for their national conventions. The national convention is where the delegates who were selected in the primaries and caucuses officially nominate the presidential and vice presidential candidates as well as pass the party's platform. The Anti-Masonic Party held the first national party convention in 1831 in Baltimore, Maryland.
President Gerald Ford's supporters at the Republican National Convention, Kansas City, Missouri (Prints and Photographs Division, Library of Congress)
At one time, political observers considered the party convention to be extremely important and the public closely followed its events. The original party conventions were held in the “smoke-filled rooms” where party leaders quizzed potential candidates and debated which candidate was most acceptable. It was often unclear who the nominee would be before the convention began. Conventions were considered to be “brokered conventions” in which there was not much agreement on issues and candidates. In fact, in 1924 it took the Democrats 103 ballots to nominate John W. Davis. Today, the party convention is much different. Generally, few disagreements exist between party members (and those that do disagree are usually hidden from the media) and there is little surprise regarding who the party’s presidential nominee will
be. In fact, not since 1952 has one of the two major parties’ presidential candidates required more than one ballot to secure the nomination. This has led many to complain that conventions are purely theatrical events that are designed as pep rallies for the parties and fail to give the public much more than propaganda. It was for this reason that, during the 1996 Republican Party convention, ABC’s Nightline anchor Ted Koppel declared the convention unworthy of further coverage and left. Koppel is not alone in his view about the lack of newsworthiness of conventions. Both television coverage and the percentage of households watching the convention have declined substantially over the years. Initially, when television began broadcasting the conventions, the media provided gavel-to-gavel coverage. Because of the unpredictability of the convention, the three major networks wanted to be sure to capture any disagreements live. The public was consumed with the conventions as well. As the number of households owning televisions grew, the number of people watching the conventions rose substantially as well. Because of the constant media coverage and larger audience, parties quickly realized that the potential existed to persuade many voters. The nominating convention became a springboard for the party’s candidate in the fall campaign. Party leaders also realized, however, that because so many people were watching, the event had to be flawless. Viewers could not see a party divided on the issues or potential nominees; the party needed to appear united, whether in fact it was or not. Also, parties could not afford mistakes, such as the one made at the 1972 Democratic convention in which nominee George McGovern gave his acceptance speech at 2:48 a.m. to a paltry television audience. The convention had to be completely scripted. As a result, party conventions lost their spontaneity, which then lost the interest of the media as well as the public. Instead of gavel-to-gavel coverage, the three major networks covered the two major parties’ 2004 conventions for a total of 6 hours (although cable networks such as C-Span continued longer coverage). Not only did the length of television coverage decline, but so did the number of viewers. Surveys conducted by the Annenberg Public Policy Center and the Shorenstein Center for the Press, Politics, and Public Policy indicated that roughly half of the television audi-
ence reported that they saw some of the conventions, but most of those viewers only watched a few minutes. A typical party convention is four days long (Monday through Thursday). While there is not a rigid schedule that parties must follow, generally both the Democrats and the Republicans adhere to a similar format. The first two days of the convention are devoted to party business, such as passing the platform, and high-ranking party officials give speeches. Traditionally, the keynote address is given the first night. The goal of the keynote address is to unite the party and heal any divisions that might have occurred during a bitter primary battle. However, the keynote address is not necessarily the most important speech delivered at the beginning of the convention. In fact, in 2000 the Republicans opted not to have a keynote address, while the Democrats’ keynote address, given by Congressman Harold Ford, Jr. of Tennessee, was not shown in its entirety by most newscasts. However, in 2004, Illinois U.S. Senate Candidate Barack Obama generated a substantial amount of favorable coverage for his performance as keynote speaker at the Democratic National Convention. The GOP keynote address by U.S. Senator Zell Miller (D-GA) also received a great deal of notoriety for the scathing attacks he leveled against the Democratic presidential candidate John Kerry. Oftentimes, other speeches draw more attention from the public and the press. In 2000, viewers were interested in hearing Republican presidential candidate John McCain and Democratic presidential candidate Bill Bradley give endorsements to their former opponents (George W. Bush and Al Gore, respectively). Other speeches can be quite controversial, such as Republican Pat Buchanan’s speech in 1992 attacking Bill Clinton and the Democratic Party. In recent conventions, the candidates’ wives have also addressed the delegates. The third convention night is devoted to the official nomination process of the president and the vice president (although the Republicans used a rolling nomination in 2000 in which a few states’ delegates voted each night). In the past, there was some uncertainty about who the vice presidential candidate would be, going into the convention. In 1956, for example, Democratic nominee Adlai Stevenson let the delegates choose between Estes Kefauver and
John F. Kennedy. George H. W. Bush stunned many when he picked the junior senator from Indiana, Dan Quayle as his vice presidential candidate at the 1988 Republican convention. More recently, however, the president has selected his vice presidential candidate before the convention. Vice presidential candidates now utilize their speeches to the convention as a vehicle to both highlight the strengths of their own party’s presidential nominee as well as to criticize the other party’s presidential nominee. In 2004 Vice President Dick Cheney was particularly forceful in praising President George W. Bush’s leadership in the War on Terrorism. The fourth night is when the party nominee makes his or her acceptance speech. The acceptance speech is crucial because it is often the first time the candidate speaks before such a large audience. This is really the first chance the candidates have to “turn the public on” to their candidacies. For example, during the 2000 acceptance speeches, George W. Bush questioned Bill Clinton’s and Al Gore’s leadership abilities, while Gore presented himself as his “own man,” trying to separate himself somewhat from Clinton. In 2004, John Kerry sought to play up his record as a decorated Vietnam War veteran during his acceptance speech. Incumbents typically use the occasion to cite the accomplishments of their first term in office. At the 2004 Republican Convention President Bush emphasized his response to the terrorist attacks on 9/11/01. Acceptance speeches can cause a jump in the candidate’s popularity, but they can also hurt a candidate as well. Democrat Walter Mondale’s promise to raise taxes in 1984 certainly did not help his chances in November; he lost in a landslide to President Ronald Reagan. Because the conventions are so scripted today, many political analysts question their purpose. Others insist that conventions are not meaningless, but in fact serve the important role of uniting the party. Traditionally, candidates have received a convention bounce in their public approval ratings after the convention. Prior to 2004, every candidate since Democrat George McGovern in 1972 received some sort of post-convention boost. However, John Kerry saw no increased level of support following the 2004 Democratic National Convention. The bounces are often short-lived, such as Republican
Bob Dole’s in 1996, but in some cases they are longer lasting. In 2000, Al Gore erased a 19-point deficit and remained basically even with George W. Bush until election day. Though the conventions certainly are not the old-fashioned gatherings of the past, supporters of conventions argue that they remain important because of the convention bounce. See also party platform. Further Reading Goldstein, Michael L. Guide to the 2004 Presidential Election. Washington, D.C.: Congressional Quarterly Press, 2003; Mayer, William, ed. The Making of Presidential Candidates 2004. Lanham, Md.: Rowan and Littlefield Publishers, 2004; National Party Conventions, 1831–2005. Washington, D.C.: Congressional Quarterly Press, 2005; Polsby, Nelson W., and Aaron Wildavsky. Presidential Elections: Strategies and Structures of American Politics. 10th ed. New York: Chatham House, 2000; Witcover, Jules. No Way to Pick a President. New York: Farrar, Straus, and Giroux, 1999. —Matthew Streb and Brian Frederick
party platform Political parties create platforms as a way to let voters know what candidates stand for while running under the auspices of the party. They provide a shortcut for voters who may not otherwise take the time to understand all the issues. If a voter knows the Republican Party is against abortion, then a voter can reasonably assume the Republican candidate running for office is also against abortion. The platform is a metaphor for a number of planks, or positions on issues, that, put together, create a platform upon which candidates can stand. The drafting of party platforms originated in state party conventions held in the early 1800s. When state party delegates started to conceive of themselves as an organization, they began to work on developing a message they could disseminate to the people in the state. When the national parties began to develop national conventions, the practice of drafting platforms also began. In this case, delegates of the party from all over the country would get together and haggle about what issues were important enough to put in a statement
from the party. In the 20th century, the platform played a somewhat different role. If the party had the presidency, then the president and his staff often directly influenced the platform. If the party did not have the presidency, the platform tended to be more professional as staffers from the national committee had more of a hand in writing it. American political parties have survived through the centuries by adapting their political positions to where American voters are on particular issues. Parties, however, do not generally make a wholesale change to their platform regardless of what is happening politically. They tend to respond to voters by incrementally changing their platform over time. Voters are aware if a party seems to be moving just to win votes, so the parties need to be careful when changes are in the works. Thus, the parties tend to move in accordance with three general principles. First, parties move to find votes on particular issues but tend to choose platform positions that are similar to existing platforms. In this way they protect themselves from the perception they are callously changing positions to garner electoral victories. Second, parties move to moderate positions on issues in an effort to find a winning coalition. They will try to avoid ideologically controversial issues where possible in an effort to appeal to the broadest majority of American voters. Finally, parties tend to stay away from the other party’s issues regardless of the success of a particular issue for that party. They do not tend to encroach on each other’s ground. While adhering to these adaptive principles, parties alter messages to appeal to different sets of voters. Depending on whom they are speaking to, the party will tell a story on a particular issue in a slightly different way. They also maintain policy distance on controversial issues, such as abortion. There can also be policy convergence when an issue has lost its controversy, such as social security reform. Generally, when parties modify their platforms it has happened because voter preferences have changed. There is a tendency by the major parties to adopt positions held by majorities of the public as they are trying to appeal to as broad an electoral base as possible. Thus, when there is near unanimity on an issue the parties tend to agree on policy. The more conflict develops over an issue, the more contrary the parties become. The more polarized the electorate is,
the more room there is for the parties to become contrary to each other in their platforms. The parties will also move in an effort to satisfy a minority in an established coalition. Thus, the platform becomes a method for a minority voice to be heard in American politics as it can become part of a major party’s agenda for the next electoral cycle. The process for drafting platforms has gone through only one major change in history. From about 1832 until the 1970s, party platforms began with an elite committee drafting an initial message. Organized groups would appeal to the committee to put their interests in the platform. Again, if the party was in the presidency then the platform was usually that of the White House and its staff. If the party was out of the presidency, then it was the work of the national committee professional staff. Once the initial draft was ready, the convention saw the polished product and voted accordingly. Sometimes changes would be made from the floor but, most often, the platform was crafted by an elite group. In the 1970s, major reforms transformed primaries as the parties selected nominees rather than the elites. This not only changed the method for choosing nominees but also drastically affected the draft of the party’s platform. Since the nominee was chosen preconvention, the platform became the primary focus of the convention, which put in more pressure on the platform result. Two major types of influences thus began. Issue-oriented delegates would arrive at the convention with the voting power to push issues that were not necessarily in line with mainstream candidates, which meant that candidates may have less of a reason to stand on the party’s platform. This has helped lead to more candidate-centered campaigns where nominees mention their party less often. Second, since delegates were sent to the convention already pledged to a particular candidate from the primary season, they would push the issues of the candidate to become the party’s issues, rather than the party’s issues becoming the candidate’s. Again, this has helped lead to more candidate-centered politics as even the party’s platforms begin to sound like campaign messages depending on the nominees in that campaign season. Generally, as candidates are thinking about winning a general election rather than building a party base, the platforms have become more muted over time to allow as much latitude as possible for the candidate
trying to win. In a responsible party model, as is seen in Canada, the party develops a platform and then holds the candidates who run accountable to that platform. Such platforms provide voters with a more secure shorthand for making voting decisions, as well as allowing them to hold candidates accountable for things that happen in the government. Each party platform has three major categories it addresses to demonstrate to voters where it is in the upcoming electoral cycle. First, there is a rhetorical section that lays out the general principles of the party. American parties generally adhere to some aspect of the American creed in this rhetoric, whether espousing self-reliance or individualism. The second category is the meat of the party platform in which the party evaluates current policy directions of the government. If the party is governing, then generally the evaluations are complimentary. If the party is out of power, then these evaluations attack what is happening. To peruse this evaluative phenomenon, click onto the national parties’ Web sites and you will see one full of praise for the current president while the other attacks everything that is said. The third category is the promise of future policy in which the party lays out its plans for the future. This is a more difficult category to work in as most American voters do not practice prospective voting, but it does give an idea of where the party plans on heading in the future. The platforms do not always anticipate major crises in American politics, but on the more mundane, and thus more often governed, aspects of politics they lay out a plan to move forward. Another feature of the party platform is that it plays a role in agenda setting in American politics. Issues get placed on the American political mind through agenda setting and people talk and think about those issues that are there. Regardless of how important an issue may or may not be, if the major political players are not talking about it, it is ignored. Party platforms serve the role of putting issues on the minds of voters, which in turn allows them to be governed. Party platforms are not just nice enumerated lists of policy ideas. The parties attempt to put into law many of the policies that are promised in their platform. These promises may not make it into law, given the complex process of American law making, but they are not empty promises nonetheless. As a result, if a traditionally underrepresented group can get its issue onto a party platform, its issue will be
mobilized. People will start talking about the issue and elected officials will attempt to govern on the issue. In many ways, the political party platform is a very effective vehicle for outsider groups to get into mainstream politics. A final service to American government that the party platform provides is to bind the nation together. Americans live in vastly diverse regions, have vastly diverse opinions on everything from religion to race relations to the economy, and have different political cultures depending on where they live. What a party platform does is allow people from rural Kentucky to speak about politics to people from suburban Oregon or urban Atlanta. As a result, the party platform helps bind together a vast American political culture. It is a political document that binds campaigns together so people are hearing similar messages across the nation and it binds constituency groups together so that people learn to hear what others need from politics. Parties are vital to American government and their platforms are the way for parties to play an important role in a democratic system of government. These promises are made based on democratic principles because the people who influence the platform are selected from people in local districts to attend the national convention. Another important aspect of the party platform is that minority groups who would otherwise have no chance in American politics can get their issues heard by the mainstream. They are brought into the party as a way to build a winning coalition and elect candidates, but their presence requires attention to their issues. In a country as pluralistic in nature as the United States, something needs to bind our citizens together. In the party platform the people can find their voice. Further Reading David, Paul T. “Party Platforms as National Plans,” Public Administration Review 31, no. 3, Special Symposium Issue: Changing Styles of Planning in PostIndustrial America (May–June 1971): 303–315; Kollman, Ken, John H. Miller, and Scott E. Page. “Political Parties and Electoral Landscapes,” British Journal of Political Science 28, no. 1 (January 1998): 139–158; Monroe, Alan D. “American Party Platforms and Public Opinion,” American Journal of Political Science 27, no. 1. (February 1983): 27–42; Patterson, Kelly D. Political Parties and the Mainte-
nance of Liberal Democracy. New York: Columbia University Press, 1996; Walters, Ronald. “Party Platforms as Political Process,” PS: Political Science and Politics 23, no. 3. (September 1990): 436–438. —Leah A. Murray
patronage Patronage, sometimes referred to as the spoils system, generally means supporting or giving a job or favors to someone (usually a loyal member of a political party) as a reward for help in an election campaign. After victory, the winning candidate or party rewards supporters by placing them in plum government jobs or giving resources such as budget allocations, contracts, or other forms of reward, as a “thank you” and a payback to supporters, friends, and loyal members of the party. Patronage goes well back in history and has roots not just in politics, but in the arts as well. It was common for wealthy and influential people or the church to sponsor artists and these patrons were often responsible for the great works of art in Europe and elsewhere. But that form of patronage was to be applauded; the political form of patronage has a less honorable lineage. In politics, elected officials, especially executives, have at their disposal key jobs that offer power and financial reward. As a result, the United States quickly developed a spoils system. In the first presidential administration of George Washington, the president attempted to have a national unity administration, and to stem the rising tide of political party development in the new government. But it did not take long before party divisions split the nation, and the country was soon divided between the Federalists (Washington, Alexander Hamilton, John Adams, and others) and the Jeffersonian Democratic Republicans (led by Thomas Jefferson and James Madison). In Washington’s second term, partisan divisions split the administration, and when John Adams was elected president in 1796, a full-fledged party system was in the making. When Adams lost in his bid for reelection in 1800 and Jefferson became president, the outgoing President Adams made a series of “eleventh hour” appointments (spoils appointments) of Federalists to a number of positions in government. When Jefferson took over he attempted to purge the government of
many of the Federalist holdovers and appoint loyal Jeffersonians to prominent positions. One of these attempts led to the famous case of Marbury v. Madison (1803), wherein the United States Supreme Court established judicial review. In the long run, Jefferson was able to expunge many Federalists from government posts, but that was only the beginning of the partisan battle over patronage and spoils. President Andrew Jackson, elected in 1828, argued that “to the victor go the spoils” and he actively and openly promoted members of his own party to key posts, firing those who were holdovers from the previous administration and opposition party. This guaranteed Jackson loyal supporters, but did not always mean that the best or even an adequate appointment was made. But with the party system established, the rule of spoils dominated the appointment process at the federal level. In city governments, the spoils system also dominated the urban machines so visible in the large cities of America. One of the best known of these machines was Tammany Hall in New York City. When elected, the Tammany Hall boss would appoint cronies to city posts both high and low. Often these appointees were incompetent, and just as often they were corrupt. But the machine had a vested interest in providing the city’s inhabitants with adequate city services as this would get them reelected. And so, the machine provided services, but at a price. Likewise, lucrative city contracts were often given out to friends and cronies (always with a significant kickback to the machine politicians) and the work done was often shoddy and inadequate. On the other hand, these machines often served the needs of the European immigrants who settled in the big cities of America. The Boss became the friend and patron of the immigrant family, finding a job upon arrival, helping out in hard times with food and other social services, intervening when a child got in trouble with the law, etc. Such city services came at a price however. The immigrant was to vote for the machine’s candidate at the next election. If the machine’s candidates won, the services continued; if not, another different machine took control. There was thus a life and level of activity to the urban politics of these times, as the Boss and ward representatives were “up close and personal” with the residents, often attuned to their needs, and just as often responsive to their interests—but always at a price.
At its best, the city machine could be responsive and responsible; at its worst the machine could be corrupt and venal. All too often it was the latter. And when the machine was corrupt, it was often wildly corrupt. The machine politics of the big American city led to a backlash, and reform movements sprang up, calling for good government and a cleaning up of the corrupt city machines. Over time, reformers persuaded enough citizens that the cost in corruption was too high and a new system needed to be implemented. The new system was that of civil service reform. The drive for reform took shape in the 1870s and 1880s in the United States, culminating with the passage of the Pendleton Act of 1883, which set up a Civil Service Commission, and neutral tests based on merit that were used to determine who was qualified for government employment. The goal was to hire neutral and competent employees. This reform effort marked the beginning of the end for the spoils system and merit tests were introduced in city hiring and the civil service took over at the federal level. Today, the president still has control over key appointments within the executive branch, such as the cabinet and top agency heads, as well as the Executive Office of the President. But the president does not control hiring in the bureaucracy and civil service protects workers from political pressures to perform tasks that are political and not governmental in nature. In effect, the United States has tried to strike a balance between a bureaucracy totally responsive to a president and a merit system that protects the integrity of employees from inappropriate pressures that may be applied to career employees. Is the “new” civil service system better than the old spoils system? Ironically, at the presidential level, most “conviction” presidents (those with strong ideological viewpoints and ambitious goals for change) tend to complain that the neutral civil ser vice is their enemy, that they do not respond to presidential directives or leadership, and that they often sabotage efforts at presidential leadership. Such claims, while often heard, are not always well founded in political reality. Most of the time, most civil servants perform their tasks with
professionalism and neutrality, and that is—for some presidents at least—the problem. Many presidents do not want neutrality, they want responsiveness and they also want the civil servant to serve their needs and theirs alone. But civil servants have job security precisely because they are not to be political slaves to the person in the White House. Job security is to insulate them from political pressures. Their job is not to do the president’s bidding but to serve the interests of the nation as neutral, professional servants of the public interest. This may be upsetting to presidents, but it is the intention of the reformers who smashed the spoils/ patronage system and replaced it with a cadre of professional civil servants whose primary task was serving the public interest. Further Reading Eisenstadt, S.N., and René Lemarchand, eds. Political Clientelism, Patronage, and Development. Beverly Hills, Calif.: Sage Publications, 1981; Schmidt, Steffan W., ed. Friends, Followers, and Factions: A Reader in Political Clientelism. Berkeley, Calif.: University of California Press, 1977; Tolchin, Martin and Susan Tolchin. To the Victor . . . Political Patronage from the Clubhouse to the White House. New York: Random House, 1972. —Michael A. Genovese
Political Action Committees (PACs) Political Action Committee (PAC) is a popular term referring to legal entities organized for the purpose of raising and spending money to elect and defeat candidates and/or to use their financial resources as a means to influence legislation and the creation of public policy. PACs are distinct organizations referred to under federal election laws as "separate segregated funds." Most PACs represent business, labor, or ideological interests. According to federal election law, PACs sponsored by corporations and labor organizations must register with the Federal Election Commission (FEC) within 10 days of being established. Other PACs, such as nonconnected PACs (see discussion below), must register within 10 days after certain financial activity exceeds $1,000 during a calendar year. The history of PACs dates back to the creation of the very first one in 1944 by the Congress of Indus-
trial Organizations (CIO) with the purpose of raising money for reelecting President Franklin Delano Roosevelt. The funds were raised from voluntary contributions from individual union members instead of from union treasury money. The money was raised this way in order to ensure that their donations did not violate the Smith-Connally Act of 1943, which forbade unions from contributing to federal candidates directly. The number of PACs did not grow considerably until the 1970s, which by most accounts began the modern PAC era. In 1972, only 113 PACs existed. However, in the wake of the campaign finance laws born out of reform efforts in the early 1970s, the number of PACs began to increase dramatically. These reforms placed limits on how much money could be contributed directly to candidates, thereby making PACs an attractive alternative for people and organizations to raise and spend greater amounts of money for political purposes. The Federal Election Commission (FEC), created during those 1970s reforms, issues a SemiAnnual Federal PAC Count that keeps track of the growing number of PACs. In 1974, the total number of PACs was 608, comprising 89 corporate PACs, 201 labor PACs, and 318 trade/members/health PACs. In January 2006, the number of PACs totaled 4,210: 1,622 corporate, 290 labor, 925 trade/members/ health, 1,233 nonconnected, 37 cooperative, and 103 corporate without stock. As can be seen, the greatest increase has been in the category of corporatesponsored PACs. Corporations too are prohibited from making direct contributions to federal candidates by the Tillman Act (1907), so their use of PACs allows for participation in the electoral arena. The financial impact of these entities on various elections has also increased. For example, according to the FEC, during the 2003–2004 election cycle, PAC contributions to federal candidates totaled $310.5 million, up 10 percent from 2001–2002. Of this amount, some $292.1 million was given to candidates seeking election in 2004, and the remaining $18.4 million went to candidates running for office in future years or to debt retirement for candidates in past election cycles. Although the term PAC is the general term for any political action committee, several different kinds of PACs exist. The “connected PAC” is directly
TOP 20 PAC CONTRIBUTORS TO FEDERAL CANDIDATES, 2003-2004*

PAC Name                                   Total Amount   Dem Pct   Repub Pct
National Assn of Realtors                  $3,787,083     47%       52%
Laborers Union                             $2,684,250     86%       14%
National Auto Dealers Assn                 $2,603,300     27%       73%
Intl Brotherhood of Electrical Workers     $2,369,500     96%       4%
National Beer Wholesalers Assn             $2,314,000     24%       76%
National Assn of Home Builders             $2,201,500     33%       67%
Assn of Trial Lawyers of America           $2,181,499     93%       6%
United Parcel Service                      $2,142,679     28%       72%
SBC Communications                         $2,120,616     36%       64%
American Medical Assn                      $2,092,425     21%       79%
United Auto Workers                        $2,075,700     98%       1%
Carpenters & Joiners Union                 $2,074,560     74%       26%
Credit Union National Assn                 $2,065,678     42%       58%
Service Employees International Union      $1,985,000     85%       15%
American Bankers Assn                      $1,978,013     36%       64%
Machinists/Aerospace Workers Union         $1,942,250     99%       1%
Teamsters Union                            $1,917,413     88%       11%
American Hospital Assn                     $1,769,326     44%       56%
American Federation of Teachers            $1,717,372     97%       3%
Wal-Mart Stores                            $1,677,000     22%       78%

Totals include subsidiaries and affiliated PACs, if any.
*For ease of identification, the names used in this section are those of the organization connected with the PAC, rather than the official PAC name.
Based on data released by the FEC on Monday, May 16, 2005.
connected to a specific corporation, labor organization, or recognized political party. Such PACs solicit funds from employees or members and make contributions in the PAC's name to candidates or political parties in order to advance the organization's own interests. Among the many examples of entities with connected PACs are Microsoft (corporate PAC) and the Teamsters Union (organized labor). The "nonconnected PAC" (often also referred to as an independent or ideological PAC) is one that is not connected to or sponsored by a corporation or labor organization and is not related to a candidate's campaign or to a political party organization. A nonconnected PAC raises money by targeting selected groups (e.g., conservative voters, environmentalists, etc.) and spending money to elect candidates who support its ideals or agenda. Among the many examples of these sorts of PACs are the National Rifle Association (pro-gun) and Emily's List (promoting female candidates).
Another specific kind of PAC is founded by a candidate and formed in the preprimary phase of an election. This kind of PAC is referred to as an “exploratory campaign.” Potential candidates create these PACs as they explore the possibility of running for office, most notably for president. The PAC funds are used to pay for an undeclared candidate’s political travel and related expenses accrued during such an exploratory campaign. Finally, “leadership PACs” are formed by politicians but are not technically affiliated with the candidate. These PACs, however, are used as a way of raising money to help fund other candidates’ campaigns. The creation of a leadership PAC often is indicative of a politician’s aspirations for leadership positions in Congress or higher office, or stem out of a desire to prove party loyalty and cull support for future pursuits. Limitations have been placed upon the amount of contributions that PACs can make in federal
elections. They are allowed at most $5,000 per candidate committee per election cycle (primaries, general elections, and special elections are counted separately). Additionally, they are allowed to contribute up to $15,000 annually to any national party committee and at most $5,000 per PAC per year. PACs are also limited in terms of the contributions they can receive. They are allowed to receive only up to $5,000 from any single individual, PAC, or party committee per calendar year. However, PACs are not limited in how much they can spend on advertising in support of candidates or in promotion of their agendas and beliefs. In a 1985 case, FEC v. National Conservative Political Action Committee (NCPAC), the United States Supreme Court further solidified PACs' role by declaring that PACs should not be limited as to their spending on a candidate's behalf provided that the expenditure is not made in collaboration with a candidate. In other words, their expenditures must be independent. PACs serve many functions in American politics. On the positive side, they provide avenues for participation and for political liberty. The growth in the number of PACs is seen by many as a sign that our democracy is healthy. PACs, they argue, increase the flow of information during elections, thereby enhancing political liberty. They also fuel new campaign techniques by opening up other avenues to communicate with various sectors of society. PACs also provide "safety valves" for the competitive pressures in society by allowing a constructive means through which competing political viewpoints can be expressed. Further, through their activities, PACs keep elected representatives responsive to legitimate needs of the public. Although they infuse a lot of money into the system, PACs are regulated and do not give as much money as the total amount contributed by individuals directly to candidates, making them not as significant a problem as some might contend. The participation of PACs in the political process is not without its critics, however. Many observers question whether PACs have a negative impact on competition in elections. Critics point to PACs giving early money to incumbents, which they argue might deter strong challengers from emerging. Furthermore, PACs tend to provide most of their resources to incumbents, which may further reduce competi-
tion even in the wake of a contested election. For example, in 2004, approximately 79 percent of PAC funds were distributed to incumbents (totaling $246.8 million) with only about 7 percent to challengers ($22.3 million). Even in open seat races, arguably the most competitive, in 2004 PACs spent $41.3 million, or only about 13 percent of their funds. Critics also argue that PACs are at the heart of what is making it more expensive to run for office. With their use of television advertising and large-scale messaging campaigns (mailings, grassroots contacts, etc.), PACs increase the costs of what it takes for candidates to wage a successful run for office. By undertaking their own advertising, PACs might force candidates to spend more to get their own message across, particularly if the candidate needs to compete with a PAC's spin on an issue. This effect is compounded in races where PACs choose to spend more heavily for one candidate than another. For example, an incumbent who is well supported by PACs raises the bar for the amount of money a challenger needs to muster. This means that PACs become important participants in the early stages of campaigns, as candidates (incumbents and challengers alike) compete fiercely for their support and attention. Furthermore, PACs, because of their participation in advertising efforts against the opponent of the person they are supporting, are often seen as being at least partially responsible for adding to the growth of negative campaigning. Many critics use the Willie Horton ads in the 1988 presidential campaign, paid for by the National Security PAC, as a prime example of this phenomenon. Willie Horton was a convicted murderer who had committed a rape and armed robbery while released under a weekend furlough program that Massachusetts governor and Democratic nominee Michael Dukakis supported. The PAC spent $8.5 million on these ads to attack Dukakis as being weak on crime and, in the process, aided the election efforts of the Republican presidential candidate, Vice President George H. W. Bush. Critics also argue that PACs undermine the accountability of representatives to traditional mechanisms like parties. PACs provide alternative sources of funding and support services for candidates. However, this might weaken a candidate's ties with the party and lessen the fear of severe electoral consequences for disloyalty to party. Further, some
critics argue that PACs, by their very role in American politics in seeking to elect or defeat certain candidates for office, undermine political parties. In particular, ideological PACs can supplant much of the roles that major parties play and draw loyalty away from traditional supporters. Until recently, this appeared to be less of a problem since ideological PACs were fewer in number. As cited above, however, their numbers continue to swell and, according to the FEC, the most substantial growth in recent PAC financial activity came from such nonconnected PACs. Nevertheless, in most ways, parties and PACs seem to have learned to coexist symbiotically. PACs need the information about candidates, intelligence about congressional contests, and access to political leaders that parties provide, and parties seek PAC money for their candidates and their own organizations. Many critics also point to the growth in the number of PACs as a sign that there is more factionalization occurring in politics, thus making it harder to establish accountability. They argue that PACs are inadequately accountable to donors or voters, and, because of their independent activity, often serve to compete with the very mechanisms that provide protections in democracies. For example, many cite independent expenditures by PACs for advertising in campaigns as particularly troubling. Specifically, interest groups use their PAC money to purchase precious media time in the last two weeks of a campaign and, as a result, often prevent the actual candidates from getting the time. Furthermore, voters are often confused and believe that PAC-sponsored ads are coming from the candidates themselves. In some instances, this has even worked against the candidate the PAC is trying to help, and he or she may be helpless to counter the voters’ perceptions. Critics also argue that PACs pose a threat to governmental legitimacy and effectiveness. PACs are seen as “special interests” out for themselves, and their actions are seen as possibly being harmful to our democracy and the fundamental premise that government is supposed to be designed to achieve the public good. Growth in certain kinds of PACs (such as in corporate and labor PACs) raises many to question the true motivation behind their activity. Specifically, although their activity is designed to elect candidates who are more favorable to the PACs’ positions, critics
contend that PAC expenditures may allow for increased access to elected officials and, possibly more disturbing to democracy, undue influence in public policy creation. Although many studies have attempted to ferret out whether a direct correlation exists between the way a legislator votes and the issues of concern to his or her largest PAC donors, only circumstantial evidence can be mustered. Nevertheless, the concern remains that PACs do influence the voting behavior of those they help get elected into office, and this suspicion is enough to cast doubt on the political process. Regardless of one's view of PACs, they are key participants in the electoral landscape of American politics. They are vehicles that provide the necessary resources for organized interests to attempt to influence elections and legislation. More important, though, despite all the criticism levied against them, PACs allow such interests to have their voices heard in the political marketplace of ideas. See also corruption; lobbying. Further Reading Campaign Guide for Corporations and Labor Organizations. Available online. URL: http://www.fec.gov/pdf/colagui.pdf; Campaign Guide for Nonconnected Committees. Available online. URL: http://www.fec.gov/pdf/nongui.pdf. Accessed June 30, 2006; Gais, Thomas. Improper Influence: Campaign Finance Law, Political Interest Groups, and the Problem of Equality. Ann Arbor: University of Michigan Press, 1996; Gierzynski, Anthony. Money Rules: Financing Elections in America. Boulder, Colo.: Westview Press, 2000; Money in Politics Data. Available online. URL: http://www.opensecrets.org; Political Money Line. Available online. URL: http://www.politicalmoneyline.com. Accessed June 30, 2006. —Victoria A. Farrar-Myers
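The allocation figures cited at the start of this entry can be checked with simple arithmetic. The short Python sketch below (an illustrative calculation, using the dollar amounts given above) backs out the approximate 2004 total implied by the incumbent share and recomputes the challenger and open-seat percentages; the resulting total of roughly $312 million is a derived estimate, not an official FEC figure.

```python
# Rough check of the 2004 PAC allocation figures cited in this entry.
# Dollar amounts (in millions) come from the entry; the overall total is
# derived here from the incumbent share and is only an approximation.
INCUMBENT = 246.8    # about 79 percent of PAC funds
CHALLENGER = 22.3    # about 7 percent
OPEN_SEAT = 41.3     # about 13 percent

implied_total = INCUMBENT / 0.79   # roughly 312 million dollars

for label, amount in [("incumbents", INCUMBENT),
                      ("challengers", CHALLENGER),
                      ("open seats", OPEN_SEAT)]:
    share = 100 * amount / implied_total
    print(f"{label}: ${amount:.1f} million, about {share:.0f}% of ~${implied_total:.0f} million")
```

Run as written, the sketch reports shares of roughly 79, 7, and 13 percent, matching the percentages quoted in the entry.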
political advertising
Political advertising in the United States may have begun as far back as 1789, when the song "God Save George Washington" became a unifying force for supporters of the first president. Since that time, campaign advertising has evolved from songs, to slogans, print advertising, television and radio commercials, and Internet spots. Until the 20th century, the only way candidates were able to campaign was through
the use of newspaper advertisements and handbills and by physically appearing in front of crowds to speak, frequently at train stations. With the invention of radio in the early 1900s, candidates found a new way to reach an unprecedented number of people. In 1920, KDKA in Pittsburgh became the first radio station to broadcast election returns. It did not take long for American citizens to embrace this technology. By 1926, the NBC radio network had formed and CBS followed in 1927. Americans tuned in and candidates were able to reach voters like never before. Some candidates chose to run advertisements that featured only their voices, names, and brief messages. Others went as far as creating hourlong dramatizations of their opposition. Although comedic in tone, the attack was hard to miss. Radio advertisements remain an important staple of political campaigns. Radio offers the flexibility to target subpopulations of voters with customized messages and costs less than advertising on television. Television advertising became an essential component of all presidential campaigns with the presidential election of 1952, which saw Republican nominee Dwight Eisenhower make use of short campaign commercials with the "Eisenhower Answers America" series of spots. Before these now famous commercials, candidates appeared on television to give full speeches. Most Senate and House campaigns also use television spots, though incumbents are more likely than challengers to have the funding necessary to produce costly television spots. Scholars have proposed a number of schemes for classifying television advertisements. Judith S. Trent and Robert V. Friedenberg proposed that a typology based on the advertisement's rhetorical purpose is a useful approach. They divide spots into those that "praise the candidate," "condemn the opponent," or "respond to charges." The strategic environment determines reliance on various types of advertisements. Thus, an unknown challenger would make use of autobiographical spots that highlight his or her personal history and accomplishments. Challengers are more likely to use advertisements to attack an opponent, as are candidates who are behind in the polls. A candidate who does not respond to charges quickly risks letting himself or herself be defined by the opponent. Lynda Lee Kaid and Dorothy Davidson made an important advance in the analysis of campaign adver-
tising with the introduction of the videostyle concept. Videostyle is both a theory and a method that takes account of the verbal, nonverbal, and production techniques in campaign advertisements. This systematic approach allows for researchers to make detailed comparisons over time and across nations. The verbal content of videostyle includes the choice of words that the candidate uses, such as dramatic language that is meant to provoke certain emotions out of viewers. Videostyle takes account of the dominant purpose of the advertisement, such as whether the spot emphasizes issues or candidate image. Issue advertisements focus on concerns such as domestic or foreign policy, whereas image advertisements look at the personal characteristics of the candidate. Candidates at all levels of office tend to emphasize honesty and experience. Presidential candidates typically portray themselves as aggressive. Nonverbal communication includes a wide variety of content, such as posture, attire and eye contact. Nonverbal categories also note whether the candidate speaks or a proxy makes the appeal on behalf of the candidate. Kaid has observed a pronounced decline in presidential candidates speaking for themselves in spots. Instead, they rely on anonymous announcers as well as family members and other speakers. The setting of the spot also conveys information to the viewer. One prevalent strategy is for incumbents to use the trappings of their office, such as an incumbent president speaking from the Oval Office. Easily recognized symbols are frequently employed, such as the flag or other patriotic images. Candidates also like to appear with families and children, as well as people in various professions such as teachers, farmers, factory workers, and soldiers to signal to their constituents that they are supported by these groups and will work on their behalf. Production techniques include the use of various camera angles, choice of music, colors, settings, and other cinematic conventions. These techniques can have an important impact on how commercials are perceived by viewers. For example, low-camera angles help to underscore the idea that the candidate is a dominant figure. As much as imagery and messages help with name recognition, so too does the use of music. By using the videostyle approach, as well as other content analyses, numerous studies have identified
trends in the content of television advertisements. First, advertisements are more likely to focus on issues than on other content. In their landmark study of television advertising in the 1972 presidential election, Thomas E. Patterson and Robert D. McClure concluded that voters had more opportunity to learn about issues in advertisements than from television news. The news broadcasts, it turned out, paid more attention to campaign events than substantive issues. Subsequent studies have continued to demonstrate the dominance of issue content over the years. Second, advertisements in presidential and congressional elections have grown increasingly negative. There is also evidence from several studies that the use of technological distortions is on the rise. This ethically suspect technique includes such effects as morphing a picture of one's opponent into another picture. Digital technology allows for many other types of alterations such as making a candidate appear taller or changing the sound of a candidate's voice. For example, in 1992, President George H. W. Bush's voice was altered in an advertisement to make it sound lower. Political advertising in federal elections is regulated by the government. New rules went into effect with the passage of the Bipartisan Campaign Reform Act of 2002 (BCRA). The BCRA was promptly challenged in the courts, but the United States Supreme Court upheld most of its provisions. Title I of the BCRA prohibits the national political parties from accepting soft money. Soft money contributions were a previously unregulated source of funds that often paid for advertising blitzes. Title II of the BCRA establishes a new category of communication. The law identifies any radio or television advertisement that refers to a specific candidate for federal office within 30 days before a primary or 60 days before a general election as an "electioneering communication" (a simplified sketch of this timing rule appears at the end of this entry). These advertisements are often called "issue ads" because they avoid directly advocating for a candidate's election or defeat under the pretext of discussing issues. The BCRA prohibits corporations or labor organizations from funding these advertisements. Other individuals or groups can still pay for these ads. However, the law requires the funding source to be reported. Interestingly, the restrictions do not apply to broadcast ads by state or local candidates even if
they refer to a federal candidate. They are still prohibited from directly supporting or attacking a federal candidate. Campaign advertisements do indeed have an impact on the electorate. First, political advertisements can increase the electorate's overall knowledge. Advertisements contribute to learning about issues and candidate positions. Second, they have the potential to influence the way the electorate evaluates candidates. For example, in 1988, Governor Michael Dukakis of Massachusetts, the Democratic presidential nominee, sought to shore up perceptions of his ability to lead the armed forces by driving around in a tank. The strategy backfired, however. Instead of making him look like a potential commander in chief, the image was used by the Republican nominee, Vice President George H. W. Bush, to ridicule Dukakis. Similarly, in the 2004 presidential campaign, Senator John Kerry was photographed windsurfing off Nantucket. He may have thought he appeared youthful, healthy, and confident, zigzagging on the waves. Instead, a Bush campaign advertisement mocked him by using the image to underscore the charge that Kerry was a flip-flopper on the issues of the day. Advertisements can also be an effective tool to combat stereotypes. For example, women can use advertisements to combat sex stereotypes that put them at a disadvantage in a campaign. Thus, a woman candidate might use an advertisement to reassure voters that she is tough or capable of handling economic issues, two areas where sex stereotypes might lead voters to believe otherwise in the absence of additional information. Conversely, advertisements can also exploit stereotypes in an effort to play to voter prejudices. A pair of advertisements from the 1988 campaign illustrates this point. The Bush campaign ran an advertisement called the "Revolving Door," about the Massachusetts program that allowed inmates weekend furloughs from prison. At the same time, an independent group produced another advertisement about a black convicted murderer, Willie Horton, who had raped a white woman while on furlough. As Edwin Diamond and Stephen Bates have pointed out, "Horton was not merely a convict; he was a black man who had raped a white woman, a crime that played to the deepest feelings of a part of the electorate."
Finally, advertisements can also affect political behavior, especially in conjunction with other campaign events, such as debates. Political spots have the potential to reinforce voter predispositions. That is, partisans would feel justified in their commitment to a particular candidate after viewing an advertisement. They might even be more motivated to go to the polls. Some voters can actually be converted, or change their votes, by viewing one or more advertisements; however, research has shown this effect is rare. See also media; negative campaigning. Further Reading Benoit, William L. Seeing Spots: A Functional Analysis of Presidential Television Advertising from 1952–1996. New York: Praeger, 1999; Diamond, Edwin, and Stephen Bates. The Spot. Cambridge, Mass.: MIT Press, 1992; Granato, Jim, and M. C. Sonny Wong. "Political Campaign Advertising Dynamics." Political Research Quarterly 57, no. 3 (September 2004); Kaid, Lynda Lee, ed. Handbook of Political Communication Research. Mahwah, N.J.: Lawrence Erlbaum Associates, 2004; Kaid, Lynda Lee, and Daniela V. Dimitrova. "The Television Advertising Battleground in the 2004 Presidential Election." Journalism Studies 6, no. 2 (2005); Kaid, Lynda Lee, and Anne Johnston. Videostyle in Presidential Campaigns: Style and Content of Televised Political Advertising. Westport, Conn.: Praeger, 2000; McClure, Robert D., and Thomas E. Patterson. The Unseeing Eye. New York: Putnam's, 1976; Overby, L. Martin, and Jay Barth. "Radio Advertising in American Political Campaigns: The Persistence, Importance, and Effects of Narrowcasting." American Politics Research 34, no. 4: 451–478. —Ann Gordon and Kevin Saarie
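The 30- and 60-day windows described above can be illustrated with a small sketch. The Python function below models only the timing portion of the rule, assuming a broadcast advertisement that refers to a federal candidate; the statutory definition also turns on the audience reached and the funding source, which are not modeled here, and the dates in the example are hypothetical.

```python
from datetime import date, timedelta

def in_electioneering_window(air_date: date, primary: date, general: date) -> bool:
    """Simplified timing test from the entry: a broadcast ad referring to a
    federal candidate falls within the window if it airs in the 30 days
    before a primary or the 60 days before a general election.
    (Illustrative only; the full statutory definition has more elements.)"""
    before_primary = timedelta(0) <= (primary - air_date) <= timedelta(days=30)
    before_general = timedelta(0) <= (general - air_date) <= timedelta(days=60)
    return before_primary or before_general

# Hypothetical example: an ad aired October 1 ahead of a November 7 general
# election falls inside the 60-day window.
print(in_electioneering_window(date(2006, 10, 1),
                               primary=date(2006, 6, 6),
                               general=date(2006, 11, 7)))  # True
```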
political cartoons
Political cartoons, also known as editorial cartoons, are illustrations that depict current political or social events. As one political wag once suggested, "In America, political cartoons are no laughing matter." By this he meant that while political cartoons are designed to be funny and entertaining, they have a deeper resonance and meaning in the political world. They highlight serious issues, deflate pompous politi-
cians, expose corruption, and communicate to a wide audience a political preference or point of view. A form of political cartooning dates back thousands of years, and was originally designed to elicit a response from an audience that was largely illiterate. If the audience could not read, they could understand drawings and symbols. From cave wall drawings to today’s sophisticated animation, the cartoon has been a staple of political communication and has often been more powerful than the written word. American political cartoons date back to the days before the American Revolution. For example, Benjamin Franklin’s “Join or Die” cartoon, depicting a snake severed in several places, represented the call for the individual colonies/states to unite and form a single force, first in a union of protection against Native American tribes, and later in opposition to British domination. The cartoon and its message became a rallying cry to revolution. Easy to understand and clear at communicating its message, this political cartoon became an iconographic symbol of the political problem of the times, and in depicting disunity, it was a powerful call to unity among the separate states. This cartoon, a drawing with a caption presenting a political message, was designed to inflame the passions of the colonists and ignite revolution. The cartoon was not, of course, the only, nor was it the primary, means of inflaming the passions of the colonists, but it played a role, perhaps even a key role in persuading the colonists to take up arms against the world’s most powerful military nation, and join the fight against imperial rule. “Join or Die” was republished in nearly every newspaper in the colonies and became identified with unity of the colonies and the first true sense of nationalism in the colonies. It quickly became a recognized symbol of unity and an icon for the fledgling colonies. We are today aware of the impact of the visual media on politics, culture, and society, but even in the 18th century, and before, visual symbols were used for political purposes. There are many ways to make political statements: speeches, pamphlets, broadsides, posters, graffiti, and of course, cartoons. A political cartoon is an artistic depiction of a social or political event, often with a short word message, that takes a political point of view or stance, and in a small space, tries to make a political statement, usually intended to send a mes-
sage and get the reader to think critically about the issue depicted. It combines art with journalism and politics and offers creative ways to make political statements. Cartoons are also a shorthanded way to make a point quickly and directly, and they go beyond the written word to use imagery to make a point or raise a question. The art historian E. H. Gombrich believes that the political cartoonist is able to "mythologize the world by physiognomizing it." Not bound by the same professional or ethical standards as conventional journalists, the political cartoonist is at his or her best when skewering and lambasting the pompous and powerful. This "democratic art" brings the high and mighty down in size and makes them more human as they are lampooned by the artful and often witty pen of the skilled cartoonist.
A political cartoon portraying William M. Tweed as a bullying schoolteacher giving New York City comptroller Richard B. Connolly a lesson in arithmetic. The exaggerated bills for the building of a county courthouse are posted on the wall. (Library of Congress)
Perhaps the heyday of political cartoons was the 1870s. At that time, William “Boss” Tweed headed the political machine that ran New York City. Tweed was always embroiled in one scandal or another, but there was one particular scandal that involved about $200 million of missing city funds. Newspapers covered the scandal in depth, and banner headlines flashed across the front pages of newspapers day after day. But it was the work of political cartoonist Thomas Nast that most believe really turned the tide against Tweed. Newspaper articles told the story, but Nast’s cartoons caught the popular imagination and implanted in the heads of voters an image that was in part responsible for bringing down Boss Tweed and his “Tammany Ring” (Tammany Hall was where
Tweed did his “business”). Nast’s cartoons, very accessible even to a largely illiterate immigrant population in the city, painted pictures for the voters and crafted images for them to latch on to. So devastating were these images of corruption and greed that Tweed was alleged to have fumed, “Stop them damned pictures. I don’t care so much what the papers say about me. My constituents can’t read. But, damn it, they can see pictures!” And see they did. Nast portrayed Tweed and his gang as thieves and bullies, robbers and cheats, living off the fat of the city’s taxes. Nast’s cartoons animated the voters against Tweed. In the end, Tweed was imprisoned, but after a short time, he escaped and went to Spain, where, so the legend goes, he was identified by a customs official who recognized Tweed from “them damned pictures.” When he was rearrested, a set of Nast’s cartoons caricaturing Tweed were said to be found in his suitcase. This celebrated case elevated the political cartoon in status, and paved the way for a permanent place on the editorial pages of the major newspapers in America. During the Civil War, President Abraham Lincoln was often quoted as saying that Nast was his best recruiting sergeant, so compelling were his battle scenes and drawings. As a testimony to Nast’s lasting impact, he was the first artist to use the elephant and the donkey to symbolize the Republican and Democratic Parties, symbols we still use today nearly a century and a half later. Herbert Lawrence Block, commonly known as Herblock, was one of the most prominent political cartoonists of the 20th century. He won three Pulitzer Prizes for Editorial Cartooning, in 1942, 1954, and 1979, and also received the Presidential Medal of Freedom in 1994. Block worked for the Washington Post from 1946 until he died in 2001. In the early 1950s, he coined the term “McCarthyism” while regularly depicting Senator Joseph McCarthy’s campaign against alleged communists within the U.S. government. Block was also known for his pointed attacks against President Richard Nixon during the Watergate scandal. Another prominent political cartoonist during the latter part of the 20th century has been Garry Trudeau, the cartoonist responsible for the popular political cartoon Doonesbury. Trudeau’s biting wit and political blade has struck many a modern American
politician. His assault on corruption and arrogance, his wit and clever drawings became so devastating to the powers that be that often newspapers refused to run his cartoons, while others would, on occasion, remove them from the comics section and place them in the editorial pages of their newspapers. Trudeau won the Pulitzer Prize for Editorial Cartooning in 1975. Roger Fischer argues that successful political cartoonists “fuse creative caricature, clever situational transpositions, and honest indignation” in order to have an impact and be successful in their task. But what makes for a truly successful and important political cartoon? To Charles Press, there are four key elements that go into the making of a successful political cartoon: it must be good or compelling as “art”; it must address some “underlying truth”; it must have fresh and appealing imagery, presented in a way that is “striking, forceful, or amusing, or all three”; and finally, it must be lasting in nature. Some critics argue that political cartoons do not change minds, only reinforce already held beliefs or prejudices. That may be largely true, but clearly political cartoons can highlight as well as educate. Political cartoons can be a thorn in the side of a politician, can highlight certain issues or problems, and can place on the nation’s political agenda issues and ideas that the elected politician may wish not to deal with. And while they are not a complete or comprehensive form of political communication or persuasion, they form a backdrop to the politics and issues of the day that present political viewpoints in funny, biting, and entertaining ways. Further Reading Block, Herbert. Herblock: A Cartoonist’s Life. New York: Three Rivers Press, 1998; Fischer, Roger A. Them Damned Pictures: Explorations in American Political Cartoon Art. North Haven, Conn.: Archon Books, 1996; Katz, Harry, ed. Cartoon America: Comic Art in the Library of Congress. New York: Harry N. Abrams, 2006; Lordan, Edward J. Politics, Ink: How America’s Cartoonists Skewer Politicians, from King George III to George Dubya. Lanham, Md.: Rowman & Littlefield, 2006; Press, Charles. The Political Cartoon. Rutherford, N.J.: Fairleigh Dickinson University Press, 1981. —Michael A. Genovese
political culture, American
What are individuals' attitudes toward their government and the political process? Do they have a strong psychological attachment to politics and believe they can make a difference, or are they removed and apathetic? Is government benevolent and useful, or more of a distant abstraction inaccessible to the common person? Do these individual attitudes—expressed collectively—define a country's political culture, and what consequences do these views have for furthering democracy, liberal politics, civil societies, and stable governments? Are there in fact distinct sets of values and attitudes that define an entire nation's views toward politics and government? What about regions and states within the United States political system? Do they believe in political, economic, or social change or maintaining the status quo? Do they place a higher priority on the individual or society? Do they possess specific values and attitudes toward politics, participation, business, and government action—and what consequences do these perspectives have for public policy choices, voter turnout, political parties, and the strength of civic institutions? These are among the questions that lie at the heart of explaining what political scientists call political culture. In the realm of political science, the term "political culture" has individual-level as well as state- and national-level connotations. But what exactly does the phrase "political culture" mean at these different levels of analysis? In terms of understanding national political cultures, political scientists Gabriel Almond and Sidney Verba studied political culture—or attitudes and values broadly shared by a state's populace—across five states (England, Germany, Italy, Mexico, and the United States) over a 20-year period. Beginning with the text The Civic Culture in 1963, Almond and Verba viewed a state's political culture as a collection of its population's "political orientations" and "attitudes toward the political system and its various parts"—as well as individuals' attitudes toward their specific role within the political system in question. Core components of political culture were identified as both "cognitive" (referring to individuals' knowledge about politics) and "affective" (referring to individuals' feelings about politics). As their research progressed, the authors identified three categories—or types—of political
culture based on citizens’ levels of trust and participation in government. These three political cultures are (1) parochial (citizens possess no sense of political efficacy, display low support for the government, and are not involved in politics); (2) subject (citizens express strong support for government but do not actively participate in politics); and (3) participant (citizens possess a high level of political efficacy and are directly involved in their country’s political processes and institutions). Generally speaking, in the broad national sense, within the context of U.S. politics, Americans’ political culture has focused on liberty, equality, and democracy—with varying degrees of emphasis, depending on static or evolving political, social, religious, and economic dynamics. In terms of identifying state- and regional-level political culture within the United States, however, the insight and scholarly contributions of political scientist Daniel Elazar (1934–99), have formed the foundation of much of what we think about political culture. With an eye toward understanding why states in the United States federal system choose divergent political approaches to very similar (or even identical) phenomena and issues, and why they have different levels of political participation and dissimilar attitudes toward the purpose of government itself, Elazar posits that an explanation for these vastly different political norms and realities—and often opposite approaches to civic and political life—lies in the underlying values shared by the people who reside in these states. Different regions and states, according to Elazar, possess their own distinct political culture, which in turn, affects the scope and shape of political action and the broader political order within each area. Elazar’s precise language defines this state- and regional-level political culture as “the particular pattern of orientation to political action in which each political system is embedded.” To gain an understanding of the political activity and overall political order, an appreciation of the region’s orientation toward politics was needed. Political and governmental action and realities, therefore, are predicated on the region’s (or state’s) prevailing attitudes concerning the proper role of politics and government. What, then, influences these prevailing outlooks on politics? Elazar contends that
by considering the dominant settlement and migration patterns that occurred across the states—and the cultures of the people that shaped these regions—we can gain an understanding of why the states have different levels of voter turnout, political participation, and views of government's utility in both our individual lives and the community at large. Elazar's research ultimately identified three unique political cultures that permeated the landscape of the American states: individualistic, moralistic, and traditionalistic. Individualistic political culture—especially evident in Mid-Atlantic states through Illinois (settled largely by German and English groups) as well as some areas of the western United States such as California—espouses a utilitarian attitude toward politics, holding that it would be impractical and ill-advised for the government to be used in areas of policy and public life that are not explicitly demanded by the people. Activist governmental initiatives that delve into private activities, as well as those that seek to engineer outcomes for the betterment of society, and that interfere in the free market, are viewed as unnecessary or unwieldy, and are discouraged. Concern for the society at large—i.e., the public good—is not a prerequisite for political participation and governmental action. An entrepreneurial approach to politics—for example, viewing politics and public service as a career—would not be out of place in a region with a predominantly individualistic political culture. Taking a market-driven view of politics, individualistic political culture views the rough-and-tumble world of dirty politics as expected and necessary. Conversely, moralistic political culture, which is most prominent in New England—especially the northern states in the region—as well as the upper Great Lakes Midwest, and the Northwest, encompasses a vastly different view of politics and government. For adherents to moralistic political culture, of utmost importance in guiding political consciousness and action is the health and well-being of society (i.e., "the commons" and "the general welfare"), not the individual. As such, in order to truly serve the public good, mass participation in politics and community affairs is expected and encouraged. In addition, this view rejects the notion that political decision making should be left only to the economic and social elite; indeed, since politics affects every member of the
society, a broad range of citizens should shape its endeavors. Perhaps not surprisingly, then, moralistic political culture holds a fundamentally positive view of government, and significant governmental involvement in the economic and social life of the community is viewed as reasonable. Moreover, this political orientation rejects the notion that politics should be about private gain. Participation in politics is driven by the precept that the public good is of utmost importance, not a deliberate stepping-stone en route to greater individual acclaim and private fortune. Politicians, therefore, should not seek higher office in order to further selfish career interests but to work for the betterment of society. And, in the pursuit of the public good, those active in moralistic cultures do not shy away from attention to specific issues (i.e., issue advocacy) that affect the quality of life in the greater community. In the final analysis, in a moralistic culture politics and government are “good” or successful only to the extent that they have been effective in promoting the public good. The bureaucracy that is essential to government action is viewed favorably, as those in public service are viewed as an extension of the people they serve. Given these essential inclinations, it may not be surprising that town hall meetings, which promote direct civic engagement, have historically been a staple in some small towns in New England. Along similar lines, New Hampshire holds as sacrosanct its longstanding role as the nation’s first primary, an ongoing tradition and distinction having been codified through law in the Granite State. Clearly the Scandinavian and northern European influence in the settling of the Great Lakes Midwest, as well as the Puritan heritage of New England, were key factors in the shaping of these regions’ attitudes toward individual responsibility to participate in politics and government. In traditionalistic political cultures—most prevalent in the states of the former Confederacy (i.e., the Deep South)—social and family ties are paramount, and individuals from these elite groups form a natural hierarchy of political and governmental leadership. These select few political actors and elected officials, from prominent families, are expected to take a custodial, conservative role in society and governing institutions—resisting change, maintaining the socioeconomic (and for much of U.S. history and politics, racial and religious) status quo. Everyday citizens who
are not born into the elite families are not supposed to participate in any substantive fashion in the political process, as this would disrupt the established political, social, and economic order—an order built upon an agrarian, plantation-based economy. These perspectives regarding which people should govern, how they should govern, and why are not shaped by an inherent negativity or hostility toward political action or government in general. In fact, for many in the hierarchy of public leadership, public service and government are viewed quite positively; the elite believe they have an obligation to serve in politics based on their familial and social status. But since maintenance of the longstanding social and political order is the fundamental job of those governing in traditional societies, the hierarchy of prominent individuals comprising the gatekeepers of politics should not undertake initiatory, grassroots politics that would upset the natural order of society. To this end, limited government is viewed as the most appropriate form of political action. See also conservative tradition; liberal tradition. Further Reading Almond, Gabriel A., and Sidney Verba. The Civic Culture Revisited. Newbury Park, Calif.: Sage, 1989; Elazar, Daniel J. American Federalism: A View from the States. New York: Crowell, 1966. —Kevan M. Yenerall
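Elazar's three state-level political cultures, as characterized in this entry, can be restated side by side as a small data structure. The Python sketch below is only an illustrative summary; the region and attribute wording paraphrases the entry rather than quoting Elazar.

```python
# Compact restatement of Elazar's three state political cultures as described
# in this entry; wording is paraphrased for illustration.
ELAZAR_CULTURES = {
    "individualistic": {
        "regions": "Mid-Atlantic states through Illinois; parts of the West, such as California",
        "view_of_government": "utilitarian; limited to what citizens explicitly demand",
        "participation": "politics as a marketplace, even a career",
    },
    "moralistic": {
        "regions": "New England, the upper Great Lakes Midwest, and the Northwest",
        "view_of_government": "positive; intervention for the public good is reasonable",
        "participation": "broad participation expected of citizens",
    },
    "traditionalistic": {
        "regions": "states of the former Confederacy",
        "view_of_government": "custodial; limited government preserves the established order",
        "participation": "confined to established elites",
    },
}

for culture, traits in ELAZAR_CULTURES.items():
    print(f"{culture}: {traits['view_of_government']}")
```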
political participation
Political participation is a broad term that is used to describe the many different ways citizens can be involved in politics. American democracy is based on the foundational concept of "rule by the people," so citizen participation in government is often considered the lifeblood of a healthy political system. In theory, high rates of political participation enable citizens to clearly voice their concerns and desires to government officials, elect representatives who follow the interests of the public, and hold public officials accountable. There are two main categories or types of political participation: conventional and unconventional. Conventional participation includes activities that revolve around the electoral arena, that is, activities
that are related to elections and public officials. Voting generally comes to mind in conventional political participation, but other types abound, including contacting public officials, donating money to a campaign, putting an election sign in the front yard, contributing money to a political party, or volunteering for a campaign. Unconventional participation includes such activities as attending a protest or demonstration, signing an Internet petition, or engaging in a boycott for political reasons. These actions attempt to influence politics and public policy without necessarily involving political campaigns, candidates for office, government officials, or formal governmental entities (e.g., Congress or the city council). In recent years, scholars of political participation have recognized the need to go beyond analysis of just conventional participation to include a broad array of citizen activities that influence politics. Voting is the most basic form of political participation, a right that was not originally granted to most Americans. During the nation’s founding, suffrage (the right to vote) was granted only to white males who owned property. A majority of Americans could not vote at this time, including poor white males, slaves, women, and Native Americans. Even landowning white men with particular religious beliefs were barred from voting in certain states in the early years of American government. Over time, barriers to citizen participation were removed. Most states dropped property and religious requirements for white males in the early 1800s, and black men gained the formal right to vote with passage of the Fifteenth Amendment following the Civil War. However, discrimination at voting places limited black suffrage, most notably through poll taxes (a fee to vote) and onerous literacy tests. Black American suffrage was more fully gained with passage of the 1965 Voting Rights Act. This legislation outlawed poll taxes and literacy tests, and, as a result, black voter registration skyrocketed. Women gained the right to vote in 1920 with passage of the Nineteenth Amendment. This Constitutional amendment was the culmination of years of sometimes violent struggle on the part of political activists called Suffragettes from the National Women’s Party and other early feminist organizations. Activists took extreme measures to persuade Congress
to pass legislation that was then ratified by the states, including protesting outside the White House during the war, getting arrested, and engaging in a hunger strike while incarcerated. A gender gap in voting patterns surfaced in the 1980 presidential election, 60 years after women gained the right to vote. A persistent gender gap in political party identification, public policy support, and candidate choice is now a standard aspect of American elections. Two major issues have surfaced in scholarship pertaining to political participation and democracy: inequalities in who participates in politics, and a declining rate of citizen participation in politics. Certain citizens are more active than others when it comes to politics, leading to “representational distortion,” or the interests of active citizens being represented more than others. Voting is a simple way to illustrate the differences in who participates in politics. Rates of political participation vary by income, education, race/ethnicity, age, and gender. Americans with higher incomes are more likely than others to vote, as are those with more education, older Americans, and white citizens. Women are also slightly more likely than men to turn out to vote. In other words, people who vote in elections are not typical Americans when it comes to background characteristics. The significance of different rates of political participation by certain groups in society is debated by scholars. On the one hand, observers note that the candidate preferences of those who do vote look similar to those who do not vote, so election results would be the same if everyone voted. Other scholars argue that these differences matter because public officials are more responsive to the needs and concerns of the people who elected them. People who vote at low rates—younger citizens and people of color with lower socioeconomic status and less education—have different needs for government services than those who vote regularly. Some observers note that public officials would be more responsive to the needs of these groups if they participated more in elections. Another critical issue of political participation in American politics is what has been termed the “paradox of participation,” or the steady decline of the overall rate of voting since the 1950s. This decline is paradoxical because levels of education, an influential
predictor of whether someone is active in politics or not, have increased dramatically during this same time period. In other words, rising levels of education should have caused an increase in voting and other types of political participation in recent decades. However, only about 50 percent of Americans who are eligible to vote turn out in presidential elections, and fewer still show up at the polls in nonpresidential election years. This rate is lower than that of other advanced industrialized nations, many of which enjoy a 75 percent or higher rate of turnout for national elections. Voting records from the early years of the United States are not as reliable as modern records, but they indicate that rates of voting rapidly increased as suffrage was extended to include groups who were previously not allowed the vote. By the mid 1800s, approximately 80 percent of the eligible electorate turned out to vote in presidential elections, a high rate of citizen participation that would continue until the 20th century. Various reasons have been offered to explain the declining rate of voting in the United States. Some suggest that the political process has become too complicated for voters, while others point to citizen alienation brought about by the assassination of leading political figures (including President John F. Kennedy in 1963, and Dr. Martin Luther King, Jr., and Senator Robert F. Kennedy in 1968), the Watergate scandal of the Nixon administration, and more recent scandals in the White House. Observers also point out that elections have become more negative in recent years, which may cause voters to tune out of the political process, although political scientists disagree about whether negative campaigns alienate citizens and discourage them from voting. Another possible reason for lower voter turnout is lack of citizen mobilization by the major political parties, which have shifted their strategies away from grassroots organizing in favor of media-centered campaigns in recent decades. One of the more compelling arguments about why Americans of today vote at lower rates than Americans half a century ago is the advent of mass communication. Robert Putnam finds that television has privatized how Americans spend their time. Instead of being active in local community organizations and activities that have traditionally led to higher
rates of political participation, citizens now spend their free time watching television and using other communication technology. The result is a lack of “social capital” or connections between citizens that has in turn led to decreased interest and participation in politics, and increased mistrust of politics and public officials. Simply put, television and other forms of mass communication inhibit the development of social connections that were previously sown through community activities with friends and neighbors, such as bowling leagues and PTA meetings. Putnam finds that rates of voting have declined as rates of social capital decline. The “paradox of participation” is further complicated by recent findings that rates of certain political activities have increased in recent decades. There is no doubt that voter turnout has dropped off in the latter half of the last century, but other forms of political participation—protests, Internet-oriented campaigns, and consumer boycotts—are on the rise. A high-profile example is the grassroots campaign against the World Trade Organization that made headlines in Seattle in 1999, when protestors turned out in droves. In addition to an increase in certain forms of unconventional political participation, enrollment in mass-membership organizations, such as the Sierra Club, is also on the rise. This suggests that instead of a clear decline in political participation, citizens are shifting the ways in which they are active in government. Contemporary Americans are expanding political science notions of what is political by going outside of formal governmental channels to voice political concerns. Political scientists disagree about standards for determining whether a citizen action is “political” and therefore an act of political participation. Some scholars think that an activity has to involve a formal governmental entity to be political, while others argue that an action is political even if it goes outside of formal politics, as long as it has the potential to affect how public resources are allocated. Still other scholars believe that citizen actions automatically constitute political participation when they go beyond individual interests to address a broader societal concern. The way in which political participation is measured is important because it determines whether rates of citizen engagement are sufficient for a functioning democracy.
Political participation is one of several links between citizens and their government. The health of American democracy is contingent upon an active citizenry; therefore, inequalities in who participates and declining voter turnout are cause for concern. Representational inequalities are a persistent problem in American politics, with certain citizens not engaging in politics. Concern about overall rates of political participation is more complicated, though, as it appears that Americans are shifting modes of participation rather than opting out of the political process. Further Reading Campbell, Angus, Philip E. Converse, Warren E. Miller, and Donald E. Stokes. The American Voter. Chicago: University of Chicago Press, 1960; Norris, Pippa. Democratic Phoenix: Political Activism Worldwide. Cambridge: Cambridge University Press, 2002; Putnam, Robert. Bowling Alone: The Collapse and Revival of American Community. New York: Simon & Schuster, 2000; Verba, Sidney, Kay Lehman Schlozman, and Henry E. Brady. Voice and Equality: Civic Voluntarism in American Politics. Cambridge, Mass.: Harvard University Press, 1996. —Caroline Heldman
political parties
In the United States, political parties serve several purposes. First, they formulate and express issue positions, policy proposals, and political beliefs. Second, they organize and represent the political opinions and policy interests of particular voting blocs and interest groups, such as conservative Christians, labor unions, and African Americans. Third, they compete against each other to win elections. Fourth, if they are electorally successful, they try to use the powers, resources, and institutions of government to achieve their policy goals. The American party system is characterized as a two-party system because its politics and government are usually dominated by two major parties, ranging from the Federalist and antifederalist parties of the 1790s to the Democratic and Republican Parties of today. In fact, American politics includes several minor parties, such as the Anti-Masonic Party
of the 1820s, the Populist Party of the 1890s, the Reform Party of the 1990s, and the Green Party of the 21st century. Unlike the two major parties, American minor parties are often short-lived, attract very few voters nationally, and usually fail to win elections. Occasionally, however, minor parties have influenced either or both major parties regarding issue positions, policy proposals, and changes in political processes. For example, the Anti-Masonic Party introduced the practice of electing delegates to national conventions for the purposes of formulating party platforms and nominating presidential and vice presidential candidates. The Populist Party influenced the Democratic Party’s rejection of the gold standard as the exclusive basis for American currency in the 1896 presidential election. Several minor parties of the early 20th century, including the Socialist, Progressive, and National Progressive Parties, influenced the Democratic Party’s support of federally sponsored old age pensions, unemployment insurance, and the prohibition of child labor during the 1930s. Consequently, the American political system has maintained a two-party system for more than 200 years. The Democratic and Republican Parties are two of the oldest, continuously existing political par-
ties in the world, partially because of their success and flexibility in co-opting and absorbing policy ideas, voting blocs, and interest groups from each other or from minor parties and political movements. For example, the Free-Soil Party ended shortly after the Republican Party adopted its policy position of opposing slavery. In the 1932 presidential election, African Americans were the most loyal Republican voting bloc since about 65 percent of black voters supported Republican president Herbert Hoover. In the 1936 presidential election, 75 percent of black voters supported Democratic president Franklin D. Roosevelt. Since then, African Americans have remained the most loyal Democratic voting bloc. Unlike two-party systems in unitary, parliamentary forms of government, such as that of Great Britain, the American two-party system is characterized by decentralized party organizations, candidate-centered campaigns, and weak party discipline among legislators and voters of the same party. These characteristics result from the fact that the U.S. government is a federal, presidential system with separation of powers, which uses primaries to nominate each party's candidates for general elections, and determines presidential elections through the electoral college. Federalism decentralizes and broadly distributes each major party's power, rules, and procedures throughout the national, state, and local levels of its organization and leadership. Furthermore, voters, rather than party committees and officials, nominate their parties' candidates for public offices. Combined with the separation of powers among branches of government, which promotes checks and balances, and a bicameral Congress, these factors make it difficult for a president, national party chair, Senate majority leader, or any other national party figure to exercise strong, centralized leadership and discipline toward his party's candidates, legislators, or voters. The above constitutional, institutional, and procedural factors influence whether the presidency and Congress experience unified government or divided government. Unified government means that the same party controls the presidency and both houses of Congress. Divided government means that one party does not control all three of these elective institutions. The Republican realignment of 1896 and
the Democratic realignment of the 1930s contributed to frequent, extended periods of unified government. By contrast, dealignment means that there is no majority party in voter identification and behavior, and most voters demonstrate split-ticket voting and weak or no party identification. During an extended period of dealignment, there is often divided government. For example, from 1968 until the 1992 national election results, the Republicans usually controlled the presidency and the Democrats usually controlled the Senate and always controlled the House of Representatives. Divided government resumed after the 1994 congressional elections so that there was a Republican Congress during six years of the eight-year Democratic presidency of William J. Clinton. Historically and behaviorally, the American two-party system has been influenced and maintained by the fact that Americans tend to be divided bilaterally on major, enduring national issues. From the late 18th to mid 19th centuries, the antifederalists and their successor parties, the Democratic-Republican and then Democratic Party, opposed a national bank while supporting low tariffs on imports and a strict interpretation of the U.S. Constitution to protect states' rights and maintain a more limited federal government. These policy positions reflected an ideology, or political philosophy, that especially appealed to southern planters. Meanwhile, the Federalist Party and its successors, first the Whig Party and then the Republican Party, generally supported a national bank, higher tariffs on imports, and a more active federal government, especially to promote national economic development. These policy positions reflected an ideology that was especially attractive to northern merchants and bankers. The Civil War (1861–65) influenced the American party system by making the two major parties more regionally exclusive. Under Abraham Lincoln, the first Republican president, the Republican Party became identified with the victorious effort to end slavery and the political and military occupation of the South during the Reconstruction Era (1865–77). Except for some isolated, hill-country areas, the South emerged from the Reconstruction Era as a one-party Democratic region. The overwhelming majority of southern whites, regardless of socioeconomic differ-
ences among them, were Democrats because they identified their party with states’ rights, especially for state powers to maintain racial segregation and prevent southern blacks from voting. This region became known as the Solid South for its predictable behavior as the most reliable Democratic region in presidential and congressional elections. Also, after the Civil War, most nonsouthern white Protestants and African Americans were Republicans, regardless of socioeconomic differences among them. The presidential election of 1896 strengthened and confirmed Republican domination of national politics. The Republican campaign of 1896 persuaded most nonsouthern voters, including many previously Democratic urban laborers, that its economic platform supporting high tariffs and an exclusive gold standard contributed to a broad, national prosperity. From the Republican realignment of 1896 until the Democratic realignment of 1932–36, the Democratic Party elected only one president, Woodrow Wilson, and rarely controlled both houses of Congress. After the Great Depression began in 1929, many Americans blamed Republican president Herbert Hoover and his party for the worsening economic crisis. Consequently, Democratic presidential nominee Franklin D. Roosevelt easily defeated Hoover, and the Democrats won control of Congress in 1932. Roosevelt and the Democratic Congress enacted legislation intended to reduce economic suffering and stimulate and reform the economy. Collectively known as the New Deal, these policies included public works programs, new social welfare benefits, agricultural subsidies and production controls, rural electrification, legal rights for labor unions, and stricter federal regulation of banks and the stock market. Combined with southern whites as the Democratic Party’s oldest, largest, and most loyal voting bloc, Catholics, Jews, African Americans, and labor union members provided overwhelming electoral support for Roosevelt and other Democratic nominees in the 1936 elections. The 1936 election results also indicated for the first time since 1856 that most American voters were Democrats. During the 20 years in which the Democratic Party controlled the presidency under Franklin D. Roosevelt and Harry S. Truman and usually controlled Congress from 1933 until 1953, the Republican Party
was divided about how to challenge the Democratic Party in elections, policy behavior, and ideology. The moderate wing of the Republican Party, concentrated in metropolitan areas of the Northeast and West Coast, generally cooperated with bipartisan foreign and domestic policies during World War II and the cold war. To a lesser extent, moderate Republicans advocated more limited versions of liberal Democratic domestic policies, especially concerning social welfare, labor, and regulatory issues. In particular, moderate Republicans asserted that they were more sincerely and effectively committed to civil rights for African Americans than the Democratic Party. Conservative Republicans, concentrated in rural areas of the Midwest and Far West, criticized the moderate wing for being dominated by Wall Street, excessively cooperative with Democratic policies, and failing to offer American voters a distinct ideological and policy alternative to the Democratic Party, especially in presidential elections. Except for the Republican presidential nomination of conservative Senator Barry Goldwater of Arizona in 1964, moderate Republicans exerted the greatest influence in determining presidential nominations and major platform positions from 1940 until 1980. The conservative wing's growing and enduring ascendancy in the Republican Party began with the nomination and election of President Ronald W. Reagan in 1980. Assisted by Republican control of the Senate and southern conservative Democrats in the House of Representatives, Reagan succeeded in reducing taxes, domestic spending, and business regulations while increasing defense spending and initially pursuing a more aggressive cold war foreign policy. Despite Reagan's landslide reelection in 1984, the Republicans lost control of the Senate in 1986 and were politically weakened by investigations of the Iran-contra scandal and a high federal deficit. During the 1970s and 1980s, southern whites increasingly voted Republican for president and, to a lesser extent, for Congress. Coinciding with the rise of the religious right, the development of the South as the most reliable Republican region in presidential elections was facilitated by the party's conservative positions on social issues, such as abortion, school prayer, gun control, and the death penalty. The so-called Republican "lock" on the electoral college, especially in the South, helped Republican Vice President
George H. W. Bush to win the 1988 presidential election as the Democrats continued to control Congress. After losing three consecutive presidential elections in the 1980s, the Democratic Party examined the possible causes of its electoral failure. Concluding that their party’s liberal image of being soft on crime, weak on defense and foreign policies, and irresponsible in taxing and domestic spending alienated many voters, the Democratic Leadership Council (DLC) encouraged the adoption of more moderate policy positions and rhetoric on crime, welfare reform, and deficit reduction. With these and other moderate policy positions, Democratic nominee William J. Clinton, a DLC member, was elected president in 1992 with 43 percent of the popular vote. Clinton’s victory was also facilitated by the strong independent presidential candidacy of H. Ross Perot, a brief recession, and the public’s focus on domestic issues due to the end of the cold war. The brief resumption of unified government under the Democratic Party ended when the Republicans won control of Congress in 1994. With a political strategy known as “triangulation,” Clinton successfully ran for reelection in 1996 as a moderate power broker who could negotiate between liberal Democrats and conservative Republicans in Congress to produce compromised policies, such as the Welfare Reform Act of 1996 and gradual deficit reduction, acceptable to most voters. Although Clinton became the first Democrat to be elected to more than one term as president since Roosevelt, the Republicans continued to control Congress, which impeached, tried, and acquitted Clinton from 1998 to 1999 over charges originating from a sex scandal. Benefiting from a prosperous economy and a budget surplus, however, Clinton ended his presidency with high job approval ratings. Regarding the electoral competition between the two major parties, the result of the 2000 presidential election was the closest and most controversial since the 19th century. Albert Gore, Clinton’s vice president and the Democratic presidential nominee, received almost 600,000 more popular votes than the Republican presidential nominee, Governor George W. Bush of Texas. The popular vote results from Florida were so close and disputed that both the Gore and Bush campaigns claimed to have legitimately received Florida’s 25 electoral votes. In December 2000, the
United States Supreme Court ruled in favor of the Bush campaign, thereby securing Bush a majority in the electoral college. After the 2004 national elections, unified government under the Bush presidency continued, but Republicans confronted growing public dissatisfaction with the war in Iraq and higher oil prices, lobbying scandals, low job approval ratings for Bush, and increasing intraparty policy differences over immigration, the Iraq war, and the deficit. The Democrats became more optimistic about winning control of Congress, and did just that by capturing both the House of Representatives and the Senate in the 2006 midterm elections. After analyzing the results of the 2000 and 2004 presidential elections and various public opinion polls on the Iraq war and divisive social issues, such as abortion, gun control, school prayer, and gay marriage, some political scientists and media commentators perceived a clear, consistent, and possibly irreconcilable cultural and political division between what they termed Red America and Blue America. Red America consisted of states, such as Alabama and Utah, and parts of other states, where most voters supported Bush and Republican nominees for other major offices while expressing conservative opinions on the Iraq war and social issues. Blue America consisted of states, such as California and Massachusetts, and parts of other states, where most voters supported Democrats for the presidency and other major offices while expressing liberal opinions on the Iraq war and social issues. Looking ahead to the 2008 presidential election, it was uncertain whether this political-cultural division would continue or whether a new political environment would develop because of new issues, such as illegal immigration and rising inflation. It was probable, though, that the American two-party system, with no significant new minor parties and with weak party identification among most voters, would continue, regardless of whether future elections resulted in unified or divided government, realignment or dealignment. See also third parties. Further Reading Eldersveld, Samuel J., and Hanes Walton, Jr. Political Parties in American Society. New York: Bedford/St.
Martin’s Press, 2000; Kolbe, Richard L. American Political Parties: An Uncertain Future. New York: Harper and Row, 1985; Thomas, Evan. Election 2004. New York: Public Affairs Books, 2004. —Sean J. Savage
political socialization
Political socialization refers to the factors that play a role in shaping the political behavior and tendencies of an individual. As an individual grows from childhood to adulthood, many factors or variables play a role in shaping and determining the various political choices that an individual actor will make. In the field of political science, scholars are particularly interested in the factors and influences that help to produce an individual's political preferences, beliefs, and attitudes. American political scientists and other social scientists have identified several key factors that play a crucial role in determining an individual's political preferences: (1) the family, (2) the education system, (3) television and other media, (4) one's peer group, (5) gender, (6) geographic location, and (7) religious preferences. Each variable will have different weight depending on the question or problem at hand and the life experience of each individual. Although individuals obviously differ in ways that defy easy generalization, the fields of political psychology and political sociology have advanced to the point where many robust generalizations can be made about certain political questions. For example, we know that people tend to affiliate with the same political party as their parents, which shows the strength of the family in socializing children. We know that socialization provides the framework for our political beliefs, values, political partisanship, and political ideology. Participating in groups has a complex and important impact on the development of how individual actors see the world. Many political theorists have been concerned with political socialization. For example, Socrates was charged with the crimes of impiety and corrupting the youth of Athens. In other words, in the eyes of the Athenian democracy, Socrates was improperly socializing the youth. Plato was an astute observer of the various factors that go into making a citizen. At
various points, he advocated censoring the poets and artists, since concentrating on such endeavors, not properly aligned with the truth, might corrupt the youth of Athens and make for poor citizens. Aristotle advocated a political organization he called a polity. This form of government, with its large middle class of property owners who were neither too rich nor too poor, would share enough in the way of common traits and values to insure the stability of the state. Jean-Jacques Rousseau (1712–78) argued that there is an important divide between nature and society. He made the case that people are good by nature and that society corrupts otherwise good people. He argues in his work Emile that the way to overcome the corrupting influences of society is to educate a child properly. Rousseau argues that children live according to their instincts, almost like animals, until 12 years of age. From the ages of 12 to 15, the child starts to develop reason and can start to engage in philosophical endeavors and complex reasoning. By 15, the well-educated child is an adult. Rousseau argues that this education should take place in the countryside, the place most suited for humans and away from the corrupting influences of the city. What is most important about Rousseau’s work is not whether he is correct about his methods, but the fact that he identified the role that political socialization plays in shaping the citizens of the state. John Stuart Mill (1806–73) was quite interested in the factors that led to what might be called group decisions. This work is found primarily in his System of Logic. He believed that with individual decision making, the principles of psychology could be applied without many problems. However, expanding these principles to larger groups is a problem. He argues that we need to find a way to explicate the laws of the group or the social laws. This can be done by understanding the many single causes that make up the social laws. Mill recognized that it is very difficult to find all of the single causes of any group decision. The most important argument that Mill made regarding both individuals and groups is that the moral worth of any action can only be decided based on the outcome or consequence of the action. Thus, unlike some philosophers, such as Plato, who argue for a tightly regulated schedule of social inculcation, one can make the case that Mill’s philosophy allows for a variety of beliefs and worldviews since all that matters
is the outcome of any action, not why any single person undertakes an action. From this, Mill contends that we should undertake any action we find permissible insofar as we do not harm others. Thus, as long as we are socialized with the principle that we should not harm others with our actions, the various aspects of socialization are not terribly important. However, Mill does advocate that certain social endeavors are more desirable and advantageous than what he considers to be base actions. For example, it is better to inculcate oneself and, one can argue by extension, children, with literature than with the base pleasures of life. Max Weber (1864–1920) was a sociologist and astute observer of various aspects of social life such as the law, the development of urban centers, and people’s attitudes. His most famous work, The Protestant Ethic and the Spirit of Capitalism, deals with the relationship of various cultural factors and the growth of capitalism. He argues that various Protestant cultures and the inculcation of certain values led to the rise of capitalism in certain European Protestant countries. His major contribution to the field of political socialization was to show the link between various beliefs and how these beliefs can affect the outcome of political and cultural processes. In the study of American politics, there have been many important thinkers who have contributed to the advancement of the study of political socialization. For example, Harold Lasswell (1902–78) made important contributions to the study of how individual psychological states can affect politics. In his seminal work, Psychopathology and Politics, he examines the mental states of various politicians and shows the effects of their mental states on the people in their regimes. Later, the development of survey research helped to provide an empirical basis for the various explanations of political behavior that political researchers sought to provide. Also, the framework that was provided by what became known as the behavioral revolution had a large impact on how social scientists looked at political socialization. To these theorists, it became more important to look at the actual behavior of actors and the outcomes of those actions as opposed to the ideologies and values that might inform those actions. One could make the case that such theorists emphasized less the aspects of society that might account for socialization than
the outcomes of the interactions among various actors. In his important work Bowling Alone, Robert Putnam explores the outcomes of socialization and what happens to the processes of socialization when society undergoes fundamental changes. He makes the case that as technology has changed and Americans spend more time watching television, they spend less time engaged in their communities. As a result, television plays a much larger role in socializing the community than does civil society, and the traditional bonds that held communities together for decades have collapsed. In political science, the subfield that has explored political socialization the most is political psychology. Because political scientists have typically relied on the theories of various psychologists, the factors and variables that are seen as important often depend on the philosophical assumptions held by the investigator. However, there are many difficulties in analyzing political psychology (the psyche, or soul, of the group or polis), given that psychology has generally dealt with individuals and then attempted to make generalizations about the whole from the starting point of individuals, whereas, almost by definition, the study of politics is the study of groups. Still, there have been significant advances made in the last several decades in political psychology. A large variety of theories and thinkers have played an important role in the development of how political psychology looks at socialization. The various influences include class struggles, the psychological states of leaders, mass participation, alienation, the work of Sigmund Freud and many other psychologists, rational choice theory, and game theory; more recently, cognitive psychology and neuropsychology have also played a significant role. All of these perspectives have added various ways of looking at political socialization. Several important debates fill the landscape today. There is a robust discussion concerning the socializing effects of civil society; some theorists contend that without an active civil society, democratization, for example, is impossible. There is also a discussion concerning the socializing effects of various political regimes on the wants and desires of individuals. For example, if an individual resides in a totalitarian regime, would it not be the
case that his or her preferences will be different than if he or she were to reside in a liberal democratic regime? Another important debate concerning political socialization is the impact of ideology on the political preferences of individuals. For example, after the collapse of Soviet communism between 1989 and 1991, many liberal democratic thinkers argued that the only important ideology in terms of socializing political actors would be liberal democratic thought, since all other forms of ideology had been discredited. However, the recent rise in religious fundamentalism throughout the world has given many thinkers who once held this position pause. Further Reading Plato. The Republic. Translated with notes and an interpretive essay by Allan Bloom. New York: Basic Books, 1991; Putnam, Robert D. Bowling Alone: The Collapse and Revival of American Community. New York: Simon & Schuster, 2000; Rousseau, Jean-Jacques. Emile. Translated by Barbara Foxley. New York: Dutton, 1974. —Wayne Le Cheminant
political symbols
Symbols, in politics, are used to represent political viewpoints, beliefs, values, threats, and the common good. The symbol acts as a sort of shorthand for the larger idea, group, or belief that is behind the symbol. For example, in the American political arena two of the best-known symbols are the Elephant, which is used to stand for the Republican Party, and the Donkey, which is used to stand for the Democratic Party. The very concept of a political symbol, or symbolism, is that certain signs or symbols will stand for something larger than the symbol itself. In the preceding example, seeing the Elephant at a political rally signals to a person that he or she is at an event that is being staged by Republicans and one can also assume that many of the people at the event will hold a set of beliefs that is consistent with what many call conservative values. Political symbols can take on many forms. Pictures, designs, various actions, metaphors, songs, and leaders can all take on a symbolic aspect. Effective symbols are those that immediately call to mind a particular idea, institution, or belief without any additional explanation. For example, virtually every American
understands that the bald eagle represents the United States of America. The symbol is particularly effective because it is instantly recognizable and because most observers perceive it to stand for a certain set of traits such as strength, courage, and persistence. In trying to understand symbolism we need to realize that language itself is inherently symbolic. By this it is meant that words stand in place of something else, be it objects, people, places, events, or ideas. Due to the fact that symbols are open to interpretation and are often imperfect in conveying a particular idea, there is often a great deal of contention in trying to control the use of particular symbols. The very concept of representation, which is one of the important functions that symbols fill, is highly debated. Things that stand in for something else or that resemble something can be used as a symbol. Various cultures, groups, and institutions are likely to have symbols that are important to the groups themselves but are difficult for other cultures, groups, or institutions to understand since metaphors and representation themselves are contextual. For example, in the American political arena the use of Native American mascots by athletic teams is seen by some groups, primarily Native American groups, as insensitive and bigoted; whereas, supporters of the use of such mascots believe that the mascots stand for virtues such as bravery and persistence. Part of the problem in the political debate over the use of these symbols is that it often depends on where one stands as to how the symbol will be perceived. The nature of political symbols is that they can carry with them a vast range of meanings which can often be in conflict with one another. Another interesting, and often troubling aspect, of the use of political symbols is the fight over who gets to set what a symbol stands for. The outcome of this fight is a complex web of predispositions, historical bias, current events, and the skill of those attempting to make use of particular symbols. Strategies that worked at one time may not work at another because of the differences in dispositions, current events, and leaders. One of the most famous political ads in modern political history made good use of an array of political symbols that painted one presidential candidate as dangerous and the other as peace loving. President Lyndon B. Johnson’s campaign ran the famous “Daisy Girl” ad against Republican challenger Barry Gold-
water to make it seem to American voters that Goldwater was likely to lead the United States into a nuclear confrontation with the Soviet Union, though neither Goldwater nor the Soviet Union is mentioned in the ad. The ad aired only once, yet it was so effective that many analysts factor its influence into the election results. The ad shows a little girl standing in a field slowly plucking a daisy. There are birds chirping in the background and, to add to the innocent nature of childhood, the little girl seemingly does not know all of her numbers perfectly. Soon, the viewer hears a countdown from an ominous off-camera male voice as the camera zooms into the girl's bewildered eyes, and then the flash and mushroom cloud of an atomic bomb is seen. The voice of President Johnson then states that we "must either love each other or we must die." The symbolism set before such an ominous statement is what makes the ad particularly effective. Certainly the young girl plucking flowers represents innocence and naiveté. The voter would not want to be caught in such a vulnerable position by failing to vote for the candidate who would help the country avoid such a grievous outcome. Due to Johnson's use of the ad, the mushroom cloud and atomic destruction became associated with Goldwater. Symbolic speech is another aspect of political symbols. There have been various notable instances in which citizens have made use of symbolic speech, such as burning draft cards or the flag of the United States of America. The courts have been less tolerant of symbolic speech than of other forms of speech or expression. In the case of United States v. O'Brien (1968), the Supreme Court ruled that burning draft cards is not protected speech. In the opinion of the Court, the federal law that prohibited the burning of draft cards was implemented to provide for the military's need for soldiers rather than to suppress criticism of the government, for which many other avenues remained available. However, in Texas v. Johnson (1989) the Supreme Court ruled that burning the American flag is a form of protected speech. The Court's opinion reasoned that "If there is a bedrock principle underlying the First Amendment it is that the Government may not prohibit the expression of an idea simply because society finds the idea itself offensive or disagreeable." Symbolic speech in its many varieties and manifestations is still a hotly debated topic in American politics.
One of the reasons why political symbols are not particularly stable over time, though many are, is that in politics it is often difficult to represent very abstract concepts such as freedom, democracy, liberty, and other political terms. Therefore, there is a strong incentive for competing groups to continue to battle over which symbols the public will accept as legitimate and which ones they ought to deride. Occasionally, things that are used as symbols, such as the George W. Bush administration's color-coded terrorism threat meter, can become objects of derision because it is not clear what they are meant to signify. What was the difference between yellow and orange, for example, and how was the average American supposed to be able to know the difference? More problematic than understanding the meaning of the symbol was that people started to see the use of the meter as a blatant attempt to control people's sentiments and to garner support for the administration through the use of fear. Whatever changes in the meaning of political symbols occur over the years, it is almost certain that politicians, leaders, opinion makers, and citizens will continue to use them. This is because political symbols serve as a shorthand for important and complex ideas, are able to evoke emotional connections concerning those ideas, and are often seen as stable, even though their meaning and use can change over time. Further Reading Edelman, Murray. Constructing the Political Spectacle. Chicago: University of Chicago Press, 1988; Jamieson, Kathleen Hall, and Paul Waldman. The Press Effect: Politicians, Journalists, and the Stories that Shape the Political World. New York: Oxford University Press, 2003; Kendall, Willmoore, and George W. Carey. The Basic Symbols of the American Political Tradition. Baton Rouge: Louisiana State University Press, 1970. —Wayne Le Cheminant
politics Politics derives from the Greek word polis, which means, in the general vernacular, “life in the polis” or “life in the city-state.” Over the last 2,000-plus years,
politics has taken on many meanings. The current understanding of politics was given voice by Professor Harold Lasswell. In his 1938 book, Who Gets What, When, How, Lasswell discusses politics as a struggle over who gets what, when, and how. When one focuses on the verb struggle, one discovers there are two key concepts inherent in the verb itself—process and power. According to Lasswell and the majority of contemporary political scientists and practicing politicians, politics is a process by which it is determined whose preferences will prevail in the making of public policy and how certain resources (principally money) will be allocated. The most dominant characteristic of this process is power. One of the best-known definitions of political power is provided by the political scientist Robert Dahl. In Modern Political Analysis, Dahl wrote: “When we single out influence from all other aspects of human interaction in order to give special attention, what interests us and what we focus attention on is that one or more of the persons in this interaction get what they want, by causing other people to act in some particular way. We want to call attention to a causal relationship between what A wants and what B gets. This definition of power can take the form A got B to do X. There are numerous examples of this definition and perhaps one of the most politically pointed examples can be found in any annual appropriations bill in the form of pork-barrel projects.” It should be noted that this definition of power is not universally accepted by all political scientists and politicians. That is to say, there is some question as to whether all power relationships between or among people can be termed political. Yet, even though there is a lack of unanimity regarding the personalization of power, there is common agreement as to the basic meaning of the term. The following excerpts exemplify this agreement. Karl Deutsch, in Politics and Government: How People Decide Their Fate, states: “Power is the ability to make things happen that would not have happened otherwise. In this sense, it is akin to causality, that is, to the production of a change in the probability distribution of events in the world.” Harold Lasswell and Abraham Kaplan, tell us in Power and Society: “Power is participation in making decisions: G has power over H with respect to the values of K if G participates in the making of decisions affecting
the K’s policies of H.” Thomas E. Patterson, in The American Democracy: “Politics is the process by which it is determined whose values will prevail in the making of public policy.” Johnson, Aldrich, Miller, Ostrom, and Rhode, in American Government: “Institutions define who can do what and how they can do it. Power is the who and the what (the how is procedure . . . ).” Clearly each of these excerpts testifies to the claim that power is a necessary condition of politics. Power can be used as a noun, a verb, an adjective, and as an adverb. When used in the context of politics, power always involves a relationship where more than one individual is involved and where one party to the relationship(s) is able to influence/manipulate the other party to do something. When this influence/ manipulation is agreed to by B, when it is voluntary and B gives A the right to influence/manipulate, power then takes on the dynamic of “authority.” Thus, A has the legitimate authority to exercise power over B. When A exercises power over B without B’s consent, then power takes on the dynamic of force or coercion. This sense of power is illegitimate. Because politics involves power, and power can be either legitimate (with consent) or illegitimate (absent consent), one can meaningfully describe and classify politics as being legitimate or illegitimate. An example of power, and therefore politics legitimately exercised, is our constitutionally based republican system where the citizens consent to be governed by elected representatives. An example of power, and therefore politics illegitimately exercised, was the absorption of Lithuania into the now defunct Soviet Union in the summer of 1940. Although politics has always been Lasswellian in character, its content and substance have meant different things at different times throughout history. What follows is a brief analysis of politics over time including three historical periods: the classical Greek philosophical period, the modern period beginning with the 15th- and 16th-century Italian writer Niccolo Machiavelli, and the contemporary period. Life in the polis was the life of the citizen, and the relationship between citizen and state was to be a very special one. The classical understanding of citizenship, although generally speaking limited to males, was nevertheless rich in substance and quite different from what citizenship means in the
United States today. For us, citizenship is fundamentally legal in character. To be a citizen means being subject to the laws and being entitled to the protection of the government at home and abroad. For the classical Greeks, including Aristotle (384– 322 b.c.), for example, citizenship meant participation in the process of governing, in “ruling and being ruled in turn.” Ruling and being ruled in turn presupposed knowing when to lead and when to follow. It mandated an equality of citizenship, shared rule, and equal participation in the political process. Citizenship also involved the claim to shared responsibilities in governing, and equality in both making the law and defining public policy alternatives. Hence, politics was a public affair and the distinction between what is called the public person (the government official, the citizen, the state) and the private person (the individual participating in his ordinary day-to-day affairs) was negligible. This view of politics claimed that the citizen was the state writ small; one was to be a mirror image of the other. This view also encompassed the idea that man was by nature social and political. While the social side of man necessitated that he live with others of his own species, the political nature of man dictated that he live in a political environment. The primary consideration regarding this environment was its nurturing disposition. That is, the polis itself, the political process and institutional arrangements, were to be designed to aid man in his endeavors to become fully human; to aid the citizen in achieving his greatest excellence or virtue (arete). Politics, therefore, was to help us know ourselves. The act of politics, its deliberating character, was to put us in a position to force us to “think about our relations with our fellow citizens, and other regimes and other human beings, and to nature as a whole.” It mandated that we “see ourselves as parts of the whole, neither independent nor isolated, but though limited, far from powerless.” According to many, Machiavellian politics, and what has come to be called “political realism,” marked the end of the tradition of the classical Greek conception. Francis Bacon, for example, once remarked that, “we are much beholden to Machiavelli and others that wrote what men do, and not what they ought to do.” Hence, Machiavelli focused his attention on two
questions—what do men seek or aim at, and how do they go about this task? In answer to the first question, Machiavelli claimed that all men pursue what is in their best interest. This is another way of saying that men desire everything because the individual defines what is in his best interest. In answer to the second question, Machiavelli claimed that the principal means by which man secures what is in his best interest is through the exercise of power. Power here is to be understood to mean one’s ability (physical, mental, and psychological) to control others, to control events and situations, and to compel obedience. With Machiavelli then, politics lost its classical ethical character and became an instrument of power. Politics and power were to become so intertwined that any attempt to distinguish one from the other amounted to an exercise in mental gymnastics. Politics was simply what a particular man did. One of the most significant historical turns with regard to Machiavellian politics rests with the rise of social contractarian and liberal thought. A select group of individuals who embraced Machiavellian politics in some form include Thomas Hobbes (1588–1679), John Locke (1632–1704), Charles baron de Montesquieu (1688–1755), David Hume (1711–76), Jeremy Bentham (1748–1832), John Stuart Mill (1806–73), and the framers of the U.S. Constitution. The framers of the U.S. Constitution fundamentally rejected the classical Greek philosophical conception of politics and embraced the empirical foundation of Machiavellianism as well as Machiavelli’s general understanding of human nature, especially Machiavelli’s human nature perspective as given expression by Thomas Hobbes. This perspective is evidenced in a number of Federalist papers, including 1, 10, 15, 17, 26, 37, 51, and 64. S. S. Wolin, for example, has argued that Publius “had accepted as axiomatic that the shape of constitutional government was dictated by the selfish nature of man and his restless pursuit of interest.” According to George Mace, where Hobbes applied only to government the principle of pursuing private interests to attain the public good, “the genesis of our heritage lies in the fact that Publius applied it to society as well, thereby enlarging, refining, and correcting Hobbes’s teaching so it could be converted into the Utility of Practice.”
The utility of practice assumed that conflict was ubiquitous in nature. Hence, the question became, what forms of conflict are to be permitted in society? Publius took the position that commerce dominated by market relations was the preferred form. As a consequence, the public good was inherent in that which is produced within the dynamics of a commercial market society, where individual relationships are essentially market relationships, overseen by a powerful national government that is capable of maintaining regularized, peaceful exchange relations. And politics was the means to this end. As with the classical Greek philosophical tradition, Machiavelli, Hobbes, Locke, and Publius were principally concerned with the question of political beginning. More specifically, while Publius basically accepted the beginnings of government and society as advanced by Machiavelli and Hobbes, a Lockean concern with organization and structure was also accepted. For Machiavelli and Hobbes, however, the beginning was an extreme act, and a republic, therefore, could not be founded democratically. For Publius, the beginning need not be extreme if the body-politic was a true reflection of man’s essence. The reflective character of the republic, therefore, presupposed an organization and structure that was fundamentally Lockean in design. It centered on an “energetic executive,” an executive exercising both force and will. The republic was also predicated on an independent judiciary, a court expressing only “judgment,” yet serving as the guardian of the Constitution and political rights. Finally, the republic presupposed a congress where the members of both houses were to be learned in political methodology, analysis, and analytical skills. What distinguished this structure from the republics of classical antiquity (other than its size) was its separation of powers, checks and balances, staggered terms of office, and the dependency of those who govern on those who are governed. We have now come full circle. The contemporary period views politics on two distinct yet interrelated levels. There is the level of the private person and community-directed life, where politics is given expression in the family (children competing with each other for the attention and/or favoritism of their parents), in school (where students compete with each other in sporting events, course work, etc.), and in the workplace (where individuals jockey for better working
conditions, higher incomes, promotions, etc.). This level of politics might be referred to as “soft politics.” The second level of politics today involves the public person and public life. This is the generally understood sense of Lasswellian politics. It is concerned with such matters as controlling the effects of factions/interest groups, labor relations, civil rights legislation, public education, rules that govern hiring and firing practices in the marketplace, national defense, the national budget, to mention only a few. This sense of politics is recognized on the state/local and national levels. Having said this, contemporary politics cannot be described in terms of the classical Greek model. Nor can it be described as purely Machiavellian. Clearly the Machiavellian legacy of power politics finds expression in the adversarial relationship(s) between Congress and the president. This is most evidenced in Madison’s Federalist 46. Furthermore, the Machiavellian legacy of power politics also finds expression in the institutional relationship(s) between Congress and the president. While Congress is laden with a number of endemic weaknesses (sluggishness, irresponsibility, amateurism, and parochialism), and is plagued with a highly complex committee bureaucracy, fragmentation of authority, and a significant sense of decentralization of power, the presidency is much better situated to influence the constitutional balance of power in its favor and to use Lasswellian politics as a means to its end. Finally, if we add to this political scenario the varied dynamics of public opinion and political socialization, political parties, nominations, campaigns, and elections, and interest groups and political action committees (PACs), we quickly discover that contemporary politics is highly complex and multidimensional. We also discover that contemporary politics has transformed the Machiavellian legacy from one where politics was something that was determined by one person or a few individuals, into a Lasswellian expression of politics where money, status, media manipulation, and ideology have come to be considered necessary conditions of political success. Nevertheless, the spirit of Machiavellian power politics via a Lasswellian formulation is alive and well and tends to evidence itself in a mixture of Hobbesian and Lockean relationships.
Further Reading Aldrich, John, et al. American Government: People, Institutions, and Politics. Boston: Houghton Mifflin, 1986; Aristotle. The Politics, edited and translated by Ernest Barker. London, Oxford, New York: Oxford University Press, 1952, 1971; Barker, R. K., et al. American Government. New York: Macmillan, 1987; Dahl, Robert. Modern Political Analysis. Englewood Cliffs, N.J.: Prentice Hall, 1967; Deutsch, Karl. Politics and Government: How People Decide Their Fate. New York: Alfred A. Knopf, 1967; Freeman, David A., et al. Understanding American Government. Redding, Calif.: Horizon Textbook Publishing, 2006; Hobbes, Thomas. Leviathan, edited by R. S. Peters and Michael Oakeshott. New York: Collier Classics, 1966; Kagan, Donald. The Great Dialogue: History of Greek Political Thought from Homer to Polybius. New York: Free Press, 1965; Ketcham, Ralph. The Anti-Federalist Papers and the Constitutional Debates. New York: New American Library, 1986; Lasswell, Harold D. Politics: Who Gets What, When, How. New York: McGraw-Hill, 1938; Locke, John. Two Treatises of Government, edited by Thomas I. Cook. New York: Hafner, 1947; Nelson, Michael, ed. The Presidency and the Political System. 3rd ed. Washington, D.C.: Congressional Quarterly Press, 1990; Paletz, David L. The Media in American Politics: Contents and Consequences. 2nd ed. New York: Longman, 2002; Patterson, Thomas E. The American Democracy. New York: McGraw-Hill, 1990; Wolin, S. S. Politics and Vision. Boston: Little, Brown, 1960. —David A. Freeman
polling Public opinion polls are a pervasive feature of the modern political landscape in the United States and throughout much of the world. Whether commissioned by the news media, political parties, interest groups, or candidates for public office, there is an ample supply of polling data available to even the most casual observer of public affairs. Public opinion surveys are performed for the purpose of ascertaining the attitudes and beliefs of citizens on various issues. Because it is often impossible or too costly to interview every member of a given population of interest about a particular subject, a sample of individuals within that population is selected in order
to generate an estimate of public opinion. The sample will deviate to some extent from the population and therefore, all polls contain some degree of error in their results. There are two types of samples: nonprobability samples, where selection is determined by some biased or unsystematic set of criteria (such as handing surveys out on a street corner), and probability samples, where everyone in the population has an equal chance of selection. Observers should be skeptical of polls that employ nonprobability samples because the results cannot be generalized to the population as a whole, only to those who participated in the survey. As a result, most polls sponsored and reported by news media organizations employ a probability sample to measure public opinion. When considering the history and background of polls, the first known example of a public opinion poll surfaced in the 1824 U.S. presidential election. It was an unscientific straw poll conducted by The Harrisburg Pennsylvanian comparing support for Andrew Jackson and John Quincy Adams. However, it was the 1936 election that is credited with transforming the process of polling into the scientific industry that it is today. During the presidential contest between incumbent Franklin D. Roosevelt and his Republican challenger Alf Landon, the Literary Digest sampled those who owned automobiles or had listed phone numbers asking them who they would vote for in the upcoming election. The results of this survey incorrectly predicted Landon would triumph over President Roosevelt. Because of the Great Depression, those who owned automobiles or had phones were more affluent than the rest of the population. As a result, the sample favored the Republican Landon, and nothing was done to correct this bias. On the other hand, pollster George Gallup conducted a more scientifically driven survey of a smaller sample of citizens that successfully predicted Roosevelt’s overwhelming victory. This turn of events not only led to the demise of the Literary Digest, but also to the professionalization of polling we see today. In the 1936 election, Gallup used quota sampling, which involved interviewers predetermining categories of people they wanted in the survey and selecting a quota of individuals from the designated groups. This approach turned out to be problematic in the 1948 election when the major polling outfits errone-
ously projected that Republican Thomas Dewey would defeat Democratic president Harry Truman. This development prompted the major pollsters to shift toward using probability sampling. In the succeeding years, election polls have proven to be highly accurate predictors of election outcomes, while surveys in general have become more reliable measures of public opinion. Although survey researchers recognize that taking a sample from a population involves some error, they want to take conscious steps to minimize it to the greatest extent possible. There are four types of survey errors that can arise in the process of public opinion polling: sampling error, measurement error, coverage error, and nonresponse error. Sampling error is based on unsystematic factors (meaning they occur by chance) that are out of the control of the survey administrator. Most researchers use simple random sampling, where each individual in the population has an equal chance of being included in the sample. Unless the simple random sample is of sufficient size, it may not be an accurate reflection of the attributes of a population. Thus, every survey contains some uncertainty or margin of error that allows for calculation of a confidence interval, within which one can be reasonably confident that the true population value falls. As the sample size increases, the margin of error declines. Pollsters interested in surveying the American public routinely set a goal of gathering 1,000 responses, which results in a margin of error of approximately plus or minus 3 percent. For example, if a president's approval rating is 45 percent with a margin of error of plus or minus 3 percent, the president's "true" approval rating is likely to be anywhere between 42 percent and 48 percent. Measurement error deals with the reliability and the validity of the survey instrument. Validity involves how accurately the researcher is measuring the concept of interest, while reliability is the consistency of the responses to a question that is asked repeatedly over time. Coverage error is present when a segment of the population under study is excluded from the sampling frame. For instance, individuals without Internet access would not be in the sampling frame of a Web-based survey. Or, those who vote by absentee ballot could not be included in an exit poll
(a poll that asks people how they voted when they leave the polling booth). Nonresponse error exists when an individual fails to respond to a particular survey question or does not participate at all. This type of error is of principal concern to analysts because, if there is a systematic reason why some group in the population is less likely to respond (for example, less-educated people are less likely to respond to surveys than more-educated people), the ensuing results may undermine the ability to generalize the results of the survey's sample to the general population. Survey administrators will scrutinize whether there are significant differences between the pool of individuals who responded and those who did not, in order to evaluate the extent of nonresponse error. If nonresponse error exists, pollsters can "weight" the data to make the demographics of the sample similar to the demographics of the population. Regarding the methodologies of surveys, there are several different ways a survey can be administered, including face-to-face surveys, telephone surveys, mail surveys, Internet surveys, and mixed-mode surveys that employ a combination of these methods. Telephone surveys are the most popular form of surveys and traditionally have been executed by individual interviewers reading questions over the phone and then recording the responses into computers. A more recent advent in telephone surveying is the use of interactive voice response or touch-tone data entry used by such firms as Survey USA. Telephone surveys are still the method of choice because they are a cost-effective and rapid means of gathering data. The weakness of this approach is that not every household in the United States has traditional telephone coverage, especially among younger people who only use cell phones (federal law restricts automated dialing of cell phone numbers, making them difficult for pollsters to reach). Response rates using this approach have also been on the decline, with the rise of caller ID and other call screening technologies. The oldest form of survey administration is the face-to-face interview. The face-to-face interview allows for an interview of greater length and complexity, although it can be highly reactive if the interviewer is of a different sex or race than the respondent. On average, this way of conducting a survey gets a higher response rate than telephone or mail
surveys, but this advantage must be balanced against the potential costs of using this technique, such as the fact that the results of in-person interviews cannot be obtained quickly. Mail surveys can be easier to conduct and are not as expensive, because an interviewer is not necessary. Surveying individuals by mail can also foster a greater sense of privacy and elicit more open responses than can be obtained if an interviewer is involved. However, like in-person interviews, if time is a factor, then mail surveys are not as effective in gathering data with the same kind of speed as a telephone survey. The growth and development of the Internet is a relatively new advance in conducting polls. Surveys can now be conducted online through the Web, where participants are sent to a site and are asked to complete a questionnaire, or via e-mail where a message is sent to a respondent who completes the answers to the survey and sends it back. Electronic surveys of this kind have been utilized more heavily over the past few years, in accordance with traditional polling standards, and should not be confused with the instant polls on various Web sites that do not represent a scientific estimate of public opinion. The major advantage of using the Internet or e-mail is the ability to scale back the enormous costs that are associated with telephone, mail, or face-to-face interviewing. The obvious disadvantage is coverage, even though the number of households with Internet access has been steadily rising. To account for this problem, organizations like Knowledge Networks utilize random dial techniques to recruit participants and then equip respondents with the ability to conduct the survey through the Internet. As access to the Internet expands and the methodology of conducting surveys by Internet is refined, it may become a viable alternative to telephone polls. The subject matter of polls can be quite diverse. While election polls conducted by various news media outlets, or the candidates themselves, garner the most attention, polls are taken for a variety of reasons and are sponsored by a number of different sources. Politicians use them to discern public mood on numerous public issues even when an election is not on the immediate horizon. Media organizations also want to track public opinion on the hot issues of the day. Some of the major media-sponsored polls include: CNN/Gallup, NBC News/Wall Street Journal, ABC
News/Washington Post, CBS News/New York Times, and Fox News/Opinion Dynamics, among others. Some critics suggest that the media report on poll results as a substitute for more legitimate, pressing news. Private companies conduct surveys to enhance the marketing of their products, and government agencies commission polls to evaluate citizen satisfaction with the implementation and delivery of programs and services. Interest groups also initiate polls to demonstrate public support for various causes. Public opinion polls have become a cornerstone of modern democracy. The proliferation of polls over the past generation has been nothing short of dramatic. What is less clear is the appropriate role of polls in a democratic society. Some commentators lambaste politicians for slavishly following the polls when public opinion may not be all that informed. Others suggest polls are a vital mechanism in ensuring that government officials are responsive to the citizenry. While there may not be a resolution to this debate any time in the near future, the most efficient and accurate way of estimating public opinion will be to continue to carry out polls in accordance with sound scientific practices. Further Reading Asher, Herbert. Polling and the Public: What Every Citizen Should Know. 6th ed. Washington, D.C.: Congressional Quarterly Press, 2004; Bardes, Barbara A., and Robert W. Oldenik. Public Opinion: Measuring the Public Mind. Belmont, Calif.: Wadsworth, 2002; Gawiser, Sheldon R., and G. Evans Witt. A Journalist's Guide to Public Opinion Polls. Westport, Conn.: Praeger, 1994; Geer, John G., ed. Public Opinion and Polling around the World: A Historical Encyclopedia. Santa Barbara, Calif.: ABC-CLIO, 2004; Genovese, Michael A., and Matthew J. Streb, eds. Polls and Politics: The Dilemmas of Democracy. Albany: SUNY Press, 2004; Traugott, Michael W., and Paul J. Lavrakas. The Voter's Guide to Election Polls. Lanham, Md.: Rowman & Littlefield, 2004. —Matthew Streb and Brian Frederick
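The sampling arithmetic described in this entry can be illustrated with a short, hypothetical sketch. The following Python fragment is not part of the original entry; the function names, the 95 percent confidence level (z of roughly 1.96), and the example figures are assumptions introduced here only to make the margin-of-error and weighting calculations concrete.

import math

def margin_of_error(sample_size, proportion=0.5, z=1.96):
    # Half-width of the confidence interval for a sample proportion,
    # assuming a simple random sample and a 95 percent confidence level.
    return z * math.sqrt(proportion * (1 - proportion) / sample_size)

# With 1,000 respondents the margin of error is roughly plus or minus
# 3 percent, matching the figure cited in the entry above.
print(round(margin_of_error(1000) * 100, 1))   # about 3.1

# A simple illustration of "weighting" for nonresponse: if a group makes
# up 30 percent of the population but only 20 percent of the sample, each
# of its respondents is counted 1.5 times.
population_share = 0.30
sample_share = 0.20
print(population_share / sample_share)         # 1.5

Because the margin of error shrinks only with the square root of the sample size, quadrupling a sample roughly halves the margin of error, which is one reason national polls tend to settle on samples of about 1,000 respondents rather than much larger ones.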
primary system The primary system was introduced at the turn of the 20th century as a means by which each party could begin to open up political participation to
greater numbers of citizens. It was a part of Progressive reforms designed to rid party conventions of control by political bosses who dominated political parties and thus the outcome of political elections under a dominant two-party political system. The primary system saw a resurgence in the 1970s, and has generally been viewed as a means of expanding mass participation in the political process. In its more traditional or standard form, it is a process for the selection of a nominee to represent the political party in a general election. However, the primary system operates under a number of different formats, although the use of a secret ballot has become standardized. The primary can be used to select the party’s nominee in local and state elections or in federal elections, and in any situation in which there is a partisan election and the party is to select its standard-bearer. A primary can also be used in nonpartisan elections, typically at the local level but also in selecting congressional candidates in, for example, Louisiana in an election in which candidates from all parties run against each other in a preliminary primary, to see if any candidate could garner a sufficient majority. If not, a run-off between the top finishers then takes place. Under this system, two candidates from the same party may end up being the candidates facing each other in the November election. Or, there may not be an election in November if a candidate wins a majority of the votes cast in the primary. The main purpose of the primary is to narrow down a field of candidates in an election to a choice between the top two or three candidates in a nonpartisan contest, or to the candidate with the greatest support among each party’s field of candidates by voters selecting that party’s nominee from a field of candidates. Typically the primary election operates under rules set by each party, but more practically the party controlling the legislative body that holds the election will determine procedures, format, and guidelines for the primary election, which will then have an impact on all party organizations participating in that election. Usually the state legislative body will set the election guidelines for the state, as will a local legislative body for local elections. The guidelines will generally affect all parties similarly participating in an election in order to
conduct elections efficiently, in a time- and cost-conserving manner. In almost all cases, the primary operates under a plurality, winner-take-all system; unless there are only two candidates, it is likely that in a multicandidate primary the nominee will win the party's nomination with a plurality, rather than a majority, of the vote. If the district or state is heavily dominated by one party, that party's nominee can then go on to win the November election easily without ever winning a majority vote of constituents. The primary system more Americans are familiar with is that associated with the presidential election, due to the intense media attention given especially to the first contests in each quadrennial race. While voter participation and turnout may be relatively high in New Hampshire, the first state in the nation to hold a primary election contest, voter interest declines as the contest to select delegates to attend each party's national convention wears on. As the competition begins to wane, with candidates dropping out as the race progresses, media attention becoming more diffuse as several states hold contests on the same date, and candidates beginning to prioritize where to concentrate scarce resources and limit campaigns to a few battleground states, participation declines. To give some idea of the complexity of the American election system, consider the primary system that has been used in presidential elections since 1972 to select the majority of delegates attending the national party conventions, which determine who will represent the Democratic and Republican parties in the presidential election held every four years in November. The national Democratic Party and national Republican Party have given guidelines to their state party organizations as to how to allocate and select delegates to represent the preferences of their state's citizens in the selection of the party's nominee. The state party organization may forgo a primary and rely upon a state party convention, which builds upon a series of local party-only caucuses to choose the state's preferences among the party's candidates seeking the national nomination. These caucuses and state conventions are usually not well attended, except by those most active in local or state party organizations, and so participation is low and largely limited to
those actively involved with the party. With primaries and state conventions scattered over a period of months, interest in attending a caucus or state convention drops as candidates leave the race and the party's eventual nominee attains a level of delegate support too overwhelming for any other candidate to match. If the election guidelines allow it, a party organization may bypass a primary altogether and select its nominees for the election by some alternative means. In looking at the primary election system for presidential elections, voters are confronted with a maze of systems that are often unique to just one state. Because Americans move frequently, a relocated voter is confronted not only with the need to reregister after a change of address, and possibly to become familiar with an entirely different slate of officeholders in a new electoral district, but also with what may be an entirely different voting system. In the presidential contest, delegates are allocated to each state according to formulas set by the national party organizations on the basis of either party loyalty (the state's record of electing the party's candidates to high-level state and federal offices) or population. Some states hold what are referred to as "beauty contest" primaries, whereby candidates win a popular vote contest, yet the selection of delegates may be done through a convention process or by a state party chair, and thus may not be connected to the popular vote. Or, one may vote for delegates who are pledged to particular candidates, with citizens able to split votes among several delegates. In this case delegate names may appear on the ballot pledged to a candidate, either after the delegate has received a signed waiver indicating the candidate's endorsement or after circulating a petition and gathering the number of signatures needed to get one's name on the ballot, with the approval of the candidate to whom one is pledged. At the same time a state could run a parallel "beauty contest" primary to determine the state's "overall winner." How a candidate responds to the outcomes from each of these options requires a great deal of preparation and groundwork in learning about the rules and procedures used in each state's nomination process. Delegates might also be awarded proportionately, on a districtwide or statewide basis, from the primary results.
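To make the proportional option concrete, the following sketch shows one simplified way a state's primary results might be translated into delegates. It is an illustration only, not a statement of any party's actual rules: the 15 percent qualifying threshold, the 30-delegate pool, the candidate names, and the largest-remainder rounding method are all assumptions chosen for the example, and real allocation formulas vary by party, state, and election cycle.

# Hypothetical, simplified proportional delegate allocation.
# Assumptions (not actual party rules): one statewide pool of delegates,
# a 15 percent qualifying threshold, and largest-remainder rounding.

def allocate_delegates(votes, total_delegates, threshold=0.15):
    """Return a dict mapping each qualifying candidate to delegates won."""
    total_votes = sum(votes.values())
    # Only candidates at or above the threshold share in the allocation.
    qualified = {c: v for c, v in votes.items()
                 if v / total_votes >= threshold}
    qualified_votes = sum(qualified.values())

    # First pass: award each candidate the whole-number part of its share.
    shares = {c: v / qualified_votes * total_delegates
              for c, v in qualified.items()}
    allocation = {c: int(share) for c, share in shares.items()}

    # Second pass: hand out leftover delegates by largest fractional remainder.
    leftover = total_delegates - sum(allocation.values())
    by_remainder = sorted(shares, key=lambda c: shares[c] - allocation[c],
                          reverse=True)
    for candidate in by_remainder[:leftover]:
        allocation[candidate] += 1
    return allocation

# Example: three hypothetical candidates competing for 30 statewide delegates.
print(allocate_delegates({"Candidate A": 48_000,
                          "Candidate B": 37_000,
                          "Candidate C": 15_000}, 30))

Run on the example vote totals, the sketch awards 14, 11, and 5 of the 30 delegates, which illustrates that even a "proportional" rule requires threshold and rounding decisions that each party's rule makers must spell out in advance.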
Each of these different systems has implications for the amount of organization required of any candidate seeking office. Each system also requires a certain degree of energy and responsibility on the part of the voter to become familiar and engaged with the process. The federal system ensures that at least 50 different systems will be in place for each party's candidates to understand, embrace, and maneuver through. Further Reading Bibby, John F. Politics, Parties, and Elections in America. 5th ed. Belmont, Calif.: Wadsworth/Thomson Learning, 2003; "Member FAQs," http://clerk.house.gov/member_info/memberfaq.html; Sorauf, Frank J. Party Politics in America. 3rd ed. Boston: Little, Brown, 1976; Wayne, Stephen J. The Road to the White House 2004: The Politics of Presidential Elections. Belmont, Calif.: Wadsworth, 2004. —Janet M. Martin
propaganda The term propaganda has been applied to many types of political communication, but what does it really mean? When most people think of propaganda, they think of the fiery speeches given by Adolf Hitler in Nazi Germany. In a sense, this is a good place to start, because Hitler had a special “Minister of Propaganda and People’s Enlightenment” in his government named Joseph Goebbels who perfected the art of propaganda. But the history of the use of propaganda predates Nazi Germany and continues to this day in different forms. The problem with understanding how propaganda is used is first to understand what it is. One of the difficulties with defining propaganda is that the word has a decidedly negative quality about it. Propaganda is associated with misrepresentations or distortions of the truth, or even with outright lying. Social scientists disagree on the precise definition of propaganda. There are some things that many definitions have in common, though. Generally, propaganda is considered to be a form of communication that is intended to persuade an audience. However, this definition is too broad in that it includes legitimate political argument and debate in the definition.
So does the definition of propaganda rest in the legitimacy of the argument? It is hard to tell, because what may seem to be a legitimate argument to one person may seem to be propaganda to someone who holds a differing view of the issue at hand. Perhaps it is better to start by saying what propaganda is not. One thing that propaganda is not is objective information. It is not the type of information that merely reports the facts as they are without interpretation. In this sense, it is not the type of news that you would see on TV or read in the newspaper (or, at least, what you would expect to see or read). Since propaganda is not objective, it makes sense that it must have a bias of some sort—it must favor one person, group, government, or ideology over another. Since propaganda has a bias, must the bias be intentional? Political scientists are divided on this issue. Some believe that for a communication to qualify as propaganda, the intent of the author of the propaganda must be to persuade or manipulate the target audience. Other political scientists disagree, pointing out that proving a person's intent can be an extremely difficult, if not impossible, task. Oftentimes, someone whom we might consider a propagandist for one point of view may actually be a "true believer" in his or her own message. But the fact remains that, ultimately, propaganda is designed to persuade the audience to take a position that is advocated by the piece of propaganda. The philosopher Jacques Ellul has described four means by which propaganda can be persuasive. The first is agitation, which provokes the audience and encourages action and/or sacrifices. The second is integration, which promotes solidarity and conformity among the audience. This is an important part of propaganda, because it helps to establish an in-group/out-group mentality in the audience, whereby people who disagree with the propaganda would be outsiders. The third, in contrast to integrative propaganda, is disintegrative propaganda, which destroys solidarity and emphasizes differences among groups. Again, an in-group/out-group mentality is established, with the in-group often portrayed as victims of the actions of the out-group, causing resentment among the in-group. The fourth means of propagandistic persuasion is facilitation. This refers to types of propaganda that are designed to encourage and maintain the
audience’s receptivity to propaganda. This type of propaganda reassures the audience that the propagandist is credible and is looking out for the audiences’ best interest. One of the central ways in which propaganda is successful is through simplification. Propaganda provides shortcuts to audiences in helping them deal with the complexities of real-world politics. It was both easier and more politically expedient for Hitler and the Nazis to blame Germany’s economic woes after World War I on war reparations that it was forced to pay under the Treaty of Versailles and on Jewish bankers. Instead of explaining the intricacies of international finance, the Nazis’ message was simple: the German economy was a victim of forces both outside and within Germany. Another way in which propaganda is effective is through the exploitation of the audience’s emotions. One of the emotions that propaganda most often invokes is fear. Often, this is done by exploiting existing insecurities or racial prejudice. Some examples of Nazi propaganda have included the fear of the loss of Aryan racial purity to Jews and immigrants. Fear of an enemy or eventuality can also be used in propaganda in conjunction with more positive emotions, such as hope and pride. In the example above, the fear of loss of national purity would be accompanied by messages that promote pride in the Aryan race and portray, heroically, the Nazi Party as the nation’s savior from ethnic contamination, giving the audience hope. Propaganda may take many forms, just as there are many different forms of communication. When we think of propaganda, some types of communication immediately come to mind, such as war posters and fiery speeches. But campaign advertisements, movies, symbols, music, plays, pictures, paintings, statues, architecture and even some news sources can be propagandistic. It is important to note that governments are not the only implementers of propaganda. Political parties, candidates, corporations, advertising executives, artists, movie producers, and musicians have been among some of the most effective propagandists. The history of propaganda traces back to the Catholic Church during the time of the Counter Reformation. The Counter Reformation was the Catholic Church’s attempts during the 16th and 17th centuries
to compete against the Protestant Reformation, which was drawing people away from the Catholic religion. The Counter Reformation consisted of a mass appeal by the Catholic Church to retain its members and to reconvert Protestants back to Catholicism. This propaganda took many forms, from pamphlets, drawings, and cartoons to statues, paintings, and architecture that glorified the righteousness of the Catholic Church. The first modern use of propaganda was by the United States and Britain during World War I. As the war began in Europe, the overwhelming majority of Americans remained isolationist, and the few who wanted to enter the war were divided as to which side to support. To shape American views of the war, the Woodrow Wilson administration created the Committee on Public Information in 1917. The committee eventually became known as the Creel Commission, after its chairman, George Creel. The Creel Commission launched a tremendous propaganda campaign to rally public opinion behind the war against Germany. In six months it had turned a largely isolationist American public strongly anti-German through propaganda disseminated through a variety of media, including posters, movies, and pamphlets that demonized the Germans. After the Creel Commission disbanded at the end of World War I, a prominent member of the commission, Edward Bernays, continued writing about and perfecting modes of propaganda. Bernays, a nephew of Sigmund Freud, is called the father of spin. Bernays taught public relations at New York University in the 1920s and wrote several books on the manipulation of public opinion. He later worked to shape the public relations of corporations through advertising and the use of opinion leaders, celebrities, and doctors as spokespersons for products. One of Bernays's greatest advertising feats was to change public opinion regarding smoking, making it socially acceptable for women. Bernays's books were studied in detail by the German propagandist Dr. Joseph Goebbels. Based on Bernays's writings and on his own research, Goebbels developed what he called "19 Principles of Propaganda." Beginning in the 1930s, the German population was subjected to an endless stream of propaganda that continued right up until the end of World War II.
The German Ministry of Propaganda made specific use of movies in their propaganda efforts. Accomplished German cinematographer Leni Riefenstahl was recruited to make the films Triumph of the Will and Olympia, which today are regarded as masterpieces of propagandistic cinematography. During the cold war, the United States produced many anticommunist propaganda pieces in different forms. The Soviet Union and China also produced many propaganda pieces during the time, including— interestingly enough—plays and operas. Moreover, since the Soviet Union and China both had government-run media outlets, they could control the content of entertainment programs as well as news programs. Since the 1950s it has been illegal for the U.S. government to spend federal funds unauthorized by Congress on “covert propaganda” targeted at the U.S. population. Each year Congress stipulates this as a spending restriction in appropriations bills. In the 1980s, the Reagan administration violated the law in its attempts to sway public opinion in support of its policies and in support of the Nicaraguan contras. In 2005, the Bush administration was found to have violated the law by paying prominent journalists to advocate for the administration’s policies. Additionally, the Bush administration misused public funds by hiring a firm to produce Video News Releases (VNRs)—unattributed short videos designed to look like news clips that are sent to TV news directors. VNRs have been used successfully by advertising agencies, particularly those that represent pharmaceutical companies. News directors will often air these videos because they appear to be high-quality reporting, and airing them reduces the cost of producing the news. In actuality, VNRs are advertising and/or propaganda disguised as objective news reporting. But the U.S. government does not necessarily have to use its own money to disseminate propagandistic messages. Often, there are artists, musicians, and movie directors who are more than willing to produce propaganda. Two movies, The Green Berets and Red Dawn, stand out as particularly propagandistic on behalf of U.S. foreign policy. Additionally, there are “advocacy” news outlets that propagandize for one ideology or party over the other. For example, Air America Radio and Pacifica Radio actively portray Democrats and liberal policies favorably whereas Fox
News actively portrays Republicans and conservative policies favorably. News outlets can also manifest a propagandistic bias through their decisions regarding which news stories to cover and how much prominence those stories are given. Additionally, the balance of commentators and "experts" that a news agency uses is important. Because of the difficulty of covering news abroad, news outlets are much more likely to rely on U.S. government sources for information about foreign affairs than they are for domestic issues. While it is illegal for the U.S. government to propagandize its own citizens, it is not illegal to propagandize citizens of other countries. In 1953, the United States Information Agency was established. Its primary goal was to portray the United States and its policies in a favorable light to the world. In 1999, the agency was folded into the State Department, and its public diplomacy functions are now headed by the under secretary of state for public diplomacy and public affairs. The U.S. government also funds many radio and TV outlets that broadcast into target audience areas. Examples include the Voice of America, Radio Free Europe/Radio Liberty, Radio Marti (targeted at Cuba), Radio Sawa (targeted at Arab nations), Radio Farda (targeted at Iran), and Radio Free Asia. The U.S. Department of Defense also engages in propaganda. In 2006, the Defense Department was found to have hired a firm that planted stories written by U.S. soldiers in Iraqi newspapers and paid Iraqi journalists to write news stories favorable to the United States. While these actions may be unethical, they are not illegal under U.S. law. In our media-saturated environment, propaganda surrounds our daily lives. Even commercial advertising is a sort of propaganda. It is certainly propaganda on behalf of the product, but there is more to it than that alone. In advertising, something is promised by the purchase of the product (it may be physical attractiveness, excitement, or something else). By promoting the idea that the purchase of products is a desirable activity, commercial advertising reinforces the capitalist system of markets and commerce. So, how does one protect oneself from the influence of propaganda? It is difficult. Today there are more people working in the public relations (propaganda) industry than there are reporters trying to cover stories objectively. Additionally, reporters are
often overworked and underpaid, and they frequently do not have enough time to find sources for stories beyond their usual governmental sources. One answer to resisting propaganda is organization. The women's movement organized itself and built a group consciousness that was able to defend itself against the propaganda of patriarchal oppression (such as the use of male-centric words). Another answer is education, particularly in media literacy. The Institute for Propaganda Analysis, which operated from 1937 to 1942, was established to study how propaganda works. The institute identified and exposed several of the tricks that propagandists frequently use, such as those discussed above. It is up to the public to be attentive to these tricks. When people are able to see how others try to manipulate them, they become resistant to the efforts of propagandists. Further Reading Baird, Jay W. The Mythical World of Nazi War Propaganda, 1939–1945. Minneapolis: University of Minnesota Press, 1975; Bernays, Edward. Propaganda. New York: Ig Publishing, 2004; Bytwerk, Randall L. Bending Spines: The Propagandas of Nazi Germany and the German Democratic Republic. East Lansing: Michigan State University Press, 2004; Ellul, Jacques. Propaganda: The Formation of Men's Attitudes. New York: Vintage, 1973; O'Shaughnessy, Nicholas Jackson. Politics and Propaganda: Weapons of Mass Seduction. Ann Arbor: University of Michigan Press, 2004. —Todd Belt
protest When people cannot get what they want from their government and have tried all of the normal channels of influence, they take to the streets to make their demands. Protest is the physical act of dissent. When people protest, they are indicating that the current system is unacceptable, and they are willing to take drastic measures to change it. Protest is usually, though not always, directed at the government and officials in the government. The United States has a long and proud tradition of protest, dating back to before the Revolutionary War. Protest is considered such an important part of American political life that
it is protected by the First Amendment to the U.S. Constitution. Protest is a tool of those who are politically weak. If protestors held legitimate political power, they could work within the system to further their goals and would not need to protest. Because protestors seek to change the existing political and social structure, they tend not to be ideologically conservative, since conservatives are opposed to change. Instead, protest tends to be the tool of those ideologies that want to change the system: people on the political left (liberals and others) who promote progressive reforms; and people on the extreme political right (reactionaries) who want regressive change. Today, many people are misidentified as conservative when they are actually reactionary—they want to undo the progressive reforms of the 20th century in an attempt to reestablish society as it used to be. The first well-organized act of protest in America was the Boston Tea Party. American colonists had been boycotting tea from the East India Company because the company had been given the right to sell tea to the colonies without having to pay the standard tax (colonial merchants had to pay the tax). The issue came to a head in December of 1773, when colonists, dressed as Mohawk Indians, boarded three ships carrying tea for the East India Company and dumped the tea overboard. The Boston Tea Party became a symbolic act of protest against the tyranny and capriciousness of British rule. The Boston Tea Party led to the establishment of the Continental Congress, which was the American colonists' attempt at self-rule. While the Continental Congress had no legal authority, it was a brazen act of protest by the colonists against British rule. The most significant accomplishment of the Continental Congress was the drafting of the Declaration of Independence. The declaration, adopted on July 4, 1776, asserted the right of the governed to abolish a government when it did not protect the people's inalienable rights, and the right of the people to institute a new government in its place. The declaration committed the colonies fully to the Revolutionary War already under way, and no single act of protest is more extreme than armed rebellion against the government. But protest is not necessarily violent. In fact, the distinction between violent and nonviolent protest is an important one. The United States has traditions in
[Photo caption: Black Panthers demonstrating in New York City (New York Public Library)]
both violent and nonviolent protest. Within the strategies of violence and nonviolence are specific tactics that protestors use, and these can give rise to different responses from government officials. Following the Revolutionary War, the former colonies were loosely (and ineffectively) governed by the Articles of Confederation. In an act of protest against farm foreclosures, poor farmers, many Revolutionary War veterans, mobilized a small force known as “regulators” to resist the rulings of debtors’ courts. The uprising became known as Shays’s Rebellion (after its leader, Daniel Shays). The regulators remained unchecked for several months before Massachusetts was able to put together militias to halt the rebellion. Though Shays’s Rebellion was ultimately put down by force, it gave rise to the Constitutional Convention the next summer to design a new government. The new Constitution gave the
president power to maintain order. The next time a rebellion came (the Whiskey Rebellion), the new president, George Washington, was easily able to put it down with force. While protest was protected under the First Amendment, it was limited to peaceful protest, which the Whiskey Rebellion was not. The next major movement that employed elements of political protest was the abolition movement, which was aimed at ending slavery. The abolition movement was a heavily fragmented one and employed several different tactics, mostly within the system (such as pamphleteering, preaching, and petitioning). A limited amount of direct or illegal action did take place outside the bounds of the normal political channels. Of particular note was the establishment of what came to be known as the "underground railroad"—a series of safe houses for escaped slaves making their way from the southern to
the northern states. These safe houses were illegal, and providing shelter to runaway slaves was an act of civil disobedience—breaking the law to prove a law is unjust. Many of those who were active in the abolition movement were also active in the women’s suffrage movement. Thinking that voting rights for freed blacks would be extended to all citizens, suffragists were disappointed by a United States Supreme Court ruling that held that the political rights established for black men as a result of the post–Civil War Constitutional Amendments did not apply to women. Specifically, the court ruled that public political activity was beyond the natural sphere of involvement for women, who were naturally relegated to the private sphere of life (Bradwell v. Illinois, 1873). In response, the suffragists employed peaceful protest tactics. Many of the common protest tactics we see today can be traced back to the women’s suffrage movement. These include picketing, parading, hunger strikes, and the willful breaking of laws (civil disobedience) in order to provoke arrest and to fill up jails. Hunger strikes were particularly effective in attracting sympathy for the suffragists’ cause. Existing alongside of the suffragist movement was the temperance (anti-liquor) movement. Led by the Women’s Christian Temperance Union, the movement organized boycotts and staged protest marches. The movement gained significant political momentum after the turn of the century, resulting in the Eighteenth Amendment. Ratified in 1919, the amendment established the nationwide prohibition against alcoholic beverages. The era of prohibition, known as the “Great Experiment,” ended in 1933 with the ratification of the Twenty-first Amendment. Another movement that existed alongside and coevolved with the suffragist movement was the labor movement. But, whereas the suffrage movement ended in 1920 with the ratification of the Nineteenth Amendment, the labor movement continues to this day. The main tactic of the labor movement is organization into unions in order to threaten a strike (cessation of work) to coerce concessions from industry owners. While striking workers would often use picketing as a common tactic, the sit-down strike would become a tactic that would be copied by subsequent political and social movements. A sit-down strike involved workers showing up to their place of employ-
ment and then peacefully sitting down while on the job. The workers would then have to be forcefully removed by the owner and replaced. This tactic not only disrupted production for industry owners, it also generated sympathy for the strikers, as they appeared to be victims of heavy-handed owners. The sit-down tactic would later become a mainstay of the Civil Rights movement of the 1960s. Other labor protest efforts were directed at the government instead of industrialists. Workers, including young children, held protest marches to demand legislation that would institute a minimum work age. Protestors also demanded greater occupational health and safety laws and a limit on the amount of work that a worker had to do before his/her employer was required to pay overtime. These protests, and the subsequent legislation that accomplished the protestors’ goals, were part of the Progressive Era (1890s– 1920s). The next major era of social protest is associated with the 1960s but can be traced back to 1955, when Rosa Parks refused to give up her bus seat to a white passenger. Rosa Parks’s protest action provided publicity to a growing Civil Rights movement. The most prominent leader of the movement, the Rev. Dr. Martin Luther King, Jr., promoted a strategy of peaceful protest through tactics of civil disobedience. While protest marches and rallies were standard fare during the Civil Rights movements, the sit-in became an effective, yet provocative tactic. Like the sit-down strikes from the Progressive Era, sit-ins were undertaken at hotels, restaurants, and other businesses that refused to serve or hire blacks. Protestors would sit down in these businesses, occupying doorways and other public areas, demanding concessions from the owners. Owners would oftentimes order their staff to retaliate (in some restaurants, this meant pouring hot coffee in protestors’ laps). The police would be called in, and protestors would be charged with trespassing. Protestors would literally have to be dragged away by police because they would “go limp” as the police removed them. In what became known as “Freedom Rides,” white students in the northern states would ride down to the South on buses to participate with blacks in these protests. The tactic of going limp served two important purposes. First, it made things a bit more difficult, time-consuming, and embarrassing for the police and business owners.
Second, it protected the protestor from the police, because the police would not be worried about the possibility of the protestor running away and feel the need to use excessive force (although the police frequently would use excessive force against peaceful protestors). The passive, nonviolent protest tactics developed by the Civil Rights movement were emulated by other movements that emerged in the 1960s. These movements included the free speech movement at the University of California, Berkeley, the antiwar movement, and the feminist movement. These passive protest tactics were also used in later movements such as the disability rights movements, the farm workers’ movement, the gay rights movement, ethnic identity movements, later antiwar movements, religious right-wing movements (around issues such as abortion and gay marriage), and antieconomic globalization movements (labeled in the media as such, although these movements were more concerned with the effects of globalization than globalization itself, per se). Music has always held a special place in protest, particularly in its use in labor actions. But protest songs reached their peak of influence in the 1960s and 1970s, when songs such as “We Shall Overcome” and “Give Peace a Chance” became anthems for various movements. The use of music during protest marches and rallies builds solidarity among the protestors and can lighten the atmosphere that is often made tense by a police presence. Some movements in the second half of the 20th century, discouraged by the lack of success of nonviolent protest, adopted more confrontational tactics. Of particular note was the anti–Vietnam War effort, in which the organizers consciously planned a switch from protest to resistance. These confrontational resistance tactics included blockading of draft induction centers, attempts to stop trains and buses carrying draftees, refusal to serve in the military, and the burning of draft cards and American flags (the U.S. Supreme Court has ruled that the burning of the flag is constitutionally protected free speech in the 1989 case of Texas v. Johnson). During the latter half of the 1960s, a resistance movement against police brutality in black communities was led by the Black Panthers in Oakland, which established chapters in other major cities. The Panthers
took it upon themselves to “police the police” by following police patrols and observing how police treated citizens. Because the police were armed, the Panthers also carried guns, and often dressed in leather jackets and wore berets. Tensions between the Panthers and the Oakland police escalated, resulting in the shooting death of a member of the Panthers. Another movement to use confrontational protest tactics was the antiabortion movement that emerged in the early 1980s. One tactic was for members of the movement to block women’s access to abortion clinics. More violent actions included the bombing of abortion clinics and the targeting of abortion doctors for physical harassment and murder. Given the right of the people to free association and to protest, local, state, and federal governments have limited options for controlling peaceful protest. Governments have been able to set time, place, and manner restrictions on protest through the requirement of parading permits. The frequency of protest at major party national conventions has resulted in the designation of fenced-off areas for protests, which restrict protestors’ access to the conventions, delegates, and media. During the late 1950s, the FBI instituted a program called COINTELPRO to suppress the growth of dissent and protest. Activities undertaken by the program included agents infiltrating protest groups in order to disrupt activities, harassing protest leaders, and acting as agents provocateurs—provoking illegal activities so the members of the group could be arrested. The actions of COINTELPRO were publicly exposed in 1976 by the U.S. Senate’s Church Committee. The strategy of nonviolence creates more sympathy for protestors than does the strategy of violence. This is because violence begets more violence—violent tactics give a reason and a moral justification to the government to use force against protestors. Obviously, when protestors break the law, police and government officials can take legal, forceful action to restore order and apprehend those who have broken the law. Local police and state troopers across the country have been trained in nonlethal “crowd control” techniques including the use of batons and shields, water cannons, tear gas, pepper spray, rubber bullets, and tasers. Additionally, police departments maintain “riot gear” that includes gloves,
boots, body armor, and helmets with visors to protect officers. The United States, born out of violent revolution, has enshrined in the Bill of Rights "the right of the people peaceably to assemble, and to petition the government for a redress of grievances." The right of the people to self-determination is a deeply rooted part of American political culture. Even Thomas Jefferson wrote, "A little rebellion now and then is a good thing. It is a medicine necessary for the sound health of government. God forbid that we should ever be twenty years without such a rebellion." Rebellion in the form of protest, be it violent or nonviolent, has been and continues to be an essential part of American political life. Further Reading Alinsky, Saul D. Rules for Radicals: A Pragmatic Primer for Realistic Radicals. New York: Vintage Books, 1971; Albert, Judith Clavir, and Stewart Edward Albert. The Sixties Papers: Documents of a Rebellious Decade. New York: Praeger, 1984; Meyer, David S. The Politics of Protest: Social Movements in America. New York: Oxford University Press, 2006; Unger, Irwin, and Debi Unger, eds. The Times Were a Changin': The Sixties Reader. New York: Three Rivers Press, 1998; Williams, Juan. Eyes on the Prize: America's Civil Rights Years 1954–1965. New York: Penguin, 1988; Zinn, Howard. A People's History of the United States: 1492 to Present. New York: HarperPerennial Modern Classics, 2005. —Todd Belt
public opinion Public opinion describes citizens’ core beliefs about society and politics, as well as attitudes about government policies, political parties, candidates for public office, and public policy topics of the day. Public opinion is an important part of contemporary democracy in that it is able to convey the “will of the people” to elected officials. Opinion polls or opinion surveys are used to measure public opinion, defined as an overall picture of citizens’ beliefs and attitudes. Early opinion polls, also known as straw polls, were used in politics starting with the 1824 presidential election. Straw polls were not scientific in their approach, and their results
did not reflect public opinion with great accuracy. Modern polling uses systematic sampling and other techniques to produce more reliable results. The era of scientific polling in politics started with the 1936 presidential election, when pollster George Gallup correctly predicted that Democrat Franklin D. Roosevelt would defeat Republican contender Alf Landon. That same year, the Literary Digest, a popular magazine of that time, predicted that Landon would win based on a straw poll involving 2.3 million readers. These readers were generally more affluent and Republican than the general population, so the results were skewed in favor of Landon. After its inaccurate prediction, the Literary Digest went out of business, scientific public opinion polling became a staple of American politics, and George Gallup became a household name. He is considered the father of modern opinion polling. Opinion polls are surveys conducted by professional interviewers who ask a standard set of questions to a group of people. People who answer surveys are labeled "respondents" by pollsters. A typical national sample consists of 1,000 to 1,500 respondents, who reflect what a nation of approximately 200 million American adults is thinking with surprising accuracy. This is possible if the sample that is selected closely represents the whole population in terms of age, education, race, ethnicity, region, and other background characteristics. If the respondents in a given sample do not closely resemble the make-up of the larger population, then their beliefs and attitudes will not accurately represent the larger population from which they are drawn. Random sampling is an important part of obtaining a representative sample. This type of sampling ensures that everyone in the population has the same chance of being selected. A perfectly random sample is not possible given that some people are not available or able to answer surveys (e.g., prison inmates, homeless people, and those without telephones, to name a few), so statisticians use probability theory to determine how closely the results of a given survey follow what the whole population would say if asked the same questions. The results from a randomly selected sample of 1,500 people typically carry a margin of error of roughly plus or minus 3 percentage points at a 95 percent confidence level. In other words, if the same survey were given to 100 different randomly selected samples, the results would fall within that margin about 95 times out of 100.
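As a rough illustration of the probability theory involved (a simplified sketch, not a description of any particular polling firm's procedures), the margin of error for a simple random sample can be approximated from the sample size alone; the sample sizes and the 95 percent confidence multiplier below are illustrative assumptions.

import math

def margin_of_error(sample_size, confidence_z=1.96):
    """Approximate margin of error, in percentage points, for a simple
    random sample, using the conservative assumption that the true
    proportion is 50 percent (which maximizes the variance)."""
    p = 0.5
    return 100 * confidence_z * math.sqrt(p * (1 - p) / sample_size)

# A sample of 1,000 yields roughly a 3.1-point margin and a sample of
# 1,500 roughly 2.5 points, both at the 95 percent confidence level.
for n in (1_000, 1_500):
    print(n, round(margin_of_error(n), 1))

This simple formula captures only sampling error; it says nothing about the nonsampling problems discussed below, such as question wording, question order, and declining response rates.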
Surveys can be administered in different ways, including face-to-face interviews, interviews over the phone, self-administered surveys through the mail, and interviews with voters as they exit their polling place. Internet polling has also become popular in recent years, but this type of polling is generally unrepresentative of the population because not all Americans own or have access to a computer. However, polling firms are working on ways to overcome the biases inherent in Internet polling, and this method of gathering public opinion will likely increase in popularity in coming years given its low cost. Pollsters are faced with a number of challenges when it comes to obtaining an accurate reading of public opinion. Pollsters have to be cautious about question wording and question order. Responses to a question can change depending upon how it is asked; biased or “leading” language can produce false support for or opposition to a public policy. For example, public support for legal abortion is higher when the term “late term abortion” is used versus “partial-birth abortion.” The order of questions can also affect responses to certain questions. For example, support for affirmative action programs is higher in surveys when respondents are asked a series of questions about racial equality prior to the specific question about affirmative action. Pollsters face new challenges in gathering accurate public opinion data with the constantly evolving world of mass communication. Americans have become less willing to respond to surveys because they are inundated with phone calls from telemarketers. Citizens are increasingly using caller identification systems to screen phone calls, and legitimate survey interviewers are being screened out at increasing rates. Another problem for pollsters is the proliferation of cell phones. The number of Americans who are choosing to use a cell phone instead of a land line is still small, but it represents a growing segment of the population that is beyond the reach of pollsters. Young adults are more likely than other Americans to have a cellular phone in lieu of a land line, which means that as this trend continues, their beliefs and attitudes will be underrepresented in opinion polls. Another obstacle in the way of legitimate polling firms gathering quality data is the advent and proliferation of “push polls” or intentionally biased surveys
administered for political purposes. An example of a "push poll" is when a campaign worker calls people who are likely to vote for the opposing candidate to ask them a series of "survey" questions that malign their favored candidate. While documented cases of "push polling" have been few, the media attention they receive has stirred up questions of polling accuracy among the general public. A famous Mark Twain quote encapsulates what many Americans think about polling: there are "lies, damn lies, and statistics." The idea that opinion polling can generally be dismissed because data can easily be manipulated does not take into account the professional structure in place to mitigate such problems. The leaders of major polling firms typically hold Ph.D.'s, as do many other employees who create and administer polls. They have the advanced research skills necessary to administer even the most basic survey. Furthermore, polling professionals follow research protocols established by the American Association for Public Opinion Research (AAPOR), an organization with the power to sanction members who violate research standards. Additionally, scientific polling is a costly endeavor, and considering that a firm's reputation is based on the accuracy of its work, it is in its best interest to produce the most reliable results possible. A common critique of polling is that it is unrepresentative because "I have not been asked to take a poll." Given that there are more than 200 million American adults, most polls include only about 1,000 respondents, and only a few thousand polls are run each year, the odds of being selected for a poll in a given year are roughly 1 in 100. Therefore, it is not surprising that many Americans have never been asked to complete a poll. Public opinion polls play an important role in contemporary American politics. Many modern presidents have used public opinion polls to determine what policies are favored by the American public. John F. Kennedy was the first president to use polls extensively to see which messages resonated with the public, and presidents since that time have followed in his footsteps. President Bill Clinton took polling to new heights, elevating pollsters to senior advisory positions and using polls to determine what policies to embrace and how to "sell" them to the public. President George W. Bush continued the
trend, making frequent use of polls during both terms. The influence of public opinion polls in American politics is debatable. Certain high-profile examples support the idea that public officials are sensitive to the desires of the general public. For example, public opinion was a crucial factor in the United States pulling out of the Vietnam War. Furthermore, government policy parallels public opinion two-thirds of the time, suggesting a relatively tight fit between what the public wants and the actions of public officials. On the other hand, examples abound where government officials pass legislation or take stances that oppose what the majority of the public wants (e.g., President Clinton’s impeachment, gun control, and continued U.S. presence in Iraq with the Second Gulf War). It is unclear under what circumstances opinion polls are used to set public policy, and this issue is further complicated by a debate about whether public opinion should play a major role in setting policy. Political scientists have long debated the role of public opinion in policy making given the public’s lack of knowledge about and interest in politics and policy. In the 1960s, Philip Converse discovered that people seemed to randomly change their opinions to the same survey questions from one interview to the next. He coined the term “nonattitudes” to describe the responses being given by respondents to satisfy pollsters when they lack a solid opinion, and questioned the value of public opinion in policy making. More recently, James Fishkin developed “deliberative polling” to measure whether increased knowledge of policy specifics shifts support for the policy in question, and found that public support for most policies shifts dramatically when citizens become educated about the policy. In other words, public support for many major policies, as measured by opinion polls, would be quite different if the general public knew more about them. A fundamental component of American democracy is rule by the people, and opinion polling plays an important role in conveying the will of the people to elected officials. As noted above, there are potential shortcomings to using polls as the voice of the people: “push polls”; a less informed public; and public officials using polls to “sell” unpopular policy positions. However, the advent of scientific polling in
the 20th century opened new doors for understanding the beliefs and attitudes of Americans, and opened up the possibility of democratic rule that more closely follows public desires. Whether this possibility will be fully realized remains to be seen. Further Reading American Association for Public Opinion Research, www.aapor.org; Converse, Philip E. "The Nature of Belief Systems in Mass Publics," in David Apter, ed., Ideology and Discontent. New York: Free Press, 1964; Fishkin, James. Democracy and Deliberation: New Directions for Democratic Reform. New Haven, Conn.: Yale University Press, 1993; Geer, John. From Tea Leaves to Opinion Polls. New York: Columbia University Press, 1996; Page, Benjamin I., and Robert Y. Shapiro. The Rational Public: Fifty Years of Trends in Americans' Policy Preferences. Chicago: University of Chicago Press, 1992; Weissberg, Robert. Polling, Policy, and Public Opinion: The Case against Heeding the "Voice of the People." New York: Palgrave, 2002. —Caroline Heldman
Republican Party The Republican Party is one of the two major parties in American politics and government. It was established in 1854. Most of its founding members were former Whigs and disaffected Democrats. On the issue of slavery, the Republican Party adopted the position of the Free-Soil Party, which opposed the extension of slavery to new states and territories. To some extent, the Republican Party may also trace its ideological and historical origins to the Federalist Party, which was established during the 1790s and dissolved shortly after the War of 1812. Running against two Democratic presidential nominees, Abraham Lincoln, a former Whig congressman from Illinois, was elected as the first Republican president in 1860. After the Civil War began in 1861, Republicans were generally united in the belief that the Confederacy must be militarily defeated until it accepted unconditional surrender so that the Union would be preserved. Initially, only a few Republicans believed that the Civil War should be primarily fought to entirely abolish slavery. While Lincoln’s Emancipation Proclamation of 1863 declared all slaves in the
[Cartoon caption: A cartoon portraying the elephant as a symbol for the Republican Party. The elephant became a widely known symbol for the party between 1860 and 1872, when a number of magazines popularized the association. (Cartoon by Thomas Nast, HarpWeek, LLC)]
Confederacy to be free, it did not apply to slaves in areas under Union control. As the Civil War continued, Republicans in Congress and Lincoln’s cabinet disagreed over military strategy and how to treat the South after the war. Despite the opposition of some Republicans to his renomination, Lincoln was renominated and reelected in 1864. Shortly before his assassination in 1865, Lincoln actively supported the proposed Thirteenth Amendment, which entirely abolished slavery. Andrew Johnson, Lincoln’s vice president and a former Democratic senator from Tennessee, soon proved to be an unpopular, controversial president among Republicans. Believing that he was fulfilling Lincoln’s intentions for the Reconstruction of the
South, Johnson wanted to implement moderate, conciliatory policies. Radical Republicans, led by Pennsylvania congressman Thaddeus Stevens, wanted a more punitive, prolonged occupation of the South. Suspecting Johnson of being a southern sympathizer and abusing his authority, Republicans impeached Johnson in the House of Representatives, but he was acquitted in the Senate by one vote. In the presidential election of 1868, Republicans united behind their presidential nominee, former general Ulysses S. Grant. Grant easily won the election and was reelected in 1872 with his party controlling Congress. The Grant administration concentrated on Reconstruction and the further settlement of the West, but its reputation was tainted by scandals and Grant’s passive leadership. From 1876 until 1896, presidential elections were closely and bitterly contested by the two major parties. By 1876, most southern states were readmitted to the Union and regained their votes in the electoral college. The South emerged as a one-party Democratic region, which used its state powers and party rules to prevent or discourage African Americans from voting. In the 1876 presidential election, Samuel Tilden, the Democratic presidential nominee, won nearly 270,000 more popular votes than Rutherford B. Hayes, the Republican presidential nominee. But the results in the electoral college were disputed, so the Democratic-controlled House of Representatives and Republican-controlled Senate agreed to appoint a bipartisan commission to end the stalemate. By a one-vote majority, the commission ruled in favor of Hayes on March 2, 1877. The absence of a clear, consistent partisan majority among American voters during this period was further complicated by the growth of the Grange and silver movements and minor parties, especially the Greenback and Populist Parties. These minor parties focused on the economic grievances of farmers, especially the gold standard, high tariffs, high railroad rates, and low farm prices. Also, a faction of Republicans known as mugwumps was alienated by the corrupting influence of big business and machine bosses within the Republican Party, now nicknamed the GOP for “Grand Old Party” and symbolized as an elephant by political cartoonist Thomas Nast. This 20-year period of intense, roughly equal twoparty competition combined with the increasing
potential for a new, multiparty system ended with the Republican realignment of 1896. Trying to attract votes from the Populist Party and agrarian protest movements in general, the Democratic Party nominated William Jennings Bryan for president in 1896. Bryan was a Nebraska congressman affiliated with both the Populist and Democratic Parties. Bryan’s economic platform included opposition to an exclusive gold standard and high tariffs. Bryan personally campaigned on these issues with an evangelical fervor throughout the nation. Both Republicans and economically conservative Democrats like President Grover Cleveland regarded Bryan as a dangerous agrarian radical hostile to urban and industrial interests. The Republican Party nominated Governor William McKinley of Ohio for president. Managed by Marcus Hanna, an Ohio businessman, the Republican presidential campaign persuasively contended that its economic platform equally benefited industrialists and factory workers, bankers and farmers. Republican campaign literature was mass-produced in several languages and widely distributed in cities with large immigrant populations. Some of this literature accused Bryan of an antiurban, anti-immigrant bias. The Republicans also contrasted the allegedly demagogic, rabble-rousing campaign style of Bryan with McKinley’s restrained, dignified “front porch campaign” from his home in Ohio. McKinley was the first victorious presidential nominee to receive a majority of the popular votes in 24 years. McKinley carried all of the Northeast, most of the midwestern and border states, and California in the electoral college. For a Republican, McKinley received unusually strong electoral support from Catholics, factory workers, and immigrants. From the Republican realignment of 1896 until the Democratic realignment of 1932–36, the Republican Party usually controlled the presidency and Congress. During this period of Republican dominance, the GOP experienced greater intraparty conflict and diversity over its ideology, policy agenda, and leadership. In the late 19th and early 20th centuries, the Progressive movement affected both major parties, but especially the Republican Party. Progressives wanted to weaken machine politics and reform American politics and government through such measures as primaries, civil service merit systems to
replace patronage, the direct election of senators, secret ballots, women’s suffrage, and the use of referenda and initiatives in State and local legislative processes. Progressives also wanted to achieve economic reforms, such as labor laws requiring a minimum wage and the abolition of child labor, a graduated income tax, and stricter regulation of banking and business practices. Progressive Republicans also wanted to reaffirm their party’s historical legacy with African Americans by increasing black participation in the GOP and working with black civic leaders to reduce racial discrimination in northern cities. Meanwhile, the so-called Old Guard of the GOP, closely associated with big business and machine bosses, clashed with Progressives for control of their party’s nominations, conventions, and platforms. The rift within the Republican Party between the Old Guard and the Progressives was especially significant in the 1912 presidential election. After former Republican president Theodore Roosevelt failed to defeat incumbent President William H. Taft for their party’s 1912 presidential nomination, the Progressive Party nominated Roosevelt for president. Nicknamed the Bull Moose Party, the Progressive Party criticized Taft’s support from the Old Guard. Later called the New Nationalism, the Progressive platform emphasized a greater, more centralized role for the federal government in reforming and regulating the economy, higher taxes on the wealthy, and women’s suffrage. The split between Taft and Roosevelt helped the Democrats to win the 1912 presidential election with only 42 percent of the popular votes. Until 1940, the Old Guard or conservative wing of the GOP continued to dominate Republican national conventions, platforms, and presidential nominations. Despite its national control of the GOP during this period, the conservative wing realized that it needed to be responsive to the Progressive or liberal wing’s priorities, especially regarding civil rights for African Americans, more participation by women in the GOP, opposition to machine politics and corruption, and environmental protection. Nonetheless, conservative positions on economics and foreign policy, such as support for high tariffs and opposition to stricter regulations on big business and the League of Nations, prevailed in the Republican Party during the 1920s and early 1930s.
The Great Depression that began in 1929 during the Republican presidency of Herbert Hoover discredited the GOP's long-held claim that its economic policies generated broad prosperity. The ensuing poverty, unemployment, bank failures, and farm foreclosures motivated a substantial number of Republicans to vote Democratic in 1932. Liberal Republicans in Congress, such as Senators Hiram Johnson of California and George Norris of Nebraska, supported early New Deal legislation. The landslide reelection of Democratic president Franklin D. Roosevelt in 1936 and further increases in the Democratic majorities in Congress solidified the Democratic realignment of 1932–36. After the GOP's electoral debacle in 1936, the selectively and occasionally pro-New Deal, internationalist, liberal wing of the Republican Party began to dominate the party's presidential nominations and national platforms. Encouraged by impressive GOP victories in the 1938 midterm elections, liberal Republicans secured the GOP's 1940 presidential nomination for Wendell Willkie, a former Democrat and Wall Street lawyer. Although he occasionally used anti-New Deal, isolationist campaign rhetoric, Willkie accepted most of Roosevelt's foreign and defense policies and the New Deal's policy goals while criticizing how they were implemented. Roosevelt was reelected, but by narrower popular and electoral vote margins than in 1936. Except for the 1964 presidential nomination, moderate and liberal Republicans controlled their party's presidential tickets and major policy proposals from 1940 until 1980. Conservative Republicans like Senators Robert A. Taft of Ohio and Barry Goldwater of Arizona complained that moderate and liberal Republicans merely offered voters "me too-ism" and a "dime store New Deal" by not offering a clearly conservative ideological and policy alternative to liberal Democratic policies. Conservative Republicans were frustrated and disappointed by the cautious centrism of Dwight D. Eisenhower's Republican presidency (1953–61) and his cooperation with Democratic leaders in Congress on major domestic and foreign policy issues. They attributed Republican vice president Richard M. Nixon's narrow defeat in the 1960 presidential election to Nixon's appeasement of liberal Republicans in his platform and selection of a running mate.
Despite the overwhelming defeat of conservative Republican presidential nominee Barry Goldwater in 1964, the Republicans gained more seats in Congress in 1966 than they lost in 1964. Also, in 1966, Ronald Reagan, a conservative activist similar to Goldwater, was elected governor of California as a Republican. Preparing for his 1968 presidential campaign, Richard M. Nixon presented himself as a moderate, unifying leader for the GOP and actively sought the support of conservative Republicans for his candidacy. As the Republican presidential nominee, Nixon used conservative rhetoric on “law and order” to co-opt socially conservative white voters, especially in the South, who were attracted to George Wallace’s minor party presidential candidacy. Narrowly elected president in 1968 and reelected by a landslide in 1972, Nixon pursued a so-called “southern strategy” of attracting southern whites to the GOP by opposing court-ordered busing to enforce racial integration of public schools, emphasizing tough crime control, and promising to return more domestic policy responsibilities to the states. Conservative Republicans, however, were alienated by some of Nixon’s more liberal domestic policies, such as affirmative action and more federal regulations on business, and his foreign policy of détente toward the Soviet Union and China. With détente continued by Nixon’s successor, Republican president Gerald R. Ford, conservative Republicans supported Ronald W. Reagan’s race against Ford for the Republican presidential nomination of 1976. Narrowly winning his party’s nomination, Ford lost the 1976 presidential election to Democratic presidential nominee James E. Carter. The steady growth of conservative influence within the GOP was confirmed by the election of Reagan to the presidency and a Republican majority to the Senate in 1980. The reduced Democratic majority in the House of Representatives was further weakened by the support of conservative southern Democratic representatives for Reagan’s tax cuts, domestic spending cuts, and defense spending increases. Reagan’s conservative faith in free market economics to stimulate economic growth and reduce inflation and interest rates was also evident in his administration’s weakening of federal regulations on business. With a more prosperous economy and the popularity of his optimistic leadership style, Reagan
was reelected with 59 percent of the popular votes and the electoral votes of 49 states. Reagan's political strength declined during his second term as the Democrats won control of the Senate in 1986 and investigated his role in the Iran-contra scandal. Nonetheless, George H. W. Bush, Reagan's vice president, easily won the 1988 presidential election. An important factor in the electoral success of Reagan, Bush, and southern Republicans in Congress was the growing political influence of the religious right, i.e., conservative white Christians. Conservative positions on social issues, such as abortion, school prayer, and gun control, helped the Republican Party to prevail in the South in presidential elections and in more congressional and gubernatorial elections. With a quick, impressive American victory in the Persian Gulf War of 1991 and the formal end of the cold war, Bush seemed likely to be reelected in 1992. But a brief recession and independent presidential candidate H. Ross Perot's receipt of 19 percent of the popular votes helped Democratic presidential nominee William J. Clinton to defeat Bush. In the 1994 elections, the Republicans won control of Congress for the first time since 1952 and continued this control during the remaining six years of Clinton's two-term presidency. Although Democratic vice president Albert Gore received more than 500,000 more popular votes than Republican governor George W. Bush in the 2000 presidential election, Bush won the election due to a United States Supreme Court decision (Bush v. Gore, 2000), and his party strengthened its control of Congress in the 2002 elections. During his 2000 campaign and first year as president, Bush identified his party with "compassionate conservatism" and emphasized faith-based initiatives to involve religious organizations more in federally funded social services, passage of the No Child Left Behind Act to help failing schools, and greater diversity in his cabinet appointments. The terrorist attacks of September 11, 2001, transformed Bush's presidency. After initially receiving high public and bipartisan congressional support for his invasions of Afghanistan and Iraq, Bush's foreign and defense policies became more divisive during his 2004 presidential campaign. Bush won a second term after defeating his Democratic opponent by 3 million popular votes but very narrowly in the electoral college. As Ameri-
cans anticipated the 2006 and 2008 elections, Democrats became more confident of winning control of both houses of Congress, which they did in the 2006 midterm elections. Candidates for the 2008 Republican presidential nomination included Senator John McCain of Arizona, a critic of Bush’s policies in Iraq, and Governor Mitt Romney of Massachusetts, a moderate on social issues like abortion and gay rights. McCain won the nomination. Further Reading Barone, Michael. Our Country: The Shaping of America from Roosevelt to Reagan. New York: Free Press, 1990; Reichley, A. James. The Life of the Parties. New York: Free Press, 1992; Sundquist, James L. Dynamics of the Party System. Washington, D.C.: Brookings Institution, 1983. —Sean J. Savage
third parties
The United States is a two-party political system. Its single-member-district electoral system, often called a "first-past-the-post" method (sometimes also called a winner-take-all system), drives it toward a two-party system, and over time, the two dominant parties, the Democrats and the Republicans, have fixed the game with rules and regulations that favor themselves and punish outsiders and third-party movements. A third party is any political party other than the dominant Democratic and/or Republican Parties that attempts to get its candidates elected to public office. In the United States, third parties have cropped up time and again, only to be absorbed by one of the two leading parties. Often they have relied too heavily on the personality of a single charismatic leader and faded when that leader proved to have feet of clay or lost favor with the public; others emerged as single-issue parties and died out when that issue proved to be little more than the political flavor of the month. Thus, successful third parties are quite rare and short-lived in American politics. Some third parties rise on the back of a charismatic figure such as Jesse Ventura, the professional wrestler turned politician who became governor of the state of Minnesota, or the quirky H. Ross Perot,
billionaire iconoclast who became a popular and intriguing figure in the 1990s. Other third parties rise on the back of a single issue such as slavery in the 1850s, or an anti-civil rights program, as in 1948 and 1968. Third parties usually emerge when a charismatic figure seeks to circumvent the two-party monopoly on power, or when a particular issue is not being adequately addressed by the two major political parties. In this way, third parties can be a valuable escape valve for democracy, allowing a minority with intense feelings and passions to channel its energy into the political process via the electoral system. Often, one of the two major parties will then absorb the issue that has given rise to the third-party
movement, and then the new party withers away and dies. But the deck is truly stacked against the emergence of third parties in America. Election rules in the various states favor the two main parties and force a new third party to leap over high hurdles to get on state ballots. Restrictive ballot-access laws make it difficult for most third parties even to get on the ballots in most states. And if they do manage to get on the state ballot, they have often spent all their money and energy just on that effort. Additionally, third-party candidates are rarely allowed to participate in debates with the two major parties' candidates; they get little television or media coverage, and these new third parties rarely have the resources (organization
and money) possessed by the Democrats and Republicans. And if they do manage to overcome all these obstacles, in the end they rarely win elections. Roughly one out of every 800 state legislators is a member of a third party. Third parties in America thus tend to have short lives. They may burn brightly but not for very long. Throughout American history, third parties have occasionally played a significant role in politics. While the United States has generally been a two-party system, there have been times when new or third parties have cropped up and had a significant impact in a particular policy area; at other times a new party has replaced an old, dying party, and at still other times, a charismatic leader has siphoned enough votes from one of the major parties to give the election to a minority-vote candidate. In 1848, the Free-Soil Party, an antislavery party, pressed the major parties on the issue of slavery, and in that election, the party nominated former president Martin Van Buren as its candidate. Van Buren won 10 percent of the vote and helped give Zachary Taylor the electoral victory. Eight years later, the newly created Republican Party emerged in the midst of the struggle over slavery and nominated John C. Frémont as its presidential candidate. The rise of the Republicans led to the demise of the Whig Party. Four years later, in 1860, the Republicans, amid clashes over slavery and divisions within the party system, nominated Abraham Lincoln, and in the prelude to the Civil War, Lincoln was elected president. This was the first instance of a third party supplanting one of the two major parties and the first time a third-party candidate was elected president. In the late 1880s and early 1890s, the Populist Party emerged as a grassroots organization challenging the Democrats and Republicans; it ran James Baird Weaver as its presidential candidate in 1892, and he won 22 electoral votes and nearly 9 percent of the popular vote. After this election, the Democrats adopted many of the issues of the Populists, and the Populist Party faded from the scene. In 1912, Theodore Roosevelt, after leaving the presidency and helping his hand-picked successor, William Howard Taft, win the presidency, concluded that Taft was not an effective heir to the Roosevelt tradition, and challenged him for the nomination. Taft, controlling the Republican Party
organization, won the nomination, and Roosevelt challenged him in the general election by forming a new party, the Progressive Party, popularly known as the Bull Moose Party. Roosevelt succeeded in denying Taft the presidency, but with the Republican vote split between Taft and Roosevelt, the Democratic nominee, Woodrow Wilson, was elected president, winning 42 percent of the popular vote and 435 electoral votes. In the election, Roosevelt won more votes than Taft, becoming the first third-party candidate to outgain an incumbent president in a presidential election, and Taft became the only incumbent president to finish third in a presidential contest. In 1924, Republican reformer Robert M. La Follette ran as a Progressive and won 17 percent of the popular vote. Then, in 1948, Strom Thurmond split from the Democratic Party to run on an anti–civil rights platform, forming the Dixiecrat Party. In the same election, former vice president Henry Wallace formed an independent third party (or fourth party), the Progressives. In this confusing election, Thurmond received 39 electoral votes, all from southern states; Wallace won a mere 2.4 percent of the popular vote and no electoral votes; and incumbent Democratic president Harry S Truman won a surprising come-from-behind victory over Republican challenger Thomas Dewey. Several more third-party efforts arose in the decades that followed. In 1968, segregationist George Wallace split from the Democratic Party, formed the American Independent Party, and ran for president. Wallace captured 13 percent of the popular vote and 46 electoral votes from southern states. Republican nominee Richard M. Nixon won the presidency over Democrat Hubert Humphrey, winning only 43 percent of the popular vote and a narrow 301 electoral college victory. In 1980, Republican John B. Anderson split from his party and ran as an independent presidential candidate, winning 7 percent of the popular vote. In that year, Republican Ronald Reagan won the presidency, defeating the incumbent president, Democrat Jimmy Carter. In 1992, H. Ross Perot, an independent, won roughly 19 percent of the popular vote but no electoral votes in an election won by Democratic challenger William Jefferson Clinton over incumbent Republican president George H. W. Bush. Perot ran again in 1996, under the banner of a new third
party, the Reform Party, but received only 8 percent of the vote in an election won by incumbent Bill Clinton. In 2000, the presidential contest was not decided until five weeks after the election, when the United States Supreme Court (Bush v. Gore) effectively ended a potential recount of votes in the state of Florida, thereby granting George W. Bush the presidency. This election was very close in many states, and the votes in Florida were hotly contested. Third-party candidate Ralph Nader, who ran on the Green Party ticket, may have cost Vice President Al Gore the election because the vote margin between Bush and Gore in Florida was slightly over 500 votes, and it is believed that Nader's candidacy siphoned enough votes from the Gore ticket to give Bush the victory. As one can see, while third parties tend to have short shelf lives, they often raise key issues in the political arena, issues that either die out or are adopted by one of the major parties in an effort to attract votes. They also occasionally challenge the legitimacy of the two major parties and their monopoly on electoral respectability. And they sometimes siphon votes from one candidate, giving the electoral victory to a challenger. While the deck is stacked against the emergence of third parties in American politics, they have at times served an important public purpose, helping to "keep the parties honest" by challenging and prodding them into being more alive and responsive to voter needs and desires. Further Reading Green, John C., and Daniel M. Shea, eds. The State of the Parties: The Changing Role of Contemporary American Parties. Lanham, Md.: Rowman & Littlefield, 1999; Jelen, Ted G. Ross for Boss: The Perot Phenomenon and Beyond. Albany: State University of New York Press, 2001; McCaffrey, Paul, ed. U.S. Election System. New York: H.W. Wilson, 2004; Schlozman, Kay Lehman. Elections in America. Boston: Allen & Unwin, 1987. —Michael A. Genovese
two-party system
For most of its history, politics in the United States has been dominated by a two-party system. There is
nothing about a republican form of government that mandates such a situation. In fact, most democracies in the world have multiparty systems. Influential third parties have occasionally arisen in American politics, but they tend not to last very long. There are also a number of very small parties that persist over time, but they rarely have a significant effect on who takes power. The pairing of parties has altered over time—Federalists v. Democratic Republicans, Whigs v. Democrats, Republicans v. Democrats—but the general structure of major competition between two dominant parties remains consistent. There are four major reasons for this. First, the historical foundations of the two-party system were arguably established with the first major battle over the ratification of the U.S. Constitution. That basic conflict split partisans into two groups—one could be in favor of the proposed Constitution, or one could be opposed to it. Proponents were called "Federalists" and opponents were called "antifederalists." That basic disagreement, which the Federalists won, carried over into President George Washington's administration. Very quickly after the beginning of the current form of government, two very different ideas about governing arose, epitomized in the conflict between Secretary of the Treasury Alexander Hamilton and Secretary of State Thomas Jefferson. Hamilton became the leader of the new Federalist Party, which supported a strong national government and a vigorous commercial economic system. Jefferson's followers supported stronger state power and a more agrarian economy. This group came to be known as Democratic Republicans. Important early issues such as whether the federal government could establish a national bank and whether the president could declare neutrality at his own discretion without consulting Congress solidified this two-party struggle in the early years. Jefferson succeeded in destroying the Federalists as a political force, leading to the one-party "Era of Good Feelings," but that era of comity did not last long. Andrew Jackson inherited Jefferson's party, now called simply the Democratic Party, but his controversial leadership prompted a coalition of diverse forces to form a new Whig Party. At the same time, men like Martin Van Buren were making arguments favoring the construction of a system of
opposing parties. Although the Whigs did not last long as the principal opposition to the Democrats, major issues such as the extension of slavery tended to divide the nation into two camps. The formation of the Republican Party in response to the slavery issue, coupled with that party's institutional control of the federal government during the Civil War, established the modern two-party system that has lasted to the present day. Although third parties have arisen numerous times to challenge the two major parties, Democrats and Republicans have usually been successful in keeping the issue debate structured along two dimensions, often absorbing the issues raised by the third parties. A second reason why the United States has been dominated by a two-party system is American political culture. Many scholars argue that American political culture is typified by much greater consensus on many issues than is found in other democracies. Although Americans disagree about many things, they tend to share values such as democracy, capitalism, free enterprise, individual liberty, religious freedom, and private property. The United States is not as deeply divided along regional, ethnic, or religious lines as many other countries, which means there is no real ideological space for parties based on fascism, communism, socialism, authoritarianism, or clericalism. Differences certainly exist, but most Americans tend to be more moderate and centrist, not given to extreme political movements. If this description of American political culture is correct—if most Americans are moderate and middle-of-the-road in their beliefs—then it is in the political parties' interest to move toward the center, because that is where the votes are. One can visualize this dynamic by imagining the ideological breakdown of the American population being placed under a normal curve. The largest part of the curve is in the center, where most of the population lies, but there are people who reside toward the right and left extremes in much smaller numbers. If either the party of the right or the party of the left wants to win elections, it will have to move toward the center to win the votes of the moderate majority. The farther toward the extremes one party moves, the more support it loses. This is relevant to the establishment of a two-party system because a generally moderate population only allows for one party to approach from
the right and one party to approach from the left—in the current American context, Republicans and Democrats. With a centrist population, there is no room for a third party to make serious inroads absent the collapse of one of the two major parties—as occurred in the 1850s when the Whig Party self-destructed over the slavery issue, allowing the third-party Republicans to become a major party. Ideologically polarized countries encourage their parties to stay at the extremes, and countries with multiple points of division allow for multiple parties, but the general consensus on major issues in the United States helps perpetuate a two-party system where the two parties tend toward moderation. This is one reason why some third-party leaders have argued, in the colorful language of George Wallace, that "there's not a dime's worth of difference" between the two major parties. A third reason why the United States has been dominated by a two-party system has to do with the rules of the electoral process. Because the two parties have dominated American politics and alternated power over the years, those two parties have been able to write the rules of the electoral game in their favor at the state and federal levels. For example, all states need to have rules for how political parties get access to the ballot in an election. The rules for automatic inclusion on the ballot are drafted with Republicans and Democrats in mind, and make it very easy for those two parties to gain automatic inclusion on the ballot. These rules vary from state to state. For example, one state requires that a party, to be automatically included on the ballot, have one candidate who polled more than 20 percent of the total statewide vote in the last general election. Another state requires the party to have one candidate who polled at least 3 percent of the total number of votes cast for governor. Every state has different rules, but in each state it is very easy for the two major parties to get automatic inclusion, and much more difficult for third parties to do so. Third parties usually have to resort to a petition procedure, requiring signatures of some percentage of voters, to get on the ballot, which requires time, money, and organization. And again, the petition rules are different in each state. Federal rules, which are also written by members of the two major parties, perpetuate this dilemma for
third parties. For example, Congress is divided into committees based on party membership. If a third-party member were elected to Congress, he or she would have to choose to be counted with one of the two major parties to get coveted committee assignments. At the level of presidential elections, third parties usually have to reach a significant threshold of popular support to be included in nationally televised presidential debates, and they have to win enough popular votes to be eligible for federal funds for the following presidential election. For example, H. Ross Perot had sufficient poll support in 1992 to be included in the presidential debates between President George Bush and Bill Clinton, but his support was weaker when he ran again in 1996, and he was excluded from participating. Thus, third parties are caught in a Catch-22. If they cannot get publicity for their positions, it is very difficult for them to develop a large following, but if they cannot develop a large following, they cannot get publicity for their positions. A fourth reason why the United States is dominated by a two-party system is derived from the third reason. Perhaps the most important rule of the electoral game is the fact that most elections in the United States are based on a single-member simple plurality (SMSP) system. "Single member" means that political leaders come from single-member districts—one person represents one geographical area. "Simple plurality" means that the election is winner-take-all—whoever gets a plurality of the vote (more votes than anyone else) wins the race, and everyone else loses. This is true at the congressional level, where congressional statute mandates one House of Representatives member per geographic district. It is constitutionally mandated for the Senate, where each senator represents an entire state. It is also reflected at the presidential level in most states, where the winner of the state plurality wins all that state's electoral votes. Winner-take-all systems are also common in state legislatures and many city council systems. The SMSP system of running elections provides no incentive for parties to form if they represent only a small percentage of the electorate. Parties that win only 5–10 percent of the popular vote have no chance to place political leaders into public office and thus
find it difficult to retain popular support. Only ideological purists are willing to “throw away” their vote consistently to a third party. Instead, the incentive in a plurality system is for political parties to broaden their appeal in an attempt to attract the necessary plurality of voters to win. Ideally, the closer a party can get to a simple majority, the stronger it will be. Typically, however, that means there is room for only two major parties, for all of the reasons stated above. Only geographically concentrated minor parties, such as those led by Strom Thurmond in 1948 and George Wallace in 1968, can win any significant support, and even those cases did not persist beyond one election cycle. Critics of the two-party system argue that the major parties cannot possibly reflect the diversity of the population. They tend to support reforms that would move American politics toward a proportional representation system, in which parties receive a share of seats in the legislature proportional to the share of the popular vote they receive in an election. Such a system requires multimember districts instead of single-member districts, in which parties that win relatively small percentages of the popular vote can still place people into the legislature. Such a system would encourage the proliferation of third parties and presumably end the dominance of the two major parties. Supporters of the two-party system argue that it is a force for moderation and stability in a large continent-sized democracy and has been an essential factor in maintaining healthy debate and participation, while avoiding the extremes often seen in other countries. Further Reading Downs, Anthony. An Economic Theory of Democracy. New York: Harper, 1957; Duverger, Maurice. Political Parties. Paris: Colin, 1951; Hofstadter, Richard. The Idea of a Party System: The Rise of Legitimate Opposition in the United States, 1780–1840. Berkeley: University of California Press, 1969. —David A. Crockett
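The contrast between single-member plurality elections and proportional representation described in the entry above can be made concrete with a small worked example. The following Python sketch is purely illustrative: the party labels, the district vote totals, and the 100-seat chamber are invented assumptions, and the proportional allocation shown is a simplified rounding rule rather than any actual formula used in practice.

# Hypothetical vote totals in four districts; party C polls 10 percent everywhere
# but never finishes first anywhere. All figures are invented for illustration.
districts = [
    {"A": 48, "B": 42, "C": 10},
    {"A": 44, "B": 46, "C": 10},
    {"A": 51, "B": 39, "C": 10},
    {"A": 40, "B": 50, "C": 10},
]

# Single-member simple plurality: the plurality winner in each district takes the seat.
smsp_seats = {}
for d in districts:
    winner = max(d, key=d.get)
    smsp_seats[winner] = smsp_seats.get(winner, 0) + 1
print(smsp_seats)  # {'A': 2, 'B': 2} -- party C wins nothing despite its 10 percent

# Simplified proportional representation: a hypothetical 100-seat chamber in which
# seats track each party's share of the total vote.
totals = {p: sum(d[p] for d in districts) for p in districts[0]}
grand_total = sum(totals.values())
pr_seats = {p: round(100 * v / grand_total) for p, v in totals.items()}
print(pr_seats)  # {'A': 46, 'B': 44, 'C': 10} -- party C holds roughly a tenth of the seats

Under the plurality rule, an evenly spread 10 percent of the vote translates into zero seats, which is the incentive structure the entry describes; under the proportional rule, the same votes yield roughly a tenth of the chamber.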
voter turnout Casting a ballot in an election is among the most important acts that a citizen in a democracy can perform. Through voting, citizens select their leaders.
VOTER TURNOUT IN PRESIDENTIAL ELECTIONS, 1960–2004

Election    Voting Age Population (1)    Turnout        % Turnout of VAP
2004        215,694,000                  122,295,345    56.69%
2000        205,815,000                  105,586,274    51.31%
1996        196,511,000                   96,456,345    49.08%
1992        189,529,000                  104,405,155    55.09%
1988        182,778,000                   91,594,693    50.11%
1984        174,466,000                   92,652,680    53.11%
1980        164,597,000                   86,515,221    52.56%
1976        152,309,190                   81,555,789    53.55%
1972        140,776,000                   77,718,554    55.21%
1968        120,328,186                   73,199,998    60.83%
1964        114,090,000                   70,644,592    60.92%
1960        109,159,000                   68,838,204    63.06%

Sources: Federal Election Commission, Office of the Clerk, U.S. Census Bureau
(1) It should be noted that the voting age population includes all persons age 18 and over as reported by the U.S. Census Bureau, which necessarily includes a significant number of persons ineligible to vote, such as noncitizens or felons. The actual number of eligible voters is somewhat lower, and the number of registered voters is lower still. The number of noncitizens in 1994 was approximately 13 million, and in 1996, felons numbered around 1.3 million, so it can be estimated that about 7–10 percent of the voting age population is ineligible to vote. Note that the large drop in turnout between 1968 and 1972 can be attributed (at least in part) to the expansion of the franchise to 18-year-olds (previously restricted to those 21 and older). The total number of voters grew, but so did the pool of eligible voters, so the total percentage fell.
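The percentages in the table are simple ratios of ballots cast to the voting age population (VAP). Because the footnote estimates that roughly 7 to 10 percent of the VAP is ineligible to vote, the rate among eligible voters runs a few points higher. A minimal Python sketch of that arithmetic is shown below; the 8 percent ineligible share is an illustrative assumption drawn from the footnote's range, not an official figure.

# Turnout as a share of the voting age population, using the 2004 row of the table.
def turnout_rate(ballots_cast, population):
    return 100.0 * ballots_cast / population

vap_2004 = 215_694_000
ballots_2004 = 122_295_345
print(round(turnout_rate(ballots_2004, vap_2004), 2))       # about 56.7 percent of VAP

# Adjusting for ineligible residents (noncitizens, felons) raises the rate among
# eligible voters; the 0.08 share is an assumption within the footnote's 7-10 percent range.
ineligible_share = 0.08
eligible_2004 = vap_2004 * (1 - ineligible_share)
print(round(turnout_rate(ballots_2004, eligible_2004), 2))  # roughly 61.6 percent of eligible voters

Applied to any other row, the same ratio reproduces the table's VAP percentages to within a small rounding difference.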
Voters may decide to return an incumbent to office who they feel has been doing a good job, or they may replace the incumbent with someone else, who they feel would do a better job of governing. Given the importance of elections to the selection of political leaders, the right to vote has been hard fought as people formerly denied the vote have sought that right. Despite the importance of voting, not everyone votes, and Americans vote at lower rates than people in most other democracies. In the 2004 presidential election, approximately 55.7 percent of the voting age population cast ballots. Why did nearly one-half not vote? Many factors about the individual and the political system in which the citizen lives affect whether a person will vote. Five broad sets of factors seem most important in understanding why some people vote and others do not. They are the legal context of voting, the political context of voting, personal resources for voting, the motivations of individuals, and the mobilization efforts of campaigns. The legal context defines who is eligible and the ease or difficulty of voting. Compared to many other nations, the United States imposed a highly regulated legal context, one that has throughout U.S. history
erected barriers against voting. Some of these barriers have at times excluded people from the right to vote, while others have made it more difficult to vote. The United States is among the oldest of democracies in continuous operation. The founders of the American democracy were often suspicious of democracy and the participation of average people in political processes. The founders feared that demagogues would manipulate average people, encouraging them to react emotionally to issues and political controversies instead of thinking rationally about them. To guard against this, the founders built a political system that limited the impact of voting, for instance through the electoral college, and highly restricted the eligible population. Thus, it was not uncommon for the voting franchise to be restricted to white males over the age of 25 who owned a certain amount of property. Over the course of U.S. history, these restrictions have been eased. Beginning in the early 1800s, property qualifications began to fall, opening the franchise to those of lesser means. Racial qualifications were dropped shortly after the Civil War, and in 1920 women were granted the right to vote with the Nineteenth
Amendment. In 1971, the age for voters was dropped from 21 to 18 with the Twenty-sixth Amendment. Still, some residue of this historic suspicion of voters exists. The United States is among a small set of nations that requires someone to register prior to voting. Registration was used to ensure that only eligible voters would cast ballots and to eliminate fraudulent voting. Vote fraud was relatively common in the 19th century, and though it is less common today, the high rates of vote fraud a century ago reinforced attitudes, which persist to this day, that access to voting must be regulated. Many contend that registration requirements create a hurdle to voting and help explain the lower turnout of the United States compared to most other nations. Most people who are registered to vote exercise their ballot. In 2000, an estimated 63.9 percent of the voting-age population was registered to vote. Of them, 85.6 percent reported having voted, a turnout rate more in line with voting rates in comparable western European democracies. Noting the relatively low turnout rates in the United States and the vote-suppressing effects of requirements like prior registration, reformers have urged the easing of such requirements. In 1993, Congress passed the Motor Voter Law, which allows a person to register while applying for a driver's license. The Motor Voter Law was especially aimed at young people—applying for a driver's license is nearly universal among young Americans. Some evidence suggests that registration rates have climbed as a result of the Motor Voter Law, and the slight uptick in turnout in the 2004 presidential election over previous years may be partially due to this reform. Minorities, black Americans in particular, have been targets of policies that erected barriers against their exercise of the franchise. Prior to the Civil War, their status as slaves prohibited them from voting. The Fifteenth Amendment, ratified in 1870, granted and protected the right of blacks to vote. Despite this constitutional reform, various laws and practices, primarily in the South, still kept blacks from voting. For instance, states passed laws that discriminated against blacks (as well as poor whites), such as literacy tests and poll taxes. In other southern states, blacks were precluded from voting in the Democratic Party primary. As the Democratic
Party in those states was dominant, the Democratic primary election in effect determined the final election outcome. Furthermore, fear and intimidation of blacks, such as lynching and other forms of violence and economic retaliation, were used to suppress black turnout. As a result, in the former Confederate states, where most blacks lived until the mid-20th century, few blacks exercised the franchise. To remedy this situation, Congress passed the Voting Rights Act in 1965. The act outlawed literacy tests and allowed the federal government to register voters in states with low black turnout rates. Congress renewed the act several times: in 1970, in 1975, and in 1982, when it was renewed for 25 years. Over the years, the act was expanded to include localities with large Hispanic populations but low Hispanic turnout. As a result, black turnout increased. But blacks and Hispanics still vote at lower rates than whites. The Census Bureau study of turnout found that of the voting age population, 61.8 percent of whites (non-Hispanic) voted in 2000, compared to 56.8 percent of blacks, and 45.1 percent of Hispanics. That blacks vote at rates only slightly below those of whites attests to the impact of the Voting Rights Act. Language barriers probably account for some of the lower turnout among Hispanics. The fact that many Hispanics have recently arrived in the United States as immigrants is also probably a factor. The likelihood of voting increases the longer one is eligible to vote and has exercised that vote. In the future, as more Hispanics become eligible to vote by gaining citizenship and gain experience in voting by doing it repeatedly, the United States is likely to see increases in Hispanic turnout rates. Turnout will vary with the characteristics of the contest. For instance, some elections are considered more important than others because of the office being contested. In the United States, the presidency is considered the single most important office; thus, presidential elections tend to generate more public interest and turnout than other elections. In the presidential election years from 1932 to 2004, more votes were cast for the president than in the congressional races of the same year. Across these 19 presidential elections, voting rates in the presidential contest exceeded those for congressional races by an average of 4 percent. Turnout rates for
congressional elections also were higher in presidential than midterm years. In midterm elections from 1930–2004, turnout for congressional races averaged 38.5 percent compared to 51.6 percent during presidential election years. These totals indicate the lure of the presidential election to voters. When a presidential election is being held, more people vote. And more people vote for president than other offices during presidential election years. Aside from importance, the perceived closeness of the contest will affect turnout rates. More people vote when the race is perceived as being close than when one of the candidates appears a sure winner. When the winner appears certain, some people may decide that their vote is not needed to help their favored candidate, while opponents of the leading candidate may think that their absence from voting will not affect the election outcome. In contrast, when the race is close, not only may people think that their vote might make a difference, but the closeness of the race may create suspense about the eventual winner, which leads people to follow the race more closely. As their interest in the race increases, so too does the likelihood that they will vote. For instance, the 2000 presidential election contest was very close, among the closest on record. Turnout in 2000 was about 2 percent higher than in 1996, when the eventual outcome, that Bill Clinton would beat Bob Dole by a large margin, seemed evident to all. Again in 2004, the race seemed tight, and it is likely that the closeness of the 2000 race heightened interest in 2004, and may have led many to think that their votes would count. Turnout in the 2004 election rose to its highest level in decades, 55 percent, or about 4 percent higher than in 2000. People who possess greater resources that are relevant to the act of voting are more likely to turn out than those who do not possess these resources. In the United States, as well as around the world, the most consequential resources are education and wealth. For example, the Current Population Survey of the U.S. Census Bureau estimates that in 2000, only 39.3 percent of those with nine years or less of education reported voting. Generally, as education increased, so did the percentage voting, with 75.4 percent of college graduates and 81.8 percent of those with advanced degrees turning out to vote. Similarly, in the lowest income category (less than
$5,000), 34.2 percent voted, compared to 61.9 percent in the middle income category ($35,000 to $49,999) and 74.9 percent in the highest income category (over $75,000). Education and wealth are resources that a person may draw upon when deciding to vote. For instance, educated people are likely to be better informed than those with less education. Those who lack information about matters such as the candidates' positions and such basic facts as where to vote are less likely to vote than those with such information. Moreover, even if an educated person lacks such information, he or she probably has an easier time finding out about such matters than a less well educated person. In addition, election campaigns often deal with complex ideas, like which policy direction is the best course to deal with a problem. Educated people likely have an easier time understanding these types of complexities than less educated people. The lack of education imposes a potential barrier to voting. Wealth in itself probably does not directly affect the likelihood of voting very much, but wealth's effects derive from the association of wealth and education. While the correspondence is not perfect, in the United States more highly educated people tend to earn higher incomes. Some people are motivated to vote, while others are not. This motivation to vote derives from attitudes that make a person feel a part of the political system. For instance, a person who feels part of the political system is more likely to be interested in politics and government and to spend time following government and public affairs. The 2000 American National Election Study, conducted by the University of Michigan, found that 91 percent of those interviewed who said that they follow government and public affairs most of the time voted, compared to a voting rate of 45 percent of those who said that they hardly follow government and public affairs. The same survey also found a voting rate of 84 percent for those who claimed to be highly interested in the presidential campaign, compared to 46 percent for those who were not very interested. Concern over the outcome of the election may also motivate a person to vote. Those who are concerned over the election's outcome may vote so as to take part in producing that outcome, but also because concern over social processes and institutions naturally leads people to participate in those processes
and institutions. Thus, the 2000 University of Michigan poll discussed above found that 84 percent of those who were concerned with the election outcome voted, but only 45 percent of those who did not care very much who won said that they voted. Perhaps the most important attitude that connects a person to the political system is partisanship, whether one identifies or feels as if one is a member of a political party. Identifying with a party invests the person psychologically in that party, much like being a fan of a sports team. Loyalty to that party and its candidates increases, and a loyal supporter of the party will want to do whatever is possible to aid that party, including coming out to vote. The University of Michigan’s 2000 election study shows the effects of partisanship on turnout, with 85 percent of the poll’s respondents who called themselves Republican and 81 percent of Democrats voting. In contrast, 69 percent of independents, those without an attachment to either major party, turned out to vote. Finally, whether one votes is also a function of mobilization efforts by candidates, parties, and other organizations. Historically, political machines in the United States committed organizational resources to get people to the polls, for instance, by providing transportation, applying social pressure, reminding people to vote, and in some instances by bribery. Patronage was a key element that political machines used to stimulate voting among their supporters. Many government workers owed their jobs to political machines. As long as the machine continued to win elections, these workers would keep their jobs. Not only the government workers but also their family members would come out to vote to insure that they kept their jobs. Modern “get out the vote” efforts (GOTV) include direct mailing, telephone contacts, and face-to-face canvassing, in which campaign and other workers would knock on people’s doors, providing information and reminders about voting. Many of the GOTV
efforts are not candidate or party based, but are sponsored by civic and other groups with their efforts aimed at getting people registered, especially those without a history of being registered or of voting. The largest chunk of campaign expenditure, however, involves mass media advertising, especially television. Television advertising is efficient in being able to reach large numbers of people, but such efforts are less effective in inspiring voting than the personal efforts that involve direct contact between the campaigner and the citizen. The shift of campaign resources to television over the past 50 years probably accounts for some of the decline in American voting rates during this time period. Tight competition between the parties, especially in presidential elections, but elsewhere as well, has led the parties and presidential candidates to try to improve their vote totals by getting previous nonvoters to the polls. Thus, we have seen an increasing use of contacting and canvassing, supported by modern technologies and market research. This allows campaigns to target narrowly defined types of individuals for their campaign efforts. The modest uptick in turnout in recent presidential contests may be partially attributed to the modernization of this venerable form of campaigning. Further Reading Conway, Margaret M. Political Participation in the United States. 3rd ed. Washington, D.C.: Congressional Quarterly Press, 2000; Franklin, Mark N. Voter Turnout and the Dynamics of Electoral Competition in Established Democracies since 1945. Cambridge: Cambridge University Press, 2004; Hill, David B. American Voter Turnout: An Institutional Perspective. Boulder, Colo.: Westview Press, 2005; Patterson, Thomas E. The Vanishing Voter: Public Involvement in an Age of Uncertainty. New York: Knopf, 2002; Piven, Frances Fox, and Richard A. Cloward. Why Americans Don't Vote. New York: Pantheon, 2000. —Jeffrey E. Cohen
LEGISLATIVE BRANCH
advice and consent
Article II of the U.S. Constitution grants the president powers to make treaties and certain appointments "by and with the Advice and Consent of the Senate." The "consent" element of this language is straightforward, requiring a Senate majority for appointments and a two-thirds vote for treaties. The need for Senate support is a fundamental constraint on presidential discretion. The meaning of the "advice" component is less clear. Does it confer substantive power on the Senate, which obligates the president to obtain Senate advice before making a nomination or while negotiating a treaty, or is the Senate's power limited to approving or disapproving a name or treaty submitted by the president? There is no universally accepted understanding of the Senate's "advice" power. The constitutional language clearly envisions that the president and the Senate would share the appointment and treaty powers, but from the beginning of the republic, there has been controversy concerning who should have the preponderance of power, whether the president should have to consult with the Senate prior to making nominations, and the appropriate criteria for confirming or rejecting nominees. Supporters of a presidency-centered power note the significance of placing the appointment power in Article II and argue that the Senate's role is limited to confirming or rejecting presidential nominees or treaties.
Alexander Hamilton was a proponent of this view. In Federalist 66, he noted that "there will of course be no exertion of choice, on the part of the senate. They may defeat one choice of the executive, and oblige him to make another, but they cannot themselves choose. . . . they can only ratify or reject the choice he may have made." In practice, though, the need to obtain Senate approval creates a powerful incentive for presidents to take senatorial preferences into account when making nominations. Individual senators retain substantial influence—in 1969, Senator Robert P. Griffin (R-MI) remarked that "judges of the lower federal courts are actually 'nominated' by Senators while the President exercises nothing more than a veto authority"—through the practice of blue slips and the custom of senatorial courtesy. The contemporary norm allows for senators to inquire into a nominee's background and judicial philosophy, although nominees tend to deflect questions about how they might decide future cases. Under senatorial courtesy, when nominating lower-court judges, a president will usually defer to suggestions offered by the senior senator from the state in which a judicial vacancy occurs, but there is no formal rule defining the scope of the courtesy. Sometimes, the junior senator, or even a commission, is involved; often, presidents only accept advice from senators from their own party. Individual senator influence is further enhanced through the use of blue
slips, a process in which the Senate Judiciary Committee solicits the views of both home-state senators, irrespective of party. Generally, the committee would refuse to consider a nominee when either home-state senator objects, although during the chairmanship of Senator Orrin Hatch (R-UT), the practice was that even one objection from a home-state senator would block a nomination. During the past 10 years, it has been increasingly common for the Senate to refuse to complete floor action on judicial nominees, neither confirming nor rejecting them. One account calculated that from 1995 to 2000, the Republican-controlled Senate held full-floor votes on slightly more than half of President Bill Clinton’s nominations to the federal appeals court. In 2001–02 when the Democrats had recaptured a Senate majority, less than one-third of President George W. Bush’s appellate-court nominations received full-floor consideration. When the Republicans recaptured the Senate in the 2002 midterm elections, Democratic senators used the filibuster to block floor action on six judicial nominations. In turn, the Republican majority threatened to change Senate rules to prohibit filibusters of judicial nominations. The structure of the advice-and-consent language is different for treaties. On appointments, the Article II language makes it clear that the presidential nomination precedes consideration by the Senate: “he shall nominate, and by and with the advice and consent of the Senate, shall appoint” judges, ambassadors, and other officials. For treaties, Article II seems to create a more collaborative relationship: Here, the president “shall have the power, by and with the advice and consent of the Senate, to make Treaties, provided two-thirds of the Senators present concur.” As a legal matter, the Senate approval process is not “ratification.” Rather, it is the president who ratifies a treaty, signifying that the United States is bound by the international agreement. The constitutional language means that the president may not take this step without Senate approval. Some sources argue that the early practices reflected this more collaborative view of treaty powers, as presidents often included senators in the negotiation phase. President George Washington solicited the advice of senators on treaties and even sought their approval on the names of negotiators and their instructions. Legal scholar Louis Fisher notes that both Andrew Jackson
and James K. Polk asked the Senate for advice on treaty negotiations and that William McKinley, Warren Harding, and Herbert Hoover included both representatives and senators on treaty delegations. Members of Congress were directly involved in treaty negotiations concerning the United Nations, the North Atlantic Treaty Organization, the Panama Canal treaty, and nuclear arms limitations. President Woodrow Wilson took the view that treaty negotiation was a purely presidential power. As president, Wilson himself served as the head of the U.S. delegation that negotiated the terms of the Treaty of Versailles, and he refused to accept any changes to the submitted agreement. Wilson's intransigence was a key factor in the Senate's rejection of the treaty; the Senate voted 49–35 in favor of the treaty, short of the two-thirds majority required. The complexity of the treaty process gives the Senate more options under the constitutional consent provisions. The Senate may reject a treaty outright but more often has blocked treaties by simply refusing to take action. The Senate may approve a treaty with "reservations" or other amendments, making final ratification conditional on specific changes, a precedent dating to 1797. Presidents have increasingly relied on executive agreements to negotiate with foreign governments. Unlike treaties, executive agreements do not require Senate approval, although in theory their scope is limited to matters within the president's exclusive executive authority or to those agreements made under a specific statutory authorization or a prior treaty. Under congressional-executive agreements, Congress can delegate authority to the president to negotiate an agreement with one or more foreign governments, with the final agreement conditional on approval by both houses of Congress. Critics argue that such agreements are unconstitutional since they circumvent the explicit treaty language in Article II. The force of the Senate's advice power, in the end, stems from the fact that presidents cannot put their nominees in place or push through treaties without Senate approval. Further Reading Fisher, Louis. Constitutional Conflicts Between Congress and the President. 4th ed. Lawrence: University Press of Kansas, 1997; Maltzman, Forrest. "Advice
and Consent: Cooperation and Conflict in the Appointment of Federal Judges." In The Legislative Branch, edited by Paul J. Quirk and Sarah A. Binder. New York: Oxford University Press, 2005; Mansfield, Mike. "The Meaning of the Term 'Advice and Consent.' " Annals of the American Academy of Political and Social Science 289: 127–133 (Congress and Foreign Relations, September 1953); United States Senate, Committee on Foreign Relations. Treaties and Other International Agreements: The Role of the United States Senate. S. Prt. 106–71, 106th Congress, 2nd session. January 2001; White, Adam J. "Toward the Framers' Understanding of 'Advice and Consent': A Historical and Textual Inquiry." Harvard Journal of Law and Public Policy 103 (2005–06). —Kenneth R. Mayer
appropriations The power to determine how and where government money is spent is given to Congress in the U.S. Constitution. Article I, Section 9, Clause 7 of the Constitution reads: “No money shall be drawn from the Treasury, but in Consequence of Appropriations made by Law; and a regular Statement and Account of the Receipts and Expenditures of all public Money shall be published from time to time.” The framers of the Constitution believed that the power of the legislative branch to control government spending was central to the nature of representative government. James Madison, one of the most influential figures in designing the U.S. form of government, wrote in Federalist 58 that the appropriations power was the “most complete and effectual weapon with which any constitution can arm the immediate representatives of the people, for obtaining a redress of every grievance, and for carrying into effect every just and salutary measure.” However, the appropriations power is not exclusively congressional; the president does maintain some constitutional leverage over the power of government to spend money through the veto power, and legislative attempts to improve the management and oversight of government budgeting have increased the power of the president in determining the amount and location of government spending. Much of the history of the appropriations process in the U.S. system of government can be understood in the context
of the ebb and flow of presidential power and the ongoing tug of war that exists between separate branches with shared powers concerning public spending. During the early part of the nation’s history, Congress dominated the process for allocating government spending. Government agencies dealt directly with congressional committees in determining the course of government spending; there was little formal role for the president in the appropriations process until legislation appropriating funds appeared on the president’s desk to be signed into law. Powerful committees (the Ways and Means Committee in the House of Representatives and the Finance Committee in the Senate) were established to deal with major issues of taxation and appropriations. In the early years of U.S. history, a single appropriations bill met the needs of the entire country. As the country expanded and the budget grew, the work of the House Ways and Means and the Senate Finance Committees became overwhelming, and the power concentrated in these key committees became enormous. The appropriations function was taken from the Ways and Means Committee in 1865 with the creation of the House Appropriations Committee; the Senate Appropriations Committee was established soon after, in 1867. In practice since 1837 in the House and 1850 in the Senate, a two-step process has been required for government funds to be appropriated. Congress must first pass a bill to authorize a program, and subsequently a second bill is required to appropriate or spend the actual money. The distinction between authorizing legislation, which creates government programs and agencies and gives them the power to spend government money, and appropriations legislation, which enables the outlay of funds, continues today, frequently creating tensions between committees that have authorization power and the appropriations committees. The Budget and Accounting Act of 1921 dramatically altered the way the appropriations process worked, creating for the first time a formal role for the president in budget formation. While it is true that presidents did become involved from time to time in budgeting prior to 1921, the Budget and Accounting Act required the president to formulate an annual budget request based on presidential
priorities and created the Bureau of the Budget (later renamed the Office of Management and Budget) to help coordinate and centralize the executive budget process. Presidential budget requests still form the basis of the appropriations bills enacted by Congress; presidential success in the appropriations process varies considerably by president and partisan division of government. The Budget and Accounting Act of 1921 ushered in a period of strong presidential influence on the appropriations process. Presidents such as Franklin D. Roosevelt and Lyndon Johnson came to office with ambitious policy agendas and used the powers given to the president by the Budget and Accounting Act to control the policy agenda. The increase in presidential power peaked during the presidency of Richard Nixon, who claimed that presidents had the inherent power of impoundment—the power to refuse to spend money appropriated by Congress. Nixon's repeated refusal to spend money on social programs supported by congressional Democrats led to the passage of the Congressional Budget and Impoundment Control Act of 1974 in the final month of the Nixon presidency. The Congressional Budget and Impoundment Control Act curtailed the use of impoundments by presidents. In addition, the law required Congress to pass an annual budget resolution that establishes total revenue, spending, surplus or deficit levels; sets debt totals; and allocates spending among 20 functional categories. Both the House and the Senate created special budget committees to develop the budget resolution and to give more order to the budget process. The budget resolution, which must cover five fiscal years, is supposed to be adopted by April 15; however, Congress rarely meets this deadline, and there is no penalty for late adoption (or even failing to pass a budget resolution). The budget resolution is not sent to the president and does not have the force of law. It serves as a guide to other congressional committees that have a role in the budgetary process, including the appropriations committees. In reality, the value of budget resolutions is dubious; as one member of the House Appropriations Committee said: "Budget resolutions are not worth the paper they are printed on." There is no constitutional requirement that appropriations bills originate in the House of Repre-
sentatives as revenue measures must; however, by tradition, the House begins the annual appropriations cycle. In both the House and the Senate, the work of crafting appropriations bills is done by the subcommittees, each of which has a defined jurisdiction over particular categories of government spending. Until recently, both the House and the Senate committees had 13 parallel subcommittees that performed the work of allocating funds by program, passing 13 separate appropriations bills. In the 109th Congress, the House and then the Senate reorganized appropriations subcommittees. There are currently 12 Senate and 10 House appropriations subcommittees with the following jurisdictions. In the Senate, they include Agriculture and Rural Development; Defense; Energy and Water; State Department and Foreign Operations; Homeland Security; Interior; Labor, Health, and Human Services and Education; Military Construction and Veterans; Commerce, Justice, and Science; Transportation, Treasury, Housing, and Judiciary; District of Columbia; and Legislative Branch. In the House, they include Agriculture, Rural Development, and Food and Drug Administration; Defense; Energy and Water; Foreign Operations; Homeland Security; Interior and Environment; Labor, Health and Human Services, and Education; Military Quality of Life and Veterans; Science, State, Justice, and Commerce; and Transportation, the Treasury, Housing, Judiciary, and District of Columbia. The House and the Senate appropriations committees are among the most powerful committees in Congress because of their ability to determine how roughly one-third of the $2.7 trillion federal budget is spent. Former committee member Representative Jack Kemp (R-NY) referred to the committee as the most powerful in the history of civilization. The committee is the most highly sought-after assignment in the House and the Senate, and members are often willing to give up seniority on other committees for a placement on the House or Senate appropriations committee. The appropriations process for regular appropriations bills follows an annual schedule that begins with the presentation of the president's budget to Congress, which must occur on or before the first Monday in February of each year. Each of the subcommittees begins the congressional process by hold-
ing hearings to gain additional information from agency officials about their spending requests. Prior to subcommittee action on a bill, the full appropriations committee determines each subcommittee’s total outlay and budget authority limits. This process, known as the 302b allocation, is done largely by the appropriations committee chair with the assistance of the full committee staff (known as the front office). This allocation is extremely important to the subcommittees as it sets the limit on spending for the programs within each subcommittee’s jurisdiction and, once set, typically can not be increased without finding offsetting reductions in another subcommittee’s allocation. The subcommittee’s bill is put together by the subcommittee chair (commonly known as a “Cardinal”—as a group the subcommittee chairs are known as the “College of Cardinals”) who has great discretion over the content of his or her spending bill. The level of consultation with the ranking minority member (the most senior member of the subcommittee from the minority party) of the subcommittee varies over time and by subcommittee, but there is a stronger tradition of bipartisanship on the appropriations committees than on most other committees. In addition, subcommittees rely heavily on professional staff members throughout the legislative process. Subcommittees then proceed to “mark up” their bills, the process by which subcommittees review, amend, and report out legislation. Although technically only full committees may report bills, the appropriations subcommittee markups are rarely altered by the full committees. Floor consideration of appropriations bills depends on the differing rules of the House and the Senate, with each chamber providing special treatment for appropriations bills. Once the House and the Senate have voted to approve appropriations bills, differences between the chambers’ bills are ironed out by conference committees—committees established for the sole purpose of producing a single compromise bill that can pass both the House and the Senate. Following passage of an identical bill by both houses, the bill is sent to the president to be signed into law. Once enacted, agencies must follow congressional intent in spending government funds and may not exceed statutory limits imposed in appropriations bills.
Political scientists who studied the appropriations process prior to the 1974 Congressional Budget and Impoundment Control Act found that the committees operated with a bipartisan goal of budget reduction. More recent studies of the appropriations process conclude that the days when the appropriations committees acted as the guardian of the Treasury have ended and that one of the major reasons why legislators seek to become members of the appropriations committees is a desire to spend money, typically in a way that will assist constituents. Recent presidents have been particularly critical of the congressional appropriations process, claiming that it leads to inefficient, wasteful spending on the parochial needs of reelection-oriented members of Congress. These criticisms have been particularly popular at times when the federal budget is in a deficit position. Unlike most governors, the president does not have the ability to cancel out unwanted items in appropriations bills while signing the bill into law (a power known as the line-item veto). The president must either sign the entire bill into law or veto the entire bill. All modern presidents since Ronald Reagan have sought a line-item veto for the stated purpose of curtailing parochial congressional spending, but a version of the line-item veto passed by Congress in 1996 was struck down by the U.S. Supreme Court as unconstitutional. The congressional practice of passing omnibus appropriations bills and continuing resolutions containing numerous spending accounts in a single bill has lessened the president’s power to exercise veto control over congressional spending priorities. Vetoing an omnibus appropriations bill or continuing resolution might lead to a government shutdown, which carries political risks for both the president and Congress. Presidents often use statements of administration policy (known as SAPs) as a means of signaling Congress about potential disagreements early in the process so as to avoid the passage of an appropriations bill that is not acceptable to the administration and might result in a veto. Presidents do possess rescission authority, which allows the executive to propose reducing or eliminating planned spending. For rescission language to take effect, Congress must pass a bill approving the president’s rescissions within 45 days of continuous session of Congress, or the appropriation must be spent.
In addition to the regular appropriations process that must occur on an annual basis to fund the operations of government, there are supplemental appropriations that are used to fund unexpected expenses that occur from time to time during the fiscal year, such as natural disasters and wars. A third type of appropriation, known as continuing appropriations (often called continuing resolutions since they are passed as joint resolutions rather than bills), is used to fund agencies and programs in the short term when the regular appropriation has not been enacted in time for the start of a new fiscal year. Another source of tension between the president and Congress involves the practice of earmarking. Generally, Congress provides appropriations in lump-sum amounts for each spending account. Earmarking refers to the congressional practice of specifying the dollar amount and specific project or location of appropriations within an account in laws and accompanying committee reports. Using this tactic, members of Congress can target expenditures to a particular district or state and, presumably, reap the electoral benefits (more votes). According to the Congressional Research Service, the practice of earmarking has increased dramatically in recent years. Presidents have consistently opposed congressional earmarking, claiming that only the executive branch should make project-specific spending decisions based on factors such as need and technical expertise and that Congress should pass appropriations bills that allow the executive branch to exercise discretion in the allocation of grants and contract spending. Presidents argue that executive-centered decisions result in a more “efficient” use of the taxpayers’ dollars. Members of Congress counter that the executive branch does not have a monopoly on the information necessary to distribute government funding efficiently and effectively—arguing that members of Congress know better than the president what their constituents need—and they have continued to increase the number of location-specific earmarks included in bills and reports. Several scandals involving the allocation of earmarks, including the resignation and conviction of former House member Randall “Duke” Cunningham
(R-CA) for providing earmarks in exchange for bribes, have brought earmarking into the public eye and have led to congressional efforts to reform the earmarking practice. In addition to proposals to limit the number and scope of earmarks, proposals to create a constitutionally acceptable version of a line-item veto are being considered by Congress. However, Congress relishes the ability to make appropriations decisions, and it is unlikely that reforms will seriously impede the ability of Congress to continue to allocate money to favorite projects in the future. What is certain is that the inherent struggle between the president and Congress to control the appropriation of government money will continue. Further Reading Fenno, Richard F. The Power of the Purse: Appropriations Politics in Congress. Boston: Little, Brown, 1966; Frisch, Scott A. The Politics of Pork: A Study of Congressional Appropriation Earmarks. New York: Garland, 1998; Munson, Richard. The Cardinals of Capitol Hill: The Men and Women Who Control Government Spending. New York: Grove Press, 1993; Schick, Allen, and Felix Lostracco. The Federal Budget: Politics, Policy, Process. Washington, D.C.: Brookings Institution, 2000; Streeter, Sandy. The Congressional Appropriations Process: An Introduction. A Congressional Research Service Report to Congress, 2004. —Scott A. Frisch and Sean Q. Kelly
bicameral legislature Congress is the first branch of government, given the fundamental republican task of enacting legislation in response to the popular will. The framers of the U.S. Constitution made Congress a bicameral legislature, dividing it into two chambers—a House of Representatives and a Senate. Both chambers must cooperate in the lawmaking process to send bills to the president for signature, but they are structured very differently from each other. Although the framers sought a republican form of government, they did not want Congress simply to follow the transient and shifting will of the majority unless the majority sentiment was conducive to the common good. Bicameral-
ism serves to check the power of the legislature, and thus of the people as a whole, and it helps bring different and complementary qualities to the legislative process. One of the strongest checks and balances in the U.S. system of government is Congress’s bicameral structure. The framers believed that in a republican form of government, the legislature would be the strongest branch and thus the branch that would tend to absorb all power. It is inherently stronger than the other branches, in part because it enjoys the greater sympathy of the people since its members are drawn from local communities. However, the legislature also has the power of the purse and the power to pass laws that might be used to encroach on the powers and functions of the other branches. The primary examples of legislative power experienced by the framers at the Constitutional Convention were largely negative. Most state governments were marked by legislative supremacy in which the legislative branch dominated politics and tended to become abusive. James Madison puts it most succinctly in Federalist 48: “The legislative department is everywhere extending the sphere of its activity and drawing all power into its impetuous vortex.” Because Congress is the strongest branch, then, the framers placed the strongest checks on it, and one of those
checks is to divide the legislature internally into two chambers. Federalist 51 constitutes the central argument explaining the checks and balances system, and there Madison argues that this internal division of Congress into two chambers will “render them, by different modes of election and different principles of action, as little connected with each other as the nature of their common functions and their common dependence on the society will admit.” So, although Congress as a whole is founded on popular support, its two chambers are structured differently, in part to separate them and to make collaboration difficult. Lawmaking requires the agreement of two different legislative chambers, each of which can check the other. If a conspiracy against the people ever began to work its way into the lawmaking branch of government, it would have to infect two very different institutions, with different incentives and different political perspectives. Thus, Congress’s bicameral structure increases the protection of the nation against the strongest branch. In addition to serving the cause of checks and balances, bicameralism also contributes to the deliberative process that is central to lawmaking by causing the two chambers of Congress to play different roles in the legislative process because they are structured so differently. It is important here to highlight
[Photograph: The western side of the United States Capitol building. The U.S. Capitol serves as the location for Congress, the legislative branch of the U.S. federal government.]
these structural differences before explaining how they contribute to the deliberative process. First, the qualifications to be a senator are more stringent than those to be a member of the House. In addition to living in the state one represents, a qualification common to both chambers, House members must be at least 25 years old and seven years a citizen. To serve in the Senate, one has to be at least 30 years old and nine years a citizen. Second, the selection process was, until the Seventeenth Amendment, quite different in the two chambers. From the beginning, the framers intended the House to be the more democratic chamber of Congress, requiring states to allow anyone eligible to vote for the most numerous branch of the state legislature to also vote for the House. Within a few decades, this led to universal white male suffrage, giving the House a broad base of popular support. The original constitutional design for the Senate, however, called for that body’s members to be chosen by state legislatures. Only with the Seventeenth Amendment, ratified in 1913, did the selection process for the Senate become identical to that of the House. Third, members of the House, an institution based on proportional representation of the states, come from districts of relatively small size. Thus, large states such as California and Texas have many representatives, while small states such as Wyoming and Delaware have only one representative. By contrast, members of the Senate, an institution based on equal representation of the states, are elected statewide, with two senators from each state, no matter how big or small the state is. Fourth, the Senate is a much smaller institution than the House. Congressional statute currently limits the size of the House to 435 representatives. With 50 states in the union, at two senators per state, the Senate currently has 100 members—less than one-fourth the size of the House. Finally, House members serve two-year terms before facing reelection, with the entire chamber facing popular judgment at one time. Senators serve six-year terms—three times the length of their House counterparts—and the chamber is divided into three cohorts, with only one-third of the senators facing election in any specific election year. There are other differences between the two chambers, including different roles in foreign policy, staffing, appropriations, and impeachment,
but these constitutional features highlight the divergent roles the two chambers were designed to play in the legislative process. The House was designed to represent the people as a whole, not the states. It represents the most significant break from the system under the Articles of Confederation, where each state had equal representation in Congress. The framers wanted the base of popular support for the House to be as broad as it was in state legislatures. They also wanted the widest possible eligibility for service. In Federalist 52, Madison argues that “the door of this part of the federal government is open to merit of every description, whether native or adoptive, whether young or old, and without regard to poverty or wealth, or to any particular profession of religious faith.” While the congressional districts were to be large enough to make more likely the election of highly qualified leaders, this chamber remains “the people’s house.” The relatively short two-year terms were designed both to give House members enough time to become good at their jobs, while also keeping them dependent on the people. The fact that the entire House stands for election at the same time reinforces its accountability to the people, for dramatic changes in popular opinion will be immediately reflected in the House. In a very real sense, the framers wanted as many people as possible to be involved at this level of federal politics, and this close connection between the people and their representatives fosters deliberation about the merits of policy that is acutely dependent on the popular will. These same features, structured so differently in the Senate, demonstrate that chamber’s more deliberative and nondemocratic function. The qualification differences indicate that the framers believed the Senate requires more of an individual than the House. In Federalist 62, Madison argues that “the nature of the senatorial trust” requires “greater extent of information and stability of character,” which in turn “requires at the same time that the senator should have reached a period of life most likely to supply these advantages.” The framers wanted older people to serve in the Senate, based on the common notion that age brings greater wisdom, knowledge, experience, and stability. The naturalization requirement makes it more likely that an immigrant will be a citizen long enough not to be subject
to foreign influence. Selection by state legislatures made it more likely that the most highly qualified people would be elected to office—a goal apparently not always easily achieved through popular election. It also gave the states a voice in the formation of the federal government, highlighting the fact that the Senate represents entire states, thus reinforcing the federal nature of the structure of government. The Seventeenth Amendment eliminated some of these distinctions. The Senate’s much smaller size makes it better able to resist the “sudden and violent passions” that sometimes overcome larger assemblies. Its much longer term length not only gives individual senators greater experience, especially in such areas as foreign policy where the Senate has greater responsibilities but also the capability of resisting the changing whims and passions of the people. Many framers believed that republics were uniquely susceptible to what Madison calls “mutability” in the law—the tendency to make too many laws and change them too often in response to rapid swings in popular opinion. Long terms enable senators to resist those rapid swings because they have six years to persuade the people of the wisdom of their unpopular stand. Staggered terms in the Senate reinforce the ability of this institution to resist popular opinion, for in any given election two-thirds of the Senate is shielded from electoral volatility. Whereas voters can theoretically replace the entire House at one time, they would have to sustain such anti-incumbent fever for at least two election cycles to have a similar effect in the Senate. These dramatically different features clarify the functions of the House and the Senate in a bicameral legislature. The House represents the people and is designed to be most responsive to the popular will. The Senate is designed to be, in Madison’s words, “an anchor against popular fluctuations.” While the framers strongly believed in popular sovereignty, they also believed that democracy had its negative side that had to be tamed or controlled in some fashion. Both democracy and stability are necessary in a political system, and the framers feared that a legislative body that was too democratic might become abusive in its response to popular passions. The Senate was designed to blend stability with liberty by bringing more wisdom and knowledge and experi-
ence into the legislative process. It was designed to be the more deliberative of the two chambers. The radically different structures of the two chambers of Congress give lawmakers very different perspectives and incentives, and the result is a very complex lawmaking process. The larger goal of this structure, however, is to foster a deliberative process that is responsive to the popular will after that will is modified and refined in a way conducive to the common good. Further Reading Hamilton, Alexander, James Madison, and John Jay. The Federalist Papers, Nos. 48, 51–58, 62–63. Edited by Clinton Rossiter. New York: New American Library, 1961. —David A. Crockett
bill (acts of Congress) A bill is a proposed new law, a piece of legislation that has not yet been enacted. The term bill comes from Section 7 of Article I of the U.S. Constitution which sets forth the steps of the legislative process through which a bill can become a law. Laws are the main way in which public policy is enacted, so bills can illustrate how politics are translated into policies. Bills are the main type of legislation that Congress considers, but the other types are the resolution, the joint resolution, and the concurrent resolution. Simple resolutions concern the internal operations of either the House of Representatives or the Senate, concurrent resolutions concern the internal operations of both chambers, and most joint resolutions are essentially identical to bills. While most resolutions are merely administrative, bills seek to become laws, which are authoritative and judicially enforceable. A bill can be major or minor, radical or conservative, long or short, broad or specific, and public or private. Many bills concern unimportant or symbolic matters, while others concern the distribution of desirable goods and even matters of life and death. Bills are as varied as the policies and practices that members of Congress seek to enact via law. Public bills have general applicability, while private bills apply only to an individual or a narrowly
defined entity. Public bills can address regulations in specific policy areas, taxing and spending, or any other broad matter, while private bills are far more specific. For example, Congress passed a controversial measure that was arguably a private bill in March 2005, as it adjusted a judicial jurisdiction to facilitate a lawsuit by the parents of Terri Schiavo to keep their incapacitated daughter on life support. Private bills are now quite rare, but in practice the distinction between public and private bills can be inexact, as many companies enjoy tax breaks or subsidies or other benefits that are intentionally directed only at them but that are written in language that is ostensibly generally applicable, say, by referring to any and all firms of a certain size in a certain industry in a narrowly defined geographic region when in reality that amounts to only one company. If a bill targets a specific individual for punishment, then it is a bill of attainder, which is unconstitutional. Many bills originate from within the federal bureaucracy, and they are almost always written by lawyers who work for a federal agency or department or a congressional committee. However, only a sitting member of Congress can formally introduce a bill for consideration. Even if the president of the United States wants to introduce a bill, this executive must find a member of Congress who will do it. In the 109th (that is, 2005–06) Congress, all but two of the 435 members of the House introduced at least one bill. The member of Congress who introduces a bill is known as its sponsor. He or she often tries to convince other members to be cosponsors, as the number of cosponsors can be an indication of the level of support in Congress for passing the bill. Every bill is given a number, such as H.R. 1; the “H.R.” stands for House of Representatives and indicates a bill introduced in that chamber. Many bills also have clever acronyms for names; for example, the USA PATRIOT Act of 2001 is actually titled “Uniting and Strengthening America by Providing Appropriate Tools Required to Intercept and Obstruct Terrorism.” A bill typically has several sections, including a description of its purpose, some findings about a problem that it seeks to address, a mention of the basis of Congress’s constitutional authority to legislate on the issue, and then a section establishing the new policy. A bill can be introduced in either the House or the Senate, but bills that concern raising revenue must originate in the House. After a member of Con-
gress introduces a bill, the Speaker of the House refers it to a committee whose jurisdiction covers the policy area that the bill addresses. The chairperson of the committee may then refer the bill to an appropriate subcommittee, which may choose to hold public hearings during which people testify for and against the bill. After the hearings, the subcommittee may well revise or “mark up” the bill and then vote to send it to the full committee, which may itself hold hearings and enact further changes. If the full committee approves the bill, then it is sent to the full House for debate. Before the House debates a bill, it first determines a rule to govern the debate of the bill, including how long it can be debated and the number and nature of amendments that may be attached to it. After the House debates the bill, it votes on whether to pass the bill. If a majority of the full House votes in favor of the bill, then it passes. At that point, the bill is sent to the Senate, where the legislative process essentially starts anew with hearings and revisions at the committee and subcommittee levels, after which it may or may not be passed along to the whole Senate for debate and a vote. Since the House and the Senate are different institutions with different constituencies and different political pressures, they often pass versions of the same bill that are quite different. In such cases, Congress creates a conference committee to iron out the differences in the two bills. Such committees are composed of members of both chambers and both political parties and usually have fairly senior or powerful members of Congress. Even though a conference committee comes along fairly late in the legislative process, many bills undergo substantial revisions there since a lot of hard bargaining can be required to produce a single version that both chambers can support. Once the House and the Senate pass the identical versions of the bill, it is sent to the president. The Constitution gives the president 10 days to act on a bill, and the president can decide to sign it into law, to veto it, or to let it pass without signature. If the president rejects or vetoes a bill, then Congress may override the veto by a two-thirds vote in each chamber, in which case the bill becomes law over the president’s veto. When the president signs a major bill into law, this is often done at an elaborate signing
ceremony, even a name-signing with multiple pens, which are given to prominent members of Congress or supporters as souvenirs. When a bill becomes a law, it is officially called an act rather than a bill, but in practice some laws retain their name as a bill even after passage. For example, the 1944 Servicemen’s Readjustment Act that created benefits for returning World War II veterans is still commonly known as the G.I. Bill. Laws that are permanent and generally applicable are placed into the U.S. Code. As the above description suggests, the process by which a bill becomes a law is usually complicated, long, and arduous. Some bills are rushed through Congress, especially if they concern an urgent issue, but most bills take a long time to become law. Because the legislative process is so slow and has so many steps, there are many points at which a bill can be changed or derailed, and there are many opportunities for different groups to try to influence the process. Further, given the complex nature of the legislative process, bills typically emerge much changed from their initial versions, and many contain related but distinct measures that have been added as amendments. Bills expire at the end of every Congress, with the election of a new Congress every two years. If a bill is not passed in that time, then it must be reintroduced in the next Congress and start the legislative process all over again. Many bills are reintroduced every couple of years but never advance, while others take several congressional sessions to become law. During the course of the two years that a Congress is in session, it may consider roughly 10,000 bills. On average, only one out of 10 bills will ever become a law. Indeed, most bills never become laws, and most never even make it out of the committee to which they are referred: They may be formally rejected, or they may be just “tabled” or ignored, but either way most bills “die in committee.” The legislative process is designed to make it difficult for a bill to become a law, so bills tend to succeed only if a significant majority supports them. Some bills are introduced not so much in the hopes of becoming law but rather for more symbolic purposes or as a means of taking a public position. In other words, a member of Congress may sponsor or support a bill that has little chance of passing but that appeals to an important political constituency such that he or she can tell supporters that he or she
tried to help them by introducing or supporting legislation on their behalf. Insofar as bills are the main way that members of Congress seek to make public policy, bills can serve a great variety of political purposes. Further Reading Davidson, Roger H., and Walter J. Oleszek. Congress & Its Members. 10th ed. Washington, D.C.: Congressional Quarterly Press, 2006; Waldman, Stephen. The Bill. New York: Penguin, 1996. —Graham G. Dodds
budget process The budget process of the United States is generally centered on the understanding of three budget acts: the Budget and Accounting Act of 1921 (BAA), the Congressional Budget and Impoundment Control Act of 1974 (CBIA), and the Gramm-Rudman-Hollings Act of 1985 and 1987 (GRHA). These three acts were put into place in the 20th century because, prior to this, there was no need for a budget process. During the 19th century, most federal expenses were covered by customs revenues. A very tight federal expenditure system meant that surpluses were given back to the states after state debts had been paid off. There was no need to address how federal budgets were handled as money was not an issue. In the late 19th century, deficits first appeared, which meant that Congress had to start to address budgeting in more detail. Congress began to convene committees to handle the issue. The Taft Commission recommended that the president be responsible for reviewing the executive branch and then letting Congress know which agencies needed money. In this way, the budgeting for the national government would be placed in the hands of the president, at least initially, in an effort to make the budgeting process more efficient. Congress ignored this recommendation, but the conversation itself set the stage for the enactment of the first of the three important budget acts: the BAA. The BAA created the Bureau of the Budget in the executive branch as the general budget liaison with Congress. Presidents were required to draft budgets and submit them to Congress, which would then send the relevant pieces to the appropriate congressional committees. In this way, the budget process became less
piecemeal and took on a more holistic approach. The BAA governed the process until World War II, when the emergence of a more modern presidency raised questions about the efficiency of the budget system. The United States became a more modern national system around the time of World War II. By many accounts of U.S. history, the country took a step toward a stronger presidency, a stronger national government in the federalist system, and generally a much larger role on the international stage. As a result, it was necessary to have much more money flowing into the national coffers. Major debts caused Congress to look more to the president to handle the budget process. In this way, the process was made even more efficient as more money was necessary. The burden of shaping a national budget and thus a fiscal agenda became one person’s alone. While the president during this time was responsible for estimates only, the Congress rarely appropriated more than what the president asked. This method of handling the national budget went on until the reform era of the 1970s. In many ways, this decade of the 20th century saw Congress reasserting itself across the board in national policy. In particular, an inflation crisis caused Congress to revisit the budget issue. The second of the relevant budget acts, the CBIA, was passed in an effort to reassert congressional control in a policy area that had long been dominated by the president. Whereas the presidency had an institutional entity to handle the budget, Congress until then did not. First, the legislative branch created corresponding budget committees in the House and the Senate to deal with the process. Second, Congress created the Congressional Budget Office (CBO), which would provide independent information about the budget for Congress. As a result, both the Office of Management and Budget (OMB) in the presidency and the CBO in Congress began to forecast budgets and spending. The CBIA required Congress to enact a budget with spending limits and then to instruct the appropriate committees to spend within those limits. In this scenario, instead of relevant pieces being sent to appropriate committees and then voted on, the budget is determined in a budget committee, which then instructs the appropriate committees how much money they have to work with. As a result, a major piece of the budget process
was added in 1974: reconciliation. Congress has to reconcile bills passed out of the appropriate committees with the spending limits set by the budget committees. The CBIA also placed a check on the president’s ability to withhold (impound) funds after Congress has appropriated them. The two pieces of the GRHA, most often known simply as Gramm-Rudman, were passed in the 1980s in response to rising deficits. They imposed spending caps on the federal government in an effort to bring the budget into balance. Every budget since has had to take into account the spending caps imposed by Congress; however, the budget was not balanced until the late 1990s, when a growing economy and tax increases produced rising revenues. Currently, the budget process entails both the legislative and executive branches playing a role each year. In the presidency, the OMB is the agency responsible for the budget. The OMB is able to review what is coming out of the CBO to give the president a view of what Congress is thinking. Thus, the OMB is responsible for the entirety of the presidential role in the process. This role includes setting parameters, setting estimates, and setting the budget agenda. By detailing what money should be spent where, the president signals policy directives early in the political season. The time line runs as follows: In January, the OMB conducts a preview with all executive agencies. It reviews what the agencies spent their money on in the previous fiscal year and what each agency expects to need in the upcoming fiscal year. The OMB then does further study based on these reviews throughout the summer. During this time, the office may be working with agencies to fine-tune what is expected. In the fall, the OMB turns to developing a specific budget, with decisions made based on the administration’s fiscal policy. Here, we see the presidential stamp on the budget agenda. In December, the White House negotiates directly with specific agencies about amounts, if necessary. Throughout this process, both the Council of Economic Advisers and the OMB are advising the president on fiscal policy. What is important to note about the congressional role in the process is how it changed dramatically with the CBIA. Prior to this law, the president’s budget went directly through the appropriate committees.
In this way, the president could negotiate directly with committees on the passage of his budget. Now the president’s budget is sent to the budget committees in both chambers by March. As a result, there are two levels of review of the president’s agenda, and the chief executive has more actors with whom to negotiate while working to push a budget through Congress. The budget committees in each chamber report a first draft of the budget resolution in the spring, and both chambers are supposed to adopt a final budget resolution by April 15, after which the spending committees get to work. The appropriations bills need to be passed by the end of September because the fiscal year begins on October 1. Throughout this time line, the main congressional adviser is the CBO, which looks to see where legislation is in committee and helps the budget committees where necessary. The thing to note about the modern budget process is the relationship between the president and Congress. Prior to the CBIA, Congress was generally reactive to the president’s plan. What is interesting about the CBO is that it is nonpartisan and is responsible to both chambers of Congress, while the OMB is responsible only to the president and thus can be very partisan. As a result, the two types of forecasting can be different. While the budget process may not seem like the most exciting of U.S. government topics, it is very important to the health of the government. It also has an influence on the way the government is run. In 1921, the BAA began a shift away from a personal, individualized presidency to a presidential organization, or an institutionalized presidency—a transformation that allowed the presidency to wrest some budgetary power away from Congress. As a result of the budget process shift, the president began to acquire more staff and take more power in the battle for control of the national government. This helped lead to the modern, powerful presidency we have today. In the 1990s, Congress and President Bill Clinton battled over control of the national agenda; when the branches refused to compromise on the budget, the national government shut down. Suddenly, families who were shut out of national treasures such as Yellowstone National Park or the Lincoln Memorial became more aware of the budget to which they contribute. Where citizens’ money is spent and where the government gets it is the most
controversial conversation elected leaders in the U.S. government have. If there were enough money to go around, there would be no politics, because citizens could just pay for everything. Scarcity of funds leads to fights concerning the agenda, which means that the budget process is the key to understanding U.S. government. While the budget process seems remote from ordinary citizens, it is not. How the government spends its money is greatly affected by public opinion. If the people of the United States think that too much money is being spent in one area and not enough money in another area, the budget may move to reflect it, or elected officials may pay the price in subsequent elections. Second, germane committees take testimony from interested parties about the budget, so citizens can have an influence through their interest groups as well. See also Appropriations; Ways and Means Committee. Further Reading Ferejohn, John, and Keith Krehbiel. “The Budget Process and the Size of the Budget,” American Journal of Political Science 31, no. 2 (May 1987): 296–320; Fisher, Louis. “Federal Budget Doldrums: The Vacuum in Presidential Leadership,” Public Administration Review 50, no. 6 (Nov.–Dec. 1990): 693–700; Hartman, Robert W. “Congress and Budget-Making,” Political Science Quarterly 97, no. 3. (Autumn 1982): 381–402; Johnson, Bruce E. “From Analyst to Negotiator: The OMB’s New Role,” Journal of Policy Analysis and Management 3, no. 4 (Summer 1984): 501–515; Ragsdale, Lyn, and John J. Theis, III. “The Institutionalization of the American Presidency, 1924–92,” American Journal of Political Science 41, no. 4 (October 1997): 1280–1318; Shumavon, Douglas H. “Policy Impact of the 1974 Congressional Budget Act,” Public Administration Review 41, no. 3 (May–June 1981): 339–348. —Leah A. Murray
casework Casework is one of the most important activities in which legislators engage while serving in office. Casework consists of a legislator performing a task within
government on request from a person he or she represents. One of the most often used examples of casework is the time when a senior citizen does not receive social security checks because of some sort of bureaucratic mistake or delay within the Social Security Administration. When this delay happens, the Social Security recipient contacts the U.S. representative’s office to ask for assistance in receiving this guaranteed government benefit. The member of Congress or his or her staffer in turn investigates the matter and pushes to resolve the problem quickly. The inquiry usually involves the member of Congress contacting the federal agency to find out why their constituent is not receiving their guaranteed government benefit. Members of Congress are also frequently contacted on problems dealing with workmen’s compensation claims, veteran’s benefits, medical care, home loan guarantees, and immigration problems. The people of the United States reach out to their legislator for help with filling out government forms, asking for an explanation of a government decision, and for help in applying to one of the U.S. military academies. Members of Congress regard casework as a very important component of their job. As a matter of fact, the research on casework indicates that members of Congress allocate a significant amount of resources in their district and Washington, D.C., offices to handle casework requests. Legislators often designate at least one staff member, oftentimes several workers, at the district and Washington, D.C., offices to handle casework requests. Casework is so important to legislators that both the House Ethics Manual and the Senate Rules similarly address the responsibilities of the lawmaker with casework. For example, according to the Ethics Manual for Members, Officers, and Employees of the U.S. House of Representatives for the 102nd Congress, “Members may properly communicate with agencies on behalf of constituents: to request information or status reports; to urge prompt consideration of a matter based on the merits of the case; to arrange for appointments; to express judgment on a matter; and/ or to ask for reconsideration, based on law and regulation, of an administrative decision.” Further, the House rules suggest the U.S. representative “should make clear to administrators that action is only being requested to the extent consistent with governing law and regulations.” Senate rules do not specify that the
casework must be completed by the member for someone from their home district and state. The House of Representatives understands casework requests as coming from people living in the member’s legislative district. The rules in both chambers clearly prohibit the legislator from accepting any current or future “contributions or services” in exchange for casework. Lawmakers frequently establish rigid guidelines for responding to constituent mail and casework requests. Normally, a caseworker from the representative’s office will send the constituent a letter indicating that their request was received. Following the acknowledgement letter, members of Congress will have their caseworker send out at least one update letter on the progress of the request, and then the constituent will receive a final letter informing them of the outcome of their request. In some cases, not all of the requests for help are able to be met by the member of Congress, and when this happens, the constituent is notified about the situation. Legislators who retire or who lose their bid for reelection will often turn over incomplete casework to their successor, give it to another member of Congress from the same state, or return the incomplete file to the constituent. Research suggests that casework provides the member of Congress with greater visibility among constituents and creates a more positive evaluation of the lawmaker. Legislators generally view casework as a strategic service which will earn them more votes and win the next election. Most legislators have a significant amount of space devoted to constituent service on their Internet Web sites. Political scientists assume that reelection is the number-one goal of legislators and that doing casework is just one of the behaviors legislators engage in for reelection. Lawmakers, particularly the members of the U.S. House of Representatives also engage in credit claiming, position taking, and advertising to help them in their reelection efforts. Members of Congress usually claim credit for particularized benefits that return to their community, such as bringing money home to the district to fix roads and bridges or to help build a senior citizen activity center. Position taking involves a member of Congress staking out a position with which it is difficult for others to disagree. For example, the lawmaker could take a position where he or
she would claim to stand “for the environment” and “for the well-being of our children and grandchildren.” Members of Congress also try to develop a brand loyalty with their constituents through advertising. Members exercise this advertising through mailing newsletters or other informational mailings with the franking privilege, the ability of members of the House and the Senate to send mail with their signature, rather than a stamp for postage. Research has also demonstrated that constituents (those who live in the district of a specific lawmaker) are not generally concerned with the policy positions or even the ideology of their representative. Lawmakers appear to recognize that their constituents will tend to vote for them based on a specific favor their representative or senator completed for them. Lawmakers can present themselves to their constituents by either person-to-person contact or by performing service to the district such as helping a company achieve tax-exempt status from the government or interceding to help individuals deal with government bureaucracy, all forms of casework. The people of the United States most often do not have the time or the understanding to follow the business in Congress, but they do remember a personal contact with the legislator or when the lawmaker assists them individually or as a member of a group to receive special status or material benefits from government. Hence, members of Congress and other lawmakers place special emphasis on performing casework while in office because it is an activity that will assist in their reelection. This is especially the case when a member of Congress serves a district in which a relatively small proportion of voters identify with the member’s party or a marginal district where party control of the seat switches between the Democratic and Republican candidate. The amount of casework has increased tremendously during the past 30 to 40 years. One study noted a correlation between the rise in the number of safe districts and the growth of the federal bureaucracy. Since the Great Society legislative programs of President Lyndon B. Johnson which targeted the problem of poverty, the number of sitting members of Congress facing a significant challenge on election day has dropped precipitously as the number of federal programs (which make up the federal government) has increased in number and size. The
explanation behind this correlation is simple. The reason more incumbent members of Congress easily win elections in what are known as “safe” districts is that the growth in government programs leads to more government rules and regulations, which in turn create more problems for citizens dealing with the bureaucracy. People who are having problems with the bureaucracy have increasingly turned to their U.S. representative or senator for help in dealing with the government. The member of Congress and staff, in turn, complete more casework, motivating constituents to reelect their member of Congress because of the casework completed on their behalf. The argument presented is that members of Congress are not really concerned with good public policy but rather with creating agencies to deal with policy issues, leading to more casework and a higher likelihood of reelection. There has also been an increase in the amount of e-mail sent to members of Congress. In 2002, there were more than 120 million e-mails sent to members of the House and the Senate. On an average day, House offices receive more than 234,000 e-mails, and senators receive almost 90,000 e-mails. Many of these e-mails are requests from constituents. With the dawn of the Internet, all representatives and senators have Internet home pages designed specifically to educate the public and to welcome casework requests. In addition to the electoral advantage of performing casework, the number of casework requests can also suggest whether or not a federal program or agency is solving the problem legislators intended it to correct. According to a Congressional Research Service report for Congress, “Responding to constituents’ needs, complaints, or problems gives a Member an opportunity to determine whether the programs of the executive agencies are functioning in accordance with legislative mandates and may indicate a need for new legislation.” Clearly, a federal agency or program that is accomplishing its goals will not generate many casework requests from the people of the United States. While lawmakers engage in casework for their constituents, research suggests that some are more likely than others to perform casework. In a 2001 study of state lawmakers from 49 states, the authors found that state legislators who are liberal, not ambitious, perceive themselves as delegates of the people,
represent a rural district with a small population, and are paid, professional members of the legislature are more likely to do casework for their constituents. Ultimately, the authors conclude that state lawmakers “are strongly motivated by their conceptions of constituent preferences and requests” to carry out service activities for those who elected them to office. See also incumbency. Further Reading Baldwin, Deborah. “Taking Care of Business,” Common Cause 11 (September/October 1985): 17–19; Bond, Jon R., Cary Covington, and Richard Fleisher. “Explaining Challenger Quality in Congressional Elections.” Journal of Politics 47 (May 1985): 510–529; Cain, Bruce, John Ferejohn, and Morris Fiorina. The Personal Vote: Constituency Service and Electoral Independence. Cambridge, Mass.: Harvard University Press, 1987; Ellickson, Mark C., and Donald E. Whistler. “Explaining State Legislators’ Casework and Public Resource Allocations.” Political Research Quarterly 54, no. 3 (September 2001): 553–569; Fenno, Jr., Richard F. Home Style: House Members in Their Districts. Boston: Little, Brown, 1978; Fiorina, Morris P. Congress: Keystone of the Washington Establishment. New Haven, Conn.: Yale University Press, 1977; Mayhew, David R. Congress: The Electoral Connection. New Haven, Conn.: Yale University Press, 1974; Peterson, R. Eric. “Casework in a Congressional Office: Background, Rules, Laws, and Resources.” CRS Report for Congress, December 27, 2005; Pontius, John S. “Casework in a Congressional Office.” CRS Report for Congress, November 19, 1996; Serra, George, and Albert D. Cover. “The Electoral Consequences of Perquisite Use: The Casework Case.” Legislative Studies Quarterly 17, no. 2 (May 1992): 233–246. —Harry C. “Neil” Strine IV
caucus, legislative A legislative caucus is an informal group of legislators who organize to promote or advocate a specific shared interest, and they can be influential in the policymaking process within Congress. Some caucuses are formed to focus temporarily on a particular issue, while others exist for years. Roughly 200 caucuses have existed in Congress at any given time in recent
years, most in the House of Representatives. The goal of a caucus is to educate fellow legislators about particular issues to help in the formation and passage of legislation. They also allow members of Congress to align themselves publicly with a particular issue or policy objective. Many caucuses will write what is known as a “Dear Colleague” letter to promote a particular policy. Caucuses, particularly those with national constituencies, also allow members to gain more recognition within Congress, which can also provide more clout in pushing legislation. Caucuses are formally organized as congressional member organizations (CMO) and are governed by the rules of the U.S. House of Representatives (members of the U.S. Senate can also be members). Some groups do not use the title of caucus but instead will call themselves a task force, a working group, a study group, or a coalition. Certain rules apply to the members of these groups. First and foremost, members are allowed to form a CMO to pursue a common legislative objective. In addition, each CMO must register with the Committee on House Administration and provide the name, the statement of purpose, the officers, and the staff members associated with the group. Members of both the House and Senate can participate in the group, but at least one of the officers must be a member of the House. A CMO cannot accept funds or services from private organizations or individuals; official resources at the disposal of individual members are used instead. However, the franking privilege cannot be used by a CMO, and CMOs cannot have independent Web pages. Instead, a member can devote a section of his or her official government Web page to the group. Caucuses are organized on a wide variety of topics, interests, and constituencies. For example, caucuses exist to represent particular industries, such as the textiles, the steel, the entertainment, or the medical industries, or they can be organized along partisan lines as are the Republican Study Group or the Blue Dog Coalition (a group of fiscally conservative House Democrats). There are also regional caucuses (such as the Northeast–Midwest Congressional Coalition), caucuses representing the interests of a particular nation (such as Croatia, Indonesia, New Zealand, Pakistan, Taiwan, and Uganda to name just a few), and caucuses devoted to specific policies (such as antipiracy and international conservation), health
issues (such as mental health, food safety, and oral health), and a variety of other interests (such as the Sportsman’s Caucus, the Boating Caucus, and the Horse Caucus). Some of the more prominent and at times influential caucuses represent certain national constituencies. Those in this category include the Congressional Caucus for Women’s Issues, the Congressional Black Caucus, the Congressional Hispanic Caucus, and the Congressional Asian Pacific American Caucus. National constituency caucuses serve as a forum for their members to develop a collective legislative agenda; they also monitor the actions of the executive and judicial branches (such as executive orders and judicial appointments), and they can serve in an advisory capacity to other groups (such as congressional committees, executive departments, or the White House). The Congressional Caucus for Women’s Issues, a bipartisan group founded in 1977, has fought for passage of many bills dealing with economic, educational, and health-care issues, among others. One of the biggest challenges the caucus has ever faced came in 1995 when Republicans gained control of both houses of Congress for the first time in 40 years. With a commitment to reduce the cost and the size of government, the Republican leadership eliminated all legislative service organizations (which included the Women’s Caucus). Such organizations could still exist but were to be renamed congressional member organizations and would no longer receive public funding to pay for office space and staff. As a result, the cochairs of the organization (one woman from each party) now take on the responsibilities of the caucus in addition to their regular duties as a member of the House. Most but not all women in the House are members, and legislative priorities that became laws in recent years have included: stronger child-care funding and child-support provisions as part of the welfare reforms in 1996; increased spending for and the eventual reauthorization of the Violence Against Women Act programs; contraceptive coverage for women participating in the Federal Employee Health Benefits Program; Medicaid coverage for low-income women diagnosed with breast cancer; bills to strengthen stalking, sex-offender, and date-rape laws; and many other pieces of legislation dealing with women’s issues. With the exception of a few years during the 1990s, the bipartisan membership of the Caucus has agreed
to take a neutral stand on the issue of abortion to keep the organization more inclusive. The Congressional Black Caucus was formed in 1969 by the 13 black members of the House, who joined together to strengthen their efforts to address legislative issues of importance to black and minority citizens. In 1971, the group met with President Richard Nixon and presented him with a list of 60 recommendations involving domestic and foreign policy issues. While members considered Nixon's response inadequate, it strengthened the resolve of the group to pursue its goals. As of 2006, there were 43 members of the Congressional Black Caucus, and membership included a wide diversity of representatives from both large urban and small rural congressional districts. During the past three decades, the caucus has promoted a variety of legislative priorities, including full employment, welfare reform, opposition to South African apartheid, international human rights, minority business development, and expanded educational opportunities, to name a few. The group is also known for its "alternative budget," which is produced every year, sometimes in stark contrast to the actual federal budget, in an attempt to highlight the need for better funding of federal programs that serve economically disadvantaged citizens as well as those within the middle class who benefit from programs such as student loans or small business loans. Members of the Congressional Black Caucus refer to themselves as the "conscience of Congress." Similarly, the Congressional Hispanic Caucus, founded in 1976, is dedicated to advancing legislative priorities affecting Hispanics in the United States and Puerto Rico, as well as addressing national and international issues and the impact that policies have on the Hispanic community. Composed of 21 members in 2006, the Congressional Hispanic Caucus serves as a forum for the Hispanic members of Congress to work together on key issues on the legislative agenda and to monitor issues and actions within the executive and judicial branches of government. The legislative agenda of the group covers all areas that have a direct impact on the Hispanic community, and members often work in smaller task forces that draw on their expertise and develop priority legislation within each policy area. One of the most prominent issues on the agenda of the Hispanic Caucus in recent years has been immigration reform.
The Congressional Asian Pacific American Caucus has also placed immigration reform at the top of its policy agenda in recent years. Founded in 1994 by former congressman Norman Mineta of California (Mineta was the first Asian American to serve in the cabinet, first as Secretary of Commerce during the Clinton administration, then as Secretary of Transportation during the George W. Bush administration), the group includes members of both the House of Representatives and the Senate who have strong interests in promoting issues of concern to Asian-Pacific U.S. citizens. Among the goals of the caucus are to educate other members of Congress about the history, contributions and policy concerns of Asian-Pacific Americans. With the wide variety of caucuses that have existed in recent years, the Women’s Caucus, the Black Caucus, the Hispanic Caucus, and the Asian Pacific American Caucus have remained as some of the most resilient and successful with their legislative priorities. See also caucus, legislative. Further Reading Berg, John C. Unequal Struggle: Class, Gender, Race, and Power in the U.S. Congress. Boulder, Colo.: Westview Press, 1994; Brown, Sherrod. Congress from the Inside: Observations from the Majority and the Minority. 3rd ed. Kent, Ohio: The Kent State University Press, 2004; Davidson, Roger H., and Walter J. Oleszek. Congress & Its Members. 10th ed. Washington, D.C.: Congressional Quarterly Press, 2006; Gertzog, Irwin N. Women and Power on Capitol Hill: Reconstructing the Congressional Women’s Caucus. Boulder, Colo.: Lynne Rienner Publishers, 2004; Perry, Huey L., and Wayne Parent, eds. Blacks and the American Political System. Gainesville: University Press of Florida, 1995; Singh, Robert. The Congressional Black Caucus: Racial Politics in the U.S. Congress. Thousand Oaks, Calif.: Sage Publications, 1998. —Lori Cox Han
censure, legislative Censure is a formal reprimand issued by an authoritative organization against an individual for inappropriate behavior. In the United States, censure usually
refers to the Congress issuing a censure against a president. It is a congressional procedure short of impeachment designed to announce publicly that the president of the United States has done wrong, but the level of wrongdoing has not risen to meet the high bar of impeachment. Censure can also be voted by the Congress against individual members of the Congress as well. Censure, when issued against a president, attaches no penalties or fines. It is merely a statement that the president has done some wrong in the eyes of the Congress and may serve as a public embarrassment. What is the constitutional or statutory base of censure? Unlike impeachment, censure is nowhere mentioned in the U.S. Constitution and is a purely political, and not a legal or constitutional, procedure. It was invented as a way to dig at the president when his actions were deemed for some reason to be inappropriate but not strictly illegal, unconstitutional, or impeachable. No formal structure or procedure exists for censuring a president. In effect, the Congress makes up the rules as it goes along. Usually, censure takes the form of a concurrent resolution wherein both houses of Congress express disapproval of a presidential act or acts. But one House of Congress may issue a censure vote against a president, as occurred in 1834 against President Andrew Jackson. Only once in United States history has a president been censured, although censure efforts were initiated against several presidents. In 1834, President Andrew Jackson, a Democrat, and the Whigcontrolled Congress clashed over a variety of issues including the president’s opposition to and veto of the bill reauthorizing a Bank of the United States. Jackson believed that he was defending the interests of the average citizens from the elites of society; the Whigs in Congress believed that Jackson was becoming a demagogue and was against the commercial interests of the nation. The Whig-dominated Senate goaded Jackson on several issues and asked him to supply a variety of executive branch documents to the Senate. Jackson refused. The Senate then voted censure against Jackson for assuming “authority and power not conferred by the Constitution, but in derogation of both.” Jackson was outraged and asserted that “without notice, unheard and untried, I thus find myself charged on the records of the Senate, and in a
form hitherto unknown in our history, with the high crime of violating the laws and Constitution of my country.” Jackson insisted that if Congress wanted to charge him with a violation of the Constitution, it had a constitutional remedy—impeachment. Anything short of that was an abuse of power. Intended as a public embarrassment, censure had no criminal or any other penalties attached to it, and the censure vote became an issue in upcoming elections. In the next election, the Democrats won control of the Senate, and in 1837, the new Senate expunged the censure vote from the record. But censure did not die with the Jackson administration. In the middle of the Monica Lewinsky scandal in 1998, some legislators (mostly Democrats) and even former Republican President Gerald R. Ford suggested that the Congress censure President Bill Clinton, a Democrat, for inappropriate sexual activity with a White House intern and then lying about the relationship under oath during a deposition in a civil trial. But the Republican-controlled Congress decided instead to pursue impeachment against the President. Ironically, toward the end of the impeachment process, as many of the more moderate Republicans (for example, Representative Peter King [R-NY]) saw the political as well as the legal writing on the wall that conviction was not possible, some members of Congress instead attempted to resurrect censure as a means of punishing the president short of impeachment. But House Speaker-elect Robert Livingston (R-LA) (who would be forced to resign his post shortly afterward due to his own sex scandal) said that censure is “out of the realm of responsibility of the House of Representatives.” The House approved an impeachment resolution against Clinton, but the Senate failed to gain even a simple majority vote against Clinton on any of the impeachment articles. More recently, in 2004 and again in 2006, calls for censure of President George W. Bush for allegedly misleading the public and the Congress in the lead up to the war against Iraq came from some members of Congress and several special-interest organizations. Is censure an effective slap on the wrist against a president who has misbehaved, or is it a partisan tool to attack a president? Clearly a means is needed to call a president to account that is short of the draconian method of impeachment. But is censure that answer? Censure, if not overtly political, can be an
effective tool against a president who has committed inappropriate acts that do not rise to the high level of an impeachable offense. But the history of censure does not lend itself to the view that censure votes are used effectively or for purely good government reasons. It is impossible to remove the partisan element from censure, and therein lies the problem. Had the Congress censured Bill Clinton with a non- or bipartisan vote, it would have been an effective and probably appropriate punishment for his inappropriate behavior. But partisan hatreds entered into the equation, and in the end, efforts to destroy Clinton overwhelmed good judgment, and the impeachment effort, although doomed from the start, went ahead. Does that mean that censure has no future in U.S. politics? Is it an acceptable alternative to impeachment? It is entirely possible to resurrect a nonpartisan censure effort as a means of making a statement and rendering a judgment in a nonimpeachment case. The key will be to have a truly bipartisan and not overtly partisan censure. Only then can it serve a useful public policy purpose, and even then, it may be considered an inappropriate and extraconstitutional method of criticizing the president. Further Reading Adler, David Gray, and Michael A. Genovese, eds. The Presidency and the Law: The Clinton Legacy. Lawrence: University Press of Kansas, 2002; Baker, Peter. The Breach: Inside the Impeachment and Trial of William Jefferson Clinton. New York: Scribner, 2000; Devins, Neal, and Louis Fisher. The Democratic Constitution. New York, Oxford University Press, 2004. —Michael A. Genovese
census
The census is one of the most important documents in the U.S. political and governmental system. It provides the basic information on which to make such important decisions as allocating House of Representatives seats among the states, redistricting seats within state legislatures and city councils, distributing federal aid each year to state and local governments, and assessing taxes. The census also helps determine how many electoral votes are awarded to each state.
The concept of counting citizens dates back to at least 3800 b.c., when the Babylonians were said to have counted their citizens. Later, leaders of the Roman empire created the word census from the Latin word censere, meaning to assess. Hence, the goal of Roman leaders was to use their census to develop tax bills for their citizens. Likewise, much later, William the Conqueror mandated a census in 11th-century England to reveal who owned what property and to help him establish and collect taxes. His subsequent Domesday Book remains one of the most important documents in the history of western civilization as a portrait of life in the Middle Ages. The U.S. census, taken every decade since the first one was recorded in 1790, was mandated in the U.S. Constitution. Article I, Section 2, of the Constitution directs that the census be taken each decade thereafter. Hence, the U.S. census has been taken in each year whose number ends in a 0. Information for the first U.S. census was gathered by 650 federal marshals making unannounced calls on every home in each of the 13 states. The information that they collected included the name of the head of the household and the number of all residents living in or on the property. The marshals counted 3.9 million people and spent $45,000 to complete the 18-month program. The U.S. census is the longest continuously running count of its kind in history. Many European nations subsequently began census programs in the early years of the 19th century. The size and scope of information gathered in the census have expanded at a steady rate throughout the years. Questions about manufacturing capabilities were added in 1810, and additional items seeking information about business and industry were added in 1840, along with questions about mental health. Ten years later, after consulting with scientists and receiving special funding from businesses, census collectors began to ask detailed questions about all members of every household. By 1890, federal marshals had stopped collecting census data, and the federal Census Bureau had machines replace hand counting of the data gathered. In the 1930 census, with the onset of the Great Depression undoubtedly fresh on their minds, Census Bureau officials added questions about employment, income, and migration history to their ever-growing survey.
STATE POPULATION AND THE DISTRIBUTION OF REPRESENTATIVES

State  Population  Representatives, 2002 to 2010
Alabama  4,461,130  7
Alaska  628,933  1
Arizona  5,140,683  8 [+2]
Arkansas  2,679,733  4
California  33,930,798  53 [+1]
Colorado  4,311,882  7 [+1]
Connecticut  3,409,535  5 [–1]
Delaware  785,068  1
Florida  16,028,890  25 [+2]
Georgia  8,206,975  13 [+2]
Hawaii  1,216,642  2
Idaho  1,297,274  2
Illinois  12,439,042  19 [–1]
Indiana  6,090,782  9 [–1]
Iowa  2,931,923  5
Kansas  2,693,824  4
Kentucky  4,049,431  6
Louisiana  4,480,271  7
Maine  1,277,731  2
Maryland  5,307,886  8
Massachusetts  6,355,568  10
Michigan  9,955,829  15 [–1]
Minnesota  4,925,670  8
Mississippi  2,852,927  4 [–1]
Missouri  5,606,260  9
Montana  905,316  1
Nebraska  1,715,369  3
Nevada  2,002,032  3 [+1]
New Hampshire  1,238,415  2
New Jersey  8,424,354  13
New Mexico  1,823,821  3
New York  19,004,973  29 [–2]
North Carolina  8,067,673  13 [+1]
North Dakota  643,756  1
Ohio  11,374,540  18 [–1]
Oklahoma  3,458,819  5 [–1]
Oregon  3,428,543  5
Pennsylvania  12,300,670  19 [–2]
Rhode Island  1,049,662  2
South Carolina  4,025,061  6
South Dakota  756,874  1
Tennessee  5,700,037  9
Texas  20,903,994  32 [+2]
Utah  2,236,714  3
Vermont  609,890  1
Virginia  7,100,702  11
Washington  5,908,684  9
West Virginia  1,813,077  3
Wisconsin  5,371,210  8 [–1]
Wyoming  495,304  1
Total  281,424,177  435
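The relationship between the population and representative columns in the table above can be illustrated with a short sketch. The code below is hypothetical and is not the official apportionment formula; it simply divides a few of the table's state populations by their seat counts to show roughly how many residents each representative serves, which helps explain why fast-growing states gained seats after the 2000 count.

# Illustrative only: a few rows copied from the table above.
# This is not the method actually used to apportion House seats;
# it simply reports average residents per representative by state.
STATES = {
    # state: (apportionment population, House seats for 2002 to 2010)
    "California": (33_930_798, 53),
    "Texas": (20_903_994, 32),
    "New York": (19_004_973, 29),
    "Wyoming": (495_304, 1),
}

for state, (population, seats) in STATES.items():
    per_seat = population / seats
    print(f"{state}: about {per_seat:,.0f} residents per representative")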
Technological innovations continued to characterize the evolution of the U.S. census throughout the 20th century. An innovation added to the 1940 census was the development of the "long form," which was sent to a randomly selected minority of respondents; Census Bureau officials had added random-sampling techniques to their array of information-gathering skills. Most citizens were covered by information gathered in a "short form" that featured a collection of basic questions. Technological innovations took another major leap in the 1950 census with the use of electronic digital computers to record and tabulate the data collected that year. The 1970 census featured the introduction of mail forms to the census-gathering process. A majority of households (about 60 percent) were asked to complete their forms and mail them back to the Census Bureau. Enumerators were used only to contact the remaining 40 percent of households, respondents who failed to return their mail forms, and respondents whose mail forms contained errors. Use of the mail forms increased steadily through the end of the century, to the point at which they included almost all households in the nation. The U.S. census has remained politically controversial throughout its lengthy history. The first such controversy occurred during the Constitutional Convention, when delegates debated at length about how to count slaves. While some delegates wanted to credit one person for each slave, others argued against it, maintaining that slaves were not citizens and, hence, should not count at all. After lengthy debate, the delegates finally compromised on Article I, Section 2. Slaves were counted as three-fifths of a person, and Native Americans not under U.S. jurisdiction were not counted at all. The Fourteenth Amendment to the Constitution, ratified in 1868, directed that all persons would be counted as whole persons. Native Americans were finally counted beginning with the 1890 census. However, the most controversial aspect of the census has been undercounting citizens, particularly members of minority groups. Residents whom the Census Bureau has been unable to contact and include in its totals have been termed the undercount. Census Bureau officials have been struggling to overcome undercount problems throughout
much of the 20th century. They began to try to measure the undercount when collecting the 1940 census. By the 1970s, the debate about the causes and consequences of the undercount spanned much of the nation and had reached an almost frenzied intensity level. Following the 1980 census, the Census Bureau confronted 54 lawsuits. Most of the lawsuits were filed by civil-rights organizations representing minority groups. They charged the Census Bureau with employing unconstitutional and unfair methods of counting citizens, in which members of minority groups would either be undercounted or not counted at all. In addition, children, the homeless, and persons renting apartments or homes tended to be undercounted. Census Bureau officials have attempted numerous techniques to overcome the problem of undercounting. These efforts included an extensive advertising campaign to publicize the census in areas whose citizens often were undercounted. In addition, the Census Bureau established a toll-free telephone number where workers could help citizens fill out their census forms. In other areas, Census Bureau officials attempted to count homeless citizens by having enumerators visit traditional sleeping places of the homeless at 3 a.m., such as selected highway underpasses, public parks, doorways, shelters, and heating grates. Members of Congress tried to help resolve the undercount problem. In 1991, Congress approved the Decennial Census Improvement Act. This law directed the secretary of commerce to hire the National Academy of Sciences to help develop a plan to enable the Census Bureau to make the most accurate count possible. The scientists reported that the only way to reduce the undercount would be to use statistical sampling methods similar to those used in public-opinion polling or marketing survey research. As has been the case in most census controversies, this proposal attracted lawsuits. In the major lawsuit on statistical sampling, a majority of the members of the U.S. Supreme Court ruled in Department of Commerce, et al. v. United States House of Representatives, et al. that using sampling to apportion representatives among the states violated the federal Census Act. See also gerrymandering; districts and apportionment.
Further Reading Alterman, Hyman. Counting People: The Census in History. New York: Harcourt Brace, 1969; Anderson, Margo. The American Census: A Social History. New Haven, Conn.: Yale University Press, 1990; Anderson, Margo, and Stephen Feinberg. Who Counts: The Politics of Census-Taking in Contemporary America. New York: Russell Sage, 1999; Haley, Dan. Census: 190 Years of Counting America. New York: Elsevier/ Nelson, 1980; Skerry, Peter. Counting on the Census?: Race, Group Identity, and the Evasion of Politics. Washington, D.C.: Brookings Institution, 2000. —Robert E. Dewhirst
code of legislative ethics
A code of legislative ethics is a set of regulations governing the conduct (mainly financial) of the members of a legislative body. The U.S. Senate, the House of Representatives, and all 50 state legislatures have adopted legal codes governing their members' acceptance of gifts, honoraria, and campaign contributions, regulating their contact with lobbyists, and specifying how they can avoid potential conflicts of interest or appearances of impropriety. Why is there a special code of ethics for legislators? For as long as sovereigns have delegated their powers of government to ministers of trust, there has been a need for rules of proper conduct for those ministers, particularly with respect to their financial dealings and potential conflicts of interest. Today, ethics statutes govern the conduct of not just legislators but all government employees and many other private individuals who do business with the state. Codes of legislative ethics, however, are special in that they are laws specially designed to govern the conduct of lawmakers. Under the U.S. separation-of-powers doctrine, if legislators do not regulate their own conduct in this regard, no other branch of government is empowered to control their behavior. The U.S. Constitution states that each chamber of the Congress may "punish its Members for disorderly Behaviour, and with the Concurrence of two thirds, expel a Member." Instituting a separate code of conduct for legislators thus serves several functions: It permits the chamber to limit the conduct of its own members more strictly than it could if they were ordinary citizens; it enables the chamber to review and judge
cases involving members themselves; and it addresses concerns about public trust while upholding the separation of powers. Thus, all legislatures have rules establishing what constitutes improper conduct for its members and have created special committees tasked with recommending rule changes and reviewing particular cases of alleged member misconduct. In addition, 39 states have independent ethics commissions that review and advise on issues of propriety among state employees as well as legislators. In one sense, the term code of legislative ethics is something of a misnomer: The scope of conduct which these codes address is far narrower than we ordinarily associate with the term ethics. Strictly speaking, codes of legislative ethics define legal limits of questionable conduct, the most a legislative chamber will permit its members to do before subjecting them to legal punishment. In other walks of life, those who skate to the very edge of legality are rarely described as “ethical.” So while violations of codes of legislative ethics are also almost always violations of ethics more broadly conceived (as conformity to demanding moral principles), the reverse is not true; merely adhering to the strictures of a code of legislative ethics is hardly sufficient to make one “ethical.” Nor is it the case that codes of legislative ethics seek to address the full range of ethical concerns specific to a legislator’s role. Some theorists of political ethics have made inroads into this subject, seeking to understand the particular moral permissions and role responsibilities associated with the legislator’s task. But this is not the aim of a code of legislative ethics, for to write such ethical principles into law would be to undermine the legislator’s autonomy, independence, and judgment in the exercise of his or her office. Instead, for better or worse, a code of legislative ethics seeks only to define the limits of strictly impermissible conduct, leaving any higher ethical purposes to the legislators’ own consciences or to the responses of their constituents. Not all the actions prohibited by codes of legislative ethics are necessarily wrong in and of themselves. Often, ethics codes also restrict actions which, while legitimate, might appear otherwise to an impartial observer. So a general principle of such ethics codes is that the appearance of impropriety is for the most part to be avoided as much as actual impropriety itself. This is because public trust in both legislators
themselves and in the democratic institutions and processes which they serve may be damaged as much by the appearance of impropriety (whether or not it independently constitutes a real misuse of power) as by wrongdoing that goes unseen. Codes of legislative ethics distinguish permissible from impermissible conduct and single out violators of the standards that they establish for individual judgment. Yet, when poorly constructed or poorly applied, such codes can merely substitute an institutional version of corruption for the individual corruption they seek to contain. Thus, for codes of legislative ethics to serve the purposes of self-government, it is not only the case that individual conduct needs to be judged within the framework of the code; it is also the case that the standards of the code itself need to be judged for their institutional effects and for how well they serve the larger purposes of the community. What kinds of conduct do codes of legislative ethics govern? The scope of legislative ethics codes typically encompasses the following kinds of rules:
1. limits on what gifts, honoraria, and campaign contributions it is acceptable for members (and their families and staffs) to receive;
2. rules for disclosure of such gifts and contributions;
3. regulations of members' contact with lobbyists, as well as rules preventing their quick entry into lobbying work after leaving office (the so-called "revolving door" prohibitions); and
4. guidelines for avoiding conflicts of interest and nepotism.
Limits on acceptable gifts and contributions vary from legislature to legislature. The U.S. House and Senate prohibit gifts of more than $50 (with certain specified exceptions) and permit no more than $100 in gifts per year from any single individual. At the state level, close to half the legislatures set a monetary limit for gifts to legislative members, and all states prohibit any gifts that serve to influence official action. In addition, the U.S. Congress and almost half of the state legislatures prohibit honoraria (payments for appearances, addresses, or writings connected in some way with the legislator's office or duties). Campaign contributions are also limited, though these are usually governed by campaign finance laws (which apply to candidates as well as incumbents) rather than through codes of legislative ethics. Gifts and honoraria can be restricted in different ways, from
outright prohibition to permission with disclosure requirements. Most state legislatures as well as the U.S. Congress require their members to disclose facts about their income and assets deemed to be relevant to questions of public trust. Legislators are frequently required to state the names of their clients, creditors, and debtors as well as connections with government officials and lobbyists. (In addition, legislators must disclose extensive information about those who contribute to their campaign funds, though again this is usually governed by campaign finance laws rather than directly by codes of legislative ethics.) The U.S. Congress and all 50 state legislatures regulate contact between lobbyists and legislators to ensure that attempts to influence legislators are (and also appear to be) legitimate and proper. Lobbyists must register with the state or federal government and report on their activities and expenditures. Legislators are prohibited from receiving certain types of gifts or favors from lobbyists. At the federal level and in a little more than half of the states, former legislators are also prohibited from entering into lobbying work themselves until a specified period of time (one year for U.S. Congress members) has elapsed, to limit the use by former members of influence and connections acquired in public service for their personal profit. Conflict-of-interest rules seek to prevent members from using their official position or confidential information for personal gain. Members must refrain from voting or otherwise exercising the powers of their legislative office on matters in which they have a direct personal or pecuniary interest. Conflict-of-interest rules frequently prohibit members from receiving remuneration from sources outside their legislative salary except within certain strictly defined limits. They may also prohibit nepotism (the member's hiring of a relative) in certain situations; 19 states have a broad ban on any form of nepotism. In the U.S. Congress, violators of the code of legislative ethics are liable to three major categories of sanctions. A member may receive a reprimand, usually employed for minor or unintentional infractions of the code, or a censure, a stronger form of punishment that may require the member to lose his or her committee or leadership positions for a period of time. Both reprimands and censure require a majority vote
of the chamber in question. The most serious form of punishment at the disposal of a legislature is expulsion: the Constitution requires a two-thirds majority before permitting a chamber to expel a member and remove the member from office. Violators of state codes of legislative ethics are subject to similar penalties that vary in accordance with state constitutions and laws.
Further Reading
Anechiarico, Frank, and James B. Jacobs. The Pursuit of Absolute Integrity: How Corruption Control Makes Government Ineffective. Chicago: University of Chicago Press, 1996; Applbaum, Arthur. "Democratic Legitimacy and Official Discretion," Philosophy and Public Affairs 21 (Summer 1992): 240–274; Council on Governmental Ethics Laws, www.cogel.org; Hamilton, Alexander, John Jay, and James Madison. The Federalist Papers. Edited by J. R. Pole. Indianapolis, Ind.: Hackett, 2005; Lowenstein, Daniel H. "Political Bribery and the Intermediate Theory of Politics," UCLA Law Review 32 (April 1985): 784–851; MacKenzie, G. Calvin, and Michael Hafken. Scandal Proof: Do Ethics Laws Make Government Ethical? Washington, D.C.: Brookings Institution, 2002; National Conference of State Legislatures, Center for Ethics in Government, www.ncsl.org/ethics; Noonan, John T. Bribes. Berkeley: University of California Press, 1984; Rose-Ackerman, Susan. Corruption and Government: Causes, Consequences, and Reform. Princeton, N.J.: Princeton University Press, 1999; Rosenson, Beth A. The Shadowlands of Conduct: Ethics and State Politics. Washington, D.C.: Georgetown University Press, 2005; Sabl, Andrew. Ruling Passions: Political Offices and Democratic Ethics. Princeton, N.J.: Princeton University Press, 2002; Stark, Andrew. "The Appearance of Official Impropriety and the Concept of a Political Crime," Ethics (January 1995): 326–351; Stark, Andrew. Conflict of Interest in American Public Life. Cambridge, Mass.: Harvard University Press, 2000; Thompson, Dennis F. Ethics in Congress: From Individual to Institutional Corruption. Washington, D.C.: Brookings Institution, 1995; Thompson, Dennis F. Political Ethics and Public Office. Cambridge, Mass.: Harvard University Press, 1987; United States House of Representatives, Committee on Standards of Official Conduct, www.house.gov/ethics; United States Senate, Select Committee on Ethics, ethics.senate.gov; Warren, Mark E. "Democracy and Deceit: Regulating Appearances of Corruption," American Journal of Political Science 50 (January 2006): 160–174.
—John M. Parrish

committee system
Woodrow Wilson in 1885 said that committees do the work of the Congress. More than a century later, this is still true. The U.S. House of Representatives and the Senate each have nearly 20 standing or permanent committees. They receive bill referrals, conduct hearings, mark up bills, make bill recommendations to the chamber’s full membership, undertake investigations, and conduct oversight of executive branch policy. Their names and their policy jurisdictions roughly reflect the full set of things done by the national legislature. The committees today remain the indispensable work units of the modern bicameral Congress on both the House and the Senate side of Capitol Hill. Committees can be either permanent or temporary. The most important ones are standing committees, which exist from one Congress to the next. All of these can report proposed legislation to the floor (the full membership of the parent chamber). There are also select committees, ad hoc committees, joint committees, and conference committees. Select committees are usually temporary investigatory bodies, as are ad hoc committees; but some emerge to address new policy problems and eventually assume status as standing committees. Their formal naming may take a while to catch up to reality. In the 109th Congress of 2005–06, the House and the Senate intelligence committees were 30 years old, yet both still carried the label “Permanent Select Committee on Intelligence.” Joint committees are shared across the bicameral Congress, but they lack legislative authority and have become less important in recent decades. Conference committees are temporary entities to iron out the difference in bills reported from committees in the House and the Senate. Standing committees have extensive formal authority under rules of the parent chamber. Almost all of them have legislative authority and the capacity to call hearings. These include authority to issue subpoenas to compel testimony from witnesses. There is a basic division of power
between authorization committees and appropriation committees. The former can write bills that create legal authority for programs to exist, and they often write a ceiling on the program’s budget as well. Appropriating committees do the annual allotments of money to these programs. In practice, there is frequent poaching by one type of committee on the other. Authorizers write detailed restrictions on allowable spending by issuance of “annual authorizations”; and appropriators commonly issue riders such as the antiabortion Hyde amendment (“No funds shall be expended . . .” for certain policy purposes). The most important collective property of a standing committee is its established policy jurisdiction. Scholars have shown that this is “common law property” even after the occasional effort at major reform and streamlining of committee operations. Policy jurisdiction conflicts do regularly occur, universally known on Capitol Hill as turf wars. Chairpersons of committees are expected to be vigilant in a committee’s turf defense. At least one quiescent House Democratic committee chairperson in the 1990s was booted from his post for failure to discharge that duty. Committee resistance to reform also shows that they are important power centers for the members. Richard Fenno identifies three preeminent motives of members on committees: Seek reelection through service to districts or states, make good public policy, and gain power and prestige with other members. All are achievable through standing committee assignments, and all members seek and receive these (save one bizarre case of a member en route to ejection from the House and eventual federal imprisonment). Between 1911 and 1975, Congress was basically run as government by committees. Committee chairpersons in the 20th century were the chief rivals to congressional party leaders for preeminent power on Capitol Hill. Their heyday came after a 1946 consolidation plan reduced committees from many dozens down to the 20-odd units the Congress still has today. These were sufficiently powerful that the Senate parties not long after 1946 adopted a “Johnson rule,” first instituted in the 1950s by Senate Majority Leader Lyndon Johnson (D-TX), limiting any member to a single chairmanship, and by the 1970s, the House parties followed suit. But not all committees or chairpersons are seen as equals in power or prestige to the
members. Committees have a recognized rank ordering of desirability, with the top positions occupied by the money-handling committees in deference to congressional "power of the purse." Members of Congress have long been careerists, and a first prerogative of these long-term members has been to stake claims to committee seats. When seats go vacant in high-prestige committees, there is a rush of petitions to fill those vacancies. Once desirable seats are secured, they are regarded by members as cherished property. Members have made the committee system highly resistant to overall reform. Resistance to reform is shown by examining the lists of committees by name and number of seats in successive Congresses of recent decades. There are many name changes, especially when the Republicans took over sustained majority control of Congress beginning with the 104th Congress (elected November 1994, serving in 1995 and 1996), but there are few authentic alterations of jurisdiction, operational authority, or seat size. On rare occasions, a new committee is created or an old one abolished (by demotion to subcommittee status within one of the other standing committees). The most prominent recent addition is the Committee on Homeland Security, created after the 9/11 terrorist attack of 2001. The two-party system of Congress is fully represented in the committee memberships. Each membership is divided by party, with the parent chamber's majority party holding a committee seat majority of comparable size. The total seat allotment on each committee and the division between the parties are the result of extended bipartisan negotiation. On the most prestigious committees, the seat ratio is weighted toward extra seats for the majority to ensure its ultimate control of key policy. The House Rules Committee is a special object of majority party rule, being weighted 9 to 4 in the majority's favor, with all nine majority members directly selected by the Speaker of the House. One committee, the House Committee on Standards of Official Conduct (also known as the House Ethics Committee), has an evenly divided party representation for the sake of doing its unpleasant job of reviewing ethics violations by members. Seat allotments of recent decades are fairly stable from one Congress to the next, reflecting the low turnover of members and the resistance to systematic reform.
The property claims by members begin with petitions for seats. This is done within each party alongside negotiation of seat allotments between the two party leaderships. Freshmen receive assignments from their own party leadership, as do more senior members petitioning to move onto desired vacancies on new committees. All senators and most representatives have more than one committee assignment. There are two reasons for this. One is necessity. In the 108th Congress of 2003–04, there were 19 standing committees with 860 seats to allot among 435 House members, leaving about 2.0 seats per representative. The 17 Senate committees had 349 seats divided among 100 senators, leaving almost 3.5 seats per senator. The other reason is clamoring by members for additional seats from which to engage in bill filings, position taking, and credit claiming. Individual members use committee posts to build careers. The large difference in House and Senate membership size governs several important distinctions between House and Senate committees. Career paths of House members are much more heavily governed by committee positioning than are senatorial careers. House members who want to establish a legislative brand name in certain policy areas must become members of committees with jurisdiction over that policy. It can go further, as they also seek seats on subcommittees that specialize in facets of the parent committee's policy realm. Senators often can circumvent these requirements by participating directly in that chamber's postcommittee floor proceedings with little constraint on amending a committee's bill. In the House, the likelihood of a given House bill's survival is deeply influenced by whether its primary sponsor is on that committee, and when a House committee's bill does reach the floor, it usually arrives under a party-mediated rule that restricts freewheeling amendment efforts. Senate committees are notorious for numerous hearings and even bill markups where only one or two senators are present. The remainder of the work is done by permanent committee staff. This is the commonplace problem of membership shirking, or failure to do assigned work. Shirking is common in both the House and the Senate but more so in the overburdened smaller body. Committee staff work is crucial to the operation of both House and Senate committees, but with only 100 senators spread so thinly,
their role is more central on the Senate side. Top committee staffers are careerists, too, with considerable technical expertise and “institutional memory” of what that committee has done. The House manpower advantage reflects in their conferral of exclusive status to its four most important committees (Appropriations, Commerce, Rules, and Ways and Means). The intent is that seat holders there will sit on no other standing committee (although some members find ways around this). That is a powerful incentive for members to devote full attention to that committee’s work and not to shirk it. It has usually succeeded, thus giving the House conferees a significant advantage over Senate conferees during conference committees. Once the committee membership and party ratio is established with each Congress, there is an exactly defined committee seniority ladder on which every member is located. One can readily see it at House or Senate committee Web sites. Each committee lists all members on two partisan ladders in descending order from longest-serving down to most recent seat holders. From this we get a traditional allotment of the committee chairmanship via committee seniority: The chair went to the majority party member with the longest unbroken service on that committee. The ranking minority member is determined the same way. All chairmanships are held in the majority party, so a change of party control of the parent chamber such as Republicans taking over in 1995 (the 104th House and Senate) means that the ranking members normally ascend to chair posts. Seniority is, like exclusivity, a powerful incentive for the more senior membership not to shirk the committee’s workload. However, the rise of forceful party leadership in recent years has led to some curtailment of committee seniority rights in the U.S. House (but not in the Senate). Senior chairpersons during the long Democratic Party dominance of the House (from 1955 to 1994) were notoriously autonomous, sometimes dictatorial, and usually a lot more conservative than the rank-and-file Democrats. This produced a 1975 uprising in which three senior southern Democrats were summarily booted out of their chairmanships. In 1995, the new majority House Republicans followed suit, denying the chair to some aspirants who had earned it the old way through continuous tenure on the seniority ladder.
The Senate also saw Republican takeover in 1995 but with no overturning of chairmanships. That is mainly because few members face lengthy waits on the short Senate committee seniority ladders (unlike the House) and because Senate committees are far less important as policy gatekeepers that can define what reaches the floor and what will not. Even more important than seniority ousters, since 1995, the House Republicans instituted a three-term limitation on any committee’s chairperson tenure. With six consecutive House majorities through the 109th Congress, this has produced limited tenures and numerous changes in chairmanships. The intent was to weaken the traditional autonomy of chairpersons and of the standing committees, and it has accomplished both of these. The Senate instituted a three-term limitation on chairpersons in 1997, but its impact to date is far smaller than the House rule. Senators have more committee assignments and are more able to author legislation from outside a committee. Term limitations may reduce the ability and incentive of members to invest in developing personalpolicy expertise through long tenure and devotion to its policy. Committees in each house are centers of informational expertise for that parent body. That is an elementary source of prestige or standing for members among their colleagues. It is also an institutional bulwark for use by the parent chamber defending itself from the separate executive branch. The U.S. Congress is distinguished by having a far larger and more elaborate committee system than any of the world’s other legislative bodies. This is derived from the separation-of-powers doctrine. In parliamentary bodies such as Great Britain, the House of Commons derives its policy information and analysis largely from ministries. In the U.S., divisions and power confrontation across Pennsylvania Avenue prompt Congress to defend itself by investing resources in its own committee system rather than “downtown” in the executive bureaucracies. That, ultimately, is the basis for the elaborate and hard-to-reform committee system of the modern Congress. The committee system is an enduring and central feature of the U.S. legislature. It is unlikely to go away or to be changed drastically anytime soon. It reflects members’ incentives to build careers and deliver benefits to districts and states. That makes it difficult to reform and impossible to ignore.
See also incumbency; Ways and Means Committee Further Reading Davidson, Roger H., and Walter J. Oleszek. Congress & Its Members. 9th ed. Washington, D.C.: Congressional Quarterly Press, 2004; Fenno, Richard F., Jr. Congressmen in Committees. Hinsdale, Ill.: Scott Foresman & Co, 1973; King, David C. Turf Wars: How Congressional Committees Claim Jurisdiction. Chicago: University of Chicago Press, 1997; The Center for Legislative Archives. Research interview notes of Richard F. Fenno, Jr., with members of the U.S. House of Representatives, 1959–1965. Washington, D.C.: The National Archives. Available online. URL: http:// www.archives.gov/legislative/research/special-collec tions/oral-history/fenno/interview-notes-index.html. —Russell D. Renka
Congressional Budget Office, The The Congressional Budget Office (CBO) plays an integral role in the national governing process. Its members, largely professional economists and statisticians, are charged with conducting nonpartisan, or politically unbiased, analyses of important publicpolicy issues that often affect millions of Americans. These can range from estimating the cost of cleaning up New Orleans after the Hurricane Katrina disaster to projecting the net benefits to U.S. society of immigration or reducing carbon-dioxide emissions. However, as its name implies, the CBO’s primary responsibility is to provide Congress with regular assessments of the U.S. federal budget. With total outlays, or money spent, and revenues, or money raised through taxes and other sources—each well into the trillions of dollars per year—the CBO’s analyses and recommendations are considered essential to many aspects of congressional decision making. That the CBO is able to wield such influence over important national decisions may seem surprising from several perspectives. For instance, while the total number of federal government employees runs into the millions, the CBO’s staff hovers around 230. Moreover, although federal outlays surpassed $2.7 trillion in fiscal year 2006, the CBO’s operating budget was only $35.5 million. Also, while the budgetary needs of the federal government have demanded
attention for more than two centuries, the CBO has been operating for barely three decades. Finally, its nonpartisan analyses must somehow satisfy the demands of an increasingly partisan Congress. However, as will be explained below, when viewed from within this same evolving political environment, the CBO's influence becomes understandable, if not logical. The CBO was created as part of the Congressional Budget and Impoundment Control Act (CBICA) of 1974. Several motives produced this act. More narrowly, in the wake of Republican President Richard Nixon's impounding, or refusing to fully spend, funds that Congress had appropriated for certain programs, the Democratic majority in Congress sought to restrict this prerogative of the executive branch. However, the CBICA addressed broader policy goals as well. The congressional budget process had become unwieldy by the early 1970s. The cold war, and especially the recent Vietnam conflict, had necessitated increased spending on national defense. In addition, Democratic President Lyndon Johnson's Great Society agenda demanded unprecedented spending to alleviate poverty. The existing congressional budget process had always been highly decentralized, based largely on the preferences of the many individual committees of the House and the Senate. However, with the growing list of federal initiatives undertaken by the liberal Democratic Congress, such decentralization made coordination of the overall budget increasingly difficult. If Congress was to control its annual budgetary agenda, especially when facing a Republican president, it needed a more efficient method for crafting spending and tax bills. Furthermore, this need for more agenda-setting efficiency was not limited to the daily functions of the House and the Senate. By the early 1970s, Congress was increasingly at a disadvantage vis-à-vis the executive branch when creating the annual budget. In 1970, the president's ability to draw up the budget was greatly enhanced with the creation of the Office of Management and Budget (OMB). This small but powerful office provided the president with loyalists who demanded that the many departments and agencies of the executive branch more closely follow his guidelines and goals for policy making. In addition to this partisan discipline across specific policies, the
OMB included professional economists and budget experts who "crunched the numbers" regarding the federal government's taxing and spending as well as projected the state of the macro economy (that is, the growth rates of real gross domestic product and inflation, the level of unemployment and interest rates, and so on). For several ensuing years, Congress thus depended on the OMB's calculations for its budgetary decision making. This institutional relationship cannot be overemphasized. For more than a half-century, national and international events allowed the executive branch to gain progressively greater policy-making influence over Congress, including influence over the federal budget. The landmark 1921 Budget and Accounting Act helped promote greater efficiency in the executive branch's control of the federal budget. Then, after the needs of the Great Depression warranted a more centralized approach to economic policy, the 1939 Executive Reorganization Act further enhanced the president's control over the federal budget. Later, fearing a collapse back into depression once the artificial stimulus of World War II ended, Congress officially took responsibility for managing the national economy by passing the Employment Act of 1946. This act pledged to use appropriate fiscal and monetary policies to keep the macro economy growing. In doing so, it formally redefined the federal budget and fiscal policy from merely paying for the operations of government to deliberately influencing the entire economy. In the broader sense, the presidency gained still more relative control of the national agenda during the next 20 years. The cold war, the advent of nuclear weapons deliverable in minutes, the United States' new leadership role in the world, and the need for a more centralized approach to fighting racism and poverty at home combined to make the presidency what John F. Kennedy called "the center of action." Indeed, by the late 1960s, the growth in the executive branch and, more important, in the general expectations of it as the leader across many policy issues created what some historians and political scientists came to call the "Imperial Presidency." In other words, the modern presidency had grown in policy-making power at the expense of Congress. As noted above, the advent of the Office of Management and Budget in 1970 only
enhanced the president’s advantage when defining budgetary and economic priorities. From this perspective it is now clearer that the Congressional Budget and Impoundment Control Act of 1974 addressed both narrow and broad policy needs. Besides limiting the president’s capacity to refuse to spend funds duly appropriated by Congress, it sought to reassert Congress’s budgetary powers. The CBICA was crafted under the assumption that a more centralized—or OMB-like—process would allow Congress to better compete with the president’s proposals. CBICA streamlined the process by ceding agenda-setting power to new House and Senate budget committees. Early each year, these committees would compose a budget resolution that would set out broad but strict guidelines regarding how much each of the regular standing committees could authorize in new spending on its policy initiatives. The budget resolution also set overall revenue goals which the tax-writing committees (that is, the Ways and Means Committee in the House and the Finance Committee in the Senate) would have to meet. Still, the budget resolution could only suggest, rather than mandate, the specific ways of doing so. The other committees would never have allowed such power to be granted to the budget committees. Also, so as not to be wholly dependent on the executive branch’s OMB for budget and economic projections, the new act created the CBO, which gave Congress its own professional number crunchers. As noted earlier, these analysts respond to the needs of members of Congress. These range from individual members’ requests for quantitative analyses of policies that affect their constituents to systematic projections of macroeconomic and budgetary conditions. For instance, the CBO keeps Congress appraised by conducting monthly and quarterly (that is, every three months) reviews of the federal budget. These largely incorporate the latest data on both the spending and taxing sides of fiscal policy. More significantly, the CBO also releases five-year projections of the levels of spending and tax revenues. These entail what is known as “baseline forecasting.” Almost every year, Congress and the president propose new initiatives for spending public funds and how best to raise revenues through changes in tax policy. CBO analysts will use a “baseline” forecast to estimate the likely costs or benefits of
these policy changes. Specifically, this forecast will assume that the new policies have not been enacted and then calculate the likely level of overall spending and tax revenues. Next, after calculating the new initiative's likely effects on spending or revenue for five, sometimes 10, years into the future, the CBO compares the two levels and credits the difference in spending or revenue levels to the new initiative. Inasmuch as the OMB does the same for the president, the CBO's value to Congress becomes readily apparent. CBO staff are not charged with making or even enforcing policy, but their analyses often influence fiscal initiatives' chances of becoming law. Whether a new spending program or tax cut is considered "fiscally responsible" by enough lawmakers to win approval often depends on the CBO's assessment. The director of the CBO, chosen by the Speaker of the House and the president pro tempore of the Senate for a four-year term, will testify before Congress, usually the House and the Senate budget and appropriations committees and the tax-writing Ways and Means and Finance Committees, as to what his or her staff has concluded. The director will not typically recommend supporting or rejecting a particular initiative but will instead only convey the CBO's analyses. These assessments are regularly compared to those of the OMB and of "Blue Chip" academic and business economists. Since its inception in the mid-1970s, the CBO has been called on to assess the budgetary and economic impact of such major fiscal initiatives as the tax cuts and tax reform during the Reagan presidency, the Gramm-Rudman-Hollings deficit reduction targets of the 1980s, the deficit reduction plans during the Clinton presidency, and the recent tax cuts of the George W. Bush presidency.
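A rough sense of how such a baseline comparison works can be sketched in a few lines of code. Everything in the example below is hypothetical: the dollar amounts, growth rates, and variable names are invented for illustration and are not CBO figures, and the CBO's actual models are far more detailed. The sketch projects spending five years ahead under current law, projects it again with a proposed initiative added, and reports the cumulative difference as the initiative's estimated five-year cost.

# Hypothetical illustration of baseline scoring; all numbers are invented.
BASE_YEAR_SPENDING = 2_700.0   # billions of dollars of outlays in the base year (assumed)
BASELINE_GROWTH = 0.04         # assumed annual growth of spending under current law
NEW_PROGRAM_COST = 25.0        # assumed first-year cost of a proposed initiative
PROGRAM_GROWTH = 0.06          # assumed annual growth of the initiative's cost

def project(start: float, rate: float, years: int) -> list[float]:
    """Compound-growth projection for each of the next `years` budget years."""
    return [start * (1 + rate) ** t for t in range(years)]

baseline = project(BASE_YEAR_SPENDING, BASELINE_GROWTH, 5)
policy_costs = project(NEW_PROGRAM_COST, PROGRAM_GROWTH, 5)
with_policy = [b + c for b, c in zip(baseline, policy_costs)]

# The "score" credited to the initiative is the cumulative difference between
# the policy-adjusted spending path and the current-law baseline path.
score = sum(p - b for p, b in zip(with_policy, baseline))
print(f"Estimated five-year cost of the hypothetical initiative: ${score:.1f} billion")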
government, to be the CBO's first director. By the mid-1990s, the new Republican Congress attempted to get the CBO to reflect its philosophy of more limited government. In one case, Republicans mandated that the CBO conduct a cost-benefit analysis of any congressional policy proposal that would entail new burdens on businesses or state and local governments. In another instance, the Republicans attempted to convince the CBO to use what was called "dynamic scoring" of tax initiatives. Republicans argued that the traditional "static" models, which do not allow for changes in overall economic behavior possibly elicited by the changes in tax policy, often overestimated the loss in revenues from tax cuts or, conversely, overestimated the increase in revenues from tax increases. In sum, the purpose of this brief discussion has been to convey that the Congressional Budget Office is far more than a small collection of number crunchers doing mundane analyses for members of Congress. The CBO is the logical product of the political evolution of both the needs of Congress and the broader political and economic environment. It arose because, by the early 1970s, more was expected from the federal government as the United States became a world superpower and certain domestic problems needed concerted national solutions. When the presidency became the preferred branch for leading such solutions, the Congress needed to reassert itself. The CBO, with its staff of professional economic analysts, was an integral part of this resurgence. Since those early days, the CBO has become an essential part of budgetary and fiscal policy-making. Further Reading Davidson, Roger H., and Walter J. Oleszek. Congress & Its Members. 10th ed. Washington, D.C.: Congressional Quarterly Press, 2006; Keith, Robert. The Budget Reconciliation Process. Hauppauge, N.Y.: Nova Science Publishers, Inc., 2006; Rosen, Harvey. Public Finance. New York: McGraw-Hill, 2004; Rubin, Irene S. The Politics of Public Budgeting: Getting and Spending, Borrowing and Balancing. 5th ed. Washington, D.C.: Congressional Quarterly Press, 2005; Schick, Allen, and Robert Keith. The Federal Budget Process. Hauppauge, N.Y.: Nova Science Publishers, Inc., 2006. —Alan Rozzi
congressional immunity The U.S. political system is characterized by, among other things, a separation of powers and government under the rule of law. The separation of powers is more accurately characterized as an overlapping, blending, and sharing of powers, but the three branches of government are often particularly jealous in guarding and defending their perceived exclusive spheres of power. Likewise, they often seek refuge in the rule of law as interbranch disputes occur. One area where we have in recent years seen a great deal of interbranch rivalry and give and take is the area generally referred to as congressional immunity. The American Heritage Dictionary (1991) defines immunity as "The quality or condition of being immune." Immune is defined by the dictionary as "exempt." Congressional immunity thus refers to conditions under which members of Congress are immune or exempt from certain forms of punishment or detention in the course of performing their official functions as members of Congress. But immunity is not an absolute "free pass." The Constitution carves out exceptions under which members are still held to account. Congressional immunity is based in the speech and debate clause of the U.S. Constitution, which can be found in Article I, Section 6, Clause 1 and states that members of both Houses of Congress ". . . shall in all Cases, except Treason, Felony, and Breach of the Peace, be privileged from Arrest during their attendance at the Session of their Respective Houses, and in going to and returning from the same, and for any Speech or Debate in either House, they shall not be questioned in any other Place." This clause establishes both the immunity and its limits: The protection does not extend beyond what is stated in the Constitution. Placing these protections in the Constitution marked the culmination of a long, sometimes bloody, and very difficult struggle in which the British Parliament fought for protections against the arbitrary powers of the king, and over the course of British history, it was the Parliament that eventually achieved "parliamentary supremacy" and thus protection against the political pressures of the Crown. The House of Commons fought long and hard against the Tudor and Stuart monarchs (who were not averse to using criminal as well as civil law to harass, suppress, and
intimidate members of Parliament who were a nuisance to the Crown) to attain these protections. The king did not give in easily, but in time, the Parliament asserted its cause and eventually earned the protections of immunity that restricted the mischief the king could do in efforts to interfere with parliamentary business. At the time of the founding of the American republic, it was not a very controversial step to add these same protections to the U.S. Constitution. In effect, the heavy lifting in this area had already been done by the British Parliament, and the members of the new Congress in the United States were the beneficiaries of their struggle and their victory over the executive. But the lessons of legislative versus executive power struggles were not lost on the new legislative assembly in the United States, and thus the framers, learning well the lesson of the parliamentary struggle in England, included congressional immunity in the new Constitution as a protection against the new U.S. president. The separation-of-powers argument for congressional immunity as found in the Constitution is to prevent a president or any other officer of the state or the executive branch from arresting members of the legislature as a way of preventing them from voting, participating in debates, or conducting congressional business. Congressional immunity thus gives members of Congress the freedom to disagree with the other branches without fear of reprisal or detention as they attempt to perform their jobs. Such interferences were fairly common in England in the days before the American Revolution, and the Founders were well aware of the potential damage that could be done by an unscrupulous executive bent on interfering with the legislative process. Congressional immunity is designed to guarantee the freedom and independence of the legislature and not allow the executive to have undue or improper influence over the congressional process. The states also have various forms of legislative immunity, as do most democratic systems of government. Presidents have their own comparable form of immunity, referred to as executive privilege and defined as the right of a president to withhold information sought by one of the other branches of government. This "right" was first claimed by President Thomas Jefferson when he refused to honor a subpoena from Chief Justice John Marshall in the treason
trial of former vice president Aaron Burr. Throughout history, a number of presidents have relied on claims of executive privilege in an effort to withhold from Congress papers or materials thought to be within the province of the executive branch and not subject to congressional inquiry or subpoena. Usually the Congress and the president were able to reach some form of bargain or compromise in interbranch disputes of this nature, but there were times when compromise was not possible. One such point was reached during the Watergate crisis of the Richard Nixon presidency. Nixon pressed his privilege to the limit, and his claims finally made it to the U.S. Supreme Court. It was not until 1974 that the Court weighed in on the constitutionality of executive privilege in the case United States v. Nixon, when President Nixon attempted to withhold the release of White House tape recordings that were subpoenaed by the special Watergate prosecutor Leon Jaworski as well as by the Congress and a federal criminal court. The Supreme Court unanimously ruled against Nixon, compelling him to turn over the tape recordings, but it also concluded that the Constitution supports a general claim of executive privilege, arguing that the claim of privilege was "presumptive" and not absolute. In 1997, the Court rejected President Bill Clinton's argument that the Constitution immunizes the president from suits for money damages for acts committed before the president assumed office. The case, Clinton v. Jones (1997), stemmed from a suit filed by former Arkansas state employee Paula Jones, who claimed that, as governor of Arkansas, Clinton had sexually harassed her. Congressional immunity and executive privilege were both means by which the framers of the Constitution sought to guarantee a measure of independence and separation among the three branches. No one branch was to have too much control over another. The president had a measure of protection but could be removed from office via the impeachment process. The same was true for the judiciary, whose members held lifetime appointments but could also be impeached. Members of Congress were granted certain freedoms and protections from liability for speeches given in Congress and were protected when going to and from the business of Congress. But such protections, as we can see from United States v. Nixon, which dealt with the executive branch, are
not necessarily absolute. In 1979, for example, the Supreme Court in Hutchinson v. Proxmire was asked to decide whether congressional immunity extended beyond debates on the floor of the Congress and could be applied as well to press releases and statements made to the press. In this case, the Court ruled that immunity extended only to floor debate and not to statements made in public for public consumption. These immunities do, however, apply to committee reports, resolutions, voting, and those acts generally subsumed under the normal job description of a legislator doing the business of government. But how far does immunity go? The protections of immunity generally shield members from civil suits but do not apply to criminal cases. The constitutional exception for "treason, felony or breach of the peace" means that immunity is not a catch-all category protecting all forms of behavior. A number of Court cases have refined as well as defined the terms and limits of congressional immunity. Cases such as Kilbourn v. Thompson (1881), Dombrowski v. Eastland (1967), Powell v. McCormack (1969), Gravel v. United States (1972), and United States v. Brewster (1972) served both to narrow the scope of immunity and to protect the core function for which congressional immunity was established. In recent years, questions of congressional immunity have made headline news as a function of charges of abuse of the privilege, as well as in criminal investigations. In 1999, a very prominent U.S. senator was involved in a car accident in Fairfax, Virginia, a Washington, D.C., suburb, that seemed clearly to be his fault. But as the police officer began to write a ticket, the senator pulled a copy of the U.S. Constitution from his pocket and, pointing to Article I, Section 6, Clause 1, argued that the officer had no right to interfere with a senator driving to or from the business of Congress. The officer, confused, escorted the member to the Fair Oaks police station, where the shift commander, after discussing the situation with the Fairfax Commonwealth's attorney, released the senator without issuing a ticket. On a much more serious note, in 2006, the FBI raided the congressional office of Representative William J. Jefferson (D-LA). The raid, part of an investigation into charges of bribery, caused controversy because the
FBI entered and removed possible evidence from the office of a member of Congress. As the case involved a felony, the FBI claimed to be on solid constitutional ground, but members of both parties, attempting to protect the sanctity of congressional offices, cried foul and put a great deal of pressure on the FBI. The rule of law implies that no one, whether president, member of Congress, or judge, can be above the law, and in some senses, congressional immunity protects members of Congress and seems to place them above the law. But remember that these rules were instituted to protect members from the undue and inappropriate political pressures that the executive might apply to them in efforts to intimidate or interfere with the business of Congress. While the direct causes that led to including congressional immunity in the Constitution may be relics of a more raucous past, the core rationale remains relevant even today. The courts over the years have attempted to strike a balance between the absolute claim of privilege and immunity and the proper functioning of a criminal justice system. While striking a just-right balance may be a difficult task in a complex world, the courts have tried to protect the core function of immunity as practiced within a separation-of-powers system, while preserving respect for the rule of law. Further Reading Dodd, Lawrence. Congress Reconsidered. Washington, D.C.: Congressional Quarterly Press, 2004; Geyh, Charles Gardner. When Courts and Congress Collide: The Struggle for Control of America's Judicial System. Ann Arbor: University of Michigan Press, 2006; Oleszek, Walter J. Congressional Procedures and the Policy Process. Washington, D.C.: Congressional Quarterly Press, 2004; Story, Joseph. Commentaries on the Constitution of the United States. 1883. Reprint, Durham, N.C.: Carolina Academic Press, 1995. —Michael A. Genovese
congressional leadership The U.S. Constitution provides for three legislative officers for Congress and blurs the doctrine of
separation of powers with one of those offices. First, in looking at the Senate, the vice president is given constitutional authority to serve as president of the Senate. This hybrid role, with one foot in the executive branch and one foot in the legislative branch, has led to more of a personal than a constitutional dilemma. The vice president, whose only other constitutional role is that of succeeding the president on his or her death, resignation, or removal from office (including through a declaration that the president is unable to “discharge the powers and duties of his office,” thus assuming the duties of acting president under the provisions of the Twenty-fifth Amendment), has found a legislative role that is limited. For example, Vice President Richard M. Nixon labeled his vice presidency under President Dwight D. Eisenhower one of the six crises in an early autobiography. Lyndon B. Johnson found himself unwel-
come in the Senate once he had ridden down Pennsylvania Avenue and left the Senate for a role as vice president in the Kennedy administration. Recent vice presidents have found a more fulfilling role to be one defined by a president willing to share power with the vice president, along the lines of the models set by Jimmy Carter and Walter Mondale, Bill Clinton and Al Gore, and George W. Bush and Dick Cheney. The vice president can preside over the Senate when a particular parliamentary maneuver may play out to the advantage of his or her party or when his or her vote is needed to break a tie. Regular participation by the vice president in the workings of Congress was not expected, since the Constitution called for a president pro tempore to preside over the Senate in the absence of the Senate president. This position is of symbolic as well as substantive importance, and a partisan system was not the underlying dimension envisioned in designing
U.S. Speaker of the House Nancy Pelosi with U.S. Senate majority leader Harry Reid (Saul Loeb / AFP Getty Images)
the structure of U.S. government. Rather, the concern was over power being too centralized in the hands of a few in the national government or too decentralized among the states. A close working relationship between the executive and the Senate was expected, which is clear from the executive functions assigned to the Senate and not to the House of Representatives (that is, confirmation of presidential appointments, approval of treaties, and confirmation of appointments to the federal courts). Yet a close working relationship also emerged between the House and the executive branch early on, before the rise of political parties. The party leadership positions we know today, for example, majority and minority leader, did not emerge until the end of the 19th century and followed a long period in which power was concentrated in the hands of committees and committee chairs. With the emergence of political party machines and party bosses, it was natural for a stronger party leadership role also to emerge in Congress as the 19th century drew to a close. The roles of floor leaders in the House were formally established by the end of the 1800s and in the Senate by the 1920s. The position of president pro tem (or the senator designated to preside over the modern Senate) has evolved to take on a partisan role similar to that of the Speaker of the House, in that the majority party in the Senate decides who will be president pro tempore. The post has gone to the majority party's most senior member in that chamber, making the office important because of the seniority of the individual and the other roles he or she may hold (for example, Robert Byrd as chair of appropriations and president pro tem within the Democratic majority in the 110th Congress, or Ted Stevens holding the same posts with Republican control in the 109th Congress). The other constitutional position is that of the Speaker of the House. This position is quite prominent in determining the fate of legislation and all other legislative activity. The Speaker heads his or her party in the House, yet the role of Speaker is defined in the Constitution not in partisan terms but as the leader chosen by members of the entire House of Representatives. The major leadership positions that have evolved since the turn of the last century include the floor leader, the whip (or assistant floor leader), caucus or conference chair, secretary and/or vice
chair of the caucus or conference, policy chair, steering committee or committee on committees chair, and congressional campaign committee chair. The position of Speaker, a constitutional office that does not require membership in the House for selection, is elected by all members of the House, with each party submitting a nominee. The majority party's candidate inevitably wins. The other positions are either elected or appointed within each party's membership. These party positions evolved as party leaders began to take on a more significant role. The institutionalization of Congress is reflected in the growth of an elaborate party leadership structure that has, at times, supplanted the power of committee chairs. Throughout the 20th century, power at first became centralized in the hands of party leaders and then, after the tyranny of Speaker Joseph G. Cannon (R-IL) was curbed, waxed and waned with power gains made by committee chairs. In the late 1970s, power was centralized in the hands of party leaders, yet also decentralized in the hands of subcommittee chairs. Under the Republican revolution ushered in by a newly elected Speaker, Newt Gingrich of Georgia, in 1995, power again became concentrated in the hands of the party leaders, with the seniority system's impact weakened. The 1990s also saw intense party polarization within Congress, which strengthened the hands of party leaders by enhancing their ability to command party loyalty as party-line voting became the modus operandi, especially in the House. That parties in Congress had become inextricably linked to campaigns rather than policy per se is evident in the elaborate leadership structure that evolved by the time of the 109th Congress, on the part of both the Democrats and the Republicans, especially in the House but also in the Senate. Campaign finance-reform efforts, which have precluded public financing of congressional elections and in effect have served as an incumbent protection act, have placed a more coherent emphasis on national party unity in Congress, strengthening a certain type of party leader and party structure while at the same time strengthening the hands of incumbents. The 109th Congress saw a tremendous and sudden expansion of leadership posts in both the House and Senate, by both Republicans and Democrats. The growth cannot be explained solely by the size of the House or a desire to win back control of the House and the
Senate on the part of Democrats, since parallel structures emerged among Democrats and Republicans alike in both chambers. Among the leadership positions for House Democrats was an array of titles within the Democratic Congressional Campaign Committee, including 10 Regional Recruitment Chairs, a National Jewish Outreach Chair, a Frontline Democrats Chair, various vice chairs, and even the position of Chair Emeritus (posthumous) for the late Robert Matsui of California. Republicans included on their National Republican Congressional Committee positions for several cochairs for each of the following regions: West, Southeast, Midwest, East, and Central, as well as cochairs for Incumbent Support, Incumbent Retention, Incumbent Development, and Candidate Recruitment. On the Senate side, including leadership positions on the Democratic Senatorial Campaign Committee, there were more than 20 separate leadership titles that could be identified; the Republicans, in the majority, had at least 17 separate leadership titles. These titles do not include standing or select committee chairs. The rise in leadership posts focusing on campaign politics may come at the expense of policy. The 109th Congress was viewed by many as a "do-nothing" Congress, with both parties anticipating a potential change in party control of at least one house of Congress in the 2006 midterm elections. With Senate majority leader Bill Frist announcing a presidential bid before the 2006 elections, having already announced a self-enforced term limit on his Senate tenure and thus his ultimate departure and lame-duck status, there was little incentive for legislative action during the fall of 2006, and Frist virtually disappeared from the Senate. The 110th Congress saw a number of members announcing as early as January 2007 a run for their party's 2008 presidential nomination. The formation of so many leadership positions tied to partisan politics and elections will likely work against a traditional leadership structure focused on power, influence, and the making of policy within each chamber rather than on upward mobility outside each chamber and on the individual power bases of members. When he delivered his State of the Union Address before a joint session of the 110th Congress in January 2007, President George W. Bush noted an even more profound change that had taken place in the leadership of Congress. With a return to power by the Demo-
U.S. Speaker of the House Nancy Pelosi conversing with former vice president Al Gore (U.S. House of Representatives)
crats, for the first time in U.S. history, one of the three branches of government had a constitutional office headed by a woman. It had often been said that if a woman were to become president, she would first become vice president. Similarly, it was assumed that if an African American were to join the ranks of those heading a constitutional office, he or she would first serve in a lower office; William Gray was long speculated to be the heir apparent to the Speakership before he left the post of Democratic Party Whip to head the United Negro College Fund. In Congress, African Americans were able to gain seniority and move up the leadership ladder faster than women because of the opportunity structure presented by majority-black districts and a seniority system that helped African Americans gain ground; women did not see their numbers increase until the 1990s and had not accumulated the same seniority. Also, in the leadership structure, for the most part, women had been relegated to the post of secretary or vice chair of the Caucus or Conference. As had happened when women were first appointed to the president's cabinet, that one post became the only post to which women could be assured of appointment (for example, Labor, Education, or HEW/HHS). Representative Nancy Pelosi's election to the post of minority leader was fortuitous in that it paved the way for her to be elected Speaker when the Democrats became the majority party with the 2006 election. So, on January
23, 2007, President Bush could begin his State of the Union with the historic words, "And tonight, I have a high privilege and distinct honor of my own—as the first President to begin the State of the Union message with these words: Madam Speaker." See also party politics Further Reading Brown, Lynne P., and Robert L. Peabody. "Patterns of Succession in House Democratic Leadership: Foley, Gephardt, and Gray, 1989." Paper presented at the 1990 annual meeting of the American Political Science Association, San Francisco, California; Congressional Quarterly Weekly Report, weekly issues, 2006–07. Washington, D.C.: Congressional Quarterly Press; Davidson, Roger H., and Walter J. Oleszek. Congress & Its Members. 10th ed. Washington, D.C.: Congressional Quarterly Press, 2006; Hutson, James H. To Make All Laws: The Congress of the United States, 1789–1989. Washington, D.C.: United States Government Printing Office, Library of Congress, 1989; Martin, Janet M. Lessons from the Hill: The Legislative Journey of an Education Program. New York: St. Martin's Press, 1994; ———. The Presidency and Women: Promise, Performance and Illusion. College Station: Texas A & M University Press, 2003; Office of the Clerk of the House of Representatives. Available online. URL: http://clerk.house.gov; Pinkus, Matt. How Congress Works. 3rd ed. Washington, D.C.: Congressional Quarterly Press, 1998; Senate Historical Office. Available online. URL: http://www.senate.gov. The Washington Information Directory, 2006–2007. Washington, D.C.: Congressional Quarterly Press, 2006. —Janet M. Martin
congressional staffing Beyond providing for a bicameral, or two-house, legislative body, the U.S. Constitution specifies little organizational structure for Congress: The House of Representatives is to choose a "Speaker and other Officers," the vice president of the United States is to serve as president of the Senate, and the Senate is to choose a president pro tempore to preside when the vice president is absent, along with "other Officers" as needed. There is no specification as to these "other Officers" in either the House or the Senate.
Congress has developed as an institution over time, yet it has relied on formal assistance nearly from the beginning of its place in the federal government. Among the first of the legislative acts passed by Congress was legislation to set the compensation for members of the House and the Senate. Included in the legislation were provisions to compensate a chaplain for the House and the Senate, a secretary of the Senate, and a clerk of the House, plus additional clerks as needed when Congress was in session. In addition to these positions were officers of Congress who have continued to this day: a sergeant at arms, House and Senate doorkeepers, and other "laborers" as needed. While it would take more than 100 years before Congress would become a permanent source of employment for incumbents, Congress followed the model of a professional legislature even when not meeting year round in that it relied on a permanent set of assistants, as well as on the executive branch, for help. The ebb and flow in the relationship between the executive branch and the legislative branch as to which would dominate was part of an overarching structure that had more in common with today's members than most observers of modern national politics would think possible. In part, this might have had much to do with the part-time nature of the federal government in the early years and the establishment of boardinghouses to accommodate members of Congress, traveling judges, and executive branch appointees in Washington, D.C., in a government in which the lines separating the three branches were often blurred. Members of Congress and officials and staff of the executive branch lived and ate together, thus gaining the working familiarity with each other needed to advance the work of Congress and to assist constituents needing the help of the federal government (for example, in seeking patent protection that had been a state prerogative under the former Continental Congress). Once a two-party political system had been established, and before the institution of a civil service system in the 1880s, individuals seeking employment in the executive branch (which at the time consisted of far fewer cabinet departments and agencies and not nearly the number of employees it has today) sought out the endorsement and patronage of their representative if that representative was a member of the majority party.
In addition, because the Capitol was under construction and the House and Senate office buildings were built only later, private office space was at a premium. Physical structure played a role in defining how Congress grew as an institution. Committees were created since chairs of committees generally were assigned a private office and personal assistants. The Capitol was completed in stages. The first stage, creating the two chambers, was completed by 1819; the second stage, which gave the Capitol the expanded look it has today, was not completed until just before the Civil War, in 1859. The oldest and most established committees, such as Ways and Means in the House, were among the first to acquire staff assistants from the clerk of the House. Even into the 21st century, the assignment of staff to individual committee members is tied to the role of chairs and to the longevity and prestige of committees, with members of the appropriations committees routinely assigned additional committee staff, unlike members of other committees. Not until the 1890s were there more than 100 congressional staff. To this day, space is at a premium, especially on the House side of Capitol Hill, with staff occupying carved-out spaces in nooks and crannies, including spaces that in an earlier era may have served as a private restroom. The easy familiarity that House and Senate members may have had by virtue of working in close proximity, either in private premises or in the Capitol itself, came to an end as construction of the Capitol and then of new office buildings neared completion. The early 1900s also saw the rise of a permanent, year-round Congress, with members making the job a career. In 1890, the parties began to establish a leadership structure that would evolve more fully in the 1900s. With the Capitol finished and separate House and Senate office buildings completed, the groundwork was laid for the explosive growth of a more permanent and institutionalized role for staff in the 20th century. The development of both a committee structure and a staff structure preceded the emergence of a formal party-leadership structure, which may be a factor contributing to the uneasy tension members have always felt between following party leaders and pursuing a more entrepreneurial role as individual members, especially in working through
their own personal staffs in the 21st century and through committees and/or subcommittees. As will be discussed below, there were nonpartisan positions in Congress, but personal staffs for members were still a long way into the future. However, with holders of patronage positions in the executive branch owing some loyalty to members of Congress, the roots of the eventual "iron triangles" and "issue networks" of the 20th century described by a number of political scientists were clearly established. The Constitution gives Congress the power to raise revenues and, with it, the power of appropriations. Congress did such a good job in raising revenues through a system of customs and tariffs that the issue of a budget was not truly a problem until after World War I. In the late 1800s, Congress was faced with the enviable task of finding places to build roads, canals, and bridges in order to use the surplus of funds. However, by the end of World War I, deficits had come into the picture, and the prior pattern of departments sending budget requests directly to committees would no longer work. An experiment by Congress to join authorizing committees with the appropriations process also had failed in short order. To restore responsibility and control over the budget, Congress passed the 1921 Budget and Accounting Act, which created a Bureau of the Budget to assist the president in formulating a budget, based on recommendations from the departments, that would then be sent to Congress. The General Accounting Office (GAO) was created to assist Congress in auditing and analyzing executive branch expenditures, but Congress did not provide itself with a process or staff system, comparable to what it had provided the executive branch, for developing a full budget package and an analysis of revenues and expenditures. The president would gain the upper hand in a system that became less one of shared powers than of separated powers, at least in terms of budgeting. Congress relied on its system of committees and staff until a new budget process and staff structure was created to supplement the work of GAO, this time focusing on the budget itself. This came in 1974, with the creation of the Congressional Budget Office. Congress would pass budget resolutions to respond to the president's budget requests, engage in oversight of spending by the executive branch, and
reconcile revenues with expenditures. However, Congress also created new committees, the House and Senate Budget Committees, each with its own staff, while retaining its former committees engaged in the budget process: Ways and Means and Finance, which deal primarily with revenue issues in the House and the Senate, respectively, along with the authorizing committees created under the 1946 Legislative Reorganization Act and the appropriations committees in the House and the Senate. In 2004, the General Accounting Office (GAO) was renamed the Government Accountability Office under the Republican majority. Whether its role would take a different turn did not have much of an opportunity to develop before the Democrats reclaimed majority power in both the House and the Senate in the 2006 midterm elections. Shortly after the Executive Office of the President was established in 1939, following the 1937 recommendation of the Brownlow Committee that the president needs help in managing and running the government, Congress turned to these same themes to develop more fully the modern staff structure we see today. But even before the 1946 Legislative Reorganization Act gave Congress provisions for staff, personal staff assistants had begun to be assigned to members who had become committee chairs and, eventually, to all senators beginning in 1885 and to all House members in 1893, resulting in approximately 1,500 personal staff on the Hill by 1946. Up until this point, individual members of Congress had been dependent on the executive branch for assistance in crafting legislation. Constituent casework, a term that was to become widely known and used by members and eventually their constituents, was handled by the individual representative and an assistant. But since casework involved a member's interceding on behalf of a constituent to resolve a problem with the executive branch bureaucracy, a full complement of personal staff was not needed by members in either Washington or back in the districts until the New Deal programs of the 1930s were fully in place and, later, the Great Society programs of President Lyndon Johnson had passed in the 1960s (district staff offices proliferated throughout the 1970s and 1980s, and by the end of the 1990s, every member of the House and the Senate was expected to have an
ongoing presence in the district). The 1970 Legislative Reorganization Act brought about another major change in staffing patterns, with an increase in personal and committee staff for members and a focus on the provision of staff for subcommittee chairs and for minority party members. The bureaucracy and Congress were more closely tied together than is the case in the 21st century. Before this period, Congress had traditionally been dependent upon the executive branch for assistance. Congress also played a far greater role in defining and hiring personnel in the executive branch, with federal government workers, in seeking out patronage positions, beholden to the political currents in the House of Representatives in particular. Congress had early on established the Library of Congress for the research needs of members of Congress; its collection was rebuilt with the purchase of Thomas Jefferson's personal library after the original collection was burned in 1814. To this day the Library of Congress, while available to the nation, has as its first priority the needs of members of Congress. A division of the library, the Congressional Research Service (CRS), was designed to meet the research needs of members and committees and is nonpartisan in its service. For a long time, its reports were only available to members of Congress, and constituents gained access to its invaluable research reports through members and their staff. Now reports are available through a number of online databases, although the work of CRS is tailored to the needs of Congress. CRS holds training programs for staff on the range of policy areas that come before Congress to help committees in their oversight of the executive departments and agencies; trains staff on the use of databases available to members and their staff; and provides procedural training and manuals on everything from setting up an office to introducing legislation or responding to constituent requests for flags that fly over the Capitol. The Office of Technology Assessment (OTA) had a relatively short life as a staff agency for Congress. OTA was created in the 1970s, but unlike the GAO or CRS, the work of OTA was done at the request of individual committees of Congress and not individual members. This may explain its demise in 1995, at the start of the 12-year period of Republican control that lasted until 2007. From the time of OTA's creation until the time the Republicans won back control of the House of
Representatives, there never had been a time when a Republican had chaired a House committee and therefore been in the driver's seat in requesting work products from OTA. With Republicans in the minority for years and not controlling committee agendas, OTA, in spite of its nonpartisan nature, might not have been found to be as useful to individual members as CRS. Staffs today include individual employees hired from staff budgets for each member, in addition to the committee and institutional staff described above, as well as interns and fellows paid from other nongovernmental sources and senior executive-branch staff detailed to offices and committees. Further Reading Hutson, James H. To Make All Laws: The Congress of the United States, 1789–1989. Washington, D.C.: Library of Congress, 1989; Martin, Janet M. Lessons from the Hill: The Legislative Journey of an Education Program. New York: St. Martin's Press, 1994; ———. The Presidency and Women: Promise, Performance and Illusion. College Station: Texas A & M University Press, 2003; Ornstein, Norman J., Thomas E. Mann, and Michael J. Malbin. Vital Statistics on Congress 1997–1998. Washington, D.C.: Congressional Quarterly, Inc., 1998; Polsby, Nelson W. "The Institutionalization of the U.S. House of Representatives." American Political Science Review 62 (1968): 144–168, reprinted in Herbert F. Weisberg, Eric S. Heberling, and Lisa M. Campbell, Classics in Congressional Politics. New York: Longman, 1999; Ragsdale, Bruce A., ed. Office of the Historian. Raymond W. Smock, Historian and Director. Origins of the House of Representatives: A Documentary Record. U.S. House of Representatives. Published in Commemoration of the Two-Hundredth Anniversary of the First Congress, Washington, D.C.: U.S. Government Printing Office, 2000; Rosenbloom, David H. "John Gaus Lecture: Whose Bureaucracy Is This, Anyway? Congress's 1946 Answer." PSOnline (October 2001). Available online. URL: www.apsanet.org/imgtest/2001WhoseBureaucracy.pdf. Accessed June 20, 2008; Rourke, John Francis. "The 1993 John Gaus Lecture: Whose Bureaucracy Is This, Anyway? Congress, the President, and Public Administration." PS: Political Science and Politics 26, no. 4 (December 1993): 687–691; Shuman, Howard E. Politics and the Budget: The Struggle between the President and the
Congress. Englewood Cliffs, N.J.: Prentice Hall, 1988; Smith, Steven S., Jason M. Roberts, and Ryan J. Vander Wielen. The American Congress. 4th ed. Cambridge, England: Cambridge University Press, 2006; U.S. Congress. 103rd Congress, first session, Joint Committee on the Organization of Congress. 1993. “Background Materials: Supplemental Information Provided Members of the Joint Committee on the Organization of Congress.” S. Prt. 103–55, Part IV. —Janet M. Martin
constituency In the U.S. political system, a constituency is a group of people linked to a public official by means of elections. In a representative democracy such as that found in the United States, citizens do not participate in the government themselves. Rather, they select people, known as representatives, to act for them. In the United States, that process is generally an election. Representatives are responsible to the people, or constituency, that put them into office, and representatives generally can remain in office only so long as they satisfy their constituents. While the term constituency has a very specific meaning in terms of elected officials, the word is often used in different contexts. The idea of a constituency is closely tied to the concept of representative government. A constituency presupposes the notion that someone is acting on behalf of others or that someone is representing others. For instance, one does not normally think about a king's subjects as a constituency. It is true that a king acts on behalf of his subjects, but the subjects have no control over the king's actions as a constituency does. People cannot vote a king out of office. One of the key elements of a constituency in the United States is that the constituents can hold their representatives accountable. It is worth noting that the U.S. idea of a constituency exercising accountability through elections is not the only understanding of the term. Sometimes, a government perceives citizens as constituents even though there is no mechanism of accountability. For instance, in Great Britain, one of the legislative bodies is called the House of Lords. Members of the House of Lords hold their seats by appointment or heredity, and they are not subject to elections. Yet it is not uncommon to hear discussion of the constituency of the
people of Great Britain. In other words, because the members of the House of Lords are supposed to act in the best interests of the citizens of the country, those citizens can be perceived as a constituency. While U.S. representatives, too, are charged with making decisions that further the interests of their constituents, the electoral connection between representative and constituent is the defining characteristic of a constituency in the United States. In discussions of representation, arrangements in which people do not exercise electoral control over their representatives are termed virtual representation, while actual representation features electoral accountability. Of course, the concept of a constituency in the United States is not quite that simple. While a representative's constituency is most often defined strictly as the people who elected him or her, sometimes the term is more broadly used here, as well. For instance, a senator is electorally linked to the residents of a particular state. Obviously, that is the formal constituency. Yet, in keeping with the Senate's role as the more deliberative body, senators are also perceived to possess a more national perspective. Their authority over national issues such as treaty approval and appointments, for example, suggests that senators are supposed to consider the interests of the entire nation, as well as those of their own states. A senator's relationship with the national constituency is not the same as the one with the state constituency since only the constituents of the state can exercise accountability. In a formal sense, the constituency is everyone who resides in the geographic region that the elected official represents. So a Wisconsin senator's constituents include all Wisconsin residents. In practice, even the formal understanding of constituency is more complex. Richard Fenno has done extensive research on representatives and their constituencies, and he has identified four different constituencies that are important for every representative. These constituencies can be pictured as a set of nesting concentric circles. In other words, the largest circle, or constituency, contains the other three, while the smallest is contained within all the others. He finds that the relationship between a representative and the different constituencies varies widely: All constituents are not treated the same. The first is the most obvious—the geographic constituency. This is the formal definition of constituency—
the group to which the representative is accountable. The notion of a geographically bounded space implies that the residents share important characteristics that need to be represented in political debate. While this is often true, it is also the case that many geographic constituencies are quite heterogeneous, which makes it difficult to talk about a unified constituency interest. Sometimes, the interests of one section of the geographic constituency conflict with the interests of another section. For example, the coal industry is very important in some districts of West Virginia. In these places, one can talk about furthering the interests of the constituency by supporting policies favorable to the coal industry. On the other hand, some constituencies may include rural and urban areas, such as Illinois, for example. A senator from Illinois has to represent the farming communities in the south part of the state, as well as the city of Chicago, one of the largest cities in the country. Farming policy may be significant for the rural constituents but might not be relevant at all to the city residents. In fact, on an issue such as farm subsidies, the two groups may disagree vehemently. Another problem relates to how constituencies are created. It is often the case that district boundaries are drawn to advantage a certain political party and may have little to do with joining people with common traits and interests. These heterogeneous constituencies can be very challenging for representatives. The second constituency is the reelection constituency. These are the people who voted for the representative. A representative views this group quite differently than the entire geographic population. There are many people within a representative’s district or state that did not vote for him or her. Some of them voted for the opposition. Some people did not vote at all. While an elected official represents all of these people, certainly a representative is going to be more attentive to the interests of supporters than opponents. Every representative knows that he or she cannot satisfy all members of the geographic constituency. For representatives to win reelection and stay in office, they need to maintain the support of a majority of the voters. Keeping supportive constituents happy is an important element of representative behavior. This is not to say that representatives ignore all of their other constituents. However, there are some constituents who will never support certain representatives, whether they are from a different political
party or disagree on specific policy issues. For instance, a pro-life voter may never support a pro-choice candidate. Rather than wasting valuable effort courting constituents who cannot be satisfied, a representative is apt to focus on people who supported him or her in the past. The third constituency is called the primary constituency. These are people who are very strong supporters and usually become involved beyond the act of voting for a representative in a general election. These constituents can be counted on to support a representative solidly over time. They come out and vote in the primaries, and they are the constituents who volunteer their time to help the candidate. Closely related is the fourth and final constituency—the personal constituency. This is the smallest group and can amount to a mere handful of constituents. These people are also solid and sustained supporters of a representative, but they are even closer to the representative. They comprise his or her closest advisors, and a representative counts on them for advice and strategy. From the perspective of a representative, a constituency is not simply a collection of people. A representative perceives all of these different types of constituents and responds differently to individuals in each group. While representation is described as a process by which an elected official represents an entire constituency, representatives in practice distinguish between people based on their level of support. The concept of a constituency is traditionally associated with the legislative branch. Most research on constituencies focuses on the relationship between legislators and their constituents. However, the term is applied much more broadly in the United States. For instance, the U.S. president is often portrayed as a representative of the U.S. public, and it is common for presidents and observers to speak about a president’s constituents. While the president was not originally conceived as a popularly elected office, the advent of primaries and a general democratization of the position have served to firmly link the president to a national constituency. While presidents are still selected by the electoral college, the views of the voters play a significant role in who is elected. Of course, one can see that the U.S. public at large is an incredibly diverse constituency, and it is difficult for any president to respond to specific preferences and interests in a meaningful way. Often, the relationship
between president and constituency is characterized in an overarching sense that rises above specific policy views. In other words, the president can be said to represent the national interest or the public good, a description that attempts to unify the disparate elements over which he or she governs. On a practical basis, however, presidents are still accountable to diverse constituents (at least for two elections), and they are constantly trying to build majority coalitions out of the diversity of interests in the national constituency. For instance, President George W. Bush does not court pro-choice constituents at all because he is pro-life. Other constituencies that are important to Bush include fundamentalist Christians and conservative Republicans. All elected officials face the task of maintaining support among constituents, but for the president, the size of the constituency makes the task even more daunting. There is one other usage of the term constituency that bears discussion. Until now, the concept of a constituency has implied a clear connection between representative and constituent. The representative is held accountable to the constituency in the United States by means of elections. Sometimes, however, the term constituency is used even when electoral accountability does not exist. For instance, Richard Neustadt in his classic book Presidential Power refers to foreign governments as one element of the president's constituency. One can see why he would do so. As the leader of a powerful nation, the U.S. president must negotiate policy decisions with other nations, especially when the United States wants cooperation from them. It is important to note, however, that this is not the formal understanding of constituency. The U.S. president is not accountable to other countries in the same way that representatives are accountable to their constituents through elections. While there is an element of bargaining that goes on between nations, and a president must sometimes satisfy foreign preferences to achieve his or her own goals, this is not, strictly speaking, a constituent relationship. In one sense, the concept of a constituency is extremely simple. It is a group of people, usually organized geographically, that is represented by an official whom they have chosen in an election. In trying to describe any single constituency, however, the matter becomes much more complex. In a country as diverse as the United States, most constituencies contain a
plethora of viewpoints, and a representative is held accountable to all of these. Some representatives manage better than others, and the various strategies representatives use to address constituents are best left to a discussion of representation. While sometimes the term is used to describe any political relationship in which an individual or group speaks on behalf of others, the element of accountability is key in defining a substantive constituency in the United States. See also casework; incumbency; pork-barrel expenditures. Further Reading Fenno, Richard F., Jr. Home Style: House Members in their Districts. New York: HarperCollins Publishers, 1978; Mayhew, David R. Congress: The Electoral Connection. New Haven, Conn.: Yale University Press, 1974; Pitkin, Hanna Fenichel. The Concept of Representation. Berkeley: University of California Press, 1967. —Karen S. Hoffman
delegation of legislative powers In the modern world, across systems and across regions, executives have advanced in power while legislatures have declined. In Great Britain, the Parliament has given way to the stronger central core executive; in the United States, the powers of the presidency have grown as the powers of the Congress have shrunk. This story is repeated in system after system, country after country, and region after region. Some argue that the forces of modernization have been unkind to legislative assemblies; others claim that executives have a built-in adaptation capacity sorely lacking in legislatures. The modern world, with its dramatic advances in technology, has indeed made the world smaller and faster. Communication is instantaneous, travel much faster than 50 years ago, weapons technology has increased the potential risks as it has heightened the potential damage of warfare, human migration is altering borders, climate changes are threatening ways of life, the spread of disease knows no borders—the list could go on and on. To face these fast-paced pressures, a government needs to be flexible, adaptable, and able to move
quickly. The executive is capable of moving quickly, while the legislature is not. This creates an "adaptation crisis" for the Congress as events pass it by while legislators, mired in arcane procedures and rules of order, debate and discuss the issues. This leaves it to the executive to act, and when the president acts, he or she often preempts Congress and compels it to follow that decision. If the Congress continues to be a slow, deliberative body, a power vacuum might be created. Nature and politics abhor a vacuum, and more often than not, it is the executive who is left to fill that vacuum. In such cases, power drifts or is delegated to the presidency and away from the Congress, and power, once ceded, is rarely returned. If the Congress is to reclaim its lost or delegated powers, it must find a way to make the institution more adaptable, more streamlined, and more modern. History is not on Congress's side in this issue, but there have been times when the Congress has been organized to lead. This occurs when there is strong leadership at the helm, but to be truly effective, both the House of Representatives and the Senate must have strong leadership and party cohesion, and that rarely occurs. For a brief time, the Republican-controlled Congress led by Speaker Newt Gingrich, from 1995 until 1998, was able to capture the political agenda and reclaim some power. Following the 1994 midterm elections in which the Republicans won control of both houses of Congress, Gingrich, armed with the agenda on which the Republicans had staked their electoral fate, the "Contract With America," was able to enforce party discipline and lead a united party in a series of policy victories over the embattled Democratic president, Bill Clinton. But this congressional resurgence did not last, as scandals and challenges to Gingrich's leadership led to the Speaker resigning his leadership post and leaving Congress in late 1998. Legislatures arose in a quieter, slower-paced time. As "deliberative" bodies, they are hard-wired to debate, discuss, bargain, and compromise—all of which takes time, time that the modern world may not afford a nation. Legislatures are accustomed to and capable of doing business in a slower, more deliberate manner. The legislative process is often slow, cumbersome, and full of delays and roadblocks. It may be better suited to the 18th rather than the 21st century. Some critics have gone so far as to argue that legisla-
tive assemblies no longer serve the purposes for which they were established and may in many ways be relics of a quieter and slower past. The executive, on the other hand, is a “modern” institution, capable of acting with speed and certainty. It is a more adaptable and flexible institution because with one strong and steady hand at the helm it is possible to move quickly in new directions. It should not surprise us then, that in the modern era, the executive has often dominated the legislature. How does this relate to the delegation of power? Delegation means the empowering of one to act for another. The delegation of legislative power is the empowering of the executive branch (the president or a government agency) to act or make decisions for the legislature. Laws are written in general terms and cannot account for every contingency; therefore some amount of discretion or delegation is necessary. Laws are written in broad, not specific language; they set broad goals but only rarely give minute guidance. Small or administrative matters often must be delegated to others who have the responsibility for implementing the will of the legislature. But such delegation can raise alarms. If delegation is too broad or if the administering agent goes too far afield of the intent of the legislature, the letter and the spirit of the separation of powers may be violated. And worse—there have been times when executive branch officials have knowingly attempted to frustrate or change the legislative intent when implementing laws. Worse still— there have been times when the Congress has willingly ceded or delegated what are clearly congressional powers to the executive, as was the case in 1921 when Congress, unable to present federal budgets in a timely fashion, gave or delegated to the executive, the right to present to the Congress an executive-generated budget. This gave to the president a tremendous amount of power: The power to present the budget is the power to shape the debate over federal spending and priorities. More recently, this occurred in the aftermath of the September 11, 2001, terrorist attacks when the Congress gave to the president the authority to take whatever steps he deemed appropriate to strike back at the terrorists. Article I, Section 1 of the U.S. Constitution states that “All legislative powers herein granted shall be vested in a Congress of the United States . . . ,” but how much of this authority may Congress delegate to
others? As stated, some modicum of delegation is vital to the proper functioning of government, but how much and within what parameters? While the courts have recognized a need for delegation in some areas, on the big issues they hold to the legal maxim delegata potestas non potest delegari, or "delegated power cannot be delegated." Translated into more common language, that means that if the Constitution gives a power to one branch, that power cannot be given to another, and as the British political philosopher John Locke wrote in The Second Treatise on Civil Government, "The legislative cannot transfer the power of making laws to any other hands; for it being a delegated power from the people, they who have it cannot pass it over to others." Therefore, when an act is considered "making" law, it cannot be delegated from one branch to another. When such delegation involves the legitimate implementation of congressionally passed law (administrative rulemaking to better implement the will of the legislature), a government agency has greater flexibility and discretion. But the limits and distinctions are often blurry. This is especially true in foreign policy, where, for example, the U.S. Supreme Court recognized in United States v. Curtiss-Wright Export Corp. (1936) that the Congress may and must at times give to the president greater discretionary power "which would not be admissible were domestic affairs alone involved." But even in the domestic realm, the Supreme Court has recognized that discretion and delegation must be limited. In Panama Refining Co. v. Ryan (1935), the Supreme Court ruled invalid legislation that authorized the president to prohibit interstate shipment of oil production that exceeded state quotas, on the ground that it was an unconstitutional delegation of legislative authority to the executive; in Schechter Poultry Corp. v. United States (1935), the Supreme Court struck down provisions of a law that allowed the president to establish "codes of fair competition" in business. But over time, the wall of delegation has eroded, and the Congress and federal courts have been more lenient in recent years in allowing the delegation of power from the legislature to the executive. Many feel the War Powers Resolution, approved by Congress in 1973, is an example of the Congress delegating its constitutional war-making authority to the executive. After all, the Congress gives
the president a certain amount of time in which he or she can commit U.S. troops to combat without the prior approval of Congress. Is this delegation to the president constitutional? As this has not been tested in court, we do not know how the judiciary might respond to such a claim, but even so, this lends support to the view that in the modern period, Congress has often willingly ceded to the president many powers expressly granted to the Congress in what may be an unconstitutional delegation of congressional power to the executive. Modern Congresses have been delegating powers to the executive for the past 75 years. At times, Congress tries to take back some of its delegated powers, as was the case with the War Powers Resolution of 1973, but usually to no avail. Power once given is difficult to regain. That Congress feels the need to give its powers to the president is a sign of problems within the Congress, problems that cannot be corrected by delegating this or that power to the executive. The delegation of power to the executive is a symptom of deeper problems that the Congress has in becoming a “modern” institution, capable of exercising leadership and authority. In the absence of reform of the Congress, we are unlikely to see the Congress reclaiming (and retaining) very many of its lost powers. This allows the executive branch to swell in power and undermines the system of checks and balances that is so vital to the effective functioning of the separation of powers system in the United States. Further Reading Barber, Sotirios A. The Constitution and the Delegation of Congressional Power. Chicago: University of Chicago Press, 1975; Kerwin, Cornelius M. Rulemaking: How Government Agencies Write Law and Make Policy. Washington, D.C.: Congressional Quarterly Press, 1999; Schoenbrod, David. Power without Responsibility: How Congress Abuses the People Through Delegation. New Haven, Conn.: Yale University Press, 1993. —Michael A. Genovese
districts and apportionment Most legislative representation in the United States, at both the federal and state levels, is based on members elected from distinct geographic districts. Con-
gress has required single-member district elections since 1862, meaning that only one person may be elected from each district. At the state level, 13 states have at least some of their legislators elected from multimember districts. District-level representation allows constituents to have close ties to their elected officials and makes it easier to represent the district as a cohesive geographic unit. Congress first required House members to be elected from single-member districts in 1842. Prior to that date, six states were electing their entire congressional delegation in statewide elections. The single-member law was repeatedly ignored, however, and Congress continued to seat elected delegations. In 1850, Congress dropped the single-member district requirement but then reinstated it in 1862 and in all subsequent apportionment laws. Apportionment is the process of determining how many seats a state has in the House of Representatives. The U.S. Constitution specifies that House seats be distributed by population, with every state guaranteed at least one representative. The goal of the apportionment process, as put by the clerk of the House of Representatives, is "dividing representation among the several states as equally as possible." The difficulty is that there is no foolproof method of allocating members in a way that satisfies every aspect of fairness. Although most methods tend to produce similar results, some have the potential to produce paradoxical outcomes, in which, for example, states lose seats even if the size of the legislature increases. After the 1880 census, the chief clerk of the Census Office discovered that under the apportionment method then used, Alabama would receive eight seats if the House had 299 members, but only seven if the House had 300 members. This anomalous result was dubbed the "Alabama Paradox" and is only one of a general class of problems that can emerge from different apportionment methods. The method now used is called the "method of equal proportions." It involves a complex set of calculations that begin by awarding each state one representative and then calculate a "priority value" for the 51st through 435th congressional seat. A state's priority value is a function of the number of seats it has already been awarded and of its overall population, according to the following formula (a brief computational sketch appears at the end of this entry): priority value = population ÷ √[N(N − 1)], where N is the number of seats a state has been
awarded plus 1. This method has the effect of giving greater weight to smaller states, which are somewhat more likely to be awarded a second or third seat than California is to be awarded a 53rd seat. The apportionment method does not resolve all disputes since it relies on census population figures. After the 2000 Census, Utah received three House seats. A tiny population increase—a few hundred people—would have resulted in Utah receiving an additional seat; instead, this last seat was awarded to North Carolina, using the method of equal proportions. Utah sued, claiming that the Census Bureau’s method of calculating population understated the actual number of Utah residents. In one lawsuit, the state claimed that the Census Bureau
should have counted overseas Mormon missionaries as Utah residents in the same way that overseas military personnel are counted as residents of their home states. This claim was rejected by a federal district court. In a second action, Utah argued that the census method of estimating the population of homes where the actual number of residents is unknown was illegal. This claim was rejected by the U.S. Supreme Court in Utah v. Evans 536 U.S. 452 (2002). The "apportionment revolution" of the 1960s changed the entire process of redistricting. In a series of landmark decisions [including Baker v. Carr 369 U.S. 186 (1962); Reynolds v. Sims 377 U.S. 533 (1964); and Wesberry v. Sanders 376 U.S. 1 (1964)], the Supreme Court held that both congressional and state
legislative districts must have equal population. Unequal district size, the Court ruled, violated the Fourteenth Amendment guarantees of equal protection by giving voters in smaller districts more voting power than those in more populous districts. As a consequence, after each census, states are required to redraw congressional and state legislative district lines to ensure nearly equal populations. Redistricting maps are evaluated on the basis of several criteria that can be used to compare and assess competing plans. Districts typically are contiguous, meaning that every part of the district is attached and that there are no separated “islands” or disconnected areas. Districts are, in theory, supposed to be “compact” or formed of regular geometric shapes and not dispersed over broad and irregular areas. Where possible, districts should also respect subsidiary political boundaries, such as county and city limits. Districts should keep “communities of interest” or areas that share particular political or economic interests together. The Voting Rights Act of 1965 imposes additional obligations on states and prohibits redistricting plans that reduce the voting power of certain racial or language minorities. In practice, district lines are increasingly drawn to maximize political advantages for one or the other major political party. The practice is not new—the term gerrymandering arose during the 1812 round of redistricting in which Massachusetts Governor Elbridge Gerry approved a plan that packed Federalist voters into an irregularly shaped district. In recent redistricting cycles, critics have charged that the process has become increasingly politicized, with parties using sophisticated software and computing power to draw finely crafted districts that extract every possible partisan advantage or in which the parties strive to protect incumbents. There is an increasing scholarly consensus that partisan redistricting is a major cause of declining congressional and state legislative competitiveness. To date, there have been no successful court challenges to partisan gerrymandering. In a 2004 case (Vieth v. Jubelirer 541 U.S. 267), the Supreme Court refused to overturn a Pennsylvania congressional redistricting plan, enacted by legislative Republican majorities, that clearly benefited Republican candidates. Nationally prominent Republicans were open about their intention to “adopt a partisan redistricting
plan as a punitive measure against Democrats for having enacted pro-Democrat redistricting plans elsewhere" (Vieth, Scalia opinion, p. 2). The Court refused to overturn the plan, ruling that there were no clear standards for measuring the partisan fairness of a redistricting plan. Further Reading Lublin, David, and Michael McDonald. "Is It Time to Draw the Line? The Impact of Redistricting on Competition in State House Elections." Election Law Journal 5, no. 2 (2006): 144–157; Martis, Kenneth C. The Historical Atlas of United States Congressional Districts, 1789–1983. New York: The Free Press, 1982; Monmonier, Mark. Bushmanders and Bullwinkles: How Politicians Manipulate Electronic Maps and Census Data to Win Elections. Chicago: University of Chicago Press, 2001. —Kenneth R. Mayer
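The ranking logic of the method of equal proportions can be made concrete with a short computational sketch. The code below is not part of the original entry or of the Census Bureau's official procedures; it is a minimal illustration, under simplified assumptions, of the priority-value formula (priority = population ÷ √[N(N − 1)]), and the three state populations it uses are invented solely for demonstration.

```python
import heapq
import math

def apportion(populations, total_seats):
    """Sketch of the method of equal proportions: give every state its
    guaranteed first seat, then award each remaining seat to the state with
    the highest priority value, population / sqrt(n * (n - 1)), where n is
    the seat that state would receive next."""
    seats = {state: 1 for state in populations}  # constitutional minimum of one seat
    heap = []
    for state, pop in populations.items():
        n = 2  # the next seat this state could receive
        heapq.heappush(heap, (-pop / math.sqrt(n * (n - 1)), state, n))
    for _ in range(total_seats - len(populations)):  # seats 51 through 435 in the real House
        _, state, n = heapq.heappop(heap)
        seats[state] += 1
        n += 1
        heapq.heappush(heap, (-populations[state] / math.sqrt(n * (n - 1)), state, n))
    return seats

# Hypothetical populations, used only to demonstrate the calculation.
example = {"State A": 9_000_000, "State B": 3_000_000, "State C": 1_000_000}
print(apportion(example, total_seats=13))  # {'State A': 9, 'State B': 3, 'State C': 1}
```

With these invented figures the sketch awards nine, three, and one seats. Because every seat beyond the first is handed out strictly by rank of priority value, a very small change in one state's population can flip the ordering for the final seat, which is exactly the situation described above in the Utah and North Carolina dispute after the 2000 census.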
divided government When the presidency has a president of one major party and at least one of the chambers of the Congress is held by the majority of the other party, it is known as divided government. The idea behind the name is that the government is divided between the two major party philosophies in attempting to govern. To understand the concept of divided government, a brief description of the role of political parties in the U.S. government is necessary. Since the founding, U.S. politics has generally grouped itself into two major factions of political philosophies, from the Federalists and the anti-Federalists at the time of the ratification debates to the modern day Democrats and Republicans. These parties provide loose coalitions of governing philosophies that allow elected officials to work together in formulating policy. When the president and the Congress are of the same political party, they are essentially of the same governing philosophy and thus, the idea follows, can come more easily to agreement on policy formulation. Since political parties help smooth the interaction between the branches in a deliberately designed slow and fragmented system, the partisan structure can have a major influence on the ease of policy formulation at all levels of government. Unified government occurs when the same party holds the majority in all the
branches, thus making the separated system operate more smoothly, at least in theory. Divided government can often result in less cooperation between the branches. This situation has begun to happen with more frequency in relatively modern times as the electorate moves to more split-ticket voting. The U.S. political system has also seen a rise in candidate-centered campaigns that add to divided government, as voters then vote for an individual person more than for a governing philosophy. This means that members of Congress and the president are elected as individuals, so voters are not necessarily thinking about voting for a party. Consider that Republican George W. Bush and Democrat John Kerry barely mentioned their respective parties while running for the presidency in 2004. As a result, it is easier for voters to disconnect their vote for a Congress member from their vote for the president. Although political parties have always been a part of the U.S. political system, the national government has not always involved divided government. From the beginning of the nation through the 1820s, there was no divided government. It was prevented as a result of the election process that selected presidential nominees. During this time, congressional caucuses were responsible for choosing the candidate for the general election. The caucus would select a person who would be in line with its governing philosophy. Then the political party would run its governing philosophy against the other option, and between these two, the voters would choose. As a result, the president was never a member of the opposite party from Congress. Voters would select the governing philosophy they wanted, and all the elected officials would generally match up with the governing philosophy of the majority of the population. During the 1820s, as a result of President Andrew Jackson's effect on national politics, the congressional caucuses no longer selected presidential candidates. The political parties began to have national conventions where delegates would arrive from all over the nation to select the presidential candidate themselves. In this, Jackson had achieved a bit of a democratic victory by bringing people into the system in a more direct way. This did not, however, change the situation of the political parties offering a choice between two major governing philosophies. It did allow more
opportunity for divided government when a particular presidential candidate was able to sell himself as a candidate to choose regardless of the majority party situation (an example would be President Woodrow Wilson’s election in 1912). In the latter half of the 20th century, a shift occurred in the institutional structures, and a shift toward more independent voters made it possible for divided government to start to become the norm more than the anomaly. The parties started to reform the way they make their selections. For example, they incorporated the use of primary elections to bring more people into the selection process. Again, this is a democratic step away from the elitism of the early country. Once the party moves away from the more elite members making the decisions about nominees, the nominee can go more directly to the voters for the decision. We begin to see our political system shift toward a candidate-centered campaign rather than the party-centered governing philosophy campaign. Candidates formulate their policy positions early so that they can run for office. Their messages and methods for delivering them are polished before the party is even in control of them. As a result, voters have seen more divided government. Congress members run against their parties and even against the institution itself. Even though Congress still has high party unity inside the institution, the members can come home to their districts and sell an entirely different message to their constituents who are electing them. Another argument about why voters have been getting more divided government of late is the idea that voters are selecting a divided government because moderate voters tend to believe that the most reasonable policy happens when the parties are split. As a result, they are choosing to have divided government as much as possible. Since 1832, when it was first possible to have divided government at the federal level, through the 2004 election, U.S. elections have resulted in 53 instances of unified government and 33 instances of divided government. Most of these have happened in the later phases of the 20th century, and the more voters move toward democratic methods of selecting party nominees, the more divided government will occur. Even as the nation becomes more polarized, the opportunities for more instances of divided government will increase. As a result, the possibilities
for running the separated system smoothly will decrease. The literature on political parties is divided on the implications for U.S. government when the system is operating in an instance of divided government. One argument is that the system becomes significantly blocked and the executive branch can run amok when each party controls one branch of government. With regard to the first claim, the argument follows that it is difficult enough to legislate in the separated system and that adding divided government actually blocks significant legislation. Further, the president has been found to be more likely to oppose items coming out of Congress under divided government than under unified government. This means not only that the system works less easily under divided government but also that one branch is actively working to slow the system down and hinder its progress. When a bill is controversial, all the stars need to be properly aligned for the system to work. Second, if there is no party trust connecting the presidency to the Congress, then the latter branch cannot oversee what is happening as easily. The president is more disposed not to include the Congress in conversations about governing. On the other hand, some scholars have noted that there is no difference in the governing product between divided and unified government. This argument says that the branches actually keep on top of each other more when divided and that the separated system causes the legislative capacity to slow down on its own. The party situation neither hinders nor eases the process. Another implication for U.S. government when the system is divided is the accountability issue. Presidents with large legislative majorities tend to enjoy the most success in Congress. Basically, if the president is of the same governing philosophy as the majority party in Congress, he or she tends to get his or her legislation passed. This makes sense, as his or her party would agree with him or her on the governing philosophy. This is good from the perspective of the party that is ruling because its policy agenda can move through the legislative system easily and possibly even quickly. It is not necessarily a good thing, however, if something goes wrong in the implementation of said policy. If there is divided government, then the president can always blame Congress for the
failures and vice versa. Divided government thus allows elected officials to hide behind the partisan structure instead of being accountable for the consequences of their policy making. In this way, divided government can also be problematic for U.S. government and a democratic system. Another potential problem with divided government is that the longer the government runs under divided government, the less citizens tend to vote. Because divided government mutes clear policy preferences, it is unclear what the government is doing. The more unclear it is, the less likely citizens are to pay attention and engage in politics. Therefore, divided government can have an adverse effect on the efficacy of citizens which can be detrimental to the democratic system as well. Further Reading Carsey, Thomas M., and Geoffrey C. Layman. “Policy Balancing and Preferences for Party Control of Government,” Political Research Quarterly 47, no. 4 (December 2004): 541–550; Edwards, George C., III, Andrew Barrett, and Jeffrey Peake. “The Legislative Impact of Divided Government,” American Journal of Political Science 41, no. 2 (April 1997): 545–563; Fiorina, Morris P. “An Era of Divided Government,” Political Science Quarterly, 107, no. 3 (Autumn 1992): 387–410; Franklin, Mark N., and Wolfgang P. Hirczy. “Separated Powers, Divided Government, and Turnout in U.S. Presidential Elections,” American Journal of Political Science 42, no. 1 (January 1998): 316–326; Mayhew, David R. Divided We Govern: Party Control, Lawmaking, and Investigations 1946–1990. New Haven, Conn.: Yale University Press, 1991; Nicholson, Stephen P., Gary M. Segura, and Nathan D. Woods. “Presidential Approval and the Mixed Blessing of Divided Government,” The Journal of Politics 64, no. 3 (August 2002): 701–720. —Leah A. Murray
filibuster The filibuster is perhaps the most interesting parliamentary tool in the U.S. Congress. It is, simply stated, an attempt to delay, modify or block legislation in the senate. The most well-known version of the filibuster is a nonstop speech, famously exhibited by Jimmy
Stewart in the 1939 movie classic Mr. Smith Goes to Washington. But there are other versions of the filibuster as well. For example, senators might offer scores of amendments to bills or demand numerous roll call votes in the hopes of delaying measures indefinitely. Even the threat of a filibuster can be an effective way to gain bargaining power. The U.S. Senate typically follows four steps when a bill reaches its floor. First, the majority party leader secures a “unanimous consent agreement” of the Senate that specifies when a bill will be brought to the floor for debate, as well as the conditions for debating it. Second, the majority and minority party floor managers give their opening statements. Third, any amendments to the bill are proposed and debated. Fourth, a roll-call vote takes place on the bill’s final passage. The ultimate goal of a filibuster is to either prevent the final vote from happening (thus defeating the bill) or to modify the bill in such a way that it better reflects one’s policy preferences. One way in which a senator might accomplish this is by holding the floor indefinitely. Section 1(a), Rule XIX of the Standing Rules of the Senate states that “No Senator shall interrupt another Senator in debate without his consent.” What this means, in effect, is that one senator may speak during floor debate for as long he or she wants (or can). For example, in 1957 Senator Strom Thurmond of South Carolina set the record for the Senate’s longest filibuster when he held the floor for 24 hours 18 minutes trying to kill a civil-rights bill. Extended debate is unique to the Senate. The U.S. House of Representatives typically allows one hour of debate, equally divided between the minority and majority parties (for complex bills as many as 10 hours may be scheduled). Because of their strict debate requirements, filibusters do not occur in the House. Though filibusters have long been part of the Senate, unrestricted debate raised little concern during much of the 19th century. As Walter J. Oleszek notes, “the number of senators was small, the workload was limited and lengthy deliberations could be accommodated more easily.” But in time the Senate would increase in size, workload, and institutional complexity making extended debate more burdensome and the filibuster, in turn, more effective. In 1917, the Senate adopted Rule XXII which gave the Senate the formal means to end extended
debate. This procedure is referred to as cloture. Until that time, debate could only be terminated by the unanimous consent of all senators (an impossibility in the face of a filibuster) or exhaustion. After several revisions, Rule XXII now permits three-fifths of the Senate (60 members) to shut off debate. Once cloture is invoked, 30 hours of debate time remain before the final vote. Further, senators are permitted to speak for no more than one hour on a first-come, first-served basis. Besides shutting down filibusters, Rule XXII has one other important implication. It, in effect, makes the U.S. Senate a "supermajoritarian" institution because 60 votes are commonly required to enact major and controversial legislation (as opposed to the typical simple majority of 51 votes). The political reality is that, as Republican Majority Leader Robert Dole once stated, "everything in this Senate needs 60 votes." This all but requires the majority party of the Senate to reach out and collaborate with the minority party during the policy process. Little surprise that the Senate is often referred to as the "60-vote Senate." The number of overall filibusters has risen sharply since the 1970s. Although a number of explanations have been offered to account for the increase, two stand out in particular. First, Richard F. Fenno, Jr., a noted scholar of the U.S. Congress, argues that the Senate has transformed from a "communitarian" institution, where senators were expected to use extended debate sparingly and only for high-stakes national issues, to a more "individualistic" Senate. He notes, "Changing political processes—more openness, more special interest group participation, more media visibility, more candidate-centered elections, weaker partisan ties and party organizations—produced newcomers with an ever stronger sense of their political independence." These new individualistic tendencies produce legislators willing to push their own agendas even if the Senate's institutional activities grind to a halt. The second reason for the rise in filibusters is the increased partisanship of the U.S. Senate. The recent increase in partisanship in the U.S. Congress is well documented. Because of shifts in the electorate (among other things), congressional parties have become more cohesive internally as the differences between them have grown larger. According to Sarah
Binder and Stephen Smith, two noted scholars of the U.S. Senate, these changes have made holding together a party-backed filibuster significantly easier. Those who study the U.S. Congress generally find filibusters to be reasonably effective. For example, some scholars find that filibusters have killed legislation approved by a majority of senators at an increasing rate in the 20th century. Others show that legislation is less likely to be enacted when it encounters a filibuster in the Senate. Still others argue that Senate leaders have fought to alter the rules of the Senate to combat increasing obstructionism by minority coalitions. In fact, this last issue—that is, the right of Senate majority leaders to alter rules to prevent the minority party’s filibusters—was the source of an intriguing debate in the 2005–06 Senate. The Republicans (the majority party) were incensed by the Democrats’ continued use of the filibuster to thwart President George W. Bush’s federal court nominations. Senate Majority Leader Bill Frist (R-TN) threatened to employ a “nuclear option” (or “constitutional option” as the Republicans prefer to call it) in the 109th Congress to end filibusters against judicial nominees. Roger Davidson and Walter Oleszek explain one form of the nuclear option: A GOP senator raise[s] a point of order that further debate on a judgeship nominee is . . . out of order. The president of the Senate (the vice president—then Republican Dick Cheney) would then issue a parliamentary ruling sustaining the point of order, thereby setting aside and disregarding the existing Rule XXII (or 60-vote) procedure for ending a talkathon. If a Democratic senator appeals the ruling of the presiding officer, a Republican would move to table (or kill) the appeal, establishing a new, majority-vote precedent for Senate approval of judicial nominations. The Democrats threatened to create massive gridlock and stalemate in Senate decision making if the Republicans employed the nuclear option. Both tactics were eventually avoided when Senate moderates from both parties reached a last minute compromise on President Bush’s judicial nominees. This debate reflects two interesting developments in the U.S. Senate. First, it shows how the Senate has shifted away from its onetime norms of collegiality, civility, and accommodation. Today’s Senate is mostly characterized by intense partisan bick-
ering and animosity. Second, it speaks to just how contentious the filibuster has become. Defenders of the filibuster say it protects minority rights and permits thorough consideration of bills. Critics contend that it enables minorities to force unwanted concessions and bring the Senate to a halt. Needless to say, we have not heard the last of the debate over filibusters, especially as long as the Senate continues its partisan ways. See also legislative process; public bills. Further Reading Binder, Sarah, and Steven Smith. Politics or Principle? Filibustering in the United States Senate. Washington, D.C.: Brookings Institution, 1997; Davidson, Roger H., and Walter J. Oleszek. Congress & Its Members. 10th ed. Washington, D.C.: Congressional Quarterly Press, 2006; Fenno, Richard F. “The Senate Through the Looking Glass: The Debate Over Television.” Legislative Studies Quarterly 14: 313– 348, 1989; Oleszek, Walter J. Congressional Procedures and the Policy Process. Washington, D.C.: Congressional Quarterly Press, 2004; Sinclair, Barbara. Unorthodox Lawmaking. Washington, D.C.: Congressional Quarterly Press, 2000; Smith, Steven S. Call to Order: Floor Politics in the House and Senate. Washington, D.C.: Brookings Institution, 1989. —Michael S. Rocca
floor debate There are two types of floor debate in the U.S. Congress: legislative and nonlegislative. Legislative debate consists of periods in which members of Congress address current or pending legislation, such as during general debate in the House of Representatives and the Senate. Nonlegislative debate, on the other hand, consists of forums where members may address any topic they wish, be it policy or nonpolicy in nature. Examples of these forums are one-minute speeches and special-order addresses in the House and morning-hour debates in the Senate. Walter Oleszek, a noted scholar of the U.S. Congress, observes that floor debate in Congress consists mostly of set speeches rather than typical "give-and-take" debate. He discusses its symbolic and practical purposes. First, it assures both legislators and the public that Congress makes its decisions in a demo-
cratic fashion, with due respect for majority and minority opinion. A House Republican remarked, “Congress is the only branch of government that can argue publicly.” Indeed, tourists who visit Washington and feel that they should see Congress usually attend a debate session of the House or the Senate. Thus, floor debate has significant symbolic meaning. Oleszek also discusses some of its more important practical purposes in today’s Congress: “. . . [Floor] debate forces members to come to grips with the issues at hand; difficult and controversial sections of a bill are explained; constituents and interest groups are alerted to a measure’s purpose through press coverage of the debate; member sentiment can be assessed by the floor leaders; a public record, or legislative history, for administrative agencies and the courts is built, indicating the intentions of proponents and opponents alike; legislators may take positions for reelection purposes; and, occasionally, fence-sitters may be influenced.” Thus floor debate can be used by legislators to send signals to constituents, other branches or agencies, lobbyists or campaign donors, and even other members of Congress that they are effective and responsible legislators. This is important if they wish to be reelected. Further, legislators use general debate to take positions on important issues that their constituents care about. Indeed, the viability of general debate as a position-taking activity only increased after C-SPAN began to televise congressional proceedings in 1979. In addition, junior legislators can use floor debate to send signals to their party leaders that they are active and engaged legislators. Leaders might, in turn, reward them with promotions or desirable committee assignments. Moreover, debate can also be used to send “informational” signals. As one noted political scientist argues, Congress is organized to encourage efficient transmittal of information. Members of Congress, especially committee leaders, use general debate to explain complicated policy to other members of Congress. Although debate in Congress takes place in a variety of settings—during committee proceedings and nonlegislative forums, for instance—arguably the most recognizable forum is general debate. In the House, general debate is the first order of business after the Speaker declares the House resolved into
the Committee of the Whole. Thus, general debate is actually the second step after legislation is reported to the floor. The first step in bringing a major bill to the floor is adoption of a special rule issued by the rules committee. The House (not the Committee of the Whole) debates whether to adopt the rules committee resolution containing the conditions under which the bill will be considered. General debate on a particular bill occurs after the bill has been reported out of committee and sent to the floor of the House. One hour of debate is usually allowed for each bill, equally divided between the minority and majority parties and managed by members from the committee of jurisdiction (each party has a floor manager). For more complex legislation, as many as 10 hours of debate may be scheduled. Due to significant time constraints in the House, members on average rarely speak for more than two minutes at a time. It is the floor managers' responsibility to direct the course of debate on each bill. The manager on the majority side is usually the chair of the committee that reported the legislation to the floor. The senior minority member of the reporting committee is usually the manager for the minority side. The floor managers' roles are important to policy outcomes; effective management increases the chances for smooth passage. Their principal tasks are as follows: inform colleagues of the contents of the bill; explain the controversial issues in the bill; explain why the committee did what it did; and provide lawmakers with reasons to vote for the legislation. In some respects, general debate in the Senate is similar to debate in the House. For instance, debate on a bill occurs after it has been reported out of the committee of jurisdiction and is coordinated by floor managers representing the majority and minority parties. There are some significant differences between the Senate and the House debate proceedings, however. One important difference is that unlike general debate in the House, the Senate follows a principle of unlimited debate. Once a lawmaker is recognized by the presiding officer, that senator may hold the floor for as long as he or she chooses. Only when senators yield the floor may others be recognized to speak. There are four restrictions to unlimited debate, however: unanimous consent agreements that limit debate; invoking cloture, which stops debate by a 3/5
vote; a motion to table; or debate-limiting features actually built into bills. This opens the door to filibusters, a time-delaying tactic used by a minority to prevent a vote on a bill or amendment. Although there are a variety of types of filibusters, the most recognizable tactic takes the form of an endless speech. This tactic is not possible in the House because general debate is governed by rules that restrict the amount of time members may address the chamber. Whether general debate has any effect on electoral outcomes is not clear at this point. As David Mayhew notes, the effect of position taking, in general, on electoral behavior is very difficult to measure. Still, the electoral consequences of general debate are most likely small due to its limited audience. But Mayhew also explains that there can be "no doubt that congressmen believe positions make a difference." So despite our inability to find systematic political consequences of position taking such as general debate, it is important to note that members of Congress behave as if these activities make a difference. While general debate is an important position-taking and entrepreneurial activity, it is most likely not important to the final vote. As one House Republican noted, general debate is akin to professional wrestling; the outcome is predetermined. Oleszek writes, however, that once in a while, debate, especially by party leaders just before a key vote, can change opinion. He recounts a 1983 speech by Speaker Tip O'Neill (D-MA) on U.S. involvement in Lebanon. One House Democrat told him that it was one of the "few times on the House floor when a speech changed a lot of votes." Relative to other congressional actions, such as voting and sponsoring legislation, the effect of debate and oratory in Congress on policy and election outcomes is probably tenuous at best. However, as discussed above, debate has important symbolic and practical purposes. For those reasons, floor debate in Congress will always be politically significant. See also legislative process; public bills. Further Reading Davidson, Roger H., and Walter J. Oleszek. Congress & Its Members. 10th ed. Washington, D.C.: Congressional Quarterly Press, 2006; Dodd, Lawrence C., and Bruce Oppenheimer, eds. Congress Reconsidered.
Washington D.C.: Congressional Quarterly Press, 2005; Fenno, Richard F. Home Style: House Members in Their Districts. New York: HarperCollins, 1978; Krehbiel, Keith. Information and Legislative Organization. Ann Arbor: University of Michigan Press, 1991; Mayhew, David. Congress: The Electoral Connection. New Haven: Yale University Press, 1974; Oleszek, Walter J. Congressional Procedures and the Policy Process. 6th ed. Washington, D.C.: Congressional Quarterly Press, 2004. —Michael S. Rocca
franking By both constitutional design and the day-to-day operation of Congress, incumbents have a tremendous advantage in winning their reelection efforts. That means that once someone is elected to the House of Representatives or the senate, challengers have an almost impossible time beating a current member of Congress in a general election contest. This incumbency advantage comes from a variety of factors, including the relative ease of raising money for reelection once in office, the perks associated with holding the job (such as name recognition, the ability to “get things done” for the constituents back home, and other perks, such as franking), the lack of term limits for members of Congress, the professionalization of Congress in recent years (members now view the position as a career instead of short-term public service), and redistricting that often favors incumbents by creating “safe seats” in the House of Representatives. All of this has a great impact not only on those who decide to run for office but also on those who serve in Congress and, in turn, are shaping the policy agenda and making laws. In recent years, reelection rates for incumbents have been as high as 98 percent in the House and 90 percent in the Senate. In 2004, 98 percent of House incumbents were reelected, while 96 percent of Senate incumbents were reelected. Not only do current members of Congress almost always have better name recognition than their challengers among voters, but other perks of the office also help incumbents in their reelection efforts. For example, each member is permitted several free mailings annually to constituent households, a privilege known as franking. Generally, mass mailings promote
bills that the member has supported or other work that he or she has done for the folks back home, an important way for members to have direct contact with a large number of voters. The average House member sends out more than one million pieces of franked mail in an election year, although members are barred from sending franked mail within 60 days of a Senate election or 90 days of a House election. However, congressional web pages and e-mails to constituents, which all members use, have no such restrictions and provide an additional means of contact between members and potential voters. According to congressional scholars Roger H. Davidson and Walter J. Oleszek, the franking privilege is the “traditional cornerstone of congressional publicity.” Federal courts have upheld the practice of franking, which is viewed as the right of congressional members to keep their constituents informed of their work within the Congress. Members send out mail at no cost to them with their signature (which is the “frank”) in place of a postage stamp. The practice of franking dates back to the 17th-century British House of Commons. The American Continental Congress adopted the practice in 1775, and the First Congress wrote it into law in 1789. In addition to senators and representatives, the president, cabinet secretaries, and certain executive branch officials also were granted the frank, and it is now viewed as an important tool for being reelected in the contemporary Congress. As a result, reforms of the franking privilege were put in place by Congress during the 1990s. Many members had abused the privilege as part of their reelection efforts by sending out mass mailings prior to an election. Not only do members now have the above-stated time restrictions prior to an election, but limits have also been placed on the total number of outgoing pieces of mail. For example, one piece of mail is allowed for each address in the state for senators and three pieces of mail for each address in the district for representatives in the House. In addition, rules were also put into place to limit what would be considered campaign advertising in mailings, including personal references to and pictures of the member of Congress. Mailing costs for each member of Congress have also been integrated into the member’s other office expenditures, which means that spending money on franked mail often must compete with other necessary resources such as hiring necessary
staff members. This, along with the other reforms, has decreased the overall costs associated with franked mail being sent from the Congress. When reforms were enacted on the issue of franking, it also came at a time when both Congress and the public were dealing with the increased campaign costs of running for public office as well as the power of incumbency (which spurred the attempts in many states to enact term limits for members of Congress and other public officials). At the time, during the early 1990s, franking was seen as a powerful tool for incumbents in Congress and was seen as being associated with campaign advertising efforts. This, in turn, ties in with the power of incumbency that comes from the fundraising advantage, which is increasingly important with the rising cost of campaign advertising, polling, and staffing. The ability to out–fund raise an opponent is often a deterrent for a challenger either in the primary or general election to even enter the race against an incumbent. In addition, help from party campaign committees and political action committees (PACs) is also more readily available for incumbents since they are a known quantity in political circles and already have a voting record to show potential supporters. Incumbents are also more likely to receive news media coverage and can be seen “on the job” by their constituents via C-SPAN on cable television. In addition, along with these other incumbent advantages, the ability to send out mail to constituents at no cost to the member’s campaign was seen as an important perk for congressional incumbents. During the 19th century, Congress was not viewed as a long-term career opportunity. Turnover of as many as half the seats in any given election was not uncommon. Traveling to and from Washington, D.C., was difficult, and members did not like being away from their families for months at a time (particularly during the hot and humid months of summer). Serving in Congress at that time was closer to what the framers envisioned for public service, yet it was regarded more as a duty than as rewarding work. That all began to change during the 20th century when careerism in Congress began to rise. Particularly in the years following implementation of the New Deal programs in the 1930s and 1940s that greatly expanded the scope of the federal government, members of Congress began to stay in office longer as Washington became the center of national political power. Today,
most members of Congress are professional politicians; in 2004, the average length of term for a member of the House was nine years and for Senators, 11 years. Members of Congress earn a fairly high salary (about $162,000 a year) with generous health care and retirement benefits. Each member also is given an office suite on Capitol Hill, an office budget of about $500,000 a year for staff, and an additional allocation for an office within their district or state. Due to the prestige of their positions, particularly in the Senate, most members serve for many years, if not decades. This professionalization of Congress contributes greatly to the incumbency factor, which is also benefited by the perks of members of Congress such as franking. Further Reading Davidson, Roger H., and Walter J. Oleszek. Congress & Its Members. 10th ed. Washington, D.C.: Congressional Quarterly Press, 2006; Dodd, Lawrence C., and Bruce Oppenheimer, eds. Congress Reconsidered. Washington D.C.: Congressional Quarterly Press, 2005; Historical Minute Essays of the U.S. Senate. Available online from the U.S. Senate Web page. URL: www. senate.gov; United States Postal Service, Rules for Official Mail (Franked). Available online. URL: http://pe .usps.gov/Archive/HTML/DMMArchive1209/E050. htm; “What Is the Frank?” Committee on House Administration, U.S. House of Representatives Web page. Available online. URL: http://cha.house.gov/ index.php?option=com_content&task=view&id=170. —Lori Cox Han
gerrymandering (reapportionment) As stated in the U.S. Constitution, members of the House of Representatives must be at least 25 years of age, are required to have lived in the United States for at least seven years, and must be legal residents of the state from which they are elected. Members of the House of Representatives are elected every two years by voters within their congressional districts (also known as a constituency). Frequent elections and, in most cases, representing a smaller number of citizens in districts (as opposed to states) was intended to make this chamber of Congress more responsive to the needs of citizens and more localized interests. During the first session of Congress in 1789,
the House of Representatives consisted of 65 members. The current number of 435 members in the House of Representatives has stood since 1929 when Congress adopted the size by statute. Congress temporarily increased the size to 437 in the 1950s when Alaska and Hawaii became states but then changed back to 435 in 1963. Currently, California has the largest congressional delegation with 53 House seats, followed by Texas with 32 seats, New York with 29 seats, Florida with 25 seats, and Pennsylvania and Illinois with 19 seats each. Due to their small populations, seven states only have one House seat, called a member at large which is guaranteed by the U.S. Constitution. Those states include Alaska, Delaware, Montana, North Dakota, South Dakota, Vermont, and Wyoming. There are also nonvoting delegates representing the District of Columbia, the U.S. Virgin Islands, American Samoa, and Guam, and a nonvoting resident commissioner representing Puerto Rico. The size of the states’ delegations is reassessed every 10 years based on new population figures from the census. The census, as mandated every 10 years by the U.S. Constitution (and conducted the first time in 1790 under the guidance of Secretary of State Thomas Jefferson), requires the federal government to determine the population of the nation as a whole as well as that of individual states. Reapportionment is the reallocation of House seats among states after each census as a result of changes in state population. The general rule when reapportionment occurs is to provide a system that represents “one person, one vote” as much as possible. Therefore, with a set number of 435 seats, changes must be made when one state increases its population or the population of another state decreases. As a result, some states will gain congressional seats while other states may lose them. For example, after the most recent census in 2000, eight states gained seats (Arizona, California, Colorado, Florida, Georgia, Nevada, North Carolina, and Texas), while 10 states lost seats (Connecticut, Illinois, Indiana, Michigan, Mississippi, New York, Ohio, Oklahoma, Pennsylvania, and Wisconsin). Following reapportionment, new congressional district boundaries must be drawn for states that either gain or lose House seats. State governments are responsible for this process, known as redistricting. The goal is to make all congressional districts as
equal as possible based on population (the theory of “one person, one vote”). This may seem like a relatively straightforward and simple process, but instead it is often a complicated and highly partisan one. In most states, members of the state legislature draw new district lines and then approve the plan, which goes into effect for the first congressional election following the census (which was most recently the 2002 election). However, the party in power in the state legislature is often motivated to redraw district lines that benefit its own party members. This is known as gerrymandering, which involves the deliberate redrawing of an election district’s boundary to give an advantage to a particular candidate, party, or group. The term itself comes from Governor Elbridge Gerry of Massachusetts, who in 1812 signed into law a state plan to redraw district lines to favor his party, including one odd-shaped district that looked like a salamander. The problem of gerrymandering goes back many years and has been a problem historically in the United States. The partisan nature of redis-
tricting also contributes to the high rate of incumbency in Congress by creating what are known as “safe seats.” Since the 1960s, the U.S. Supreme Court has stepped in to give its opinion on the practice of gerrymandering in several cases. Their rulings have stated that: Districts must follow the principle of “one man, one vote” with fair borders; congressional and state legislative districts must be evenly apportioned based on population; manipulating district borders to give an advantage to one political party is unconstitutional; an attempt to gerrymander a district to dilute the voting strength of a racial and/or ethnic minority is illegal under the Voting Rights Act of 1965; and redrawing district lines to enhance representation of minority voters in Congress can be constitutional if race is not the most dominant factor in the redistricting process. In short, these issues continue to find their way into federal courts, including the U.S. Supreme Court, because political parties in power want to maximize their majorities. Therefore, with
elected politicians in charge of redrawing district lines (as is the case in most states), the problem of gerrymandering seems almost inevitable. A recent example illustrates this problem. Following the 2000 Census, a divided state legislature in Texas (Democrats held the House of Representatives while Republicans held the Senate) could not agree on a plan to redraw district lines. Instead, according to state law, a judicial panel stepped in to handle the redistricting. In the 2002 elections, Republicans won a majority of seats in the state House, giving them control of both houses for the first time ever. As a result, Republican leaders, with a lot of outside help from then-U.S. House Majority Leader Tom DeLay, attempted to redistrict yet again, this time favoring their party in an attempt to pick up several Republican seats in the 2004 congressional election. To avoid taking a vote on the bill, several Democratic members of the legislature first fled the state to Oklahoma in May 2003 and then to New Mexico in July to make sure that Republicans did not have a sufficient quorum to call the vote. While making national headlines, Democrats in Texas eventually lost the fight as the Republican redistricting plan went into effect in time for Republicans to pick up five seats in the U.S. House of Representatives in the 2004 election (causing four Democratic incumbents from Texas to lose their seats). Most of the plan was also ultimately upheld as constitutional by the U.S. Supreme Court in 2006 in the case League of United Latin American Citizens v. Perry, with the exception of one district in South Texas that violated the Voting Rights Act of 1965 because it diluted minority voting rights. While creative means of redistricting have often been used by both parties over the years in state legislatures for political gains (for example, Democrats in California during the 1960s and 1970s were quite successful in creating strong Democratic districts through partisan gerrymandering), DeLay’s involvement in the process in Texas was the first time that a powerful Washington politician played such a public and prominent role in influencing the redistricting outcome. See also districts and apportionment Further Reading Baker, Ross K. House and Senate. 3rd ed. New York: W.W. Norton, 2000; Davidson, Roger H., and Walter J. Oleszek. Congress & Its Members. 10th ed. Wash-
ington, D.C.: Congressional Quarterly Press, 2006; Dodd, Lawrence C., and Bruce Oppenheimer, eds. Congress Reconsidered. Washington, D.C.: Congressional Quarterly Press, 2005; Hamilton, Lee H. How Congress Works and Why You Should Care. Bloomington: Indiana University Press, 2004. —Lori Cox Han
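At bottom, the “one person, one vote” standard that governs redistricting is a matter of simple arithmetic: a state’s population divided by its number of districts yields the ideal district size, and each proposed district can be judged by how far its population deviates from that ideal. The following minimal sketch (written in Python, using entirely hypothetical population and district figures rather than any state’s actual data) is offered only as an illustration of that calculation, not as a description of any state’s procedure.

```python
# Illustrative sketch only: the population and district figures below are
# hypothetical, not drawn from any state's actual data.
state_population = 4_000_000
proposed_districts = {
    "District 1": 1_010_000,
    "District 2": 995_000,
    "District 3": 1_005_000,
    "District 4": 990_000,
}

# The "ideal" district holds an equal share of the state's population.
ideal = state_population / len(proposed_districts)  # 1,000,000 people per district

for name, population in proposed_districts.items():
    # Percentage by which each proposed district exceeds or falls short of the ideal.
    deviation = (population - ideal) / ideal * 100
    print(f"{name}: {population:,} people ({deviation:+.1f}% from ideal)")
```

Courts have generally required congressional districts to come as close to exact population equality as practicable, while tolerating somewhat larger deviations in state legislative maps.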
Government Accountability Office The Government Accountability Office (GAO) is a congressional support agency, the self-described “congressional watchdog.” Created in 1921, the GAO’s mission is “to support the Congress in meeting its constitutional responsibilities and to help improve the performance and ensure the accountability of the federal government for the benefit of the American people.” This essay reviews the historical evolution of the GAO, describes its modern organization and operation, and examines its role in a separation-of-powers system. Although the Government Accountability Office (GAO) has become an important adjunct to Congress’s legislative work, it did not always serve this purpose. During its more than 80-year history, the GAO has transitioned from the federal government’s primary auditing agency to an investigatory and evaluative arm of Congress. (The GAO changed its name in 2004 to reflect the changes in its mission.) Throughout its history, two characteristics have defined the GAO: its independence and its ambiguous role in a separation-of-powers system. The GAO (originally the General Accounting Office) was created by the Budget and Accounting Act of 1921 (P.L. 67–31) and charged with investigating “all matters relating to the receipt, disbursement, and application of public funds.” The 1921 Act transferred auditing and claims settlement functions from the Department of the Treasury to the GAO. As such, from 1921 until World War II, the GAO interpreted its mandate to mean reviewing individual expenditure vouchers to ensure all federal spending was legal and proper. (This period is referred to as the “voucher-checking era.”) During this period, the GAO frequently sparred with President Franklin D. Roosevelt over New Deal program spending. After Comptroller General John McCarl (1921–36) retired, President Roosevelt fought Congress over the authority of the GAO for four years before naming a replacement,
Fred Brown (1939–40), who was generally supportive of Roosevelt’s programs. Comptroller General Lindsay Warren (1940–54), another Roosevelt appointee, was even less inclined to challenge the president. The GAO was chartered independent of the executive branch (Congress was trying to re-exert some control over the federal budget), but the 1921 Act did not specify that it was a congressional agency either. The GAO’s independence was important to Congress. Congressman Charles Goode (R-NY) stated, “Unless you throw around the Comptroller General all of the safeguards that will make him absolutely independent . . . auditors and the Comptroller General dare not criticize an executive official.” The independence of the GAO raised constitutional questions surrounding the separation of powers. During the 1920s, for example, the GAO and the War Department fought over whether the GAO had to approve voucher payments before they could be made. Attorney General Harry Daugherty ruled that the GAO, since it was not part of the executive branch, did not have constitutional authority to block payments. Comptroller General John McCarl (1921–36) held that it did. As a result, some agencies followed the GAO’s procedure and others (such as the War Department) followed their own. World War II led to significant changes in the GAO. The increase in federal spending quickly overwhelmed the GAO’s voucher-checking efforts (it had a backlog of 35 million vouchers in 1945 despite a full-time staff of more than 15,000), so the GAO turned from auditing individual expenditures to comprehensive audits of agency spending supplemented with periodic site audits. Agency financial management became the GAO’s primary focus, and it began to prescribe accounting principles for the federal government (a function codified in the Budget and Accounting Procedures Act of 1950). The Budget and Accounting Procedures Act of 1950 also provided the legislative foundation for the gradual move of most of the government’s auditing function back to the Treasury Department. (The GAO is still responsible for promulgating governmental accounting procedures.) During this period, the GAO also increasingly began to publish reports in response to congressional requests for evaluations of proposed and existing programs and for legal rulings on agency actions. Between
1945 and 1947, for example, the GAO examined several proposals for what became the Department of Defense. These reports have become the primary work products of the GAO. In the 1960s and 1970s, under the direction of Comptroller General Elmer Staats (1966–81), the GAO continued its transformation from an auditing to an evaluative agency. One early set of program evaluations, known collectively as the “Prouty Work” (more than 60 reports were published), came in response to a statutory mandate and focused on the efficiency and effectiveness of federal poverty programs. The GAO’s move to program evaluation was codified in the Legislative Reorganization Act of 1970 (P.L. 91–510), which mandated that the Comptroller General “review and analyze the results of Government programs and activities carried out under existing law, including the making of cost benefit studies.” Through the GAO Act of 1974, the GAO gave up the last direct connection to its original function when responsibility for auditing transportation vouchers was transferred to the General Services Administration. The debate surrounding the 1974 act also presaged what would become a significant issue for the GAO: How much independent authority does the GAO have to force executive-branch compliance with its rulings and requests? Although the 1921 act gave the GAO responsibility for auditing government spending, the Act provided little in the way of an enforcement mechanism should the GAO find an expenditure to be illegal or in error. The conflict with the War Department had shown the limits of the GAO’s influence in that regard. The issue of forcing compliance came to a head again in the late 1960s. In response to a congressional request, the GAO ruled that a Labor Department requirement (called the Philadelphia Plan) that all federal contract bids include an “acceptable” affirmative-action plan before a contract could be issued violated federal regulations because the Labor Department had not defined what an acceptable affirmative-action plan was. The ruling led to a protracted battle (spanning two presidencies), but the GAO had to rely on other actors (namely aggrieved contractors, who lost) to pursue its case in the courts. To resolve the issue, the GAO asked Congress in 1974 for the authority to sue in federal court (with the comptroller general as plaintiff) to force agency compliance.
While Congress demurred at that time, the GAO Act of 1980 resolved some, but not all, of the issues raised in 1974. The 1921 Budget and Accounting Act required executive agencies to provide the GAO with any requested information and granted the GAO “access to and the right to examine any books, documents, papers, or records of any such department and establishment.” The 1980 Act allowed the comptroller general to seek a court order forcing agencies to produce any requested materials and to subpoena information from federal contractors. It also provided limited authority to audit expenditures made only on the approval of the president or an agency official, and it gave Congress a greater voice in determining who would head the GAO. The powers granted in the 1980 GAO Act faced a significant challenge in 2001–02 when the GAO tried to obtain records from the National Energy Policy Development Group (NEPDG), a group chaired by Vice President Richard “Dick” Cheney and composed of several department secretaries and other federal officials. President George W. Bush created the NEPDG in January 2001, and after extensive consultation with private-sector individuals the NEPDG issued its report in May 2001. In April of that year, following press reports that the NEPDG had excluded environmental groups from its meetings, Congressmen Henry Waxman (D-CA, then ranking member on the House Government Reform Committee) and John Dingell (D-MI, then ranking member on the House Energy and Commerce Committee) had asked the GAO to obtain “the names and titles of individuals present at any NEPDG meetings, including any non-governmental participants” as well as other information about activities of the NEPDG. (The scope of the GAO’s request was subsequently narrowed in an attempt to negotiate a settlement with the vice president’s office.) Vice President Cheney’s office refused to provide any information and claimed that the GAO had no legal basis for requesting the information and further that the request represented an unconstitutional intrusion by the GAO “into the heart of Executive deliberations.” Comptroller General David Walker filed suit on behalf of the GAO in the U.S. District Court for the District of Columbia in February 2002 to obtain the records. The court in December 2002 dismissed the suit, holding that Comptroller Walker did not have standing.
In deciding Walker v. Cheney (230 F. Supp. 2d 51), the court questioned the ability of Congress to delegate its investigative authority to the GAO. The court framed the issue as one of separation of powers and held that the “highly generalized allocation of enforcement power to the Comptroller General . . . hardly gives this Court confidence that the current Congress has authorized this Comptroller General to pursue a judicial resolution of the specific issues affecting the balance of power between” Congress and the president. Lacking congressional support (and facing a direct threat to the GAO’s budget from the Appropriations Committee), Comptroller Walker chose not to appeal the court’s ruling, claiming that the ruling applied only to the NEPDG and reserving the right to sue for information in the future. Headquartered in Washington, D.C., the GAO also has offices in 11 cities across the United States, including Atlanta, Boston, Chicago, Dallas, Los Angeles, and Seattle. The GAO is headed by the Comptroller General, currently David Walker, who is appointed by the president, confirmed by the Senate, and serves a 15-year term. (Comptroller Walker’s term will expire in 2013.) Its 3,200-person workforce is divided into three areas: the Office of General Counsel, which manages the GAO’s bid protest, rule review, and appropriations law work; a group of 13 research, audit, and evaluation “teams” covering specific policy areas (for example, Acquisition and Sourcing Management; Education, Workforce, and Income Security; Health Care; Homeland Security and Justice; and Natural Resources and Environment); and administrative offices under the direction of the chief administrative officer/chief financial officer. In 2005, the GAO’s budget was more than $474 million. Reflecting its stature as a congressional support agency, the GAO’s budget comes from the legislative branch appropriations bill. The GAO’s principal work products are the reports that it produces for Congress and congressional committees and members. Historically, the GAO has published between 800 and 1,200 reports each year; its personnel testify before congressional committees between 200 and 300 times each year. The reports cover a variety of subjects and reflect the evolving functions of the GAO: financial and management audits (now only about 15 percent of the GAO’s work), program and
performance evaluations, investigations, and policy analyses. The bulk of these reports go to a handful of committees: House Government Reform and Senate Governmental Affairs, the two appropriations committees, the two armed services committees, House Energy and Commerce, and House Ways and Means. These eight committees received 60 percent of all the reports sent to committees between 1990 and 1999 (this figure stems from the author’s own data). Unless classified, the reports are publicly available through the GAO’s web site (http://www.gao.gov/). The GAO’s independence is important to the work it does. Unlike the Congressional Research Service and the Congressional Budget Office, the GAO generally (though not always) decides the congressional requests to which it will respond. The GAO has an explicit preference order for its projects, described in its Congressional Protocols: The GAO will respond first to congressional mandates (in statutes, congressional resolutions, conference reports, and committee reports), second to senior congressional leader and committee leader (including ranking minority members) requests, and third to individual member requests, with preference given to members serving on a committee with jurisdiction over the program. It is unclear how far the GAO will pursue requests from members who are not committee chairs in the wake of Walker v. Cheney, however. In addition to congressional mandates and requests, the GAO “reserves a limited portion of its resources for work initiated under the Comptroller General’s authority to (1) invest in significant and emerging issues that may affect the nation’s future and (2) address issues of broad interest to the Congress.” The GAO also issues legal opinions about specific contracting and program decisions made by federal agencies and programs. Many of these opinions address appeals by would-be contractors who failed to win a government contract or felt that they were excluded from the bidding process. Others deal with major rules and regulations promulgated by federal agencies (authority granted by the Small Business Regulatory Enforcement Act of 1996 and the Congressional Review Act of 2000). The GAO formerly ruled on civil and military employees’ pay and benefits, but that function was transferred to the Office of Management and Budget in 1996. The GAO’s mission is to assist Congress in nearly every aspect of
its legislative responsibilities. The GAO assists in Congress’s responsibility to oversee executive-branch agencies and federal programs through its program evaluations, audits, and testimony. It assists the creation of new legislation through its investigations and policy analyses. How successful is the GAO in implementing its mission; that is, what comes of the recommendations that the GAO makes in its reports? When do agencies implement the recommendations? When do new programs, policies, and procedures result from the GAO’s work? Certainly the GAO sees significant contributions from its work. In 2006, the GAO claimed that each dollar in its budget returned $105 (generally in cost savings) to the federal government. The GAO also claimed that federal agencies had implemented 82 percent of the recommendations it made in 2005. This number has increased significantly between the 1970s and 1990s. Whether this pattern is the result of greater agency responsiveness or changes in how the GAO measures implementation is unclear. Certainly, with the increased focus on performance management, there is an incentive to demonstrate success to Congress. Yet the GAO frequently faces criticism that its recommendations go unheeded, and many of its reports document how agencies have failed to implement past recommendations fully. Moreover, committees and members have frequently ignored the reports that the GAO produces. The GAO’s effectiveness depends first on the degree to which the agencies and members of Congress oppose the recommendations being made. Principal-agent theory has shown that where there is little policy disagreement between Congress and the executive branch, Congress (and here, by extension, the GAO) is able to obtain the policy outcomes it desires. Where policy disagreement exists, the GAO’s effectiveness depends on whether agencies believe that it has legitimate authority to make its recommendations and that they have an obligation to follow them. If there is a clear legal mandate for the GAO’s actions (as when promulgating accounting standards for the federal government), and if the GAO has the backing of Congress or the relevant committee chair, then the GAO can be successful. Its recommendations carry force when the authorizing and appropriations committees use their legislative authority to back them up. If Walker v. Cheney is a guide, the
GAO’s information requests will carry weight when the courts believe that there is a clear mandate from a committee or Congress. The GAO is effective, then, when agencies want the policies that it recommends and when Congress helps it to be effective. Further Reading Halstead, T. J. “The Law: Walker v. Cheney: Legal Insulation of the Vice President from GAO Investigations.” Presidential Studies Quarterly 33 (September 2003): 635–648; Mosher, Frederick C. The GAO: The Quest for Accountability in American Government. Boulder, Colo.: Westview Press, 1979; Trask, Roger R. GAO History, 1921–1991. GAO/OP–3–HP. Washington, D.C.: U.S. General Accounting Office, 1991; U.S. General Accounting Office. GAO’s Congressional Protocols. GAO–04–310G. Washington, D.C.: U.S. General Accounting Office, 2004; U.S. Government Accountability Office. Performance and Accountability Report, Fiscal Year 2006. Washington, D.C.: U.S. Government Accountability Office, 2006; Walker, Wallace Earl. Changing Organizational Culture: Strategy, Structure, and Professionalism in the U.S. General Accounting Office. Knoxville: University of Tennessee Press, 1986. —Keith W. Smith
House of Representatives Often called the “lower house” of the U.S. Congress, the House of Representatives was designed by the framers to be the legislative chamber most responsive to the popular will. The House has its origin in the Virginia Plan, a proposal by Edmund Randolph at the Constitutional Convention to revise significantly the Articles of Confederation. Under the Articles, each state enjoyed equal representation in Congress. The Virginia Plan proposed a bicameral legislature, with each chamber based on proportional representation—that is, larger states would enjoy greater representation and thus greater political influence. Smaller states objected to this plan, and the Great Compromise resolved the dispute by creating a Senate based on equal representation, but the House of Representatives retained its basic principle of proportional representation. Key structural features of the House indicate its particular place in the federal legislature. First, the
qualifications to serve in the House are more relaxed than those to serve in the Senate. In addition to living in the state one represents (there is no constitutional requirement to live in the congressional district one represents), House members must be at least 25 years old and seven years a citizen, as opposed to 30 years old and nine years a citizen in the Senate. There are no other formal requirements. The framers wanted the widest possible eligibility for service in this institution, a sentiment best expressed by James Madison in Federalist 52: “the door of this part of the federal government is open to merit of every description, whether native or adoptive, whether young or old, and without regard to poverty or wealth, or to any particular profession of religious faith.” The qualifications and selection mechanisms for the other branches of the federal government make it more likely that they will be staffed by elites, but the constitutional qualifications to serve in the House make it the most democratic federal institution. Second, the selection process to choose members of the House was, prior to more recent amendments, more democratic than that used for the other institutions of government. Although the nation has, for all practical purposes, universal suffrage for those over the age of 18, the actual constitutional language for electors of the House simply mandates that the states allow anyone eligible to vote for the most numerous branch of the state legislature to also vote for the House. During the early years of the republic, it was not uncommon for states to impose barriers such as property qualifications to vote for various offices in the government. If the states had complete power to determine voter eligibility for the House, they might have made the qualifications even more stringent than those for the state legislature. The result would have been a population more attached to their states than to their national representatives. The U.S. Constitution prevents such a development. The framers recognized the need for a broad popular base of support for the federal government. They intended that base of support to be centered on the House. They secured that support by mandating that voting qualifications for the House be as democratic as voting qualifications for the state legislature. As the states over time democratized their own political systems, their actions also democratized participation in House elections.
Thus, the House represents the people as a whole, not individual states. Third, all House members serve two-year terms before facing reelection, with the entire chamber facing popular judgment at one time. This is in stark contrast to the six-year staggered terms in the Senate. The framers believed that the House needed “an immediate dependence on, and an intimate sympathy with, the people.” In this most democratic of chambers, that would come through frequent elections—the shortest terms of any institution in the federal government. Such short terms make House members very attuned to the popular will in their districts, for the next election is never far off. Interestingly, many early critics of the Constitution believed that two-year terms were too long. They adhered to the popular aphorism of the day: “Where annual elections end, tyranny begins.” Madison addresses this point in Federalist 52 and 53, arguing that two-year terms will keep House members sufficiently dependent on the people while also giving the members enough time to gain the knowledge and experience necessary to
become good legislators. In contrast to the contemporary notion of some that political leaders ought to be amateurs drawn from the general population, Madison suggests that it is preferable to encourage some level of professionalism and expertise in the most popular branch. Fourth, the House is much larger than the Senate. Current statute limits it to 435 voting members, as opposed to 100 senators. The proper size of the House was an important issue for the framers. Everyone acknowledged the danger in having an overly small legislative chamber that would be too susceptible to conspiracy and cabal, not to speak of being unrepresentative of the nation as a whole. Many antifederalists believed that the primary representative assembly had to be very large. They argued that only a very large legislature could be trusted with power and that representatives needed to understand the local circumstances of their constituents, which could only come with small congressional districts, and thus a larger chamber. They believed representatives should be a reflection of the population—that they should in fact
mirror or be descriptively similar to those they represent. By that standard, even the Constitution’s minimum district size of one representative for every 30,000 people was unacceptably large. Of course, had such a standard been set in stone, the House today would approach 10,000 members (a national population of roughly 300 million divided by 30,000). By contrast, the framers argued that the House should be large enough “to secure the benefits of free consultation and discussion”—to be able to engage in fruitful deliberation—but not so large that it would result in “the confusion and intemperance of a multitude.” Madison argues in Federalist 55 that “passion never fails to wrest the scepter from reason” in very large assemblies. Thus, although the framers wanted the House to be large enough to represent the people as a whole, they also believed there was a natural upper limit to that size necessary to prevent mob rule and demagoguery. Finally, the question of size is also connected to another critical structural feature of the House, and
that is the size of the constituency. Where senators represent entire states, members of the House represent local districts within those states. While the smaller district size makes House members more attached to local interests, the framers hoped that the districts would be large enough to encourage them to take a more national view of public policy. In fact, the core argument in the famous Federalist 10 centers on the need for large electoral districts. Madison argues there that large districts will make it more likely that the people will find individuals with the wisdom and civic virtue required to serve as “enlightened statesmen.” The larger the district, the less chance a wealthy character could buy an election victory. Madison also argues that large districts will cause representatives to focus more on “the great and aggregate interests” of the nation than parochial concerns. Thus, although designed to represent the people, the
framers hoped that even in this most democratic of institutions the selection process would provide quality leaders. The combination of large electoral districts with a moderately sized (and eventually fixed) legislative chamber makes necessary a large republic. Several other procedural features reinforce the House’s status as the more democratic and popular chamber. According to Article I, Section 7 of the Constitution, all revenue bills must originate in the House. After that point, the Senate plays its traditional role in the legislative process, but this constitutional rule testifies to the idea that the legislative chamber closest to the people should be principally responsible for spending the people’s money. The House also has the sole power of impeachment—the sole power to accuse government officials of high crimes and misdemeanors. On the other hand, the framers also understood that the tendency of House members to be very attuned to public opinion due to the frequency of their elections might militate against justice in an impeachment trial. They feared that the government institution most responsive to popular opinion might not be the proper location for a process that should depend more on justice than on factional strength. Thus, the framers gave the power to try impeachments to the Senate. Similarly, the Senate has staffing and foreign policy roles the House lacks, such as confirming executive branch and judicial nominations and ratifying treaties. The framers believed that the local orientation of House members and their relatively brief terms made that body a less than ideal location for the greatest foreign policy concerns. Institutional rules also reinforce this dynamic, for the House tends to pass measures very quickly, as the majority party can more easily push matters to a conclusion there than in the Senate. All of these features answer the question of who is being represented in the House and how they are being represented. The House was designed to represent the people as a whole, not the states. The framers wanted the base of popular support for the House to be as broad as it was in state legislatures, and they wanted as many people as possible eligible to serve. Two-year terms keep House members dependent on the people—the most dependent branch in the fed-
eral government—while also giving them enough time to become good at their jobs. The fact that the entire House stands for election at the same time reinforces its accountability to the people, for dramatic changes in popular opinion will be immediately reflected in the House. It is not uncommon for midterm elections in the House to be interpreted as popular referenda on the performance of the president. Ultimately, the framers supported the idea of republican government that is responsive to the popular will, but in which the excesses of democracy are controlled and restrained by other institutions. While institutions such as the Senate, the presidency, and the U.S. Supreme Court perform this constraining function, it is the House that is most responsive to the popular will. Further Reading Hamilton, Alexander, James Madison, and John Jay. The Federalist Papers, Nos. 10, 52–58, 64–65. Edited by Clinton Rossiter. New York: New American Library, 1961. —David A. Crockett
incumbency Holding government office offers a wide array of advantages that challengers must overcome if they are to unseat those holding the reins of power. The power of incumbency affords those politicians currently in office many means by which to defend their power bases, even in races against well-funded and well-qualified challengers. Thus, challengers face a steep—but not impossible—set of electoral circumstances. This electoral prowess—while certainly enjoyed by many elected officials in the United States, including governors, mayors, state representatives, and others—is most prominent among incumbent members of the U.S. Congress. In the 2004 congressional elections, 98 percent of representatives and 96 percent of senators who sought reelection were victorious. Many of these congressional incumbents either faced no major-party competition or won with large margins, and a significant portion of the small number of incumbent defeats occurred in House districts in which two incumbents faced each other. In
part, incumbents succeed at such high rates because they are “single-minded reelection seekers” whose government activities translate nicely into advantages that help them get reelected. Congressional incumbents sit on committees with policy jurisdictions relevant to their states and districts; many are able to “bring home the bacon” in the form of federally funded projects and programs; and scores of members command the attention of local media on a regular basis. Further, the setting in which incumbents contest elections against those who challenge them has become much more favorable in recent times. At both state and congressional levels, the districts in which legislative incumbents defend their seats have become more homogenous in their composition. Republican districts have gotten “redder,” and Democratic districts have gotten “bluer.” The process of redistricting has concentrated likeminded constituents in districts that have reelected representatives of the same partisan stripe cycle after cycle. Occurring in each state at least once every 10 years following the national census, redistricting is the process by which the lines of state and congressional legislative districts are redrawn to account for population shifts within a state and to accommodate the reapportionment of House seats among the states. Over time, state legislatures have manipulated—or gerrymandered—district lines in ways that have minimized the competition between the two major parties and maximized incumbents’ reelection rates. Across the nation, this process has concentrated partisans in safe districts, effectively reducing the battlefield for competitive challengers to less than 10 percent of the seats in the House of Representatives. Although they do not benefit from the process of redistricting (state boundaries are immutable in the electoral process), incumbents in the U.S. Senate also enjoy high reelection rates. Generally, Senate incumbents’ reelection rates tend to be slightly lower than those of their House counterparts as they attract more challengers who have held previous elective office as well as some who themselves are well known across the state they intend to represent. In recent cycles, levels of competition appear to have dropped in Senate races, however, as
some Senate incumbents have faced no major-party opposition. Beyond this increasingly favorable electoral setting, incumbents hold an advantage that challengers have a very difficult time matching: name recognition. The average congressional incumbent holds a 40-percentage-point name-recognition advantage over the average challenger: 90 percent of voters are able to recognize—if not recall exactly—a congressional incumbent’s name from a list while only 50 percent are able to recognize that of a challenger. This name-recognition advantage, one likely held by incumbents in other government positions as well, translates into a significant edge in terms of votes on Election Day and derives from the variety of governing activities in which incumbents are engaged and challengers are not. First, those in office can credibly claim credit for the positive results of their service in office and then advertise their efforts on behalf of their constituents. Members of Congress tout their roles in bringing much-needed federally funded projects (what is commonly referred to as pork-barrel legislation) to the district, or sitting governors might publicize their roles in representing their states by obtaining disaster declarations to expedite the distribution of federal emergency relief funds. Second, incumbents also have the luxury of being able to stake out—through highly visible actions such as roll-call votes, speeches, and press releases—positions on major issues of the moment facing the nation. Because incumbents represent the voters, these positions are likely to be ones that will appeal to a large number of those who will be asked to put them back into office. Third, government officials also employ the resources of their offices in ways that increase their name recognition. Members of Congress perform a myriad of services—such as troubleshooting problems that their constituents might have with the federal bureaucracy and recommending young men and women for admission to the service academies—for their constituents. Representatives and senators also use the privilege of free postage for official business to send newsletters—often emblazoned with their names and pictures and publicizing their activities—to their constituents. Fourth, incumbents’ decided advantage over challengers in campaign money perpetuates their electoral edge. In the
2003–04 cycle, congressional incumbents held a three-to-one advantage in total receipts and an eleven-to-one edge in receipts from political action committees; however, incumbents’ having more money is not the challengers’ principal problem. Challengers’ money dilemma relative to incumbents appears to be one of not having enough. Recent research indicates that—because they begin with less name recognition than incumbents—challengers can get more “bang” from their campaign buck and tend to do best against incumbents (and win at their best rates) when they are able to raise and spend the most in the race. Finally, incumbents often hold a large advantage in name recognition because they generally face challengers
who are political amateurs. Quality challengers—those who hold governmental experience applicable to the positions they seek—generally wait for favorable electoral conditions (that is, an open seat or a weakened incumbent) before attempting to climb the next step on the political ladder. The considerable electoral advantages they hold leave many incumbents facing only those with little previous political experience, and the resulting electoral routs build the perceived advantages held by incumbents even higher in the minds of potential challengers. For example, in 2004, when no formidable Democrat opposed an incumbent viewed by many political observers as a strong
one, U.S. Senator Judd Gregg of New Hampshire, Doris “Granny D” Haddock—best known for walking across the country in support of campaign finance reform—entered the race, refused to take PAC contributions, and garnered only 38 percent of the vote. For all the electoral edge they provide, however, the advantages of incumbency should not be viewed as absolute. Certain political conditions diminish the power of incumbency to levels at which incumbents can lose elections at significant rates. When these conditions occur, quality challengers—who have been waiting strategically for the right moment—are more likely to enter the race, raising the probability of defeat for incumbents. First, a weak economy might increase the electoral vulnerability of current officeholders. Voters expect those in government to promote policies that support a growing economy and, rightly or wrongly, tend to blame those in office when it slumps. In an economic downturn, voters feeling the sting of either high rates of unemployment or high interest rates may blame current officeholders and support challengers who pledge alternative economic policies if elected. Second, a cloud of scandal may severely weaken the power of incumbency. Voters may view a scandal as a violation of the public trust and confidence placed in government officials, and they may be more inclined to vote for alternative candidates in whom they might better place their trust and confidence. Recent research indicates that voters are inclined to punish incumbents for the transgressions that they view each respective political party as least likely to commit: Republican incumbents lose because of sex scandals, and Democratic incumbents fall on the basis of scandals involving money. Scandals may also attach to a political party’s entire slate of candidates up and down the ballot. For example, the congressional elections of 1974 appear to have been a harsh judgment of the Watergate scandal involving President Richard M. Nixon. Despite their efforts to avert political damage by encouraging the disgraced president to resign, congressional Republicans found themselves with significantly fewer copartisans when the 94th Congress convened in January 1975. Finally, political circum-
stances may combine in ways to foster a partisan tide in certain electoral cycles. Voters appear to transform certain elections into referenda on the governing party, and their choices against the incumbents and candidates of a particular party often result in dramatic shifts in control of the reins of power in government. Voters opted for dramatic change in 1932 not only when they chose Franklin D. Roosevelt to replace Herbert Hoover but also when they sent a large number of Democrats to Congress to help the new president deal with the Great Depression. Conversely, the voters sent many congressional Democrats home in 1994 when anti-incumbent fervor brought Republican majorities into both chambers of Congress. Incumbency confers on current officeholders several electoral advantages, none of which is insurmountable for voters in a democratic system. Thus, all officeholders—even with all of their inherent electoral advantages—are held to account by the governed. Further Reading Abramowitz, Alan I., Brad Alexander, and Matthew Gunning. “Incumbency, Redistricting, and the Decline of Competition in the U.S. House of Representatives.” The Journal of Politics 68: 62–74, 2006; Brown, Lara M. “Revisiting the Character of Congress: Scandals in the U.S. House of Representatives, 1966–2002.” The Journal of Political Marketing 5: 149–72, 2006; Fenno, Richard F. Home Style: House Members in Their Districts. Boston: Little, Brown, 1978; Jacobson, Gary C. “The Effects of Campaign Spending in House Elections: New Evidence for Old Arguments.” American Journal of Political Science 34: 334–62, 1990; Mayhew, David R. Congress: The Electoral Connection. New Haven, Conn.: Yale University Press, 1974; Squire, Peverill. “The Partisan Consequences of Congressional Redistricting.” American Politics Quarterly 23: 229–40, 1995; Wrighton, J. Mark, and Peverill Squire. “Uncontested Seats and Electoral Competition for the U.S. House Over Time.” The Journal of Politics 59: 452–68, 1997. —J. Mark Wrighton
legislative branch The first three articles of the U.S. Constitution are known as the distributive articles because they deal with the three branches of the national government and distribute powers among them. It is no accident that the first article deals with Congress because the legislature is the most basic institution of a republican form of government. A legislature’s ultimate goal is lawmaking, and individual members of a legislature are elected to represent an equal number of citizens (or in the case of the U.S. Senate, states are represented equally with two members each) in the policy-making process. While some members may have more seniority or may be members of the majority party, which may in turn provide them more powerful committee or leadership positions, all legislatures within the United States operate on the simple premise of “one person, one vote.” In most areas of the decision-making process, a simple majority among the members in both houses is required to pass a bill that will then be sent to the executive branch to be signed or vetoed by the president or state governor. As a result, consensus building and
cooperation among members is necessary to pass legislation. Since the founding of the nation in 1776, nearly 12,000 people have served in the national legislature (the Continental Congress from 1776 to 1787, followed by the current U.S. Congress). Congress is a bicameral institution (which means that it has two separate houses), with 435 members in the House of Representatives and 100 members in the Senate. The Continental Congress established under the Articles of Confederation was unicameral, with just one house in which each of the 13 states, regardless of the size of its delegation, cast a single vote. Also, how seats were to be allocated in both the House and the Senate was a major component of the Great Compromise during the Constitutional Convention. States would receive equal representation in the Senate, known as the upper house, which pleased states with smaller populations. The membership of the House of Representatives, known as the lower or “people’s” house, was to be determined based on state populations. This pleased the larger states, which would have more members and a more prominent say in the outcome of legislation.
Today, the U.S. Congress is a complex and often chaotic institution bound by many traditions, procedures, and rules. The party in power may have many advantages, such as control over committees or, in most cases, a majority of votes to pass legislation, but there is no guarantee of control over the policy agenda. Often, individual members or small coalitions can shape the debate over particular issues. Public support or lack thereof can also go a long way in dictating how Congress will react to certain situations, as do lobbying and campaign contributions from powerful interest groups. Congress, at least in theory, represents the branch of the federal government that is most democratic in its procedures and most responsive to the needs of the public. Yet, it is the branch of government with the least amount of public support, and it is often blamed for much of the policy gridlock (the inability to get things done) that occurs in Washington. Many would argue that today’s Congress is a far cry from what the framers of the U.S. Constitution had in mind. In creating all three branches of the federal government, the framers faced an important dilemma. They wanted a strong and responsive government that would not allow any one individual or small group of individuals to hijack the governing process based on selfish or antidemocratic goals. James Madison’s ideal was for Congress to be a deliberative and public-spirited legislature that represented the public in the governing process. As a result, Congress was designed to serve as the federal branch responsible for making laws, while the executive branch would implement the laws and the judicial branch would interpret them. Congress is the governing body in which power is most widely dispersed, through its many members, its representation of the states as important political bodies, and its complicated committee system. The many and diverse constituencies represented in Congress mean that, with the exception of the most compelling national needs (for example, the need to declare war if the United States is attacked by a foreign country or the need to respond with federal aid to a natural disaster), creating national policy is a rather slow process that can require tremendous compromise. Constitutionally speaking, Congress has more authority than the other two branches. First, in general, the framers gave Congress the authority to make
laws. Second, and more specifically, the framers also granted Congress a long list of enumerated powers (which means that certain powers are specified), at least compared to the executive and judicial branches. As a result, mostly through the powers to tax and spend, Congress received the largest grant of national authority in the new government. Most of these powers are found in Article I, Section 8 and are followed by a general clause permitting Congress to “make all laws which shall be necessary and proper for carrying into Execution the foregoing powers.” Enumerated powers include the ability to: lay and collect taxes, borrow money, regulate commerce among the states, control immigration and naturalization, regulate bankruptcy, coin money, fix standards of weights and measures, establish post offices and post roads, grant patents and copyrights, establish tribunals inferior to the U.S. Supreme Court, declare war, raise and support an army and navy, and regulate the militia when called into service. The “necessary and proper” or “elastic” clause has at times allowed Congress to expand its powers over state governments in creating policy that is related to the enumerated powers listed above (such as the creation of a national bank that led to the Supreme Court case of McCulloch v. Maryland in 1819). However, the framers also placed many checks and balances on Congress, not only from the other two branches (for example, a presidential veto or the ability of the Supreme Court to rule on the constitutionality of a law passed in Congress) but also within Congress itself, owing to the differences between the two houses. For example, the House of Representatives is responsible for initiating all revenue bills (those dealing with the collection of taxes). The Senate, on the other hand, is to offer advice and consent to the president on appointments and is the only house that approves nominations to the federal judiciary, ambassadors, and cabinet positions. In regard to impeachment of the president, the vice president, or federal judges, the House of Representatives initiates the process by passing articles of impeachment, while the Senate conducts the trial to determine if the official is to be removed from office. Other differences between the two houses can be found both in the selection of members and in the way policies are deliberated.
By both constitutional design and the day-to-day operation of Congress, incumbents have a tremendous advantage in winning their reelection efforts. That means that once someone is elected to the House or Senate, challengers have an almost impossible time beating a current member of Congress in a general election contest. This incumbency advantage comes from a variety of factors, including the relative ease of raising money for reelection once in office, the perks associated with holding the job (such as name recognition or the ability to “get things done” for the constituents back home), the lack of term limits for members of Congress, the professionalization of Congress in recent years (members now view the position as a career instead of short-term public service), and redistricting that often favors incumbents by creating “safe seats” in the House of Representatives. All of this greatly affects not only who decides to run for office but also who serves in Congress and, in turn, shapes the policy agenda and the laws that are made. In recent years, reelection rates for incumbents have been as high as 98 percent in the House and 90 percent in the Senate. In 2004, 98 percent of House incumbents were reelected, while 96 percent of Senate incumbents were reelected. During the 19th century, Congress was not viewed as a long-term career opportunity. Turnover of as many as half the seats in any given election was not uncommon. Traveling to and from Washington, D.C., was difficult, and members did not like being away from their families for months at a time (particularly during the hot and humid months of summer). Serving in Congress at that time was closer to what the framers envisioned for public service, yet it was considered more of a public duty than rewarding work. That all began to change during the 20th century when careerism in Congress began to rise. Particularly in the years following implementation of the New Deal programs in the 1930s and 1940s that greatly expanded the scope of the federal government, members of Congress began to stay in office longer as Washington became the center of national political power. Today, most members of Congress are professional politicians; in 2004, the average length of service was nine years for members of the House and 11 years for senators. Members of Congress earn a fairly high salary (about
$162,000 a year) with generous health-care and retirement benefits. Each member also occupies an office suite on Capitol Hill, is allotted an office budget of about $500,000 a year for staff, and is given an additional allocation for an office within his or her district or state. Due to the prestige of their positions, particularly in the Senate, most members serve for many years, if not decades. The power of incumbency also comes from the fund-raising advantage, which is increasingly important with the rising cost of campaign advertising, polling, and staffing. The ability to out-fundraise an opponent often deters a challenger in either the primary or the general election from even entering the race against an incumbent. In addition, help from party campaign committees and political action committees is also more readily available for incumbents since they are a known quantity in political circles and already have a voting record to show potential supporters. Incumbents are also more likely to receive news media coverage and can be seen “on the job” by their constituents via C-SPAN on cable television. Another factor contributing to the incumbency advantage that has become a more pressing problem in recent years is the increasing number of “safe seats.” With the help of redistricting, which occurs at the state level, political parties that hold the majority in state legislatures have been able to create safe districts for their members where the opposing party has little or no chance of defeating an incumbent. As a result, nearly one-fourth of all congressional seats in the most recent elections have seen incumbents running unopposed in the general election. This is particularly problematic for women and minority candidates trying to break into the political arena and explains why both groups tend to do better in open-seat elections where there is no incumbent on the ballot. The framers could not have foreseen the type of political environment in which Congress now operates. Both legislators and the voters who elect them are often motivated by calculations of self-interest. The dual role of Congress consists of its role as a lawmaker and its role as a representative body. However, members are often torn between representing their constituents and making laws for the good of the nation. This represents
the competing theories of what it means to be a member of a representative government like Congress. One theory rests on the notion that the member is a trustee, an elected representative who considers the needs of constituents and then uses his or her best judgment to make a policy decision. This position was articulated by British philosopher Edmund Burke during the 18th century. Another theory is that of a delegate, a representative who always votes based on the desires of his or her constituents and disregards any personal opinion of the policy. Members of Congress must balance these two aspects of the job while trying to determine how to vote on proposed legislation. Most want to stay in office, so reelection demands (such as meeting the needs of the constituents back home in their state or district with spending projects) can sometimes outweigh what is best for the nation as a whole (such as passing legislation to balance the federal budget and forgoing those spending projects back home). As a result, individual members in Congress have gained more prominence and power during the years as the power of Congress as an institution in the policymaking process has waned. In addition, Congress as an institution has rather low public approval ratings as many citizens believe that it is “out of touch” with the needs of average citizens. The irony remains, however, that incumbency dominates so much of the political process, which means that voters, while disliking Congress as an institution, like their individual representative enough to keep electing him or her. See also legislative process. Further Reading Baker, Ross K. House and Senate. 3rd ed. New York: W.W. Norton, 2001; Davidson, Roger H., and Walter J. Oleszek. Congress & Its Members. 10th ed. Washington, D.C.: Congressional Quarterly Press, 2006; Hamilton, Lee H. How Congress Works and Why You Should Care. Bloomington: Indiana University Press, 2004; Sinclair, Barbara. Unorthodox Lawmaking: New Legislative Processes in the U.S. Congress. Washington, D.C.: Congressional Quarterly Press, 2000. —Lori Cox Han
legislative process Making laws is one of the most basic functions of government. Presidents can issue executive orders and proclamations, the bureaucracy can promulgate regulations and administrative measures, and courts can hand down writs and injunctions, but laws are the main way in which the federal government makes policy and governs the governed. Article I of the U.S. Constitution states that “All legislative powers herein granted shall be vested in a Congress of the United States,” and while Congress also serves representative and oversight functions, creating legislation is the main purpose that Congress serves. The German politician Otto von Bismarck is often credited with remarking that laws are like sausages, in that it is not always pleasant to see how they are made. Indeed, the U.S. legislative process can be maddeningly complicated and tedious. Section 7 of Article I of the U.S. Constitution provides a broad outline of this legislative process, but the process is also based on longstanding institutional norms and practices. It has many steps, but the main ones may be explained as follows. Congress regularly considers four main types of legislation: bills, resolutions, joint resolutions, and concurrent resolutions. Bills are the most common type and can be either broadly applicable (that is, public) or narrowly tailored for an individual or small group (that is, private). Simple resolutions concern the operation of either the Senate or the House of Representatives, concurrent resolutions concern the operation of both chambers, and joint resolutions are essentially identical to bills, except for those proposing constitutional amendments, which are sent to the states for ratification rather than to the president. Because most legislation takes the form of bills, this essay will focus on them. Many bills originate within the federal bureaucracy, and almost all bills are written by lawyers from the staff of congressional committees or the bureaucracy, but only a sitting member of Congress (whether a senator or a representative) can formally introduce a bill. Even if the president of the United States wants to introduce a bill, he must find a member of Congress who will do it for him. The member of Congress who introduces a bill is its sponsor. He or she then often tries to convince other mem-
THE STAGES OF THE LEGISLATIVE PROCESS
1. Bill introduction
2. Referral to committee(s)
3. Committee hearings
4. Committee markup
5. Committee report
6. Scheduling legislation
7. House: special rules, suspension of the rules, or privileged matter
8. Senate: unanimous consent agreements or motions to proceed
9. Floor debate
10. Floor amendment
11. Vote on final passage
12. Reconciling differences between the House and the Senate
13. Amendments between the houses, or
14. Conference committee negotiations
15. Floor debate on conference report
16. Floor vote on conference report
17. Conference version presented to the president
18. President signs into law or allows bill to become law without his signature, or
19. President vetoes bill
20. First chamber vote on overriding veto
21. Second chamber vote on overriding veto
22. Bill becomes law if 2⁄3 vote to override is achieved in both chambers
23. Bill fails to become law if one chamber fails to override
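The override arithmetic behind steps 18 through 23 can be made concrete with a short sketch. The following Python fragment is illustrative and not part of the original entry: it simplifies by measuring the two-thirds threshold against the full chamber sizes (435 and 100) rather than against members present and voting, it ignores the pocket-veto case, and the function name is hypothetical.

```python
from fractions import Fraction

def bill_outcome(president_signs: bool, president_vetoes: bool,
                 house_override_yeas: int = 0, senate_override_yeas: int = 0,
                 house_size: int = 435, senate_size: int = 100) -> str:
    """Roughly models steps 18-23 above: presidential action and a possible override."""
    if president_signs or not president_vetoes:
        # Step 18: signed, or allowed to become law without a signature
        # (the end-of-session pocket-veto case is ignored in this sketch).
        return "becomes law"
    # Steps 19-23: a vetoed bill needs two-thirds of each chamber to override.
    two_thirds = Fraction(2, 3)
    house_ok = Fraction(house_override_yeas, house_size) >= two_thirds
    senate_ok = Fraction(senate_override_yeas, senate_size) >= two_thirds
    return "becomes law over the veto" if house_ok and senate_ok else "fails"

# A vetoed bill with 290 of 435 House votes and 67 of 100 Senate votes clears
# the two-thirds threshold in both chambers and becomes law despite the veto.
print(bill_outcome(False, True, house_override_yeas=290, senate_override_yeas=67))
```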
bers of Congress to be cosponsors, as the number of cosponsors can be an indication of the level of support in Congress for passing the bill. A bill can be introduced in either the House or the Senate, but bills that concern raising revenue must originate in the House. For that reason, we will first look at the House. The bill’s sponsor initiates the legislative process by delivering the bill to the clerk of the House, who places it in the “hopper” and gives it a number, such as “H.R. 1.” The bill also receives a name, sometimes something very bureaucratic, sometimes something
dramatic or a catchy acronym. The Speaker of the House begins the consideration of the bill by referring it to a committee. Sometimes, the subject matter of a bill indicates which committee should consider it, but often there are several committees that might reasonably have jurisdiction over a bill. In such cases, the Speaker’s power of determining to which committee the bill is sent is significant, as different committees may be more or less kindly disposed toward it. In other words, deciding where to send the bill may greatly help or hurt its chances of eventually being passed. Usually, the bill is sent to just one committee, but in the past, there was the possibility of multiple referrals in which a bill is sent to more than one committee at the same time. Congress ended that practice in 1995, but there is the chance that a bill will receive a sequential referral: After one committee deals with a bill, it is sent to another committee. When multiple committees are involved, it tends to slow down the already laborious legislative process. Congressional committees play a central role in the legislative process, as it is there that proposed legislation receives the most scrutiny. Once a bill is referred to a committee, the chairperson of the committee largely decides what happens next. Often, the bill is sent from the full committee to a relevant subcommittee. In that case, the chairperson of the subcommittee largely determines how to proceed. If the chairperson favors the legislation, then he or she can schedule public hearings, during which people testify for and against the proposed legislation. Based on the reactions to the bill, the subcommittee may well alter or “mark up” the measure in a process known as markup. Once the subcommittee passes the bill, it goes to the full committee, which may itself choose to hold more hearings on it and further change it. Committee members may also offer a variety of amendments to the bill. For a bill to advance from the committee to the whole chamber, a majority of the committee members must vote in favor of it, and most bills never make it out of committee—they “die in committee.” Such bills may be formally “tabled” or simply ignored. Representatives who are not on the committee can try to force it to pass on a bill to the whole House by circulating a “discharge petition,” but the chairperson and members of a committee are
the main forces in determining whether legislation advances. If a bill makes it out of committee, then it advances to the whole House of Representatives. The consideration of a proposed item of legislation in the whole House can be very complicated. Before a bill can be debated, the House must first determine the nature of the debate that will be permitted, and this often entails passing a separate piece of legislation governing the debate for a particular bill. Legislative debate in the House is governed by a set of rules, which are set by a standing rules committee and then passed by the whole House usually as a simple resolution. This attention to the particular rules of debate may seem odd, but debate can be very important in determining whether and in what form a bill will pass. For example, the rules for debate determine how much time will be granted for debate and how many and what type of amendments can be offered. An open rule may permit lengthy debate and an unlimited number of unrelated amendments, any one of which could introduce controversial issues that may or may not be relevant to the main bill and that could doom the bill. In contrast, a closed or restrictive rule eliminates many of the ways in which a bill could be stalled or derailed altogether. Once the rules for debate are established, the bill is placed on one of several calendars that the House uses for floor debate. The debate itself is divided between supporters and opponents and managed by leaders of the two political parties, even if some members cross party lines with regard to the legislation. Depending on the rule for debate, the House may allow members to introduce amendments to the bill. The rule will also stipulate whether amendments must be germane or can address substantively different matters, in which case the forces for and against the bill may significantly change. Members of Congress can try to help the bill’s chances by offering “sweetening” amendments that seek to make it more attractive to more members. This can lead to political “pork,” with members trying to attach their own pet projects to the broader measure. Members of Congress may also try to hinder the bill’s chances by attaching a “poison pill” or “killer” amendment that would doom the chances of the whole measure passing. Members may also vote to “recommit” the bill to committee to kill it, to stall
it, or to further change it. Once the debate and amendments are done, members vote on the bill, often first by a voice vote and then by a roll-call vote. The United States has a bicameral Congress in which both chambers are equally important. Therefore, once the House passes a bill, it must also be passed by the Senate where the bill is treated as a brand new proposal, even if it has already been subjected to considerable scrutiny by the other chamber. The president of the Senate receives the bill and sends it to a committee. As in the House, different committees may be more or less kindly disposed to the bill. The committee chairperson may well send the bill to an appropriate subcommittee, which can then hold hearings and modify the bill. When and if the subcommittee passes the bill, it goes to the full committee, which may itself hold hearings and make changes. If the committee votes to pass the bill, then it goes to the full Senate for debate. The process for floor debate in the Senate is similar to that in the House, but it is often less formalized. In general, legislation advances in the House according to formal rules and procedures, but legislative progress in the Senate depends crucially on consensus. Like the House, the Senate has calendars on which a bill may be placed for consideration, but usually the Senate proceeds by means of a “unanimous consent agreement.” These agreements are negotiated between party leaders, usually the majority and minority leaders, and they govern the nature of the debate for the measure, much as the House rules committee does. While the House leadership can largely control the nature of debate there, debate in the Senate is less constrained. This is because the Senate purports to be “the world’s greatest deliberative body.” Once a senator is formally recognized on the floor of the Senate by the presiding officer, he or she can speak as long as he or she wishes. Sometimes, a senator will use this privilege to try to derail a bill by talking it to death, which is called a filibuster. For example, in 1957 Senator Strom Thurmond (D-SC) spoke for more than 24 hours straight in an effort to filibuster a civil-rights bill. Sixty senators can stop a filibuster by voting to invoke “cloture,” but often the mere threat of a filibuster is sufficient to derail a bill or to wrest concessions. For this reason, senators try to ensure
that all views can be heard and that opponents of a bill can have their fair say. Because the House and the Senate are different institutionally, procedurally, and politically, the version of a bill that emerges from the Senate is often different from the version that passed through the House. Constitutionally, the two houses of Congress are equally important, and both must pass a bill in identical form for it to advance. Often, this means that one chamber or the other chooses to pass the version that the other chamber passed. But sometimes the differences are so great that they are difficult to reconcile, and in such cases, the bills are sent to a conference committee that is specially created for the sole purpose of ironing out the differences in the two versions. The membership of a conference committee is usually composed of senior members from both chambers and both parties. The conference committee can be a crucial point in the legislative process and often involves tough bargaining and significant revisions. Once the differences are resolved, both chambers must pass the revised version. After the House and the Senate have passed identical versions of the same bill, it is delivered to the president, who has 10 days to act on it. He can sign it into law, veto it, or ignore it. If the president vetoes or rejects a bill, then Congress may override him by a two-thirds vote in both chambers, in which case the bill becomes law over or despite the president’s veto. Members of Congress are often reluctant to vote against a president of their own party who has vetoed a bill, so votes to override a veto can be dramatic conflicts between the legislative branch and the executive branch. If Congress is within 10 days of the end of a session, then the president may simply ignore the bill and let it expire, which is known as a “pocket” veto. The president may also choose to let a bill become law without his signature. For example, in 1894, President Grover Cleveland let a bill for income tax become law without signing it. While the president’s actions usually mark the end of the legislative process, the judicial branch may eventually become involved, as the U.S. Supreme Court can declare a law unconstitutional and therefore void. Such is the legislative process. Again, it is long and complicated. Not every bill follows this exact path, but most do. There are exceptions for fast-track legislation, omnibus legislation, and the budget pro-
cess. Additionally, Congress can at times move very quickly, and it has occasionally done so in time of crisis or during a president’s first 100 days in office. On occasion, Congress itself has rushed through the usual legislative process, as it did with the GOP’s Contract With America in 1995. But these are all exceptions, and usually the legislative process is arduous, complicated, and slow. Most bills do not successfully make it to the end of the process and become laws. Bills expire at the end of a congressional session, which lasts two years. If a bill is not passed in that time, then it must be reintroduced in the next Congress. Many bills are reintroduced every few years but never get far. Of the roughly 10,000 bills that are proposed during a congressional term, only 10 percent become laws. Of the 90 percent of bills that do not become laws, some 90 percent of them fail because they die in committee. It is simply very hard to pass new legislation. But that difficulty does not mean that the process functions poorly; rather, it is an indication that the process is working well because it was designed to work slowly. We may fault the founders for creating a system in which it is hard to get things done, but we ought not to fault them for designing a faulty system since the legislative system works much as they wanted it to. Also, while the legislative process can be confusing and frustrating, it can also be highly democratic in that there are so many opportunities in the legislative process for influence and so many points at which a bill can be changed or defeated. Political minorities or powerful special interests can be heard, and they have some ability to block or change legislation, but if a significant majority wants a bill to become law, it will usually prevail. Further Reading Elving, Ronald D. Conflict and Compromise: How Congress Makes the Law. New York: Simon and Schuster, 1995; Fenno, Richard F. Learning to Legislate: The Senate Education of Arlen Specter. Washington, D.C.: Congressional Quarterly Press, 1991; Sinclair, Barbara. Unorthodox Lawmaking: New Legislative Processes in the U.S. Congress. Washington, D.C.: Congressional Quarterly Press, 2000; Waldman, Stephen. The Bill. New York: Penguin, 1996. —Graham G. Dodds
Library of Congress The Library of Congress occupies a special place in the U.S. experience. When President John Adams signed the congressional act that transferred the home of the national government from Philadelphia to a new capital city, Washington, D.C., on April 24, 1800, he also established the Library of Congress. The act appropriated $5,000 “for the purchase of such books as may be necessary for the use of Congress . . . , and for fitting up a suitable apartment for containing them . . . .” The original library was housed in the Capitol building. On August 24, 1814, the entire catalogue (3,000 volumes) of the Library of Congress was destroyed when British troops occupied and burned Washington. A total of 189 years after this event, on July 17, 2003, British Prime Minister Tony Blair officially apologized for the burning of the Library of Congress in a speech delivered to Congress. Within a month of the fire, Thomas Jefferson offered to sell his personal library to Congress to “recommence” its library. After 50 years of collecting books on a variety of topics and in many different languages, Jefferson’s library was considered to be one of the greatest in the country. The debate that followed Jefferson’s offer ultimately dictated the future direction of the library. Some were concerned about the appropriateness of purchasing the collection because it contained books that were well beyond the bounds of what was expected to be in a legislative library. Jefferson argued that his collection was not too comprehensive because there was “no subject to which a Member of Congress may not have occasion to refer.” In January 1815, Congress allocated $23,950 to purchase Jefferson’s 6,487 books. This acquisition provided the impetus for the expansion of the library’s functions. It was Jefferson’s belief in the power of knowledge and the connection between knowledge and democracy that has shaped the library’s philosophy of sharing and expanding its collections. Today, the library’s collections are among the finest in the world. The library houses one of three perfect existing vellum copies of the Gutenberg Bible. It also housed the original copies of the U.S. Constitution and the Declaration of Independence before relinquishing control to the National Archives. It managed to retain control of Jefferson’s handwritten
draft of the Declaration of Independence, complete with handwritten notes in the margins by Benjamin Franklin and John Adams. Recently, it obtained the only known copy of the first edition of Alexis de Tocqueville’s De la Démocratie en Amérique (Democracy in America) in the original paper wrappers. The Library also contains 43 percent of the approximately 40,000 items known to have been printed in America prior to 1801. Recognized by The Guinness Book of World Records, the Library of Congress is the “World’s Largest Library.” The Library’s current collection fills more than 530 miles of shelf space. The collection contains more than 130 million items, 29 million of which are books. It holds more than 1 million U.S. government publications, 1 million issues of world newspapers covering the last three centuries, the largest rare-book collection in North America, 4.8 million maps, 2.7 million sound recordings, the world’s largest collection of legal materials, and even 6,000 comic books. It is the world’s largest repository of maps, atlases, printed and recorded music, motion pictures and television programs. The United States Copyright Office is housed within the Library of Congress. This has aided the library’s ability to collect items deemed “significant.” Every day, the Library receives 22,000 new items that are published in the United States. Because the Copyright Office requires two copies of every published item before it can be copyrighted, the library has access to the most important documents published in the country. On a typical day, it adds an average of 10,000 items to its collection. Items that are not retained are used in trades with other libraries around the world or are donated. Due to the sheer volume of materials in its collection, the library created a system to classify books called the Library of Congress Classification. Most U.S. research and university libraries have adopted this system. The system divides subjects into broad categories, which were based primarily on the library’s organizational needs. Although the Library of Congress was originally designed to provide Congress with research materials, the library has become the de facto national library. This is partially due to the sheer volume and breadth of its collection. It is also a result of the Library’s decision to open to the public. Anyone older
than the age of 18 who has a government-issued picture identification can gain access to the collection. Only members of Congress, U.S. Supreme Court justices and their staff, Library of Congress staff, and certain other government officials are allowed to check out books. The Library will also grant interlibrary loans but as the “library of last resort.” In the tradition of Jefferson, the library has sought to continue its goal of preserving books and other objects from all over the world to allow legislative, scholarly, and independent research. In 1994, the library launched “American Memory.” This Internet-based program provides an archive of public-domain images, audio, videos, and Internet resources. Building on this success, it announced its intentions to create the “World Digital Library” in 2005. Its goal is to digitize important documents from every nation and culture in the world, thus preserving and simultaneously allowing the general public access to millions of documents to which they would otherwise never gain access. In addition to these services, the library is the home of the National Library Service for the Blind and Physically Handicapped, a program that provides talking and Braille resources to more than 750,000 people. It sponsors musical, literary, and cultural programs across the nation and the world, and it houses the nation’s largest preservation and conservation program for library materials. The library is also the home of the nation’s poet laureate. The library is responsible for providing an online archive of the activities of Congress. These archives include bill text, Congressional Record text, bill summaries and status updates, and the Congressional Record index. The Congressional Record is the official record of the proceedings and debates of Congress. For the library to fulfill its expectations in all of these areas, its annual appropriations from Congress have grown from $9 million in 1950 to more than $330 million in 2000. The relationship between the library and Congress has allowed it to grow into the institution that it is today. In 1950, the eminent librarian S. R. Ranganathan stated, “The institution serving as the national library of the United States is perhaps more fortunate than its predecessors in other countries. It has the Congress as its godfather . . . . This stroke of good fortune has
made it perhaps the most influential of all national libraries in the world.” The library has also become dependent upon contributions to the Trust Fund Board, which helps to fund exhibitions, the poet laureate, and other projects. The board was created in response to outside demands through the Library of Congress Trust Fund Board Act of 1925. In addition to financial contributions, this act allowed the library to accept gifts and bequests of private collections without having to purchase them. The Trust Fund Board accepts gifts of all varieties, including a donation of five Stradivarius instruments and the funding for concerts where these instruments were to be played. The library has undergone many changes since its inception. Initially, it was seen as the research arm of the legislature. By 1802, Congress realized that the library needed an official librarian, the Librarian of Congress. Between the procurement of Jefferson’s collection and 1850, the library’s collection slowly grew. In 1851, a fire destroyed two-thirds of its 55,000 volumes and two-thirds of Jefferson’s library. In 1852, Congress appropriated $168,700 to replace the lost volumes, but it did not allocate anything to expand the collection. Growth came to a standstill when Congress removed the library’s primary source of books, the U.S. Copyright Office, in 1859. Near the end of the Civil War, President Abraham Lincoln appointed Ainsworth Rand Spofford as the librarian. He served from 1865 until 1897. Spofford is credited with transforming the library into the institution it is today. He adopted Jefferson’s beliefs on the importance of the library obtaining a varied collection. He viewed the library as a U.S. national library not limited to the needs of Congress. Taking advantage of the rapid social and industrial changes that followed the Civil War, Spofford positioned the library to benefit from the favorable political environment. His persuasiveness was rewarded with six laws or resolutions that allowed a national role for the library. He convinced Congress to move the Copyright Office back to the library, and responding to his requests, Congress passed the Copyright Act of 1870, which moved all U.S. copyright registration and deposit activities to the library. By 1870, it was obvious that the rapidly expanding collections of the library were too much for the space provided by the Capitol building. After years of
debate, Congress authorized the library to build its own building in 1886. The Library of Congress finally moved out of the Capitol building in 1897. Today, the library has expanded to three buildings in Washington, D.C. The original building has been renamed the Thomas Jefferson Building. In 1938, the library built an annex, which was later named the John Adams Building. The most recent addition, the James Madison Memorial Building, opened in 1981 and is now its headquarters. Right before the move into their new building, President William McKinley appointed a new librarian, John Russell Young. Young oversaw the move and the subsequent organization within the library. He decided to honor Jefferson’s influence on the library by creating a special room to house the remaining books of his collection. His greatest contribution to the library was the library service for the blind and physically handicapped. In the spring of 1899, President McKinley appointed Herbert Putnam as the new librarian of Congress. Putnam’s belief was that the library should be “Universal in Scope: National in Service.” As the first experienced librarian to serve in this role, he strived to expand access to the vast collections. Putnam was responsible for the Library of Congress Classification. He also initiated an interlibrary loan program. Although the library was designed for congressional research, Putnam defended his loan policy as being worth the risk because “a book used, is after all, fulfilling a higher mission than a book which is merely being preserved for possible future use.” With the new rules concerning obtaining a copyright within the United States, the library was growing with every new item published. Because these new laws enabled the library to grow without the need to spend any money, Putnam shifted his focus toward building international collections. In 1904, he purchased a 4,000 volume library on India. He acquired an 80,000 volume library of Russian literature from a private collector in Siberia. He also sought out collections of early opera librettos, Chinese and Japanese books, and collections on the Hebrew Bible. Putnam justified these acquisitions because he “could not ignore the opportunity to acquire a unique collection which scholarship thought worthy of prolonged, scientific, and enthusiastic research, even though the immediate use of such a collection may prove mea-
ger.” It was during Putnam’s tenure that the library paid $1.5 million for a 3,000-volume collection that included the Gutenberg Bible. In 1939, President Franklin D. Roosevelt appointed writer and poet Archibald MacLeish as the new librarian. MacLeish only served until 1944, but he accomplished quite a bit during his short tenure. Perhaps his greatest accomplishment stemmed from the country’s involvement in World War II. He argued that the library must have the “written records of those societies and peoples whose experience is of most immediate concern to the people of the United States.” MacLeish’s replacement, political scientist Luther H. Evans, was appointed by President Harry Truman in 1945. He expanded MacLeish’s programs to obtain international publications. In doing so, Evans strove to achieve Jefferson’s vision of “completeness and inclusiveness.” He used the postwar environment to his advantage, advocating that “no spot on the earth’s surface is any longer alien to the interest of the American people.” He argued that “however large our collections may now be, they are pitifully and tragically small in comparison to the demands of the nation.” Evans fought to expand the reaches of the library. For instance, the Library of Congress Mission in Europe was created to acquire European publications. Through this resource, the library initiated automatic book-purchasing agreements with international publishers and expanded international exchange agreements with foreign national libraries. The exchange agreements were aided by Evans’s belief that original source materials should reside in their country of origin. In 1954, President Dwight D. Eisenhower appointed L. Quincy Mumford as the new librarian. Mumford oversaw the greatest expansion in the library’s efforts to obtain international documents. In 1958, he convinced Congress to allow the library to use U.S.-owned foreign currency under the terms of the Agricultural Trade Development and Assistance Act of 1954 to purchase books. With the help of President Lyndon B. Johnson, the library was ordered to try to obtain all current library materials that were valuable to scholarship published throughout the world. With the growing collection of books, Mumford realized that a system for cataloging bibliographic information was essential. In the mid-1960s, he adminis-
tered the creation of the Library of Congress MARC (Machine Readable Cataloging) format for communicating bibliographic information. The MARC format became the official national standard in 1971 and an international standard in 1973. Historian Daniel J. Boorstin was named by President Gerald R. Ford as Mumford’s replacement in 1975. Boorstin placed a great deal of emphasis on collection development, book and reading promotion, the symbolic role of the library in U.S. life, and on the library becoming “the world’s greatest Multi-Media Encyclopedia.” His programs succeeded in increasing the visibility of the library. On September 14, 1987, President Ronald Reagan appointed another historian, James H. Billington, as the 13th Librarian of Congress. Billington is credited with much of the library’s embrace of technology. Billington oversaw the “American Memory” program and the development of the “World Digital Library” program. He has argued that the time and money needed to develop these new technologies are important for achieving Jefferson’s ideals by making their ever-increasing “knowledge available to Americans in their local communities.” Therefore, “even those Americans far from great universities and the most affluent schools and libraries can still have access to the best of the nation’s heritage and the latest in up-to-date information.” Further Reading Computer Science and Telecommunications Board, and National Research Council. LC21: A Digital Strategy for the Library of Congress. Washington, D.C.: National Academy Press, 2000; Conaway, James. America’s Library: The Story of the Library of Congress, 1800–2000. New Haven, Conn.: Yale University Press, 2000; Goodrum, Charles A., and Helen W. Dalrymple. The Library of Congress. Boulder, Colo.: Westview Press, 1982; Rosenberg, Jane Aikin. The Nation’s Great Library: Herbert Putnam and the Library of Congress, 1899–1939. Urbana: University of Illinois Press, 1993. —James W. Stoutenborough
logrolling The U.S. Congress, one of the most independent national legislatures in the world, often suffers from
the collective action dilemma, resulting in the failure to act in a coordinated manner for common legislative goals. Collective action is difficult to achieve because congressional members represent distinct geographical constituencies approximating a modest proportion or a minority of the national electorate. These members act rationally seeking the most efficient means to please their constituencies to win reelection (that is, sponsoring constituency-favored legislation) while often neglecting to pursue legislation collectively, especially for the common good of the nation. Therefore, collective interests are generally superseded by the individual policy platforms of congressional members, and this individualistic tendency can jeopardize the institutional maintenance of Congress. While several scholars feel that this collective action dilemma is exacerbated by logrolling, others believe that logrolling is one way in which this collective action dilemma can be resolved. Logrolling is a form of bargaining achieved by exchanging support for legislation through votes and other measures (that is, pork barrels or earmarks; other forms of bargaining involve compromise through legislative modification and nonlegislative favors). Due to the uncertain, competitive environment of the national legislature, congressional members find it necessary and rational to cooperate with each other by reciprocally supporting dissimilar policies such as agriculture interests for farm states (that is, price supports) in exchange for food stamps and other nutritional programs for urban areas. As a result, coalitions are built among groups of legislators who concede to the other their least important policy choices to win their favored legislation. For example, representatives 1, 2, and 3 who feel strongly about legislative measure A will exchange their support with another group of legislators composed of 4, 5, and 6 favoring legislative measure B. As a result, representatives 1, 2, and 3 vote for measure B, and representatives 4, 5, and 6 vote for measure A, yielding a coalition of six voting members for both measures. In this way, logrolling assists the collectivity of individual policy platforms through the exchange of geographically based legislative support. Furthermore, individual logrolls like these may also be employed to facilitate the passage of broadly based or “general interest” legislation for the common good of the nation.
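The arithmetic of the vote trade described above can be made concrete with a brief sketch. The following Python fragment is illustrative and not drawn from the entry: the nine-member chamber, the assumption that members 7 through 9 oppose both measures, and the variable and function names are hypothetical choices used only to show why each measure fails on its own three votes but passes once the two blocs exchange support.

```python
def passes(yes_votes: set[int], chamber_size: int = 9) -> bool:
    """A measure passes on a simple majority of the full chamber."""
    return len(yes_votes) > chamber_size // 2  # a majority of 9 is 5

# Hypothetical nine-member chamber: members 1-3 care only about measure A,
# members 4-6 care only about measure B, and members 7-9 support neither.
supporters_A = {1, 2, 3}
supporters_B = {4, 5, 6}

# Without a logroll, each bloc votes only for its own measure: both fail, 3-6.
print(passes(supporters_A), passes(supporters_B))   # False False

# With a logroll, the two blocs trade votes, giving each measure six supporters.
coalition = supporters_A | supporters_B
print(passes(coalition), passes(coalition))          # True True
```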
The prevalence and functionality of logrolling have been examined through several generations of legislative theories involving the demand-side and the supply-side characteristics of Congress. While some scholars favor the demand-side, “gains from exchange” argument of logrolling (as discussed above), others are skeptical that the main purpose of Congress is to facilitate vote trading deals between individualistic, reelection-seeking legislators, focusing instead on the institutional structures through which these policy objectives are channeled. Arguably, these varied approaches are not mutually exclusive but are complementary, and the prevalence of logrolling will depend on whether the time period under study is one from unified or divided control. Admittedly, while such logrolling exists during periods of both partisan unity and disunity, this pattern will become more prevalent during divided government when majority party mechanisms of control are weakened. The competitive, uncertain environment of securing desired distributions therefore compels rational congressional members to cooperate with each other to an ever greater extent when the presidency is held by the opposite political party. First-generational models of congressional studies emphasize the demand side of institutional life, highlighting the rationality of “gains from exchange” during low party cohesion. Due to the competitive environment of securing desired distributions, such as needed district expenditures including earmarks or pork-barrel projects, congressional members cooperate with each other primarily through logrolling (vote trading). While first-generational theorists view logrolling as a prevalent legislative practice, many of them do not necessarily consider the usefulness of this practice in its own right. Alternatively, other scholars subscribing to this perspective suggest that logrolling “greases the wheels” of Congress, thus facilitating consensus building for broadly based legislative agendas. In this sense, overcoming the collective action problem that is endemic in the national legislature is achieved through strategic logrolling, and this method is more widespread with diminished partisan unity between the separate branches. In this divided state, achieving minimum winning coalitions is also more uncertain than during unified control, and with this uncertainty, universalistic coalitions granting “something for everyone” and reciprocity or
mutual noninterference become more common practices, enabling congressional members to win their policy objectives and hence reelection more readily. Other scholars from the second- and third-generational models are skeptical that the main purpose of Congress is to facilitate legislative logrolling deals between individualistic, reelection-seeking legislators, focusing instead on the institutional structures through which these policy objectives are channeled. These scholars are doubtful about the frequency of logrolling because of the lack of institutionalized monitoring efforts as well as the absence of enforceable stipulations for such vote-trading arrangements. Instead, these supply-side scholars employ informational and partisan theories among others to elucidate congressional organization and practices. Informational theories, for example, emphasize policy expertise as a collective good, claiming that once policy specialists obtain such expertise through their respective committees, they share this information, which benefits all legislators. Additionally, according to partisan theories, party leadership is significantly enhanced through the promotion of party-favored initiatives and the reduction of dissident party-member effects. Specifically, the appointment and scheduling powers strengthen the majority party’s advantage in determining who will preside over committee deliberations, which legislative proposals will be accepted to the floor, and when the policy measures will be moved to the floor. Thus, parties solve collective action dilemmas by enabling organized activity to remedy “the rational but unorganized action of group members” through extensive monitoring by party leaders. While congressional members spend considerable time fulfilling their electoral aspirations, they adhere to the partisan institutions that serve as a moderating force. As a result, while the majority party leaders continue to be influenced by their individual districts, their party’s success impels them to embrace wider issues, thereby enhancing the majoritarian principle of the congressional body via the committee system, especially, one could argue, during periods of unified government when the majority partisan vision is followed more readily without contestation. Arguably, these varied approaches to congressional organization are not mutually exclusive but
complementary, as the first-generational theories examine how rational actors influence the institutional mechanisms of legislative business to achieve their reelection and policy goals, while the second- and third-generational theories assess how these individual actors are constrained by the structural elements of the legislative institution, especially since the 1970 congressional reforms. By itself, neither approach is holistic. With respect to the partisan rationale, for example, while the authority of the majority party leadership in Congress has been enhanced following the 1970 procedural reforms, such authority is often compromised when the presidency is held by the opposite political party of one or both of the legislative houses. Therefore, during divided government, there will be less party unity within Congress because the administration will curry favor with the opposite party in attempting to initiate its policy agenda, often neglecting its own party. This negligence is often evidenced by weaker partisan unity in roll-call votes. In this divided state, the formation of minimum winning coalitions is uncertain, leading rational congressional members to pursue universalistic coalitions to achieve their reelection and policy objectives. Moreover, regarding the informational rationale, congressional members are assigned to committees which accommodate their policy expertise as well as their quest for tailored district subsidies. More important, the idea of sharing policy expertise for the good of the Congress also suffers from the collective action dilemma due to the lack of institutionalized monitoring efforts as well as the absence of enforceable stipulations for such information-trading arrangements, especially during divided control. The functionality and long-term stability of logrolling should therefore not be overlooked, especially in the House of Representatives. While it may not be the predominant form of legislative organization, logrolling does enable bargaining among and hence representation of diversified, minority interests. This helps to ameliorate the deficiencies of a “winner-take-all plurality system” where the needs of minorities are negated unless they are congregated within an electoral district. In this way, the majoritarian objectives are balanced with the minority interests rather than monopolized, thus defusing the exclusive and antagonistic environment of the lower legislative chamber. Moreover, such logrolling
through pork barreling or earmarking accounts for a smaller percentage of the overall appropriated budget in any given year than is usually implied by skeptical observers including the media. Finally, while logrolling is facilitated through informal mechanisms rather than formal, enforceable contracts and monitoring efforts, such mechanisms should not be underestimated. With the increasing professionalization of the national legislature, congressional members endeavoring to pursue long-term careers will have less incentive not to honor their informal vote-trading agreements as they anticipate the need for cooperation with the same veto players in the future. As a result, the reciprocal arrangements made by congressional members through various types of logrolling perform an important legislative purpose for a complex, heterogeneous state. While the theoretical rationale for logrolling has been established, the challenge to scholars of congressional studies involves improving the research methods used to evaluate this practice especially between times of divided versus unified government. Further Reading Collie, Melissa P. “Universalism and the Parties in the U.S. House of Representatives, 1921–80.” American Journal of Political Science 32, no. 4 (1988): 865–883; Cox, Gary W., and Samuel Kernell. The Politics of Divided Government. Boulder, Colo.: Westview Press, 1991; Cox, Gary W., and Mathew D. McCubbins. Legislative Leviathan: Party Government in the House. Berkeley: University of California Press, 1993; Evans, Diana. Greasing The Wheels: Using Pork Barrel Projects to Build Majority Coalitions in Congress. Cambridge England: Cambridge University Press, 2004; Fenno, Richard F. Congressmen in Committees. Boston: Little, Brown, 1973; Ferejohn, John. “Logrolling in an Institutional Context: A Case Study of Food Stamp Legislation.” In Congress and Policy Change, edited by Gerald C. Wright, Leroy N. Rieselbach, and Lawrence C. Dodd. New York: Agathon Press, 1986; Fiorina, Morris P. “Universalism, Reciprocity, and Distributive Policymaking in Majority Rule Institutions.” In Research in Public Policy Analysis and Management. Vol. 1, edited by John P. Cresine, 197–221. Greenwich, Conn.: JAI Press, 1981; Kernell, Samuel, and Gary C. Jacobson. The Logic of American Politics, 2nd ed. Washington, D.C.:
Congressional Quarterly Press, 2003; Krehbiel, Keith. Information and Legislative Organization. Ann Arbor: University of Michigan Press, 1991; Lee, Frances E. “Geographic Politics in the U.S. House of Representatives: Coalition Building and Distribution of Politics.” American Journal of Political Science 47, no. 4 (2003): 714–728; Lichbach, Mark Irving, and Alan S. Zuckerman. Comparative Politics: Rationality, Culture and Structure. Cambridge, England: Cambridge University Press, 1997; Lowi, Theodore J. The End of Liberalism: The Second Republic of the United States, 2nd ed. New York: W.W. Norton & Company, 1979; Mayhew, David R. Congress: The Electoral Connection. New Haven, Conn.: Yale University Press, 1974; Oleszek, Walter J. Congressional Procedures and the Policy Process. 6th ed. Washington, D.C.: Congressional Quarterly Press, 2004; Olson, Mancur. The Logic of Collective Action: Public Goods and the Theory of Groups. Cambridge, Mass.: Harvard University Press, 1971; Rhode, David W. Parties and Leaders in the Postreform House. Chicago: University of Chicago Press, 1991; Shepsle, Kenneth A., and Mark S. Bonchek. Analyzing Politics: Rationality, Behaviors and Institutions. New York: W.W. Norton & Company, 1997; Shepsle, Kenneth A., and Barry R. Weingast. Positive Theories of Congressional Institutions. Ann Arbor: University of Michigan Press, 1995; Stratmann, Thomas “Logrolling.” In Perspectives on Public Choice: A Handbook, edited by Dennis C. Mueller. Cambridge, England: Cambridge University Press, 1997; Wildavsky, Aaron, and Naomi Caiden. The New Politics of the Budgetary Process. 5th ed. New York: Pearson and Longman, 2004. —Pamela M. Schaal
party politics In the past 30 years, the scholarly literature has paid considerable attention to the so-called party crisis. Before Katz and Mair (1995) provided a new interpretation for the changes that parties in industrial societies were undergoing, parties were believed to be in a crisis for a variety of reasons. Parties were considered to be in a critical state because of the growing detachment between parties and society, the emergence of new social movements, the increasing impor-
tance of interest groups in the political arena, and the increasing similarity between parties. U.S. parties were no exception to this global trend. The party-crisis argument was also applied to the U.S. case. U.S. parties were in crisis not only because they were no longer able to mobilize voters (though some scholars claim that U.S. parties are actually unwilling to mobilize voters) but also because their strength in the legislative arena, which had steadily declined since the early 20th century, was at an all-time low. Specialists of U.S. party politics use a variety of metrics to estimate parties’ legislative strength, which is the ability of parties to control their members, to structure legislative voting and to shape legislative outcomes. Some of the variables employed to assess party strength are party vote, party unity, likeness, and party rule (see the data presented in the accompanying table). The party vote variable indicates the percentage of roll-call votes in which at least 50 percent of the Democrats oppose at least 50 percent of the Republicans. The party unity variable measures the average percentage of representatives who vote with their party majority in the party votes. The likeness variable estimates the extent to which two parties vote alike. The party rule variable measures the percentage of time that majority party members make up a majority in a party vote. Higher levels of party strength are associated with higher levels of party votes, party unity, and party rule, and with lower levels of likeness. From the inception of the 57th Congress (1901) to the end of the 61st Congress (1911), the percentage of party votes ranged from a minimum of 56.3 percent in the 60th Congress (1907–09) to a maximum of 90.8 percent in the 58th Congress (1903–05). In this period, the party unity for the Democratic Party ranged from a minimum of 89 percent in the 60th Congress (1907–09) to a maximum of 93.3 percent in the 59th Congress (1905–07), whereas the party unity of the Republican Party in this decade varied from a minimum of 87.5 percent in the 61st Congress (1909–11) to a maximum of 93.5 percent in the 57th Congress (1901–03). This decade was also characterized by low levels of likeness (from 21 in the 58th Congress to 49 in the 60th Congress) and high levels of party rule.
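The measures defined above lend themselves to straightforward computation from roll-call records. The following Python sketch is illustrative and not part of the original entry: the roll-call tallies are invented, the Democrats are assumed to be the majority party, the likeness calculation follows the common Rice-style index (100 minus the gap between the two parties’ yea percentages), and party rule is operationalized here as the share of party votes in which the majority party’s position prevails, one reasonable reading of the definition above.

```python
# Each roll call records the yea/nay tallies for the two parties; the majority
# party in this toy example is assumed to be the Democrats.
roll_calls = [
    {"dem_yea": 180, "dem_nay": 20,  "rep_yea": 30,  "rep_nay": 170},  # party vote
    {"dem_yea": 150, "dem_nay": 50,  "rep_yea": 160, "rep_nay": 40},   # bipartisan
    {"dem_yea": 40,  "dem_nay": 160, "rep_yea": 150, "rep_nay": 50},   # party vote
]

def is_party_vote(rc):
    """At least half of the Democrats oppose at least half of the Republicans."""
    dem_yea_share = rc["dem_yea"] / (rc["dem_yea"] + rc["dem_nay"])
    rep_yea_share = rc["rep_yea"] / (rc["rep_yea"] + rc["rep_nay"])
    return (dem_yea_share >= 0.5) != (rep_yea_share >= 0.5)

party_votes = [rc for rc in roll_calls if is_party_vote(rc)]
pct_party_votes = 100 * len(party_votes) / len(roll_calls)

def unity(rc, party):
    """Share of a party's members voting with their own party majority."""
    yea, nay = rc[f"{party}_yea"], rc[f"{party}_nay"]
    return 100 * max(yea, nay) / (yea + nay)

dem_unity = sum(unity(rc, "dem") for rc in party_votes) / len(party_votes)
rep_unity = sum(unity(rc, "rep") for rc in party_votes) / len(party_votes)

def likeness(rc):
    """Rice-style index: 100 minus the gap between the parties' yea percentages."""
    dem = 100 * rc["dem_yea"] / (rc["dem_yea"] + rc["dem_nay"])
    rep = 100 * rc["rep_yea"] / (rc["rep_yea"] + rc["rep_nay"])
    return 100 - abs(dem - rep)

avg_likeness = sum(likeness(rc) for rc in roll_calls) / len(roll_calls)

# Party rule: share of party votes in which the majority party's position wins.
wins = sum(1 for rc in party_votes
           if (rc["dem_yea"] + rc["rep_yea"] > rc["dem_nay"] + rc["rep_nay"])
           == (rc["dem_yea"] > rc["dem_nay"]))
party_rule = 100 * wins / len(party_votes)

print(pct_party_votes, dem_unity, rep_unity, avg_likeness, party_rule)
```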
MEASURE OF PARTY STRENGTH

Congress   Year        Party Vote   Democratic Unity   Republican Unity   Likeness   Party Rule
57th       1901–03     67.0         89.9               93.5               40.6       79.0
58th       1903–05     90.8         93.0               92.9               21.0       73.4
59th       1905–07     74.3         93.3               88.6               34.8       83.1
60th       1907–09     56.3         89.0               88.1               49.0       80.8
61st       1909–11     80.2         91.4               87.5               34.8       49.4
73rd       1933–35     72.7         85.2               88.4               42.6       87.5
74th       1935–37     60.4         84.7               85.9               51.9       83.6
75th       1937–39     63.9         81.0               87.2               49.4       82.7
76th       1939–41     71.4         83.3               87.5               45.2       59.3
77th       1941–43     42.8         81.7               85.2               63.3       55.4
90th       1967–69     35.6         73.4               78.8               74.4       21.2
91st       1969–71     28.9         71.1               71.6               80.7       10.9
92nd       1971–73     32.2         71.2               76.5               77.3       16.3
93rd       1973–75     36.1         73.9               72.6               75.7       13.4
94th       1975–77     41.8         74.8               76.3               71.3       52.6
103rd      1993–95     63.8         88.6               87.3               46.0       75.0
104th      1995–97     67.4         83.5               91.9               44.6       68.4
105th      1997–99     52.5         85.4               89.5               55.1       47.9
106th      1999–2001   45.2         86.5               89.5               60.7       40.4
107th      2001–03     41.6         87.0               93.4               61.3       42.8

With the sole exception of the 61st Congress when the party rule score fell below 50, the party rule score was well above the 70 percent mark for the rest of the decade. The data provide a fairly consistent picture: Before the reforms of the Progressive era, parties were remarkably strong. Parties had distinct political agendas, were able to control their members, and, by doing so, were able to shape political outcomes. The reforms of the Progressive era weakened in the long run the ability of parties to structure political outcomes and created the conditions for the emergence of the so called “Textbook Congress.” Discussing the impact of the progressive reforms, Joseph Cooper noted that “these changes did not immediately diminish the control of party leaders in the House. In the two Congresses that followed the revolt against the Speaker the Democrats took over the House and ruled it as tightly through reliance on the majority leader and the caucus as it had been ruled by the Republicans in the era of the czar rule. . . . Still, whatever the immediate failings of reform, the begin-
nings of a transition to a very different type of House were set in motion.” The “Textbook Congress” differed from the previous Congress in many respects. When the Democrats took control of the Congress in 1911, they “imposed several changes in the rules. Cannonism [the rule of Speaker Joe Cannon, 1903–11] had rested upon three principal powers: floor recognition, control of the rules committee, and the power on committee appointment. Of these, the first two had been limited in the preceding Congress . . . and the Democrats now removed the last of the speaker’s major powers, his control over committee appointment.” Yet, in spite of all these differences, parties were still strong in the legislative arena. From the beginning of the 73rd Congress in 1933 until the end of the 77th Congress in 1943, the percentage of party votes was still fairly high. In fact, with the exception of the 77th Congress when less than 43 percent of the votes were party votes, in all the other Congresses of the decade, at least 60 percent of the roll-call votes were
party votes. Parties were also very cohesive. The party unity score for the Republican Party never fell below 85 percent, while the unity score for the Democrats never fell below 81 percent. In spite of the fact that parties were no longer as united as they had been at the beginning of the decade, they displayed high levels of cohesion, and the ability of the parties, that is the Democratic Party’s ability, to shape political outcomes was even greater than it had been three decades earlier. From 1933 to 1943, party rule never fell below 55.4 percent, and in three congressional sessions (the 73rd, 74th, and 75th) it was well above the 80 percent mark. In the 1970s, when David Mayhew wrote Congress: The Electoral Connection, the Congress, defined as a “post-reform Congress,” was quite different from what it had been in the New Deal era. One of the major differences was represented by parties’ weakness, which was signaled by all the indicators of party strength. In fact, from the beginning of the 90th Congress in 1967 to the end of the 94th Congress, the percentage of party votes varied from a minimum of 32.2 percent in the 92nd Congress to a maximum of 41.8 percent in the 94th Congress. Party unity was also very low. The Democratic party unity and the Republican party unity never reached, respectively, the 75 and the 79 percent mark. Given the low partisanship of the Congresses of this era, it is not surprising that in this decade the party rule score recorded some of its lowest values: 10.9 in the 91st Congress, 13.4 in the 93rd Congress, 16.3 in the 92nd Congress—values that ranged from one-fifth to one-fourth of the party rule scores that had characterized the beginning of the century as well as the early years of the “Textbook Congress.” By the time Mayhew wrote Congress: The Electoral Connection, parties were extremely weak (especially if compared to what they had been in previous decades), and Mayhew’s work was basically an attempt to explain the weakness of the parties. Mayhew’s explanation was quite ingenious. According to Mayhew, to understand the legislative behavior of members of Congress, their voting patterns, and the low levels of party unity, it was necessary to perform a paradigmatic shift from party-centered analysis to individual-centered analysis. Mayhew’s account is fairly straightforward: constituency preferences are different across the country, mem-
bers of Congress of the same party are elected in very different constituencies, members of Congress need to please their constituents to win re-election, and they do so by voting the constituents’ preferences rather than by voting the party line. This explains the remarkably low levels of party unity registered from the late 1960s to the early 1980s. It is impossible to overestimate the importance of Mayhew’s contribution to the scholarly debate. By shifting the analytic focus from the party to the individual congressman, and by positing that the individual, micro level is the correct level to understand the legislative behavior of members of Congress, he provided the theoretical foundations for the party politics debate in the past two decades. Mayhew’s idea that party-centered studies of Congress will not go very far led some scholars to argue that party government is conditional. For John Aldrich and David Rhode, parties are strong and party strength is high when members of a given party have the same preferences. According to Aldrich and Rhode, party homogeneity (within parties) is a function of policy disagreement (preference conflict) between parties and, more importantly, “these two considerations—preference homogeneity and preference conflict—together form the ‘condition’ in the theory of conditional party government.” Aldrich and Rhode noted, however, that party homogeneity does not depend exclusively on interparty disagreement but it also depends, to a considerable extent, on the electoral process. In fact, “[if] a party’s primary electorates are similar across the country, the policy preferences of the candidates selected within that party will be relatively similar,” while they are quite diverse otherwise. The conditional party government theory provides an important framework for the analysis of Congress. It explains why the preferences of members of Congress change over time, why changes in members’ preferences are responsible for fluctuations in the levels of party unity, and why, under some conditions, there is something that resembles party government. This theory, by design, does very little to show whether the members’ legislative behavior is affected by a party factor independent of their individual preferences. The issue is of some importance. According to Keith Krehbiel, the existence of a party factor is confirmed only if members of congressional parties act in a way “that is consistent with known party pol-
icy objectives but that is contrary to personal preference.” The analyses performed by Krehbiel with the data from the 99th Congress show that the presence of a significant party factor is rare. The debate as to whether party unity and interparty agreement are due only to members’ preferences or are due also to the party factor is far from over. The only thing that is quite clear is that regardless of whether the behavior of members of Congress reflects their preferences, their party preferences or (a combination of) both, the Congress (and members’ legislative behavior) in the 1990s was quite different from what it had been in the past. From the beginning of the Clinton presidency in 1993 to the end of the first Congress of the George W. Bush presidency in 2003, that is, before the inception and after the fall of the Republican Congress (or the Newt Gingrich Congress), congressional party politics had been quite homogeneous and quite distinct from the party politics of other eras. At the beginning of the century as well as in the first decade of the “Textbook Congress,” there was a lot of interparty disagreement coupled with very high levels of intraparty agreement. From the late 1960s to the early 1980s, low levels of intraparty agreement were coupled with high levels of interparty agreement. This period between 1993 and 2003 combined high levels of intraparty agreement (great party unity) with fairly high levels of interparty agreement. The party unity levels for both the Democratic Party and the Republican Party from 1997 to 2003 were as high as the levels registered from 1901 to 1911, but while in the time period from 1901 to 1911 party votes amounted to 73.7 percent of all roll-call votes, in the time period from 1993 to 2003 party votes amounted to only 54.1 percent of all roll-call votes. Is the combination of high levels of interparty agreement with high levels of intraparty agreement good for the U.S. political system and for U.S. democracy? The answer to this question varies depending on one’s normative assumptions. If one assumes that the existence of different policy views is essential for democracy, that the winning party should implement its program, and that the opposition should oppose the implementation of the government program, scrutinize the government actions, and show the mistakes of the government party with the hope of win-
ning the next election, then the high levels of interparty agreement of the past 15 years are bad for democracy as they indicate that the opposition has forfeited its role. If one assumes that policy stability is good and that bipartisan majorities are instrumental in ensuring policy stability, then one should positively judge the low levels of partisanship of the recent Congresses. If one believes, in a sort of Madisonian fashion, that only good policy measures can appeal to broad majorities of legislators, then the bipartisan nature of congressional majorities is good because it prevents bad policy measures from being passed. But high levels of interparty agreement can be seen in a very different and much more cynical way. The high levels of interparty agreement are consistent with the notion that political supply, namely the production of legislation, does not vary to adjust to changes in popular demand but is instead fixed through interparty agreements. One can in fact speculate that high levels of interparty agreement reflect the fact that only those bills that receive a sort of all-party support are passed, while those bills on which there are partisan disagreements are not pushed through the legislative process. Hence, according to this interpretation, it is the existence of interparty agreements and not the existence of social demands that determines whether a piece of legislation is enacted or not. U.S. parties collude to decide what kind of legislation can be enacted and, by doing so, they operate like a cartel—a behavior that is bad for, if not entirely inconsistent with, democracy. See also lobbying. Further Reading Aldrich, John H., and David W. Rhode. “The Logic of Conditional Party Government: Revisiting the Electoral Connection.” In Congress Reconsidered. 7th ed., edited by Lawrence C. Dodd and Bruce I. Oppenheimer. Washington, D.C.: Congressional Quarterly Press, 2001; Cooper, Joseph. “The Twentieth Century Congress.” In Congress Reconsidered. 7th ed., edited by Lawrence C. Dodd and Bruce I. Oppenheimer. Washington, D.C.: Congressional Quarterly Press, 2001; Cooper, Joseph, and Garry Young. “Partisanship, Bipartisanship, and Crosspartisanship in Congress Since the New Deal.” In Congress Reconsidered. 6th ed., edited by Lawrence C. Dodd and Bruce I.
Oppenheimer. Washington, D.C.: Congressional Quarterly Press, 1997; Crenson, Matthew, and Benjamin Ginsberg. Downsizing Democracy: How America Sidelined Its Citizens and Privatized Its Public. Baltimore, Md.: Johns Hopkins University Press, 2002; Daalder, Hans. “A Crisis of Party?” Scandinavian Political Studies 15 (1992): 269–288; Ginsberg, Benjamin, and Martin Shefter. Politics by Other Means. 3rd ed. New York: W.W. Norton, 2002; Katz, Richard S., and Peter Mair. “Changing Models of Party Organization: The Emergence of the Cartel Party.” Party Politics 1, no. 1 (1995): 5–28; Krehbiel, Keith. “Where Is the Party?” British Journal of Political Science 23 (1993): 235–266; Krehbiel, Keith. “Macro Politics and Micro Models: Cartels and Pivots Reconsidered.” In The Macro Politics of Congress, edited by E. Scott Adler and John Lapinski. Princeton, N.J.: Princeton University Press, 2006; Lawson, Kay. “When Linkage Fails.” In When Parties Fail: Emerging Alternative Organizations, edited by Kay Lawson and Peter Merkl. Princeton, N.J.: Princeton University Press, 1988; Mayhew, David R. Congress: The Electoral Connection. New Haven, Conn.: Yale University Press, 1974; Peters, Ronald M., Jr. The American Speakership. Baltimore, Md.: Johns Hopkins University Press, 1997; Schattschneider, E. E. Party Government. New York: Holt, Rinehart and Winston, 1942.
—Riccardo Pelizzo
pork-barrel expenditures
In August 2005, President George W. Bush signed into law the “Safe, Accountable, Flexible and Efficient Transportation Equity Act: A Legacy for Users,” or SAFETEA-LU, a massive and controversial transportation bill that cost $286.5 billion. What made the bill so controversial was that approximately 8 percent of the bill’s spending was reserved for roughly 6,000 specific highway and transit projects in members’ districts. The bill was lampooned by critics from across the political spectrum, including Jon Stewart on The Daily Show, who pointed out several of the bill’s seemingly worst offenses—$35 million for a new road at Wal-Mart headquarters in Arkansas, $2 million for parking at the University of the Incarnate Word in San Antonio, Texas, and $1.6 million for the American Tobacco Trail in North Carolina. But Stewart saved most of his scorn for Don Young, a Republican
from Alaska and chairman of the House Transportation and Infrastructure Committee. Young, along with Senator Ted Stevens, also a Republican from Alaska and chairman of the powerful Senate Appropriations Committee, had secured $1 billion of the money for his state, including $223 million for a mile-long bridge project that would connect the mainland to an island populated by just 50 residents. The project was dubbed the “Bridge to Nowhere,” and it eventually became so controversial that Congress was forced to redirect the funds, though the money is still likely to go to other transportation projects in Alaska. This episode serves as just one of the more dramatic examples of pork-barrel expenditures. One popular U.S. government text defines pork-barrel legislation as “Appropriations made by legislative bodies for local projects that are often not needed but that are created so that local representatives can carry their home district in the next election.” But this definition betrays two important biases about pork. First, while appropriations may still be the most important and widely recognized form of pork, it does come in a wider variety of forms than simply direct spending on local projects. Second, while this definition of pork asserts that pork is generally wasteful spending initiated by self-interested incumbents using taxpayer dollars to support their own electoral efforts, just how wasteful one believes pork-barrel legislation is often depends on where one sits. The negative understanding of the term is no surprise given the term’s origins. The term pork-barrel legislation would seem to be derived from the common practice in the pre–Civil War South in which slave owners would put out salt pork for their slaves. As Diana Evans puts it, “The resulting frantic rush for the barrel inspired the unflattering popular image of much better-fed politicians grabbing benefits for their constituents with the fervor of starving slaves scrambling for food.” Evans argues that a more neutral and inclusive definition of the pork barrel would be one that views pork as a distributive policy—one that “targets discrete benefits to specific populations such as states and congressional districts but spreads the costs across the general population through taxation.” Importantly, this definition would include not just direct spending, like the earmarks described above, but also locally targeted tax benefits, regulatory benefits, and other legally sanctioned “rents.”
This more inclusive definition aside, the most popular and widely recognized form of pork is the earmark, which Allen Schick defines as “spending with a zip code attached” and which Marcia Clemmitt defines more narrowly as “monies members of Congress secure for their hometowns or businesses they favor.” Earmarking is increasingly controversial in part because of the recent and rapid increase in both the number and the cost of earmarks doled out by Congress every year. According to the nonpartisan Congressional Research Service (CRS), the 13 FY 2005 appropriations bills included 16,072 earmarks that cost a total of $52.1 billion. This represents a dramatic increase over just the past decade. In the 13 appropriations bills in FY 1994, CRS identified a total of just 4,203 earmarks totaling $27.0 billion. These data suggest two important trends. The total amount of money spent on earmarks is rising and, perhaps more interestingly, the total number of earmarks is rising much faster. As a result, during this period the
average size of a single earmark declined by about half, from $6.4 million to $3.2 million (based on Congressional Research Service data). The process of requesting and obtaining earmarks varies across policy areas, but generally speaking, members of Congress submit their requests for earmarks to the chair of the relevant appropriations subcommittee. Far more requests are received than can be accommodated. Simply sifting through the requests requires significant staff time and effort. The relevant subcommittee chair usually makes most of the decisions in consultation with the ranking minority member of the subcommittee and, though the process is usually bipartisan, members of the majority can generally expect to receive more earmarks than members of the minority, and committee and subcommittee members can expect to receive more money than nonmembers. The language that actually designates where funds are to be spent is usually included in the committee report rather than the
actual appropriations bill itself, and these statements are usually respected by agencies implementing the law and spending the money. Some committees are known to hand out more earmark benefits, and others are known to be stingier. However, in recent years, nearly all of the appropriations subcommittees handed out large numbers of earmarks, with 10 of the 13 subcommittees handing out at least 400 separate earmarks in FY 2005. In that year, the largest number of earmarks, slightly more than 3,000, was handed out in the Labor, Health and Human Services, Education, and Related Agencies Appropriations Bill (based on Congressional Research Service data). In dollar terms, defense-related projects tend to be most popular for a variety of reasons, including widespread support for national defense and the belief that presidents are highly unlikely to veto any defense bill. This perception often shields such “defense pork” from the kind of criticism directed at the “Bridge to Nowhere” described above. Despite the popularity of the practice of earmarking and pork-barrel expenditures among members of Congress, critics abound with a wide variety of reasons for opposing pork and seeking reform of the process. One of the main arguments leveled against pork-barrel expenditures is that they are economically inefficient. In this line of argument, all pork is deemed to be wasteful spending that does more to boost the electoral prospects of individual members of Congress than it does to achieve important public goals. These critics argue that, particularly in light of large structural federal budget deficits, the wasted money is drawn away from other areas in critical need. For instance, according to some, recent supplemental spending bills for the wars in Iraq and Afghanistan have been loaded up with pork projects that drew money away from items considered vital for supporting the troops, such as helicopter rotor blades, tank tracks, and spare parts. A second line of argument against pork-barrel expenditures has to do with the perception of corruption. The most recent and perhaps the most egregious case in point is former Representative Randy “Duke” Cunningham, a Republican who represented northern San Diego County in California. Cunningham pleaded guilty in 2005 to accepting $2.4 million in bribes to earmark funds for defense contractors. Cases such as this are exceedingly rare, but many
observers worry about the common practice employed by party leaders in Congress of handing out or denying earmarks as rewards for following, or punishments for defying, the party line on key votes. Even where corruption is not as explicit as it was in Cunningham’s case, the mere perception that legislators are trafficking in local benefits at taxpayer expense to support their electoral efforts violates basic notions of fair electoral play and democracy in races between incumbents and their challengers. Others counter these arguments by pointing out that cases of corruption are exceedingly rare and by arguing that whether pork is wasteful or not is really a matter of one’s perspective. Those who defend pork against charges of corruption point out that Duke Cunningham was corrupt but that the process is not. They argue that the process is just as open to scrutiny as any other legislative process, that legislators are supposed to be, after all, advocates for local interests, and that they know best (better than bureaucrats in Washington, D.C., anyway) how to represent those interests. Additionally, they argue that claims of the economic inefficiency of pork are overblown. Evans argues that relatively limited amounts of pork can go a long way in helping to enact major pieces of legislation deemed to be in the general interest (such as the North American Free Trade Agreement). The reality is that pork represents a tiny portion of the total amount of discretionary spending and that the budget would not come close to balance even if all the pork were eliminated from it. Finally, many of the projects do represent real investments in important public goals, such as enhanced transportation systems, new courthouses, and research for cures for terminal diseases, and it is simply wrong to suggest that the public receives nothing of value for its money. A recent poll indicated that a majority of voters favor ending earmarks altogether. But whether one views pork sympathetically or not, pork-barrel expenditures are here to stay as long as Congress retains control of the spending power granted to it in Article I, Section 8 of the U.S. Constitution and as long as members of Congress represent single-member districts that are contiguous and compact in shape. Both the means and the motive to “bring home the bacon” are too strong for the frequent campaigns for fundamental reform to overcome. There is at least some reason
to believe, however, that the continued existence of pork-barrel expenditures is not quite the evil it is often portrayed to be.
See also lobbying.
Further Reading
Clemmitt, Marcia. “Pork Barrel Politics.” CQ Researcher. Available online. URL: http://library.cqpress.com/cqresearcher/cqresrre2006061600; Congressional Research Service. “Earmarks in Appropriation Acts: FY1994, FY1996, FY1998, FY2000, FY2002, FY2004, FY2005.” January 26, 2006. Available online. URL: http://www.fas.org/sgp/crs/misc/m012606.pdf; Dlouhy, Jennifer A. “Alaska ‘Bridge to Nowhere’ Funding Gets Nowhere.” San Francisco Chronicle, November 17, 2005, p. A7; Evans, Diana. Greasing the Wheels: Using Pork Barrel Projects to Build Majority Coalitions in Congress. New York: Cambridge University Press, 2004; Ferejohn, John A. Pork Barrel Politics: Rivers and Harbors Legislation, 1947–1968. Stanford, Calif.: Stanford University Press, 1974; Frisch, Scott A. The Politics of Pork: A Study of Congressional Appropriation Earmarks. New York: Garland Publishing, 1998; Lowi, Theodore J., Benjamin Ginsberg, and Kenneth A. Shepsle. American Government: Power and Purpose. 9th ed. New York: W.W. Norton, 2006; Poole, Isaiah J. “Details of Transportation Law.” CQ Weekly Report, September 26, 2005: 2578; Schick, Allen. The Federal Budget: Politics, Policy, Process. Washington, D.C.: Brookings Institution Press, 2000; Stein, Robert M., and Kenneth N. Bickers. Perpetuating the Pork Barrel: Policy Subsystems and American Democracy. New York: Cambridge University Press, 1997.
—Lawrence Becker
private bills
Private bills date to the earliest days of Congress and began as a way for members of Congress to deal with the needs of private citizens or businesses affected in some individual manner by actions taken by the federal government. It might be that an exemption from a government law or regulation was needed; for example, under special circumstances, the usual requirements for citizenship might need to be waived. Or financial relief might be needed for an individual
citizen or company harmed by an action taken by the government, particularly where the need is obvious and the government is at fault. Yet the requirements of the U.S. Constitution remain such that no monies can be drawn from the Treasury of the United States without an appropriations bill first being passed by both the House and the Senate and signed into law by the president. Only this procedure authorizes such action and provides the necessary funds. In time, Congress established special administrative courts to handle some of these claims, since not every extraordinary circumstance could be handled by Congress itself. More than 1,000 private bills were enacted in the early 1950s and several hundred throughout the 1960s. By the 1980s, this number had declined to about 50, and by the 1990s, the number had fallen again to the teens. Yet hundreds are still proposed each year by members of Congress seeking relief on behalf of their constituents. Examples of private bills are abundant. It might be that an exemption from a government law or regulation is needed (for example, waiving the usual steps in the process for citizenship in cases of hardship, as when an elderly applicant stricken by cancer might not live long enough to meet the residency requirements), or a private bill could revolve
Subjects of Private Bills (with the House committee to which each is referred)
Immigration (e.g., naturalization, residency status, visa classification): Judiciary
Claims against the government (domestic and foreign): Judiciary
Patents and copyrights: Judiciary
Vessel documentation: Transportation and Infrastructure
Taxation (e.g., income tax liability, tariff exemptions): Ways and Means
Public lands (e.g., sales, claims, exchanges, mineral leases): Resources
Veterans’ benefits: Veterans’ Affairs
Civil Service status: Government Reform and Oversight
Medical (e.g., FDA approvals, HMO enrollment requirements): Commerce
Armed services decorations: National Security
around special circumstances, such as providing financial relief for an individual citizen or a company affected by a government action that had a particularly harsh financial or emotional impact on a family. An often-described example is that of a CIA agent who participated in classified LSD experiments in the 1960s and then committed suicide as a result of these experiments, leaving behind a wife and several children in need of at least financial support. There are also cases in which the Army Corps of Engineers accidentally caused unintended flooding of a farmer’s field, destroying a year’s crop and livelihood, for which the government provided compensation through a private bill. From the first Congress, it became apparent that private bills would become a part of the routine legislative process. The record of the first hundred years of Congress is replete with petitions from constituents asking the government for a “redress of grievances” as provided for in the First Amendment to the Constitution. Representatives saw that private bills, though not necessarily referred to by that name, could become a way for the federal government to establish national uniformity of law in place of the piecemeal laws set by each state, as had been the case during the years of the Continental Congress. This was particularly true when the issue of patents arose, an especially important issue at the time of the Industrial Revolution. In such cases, government protection was sought for an invention by an individual or a group of individuals in a particular state, with the resulting patent national in scope, reflecting early on the federal character that had been absent under the Continental Congress and was desired under the new Constitution. For a number of years, Congress has operated under a system of calendars that is especially important in the larger House of Representatives. In the House, the Union Calendar (the “Calendar of the Committee of the Whole House on the state of the Union”) contains tax bills, appropriations bills, and major authorization bills, and the House Calendar contains most other types of legislation; bills may also be placed on the “Discharge Calendar” or the “Private Calendar.” Each party assigns objectors to keep watch on the floor of the House over any action that assigns bills to the Private Calendar and any action that might bring these bills to the floor for consideration
under unanimous consent procedures. Only bills that truly meet the criteria of a private bill are allowed to stay on the calendar and be passed through expedited procedures (for example, Private Calendar items have privileged status and thus can be called up on the first or third Tuesday of the month, unless the Speaker has sufficient support to change the agenda and can suspend the rules with a two-thirds vote or through unanimous consent agreement). Private bills have primarily dealt with land and property claims, military actions, pensions, and other actions taken by the government having a direct and limited impact on either civilians or military personnel. The types of private bills have changed over the years, and thus the numbers have declined. Since 1946, with changes in statute (notably the Federal Tort Claims Act), individual citizens have found it easier to sue the federal government in cases of negligence. The great influx of immigration private bills following World War II and lasting until the 1970s was the result of strict immigration quotas, with evaluation procedures at the individual and subcommittee level favoring members’ use of private bills to assist constituents before the Republican takeover of the House in 1995. Private bills, even if expedited through unanimous consent procedures in the Senate, still require approval by both the House and the Senate and consideration, review, and signature by the president to become law. Over time, Congress has also added procedural means to accommodate claims against the government (for example, through the creation of the Court of Claims and administrative procedures that obviate the need for private bills), again contributing to the declining passage of private bills in recent years. Private bills have also historically given critics of Congress occasion to point to special favors granted to a privileged few with connections to powerful members. This aspect has contributed to the negative public perception of Congress. A classic example is the “Abscam” scandal of the late 1970s and early 1980s, in which FBI agents pretended to be Arab sheiks offering money to members of Congress. In exchange, private bills would be introduced to help Arab nationals enter the United States in an expedited fashion
and bypass the usual immigration procedures. Eventually six representatives and one senator were convicted of bribery and related charges in connection with this scandal.
Further Reading
CQ Weekly Report. Available online from the Web sites of the House and Senate. URL: www.senate.gov or www.clerk.house.gov; Oleszek, Walter J. Congressional Procedures and the Policy Process. 6th ed. Washington, D.C.: Congressional Quarterly Press, 2004; Ragsdale, Bruce A., ed. Origins of the House of Representatives: A Documentary Record. Office of the Historian, U.S. House of Representatives. Washington, D.C.: U.S. Government Printing Office, 1990; Saturno, James V. “How Measures Are Brought to the House Floor: A Brief Introduction.” CRS Report for Congress RS20067, January 11, 2005; Smith, Stephen S., Jason M. Roberts, and Ryan J. Vander Wielen. The American Congress. 4th ed. New York: Cambridge University Press, 2006.
—Janet M. Martin
public bills A bill is a legislative proposal that will become law if it is passed by both the House of Representatives and the Senate and approved by the president. Each bill is assigned a bill number. HR denotes bills that originate in the House and S denotes bills that originate in the Senate. Most bills are public bills, which means that the bill will affect the general public if it is enacted into law. Private bills, which are less common, are introduced on behalf of a specific individual or organization and will only affect those individuals if enacted into law (often, private bills will deal with immigration and naturalization issues). From start to finish, the path that a bill takes from being introduced in Congress to having the president sign the bill into law is a complex maze of committees, subcommittees, lobbying, compromises, rules, procedures, and debates. Few bills, once introduced, actually survive what is referred to as the “legislative labyrinth” that is Congress. A public bill (which is a proposed law) is first introduced in either the House or the Senate and then sent to the relevant committee. Only members of Congress can introduce bills in either chamber, but
bills are often drafted by the White House, interest groups, or other outside parties. The member who introduces the bill becomes its sponsor, and it is given a number and a title before making its way to the committee and eventually the appropriate subcommittee. Much legislation begins as similar proposals in both the House and the Senate, and most work on a bill occurs in the subcommittee. If the subcommittee thinks that the bill is important, it will schedule hearings on the bill to invite testimony on the relevant issue from experts, executive branch officials, or lobbyists. After the hearings, if the subcommittee still believes that the bill is worthy of consideration, it votes to send the bill back to the full committee for consideration (and possibly additional hearings). If the full committee then approves the bill, it goes to the full chamber for a vote. Along the way, a bill can be “marked up” or revised in either the committee or subcommittee. Fewer than 10 percent of bills actually reach the floor for a vote, as most are killed or tabled by the committee or subcommittee due to a lack of interest. In the House of Representatives, the rules committee sets the rules for debating a bill. The issues to be decided include when the bill will be introduced, the length of debate, when the vote will occur, and whether or not the bill will receive a “closed” or “open” rule (which determines whether or not amendments are allowed). Often, the majority party (which controls the rules committee) will opt for closed rules on the deliberation of a bill to prevent the minority party from altering the substance of the bill. Few House members actually speak for or against a bill when it is being deliberated on the floor. The opportunity to speak usually goes to the bill’s sponsor and a leading opponent from the originating committee. The various rules set forth during debate help a large legislative body such as the House operate more efficiently by limiting the ways that individual members can alter or stall the legislative process. In the Senate, determining the rules for debate is the responsibility of the majority leader, who usually consults with the minority leader (the Senate’s rules committee has much less power than its House counterpart). Also, the Senate is committed to unlimited debate for its members. Once a senator has taken the floor, he or she may speak for as long as he or she wishes.
Over the years, this power has been used by many members of the Senate to prevent action on a particular piece of legislation. This tactic is known as a filibuster. To end a filibuster, three-fifths of the Senate (60 senators) must agree to limit further debate to 30 additional hours. This procedure is known as invoking cloture. Filibusters are a strategy often used by the minority party to talk a bill to death in an attempt to make the majority party give in and withdraw the bill from consideration. The filibuster was used by senators (particularly southern Democrats) opposed to the passage of civil rights legislation in the 1950s and 1960s, by Republicans opposed to President Lyndon Johnson’s nomination of Associate Justice Abe Fortas to replace retiring Chief Justice Earl Warren in 1968, and more recently by Democrats opposed to many of President George W. Bush’s judicial nominees. The Senate also differs from the House in that any member can offer any amendment, even an unrelated one, to any bill; in the House, an amendment must be related to the bill under consideration. Amendments that are unrelated to the original bill are called riders. Each amendment must be voted on before the bill can come to a final vote. Senators can also place a hold on a piece of legislation. This is a request from a senator that he or she be informed before a particular bill is brought to the floor and is a strong signal to the Senate leadership that a colleague may object to the bill. The hold can remain in place until the issue with the objecting senator is worked out. Ultimately, there are more opportunities for members to derail legislation in both houses than there are to pass legislation. For example, during the 20-year span from 1985 to 2005, a total of 88,642 bills were introduced in Congress, with 5,196 enacted into law (a rate of nearly 6 percent). Once legislation is debated and any amendments proposed, the bill is voted on by the full House or Senate. This is called roll-call voting and provides a public record of how members of Congress vote on bills and amendments. A bill is passed if a simple majority (50 percent plus one) of the members approve. If the bill is passed in one house but has not been considered in the other, then the bill is sent to the other for consideration. If both houses pass similar bills, then each version of the bill goes to a conference committee. The job of this committee, which is a temporary committee usually made up of members
of the original standing committees that considered the bill, is to work out any differences between the two houses in the bill. Once a compromise is reached, the new version of the bill is then sent to both houses for a vote. Only bills approved by both houses go to the president, who can then sign the bill into law or veto the bill. Congress can override a presidential veto by a vote of two-thirds in each house. The president can also hold a bill without signing it for 10 days (Sundays excluded); if Congress is in session, the bill automatically becomes law without the president’s signature, but if Congress has adjourned, then the bill dies. This is known as a pocket veto, which forces Congress in its next session to start over from the beginning of the legislative process. As a result, the bill must again pass both chambers and is again subject to presidential veto.
Further Reading
Davidson, Roger H., and Walter J. Oleszek. Congress & Its Members. 10th ed. Washington, D.C.: CQ Press, 2006; Dwyre, Diana, and Victoria Farrar-Myers. Legislative Labyrinth: Congress and Campaign Finance Reform. Washington, D.C.: Congressional Quarterly Press, 2001; Hamilton, Lee H. How Congress Works and Why You Should Care. Bloomington: Indiana University Press, 2004; Sinclair, Barbara. Unorthodox Lawmaking: New Legislative Processes in the U.S. Congress. Washington, D.C.: Congressional Quarterly Press, 2000.
—Lori Cox Han
representatives
Congress is a bicameral institution (which means that it has two separate houses), with 435 members in the House of Representatives and 100 members in the Senate. How seats were to be allocated in both the House and the Senate was a major component of the Great Compromise during the Constitutional Convention. States would receive equal representation in the Senate, known as the upper house, which pleased states with smaller populations. The membership of the House of Representatives, known as the lower or “people’s” house, was to be determined based on state populations. This pleased the larger states, which would have more members and a more prominent say in the outcome of legislation.
As stated in the U.S. Constitution, members of the House of Representatives must be at least 25 years of age, are required to have been citizens of the United States for at least seven years, and must be legal residents of the state from which they are elected (but they are not required to live within their district). Members of the House of Representatives are elected every two years by voters within their congressional districts. Frequent elections and, in most cases, the representation of a smaller number of citizens in districts (as opposed to states) were intended to make this chamber of Congress more responsive to the needs of citizens and to more localized interests. Regarding eligibility to serve in the House, the Fourteenth Amendment provides that any federal or state officer who has taken an oath to support the Constitution and who later engages in rebellion or aids the enemies of the United States is disqualified from serving in the House. This amendment, ratified in 1868—just three years after the end of the Civil War—was intended to stop anyone sympathetic to the Confederacy from serving. The Constitution also permits the House to expel any member by a two-thirds vote. Only five members have ever been expelled; the most recent was James Traficant, a Democrat from Ohio, who was expelled in 2002 following his conviction on corruption charges. The House can also censure its members by a simple majority vote and has used this power to admonish members for such things as violating the House Code of Ethics, including breaking rules governing lobbying. Members of the House use the prefix The Honorable before their names and are referred to as either Congressman or Congresswoman or simply Representative, although the latter is less common. During the first session of Congress in 1789, the House of Representatives consisted of 65 members. The current number of 435 members in the House of Representatives has stood since 1929, when Congress fixed the size by statute. Congress temporarily increased the size to 437 seats in the 1950s when Alaska and Hawaii became states but then changed back to 435 seats in 1963. The Constitution provides that the House may include no more than one member for every 30,000 people (with each state guaranteed at least one representative); as the U.S. population grew, however, the cap of 435 became necessary to keep the chamber at a manageable size. House seats are reapportioned every 10
years following the census based on state populations. The census, as mandated every ten years by the U.S. Constitution (and conducted the first time in 1790 under the guidance of Secretary of State Thomas Jefferson), requires the federal government to determine the population of the nation as a whole as well as that of individual states. States with more than one representative are divided into single-member districts. These elections are known as “winner-take-all” or “first-past-the-post” elections, which means that the candidate with the most votes (a plurality, not necessarily a majority) wins the seat. As a result, minor or third parties have very little chance of ever gaining representation in the House of Representatives since the two major parties dominate the electoral system; the single-member district system protects the two-party system since it does not allow for proportional representation (as is common in many European parliaments). Even if a candidate wins with only the slightest margin, say 50.1 percent of the vote, that candidate will represent the entire district. Reapportionment is the reallocation of House seats among states after each census as a result of changes in state population. The general rule when reapportionment occurs is to provide a system that represents “one person, one vote” as much as possible. Therefore, with a set number of 435 seats, changes must be made when the population of one state increases or that of another decreases. As a result, some states will gain congressional seats while other states may lose them. For example, after the most recent census in 2000, a total of eight states gained seats (Arizona, California, Colorado, Florida, Georgia, Nevada, North Carolina, and Texas), while a total of 10 states lost seats (Connecticut, Illinois, Indiana, Michigan, Mississippi, New York, Ohio, Oklahoma, Pennsylvania, and Wisconsin). Currently, California has the largest congressional delegation with 53 House seats, followed by Texas with 32 seats, New York with 29 seats, Florida with 25 seats, and Pennsylvania and Illinois with 19 seats each. Seven states have only one House seat, which is guaranteed by the Constitution, due to their small populations. The lone representative in each of these states is known as a member at large. Those states include Alaska, Delaware, Montana, North Dakota, South Dakota, Vermont, and Wyoming. There are
also nonvoting delegates representing the District of Columbia, the U.S. Virgin Islands, American Samoa, and Guam, and a nonvoting resident commissioner representing Puerto Rico. Following reapportionment, new congressional district boundaries must be drawn for states that either gain or lose House seats. State governments are responsible for this process, known as redistricting. The goal is to make all congressional districts as equal as possible based on population (the theory of “one person, one vote”). In most states, members of the state legislature draw new district lines and then approve the plan, which goes into effect for the first congressional election following the census (which was most recently 2002). However, the party in power in the state legislature is often motivated to redraw district lines that benefit its own party members. This is known as gerrymandering, which involves the deliberate redrawing of an election district’s boundary to give an advantage to a particular candidate, party, or group. After the 2000 census, Texas actually redrew its district boundaries twice after the Democrats lost control of the state House of Representatives in the 2002 statewide elections (this was the first time since Reconstruction that Republicans had controlled both houses of the Texas legislature). The efforts to redraw the lines twice triggered many legal challenges from both sides, including a case that made its way to the U.S. Supreme Court. The Republicans won the legal battle and were able to maintain their preferred district boundaries in all but one area in the southern part of the state. The dual role of Congress, particularly for members of the House, consists of its role as a lawmaker and its role as a representative body. However, members are often torn between representing their constituents and making laws for the good of the nation. This tension reflects competing theories of what it means to be a member of a representative body like Congress: whether the role should be that of a trustee, an elected representative who considers the needs of constituents and then uses his or her best judgment to make a policy decision (as argued by the British philosopher Edmund Burke during the 18th century), or that of a delegate, a representative who always votes based on the desires of his or her constituents and disregards any personal opinion of the policy.
Members of the House of Representatives must balance these two aspects of the job while trying to determine how to vote on proposed legislation. Most want to stay in office, so reelection demands (such as meeting the needs of the constituents back home in their district with spending projects) can sometimes outweigh what is best for the nation as a whole (such as passing legislation to balance the federal budget and forgoing those spending projects back home). As a result, individual members have gained more prominence and power over the years as the power of Congress as an institution in the policy-making process has waned. According to Lee Hamilton, a former member of the House of Representatives, the job entails a variety of responsibilities: that of national legislator, local representative, constituent advocate, committee member, investigator, educator, student, local dignitary, fund raiser, staff manager, party leader, and consensus builder. However, the two most important jobs of a member of Congress are passing and overseeing legislation and representing constituents. House incumbents have a tremendous advantage in winning their reelection efforts. That means that once someone is elected to the House, challengers have an almost impossible time beating a current member in a general election contest. This incumbency advantage comes from a variety of factors, including the relative ease of raising money for reelection once in office, the perks associated with holding the job, the lack of term limits for members of Congress, the professionalization of Congress in recent years, and redistricting that often favors incumbents by creating “safe seats” in the House of Representatives. All of this greatly affects not only who decides to run for office but also who serves in Congress and, in turn, how the policy agenda is shaped and laws are made. In 2004, 98 percent of House incumbents were reelected. Since the goal of political parties is to elect as many of their members to public office as possible, it makes sense that, even in an era of weakened parties, both Democrats and Republicans work hard to find strong and competitive candidates to maintain control of particular seats or add new ones. Those who are viewed as qualified usually have some prior experience in public office and are attractive to voters (skilled at campaigning, with no personal or professional scandals in their
past). Usually, the most likely candidates for a position in Congress come from the ranks of state legislators, county officers, mayors, city council members, or even governors (particularly from small states). Members of the House are often successful in their attempts to move to the opposite end of Capitol Hill by winning a Senate seat. For many candidates, this path to political power begins with a law degree and experience as an attorney. Competitive candidates must also be skilled fund raisers, as the cost of running for Congress has risen dramatically in recent years. In 2003–04, the average cost of a House race totaled $532,000. Today, most members of Congress are professional politicians; in 2004, the average length of service for a member of the House was nine years. Members of Congress earn a fairly high salary (about $162,000 a year for members of both the House and the Senate) with generous health-care and retirement benefits. Each member also occupies an office suite on Capitol Hill and has an office budget of about $500,000 a year for staff and an additional allocation for an office within his or her district or state. Due to the prestige of their positions, many members serve for many years, if not decades. This professionalization of Congress contributes greatly to the incumbency factor. The power of incumbency also comes from the fundraising advantage, which is increasingly important with the rising cost of campaign advertising, polling, and staffing. The ability to raise more money than an opponent often deters challengers in either the primary or the general election from even entering the race against an incumbent. Other incumbency advantages come from congressional casework, access to congressional staff, and travel to and from a member’s district. Incumbents are in a unique position when running for reelection in that they usually have a long list of accomplishments to show voters in their districts. So when voters ask “What have you done for me lately?” members of Congress can respond by pointing to the help they have provided individual constituents on many different issues. Most commonly, casework (individual requests by district or state residents for some sort of help with a federal agency, department, or program) involves requests to help find a government job, help with federal programs such as Social Security or veterans’ benefits, or help with tax or
immigration problems. Much of this casework, in addition to the job of legislating, is accomplished with the help of congressional staffs. House members usually employ about 20 staffers. This can be a tremendously important resource to a member of Congress, not only in handling the regular responsibilities of the job but also in providing political expertise during a reelection campaign. Typically, congressional staffers respond to constituency requests, draft legislation, write speeches, and serve as liaisons with other government agencies and lobbyists. This insider advantage is something that challengers do not have. Each House member is also allowed about 30 visits to his or her home district each year, all at the public’s expense. This helps members balance the work of Congress with the demands of running for reelection.
Further Reading
Baker, Ross K. House and Senate. 3rd ed. New York: W.W. Norton, 2000; Davidson, Roger H., and Walter J. Oleszek. Congress & Its Members. 10th ed. Washington, D.C.: Congressional Quarterly Press, 2006; Hamilton, Lee H. How Congress Works and Why You Should Care. Bloomington: Indiana University Press, 2004.
—Lori Cox Han
resolution The U.S. Congress utilizes three types of resolutions: simple, joint, and concurrent. Joint resolutions have the force of law and do not differ significantly from bills. However, simple and concurrent resolutions do not have the force of law and differ in many ways from a traditional bill. Both simple and concurrent resolutions are most commonly used for internal tasks within Congress such as appointing standing committee members in the House of Representatives or establishing the date and time at which Congress will adjourn. These common “housekeeping” tasks are not generally very newsworthy, but sometimes one or both houses will use resolutions to make policy recommendations to the president or to express Congress’s views on specific international affairs. Such resolutions are often discussed in the national media. Joint resolutions must be passed in the same form by both the House and Senate, and then must go to
the president for his signature or veto. They have the force of law and are legally no different from bills. Joint resolutions that begin in the House are designated H.J.Res., and those that begin in the Senate are designated S.J.Res. Joint resolutions are traditionally used in the place of bills to deal with very limited issues such as a single appropriation for a specific purpose. They are also used to provide government expenditures when an appropriations bill has not yet been passed. While expenditure issues are the most common use for joint resolutions, Congress also uses them to introduce constitutional amendments. While this type of joint resolution does not require presidential approval, it must be passed by a two-thirds affirmative vote of the members of both houses. To become effective, the resolution must then be ratified by three-fourths of the states. During nearly every recent session of Congress, members have introduced joint resolutions proposing constitutional amendments related to such issues as a federal balanced budget, flag burning, school prayer, marriage, and citizen voting procedures. In 2000, during the 106th Congress, a member even introduced a joint resolution to amend the Constitution to allow foreign-born citizens to be president. When Arnold Schwarzenegger was elected governor of California in 2003, the constitutional provision barring foreign-born citizens from the presidency came to the forefront of national politics, and given Schwarzenegger’s popularity at the time, it should come as no surprise that such joint resolutions have been introduced. However, as with the other issues mentioned, they did not advance beyond the introduction stage. Certainly, the members who introduce such resolutions do not generally expect them to be passed. In fact, it is not uncommon for a member of Congress to introduce a joint resolution to amend the U.S. Constitution for purely strategic reasons. If school prayer is a salient issue in a particular member’s district or state, that member may choose to introduce a resolution proposing a constitutional amendment on this issue. This allows the representative to return to his or her district having done what he or she could to ensure that the issue is kept on the minds of his or her congressional colleagues. Simple resolutions (or just “resolutions”) differ from bills, joint resolutions, and concurrent resolu-
tions in that they are only passed by a single chamber. Once passed, they do not have the force of law, and they only affect the house in which they are passed. Both houses of Congress utilize such resolutions. House resolution numbers are preceded by H.Res., and Senate resolution numbers are preceded by S.Res. While we typically think of resolutions within the context of the U.S. Congress, we should also note that some state and local governments can utilize resolutions. In fact, some city and county governments have passed resolutions as a way of communicating their desires to the federal government. (For example, in 2006, several cities and towns across the nation passed resolutions suggesting that President George W. Bush and Vice President Richard Cheney should be impeached.) In Congress, most resolutions deal with housekeeping issues within each chamber. However, they can also be used to send a signal to the president or other world leaders. The House and the Senate can each express its position on a specific policy or international event through a simple resolution. They also use resolutions to acknowledge important events officially, such as the death of a statesman or an important federal holiday. Additionally, since impeachment proceedings fall under the sole jurisdiction of the House of Representatives, the official articles of impeachment take the form of a resolution. Most resolutions are not well publicized because they deal only with the internal workings of each chamber. For example, House resolutions at the beginning of each session of Congress establish committee memberships and party leadership positions. In January 2007, the House passed H.Res. 10, which established the times that the chamber would convene each day. Other resolutions are symbolic in nature. Both the House and the Senate passed resolutions honoring President Gerald Ford and sending official condolences to his family after his death in December 2006. In October 2005, Iran’s President Mahmoud Ahmadinejad said publicly that “Israel must be wiped off the map,” described Israel as “a disgraceful blot [on] the face of the Islamic world,” and declared that “[a]nybody who recognizes Israel will burn in the fire of the Islamic nation’s fury.” The House responded to this and other Iranian threats against Israel by reaffirming the United States’ alliance with Israel and calling on
the United Nations Security Council and other nations around the world to reject such statements in H.Res. 523. The Senate passed its own resolution, S.Res. 292, calling on President George W. Bush to repudiate thoroughly, in the strongest terms possible, the statement by Mr. Ahmadinejad. Resolutions of this kind are mostly symbolic statements meant to send a signal regarding each chamber’s views on an issue. Symbolic statements can also take the form of concurrent resolutions, in which both chambers join together to make a joint statement regarding a policy or current event. Like simple resolutions, concurrent resolutions do not require the signature of the president and do not have the force of law. However, concurrent resolutions deal with issues related to both houses and thus must be approved by a majority of the members of both houses. They are most commonly used for housekeeping issues that affect both houses, such as setting the time and date for the adjournment of Congress. Concurrent resolutions introduced in the House are designated as H.Con.Res., and concurrent resolutions introduced in the Senate are designated as S.Con.Res. Concurrent resolutions are used to allow for a recess or adjournment of Congress for more than three days and to establish a Joint Session of Congress. Joint Sessions are typically called to hear the president’s annual State of the Union Address. Issues dealing with the Capitol itself are also addressed using concurrent resolutions. For example, H.Con.Res. 84, introduced during the 109th Congress, sought the creation of a monument “to commemorate the contributions of minority women to women’s suffrage and to the participation of women in public life.” Concurrent resolutions are also used to express the joint sentiments of both houses of Congress. H.Con.Res. 200 of the 104th Congress was issued to honor the members of the U.S. Air Force who were killed in the 1996 terrorist bombing of the Khobar Towers in Saudi Arabia. H.Con.Res. 413 of the 109th Congress expressed the appreciation of both houses for the life and services of Lloyd Bentsen, the former Democratic senator from Texas who also served as the first Secretary of the Treasury in Bill Clinton’s admin-
istration, and extended Congress’s sympathies to his family on his passing. Probably the most important and well-known concurrent resolution is the concurrent budget resolution. The budget resolution is based on 20 categories of federal government spending and guides the official budget-making process. Each year, both the House and the Senate budget committees draft their own budget resolutions and present them to their chambers. Once the resolutions pass each chamber, a conference committee is established to reconcile the differences in the two versions of the resolution. Once the conference committee resolves the differences, the conferees present their report to each chamber. A vote is then taken on the changes made by the conference report, and if both chambers agree to the changes, the concurrent budget resolution is adopted and finalized. Because concurrent resolutions do not have the force of law, the budget resolution does not actually appropriate any money. However, it serves as a blueprint for the actual appropriations process and allows each chamber to be aware of the wishes of the other chamber as outlined in the conference committee’s report. We can intuitively understand the importance of resolutions dealing with budgets or rules or constitutional amendments. However, we know less regarding the influence of symbolic resolutions that condemn the actions of another government’s leader or make official policy recommendations to the president. Opportunities for research in this area abound. What influence do more symbolic simple or concurrent resolutions have on the actions of the executive branch or other international governments? If they do not influence the groups toward which they are addressed, what purpose do they serve? Could they simply be a means by which the House and the Senate can communicate their views to the U.S. public? If so, are they effective? Do members reference resolutions in their reelection campaigns? These and other research questions related to resolutions remain to be answered. Further Reading Davidson, Roger H., and Walter J. Oleszek. Congress & Its Members. 10th ed. Washington D.C.: Congressional Quarterly Press, 2006; Library of Congress: Thomas. Located online at http://thomas.loc.gov;
Mayhew, David. Congress: The Electoral Connection. New Haven, Conn.: Yale University Press, 1974; Stewart, Charles, III. Analyzing Congress. New York: W.W. Norton, 2001. —Melissa Neal
rider
A rider is an amendment to a piece of legislation that is considered nongermane, meaning that it has a purpose substantially different from that of the bill to which it is to be attached. Because riders are often used as tools for moving legislation that could not pass on its own merits, for circumventing legislative committees, or for giving minority parties in legislatures opportunities to debate issues objectionable to the majority, the term rider has taken on a pejorative connotation, and many consider the abolition of riders to be essential for reforming the U.S. Congress. Yet the many advantages they offer to legislators have prevented any substantive reform in the rules of parliamentary procedure employed by Congress that would seriously limit the use of riders. It is worth pointing out up front, however, that classifying an amendment as a rider is very subjective. Amendments may be nongermane riders even when the presiding officer of a legislative body has ruled that they are germane. Alternatively, perfectly legitimate amendments may be ruled nongermane and therefore out of order simply because the majority party objects to them. It is therefore virtually impossible to make an accurate count of the number of riders attached to legislation in Congress during any given year or over any period of time. Strictly speaking, a rider is an amendment proposed to a bill under consideration by an entire legislative body, rather than in committee, that is considered to be different in primary substance from the bill or nongermane to it (nongermaneness is supposed to be formally determined in Congress). By tradition, though not always by strict rule, individual legislative bills deal with only a single subject, so the addition of new material on an unrelated subject is seen, at best, as poor legislative practice and, at worst, as deliberately deceptive. Nonetheless, it has become relatively common practice in the U.S. Congress to attach rider amendments to bills when they are being debated on the floor of the House or the
Senate, though, due to differences in the power of committees and governors, the practice tends to be somewhat less common in state legislatures. There are a number of different reasons why individual members of Congress would wish to propose such amendments and why other legislators, especially the presiding officer of a legislative body who could rule the amendment nongermane and therefore out of order, would choose to approve the rider. The U.S. House of Representatives is perhaps the more complex case because the House has rules specifically requiring all amendments offered on the floor to be germane to the bill at hand, thus forbidding the use of riders (for example, House Rule XXI prohibits riders to appropriations bills). Any House member, whether in the majority or the minority party, may raise a formal point of order to question the germaneness of an amendment and thereby force the presiding officer to make a decision and perhaps rule the amendment nongermane. Nonetheless, riders are not uncommon in the House. Though the powerful House rules committee may strictly limit the number and type of amendments offered to a bill on the floor, it may also sanction the adding of riders to a bill, thereby virtually assuring their inclusion in the final bill. This permits members of the House to offer rider amendments successfully on the floor, usually quietly through the use of unanimous consent requests. Unlike the House of Representatives, the U.S. Senate has no general rule on the germaneness of amendments, making it relatively easy to attach riders to legislation. At times, this creates friction with the House. Germaneness requirements and the restrictions imposed by the House rules committee often result in very focused pieces of legislation, but the lack of these restraints in the upper chamber permits senators to add large numbers of riders to bills that may sometimes be repugnant to the House. Perhaps this has been most clearly seen on appropriations bills. Where House leaders have used the rules committee to keep major spending bills free of extraneous riders, they have often found themselves confronted later with the same bill in a conference committee weighted down with large numbers of riders requiring a great deal of extra spending because Senate leaders cannot enforce fiscal discipline. Indeed, rank-and-file senators consider it their
prerogative to add riders to bills benefiting their home states or otherwise advance their policy priorities, and they often fail to understand why the House objects. The lack of an equivalent to the House rules committee and a tradition of freedom to offer riders also provide the minority party an opportunity to raise and debate issues important to them but opposed by the majority. They may introduce amendments on any topics regardless of germaneness and force debate and often votes that they may choose to use against the majority party in the next election. Thus the opportunity to offer riders, if not have them accepted, provides minority party senators their second most important tool, just behind the filibuster, for advancing an alternative agenda. There are four reasons why legislators might choose to attach riders to bills. The first involves the often contentious relationship between the House and the Senate. If members of one chamber, such as the House, desire a bill to become law but doubt that the Senate would be willing to pass it, they might attach it to a larger unrelated bill and thus put the Senate in the position of being forced to debate the issue if they wish to strip the rider out. During the second George W. Bush administration, House leaders on several occasions attached a highly contentious provision to open the Arctic National Wildlife Refuge for oil exploration even though Senate leaders, lacking the votes to pass it, did not wish to bring up the issue. Second, if legislative leaders of either chamber believe that the president is likely to veto a bill, they may attach it to another as a rider, usually one that the president deeply desires or a “must-pass” bill such as an appropriations bill that they believe the president would not dare veto. Because presidents are denied the line-item veto and therefore can only accept or reject bills in their entirety, riders have proven to be an effective tactic for Congress to force presidents to accept legislation they have resisted. In the 1990s, congressional Republicans at times used this tactic to pressure President Bill Clinton into approving legislation that was part of the Republican political agenda but opposed by the president. In many cases, however, Clinton vetoed the legislation anyway, and Congress was unable to override most of these vetoes.
Third, burying riders in very large bills is an effective means of passing highly unpopular legislation without attracting the notice of the public, the news media, or even many other legislators. This secretive method of lawmaking has been used to pass pay raises for members of Congress and to facilitate pork-barrel politics. Riders may be added to bills creating new projects for members of Congress in home districts or allocating funding for those projects. The use of unanimous consent requests relieves members of Congress from recording formal roll-call votes on these riders or even discussing them before they are added to the parent bill. Finally, a rider may be added to a bill to prevent it from passing. If opponents of a particular piece of legislation believe that they cannot find the votes to stop it from passing either the House or the Senate and cannot count on a presidential veto, they may find that they do have the votes to attach a highly controversial rider that will prevent their own chamber or the other chamber from passing the bill or will move the president to veto it. Such "poison pill" amendments are not used successfully all that frequently because it is hard to secure enough votes to add a rider to kill a bill that, presumably, has enough votes to pass. But when they are attached, they tend to be highly effective tools for defeating bills. Because they are nongermane to the bills to which they are attached and because they often carry measures that by themselves could not garner enough political support to pass, riders have become associated with political corruption, and reformers have called for them to be banned. Yet there are several arguments made in support of the right to add riders. For the minority party in the Senate, riders are nearly the only way in which its members can debate issues that may be important to the electorate but distasteful to the majority party; therefore, outlawing riders would merely muzzle the minority. Other proponents argue that riders offer a way to short-circuit the regular legislative process quickly. Moving bills through committees can be very time consuming. At times, Congress may need to respond quickly to national emergencies with legislation; riders permit Congress to act swiftly by attaching a badly needed law to another bill, one that is already moving into the final stages of the lawmaking process. Riders are also seen, especially in the Senate, as a way to overcome committee gate-keeping rights.
Requirements in Congress that bills be referred to, considered by, and favorably reported by a committee before final legislative action give committees, especially their chairs, the power to hold back bills that a majority may want. Riders offer a way to move legislation on which committees have refused to act and thus circumvent a minority of lawmakers who happen to constitute a majority on the committee. Nonetheless, riders have come under considerable criticism from advocates of good government management and transparency in lawmaking because they are most frequently used as a way to slip provisions into law that are not desired by the majority. They are, opponents argue, nothing more than a secret means for moving highly unpopular legislation such as congressional pay raises and pork-barrel spending that violates fiscal discipline and drives up the budget deficit. They point to large scale abuses of riders in Congress where dozens of riders adding election-year projects for congressional districts or benefits for special interest groups would be attached to a moving bill, turning them into what have been called Christmas Tree Bills. Though some of the worst abuses of riders under congressional Democrats were barred when Republicans took control of the Congress in 1995, the use of a rider as a popular means to move a bill has once again become a topic of congressional reform. With the superlobbyist Jack Abramoff scandal again directing public attention at the relationship between lobbyists and legislators, reform advocates have pushed reform legislation that virtually eliminated the use of riders. The lack of support for this reform from legislative leaders, however, makes it unlikely that riders will be banned from regular use in Congress in the near future. Further Reading Baker, Ross K. House and Senate. 3rd ed. New York: W.W. Norton, 2000; Congressional Research Service. Senate Amendment Process: General Conditions and Principles. Library of Congress: Washington, D.C., 2001; Congressional Research Service. Appropriations Bills: What Are “General Provisions”? Library of Congress: Washington, D.C., 2003; Greenberg, Ellen. The House and Senate Explained. New York: W.W. Norton, 1996; Hamm, Keith E., and Gary F. Moncrief. “Legislative Politics in the States.” In Politics in the American States. 8th ed, edited by Virginia
Gray and Russell L. Hanson. Washington, D.C.: Congressional Quarterly Press, 2004; Oleszek, Walter J. Congressional Procedures and the Policy Process. 6th ed. Washington, D.C.: Congressional Quarterly Press, 2004. —Thomas T. Holyoke
rules committee Nicknamed the “traffic cop” of the U.S. House of Representatives, the rules committee is a standing committee responsible for regulating the flow of most legislation forwarded to the chamber floor. One of the most powerful panels on Capitol Hill, membership on the rules committee is one of the most highly prized assignments in Congress. The primary task of the committee is to craft a “rule” or “special order” governing floor debate for every major bill (other than the federal budget) sent before the full House. The rule also sets the time limit for the length of floor debate, the parameters for floor amendments to the bill, and which, if any, parliamentary rules and procedures will be set aside. The committee’s rule governing floor debate must be approved by a vote of the full House before the chamber can consider the proposal itself. If the House declines to adopt the committee’s rule, then the floor debate must be governed by the chamber’s standing orders for conducting business, a cumbersome process that makes it difficult to win passage of any legislation. The history of the rules committee dates from the first session of Congress. In 1789, the chamber established a temporary 11-member panel to write standing House rules governing how members would conduct their business. Throughout the next 90 years, the House at the beginning of each new Congress formed a temporary select committee to review and write proposed changes to chamber rules. In 1859, the Speaker of the House was added to the committee and automatically became the panel’s chair. In 1880, the committee’s status was changed from being a select panel to a standing committee whose composition would automatically continue from one congressional session to the next. Three years later, the rules committee was given the basic mission it holds today—of preparing rules to govern floor debate.
The rules committee played a prominent role in one of the most tumultuous times in the history of the House—the era of strong Speakers. During the decades before and after the turn of the 20th century, the rules committee served as an important platform from which powerful speakers such as Thomas Reed (R-ME) and Joseph Cannon (R-IL) dominated the chamber. While Reed’s nickname was “czar” because of his powerful and arbitrary rule, his speakership was not as tyrannical as Cannon’s leadership. Cannon’s leadership practices led to the famous bipartisan “revolt” in 1910 which led to massive changes to both the office of the Speaker and the rules committee he dominated. The reforms were immediate and sweeping. The Speaker was tossed off the committee, and its membership was expanded from five to 10, with six from the majority party and four from the minority party. The full House was authorized to vote members onto the committee while seniority was established as the guiding principle for assigning members to all other standing committees. The size and the composition of the committee varied in subsequent congresses, generally ranging from 10 to 17 more senior members, with the majority–minority party ratio varying depending upon the balance of power in the chamber. By the late 1930s, dominance of the committee had gravitated to the “conservative coalition” of Republicans and conservative (and mostly from southern states) Democrats. This block dominated the committee through 1967. The apex of power for this group spanned 1955 through 1967 when it operated under the chairmanship of Howard W. “Judge” Smith, a Democrat from Virginia. The conservative coalition was particularly aggressive in blocking civil rights and welfare legislation. Smith operated the panel as his personal fiefdom and would allow the committee only to report out legislation he favored. The primary targets of his wrath at this time was the leadership of the Democratic Party, both in Congress and in the new administrations under presidents John F. Kennedy and Lyndon B. Johnson. This political setting led the rules committee to be the scene of a sea change reminiscent of the revolt against Speaker Cannon generations earlier. In 1961, the Speaker of the House, Sam Rayburn, a Democrat from Texas, worked with President Kennedy to
increase the size of the rules committee from 12 to 15. The three new members would all be liberal Democrats and strong supporters of civil rights and welfare programs. The new members succeeded in breaking Smith’s iron grip on the committee and enabled the passage of the landmark civil rights and social-welfare legislation which went to the panel later that decade. The rules committee later underwent changes but none as significant as those of the Cannon and Rayburn eras. In the mid-1970s, liberal Democrats successfully won passage of a House reform mandating that all committee chairs be selected by an election of the full chamber rather than being determined automatically by seniority. The Speaker also was empowered to nominate members to the rules committee. The Republicans in 1989 changed their rules to allow their House party leader to select all of their rules committee members. In sum, the rules committee today remains a tool for the leadership of the major political parties to influence House floor debate. Committee members must have established reputations as loyal members of their political party. The contemporary rules committee receives requests for rules from each of the standing committees in the House. The chairs of those committees request a rules-committee hearing and lists preferences for rules to govern the floor debate on their bill. Commonly testifying before the rules committee are the chairs of the committee and subcommittee reporting the bill, the sponsor of the bill, opponents of the bill, and any other member wishing to influence the rules committee’s proposal. Afterward, rulescommittee members, after receiving strong guidance from the chamber’s majority party leadership, debate and write their proposed rule to govern the subsequent House floor debate. The committee commonly reports out one of four primary types of rules governing amending bills. These types of rules include: open, modified open, modified closed, and closed. A closed rule forbids all amendments and calls for a “straight up or down” vote where members will either accept or reject the entire proposal. On the other extreme is an open rule that allows any member to propose an amendment, as long as it complies with House rules, to any part of the bill.
Rules can be "modified" in any number of ways to help meet the goals of the majority-party leadership or of the members and leaders of the rules committee and the standing committee that originally reported out the bill. Modified rules can, for example, limit the parts of the bill that may be amended, set time limits for floor debate, or specify who may propose amendments. Most significant legislation, particularly highly visible bills important to the chamber's majority party or perhaps also to the president, has tended to be allowed more time for floor debate than more routine legislation. In sum, throughout its lengthy existence, the history of the rules committee has been the history of the House of Representatives itself. The committee remains a focal point of power, influence, and activity. More than two centuries of politicking indicate that this record is highly unlikely to change. See also legislative process. Further Reading Brown, William. House Practice: A Guide to the Rules, Precedents, and Procedures of the House. Washington, D.C.: U.S. Government Printing Office, 1996; Deering, Christopher, and Steven Smith. Committees in Congress. 3rd ed. Washington, D.C.: Congressional Quarterly Press, 1997; Matsunaga, Spark M., and Ping Chen. Rulemakers of the House. Urbana: University of Illinois Press, 1976; Remini, Robert. The House: The History of the House of Representatives. New York: HarperCollins, 2006; Robinson, James. The House Rules Committee. Indianapolis, Ind.: Bobbs-Merrill Co., 1963. —Robert E. Dewhirst
Senate Often called the “upper house” of the Congress, the framers of the U.S. Constitution designed the Senate to be the more deliberative legislative chamber compared to the more democratic House of Representatives. The Senate exemplifies the federal principle of the equal status of states, in contrast to the proportional representation found in the House, where the larger states enjoy greater power and influence. Under the Articles of Confederation, each state had one vote in Congress, no matter how big or small it was. When Edmund Randolph presented the Virginia
Plan at the Constitutional Convention in 1787 calling for proportional representation in both houses of a bicameral Congress, small states understandably resisted. They countered with the New Jersey Plan, which called for a unicameral legislature in which each state had one vote. The Great Compromise resolved the dispute by creating a bicameral legislature based on different principles of representation. Representation in the House would be based on population, while representation in the Senate would be based on the equality of states as states. Under this arrangement, each state would have two senators. Some framers, including James Madison, strongly objected to this resolution as violating basic republican principles, but the compromise was necessary for the success of the convention, and Madison accepted it as a "lesser evil." Key structural features of the Senate indicate its particular function in the federal legislature. First, the qualifications to serve in the Senate are more stringent than those to serve in the House. In addition to living in the state one represents, senators must be at least 30 years old and nine years a citizen, as opposed to 25 years old and seven years a citizen in the House. The age and citizenship differences are not random or capricious features. Madison speaks in Federalist 62 of "the nature of the senatorial trust," which apparently requires "greater extent of information and stability of character." One way to make the appearance of such individuals more likely is to ensure that the "senator should have reached a period of life most likely to supply these advantages." The framers believed that the Senate required more of an individual than the House, and they wanted older people to serve in the Senate under the idea that age brings with it wisdom, knowledge, experience, and stability. As for the citizenship difference, the framers wanted both chambers of Congress open to qualified immigrants, but they also wanted to make sure that such individuals were not agents of foreign governments. The slightly longer citizenship requirement for service in the Senate indicates that the framers wanted to strengthen that qualification for the upper house. The Senate serves as the more aristocratic element in the federal legislature—a place where merit and wisdom would be more likely to surface. Because the United States has no notion of inherited ranks and status, the qualifications of the Senate
attempt to capture these features through age and naturalization. Second, the selection process to choose senators was, prior to the Seventeenth Amendment, very indirect from a democratic perspective. Madison speaks of the need for a “select appointment” to this chamber, implying that direct popular election would not provide this. Selection by state legislatures made it even more likely that the most highly qualified people would be placed into office. The basic republican principle of popular control remains, for the people selected their state legislators, but those legislators, in turn, would exercise their wisdom and discretion to choose members of the U.S. Senate. This process reinforces the federal character of this institution, the fact that it represents states as a whole, and Madison commends this arrangement for giving state governments a direct “agency in the formation of the federal government.” In fact, during the 19th Century, it was not unusual for senators to be directed by their states on how to vote on certain matters. If a senator believed that he could not in good conscience follow his state’s direction, he might resign the office. Later, the Populist and Progressive movements argued for an end to this selection mechanism in favor of popular election. The Seventeenth Amendment, ratified in 1913, eliminates selection by state legislature. Senators are now chosen by a direct vote of the people, just as they select members of the House. This development broke the direct connection between state governments and the federal government, and it makes the types of campaigns waged by candidates for the two institutions much more similar. Third, senators serve six-year terms, with the chamber divided into three cohorts, only one of which is up for election in any specific election cycle. This is in stark contrast to the brief two-year terms in the House, where the entire body stands for election every election cycle. While the framers wanted Congress to be responsive to the popular will, they also feared the negative side of democracy—the possibility that the majority might seek unjust designs. They wanted a legislative chamber that would be more deliberative and stable. Six-year terms help provide those qualities. Madison outlines in Federalist 62 and 63 the benefits of lengthy terms. For example, long terms allow senators to acquire greater knowledge and experience in the areas about which they have to
make decisions, especially in the areas where the Senate has greater responsibilities, such as foreign policy. Long terms also help senators resist the changing whims and passions of the people. Many framers believed that one of the critical weaknesses of republics was their tendency to make too many laws and change them too often in response to rapid swings in popular opinion. They thought it was important, if the nation were to enjoy the confidence of foreign nations and commercial interests and the reverence for the law of its own people, for there to be some stability in the legislature. Long terms give senators an incentive to resist rapid swings in popular opinion because they have six years to persuade the people of the wisdom of their unpopular stand. Staggered terms reinforce this desire for stability, for in any given election, two-thirds of the Senate is shielded from electoral volatility. While voters can theoretically replace the entire House at one time, they would have to sustain such anti-incumbent fever for at least two election cycles to have a similar effect in the Senate. The framers believed this stability would also cause the Senate to develop a sensitivity to the views and concerns of other nations that the House might lack—an international perspective important in the federal legislature. Fourth, the Senate is much smaller than the House—less than one-fourth its size. The framers believed that the House was small enough to resist the “sudden and violent passions” that they believed too often marked large popular assemblies. If they miscalculated, however, the Senate’s much smaller size makes the protection against mob rule and demagoguery doubly secure. The question of size, however, connects directly to another structural feature of the Senate, and that is constituency. Where members of the House typically represent local districts within states and are thus more attached to those local interests, senators represent entire states. This federal aspect of the legislature cannot be easily changed by amendment, as Article V of the Constitution makes clear, and it results in some dramatic violations of the stereotypical one-man one-vote principle. For example, based on 2000 census data, a senator from Wyoming represents a little less than a half-million residents. By contrast, a senator from California represents nearly 34 million residents. A voter from Wyoming, then, is 68 times as influential
as a voter from California. By the standards of simplistic democracy where equality is the paramount principle, such an arrangement appears grossly disproportionate. It is important to remember, however, that the Senate is designed to represent entire states as constituent elements in a federal republic. This feature places all senators, no matter their state of origin, on an equal playing field and contributes to the greater sense of equality and collegiality that seem to mark this chamber. It is also why most presidential candidates who come from Congress come from the Senate, not the House. Several other features reinforce the Senate’s status as the more deliberative and prestigious chamber. Although the House has the sole constitutional authority to originate revenue bills and impeach— that is, accuse government officials of high crimes and misdemeanors—the Senate enjoys a larger number of exclusive powers. For example, the Senate has the sole power to try impeachments. The framers
feared that the House would be too attached to public opinion, which might militate against justice in an impeachment trial. The Senate's smaller size and longer terms, as well as senators' higher qualifications, made that body a safer location for deciding such important matters. Similarly, the Senate has the exclusive power to ratify treaties with foreign nations, under the theory that the House, with its shorter terms and more parochial orientation, lacked the knowledge and international focus essential for such a task. The Senate also has the unique responsibility to confirm all executive-branch and judicial-branch nominations, making it a partner with the president in staffing the other two branches of government. Perhaps the most famous distinctive feature of the Senate is its use of the filibuster. A filibuster is a parliamentary tactic, most often employed by the minority party, in which senators prevent action on legislation by continuously holding the floor and speaking until the majority
gives in. In the House, the majority party can usually pass measures very quickly. The deliberative traditions of the Senate, however, allow unlimited debate on a subject. Senate Rule 22 provides the procedural mechanism for ending a filibuster: a cloture vote, which limits further debate, currently requires the support of three-fifths of the full Senate (60 votes) to succeed. Thus, any 41 senators can essentially halt most measures. This procedural feature reinforces the basic nature of the Senate by making it more difficult to enact the will of a simple majority, since ending debate on a contested measure requires a supermajority. All of these features answer the question of what role the Senate plays in the federal government. If the House is the legislative chamber most representative of the people and most accountable to the people, the Senate is designed to be, in Madison's words, "an anchor against popular fluctuations." The framers believed in popular sovereignty, but they also believed republican government had inherent weaknesses that needed to be tamed or controlled in some fashion. Both democracy and stability are necessary in a political system, and the framers feared that a legislative body that was too democratic might become abusive in its response to popular passions. The Senate was designed to blend stability with liberty by bringing more wisdom, knowledge, and experience into the legislative process. In making laws responsive to the popular will, the Senate is designed to foster a deliberative process that modifies and refines that popular will in a way conducive to the common good. Further Reading Davidson, Roger H., and Walter J. Oleszek. Congress and Its Members. 10th ed. Washington, D.C.: Congressional Quarterly Press, 2006; Hamilton, Alexander, James Madison, and John Jay. The Federalist Papers, Nos. 62–65, 75. Edited by Clinton Rossiter. New York: New American Library, 1961. —David A. Crockett
senators The U.S. Constitution provides that each of the 50 states of the Union shall select two senators and that they shall serve six-year terms. Thus the
Senate is composed of 100 senators working within a staggered six-year electoral cycle (that is to say, one-third of Senate seats are up for election every two years). In the event of a vacancy, a state governor may make a temporary appointment until the next statewide election is held to fill that vacancy. To be able to serve in the United States Senate, the Constitution mandates that a person be at least 30 years of age, have been a citizen of the United States for not less than nine years, and live in the state that he or she is to represent. Millions of U.S. citizens readily meet these formal qualifications, but history informs us that a much narrower band of demographic characteristics defines who has actually served as U.S. senators. In recent times, however, the Senate's roster has become more diverse, with entering freshman classes expanding the range of personal attributes that senators possess. At the end of the 109th Congress (2005–06), the average age of senators was 60. The youngest was 41 years old and the oldest was 88. The vast majority of senators have received a university education, and the average senator is white, male, and typically far wealthier than most citizens. The dominant profession of senators is the law, followed by public service/politics and business. Protestantism is the most common religious affiliation among senators (with a variety of denominations represented, including Episcopalians, Methodists, Baptists, and Presbyterians), while Roman Catholicism is the largest single denomination in the body. The average length of service in the Senate is approximately two terms (12 years). In terms of gender and ethnicity, in 2006 the Senate included 14 women, one African-American, two Hispanics, and two members of Asian/Pacific Islander heritage. Questions have consistently been raised about whether such a demographically unrepresentative sample of the U.S. people can adequately and properly represent their interests. Virtually half of the membership of the Senate is now composed of former members of the House of Representatives, and the motivations to transfer from the "lower chamber" to the "upper house" are strong. Senators have longer terms of office, enjoy more perquisites in office, are much less numerous than House members,
garner more prestige, are more publicly prominent, and have more responsibility in foreign affairs compared to their House counterparts. This is why the Senate in the past has commonly been referred to as the “world’s most exclusive club.” To date, 15 senators have gone on to serve in the nation’s highest elected office, the presidency—only two were able to move directly from the Senate to the White House, John F. Kennedy and Warren G. Harding. Also to date, 15 senators have been appointed and served as justices on the U.S. Supreme Court, the highest U.S. court. The current annual pay for senators is $162,500. Beginning in the 1980s, leaders of the Senate have collected greater salaries than other senators—these leaders consist of the majority leader, the minority leader, and the president pro tempore—and their salaries are at $180,000 per annum. Scholars have observed a notable change in the 20th century from the older, traditional Senate of the past to the newer, contemporary Senate of the present and how those alterations have impacted senators. Distilled, the Senate has changed from an exclusive and insular men’s club to a publicity machine designed to aid in the permanent campaign of individual reelection efforts by senators. This transformation started to occur in the 1960s and became solidified by the 1980s and onward. The old institutional orientation was for senators to conduct themselves collegially where members knew each other well and where they perceived legislative governance as a team effort and acted in such a manner. The new contemporary orientation is where senators have become much more individualistic in their behaviors and mindsets with the resultant breakdown in camaraderie and cooperation among the members. In other words, the prior deliberative and consensual clubbiness of the institution has been transformed into an adversarial and fragmenting particularism with increasingly bitter partisanship among senators. How have congressional scholars discerned these two orientations to have been manifested in the behavior of senators? The Senate of earlier times was distinguished by a comparatively inequitable distribution of influence among senators and accompanying norms of constraint, or “folkways” as these norms have been labeled. As noted by political scientists, this Senate was a committee-centered, expertise-
dependent, inward-looking, and relatively closed institution. The typical senator during that era was an issues specialist who focused only on the policies and problems with which his committee dealt directly. The legislative activities of senators were restricted mostly to the committee room with little conduct by senators on the floor of the Senate and rather sparse to negligible engagement of the mass media. The average senator was aware of his junior status and thus was ordinarily deferential to more senior members. Loyalty to the institution of the Senate was at a premium for senators and clearly observed in their conduct and actions, and, as well, senators evinced serious restraint in their exercise of the great powers that Senate rules grant to each and every senator. The Senate of current times shows a different orientation and set of behaviors by senators. Influence is much more equitably apportioned among the senators, regardless of seniority, and members are granted rather expansive freedom of action or choice—they are much less constrained in the use of the powers accorded them by the rules of the Senate and much less deferential than that was seen in the earlier orientation. As scholars have concluded, the Senate has developed into a publicly more open, less insular, more staff-dependent, and outward-looking entity in which significant decision making occurs in a diversity of arenas beyond committee rooms. The typical senator does not specialize as he or she once did; instead he or she participates in an expanded scope of issue areas beyond the jurisdictions of the committees on which he or she sits directly. This new orientation on the part of senators also results in their being more active on the floor of the Senate and a much greater participation in various massmedia outlets. Quickly summarized then, the old Senate of the 1950s was clubby and collectively oriented with a clear hierarchy and centralization of power in the chamber and great restraint shown among the senators in their conduct—the new Senate of the 1970s and onward to contemporary times is much more fragmented internally with aggressive individualism on the part of senators being the operative orientation. A discussion of some of the leading and most important individual senators in the history of this body now follows. The three giants of the Senate in the 19th century were the “great triumvirate” of Daniel
Webster, John C. Calhoun, and Henry Clay. Daniel Webster (1782–1852), Federalist and Whig elected from the state of Massachusetts, served in the Senate from 1827 to 1841 and again from 1845 to 1850. He was best known for the extraordinary eloquence in his oratory during a time in U.S. politics when speechmaking was a high art and necessary skill. His abilities as a debater are legendary, and he is best known for his effective rhetoric in arguing for the preservation of the Union in the period leading up to the Civil War in the 1860s. Another critical senator at this time was John C. Calhoun (1782–1850), Republican and Democratic–Republican from South Carolina, who served in the Senate from 1832 to 1843 and again from 1845 to 1850. He was the leading voice for the Southern states in the years leading up to the Civil War, and as a result, Webster and Calhoun engaged in a series of dramatic debates over slavery, federal power, and states’ rights. Calhoun became the most ardent articulator and advocate for the doctrine of nullification, which contended that individual states had the right to declare null and void federal laws they found objectionable. Henry Clay (1777–1852), Whig from Kentucky, was the third pivotal senator of this period, and he served in the Senate from 1806 to 1807, 1810 to 1811, 1831 to 1842, and 1849 to 1852. Quite eloquent and charismatic himself, Clay earned the moniker of “the Great Compromiser” in the Senate for his concerted actions in trying to forestall sectional disagreements concerning the slavery issue. He is especially noted for his work in crafting the Compromise of 1850 which permitted California to join the Union as a free state and consolidated the runaway slave law. This was an attempt to moderate the increasing tension between the free states and slaveholding states and thus preserve the Union. Turning to the leading senators of the 20th century, Everett M. Dirksen (1896–1969), Republican from Illinois, served in the Senate from 1951 to 1969. He is considered to have been one of the most colorful senators in the history of this chamber. His excessively flowery style of speaking led him to be called “the Wizard of Ooze,” and his serving as minority leader of the Senate from 1959 to 1969 demonstrated his mastery of negotiation, compromise, and persuasion. His combined skills of oratory and negotiation are evidenced in his famous declaration that “the oil can is mightier than the sword.” Dirksen’s political
dexterity was well tested by his endeavoring to advance Republican interests in a Democratic-controlled Congress for many years. Analysts of U.S. government consider Lyndon B. Johnson (1908–73), Democrat from Texas, to be one of the most capable and effective leaders of the Congress from both inside (as Senate majority leader and minority leader) and outside (as president) the institution. Johnson’s legislative skills allowed him to work successfully with a variety of senators covering the ideological continuum from conservative to moderate to liberal, as well as with Republican president Dwight D. Eisenhower. His time of service in the Senate was from 1949 until 1961, when he left the chamber to become President John F. Kennedy’s vice president. He is widely regarded as a diligent, industrious, wily, and ambitious member. His endeavors in the post of Democratic leader in the 1950s led to the increased authority and greater standing of that position in the chamber. His capacity to form alliances through a variety of tactics, and his unequaled ability of persuasion to get fellow senators on board with his own wishes is now legendary, leading the historian Robert Caro to call Johnson “master of the Senate.” William Fulbright (1905–95), Democrat from Arkansas, is regarded by scholars to have been the dean of foreign affairs in the Senate. During his long length of service in the upper chamber from 1945 until 1975, Fulbright operated as a consistent critic of U.S. foreign policy post–World War II from his perch as chair of the Foreign Relations Committee. His primary orientation toward international relations and foreign affairs was one of direct and open communication, rapprochement, and a building of mutual understandings as opposed to continued distrust, animosities, and hostility. He became best known for his opposition to the Vietnam War and other criticisms of both President Johnson’s and President Richard Nixon’s respective foreign policies. Thus, Fulbright was liberal on international issues, but his southern constituency dictated his conservative positions on domestic policies, such as his opposition to civil rights legislation. Mike Mansfield (1903–2001), Democrat from Montana, succeeded Lyndon Johnson as majority leader, and his leadership style was notably in direct contrast to Johnson’s. Mansfield served in the Senate from 1953 to 1977, and his manner as leader was
more gentlemanly, civil, and tolerant of difference of opinion and disagreement. His reputation as the “gentle persuader” evidenced a collegial orientation that set in stark relief the aggressive and overt manipulative tactics of Johnson. Mansfield was liberal, laconic, and was one of the first Democrats of national prominence to turn against the Vietnam War as waged by President Johnson. Robert Dole (1923– ), Republican from Kansas, served in the Senate from 1969 to 1996 and was a leadership force to be reckoned with in the 1980s and 1990s. An adroit legislative infighter, not unlike Lyndon Johnson in terms of arm-twisting and cajoling abilities, Dole was capable of bringing together divergent senators into the same coalition behind particularly contentious legislation. Well known for his acerbic wit and acid tongue, Dole’s leadership style differed sharply from that of his immediate Republican predecessor, Howard H. Baker of Tennessee, who was known for his friendlier and more approachable touch. With more than 40 years of service in the Senate from 1963 to the present, Edward M. Kennedy (1932– ), Democrat from Massachusetts and brother of the slain President John F. Kennedy, constitutes the leading liberal lion of the entire Congress. His time in the Senate has been marked by his unswerving support of core Democratic Party values (such as advancement of labor interests, increase of healthcare insurance coverage, environmental protections, and funding for education) and his ability at times to cross the partisan aisle and build majority coalitions to help enact initiatives he advocates. His Senate leadership and presidential aspirations were permanently injured when one night in 1969 he drove his car off a bridge in Chappaquiddick, Massachusetts, and a companion of his in the car was killed. He remains one of the most important and leading voices of the liberal wing of the Democratic Party at the national level. When Robert Dole decided to enter the presidential race in the 1996 election, Trent Lott (1941– ), Republican from Mississippi, succeeded to the majority-leader position after serving in the Senate for eight years (he first entered the Senate in 1989 and held office until his resignation in 2007). His leadership style indicated a pragmatic/practical approach to those responsibilities, and he nimbly and accord-
ingly adjusted to the consensus-building needs of the upper chamber. His ability to work across the partisan divide with Senate Democrats and to find common ground on which to advance fruitful legislative initiatives is an underappreciated aspect of his legacy as majority leader, especially considering the strongly conservative nature of his home state. He was forced to step down from this position amid the uproar over an ostensibly racist verbal gaffe he made during the 100th birthday celebration for retiring Senator Strom Thurmond, Republican from South Carolina. Bill Frist (1952– ), Republican from Tennessee, was catapulted into the majority-leader post by Senator Lott's abrupt resignation from that position, described directly above. Frist served in the Senate from 1995 until 2007, and analysts assert that the efforts and wishes of President George W. Bush led to Frist's selection as majority leader. His background as a successful medical doctor helped lend credibility to Republican actions dealing with health-care and drug issues. Aspirations for a potential presidential run somewhat complicated Frist's calculations and decisions as Senate leader prior to his retirement from the chamber after serving two full terms. Further Reading Baker, Ross K. House and Senate. 3rd ed. New York: W.W. Norton, 2001; Campbell, Colton C., and Nicol C. Rae, eds. The Contentious Senate: Partisanship, Ideology, and the Myth of Cool Judgment. Lanham, Md.: Rowman and Littlefield, 2001; Gould, Lewis L. The Most Exclusive Club: A History of the Modern United States Senate. New York: Basic Books, 2005; Sinclair, Barbara. The Transformation of the U.S. Senate. Baltimore: The Johns Hopkins University Press, 1989. —Stephen R. Routh
term, congressional (including special sessions) All members of the U.S. Congress serve for a fixed term set by the U.S. Constitution. Members of the House of Representatives serve two-year terms, while members of the Senate serve six-year terms. The different term lengths point to the different roles the framers of the Constitution hoped the two legislative chambers would play in the federal republic.
For example, two-year terms in the House demonstrate the framers’ belief that some part of the federal government needed “an immediate dependence on, and an intimate sympathy with, the people.” The House was constructed to be the most democratic institution in the federal government, and that responsiveness to the popular will would come through frequent elections. Short terms make House members very attuned to the popular will in their districts, for the next election is never far off. At the same time, however, the framers also feared the negative side of democracy. They recognized that the popular majority might seek unjust goals targeting the minority. To create a more deliberative and stable legislative chamber, they gave senators six-year terms—three times the length of those who serve in the House. Lengthier terms allow senators to acquire greater knowledge and experience in such areas as foreign policy, where they have greater responsibilities. They also give senators an incentive to resist rapid swings in popular opinion because they have a long time to make the case for the wisdom of their positions. The Senate is also divided into three cohorts, one of which faces reelection every two years. Thus, while the entire House can be replaced in any single election, twothirds of the Senate is shielded from such electoral volatility. This breakdown in term length also establishes the length of each congressional term. The Constitution requires Congress to assemble as a body at least once a year. Since each member of the House serves for two years before facing reelection, and one-third of the Senate is up for reelection every two years, there must be at least two different assemblies of a single group of elected officials before they change personnel. Each two-year term is numbered as a “congress.” Thus, the congress that met for the first time in 1789 was the First Congress. The famous congress that battled President Harry S. Truman as he ran for election in 1948 was the 80th Congress. The congress that met from January 2005 until the end of 2006 was the 109th Congress. Each numbered congress is also typically divided into two sessions. The first session begins the first time a congress convenes with its new members and usually lasts until it breaks for an intersession adjournment toward the end of the year. Congress then reconvenes the following year for its second session.
The coordination of constitutional terms of office also has an effect on the political dynamics of congressional terms. This is most clearly seen in the era prior to the ratification of the Twentieth Amendment in 1933. As originally written, the Constitution called for Congress to convene on the first Monday in December. Presidents, however, took office on March 4. This led to a number of problems. Following election years (every even-numbered year), there was a 13-month gap between the election of members of Congress and their entry into office. The transition period between the victorious presidential candidate and the inauguration was a fairly lengthy four months, but the new president then served for eight months before Congress convened. Then, once Congress convened, there were only 11 months left before those members faced another election. The second congressional session usually convened after that election—after its successors had been elected but before they had taken office. This session came to be known as the “lame duck” session since it included many defeated members whose power was presumably diminished as a result of their defeat. The disjuncture in the beginning of congressional and presidential terms had an impact on the beginning of a president’s term of office and on the end of a congressional term. The potential problem at the beginning of a president’s term was the fact that Congress normally did not come into regular session for eight months. The theory behind these staggered term starts is that the president, as the person responsible for the execution of the laws, is always “on the job.” Legislators, however, come and go throughout a term as they make the laws. Of course, some presidential duties require congressional participation. For example, the president needs to staff the executive branch with his appointees, and that process requires Senate confirmation of his nominations. Thus, it was not unusual during much of U.S. history for the Senate to convene in special session prior to the first Monday in December to handle the appointments process. More important, presidents have the constitutional power, “on extraordinary occasions,” to convene Congress in a special session. This would typically occur during that long period between the president’s inauguration and Congress’s December conven-
ing date, especially if there were an emergency that required congressional action. The two greatest emergencies in U.S. history led to special sessions. When President Abraham Lincoln took office on March 4, 1861, the nation was already in a full-blown secession crisis. Slightly more than a month later, rebel forces fired on Fort Sumter, South Carolina, which began the Civil War. Lincoln took extraordinary measures to preserve the Union, eventually calling Congress into special session on July 4. Similarly, when President Franklin D. Roosevelt took office on March 4, 1933, the nation was nearly four years into the greatest economic disaster of its history. Roosevelt immediately called Congress into special session to deal with the economic crisis, beginning the famous “One hundred days” that inaugurated the New Deal. Lesser crises have also led to special sessions. Triumphant Whigs, controlling both elective branches for the first time in 1841, intended to transform public policy along Whig lines through a special session, but their efforts were complicated and ultimately destroyed when the Whig president, William Henry Harrison, died after only a month in office, leaving the job to his vice president, John Tyler, a man less devoted to Whig principles than the party leadership in Congress. Special sessions were once one of the president’s more powerful constitutional tools to set the national agenda. By calling Congress into an extraordinary session, the president was able to set specific goals for the legislature and establish the terms of debate. On the other hand, because the president has the power to call Congress into special session, he also has the ability to set the timing for that session. In Lincoln’s case, the delay in convening Congress in special session enabled the president to make vigorous use of the powers of his office to make important decisions and set the initial war policy during the Civil War— most of which were later ratified by Congress after the fact. The potential problem at the end of a congressional term concerns the biennial lame-duck session of Congress, which could cause mischief for an incoming president who had to wait until March 4 to take office, especially if that president came from a different political party. It was a lame-duck session of a Federalist-controlled Congress that packed the judicial branch with appointments, including the new
chief justice, John Marshall, prior to Thomas Jefferson's inauguration. Fractious interparty transitions between Presidents James Buchanan and Abraham Lincoln and Presidents Herbert Hoover and Franklin D. Roosevelt did nothing to redeem the institution. The lame-duck session of Congress also created potential constitutional problems, because it was that session, not the newly elected Congress, that would be responsible for choosing a president and a vice president in a contingency election if the electoral college failed to select a president. The experiences of the contingency elections of 1801 and 1825 demonstrate the problems with such a system. Congress might improperly influence a new president, or a president might improperly influence defeated members of Congress hoping for a federal appointment. As transportation and communications technology advanced, reformers saw the disjuncture in the beginning of congressional and presidential terms and the large gap between election and inauguration as unnecessary and even dangerous. Their solution was to pass and ratify the Twentieth Amendment to the Constitution. The Twentieth Amendment cleared Congress in March 1932 and became part of the Constitution in January 1933, only weeks before Roosevelt's inauguration, making it one of the fastest amendments to be ratified by the states. The amendment changed the start of the congressional term to January 3 and the start of the presidential term to January 20. This change eliminated the lengthy second lame-duck session of Congress following an election. Congress may still have a lame-duck session following an election to wrap up unfinished business, but that session cannot extend beyond January 3, when the new Congress convenes. Perhaps more consequentially, the amendment reduced the time between a president's election and the inauguration and set the start of congressional terms two to three weeks prior to the start of presidential terms. This change greatly reduces the likelihood of special sessions since Congress is already in session awaiting a new president when he or she is inaugurated. In fact, the last time Congress was called into special session was 1948, when President Harry S. Truman sought to make the inactivity of the Republican-controlled Congress in a presidential election year a major theme of his campaign. He successfully tagged the 80th Congress with the "do-nothing" label, helping to ensure his own election. With
Congress coming into office before the president, however, and staying in session seemingly perpetually, there is little opportunity for presidents to employ this power anymore. Finally, if a contingency election proves necessary in the event of an Electoral College deadlock, it will be the newly elected Congress that handles the task, rather than the outgoing members. Further Reading Kyvig, David E. Explicit and Authentic Acts: Amending the U.S. Constitution, 1776–1995. Lawrence: University Press of Kansas, 1996; Milkis, Sidney M., and Michael Nelson. The American Presidency: Origins and Development, 1776–1998. 3rd ed. Washington, D.C.: Congressional Quarterly Press, 1999. —David A. Crockett
term limits (legislative) Whether the terms of service of those elected to Congress or state legislatures should be limited has been a perpetual controversy in the U.S. political system. The primary justification is that such a procedure ensures rotation in office and therefore accountability to the people. Conversely, opponents of the tactic contend that it limits experience and is undemocratic. Both due to the writings of European political philosophers and to the practice of their own governments, those nations who colonized America in the 17th century favored limits on the terms of legislators. For instance, the Dutch implemented the practice in New York. Evidence of Britain’s attitude about term limits for assemblies in colonial America is derived from the New England Confederation of 1643 and the 1682 frame of government for Pennsylvania. During America’s revolution against Britain, several states wrote constitutions that limited the terms of legislators. While the Pennsylvania constitution restricted the terms of all elected officials, the states of New York, Delaware, and Virginia mandated rotation of those elected to the upper chamber of the legislature. After gaining independence from Britain, the United States formulated its first national government, the Articles of Confederation. Containing only
a legislative branch, members of the Congress were restricted to serving just three years in any sixyear time frame. Meanwhile, as an unmistakable legacy of the colonial period, most states severely hampered the election, term length, and powers of their governors while granting extensive authority to legislatures. The combination of a weak national government and impotent state governors clearly indicated the supremacy of state legislatures for most of the 1780s. Convened to consider revisions to the Articles of Confederation, the Constitutional Convention of 1787 would eventually establish a federal system featuring three branches of the national government. The debate over the direction of Congress was included in the Virginia Plan among others. The early draft of the latter plan left it to the convention to determine when members of the lower chamber of the legislature could return for another term after an initial term expired. While the discussion concerning the length of term and the number of chambers in the national legislature was clearly apparent at the Constitutional Convention, the question of term limits was secondary. The diminution of concern about the number of terms national legislators would serve was probably due to the principle of separation of powers established in the U.S. Constitution, to the short term granted to the House of Representatives, and to the expectation that members of Congress would closely adhere to instructions emanating from the states that elected them. When the final draft of the Constitution was approved by convention delegates, no term limits were placed on either congressional or presidential service. The ratification of the Constitution featured extensive and detailed arguments concerning the new government’s structure and procedures. Though ratification proved successful, the views of the Constitution’s opponents on term limits for elected representatives perpetuated a voluntary tradition of rotation in office. This trend—which would last until the end of the 19th century—was buttressed by the example set by George Washington, by laws and policies affecting appointed officials in the executive branch, by the philosophy and practices of presidents such as Thomas Jefferson and Andrew Jackson, and even by the challenges of traveling to and living in the nation’s capital.
With the advent of the 20th century, legislators at the national level began to regard continuous service in Congress as more desirable, opening the possibility of spending an entire career there. This change was hastened by several factors. First, the U.S. party system stabilized after the Civil War, creating the conditions under which seniority in terms of years served generally meant more influence in Congress. Second, the modernization of American life made transportation and communication faster and easier. More members of Congress were choosing to live in Washington, D.C., not just work there. Third, the high percentage of turnover in the executive-branch bureaucracy ushered in by the spoils system
had the opposite effect on Congress. Fourth, the growing complexity of American society led to an increase of professionalism in government—including an augmentation in the size of both the executive and legislative branches—and with it longer service in the latter branch. By the turn of the century, only about one-fourth of the members of the U.S. House of Representatives were first-term members. Finally, the ratification of the Seventeenth Amendment in 1913 gave the people the authority to elect U.S. senators directly; the impact was to reduce turnover in that chamber. Because state legislatures were much less affected by the aforementioned changes, there was still a high
level of rotation at that tier of U.S. government. Several states had outright restrictions on the number of terms that state legislators could serve. Certainly, a state's political culture and traditions helped to determine whether legislative term limits were approved and maintained. The post–World War II period of U.S. history ushered in a new era of congressional tenure. With the already established trend of careerism in Congress came a growing advantage for incumbents. The advent of television and professional campaigns combined with overwhelming superiority by incumbents in funding resulted in a very high reelection rate for members of both the House and the Senate. Between 1946 and 1990, the percentage of those House members reelected rose from 75 percent to an average of 90 percent while the percentage of Senate incumbents who were reelected increased from 56 percent to 96 percent on average. Another reason for the longer length of service by members of Congress had to do with how the people viewed the institution as a whole. Though individual members occasionally fell into trouble, the institution of Congress fared well compared to the president from 1965 to 1980. The legacy of the Vietnam War and the Nixon administration's transgressions in the Watergate scandal precipitated a decline of support for the executive branch. Turnout in both presidential and congressional elections suffered a consistent downturn between 1960 and 1980. Though elections during the 1980s generally brought more citizens to the polls, only one-third of registered voters cast ballots for Congress in 1990, the lowest recorded percentage in 60 years. This phenomenon worked to the advantage of congressional incumbents. The contemporary movement for term limits in Congress began in the 1980s and reached its zenith in 1995. Among those factors that contributed to the swing of the pendulum toward term limits was the perceived arrogance of congressional incumbents, which was exhibited by a series of scandals. These included allegations of influence buying, of sexual misbehavior, and of financial misdealing. The revelations that were exposed in the early 1990s included members' overdrafts at the House-controlled bank and members' delinquent accounts at the House post office and restaurant. As a
result, 15 percent fewer House members and senators were reelected in 1992 than in 1990. A second explanation for the renewed interest in term limits for members of Congress during the 1980–95 span is attributed to the manner by which Congress raised its own pay. Rather than taking overt votes on the several pay raises that occurred during that span, Congress adopted back-door procedures that automatically increased members' salaries. The opposition to this tactic gained such momentum that a long-dormant constitutional amendment, one that had originally been proposed along with the Bill of Rights, was resuscitated and ratified in 1992. The Twenty-seventh Amendment requires that an election be held between the time that Congress votes itself a pay raise and actually receives the higher pay. A third justification for congressional term limits focused on growing partisanship and bickering between political parties combined with divided party government between the president and Congress, which began in 1981, continued unabated until 1992, and was reinstituted following the 1994 midterm election. The blowback against careerism in Congress during the first part of the 1990s led state legislatures to interpret the Constitution so as to claim the authority to limit congressional terms on their own. By 1995, half of the states in the Union had approved legislation to restrict congressional terms. Congressional Republicans, aware of the public's negative view of Congress and the Democratic majority in particular, promised to impose self-regulated limits on House and Senate terms in their 1994 campaign manifesto, the "Contract with America." As a consequence, the Republicans recaptured control of both chambers of Congress for the first time in 40 years, though many legislators reneged on their promise to quit after 12 years. In 1995, the U.S. Supreme Court had the opportunity to decide a case dealing with the question of whether states could limit the terms of members of Congress. In a 5-4 ruling in U.S. Term Limits v. Thornton (1995), a majority of the Court found that the Constitution prohibited states from imposing congressional qualifications additional to those enumerated in the text. Since the Court determined that the only way that specific term limits could be adopted for Congress was by way of a constitutional
amendment, the immediate impact of the Court's holding was a congressional attempt to approve an amendment. That attempt failed in the House of Representatives in 1997, though there have been several term-limit amendments proposed in the ensuing decade. The movement to limit the terms of members of Congress has affected elected officials at the state and local level. Since 1990, 16 states have limited the terms of state legislators. These include: Arizona, Arkansas, California, Colorado, Florida, Louisiana, Maine, Michigan, Missouri, Montana, Nebraska, Nevada, Ohio, Oklahoma, South Dakota, and Wyoming. In addition, these 16 states, along with 20 others, have term limits for governors. Nearly 3,000 cities nationwide have enacted term limits for various officials; the existence of municipal term limits dates back to 1851. Major cities that rely on term limits include New York, Los Angeles, Houston, Dallas, San Francisco, Washington, D.C., Kansas City, New Orleans, Denver, and Cincinnati. While the diversity in the size, authority, and full-time versus part-time nature of state legislatures makes their comparison with Congress suspect, there has been reciprocal influence between the national and state governments in determining each other's legislative tenure. The ratification of the Twenty-second Amendment, limiting the president's time in office to two elected four-year terms or a maximum of 10 years (a vice president who serves two years or less of a predecessor's term may still be elected twice), together with certain other contemporary characteristics of the U.S. political landscape has contributed additional arguments to the controversy. Though the intensity of the debate has fluctuated over time, the disagreement over legislative term limits has proved permanent. See also incumbency. Further Reading Benjamin, Gerald, and Michael J. Malbin, eds. Limiting Legislative Terms. Washington, D.C.: Congressional Quarterly Press, 1992; Dodd, Lawrence C., and Bruce I. Oppenheimer, eds. Congress Reconsidered. Washington, D.C.: Congressional Quarterly Press, 1993; Kurland, Philip B., and Ralph Lerner, eds. The Founders' Constitution. Vol. 2. Chicago: University of Chicago Press, 1987; Ornstein, Norman J., Thomas E. Mann, and Michael J. Malbin. Vital Statistics on Congress, 2001–2002. Washington, D.C.:
AEI Press, 2002; Quirk, Paul J., and Sarah A. Binder, eds. The Legislative Branch. New York: Oxford University Press, 2005; U.S. Term Limits Web page. Available online. URL: http://www.ustl.org/index.html. —Samuel B. Hoff
veto, legislative The legislative veto is a procedure by which one or both chambers of the United States Congress review executive branch actions and prohibit or alter those regulations or procedures of which the legislators disapprove. Legislative veto requirements vary widely because they are established on a bill-by-bill and program-by-program basis. Veto provisions have been written to be triggered by a simple majority or a two-thirds majority vote of either the House of Representatives or the Senate (by passing a simple resolution) or of both chambers (by passing a concurrent resolution). Veto provisions often mandated that Congress exercise its veto before a deadline of, for example, 60 or 90 days. Another form of a legislative veto allowed Congress to review and eliminate regulations previously in effect. Early forms of the legislative veto were focused on Congress reviewing and altering presidential plans and programs to reorganize agencies and departments of the executive branch. By 1983, nearly 200 legislative veto provisions had been established. In addition, legislative veto authority has occasionally been given to the primary standing committees reporting out the bill in either the House or the Senate. President Woodrow Wilson in 1920 vetoed a bill because it contained committee legislative veto provisions. However, intense pressures during the Second World War to produce weapons and munitions as rapidly as possible led to the widespread inclusion of legislative veto provisions in military expenditure legislation. On the other hand, starting with President Harry S. Truman, every president has forcefully objected to standing-committee legislative veto provisions, promising not to honor them should the issue arise. Dating back to the 1930s, members of Congress increasingly began to use legislative veto provisions as important tools to oversee how the federal bureaucracy implements policies and programs.
This approach reversed the constitutional procedure whereby presidents veto policies and programs created by Congress. Presidents such as Herbert Hoover and Franklin D. Roosevelt sought legislative veto provisions as part of their efforts to reorganize the executive branch. Both presidents interpreted such use of legislative veto provisions as mechanisms for increasing presidential power because they would not have to return to Congress for approval of each step of their reorganization plans. The presence of the legislative veto in the Reorganization Act of 1939, for example, allowed the president to continue his efforts until and unless he was interrupted by a legislative veto or a growing threat of a legislative veto. Ironically, by the 1970s the legislative veto had become widely viewed as a significant expansion of legislative-branch rather than executive-branch powers. A focal point of this view was the rapid expansion of legislative vetoes seeking to control federal agency rule making. Responding to mounting pressure from constituents who complained that they were increasingly burdened by oppressive rules made by federal agencies, members of Congress looked to legislative vetoes as a way to rein in aggressive government bureaus and departments. Throughout the 1970s, Congress passed an array of legislative vetoes seeking to enable it to overturn rules adopted by agencies and officials such as the General Services Administration, the Commissioner of Education, the National Highway Traffic Safety Administration, the Federal Energy Regulatory Commission, and the Federal Trade Commission. These and similar efforts allowed members of Congress to reward constituents and powerful interest groups with custom-made remedies to their perceived problems. However, this use of the legislative veto was at its heart a confrontational posture against the executive branch. This contrasted markedly with the more cooperative traditional use of the legislative veto in reorganizing agencies and departments of the federal government. Officials in both Democratic and Republican administrations over the years became increasingly concerned that members of Congress would abandon their agency-by-agency legislative veto efforts (already unpopular with the executive branch) in favor of a measure granting blanket powers covering all federal regulations. This growing rift between the rival branches of government ultimately
led each side to the court system to resolve their dispute about the limits of the legislative veto. Throughout the 1970s, the federal courts considered legislative veto issues narrowly, primarily on a one-case-at-a-time basis with the primary contested legal issue focused elsewhere. However, the legal confrontation started to crystallize in 1982 when the U.S. Court of Appeals for the District of Columbia Circuit ruled against three separate legislative veto plans involving the Federal Energy Regulatory Commission, the Federal Trade Commission, and the Department of Housing and Urban Development. The rulings signaled that the federal courts were prepared to declare unconstitutional all forms of the legislative veto. All eyes immediately focused on the U.S. Supreme Court to resolve the disagreement. The seminal case was Immigration and Naturalization Service (INS) v. Chadha. The ruling, handed down on June 23, 1983, held that one-chamber legislative vetoes were unconstitutional. The case involved a man, born in Kenya, whose parents were from India and who held a British passport. Chadha had lived in the United States on a nonimmigrant student visa. Unfortunately, he had let his visa expire. The INS responded by initiating deportation proceedings against him. The U.S. Attorney General's office intervened, suspended the deportation proceedings, and made Chadha a permanent resident. In accord with government procedure, the Attorney General's office forwarded its suspension order to Congress for approval. However, the House responded by passing a resolution canceling the suspension order and mandating that Chadha and five others in the same category be deported. Chadha appealed the case to the U.S. Supreme Court. In the majority opinion in INS v. Chadha, the justices gave three reasons for finding the legislative veto unconstitutional. First, they said that the legislative veto violated the constitutional principle of the separation of powers because it allowed members of Congress to make detailed decisions about the operation of agencies of the executive branch. Second, the legislative vetoes in question violated the presentment clause of the U.S. Constitution because they did not direct that the laws be presented to the president for his signature. Finally, those justices in the majority said that the
legislative veto violated the constitutional principle of bicameralism because the primary legislative veto in question was passed by the House of Representatives but not the Senate. Subsequent congressional and presidential implementation of the INS v. Chadha ruling has been one of the great ironies of U.S. political history. In practice, leaders on each side of this issue have quietly agreed to ignore the ruling. Each president serving since the ruling has signed legislation featuring legislative veto provisions. Why have leaders of each branch chosen to ignore the ruling? Insiders consider it to be politically expedient for both branches to continue the legislative veto process. In sum, legislative vetoes allow officials in each branch of government to continue negotiating the details of a policy or program as it is being implemented. On one hand, executive branch officials could be said to benefit because members of Congress, knowing that they can later negotiate implementation of a program or policy, feel comfortable giving executive branch officials leeway in implementing those activities. On the other hand, members of Congress continue to use legislative veto activities to influence the daily implementation of policies and programs. There are at least two obvious, and politically unappealing, alternatives to the flexibility the legislative veto provides in the daily operation of the federal government. One would be for Congress to pass detailed "micromanaged" policies and programs that would leave no leeway for the bureaucracy implementing them. Another option would be for Congress to increase its already existing efforts to "manage" federal programs and policies by tinkering with their annual budget and expenditure outlays. Each option likely would greatly increase the level of political conflict in Washington, a development leaders at each end of Pennsylvania Avenue certainly would not want to experience. Hence, the unconstitutional legislative veto likely will remain in use well into the foreseeable future. Further Reading Fisher, Louis. "The Legislative Veto: Invalidated, It Survives," Law and Contemporary Problems (Autumn 1993): 273–292; Fisher, Louis. Constitutional Conflicts Between Congress and the President. 4th ed.
Lawrence: University Press of Kansas, 1997; Immigration and Naturalization Service v. Chadha, 462 U.S. 919, 1983; Oleszek, Walter. Congressional Procedures and the Policy Process. 6th ed. Washington, D.C.: Congressional Quarterly Press, 2004. —Robert E. Dewhirst
Ways and Means Committee The Ways and Means Committee in the U.S. House of Representatives is one of the most powerful and prominent committees on Capitol Hill. Although its jurisdiction has varied slightly over the years, Ways and Means is primarily responsible for legislation to finance the operation of the federal government. This politically charged arena continuously attracts the undivided attention of individuals and interest groups spanning the U.S. economic landscape. The committee spearheads the House of Representatives' constitutional directive, as outlined in Article I, Section 7, mandating that all bills for raising revenue originate in that chamber. Hence, Ways and Means is immersed in all of the minute details of the federal tax code. The committee thus writes and amends legislation governing personal and corporate income taxes, plus excise, estate, and gift taxes, in addition to tariffs and even the sale of federal bonds. Moreover, the committee's jurisdiction covers the national debt, international trade, and the allocation of funds for such major domestic programs as Social Security, Medicare, and numerous unemployment programs. In sum, the Ways and Means Committee is at the hub of some of the most visible, controversial, and important policies and programs in U.S. government. Not surprisingly, membership on the committee long has been highly valued. This committee traditionally has been among the largest in the House, with its most recent roster including 41 members serving on one or more of six permanent subcommittees: trade, oversight, health, social policy, human resources, and select revenue matters. Ways and Means has a long and often colorful history. Although it technically dates from 1789, the committee could be said to date from 1794 when the House created a select (meaning it had to be re-formed each new Congress) committee to study revenue matters. The name Ways and Means was
restored to the committee the following year as members began trying to estimate future revenue and expenditures. Finally, in 1802, the House, under the direction of President Thomas Jefferson, rewrote chamber rules creating a standing (meaning it had continuity from one Congress to the next) Ways and Means Committee. Significantly, the reforms added appropriations to the committee's jurisdiction. During the first half of the 19th century, a history of the Ways and Means Committee's work is almost a history of major domestic U.S. policy making. The committee was at the heart of the massive debate over a national bank, financed the Louisiana Purchase, the War of 1812, and the Civil War, and wrote major tariffs passed in 1816, 1833, 1842, 1846, 1857, and 1861. Adding to this impressive list, the committee wrote the nation's first income-tax law and won passage of the Legal Tender Act authorizing the federal government to issue paper currency. In fact, the committee remained so busy that in 1865 the House reassigned the panel's jurisdiction over appropriations, banking, and currency to two new committees. Significantly, Ways and Means was empowered to report its bills directly to the House floor whenever panel members desired. In the second half of the 19th century, Ways and Means members devoted most of their time to revenue bills and to holding hearings investigating several major financial scandals of the time, such as the Sanborn contracts and the Pacific Mail Steamship Company. In 1885, the chair of the committee was appointed by the Speaker of the House and made one of three majority-party members of the House Rules Committee. During this period, popularly known as the "Gilded Age," the committee was known for its opposition to reform and for policy making that favored the interests of the wealthy and major corporations at the expense of small businesses and the average citizen. Just after the start of the 20th century, Ways and Means Committee members attained unprecedented influence in the House. In 1910, a bipartisan revolt against the autocratic rule of the Speaker of the House, Joe Cannon, a Republican from Illinois, led to important reforms redistributing power in the chamber. One reform was to award Ways and Means the authority to assign members to all standing committees.
This made the majority Democrats on Ways and Means their party's committee on committees, an incredible position from which to exert influence throughout the entire House. This arrangement, which for the Democrats lasted until the reforms of the mid-1970s, gave the committee's members an unprecedented amount of power within the chamber. In addition, during the early decades of the 20th century, policies developed by the Ways and Means Committee often varied widely, depending on which party controlled the House. Under Republican dominance, the committee raised tariffs, such as the McKinley and Payne–Aldrich tariffs, to protect favored industries. On the other hand, the Democrats won passage of the Underwood Tariff Act to lower rates and eliminate protection provisions before later winning passage of the first corporate and personal income tax in accordance with the Sixteenth Amendment to the U.S. Constitution, which was ratified in 1913. The political pendulum swung back in the Republicans' favor in the 1920s and culminated with the passage of the Smoot–Hawley tariff in 1930. The committee's jurisdiction changed significantly after the Democrats gained control of both Congress and the presidency in the 1930s. Passage of the Reciprocal Trade Agreements Act of 1934 authorized the president to negotiate tariff rates with other nations. This relieved the Ways and Means Committee of much of its tariff-setting power, leaving it to monitor U.S. trade agreements negotiated by the president. Hence, committee members shifted their attention to developing new tax policies. Their efforts included the passage of the Wealth Tax in 1935 and an array of revenue bills establishing progressive income-tax rates to shift the tax burden from the poor and the middle class toward the wealthy. The new tax rates also enabled the nation to finance both the Second World War and the Korean conflict in the early 1950s. During this time, the House Ways and Means Committee was responsible for developing and helping win passage of several landmark bills. The most notable effort was the passage of the Social Security Act of 1935, with amendments to the act in 1939 and 1954. As part of these efforts, committee members developed Social Security tax and benefit rates and
eligibility requirements. Later, during the presidency of Dwight D. Eisenhower, the committee became a battleground among conservatives wanting to reduce tax rates on the wealthy, moderates such as Eisenhower wanting to maintain rates to balance the federal budget, and liberals wanting to increase rates to finance programs to help the poor. Not surprisingly, these political cross pressures made the federal tax code more complex than ever. An important event in the evolution of the committee occurred in 1958 when Wilbur Mills, a Democrat from Arkansas, became the chair of the panel. Mills quickly established and maintained a reputation as a master legislator, particularly for building bipartisan coalitions and arriving at a consensus among committee members. Mills tended to have the committee make its decisions as a group and out of the view of both the news media and interest groups. Mills also carefully screened new committee members to assure their willingness to compromise on fiscal issues. In addition, Ways and Means had been authorized to bring all of its tax legislation to the House floor under a "closed" rule, meaning that no part of the bill could be amended. Members were forced to vote either for or against the entire proposal. Finally, Mills compiled a remarkably successful record of winning approval of his proposals during conference committee negotiations with the Senate delegation. He helped win passage of President John F. Kennedy's tax cut, the Medicare Act of 1965, and important amendments to the Social Security Act that were passed in 1967 and 1972. Unfortunately for Mills, by the 1970s he increasingly came under fire from liberal House Democratic reformers. Reforms that won passage in the mid-1970s essentially broke up his system of closed committee
deliberations. All standing committees were divided into at least four subcommittees, bills could be referred to more than one standing committee, and a House Budget Committee was created with jurisdiction rivaling the work of Ways and Means. Not long afterward, Mills gave up the chairmanship and retired from the House after being publicly embarrassed by a personal incident in Washington. Since that time, the committee has failed to regain the prominence it held during much of the 20th century. During the era of Republican dominance, which began with the party's capture of both the House and the Senate in the 1994 midterm elections, the Ways and Means Committee has frequently been a partisan battleground. The committee played a role in helping President George W. Bush win approval of his tax cuts for the wealthy and the passage of the president's Medicare plan to add prescription drugs to the program in 2003. The highly visible and volatile jurisdiction of the committee almost assures that the panel will remain at the forefront of the attention of Congress, the president, and the public alike. See also bill; legislative process. Further Reading Manley, John. The Politics of Finance: The House Committee on Ways and Means. Boston, Mass.: Little, Brown, 1970; Schick, Allen. Congress and Money: Budgeting, Spending and Taxing. Washington, D.C.: The Urban Institute Press, 1980; Strahan, Randall. New Ways and Means: Reform and Change in a Congressional Committee. Chapel Hill: University of North Carolina Press, 1990; Zelizer, Julian. Taxing America: Wilbur Mills, Congress and the State, 1945–1973. New York: Cambridge University Press, 1998. —Robert E. Dewhirst
EXECUTIVE BRANCH
administrative presidency
The separation of powers system within the U.S. Constitution fragments and disperses political power, but the public expectation is that the president shall solve the nation's problems. For the president, this creates a gap between what is expected of him and the resources that he has to deliver on these inflated expectations. Thus, presidents must search for ways, which are sometimes extraconstitutional, to increase their power. One of the most important tools a president can use to overcome the natural lethargy built into the separation of powers model of government is to develop an "administrative presidency." Article II, Section 1 of the U.S. Constitution makes the president the chief executive officer of the nation. The president is responsible for the management of the federal government, but he also shares power with Congress and the courts, and the president is by no means independent in the exercise of managerial leadership. In policy terms, this means that the president shares power, especially with the Congress, and yet, the expectation is that the president is to lead. This creates an expectation/power gap, as the demands placed on the president exceed the power he has to achieve these expected results. If a president must go through Congress to set policy, he often finds himself frustrated when the Congress fails to respond positively to the president's initiatives. As presidents fail to convince Congress to respond favorably to their proposals, ways are sometimes found to "get around" Congress. Frustrated by the Congress's refusal to follow presidential leadership, presidents look for ways to achieve their policy goals without going through the cumbersome and frustrating legislative arena. One approach popularized by recent presidents is to devise an administrative strategy for governing. When Congress fails to legislate, presidents may try to "administrate." That is, they seek to impose managerial or administrative means to gain policy ends. This trend has increased dramatically in the past 35 years. President Richard M. Nixon is often credited with "inventing" this strategy, but it did not fully blossom until the Ronald Reagan presidency in the 1980s. President Reagan aggressively employed a strategy that politicized the managerial side of the office in an effort to circumvent Congress and govern without its input. President Bill Clinton learned that lesson well and continued using an administrative strategy to govern, especially after 1995 and the Republican takeover in Congress. In the post-September 11, 2001, period, President George W. Bush likewise sought ways to govern without Congress, using administrative strategies instead of relying on legislative strategies. Using executive orders, memoranda, proclamations, signing statements, and other administrative devices, presidents have been able to make policy without, and sometimes against, the will of Congress. This form of unilateral leadership gives the president the ability to impose his will in the absence
of congressional approval and, with the support of the U.S. Supreme Court, which has held that executive orders have, under most circumstances, the force of law, presidents are often able to govern without Congress. An executive order is a directive issued by the president in his capacity as chief executive. These orders are issued unilaterally by the president and do not involve obtaining congressional approval. There is no officially accepted definition of what an executive order is, but a 1957 report from the House of Representatives defined the executive order as "a written document issued by the President and titled as such [executive order] by him or at his discretion." (U.S. House Committee on Government Operations, December 1957, p. vii.) Originally, the executive order was intended for minor administrative and rule-making functions to help the nation's chief administrative officer administer the law more efficiently and effectively. In time, however, the executive order has become an important and often controversial tool enabling the president to make policy without the consent of Congress as required by the Constitution. When used for major policy issues, the executive order seems antithetical to the spirit as well as the letter of the separation of powers. As the nation's chief executive officer, the president bears significant administrative responsibilities. To do this job, a president needs the power and authority to issue administrative instructions. Of that there can be no doubt. But when presidents use the executive order to undermine the Congress and the checks and balances, a political backlash may occur. The executive order is an "implied" power, which means that it is not specifically mentioned (not enumerated) in the Constitution but is believed to be essential for the proper functioning of government. Thus, presidents occasionally rely on executive orders to better fulfill their constitutional role as chief executive officer. The first executive order was issued by President George Washington on June 8, 1789. It instructed heads of departments (what we today call cabinet officers) to make a "clear account" of matters in their departments. Under the Administrative Procedure Act of 1946, all executive orders must be
published in the Federal Register. Congress, if it wishes, can overturn an executive order. Executive orders can also be challenged in court on grounds that they may violate the Constitution. Over time, presidents have gone far beyond the use of executive orders for minor administrative matters and have begun to use such orders to "make law" on more substantive and controversial matters. Increasingly frustrated by the slow and sometimes obstructive nature of the congressional process, presidents have turned to administrative techniques such as executive orders in an effort to bypass the Congress, making an "end run" around the legislative branch, and rarely does the Congress overturn a presidential executive order. Thus, presidents use executive orders along with proclamations, memoranda, findings, directives, and signing statements to increase their administrative reach over policy. Such efforts to bypass Congress may sometimes overstep the bounds of what is an appropriate use of administrative tools of the office. Presidents have been accused, often with justification, of "going around" Congress and "legislating" independently of Congress. Presidents have used executive orders to implement some controversial policies. In 1942, during World War II, President Franklin D. Roosevelt ordered the removal of Japanese Americans from the Pacific Coast and their confinement in internment camps. In 1948, Harry S. Truman integrated the U.S. military via an executive order. In 1952, President Truman also attempted to seize control of steel mills (this was one of the rare instances when the U.S. Supreme Court overturned an executive order). In 1994, President Bill Clinton directed the U.S. Coast Guard to return Haitian refugees found at sea to Haiti. In 2001, President Bush issued a series of orders aimed at undermining terrorist organizations in the United States and abroad. The temptation to turn to an administrative strategy to govern was most visible when there was "divided government" (when the president was of one political party and the Congress was controlled by the opposition). One such time was during the presidency of Richard M. Nixon. President Nixon, a Republican, faced a Congress controlled by the Democrats. He very quickly learned that the Democrats would be disinclined to support his legislative agenda, so Nixon sought ways to get what he wanted without going
through Congress. Thus, he helped devise an "administrative strategy" of presidential leadership. Nixon was only moderately successful in these efforts. It would take the Reagan presidency in the 1980s to more fully bring the administrative presidency to bloom. President Reagan took the seeds that were planted by Nixon and developed a more aggressive and sophisticated means of governing without Congress. From that point on, succeeding presidents had new tools in the presidential arsenal with which to govern. When the partisan tables were turned during the Clinton years in 1995 (when Clinton the Democrat faced a Congress controlled by the opposition Republicans), he was sometimes able to take the tactics learned from his Republican predecessors and turn them around on the Republican Congress. Ironically, when George W. Bush became president, a Republican president who also had a Republican-controlled Congress, he continued to employ an aggressive administrative strategy. In fact, he took the administrative presidency even further than any of his predecessors, asserting a bold and unilateral brand of presidential leadership (under the "unitary executive" theory) claiming that the president has a variety of independent powers, that he did not need the Congress, and that the Congress had no authority to question his actions under the claimed powers of the administrative presidency in wartime. In the early days of the war on terrorism, Bush's claims went virtually unchallenged, but as the war effort in Iraq deteriorated, and as a series of other problems developed, the Congress began to challenge the president's power grab and to fight back to reclaim some of its lost or stolen powers. In all likelihood, the administrative presidency is here to stay. Unless the Congress legislates against it, which is unlikely, or the Court sets out a definitive judicial ruling (also unlikely), one can expect presidents to continue to embrace administrative tools of leadership and the Congress to put up only a token defense of its institutional prerogatives and powers. Further Reading Hess, Stephen. Organizing the Presidency. Washington, D.C.: The Brookings Institution, 1988; Mayer, Kenneth R. With the Stroke of a Pen: Executive Orders and Presidential Power. Princeton, N.J.:
Princeton University Press, 2001; Pfiffner, James P., ed. The Managerial Presidency. College Station: Texas A&M University Press, 1999. —Michael A. Genovese
appointment power The U.S. Constitution provides relatively few enumerated powers to the president in Article II. Of these enumerated powers, one of the most important to presidential governance is the power to appoint. The appointment power delegated to the president in Article II of the Constitution was intended by the framers to ensure that the president, and not the Congress, controls the executive branch. The framers created a system of government with three separate branches in which the president executed the laws passed by Congress. Unlike the British system of cabinet government in which Parliament was integrally woven into both the legislative and the executive functions of government, the new U.S. government would have separate and distinct branches. By giving the president the constitutional power to appoint within the executive branch, the framers created a clear division of responsibility. Congress would pass the laws and the president would execute the laws with the assistance of staff appointed solely by him. As originally intended by the framers, the entire executive branch was to be appointed by the president with the caveat that certain senior positions were to be approved and confirmed by the Senate. Positions that require Senate confirmation today include the 15 cabinet officers and their senior staff and deputies, certain parts of the Executive Office of the President (EOP) such as the director of the Office of Management and Budget and the United States Trade Representative, the entire ambassadorial corps, the entire federal judiciary including the U.S. Supreme Court, the heads of independent agencies such as the Environmental Protection Agency, and the heads of limited-term agencies such as the Central Intelligence Agency, and the heads of commissions such as the Federal Election Commission. In addition, a number of positions, such as those on the White House staff, are presidentially appointed but do not require Senate confirmation. Positions that require Senate confirmation are specified by statute. The Constitution
provides in Article II that Congress may specify which positions are subject to Senate confirmation. The federal bureaucracy that emerged in 1789 during the administration of George Washington was entirely composed of presidential appointees. All of the employees of the three executive departments (State, Treasury, and War) and the U.S. Attorney General's office were appointed by the president or by his cabinet officers. Three years later, in 1792, the Post Office was added to the departments directly under the president. By 1801, these departments within the federal government employed 3,000 people, all appointed directly or indirectly by the president. During this period, employees in the federal government tended to be friends of the president or of the cabinet officers. Most federal employees were highly educated and considered members of the U.S. upper class, a pattern that led to a "government by gentlemen." Federal employees, although presidentially appointed, were generally considered to be well qualified for their positions. They were rarely removed from their jobs for political reasons, even after a change in administration. This pattern of a "government by gentlemen" changed during the administration of Andrew Jackson when the hiring process became less a matter of personal relationships with the president and educational background than one of political reward. Jackson created the "spoils system," in which jobs in the federal government were often a reward for supporting the presidential campaign. Job qualifications and experience became less important than political connection under the spoils system. Jackson removed many of the federal employees, replacing them with campaign supporters. The age of Jackson ushered in a new era of presidential appointments, an era based on using the federal government as a source of patronage. This practice continued unchallenged until 1883 when the Pendleton Act was passed. The Pendleton Act, better known as the Civil Service Act, provided for federal office seekers to take an examination to qualify for a federal job. The law created a Civil Service Commission to administer the examinations and to classify jobs and set salary levels for the classified jobs. When the law went into effect, only 14,000 positions, or 10 percent of the federal workforce, were covered. The jobs of the
remaining 90 percent of the federal workforce remained at the discretion of the president for hiring and removal. In the years since the Pendleton Act's passage, the civil service system has gained control of nearly all of the approximately 3,000,000 civilian jobs in the federal government today. Presidents control very few of the positions within the federal government, in contrast to 1789 when George Washington controlled all positions. The cabinet officers and their senior policy makers, the White House staff, and most of the staff in the Executive Office of the President (EOP) remain the only non-civil-service positions within the federal government, with the exception of the federal judiciary, certain independent agency and commission heads, and the ambassadorial corps. Approximately 5,000 positions are currently available for presidential appointment within the executive branch that are not controlled by the civil service system. A list of these 5,000 positions is published every four years by the Committee on Government Reform in the U.S. House of Representatives in a report entitled "Policy and Supporting Positions," better known as the Plum Book. The report lists each of the positions available to the president, both those that require Senate confirmation, such as cabinet secretaries and assistant secretaries, and those that do not, such as senior staff within the departments and White House staff. By limiting the number of positions available to each new president, the federal government is able to maintain an institutional memory, a high degree of policy expertise, and the confidence of the people of the United States. Merit rather than political connection controls the vast majority of hiring within the federal government today. Only the top echelon of federal employees can be hired and fired by the president due to civil service laws and regulations. Presidential appointees make up a small part of the federal workforce but are a critical part of presidential governance. Whether or not the president had the authority to remove presidential appointees who were confirmed by the Senate was unclear in the Constitution. While Article II states that Congress may define the positions that are subject to Senate confirmation, it
does not state whether the president in turn must seek the Senate's approval to remove these appointees. The Constitution simply states in Article II that the president "shall nominate, and by and with the Advice and Consent of the Senate, shall appoint Ambassadors, other public Ministers and Consuls . . . and all other Officers of the United States." Presidents have consistently argued that the lack of a clear statement in the Constitution that requires the Senate to approve removal of an appointee provides the president sole removal authority. The Senate maintained for many years that the framers intended "parallel construction" in their language, which authorized the Senate to both confirm and remove. The first test of the ambiguous language on removal power in Article II came during the administration of President Andrew Johnson. After Johnson fired Secretary of War Edwin M. Stanton in 1868 in a dispute over Stanton's opposition to Johnson's Reconstruction policies, the House of Representatives impeached Johnson. Congress asserted that Johnson had violated his constitutional responsibility by failing to seek Senate approval for the removal of Stanton. The Senate did not convict Johnson, and he continued to hold office for the remainder of his term; not surprisingly, Stanton did not regain his office. The issue of the removal power was resolved during the 19th century through the political process of impeachment. However, the issue surfaced again in the 20th century and was resolved this time through the judicial process when the Supreme Court adjudicated it in two cases. In essence, the Court stated that if the responsibilities of the appointee were totally within the executive branch, removal was allowed. But if the responsibilities of the appointee were quasi-judicial or quasi-legislative, removal was not allowed. In the 1926 court case of Myers v. United States, the Supreme Court ruled that President Woodrow Wilson did have the authority to remove a postmaster, Frank S. Myers, in Portland, Oregon. Chief Justice William Howard Taft wrote in the opinion that "Article II grants to the President the executive power of the government . . . including the power of appointment and removal." Taft further stated that the "moment that he [the president] loses confidence in the intelligence, ability, judgment, or loyalty of one of
them [political appointees], he must have the power to remove him without delay." Since the postmaster performed a role completely within the executive branch, he was subject to removal by the president. Nine years later, the issue of removal power came before the Supreme Court again when President Franklin Delano Roosevelt tried to remove a commissioner from the Federal Trade Commission. Under the legislation governing the Federal Trade Commission, the president could appoint but not remove without cause. Removal for political reasons, such as differences in political parties, was barred by the enabling statute. William E. Humphrey was a conservative Republican appointed to the Federal Trade Commission by President Herbert Hoover in 1931. When Roosevelt entered office, he wanted to remove Humphrey. Although Humphrey died soon after, his executor challenged Roosevelt's decision. In Humphrey's Executor v. United States in 1935, the Supreme Court ruled that a commissioner [Humphrey] of the Federal Trade Commission functions in both a quasi-judicial and a quasi-legislative manner in addition to performing duties within the executive branch. As a result, the Court ruled, the removal power was subject to the language of the enabling legislation, which limited removal to "inefficiency, neglect of duty, or malfeasance in office." Roosevelt could not remove Humphrey since his responsibilities were not solely executive. During the latter half of the 20th century, another question arose in the debate over the broad issue of the president's appointment power. When Congress amended the Federal Election Campaign Act in 1974, it included a provision that a newly created Federal Election Commission would oversee the implementation of the law. The Federal Election Commission was to have members appointed by both the Congress and the president. In 1976, the Supreme Court ruled in Buckley v. Valeo, which challenged a number of major provisions of the law, that only the president had the appointment power. The Court's decision stated that Congress could include in the legislation that appointments to the Federal Election Commission must be confirmed by the Senate, but it could not include in the legislation a requirement for any of the members to be appointed by the Congress. The appointment power, the Court concluded, resided solely with the president.
Further Reading Pfiffner, James P. The Modern Presidency. 4th ed. Belmont, Calif.: Wadsworth, 2005; Warshaw, Shirley Anne. The Keys to Power: Managing the Presidency. 2nd ed. New York: Longman, 2004. —Shirley Anne Warshaw
assassinations The term assassination generally refers to the murder of a prominent or public person, usually, though not always, a governmental official. The word assassin is derived from a medieval Islamic sect whose members became notorious for murdering political and military leaders, including Christian Crusaders. The assassination of Archduke Francis Ferdinand of Austria in 1914 led to the outbreak of World War I. In
addition, many other prominent public figures have been killed by assassins: Mohandas K. Gandhi, the Reverend Martin Luther King, Jr., Robert F. Kennedy, and Egypt's Anwar Sadat, to name but a few. One need only go back and read the historical plays of William Shakespeare to realize that political intrigue, backstabbing, and assassinations have deep roots in political history, as well as in literature. While there have been many assassination attempts against U.S. presidents, only four have succeeded. Presidents Abraham Lincoln, James Garfield, William McKinley, and John F. Kennedy were all felled by assassins' bullets. Because of the threat of assassination, many government officials receive protection. The president is protected by the Secret Service, a branch of the Department of Homeland Security.
Leon Czolgosz shoots President McKinley with a concealed revolver at the Pan-American Exposition reception, September 6, 1901. Photograph of a wash drawing by T. Dart Walker (Library of Congress)
The assassination of a president is often a traumatic national experience. The presidency is a highly personal office; U.S. citizens invest a great deal of symbolic as well as political importance in their presidents, and the abrupt and violent nature of an assassination often leaves the public shocked, sickened, and outraged. The people often identify with their president and after an assassination feel deep grief and anxiety. On April 14, 1865, President Abraham Lincoln was shot while attending a play, Our American Cousin, at Ford's Theater in Washington, D.C. Five days earlier, General Robert E. Lee had surrendered to General Ulysses S. Grant, marking the end of the Civil War and the defeat of the Confederate South. John Wilkes Booth, 26, an actor and southern sympathizer, had been planning for months to either kidnap or kill Lincoln. Booth, along with two coconspirators, had planned to kill the president, Vice President Andrew Johnson, and Secretary of State William Seward. Booth was to kill Lincoln, George Atzerodt was to kill Johnson, and Lewis Thornton Powell was to kill Seward. The vice president escaped harm when Atzerodt backed out at the last minute. Secretary of State Seward was stabbed by Powell, but he survived. Booth climbed into the presidential box at the theater, where the president's bodyguard had left his post to watch the play, pointed his gun just inches from Lincoln's head, and fired. The bullet struck the president behind the left ear and lodged behind the right eye. After firing, Booth leapt from the balcony onto the stage, shouting, "Sic semper tyrannis" (thus always to tyrants). The leap onto the stage caused Booth to break a bone in his leg, and he hobbled off and escaped. The president died at 7:22 the next morning, April 15, and an overwhelming outpouring of grief swept the nation. On April 26, Booth, while trying to elude capture, was cornered by soldiers and shot to death. James Garfield had been president for only four months when he was felled by an assassin's bullet. On the morning of July 2, 1881, the president and Secretary of State James G. Blaine were about to board a train in Washington, D.C., bound for Elberon, New Jersey, where the president was headed to visit his ailing
wife, when they were approached by Charles Guiteau, a mentally unstable lawyer who was disappointed that President Garfield had not appointed him ambassador to France. Guiteau fired two shots, hitting the president. Garfield lingered for 80 days and died on September 19, 1881. While Guiteau was almost certainly incompetent to stand trial, he did in fact go to trial, where he was found guilty and sentenced to death. He was executed on June 30, 1882. Garfield, while technically the victim of an assassin's bullet, may not have died from the gunshot wounds. Many believe that it was incompetent medical treatment after the shooting that actually led to his death. President William McKinley was killed in Buffalo, New York, where he was attending the Pan-American Exposition. On the afternoon of September 6, 1901, McKinley, who had delivered an address at the exposition the previous day, was greeting people in the crowd when Leon Czolgosz, a self-proclaimed anarchist, worked his way through the layers of protection around the president and fired two bullets at McKinley. One bullet bounced off a button on the president's jacket, and the other lodged in McKinley's stomach. McKinley held on and seemed to improve, but in the early morning hours of September 14, 1901, he died. Czolgosz was quickly taken into custody. Fifty-three days later, after a speedy trial and conviction, he was executed in the electric chair. The first three assassinations of presidents had all occurred at close range. The fourth assassination was different. It remains controversial to this day, and many believe it is unsolved. On November 22, 1963, President John F. Kennedy was on a political trip to Texas, and his motorcade was traveling down a Dallas street when shots rang out. The president had been hit. He was rushed to the hospital but died a short time later. Lee Harvey Oswald was arrested as the lead suspect in the assassination. As Oswald was being transferred to a different jail, Jack Ruby, a Dallas nightclub owner, approached Oswald and shot and killed him. President Lyndon Johnson appointed a commission to investigate the murder of President Kennedy. Led by Chief Justice of the Supreme Court Earl Warren, the commission produced a report that seemed to raise more questions than it answered. The commission concluded that Lee Harvey Oswald, acting alone, had
killed the president. But a cottage industry has grown up challenging the conclusions of the Warren Commission, and to this day, many believe that it is not known with certainty who killed President Kennedy. The fact that the assassination of President Kennedy occurred in the age of television gave citizens a close-up look at the crisis and made this event more personal and more central to their lives and experiences. In the aftermath of Central Intelligence Agency scandals revealed in the Watergate era, President Gerald Ford signed an executive order forbidding the United States from engaging in assassination attempts overseas. The backlash against such efforts was the result of embarrassing and sometimes comic revelations of CIA attempts to assassinate Cuba's Fidel Castro. This ban could be lifted with the signing of a presidential finding but otherwise remains in place and has the force of law. But in a post-9/11 age of antiterrorism, such bans have been lifted or softened, and with the spread of secrecy in the administration of George W. Bush, the extent to which such efforts may have been resumed is not known. It is a thorny moral as well as political question, one that can, if it is revealed or if it backfires, dramatically hurt a nation. Thus, assassinations remain a contentious and problematic issue, especially for democratic nations that value the rule of law. Further Reading Clarke, James W. American Assassins: The Darker Side of Politics. Princeton, N.J.: Princeton University Press, 1990; McKinley, James. Assassination in America. New York: Harper & Row, 1977. —Michael A. Genovese
Atomic Energy Commission The U.S. Atomic Energy Commission (AEC) was created in the aftermath of World War II by the Atomic Energy Act of 1946 and was responsible for the control and development of the U.S. atomic energy program. When the commission was created, there was debate about whether it should be primarily concerned with the military or the civilian development of atomic power. This was because, at that time, the cold war was in its early stages, and the military uses of atomic power posed both problems
and opportunities for the United States and the world. In the late 1940s, the United States was the only nation that possessed or had used atomic weapons (against Japan in the late stages of World War II), and although the Soviet Union was trying desperately to obtain or develop atomic weapons, at this time the United States still enjoyed a monopoly on atomic weapons—a monopoly that would end with the Soviet Union's development of its own atomic weapons a few years later. But in 1946, the United States saw an opportunity, as the world's only atomic superpower, to have a broad impact on the development and proliferation of atomic and nuclear power and weaponry. It would be a brief window of opportunity but an important one. How would the United States face up to this opportunity, and how would the Soviet Union and the rest of the international community respond? The 1946 act provided for the creation of a five-member commission, appointed by the president with the advice and consent of the Senate, that would set policy for the nascent nuclear- or atomic-energy industry. In many respects, the atomic age came as a surprise to the world. After all, the United States dropped atomic bombs on Hiroshima and Nagasaki, Japan, in August 1945, and from that point on, the dilemma of the modern world was how to control the use of nuclear power and weapons while not endangering the world at large. The atomic weapons were of unprecedented explosive power, and they ignited a fear among the world's population that demanded that such destructive weapons be controlled. After World War II, President Harry S. Truman urged the Congress to create a commission to deal with this new and potentially dangerous but also potentially useful source of energy, power, and destruction. Truman wanted this new commission to "control all sources of atomic energy and all activities connected with its development and use in the United States." The immediate postwar period was one of heated conflict between the United States and the Soviet Union as the early days of the cold war were a time of fear and uncertainty. After the United States dropped the first atomic bombs on Japan toward the end of World War II, the nuclear genie was out of the bottle, and the world was groping for a way to control the spread of nuclear weapons and eventually to learn how to safely use atomic power. Should the United
States attempt to bottle up nuclear technology, or should it open up its information for world use? Could international agreements lead to cooperation in the area of nuclear power and weaponry, or would events spiral out of control, greatly endangering the world? It was a confused and confusing time for policy makers as every option seemed to hold both promise and peril. In 1954, Congress attempted to reform the process with the passage of amendments to the Atomic Energy Act, which encouraged the commercial development of nuclear power. The amended act was the impetus for the creation of a nuclear energy industry in the United States, and although the 1954 act gave the commission responsibility for the "safe" development of nuclear power and energy, critics believed that the commission was the tool of industry and did not serve the needs or interests of the public at large. In 1974, faced with bad publicity and diminishing public and congressional support, the Atomic Energy Commission was abolished by the Energy Reorganization Act, which created the Nuclear Regulatory Commission to take responsibility for the safe and peaceful development of nuclear power. But by the late 1980s, nuclear power had become so politically unpalatable that it was relegated to the back burner in the United States. The need for energy, however, and the emergence of the presidency of George W. Bush revived interest in the potential for nuclear power development. This was especially apparent during the 2006 rise in gas prices, when consumers faced inflated and, to many, exorbitant prices for a gallon of gas. Calls for new and less costly sources of energy came from the political left and right, and President Bush, facing lagging popularity ratings, began to call for energy independence and new energy exploration. This revived the hopes of the nuclear industry that nuclear power might be a more attractive source of energy for a nation that was tired of paying such high prices for gas. In this atmosphere, calls for the building of new nuclear power plants as a way to wean the United States off expensive and foreign sources of energy had resonance. But the problems inherent in nuclear power remain, and any major effort to transform United States energy production to nuclear power
would certainly be met with stiff resistance from the scientific and environmental communities. This might well prove too powerful a force to overcome and might doom such efforts to failure. The Atomic Energy Commission is not to be confused with the International Atomic Energy Agency (IAEA), an arm of the United Nations that is designed to monitor international nuclear arms agreements and norms. The IAEA, headquartered in Vienna, Austria, deals with international issues concerning the use of nuclear power and also the spread of weaponry. See also bureaucracy. Further Reading Loeb, Paul Rogat. Nuclear Culture: Living and Working in the World's Largest Atomic Complex. New York: Coward, McCann & Geoghegan, 1982; Webb, Richard E. The Accidental Hazards of Nuclear Power Plants. Amherst: University of Massachusetts Press, 1976. —Michael A. Genovese
attorney general, United States The attorney general is the chief law officer of the nation, the head of the Department of Justice, and an important legal adviser to the president and the executive branch. The Justice Department handles most of the U.S. government’s legal affairs, from investigation and prosecution, to imprisonment and parole. The attorney general and his or her staff also recommend legislation to Congress related to criminal and civil law, the administration of the courts, and law enforcement powers. The attorney general, like other executive department heads, is nominated by the president and is confirmed by the Senate. In 1792, the attorney general joined the cabinet and now is considered to be a member of the “inner” cabinet, along with the secretaries of State, Defense, and the Treasury. Since September 11, 2001, the attorney general has assumed even greater authority in the domestic antiterrorism effort. In some administrations, the law officer also serves as a key policy advisor to the president on a wide range of issues. The office became part of the line of succession to the presidency in 1886; the attorney general is seventh in line, following the vice president, the Speaker of the House,
Hon. Caleb Cushing of Massachusetts (Library of Congress)
the president pro tempore of the Senate, and the secretaries of State, the Treasury, and Defense. The Justice Department’s jurisdiction covers a wide range of issues. This has led some observers to note that an attorney general’s priorities can have a significant impact on U.S. society at large. For example, some attorneys general have concentrated resources on fighting white collar crime, while others prioritize civil rights enforcement. With finite staff and resources, the attorney general sets priorities that determine which federal laws are more rigorously enforced than the others. Unlike other cabinet officers, attorneys general customarily do not engage in direct political activity to assist the administration or promote a president’s reelection because the position continues to be seen as quasi-judicial. The attorney general is an officer of the court as well as a member of an elected administration. In a sense, the office sits at the nexus of law and politics. Occasionally, this has led to a tension between loyalty to the president and loyalty to the law. The unique obligations of the office are often noted in Senate confirmation hearings and tend to
trigger greater congressional scrutiny and higher public expectations than do other executive positions. The attorney general's office was consciously drawn from the English office of the same name, which dates back to the 15th century. Colonial governments also had attorneys general who were officially subservient to the English law officer but who—in actual practice—operated with a high degree of autonomy. The first attorney general in the Americas was appointed in Virginia in 1643, followed by one for the colony of Rhode Island in 1650. The office continued when the colonies declared independence and became states. However, there was no national attorney general under the Articles of Confederation; instead, the Continental Congress hired lawyers to represent its interests in state courts. Ratification of the new U.S. Constitution, which created a national government based on the people and not on the states, set the stage for the creation of a national-level attorney general. In one of its earliest acts, Congress proposed and passed legislation creating the lower federal courts and the post of attorney general. That statute—the Judiciary Act of 1789—called for the appointment of "a meet person, learned in the law, to act as attorney general for the United States," who would have two quasi-legal duties: "to prosecute and conduct all suits in the Supreme Court in which the United States shall be concerned, and to give his advice and opinion upon questions of law when required by the President of the United States, or when requested by the heads of any of the departments, touching any matters that may concern their departments." While still the responsibility of the attorney general, both of these functions are now largely delegated to the Office of the U.S. Solicitor General (created in 1870) and the Office of Legal Counsel (created in 1933), respectively. The first attorney general—Edmund Randolph—was appointed by President George Washington in 1790. The current attorney general (as of 2008), Michael B. Mukasey, is the 81st law officer. The longest serving was William Wirt, whose 12-year tenure (1817–29) has yet to be matched. The second longest serving was Janet Reno, with an eight-year term (1993–2001). The vast majority have served terms of four years or less. Six law officers went on to be named to the U.S. Supreme Court, two serving as
chief justice (Roger B. Taney and Harlan Fiske Stone). Although early attorneys general considered themselves as heads of a “law department,” there was no formal Justice Department until the Judiciary Act of 1870. Instead, in its first 60 years, the office was not well institutionalized. President James Monroe complained to Congress in 1817 that there was no office space, stationery, messenger, clerk, or even fuel for Wirt, his new attorney general. Wirt also was shocked to discover that his eight predecessors had not even kept copies of their legal opinions and correspondence. He began the process of institutionalization: He set up a system of record keeping and petitioned Congress for funds for a clerk, stationery, and a reference library of state laws. Congress provided the clerk and the messenger, and, a few years later, funds for a room. Not until 1831 did Congress pay for law books and office furniture. Congressional resistance appeared to derive from a fear of a strong attorney general and a centralized legal bureaucracy at the national level. For the same reason, efforts to bring other government lawyers under the attorney general also initially failed. Instead, starting in 1830 with the post of solicitor of the Treasury, Congress created law offices in other executive departments, which fragmented the nation’s legal business. Soon, Congress had added solicitors to Internal Revenue, War, the Navy, and the Post Office. Until the appointment of Caleb Cushing in the Franklin Pierce Administration, the attorney general was paid much less than the other cabinet secretaries because the position was considered part time. Attorneys general were expected to maintain their private law practices. Instead of being seen as a potential conflict of interest, private practice was regarded as a means by which attorneys general would sharpen their courtroom skills. In fact, as private counsel, early attorneys general argued some of the major cases in constitutional law before the Supreme Court, including Gibbons v. Ogden (1824), Dartmouth College v. Woodward (1819), and Barron v. Mayor of Baltimore (1833). Their regular appearance before the U.S. Supreme Court enhanced their professional status and brought further remuneration. However, maintaining a successful private practice kept the early attorneys general out of Washington, D.C., much of
the time, which meant that—until Cushing—they played a limited policy role in their administrations. As the first full-time attorney general, Cushing (1853–57) expanded the duties of the office, taking on some functions that had been handled by the secretary of state, such as pardons, extraditions, correspondence with other departments, and legal appointments. Despite Cushing's ability, ambition, and energy, his office remained small, with only two clerks and a messenger. In the 1860s, Congress finally added two assistant attorneys general and a law clerk and made the geographically dispersed U.S. attorneys subordinate to the attorney general. Even so, historians have noted that the nation's legal business remained heavily decentralized and disjointed. It was the post–Civil War period that brought the most change to the attorney general's office. An explosion of litigation involving war claims required the government to hire private attorneys around the country, and this expense forced Congress to reconsider an expanded justice department. Some congressional resistance to centralization continued. Finally, in 1870, a bill passed to create the Department of Justice, adding two more assistant attorneys general and a solicitor general to share the Supreme Court caseload. With this change, the attorney general increasingly became absorbed in administrative work, with other tasks delegated to assistant attorneys general and the solicitor general. In the 20th century, the attorney general's duties grew along with the general expansion of the federal government and law. Six divisions—each headed by an assistant attorney general—handle legal business related to civil and criminal law, civil rights, tax and antitrust law, and the environment and natural resources. The attorney general oversees two investigative agencies as well: the Federal Bureau of Investigation and the Drug Enforcement Administration. Also housed in the Justice Department are the U.S. Marshals Service, the Bureau of Prisons, the Bureau of Alcohol, Tobacco, Firearms, and Explosives, and other agencies related to law enforcement, prosecution, and detention. In addition to legal advice, the attorney general customarily advises the White House on Justice Department priorities, budget requests, and policy implementation. Presidents also may rely on their attorneys general for broader policy advice, and some
attorneys general have become important policy advisers, particularly those politically or personally close to their presidents before appointment. Among those who have been included in the president's inner circle of advisers are Robert F. Kennedy in John Kennedy's administration and Edwin Meese III in Ronald Reagan's administration. Others have not been close presidential associates. In many cases, they were appointed following an administration scandal, selected specifically because of their perceived independence from the White House. This group includes Harlan Fiske Stone, a Columbia University law professor and dean, appointed after the Teapot Dome scandal by Calvin Coolidge in 1924, and Edward Levi, a legal scholar who had been president of the University of Chicago, selected by Gerald Ford in the highly contentious post-Watergate environment. While not generally included in the inner circle, these law officers played a critical role in restoring credibility in the aftermath of scandal. Further Reading Baker, Nancy V. Conflicting Loyalties: Law and Politics in the Office of Attorney General, 1789–1990. Lawrence: University Press of Kansas, 1993; Cummings, Homer, and Carl McFarland. Federal Justice: Chapters in the History of Justice and the Federal Executive. New York: Macmillan, 1937; Meador, Daniel. The President, the Attorney General and the Department of Justice. Charlottesville: Miller Center of Public Affairs, University of Virginia, 1980. —Nancy V. Baker
bully pulpit The president commands the attention of the press in the United States and around the world. His presence is ubiquitous. At a moment's notice, the president can command widespread media attention, and his words will be broadcast to every corner of the globe. This gives the president the potential to have a great impact on world events as well as public and elite opinion. A president skilled in the arts of self-presentation and self-dramatization can command attention and generate support. This has created what some call a "rhetorical presidency." The first president to exploit the public elements of the presidency was also the first president, George
Washington. But his approach was subtle and designed to highlight the dignity of the office, imbuing it with an aura of respectability and gravitas. Andrew Jackson was the first president to truly embrace a public presidency. He claimed to be the voice of the people and attempted to convert this concept into political power. The start of the rhetorical presidency and the president's use of the bully pulpit are credited to Theodore Roosevelt. He advanced the president's role as the national leader of public opinion and used his rhetorical skills to increase the power of the presidency through popular support. Roosevelt believed that the president was the steward of the people and that weak presidential leadership during the 19th century had left the U.S. system of government open to the harmful influence of special interests. He expanded presidential power to the furthest limits of the U.S. Constitution by drawing on broad discretionary powers, the first president to do so during peacetime, rather than adopting a more conservative and literal reading of presidential powers within the Constitution. Roosevelt's "Stewardship Doctrine" demanded that the president rely on the popular support of the people, and it also raised the public's expectations of the president and the office. He often appealed directly to the U.S. public through his active use of the bully pulpit to gain support for his legislative agenda in an attempt to place public pressure on Congress. He referred to his speaking tours around the country as "swings around the circle." Roosevelt's use of the presidency as a bully pulpit changed the people's view of the office and helped to shift power from the legislative branch to the executive branch during the 20th century. Later presidents, though not all, would follow Roosevelt's strategy of relying on the bully pulpit to elevate the power of the office in an attempt to lead democratically as the spokesperson for the public. Woodrow Wilson contributed to a more dominant view of the presidency through his use of the bully pulpit and broke with a 113-year tradition by becoming the first president since John Adams to deliver his State of the Union Address in person before the Congress in 1913. Through his rhetorical skills, especially during World War I, Wilson established the presidency as a strong position of leadership at both the national and the international level. Franklin D.
Roosevelt relied heavily on the bully pulpit, particularly his use of radio, to gradually persuade the public to support his New Deal policies during the 1930s and U.S. involvement in World War II during the 1940s. Use of the bully pulpit has become especially important since the start of the television age, in which a president's overall success or failure as a leader can be determined by his rhetorical skills and public influence. Since the 1960s, three presidents stand out as successful in their use of the bully pulpit—John F. Kennedy, Ronald Reagan, and Bill Clinton. All were known for their frequent use of inspiring and eloquent speeches about public policy and their visions for the country. Kennedy talked of a "New Frontier" and motivated many to become active in public service. Reagan saw the bully pulpit as one of the president's most important tools and, relying on his skills as an actor, provided a strong image of moral leadership that restored faith in government institutions. Clinton's skills as an orator and his ability to speak in an extemporaneous and empathetic manner aided his leadership on some, if not all, of his legislative priorities, such as affirmative action and education. Other presidents during the 20th century either abdicated the bully pulpit or used it ineffectively, which diminished presidential power during their terms and curtailed their leadership potential by allowing other political actors to shape the public debate. As it has evolved during the past century, a president's skillful use of the bully pulpit is necessary to promote his or her philosophy for governing as well as the overall moral and political vision of the administration. It can also determine the effectiveness of presidential governance and whether or not a president can accomplish his or her policy and broader ideological objectives through rhetorical skills. However, some view this as an institutional dilemma for the modern presidency. Because the current political culture demands that the president be a popular leader by fulfilling popular functions and serving the nation through mass appeal, some suggest that the presidency has greatly deviated from the original constitutional intentions of the framers. The rhetorical presidency, through the use of the bully pulpit, is viewed by some as a constitutional aberration because it removes the buffer between citizens and their representatives that the framers established.
Many scholars have documented this shift during recent decades to a style of presidential leadership that is increasingly based on rhetorical skills and the effective use of public activities. Those public efforts have become increasingly important for presidents who have occupied the White House during the last several decades, as the growth of media technology has contributed to the expansion of the rhetorical presidency and the need to develop successful communication strategies. As it has evolved with recent administrations, a communication strategy consists of various components, including the presidential/press relationship, presidential public activities, the presidential policy agenda, and the leadership style of the president. A successful communication strategy can determine the relationship that the president has with both the press and the public. As the essential link between the president and the public, the news media has contributed to the expansion of the executive branch as an institution; the extent to which the White House handles both press and public relations is evident in the number of people now employed in both the press and communication offices. Presidents of the modern era have also utilized public support by increasingly "going public," a style of presidential leadership in which the president sells programs directly to the people. Given all that is now known about how presidents utilize public activities and the strategy that is developed within the White House in an attempt to capitalize on the president's effective use of the bully pulpit, an important question still remains as to how presidents rely on public aspects of the office during their reelection efforts. Presidents now rely so much on attention-focusing activities that they do not necessarily need to change their public strategies during the reelection campaign at the end of the first term. In general, presidents now partake in extensive public-exposure efforts such as public addresses and other appearances throughout their first term in office, not just during the reelection campaign, due in part to the increased media attention paid to the presidency during the past several decades. Presidents now realize that public exposure can be more helpful to them for reelection when it is undertaken in their role as president rather than as a candidate seeking reelection.
Further Reading Cronin, Thomas E., and Michael A. Genovese. The Paradoxes of the American Presidency. 2nd ed. New York: Oxford University Press, 2004; Gelderman, Carol. All the Presidents’ Words: The Bully Pulpit and the Creation of the Virtual Presidency. New York: Walker and Company, 1997; Han, Lori Cox. Governing from Center Stage: White House Communication Strategies during the Television Age of Politics. Cresskill, N.J.: Hampton Press, 2001; Lammers, William W., and Michael A. Genovese. The Presidency and Domestic Policy: Comparing Leadership Styles, FDR to Clinton. Washington, D.C.: Congressional Quarterly Press, 2000; Milkis, Sidney M., and Michael Nelson. The American Presidency: Origins and Development. Washington, D.C.: Congressional Quarterly Press, 1999; Tulis, Jeffrey K. The Rhetorical Presidency. Princeton, N.J.: Princeton University Press, 1987. —Lori Cox Han and Michael A. Genovese
bureaucracy The word bureaucracy comes from the word bureau, originally the covering on a desk used by French government employees. The word later came to signify the desk itself and, eventually, the person working at it—a bureaucrat working in a bureaucracy. The bureaucracy has become a significant force in U.S. politics and is often referred to as the fourth branch of government. While U.S. citizens are highly critical of the bureaucracy, the bureaucracy serves an important function and is necessary, even indispensable, to modern government. Every advanced industrial nation and every developing nation needs and has an infrastructure of bureaucrats to do the day-to-day drudgery that is the work of modern government. Large tasks require large organizations. Modern government would be impossible without bureaucracy, and although the framers of the U.S. Constitution could never have anticipated the growth and power of the modern bureaucracy, these political veterans would quickly recognize both the importance and the dangers of creating a large bureaucratic state. First and foremost, bureaucracies are organizations. They are structured hierarchically, with a clear chain of command and set rules, and within them specialization
of task and division of labor predominate. German sociologist Max Weber defined a bureaucracy as having four essential features: hierarchy, specialization, explicit rules, and merit. The goal of a bureaucracy is to achieve "neutral competence" in the performance of complex tasks. In this way, the members of the bureaucracy—bureaucrats—are to be politically neutral but professionally competent. Bureaucracies are criticized for their power, size, lack of democratic responsiveness, excess of red tape, and remoteness from the citizens they were created to serve. To some critics, bureaucracies have an inherent tendency to seize control of their areas of policy and political expertise, serve their own narrow interests, insulate themselves from the control of elected officials, and become unresponsive to the citizens. In this view, bureaucracies may have a tendency to become isolated and unresponsive. The United States is unique among industrialized nations because it has a relatively small and politically weak bureaucracy. That has not stopped critics from focusing attention on the faults of the bureaucracy. Comparative weakness aside, the bureaucracy is important and at times politically influential, yet it does at times serve its own interests over those of the public. This dilemma can be especially vexing in a separation-of-powers system where a political vacuum is often created due to the conflict between the president and Congress. At times, the bureaucracy may be able to fill that vacuum, thereby gaining political clout. In many industrialized nations, bureaucrats are more highly trained, have more political influence, have more prestige, are better paid, and are treated with greater respect than are bureaucrats in the United States. The U.S. bureaucracy is made up of roughly 2.7 million employees who administer thousands of government programs and policies. Only 15 percent of federal employees work in Washington, D.C.; the remaining 85 percent work in offices and facilities spread across the country, and about one in four works for the military. Less than 10 percent work in social-welfare functions. The federal bureaucracy is made up of the cabinet departments, independent agencies, regulatory agencies, government corporations, and presidential commissions. More than 90 percent of all bureaucrats earned their jobs via merit criteria through the civil
service system. Most people think the bureaucracy is "bloated," but compared to other industrial nations, the U.S. bureaucracy is rather small in size. The U.S. civil service comprises roughly 13 percent of total employment in the United States. In Australia, the bureaucracy comprises 15 percent, in Canada 19 percent, in France 25 percent, and in Sweden 31 percent of total employment. The delegates who wrote the U.S. Constitution could not have imagined the size and scope that government would eventually reach. In their day, they thought little about a bureaucracy. Government at that time was small, and the responsibilities of government at all levels quite minor. The bureaucracy did not become a problem until the size and scope of government grew in response to the Depression of 1929, World War II, the cold war, the Great Society, the national security state, and then the post–September 11, 2001, war against terrorism. In 1800, the federal government was quite small, employing only about 3,000 people. Nearly all of them were appointed by the president, and they were usually among the ablest and most educated in the nation. By the 1820s, things began to change. President Andrew Jackson believed that government employment should be a reward for political service to the victorious party, and that a newly elected president (or governor or mayor) should appoint his cronies and supporters to high government posts. "To the victor go the spoils," went the phrase, thus beginning what became known as the "Spoils System." In this brand of political patronage, each new president rewarded his partisan supporters with government jobs. This made them responsive to the newly elected president, but in time, the spoils system came to be seen as a problem. Critics argued that it too narrowly rewarded partisan behavior, was prone to corruption and incompetence, and often failed to provide the promised and needed services of the government. Soon, as the United States developed, the size of the government's responsibilities began to expand. With industrialism and the nation's emergence into world politics, the government took on added responsibilities, and the scope and size of government grew. Good-government advocates called for reform. When, in 1881, President James Garfield was shot by a disgruntled job seeker, calls for civil-service reform grew.
In 1883, Congress passed the Pendleton Act, establishing a merit system for hiring government employees. This created a career civil service and, with it, the emergence of professional bureaucrats. But even with a more professional civil service, the federal bureaucracy grew slowly. It was not until the federal government was compelled to respond to the economic Depression of 1929 that the bureaucracy really began to expand. In the 1930s, with President Franklin D. Roosevelt's New Deal, the number of federal employees grew significantly. The New Deal signified the growth of federal responsibility for economic stability and security. After the Depression of 1929, the public demanded increased federal responsibility for domestic and economic problems, and the New Deal policies of FDR added to the size and scope of the federal government. It marked the beginning of the welfare state in the United States. After World War II, the federal government grew yet again, this time as a result of America's new superpower status. With the onset of the cold war came the creation of the National Security State to deal with America's world leadership responsibilities. After the Great Depression and World War II, the bureaucracy continued to grow but at a slower and steadier pace. However, during the 1970s, a political backlash against the growing federal, state, and local bureaucracies led to the rise of the antigovernment movement in the United States. Fueled by problems of enlarged size, unresponsiveness, and remoteness from the interests of the citizen, the bureaucracy soon became a four-letter word, and the antigovernment movement became more popular and more mainstream. Led by President Ronald Reagan, who got political mileage out of government and bureaucracy bashing, and fed by antigovernment activists across the nation, the bureaucracy became a prime political target. "Government is not the solution," Reagan used to say; "government is the problem." So powerful was this antibureaucracy sentiment that even some Democrats felt compelled to jump on the bandwagon. This was evident when President Bill Clinton announced in his 1996 State of the Union Address that "The era of big government is over." The antigovernment movement had become such a significant force in U.S. politics that one could not run for the presidency (or any major political office) unless one also ran against the government. While it
seems ironic, one ran for the government by running against the government. Bureaucracy bashing became high political sport in the United States, but there is a difference between the healthy skepticism that is so necessary for the proper functioning of a robust democracy and the deep cynicism and even antigovernment paranoia that becomes self-destructive. This lesson was finally brought home by the tragedy of the April 19, 1995, terrorist bombing of the Alfred P. Murrah Federal Building in Oklahoma City. A truck bomb exploded outside the federal building, killing 168 people, including 19 children who were in the building's day-care center. The terrorist responsible for the bomb was Timothy McVeigh, an alienated member of the government-hating underground. McVeigh believed that the government was taking away his liberty and further believed the government to be "the enemy" and deserving of destruction. In the wake of the September 11, 2001, terrorist attacks against the United States, the scope, size, and power of the federal government again expanded dramatically. The George W. Bush administration orchestrated a military response to terrorists abroad and created the new Department of Homeland Security within the cabinet. It was the beginning of the Antiterrorist State in the United States. What is the work of the bureaucracy? By constitutional design, the legislature makes laws, and the executive branch implements or executes the law. As part of the executive branch of government, the bureaucracy is responsible for carrying out or implementing the will of the legislature. In doing this, the bureaucracy handles several key tasks: implementation, policy making, administration, adjudication, and regulation. The bureaucracy implements laws and regulations. Through implementation, which includes the skill and resources necessary to put policy into action, the bureaucracy brings to life the will of Congress and the president. Laws and regulations are not self-executing. They must be brought to life. Bureaucrats do this by implementing policy. Implementation, it is often said, is politics by other means. The vagueness of many laws allows wide latitude for bureaucratic interpretation, and Congress delegates some discretion concerning this to the bureaucracy. This discretion
often means that, even as they implement laws, bureaucrats make policy. As the administrative wing of the executive branch of government, the bureaucracy also performs the ongoing and routine tasks of the federal government. In implementing and administering laws and regulations, bureaucracies also interpret the law, serving as judges of sorts. As laws are not self-executing, neither are they self-explanatory. They need to be put into action, the intent of the law must be interpreted, and the means of implementation must be determined. This gives the bureaucracy a good deal of administrative discretion, which is the source of its power. Perhaps the most controversial of the bureaucratic tasks involves regulation. Bureaucrats are rule makers, and the formal rules they develop are called regulations. All government regulations are published in the Federal Register (which has about 60,000 pages of rules each year). The Administrative Procedure Act of 1946 requires that the government announce any proposed new regulation in the Federal Register; hold hearings; allow the public and interest groups to register inputs, complaints, and suggestions; research the economic and environmental impact of new proposed rules; consult with elected officials; and then publish the new regulations in the Federal Register. The complaint that government is too intrusive or "regulation-happy" led to a deregulation movement in the 1970s and 1980s. Some of the deregulation worked well; some did not. This led to a reregulation movement, especially where health and safety issues were concerned. Today, bureaucratic regulation remains a problematic yet necessary element of modern government. Bureaucracies exist within a bureaucratic culture with its own operating procedures, values, preferences, and outlook designed to protect and promote the organization. The first rule of any organization is "protect the organization," as that is where your "home" is. In this way, bureaucracies are self-protecting units. Bureaucrats are well aware of the old saying, "politicians come and go, but bureaucrats go on and on." Their future, their employment, security, and work conditions will be shaped less by outside forces of elected politicians than by those immediately around them, so their first loyalty is to their own bureaucratic unit. This is why most politicians see bureaucracies as largely impenetrable.
If bureaucracy remains a dirty word in U.S. politics, it also remains a vital element in the governing process. No modern political system could exist without a bureaucracy, and problems aside, bureaucracies are here to stay. Further Reading Aberbach, Joel D., and Bert A. Rockman. In the Web of Politics: Two Decades of the U.S. Federal Executive. Washington, D.C.: Brookings Institution Press, 2000; Gerth, H. H., and C. Wright Mills, eds. From Max Weber. New York: Oxford University Press, 1946; Gormley, William T., Jr., and Steven J. Balla. Bureaucracy and Democracy: Accountability and Performance. Washington, D.C.: Congressional Quarterly Press, 2003; Wood, B. Dan, and Richard W. Waterman. Bureaucratic Dynamics: The Role of Bureaucracy in a Democracy. Boulder, Colo.: Westview Press, 1994. —Michael A. Genovese
Abraham Lincoln’s cabinet at Washington (Library of Congress)
cabinet While the U.S. Constitution refers to "the executive departments" of government, it makes no mention of a "Cabinet." But the president's cabinet emerged very early in the republic as a group of departmental secretaries who comprised the inner core of the president's administration. President George Washington's first cabinet of 1789 was composed of only three departments: Treasury, State, and War. It was not until 1793 that this collection of department heads and advisers first became known as a "cabinet." In 1798, a fourth department, the Department of the Navy, was added to the cabinet. While not officially designated as such until later, the U.S. attorney general was generally considered to be a part of Washington's cabinet as well. Over the years, new departments have been added and taken away. Currently, the cabinet includes a total of 15 departments. They include (with the date established): the
Department of State (1789; formerly the Department of Foreign Affairs), the Department of the Treasury (1789), the Department of Defense (1947; formerly the Department of War), the Department of Justice (1870), the Department of the Interior (1849), the Department of Agriculture (1889), the Department of Commerce (1903), the Department of Labor (1913), the Department of Health and Human Services (1953; formerly part of the Department of Health, Education, and Welfare), the Department of Housing and Urban Development (1965), the Department of Transportation (1966), the Department of Energy (1977), the Department of Education (1979), the Department of Veterans Affairs (1988), and the Department of Homeland Security (2002). Six other positions within the executive branch are also considered to be of cabinet-level rank, including the vice president of the United States, the White House Chief of Staff, the administrator of the Environmental Protection Agency, the director of the Office of Management and Budget, the director of the Office of National Drug Control Policy, and the United States Trade Representative. The governing structure of a cabinet within the executive branch derives from the British heritage of the U.S. government. In Great Britain, a group of informal advisors to the king emerged as a cabinet. This group was given the formal title of the King's Council during the 14th century. By the 16th century, these advisors took on a stronger and more visible management role, including the administration of government offices. By the 18th century, as the task of running a growing nation increased, monarchs were often uninterested in the administrative details of governing, which provided more power over the decision-making process to these cabinet positions, and many of these officials developed their own political followings and supporters. By the 20th century, with the rise of the prime minister and the decline of the power of the monarch, the British cabinet model had evolved into a system of management where cabinet secretaries have individual responsibility for departmental management and collective responsibilities for national policy making. The modern U.S. cabinet does not share a collective policy-making role in the federal government, but its advisory and administrative duties are deeply rooted in British tradition.
The office of cabinet secretary predates the U.S. Constitution: Robert R. Livingston became the first U.S. cabinet secretary in 1781 as head of the Department of Foreign Affairs, which had been formally established on August 10, 1781, to replace the Committee on Foreign Affairs that had been established by Congress under the Articles of Confederation. When the Constitution was ratified, Congress passed legislation to continue the existence of the Department of Foreign Affairs. During the Constitutional Convention in Philadelphia in 1787, the creation of a presidency and the role that advisors would play were the subject of heated debate among the delegates. Fearful of a monarchical and tyrannical leader, such as the king of England, the delegates wanted to limit the powers of a president appropriately through a system of checks and balances as formalized through the separation of powers. One of the options considered to limit the central authority of the new president was that of a plural executive with an advisory council with whom the president would share responsibility for the decision-making process. However, advocates of a single executive (who included Alexander Hamilton) eventually won the debate, arguing that a plural executive with an advisory council could allow a president to be overruled by the council or to deflect responsibility for certain actions to his advisers (as argued in Federalist 70). The debate on this issue led to concern about inserting the words cabinet or council within the Constitution for fear that such a system might emerge at a later date. As a result, neither of the words was inserted, and the only reference that appears within the document is a brief mention of "heads of the departments." Washington set the important precedent of using his cabinet secretaries as advisors and held regular meetings to achieve this purpose. Since then, there has been no set schedule or regularity to calling cabinet meetings, and the extent to which cabinet secretaries are used as advisors depends on each president and his view of the usefulness of the cabinet or individual members. Today, the cabinet is a group of departmental secretaries who serve at the pleasure of the president. They head the 15 departments that now have cabinet-level status in the executive branch. While many administrative experts recommend that the cabinet process be strengthened, in reality, presidents only marginally rely on their cabinets as either
deliberative or decision-making units. Instead, the modern trend is for presidents to rely on their staffs for advice and services. This trend began in the post– World War II period, and those few efforts to turn this trend around have failed miserably. Today, the cabinet is important but not nearly as important or as powerful as the cabinet was in the first 150 years of the republic. By custom and tradition, the cabinet officers meet as a group to advise the president on the operations of their departments. Department secretaries must be confirmed by the Senate, testify before congressional committees, and have those committees approve their department budgets. This is how Congress checks authority within this part of the executive branch. The cabinet is usually selected for political reasons, not personal ones. Issues of diversity and geography often dictate certain selections. Both Presidents Bill Clinton and George W. Bush vowed to appoint cabinets that looked more like the diversity of the U.S. population, and both had solid records in regard to appointing women and minorities in their cabinets. Clinton appointed the first women to the posts of attorney general (Janet Reno) and secretary of state (Madeleine Albright), and Bush appointed the first black secretary of state (Colin Powell). Clinton also appointed the first Asian American to a cabinet post (Norman Y. Mineta as secretary of commerce), and Bush would keep Mineta, a Democrat, in his cabinet with an appointment to secretary of transportation. Although some cabinets are filled with distinguished members, rarely do they serve as a collective source of advice for the president. In the United States, there has never been a tradition of collegial or collective cabinet responsibility as exists in the British core executive where the prime minister is, as the title implies, the prime minister but a minister nonetheless, and his or her cabinet colleagues exercise some measure of control over the outcome of cabinet meetings. In Britain, while it may not be common, the cabinet may overrule a prime minister. This serves as a form of checks and balances within the British system. Thus, Britain has a type of plural executive, and while an effective prime minister can usually get his or her way with a compliant cabinet, the possibility exists for the cabinet to hold real decision-making power as well as to hold the prime minister accountable.
In the United States, the cabinet is the servant of the president, and what he says goes. The cabinet may try to persuade a president that a proposed policy is unwise, but it is powerless to stop him. In Great Britain, a determined cabinet can stop a prime minister, especially a prime minister who has a weak majority or is politically wounded or vulnerable. Presidents have varied in how they have used the cabinet. Most hold semiregularized cabinet meetings that often are little more than photo opportunities with no real business and no serious decisions made there. Presidents may use the cabinet meetings for information conveying or support building or even for photo opportunities to demonstrate unity, but they rarely use these meetings to discuss or argue policy or make important decisions. Most key decisions are made prior to the cabinet meeting, and the members of the president’s cabinet are informed of decisions already taken. President Franklin D. Roosevelt preferred to use his staff for important tasks and used his cabinet infrequently. Most modern presidents have followed this model. While President Dwight D. Eisenhower tried to bring his cabinet more broadly into the decision-making process, even he grew to rely on a few key advisers for help. Since Eisenhower’s time, presidents have gravitated toward their personal staffs for the important work of government, and cabinet officers have become little more than managers of the departments of the federal government. There is an “inner” and an “outer” cabinet of the key and the peripheral advisers. The secretaries of Defense, State, the Treasury, and perhaps one or two others will usually serve as an inner cabinet and as close advisers to the president. Most other secretaries rarely meet with and are generally not influential with the president. Since the time of President Andrew Jackson, some presidents have grown to rely on what is known as a “kitchen cabinet,” a set of close friends and other associates outside the government on whom they rely for political advice. Another problem that tends to drive presidents away from their cabinets is that some presidents believe that the secretary becomes “captured” by the department or the client group the department serves. This is sometimes referred to as “going native.” Clearly, the secretary is pulled in several different directions at once. A secretary must serve the president who
appointed him or her, the Congress that authorizes the budget and sets policy through law, the client group that must be served, and the department itself. Given these competing pulls, a president may believe that the secretary is not loyal to him or her or is divided in his or her loyalties. Ironically, by being so well placed in the web of information and politics, the cabinet secretary is actually better positioned than almost anyone else to give the president the type of advice that is most needed. Thus, the person best positioned to give the president solid political and policy advice is often the person least likely to be asked for input. In recent years, presidents have had a difficult time filling their cabinets with highly qualified candidates for these posts. Confirmation woes have plagued the modern presidency in filling the cabinet posts. The confirmation process has become so highly politicized and overly personalized that some otherwise highly regarded people would not want to put themselves and their families through such a trial. While very few presidential nominees are rejected by the Senate, the distasteful nature of the process dissuades some from accepting nomination, making it harder to fill the cabinet with first-rate people. This leads to a downward spiral of effectiveness and influence for the cabinet: As the importance of the cabinet declines and confirmation hearings grow more and more distasteful, fewer and fewer top-flight nominees are found, which diminishes the cabinet's importance still further. Today, cabinet secretaries are primarily administrators. They execute and administer policies and programs and only rarely develop them. Other than some of the more visible inner cabinet posts such as State and Defense, most in the president's cabinet are rather anonymous figures and are virtually unknown to the general public. The cabinet is thus an underutilized and underachieving institution of U.S. government. It seems unlikely that this trend will change any time soon, as most presidents greatly prefer to rely on their staff instead of on the cabinet. See also executive branch. Further Reading Bennett, Anthony J. The American President's Cabinet From Kennedy to Bush. New York: St. Martin's Press, 1996; Warshaw, Shirley Anne. Powersharing: White House-Cabinet Relations in the Modern Presidency.
Albany: State University of New York Press, 1996; Fenno, Richard. The President's Cabinet. Cambridge, Mass.: Harvard University Press, 1959; Hess, Stephen, and James P. Pfiffner. Organizing the Presidency. 3rd ed. Washington, D.C.: The Brookings Institution, 2002; Reich, Robert B. Locked in the Cabinet. Thorndike, Maine: Thorndike, 1997; Warshaw, Shirley Anne. The Keys to Power: Managing the Presidency. New York: Longman, 2000. —Michael A. Genovese
Central Intelligence Agency The Central Intelligence Agency (CIA), the most well-known and arguably most important organization within the intelligence community, gained a new focus and relevance after the terrorist attacks of September 11, 2001. During the cold war, the CIA's primary mission was to provide intelligence on the military capabilities and intentions of the Soviet Union and its communist allies. With the end of the cold war and the collapse of the Warsaw Pact and the Soviet Union in 1989–91, the CIA began to shift its attention to other, more diverse but less dangerous threats, particularly those that came from nonstate actors, such as narcotics traffickers, terrorists, and organized criminals. Since 9/11, the CIA has found a renewed importance as one of the critical organizations responsible for fighting the "war" on terrorism. At the same time, however, the CIA has come under scrutiny for its alleged intelligence failures, specifically its failure to foresee or prevent the 9/11 attacks, and has also been the target of continuous calls for reorganization to prevent future failures. The CIA was created in 1947 by President Harry S. Truman as a national-level intelligence agency whose primary customer would be the president and the cabinet. As such, it differentiated itself from the military intelligence agencies that served the Pentagon (Department of Defense). Clear lines were drawn between foreign intelligence gathering, under the purview of the CIA, and domestic intelligence gathering, for which the FBI and the police were responsible. It was also created, as its name implies, to be a centralized clearinghouse for the intelligence community. At the time, memories of the Japanese attack on Pearl Harbor in December 1941 were still fresh, and policy makers created the CIA as part of a
broader reorganization of the national security establishment to make sure that they would not be the victims of future strategic surprise attacks. The CIA performs three central tasks within the intelligence community. One task is to provide all-source intelligence analysis to the president and top-level administration officials. To provide this intelligence analysis, analysts at the CIA monitor raw intelligence reports from various collection sources. These sources include signals intelligence (SIGINT) from the National Security Agency (NSA), which includes intercepts of electronic and communication signals; imagery intelligence (IMINT) from the National Geospatial-Intelligence Agency (NGA), which involves the use of overhead reconnaissance platforms to take what are essentially photographs of enemy locations; human intelligence (HUMINT) from the CIA itself as well as other sources; and open-source intelligence (OSINT) from newspapers, radio, the Internet, and the like. Analysts at the CIA take these raw intelligence reports and synthesize them into an analytical product. Within the CIA, this analytical mission is housed within the Directorate of Intelligence.
During the cold war, the Soviet Union and its massive conventional and nuclear forces were a direct threat to U.S. national security. The U.S. homeland could have been annihilated with nuclear weapons, while U.S. allies in Europe and Asia were in danger of being overrun by Soviet conventional forces. The CIA's primary mission during this time was to assess both Soviet capabilities and intentions. However, assessing Soviet intentions with any degree of confidence was nearly impossible, so instead, the CIA and the intelligence community relied more on technical collection assets (that is, satellites rather than people) to monitor Soviet capabilities. For example, in 1955, the United States believed that there was a "bomber gap" in which the Soviets had a significant advantage in the number of airplanes capable of delivering nuclear weapons. President Dwight D. Eisenhower ordered the development of what was to become the U-2 spy plane, which became operational in 1956 and helped the CIA determine that the bomber gap was merely a Soviet deception campaign and that Soviet bomber capabilities were overblown. The CIA-run U-2 program also helped alleviate fears of a "missile gap" with the Soviet Union that arose in the late 1950s and early 1960s. A further success of the U-2 program was the discovery of Soviet missiles in Cuba in 1962, which became the catalyst for the Cuban Missile Crisis. While the U-2 continues to fly missions for the CIA, most imagery intelligence now comes from reconnaissance or "spy" satellites and from unmanned aerial vehicles (UAVs). Soviet capabilities and intentions were also monitored through signals intelligence, which tracked communications and electronic signatures via satellite and ground- or sea-based collection systems. In addition to its intelligence-gathering mission during the cold war, the CIA was also asked to help stop the spread of communism by covertly promoting democratic, or at least pro-American, governments in unaligned developing nations. This was accomplished through several covert operations, some of which were successful, some of which were not, and others of which were simply unnecessary or misguided. For example, in 1953, the CIA helped overthrow Iran's elected prime minister, Mohammed Mossadegh, who had tried to nationalize Iran's oil industry. In similar fashion, in 1954, the
CIA fomented a coup against the Guatemalan president, Jacobo Arbenz Guzmán. Arbenz had been democratically elected in 1950 but had run afoul of the business interests of the United Fruit Company with his land-reform programs and was subsequently labeled a communist and then deposed in a largely bloodless coup. Perhaps the most well-known and controversial clandestine or covert operation was the failed invasion of Cuba at the Bay of Pigs in 1961, which was intended to overthrow the regime of Fidel Castro. On the morning of April 17, 1961, about 1,500 CIA-trained Cuban exiles landed on the beaches at the Bay of Pigs, but because of inadequate air cover, poor logistics, and Castro's knowledge of the attack, the operation turned into a complete disaster, poisoned relations with Cuba, and tarnished the reputation of the John F. Kennedy administration. The CIA was also covertly involved in the Vietnam War, especially through its operations in Laos. There are also persistent, though unconfirmed, allegations of CIA involvement in General Augusto Pinochet's military coup against the democratically elected Marxist president Salvador Allende of Chile in 1973. While the cold war presented the CIA with a critical role in combating the spread of communism, the end of the cold war brought the CIA new missions and new challenges. The collapse of the Soviet Union was itself problematic for the agency, which came under intense criticism for failing to predict the demise of the Soviet Union, especially since the CIA's core mission at the time was to monitor all military, political, and economic aspects of the Soviet state. Nevertheless, with the end of the cold war in 1989 and the dissolution of the Soviet Union in 1991, the CIA found itself without a clearly defined enemy. Instead, the CIA broadened its focus to include more, but less dangerous, potential threats. While nuclear war of the kind once threatened by the Soviet Union was less of a possibility, there were still national intelligence concerns about the rising power of China, North Korean nuclear weapons, the territorial ambitions of Iraq during the Gulf War and the subsequent period of sanctions and no-fly zones, Russia's politically and economically unstable democratization, the ethnic violence in Bosnia and Kosovo, civil war in Rwanda, anarchy in Somalia, and a host of other low-level threats.
The post–cold war world also witnessed the rising importance of nonstate actors as national security threats. Accordingly, the CIA began to pay more attention to terrorists, narcotics traffickers, and organized crime networks. Throughout this post–cold war era, morale at the CIA was generally low, not only because the agency lacked a clear mission but also because of a revolving door of directors, the revelations of just how much damage Aldrich Ames, a CIA officer spying for Moscow, had done to the agency, and the intelligence "failures" associated with the surprise Indian nuclear test in 1998 and the mistaken bombing of the Chinese embassy in Belgrade during the Kosovo campaign in 1999. The terrorist attacks on September 11, 2001, proved to be a watershed event in the CIA's history. Immediately after the attacks, many observers criticized the CIA for failing to warn of, or even stop, the attacks, despite evidence that the CIA had been producing reports up through September 2001 warning of the increasing activity of al-Qaeda-affiliated operatives. The 9/11 attacks also changed the central focus of the CIA as it shifted even more resources to monitoring and disrupting terrorist groups, although it continued its mission to provide intelligence on states as well as on nonstate actors. As a central organization in the "global war on terror," however, some of the CIA's activities were controversial. In particular, the practice of rendering suspected terrorists to secret prisons where harsh interrogation methods (allegedly torture) were employed sparked controversy. The CIA was also heavily criticized for its inaccurate pre-Iraq War assessments of Iraqi weapons of mass destruction (WMD) programs and the nonexistent Iraqi links to al-Qaeda. The CIA's greatest success against terrorism was probably its operations, along with U.S. special operations forces, in helping the Northern Alliance topple the Taliban regime in Afghanistan in 2001. The CIA and the broader intelligence community will continue to face multiple challenges in the near and long-term future. One of the most important challenges concerns how the CIA should be organized to best perform its mission. In its current form, the CIA is highly bureaucratized, with multiple levels of management. Additionally, information is funneled vertically ("stovepiped") instead of horizontally to other analysts working on related issues. Calls for a
more decentralized and flatter organization, however, have met with resistance. Globalization and the increasing immediacy of the news media will also challenge the CIA to produce analysis that is better and delivered earlier than what is available from open sources. Also, the CIA must remain removed from politics; this issue came to the fore with the allegations that CIA estimates of Iraqi WMD were influenced by political considerations during the Bush administration. Finally, issues of oversight will become even more important for the broader intelligence community (although perhaps less so than for the CIA), particularly as the historical distinctions between domestic and external security become blurred in the war against terrorism. Ensuring that both the civil liberties of U.S. citizens and national security are protected will be a critical task. For any intelligence agency, the task is fundamentally a difficult one: It is charged with finding out information that adversaries want to keep secret. In the fight against terrorism, this task is even more difficult because the adversary is now made up of small groups of nonstate actors who can maintain the initiative and control information much more easily than states can. Nevertheless, the need for good intelligence is crucial, and, ultimately, the best defense against terrorism lies in gathering the best possible intelligence on who the terrorists are and what they plan to do. As a result, the CIA will continue to have a central role in this new global environment. See also executive agencies; foreign-policy power. Further Reading Jeffreys-Jones, Rhodri. The CIA and American Democracy. New Haven, Conn.: Yale University Press, 1998; Kessler, Ronald. The CIA at War: Inside the Secret Campaign against Terror. New York: St. Martin's Press, 2003; Kessler, Ronald. Inside the CIA: Revealing the Secrets of the World's Most Powerful Spy Agency. New York: Pocket Books, 1992; Nutter, John Jacob. The CIA's Black Ops: Covert Action, Foreign Policy, and Democracy. Amherst, N.Y.: Prometheus Books, 2000; Odom, William. Fixing Intelligence: For a More Secure America. New Haven, Conn.: Yale University Press, 2003; Ranelagh, John. CIA: A History. London: BBC Books, 1992; Zegart, Amy B.
“September 11 and the Adaptation Failure of the U.S. Intelligence Agencies.” International Security 29, no. 4 (Spring 2005). —Michael Freeman
Chief of Staff, White House In the early days of the republic, presidents had very few staff assistants to help them manage the business of the executive branch of government. The executive branch was small, and the day-to-day business of running the presidential office was not overly taxing. In time, especially in the post-Depression, post–World War II era, when the demands placed upon the federal government grew and the power and responsibilities of the presidency grew with them, it became clear that, in the words of the famous Brownlow Committee, "the president needs help." Help came in the form of added staff, but with the swelling of the executive branch came problems of managing the office and its many assistants. Presidents could not manage the executive branch alone and became buried in the managerial demands of the institutional growth of the office. The "help" that the president got became a problem. One of the solutions to this problem can be seen in the evolution of the office of the White House Chief of Staff. In the aftermath of World War II, as the United States became the world's leading superpower and the executive branch became the central governing agent of the United States, the size and the power of the presidency grew significantly. The office, in effect, became too large for the president to handle. Where President Franklin D. Roosevelt had a relatively small staff (numbering somewhere in the 40s), by the 1950s the Executive Office of the President (EOP) had grown to more than 200 assistants who reported directly to the president. In time, the number of staff assistants grew dramatically, and with the added numbers came added power. During the past 30 years, the staff has become more important than the cabinet in serving the needs of the president as well as more powerful in policy development and political affairs. The president may be one person, but the presidency is a collection of many. So who manages the
presidency? The president is too busy to serve as his or her own chief manager of the executive branch. Therefore, to organize and manage their growing staffs, presidents began to rely on one person who served as a chief manager of the executive branch, usually referred to as the chief of staff. President Harry S. Truman relied on John R. Steelman as a chief of staff, but he was rarely referred to as such. President Dwight D. Eisenhower was the first president to formally designate one person as chief of staff, appointing trusted adviser Sherman Adams to the post. Adams was the president's eyes, ears, chief manager, and gatekeeper. Eisenhower was accustomed to the military style of operations and wanted a more defined and hierarchical system of White House management. Following Eisenhower, Presidents John F. Kennedy and Lyndon B. Johnson tried to manage without the official designation of a chief of staff, serving in many ways as their own chiefs of staff. But, increasingly, public-administration experts argued that efficient and effective management required someone designated as master of management for the president. It was simply too time-consuming for the president to serve as his own chief of staff. President Richard M. Nixon, who had served as Dwight Eisenhower's vice president and had seen the more hierarchical system employed by Eisenhower firsthand, established a very rigid and hierarchical system of management and placed trusted adviser H.R. "Bob" Haldeman in the gatekeeper post of chief of staff. A former advertising executive with little political experience, Haldeman was nonetheless a trusted ally, and the president relied heavily on him to manage the executive branch. Haldeman ran a tight ship (perhaps too tight, as some would observe) and often fed the worst elements of the Nixon psyche, such as his paranoia about political enemies; rather than softening the harsh edges of the often tormented president, Haldeman too often fueled the fires that burned within Nixon. Instead of putting the brakes on what became known as the Watergate scandal, Haldeman served as a facilitator and enabler of Nixon in the execution of many of the illegal activities of the administration. It should be noted that this is precisely the type of chief of staff Nixon insisted on having. If he was an isolated president, it was because Nixon insisted on being isolated. Haldeman, who received a great deal of criticism at the time, was
really only doing what the president both wanted and insisted upon. When Jimmy Carter became president in 1977, following the Watergate crisis, he promised to be the "un-Nixon." Part of that approach involved demystifying the presidency and opening up the office to greater transparency. In the immediate aftermath of the Watergate scandal and of accusations of an imperial presidency during the Johnson and Nixon years, this new approach seemed to fit perfectly the demands, if not the needs, of the time. Initially that meant not having a chief of staff, but such a goal proved unworkable as government had grown too large and too complex. Not having a chief of staff seemed worse than having an overbearing one. Thus, after a few years, Carter relented and appointed former campaign manager Hamilton Jordan as his first chief of staff in 1979. When Ronald Reagan became president, he experimented quite successfully with a shared center of managerial power, opting for what was known as a troika rather than a single chief of staff. Reagan used Chief of Staff James A. Baker, III as manager-in-chief, Counselor to the President Edwin Meese as chief policy coordinator, and Deputy Chief of Staff Michael Deaver as chief of communications. These three men ran the Reagan White House in a collegial fashion, although Baker often took the managerial lead. This model proved fairly effective and is credited with the smooth running of the executive branch in the first Reagan term. But instead of maintaining this effective managerial arrangement, Reagan made a dramatic shift in the second term. Baker and Treasury Secretary Donald Regan came to the president and asked if they could switch jobs. Baker had his own political ambitions, and Regan wanted to be closer to "the action." Without asking a single question or raising any concerns, the president merely said yes. It proved a monumental mistake. Regan took control of the Reagan White House with an iron fist, alienating friends, insulting the Congress, and infuriating adversaries. The troika was out, and a strong chief of staff was in. But Regan proved an ineffective and often offensive manager, and the Reagan presidency stumbled politically during the second term. Eventually, First Lady Nancy Reagan intervened and insisted that her husband fire Regan. The president did so. Former Senator Howard Baker
was brought in to get the administration back on track following the Iran-contra scandal in 1987 and received high marks for steadying the Reagan presidency politically until the end of Reagan's second term in 1989. The lessons of the Nixon and Reagan presidencies (that is, do not have an overbearing chief of staff) were lost on Reagan's successor, George H. W. Bush. President Bush appointed former New Hampshire Governor John Sununu as his chief of staff. Sununu was seen by some as abrasive, sometimes rude, and even a bit of a bully. His tactics did not sit well with the Congress, and eventually he even alienated his own staff. When ethical questions hounded him, he was forced out, and a "kinder, gentler" chief of staff, Samuel Skinner, was brought in. While the Bush presidency ran more smoothly from that point on, the damage had been done, and the president never really recovered his stride. President Bill Clinton appointed businessman Thomas F. "Mack" McLarty III as his first chief of staff. McLarty, a longtime friend of Clinton's from his boyhood days in Arkansas, was inexperienced in the ways of Washington and suffered from the opposite affliction from the one that afflicted Haldeman, Regan, and Sununu: He seemed too mild-mannered and not enough of an enforcer and gatekeeper for a president who wanted to be involved in every decision made within the White House. As a result, McLarty facilitated many of the dysfunctional traits of President Clinton, who was himself undisciplined and unmanageable. If Nixon was overdisciplined and rigid, Clinton was underdisciplined and too unconcerned about time-management issues. Soon, it became evident that the Clinton White House needed "adult supervision." The president appointed former member of Congress Leon Panetta to the post of chief of staff. Panetta was much more effective in bringing discipline and order to the disorderly Clinton White House and served the president well for several years before leaving the post during Clinton's second term. President George W. Bush appointed Andrew Card as his first chief of staff. Card was a Washington insider, had an unassuming personality, was organized and bright, and had worked in the George H. W. Bush White House. He proved an effective manager
for the younger President Bush. The fact that he attracted little media attention was a tribute to his self-effacing approach to managing the White House. If Card had a major flaw, it was not insisting that the president examine policy from all sides. Card would often "let Bush be Bush," and the inclination of the president was to rely on his ideological predispositions and not examine all the evidence available on major topics, especially evidence that ran counter to those predispositions. This led to serious problems in the prosecution of the war against Iraq and elsewhere. What lessons can we draw from the experiences of the modern presidency and its relationship with the position of White House Chief of Staff? First, it is clear that, given the size of the executive branch and the White House, the president truly does need help, since he cannot manage the job by himself. Thus, presidents need a chief of staff, but not just any chief of staff. The second lesson is that to be effective, a chief of staff cannot be a bully, nor can he or she be a wallflower. A strong, quiet chief of staff seems to work best, someone who knows Washington, has the trust of the president, and can manage the complex organization that is the White House. Third, the chief of staff should be an honest broker for the president, ensuring that the president has all the information he needs to make effective decisions. Most presidents want people around them who make them feel comfortable, but effective presidents know that they need people around them who bring all the information and all the ideas to the table, as uncomfortable as that information might make the president feel at times. This may mean occasionally bringing bad news to the president, even when he does not want to hear it. Fourth, the chief of staff must serve as a guardian or gatekeeper of the president and his valuable time. In this sense, the chief of staff cannot worry about being popular within the president's inner circle, because he or she should be busy worrying about being effective. Fifth, the chief of staff must be a good manager of both people and process, making sure that the running of the White House is smooth and efficient. Finally, the chief of staff must have a high level of political skill. The chief-of-staff position has become embedded in the center of political activity in the White House and in Washington, D.C., generally. In many ways, it has become
perhaps the second-most-important position in Washington. As such, the chief of staff must be a political operator who can deal with members of Congress, the press, the bureaucracy, and the inner circle of the White House. He or she must be a jack of all trades, as well as a master of all trades. In the modern era, as the presidency has swelled as an institution, presidents have increasingly needed help in managing the institutional aspects of the office. An effective chief of staff can be a great benefit to a president often occupied with matters of state and complex policy. If the president can rely on a chief of staff to handle most of the ongoing managerial elements of governing the executive branch, the president may be free to focus on other policy-related activities. Thus, presidents will continue to look to and rely on powerful and important managers as chiefs of staff to operate the baroque office that is the modern presidency. Further Reading Arnold, Peri E. Making the Managerial Presidency. Lawrence: University Press of Kansas, 1998; Burke, John P. The Institutional Presidency. Baltimore, Md.: Johns Hopkins University Press, 1992; Hess, Stephen. Organizing the Presidency. Washington, D.C.: The Brookings Institution, 1988; Pfiffner, James P. The Managerial Presidency. College Station: Texas A&M University Press, 1999. —Michael A. Genovese
civil service system The civil service system refers to the organization and regulations governing employment in government departments or agencies (with the exception of the armed forces). The term applies to the federal, state, county, and municipal levels of government. The purpose of the civil service system is to ensure that only properly qualified people are hired and promoted in government employment. The system also provides standards for training, discipline, and pay scales. Additionally, the system attempts to ensure political neutrality by prohibiting civil servants from engaging in certain political activities. Close to 20 million Americans work in a civil service job. What motivates people to take a civil service job? A "government job" might seem like a tedious,
bureaucratic nightmare to some. Having endless red tape, regulations, and routines seems like a boring way of life. The system is complex and rule bound, and after all, the term public servant is a bit demeaning. So why do people do it? There are many motivations for entering the civil service. Many people want to contribute actively to society, and this is one way of doing it. People like to feel as though they are an important part of something, and they like to feel they are getting things done and helping people. While jobs in government generally do not pay as much as those in the private sector, they generally provide good benefits packages, including pensions. Finally, government jobs have a high degree of job security; unlike private companies, it is highly unlikely that the government will ever go out of business. The development of a civil service in the United States trailed that of its European counterparts by more than a decade. One reason for this was that the United States was a relatively new country with a small territory to govern. By contrast, many European countries ruled vast empires across the globe, necessitating the development of a large bureaucracy for governmental administration. Another reason had to do with antistatist opinion in the United States. The citizens of the former British colonies still had their unfortunate experiences with British governors fresh in their minds and were skeptical of giving any government too much power over the people. Still, the fledgling government of the United States gradually developed a limited bureaucracy to administer its expanding territory. When Andrew Jackson became president in 1829, he institutionalized the idea of the "spoils system" of federal employment (no president prior to Jackson had used it to such an extent). Jackson appointed friends and political supporters to federal jobs, getting rid of employees who had been appointed by earlier administrations, consistent with the notion that "to the victor go the spoils." These jobs were called "patronage" jobs. In one way, the system makes sense because it is democratic: It allows voters to throw out one set of government officials and have them replaced with another. This system continued for the next 50 years. While the system ensured that the president maintained political support among governmental
employees, it also presented some problems. The use of the spoils system meant that many appointees were ill-qualified for their jobs, which contributed to government inefficiency. Additionally, the spoils system promoted the mixing of politics with public administration, which resulted in a tremendous amount of corruption. Government employees used their positions to advance their party's candidates and to hinder others. Government regulators became involved in raising money from companies they regulated, creating conflicts of interest. Public resentment of the spoils system grew. In a classic example of bottom-up policy making, cities began to form civil-service-reform organizations and committees while the federal government dragged its feet. Then, in a defining historical moment, President James Garfield was shot in 1881 by a disappointed office seeker who had not received an expected patronage job. The shooting occurred a few months after Garfield's inauguration, and he died two-and-a-half months later from complications resulting from the wound and its subsequent treatment. This event spurred Congress to pass the Civil Service Reform Act of 1883, also known as the Pendleton Act. This act was designed to eliminate the spoils system and replace it with a system based on merit. It established that certain classes of jobs could not be awarded as patronage jobs. Instead, these jobs would be open to all citizens through a system of open competition that included examinations. The act also protected employees from being fired or demoted for political reasons and from being pressured to undertake political acts. The Civil Service Reform Act placed responsibility for the implementation and administration of the system in the hands of a new Civil Service Commission. The commission was appointed, but the president maintained the authority to designate which jobs fell under the commission's jurisdiction. Originally, only 10 percent of federal jobs were covered by the commission. Today, more than 75 percent are covered; the military and high-ranking administrative posts are not. Since the original Civil Service Reform Act was implemented, Congress has passed several modifications and supplements to it. The Lloyd-La Follette Act of 1912, named in part for the progressive Wisconsin senator Robert La Follette, guaranteed civil servants the right to membership in labor unions. It also strengthened rules regarding the
termination of employment, providing that agencies must inform employees of the grounds for their dismissal and give employees the right to answer the charges against them. Finally, it provided "whistleblower" protection, guaranteeing federal employees the right to communicate with members of Congress. Two important changes in the civil service code occurred during the 1920s. In 1920, Congress passed the Civil Service Retirement Act, which established pensions and survivorship benefits for federal employees. Equal pay for equal work was established in 1923 under the Classification Act. Still, there existed a tremendous amount of concern regarding the potentially corrupting influence of politics in government administration. In 1939, Congress passed the Hatch Act to deal with these problems. The act made it unlawful for federal officials to intimidate or coerce the voting and political activity of employees or of anyone doing business with their agency. It made it illegal for a government official to interfere in an election or party nomination and prohibited the solicitation or receipt of political contributions. Finally, it made it illegal for a government official to belong to a political party that advocates the overthrow of the U.S. government. Subsequent amendments to the act and interpretation by the courts have also made the following activities illegal: wearing political buttons while on duty; engaging in political activity while on duty, in a government office, or in a governmental uniform; and using a governmental vehicle for political activities. Employees may contribute money and be active in politics only on their own time and with their own resources. Finally, federal employees may run for political office only in nonpartisan elections. Many of the restrictions on political activities for federal employees have been extended to state and local government employees. One of the difficulties the federal government faces is recruiting and retaining qualified employees. The problem rests in wages: Many government jobs require skills that could earn an employee much more in private business than in government. This is especially true in management (or "white-collar") jobs. Congress attempted to remedy the situation with the passage of the Classification Act of 1954 and again in 1962 with the Federal Salary
Reform Act. These acts established a number of steps and salary ranges within each employment grade, permitting a possible variance of 30 percent within the salaries of employees classified within the same grade. This made it easier for the government to reward and retain productive employees as well as recruit highly qualified employees. As technology evolved, the world became more complicated and the government bureaucracy grew in size. In 1978, Congress passed the Civil Service Reform Act to keep up with the times. The main purpose of the act was to make the federal government more effective and manageable. The act replaced the Civil Service Commission created under the 1883 Pendleton Act with a new Office of Personnel Management. Additionally, it established a Senior Executive Service composed of the three top employment grades. The act also based pay raises more on performance and merit and less on length of service. Finally, in recognition that government employees working for different agencies could earn different salaries, it established a number of "merit principles" by which agencies should provide "equal pay for work of equal value." The Merit Systems Protection Board was created to handle these concerns. From the 1980s onward, the pace and complexity of life, both private and public, only increased. More measures were needed to accommodate those changes and to deal with the expanding, and sometimes bureaucratic and wasteful, federal government. In 1993, President Bill Clinton established a commission called the National Performance Review, which was headed by then-Vice President Al Gore. The commission released a report whose title described the challenges of contemporary governance: Creating a Government That Works Better and Costs Less. The report advocated a number of principles that would affect the civil service, many of which were borrowed from private-sector management. The first was to treat service recipients as "customers" in an effort to make government employees more responsive to the needs of the public they served. Another central feature of the plan was "empowering employees to get results" by giving more authority to employees. The purpose of this was to put power into the hands of the people who were on the front lines of serving the public and to avoid bureaucratic
micromanagement by higher-ups. Congress granted authority to the president to make these changes. The most recent set of civil service reforms came about as a result of the Homeland Security Act of 2002. The reorganization of the federal bureaucracy that took place under the act gave the agencies involved in homeland security broad latitude in organizing their personnel management toward the goal of protecting the United States from terrorist attacks. The Department of Homeland Security (which initially included 170,000 employees) was exempted from standard civil service laws and procedures. Two of the main changes included limiting the influence of labor unions and rolling back whistleblower protections. The influence of labor unions was curtailed on the grounds that any strike by security workers could endanger the country. Whistleblowers were offered fewer protections because their revelations could potentially compromise the effectiveness of secret security programs. Time will tell whether these rollbacks in civil service protections will be extended to other agencies and departments. In conclusion, the switch from the spoils system to a civil service system based on merit presented many advantages. It opened up federal jobs to the common people instead of reserving them for the social and political elites. Still, there have been both benefits and drawbacks from the switch to the merit-based civil service. Some have argued that the difficulty of firing government employees gives rise to the "Peter Principle," which maintains that employees inevitably end up in jobs for which they are not qualified because they continue to be promoted through the merit system as long as they do a good job. Eventually, however, these employees reach a level of employment in which the duties of the job are beyond their competence. Because of the bureaucratic difficulties in firing or demoting these people, the employees remain in this last position until they retire. The Peter Principle thus ensures that employees are retained in positions for which they are not competent. The ability of government officials to retain their offices, however, does present some distinct advantages. The spoils system was based on the notion that change in government was desirable. However, as the world has become more complex and specialized, it does not make sense to run a system in which
a new set of employees must be retrained every few years. Moreover, the government would be nearly crippled at the outset of every turnover in power as new employees learned their jobs. Instead, it makes sense to run a system that provides incentives to keep people in their jobs so that the institutional memory of the agency, that is, the common mechanisms for dealing with a variety of problems that are built up over years of experience, is not lost. Finally, because government agencies pay comparatively less than private businesses for positions that require specialized skills, it is important to provide employees with a sense of job security so that they are less likely to leave a government job. Further Reading Ban, Carolyn, and Norma M. Riccucci. Public Personnel Management: Current Concerns, Future Challenges. 3rd ed. New York: Longman, 2001; Ingraham, Patricia W. The Foundation of Merit: Public Service in American Democracy. Baltimore, Md.: Johns Hopkins University Press, 1995; Pfiffner, James P., ed. The Future of Merit: Twenty Years after the Civil Service Reform Act. Baltimore, Md.: Johns Hopkins University Press, 2000; Rosenbloom, David H., and Robert S. Kravchuk. Public Administration: Understanding Management, Politics and Law in the Public Sector. 6th ed. New York: McGraw-Hill, 2005. —Todd Belt
coattails In 1848, a young new member of the House of Representatives, Abraham Lincoln, referred to the military coattails of generals-turned-presidents Andrew Jackson and Zachary Taylor. The phrase caught on. Today, the term coattails refers to the ability of the candidate at the head of an electoral ticket, usually the presidential candidate, to attract voters to that ticket and thereby help candidates for lesser offices, usually congressional candidates, win election on the strength of the presidential candidate's drawing power. Sometimes referred to as the "presidential boost," this drawing power can make a presidential candidate both more viable electorally and, in office, better able to persuade members of Congress that the interests and
policies of the new president merit the support of electorally dependent members. Coattails may be politically important because members of Congress elected on a president's coattails may be in the president's debt for their electoral victories. Presidents can then use this leverage to pressure members of Congress to vote for their proposals on the ground that the president is responsible for getting those members elected. Today, presidential coattails are seen as very short and rarely of political consequence. In Congress, incumbents win reelection at more than a 90 percent rate, the presidential candidate often runs behind the member in his or her own district, local candidates have their own fundraising capabilities, and rarely does the candidate at the top of the ticket have a measurable impact on the overall vote in other races. Also, with most members of Congress elected from safe seats, there are very few seats up for grabs, and so there is little room for coattails to take effect. Coattails were thought to be more important in earlier eras, when straight-party voting was more prevalent. But today, with the rise of split-ticket voting and the emergence of a higher number of independent voters, presidential coattails matter less and less. In 1932, during the Great Depression, Franklin D. Roosevelt won election by a wide margin. He also helped bring a large number of new Democrats into the House and the Senate. He leveraged these coattails into congressional support on many key New Deal bills. In 1964, President Lyndon B. Johnson did much the same, leading to the legislative success of the Great Society programs that Johnson sponsored. After his election victory in 1980, President Ronald Reagan was able to persuade many newly elected Republicans that they owed their elections to his coattails and to convert that mandate into key votes on his early agenda. But even if a president wins by a wide margin, there may not be coattails. In 1972, President Richard Nixon won reelection by a huge margin but failed to bring many new Republicans into Congress and thus could not claim a coattail effect. If coattails are a thing of the past, presidents will find it harder to govern. Already limited in the
authoritative and legitimate uses of power, they will be less able to claim a mandate and less able to pressure Congress to vote for the president's legislative agenda. This will make it harder for them to govern effectively. Constitutionally, presidents are rather anemic in power. Most of the real power conferred by the U.S. Constitution goes to the Congress in Article I. Article II of the Constitution, the executive article, gives the president few independent powers and some shared authority (mostly with the Senate). In this sense, the president's power cupboard is somewhat bare. But if presidents face high expectations with limited power, they will inevitably look for ways to boost their authority to meet expectations. Thus, presidents use the bully pulpit, their political party, their executive authority (sometimes referred to as unilateral power), and other extraconstitutional mechanisms to close the expectation/power gap. One of these extraconstitutional means of gaining power is to help one's political party gain seats in elections, boosting the number of party loyalists on whom the president can rely in the legislative arena and thereby passing more legislation than would be possible if, for example, the opposition controlled the majority of seats in Congress. If a president's party controls a majority of seats in the Congress and if the president is credited with helping some of those legislators be elected, he or she is said to have coattails. A president with coattails can "lean on" members of his or her party to gain support in tight legislative battles. If a legislator believes that his or her political fate rises and falls with the popularity of the incumbent president and that his or her election may hinge on that president being successful, the president has leverage and can use that leverage to gain votes. Making the legislator feel dependent on the president can move political mountains. The member of Congress will then work to achieve success for the president because a successful presidency translates into a successful reelection bid for the member, or so they believe. It is a president's job to cultivate this belief and to feed into it. If the president runs ahead of a member in his or her own district, the president may claim that his or her drawing power is what got the member elected. Such a view seems plausible, and
members are seldom willing to risk putting it to the test. Thus, they will "feel" dependent on the president and see their careers linked to the presidential star. When that star is rising, so too is theirs. When that star is descending, theirs falls as well. Accordingly, they will work to make the president's legislative agenda a successful one, both for the president's sake and for their own electoral prospects. When a president runs behind the legislator in the race in his or her district, the member feels no such dependency or gratitude and can, and often will, be more independent of presidential influence and less likely to see his or her electoral fate rise and fall with the fate of the president. These legislators can be more aloof, more independent, and more resistant to presidential appeals for party loyalty or personal loyalty. Such legislators are not immune to influence, but they feel they can be more independent of the president because they do not see their careers on the line at every presidential vote in the Congress. They can more freely defy the president because they do not see his or her success as necessarily important to their own. The incentives for backing a president are weaker, and such legislators may seek out other "rewards" in determining how they will vote (that is, what the constituency wants, what big donors want, what powerful special interest groups want). For the president, such calculations complicate efforts to herd votes in Congress and sometimes require the president to make deals, bargains, and compromises that he or she would rather avoid. In such cases, the president enters the contest in a weaker position than if he or she had had long coattails. As coattails have become shorter and a less frequent element in electoral calculations, they have also become a less significant source of presidential power. This often makes a president more vulnerable and less authoritative in dealings with Congress. For a president already weakened by a limited constitutional base of authority, this can spell trouble as he or she faces a Congress whose members are less beholden to the president for their political survival. To lead, presidents need to enhance their limited constitutional authority with extraconstitutional levers of power. If the presidential coattail is removed, the president is less powerful and less able to lead. See also presidential election.
Further Reading Bond, Jon R., and Richard Fleisher. The President in the Legislative Arena. Chicago: University of Chicago Press, 1990; Davidson, Roger H., and Walter J. Oleszek. Congress and Its Members. Washington, D.C.: Congressional Quarterly Press, 1997; Schneier, Edward V., and Bertram Gross. Legislative Strategy. New York: St. Martin's Press, 1993. —Michael A. Genovese
commander in chief The president of the United States, by virtue of Article II, Section 2 of the U.S. Constitution, is the commander in chief of the armed forces. The article reads, in part, "the President shall be Commander in Chief of the Army and Navy of the United States, and of the Militia of the several States, when called into the actual Service of the United States." What exactly does this mean? Is this a grant of independent power to the president, giving him or her sole authority to command the armed forces and decide on war, or is the key the end of the clause, which brings the commander power to life only "when called into the actual Service of the United States"? And who does the calling up? Because of the awesome power of the U.S. military and because of the superpower status of the United States, the role of commander in chief takes on added importance and merits increased scrutiny. Wielding such power unchecked was not the model envisioned by the framers of the U.S. Constitution. But in the post–World War II era, presidents have exercised this authority, sometimes with but often without the support or approval of the Congress, bolstered by claims that the role of commander in chief gives the president independent authority to shape the foreign policy of the United States as well as to commit U.S. forces to combat without the input of the Congress. This claim of unilateral and nonreviewable power by modern presidents has caused a tremendous amount of controversy and has led to a reevaluation of the role of the president as commander in chief of the armed forces of the United States. Originally, the Congress was granted broad powers over war and foreign policy. After all, Article I of the U.S. Constitution gives to the Congress the sole
power to declare war, raise troops, and determine how the government's money shall be spent. These, as well as other constitutional provisions, give the Congress a key role in foreign affairs and war. But the president also has a claim on foreign policy authority, as evidenced by Article II, wherein the president is made commander in chief, given the authority to receive ambassadors, and charged with heading the departments of government (for example, the Defense and State Departments). In this sense, foreign policy and war are shared responsibilities in which the Congress and the president have overlapping and interrelated roles. As a historical note, only one president has actually led troops as commander in chief during a military conflict. During the War of 1812, President James Madison led troops in the field at Bladensburg, Maryland, as British troops soundly defeated the Madison-led U.S. military. After that, the British advanced on Washington, D.C., and burnt the White House and other public buildings. That was the first and only time a president has led troops into battle as commander in chief. After such an inauspicious start, it is no wonder that other presidents refrained from taking the risk of repeating Madison's effort. Understandably, many presidents have taken the broad view of the commander in chief clause, defining it both as authority to serve as sole commander of the armed forces of the United States and as a grant of power allowing them to decide when the U.S. armed forces shall be called into action. This clause, when joined with the "executive power" clause (a combination sometimes understood as the "coordinate construction" view of presidential power in foreign affairs), gives presidents, as they sometimes claim, an independent power to commit U.S. troops to war without congressional authorization. But there is little historical support for this broad view. Most experts who examine the language of the Constitution, the writings of the framers, and early constitutional and political practice conclude that the commander in chief clause is contingent on the Congress authorizing the president to assume control of the armed forces. That is, they argue that the "when called into the actual Service . . ." requirement means that only the Congress can commit the United States
President George H. W. Bush meets with U.S. troops in the Persian Gulf, 1991. (George Bush Presidential Library)
to military action. It should not surprise us, however, that most presidents choose to read into the Constitution a much broader interpretation of presidential authority in war and foreign policy making. U.S. Supreme Court Associate Justice Robert H. Jackson, in Youngstown Sheet & Tube Co. v. Sawyer (1952), warned of the "sinister and alarming" use by presidents of the commander in chief clause to assume the "power to do anything, anywhere, that can be done with an army or navy." Jackson's warning should not be lost on a modern audience confronted with claims by presidents that they possess independent war-making authority. This warning may be especially relevant in a war against terrorism that the public is told will never end.
The framers of the Constitution were men deeply concerned with the arbitrary power of the king against whom they had recently fought a revolution. The king, they remembered, could lead the country into war for reasons large and small, important or personal. To "chain the dogs of war," the framers were determined to separate the power to declare war from the power to conduct war. The former they gave to the Congress in Article I, the latter to the president in Article II. By separating the power to declare war from the power to conduct it, the framers hoped to stem the tide of executive warfare and to make the government's decision for war constitutional and limited. This separation, which John Locke warned in The Second Treatise of Government (1690) could only
bring "disorder and ruin," was nonetheless precisely what the framers sought. They saw the disorder and ruin in the king's ability to make war on his decision, and his decision alone. The framers sought to imbue the war-declaring power with a democratic responsibility and thus gave this power to the representatives of the people, the Congress. But when a war was to be fought, a central authority, the president, was needed to conduct the war. Alexander Hamilton recognized this in Federalist 74 when he wrote that the "direction of war most peculiarly demands those qualities which distinguish the exercise of power by a single hand." But he also recognized in Federalist 75 that "The history of human conduct does not warrant that exalted opinion of human virtue which would make it wise in a nation to commit interests of so delicate and momentous a kind, as those which concern its intercourse with the rest of the world, to the sole disposal of . . . a President." James Madison warned in a 1798 letter to Thomas Jefferson that "The constitution supposes, what the History of all Govts demonstrates, that the Ex. is the branch of power most interested in war, & most prone to it. It has accordingly with studied care, vested the question of war in the Legisl." It was not until the post–World War II era that presidents began to claim independent war-making authority. Prior to that time, even those presidents who stretched the limits of the Constitution at least paid lip service to its restraints and to the superior authority to declare war vested in the Congress. But with the advent of the cold war and the rise of the United States to superpower status, presidents began to exert a bolder, more independent brand of leadership. Their claims of independent foreign and war-making authority grew, and their deference in word and deed to the Constitution diminished. The real break came during the presidency of Harry S. Truman. The Truman administration first claimed an independent authority to commit U.S. troops to war during the Korean conflict. That the Congress did not fight for its war-making authority may strike some as unusual, but in the crisis-ridden days of the early cold war, to stand up to a president intent on war might have appeared weak or soft on communism, and few had the political or personal courage to defend the constitutional
prerogatives of Congress or to question a president intent on fighting communism. During the long and unpopular war in Vietnam, the Congress, after giving President Lyndon B. Johnson a blank check in the form of the 1964 Tonkin Gulf Resolution, tried to reclaim some of its lost and delegated war-making authority by passing the War Powers Resolution of 1973. Designed to force the president to consult with and engage Congress in decisions relating to the introduction of U.S. troops into potentially hostile situations, the resolution has not had its intended effect. Presidents often have interpreted the resolution very narrowly and have understood "consult" to mean "inform." They have often adhered to the strictest and narrowest letter of the resolution, but they have only rarely consulted with Congress prior to making war-related decisions. In this sense, the War Powers Resolution has not limited presidents but in many ways has actually liberated them from the constitutional limits imposed on them. After all, the War Powers Resolution gives presidents 60 days to conduct military operations without congressional authorization, plus up to an additional 30 days to withdraw forces. While the Congress may force the president to withdraw troops, the practical reality is that it is hard to imagine a Congress defying the will of a president while U.S. men and women are fighting and dying in some far-off place around the globe. This has led many experts on war and the Constitution to argue that the War Powers Resolution should be repealed and that the United States should return to the constitutionally prescribed process for deciding on war. It was in the post–September 11, 2001, era of the war against terrorism that executive claims of independent commander in chief authority took yet another leap. Not only did members of the George W. Bush administration claim independent war-making authority, but they further claimed that their acts were "nonreviewable" by the Congress or the courts. Perhaps surprisingly, this bold and constitutionally groundless claim met with virtually no opposition from the Congress, and the war against terrorism trumped constitutional integrity. Congress gave in to the president as commander in chief and again allowed its constitutional authority to drain away to the executive branch. Thus, the commander in chief clause of the Constitution has come
full circle: from the arbitrary and independent power of one man, the king, to presidents more than 200 years later who claim, and Congresses that allow, nearly independent presidential decision making in matters of war. If the framers of the U.S. Constitution wanted to chain the dog of war, the political climate in an age of terrorism has chosen a different path: presidential primacy in war and foreign affairs. The commander in chief clause of the Constitution is now largely and erroneously understood as granting to the president the primary authority to determine questions of war and peace. We have come a long way, both constitutionally and politically, from a revolution against the arbitrary power of a king to start wars to the embrace of presidential primacy in decisions relating to war and peace. See also foreign-policy power; war powers. Further Reading Adler, David Gray. "The Constitution and Presidential Warmaking: The Enduring Debate." Political Science Quarterly 103 (1988): 1–36; Keynes, Edward. Undeclared War: Twilight Zone of Constitutional Power. University Park, Pa.: Pennsylvania State University Press, 1982; Schlesinger, Arthur M., Jr. The Imperial Presidency. Boston: Houghton Mifflin, 1973; Wormuth, Francis D., and Edwin B. Firmage. To Chain the Dog of War: The War Power of Congress in History and Law. Dallas, Tex.: Southern Methodist University Press, 1986. —Michael A. Genovese
Council of Economic Advisors
In trying to cope with the Great Depression, President Franklin D. Roosevelt had to rely on advice from an informally recruited group of academic advisers known as the "Brain Trust." There was no group of advisers with expertise in economics on whom Roosevelt could call for assistance. The Council of Economic Advisors (CEA) was established by the Employment Act of 1946, which was a direct consequence of the Great Depression when one-fourth of the workforce was unemployed. Unemployment virtually ended with the advent of World War II, but afterward, fear of renewed unemployment during peacetime led President Harry S. Truman to request that Congress enact "full employment" legislation authorizing the
federal government to take responsibility for the nation’s economic well-being. But the business community, Republicans, and political conservatives were opposed to government “planning” of the economy, so what emerged from Congress was a more “symbolic” act that established the CEA, also the Joint Economic Committee of Congress, and declared desired economic goals (such as maximum employment and price stability). The 1946 act stipulated five responsibilities for the CEA. First, it assists in the preparation of the Economic Report of the president, which is presented to Congress every year. Second, it gathers “timely and authoritative information concerning economic developments and economic trends” and analyzes that information toward achieving the economic goals stated in the law. Third, it appraises “the various programs and activities” of the federal government to determine whether they are or are not contributing to the achievement of the stated economic goals. Fourth, the CEA is required “to develop and recommend” to the president “national economic policies to foster and promote free competitive enterprise, to avoid economic fluctuations . . . and to maintain employment, production, and purchasing power.” Fifth, it provides studies, reports, and recommendations on “economic policy and legislation as the President may request.” There are three members on the CEA appointed by the president and confirmed by the Senate. Originally the CEA operated as a collegial body, but Reorganization Plan No. 9 of 1953, under President Dwight D. Eisenhower, elevated the chairperson to be the operating head of the CEA. The chairperson speaks for the CEA, gives testimony to Congress, and hires the permanent staff. The CEA is supported by a small professional staff of 20. Ten are economists, usually professors on one- or two-year leaves from their universities, who are the senior staff, and 10 more junior staffers are either graduate students in economics or economic statisticians. The CEA is an unusual agency within the federal bureaucracy given this “in-and-out” mode of recruitment, but the temporary tenure of the CEA and its staff serves to keep it attuned to the latest theories and norms of the economics profession. With few exceptions, CEA members are academic economists, and most are recruited from a
relatively small number of elite universities. Since 1946, the largest number of CEA members received their Ph.D. degrees from the University of Chicago, Harvard University, or the Massachusetts Institute of Technology (M.I.T.). Economists from the University of Chicago have almost always been appointed by Republican presidents, whereas Harvard and M.I.T. graduates usually were recruited by Democratic presidents. Five CEA members appointed by President Ronald Reagan had University of Chicago degrees, whereas the first three CEA appointments by President Bill Clinton had M.I.T. degrees. The CEA is considered to be a professional and nonpartisan agency. It serves as an in-house research organization, providing economic analysis, and its small size means that the CEA can respond quickly when faced with new issues. Rarely is the CEA assigned operational responsibilities for implementing policy, though presidents have utilized the CEA on interagency committees or working groups to assure that decision makers receive an economic perspective. The CEA also makes recommendations on the various tax, budgetary, and regulatory proposals that are generated within the departments and agencies of the federal government. Thus, the CEA exemplifies the "neutral-competent" agency since it can give the president unvarnished economic advice without having to "represent" any constituency or interest group, other than its professional relationship to the discipline of economics. As professional economists, CEA members, regardless of party or ideology, share wide agreement on "microeconomics" (how firms and households make decisions) since they value free markets and competition. But the differing patterns of recruitment signal that there are different approaches to how best to guide the macroeconomy. Historically, liberal economists were influenced by the British economist John Maynard Keynes, who argued that taxes and budgets should be manipulated in hopes of spurring economic growth and preventing recessions. This Keynesian viewpoint had been embraced by economists at Harvard and M.I.T. (and other Ivy League universities), but a dissenting position was argued by conservative economist Milton Friedman, who taught for many years at the University of Chicago. The University of Chicago became a stronghold of the "monetarist" approach, with its emphasis on the money supply as the key
policy instrument. Not only can CEA members disagree about which policy instruments to employ, such as taxes and spending or monetary policy, but their macroeconomic objectives also may differ. Interviews of CEA chairmen by Professors Hargrove and Morley indicated that a Democratic-appointed CEA generally focused on fighting unemployment, whereas Republican-appointed Councils emphasized fighting inflation. Presidents want economic advisers who are philosophically compatible. President Truman replaced the first CEA chairman, Edwin G. Nourse, with the more liberal Leon H. Keyserling. President Eisenhower was comfortable with the cautious approach to economic policy of Arthur Burns, just as Alan Greenspan supported the moderately conservative views of President Gerald R. Ford. The high point of political compatibility occurred during the 1960s when Presidents John F. Kennedy and Lyndon B. Johnson appointed liberal Keynesian economists to the CEA. On the other hand, President Reagan chose Beryl Sprinkel (a business economist with Harris Trust and Savings Bank of Chicago) because Sprinkel was a known advocate of “supply-side” economics, which argued for cutting federal taxes. The Council of Economic Advisors, the secretary of the Treasury, and the director of the Office of Management and Budget are the so-called “Troika” that informally meet to discuss economic issues. Sometimes, they are joined by the chairman of the Federal Reserve Board. That arrangement was utilized by President Kennedy and continued by President Reagan, but more recently, presidents have sought to formalize economic decision making. Since there are other agencies that affect economic policy (for example the United States Trade Representative) and because the domestic economy is so interconnected with the global economy, in 1993 President Clinton established the National Economic Council (NEC), which continued under President George W. Bush. Its stated purposes are (1) to coordinate policy making for domestic and international economic issues, (2) to coordinate economic policy advice for the president, (3) to ensure that policy decisions and programs are consistent with the president’s economic goals, and (4) to monitor implementation of the president’s economic policy agenda. Since the director of the NEC is also the assistant to the president for
Economic Policy, the chairman of the Council of Economic Advisors must develop a working relationship with this new office. Whether the CEA will lose influence under this organizational arrangement is an open question, and, clearly, competition or cooperation may characterize the relationship between the CEA and the NEC. It is possible that the CEA chairperson will focus his or her energies on economic analysis, whereas the NEC director, or others (such as the Treasury secretary), will act as the public voice for the administration in economic policy. See also executive agencies. Further Reading Bailey, Stephen K. Congress Makes a Law. New York: Columbia University Press, 1950; Dolan, Chris J. "Economic Policy and Decision-Making at the Intersection of Domestic and International Politics," Policy Studies Journal 31 (2003): 209–236; Feldstein, Martin. "The Council of Economic Advisers and Economic Advising in the United States," The Economic Journal 102 (September 1992); Hargrove, Erwin C., and Samuel A. Morley, eds. The President and the Council of Economic Advisers. Boulder, Colo.: Westview Press, 1984; Norton, Hugh S. The Employment Act and the Council of Economic Advisers, 1946–1976. Columbia: University of South Carolina Press, 1977; Stein, Herbert. Presidential Economics: The Making of Economic Policy from Roosevelt to Reagan and Beyond. New York: Simon and Schuster, 1984; Tobin, James, and Murray Weidenbaum. Two Revolutions in Economic Policy: The First Economic Reports of Presidents Kennedy and Reagan. Cambridge, Mass.: M.I.T. Press, 1988. —Raymond Tatalovich
cozy triangles
Sometimes referred to as iron triangles or subgovernments, the term cozy triangles refers to the three-pronged connection, usually between government agencies or departments, client or interest groups, and the congressional committees or subcommittees with jurisdiction over the areas affecting that particular public-policy issue. Conceptualized as a triangle that connects these three key actors or groups in the policy process, the members of the triangle can sometimes gain a stranglehold on policy that is difficult to
break and that gives certain advantages to those who are fortunate enough to be in and control the triangle. Such triangles serve the narrow interests of the parties involved in the relationship but also serve to undermine democratic control of the policy process. Many policy experts argue that the existence of these cozy triangular relationships makes it difficult to impose rationality on the policy-making process, and they thwart change—even much needed change in the policy arena. While there is no one single set model for all such triangles, they do share several common characteristics and function in fairly similar ways. This is due to the closed nature of the relationship, with each prong in the three-cornered system playing a key role and relationship to the other corners of the triangle. The congressional committee decides policy and budget allocations that affect the client group; the client group has self-interest at stake in agency regulations and congressional policy relating to its domain; the committee relies on the client group for campaign donations, and the agency wants to promote its interests and protect its domain. It is a cozy and closed triangle of mutually supporting activities, and it is very difficult for an outsider (such as a new president) to penetrate this mutual benefit relationship. Thus, these cozy triangles are triangles because they are three sided; they are cozy because as the saying goes, “politics makes strange bedfellows,” and the cozier the relationship, the more benefits can be generated to each side of the triangle. It is a relationship that greatly benefits each side of this triangle, and attempts to intrude on, change, or break up the close and cozy nature of the relationships are resisted with a ferocity that can be astonishing to the outsider and painful for those that try to intervene. Such cozy triangles impede development, stunt industry growth, keep prices high for consumers, exasperate potential rivals, and undermine free markets. They continue because they benefit the members of the triangle, and as the Congress makes the laws, as long as key members of Congress benefit—and they do— it is unlikely that these cozy triangles will soon be broken. But at times they are broken. How and why? Usually, significant social disruptions (such as the depression and the New Deal that followed it), major social and economic changes (for example, the rise of indus-
trialization), the emergence of new industries to replace old ones (for example, the emergence of the personal computer), or other technological changes (for example, the invention of radio or television) can so disrupt the old order that a new order emerges. But often, these changes merely lead to new cozy triangles emerging with new players at the ends of the triangles. In a way, the more things change, the more they stay the same: the players may change, but the cozy triangles go on and on (for some very understandable economic reasons). The logic behind the cozy triangle is similar to that of any cartel—like the old saying, “you scratch my back I’ll scratch yours.” It is based on the belief that if you can keep out any rivals, you can corner and control the market. This allows the cartel to be the prime or only seller of goods and/or services, and the consumer is thus forced to pay not what the free and open market will demand, but the price the cartel sets (which, of course, is always higher than the market). It is estimated that the cozy triangles in U.S. markets alone cost the consumer billions of dollars in extra costs. It is thus a closed system and for the insiders, a mutual benefit society. It is important for all sides of the triangle to maintain their positions as each gains important benefits from membership in the triangle. It is likewise difficult to break into or break up the triangle as all sides have a strong vested interest in its preservation. In many policy areas, these triangles dominate policy and are as strong as iron. Mutual dependency and self-interest keep these triangles strong and together. John W. Gardner, former cabinet secretary and the founder of the public interest group Common Cause, in testifying before the Senate Government Operations Committee in 1971, described the cozy triangle in these terms: “. . . questions of public policy . . . are often decided by a trinity consisting of (1) representatives of an outside body, (2) middle level bureaucrats, and (3) selected members of Congress, particularly those concerned with appropriations. In a given field, these people may have collaborated for years. They have a durable alliance that cranks out legislation and appropriations on behalf of their special interests. Participants in such durable alliances do not want the department secretaries strengthened. The outside special interests are particularly resistant to such change. It took them years to dig their particular
tunnel into the public vault, and they don’t want the vaults moved.” This three-way policy alliance controls many policy areas such as sugar quotas, milk prices, and weapons contracting. They work in tandem to gain mutual benefits. Each corner of the triangle works to protect its own interests while also supporting the other two corners of the triangle. External events, new technologies, changes in markets, or old cleavages can sometimes break up cozy triangles, and either fluidity characterizes the policy area or a new triangle emerges. These cozy triangles can be very frustrating for a new president who comes to office hoping and expecting to have an impact on public policy. After all, the president is constitutionally the chief executive officer of the nation. By law, he or she is to submit to the Congress a proposed federal budget. He or she also, usually in the State of the Union Address to Congress, has an agenda that he or she wishes to see implemented. This agenda takes shape in the form of legislative proposals for action. But if the cozy triangle has a stranglehold on policy in a particular area, just how open to change can one expect it to be? Clearly such a triangle will be resistant to efforts at presidential penetration. Thus, presidents often have a difficult, sometimes an impossible time, having an impact in certain public policy areas. For this reason, presidents will often try to go around the congressional process and use government regulations, executive orders, and other forms of unilateral authority to impose the presidential will in a particular policy arena. Such efforts at unilateral power are often effective, at least for a time, but in the long run, cozy triangles have a way of utterly frustrating presidential efforts at penetration. For a president to have a significant impact in a policy domain controlled by a cozy triangle, she or he must use valuable time and limited political resources (that is to say, spend her or his political capital) to cajole, coax, pressure, persuade, and otherwise compel, the different elements within the triangle to respond favorably to her or his initiatives. Such effort comes at a high cost, and presidents are rarely willing to pay such a price—at least not very often. There are just too many other, more pressing areas in which to spend their valuable and limited political capital. The result, in nearly all cases, is that the cozy triangle can outlast
and often outsmart the president, who soon becomes frustrated and moves on to potentially greener political pastures. After all, no president wishes to waste resources and opportunities, and most will soon see the futility in trying to make inroads into this closed system. The existence of iron or cozy triangles undermines the ability of government to make good policy, as these triangles possess a stranglehold over policy and efforts to penetrate them usually fail. Thus, a self-interested segment of society can control areas of public policy, and outside interests are usually helpless to intercede. While these cozy triangles are most visible and powerful at the federal level, they can and do exist at the state level as well. Where resources are great and the power is accessible, it should come as no surprise that different groups try to corner markets and close out competition. Cozy triangles are an irresistible way to do just that. See also legislative branch; legislative process; lobbying. Further Reading Grossman, Gene M., and Elhanan Helpman. Interest Groups and Trade Policy. Princeton, N.J.: Princeton University Press, 2002; Hunter, Kennith G. Interest Groups and State Economic Development Policies. Westport, Conn.: Praeger Publishers, 1999; Wilson, Graham K. Interest Groups in the United States. New York: Oxford University Press, 1981. —Michael A. Genovese
debates
Political debates are critical components of the discourse in modern political life. Since the time of the ancient Greeks and Romans, democratic (and even nondemocratic) societies have practiced the art of debate to advocate positions and persuade fellow citizens. More recently, these structured events allow voters and the media to view, analyze, and select the most appropriate candidate to win the election. Therefore, these events often serve as a "winnowing" process where decision makers (voters and the news media) make judgments about which candidate they most favor. Perhaps the most important function of political debating is as a "primer" of political issues.
Candidates may seek to select one or two issues and "prime" the audience to believe that those are the most critical issues in the campaign. However, the declining trend in substance, frequency, and viewership in recent years has called into question the importance of political debates as a form of political communication. Political debates are most visible at the presidential level; however, before televised debates, few mass public presidential debates occurred (these events were formerly smaller-venue events, often for partisan crowds). Prior to the "great debate" of 1960, when Republican candidate Richard Nixon and Democratic candidate John F. Kennedy held the first televised debate, there were no mass organized general-election debates, although there were a few primary election debates that were not televised. However, after the debates of 1960, there were no formal presidential general-election debates held until 1976. This gap in debating is argued to exist for many reasons, primarily related to the equal time provisions set forth by the Communications Act of 1934 (which left candidates discontented and created dissatisfaction among organizations that wanted to sponsor the debates). Since their inception in 1976 (and their roots in the "great debate" of 1960), there have been several stylistic and structural changes to presidential debates. In time, political debates have become more about style than substance. Certainly television is a major culprit in furthering image-based debating, where spin, sound bites, and clever phrasings of complex issues tend to dominate political discourse instead of in-depth discussions of the issues of the day. Part of the cause of this trend toward image-based debating is the internal debate rules set by each side. In recent presidential debates, each side has sought to make the time to speak and respond shorter, to encourage less interaction between candidates, and to limit time for questions. These limited formats have also encouraged more campaign "sloganeering" and personal attacks as strategies in debating. One important development in 1992 was the emergence of the "town-hall" format, where carefully selected noncommitted voters are chosen to attend the debate and ask questions of the candidates. This format arrests the trends of declining interaction and speaking time, limits the use of personal attacks, and allows for more participation by the voters.
George W. Bush and John Kerry in a final presidential debate (Joe Readle / Getty Images)
There are several issues concerning presidential debates, including the important question of whether or not debates affect election outcomes. The potential significance of presidential debates is great in the United States, as they are broadcast widely through several high-profile and mass-communication technologies, such as television and the Internet. Unfortunately, studies have shown that many people do not listen to or watch the debates, especially presidential debates in prime time, although many informed voters will catch an update or a briefing on them from a news source at another time. This may underscore the public perception of debates as "performance" and not important to political decision making. However, these debates are potentially important strategies for lesser-known candidates who can use nationally televised debates to increase their public exposure.
For those who do argue that political debates affect election outcomes, the grounds on which debates affect these outcomes are debatable. As noted, in 1960, the first televised presidential debates occurred between John F. Kennedy and Richard Nixon. People often argue that Kennedy won the presidency because he looked rested and tan during the debates. On the other hand, people argue that for those who listened to the radio and did not watch the television debates, Nixon was more persuasive, and his ideas resonated with the public more strongly. There are many ways to look at this one particular debate, and it is difficult to prove whether Kennedy's looks or Nixon's vocal argument had a profound impact on the election. The real importance of this first presidential debate is that it ushered in an era of televised campaigning and emphasized that style is as
important (if not more important) than the content of one's arguments. As argued, it is important to recognize the effect of presidential debates because citizens may base their vote on any number of things that occur within a debate. People may base their vote on how good the candidate looked, how well he or she sounded, or how well he or she answered the opponent's questions. As a result of these ambiguities, it is often quite difficult to conclude what type of effect debates have on an election. While many people argue that what matters is how a candidate presents himself or herself, others find that it is how the candidate discusses the relevant political issues. Thus, it is important to understand that while debates may have an effect on an election, that effect can be different for each voter. It is for these reasons that presidential campaigns put so much emphasis on meticulously preparing for these debates by practicing with opponents, conducting focus groups to test arguments, and arguing for debate rules that will favor their candidate's strong suits (such as the length of address or the questioning of opponents). To centralize and organize presidential debates in the United States, the Commission on Presidential Debates was established in 1987 to ensure the sponsorship and production of presidential debates as a permanent part of presidential elections. Both the presidential and the vice presidential candidates participate in these televised debates, ensuring public awareness of the major candidates' issues. The Web site (www.debates.org) set up by the Commission on Presidential Debates offers transcripts and streaming videos of all recorded debates, which allows for research and comparative analysis. One of the most important aspects of presidential debates is the ability to access past footage to compare and contrast the debates. For example, in watching videos from the "Great Debate" of 1960 in comparison with the presidential debates of 2004, it can be seen that the style and debating techniques of the candidates are quite different. There are many reasons for this, such as newer technologies, more polling and focus group assistance in crafting political messages, higher political stakes in each debate, and more focus put on presidential debates than in the past. Several presidential debates presented interesting (and entertaining) moments that highlighted the personalities and issues of the candidates, and often, these
witticisms and gaffes affected public perception of the candidates. For example, in 1976, President Gerald Ford argued that "there is no Soviet domination of Eastern Europe," a claim that was not accurate and implied that President Ford was not prepared to be president in his own right after taking over for President Richard Nixon in 1974. In 1984, President Ronald Reagan, running for reelection at the age of 73, noted to former Vice President Walter Mondale that he would not make age an issue in the election by "exploit[ing] his opponent's youth and inexperience." This quip may have defused concerns about President Reagan's age as a detriment to his potential reelection. In 1988, Governor Michael Dukakis was posed a question by CNN's Bernard Shaw about his feelings toward the death penalty in the event of the hypothetical rape and murder of his wife. Governor Dukakis responded almost emotionlessly about his opposition to the death penalty, rather than discussing his feelings toward the events, signaling to many that Dukakis was too unengaged to be an effective president. The most recent presidential debates occurred in 2004, when presidential candidates Senator John Kerry and President George W. Bush deliberated several times and talked about many important concerns, including the status of Social Security, the war in Iraq, and homeland security (often discussed in connection with the tragedy of September 11, 2001). Because the outcome of the 2004 election would affect both U.S. citizens and many others around the world, much of these debates focused on the war in Iraq and national security concerns. As it is truly difficult to determine if there is a "winner" or a "loser" of these debates, people still argue over who they think was the better debater. Polling results after the first debate suggested that John Kerry had won, while polling suggested that President Bush won the second and third debates. Further Reading Corrado, Anthony. Let America Decide: The Report of the Twentieth Century Fund Task Force on Presidential Debates. New York: The Century Foundation, 1996; Farah, George. No Debate: How the Two Major Parties Secretly Ruin the Presidential Debates. New York: Seven Stories Press, 2004;
Friedenberg, Robert. Rhetorical Studies of National Political Debates: 1960–1992. 2nd ed. Westport, Conn.: Praeger Publishing, 1993; Jamieson, Kathleen Hall, and David S. Birdsell. Presidential Debates: The Challenge of Creating an Informed Electorate. New York: Oxford University Press, 1990. —Jill Dawson and Brandon Rottinghaus
Department of Agriculture
The U.S. Department of Agriculture (USDA) is responsible for assisting farmers; providing food to the undernourished and poor; maintaining the nation's forests and rural areas; regulating and inspecting the safety of meat, poultry, and eggs; marketing agricultural products domestically and abroad; and researching agriculture and food-related topics. Originally designed to provide seed and plants to U.S. farmers as part of the Office of the Commissioner of Patents, the U.S. Department of Agriculture was established in 1862 and achieved cabinet-level status on February 9, 1889. Twenty-seven secretaries of agriculture, who have been appointed by the president and confirmed by the U.S. Senate, have headed the Department. With 109,832 employees and a budget of more than $94.2 billion (about $22 billion of which is discretionary) in total 2003 outlays, it ranks sixth in size and third in budget in comparison with all other federal departments for fiscal year 2003. For much of U.S. history, agriculture has played a prominent role in the U.S. economy. With more than half of the people relying on farming to make a living in 1862, the USDA assisted farmers as people traveled westward and settled the rich farming lands of the Great Plains. When once-fertile homesteading lands were depleted of the nutrients that were necessary for farming, the USDA developed scientific techniques to boost agricultural production and farmer incomes. Even though the percentage of citizens who lived on farms declined to about 35 percent in 1910, the responsibilities of the department grew to help farmers feed a growing population of city dwellers. The Department of Agriculture continued to expand during the Great Depression as a function, in part, of falling market prices and surplus stockpiles of grain, corn, and milk that depressed farm commodity prices, and passage of the Agricultural Adjustment Act of 1933. Congress and the president have reorganized
the department several times. During World War II (1939–45), many bureaus split to form the War Food Administration. Afterward, these bureaus were reorganized again and returned to their previous locations in the larger department. The department's mission is broadly defined, with a focus on farming, agriculture, and food. In 2006, the U.S. Department of Agriculture identified seven mission areas: food and nutrition; food safety; marketing and regulation; natural resources; rural development; research, education, and economics; and farm and foreign agriculture. Numerous bureaus, agencies, and services implement specific aspects of these broad missions. Some mission areas even require coordination with other departments, agencies, and branches of government for successful implementation. Although each mission area will be discussed more broadly, it is the farm and foreign agriculture portion of the USDA to which this essay devotes substantial attention, as it has typically attracted the most interest from scholars. Through Food and Nutrition Services, the USDA attempts to alleviate hunger and promote good nutrition. The Center for Nutrition Policy and Promotion, for example, develops dietary guidelines (in conjunction with the Department of Health and Human Services)—as illustrated by the food pyramid—for the public. The Food and Nutrition Service administers food programs for low-income people, including food stamps, nutrition assistance for women, infants, and children, and the National School Lunch Program. These departmental units engage in extensive marketing efforts through public-service announcements on television, informational fliers, and other strategies to inform the public about their services. Food Safety and Inspection Services (FSIS) oversees commercial food production, labeling, and packaging to ensure that the U.S. egg, meat, and poultry supply is safe and labeled properly. FSIS inspects animals before and after slaughter and checks numerous processed meat, poultry, and egg products. It sets standards for plant and packaging facilities and issues recalls for products that fail inspection after they have been distributed to retailers and consumers. Marketing and Regulatory Programs consist of two primary units: the Grain Inspection, Packers and Stockyards Administration (GIPSA) and the Animal and Plant Health Inspection Service (APHIS).
The Grain Inspection, Packers and Stockyards Administration markets farm products—including livestock, poultry, meat, and cereals—both domestically and internationally. It is also an advocate for the fair and competitive trade of these products for the benefit of the U.S. farmer (by ensuring fair prices for U.S. agricultural products) and the U.S. consumer (to help keep food prices affordable to most Americans). APHIS "protects American agriculture" by ensuring the health and care of animals and plants. It includes more specific units to execute its mission. Veterinary Services protects animals, for example, while Plant Protection and Quarantine regulates the transportation and spread of plants to help maintain existing ecosystems and existing plant diversity. Natural Resources and Environment, including the Forest Service and Natural Resources Conservation Service, manages and maintains the nation's public and private lands. The Forest Service is charged with managing the nation's public lands. This includes providing camping and recreation facilities for citizens and logging permits for lumber companies. Natural Resources Conservation Service helps private landowners, such as farmers and ranchers, manage and conserve their own natural resources. The focus of Rural Development is to improve the economy and quality of life in rural America. It does so by helping rural citizens and communities obtain the financial and technical assistance they need to develop public water and sewer facilities, health-care clinics, and other emergency services. Rural Development also provides loans to businesses to promote economic growth. Research, Education, and Economics provides countless data for farmers and researchers of agriculture alike. The National Agriculture Statistics Service surveys farmers directly on a yearly basis for demographic, crop, economic, and environmental information concerning U.S. farms. It also conducts the Census of Agriculture every five years. The Agriculture Research Service and Economic Research Service provide specific data and information on agriculture and economic issues that concern U.S. farms. Domestic and Foreign Agriculture comprises the second-largest mission area with a budget of nearly $45 billion (fiscal year 2005), but it is in some ways the most central to the mission of the U.S. Department of Agriculture.
(Food and Nutrition Services ranks first with a budget slightly more than $51 billion.) This unit—comprised of the Farm Service Agency, Foreign Agriculture Service, and Risk Management Agency—is designed to help farmers weather the uncertainties of farming and agriculture. Primarily, it allocates federal funds to augment and stabilize farm incomes in the face of natural disasters and unfavorable market shocks. Until the 1990s, the federal government used some form of price support to increase farm income when commodity prices were low. Initially, the USDA calculated price supports according to a percentage of parity set by Congress, with responsibility for determining and distributing price-support payments lying with the secretary of agriculture. Although the specifics of price supports (such as payments at what percent of parity, target prices, set-asides, or fixed payments) have changed, the basic process of government payments to farmers has remained the same. Over time, the USDA has adapted to farm crises by adjusting its focus of assistance. The Great Depression stimulated modern farm policy with the Agricultural Adjustment Act (AAA) in May 1933, which stabilized commodity prices and reinvigorated the U.S. farmer. The Agricultural Adjustment Act of 1938—a response to United States v. Butler (297 U.S. 1 [1936]), which declared parts of the original AAA unconstitutional—established price supports to supplement farmer income further. Farmers received price supports so long as they abided by planting restrictions set by the secretary of agriculture in accordance with federal law. The 1938 Act established the basic price and production control system under statute for the next 50 years. Subsequent agriculture acts passed in 1954 and 1958 also continued to support prices on grain and other commodities. In response to commodity surpluses created by price support policies, the John F. Kennedy administration began such food distribution programs as food stamps, school lunches, and Food for Peace, and it encouraged farmers to export surplus farm commodities. Since the 1970s, Congress has pushed farmers to be more market oriented, with a safety net of government assistance through target prices, deficiency payments, and export assistance programs. Naturally, as world markets changed (with an increase in developing-nation food production), U.S. commodity
exports fell in the 1980s, and the USDA reestablished traditional price supports on major commodities. Congress, once again, attempted to wean farmers off of federal support payments and encourage marketbased agriculture with the so-called Freedom to Farm Act, passed in 1996. As agricultural markets changed once again, the 107th Congress effectively overturned the 1996 act and reinstated price supports for most commodities. Despite its continued relevance to the U.S. farmer, there is reason to question the benefits to farmers of commodity price and other assistance programs. Yet, efforts to place farm commodities in an open market with the so-called Freedom to Farm Act (1996) actually increased price-support-related expenditures but did not actually wean farmers off of traditional price supports as intended because price supports were reinstated in the 2002 version of the farm bill. Although price supports tend to boost farm income in the short term, political and fiscal constraints limit their consistent success. At times, the success of U.S. agriculture policy has been held hostage to an international agriculture economy that dictates much lower market prices than what price supports can offer. Agriculture policy also operates in a subgovernment. This means that congressional committee members adopt policy in conjunction with agriculture agencies and the farm lobby. Subgovernments, by definition, rely on members of congressional committees and subcommittees, outside interest groups, and the agency itself to promote the adoption and implementation of farm policy. The agriculture subgovernment was most cohesive during the 1930s and 1940s at the foundation of U.S. farm policy. Through the 1960s, the farm lobby maintained a monopoly on policies that affected farmers. As rural populations continued to decline, however, farmers witnessed a persistent decline in the level of federal assistance from payments based on parity to deficiency payments and further to other near-market solutions. Simply, a decline in the U.S. farm population and the subsequent decrease in rural representation relieved Congress of some of its farm-constituency representation and allowed for decreases in benefits to farmers. Even as the farm population has declined to less than 2 percent of the total U.S. population, nonetheless, farmers still benefit from significant federal subsidies
in the form of price supports and disaster assistance payments, due in part to the subgovernment nature of farm policy and the continued influence of interest groups and key legislators from agricultural states. Organizationally, the program is administered top-down, as the secretary of Agriculture has had discretion in distributing payments to farmers since 1949. Nevertheless, the secretary is not in a position to dictate all components of the farm program. For instance, the secretary sets payment levels and production controls, but they must be consistent with federal agriculture law. Furthermore, he or she cannot force compliance unless farmers approve mandatory participation rules through referenda. Instead, the USDA abides by the 1938 Agricultural Adjustment Act that stipulated voluntary compliance with farm-subsidy and price-support programs. None of the production controls are mandatory, and farmers can opt out of federal farm-subsidy programs but, of course, forego any payment income. Farmers must abide by certain legislated rules and requirements to receive payments. The secretary of agriculture’s discretion over government payments to farmers has varied with each new piece of farm legislation. In the 1970s, for example, Congress granted itself more control over farm payments by establishing fixed “target prices” (prices based on the national average) for each commodity. If the market failed to meet the target price, farmers could collect deficiency payments (the difference between market and target prices). This basic formula held until 1996 when the Freedom to Farm bill limited the secretary’s discretion over basic commodity price supports and set fixed payments to farmers. Although Congress has gradually wrested control of the income support component of the farm program from the bureaucracy, several components of the federal program of payments to farmers still allow for some discretion by the secretary of agriculture. Congress enacts several minor components (loan deficiency payments, water programs, support if crops are afflicted with particular diseases) in every farm bill from which farmers can benefit if they qualify and register for these services. It is the job of members of the Farm Service Agency—the primary implementer of price-support programs, created in 1994 with the Department of Agriculture’s Reorganization Act of that year—to reach out to inform farmers
and various farm groups about changes in federal law and that they qualify for assistance provided they complete the appropriate applications and paperwork. The U.S. Department of Agriculture may not be the largest of the cabinet-level bureaucracies, but it is certainly one of the most diverse. With at least seven mission areas composed of myriad policy and task-specific organizational units, the USDA is probably one of the most difficult departments for Congress to oversee and the president to lead. It is no wonder, then, that much of the department's direction depends on the discretion of the secretary of agriculture—and his or her ability to lead the Department—and the subgovernment nature of agriculture policy. It is for these reasons and more that farmers continue to benefit from extensive price support and other assistance payments from Congress despite a substantial decline in rural population since the turn of the 20th century. Indeed, as rural population has declined, the USDA's seven mission areas have arguably become more important to the U.S. farmer, consumer, and government. See also appointment power; cabinet; executive agencies. Further Reading Browne, William P. Private Interests, Public Policy, and American Agriculture. Lawrence: University Press of Kansas, 1988; Eshbaugh-Soha, Matthew. The President's Speeches: Beyond "Going Public". Boulder, Colo.: Lynne Rienner Publishers, 2006; Tweeten, Luther G. Foundations of Farm Policy. 2nd ed. Lincoln: University of Nebraska Press, 1979; Ulrich, Hugh. Losing Ground: Agricultural Policy and the Decline of the American Farm. Chicago: Chicago Review Press, 1989; U.S. Department of Agriculture Web site. Available online. URL: http://www.usda.gov/wps/portal/usdahome. —Matthew Eshbaugh-Soha
Department of Commerce
The U.S. Department of Commerce, a cabinet-level department, is concerned with promoting economic growth. It was created by an Act of Congress (32 Stat. 825). The Department of Commerce and Labor was established on February 14, 1903, and was acti-
vated four days later. President Theodore Roosevelt appointed the first Secretary of Commerce and Labor, George B. Cortelyou, who served from February 18, 1903, to June 30, 1904, when he resigned to become chair of the Republican National Committee. Ten years later, the department’s labor-related functions were transferred to the new Department of Labor, and the department was renamed the Department of Commerce (37 Stat. 737; 5 U.S.C. 616). President Woodrow Wilson appointed William C. Redfield, who served one term in the U.S. House of Representatives from New York (1911–13), as the first secretary of the Department of Commerce. Redfield was the author of The New Industrial Day (1912), which was about what he called the “new scientific spirit of management.” He served from March 5, 1913, until October 31, 1919. According to Section 2.01 of Department Organization Order (DOO) 1-1 (effective 4 April 2005), the “Mission and Organization of the Department of Commerce” states “The historic mission of the Department is to foster, promote and develop the foreign and domestic commerce of the United States.” Its responsibilities include promoting and assisting international trade, gathering economic and demographic data, issuing trademarks and patents, helping set industry standards, and assisting states, communities and individuals with economic progress. The Census Bureau, which conducts the constitutionally mandated decennial census, is part of the department’s Economics and Statistics Administration. The development of the department reflects the changes that have taken place in the U.S. economy. In 1905, under the leadership of Secretary Victor H. Metcalf (1904–06), the Bureau of Manufactures was established with the charge of promoting domestic manufactures. Seven years later, the bureau would be renamed the Bureau of Foreign and Domestic Commerce when its duties would be consolidated with those of the Bureau of Statistics. In 1915, the Bureau of Corporations, a commerce bureau responsible for investigating interstate corporations, was absorbed by the new independent Federal Trade Commission (FTC). The FTC was the major action taken by President Wilson against trusts, and the new agency was responsible for enforcing the Clayton Act (ch. 323, 38 Stat. 730), one of the nation’s major antitrust statutes.
Calvin Coolidge once said that “the business of America is business,” and the Commerce Department’s activities expanded during the Coolidge presidency (1923–29). During the “roaring twenties,” the department was headed by Herbert Hoover (1921– 28), the longest serving secretary in the department’s history, who had distinguished himself as head of the American Relief Administration that had fed millions in Central Europe in the years after the First World War. The Patent Office, originally established as the Patent Board in 1790, was transferred from the Department of Interior to the Commerce Department in 1925 (Executive Order 4175, March 17, 1925). Less than three months later, the Bureau of Mines was transferred from the Interior to the Department of Commerce by Executive Order 4239 of June 4, 1925. In the years after World War I, a commercial aircraft industry emerged as the British and French began commercial service in Europe, and Standard Airlines was founded in California in 1926. At the behest of the industry, Congress enacted the Air Commerce Act of 1926 (44 Stat. 568) placing the administration of commercial aviation under the Department of Commerce. The Secretary of Commerce was charged with fostering air commerce, issuing and enforcing air-traffic rules, licensing pilots, certifying aircraft, establishing airports, and operating and maintaining aids to air navigation. An Aeronautics Branch was created (renamed the Bureau of Air Commerce in 1934) to administer the law. At the height of the depression, the Federal Employment Stabilization Board was established by the Employment Stabilization Act of 1931 (46 Stat. 1085) to collect data and prepare reports on employment and business trends and to provide assistance to other agencies in planning public works. Consistent with President Hoover’s belief that the market should regulate the economy, the board did not have the authority to undertake public-works projects. Three years later, the board would be abolished by President Franklin Roosevelt’s Executive Order 6623 of March 23, 1934, and a Federal Employment Stabilization Office would be set up in its place. Under Daniel C. Roper (1933–38), Harry Hopkins (1938–40), and Jesse Jones (1940–45), the department was divested of a number of functions. The
Bureau of Mines was transferred back to the Department of the Interior by Executive Order 6611 of February 22, 1934. The Federal Employment Stabilization Office, the successor to the Hoover Administration’s Federal Employment Stabilization Board, would cease to function due to a lack of funds on June 30, 1935 (it was formally abolished by Section 4 of Reorganization Plan No. 1 in 1939; 53 Stat. 1423). The Bureau of Air Commerce was transferred to the Civil Aeronautics Authority, an independent agency, by the Civil Aeronautics Act of 1938 (52 Stat. 973). The Lighthouse Service, part of commerce since the formation of the Department of Commerce and Labor in 1903, was transferred (53 Stat. 1431) back to its original home, the Department of the Treasury. In 1942, the Bureau of Marine Inspection and Navigation was transferred to the U.S. Coast Guard by Executive Order 9083. One of the department’s more visible activities became part of the department in 1940. The Weather Bureau (since 1970, the National Weather Service) was transferred to Commerce from the Department of Agriculture (Reorganization Plan IV of 1940). Following World War II, President Harry Truman asked former President (and Commerce secretary) Herbert Hoover to head a commission to improve the efficiency and effectiveness of the federal government. The Commission on Organization of the executive branch of the government, popularly known as the Hoover Commission, delivered its report and recommendations to the president and Congress in 1949. Among the commission’s proposals for the Department of Commerce that were implemented was the transfer of the Public Roads Administration from the General Services Administration to the Department of Commerce (Departmental Order 117 of May 24, 1949). Another was the abolition of the U.S. Maritime Commission (Reorganization Plan 21 of 1950), an independent agency, with its responsibilities being split between two new Commerce Department units. The Federal Maritime Board was given responsibility for regulating shipping and for awarding subsidies for the construction and operation of vessels while a maritime administration was tasked with maintaining the national-defense reserve merchant fleet and operating the U.S. Merchant Marine Academy at Kings Point, New York. In recognition of the department’s growing role in
transportation, an office of the Under Secretary for Transportation was established by Department Order 128 of 1950. This office (the predecessor agency to the Department of Transportation) would supervise the Commerce Department's transportation functions. These functions would expand substantially in the next decade. During the Korean War (1950–53), the department played a role in what would become one of the most important constitutional law cases, Youngstown Sheet and Tube v. Sawyer, 343 U.S. 579 (1952). When steel workers went on strike against the major steel companies, the Truman Administration decided that this would disrupt production by defense contractors and have a negative impact on the economy. On April 8, 1952, President Truman announced that the government would seize the steel mills while keeping workers and management in place to run the mills under federal control. The steel companies filed suit, seeking an injunction against the seizure of the plants by Charles Sawyer, Truman's Secretary of Commerce (1948–53). By a 6-3 vote, the U.S. Supreme Court ordered Sawyer to return the mills to the companies, holding that the president did not have an inherent power to seize private property. Under Eisenhower's three Secretaries (Sinclair Weeks, 1953–58; Lewis L. Strauss, 1958–59; and Frederick H. Mueller, 1959–61), the department's role in administering the nation's transportation system would grow. Eisenhower was a major proponent of an interstate highway system. In 1919, as a young army lieutenant colonel, Eisenhower was part of a U.S. Army transcontinental convoy that traveled from the White House to San Francisco. It took the convoy two months to reach its destination, and it demonstrated the need for better highways (in his 1967 book, At Ease: Stories I Tell to Friends, Eisenhower devoted a chapter to this event, "Through Darkest America in Truck and Tank"). This experience, and his exposure to the German autobahn network during World War II while Supreme Commander of Allied Forces in Europe, greatly influenced Eisenhower. In 1954, Eisenhower announced his "grand plan" for highways. Urging approval of his proposal, in his 1956 State of the Union Address, Eisenhower said that "if we are ever to solve our mounting traffic problem, the whole interstate highway system must be authorized as one project . . ." Later that year, Con-
gress would enact the Federal Highway Act of 1956 (70 Stat. 374), authorizing construction of a 40,000 mile network of highways over a 10-year period. The Bureau of Public Roads would be charged with the responsibility of administering the project. In 1958, the Commerce Department assumed responsibility for the St. Lawrence Seaway Development Corporation (Executive Order 10771). The project, when completed the following year, linked the Great Lakes directly to the Atlantic Ocean. Once the principal agency responsible for formulating and implementing commercial aviation policy, the department saw its role in this area ended with the abolition of the Civil Aeronautics Administration and the transfer of its functions to an independent Federal Aviation Agency through the Federal Aviation Act (72 Stat. 810). The Commerce Department’s part in Lyndon Johnson’s “War on Poverty” was carried out by the Economic Development Administration, which was established under the Public Works and Economic Development Act of 1965 (79 Stat. 569). The agency’s mandate was to provide federal grants to generate jobs, retain existing jobs, and stimulate industrial and commercial growth in distressed economic areas. In 1966, for the second time in its history, the department was split to create a new cabinet level department. The transportation functions of the department had grown substantially, and President Johnson proposed the creation of a cabinet-level Department of Transportation. On October 15, 1966, Congress enacted legislation (Public Law 89–670) transferring the office of Under Secretary for Transportation, the Bureau of Public Roads, the Great Lakes Pilotage Administration, and the St. Lawrence Seaway Development Corporation to the new Department of Transportation. The new department would commence operations April 1, 1967. Maurice Stans was appointed secretary by President Richard Nixon and would oversee the department’s movement into new areas of responsibility. Stans established an Office of Minority Business Enterprise, an Office of Telecommunications, and the National Technical Information Service. The National Oceanic and Atmospheric Administration was formed in October 1970, replacing the Environmental Sciences Services Administration (which included the National Weather Service). Stans would leave the department
in February 1972 to chair President Nixon’s reelection campaign. He was succeeded by Peter Peterson, who would serve 11 months in the position before leaving to become chairman and chief executive officer of Lehman Brothers. Peterson was replaced by Frederick Dent (1973–75). The National Fire Prevention and Control Administration (NFPCA) was established in the Department of Commerce by the Federal Fire Prevention and Control Act of 1974 (88 Stat. 1535) to aid state and local governments to develop fire prevention and control programs. Central to the NFPCA’s mission was the National Academy of Fire Prevention and Control, to develop model training programs and curricula for firefighters. The NFPCA would be renamed the U.S. Fire Administration (USFA) in October 1978 (92 Stat. 932). In 1979, the USFA would become (by Executive Order 12127) part of the newly organized independent agency, the Federal Emergency Management Agency (FEMA). Jimmy Carter became president in 1977 following a campaign in which he cited his experience leading government reorganization as governor of Georgia (1971–75). Carter appointed Juanita Kreps, the first woman to head the department, to marshal his reorganization of Commerce. In addition to the shift of the USFA to the newly created FEMA, the Commerce Department’s Office of Energy Programs was transferred to the Department of Energy in 1977 (Executive Order 12009). Two major internal reorganizations took place during Kreps’s tenure. In 1977, a number of units within Commerce were reorganized into the Industry and Trade Administration (in 1981, this agency would be replaced by the International Trade Administration). The National Telecommunications and Information Administration, merging Commerce’s Office of Telecommunications and the Office of Telecommunications Policy in the Executive Office of the President, was established in 1978 to centralize support for the development and regulation of telecommunication, information, and related industries. Ronald Reagan became president in 1981, and he was committed to stimulating economic growth by deregulating business and encouraging entrepreneurship and free trade. Reagan appointed Malcolm Baldridge as Secretary of Commerce. Baldridge, who
had been a corporate chief executive prior to entering government service, would play a major role in expanding trade with the Soviet Union and China. Baldrige also helped secure congressional approval of the Export Trading Company Act of 1982 (Public Law 97–290), a law intended to stimulate U.S. exports by allowing banks to make investments in export trading companies, by permitting the Export–Import Bank to make financial guarantees to U.S. exporters, and by granting U.S. companies limited immunity from antitrust laws. To encourage exports further, a U.S. Export Administration was created in October 1987 (the name was changed in January 1988 to the Bureau of Export Administration). Baldrige was killed in a 1987 rodeo accident and has been memorialized through the Baldrige National Quality Program. Congress created this program (Public Law 100–107) to encourage quality by U.S. corporations. The award, which was first given in 1988 to Motorola, Inc., the Commercial Nuclear Fuel Division of Westinghouse Electric Corporation, and Globe Metallurgical, is administered by the U.S. National Institute of Standards and Technology, which was established in the Department of Commerce by the Omnibus Trade and Competitiveness Act (102 Stat. 1107). William Verity, Jr., was nominated by Reagan to succeed Baldrige. During his 15 months (October 1987–January 1989) in office, he established the Technology Administration to promote the nation’s economic competitiveness and an Office of Space Commerce, intended to promote the commercial space industry. President Bill Clinton appointed Ronald Brown, the chair of the Democratic National Committee, to be his first secretary of the department. Brown was the first African American to lead the department. Brown was killed on April 3, 1996, while on an official trade mission, when his military plane crashed in Croatia. Three additional secretaries would serve under Clinton: Mickey Kantor (April 12, 1996–January 21, 1997); William Daley (January 30, 1997–July 19, 2000); and Norman Mineta (July 20, 2000–January 20, 2001). President George W. Bush appointed his campaign chair, Donald Evans, to serve as his first Commerce secretary. He served until February 7, 2005, when he was replaced by the present secretary, Carlos
M. Gutierrez, who had been the CEO of the Kellogg Company. The organizational structure of the department is composed of the office of the secretary, which is the general management arm of the department, and the operating units, which are responsible for carrying out the department’s programs. The operating units include the Bureau of Industry and Security; the Economics and Statistics Administration; the Economic Development Administration; the International Trade Administration; the Minority Business Development Agency; the National Oceanic and Atmospheric Administration; and the National Telecommunications and Information Administration. In 2006, the department employed 36,000 people and had a budget of $9.4 billion. See also appointment power; cabinet; executive agencies. Further Reading Bowers, Helen, ed. From Lighthouses to Laserbeams: A History of the U.S. Department of Commerce. Washington, D.C.: U.S. Department of Commerce, Office of the Secretary, 1995; Brandes, Joseph. Herbert Hoover and Economic Diplomacy: Department of Commerce Policy, 1921–1928. Pittsburgh: University of Pittsburgh Press, 1962; Fehner, T. R., and J. M. Holl. Department of Commerce 1977–1994: A Summary History. Washington, D.C.: U.S. Government Printing Office, 1994; Redfield, William C. With Congress and Cabinet. Garden City, N.Y.: Doubleday, Page, 1924; U.S. Congress, House Committee on Science. H.R. 1756, the Department of Commerce Dismantling Act: Markup before the Committee on Science, U.S. House of Representatives, 104th Congress, 1st Session, September 14, 1995. Washington, D.C.: U.S. Government Printing Office, 1996. —Jeffrey Kraus
Department of Defense “The direction of war implies the direction of the common strength: and the power of directing and employing the common strength forms a usual and essential part of the definition of executive authority.” Written in 1788 with the intent of influencing votes in favor of the new U.S. Constitution, these words of Alexander Hamilton explain the critical role of the
U.S. president in national defense. The Department of Defense (DOD), 220 years later, is the president’s principal arm in the execution of national security policy and strategy. The DOD is organized into four armed services, the United States Army, the United States Navy, the United States Air Force, and the United States Marine Corps (the last housed within the Department of the Navy), all under the leadership of the secretary of defense. The Department of Defense organizes and deploys its forces under a number of operational commands, the most important of which are led by four-star “flag” officers (of the rank of general and admiral). These commands are divided into geographic areas of responsibility that cover the globe: European Command (EUCOM), which is responsible for Europe and much of Africa; Southern Command (SOUTHCOM), which is responsible for Central America, South America, and the Caribbean; Northern Command (NORTHCOM), which was created after the terrorist attacks of September 11, 2001, to protect the continental United States; Central Command (CENTCOM), which covers most of the Middle East; and Pacific Command (PACOM), which addresses security issues in the Asia–Pacific region. In addition, there are a variety of functional areas that are addressed by other unified commands. Examples include Special Operations Command (SOCOM); Joint Forces Command (JFCOM); Strategic Command (STRATCOM), which deals primarily with nuclear weapons and missile defense; and Transportation Command (TRANSCOM), the logistics heart and soul of the U.S. military’s global-reach capacity. This list of commands is far from comprehensive; rather, it serves to illustrate the multifaceted and broad-ranging structure that characterizes the Department of Defense organizational matrix. The complexity of the Department of Defense is derivative of its size, which in fiscal 2006 consisted of approximately 1.3 million active-duty service members, 1.1 million National Guard and Reserve forces, and 670,000 civilian personnel, making it the nation’s largest employer. With a budget of approximately $440 billion, the Department of Defense absorbs approximately half of the discretionary spending in the federal budget (but only 20 percent of the overall budget).
Aerial view of the Pentagon (U.S. Department of Defense)
Even though U.S. defense spending is seven times China’s (and more than that of the next 10 countries combined), in terms relative to gross domestic product (GDP) the Pentagon now spends approximately 4 percent of U.S. economic output, a historically low figure compared with the cold war (1947–91) average of roughly 6 percent. Indeed, in comparative terms, the United States ranks 26th globally in defense spending as a share of GDP, which speaks volumes as to the enormity of the U.S. economy. The need for a Department of Defense originated in World War II, when the army and the navy were administered by separate departments (the air arm at that time being part of the army). Despite complete victory, the war revealed the need for closer cooperation between the armed services and led Congress to pass the National Security Act of 1947, one of the most monumental pieces of legislation in U.S. history. The act created the National Military Establishment, which was renamed the Department of Defense in 1949. In addition, the act gave birth to the Central Intelligence Agency (CIA) and established
the National Security Council (NSC). It also established the Joint Chiefs of Staff (JCS), who as the heads of their respective services would serve as primary intermediaries, brokers, and conduits between the three armed services (Note: The U.S. Marine Corps was raised to equal status with the other branches in 1952) and the civilian-dominated Office of the Secretary of Defense (OSD). For the next six decades, the Department of Defense expanded into the large, multifaceted agency that it is today. This expansion has witnessed a significant number of major reform efforts that have attempted to rationalize, streamline, and make more efficient a cumbersome bureaucratic colossus that (according to an internal DOD audit in 2001) reportedly has lost track of more than $1.1 trillion during its lifetime. In 1986, the Goldwater-Nichols Defense Reorganization Act was passed, ushering in the most sweeping changes to the Department of Defense
since its inception. Prior to Goldwater-Nichols, the United States military was organized along lines of command that reported to their respective service chiefs. These chiefs individually reported both to the secretary of defense and to the president (who is constitutionally mandated as the commander in chief of all U.S. armed forces). As such, this structure promoted interservice rivalry (the attitude, for example, that what was good for the navy was good for the country) and resulted in poor cooperation on the battlefield. This structure undermined the evolving joint-forces doctrine that envisioned coordinated ground, naval, air, and space systems acting in concert to defeat an enemy. Both the aborted Iranian hostage-rescue mission in 1980 and problems revealed during the invasion of Grenada in 1983 helped spur the reform effort forward. Under the Goldwater-Nichols Act, military advice was centralized in the person of the chairman of the Joint Chiefs. The chairman is designated as the principal military adviser to the president of the United States, to the National Security Council, and to the secretary of defense. Notably, the act also provided greater command authority to “unified” and “specified” field commanders in the geographic combatant commands. In short, Goldwater-Nichols changed the way the services interact with each other. Rather than reporting to a service chief within a branch, each unit regardless of service now reports to the commander responsible for a specific function (Transportation, Space, Special Operations) or for a geographic region of the globe (EUCOM, CENTCOM, PACOM, and so on). This restructuring promoted a unity of effort, helped integrate planning, allowed shared procurement, and significantly reduced interservice rivalry between commanders. For example, the PACOM commander is now assigned air, ground, and naval assets to achieve his or her objective, but no single service branch continually “owns” any given command (for example, on regular rotation, a navy admiral may be replaced by an army, marine, or air force general, or vice versa, in any given command). Goldwater-Nichols ushered in the “joint,” or combined, era of the U.S. armed forces. As a result, cross-service procurement allowed the various branches to share technological advances in evolving “stealth” and “smart” weapons that have greater accuracy and result in fewer “collateral damage” deaths among non-
combatants. These changes provided other commonsense benefits such as the interoperability of radios between services, the ability to share ammunition, and greater effectiveness on the battlefield. The first test of Goldwater-Nichols was the 1991 Gulf War (“Operation Desert Storm”), where it functioned mostly as planned, allowing the U.S. commander, army general Norman Schwarzkopf, to exercise full control over army, air force, and navy assets without having to negotiate with the individual services. Changes in the Department of Defense’s organizational structure have been paralleled over the years by changes in the role played by the secretary of defense. Originally conceived as a weak coordinator of the military branches, the secretary of defense has evolved into one of the most powerful and influential foreign policy advisers to the president, often rivaling and routinely surpassing the influence of the national security advisor and the secretary of state. The essential elements of the secretary’s role have been described as follows: “Foreign policy, military strategy, defense budgets and the choice of major weapons and forces are all closely related matters of basic national security policy. The principal task of the secretary of defense is personally to grasp the strategic issues and provide active leadership to develop a defense program that sensibly relates all these factors.” The influence of the secretary of defense has grown due to the extensive expansion of his or her staff in the Office of the Secretary of Defense. OSD now controls a large number of supporting agencies such as the National Security Agency (NSA), the Defense Intelligence Agency (DIA), the Defense Advanced Research Projects Agency (DARPA), and others. As sources of intelligence and information, these agencies are instrumental to the secretary of defense for the core mission of advising the president, and as a result, they are integral to the unmatched power and influence of the OSD within the federal government. The Department of Defense’s culture contrasts sharply with that of other institutions of the U.S. government. Even though each service branch has its own subculture and traditions, as a whole, the military is a tribe unto itself. Whereas State Department personnel come mainly from elite schools in the East, military officers tend to come from the South, the Midwest, and the West. The Department of State tends to
be politically liberal, urban, and secular, while the Department of Defense is more conservative, rural, and religious. Although the post–9/11 era has witnessed greater recognition by both departments that each is indispensable, most observers agree that the Department of State and the Department of Defense speak different languages, get along poorly, and share a mutual disdain for each other. It is often said that while military personnel generally have a “can-do” attitude, in which the words “no sir, we can’t do that” are rarely spoken, the Pentagon itself has a reputation for discouraging new initiatives, smothering innovation, and punishing creative thinking. There is layer upon layer of bureaucracy, with such detailed regulations for every activity that only the most skilled bureaucratic warriors are able to make things happen. As a result, the skills and attributes needed to be a successful soldier, sailor, flier, or marine in peacetime are sometimes at odds with those more essential when the nation goes to war. Within the Department of Defense, there are two general styles of management: military leadership and bureaucratic (nonmilitary) leadership. Each brings its own array of approaches to managing people, projects, and organizations, and each style has distinct advantages and disadvantages. An overall stable defense culture is difficult to sustain due to the often incongruent goals pursued by the military and civilian components. In particular, military leaders assigned to bureaucratic organizations are confronted with the need to adapt their style in order to succeed and be promoted to higher rank. It is often said that the ideal peacetime “Pentagon general” never makes a decision until he or she is forced to do so, and these individuals are usually armed with a robust quiver of quantitative metrics and PowerPoint briefing slides to convince members of Congress and their civilian bosses in the Office of the Secretary of Defense that the taxpayers’ money has been well spent. However, as witnessed most recently in Iraq, the leadership qualities that resulted in a successful peacetime career have not translated into victory on the battlefield. Finally, one of the most interesting developments at the Department of Defense during the past decade is the increased “outsourcing” to private contractors of many jobs once held by civilian employees and
active-duty personnel. Driven by political pressure to downsize government, this shift has produced a huge increase in the use of defense contractors such as the Halliburton Corporation and its subsidiary Kellogg Brown and Root (KBR). Since the beginning of Operation Iraqi Freedom in 2003, private contractors have constituted the second-largest contributor of personnel to the international coalition. Indeed, the private sector is so firmly embedded in the Department of Defense’s worldwide operations, including combat, humanitarian relief, and peacekeeping duties, that the U.S. military could not function without it. In theory, this privatization effort was designed to save money and increase the quality of services; however, even if the latter has been achieved to some degree, with the profit motive injected even deeper into the system, overall expenditures have escalated, corruption has increased, and special interest money has flowed back into U.S. elections. Throughout history, huge amounts of money have always been at stake in military contracting; however, because of the “privatization of war,” today’s Department of Defense has become an organization more politicized than at any time in its history. In sum, the Department of Defense is a powerful and important instrument in the executive branch of the U.S. government. It has been responsible for many successes in its 60-year history but has also been marred by failure. Faced with addressing serious potential threats from such nation-states as China, North Korea, and Iran while at the same time fighting the global war on terror, the Department of Defense will undoubtedly continue reinventing itself in much the same manner as it has in the past. See also appointment power; cabinet; executive agencies. Further Reading Enthoven, Alain C., and K. Wayne Smith. How Much Is Enough? New York: Harper and Row, 1971; Hamilton, Alexander, James Madison, and John Jay. The Federalist. Edited by Max Beloff. 2nd ed. London: Basil Blackwell, 1987; Jordan, Amos A., William J. Taylor, Jr., and Michael J. Mazarr. American National Security. 5th ed. Baltimore, Md.: The Johns Hopkins University Press, 1999; Mandel, Robert. Armies Without States: The Privatization of War. Boulder, Colo.: Lynne Rienner Publishers, 2002; Wiarda, Howard J. American
Foreign Policy: Actors and Processes. New York: HarperCollins, 1996. —Douglas A. Borer
Department of Education The U.S. Department of Education, a cabinet-level agency, was created by an act of Congress, the Department of Education Organization Act (Public Law 96–88), which was signed into law by President Jimmy Carter on October 17, 1979. The department, once part of the Department of Health, Education, and Welfare (established March 31, 1953), began formal operations as a separate cabinet-level agency on May 4, 1980. The department’s stated mission is “to ensure access to education and to promote educational excellence throughout the nation.” While education has historically been the responsibility of state and local governments, the Department of Education’s functions include establishing policies on federal financial aid for education and distributing and monitoring the use of such funds; collecting data on U.S. schools and disseminating research; focusing national attention on key educational issues; prohibiting discrimination; protecting the privacy rights of students; and ensuring equal access to education. While the present Department of Education was established in 1980, federal involvement in the field reaches back more than a century. On March 2, 1867, Congress created a Department of Education. The main purpose of this department was to collect data about the nation’s schools. However, there was concern that the new department would have too much control over local schools, and there were calls for the agency’s abolition. In 1868, the department was demoted to an Office of Education located in the U.S. Department of the Interior. For the next 70 years, Interior would house a federal education agency (Office of Education, 1868–69; Bureau of Education, 1869–1930; and Office of Education, 1930–39). On July 1, 1939, the Office of Education became part of the new Federal Security Agency, established by Reorganization Plan No. 1 of 1939, 4 FR 2727, 3 CFR, 1938–43 Comp., p. 1288, effective July 1, 1939. The Federal Security Agency was abolished by Reorganization Plan No. 1 of 1953, 18 FR
2053, 3 CFR, 1949–53 Comp., p. 1022, effective April 11, 1953, and the Office of Education was transferred to the new Department of Health, Education, and Welfare (HEW). The effort to create a cabinet-level education department has a long history. According to Radin and Hawley, between 1908 and 1975, more than 130 bills to form a cabinet-level department were introduced in Congress. In 1920, Socialist Party presidential candidate Eugene V. Debs supported the establishment of a Department of Education (as well as a separate Department of Labor). The proposal gained momentum during the late 1950s and 1960s as the federal budget for education exceeded that of a number of cabinet departments. In 1972, the National Education Association (NEA) formed a political action committee (PAC) and joined with other unions to form the Labor Coalition Clearinghouse (LCC) for political campaigns. In 1975, the LCC released a report entitled “Needed: A Cabinet Department of Education.” In 1976, NEA endorsed Democratic presidential candidate Jimmy Carter, who promised to support the creation of a cabinet department. Following Carter’s election, a bill was introduced in the Senate by Connecticut Democrat Abraham Ribicoff (who had served as Secretary of Health, Education, and Welfare during the Kennedy administration). The legislation was opposed by many Republicans, who saw the establishment of a cabinet-level department as a federal bureaucratic usurpation of what was traditionally a state and local responsibility. Following its passage, Carter appointed Shirley Hufstedler, a judge of the U.S. Court of Appeals for the Ninth Circuit, as the first secretary. During the 1980 campaign, Ronald Reagan called for the dissolution of the new department. Following his election, he appointed Terrel Bell (a former commissioner of education during the HEW era) secretary and charged him with the task of dismantling the department. Reagan’s actions ignited opposition from many of the agency’s proponents, who argued that the move would, as the National Council of Teachers of English stated in its resolution opposing the president, “deprecate the status and impact of the federal education effort.” In the face of opposition from interest groups and Democratic members of Congress, Reagan did not move forward with his plans to dissolve the department.
In 1983, the department issued a report entitled A Nation at Risk. This report, from the National Commission on Excellence in Education, was critical of the nation’s educational system, holding it responsible for a “rising tide of mediocrity.” The report also faulted high school curricula in the United States, finding that they had “been homogenized, diluted, and diffused to the point that they no longer have a central purpose.” The report called for increased study of mathematics, science, computer science, and foreign languages in the nation’s high schools; more rigorous academic standards; more effective use of school time (and longer school days and/or years); and improvements in teacher training. While the report did not lead to any legislation, it did focus national attention on education. Bell was succeeded in 1985 by William J. Bennett. Bennett advocated the reintroduction of a core curriculum for all schools based on what he considered the classics and what he called his “three C’s” (content, character, and choice). Bennett’s agenda included competency testing and merit pay for teachers, as well as ending tenure, proposals opposed by professional education associations and unions. He also criticized public schools for low standards, calling the Chicago public school system the “worst in the nation” in 1987. President George H. W. Bush convened the first National Education Summit in 1989, bringing together the nation’s governors. This summit led to the formulation by the National Governors Association (NGA) of six long-term national goals (eventually eight) for public education that encouraged the states to set higher standards and assess educational outcomes. These goals were: that all students would start school ready to learn; that the high school graduation rate would increase to 90 percent; that all students would leave grades four, eight, and 12 having demonstrated competence in challenging subject matter; that teachers would have access to professional development programs; that U.S. students would become first in the world in mathematics and science achievement; that every adult American would be literate; that every school would provide a safe and secure environment conducive to learning; and that every school would promote parental participation in the social, emotional, and academic growth of their children.
During Lamar Alexander’s tenure (1991–93) as secretary, the department developed America 2000, which the secretary referred to as a “crusade” for national school reform. In addition to the NGA’s goals, the plan recommended merit pay and alternative paths to certification for teachers, a longer school year, improved adult literacy programs, national standards in core subjects, the creation of 535 new U.S. schools, and school choice (vouchers) for parents. The plan was controversial and was rejected by the U.S. Senate in 1992. Another controversial decision by Alexander was his determination that, under Title VI of the Civil Rights Act of 1964, college scholarships designated specifically for minorities were illegal. In 1993, a federal appeals court held that colleges could offer scholarships specifically to African-American students. President Bill Clinton appointed former South Carolina governor Richard Riley as education secretary in 1993. As governor of South Carolina (1979–87), Riley had led an effort to reform public education in the state. Under Riley, the department moved to implement the Clinton administration’s plan to encourage nationwide standards-based education. Toward that end, the department released Goals 2000, which called for the realization of the National Education Goals by 2000; increased expectations for student performance; voluntary national standards for occupational skills; and encouragement of reform at the local level through federal support. Congress approved the plan in 1994 as the Goals 2000: Educate America Act (Public Law 103–227). Riley, who would become the longest-serving education secretary (1993–2001), presided over a number of other significant achievements. The department developed the Student Loan Reform Act of 1993 (Public Law 103–66), which authorized the William D. Ford Federal Direct Loan Program. This program provides low-interest loans to college students through participating educational institutions, with capital provided by the federal government, which serves as the sole lender, rather than by commercial lending institutions. The proponents of this legislation argued that it would save taxpayers money by eliminating the “middle man.” By lending the money directly to students, the federal government would no longer have to subsidize the “below-market” interest rates charged to
students under the existing Federal Family Education Loan Program. The School to Work Opportunity Act (Public Law 103–239) increased technology education for students who planned to enter the workforce immediately after high school graduation. The program provided states with funding to create plans for effectively linking school-based and work-based learning so as to provide students with an easier transition to the workplace. The Improving America’s Schools Act (Public Law 103–382) was part of the Clinton administration’s effort to reform public education. It included the funding of grants for charter schools, increased immigrant and bilingual education funding, and extra help for disadvantaged students with requirements that schools be held accountable for the educational outcomes of these students. In 2001, George W. Bush became president and appointed Rod Paige, superintendent of the Houston Independent School District (HISD), as the seventh secretary of the department. Paige, the first school superintendent to lead the department, led Bush’s efforts to secure congressional approval of the No Child Left Behind Act of 2001 (Public Law 107–110), the reauthorization of the Elementary and Secondary Education Act (Public Law 89–10). No Child Left Behind contained four important elements: stronger accountability on the part of schools for educational outcomes; increased flexibility and local control; expanded options for parents and students (including the right of parents to transfer students out of “failing” schools); and reliance on proven teaching methods. The law required all states to develop “challenging state standards” that would be measured annually by state tests, which would then be measured against a national “benchmark” test. In exchange for developing and implementing these standards, states would be given more flexibility in spending federal education funds. Passed with bipartisan support, the law has been criticized by a number of Democratic members of Congress and state government and education officials who contend that the administration has imposed millions of dollars in unfunded federal mandates on the states and has not provided sufficient funding for implementation of the law. In August 2005, the state of Connecticut filed suit against the federal govern-
ment (State of Connecticut and the General Assembly of the State of Connecticut v. Margaret Spellings, in her official capacity as secretary of education). As of this writing, the suit is pending. The department is presently (2008) organized into three principal office components: the Office of the Secretary, the Office of the Deputy Secretary, and the Office of the Under Secretary. The Office of the Secretary includes the offices of Communications and Outreach; Planning, Evaluation, and Policy Development; Civil Rights; Management; Legislative and Congressional Affairs; and International Affairs. The general counsel, the inspector general, the Institute of Education Sciences, the chief financial officer, and the Center for Faith-Based and Community Initiatives also report directly to the secretary. The Office of the Deputy Secretary includes the offices of Elementary and Secondary Education; Safe and Drug-Free Schools; English Language Acquisition, Language Enhancement, and Academic Achievement for Limited English Proficient Students; Special Education and Rehabilitative Services; and Innovation and Improvement. The Office of the Under Secretary includes the Offices of Postsecondary Education, Federal Student Aid, and Vocational and Adult Education, and the White House Initiative on Tribal Colleges and Universities. The department has created 18 advisory committees and operational committees that provide advice on policy and program issues. These committees are established under the Federal Advisory Committee Act of 1972 (Public Law 92–463). In 2006, the department employed 4,500 people and had a budget of $71.5 billion. See also appointment power; cabinet; executive agencies. Further Reading Hess, Frederick M., and Michael J. Petrilli. No Child Left Behind Primer. New York: Peter Lang, 2006; McGuinn, Patrick J. No Child Left Behind and the Transformation of Federal Education Policy, 1965–2005. Lawrence: University Press of Kansas, 2006; Radin, Beryl A., and Willis Hawley. The Politics of Federal Reorganization: Creating the U.S. Department of Education. New York: Pergamon, 1988; Stephens, David. “President Carter, the Congress, and NEA: Creating the Department of Education,” Political
Science Quarterly 98 (1983); U.S. National Commission on Excellence in Education. A Nation at Risk: The Imperative for Educational Reform: A Report to the Nation and to the Secretary of Education. Washington, D.C.: Superintendent of Documents, United States Government Printing Office, 1983. —Jeffrey Kraus
Department of Energy The U.S. Department of Energy, the 12th cabinet-level department, was created by an act of Congress, the Department of Energy Organization Act (Public Law 95–91), which was signed into law by President Jimmy Carter on August 4, 1977. The department consolidated functions performed by the Energy Research and Development Administration (established January 19, 1975, by Executive Order 11834, January 15, 1975), the Federal Energy Administration (established, effective June 28, 1974, by the Federal Energy Administration Act of 1974, Public Law 93–275), the Federal Power Commission (established in 1920 by the Federal Water Power Act, ch. 285, 41 Stat. 1063), the Department of Commerce’s Office of Energy Programs (created in 1975), and four power-marketing administrations housed in the Department of the Interior: the Bonneville Power Administration (established August 20, 1937, by the Bonneville Project Act, ch. 720, 50 Stat. 731), the Southwestern Power Administration (established September 1, 1943, by Secretarial Order 1865, August 31, 1943, pursuant to Executive Order 9366, July 30, 1943, and Executive Order 9373, August 30, 1943), the Southeastern Power Administration (established by Secretarial Order 2558, March 21, 1950), and the Alaska Power Administration (established by Secretarial Order 2900, June 16, 1967). The Department of Energy began formal operations as a cabinet-level agency on October 1, 1977. The department’s mission is to “advance the national, economic, and energy security of the United States; to promote scientific and technological innovation in support of that mission; and to ensure the environmental cleanup of the national nuclear weapons complex.” The department defines itself as a “national security agency.” Its responsibilities include nuclear-weapons production, energy conservation and production, energy-related basic and applied research, and the disposal of radioactive waste.
While the present Department of Energy was established in 1977, the functions assumed by the department at that time reach back more than a half-century and have been carried out by a number of federal agencies. The Federal Power Commission was established as an independent agency (consisting of the secretaries of Agriculture, Interior, and War) with responsibility for licensing hydroelectric power plants located on either federal lands or navigable waters in the United States (the Federal Water Power Act, 41 Stat. 1063). The Federal Power Act of 1935 (Ch. 687, August 26, 1935; 49 Stat. 803) changed the structure of the commission so that it consisted of five commissioners appointed by the president, with no more than three from the same political party. The commission’s jurisdiction was expanded to include regulation of interstate and wholesale transactions and transmission of electric power. The commission had a mandate to ensure that electricity rates were “reasonable, nondiscriminatory and just to the consumer.” Nuclear-weapons production began with the “Manhattan Project,” formally known as the Manhattan Engineer District of the U.S. Army Corps of Engineers. Begun in 1942, the project resulted in the development of the atomic bombs that were dropped on Hiroshima and Nagasaki in 1945. The following year, Congress enacted the Atomic Energy Act (Public Law 79–585), which created the Atomic Energy Commission (AEC) to manage the development, use, and control of nuclear energy for military and civilian purposes. This law was significant as it shifted control over atomic energy from the military and placed it in the hands of civilians. A system of national laboratories, beginning with the Argonne National Laboratory outside of Chicago (1946), was established and placed under the authority of the AEC. The agency was abolished by the Energy Reorganization Act of 1974 (Public Law 93–436), with its responsibilities for the development and production of nuclear weapons and the promotion of nuclear power assigned to the Energy Research and Development Administration (which would become part of the Department of Energy three years later), while the U.S. Nuclear Regulatory Commission (NRC) was given the AEC’s nuclear licensing and regulatory functions.
The Bonneville Power Administration (and the other agencies that would join it during the next three decades) owes its existence to President Franklin D. Roosevelt’s opposition to privately held power companies, which dated back to his years as governor of New York (1929–33). During a 1932 campaign stop in Portland, Oregon, Roosevelt promised that a hydroelectric project would be built on the Columbia River to break the dominance of public-utility holding companies in the Pacific Northwest. Congress authorized construction of the dam (Public Law 73–67) in 1935 and created the Bonneville Power Administration two years later to deliver and sell power from the Bonneville Dam (50 Stat. 731). The other power-marketing administrations were created by order of the Secretary of the Interior during the next 30 years. During the 1973 energy crisis, the Organization of Arab Petroleum Exporting Countries (OAPEC) announced that it would not ship oil to nations that had supported Israel during the Yom Kippur War. The higher gasoline prices and long lines of cars at the pumps served as a catalyst for a rethinking of U.S. energy policy and for a reorganization of the agencies responsible for its formulation and implementation. In November 1973, President Richard Nixon signed the Emergency Petroleum Allocation Act (Public Law 93–159) into law. The law imposed price controls on petroleum from existing sources while allowing the market to dictate the price of oil obtained from new sources (as a way of encouraging new exploration and production while limiting the impact of price increases). Nixon then proposed “Project Independence,” a plan for the United States to achieve energy self-sufficiency by 1980. Nixon compared his plan to the crash programs that built the atomic bomb during World War II and landed a man on the Moon in 1969. He promised new federal financial support for the exploration of the nation’s remaining untapped energy resources: Alaskan oil and gas, offshore oil reserves, nuclear energy, and synthetic fuel from coal and oil shale. The plan included the construction of the Trans-Alaska Pipeline, which Nixon asserted would supply 11 percent of the nation’s oil needs on its completion. In 1977, Congress created the department because it found that the federal government’s energy programs were fragmented and required coordination.
Creating a cabinet-level department would give greater visibility to the energy problem, improve the coordination of existing policies and programs, and insure a more effective response to energy emergencies than might take place if responsibility for energy remained scattered among several departments. President Carter appointed James R. Schlesinger, a Republican who had served as chairman of the Atomic Energy Commission (1971–73), CIA director (1973), and Secretary of Defense (1973–75) during the Nixon and Ford administrations, as the first secretary of the department. Schlesinger, who served until July 1979 (when he was succeeded by Charles Duncan, Jr.), initiated the department’s Carbon Dioxide Effects Research and Assessment Program. In its early years, the department focused on energy development and regulation, reflecting the nation’s concern with becoming less dependent on foreign oil resources. In December 1981, President Ronald Reagan proposed that the department be dismantled, shifting its functions to four existing cabinet-level departments (Commerce, Interior, Agriculture, and Justice) and making the Federal Energy Regulatory Commission (once known as the Federal Power Commission), which had been made part of the department in the 1977 law, once again an independent agency. The president contended that his plan would dismantle the bureaucracy while retaining the functions, making the “government more efficient and reduce the cost of government to the taxpayers.” Congressional opposition derailed Reagan’s plan. In the 1980s, as tensions between the United States and the Soviet Union mounted, the department shifted its emphasis to nuclear-weapons research, development, and production. During this time, the Reagan Administration pushed the Strategic Defense Initiative (SDI), a space-based antiballistic missile system. Proposals by the Soviets to start new negotiations to reduce nuclear-weapons stockpiles were rejected by President Reagan as they would have impeded SDI development. The collapse of the Soviet Union and the end of the cold war shouldered the department with a new task: environmental remediation of many of the former nuclear-weapons production sites that were being deactivated as a result of the end of the Soviet threat. In 1989, Energy Secretary James Watkins established the Office of Environmental Restoration
and Waste Management (renamed the Office of Environmental Management) to address the environmental risks and hazards that had been created by more than four decades of nuclear-weapons production. In 1998, the department established the Office of River Protection to manage the cleanup of the Hanford facility. The department is studying the suitability of Yucca Mountain in Nevada as a long-term repository for spent nuclear fuel and high-level radioactive waste. These materials are currently stored at more than 100 sites around the nation pending the selection of a permanent repository for these by-products of nuclear-power generation and national nuclear-defense programs. In recent years, the department has had to deal with security breaches at the Los Alamos National Laboratory, one of the two laboratories in the United States where classified work is done on the design of nuclear weapons. In 1999, Los Alamos scientist Wen Ho Lee, a Taiwanese American, was accused by the Federal Bureau of Investigation of stealing nuclear secrets for the People’s Republic of China. He was fired by the University of California (which ran the lab). When these charges could not be substantiated, he was charged with 59 counts of mishandling classified information by allegedly downloading nuclear secrets to data tapes and removing them from the lab. After being detained in solitary confinement for 10 months, Lee pled guilty to one count of the indictment as part of a plea bargain, and the other charges were dismissed. U.S. District Court judge James A. Parker apologized to Lee for what he termed the government’s “abuse of power” in prosecuting the case. Subsequently, Lee sued the federal government and a number of news organizations for harm caused to his reputation by the leaks of information from the investigation. In June 2006, the federal government and the news organizations settled the lawsuit for $1.65 million. In 2000, it was announced that two computer hard drives containing classified data were missing. They were later found behind a photocopier. In January 2003, the University of California dismissed John C. Browne as director of the lab in the wake of an investigation into the disappearance of equipment and the misuse of laboratory credit cards. In 2004, the laboratory was shut down for a time as it was found that four hard drives containing classi-
fied information were missing. Two of the four were later found to have been improperly moved to another location in the laboratory, and it was determined that the unaccounted-for disks had never actually existed. Peter Nanos, who had replaced Browne as laboratory director, resigned in May 2005. On June 1, 2006, management of the Los Alamos lab was taken over by Los Alamos National Security, LLC, a partnership between the University of California and the Bechtel Corporation, ending the University of California’s 60-year management. While the university appoints three members of the 11-member board of directors, the lab director no longer reports to the president of the university, and lab employees are no longer employees of the university. In November 2006, a report by the Energy Department’s inspector general found that security procedures at Los Alamos were “nonexistent, applied inconsistently, or not followed.” The report had been commissioned after Los Alamos County police found computer flash drives that contained classified information in the home of a contract employee. In January 2007, Linton F. Brooks, the administrator of the National Nuclear Security Administration (which oversees the lab), was fired. The department is presently (2007) organized into staff offices, program offices, and operations offices. In addition, the department operates two dozen research laboratories and four power-marketing administrations. Staff offices provide administrative, management, and oversight support to the department’s program offices. These offices report to the office of the secretary and include the Chief Financial Officer; Chief Information Officer; Congressional and Intergovernmental Affairs; Economic Impact and Diversity; General Counsel; Health, Safety, and Security; Hearings and Appeals; Human Capital Management; Inspector General; Intelligence and Counterintelligence; Management; and Policy and International Affairs. Congressional and Intergovernmental Affairs and Policy and International Affairs are headed by assistant secretaries of the department. A number of program offices report to the Office of the Under Secretary. They include Civilian Radioactive Waste Management; Electricity Delivery and Energy Reliability; and Legacy Management. Also reporting to the Office of the Under Secretary are the assistant secretaries of energy efficiency and
renewable energy; environmental management; fossil energy; and nuclear energy. The Office of Science, overseen by the Office of the Under Secretary for Science, includes a number of program offices: Advanced Scientific Computing Research; Basic Energy Sciences; Biological and Environmental Research; Fusion Energy Sciences; High Energy Physics; Nuclear Physics; and Workforce Development for Teachers and Scientists. A third under secretary is responsible for nuclear security and is the administrator of the National Nuclear Security Administration. The department operates 14 field offices outside of Washington, D.C., that manage departmental activities. The 21 laboratories and technology centers under the department’s jurisdiction employ 30,000 scientists and engineers who are engaged in research. In 2006, the department employed 16,100 people directly, another 100,000 worked for agency contractors, and the department had a budget of $23.4 billion. Dr. Samuel W. Bodman was appointed energy secretary by President George W. Bush in 2005. See also appointment power; cabinet; executive agencies. Further Reading Fehner, Terrence R., and J. M. Holl. Department of Energy 1977–1994: A Summary History. Washington, D.C.: U.S. Department of Energy, 1994; Gosling, F. G. The Manhattan Project: Making the Atomic Bomb. Honolulu: University Press of the Pacific, 2005; Stober, Dan, and Ian Hoffman. A Convenient Spy: Wen Ho Lee and the Politics of Nuclear Espionage. New York: Simon and Schuster, 2002. —Jeffrey Kraus
Department of Health and Human Services The U.S. Department of Health and Human Services (HHS) is a relatively new cabinet department, created during the presidency of Jimmy Carter (1977–81). It was established in 1979 (and activated in May 1980) by the Department of Education Organization Act when the Office of Education was removed from the Department of Health, Education, and Welfare (HEW) and made into its own standing department, the Department of Education. In addition, HEW was renamed Health and Human Services and continued to handle many of the same
policy functions in the area of public health and welfare services. Health and Human Services is a cabinet-level department, headed by a secretary who is appointed by the president with the advice and consent of the Senate. Its mission is to promote and protect the health, welfare, social services, and income security of the nation’s citizens. In total, Health and Human Services runs more than 250 federal programs, and the budget for these programs alone amounts to roughly 40 percent of the federal budget. The budget of Health and Human Services is larger than all 50 state budgets combined and is also larger than the budgets of most nations. In 2006, the HHS budget totaled more than $640 billion, and the department included more than 67,000 employees. For many years the department’s largest agency was the Social Security Administration, which became an independent agency in 1995; today its largest component, by budget, is the Centers for Medicare and Medicaid Services. Other agencies include the Administration for Children and Families, the Administration on Aging, the Agency for Healthcare Research and Quality, the Agency for Toxic Substances and Disease Registry, the Centers for Disease Control and Prevention, the Food and Drug Administration, the Health Resources and Services Administration, the Indian Health Service, the National Institutes of Health, and the Substance Abuse and Mental Health Services Administration. The issue of public health has long been considered important: Congress first passed legislation in this area in 1798, providing for the relief of sick and disabled seamen and establishing a federal network of hospitals for the care of merchant seamen that is considered the forerunner of today’s U.S. Public Health Service. In 1862, President Abraham Lincoln appointed a chemist, Charles M. Wetherill, to the Department of Agriculture to head the Bureau of Chemistry, which was the forerunner to the Food and Drug Administration. In 1871, the position of supervising surgeon was created, which would later become the modern-day position of surgeon general. During the first half of the 20th century, various pieces of legislation created new posts and areas of policy oversight for the federal government in regard to public health and safety. In 1906, Congress passed the Pure Food and Drugs Act, which allowed the federal government to monitor the purity of foods and the safety of medicines (this responsibility now belongs to the Food and Drug Administra-
tion). In 1921, the Bureau of Indian Affairs Health Division was created, and in 1930, the National Institute of Health was created. Other important legislative milestones during this time included the passage of the Social Security Act in 1935; passage of the Federal Food, Drug, and Cosmetic Act in 1938; the creation of the Federal Security Agency in 1939 to bring together related federal activities in the areas of health, education, and social insurance; and the establishment of the Communicable Disease Center in 1946, which would be the forerunner of the Centers for Disease Control and Prevention. The Department of Health, Education, and Welfare was created in 1953 during the presidency of Dwight D. Eisenhower, who made the decision to elevate the many social services provided by the federal government in the areas of public health and welfare to cabinet-level status. During the 1950s, under the direction of the first HEW secretary, Oveta Culp Hobby (1953–55), expanding health care to all Americans became a prominent goal of the new department, but many of Hobby’s proposals were opposed by the Bureau of the Budget due to the high levels of government spending required. Hobby, who was only the second woman to serve as the head of a cabinet department (Frances Perkins, secretary of labor under Franklin D. Roosevelt, was the first), also worked extensively on matters relating to Social Security, and she advocated voluntary, nonprofit insurance plans as a way to extend health care to the poor. Eisenhower’s second appointment, Marion B. Folsom (1955–58), pursued the expansion of medical research and legislation for the construction of research facilities and for programs to control environmental pollution. Under the third HEW secretary appointed by Eisenhower, Arthur Flemming (1958–61), HEW experienced a period of sustained growth, gaining widespread public attention as a cabinet-level department in 1959 when Flemming banned the sale of cranberries prior to Thanksgiving after some of the crop was found to have been contaminated with a potentially carcinogenic weed killer. Flemming also pursued a variety of issues concerning the elderly and hosted the first White House Conference on Aging. In the 1960s, during the Great Society programs of President Lyndon B. Johnson, HEW became a focal point of government reform activity as the war on poverty was centered there, and these programs greatly expanded the power, the role, and the budget
of the department. Under the direction of Anthony J. Celebrezze, appointed secretary of HEW by President John F. Kennedy in 1962 (a post he would retain under President Johnson until 1965), HEW was reorganized, separating public assistance and child health and welfare functions from the Social Security Administration and transferring those programs to a new Welfare Administration. Celebrezze also gained the power to deny federal program funds to any state or institution that practiced racial segregation. The next HEW secretary, John Gardner (1965–68), a liberal Republican, issued new guidelines requiring federally funded schools and hospitals to meet desegregation percentages in order to receive federal funds. In 1967, Gardner also reorganized HEW by condensing the department from eight subcabinet departments into three: health, education, and individual and family services. Gardner also created new federal standards to reduce pollutants from car exhaust and supported a Federal Trade Commission initiative to put stronger warnings on cigarettes. During the 1970s, with President Johnson no longer in office and without a president to spearhead the Great Society programs, HEW lost some of its prominence in the policy-making arena. Under presidents Richard M. Nixon (1969–74) and Gerald Ford (1974–77), HEW would oversee the creation of the National Health Service Corps in 1970, the passage of the National Cancer Act in 1971, and the establishment of the Child Support Enforcement program in 1975. With the creation of the new Department of Health and Human Services in 1979, Patricia Harris would serve as its first secretary when the agency became active in 1980. Harris, the first African-American woman ever to serve in the cabinet, first served as Carter’s secretary of housing and urban development from 1977 to 1979, became secretary of Health, Education, and Welfare in 1979, and continued as secretary of Health and Human Services when the new agency was created, remaining in that post until Carter left office in 1981. By the 1980s, when social programs went into disfavor during the administration of Ronald Reagan, the policy prominence of Health and Human Services was diminished as more focus was placed on the reduction of federal government programs and
spending. By the mid-1980s, HIV/AIDS had become a national health crisis, and in 1985, Health and Human Services oversaw the licensing of a blood test to detect HIV. In 1984, the National Organ Transplant Act was signed into law, and in 1988, the JOBS program was created, legislation was passed to provide federal support for child care, and the McKinney Act was passed to provide health care to the homeless. Since 1990, major initiatives from Health and Human Services have included the establishment of the Human Genome Project in 1990, the establishment of the Vaccines for Children Program in 1993 to provide free immunizations to all children in low-income families, the creation of the Social Security Administration as an independent agency in 1995, major welfare reform in 1996 with passage of the Personal Responsibility and Work Opportunity Reconciliation Act, the enactment of the Health Insurance Portability and Accountability Act (HIPAA) in 1996, the creation of the State Children’s Health Insurance Program (SCHIP) to enable states to extend health coverage to more uninsured children, and an initiative to combat bioterrorism launched in 1999. In 2001, Health and Human Services would respond to the nation’s first bioterrorism attack: the delivery of anthrax through the mail. Today, Health and Human Services is responsible for administering a variety of public-assistance and health programs. Divided into four sections, Health and Human Services is, even when reform is not a high priority, an important element in the domestic-economy and domestic-policy arenas. The first of these sections has historically been the Social Security Administration, which administered the Social Security program as well as welfare programs until it became an independent agency in 1995. Next is the Public Health Service (PHS). The PHS includes the National Institutes of Health (NIH), which is the world’s largest medical research complex. It conducts a variety of studies of public-health issues such as smoking, heart disease, strokes, and diabetes, among others. The PHS also houses the Centers for Disease Control and Prevention as well as the Food and Drug Administration, which is concerned with the safety of food and medicines. The third section of HHS is the Health Care Financing Administration, renamed the Centers for Medicare and Medicaid Services in 2001, which administers both the Medicare and Medicaid programs. Finally, the Administration for Children
and Families is concerned with service programs for children, older people, and families. The longest-serving secretary of Health and Human Services has been Donna Shalala, appointed to the post by President Bill Clinton in 1993, who served for the entire eight years of the Clinton presidency, until 2001. President George W. Bush's appointments to this cabinet post have included former Wisconsin governor Tommy Thompson, who served from 2001 until 2005, and his successor, Michael Leavitt, the former governor of Utah and administrator of the Environmental Protection Agency. See also appointment power; cabinet; executive agencies. Further Reading Govern, Frank. U.S. Health Policy and Problem Definition: A Policy Process Adrift. Philadelphia: Xlibris, 2000; Harrington, Charlene, and Carroll L. Estes. Health Policy: Crisis and Reform in the U.S. Health Care Delivery System. Sudbury, Mass.: Jones and Bartlett, 2004; Miller Center for Public Affairs, University of Virginia, "American President: An Online Reference Resource." Available online. URL: http://www.millercenter.virginia.edu/index.php/academic/americanpresident/; U.S. Department of Health and Human Services, "Historical Highlights." Available online. URL: http://www.hhs.gov/about/hhshist.html. —Lori Cox Han and Michael A. Genovese
Department of Homeland Security The tragic events of the morning of September 11, 2001, caught the United States completely off guard. In the confusion and panic that followed the surprise terrorist attacks, many questions were asked: How could this have happened? How could the United States prevent any further attacks? These two key questions demanded a response, and it was in answer to the second question that the Department of Homeland Security was established. The monumental task of protecting the U.S. homeland from terrorist attacks, as well as providing emergency services during and after natural disasters, now falls to the Department of Homeland Security. It is a massive task, complicated by the fact that the
department (DHS) is a new government agency, made up of a number of previously established agencies that have been reassigned under its direction. The cabinet-level agency also faces a series of challenges that complicate its already very difficult task. In response to the September 11, 2001, terrorist attacks against the United States, President George W. Bush launched wars against Afghanistan, the al-Qaeda terror network, and, later, Iraq. The administration also orchestrated a massive plan of bureaucratic growth while significantly reorganizing the federal bureaucracy to fight terror. It created a new cabinet-level agency, the Department of Homeland Security, and assigned it the task of protecting the homeland. This massive reorganization was not without its problems, and critics soon pointed to the size of the agency, the blurred lines of assignment and organization, and its conflicting bureaucratic responsibilities as potential problems. Yet President Bush was under significant political pressure, much of it from Congress, to develop this new agency, and eventually the president relented and took the lead in reorganizing the federal government to fight terrorism better. After the September 11, 2001, tragedy, President Bush proposed, and Congress soon approved, the creation of a new Department of Homeland Security to promote homeland defense. Established on November 25, 2002, by the Homeland Security Act of 2002, and formally activated on January 24, 2003, the Department of Homeland Security immediately became the third-largest cabinet department in the federal government, smaller only than the Department of Defense and the Department of Veterans Affairs. The DHS grew out of the Office of Homeland Security (OHS), which had been established on September 20, 2001, to promote the protection of the U.S. homeland. As congressional pressure grew to devote a cabinet-level agency to homeland protection, President Bush, at first reluctant, eventually threw his support behind the creation of the new agency. As the president said, "We have concluded that our government must be reorganized to deal more effectively with the threats of the twenty-first century. . . . Tonight I propose a permanent cabinet-level Department of Homeland Security to unite essential agencies that must work more closely
together. . . . What I am proposing tonight is the most extensive reorganization of the federal government since the 1940s." President Bush was well aware of the potential problems that this new agency faced, and he knew there would be trouble ahead. The newly created department threatened the turf of several existing agencies that would try to guard their bureaucratic territory; its power would be striking, threatening the power of other established agencies of the government; managing such a huge enterprise would be a daunting, some said impossible, task; coordinating so many different agencies to pursue a common objective would be a difficult task complicated by the sheer size of the new agency; and its goal would be difficult to achieve—a foolproof, 100 percent safety and security system. If the Department of Homeland Security was created to solve a major problem, the creation of this new large agency might well be a problem in itself. "This is going to be a tough battle, because we're going to be stepping on some people's toes. . . ," the president said. "When you take power away from one person in Washington, it tends to make them nervous." The department consolidated a number of existing departments and brought several different agencies under one administrative umbrella, all designed to fight terrorism and secure homeland safety. Former Republican Pennsylvania governor and later head of the Office of Homeland Security, Tom Ridge, served as the first head of the new department. He was later replaced by Michael Chertoff. Ridge was, in effect, creating a new system of homeland security for a nation reeling from the September 11 attacks and fearful that other, more lethal attacks might follow. What of biological warfare? What of "dirty bombs"? What about a suitcase nuclear weapon? Would suicide bombers bring the war to the streets of the United States? There was so much that the U.S. government did not know and so many fears that might turn into tragic realities. With more than 180,000 employees, a first-year budget of more than $37 billion, and 22 agencies under its control, the new Department of Homeland Security immediately became one of the largest and potentially most powerful agencies in the federal government. It also became something of an
administrative and organizational nightmare. Included in the department are such varied agencies as the Secret Service, the Federal Emergency Management Agency, the Customs Service, the Border Patrol, the Immigration and Naturalization Service, and numerous others. Running any organization can be difficult, but managing so varied and diverse a collection of agencies as DHS soon proved problematic. The special challenge for the Department of Homeland Security is to take this very large and complex organization and ensure sufficient coordination among its different agencies so that they work together to achieve the goal of protecting the homeland. This has proved to be no easy task. A small organization can be more nimble and flexible; a large organization needs to pay greater attention to how best to pull its disparate parts together into a cohesive unit. Does such a large and cumbersome bureaucracy help or hinder efforts to fight terrorism? In its first test of a homeland disaster, the response to Hurricane Katrina in 2005, the department, through the Federal Emergency Management Agency, performed very poorly and was universally condemned for ineptitude and insensitivity to the victims. It demonstrated a lack of focus and coordination and mismanaged the response effort. This failure raised key questions surrounding the ability of the federal government to use a massive bureaucracy to protect the homeland and called into question the wisdom of placing so many different agencies under one bureaucratic roof. Part of the problem in dealing with Hurricane Katrina was one of federalism. In the United States, power and authority are divided between local, state, and federal authorities, and the lines of responsibility are not always clearly drawn. Such was the case with the failed response to Hurricane Katrina. Local, state, and federal agencies all played a role in the disaster, and none performed to task. But FEMA came in for special criticism. After all, the local and state authorities were literally drowning from the effects of the hurricane damage and were thus unable to serve the community. The late and often lame involvement of FEMA, however, was a function of the agency's initial failure to take its responsibilities seriously and of its poor performance and follow-up once it engaged with the problem. There
was plenty of blame to go around in the botched response to Hurricane Katrina, but FEMA was especially taken to task for its failed response to this tragedy. On February 15, 2005, Michael Chertoff was sworn in as the second secretary of the Department of Homeland Security, replacing Tom Ridge in that post. A graduate of Harvard College (1975) and Harvard Law School (1978) and a former judge on the U.S. Court of Appeals for the Third Circuit, Chertoff had also served as a U.S. attorney and as assistant attorney general. He inherited a department still finding its way. As assistant attorney general for the Criminal Division of the Department of Justice, Chertoff worked on tracing the September 11, 2001, terrorist attacks on the United States to the al-Qaeda terrorist network of Osama bin Laden. He also helped write the USA PATRIOT Act while working in the Justice Department. Thus, his antiterrorist credentials were impressive. As head of the DHS, Chertoff received a great deal of criticism for the botched handling of the federal government's response to Hurricane Katrina in the fall of 2005, in which confusion and slow response times led to disaster. Was the department's weak response his fault, or was it endemic to the challenges DHS faced? Chertoff heads an enormous bureaucracy that in its early years was still trying to define its role and organize its activities. In many ways, Chertoff and the DHS were in uncharted territory, attempting to fight the war against terrorism, defend the homeland, and establish a functioning agency out of a collection of smaller agencies pushed together under a single administrative heading. There can be no panacea for the dilemma of homeland security. Foolproof protections do not exist, and yet, given the significance of the issue, might the United States expect a higher quality of performance by the Department of Homeland Security? One of the key reform suggestions regarding DHS is the "One or Many" question: Should the department be split up into several different agencies, or is it better to have everything under one roof? Splitting the department up into several different component parts might make it more manageable,
but it might hinder coordination. In fact, one of the key criticisms of the federal government before the September 11, 2001, attacks against the United States was that the various agencies were not communicating with one another, thereby allowing potential problems to fall through the cracks and not allowing the government to "connect the dots." But keeping the department as one huge unit creates its own problems of scope and management. How then can the federal government truly protect the homeland? Must civil liberties be sacrificed for greater security? Is the price that the United States must pay for more safety a loss of rights and liberties? Is the war against terrorism primarily a war metaphor or a crime metaphor, and what are the consequences of employing strategies for dealing with each metaphor? And to what extent can the United States "go it alone" in the war against terrorism? These and many other questions haunt the new Department of Homeland Security as it attempts to shoulder the awesome responsibility of protecting the United States from another terrorist attack. See also appointment power; cabinet; executive agencies. Further Reading Howard, Russell D., James J. F. Forest, and Joanne Moore. Homeland Security and Terrorism. Boston: McGraw-Hill, 2005; Hunt, Alexa. Homeland Security. New York: Forge, 2007; Sauter, Mark, and James Carafano. Homeland Security. Boston: McGraw-Hill, 2005; White, Richard, and Kevin Collins. The United States Department of Homeland Security: An Overview. New York: Pearson, 2005. —Michael A. Genovese
Department of Housing and Urban Development President Lyndon B. Johnson established the Department of Housing and Urban Development (HUD) in 1965 through the Department of Housing and Urban Development Act. This act made HUD a cabinet-level department within the executive branch. Currently, the secretary of HUD is 13th in the presidential line of succession. Though the federal
government was involved in housing programs prior to 1965, they were not consolidated under a single cabinet-level department until the Johnson administration. The Servicemen's Readjustment Act of 1944 (better known as the G.I. Bill) contained provisions for returning soldiers to receive home loans from the national government. Other legislation such as the Housing Acts of 1937, 1949, 1950, 1954, and 1959 also highlighted the need for a coordinated effort for housing policies through a single department to administer them. HUD was developed as part of Johnson's Great Society. HUD has several core missions: promoting home ownership, providing rental assistance, supporting the health of cities, fighting housing discrimination, and assisting the homeless. Home ownership is perhaps HUD's oldest mission in the federal government. Legislation supportive of home ownership goes back to the National Housing Act of 1934. This act also created the Federal Housing Administration (FHA). FHA was created to help regulate home loans and help grow the homeownership market in the United States. It was instrumental in stabilizing the single-owner home market during the Great Depression. The FHA became part of HUD when the cabinet-level department was created in 1965, and it is one of the few federal agencies that is entirely self-funded. Primarily, FHA provides mortgage insurance and mortgage loans in the current market. FHA's impact on minorities is also extremely important. Though early FHA handbooks in the 1940s encouraged loan officers to take race into consideration, the organization has been instrumental in promoting minority ownership since the 1960s. Currently, FHA lends more money to central-city businesses than all other lenders and also lends to a higher percentage of African Americans and Hispanics, contributing greatly to the increase of minority single-family home ownership in the country. HUD has also been actively involved in providing rental assistance for low-income renters. During the Great Depression, many people lost their homes. The federal government started to become involved in providing housing stock as a way to solve the housing crisis. However, U.S. v. Certain Lands in City of Louisville (1935) and U.S. v. Certain Lands in the City of Detroit (1935) prevented the federal government
from using eminent domain to build public housing. As a result of these cases, the national government instead sent funds to local housing authorities to build public housing, and it still generally operates in this fashion today. During its early years in the late 1960s, HUD was primarily involved with public housing construction, and cities, with HUD's assistance, were largely engaged in building public housing stock. Projects from this period include Cabrini-Green (Chicago), Robert Taylor Homes (Chicago), and Pruitt-Igoe (St. Louis). These high-rise projects were plagued with problems, ranging from spatial segregation from the larger city to physical deterioration resulting from budget cuts as demand for public housing exceeded the supply. In the early 1970s, HUD fell under the scrutiny of President Richard Nixon. In the late 1960s and early 1970s, HUD grew very quickly in terms of size and scope. As the problems of the urban environment began to become more apparent (the Watts riots in 1965, the failure of high-rise public housing), the national government started to question HUD's ability to deal with these problems. Pruitt-Igoe was considered to be a complete failure and was razed in 1972 after only 18 years and millions of dollars spent by the national government trying to salvage it. On January 8, 1973, President Nixon announced a moratorium on all federal housing and community-development assistance. If HUD had already committed funds to a program and it was underway, it could continue, but no new funds would be approved. Major changes to HUD's mission occurred with the Housing and Community Development Act of 1974, signed into law by President Gerald Ford. This act shifted the national government's commitment to low-income renters from public-housing construction to "Section 8" housing. It also provided for funding of Community Development Block Grants (CDBG). With CDBG projects, federal money went directly from HUD to the city government with few limitations. Section 8, known more formally as the Housing Choice Voucher Program, is one of the primary ways in which HUD provides housing assistance to low-income renters. Though originally composed of three subprograms, Section 8 currently has tenant-based and project-based vouchers. Section 8 vouchers strongly
reflect the character of Republican Party ideals from the Nixon and Ford era of embracing free markets and choice. Tenant-based Section 8 gives a renter a voucher to take onto the open market; the tenant pays roughly 30 percent of household income toward rent, with the remainder guaranteed and paid by the local housing authority up to a cap, the Fair Market Rent (FMR), that HUD sets. HUD individually sets the FMR for almost every city in the country. For project-based Section 8, the local housing authority can link up to 25 percent of its total Section 8 allotment to specific apartments. Families who qualify for these apartments pay only 30 percent of their income to live there but lose it if they decide to move elsewhere either in public housing or on the private market. HUD has also been involved in the health of cities. One of the most popular programs for cities is the CDBG program. Currently, it is one of the oldest and most successful programs for HUD. CDBG is designed specifically as a flexible program to help communities receive funding without a tremendous amount of oversight. It provides grants on a formula basis to 1,180 local governments and states. Another important program for HUD is the HOME Investment Partnerships Program (HOME). It is the largest federal block-grant program given to the states and is designed to help create affordable housing for low-income residents. HOME encourages neighborhood development partially through providing homes for purchase to people who otherwise could not afford them. The original Housing Opportunities for People Everywhere (HOPE) program was part of the 1990 Housing Act. HOPE was considered a top priority for then-HUD secretary Jack Kemp in the George H. W. Bush administration. HOPE reflected Kemp's strong belief that single-family home ownership was key in transitioning low-income families off public housing. Early HUD programs for home ownership showed that only prime low-density locations were attractive for buyers. In the test markets, HOPE caused a net loss of public housing units, leaving some cities with only their less-desirable housing stock. Throughout the 1990s, there were various HOPE projects (II, III, and so on) that tinkered with the original design of HOPE. However, every HOPE program before the Clinton administration focused heavily on the goal of eventual ownership.
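To make the Section 8 voucher arithmetic described above more concrete, the short sketch below works through a hypothetical tenant-based voucher. It is a minimal sketch, not a full statement of HUD's rules: it assumes only the two points made in this entry (the tenant contributes roughly 30 percent of household income toward rent, and the housing authority's payment is capped at the Fair Market Rent). The dollar figures and the function name voucher_shares are illustrative inventions, not actual HUD data or software.

# A minimal sketch, in Python, of the tenant-based Section 8 arithmetic
# described in this entry. All figures are hypothetical.

def voucher_shares(monthly_income, asking_rent, fair_market_rent):
    """Return (tenant_share, authority_share) for one month's rent."""
    # The housing authority will not subsidize rent above the FMR cap.
    covered_rent = min(asking_rent, fair_market_rent)
    # The tenant pays roughly 30 percent of household income toward rent.
    tenant_share = 0.30 * monthly_income
    # The local housing authority pays the remainder, never less than zero.
    authority_share = max(covered_rent - tenant_share, 0)
    return tenant_share, authority_share

# Hypothetical household: $1,500 monthly income, $900 asking rent, $850 FMR.
tenant, authority = voucher_shares(1500, 900, 850)
print(f"Tenant pays ${tenant:.2f}; housing authority pays ${authority:.2f}")
# Prints: Tenant pays $450.00; housing authority pays $400.00

Under these hypothetical numbers, the tenant's 30 percent share comes to $450 and the housing authority pays the remaining $400, since the subsidized rent is capped at the $850 Fair Market Rent rather than the $900 asking rent.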
During Bill Clinton's administration, HOPE VI was created as a major HUD program. The Clinton administration severely cut the funds allocated to home ownership in the HOPE programs. Instead, it shifted focus toward increasing community involvement and participation. HOPE VI sought to revitalize the most severely distressed housing by upgrading and, in many cases, replacing the nation's public-housing stock. HUD wanted to change the face of public housing completely, in part to help erase its stigma. Through HOPE VI, HUD hoped to alleviate long-standing prejudices and avoid large concentrations of impoverished housing by scattering units throughout the community in mixed-income developments. HOPE VI also sought to help develop communities to encourage the growth of neighborhoods and businesses around revitalized housing stock. While initially successful, HOPE VI set extremely far-reaching goals and ultimately drew criticism for not fully attaining them. In every budget year of the George W. Bush administration, HOPE VI has been up for severe budget cuts and possible elimination. The Bush administration maintained that HOPE VI achieved its primary and original goals of removing the most distressed housing and that the program should thus be terminated. Instead, the Bush administration has strongly advocated low-income home-ownership programs. The American Dream Downpayment Assistance Act (ADDI) is one of the key programs for first-time homebuyers and is administered as part of the older HOME program with many of the same goals. HUD has also been involved in housing for the homeless since the 1980s. The McKinney-Vento Homeless Assistance Act of 1986 was the first major federal legislation concerning the homeless in the United States. It charged HUD with the mission of involving itself with the homeless population of the country. HUD works through local governments and nonprofit groups to try to provide emergency, transitional, and permanent housing solutions for the homeless. Between 1990 and 2006, funding for homeless assistance within the HUD budget grew from $284 million to $1.327 billion. President George W. Bush in 2002 announced a goal aimed at ending chronic homelessness in the United States. The chronically homeless are
estimated at only 10 percent of the total homeless population but consume approximately 50 percent of the allocated emergency funds. The Department of Housing and Urban Development originally emerged from emergency measures in the Great Depression to attempt to provide housing for the homeless. When President Lyndon B. Johnson elevated its programs and agencies to cabinet-level status, it reflected the commitment of the country to improving the quality of individual housing as well as communities. The mission and scope of HUD have shifted based on ideologies, trends, and priorities within government and society. Though many HUD programs have fallen under scrutiny and some scandals have plagued the department, the department has also been instrumental in providing housing access. Racial minorities as well as low-income citizens have been able to receive housing and/or FHA loans when they may have otherwise been ineligible on the private market. Since its inception, HUD has been active in helping provide funds for community development programs. See also appointment power; cabinet; executive agencies. Further Reading Hays, R. Allen. The Federal Government and Urban Housing: Ideology and Change in Public Policy. Albany: State University of New York Press, 1995; Thompson, Lawrence L. A History of HUD (2006). Available online. URL: http://mysite.verizon.net/hudhistory/. —Shannon L. Bow
Department of Justice The Justice Department, headed by the U.S. attorney general, is responsible for most of the federal government’s legal work, from investigating crime to bringing cases before the U.S. Supreme Court. The department’s top positions are presidential appointments requiring Senate confirmation. During the first 80 years following ratification of the U.S. Constitution, the national government’s legal business was handled solely by the attorney general, whose office was created by the Judiciary Act of 1789. The attorney general’s most important functions were providing legal advice to the president and the executive branch and representing the government at
the appeals court level. Institutionalization was slow and incremental. Many law officers requested—but did not receive—additional funding for clerks and an office. The first time additional funds were provided by Congress was in 1819 to hire a clerk; the following year, Congress provided for a messenger and expenses. In 1831, Congress provided $500 for law books, with an additional $1,000 in 1840. The first effort to create a justice department came under President James Polk in 1845; it was unsuccessful, as were subsequent efforts for the next 25 years. In fact, the attorney general received a lower salary than other cabinet officials and served only part time. Law officers were expected to maintain their private practices. The position was not made full time until 1853 during the administration of President Franklin Pierce. The years following the U.S. Civil War brought such an explosion in litigation that the government had to hire private attorneys at market rates to represent it in courts around the country. To control these expenses, Congress finally acted to consolidate legal affairs, creating the Justice Department in the Judiciary Act of 1870. Throughout this time, Congress had passed legislation creating law posts in other executive departments and agencies. In forming the Justice Department, legislators did not repeal those statutes, and by 1915, solicitors operated in State, Treasury, Navy, Interior, Commerce, Labor, and Agriculture, as well as in the Internal Revenue Service and the Post Office. During World War I, President Woodrow Wilson issued an executive order bringing these positions under the attorney general’s supervision, but the order lapsed at the end of the war. President Franklin D. Roosevelt also issued an order to bring the solicitors under more centralized control. Even today, legal affairs continue to be somewhat fragmented, although it is the Justice Department that takes the lead in litigation before the U.S. Supreme Court. The Justice Department also determines which side will be supported in court when the executive departments disagree on a legal question. Since the early 20th century, the Justice Department has grown into a large bureaucracy, making the attorney general primarily an administrator. A deputy attorney general and associate attorney general help oversee the administrative duties. The attorney gen-
eral’s legal duties largely have been delegated to the Office of Legal Counsel (OLC) and the Office of U.S. solicitor general (SG). The Office of Legal Counsel provides legal advice to the president and to executive branch agencies, as well as serving as the Justice Department’s general counsel. The advice usually involves important and complex legal issues; often the OLC resolves cases where the agencies are advancing conflicting interpretations of the law. In addition, the OLC reviews all executive orders and presidential proclamations for legality and form. The solicitor general’s office argues on behalf of the federal government when it is a party in a case before the U.S. Supreme Court. When the government is not a party yet has an interest in the outcome of a case, the solicitor general may file a friend of court brief—called amicus curiae—to support that side. The solicitor general also decides if a case should be appealed to the Supreme Court when the government loses in the court of appeals. Scholars have found that the solicitor general has a high degree of success before the Supreme Court, both in having cases accepted for review and winning on the merits. In addition to the Office of Legal Counsel and the Solicitor General, the Justice Department houses about 60 agencies and offices. The broad scope of federal law is evident in its six major divisions, each headed by an assistant attorney general: the civil division (which is the oldest, dating back to 1868), the criminal division, the civil rights division, the tax division, the antitrust division, and the division of environment and natural resources. Each division has responsibility for enforcing federal law within its purview. The Justice Department has formal authority over the 93 offices of U.S. attorneys, although historically they have operated with a degree of autonomy from “Main” Justice in Washington, D.C. The U.S. attorneys prosecute criminal cases and represent the government in civil cases brought before U.S. district courts in the nation and Puerto Rico, the Virgin Islands, Guam, and the Northern Mariana Islands. In addition to litigation, the Justice Department includes two agencies charged with investigation, the Federal Bureau of Investigation (FBI) and the Drug Enforcement Administration (DEA). The FBI was formed as a small detective bureau in 1908 with 34
investigators, was reorganized in 1924, and was renamed in 1935. Now, with more than 30,000 employees, including 12,500 special agents, it is one of the largest units in the Justice Department. The DEA is more targeted; it investigates the illegal manufacturing and trafficking of controlled substances. It evolved out of a Treasury Department agency that enforced laws regulating alcohol and, later, narcotics. In 1968, those duties were transferred to the Justice Department. The Justice Department also has responsibility for pardon and parole matters as well as other duties related to capturing and housing federal prisoners. The U.S. Marshals Service, for example, is charged with capturing fugitives, transporting prisoners, and protecting federal judges and witnesses. The marshals, like the attorney general, were created under the Judiciary Act of 1789; President George Washington appointed the first marshals that same year. The Federal Bureau of Prisons administers all federal penal institutions. From its beginning in 1930 with 11 federal prisons, the bureau has grown to operate more than 106 correctional facilities and detention centers and to oversee community corrections centers and home-confinement programs. Currently, there are more than 185,000 federal inmates. The Justice Department includes a number of programs to assist local, state, and tribal governments on crime control and justice issues. Organized under the Office of Justice Programs, these programs—the Bureau of Justice Statistics, the National Institute of Justice, the Bureau of Justice Assistance, the Office of Juvenile Justice and Delinquency Prevention, and the Office for Victims of Crime—provide coordination, technological support, training, research, and funds for victim compensation. The Office of Tribal Justice provides not only a contact point for Native American governments to deal with the Justice Department but also advises the federal government on tribal legal issues and serves as a liaison between the department, the federally recognized tribes, and federal, state, and local officials. There is also an Office on Violence Against Women that handles policy and legal issues relating to violence targeted at women. Among its responsibilities, the office allocates grant funding and assists tribal, local,
state, and federal efforts to enforce the Violence Against Women Act and other legislation. Also assisting local law-enforcement efforts are the Community Oriented Policing Services (COPS), an agency that helps to foster the community policing approach, and the Community Relations Service, which assists state and local governments in addressing and resolving racial and ethnic tensions within a community. In terms of national security and international affairs, the department plays a role as well. It houses the U.S. National Central Bureau for Interpol, the international police communications service. In addition, there is the Office of Intelligence Policy and Review, which oversees FBI search warrant requests to be filed with the Foreign Intelligence Surveillance Court. The terrorist strikes on September 11, 2001, brought changes to the department and many of its units. One of the first changes instituted by then attorney general John Ashcroft was a new proactive mission to stop any future terrorist strikes on the country. The mission led to an aggressive law-enforcement approach in the fight against domestic terrorism, including the use of preemptive detention, that is, detention of suspicious persons before evidence of a crime was established. Some of these policies have been controversial, with critics charging that the new approach violates civil liberties. Like the department's, the FBI's mission was broadened as well, "to protect and defend the United States against terrorist and foreign intelligence threats," in addition to its traditional mission to investigate violations of federal criminal law. The FBI began to place a greater emphasis on collecting, analyzing, and disseminating intelligence related to national security as well as to criminal threats. In addition to the new orientation, the department was restructured in response to 9/11. The Office of Domestic Preparedness and part of the Immigration and Naturalization Service were transferred from Justice to the Department of Homeland Security in 2002, although the Executive Office for Immigration Review has remained at Justice. The Bureau of Alcohol, Tobacco, Firearms, and Explosives, specifically its law-enforcement functions, was then transferred into the department from the Trea-
sury Department. Another change has been an effort to institute a closer working relationship between "Main" Justice and the U.S. attorneys. The department's budget also has grown since September 11, with net operating costs now averaging $24 billion per fiscal year. See also appointment power; cabinet; executive agencies. Further Reading Baker, Nancy V. Conflicting Loyalties: Law and Politics in the Office of Attorney General, 1789–1990. Lawrence: University Press of Kansas, 1993; Cummings, Homer, and Carl McFarland. Federal Justice: Chapters in the History of Justice and the Federal Executive. New York: Macmillan, 1937; Department of Justice home page. Available online. URL: http://www.usdoj.gov/. —Nancy V. Baker
Department of Labor The U.S. Department of Labor is a cabinet-level department that is responsible for occupational safety, wage and hour standards, unemployment insurance benefits, reemployment services, and economic data. The department’s purpose, according to the legislation that created it, is “to foster, promote and develop the welfare of working people, to improve their working conditions, and to enhance their opportunities for profitable employment.” While the present Department of Labor was established in 1913, Congress first created a Bureau of Labor in 1884 within the Department of Interior. The Bureau of Labor would become an independent department in 1888, but it was not granted cabinet status. The department was then absorbed into the new cabinet-level Department of Commerce and Labor in 1903 (32 Stat. 826; 5 U.S.C. 591). Ten years later, the Department of Commerce and Labor was reorganized into two separate cabinet departments with the passage of the Organic Act of the Department of Labor (Public Law 62–426), the culmination of organized labor’s goal of having its own voice in the cabinet. One of President William Howard Taft’s last official acts was to sign the bill into law on March 4, 1913, just hours before he would leave office. In
doing so, he expressed reservations because, as he wrote, "I think that nine departments are enough for the proper administration of government." He decided not to veto the bill "because my motive in doing so would be misunderstood." The new Department of Labor was organized into four bureaus that had been part of the Department of Commerce and Labor: Children, Immigration, Naturalization, and Labor Statistics, and employed 2,000 people. The law creating the separate department also authorized the creation of a U.S. Conciliation Division to mediate labor disputes. The first labor secretary was William Bauchop Wilson (appointed by President Woodrow Wilson, who was not a relation), a Scottish immigrant who started as a coal miner, became secretary-treasurer of the United Mine Workers Union (1900–08), and served as a member of the U.S. House of Representatives (1907–13) from Pennsylvania. In Congress, he had been one of the leading advocates of a separate Labor Department. Secretary Wilson would serve throughout President Wilson's two terms, leaving office on March 4, 1921. In 1915, Secretary Wilson convened a national conference of public-employment officials. This led to the creation of a national advisory committee and a national employment service that operated out of the Bureau of Immigration's 80 local offices. By 1917, the new employment service was finding jobs for 283,799 job seekers. In September 1916, Congress granted the department additional statutory authority by passing the Federal Employees Compensation Act, which provided financial security for federal government employees injured on the job. A new Office of Workers Compensation Programs was set up in the Department of Labor to process claims under the program. As the United States entered World War I, the Labor Department played a major role in administering the administration's war labor policies. These policies included the right of workers to bargain collectively, an eight-hour workday, and grievance procedures. A War Labor Administration (headed by Secretary Wilson) was set up to ensure labor peace, recruit women into the workforce (the Women's Bureau would become a permanent bureau after the war), and mobilize 3 million workers for agriculture, shipbuilding, and defense plants. Following the war,
the Justice Department arrested more than 4,000 aliens under the "Alien Exclusion Act of 1918." Secretary Wilson insisted that the Immigration Bureau adhere to due process, and the agency deported more than 550 "dangerous aliens" during the "Red Scare" that followed the rise of the Bolsheviks in Russia. Members of Congress were outraged by Wilson's decision, and the labor secretary was threatened with impeachment. During the 1920s, under the leadership of James J. Davis (1921–30), the department focused on administering the restrictive immigration laws enacted by Congress. The 1921 Emergency Quota Act (ch. 8, 42 Stat. 5) and the 1924 National Origins Act (ch. 190, 43 Stat. 153) were intended to restrict immigration from Asia and Southern and Eastern Europe and to reduce the number of immigrants permitted to enter the United States to about 150,000 annually. The department's Immigration Bureau was responsible for implementation of the laws. William N. Doak, a railroad-union official, became labor secretary at the height of the Great Depression (1930) and made combating wage cutting a departmental priority. Congress passed the Davis–Bacon Act (ch. 411, 46 Stat. 1494), sponsored in the Senate by the former labor secretary, which required contractors on federal construction projects to pay the prevailing local wage. The act was intended to protect white union workers, whose wages were being undercut by contractors who were hiring nonunion African-American workers at lower wage rates. The election of Franklin D. Roosevelt in 1932 and the "New Deal" marked a significant moment for the Department of Labor. Roosevelt appointed Frances Perkins, who had served as his industrial commissioner during his two terms as New York governor, as secretary of labor. Perkins was the first woman appointed to a cabinet post and would serve for 12 years and three months (1933–45), longer than anyone else as labor secretary. Early in her tenure, Perkins dealt with abuses of power by the department's Immigration Bureau, ending the large-scale raids in which the bureau's officers arrested hundreds of aliens and then deported them. She disbanded the division of the bureau responsible for the raids and replaced the commissioner of immigration. In 1940, she oversaw the transfer of the Immi-
gration Bureau to the Department of Justice (President's Reorganization Plan No. V of 1940). Perkins led the department through the Great Depression and World War II. The Wagner–Peyser Act (48 Stat. 113) institutionalized the U.S. Employment Service; the Fair Labor Standards Act (52 Stat. 1060) was the first general wage-and-hour law adopted by Congress; the National Labor Relations Act (ch. 372, 49 Stat. 449), also known as the Wagner Act in honor of its Senate sponsor, Robert F. Wagner, Sr., of New York, was the "charter of organized labor," legalizing union organizing and protecting collective-bargaining rights. During the first 10 years of the Wagner Act, the number of union members increased from 3.8 million in 1935 to 12.6 million in 1945. Perkins was responsible for implementation of the Civilian Conservation Corps (CCC), one of the early New Deal employment initiatives (which operated from 1933 to 1942), and she is regarded as the principal architect of Roosevelt's Social Security legislation. In 1934, she established a Division of Labor Standards to encourage employers to voluntarily improve working conditions. In 1939, conservative members of Congress sought to impeach Perkins because of her refusal to deport Harry Bridges, the leader of the longshoremen on the West Coast. It was alleged that Bridges was a communist. The effort to impeach her failed in the House of Representatives. Perkins resigned in June 1945 to lead the U.S. delegation to the International Labor Organization (ILO) conference in Paris and was replaced by Lewis B. Schwellenbach, a former U.S. senator from Washington State and a former federal judge. Schwellenbach contended with a wave of strikes that took place immediately after the Japanese surrender as labor unions sought to make up for the concessions they had made during World War II. In 1945, there were 4,740 strikes in the United States; during the following year, there were 4,750 strikes. The labor unrest created an antiunion sentiment that helped elect a Republican majority to Congress in the 1946 midterm elections, ending 14 years of dominance by the Democrats. Over President Harry S. Truman's veto, Congress enacted the Taft-Hartley Act (ch. 120, 61 Stat. 136), which amended the National Labor Relations Act by imposing limits on union activity. These limits included bans on
jurisdictional strikes, secondary boycotts, closed shops, and political contributions by unions. The law also permitted states to pass "right-to-work" laws that outlawed union shops, and it authorized the federal government to obtain injunctions against strikes that "imperiled the national health or safety." The law separated the U.S. Conciliation Service from the Labor Department (since the Republican leadership in Congress believed that the Labor Department was overly sympathetic to organized labor), making it an independent agency: the Federal Mediation and Conciliation Service. During this period, Secretary Schwellenbach created an Office of International Labor Affairs to facilitate the exchange of labor-related information between the United States and foreign governments. As wartime agencies were dismantled, some of their units were transferred to Labor. The Bureau of Veterans Reemployment Rights, the Bureau of Employment Security, and the Apprentice Training Service were transferred into the agency from the War Manpower Commission. In June 1948, Secretary Schwellenbach died in office. He was replaced by Maurice J. Tobin, the governor of Massachusetts, who was selected by President Truman because he was an effective campaigner who Truman believed could help him carry Massachusetts in the presidential election. During the Korean War (1950–53), Truman issued an executive order granting the labor secretary authority to mobilize human resources for wartime production. Tobin created a Defense Manpower Administration, which promoted improved workplace conditions, attempted to minimize work stoppages, and mobilized women, older workers, and minorities for the war effort. In the late 1950s, the Labor Department was enlisted in the federal government's "War on Organized Crime." The U.S. Senate Permanent Investigations Committee (known as the McClellan Committee, after its chair, John McClellan of Arkansas) conducted hearings into union corruption, finding that unions and employers often misused welfare and pension funds. As a result, Congress enacted the Welfare and Pension Plans Disclosure Act, which required the administrators of these plans to produce annual reports disclosing their activities and finances. The Department of Labor was assigned responsibility for
administering the law, which was intended to make the operation of the plans more transparent to employees and to the public at large. In 1959, Congress adopted additional legislation in this area: the Labor Management Reporting and Disclosure Act of 1959 (Public Law 86–257), commonly known as the Landrum–Griffin Act after its prime sponsors. This law mandated secret ballots in union elections and the filing of union financial reports with the Department of Labor, and it banned communists and convicted felons from union office. To administer these statutes, the department created a Bureau of Labor-Management Reports. During the early 1960s, the department, on the basis of new federal legislation, began to focus on employment training programs. There was a growing recognition that manufacturing in the United States was changing and that emerging technologies would lead to fewer jobs, especially for the unskilled. The Area Redevelopment Act of 1961 (Public Law 87–27) and the Manpower Development and Training Act of 1962 (Public Law 87–415) provided retraining funds for the unemployed and empowered the department to identify employment trends and undertake research related to job training and development. Arthur Goldberg, a labor lawyer who was appointed by President John F. Kennedy as labor secretary in 1961, started an Office of Manpower, Automation, and Training (OMAT) to administer the new programs. In 1963, Goldberg's successor, W. Willard Wirtz, created an Office of Manpower Administration (Secretary's Order 3–63) that absorbed OMAT, other job-training and education programs, the Bureau of Employment Security, and the Bureau of Apprenticeship and Training; in 1964, it also assumed responsibility for the Neighborhood Youth Corps, one of President Lyndon Johnson's programs in the "War on Poverty," aimed at 14–21 year olds. Johnson's "Great Society" had a significant impact on the department, as a number of new programs were created, several of which were assigned to Labor. In 1966, the Office of Manpower Policy, Evaluation and Research replaced the Office of Manpower Administration that had been created just three years earlier. An Office of Federal Contract Compliance was established by Executive Order 11246 in the department
to enforce provisions of the Civil Rights Act of 1964 that prohibited employment discrimination by federal contractors. President Johnson asked Congress to consider reuniting the departments of Commerce and Labor. It was his contention that the two departments had similar goals and that they would have more efficient channels of communication in a single cabinet department. Congress never acted on the president's recommendation. Following Richard Nixon's election in 1968, many of the Great Society programs, including some within the department, were eliminated or reorganized. Job training remained a major focus of the department as the agency's programs were consolidated in 1969 into a new U.S. Training and Employment Service. The Job Corps, a program that had been part of the Office of Economic Opportunity (the agency Johnson established to lead his War on Poverty), was shifted to Labor as the Nixon administration dismantled the Office of Economic Opportunity. Even as the War on Poverty was dismantled, the department maintained the Johnson administration's commitment to affirmative action. In June 1969, the department issued the "Philadelphia Plan" order (named for the city where it was first implemented) to require affirmative action in the construction trades, which historically had been closed to members of minority groups. In 1973, Congress passed the Comprehensive Employment and Training Act (CETA) (Public Law 93–203), which gave the states additional responsibility for job training by providing financial support for states to implement job-training and education programs. This program reflected President Nixon's belief in a "New Federalism" that would shift responsibility for domestic policy from the federal government to the states. The 1970s did see the department receive additional statutory responsibilities. The Occupational Safety and Health Act of 1970 (84 Stat. 1590) authorized the department to set and enforce safety and health standards in the workplace and established a new agency within the department, the Occupational Safety and Health Administration (OSHA), to oversee enforcement. The department's involvement in workplace safety would be extended when Congress
shifted mine safety and health regulation from the Department of Interior to a new Mine Safety and Health Administration (MSHA) in the Department of Labor. In 1974, responding to a growing concern about workers losing pension benefits due to fund mismanagement, Congress passed the Employee Retirement Income Security Act (ERISA) (Public Law 93–406), which gave the department oversight over the management of private pension plans. The election of Ronald Reagan in 1980 had a significant impact on the Department of Labor. Reagan was committed to reducing the size of the federal government and providing "regulatory relief" by eliminating or revising many federal regulations. Under Reagan's labor secretary, Raymond J. Donovan, the department scaled back its regulatory functions. Many OSHA standards were reevaluated with the intent of making them less burdensome. Similar changes were undertaken at MSHA, in the administration of ERISA, and at the Office of Federal Contract Compliance. The number of employees involved in regulatory functions was reduced substantially: the department's discretionary spending was cut by 60 percent during President Reagan's first term (1981–85), and its workforce was cut by 21 percent during the same period. Funding for CETA was initially cut by more than 50 percent (from $8 billion to $3.7 billion), and the program was replaced in 1983 by the Job Training Partnership Act (Public Law 97–300), which used federal funds to train economically disadvantaged individuals and displaced workers with the goal of moving them into private-sector employment. During the administration of George H. W. Bush (1989–93), his two labor secretaries, Elizabeth Dole (1989–90) and Lynn Morley Martin (1991–93), took an activist approach to a number of issues. The Office of Federal Contract Compliance undertook a "glass-ceiling initiative" to reduce barriers for advancement by women and minorities in corporations doing business with the federal government. OSHA and MSHA stepped up their enforcement efforts and sanctioned violators of health and safety regulations. In 1990, a Secretary's Commission on Achieving Necessary Skills (SCANS) was empaneled to determine the skills that would be needed by young people as they entered the workforce. The commission's report stated that young people
U.S. Secretary of State Condoleezza Rice speaking at the State Department (Getty Images)
needed to develop competencies in five areas (resources, interpersonal skills, information, systems, and technology) to succeed in the workplace. In 1993, Robert Reich, a Harvard University professor, was appointed secretary of labor by President Bill Clinton. Reich brought attention to the need to protect workers, actively campaigning against sweatshops, unsafe work sites, and health insurance scams. The first two years of Clinton's presidency saw a number of labor-related bills enacted by Congress. The Retirement Protection Act (Public Law 103–465) protected workers in underfunded pension plans. The department administered the Family and Medical Leave Act of 1993 (Public Law 103–3), which provided workers with up to 12 weeks of unpaid leave to care for a new child or take care of an ill family member. The School-to-Work Opportunities Act of 1994 was intended to ease the transition from school to work for high school students who do not go to college. The department, under President George W. Bush (2001–present) and his labor secretary, Elaine L. Chao, established a Center for Faith-Based and Community Initiatives to provide support for faith-based organizations providing job training and other services to workers. The program reflects the
dent’s broader interest in encouraging a greater role for faith-based organizations in providing government supported programs. The department is presently (2007) organized into eight large program agencies; the Employment and Training Administration; the Employment Standards Administration; the Occupational Safety and Health Administration; the Mine Safety and Health Administration; the Veteran’s Employment and Training Service; the Employee Benefits Security Administration; the Pension Benefit Guaranty Corporation, and the Bureau of Labor Statistics. In 2004, the department employed 1,650 and had a fiscal-year budget of $51.4 billion. Secretary Elaine L. Chao was appointed by President George W. Bush. See also appointment power; cabinet; executive agencies. Further Reading Breen, William J. Labor Market Politics and the Great War: The Department of Labor, the States, and the First U.S. Employment Service, 1907–1933. Kent, Ohio: Kent State University Press, 1997; Grossman, Jonathan. The Department of Labor. New York: Praeger, 1973; Mathematica Policy Research, Inc. Building Relationships Between the Workforce Investment System and Faith-Based and Community Organizations: Background Paper. ETA Occasional Paper, 2006–02; Reich, Robert B. Locked in the Cabinet. New York: Knopf, 1997; U.S. Department of Labor, Employment and Training Administration, Office of Policy Development and Research. Washington, D.C.: U.S. Department of Labor, 2006; U.S. Congress, Congressional Budget Office. A Guide to Understanding the Pension Benefit Guaranty Corporation. Washington, D.C.: Government Printing Office, 2005; U.S. Department of Labor. U.S. Department of Labor: The First Seventy-Five Years, 1913– 1988. Washington, D.C.: U.S. Government Printing Office, 1988. —Jeffrey Kraus
Department of State Often referred to as the State Department, the U.S. Department of State is the oldest and one of the most respected and prestigious of the cabinet-level
agencies in the federal government. Originally known as the Department of Foreign Affairs, it was the first agency created under the new U.S. Constitution (July 21, 1789), and when President George Washington signed the bill into law on July 27, 1789, the Department of Foreign Affairs became the federal government's first established cabinet post. Originally assigned to also deal with some domestic responsibilities, the new department soon became exclusively responsible for foreign policy matters. Today, the State Department is the chief foreign-policy implementing agency of the government and, as such, holds a tremendous amount of potential power. It is the equivalent of the foreign ministries in other nations and serves the president of the United States as other foreign ministries serve their presidents, prime ministers, or chancellors. The Department of State is headed by the secretary of state, who is appointed by the president with the advice and consent of the U.S. Senate. Often, the secretary of state is the principal foreign-policy adviser to the president, although that role is sometimes assumed by the president's national security advisor. As an indication of the prestige and influence of the secretary of state, it should be noted that the nation's first secretary of state was Thomas Jefferson, who served in this position in President George Washington's first term. John Jay, a holdover from the preconstitutional government, continued to handle foreign affairs under the newly formed constitutional government, but he was never formally given the post of secretary and served only on an interim basis until Jefferson, who had been serving the United States as minister to France, could return and assume the new post as secretary of state. Jefferson was followed by several other very prominent and accomplished secretaries, first by Edmund Randolph, then Timothy Pickering, and then John Marshall, who would later become one of the nation's most accomplished chief justices; he was followed by James Madison. Others who served as secretary of state include James Monroe, John Quincy Adams, Henry Clay, Martin Van Buren, Daniel Webster, James Buchanan, William H. Seward, William Jennings Bryan, Charles Evans Hughes, Henry Stimson, Cordell Hull, Dean Acheson, John Foster Dulles,
Henry Kissinger, Edmund Muskie, George Shultz, James Baker, Madeleine Albright, and Colin Powell. The list reads like a "who's who" of U.S. politics and indicates the high esteem and great importance of the position of secretary of state. Many on this list rose to become president, and at one time (although less so today), the position of secretary of state was seen as a stepping-stone to the presidency. It is considered one of the top jobs in the federal government. In 1997, President Bill Clinton appointed Madeleine Korbel Albright to be the first female secretary of state. In 2001, President George W. Bush appointed the first African American, Colin Powell, to be secretary of state, and in 2005, President Bush appointed Condoleezza Rice, the first female African-American secretary. The Department of State is headquartered in the Harry S. Truman Building in Washington, D.C.'s Foggy Bottom district, a few blocks away from the White House. In addition to formulating and implementing the foreign policy of the United States, the Department of State also is responsible for protecting and assisting United States citizens abroad, running the many U.S. embassies overseas, assisting U.S. businesses abroad, coordinating many activities of the United States overseas, performing a public-information role, and carrying out other related activities. Most employees of the Department of State are civilians and are part of the Foreign Service and the Civil Service. The budget of the Department of State represents a bit more than 1 percent of the total federal budget (roughly 12 cents per U.S. citizen per year). The primary task of the Department of State is to execute the foreign policy of the United States. The United States has a separation-of-powers system that disperses power to the three branches of government. Thus, all three share in the making of policy. Primarily, the president and Congress make U.S. foreign policy. While the Congress has legislative power as well as the power to regulate commerce, raise armies, and declare war, along with a host of other foreign-policy-related powers (see: Article I of the U.S. Constitution), over time, the presidency has emerged as the primary maker of foreign policy for the nation. With the constitutional authority to receive ambassadors and as the nation's chief executive officer and commander
in chief, the president has formidable foreign-policy tools at his disposal. The president’s role as chief executive officer places the president at the center of the administrative vortex of policy making and implementation. Given that the president is “in charge” of the Departments of State and Defense, as well as other departments that have some influence in foreign affairs, it is presumed that the president will be the primary setter and implementer of foreign-policy for the United States. But even with this primary role, the president still shares key powers with the Congress, and at various times, the Congress has attempted to set policy for the nation in opposition to the will of the president. Two examples of this can be seen, first, in the Congress forcing President Ronald Reagan to accept sanctions imposed by the Congress on the apartheid-era government of South Africa, even over the president’s strong opposition, and in the Congress’s efforts in 2007 to force President George W. Bush to set a timetable to withdraw U.S. troops from Iraq. Often, the Department of State is drawn into the middle of such battles. Given that the secretary serves at the pleasure of the president, it should not surprise anyone that the secretary is the spokesperson for the president and the administration before Congress. Quite often, the secretary will be called on to testify before Congress on matters mundane and controversial, and such testimony sometimes leads to heated debate and argument. As the lead agency in the formulation and implementation of U.S. foreign policy, the State Department is at the cutting edge of policy. The secretary is customarily a very close adviser to the president (although there have been some exceptions to this rule), and the secretary of state is part of what political scientist Thomas E. Cronin has called “the inner circle” of the president’s cabinet. Unusual is the secretary of state who does not have ready access and strong influence in an administration. Often, the secretaries of state and defense are rivals in an administration. This is partly a function of normal bureaucratic in-fighting but is more likely the result of the very different missions and approaches of these two key agencies. The State Department is the “diplomatic” arm of the federal government, and the Defense Department is the “military” arm of the government. Sometimes, these
two roles conflict, and open hostility may break out between these two agencies. In the first term of the administration of President George W. Bush, such battles were regular and intense, with Secretary of State Colin Powell often facing off against Defense Secretary Donald Rumsfeld. Powell often sought a more diplomatic course of action, as when he insisted that the president go to the United Nations in the lead-up to the war against Iraq. Rumsfeld, impatient with the slower pace and compromises that sometimes plague the diplomatic side, wanted a more unilateral and militaristic solution to problems. It was a clash between two strong-willed men but also a clash between two different approaches to dealing with the world. Powell and Rumsfeld clashed often, and usually caught in the middle was the National Security Advisor to the president, Dr. Condoleezza Rice. In the end, Rice sided with Rumsfeld, and Powell was pushed out of the inner core of decision making. This narrowed the range of options placed before the president and conveyed the false impression that there was greater unanimity of means and less dissent over the militaristic approach than there really was. Most commentators have viewed this as a dysfunctional decision-making system. The Bush administration is not alone in its failure to use fully and adequately the various agencies designed to help in the making and implementing of policy. During the Nixon presidency, the president and his National Security Advisor, Dr. Henry Kissinger, held such tight control over foreign policy that Secretary of State William P. Rogers and Secretary of Defense Melvin R. Laird were often intentionally kept "out of the loop" on key foreign-policy decisions. The president and Kissinger did not trust the bureaucracies of State and Defense and thus kept tight control in the White House over information and policy. Unlike the case of the Bush administration, in the Nixon years, this was precisely what the president and his National Security Advisor wanted. In this sense, a dysfunctional system was intentionally implemented so as to "freeze out" State and Defense, thereby giving the White House tight control over policy. In general, U.S. foreign policy has been most effective when a strong secretary of state, a strong secretary of defense, a fair-minded and strong National
Security Advisor, and a strong president can work together to form a cohesive unit. This is not to say that they should all be marching to the same beat—such unanimity can lead to the problem of "group-think." But if all the key participants are strong and well respected, they can bring their concerns, vision, and ideas to the table and allow the president to weigh a wide range of options and alternatives as policy is being set. When one or more of the cogs in this wheel are weak or excluded, almost invariably, problems occur. Some critics charge that the State Department is too big to be effective. Others see the preferred mission of the department—diplomacy—as a problem for the world's only superpower. Such "unilateralists" and "militarists" argue that the United States must flex its military muscle and be bold in asserting its interests in the world. Still others see diplomacy as the best way for the United States to achieve its goals at a reasonable cost. Different presidents use their foreign-policy agencies differently. Some (Jimmy Carter and George H. W. Bush, for example) prefer diplomacy where possible and use of the military only where necessary. Other presidents (George W. Bush, for example) prefer a more unilateral and militaristic approach to foreign policy. In this, the U.S. Department of State is often caught in the middle of political and partisan battles. The more politicized the U.S. State Department becomes, the less able it is to accomplish fully the tasks for which it is held responsible. In such cases, it is the foreign policy of the United States that suffers. See also appointment power; cabinet; executive agencies; foreign-policy power. Further Reading Berridge, G. R. Diplomacy: Theory and Practice. 2nd ed. New York: Palgrave Macmillan, 2003; Dorman, Shawn. Inside a U.S. Embassy: How the Foreign Service Works for America. Washington, D.C.: American Foreign Service Association, 2003; Freeman, Charles W., Jr. Arts of Power: Statecraft and Diplomacy. Washington, D.C.: United States Institute of Peace Press, 1997; Plischke, Elmer. U.S. Department of State: A Reference History. Westport, Conn.: Greenwood Press, 1999. —Michael A. Genovese
Department of the Interior The executive branch of the U.S. government contains 15 executive departments. Four of these are characterized as inner departments because their leaders (the secretaries of state, the treasury, and defense, as well as the U.S. attorney general) routinely emphasize their loyalty to the president and frequently interact with him or her. The state, the treasury, and the war departments were established in 1789. The U.S. Department of the Interior was the fourth department to be established, on March 3, 1849, appropriately enough at the conclusion of the administration of President James K. Polk, the only speaker of the U.S. House of Representatives to go on to serve in the presidency. So began the functioning of the first outer cabinet department. These departments focus their energies on building coalitions that include congressional committees and interest groups. Relatedly, most Interior secretaries have backgrounds either as members of Congress or as interest-group actors. Stewart Lee Udall, Rogers C. B. Morton, and Manuel Lujan all served in the U.S. House, and Dirk Kempthorne served in the U.S. Senate; James Watt and Gale Norton worked together at the Mountain States Legal Foundation; and Bruce Babbitt served as president of the League of Conservation Voters. Insofar as only one other outer-cabinet department established in the 19th century—Agriculture in 1862—still exists, the development of Interior was a precursor for cabinet building in the 20th and 21st centuries. In the aftermath of the Mexican-American War, which was conducted during the Polk administration, and the Gadsden Purchase, the size and concomitantly the stewardship responsibilities of the U.S. government grew enormously. Nearly one-third of land in the United States is owned by the U.S. government; this ownership is heavily concentrated in the West, and 70 percent of this land is administered by the U.S. Department of the Interior. Attitudes concerning stewardship have varied considerably among individual secretaries and within different agencies of Interior. Secretary of the Interior Carl Schurz in the late 1870s reintroduced the concept of forest reserves, which had been developed by New England and Middle Atlantic colonists, but by 1905, the General Land Office of Interior was so associated
Poster for the National Park Service promoting travel to national parks (Library of Congress)
with disposing of government lands and natural resources that Gifford Pinchot, self-styled as "America's First Forester," was able to persuade President Theodore Roosevelt to transfer the forest reserves to Agriculture. Comity between Interior and Agriculture has waxed and waned. Particularly notable have been the jurisdictional disputes involving Richard Ballinger and Harold Ickes of Interior and Pinchot of Agriculture, as well as the cooperative environmental and conservation ventures entered into by Secretary of the Interior Stewart Udall and Secretary of Agriculture Orville Freeman. The latter two individuals served in their respective positions during the entirety of the John F. Kennedy and Lyndon B. Johnson administrations. Major components of the U.S. Department of the Interior include the National Park Service, the Fish and Wildlife Service, the Bureau of Land Management, the U.S. Geological Survey, and the Bureau of
Indian Affairs. The department deals with diverse topics and diverse people. U.S. Secretary of the Interior Harold Ickes even had responsibility for the construction of public housing during the 1930s, and during his tenure (1933–46) he also made pioneering efforts in affirmative action. Native Americans are the most notable ethnic group associated with the department. In part, this stems from the forced relocation of American Indians to the West during the 19th century. Stewart Lee Udall, during his service in the U.S. House representing the Second Congressional District of Arizona, had more American Indians in his district than any other member. In recent years, he has devoted part of his legal practice to assisting Navajo uranium miners. Presidents and secretaries have varied in their popularity among Native Americans, with partisanship playing little or no role in evaluations. For example, Republican presidents Calvin Coolidge and Richard M. Nixon are generally well regarded by Native Americans and their advocates, while Ronald Reagan's policies, which in many ways undermined Nixon's Indian policies, were met with hostility. President Richard M. Nixon became strongly associated with the policy of self-determination. He was the first 20th-century president to emphasize the binding legal quality of the treaties into which the U.S. government had entered with tribes; the Bureau of Indian Affairs actively pursued preferential hiring during his presidency; and he sought to foster economic development among the tribes. The budget of the Bureau of Indian Affairs was doubled. Reagan vaguely referred during his 1980 presidential campaign to self-determination and advocated the development of natural resources by tribes. Consequently, he did receive support from a number of Indian organizations. By 1983, the support of Indian organizations such as the National Tribal Chairman's Association had collapsed in response to dramatic proposals for Indian budget cuts and the announcement by the Bureau of Indian Affairs that it would be closing a number of schools, apparently in violation of treaty commitments. The support of John Melcher (D-MT) of the Senate Select Committee on Indian Affairs helped ameliorate some of the more
onerous proposals. Secretary of the Interior James Watt’s relaxation of strip-mining regulations threatened Indian lands. The U.S. Department of the Interior not only deals with the often parched lands of the west but also with the outer continental shelf of the United States. Secretary of the Interior Stewart Lee Udall labored to take the western label off the department and relatedly recommended to Nixon adviser Robert Finch that Rogers C. B. Morton of Maryland succeed Walter Hickel of Alaska as secretary of the interior. Despite Udall’s efforts, all of Morton’s successors have hailed from the west. Secretary Udall was more successful in persuading Wilbur Mills (D-AR) to support the establishment of the Land and Water Conservation Fund in 1964. Stewart Lee Udall noted the following in an interview on July 22, 1997: “But I went to Wilbur Mills, the old chairman of the House Ways and Means Committee who was the last of these powerful figures, just literally I went hat in hand to him, and went to him and said this is our idea take the royalties from the continental shelf rentals; put them in this fund and then roll the money over in terms of buying park lands, things of that sort.” Royalties garnered by outer continental-shelf leases granted by the U.S. Department of the Interior are second only to the income tax as a source of revenue for the U.S. Treasury. President George W. Bush signed legislation increasing the number of leases late in 2006. Senator Mary Landrieu (D-LA), a member of the Senate Appropriations Committee, observed that the legislation provides that an increased share of the royalties goes to coastal states, including Louisiana, which is only fair since oil and natural-gas extraction has contributed to coastal erosion. Coastal erosion contributed to storm surge from Hurricanes Katrina and Rita in 2005, resulting in unnecessary loss of life, property, and infrastructure. Appropriately enough, since Louisiana accounts for more than one-half of coastal erosion in the United States, the National Wetlands Research Center of the U.S. Geological Survey is located in Lafayette, Louisiana, with additional offices located in Baton Rouge, Louisiana; Gulf Breeze, Florida; Stennis Space Center, Mississippi, and Corpus Christi, Texas. Research focuses on migratory birds, waterfowl, and global climate change. Experiments are being con-
ducted to determine how native plants can be used for coastal wetland restoration. The U.S. Department of the Interior compiles data on minerals located throughout the world. It at times has been responsible for maintaining mineral reserves for the U.S. military. During the administrations of Presidents William Howard Taft and Woodrow Wilson, it was required to maintain oil reserves for the navy. The secretary of the navy had this responsibility transferred in 1920 out of Interior, but U.S. Secretary of the Interior Albert Fall saw to it that it was transferred back to Interior in 1921, early in the administration of Warren G. Harding. What resulted was the Teapot Dome Scandal and Fall's imprisonment following his conviction for accepting a bribe from lessee Harry F. Sinclair of Mammoth Oil. The Bureau of Land Management frequently leases land for private purposes, notably cattle grazing. Under the authority of the Taylor Grazing Act of 1934, it has, with the assistance of clientele groups, promulgated regulations to prevent overgrazing. The advisory boards to district managers have customarily been composed of the leaders of stock people's associations. Support for programs can be increased and protected against possible future threats through this process of co-optation: when an agency's clients have input into the decision-making process, they identify with the decision, and their support of the agency or bureau is reinforced. With such support, bureaus are likely to persist. Clients can also successfully resist ideas put forth by secretaries. When Secretary of the Interior Bruce Babbitt tried to impose what he considered to be realistic grazing fees, he was successfully opposed. Similarly, his efforts to revise the Mining Law of 1872 were spurned. Changes at Interior have generally been incremental. U.S. Secretary of the Interior Harold Ickes wanted to replace it with a department of natural resources. More than 20 years later, President Richard M. Nixon proposed replacing the outer departments with four departments, including a department of natural resources. His proposal could not overcome congressional resistance. One Nixon innovation that did take place was the establishment of urban parks under the purview of the National Park Service. President Nixon promoted this because of his own inability as a youth to travel to
national parks. Gerald R. Ford, his successor and the only person to have served both as a park ranger for the National Park Service and as president, presided over the establishment of a number of urban national parks, including one in Boston. From its establishment during the administration of James K. Polk, to the expansion of its responsibilities to include the establishment of the refuge system during the presidency of Theodore Roosevelt, to President George W. Bush's signing of legislation in 2006 to extend National Park Service status to World War II internment camps, the U.S. Department of the Interior continues to play a major and responsive role in the political life of the United States. It embodies incrementalism in the political process. See also appointment power; cabinet; executive agencies. Further Reading Cook, Samuel. "Ronald Reagan's Indian Policy in Retrospect: Economic Crisis and Political Irony." Policy Studies Journal 24 (Spring 1996): 11–26; "Congress Still Spending on Flood Insurance." American Press (December 30, 2006): A3; Cronin, Thomas E., and Michael A. Genovese. The Paradoxes of the American Presidency. 2nd ed. New York: Oxford University Press, 2004; Foss, Phillip. Politics of Grass. Seattle: University of Washington Press, 1960; Gould, Lewis. Lady Bird Johnson: Our Environmental First Lady. 1988. Reprint, Lawrence: University Press of Kansas, 1999; Lehmann, Scott. Privatizing Public Lands. New York: Oxford University Press, 1995; McCoy, Donald R. Calvin Coolidge: The Quiet President. Newtown, Conn.: American Political Biography Press, 1998; Prucha, Francis. The Great Father: The United States Government and the American Indians. Vols. 1 and 2, unabridged. Lincoln: University of Nebraska Press, 1984; Selznick, Philip. TVA and the Grassroots: A Study in the Sociology of Formal Organization. Berkeley: University of California Press, 1949; Sirgo, Henry. Establishment of Environmentalism on the U.S. Political Agenda in the Second Half of the Twentieth Century—The Brothers Udall. Lewiston, N.Y.: The Edwin Mellen Press, 2004; Sussman, Glen, Byron Daynes, and Jonathan West. American Politics and the Environment. New York: Longman Publishers, 2002; U.S. Geological Survey, U.S.
Department of the Interior. Minerals Yearbook Area Reports: International 2004—Africa and the Middle East. Vol. 3. Washington, D.C.: United States Government Printing Office, 2006; Warshaw, Shirley Anne. The Domestic Presidency: Policy Making in the White House. Boston: Allyn & Bacon, 1997; Werner, M.R., and John Starr. Teapot Dome. New York: The Viking Press, 1959. —Henry B. Sirgo
Department of Transportation The U.S. Department of Transportation is the cabinet-level department of the federal government that deals with transportation. It was created by the Department of Transportation Act (80 Stat. 931), signed into law on October 15, 1966, by President Lyndon B. Johnson. The department’s first secretary, Alan S. Boyd, took office on January 16, 1967, and the department officially came into existence on April 1, 1967 (Executive Order 11340). Historically, transportation matters were concerns of the states and localities. Today, the federal government has taken complete control, through the U.S. Constitution’s commerce clause, over aviation and railroads, preempting the states. In other fields, the federal government has become more involved, primarily through Congress’s spending power, whereby Congress has created grant programs to encourage the states to undertake certain activities (for example, the Interstate Highway System) or has imposed conditions on the receipt of such funds, requiring the states to change their policies (notably the federal laws that reduced federal highway trust-fund payments to states that did not impose a 55-mile-per-hour speed limit or increase their drinking age to 21). While the Transportation Department was established in 1967, the federal government’s interest in transportation dates back to the early years of the republic. The Lighthouse Service (established 1792) helped foster trade and transportation. In 1806, President Thomas Jefferson authorized the construction of the Cumberland Road, which would be built between Cumberland, Maryland, and Vandalia, Illinois, between 1811 and 1839. An early proponent of federal involvement in transportation was Albert
Gallatin, Jefferson’s Treasury secretary, who recommended in his Report on the Subject of Public Roads and Canals (1808), that the federal government subsidize internal improvements to meet the new republic’s transportation needs. The first federal legislation concerning railroads was the Pacific Railway Act of 1862 (12 Statutes at Large, 489), which provided land and financial support for the construction of a transcontinental railroad. In 1874, a bill proposing a Bureau of Transportation was introduced in Congress by Representative Laurin D. Woodworth of Ohio. The Office of Road Inquiry (the predecessor to the Federal Highway Administration) was established in the Department of Agriculture in 1893. In 1916, President Woodrow Wilson signed into law the Federal Aid Road Act (ch. 241, 39 stat. 355), known as the Good Roads Act, the first federal law providing money to the states to build roads for delivering the mail. In June 1965, Federal Aviation Agency (FAA) administrator Najeeb Halaby recommended to President Johnson that a Department of Transportation be established to take control of the FAA (an independent agency) and functions that were then under the jurisdiction of the Department of Commerce. Halaby was supported by Budget Director Charles Schultze and Joseph A. Califano, Jr., a special assistant to President Johnson. At their request, Undersecretary of Commerce for Transportation Alan S. Boyd submitted in October 1965 a series of recommendations that called for a department that would include the FAA, the Bureau of Public Roads, the U.S. Coast Guard, the Saint Lawrence Seaway Development Corporation, the Great Lakes Pilotage Association, the Car Service Division of the Interstate Commerce Commission, the Civil Aeronautics Board, and the Panama Canal. On March 6, 1966, President Johnson sent Congress a bill proposing a Transportation Department. In his message to Congress, the president asserted that “America today lacks a coordinated transportation system that permits travelers and goods to move conveniently and efficiently from one means of transportation to another, using the best characteristics of each.” Congress enacted the legislation creating the department and causing the largest reorganization of the federal government since 1947. The new department brought
together more than 30 transportation agencies that had been scattered throughout the federal government and employed 95,000 people, making it the fourth-largest cabinet department. Under Boyd (January 16, 1967–January 20, 1969), the Federal Highway Administration, the Federal Railroad Administration, and the National Transportation Safety Board were established. In addition, President Johnson signed an executive order transferring the Department of Housing and Urban Development's (HUD) mass-transit functions to a new Urban Mass Transportation Administration (Reorganization Plan No. 2 of 1968, effective July 1, 1968). Massachusetts governor John A. Volpe became secretary on January 22, 1969 (he would serve in President Richard Nixon's cabinet until February 1, 1973). In 1970, the Highway Safety Act (84 Stat. 1739) authorized the creation of the National Highway Traffic Safety Administration, giving the agency jurisdiction over highway and automobile safety. Volpe had to contend with a rash of airline hijackings, the bankruptcy of the Penn Central Railroad, and the creation of Amtrak, a government corporation, to operate the nation's intercity passenger railroads. Claude S. Brinegar, an oil-company executive, succeeded Volpe on February 2, 1973. One of his major achievements was developing the administration's response to the "Rail Crisis" that resulted from the bankruptcy of seven major freight railroads serving the Northeast and the Midwest. The Regional Rail Reorganization Act of 1973 (87 Stat. 985) created the Consolidated Rail (ConRail) Corporation, controlled by the U.S. Railway Association, to operate the region's rail system. Brinegar would continue to serve as secretary under President Gerald Ford following President Richard Nixon's resignation in 1974, leaving office February 1, 1975, serving one day short of two years. William T. Coleman, Jr., became the first African American to head the department, serving from March 7, 1975, until January 20, 1977. Congress voted to separate the National Transportation Safety Board from the department, making it an independent agency of the federal government. President Jimmy Carter appointed Brockman Adams to serve as his secretary of transportation (January 23, 1977, to July 20, 1979). Adams had
served in the U.S. House of Representatives for 12 years and was the principal author of the ConRail legislation in 1973. Adams's major achievement was the establishment of the Research and Special Programs Directorate (later redesignated the Research and Special Programs Administration) in September 1977, combining the Transportation Systems Center with the agency's hazardous-materials and pipeline-safety programs and research activities in a single operating unit. Adams was succeeded by Neil E. Goldschmidt, the mayor of Portland, Oregon. During Goldschmidt's tenure (August 15, 1979–January 20, 1981), the trend toward economic deregulation took hold as Congress began to scale back the regulatory regime that had emerged in response to the Great Depression. The movement back to free markets had a significant impact on transportation, as Congress enacted a number of laws reforming transportation regulation in the railroad, trucking, and airline industries. Drew Lewis was appointed secretary of transportation (January 23, 1981, to February 1, 1983) by President Ronald Reagan. Lewis negotiated the transfer of the Maritime Administration from the Department of Commerce to the Department of Transportation, which was approved by Congress (95 Stat. 151), giving the department complete control over national transportation policy. Lewis led the government's unsuccessful effort to negotiate a new labor contract with the Professional Air Traffic Controllers' Organization (PATCO), the union representing air traffic controllers at the nation's civil airports. When PATCO struck in 1981, President Reagan invoked the federal law prohibiting strikes by federal employees and fired all controllers who did not return to their posts. Lewis then was charged with rebuilding the air-traffic-control system as new controllers were recruited, trained, and deployed in the air-traffic-control towers. Another Lewis milestone was the Surface Transportation Assistance Act of 1982 (Public Law 97–424), which included funding for highway construction and new traffic-safety measures. Lewis was succeeded by Elizabeth Dole (February 7, 1983, to September 30, 1987), who had served on the Federal Trade Commission (FTC) and Reagan's White House staff. Dole became the first woman to head the department. In 1984, Congress
approved the Commercial Space Launch Act (Public Law 98–575), giving the department responsibility for promoting and regulating commercial space launch vehicles. In the same year, the Civil Aeronautics Board (CAB) Sunset Act (98 Stat. 1703) was enacted, abolishing the CAB and transferring many of its continuing functions to the Department of Transportation. These tasks, including an aviation economic-fitness program, antitrust oversight, data collection, review of international route negotiations, and awards to carriers, were assigned by Secretary Dole to the office of the assistant secretary for Policy and International Affairs. Concurrent with the expansion of the department's role into commercial space travel and civil aviation, the department also divested itself of a number of responsibilities. These included the transfer of the Alaska Railroad to the State of Alaska (96 Stat. 2556); the privatization of Conrail; and the turnover by the Federal Aviation Administration of Washington National (now Reagan) and Dulles International airports to the Metropolitan Washington Airports Authority. President George H. W. Bush's selection as secretary, Samuel K. Skinner (February 6, 1989, to December 13, 1991), became known in Washington as "the Master of Disaster" as the agency was confronted by a number of disasters. The December 21, 1988, terrorist bombing of Pan Am flight 103 killed 270 people. Skinner then dealt with the Eastern Airlines bankruptcy, the Exxon Valdez oil spill in March 1989, and the October 1989 Loma Prieta earthquake in the Bay Area of northern California, which damaged bridges and destroyed interstate highways in the area. In December 1991, President Bush signed the Intermodal Surface Transportation Efficiency Act (Public Law 102–240) into law. The legislation reauthorized the department's highway, highway-safety, and mass-transit programs. The law also reorganized the department: The Urban Mass Transportation Administration became the Federal Transit Administration, and the Bureau of Transportation Statistics and the Office of Intermodalism were established. President George W. Bush, elected in the disputed 2000 presidential election, crossed party lines when he selected a Democrat, Norman Y. Mineta, as the 14th secretary of transportation. Mineta had
served as secretary of Commerce under President Bill Clinton and, before that, as a member of the U.S. House of Representatives and as mayor of San Jose, California. Following the September 11, 2001, terrorist attacks, when four passenger airplanes were hijacked and used to attack the World Trade Center in New York City and the Pentagon, the Federal Aviation Administration halted all civil air traffic in the United States until it could be certain that no other planes had been hijacked. In the wake of the attacks, Congress passed the Aviation and Transportation Security Act (Public Law 107–71). The law established the Transportation Security Administration (TSA), which would take responsibility for screening all airline passengers and personnel at the nation's airports. The new agency began operations on February 16, 2002. On June 6, 2002, after initially resisting the idea, President George W. Bush asked Congress to establish a new cabinet-level Department of Homeland Security (DHS). Congress passed the Homeland Security Act (Public Law 107–296), the most significant reorganization of the federal government in more than 60 years, as a number of agencies and functions with homeland-security implications were transferred from other federal agencies to DHS. As part of this reorganization, the U.S. Coast Guard and TSA were transferred to DHS. On November 30, 2004, President Bush signed into law the Norman Y. Mineta Research and Special Programs Improvement Act (Public Law 108–426), which created two new operating administrations: the Research and Innovative Technology Administration and the Pipeline and Hazardous Materials Safety Administration. The following year, Congress reauthorized the department's surface-transportation programs, passing the Safe, Accountable, Flexible, Efficient Transportation Equity Act: A Legacy for Users (Public Law 109–59). On June 23, 2006, Mineta resigned as transportation secretary after 65 months in office (January 21, 2001, to July 7, 2006), longer than anyone else has served in the position. He was replaced by Mary E. Peters, who had served as Federal Highway Administrator from 2001 until 2005. The Department of Transportation is organized into an Office of the Secretary and 11 operating
administrations: the Federal Aviation Administration, the Federal Highway Administration, the Federal Motor Carrier Safety Administration, the Federal Railroad Administration, the National Highway Traffic Safety Administration, the Federal Transit Administration, the Maritime Administration, the Saint Lawrence Seaway Development Corporation, the Research and Special Programs Administration, the Bureau of Transportation Statistics, and the Surface Transportation Board. See also appointment power; cabinet; executive agencies. Further Reading Davis, Grant Miller. The Department of Transportation. Lexington, Mass.: D.C. Heath and Co., 1970; Fehner, T. R., and J. M. Holl. Department of Transportation 1977–1994: A Summary History. Washington, D.C.: U.S. Department of Energy, 1994; Lewis, Tom. Divided Highways: The Interstate Highway System and the Transformation of American Life. New York: Viking Press, 1997; Rose, Mark H. Interstate Express Highway Politics, 1941–1989. Knoxville: University of Tennessee Press, 1990. —Jeffrey Kraus
Department of the Treasury Established on September 2, 1789, the U.S. Department of the Treasury was one of the original cabinet departments of the federal government, along with the Departments of State and War. As with other cabinet members, the secretary of the treasury is appointed by the president and confirmed by the Senate. This is an important position in the executive branch because the secretary acts as the “chief financial officer” of the federal government, a key policy adviser to the president, and an economic and financial “ambassador” to foreign governments and international organizations. The nation’s first secretary of the treasury, Alexander Hamilton, was sworn into office on September 11, 1789, and served until January 21, 1795. Since that time, the office has been filled by other such notable citizens as Albert Gallatin (1801–14), Roger B. Taney (1833–34), Levi Woodbury (1834–41), Salmon P. Chase (1861–64), William G. McAdoo (1913–18), Andrew Mellon (1921–32), Henry Morgenthau, Jr.
(1934–45), James A. Baker III (1985–88), and Robert E. Rubin (1995–99). Broadly, the Treasury’s charge is to protect the financial and economic prosperity, stability, and security of the United States and to foster the conditions that lead to sustainable growth in the United States and around the world. To those ends, the Treasury formulates and recommends to the president economic, financial, monetary, trade, and tax policies; provides reports on the revenue and expenditures of the federal government to the president, the Congress, and the public; enforces finance and tax laws; and manages the monies of the United States. Included in these activities are the collection of taxes and duties, the production and circulation of currency and coinage, and the borrowing, lending, and paying of monies to maintain the federal government. The Treasury also works with foreign governments and International Financial Institutions (IFIs), such as the Inter-American Development Bank and the African Development Bank, to “encourage economic growth, raise standards of living, and predict and prevent, to the extent possible economic and financial crises.” There are nine Treasury offices, which are under direct supervision of the secretary of the Treasury: Domestic Finance, Economic Policy, General Counsel, International Affairs, Management and CFO, Public Affairs, Tax Policy, Terrorism and Financial Intelligence, and the Treasurer of the United States. There are also 12 Treasury bureaus, whose employees comprise 98 percent of the Treasury’s workforce: the Alcohol and Tobacco Tax and Trade Bureau, the Bureau of Engraving and Printing, the Bureau of the Public Debt, the Community Development Financial Institution, the Financial Crimes Enforcement Network, the Financial Management Service, the Inspector General, the Treasury Inspector General for Tax Administration, the Internal Revenue Service, the Office of the Comptroller of the Currency, the Office of Thrift Supervision, and the U.S. Mint. The largest bureau within the Treasury is the Internal Revenue Service, which employs more than 100,000 people, processes more than 200 million tax returns, and collects more than $2 trillion in revenue annually. One of the smaller agencies performs what seems to be an outsized task: The Bureau of the Public Debt borrows the monies needed to finance the operations of the
federal government and keeps an accounting down to the penny of the amount owed, which currently exceeds $8 trillion. The agency raises funds by selling Treasury bills, notes, bonds, and U.S. Savings Bonds to investors who loan the federal government money and then after a period of time are repaid with interest. The time frames and the interest rates vary on these instruments. In 2003, as part of a broader restructuring of the federal government, the law-enforcement functions of the Bureau of Alcohol, Tobacco, and Firearms were transferred to the Department of Justice, while the Federal Law Enforcement Training Center, the Customs Service, and the Secret Service were moved to the Department of Homeland Security. These types of reorganizations are not unusual for the department. The Treasury possesses a long bureaucratic narrative, which is marked by the institution’s elasticity and flexibility. In times of war or financial and economic crises, the Treasury’s operations grew to meet the government’s funding needs, inventing and utilizing numerous financial tools to raise monies and sustain the government’s credit. After those trying times passed, the department often scaled back its efforts. Further, many federal agencies began their operations at Treasury but were later relocated to other departments. In that sense, the Treasury continually reinvents itself, even though its mission remains the same. For example, the Treasury oversaw the development of the U.S. Postal Service until 1829, the General Land Office until 1849, the Bureau of the Budget until 1939, and the U.S. Coast Guard, except for brief periods during the two world wars when the service was transferred to the navy, until 1967. One of the more significant expansions of the Treasury began in 1913 after 36 states ratified the Sixteenth Amendment to the U.S. Constitution, authorizing the federal government to collect income taxes, even though at the time, the tax affected less than one percent of the public. The Treasury has also been involved in many of the country’s major historical events. When the British set fire to Washington during the War of 1812, the Treasury building was among those burnt to the ground. During the Civil War, the Treasury supported the Union government by suspending coinage operations at its mints in the South: Charlotte, Dahlonega,
and New Orleans. In 1866, the motto “In God We Trust” was placed on many of the nation’s coins, and since 1938, it has been placed on every one. Recently, this has created controversy among those who believe that this placement is a violation of the separation of church and state and has resulted in the filing of a lawsuit by an individual who claims that the motto violates his constitutional rights. Another recent controversy involving the Treasury relates to the practices of the Terrorism and Financial Intelligence Office (TFI), which began to operate in April 2004. TFI brings together personnel with expertise in intelligence gathering and law enforcement to protect the integrity of the United States financial system and to thwart terrorist financing schemes, such as money laundering. In its functioning, this office has regularly used administrative subpoenas to collect financial information on the wire transfers of funds. It has yet to be determined whether the office’s data-mining techniques are consistent with the constitutional rights of U.S. citizens. For most of its first 125 years, the Treasury performed many of the functions of a central bank, along with its other responsibilities. In 1791, Alexander Hamilton helped to establish the first Bank of the United States to assist with increasing the money supply and stabilizing the currency in the new nation. This public–private bank worked with the Treasury to receive deposits, make payments, and service the public debt. Its charter expired in 1811, but in 1816, at President James Madison’s request, the Congress established the second Bank of the United States to bring stability to the circulation of bank notes. In 1832, President Andrew Jackson vetoed the charter renewal for the bank, and in 1833, he ordered treasury secretary Roger B. Taney to defund the Bank. With the removal of the government deposits, the second bank lost its ability to regulate the money supply. Its charter expired in 1836. After the financial panic of 1837, the federal government created the Independent Treasury System to help stabilize the economy. Adopted in 1840, this structure with its regional subtreasuries under the Treasury’s control received public deposits and paid government debts. It was not, however, involved in revenue collection. Though it only survived until 1841, it was revived in an altered form in 1846 and
remained the banking system through much of the Civil War. In 1864, President Abraham Lincoln signed the National Banking Act, which allowed banks to issue notes against government bonds, thereby providing the necessary funds to continue the war. During much of the next 50 years, Treasury secretaries found that this "free banking system" allowed them to influence the money market with their decisions to deposit, withdraw, and transfer funds; buy and sell bonds; purchase gold and silver; and issue currency. While some of these maneuverings helped to stabilize the money market and promote conditions that led to greater levels of prosperity, many became concerned that the market was being unduly influenced by politics. Progressive reformers came to believe that the corporations and capitalists who aligned themselves with the Republican Party during the Gilded Age were manipulating the market in their favor. In 1913, after Democrat Woodrow Wilson ascended to the presidency, the Federal Reserve System was established to function as a "decentralized central" bank. It was thought that this system of reserve banks and member banks would create more protection against major economic upswings and severe downturns because it would be self-regulating and not subject to political influence. After the boom of the 1920s and the Great Depression, it became clear that the system was not working as had been intended. Thus, it was reconfigured to transfer more of the open-market authority from the Federal Reserve Banks to the Federal Reserve Board, and the Federal Reserve Board was restructured to operate as an independent regulatory agency. Since that time, the Federal Reserve Board has become the locus of U.S. monetary policy. The Federal Reserve Banks, however, still work closely with the U.S. Mint, the Bureau of Engraving and Printing, and the Comptroller of the Currency to estimate the currency needs of the public, place orders for new deliveries, and distribute currency to the member banks within their regions. Aside from its banking history, the Treasury has been influential in the international relations efforts of the United States. One turning point for the Treasury in terms of expanding its international
authority was the Bretton Woods Conference in July 1944, which established such international institutions as the General Agreement on Tariffs and Trade (GATT), the International Monetary Fund (IMF), and the World Bank. Secretary Henry Morgenthau, Jr., who was chairman of the conference, reportedly told President Franklin Roosevelt that his intention was to make Washington, not London, the financial center of the world. There is little doubt that since that time, the Treasury has played an even more significant role in the world's economic and financial growth and development. While presidential administrations have differed in their support and interpretation of several of these international economic and financial policies, there has been a general agreement over the last 60 years to support free trade and international development on a global scale. See also appointment power; cabinet; executive agencies. Further Reading Hoffmann, Susan. Politics and Banking: Ideas, Public Policy, and the Creation of Financial Institutions. Baltimore, Md.: The Johns Hopkins University Press, 2001; Love, Robert Alonzo. Federal Financing: A Study of the Methods Employed by the Treasury in its Borrowing Operations. New York: AMS Press, 1968; Cohen, Stephen. The Making of United States International Economic Policy. New York: Praeger/Greenwood, 2000; Taus, Esther Rogoff. Central Banking Functions of the United States Treasury, 1789–1941. New York: Columbia University Press, 1943; U.S. Department of the Treasury. The Department of the Treasury Strategic Plan, 2003. Available online. URL: http://www.treas.gov/offices/management/budget/strategic-plan/20032008/strategic-plan2003-2008.pdf. —Lara M. Brown
Department of Veterans Affairs Programs to benefit war veterans and their families have existed since the Revolutionary War. Today, medical treatment, counseling, education, housing, and other benefits for veterans comprise a major responsibility of the federal government. The evolution
of such programs, however, has been historically uneven. Following the Revolutionary War and operating under the Articles of Confederation between 1776 and 1789, states failed to pay pensions to veterans. A crisis of significant proportions developed because the national Congress was powerless to levy taxes on the states to pay Revolutionary War veterans the monies they had been promised. The situation culminated in severe economic hardships for many soldiers and militia who had given up their livelihoods for the cause of independence. The problem became so acute that it prompted some farmers to take up arms, notably during Shays's Rebellion in Massachusetts, as banks threatened to foreclose on veterans' homes and farms. The rebellion was a key motive in prompting some Founders, such as James Madison, to call for a Constitutional Convention to revise the parameters of authority for a national government. Ultimately, the new Congress took over responsibility for the payment of war veterans' pensions in 1789. President Abraham Lincoln reiterated the central importance of veterans' programs to the nation following the Civil War. In his Second Inaugural Address, Lincoln posited that such programs were vital to "care for him who shall have borne the battle, and for his widow and orphan." Lincoln's phrase still informs the federal government's mission to provide for the nation's injured veterans and the loved ones that deceased veterans leave behind. A decade after World War I, during the administration of Herbert Hoover, Congress established the Veterans Administration to better coordinate programs that were scattered across the federal government. In 1930, the Veterans Administration consolidated the Veterans' Bureau, the Bureau of Pensions of the Interior Department, and the National Home for Disabled Volunteer Soldiers. In 1988, 58 years later, President Ronald Reagan signed congressional legislation that established the cabinet-level Department of Veterans Affairs (VA). The VA was the 14th federal department created in the executive branch. The VA began operations on March 15, 1989, under its first secretary, former Illinois congressman and World War II veteran Edward Derwinski. All subsequent presidential appointees have had
significant military service or have seen active-duty combat. Subsequent secretaries of the VA include Jesse Brown (1993–97), Togo D. West, Jr. (1998–2000), Anthony Principi (2001–05), and James Nicholson (2006– ). Secretaries Brown, Principi, and Nicholson were veterans of the Vietnam War. Secretary West served in the Judge Advocate General's office in the U.S. Army and later served as secretary of the Army under President Bill Clinton. The VA is one of the largest departments of the executive branch—third behind the Department of Homeland Security and the Department of Defense in terms of personnel. The VA employs more than 218,000 civil servants and appointed officials who are responsible for (1) maintaining medical centers for veterans around the country; (2) ensuring that veterans are informed of their medical, housing, and educational benefits via regional offices; and (3) overseeing national cemeteries around the nation that serve as the final resting place for veterans. When the Veterans Administration was created in 1930, it operated only 54 hospitals in the United States. The White House Office of Management and Budget notes that today the department operates 158 hospitals, 840 ambulatory-care and community-based outpatient clinics, 133 nursing homes, 206 community-based outpatient psychiatric clinics, 57 regional benefits offices, and 120 national cemeteries. In fiscal year (FY) 2007, the VA had a budget of $77.7 billion, of which $42 billion comprised mandatory spending for established programs. The agency's fiscal year 2007 budget marked a 10 percent increase over FY 2006. The medical benefits available to veterans depend on many different factors. For veterans who suffered disabilities that are at least 50 percent the result of their military service, comprehensive medical care and prescription drugs are available at no cost. For other veterans, small copayments are required for services and prescriptions. Ironically, the VA's spending on medical programs has increased dramatically in the last decade even though the number of veterans eligible for benefits continues to decline and collections from veterans seeking medical care increased from less than $1 billion in 2001 to just over $2.5 billion in 2005. Just as in the private sector, soaring medical costs have had a profound
impact on the VA's ability to provide quality care, and a larger share of veterans—more than 4 million annually—are seeking treatment more regularly. Some of the more acute challenges facing the agency include a lack of hospital beds and mental-health care services for its clientele. The VA manages a vast network of physicians. The agency reports that more than half of all practicing physicians in the United States gained part of their education and training while working in the VA. The agency is affiliated with 107 medical schools, 55 dental schools, and more than 1,200 other schools and universities across the country. The VA reports that approximately 81,000 health professionals are trained annually in VA medical centers. VA researchers have pioneered medical breakthroughs on a number of fronts, including the development of the cardiac pacemaker, body- and brain-scan equipment, and artificial limbs such as the "Seattle Foot," which allows amputees to run and jump. The agency is also credited with clinical trials that established new treatments for tuberculosis, schizophrenia, and high blood pressure. The agency operates "centers of excellence" that target specific medical disorders, including AIDS, alcoholism, schizophrenia, and Parkinson's disease. Doctors and researchers at the VA have been the recipients of national and international awards, including the Nobel Prize. Despite the VA's innovation in medical care and scientific breakthroughs, the agency has been criticized for a lack of funding by both parties in Congress, for inadequate care of veterans, and for the failure to investigate sufficiently the controversial maladies of which veterans have complained. Following the Vietnam War, the agency was faulted for a putative failure to acknowledge and treat veterans who suffered from exposure to Agent Orange, a powerful chemical used to defoliate tropical vegetation whose side effects were unknown. Moreover, following the Persian Gulf War (1991) and the U.S. invasion of Iraq, many veterans complained of an array of mental and physical conditions sometimes described as "Gulf War syndrome" that physicians were unable to diagnose immediately or treat. The particularly challenging contexts of treatment of veterans following both the Vietnam War and the Persian
Gulf War accentuate the importance of the VA's Readjustment Counseling Service, which operates 206 centers aimed at providing veterans and family members with psychological counseling for war trauma, community outreach, and referrals to social services. The VA has also increased its programs to combat alcoholism, drug addiction, and posttraumatic-stress disorder for veterans, about 100,000 of whom are homeless. The VA's educational and training programs for veterans began with the Servicemen's Readjustment Act passed by Congress in 1944. Commonly known as the GI Bill of Rights, the legislation has been updated and expanded since the basic framework was established. More than 20 million veterans have taken advantage of education and training programs at an aggregate cost of more than $75 billion in the last six decades. The GI Bill of Rights helped to ensure a smooth transition of veterans who returned stateside following World War II. Many enrolled in college and other education programs, which eased a potentially overwhelming flood of veterans on the job market that might have created significant unemployment. The GI Bill of Rights also gave veterans home-loan guarantees, which helped fuel a postwar economic boom. The federal government backed almost 2.5 million home loans in the immediate postwar era. The home-loan guarantees that were part of the original bill have been extended to new veterans as the legislation has been updated, notably in 1984, with the "Montgomery" GI Bill, so named for Mississippi congressman Gillespie V. "Sonny" Montgomery. Although a small fraction of the overall VA budget, home loans are one of the agency's most popular programs, and the VA's intervention in many cases has saved scores of veterans' homes from imminent foreclosure. The VA assumed responsibility for national cemetery administration in 1973. In total, the VA oversees 120 national cemeteries in 39 states and Puerto Rico. Between 1999 and 2001, the VA opened five new national cemeteries: the Saratoga National Cemetery near Albany, New York; the Abraham Lincoln National Cemetery near Chicago; the Dallas–Fort Worth National Cemetery; the Ohio Western Reserve National Cemetery near Cleveland; and the Fort Sill National
Cemetery near Oklahoma City. More cemeteries are planned for the major metropolitan areas of Atlanta, Detroit, Miami, Sacramento (California), and Pittsburgh. The VA provides not only interments of veterans but also headstones and markers for their graves. Since 1973, the VA has supplied nearly 8 million headstones and markers. The VA also administers the Presidential Memorial Certificate Program, which provides engraved certificates signed by the president to commemorate deceased veterans who have been honorably discharged from military service. Finally, the VA oversees the State Cemetery Grants Program, which encourages development of state veterans cemeteries by providing 100 percent of funds to develop, expand, or improve veterans cemeteries operated and maintained by the states. See also appointment power; cabinet; executive agencies. Further Reading Bennett, Anthony J. The American President’s Cabinet: From Kennedy to Bush. New York: St. Martin’s, 1996; Figley, Charles R. Stress Disorders Among Vietnam Veterans: Theory, Research, and Treatment. New York: Brunner–Routledge, 1978; Gerber, David A., ed. Disabled Veterans in History. Ann Arbor: University of Michigan Press, 2000; Hess, Stephen, and James P. Pfiffner. Organizing the Presidency. 3rd ed. Washington, D.C.: The Brookings Institution, 2002; Warshaw, Shirley Anne. Powersharing: White House–Cabinet Relations in the Modern Presidency. Albany: State University of New York Press, 1996. —Richard S. Conley
disability (presidential)
The Twenty-fifth Amendment to the U.S. Constitution deals with the difficult and controversial subject of what to do in case of presidential disability, temporary or permanent. Ratified in 1967, the provisions of the Twenty-fifth Amendment have been invoked on several occasions, usually without incident or controversy, but the potential for mischief or systemic confusion, if not outright abuse, still exists. Concerns over what to do in case a president becomes disabled or unbalanced became paramount
in the post–World War II era of the cold war conflict between the United States and the Soviet Union. In this age of nuclear stalemate, when the United States and the Soviet Union seemed constantly on the brink of war and both nations possessed massive amounts of nuclear weaponry that could easily destroy not only the adversary but life on Earth as we know it, concerns for the health—physical as well as mental—of the president who every day had his or her “finger on the button” became more and more pronounced. What would happen if the president were to be physically disabled and not capable of running the country? Would our adversary see this as an opportunity to strike at the United States or one of our allies or to expand its control beyond the areas it already controlled? And what would happen if the president, facing overwhelming pressure every day, became mentally unbalanced? This fear made its way into popular culture with the famous 1964 Stanley Kubrick film, Dr. Strangelove, or How I Learned to Stop Worrying and Love the Bomb (in which a power-mad presidential aide [not the president] gleefully discussed a doomsday scenario as the United States and the Soviet Union marched unintentionally toward war). Concerns grew as President Dwight D. Eisenhower suffered from several bouts of ill health while president. This sparked the Eisenhower administration to draw up a constitutional amendment designed to deal with just such cases, but Congress failed to act on the proposal. This led President Eisenhower to draft a letter of understanding to his vice president, Richard M. Nixon, that outlined certain procedures to be followed in the event that the president was unable to execute the duties of his office. While the constitutional legitimacy of such a memo of understanding is very questionable, when President John F. Kennedy took office in 1961, he renewed the memo of understanding with his vice president, Lyndon B. Johnson. Following the assassination of President Kennedy in 1963, President Johnson issued a similar memo to House Speaker John McCormack and, after the 1964 presidential election, to his vice president, Hubert H. Humphrey. This agreement was briefly implemented in October 1965 during President Johnson’s gallbladder operation. During this time, Congress, with the strong urging of President
Johnson, passed the presidential disability amendment (1965), and it was ratified by the states and became part of the Constitution (the Twenty-fifth Amendment) in 1967. The amendment itself is rather brief and to the point:

Section 1. In case of the removal of the president from office or of his or her death or resignation, the vice president shall become president.

Section 2. Whenever there is a vacancy in the office of the vice president, the president shall nominate a vice president who shall take office on confirmation by a majority vote of both Houses of Congress.

Section 3. Whenever the president transmits to the president pro tempore of the Senate and the Speaker of the House of Representatives his or her written declaration that he or she is unable to discharge the powers and duties of office, and until he or she transmits to them a written declaration to the contrary, such powers and duties shall be discharged by the vice president as acting president.

Section 4. Whenever the vice president and a majority of either the principal officers of the executive departments or of such other body as Congress may by law provide transmits to the president pro tempore of the Senate and the Speaker of the House of Representatives their written declaration that the president is unable to discharge the powers and duties of office, the vice president shall immediately assume the powers and duties of the office as acting president. Thereafter, when the president transmits to the president pro tempore of the Senate and the Speaker of the House of Representatives his or her written declaration that no inability exists, he or she shall resume the powers and duties of office unless the vice president and a majority of either the principal officers of the executive department or of such other body as Congress may by law provide transmit within four days to the president pro tempore of the Senate and the Speaker of the House of Representatives their written declaration that the president is unable to discharge the powers and duties of office. Thereupon, Congress shall decide the issue, assembling within 48 hours for that purpose if not in session. If the Congress, within 21 days after receipt of the latter written declaration or, if Congress is not in session, within 21 days after Congress is required to assemble, determines by two-thirds vote of both
Houses that the president is unable to discharge the powers and duties of office, the vice president shall continue to discharge the same as acting president; otherwise, the president shall resume the powers and duties of office.

Section 1 made official what had become accepted practice. When a president is removed from office or for any other reason is out of office, the vice president becomes the president. This was a somewhat controversial issue in 1841 when John Tyler became the nation’s first “President by act of God,” as it was known at the time. When President William Henry Harrison succumbed to illness just a few weeks after taking the oath of office, his vice president, John Tyler, assumed the office of president of the United States. But was Tyler really the president or merely the acting president? As this was the first case of succession to office between elections, some felt that Tyler was merely acting as president and not entitled to the title of president. In the House of Representatives, John McKeon of New York introduced a resolution that would make Tyler “Acting President.” This resolution failed, Tyler claimed the full title and powers of the presidency (even though critics continued to call him “Your Accidency”), and the issue was resolved—for a time. The Twenty-fifth Amendment ended any doubt on this issue. Section 2 of the amendment deals with filling a vacancy in the vice presidency. This has happened several times in U.S. history, and prior to the Twenty-fifth Amendment, the office went vacant until the next presidential election. The first president to have the opportunity to employ this provision of the Twenty-fifth Amendment was Richard M. Nixon. President Nixon, under investigation in the Watergate scandal, was confronted with evidence that his vice president, Spiro Agnew, had been taking bribes while serving as governor of Maryland. The U.S. Justice Department allowed the vice president to make a deal, accepting a nolo contendere plea (which means “I do not contest” the charges) on tax evasion, while also resigning from office. But President Nixon was also under fire for alleged corruption, and his choice for a vice president potentially had enormous implications, as the person he appointed to replace Agnew might very well step into the presidency should Nixon resign or be impeached—which
at that time was increasingly being seen as a distinct possibility. There were very few politicians whom Nixon might appoint who would be confirmed by both Houses of Congress, as the amendment stipulated. One of the few was minority leader of the House of Representatives, Gerald Ford (R-MI). While Ford was a partisan Republican facing a House and a Senate controlled by the Democrats, Ford had enough good will and bipartisan support behind him to be confirmed. When, in August 1974, President Nixon resigned the presidency, Ford became president. Section 3 of the amendment allows for the temporary transfer of power from the president to the vice president. This has occurred a number of times, as when President Ronald Reagan went into the hospital and temporarily gave George H. W. Bush, his vice president, the authority of the office. Section 4 is perhaps the most controversial and problematic. It has attracted the most attention and the gravest concerns, and while it has never become a serious problem, critics see a disaster just waiting to happen. What happens if the president, after relinquishing office to the vice president, wants it back but the president’s cabinet feels that he or she is unable—for whatever reason—to fulfill the duties of the office? Here, the cabinet can prevent the president from resuming power. The issue then goes to the Congress, who must vote on whether to allow the president to resume in office or accept the verdict of the cabinet. This section opens the door for much confusion and mischief. It has been the basis of many pulp-fiction political novels and raises concerns for the stability of power in the United States. Conspiracy buffs look to Section 4 and see the potential for conspiratorial machinations. In truth, the likelihood of some such conspiracy is quite remote. After all, it is the president’s cabinet, and they are likely to remain loyal to him or her; they owe their political positions to him or her; and if, in fact, the president is returned to office, they would certainly be both out of a job and deeply embarrassed by their failed actions, opening the door to accusations of an ill-conceived palace coup. But what if the president wants to return but clearly is incapable—for whatever reason—of responsibly exercising power? This is where Section 4
becomes essential to the smooth and effective functioning of government. Someone must be able to say “The emperor has no clothes,” and the cabinet may be the only place wherein to rest such a responsibility. The Congress could not do so, for that would interfere with the separation of powers. Thus, the cabinet is best positioned to evaluate the issue of a president returning to power or not. The Twenty-fifth Amendment to the Constitution has been used a number of times since 1967, and while it does have some flaws or loopholes, it has, in practice been generally effective and has adequately dealt with the issues of concern that led to passage of the amendment. Of course, that does not mean that problems or mischief might not someday ensue, but in its 40-plus year history, the amendment has been employed with care, and the results have been generally favorable to governing. Further Reading Abrams, Herbert L. The President Has Been Shot: Confusion, Disability, and the 25th Amendment in the Aftermath of the Attempted Assassination of Ronald Reagan. New York: W.W. Norton, 1992; Bayh, Birch. One Heartbeat Away: Presidential Disability and Succession. Indianapolis, Ind.: Bobbs–Merrill, 1968; Feerick, John. The Twenty-Fifth Amendment: Its Complete History and Earliest Applications. New York: Fordham University Press, 1998; Gilbert, Robert E. The Mortal Presidency. New York: Basic Books, 1992. —Michael A. Genovese
electoral college
No other nation has so baroque a manner of electing its leader (in this case, a president) as the United States. When the framers of the U.S. Constitution decided to create a president, they needed a way to select the chief executive officer of the nation. Direct election by the people? No, as this ran the risk that this new president might be too closely aligned with the voters and might join forces with “the people” and overwhelm the Congress. The framers feared a presidential plebiscite and—with memories of how the Roman republic degenerated into an empire—worried that the people, together with a president who controlled the armed forces,
might threaten liberty and constitutional government. Their fear of mobocracy led them to reject popular election of the president. Election by the legislature? No, as this might make the president a slave of the Congress and undermine the efforts to have a checks-and-balances system in the separation of powers model of government. After all, a president who is dependent for his position on the good will of Congress might be a sheep to the legislative wolf. For a time, the delegates to the Constitutional Convention thought of allowing the state legislatures to select the president, but they feared regional and state-to-state competition. After weeks of going back and forth on presidential selection, the framers finally invented a wholly new and revolutionary system wherein the states choose electors in a November general election. The manner of election is determined by each state legislature. This means that in the United States there may be one presidential Election Day, but in reality there are 50 different presidential elections, each centered in a different state. Also, each state has its own laws, its own electoral commissions, its own methods of voting and vote counting, and its own voter-registration mechanisms. In point of fact, the U.S. Constitution does not technically even grant citizens the right to vote—that too is determined on a state-by-state basis. The number of electors is to be equal to the total of senators and representatives each state sends to the Congress. The electoral college never actually meets together. The electors go to each state capitol to cast their votes. Those votes are then sent to Washington D.C., where, on the first week of January, in Congress, the vice president reads the results from each state. The votes are then certified and a new president and vice president are selected. This is what usually happens, but there have been times when no clear winner emerged from the electoral college. In such cases, the House of Representatives selects the new president, and the Senate selects the new vice president. The electoral college is the formal body that selects the president of the United States. This measure is part of the U.S. Constitution and is found in Article II, Section 1. The electoral college consists of 538 electors who meet on the first Monday after the second Wednesday of December in the capital of
the state from which the electors are drawn. It is these electors who in every four-year presidential election cycle vote to elect the next president of the United States. Citizen voters do not vote for a president; they vote for electors, who then select the president and the vice president. Thus, U.S. citizens choose their president indirectly through the electoral college. If no candidate wins a majority of the electoral votes (currently 270 of the 538), the race is thrown into the House of Representatives. In two presidential elections, the outcome was thrown into the House for a decision: 1800 and 1824. In 1800, the House selected Thomas Jefferson as president
in an electoral-college election that found Jefferson in a tie with his vice presidential running mate, Aaron Burr. In 1824, the House chose John Quincy Adams. The selection of Adams was not without controversy. The outcome of the general election left several prominent candidates still standing, but none had the majority of electoral votes needed to claim the presidency. Andrew Jackson had the most popular and electoral votes but not enough to be the winner. The decision was then sent to the House of Representatives, where some argue that a deal was struck between Adams and Henry Clay, the fourth-place finisher in electoral votes, wherein Clay threw his support to Adams in
exchange for appointment as secretary of state. When Adams was declared the winner, Clay was made secretary of state. Some thought that the 2000 presidential race might have to be sent to the House for a final decision, but when contested votes in the state of Florida were finally given to Republican candidate George W. Bush in a U.S. Supreme Court 5–4 decision in Bush v. Gore, this outcome was averted. The electoral-college system is open to several potential problems, the most striking of which is that the winner of the popular vote may well lose the election. There have been times when the candidate who wins the popular vote loses the election. This happened in 2000 when Vice President Al Gore won the popular vote nationally, but George W. Bush won enough states with enough electoral votes to win the presidency. At the time, calls for electoral-college reform were heard (by Democrats, at least), but these pleas were short-lived and died out after the September 11, 2001, tragedy. The popular winner losing the presidential election has also occurred on three other occasions: 1824, 1876, and 1888. Results such as these are difficult to reconcile with the democratic sentiments of the nation and have led to calls for the direct popular election of the president. How well has the electoral college served the interests of the nation and the goal of political democracy? Many would argue that the electoral college has served the nation well in the course of history, giving a finality to the winner and opening up the presidential-selection process to input from smaller, less-populated states. Others argue that the electoral college is an embarrassment to the democratic sensibilities of the nation and allows the winner (in popular votes) to be the loser (in electoral votes). Of course, the intent of the framers was not to have the president democratically elected (they feared too close a link of the president to the people). The electoral college remains one of the most visibly undemocratic elements still embedded in the U.S. Constitution, and while there have been many efforts to eliminate the electoral college (usually in favor of the direct, popular election of the president), all such efforts have fizzled out over time. Given the durability of the electoral college, few scholars hold out much hope
that it will be eliminated any time soon. Also, many scholars argue that any reform of the electoral college might run the risk of deforming the electoral system and have unintended and damaging consequences. Leave it alone, they argue, as the cure might be worse than the curse. Thus, the people are reluctant to change this presidential selection method, and for better or worse the electoral college seems here to stay—that is, until a significant outrage or injustice in the selection process foments a harsh and lasting backlash wherein the public demands change. Few elements of the Constitution have been as criticized as the invention of the electoral college—Thomas Jefferson said it was “the most dangerous blot on our Constitution.” In the 1970s, a number of constitutional amendments were proposed to introduce either direct election of the president or some other means of abolishing or altering the electoral college. While some of these proposals have passed the House or the Senate, none has received the two-thirds vote needed for adoption, which would then send the proposed amendment on to the states for consideration, and since citizens are deeply committed to their Constitution and very rarely change it, there is little to suggest that the elimination of the electoral college is to be achieved any time soon. With all its faults and potential for disaster, the electoral college has served the nation for more than 200 years, is a fixture (perhaps a permanent one) in the electoral and constitutional order, and is probably here to stay. Further Reading Edwards, George C. Why the Electoral College Is Bad for America. New Haven, Conn.: Yale University Press, 2004; Longley, Lawrence D. The Electoral College Primer. New Haven, Conn.: Yale University Press, 1996; Longley, Lawrence D. The Politics of Electoral College Reform. New Haven, Conn.: Yale University Press, 1975; Witcover, Jules. No Way to Pick a President. New York: Farrar, Straus and Giroux, 1999. —Michael A. Genovese
emergency powers (presidential)
Governments must function in times of peace and times of war, feast and famine, good times and bad.
In wartime or during a crisis, especially difficult stresses are put on the government. As there are special strains on the government in an emergency, it should not surprise us that the distribution of power and the uses of power may be quite different in an emergency than in normal or peaceful times. Given the numerous roadblocks and veto points that litter any president’s path, it is not surprising that the more ambitious and power-oriented presidents quickly become frustrated as other political actors block their way. An obstreperous Congress, powerful special interests, an uncooperative business community, an adversarial press, and others can at times seem to gang up on presidents, preventing them from achieving their policy goals. When faced with these opposing forces, most presidents feel helpless. Often their choice appears to be either to accept defeat (and the political consequences attached to failure) or to take bold action (always, the president believes, in the national interest). To make the complex separation-of-powers system work is difficult under the best of circumstances, and in normal times, it often seems impossible to start the system moving. Thus, rather than accept defeat, presidents are tempted to cut corners, go beyond the law, and stretch the constitutional limits. When all else fails, some presidents—knowing that their popularity, future political success, and historical reputation are at stake—cannot resist stretching the U.S. Constitution and going beyond the law. After all, presidents are convinced that what they want is in the best interest of the nation, so why let Congress, an uninformed public, or a hostile press stand in the way of progress (self-defined by the president)? It is risky, but many presidents believe that it is worth the risk. Sometimes, a president gets away with it (for example, Ronald Reagan in the Iran-contra scandal); sometimes not (Richard Nixon in Watergate). This is exactly what the founders feared. The separation of powers and the checks and balances were set up to prevent one-person rule and ensure that ambition could counteract ambition and power could check power. These devices were instituted to control and limit power. The model of decision making was cooperative, not executive, but as the frustrations of high demands, high expectations, limited
power, and falling approval eat away at presidents, they often cannot resist the temptation to go beyond the prescribed limits of the law. Is the president above the law? No, as such a notion violates all precepts of the rule of law on which the U.S. government was founded. But are there certain circumstances that can justify a president’s going beyond the law? Are there times—emergencies, for example—when the president may exceed his or her constitutional powers? When, if ever, is a president justified in stretching the Constitution? While the word emergency does not appear in the Constitution, there is some evidence from the founding era to suggest that the framers might have envisioned the possibility of a president exercising some “supraconstitutional powers” in a national emergency. The Constitution’s silence, some suggest, leaves room for presidents to claim that certain constitutional provisions (for example, the executive-power clause of Article II, Section 1; the “faithfully-execute”-the-law clause of Article II, Section 3; and the commander in chief clause of Article II, Section 2) grant the president implied powers that can enlarge the president’s powers in times of crisis. Historically, during a crisis, presidents often assume extraconstitutional powers. The separate branches—which, under normal circumstances, are designed to check and balance the president—will usually defer to the president in times of national emergency. The president’s institutional position offers a vantage point from which he or she can more easily exert crisis leadership, and the Congress, the U.S. Supreme Court, and the public usually accept the enlarged scope of a president’s power. While the notion of one set of legal and constitutional standards for normal conditions and another for emergency conditions raises unsettling questions regarding democratic governments, the rule of law, and constitutional systems, the Constitution was not established as a suicide pact, and in an emergency, ways must be found to save the republic from grave threats. Thus, in a crisis, the normal workings of the separation-of-powers system will be diminished, and the centralized authority of a president will rise. As unsettling as this may be to advocates of constitutionalism, the political reality has been that during times of great stress, the public, Congress, and even
at times the courts turn to the president as a political savior. The centralized authority of the presidency is well suited for quick, decisive action, just what may be needed in a crisis, and while this places the fate of the nation in the hands of one person, the alternative—maintaining the rather slow, cumbersome methods of the separation of powers—is often seen as too slow, too prone to inaction, and thus a threat to the nation. Solving the problem at hand becomes the top priority, and adherence to the Constitution is sometimes seen as an impediment to achieving safety and security. Democratic theory is weak in addressing itself to the problem of crisis government and democratic objectives. In most instances, democratic political theorists have seen a need to revert to some form of authoritarian leadership in times of crisis. John Locke calls this executive “prerogative”; Clinton Rossiter refers to a “Constitutional Dictatorship”; and to Jean-Jacques Rousseau, it is an application of the “General Will.” In cases of emergency, when extraordinary pressures are placed on the nation, many theorists suggest that democratic systems—to save themselves from destruction—must defer to the ways of totalitarian regimes. Laws decided on by democratic means can, under this controversial view, be ignored and violated. To refer once again to Locke in his Second Treatise, in emergency situations the Crown retains the prerogative “power to act according to discretion for the public good, without the prescription of the law and sometimes even against it.” While this prerogative could properly be exercised only for the “public good,” one cannot escape the conclusion that for democratic governments and democratic theory, this is shaky ground on which to stand. Who, after all, is to say what is for the good of the state? Can the president determine this all on his or her own? Rousseau also recognized the need for a temporary suspension of democratic procedures in times of crisis. In the Social Contract, he writes: “The inflexibility of the laws, which prevents them from adapting themselves to circumstances, may, in certain cases, render them disastrous, and make them bring about, at a time of crisis, the ruin of the State. . . . If . . . the
peril is of such a kind that the paraphernalia of the laws are obstacles to their preservation, the method is to nominate a supreme ruler, who shall silence all the laws and suspend for a moment the sovereign authority. In such a case, there is no doubt about the general will, and it is clear that the people’s first intention is that the State shall not perish.” The result of such views is a greatly enlarged reservoir of presidential power in emergencies. Constitutional scholar Edward S. Corwin refers to this as “constitutional relativity.” Corwin finds the Constitution broad and flexible enough to meet the needs of an emergency situation as defined and measured by its own provisions. By this approach, the Constitution can be adapted to meet the needs of the times. Strict constitutionalists see this as a grave threat to the rule of law. How, they ask, can one person be relied on to supply all the wisdom that is grounded in the representative assembly of the legislature? How, they wonder, can constitutionalism be maintained if the nation turns to the strong hand of a single individual? The dilemma of emergency power in democratic systems is not easily answered. If the power of the state is used too little or too late, the state faces the possibility of destruction. If used arbitrarily and capriciously, this power could lead the system to accept a form of permanent dictatorship. In a contemporary sense, the constant reliance on the executive to solve the many “emergencies” facing the United States could very well lead to the acceptance of the overly powerful executive and make the meaning of the term emergency shallow and susceptible to executive manipulation. The U.S. Supreme Court, under this theory, recognizes the emergency and allows the president to assume additional powers, but the constitutional dictator must recognize the limits of his emergency power. President Franklin D. Roosevelt, in 1942, after requesting of Congress a grant of a large amount of power, assured the legislature that “When the war is won, the powers under which I act automatically revert to the people—to whom they belong.” The executive, in short, is to return the extraordinary powers he or she was granted during the crisis back to their rightful place. But serious questions remain as to (1) whether presidents have in fact returned this power, and (2) whether, even if the president desires
to do so, a complete or even reasonable return to normality is possible after dictatorial or quasi-dictatorial power is placed in the hands of a single person. The U.S. political system has met crises with an expansion of presidential power. President Abraham Lincoln during the Civil War and President Franklin D. Roosevelt during the Depression serve as examples of presidents who, when faced with a crisis, acted boldly and assumed power. But what distinguishes the constitutional dictator from the Imperial President? What separates the actions of Lincoln and Roosevelt, generally considered appropriate, from those of Nixon, generally considered inappropriate or imperial? Several factors must coalesce for the crisis presidency to be valid: (1) The president must face a genuine and widely recognized emergency; (2) the Congress, the Courts, and the public must—more or less—accept that the president will exercise supraconstitutional powers; (3) the Congress may, if it chooses, override any and all presidential acts; (4) the president’s acts must be public so as to allow Congress and the public to judge them; (5) there must be no suspension of the next election; and (6) the president should consult with Congress wherever possible. We can see that Lincoln and Roosevelt met (more or less) all of these requirements, while Nixon met very few. The tragic events of September 11, 2001, brought the crisis presidency to full fruition. Prior to that crisis, the George W. Bush presidency was struggling. After 9/11, the Bush presidency resembled the constitutional dictatorship that Rossiter described. The president was granted wide latitude to act unilaterally not just to pursue and destroy terrorists abroad but to dramatically curtail constitutional due-process rights at home. A crisis greatly expands a president’s level of political opportunity and thus his or her power. See also war powers. Further Reading Franklin, Daniel P. Extraordinary Measures. Pittsburgh, Pa.: University of Pittsburgh Press, 1991; Rossiter, Clinton. Constitutional Dictatorship: Crisis Government in Modern Democracy. Princeton, N.J.: Princeton University Press, 1948. —Michael A. Genovese
Environmental Protection Agency
Created by a presidential reorganization plan in 1970, the U.S. Environmental Protection Agency (EPA) has the largest budget and number of personnel of any regulatory agency in the federal government. The agency was founded when President Richard Nixon consolidated existing pollution-control programs into a single agency that would report directly to the White House. The agency’s mission is a complex tangle of mandates. Unlike many federal agencies, the EPA has no formal mission statement. Congress refrained from defining a list of specific priorities, thus avoiding political conflicts with key constituencies. The agency’s responsibilities include research, planning, and education, although most of its budget is devoted to regulation. Its regulatory activities consist of standard-setting, cleaning up contaminated sites, and implementing and enforcing 10 major environmental statutes. Critics have charged that this lack of a clear mandate and overwhelming number of tasks have resulted in an ad hoc approach, allowing its priorities to be set by court rulings on environmental groups’ litigation and other outside forces. In the early years of the agency’s existence, the focus was on enforcement. Nixon named an assistant attorney general, William Ruckelshaus, as its first administrator. With the 1972 election approaching, the agency chose to prioritize actions that were more likely to deliver immediate results. Originally, one third of the EPA’s budget was devoted to the Office of Research and Development (ORD), but that diminished with the expansion of the agency’s regulatory responsibilities. By the 1990s, ORD’s share of the budget hovered in the single digits. After the 1972 election, Ruckelshaus and his successor, Russell Train, continued to prioritize the monitoring and prosecution of polluters. The focus on quick results also undercut efforts to turn the EPA into a unified agency with an integrated approach to regulation, an objective that was abandoned largely after Ruckelshaus departed. Consequently, the agency retained a compartmentalized structure based on the medium of pollution, reflecting the various regulatory programs that had been brought together hastily from other agencies in 1970. The EPA’s responsibilities grew drastically during the 1970s. Responding to persistent public concern
about pollution, Congress enacted several major environmental laws, including the Clean Air Act, the Clean Water Act, the Safe Drinking Water Act, and the Resource Conservation and Recovery Act. The succession of laws that were passed during the “environmental decade” culminated in 1980 with Superfund, which became the agency’s largest program in terms of its budget and regulatory activity. Relations between the EPA and the White House were sometimes strained during the early 1970s, with Republican administrations concerned that business would be affected adversely by the agency’s regulatory activities. The EPA maintained its independence, however, due to the strong support of a Democratic Congress. During the Jimmy Carter years, administrator Douglas Costle forged stronger ties to the executive branch by embracing Carter’s reform agenda, making the EPA the first agency to adopt zero-base budgeting. Costle also cultivated political support by emphasizing the agency’s public-health mission over ecological protection, managing to increase the budget by 25 percent. In the 1980s, the EPA became the battlefield for a bitter clash concerning regulation and the size of government. Republican presidential candidate Ronald Reagan had campaigned on reducing spending and scaling back what he saw as an unacceptable regulatory burden on business, but he vowed not to touch defense, Social Security or other entitlements. Pollution control became one of only a small number of federal programs left for President Reagan to use for his ambitious agenda of cutbacks. This agenda found little support in Congress since Democrats controlled the House of Representatives and the Republican leadership of the Senate remained cognizant of the voters’ continuing strong support for environmental protection. Consequently, much of Reagan’s agenda was accomplished through administrative avenues, rather than legislation. The Reagan administration’s vision for the EPA was imposed through budget cuts, executive orders, and political appointments. Except for Superfund, a program with a dedicated revenue stream, the EPA withered dramatically. The budget was slashed by 26 percent between 1980 and 1982. Because of layoffs, low morale, and frustration with the workplace climate, more than 4,000 employees left the agency by Reagan’s
second year in office. Reagan issued Executive Order 12291, which required agencies to conduct a regulatory impact analysis for all proposed regulations and demonstrate that the benefits exceed the costs. Many environmentalists perceived his executive order as an attack on the EPA. Perhaps the most visible symbol of the administration’s hostility to environmental regulation was the controversial appointment of Anne Gorsuch as EPA administrator. She demoted many of the deputy administrators already serving with the agency while increasing the number of positions filled with political appointees. Many of the new appointees were attorneys and lobbyists who had represented development and energy interests in Colorado. Her chief of staff had been a lobbyist representing a manufacturer of asbestos products, while Gorsuch herself had been a Colorado state legislator with close ties to the Mountain States Legal Foundation, which specialized in suing the government to block environmental legislation. Even as the White House filled some top jobs with appointees who were less than committed to regulation, other key positions were left unfilled several months into Reagan’s first term. Simultaneously, Gorsuch moved to centralize agency decision making within her own office. Rule-making activity stalled in the bottleneck created by this reorganization. With several key positions still vacant and the ranks of agency personnel decimated, work ground to a halt even as several new environmental statutes heaped more and more responsibilities on the agency. The assault on the EPA did not sit well with the public and congressional leaders, for whom environmental protection remained as popular as it had been before the election of President Reagan. Gorsuch eventually resigned after Congress held her in contempt during an inquiry into Superfund in 1983; at the same time, she found herself at the center of a firestorm of controversy about the handling of the discovery and mitigation of dioxin contamination in the soil of Times Beach, Missouri. The Reagan administration took a slightly less confrontational stance after the Gorsuch resignation, bringing back William Ruckelshaus for a brief stint as administrator, and Lee Thomas served in the top position for the remainder of the Reagan years.
As the turbulent 1980s came to a close, President George H. W. Bush made some conciliatory moves by choosing an EPA administrator with strong environmental credentials (William Reilly) and proposing to elevate the agency to the status of a cabinet-level department. At the same time, mistrust of the agency (and the executive branch) led Congress to begin to micromanage its various programs. Frustrated with the lack of progress implementing the major environmental statutes, Congress began to write rigid goals and deadlines into law when the statutes were reauthorized, filling these laws with hammer clauses to force action by specified dates. For example, the Clean Air Act Amendments of 1990 set deadlines for areas that had failed to meet national air-quality standards set in the 1970 act, imposed emission standards and deadlines for mobile sources of air pollution, required the EPA to set standards for 189 hazardous air pollutants, and mandated a schedule for phasing out chemicals that deplete the ozone layer. The Food Quality Protection Act of 1996 required the EPA to review 600 pesticides and 75,000 chemicals to assess their risk to human health, imposing a time line for the development of testing methods and implementation of the program. By the 1990s, the EPA was becoming overwhelmed by this collection of legislative mandates. It became a common criticism that the agency lacked a strategy to set priorities, as some of the most serious environmental threats to human health (such as indoor air pollution) were low on the agency’s agenda. Meanwhile, a disproportionate share of the agency’s resources and dollars was committed to regulating hazards that posed relatively small risks, such as remediation of abandoned hazardous waste sites. This reinvigorated the debate about whether the EPA was primarily a public health agency with a mandate to reduce health risks or whether its obligation was to reduce all forms of pollution in the ecosystem, as many environmentalists believed. The most ambitious effort to reform the EPA occurred during the Clinton years. It became one of the first agencies to be “reinvented” in accordance with President Bill Clinton’s National Performance Review. Clinton’s EPA administrator, Carol Browner, sought greater flexibility to set priorities, relaxing many of the requirements and deadlines that ham-
strung the agency’s actions. The 1990s also saw a move away from command-and-control regulation to less coercive approaches, such as market-based regulation and negotiated rule making. These incremental moves did little to quell critics outside or within the government, many of whom continued to view the agency as a symbol of coercive regulatory power throughout the decade. The then-Republican leader in the House of Representatives, Tom DeLay, famously referred to the EPA as “the Gestapo.” New questions about the EPA’s independence were raised as the 21st century dawned. With the change of administrations, environmentalists saw evidence of the White House undermining rule making and scientific research. Immediately on assuming office in 2001, the George W. Bush administration suspended new arsenic rules promulgated during the Clinton administration but reinstated them later that year after a backlash in Congress. Under White House pressure, the EPA concealed information about health risks from polluted air in New York City following the September 11, 2001, terrorist attacks. The administration also was accused of suppressing an EPA report on health risks from mercury pollution and of censoring information on global warming from the EPA’s annual report on air pollution in 2002 and a State of the Environment report in 2003. As in previous decades, the EPA was frequently at the center of political controversy, and the administrator continued to be a lightning rod. Bush’s first EPA administrator, former New Jersey governor Christine Todd Whitman, resigned after just three years. Her successor, former Utah governor Mike Leavitt, lasted only one year before seeking a different post in the administration. See also environmental policy; executive agencies. Further Reading Landy, Marc K., Marc J. Roberts, and Stephen R. Thomas. The Environmental Protection Agency: Asking the Wrong Questions. New York: Oxford University Press, 1990; Morgenstern, Richard D., ed. Economic Analyses at EPA. Washington, D.C.: Resources for the Future, 1997; Powell, Mark R. Science at EPA. Washington, D.C.: Resources for the Future, 1999; Rosenbaum, Walter A. Environmental Politics and
Policy. 6th ed. Washington, D.C.: Congressional Quarterly Press, 2005. —David M. Shafie
evolution of presidential power
If the U.S. Constitution invented the outline of the presidency and George Washington operationalized the framers’ incomplete creation, history and experience more fully formed this elastic and adaptable institution. In time, the presidency has evolved from “chief clerk” to “chief executive” to national leader and, some would argue, to a modern monarch. The presidency is less an outgrowth of the constitutional design and more a reflection of ambitious men, demanding times, wars and domestic crises, exploited opportunities, fluctuation in institutional relations, the rise of the United States as a world military and economic power, and changing international circumstances. In time, some presidents added to and some detracted from the powers of the office, and while the trend has generally been to enlarge and swell the size and power of the presidency, there have been backlashes and presidency-curbing periods as well, but clearly, the overall trend has been to add to the scope of presidential authority. President George Washington gave the office some independence and republican dignity; Thomas Jefferson added the leader of a political party to the arsenal of presidential resources; Andrew Jackson added tribune and spokesman of the people; James Polk added the role of manipulating the nation into a war; Abraham Lincoln made the president the crisis leader and war leader of the nation; Teddy Roosevelt made the president the legislative and public leader of the nation; Woodrow Wilson added international leadership to the office; Franklin D. Roosevelt created the modern presidency and invented the modern welfare state in the United States; Harry Truman added the national security state apparatus in the cold war; and George W. Bush made the antiterrorism state a presidential tool for policy and power. Each of these presidents, along with others, contributed to the evolution of presidential power, and all of these presidents, added together, have helped take the president from a position of chief clerk/chief executive to leader of the international arena and dominant
force in the United States. For better or worse, the United States has become a presidential nation. It was a rocky journey of two steps forward, one back, but over time the accumulated weight of these assertions of presidential power added up to a new political system and a new presidency, one that the framers of the U.S. Constitution did not envision, nor did they create. Was the growth of presidential power inevitable? A strong presidency seemed to follow the growth of U.S. power. One led, almost inevitably, to the other. Opportunity and necessity are two words that best describe why, in time, the power of the presidency has expanded. The original design of the office created opportunities for ambitious men, especially in times of necessity, to increase presidential power. The presidency—elastic, adaptable, chameleonlike—has been able to transform itself to meet what the times needed, what ambitious officeholders grabbed, what the people wanted, and what world events and U.S. power dictated. During this period, the power of the president has expanded while the power of Congress has decreased. As originally established, the Constitution created a strong Congress and a limited presidency. Most of the key governmental powers are located in Article I of the Constitution, the article that creates and empowers the Congress. Article II creates and empowers a presidency, and it is surprisingly thin on power. The president has very few independent powers—most powers he shares with Congress, especially the Senate. Thus, the Constitution created a limited president, dependent on the cooperation of the Congress and practically devoid of independent political authority. It truly was more a clerk than a national leader. Presidents, however, continually sought ways to increase the power and authority of the office, and in time, presidential power grew and congressional authority diminished. To illustrate the erosion of congressional power and the expansion of presidential power in time, let us examine the evolution of the war powers. The men who invented the presidency feared concentrated power in the hands of a single man—the king. Their experience led them to fear especially the possibility that one man could lead the country into war for slight or unsound reason. They thus carefully circumscribed the war powers and gave
to the Congress the sole ability to declare war. They were fearful that the new government might begin to resemble the old monarchy. To counter this, the framers, in Article I, gave Congress, the people’s branch, the power to declare war. Article II makes the president the commander in chief during war. With war declaring and war executing vested in different branches, the framers hoped to ensure that when a country, in Thomas Jefferson’s apt phrase, “unchained the dogs of war,” it was because the people through their representatives supported it. How has this idea played out in time? Some would say not very well. Initially, presidents demonstrated great respect for the Constitution’s separation of the war powers, but in time, as the presidency grew more powerful, as crises centered more authority into the White House, as the United States became a world power, and as U.S. commercial interests grew, power increasingly gravitated to and was absorbed (in some cases, grabbed) by the presidency, and, usually, Congress merely stood by as passive observers; worse, they often delegated war-making authority to the president. A first big step toward presidential control of the war powers came just after World War II with the emergence of the cold war. Amid the anticommunist hysteria of the early cold war days, President Harry S. Truman claimed and Congress quietly ceded to him independent authority to commit U.S. troops to war (in this case, in Korea). That Truman claimed such power should surprise no one. In fact, James Madison anticipated just such efforts and attempted to set power against power via the separation of powers. That Congress should so meekly let Truman get away with this power grab might surprise us. After all, why would the Congress so silently cave in to a president on so important an issue? Part of the reason stems from the cold war ideology: Anything that could be done to stop communism should be done, or so it was widely believed. To defend Congress and its institutional/constitutional authority thus might be seen as caving in to communism, and that usually meant political suicide. Powers once lost are hard to regain. Thus, the Congress wallowed in war-making weakness for decades, but in the aftermath of the debacle in Viet-
nam, Congress made a serious attempt to reclaim (some) of its lost authority. In 1973, it passed the War Powers Resolution which required, among other things, consultation prior to a president placing troops in potentially hostile circumstances. Lamentably, the War Powers Resolution has done little to control presidential war making or to increase congressional involvement in decision making. Presidents continue to act as if they make foreign policy and war, and in those rare instances where the Congress tries to chain the dogs of war, resolute presidents can find ways, legal and otherwise, to get their way. In recent years, a further step has been taken that extends the claims of presidential war-making power even further from its constitutional roots. President George W. Bush has extended the claims to presidential war power in the post–September 11, 2001, war against terrorism, going so far to assert that not only does the president have unilateral war-making authority, but that in the war against terrorism, the president’s actions are “nonreviewable” by the other branches. In this sense, we have nearly come full circle back to the power of the king to decide alone matters of war and peace. As the war-powers case illustrates, in time, presidents have grabbed and Congress has given a tremendous amount of political as well as constitutional authority to the presidency, and while many view this power shift as inevitable, few can argue that it is constitutional. Strikingly, the power of the presidency has grown, but the words of the Constitution have not changed. Presidents have been able to rewrite the Constitution in practice by acting, claiming broad swaths of power, and being given such authority by a meek Congress. Presidential power has indeed been evolutionary. One cannot understand the growth of presidential power without understanding the history and development of the office as well as the nation. Given the role that the United States continues to play on the international stage, one should not expect the power of the presidency to diminish. In fact, the global responsibilities of the United States almost guarantee a strong, centralized presidency with power far beyond what the Constitution dictates. As long as the Congress turns its back on its institutional responsibility; as long as the public remains
sanguine regarding presidential authority; as long as the courts do not engage in a frontal constitutional assault on presidential power; and as long as world events propel the United States into the forefront, we can expect the power of the presidency to remain significant. While the power of the presidency has evolved in time, there has never been—with the possible exception of the post-Vietnam, post-Watergate era—an open debate on the scope and limits of presidential power. Absent such a debate, the forces that have propelled the presidency into the forefront will likely continue to add to rather than detract from the power that gravitates to the office. Is the strong presidency a presidency that is a far cry from what the framers of the U.S. Constitution created, here to stay? In all likelihood, the strong presidency will continue to be linked to U.S. power both at home and abroad. In this sense, it is very likely that the strong presidency will become, if it has not already, a permanent fixture in the U.S. and international systems. See also foreign-policy power. Further Reading Cronin, Thomas E., and Michael A. Genovese. The Paradoxes of the American Presidency. 2nd ed. New York: Oxford University Press, 2004; Genovese, Michael A. The Power of the American Presidency, 1787–2000. New York: Oxford University Press, 2001; Neustadt, Richard E. Presidential Power: The Politics of Leadership. New York: Wiley, 1960. —Michael A. Genovese
executive agencies
Each of the three branches of the federal government plays a distinct role in the governing process. The legislative branch is responsible for making federal law, the executive branch is responsible for enforcing federal law, and the judicial branch is responsible for interpreting federal law. From the time of the first administration of President George Washington, it became obvious that one person could not carry out the duties of the presidency without advice and assistance in the enforcement and implementation of the laws of the land. As a result,
executive-department heads (known as cabinet secretaries), heads of executive agencies housed within each department, and heads of independent agencies assist the president in this regard. Unlike the powers of the president, the responsibilities of these members of the executive branch are not outlined in the U.S. Constitution, yet each has specific powers and functions associated with each executive-branch post. Department heads and heads of executive agencies advise the president on policy issues and help in the day-to-day execution of those policies, while the heads of independent agencies also execute policies or provide special services to the executive branch of the federal government. While executive agencies are perceived by most citizens to be run by faceless bureaucrats who have little impact on the lives of citizens, these agencies have a tremendous presence in the day-to-day functions of the people in the form of regulation of the food that is eaten, the cars that are driven, consumer products that are purchased, programming available on television and cable, and even air quality. Executive agencies are organized in the following categories: the Executive Office of the President, the 15 departments that constitute the president’s cabinet, independent agencies (including a handful that are considered to be cabinet-level agencies), and quasi-independent agencies (some of which are also considered government corporations). One difference between independent agencies and the remaining agencies is that top appointees within independent agencies cannot be dismissed except “for cause,” which includes a dereliction of duty or criminal or civil negligence, whereas appointees of other agencies serve, for the most part, at the pleasure of the president. Executive agencies are governed by executive-branch officials, and they are usually proposed and created by those within the executive branch. Yet, these agencies are approved and funded by Congress, and as such, Congress maintains important oversight over the functions of the executive branch in its role in enforcing laws and implementing policy at the federal level. The Executive Office of the President (EOP) was established by Congress in 1939 on the recommendation of the Brownlow Committee, which determined that then-President Franklin D. Roosevelt needed a
support staff to enable him to run the executive branch of the federal government effectively. The EOP consists of the immediate staff of the president, as well as the various levels of support staff that report to the president. Since 1939, the EOP has grown tremendously both in size and influence. Currently, more than 1,800 full-time employees make up the EOP, with employees housed in both the East and West Wings of the White House as well as the Eisenhower Executive Office Building, which is considered an extension of the White House. The agencies and offices within the EOP include the following (in alphabetical order): the Council of Economic Advisors (CEA), the Council on Environmental Quality, the Domestic Policy Council, the Homeland Security Council, the National Economic Council, the National Security Council (NSC), the Office of Faith-Based and Community Initiatives, the Office of the First Lady, the Office of Management and Budget (OMB), the Office of National AIDS Policy, the Office of National Drug Control Policy, the Office of Presidential Communications, the Office of Science and Technology Policy, the Office of the Vice President, the Office of White House Administration, the President's Critical Infrastructure Protection Board, the President's Foreign Intelligence Advisory Board, the Privacy and Civil Liberties Oversight Board, the United States Trade Representative, the USA Freedom Corps, the White House Chief of Staff, the White House Counsel, the White House Fellows Office, the White House Military Office, and the White House Office of the Executive Clerk. Currently, there are 15 departments of the executive branch of the federal government. The heads of these 15 departments make up the president's cabinet, which emerged during the early years of the republic as a group of advisers to the president on a variety of policy issues. President George Washington's first cabinet of 1789 was composed of only three departments: the Treasury, State, and War, although this group would not be referred to as a "cabinet" until 1793. In 1798, during the John Adams administration, a fourth department, the Department of the Navy, was added to the cabinet. While not officially designated as such until later, the U.S. attorney general was generally considered to be a part of Washington's cabinet as well.
The cabinet includes a total of 15 departments (with the date each was established): the Department of State (1789; formerly the Department of Foreign Affairs), the Department of the Treasury (1789), the Department of Defense (1947; formerly the Department of War), the Department of Justice (1870), the Department of the Interior (1849), the Department of Agriculture (1889), the Department of Commerce (1903), the Department of Labor (1913), the Department of Health and Human Services (1953; formerly part of the Department of Health, Education, and Welfare), the Department of Housing and Urban Development (1965), the Department of Transportation (1966), the Department of Energy (1977), the Department of Education (1979), the Department of Veterans Affairs (1988), and the Department of Homeland Security (2002). Six other positions within the executive branch are also considered to hold cabinet-level rank: the vice president of the United States, the White House Chief of Staff, the administrator of the Environmental Protection Agency, the director of the Office of Management and Budget, the director of the Office of National Drug Control Policy, and the United States Trade Representative. Within the 15 departments, more than 300 executive agencies exist to provide specific policy expertise, both in advising the president and in implementing federal policy. The departments that house the most executive agencies include Justice, Defense, State, and Energy. Those departments with the fewest executive agencies include Veterans Affairs, Housing and Urban Development, and Interior. Some of the more prominent and recognizable executive agencies include the following (with the home department listed in parentheses): the Forest Service (Agriculture); the Bureau of the Census (Commerce); the Joint Chiefs of Staff and the Departments of the Army, Navy, and Air Force (Defense); the Office of Federal Student Aid (Education); Argonne, Lawrence Livermore, and Los Alamos National Laboratories (Energy); the Food and Drug Administration and the Centers for Disease Control and Prevention (Health and Human Services); the Coast Guard, the Federal Emergency
Management Agency, and the U.S. Secret Service (Homeland Security); the Federal Housing Administration (Housing and Urban Development); the National Park Service (Interior); the Federal Bureau of Investigation, the Bureau of Alcohol, Tobacco, Firearms, and Explosives, and the Drug Enforcement Administration (Justice); the Occupational Safety and Health Administration (Labor); the Counterterrorism Office (State); the Federal Aviation Administration and the National Highway Traffic Safety Administration (Transportation); the Internal Revenue Service and the U.S. Mint (Treasury); and the Veterans Health Administration (Veterans Affairs). As the name implies, independent regulatory agencies are to be independent of the president and Congress, are to regulate and make rules in the public as well as the private sector, and are considered quasi-governmental agencies since they operate outside of the departments of the executive branch. Independent agencies are established through statutes passed by Congress that define the goals of each agency and the policy area that each agency will oversee. If an agency also has rule-making authority over a particular policy area, then the agency's rules and regulations also have the force of federal law while in effect. Independent agencies regulate a wide range of activities and are to do so in an effort to serve the public interest. These agencies are usually administered by a board or commission, and steps are taken to ensure the independence of these governing bodies. While most agency board members are appointed by the president, sometimes with the advice and consent of the Senate, they are usually drawn from different political parties, serve for a fixed time period, cannot be fired for political reasons, and are mostly insulated from political pressure. The first independent regulatory agency was the Interstate Commerce Commission, established in 1887 (and abolished in 1995). Current examples of independent agencies include the Central Intelligence Agency, the Federal Reserve Board, the National Labor Relations Board, the Federal Trade Commission, the Federal Communications Commission, the Environmental Protection Agency, the National Aeronautics and Space Administration, the National Archives and
Records Administration, the National Science Foundation, the Peace Corps, the Small Business Administration, the Selective Service System, and the Office of Personnel Management. These agencies have considerable power to regulate in their respective areas of concern. In a sense, these agencies exercise certain legislative, executive, and judicial functions, and yet their leaders are appointed, not elected. A specific example of an independent agency and its area of responsibility is the U.S. General Services Administration (GSA). This federal agency, created by the Federal Property and Administrative Services Act of 1949, is responsible for the purchase, supply, operation, and maintenance of federal property, buildings, and equipment and for the sale of surplus items. The GSA also manages the federal motor-vehicle fleet and oversees telecommuting centers and child-care centers. The Federal Property and Administrative Services Act set up rules and procedures for the management of the federal government's records and property. Included under the act are the construction and operation of government buildings; the procurement of supplies; the use and disposal of property; communications, transportation, and traffic management; storage of materials; and data processing. The GSA's job is to facilitate the functioning of the federal government. It supplies products and services to federal agencies across the United States. The enabling legislation describes its mission: to "help federal agencies better serve the public by offering, at best value, superior workplaces, expert solutions, acquisition services and management policies." The GSA employs roughly 13,000 federal employees. GSA Regional Offices are located in Boston, New York, Philadelphia, Atlanta, Chicago, Kansas City, Fort Worth, Denver, San Francisco, Seattle, and Washington, D.C. The GSA's two main responsibilities are serving as the government's "landlord" by providing office and other space for the federal workforce (the federal government owns or leases more than 8,300 buildings) and serving as the primary federal acquisition and procurement force by providing equipment, supplies, telecommunications, and integrated information-technology solutions to federal agencies that interact with citizens as their "customers." The GSA manages federal-
government assets valued at nearly $500 billion, including an interagency fleet of more than 170,000 vehicles. The agency also handles the government auction to the public of various items from surplus, seized, and forfeited property, including cars, boats, jewelry, furniture, and real estate. A number of federal agencies are under the GSA, including the Federal Acquisition Service, the Public Buildings Service, the Office of Governmentwide Policy, and the Federal Technology Service. The National Archives was a part of the GSA until 1985, when it became an independent federal agency. The Federal Citizen Information Center (FCIC) is also part of the GSA. This organization has been providing free information to citizens regarding consumer information and government services since the early 1970s. Headquartered in Pueblo, Colorado, the FCIC produces the Consumer Information Catalog (published four times a year and containing descriptive listings of about 200 free or low-cost federal publications) and the Consumer Action Handbook (first published in 1979, this helps citizens find the best and most direct source for assistance with their consumer problems and questions). The GSA, due to its massive "buying power," has from time to time been accused of bureaucratic inefficiency and/or corruption in purchasing and contracting. According to the GSA's Web page (www.gsa.gov), the agency is "focusing increasingly on adding value through new, efficient, and effective ways for federal employees to do their work . . . [and] is helping to create a citizen-centric, results-oriented government that is even more productive and responsible to all Americans." In 1935, the U.S. Supreme Court protected the independence of these types of agencies when, in Humphrey's Executor v. United States, it held that President Franklin D. Roosevelt had exceeded his authority in firing a member of a regulatory commission without legitimate cause. More than 50 years later, the Supreme Court reiterated this theme in Morrison v. Olson (1988) when it again limited the president's authority to fire an independent counsel appointed to investigate the executive branch for allegations of wrongdoing. Thus, while presidents may appoint members to independent regulatory agency boards or commissions, only in rare
circumstances can they fire or remove them from the commissions. Such efforts to guarantee independence can only go so far. After all, the president does appoint members of these commissions, and an astute or ideological president can carefully select commission members based on ideological or partisan considerations. This is precisely what Ronald Reagan did when he screened potential appointees for their political and ideological credentials. As part of President Reagan's administrative-presidency approach, such appointees, while technically independent, were ideological soul mates of the president and could nearly always be relied on to do the president's political bidding. Independent regulatory commissions can never be truly independent, but if their work is transparent and open to public and media scrutiny, the public interest may, in the end, be served. However, there is a clear economic interest for the industries and activities being regulated to attempt to influence the decisions of any regulatory commission, be it independent or otherwise. A government-owned corporation is an agency of government that operates in ways similar to a corporation in the private sector. A government corporation may be set up when an agency's business is generally commercial, when it can generate its own revenue usually through the sale of goods or services, and when the agency's mission requires greater flexibility than is normally granted to a governmental agency. According to the U.S. Code (31 USC 9109), government corporations have two designations: "government corporations" and "wholly owned government corporations." Both types of government corporations operate in ways similar to private corporations. Some of the best known of these government corporations are the Tennessee Valley Authority (TVA), created in 1933, the National Railroad Passenger Corporation (better known as Amtrak), the Corporation for Public Broadcasting (CPB), the Federal Deposit Insurance Corporation, the Resolution Trust Corporation, and the U.S. Postal Service. These corporations are usually headed by a commission similar to a corporate board. Such commissions are usually bipartisan. Normally, the president will appoint members to the commissions, sometimes with the advice and consent of the Senate. In 1945,
Congress passed the Government Corporation Control Act to better control these corporations. At that time, the number of government corporations was rising, and concerns were raised about their use. In recent years, presidents and the Congress have been less inclined to use government corporations and have instead relied more on traditional government agencies. Examples of these different types of government corporations include the African Development Foundation (designed to support community-based self-help initiatives in Africa), the Commodity Credit Corporation (which was established to stabilize, support, and protect farm income and prices), the Community Development Financial Institutions Fund (created to support economic revitalization and development in distressed urban and rural areas), the Export–Import Bank of the United States (which was set up to help create jobs through promoting exports), the Federal Crop Insurance Corporation (to improve the economic stability of agriculture), the Corporation for National and Community Service (to offer opportunities to young people to volunteer in community activities), the Government National Mortgage Association (often referred to as Ginnie Mae, set up to support federal housing initiatives), and the Overseas Private Investment Corporation (to assist less-developed nations to make the transition to market-oriented economies). The Panama Canal Commission is an example of a now-defunct, wholly owned government corporation, since the country of Panama took complete control of the canal on December 31, 1999. Further Reading Aberbach, Joel D., and Mark A. Peterson, eds. Institutions of American Democracy: The Executive Branch. New York: Oxford University Press, 2005; Aglietta, Michel. A Theory of Capitalist Regulation: The U.S. Experience. London: Verso, 1979; Crew, Michael A., ed. Incentive Regulation for Public Utilities. Boston: Kluwer Academic Publishers, 1994; Goodsell, Charles T. The Case for Bureaucracy: A Public Administration Polemic. 4th ed. Washington, D.C.: Congressional Quarterly Press, 2003; Patterson, Thomas E. The American Democracy. 7th ed. Boston: McGraw-Hill, 2005; Pertschuk, Michael. Revolt Against Regulation:
The Rise and Pause of the Consumer Movement. Berkeley: University of California Press, 1982; Walsh, Annmarie Hauck. The Public’s Business: The Politics and Practice of Government Corporations. Cambridge, Mass.: MIT Press, 1980. —Lori Cox Han and Michael A. Genovese
executive agreements
Executive agreements are formal agreements, pacts, or understandings entered into by the president of the United States with a foreign government or foreign head of state. An executive agreement differs from a treaty in that a treaty must be formally approved by the U.S. Senate, whereas an executive agreement is a unilateral pact entered into by the executive with a foreign power. Nowhere mentioned in the U.S. Constitution, executive agreements are nonetheless considered indispensable for handling the myriad pieces of business, large and small, that arise with other nations. To require that every agreement go to the Senate for formal approval would be a daunting and impossible task; thus, agreements serve to increase the efficiency of government by allowing the executive to conduct business without being tied down with the formal requirements of treaty authorization. Presidents claim authority to enter into executive agreements as a power derived from the "executive power" as well as the implied powers deriving from the foreign-policy power of the president. When executive agreements are used to cover minor or technical matters, they prove largely uncontroversial, but presidents have increasingly attempted to make major policy moves with executive agreements and to use them as a way to circumvent the constitutionally required approval of the Senate. In this way, the unilateral exercise of presidential power may backfire, and the Congress or the courts may demand that the president submit such proposals to the Congress in the form of a treaty for ratification. It is believed that the first executive agreement dates to 1792, a postal agreement with the government of Canada. Through the years, executive agreements have been employed to make major as well as minor policy decisions. Thomas Jefferson purchased the Louisiana Territory via an executive agreement,
even though he believed that such a move was unconstitutional (in fact, Jefferson contemplated submitting a request to Congress for a constitutional amendment that would have allowed him to purchase the Louisiana Territory legally, but in the end, he decided against this route as it would have taken too much time and the opportunity to purchase the Louisiana Territory might have been missed). President James Monroe reached the Rush-Bagot Agreement with Great Britain in 1817, a pact limiting naval forces on the Great Lakes. Further, in 1898, President William McKinley annexed Hawaii as a U.S. territory through an executive agreement. Just prior to U.S. entry into World War II, President Franklin D. Roosevelt concluded the destroyers-for-bases deal with Great Britain via executive agreement. Increasingly, presidents have used executive agreements to make major policy and have eschewed the use of the more cumbersome and politically difficult treaty-approval process. In 1972, Congress passed the Case Act requiring presidents to transmit to the Congress all agreements entered into by the executive branch. This act also set up a procedure for submitting to the Congress secret or sensitive agreements that needed to be kept from public knowledge. Executive agreements have the force of law. In United States v. Belmont (1937), the U.S. Supreme Court concluded that the president could enter into such agreements, and that in doing so, the agreements were the legal equivalent of treaties, but such agreements could not violate the Constitution or supersede United States laws. In 1955, the U.S. Supreme Court struck down an executive agreement in Seery v. United States and did so again in 1957 in Reid v. Covert. Thus, if within constitutional bounds, an executive agreement has the full force of law. But if an agreement goes beyond a law or the Constitution, the Supreme Court can and will strike it down. Are executive agreements a sound method of making policy in a constitutional democracy? Clearly to some degree, they are a necessary component of presidential foreign affairs leadership, but what of the cases where presidents may overstep their bounds and use an executive agreement in a place where a treaty would be more appropriate? Is it wise or constitutional to bypass the Congress by using executive
agreements to establish new and significant policies in foreign affairs? Supporters of executive agreements argue that they are vital to the workings of government and indispensable to sound management. While noting the potential dangers, they nonetheless argue that governments would come to a standstill if every agreement necessitated Senate approval. Critics recognize the need for some form of executive agreement but are unsettled at the prospect of presidents using agreements in place of treaties for major policy changes. Yes, presidents have abused the process, and yes, executive agreements are a necessary part of the executive and governing process, but in the long run, it is up to the informal give and take of politics to sort out the rough spots in the executive agreement versus treaty dilemma. Executive agreements are often referred to as part of the "unilateral power" of the president, that is, powers that the president may sometimes exercise on his own, without Congress. Given that presidents may have a hard time getting Congress to pass their proposed legislation or to ratify a controversial treaty, it should not surprise us that presidents look more and more to ways that they might go around Congress to get their way. Also, as the expectations on presidents to perform and get results have increased, they have resorted more and more to the use of executive agreements. In the first 150 years of the republic's history, treaties (requiring the ratification of the Senate) far surpassed the number of executive agreements. However, in the past several decades, presidents have signed approximately 50 executive agreements for every treaty submitted to the Congress. Scholar Francine Kiefer referred to this as "the art of governing alone." Presidents tend to exercise their powers unilaterally with executive agreements when they wish to head off a more extreme policy that they fear will be imposed by the Congress, thus preempting a more aggressive or extreme policy with their own. They may also be tempted to use executive agreements when the Congress is unable to decide or has reached a deadlock on a particular issue that the president feels needs to be addressed but is not being addressed by the Congress. In the end, presidents use executive agreements because they feel that they can get away with acting
unilaterally and when they feel strongly about an issue. It is a form of extraconstitutional activity that is increasingly tempting in an age when presidents and congresses clash and when presidents are held responsible for more and more. While the framers of the U.S. Constitution clearly did not anticipate the dramatic rise in the use of executive agreements, from a presidential perspective it makes very good sense to go to the well of executive agreements, at least until the courts or Congress block that path. Governing without Congress may not have been what the framers had in mind, but in the modern age, it may seem to presidents the only, or at least the most attractive, path to gaining control of the policy agenda. One should expect that absent a clear signal from the Congress that it will not tolerate such end-runs or a firm decision by the courts that the repeated use of executive agreements violates the system of shared and overlapping powers or intrudes in the legislative process, presidents will continue to use and perhaps even abuse this form of unilateral power. As the power of the presidency has increased, especially concerning foreign affairs, the use of executive agreements and other unilateral powers is likely to remain stable or to increase over time. Executive agreements provide the president with a quick, relatively easy route to reaching agreements with foreign countries. As long as there is no significant congressional backlash against executive agreements, their legal and political viability remains strong. Further Reading Fisher, Louis. Constitutional Conflicts between Congress and the President. 4th ed. Lawrence: University Press of Kansas, 1997; Gilbert, Amy M. Executive Agreements and Treaties. New York: Thomas–Newell, 1973; Glennon, Michael J. Constitutional Diplomacy. Princeton, N.J.: Princeton University Press, 1990; Howell, William G. Power Without Persuasion. Princeton, N.J.: Princeton University Press, 2003; Warber, Adam L. Executive Orders and the Modern Presidency. Boulder, Colo.: Rienner, 2006. —Michael A. Genovese
executive branch
The executive branch as it exists today was not written into the U.S. Constitution but has its origins in
the first administration of President George Washington. The United States was operating under the Articles of Confederation for about a decade when the Constitution was written and the presidency was created. Even though there was no executive branch per se, the people were accustomed to having some kind of enforcement power in the hands of a number of secretaries. Because of this familiarity with administrative control, Washington was able to quickly appoint an administrative force to work as his executive branch. The first set of secretaries became the first cabinet and included Secretary of State Thomas Jefferson, Secretary of War Henry Knox, Secretary of the Treasury Alexander Hamilton, Attorney General Edmund Randolph, and Postmaster General Samuel Osgood. This cabinet set the norm that if the president wants someone to serve in the cabinet, the Senate, which has confirmation power, will generally defer to the president's choices. Removal powers are ambiguous, as the Constitution does not explain how an executive-branch official can be removed other than through congressional impeachment. As a result, it is unclear whether the president can fire secretaries without Congress's approval as well. Generally, when a president asks for a secretary's resignation, the secretary complies. The original set of secretaries also established the norm that the cabinet is a presidential advocate in the executive branch, sending messages down the administrative hierarchy while also being the agency's advocate with the president. The executive branch is the institution that surrounds the presidency. It began with the above five men serving in roles to which people had become accustomed and now employs nearly 3 million civilian personnel. As the executive function in government is to enforce the law, the executive branch is responsible for implementing all laws. The executive branch includes cabinet bureaucracies that answer also to the Congress, independent regulatory agencies that answer also to statute, executive agencies that answer exclusively to the president, and government corporations that answer to stockholders. The cabinet includes executive agencies such as the Department of State, while the Federal Communications Commission is an example of an independent regulatory agency, the Central Intelligence Agency is an example of an executive agency, and
the U.S. Postal Service is an example of a government corporation. The presidency is one piece of this larger executive branch that enforces the law. While the executive branch began without explicit constitutional authority, it has amassed much power through the centuries as it has institutionalized. The executive branch is now an organization that is no longer at the whim of outside forces. The more responsibility the president has taken on (for example, through the Budget and Accounting Act of 1921, which gave the president the authority to plan a budget), the more the executive branch has become an institution. In the 1930s, however, it became clear that the president still lacked the resources to oversee the rapidly expanding federal government properly. As a result, in 1937, President Franklin D. Roosevelt called for the formation of the Brownlow Committee to study the institution and recommend changes. The committee's simple conclusion—"The president needs help"—helped lead to the formation of the Executive Office of the President (EOP) in 1939. The executive branch is a collective entity. The EOP has units that exist for a long time and some that disappear after the current president leaves office. About a third of EOP staff is made up of permanent civil servants. The 1939 reorganization transferred the Bureau of the Budget from the Treasury Department to the EOP and also created the White House Office. One third of the agencies were added to the EOP by an act of Congress, one third was established by a presidential submission of a reorganization plan to Congress, and 40 percent were established by executive order. When the president wants to make changes to the executive branch, he has the easiest time when he uses an executive order. Executive-branch agencies are bound both to Congress and to the presidency. Although the president is the head of the executive branch and all its agencies, the heads of these agencies are accountable also to the legislative branch. As a result, their fates are not tied only to the president's. Because of the budget laws, the president has the power of the purse when it comes to executive-branch agencies. This means that the agencies are at his disposal almost entirely. Even though all agencies in the executive branch have to follow directives from the president if they want to maintain them-
selves, there are some agencies whose heads are completely loyal to him and to his vision, while other agencies tend to function with heads that are more loyal to the agency vision. Those who are completely loyal in the executive branch are known as the inner circle. The executive branch as an institution has a major influence on the way that the U.S. government operates. Large numbers of public bills begin in executive-branch agencies. When a president has a vision for what he wants to happen in some policy area, he will turn to his agency heads to formulate the first swipe at the policy. For example, the civil rights bills that the John F. Kennedy administration was working with in the 1960s were drafted by Attorney General Robert F. Kennedy. When the attorney general then came to Congress to discuss the civil rights policy, Kennedy could refer to the legislation that he specifically had written. Second, congressional committees solicit views from executive-branch agencies whenever they are considering new legislation or working on oversight of executive-branch implementation. Congressional committees will even call in agency heads from the states. For example, when the Congress under Speaker of the House Newt Gingrich (R-GA) was working on welfare reform in the mid 1990s, it brought in state-agency heads to ask about their experiences with implementing different types of policies. In the aftermath of Hurricane Katrina, many agency heads were called in to discuss how to better implement disaster relief in the future. In this way, executive agencies play a major role in the laws that are passed. A third way the executive branch influences the way the U.S. government operates is in its part in the so-called cozy (or iron) triangle. The cozy triangle is a description of the meeting of minds about policy and policy changes. One piece of the triangle is a Congress member on a particular committee or subcommittee that handles a specific area of policy. A second piece of the triangle is an interest group that is involved in lobbying on behalf of an interest that is relevant to the policy. Finally, an executive-branch member is the third part of the triangle. This executive-branch worker will be responsible for the implementation of the policy. In this way, the influence on Congress happens not only through the lobbying
interest group but also through the agency expertise that knows how best to implement the policy. The executive branch, even though it has garnered power through years of developing policy and has taken on more responsibility through statute, is answerable and accountable to the president. The president has direct control over the executive branch and even decides what the agencies can say to Congress. President Theodore Roosevelt put into place a gag rule that said no agency could engage in a conversation with Congress without presidential intervention. In 1921, the Budget and Accounting Act implemented a central clearance policy that said all conversations between the executive and the legislative branches would happen under the direction of the president through the budget process. In the 19th century, agencies could and did speak directly to Congress about policy areas under their control. As the president gained power, so did the executive branch, gaining independence from Congress while increasing its dependence on the president. The executive branch is the piece of the federal government with which a U.S. citizen is most likely to deal when trying to resolve an issue. If a grandmother has not received her Social Security check, she will call the Social Security Administration, which is an executive-branch agency. If someone is being audited for their taxes, an Internal Revenue Service (IRS) agent will call. The IRS is an executive-branch agency. In many ways, the executive branch is the most citizen-interactive of any of the branches. In many other ways, the executive branch is a very powerful institution that works far from the eyes of U.S. citizens. Much of what citizens know about terrorists in the world is gathered by intelligence agencies that employ people whom most citizens will never see. These people work for the executive branch, trying to make U.S. homes safer and to protect lives. While the executive branch may seem remote to U.S. citizens, it is possible to lobby it, and citizens can be successful in convincing an agency to address their issues. As people have become more politically active, interest groups have increasingly affected the executive branch. Interest-group proliferation has not only influenced the way Congress operates but also the way the executive branch works. See also bureaucracy.
Further Reading Cohen, Jeffrey E. The Politics of the U.S. Cabinet: Representation in the Executive Branch, 1789–1984. Pittsburgh, Pa.: University of Pittsburgh Press, 1988; Heclo, Hugh. “One Executive Branch or Many?” In The Presidency, the Executive Branch, and Congress in the 1980s, edited by Anthony King. Washington, D.C.: American Enterprise Institute, 1983; Mayer, Kenneth. “Executive Orders and Presidential Power,” The Journal of Politics 61, no. 2 (May 1999): 445– 466; Neustadt, Richard E. “Presidency and Legislation: The Growth of Central Clearance,” American Political Science Review 48 (1954): 641–671; Ragsdale, Lyn, and John J. Theis III. “The Institutionalization of the American Presidency, 1924–92,” American Journal of Political Science 41, no. 4 (October 1997): 1280–1318; Seligman, Lester. “Presidential Leadership: The Inner Circle and Institutionalization,” Journal of Politics 18, no. 3 (August 1956): 410–416. —Leah A. Murray
Executive Office of the President
The Executive Office of the President (EOP), created in 1939, provides the president with an organizational structure that has management oversight across the federal government. Without the Executive Office of the President, the president would not be able to manage the enormous bureaucracy in the federal government that in 2006 numbered 3 million employees, not including the armed forces, with an annual budget of more than $2.5 trillion. When the Executive Office of the President was created in 1939, the White House Office was its sole unit. In the intervening years, numerous units have been added to the EOP both by statute, such as the Council of Economic Advisors, the National Security Council, and the Office of Management and Budget, and by executive order, such as the Domestic Policy Council, the National Economic Council, and the Office of National AIDS Policy. Today, the Executive Office of the President comprises about 15 large units, the largest being the Office of Management and Budget and the National Security Council. Presidents tend to add one or two policy units to the Executive Office of the President through
executive order during their term in office, but generally these policy units are disbanded in the next administration. This is particularly true if the White House changes party control from Democratic to Republican or vice versa. These new policy units reflect particular agenda issues on which the administration wants to focus. For example, President Gerald Ford created the Office of Consumer Affairs, President Bill Clinton created the President's Initiative for One America, and President George W. Bush created the Office of Faith-Based and Community Initiatives. Each of these new policy units within the Executive Office of the President provides staffing and a budget for in-depth oversight within the federal government of a major policy goal of the president. However, when the president leaves office, these policy units tend not to be continued in the new administration. The George W. Bush administration encountered a significant political backlash in 2001 when White House Chief of Staff Andrew Card announced that the Clinton EOP units of the Office of National AIDS Policy and the President's Initiative for One America would be terminated. Criticized for being insensitive to racial issues, Card reinstated the Office of National AIDS Policy. As the nation moved from an agricultural base to an industrial base in the late 19th and early 20th centuries, the role of the federal government expanded. The number of civilian employees in the federal government and federal expenditures both increased. There was little oversight of departmental programs from a central staff in the executive branch, with departments often having significant independence in the programs they developed and administered. To deal with the lack of central control in the executive branch, President William Howard Taft established the Commission on Economy and Efficiency in 1911, which was more widely known as the Taft Commission. Taft appointed Frederick Cleveland, a senior official with the New York Bureau of Municipal Research, a New York City think tank, to chair the commission. After completing its work in 1913, the Taft Commission noted that the cabinet-level departments needed greater coordination of programs, formal budgets, and fewer overlapping duties. The most important recommendation of the Taft Commission was that the president needed to enhance management control over the executive branch.
The recommendations of the Taft Commission were put on hold as the nation faced World War I and the debate concerning the League of Nations. President Warren Harding returned to the question of presidential management of the executive branch, creating a task force on government efficiency. The most important result to emerge from the Harding task force was the Budget and Accounting Act of 1921, which created the Bureau of the Budget within the Treasury Department. This was a major step toward presidential management of the executive branch, for it created the first uniform federal budget. By the election of 1932, management of the executive branch had become a central campaign issue. Both Republican president Herbert Hoover and his Democratic challenger, Franklin Delano Roosevelt, addressed the issue during their campaigns. Once elected, Roosevelt created advisory groups to devise ways in which the president could improve his management of the executive branch. The advisory groups, composed primarily of faculty from Columbia University, became known as the Brain Trust, and many individuals within the Brain Trust became part of Roosevelt's inner circle. One of the advisory groups, the National Resources Planning Board, was staffed with a small group from the Brain Trust. Although they were charged with working on natural resource and environmental planning, they quickly saw their charge as a broader managerial issue. In June 1935, they recommended to President Roosevelt that he strengthen his own staff with five "disinterested" individuals who would oversee parts of the emerging New Deal programs. Roosevelt supported the concept and, in November 1935, created a three-member panel overseen by Louis Brownlow, the director of the Public Administration Clearing House in Chicago, to develop a proposal for presidential management of the executive branch. The Brownlow Committee found that the president should be the center of managerial direction and control of all executive-branch departments and agencies; that the president did not presently have adequate legal authority or administrative machinery to manage and control the departments and agencies centrally; and that restoring the president to a position of central management and control of the executive branch would require cer-
tain changes in law and administrative practice. To accomplish these goals, the Brownlow Committee recommended that the president strengthen his management capabilities by making staff or institutional agencies directly responsible to the president; the president would establish a White House secretariat headed by an executive secretary who would maintain communication with all such agencies, except the Bureau of the Budget which would report directly to the president. The president would be granted continuing executive-branch reorganization power, conditioned by congressional veto authority, with reorganization research supported by the Bureau of the Budget. Finally, the president would be given the power to create temporary emergency agencies, when conditions warrant, and the authority to transfer the activities of the agencies into the permanent executive establishment after an emergency has passed. Roosevelt accepted all of the recommendations except that of a White House secretariat. Instead, he wanted formal presidential assistants. The Brownlow Committee translated that request into a recommendation for an organizational unit called the Executive Office of the President, staffed with six presidential assistants who would work directly with the president. Roosevelt agreed with the recommendation and supported sending the report to Congress for the requisite legislative authority. Due to Roosevelt’s conflicts with Congress over his New Deal programs and his proposal in January 1937 to increase the membership of the U.S. Supreme Court, the proposal developed by the Brownlow Committee was sidelined by Congress. After the 1938 congressional elections, Roosevelt reintroduced the Brownlow Committee proposals, and in 1939, Congress passed the Reorganization Act of 1939 creating the Executive Office of the President. The Executive Office of the President, as defined by the 1939 legislation, had six units: The White House Office with six administrative assistants for the president, in addition to the existing personal aides and clerical offices; the Bureau of the Budget, which was relocated from the Treasury Department; the National Resources Planning Board; a Liaison Office for Personnel Management; an Office of Government Reports; and an Office of Emergency Management.
By the end of President Harry Truman's term in January 1953, Congress had added several more units to the Executive Office of the President, including the Council of Economic Advisors in 1946 and the National Security Council in 1947. Since 1939, 40 separate units have been added to the Executive Office of the President by various presidents, although most have been disbanded by successive presidents. In general, units that were created by executive order are disbanded by the next president. Units that are created by statute, however, remain. Since most units are created by executive order, they tend to have a limited life span. President George W. Bush created four new units within the Executive Office of the President: the Office of Faith-Based and Community Initiatives, the Homeland Security Council, the Office of Global Communications, and the USA Freedom Corps. The Executive Office of the President had the following units in 2006: the White House Office (created by statute); the Council of Economic Advisors (created by statute); the Council on Environmental Quality (created by statute); the Domestic Policy Council (created by executive order); the National Economic Council (created by executive order); the National Security Council (created by statute); the Office of Administration (created by executive order); the Office of Faith-Based and Community Initiatives (created by executive order); the Office of Global Communications (created by executive order); the Office of Management and Budget (created by statute); the Office of National AIDS Policy (created by executive order); the Office of National Drug Control Policy (created by statute); the Office of Science and Technology Policy (created by statute); the Office of the United States Trade Representative (created by statute); the USA Freedom Corps (created by executive order); and the Homeland Security Council (created by executive order). The staffing and budget of each of these offices depend on the budget request of the president and thus vary from administration to administration. Even when units are statutorily mandated, such as the Council on Environmental Quality, their size and policy-making role within the EOP will vary from president to president. The degree to which presidents rely on any one of the units will depend on a president's management style.
Recent presidents have given several members of their EOP staffs cabinet rank. President George W. Bush gave cabinet rank to the White House chief of staff, the director of the Office of Management and Budget, the U.S. Trade Representative, and the director of the Office of National Drug Control Policy. See also executive agencies. Further Reading Arnold, Peri. Making the Managerial Presidency: Comprehensive Reorganization Planning 1905–1996. 2nd ed. Lawrence: University Press of Kansas, 1998; Relyea, Harold C., ed. The Executive Office of the President: A Historical, Biographical, and Bibliographical Guide. Westport, Conn.: Greenwood Press, 1997; Warshaw, Shirley Anne. Keys to Power: Managing the Presidency. 2nd ed. New York: Longman, 2004. —Shirley Anne Warshaw
executive orders
Executive orders are proclamations by the executive branch that have the same effect as law but do not require action from Congress. Executive orders are issued by the president of the United States or by the governor of a state under the authority of the executive branch of government. Executive orders are typically directed at government agencies or individual government officials, and they have the legal power of laws, especially when created in conjunction with acts of Congress, but they are created by executive edict rather than legislative deliberation. A special subset of executive orders issued by presidents of the United States is national security directives. The president announces these orders with the advice of the National Security Council. Given the often sensitive security nature of these directives, they are often kept classified. While executive orders by both presidents and governors have power similar to laws, they can also be amended by future presidents and governors who feel that the original is unclear or needs to be changed. The history of executive orders began when the United States elected George Washington the first president.
Executive orders have been issued since that time, though they are now made public, whereas in the past they were usually less formal and not widely publicized. In fact, executive orders were not even numbered until the early 1900s, a development that has made them easier to organize and publicize. A recent example of an executive order would be one enacted by President George W. Bush on May 10, 2006, that strengthens the federal government's efforts to protect people from identity theft. In addition, several executive orders have been of a controversial nature. Perhaps the most controversial was President Franklin D. Roosevelt's Executive Order 9066, by which Roosevelt invoked executive military power to remove and relocate many Japanese Americans along the western coast of the United States to internment camps. In the aftermath of the Japanese bombing of Pearl Harbor, approximately 110,000 individuals were interned by the president's executive order. With the rise of interest-group activity and persistently divided government, executive orders have become more frequent than ever before, as presidents and governors make greater use of their ability to issue them. Executive orders fall under criticism today because of the fear that this executive power is too great. Some see the executive order as a way of enacting a law without congressional approval and of altering existing laws enacted by a democratically elected legislature. Several scholars argue that U.S. presidents, in particular, use executive orders to circumvent the will of Congress and surreptitiously enact their preferred policy agenda. This policy agenda, some argue, is conducted outside the boundaries of the public debate and is largely foreign or arcane to the public. As a result, many people who are opposed to the rampant use of executive orders also argue that executives may come to resemble dictators if they are given the power to alter and manipulate laws created by other democratically elected officials. On the other hand, others view executive orders as a much-needed power of the president (and governors) to clarify laws passed through legislative bodies (as laws passed through Congress are potentially unclear). Executive orders could be seen as additional "guides" to federal agencies on behalf of the president (or governor); they may be seen as either clarifying
statutory laws that have been passed or as manipulating laws to fit certain desires that were not implemented when the law was passed in legislatures. Executive orders are also valuable for chief executives because they allow for efficiency in the policy-making process. Policy can be enacted more easily by executive fiat than through the often time-consuming legislative process. In particular, in times of national crisis, supporters of strong executive power argue that an executive agent must have the capacity to act quickly to respond to threats to the state or the nation. Aside from the normative questions related to the use of executive orders, there are structural checks on the use of executive orders in government. If, for example, a legislative body does not agree with the executive order that an executive has produced, it has the option of amending the law that was initially in place to clarify its intention further by means of amendment or floor debate. Also, executive orders may be challenged in court if a legislative body does not agree with the direction that the executive has taken, and the executive who enacted the executive order may be found to have exceeded his or her power. For instance, in the case of President Harry Truman and Executive Order 10340, the U.S. Supreme Court denied Truman the power invoked in his directive ordering the federal government to seize control of steel mills across the nation. He was attempting to resolve labor disputes in these mills during the Korean War but was found by the Supreme Court to have acted wrongfully, as it is not within the president's power to "create" laws; making laws is Congress's power. The Supreme Court thus saw President Truman as attempting to seize private businesses (steel mills) to resolve labor disputes without the lawmaking authority of Congress, stepping outside the boundaries of established executive power. Presidential executive orders throughout U.S. history have been documented by the Office of the Federal Register and can be found online or in several government publications. The National Archives has produced a codification of presidential proclamations and executive orders covering the period April 13, 1945, to January 20, 1989. There is also a numerical list of executive orders
(numbered retroactively beginning with an order issued by President Abraham Lincoln in 1862), thus creating an easier way of finding information about a particular executive order. Interestingly, one can even look online to discover items published in the Federal Register on a particular day. The main benefit of making executive orders available online is that it ensures transparent public access to the everyday actions of the government. While it is important to look at executive orders proclaimed by presidents, we must also look at executive orders proclaimed by state governors. As noted above, both governors and presidents have been given the power to create executive orders because of their status as leaders of the executive branch. Executive orders created by governors are as important as those created by the president, but they often affect agencies at the state level rather than the national level. One example can be seen in Executive Order RP23, issued on February 3, 2003, in the state of Texas. This executive order was created by Governor Rick Perry to honor the loss of the space shuttle Columbia, an in-flight disaster that resulted in the death of seven astronauts. Perry's Executive Order RP23 asked that on February 4, 2003, at noon, all Texans hold a moment of silence to honor the shuttle's fallen crew. Another example of a gubernatorial executive order is Executive Order 2006–09, issued by Idaho's governor, Dirk Kempthorne. This executive order established within the governor's office an entity entitled the "Executive Office for Families and Children," which would oversee the coordination of services involved in the well-being of children and families in Idaho. Further Reading Codification of Presidential Proclamations and Executive Orders, April 13, 1945–January 20, 1989. Washington, D.C.: Government Printing Office; Cooper, Philip J. By Order of the President: The Use and Abuse of Executive Direct Action. Lawrence: University Press of Kansas, 2002; Mayer, Kenneth. With the Stroke of a Pen: Executive Orders and Presidential Power. Princeton, N.J.: Princeton University Press, 2002; Warber, Adam L. Executive Orders and the Modern Presidency: Legislating from the Oval Office. Boulder, Colo.: Rienner, 2006. —Jill Dawson and Brandon Rottinghaus
executive privilege
Executive privilege is the constitutional principle that permits the president and high-level executive-branch officers to withhold information from Congress, the courts, and ultimately the public. Executive privilege is controversial because it is nowhere mentioned in the U.S. Constitution. That fact has led some scholars to suggest that executive privilege does not exist and that the congressional power of inquiry is absolute. There is no doubt that presidents and their staffs have secrecy needs and that these decision makers must be able to deliberate in private without fear that their every utterance may be made public, but many observers question whether presidents have the right to withhold documents and testimony in the face of congressional investigations or judicial proceedings. Executive privilege is an implied presidential power that is recognized by the courts, most famously in the United States v. Nixon (1974) U.S. Supreme Court case. Presidents generally use executive privilege to promote certain national security needs, to protect the confidentiality of White House communications, or to protect the secrecy of ongoing criminal investigations in the executive branch. Not all presidents have used this power for the public good, however; some have claimed executive privilege to try to conceal wrongdoing or politically embarrassing information. No president ever actually used the phrase executive privilege until the Dwight D. Eisenhower administration. Nonetheless, all presidents going back to George Washington have exercised some form of what we today call executive privilege. The first use of this authority dates back to 1792, when Congress demanded from the Washington administration information regarding the failure of a U.S. military expedition. Congress specifically requested White House records and testimony from presidential staff familiar with the event. Washington convened his cabinet to discuss whether a president possessed the authority to deny information to Congress. The cabinet and the president agreed that the chief executive indeed had such authority when exercised in the public interest. The president communicated this view to Congress in writing. Washington eventually decided to cooperate with the congressional inquiry and turned over the requested materials. But he had first laid the
groundwork for the presidential use of executive privilege. The standard established by Washington—that presidential secrecy must be used only in the service of the public interest—remains the proper approach to this day. The evolution of the exercise of executive privilege and of the legal decisions governing its use make it clear that this is a legitimate presidential power when used appropriately. Nonetheless, executive privilege took on a negative connotation when used by President Richard M. Nixon to try to conceal information about the Watergate scandal. Specifically, Nixon claimed executive privilege to prevent having to release the White House tapes that contained incriminating evidence of his participation in a cover-up of illegal activity by administration officials. Nixon claimed that concealing the tapes was required to protect the national security, but ultimately when the Supreme Court ruled against the president, it became clear that he had been trying to protect himself from incriminating information rather than promoting the public interest. In the U.S. v. Nixon (1974) case, the Supreme Court unanimously ruled that executive privilege is a legitimate presidential power, though not an absolute one. Indeed, in a criminal investigation where evidence was needed to secure the pursuit of justice, the constitutional balancing test weighed in favor of turning over the White House tapes and against the claim of privilege. During the Bill Clinton administration, there were numerous executive privilege battles including, most prominently, during the scandal in 1998–99 that led to the impeachment of the president. During the investigation of the president, Clinton’s attorneys made the case that presidential communications are privileged and that therefore certain White House aides could not be called to testify before the Office of Independent Counsel (OIC). The federal judge ruled that although presidents have legitimate needs of confidentiality, the OIC had made a compelling showing of need for testimony to conduct a criminal investigation properly. Clinton lost in his effort to prevent the testimony of his aides, but the judge did reaffirm the legitimacy of executive privilege. Although the Nixon and the Clinton controversies are the best-known uses of executive privilege, most presidents have exercised this power sparingly and judiciously. Indeed, because executive privilege
has been associated in the public’s mind with political scandals, most modern presidents have been reluctant to use that power except when they felt it absolutely necessary to do so. Some presidents have concealed their use of executive privilege by claiming other constitutional bases for the right to withhold information from Congress or the courts. Particularly in the early post-Watergate years, presidents were very reluctant to utter the words executive privilege because of its association with the Nixon scandals. Like his immediate predecessor, President George W. Bush has not been so reluctant to claim executive privilege. Early in his first term, Bush moved on several fronts to try in effect to reestablish executive privilege as a customary presidential power. Yet his efforts, like those of Clinton before him, were very controversial. In one case, he expanded the scope of executive privilege for former presidents, even to allow them to transfer this constitutional authority under Article II to designated family representatives. Bush issued an executive order in 2001 that effectively revised the intent of the Presidential Records Act of 1978 in a way that made it harder for the public to access the papers of past presidential administrations. In another case, the Bush administration tried to expand executive privilege to protect Department of Justice (DOJ) documents from investigations long ago closed. This claim was particularly bothersome to critics because the long-standing practice was to claim a right to withhold information about ongoing investigations that could be compromised by public disclosure. No president had ever claimed executive privilege to withhold documents from DOJ investigations that had been closed for many years. The two uses of executive privilege by the George W. Bush administration showcase how the political give-and-take of our system of separated powers often resolves such controversies. In the former example, the president initially prevailed in large part due to a tepid response from Congress and a lack of significant opposition outside of the academic community. In the latter example, a spate of negative publicity and aggressive efforts by a congressional committee to access the disputed documents resulted in the administration agreeing to turn over most of the materials it had tried to conceal.
Like other constitutional powers, executive privilege is subject to a balancing test. Just as presidents and their advisers have needs of confidentiality, Congress must have access to executive-branch information to carry out its constitutionally based investigative function. Therefore, any claim of executive privilege must be weighed against Congress’s legitimate need for information to carry out its own constitutional role, and, of course, the power of inquiry is not absolute, whether it is wielded by Congress or by prosecutors. In some cases, Congress has overreached in seeking to secure access to executive-branch information that is clearly not germane to any necessary legislative function. Nonetheless, in our constitutional system, the burden is on the executive to prove that it has the right to withhold information and not on Congress to prove that it has the right to investigate. Executive privilege should be reserved for the most compelling reasons. It is not a power that should be used routinely to deny those with compulsory power the right of access to information. Short of a strong showing by the executive branch of a need to withhold information, Congress’s right to investigate must be upheld. To enable the executive to withhold whatever information it wants would be to establish a bad constitutional precedent—one that would erode a core function of the legislative branch and upset the delicate balance of powers in our system. There have been some proposals in Congress to develop a clear statutory definition of the president’s power of executive privilege. Yet no such legislation has ever passed, and it is unlikely that such an effort would reduce interbranch conflicts over access to information. To date, the branches have relied on their existing constitutional powers to negotiate disputes about presidential claims of executive privilege. For the most part, the system has worked well without a legislative solution. As memories of Watergate and the Clinton scandal somewhat recede in time, it is likely that future presidents will become increasingly unembarrassed about the use of executive privilege. Further Reading Berger, Raoul. Executive Privilege: A Constitutional Myth. Cambridge, Mass.: Harvard University Press, 1974; Fisher, Louis. The Politics of Executive Privilege.
Durham, N.C.: Carolina Academic Press, 2004; Prakash, Saikrishna Bangalore. “A Comment on the Constitutionality of Executive Privilege,” University of Minnesota Law Review (May 1999); Rozell, Mark J. Executive Privilege: Presidential Power, Secrecy and Accountability. 2nd ed. Lawrence: University Press of Kansas, 2002. —Mark J. Rozell
Federal Communications Commission The Federal Communications Commission (FCC) is an independent federal agency charged with the regulation of interstate and foreign communications. The Communications Act of 1934, which created the FCC, abolished the Federal Radio Commission and transferred all jurisdiction over radio licensing to the newly established agency. Established as part of President Franklin D. Roosevelt's New Deal, the FCC was given authority to establish "a rapid, efficient, Nation-wide, and world-wide wire and radio communication service." Traditionally, broadcasters have had to contend with much greater regulation than do newspapers, magazines, and other print media. From broadcasting's emergence in the early 20th century, regulation was deemed necessary because the available airwaves were scarce and should be used in the public interest. Many federal broadcast rules were abandoned by the 1980s, as cable television and other new media provided viewers with diverse viewing and listening alternatives. However, Congress still remains active in the regulation of television, radio, telephones, and the Internet. The establishment of the FCC is part of the early history of radio in the United States. By 1926, more than 800 broadcasters were using the airwaves at various frequencies, and radio sales began to drop because people grew tired of programming interference. The Radio Act of 1927 therefore authorized an independent regulatory commission, the Federal Radio Commission (FRC), to license radio stations. The Federal Radio Commission had the authority to grant and renew licenses and regulate broadcasters based on public interest, convenience, and necessity, since the airwaves were considered a public resource. In 1931, a federal appeals court upheld the right of the FRC to withhold license renewal if programming did not meet the public-interest requirement. For exam-
ple, the radio show of Dr. John Romulus Brinkley, a "doctor" who had purchased his medical diploma and was diagnosing the illnesses of patients and prescribing remedies to listeners, was considered unacceptable programming and not in the public's interest. Also, by the mid-1930s, the diversity of political views on the radio was considered important, as the world began to watch the rise, with the help of radio, of Adolf Hitler in Germany. A broadcasting license was not considered the property of the holder but rather a permit to use a frequency that belonged to the American public. As such, Congress had a right to regulate its use. As a result, in an attempt to provide better regulation of the expanding medium, Congress passed the 1934 Communications Act and established the FCC. The FCC is directed by a five-member commission whose members are appointed by the president with the advice and consent of the Senate. Board members serve staggered five-year terms. The president designates one of the board members to serve as chairperson. No more than three board members may come from the same political party. It is the job of the FCC to classify electronic media services into such categories as broadcasting, cable, or common carrier (which is transmission of a message for a fee). The commission assigns frequencies, approves the transfer of licenses and heights of radio towers, monitors station power, adopts programming policies in the public interest, and handles licensing and fines. If a station fails to comply with federal broadcast regulations, the FCC can withdraw its license. However, the FCC seldom threatens to revoke a license for fear of being accused of restricting freedom of the press. A broadcast station can apply for renewal of its license by postcard and is virtually guaranteed FCC approval, which used to cover seven years for radio and five for television but was extended to eight years by Congress in 1996. Several issues are considered when granting or renewing a license. The applicant must be a U.S. citizen, show diversity of ownership (television broadcasters can own as many stations nationwide as they wish as long as the national audience share does not exceed 35 percent, and as many as eight radio stations in a given market can be owned by the same company depending on market size), must show financial and
technical ability to run a station, must promote minorities and women, past broadcast records of applicants must be acceptable, and programming must be offered to serve the needs of the station’s audience (although the FCC has a difficult time in setting policies that can be enforced). Since frequencies are a scarce resource, licensees are required to be somewhat evenhanded during election campaigns. The equal-time restriction means that they cannot sell or give air time to a political candidate without granting equal opportunities to the other candidates running for the same office. Also, the time must be sold at the lowest rate offered to commercial advertisers. Election debates are an exception since broadcasters can sponsor them and limit participation to nominees of the Republican and Democratic Parties only. Another exception is an appearance by a candidate on a bona-fide news program, such as 60 Minutes, Meet the Press, or Face the Nation, which are under the control of each network’s news division. Any candidate who also makes his or her living from broadcast or entertainment is also subjected to the equal-time rule. Comedian Pat Paulsen lost his contract with a television show for Disney when he entered the Republican primary in New Hampshire in 1972. Republican presidential candidate Pat Buchanan is another example. The cohost of Crossfire on CNN stepped down from his position while he was a candidate in 1992. At an earlier time, broadcasters were also bound by the fairness doctrine, an FCC regulation that compelled broadcasters to air opposing opinions on major public issues. By 1987, broadcasters, on the grounds that the fairness doctrine infringed on press freedom, persuaded the FCC to rescind it. During the years, the Communications Act of 1934 has been amended time and again. Most of these changes have come as a result of the rapid development of new communications technologies, such as the rise of television, satellites, microwave technology, cable television, cellular telephones, and personal communications devices. With the emergence of these new technologies, new responsibilities have been added to the FCC’s list of responsibilities. For example, the Communications Satellite Act of 1962 gave the commission authority to regulate satellite communications. The Cable Act of 1992 did the same for the emergence of cable television.
The FCC has been involved in several First Amendment cases heard by the U.S. Supreme Court. In Red Lion Broadcasting Co. v. FCC (1969), the Supreme Court upheld the constitutionality of a requirement that broadcasters provide a right of reply to a person whose character had been attacked during discussions of controversial public issues. Licensees must send anyone attacked on the air a copy of the broadcast, either a tape or a transcript, and offer free time to reply. This case arose after a journalist requested a right of reply from several radio stations that had carried a broadcast by a radio and television evangelist who had attacked him. Most stations offered free air time to the journalist, but one small station in Red Lion, Pennsylvania, said he would have to pay for airtime. The Supreme Court ruled that the personal-attack rule and the fairness doctrine itself enhanced rather than abridged freedom of the press. This case endorsed the earlier view by the Court that broadcast frequencies were scarce and should not be used by the licensee to promote personal political views. FCC v. Pacifica (1978) involved comedian George Carlin and his infamous "Filthy Words" monologue. The FCC had received a complaint from a listener who, while driving with his young child, heard the 12-minute monologue in which Carlin covered the seven words inappropriate for television. In a 5–4 decision, the Supreme Court upheld the right of the FCC to punish broadcasters after the fact for playing indecent material, even if it falls short of the obscenity standard. Stations can be fined or have their licenses revoked for airing indecencies at any time except early in the morning when children are not likely to be listening. The majority opinion, written by Associate Justice John Paul Stevens, drew a distinction between prior restraint through censorship, which is not allowed, and subsequent punishment of material deemed indecent for the airwaves, which is. His reasons included the fact that broadcast messages, unlike print messages, can enter the privacy of one's home unannounced and that viewers or listeners might not always hear warnings about explicit material as they tune in and out of stations. Also, broadcast stations, whether on radio or television, are "uniquely accessible" to children, and the Supreme Court had upheld decisions that help parents to keep such material out of their children's hands. The dissenting opinions
stated that since Carlin’s broadcast was not obscene, the Supreme Court had no authority to allow the FCC to punish the station. Also, as Associate Justice William Brennan pointed out, the Supreme Court misconstrued privacy interests since it did not consider equally the rights of listeners who wanted to hear such broadcasts, permitting “majoritarian tastes completely to preclude a protected message from entering the homes of a receptive, unoffended minority.” The outcome of the case showed the Supreme Court’s tendency to place stricter regulations on television and radio broadcasters through the FCC than in print media outlets. In spite of the outcome in FCC v. Pacifica, cable television and talk radio hosts have repeatedly tried to force the issue, and “shock jocks” such as Howard Stern as well as some of the more suggestive song lyrics from the rap and rock-music genres have attempted to push the boundaries of what is allowed. Pushed by a Congress concerned with indecency on the public airwaves, the FCC has attempted to impose a system of “family hours” in the prime-time evening viewing period, ban certain forms of speech and language, and prohibit the explicit depiction of sexual materials. The dynamic tension between free speech on the one hand and protecting young viewing audiences on the other has been a difficult one for the FCC. Where does free speech end and censorship begin? In a free society, just how free do we really want to be? Does the depiction of explicit material serve as an “escape valve” for tensions, or does it encourage antisocial and violent behavior? Also, should the censors be concerned with sexual materials alone? What of potentially improper political speech (the FCC has banned the depiction of explicit pictures of aborted fetuses arguing that it does not represent free speech and is offensive to many)? What of the depiction of violence? For a free society to remain free, certain liberties must be allowed. Regulating or banning sexual, political, or violent speech is repugnant to the defenders of the First Amendment, but the explicit depiction of materials deemed by some to be offensive is also a social problem. Getting the balance “just right” has been difficult, partly because social norms are a constantly moving target and partly because some political groups have a political stake in restricting various forms of speech. A free society cannot
allow a small group of deeply committed religiously motivated citizens to set the standards for an entire society. Neither can a free society have an “anything goes” approach to any and all forms of speech. The controversy over what is and is not permitted in broadcasting will never be fully resolved as the political, social, and moral stakes are areas of high interest and deep social meaning. See also executive agencies. Further Reading Besen, Stanley M., et. al. Misregulating Television: Network Dominance and the FCC. Chicago: University of Chicago Press, 1984; Hilliard, Robert L. The Federal Communications Commission: A Primer. Boston: Focal Press, 1991; McChesney, Robert W. Telecommunications, Mass Media & Democracy. New York: Oxford University Press, 1993; Pember, Don R., and Clay Calvert. Mass Media Law. Boston: McGraw-Hill, 2007. —Michael A. Genovese and Lori Cox Han
Federal Emergency Management Agency (FEMA) For the first 150 years of the existence of the United States, the federal government had a limited role in setting national policy and, apart from international relations and foreign policy, played a decidedly secondary role in both domestic affairs and the economy. During that period, the United States was emerging from a secondary position in the world to become a more influential, more powerful, and larger nation, and it relied less on the federal government and more on local authorities to meet the needs of its citizens. State and local governments had a more pronounced role in domestic and economic policy and were the primary source of relief and other needs-based policies. If there was a natural disaster, the state, the local authority, local charities, and the churches of the region were mainly responsible for providing assistance, relief, food, and shelter for those in need. The federal government stayed out of such "local" concerns, at least until the massive dislocations caused by the Great Depression that began with the stock market crash of 1929.
With the onset of the economic depression that lasted well into the 1930s, the federal government grew in size, scope, and power, and it took on a new and expanded role in providing relief and aid. The depression was so massive and widespread, so devastating and overwhelming, that the local authorities, the states, and local churches and charities were overextended and could not provide the relief necessary to meet the demands of this great crisis. Citizens demanded that the federal government step in and provide assistance and develop economic-recovery plans. This led to the election in 1932 of Franklin D. Roosevelt to the presidency, a Democratic majority in both houses of Congress, the emergence of the New Deal policies to meet the extraordinary needs of the times, and the establishment of the federal government as the primary guarantor of relief and assistance. This changed federal relationship weakened the state and local authorities as it strengthened the national government. From this point on, the federal government took on an expanded role and more
extensive responsibilities in dealing with crises, disaster relief, and redevelopment. During the New Deal era, a series of new federal agencies were created to deal with housing, food supplies, employment, and a host of other relief related activities. The federal government grew in size and power and from that point on was seen as having greater if not primary responsibility for handling domestic problems, but the increase in size and power led to some confusion over the best way to organize the government to deal with the needs of the nation. If an “administrative state” emerged, it was one cobbled together piece by piece, with little forethought, and less integrated design. In the 1960s, in response to several natural disasters (mostly hurricanes and flooding, primarily in the U.S. South), the federal government created agencies devoted to disaster preparation and to after-crisis relief. What eventually emerged from this evolutionary process was a new federal agency whose responsibility it was to better prepare for disasters and meet
Federal Emergency Management Agency staff assisting in the aftermath of Hurricane Katrina, New Orleans, Louisiana (FEMA)
the needs of local communities and regions after disasters occurred: the Federal Emergency Management Agency, or FEMA. The Federal Emergency Management Agency is a part of the Department of Homeland Security (DHS). Originally established in 1979 via an executive order by President Jimmy Carter, (Executive Order 12148), FEMA was designed to coordinate all the disaster relief efforts of the federal government. One of the first major tests of FEMA related to the dumping of toxic waste into Love Canal in Niagara Falls, New York. Shortly thereafter, FEMA took charge of the federal response to the nuclear accident at Three Mile Island in Pennsylvania. Overall, in the early years, FEMA performed well and was headed by experienced disaster-relief specialists. In 1993, President Bill Clinton elevated FEMA to cabinet-level status and named James Lee Witt as director. Witt was an effective director of the agency, and he initiated a number of reforms designed to make FEMA more effective and responsive to disaster relief needs. He was given high marks for his administrative and political expertise. After the September 11, 2001, terrorist attacks against the United States, efforts to prepare better for the “new” threat of terrorism, and the new disasters it might create led to a rethinking of the role of FEMA and of crisis preparation in general. Would FEMA be elevated to greater status? Would it be given new and more expansive responsibilities? Would its size mushroom and its budget increase? What role would FEMA play in a post 9/11 world? When the Department of Homeland Security (DHS) was established in the United States, FEMA, against the advice of some experts, was placed within the Department of Homeland Security where FEMA became part of the Emergency Preparedness and Response Directorate of DHS. This stripped FEMA of some of its stature and placed it within a larger and more complex bureaucratic system. An agency such as FEMA must move quickly, be nimble and flexible, and adjust readily to new threats and requirements. Placing FEMA under the bulky Department of Homeland Security, critics feared, might make the agency top heavy and slow to respond, less flexible and more bureaucratic. It was not long before the critics were proven right.
President George W. Bush, turning away from the Clinton model of appointing experienced disaster-relief specialists to head FEMA, instead began to appoint a series of less-qualified outsiders, not always well versed in disaster-relief practices. This became a political problem in August 2005 when Hurricane Katrina struck New Orleans and the broader Gulf Coast area, leaving in its wake massive flooding and devastation. Bush's FEMA director, Michael D. Brown, ineptly handled the Katrina disaster, compounding the already devastating effects of the hurricane. In spite of Brown's inept handling of the Katrina disaster, it should also be noted that before FEMA was placed under the control of DHS, Brown did issue a warning that putting FEMA within DHS would "fundamentally sever FEMA from its core functions," "shatter agency morale," and "break longstanding, effective, and tested relationships with states and first responder stakeholders." He further warned of the likelihood of "an ineffective and uncoordinated response" to a terrorist attack or to a natural disaster. He was right. FEMA's response to Hurricane Katrina was a disaster. Brown was relieved of operational control and shortly thereafter resigned. See also bureaucracy; executive agencies. Further Reading Anderson, C.V. The Federal Emergency Management Agency (FEMA). Hauppauge, N.Y.: Nova Science Publications, Inc., 2003; Brinkley, Douglas. The Great Deluge: Hurricane Katrina, New Orleans, and the Mississippi Gulf Coast. New York: William Morrow, 2006; Cooper, Christopher, and Robert Block. Disaster: Hurricane Katrina and the Failure of Homeland Security. New York: Times Books, 2006. —Michael A. Genovese
Federal Energy Regulatory Commission The Federal Energy Regulatory Commission (FERC), an independent regulatory agency within the U.S. Department of Energy, was created by an act of Congress, the Department of Energy Organization Act (Public Law 95–91), on August 4, 1977. The commission, which replaced the Federal Power Commission (established by the Federal Water Power Act, 41 Stat. 1063, on June 10, 1920), began formal opera-
tions on October 1, 1977. The agency's mandate was to determine whether wholesale electricity prices were unjust and unreasonable and, if so, to regulate pricing and order refunds for overcharges to utility customers. According to the FERC's Strategic Plan for Fiscal Years 2006 through 2011, the mission of the agency today is to "regulate and oversee energy industries in the economic, environmental, and safety interests of the American public." Its functions include regulating the rates and charges for interstate transportation and sale of natural gas, transmission and sale of electricity, and transportation and sale of oil through pipelines. The agency also reviews and authorizes hydroelectric power projects, liquefied natural gas (LNG) terminals, and interstate natural-gas pipelines. The Energy Policy Act of 2005 (Public Law 109–58) gave the FERC additional authority for ensuring the reliability of high-voltage interstate transmission systems (which followed the 2003 North American blackout that left more than 50 million people in the Northeastern United States and central Canada without electricity); for monitoring and investigating energy markets (as a result of the California Energy Crisis of 2000–01, in which Enron and Reliant Energy manipulated the market to drive up electricity costs); and for imposing civil penalties against those who violate its rules. As an independent agency, the FERC does not have its decisions reviewed by the president or the Congress. Instead, decisions of the body are subject to review by the federal courts. The agency is also self-financed, as the Federal Energy Regulatory Commission's funding comes from recovering costs from the industries that it regulates through fees and annual charges. As a result, the FERC operates at no cost to the U.S. taxpayer. While the present commission was established in 1977, federal involvement in the field reaches back to the presidency of Woodrow Wilson. The Federal Power Commission (FPC) was created by the Federal Water Power Act of 1920 (41 Stat. 1063), which provided for the licensing by the FPC of hydroelectric projects on U.S. government land or navigable waters. The commission was under the joint authority of the secretaries of war, agriculture, and the interior. In 1928, Congress appropriated funds for the commission to hire its own staff. Two years later, in 1930,
Congress shifted authority for the commission from the three cabinet secretaries to a five-member commission nominated by the president (and subject to the advice and consent of the Senate) with the stipulation that no more than three of the commissioners could belong to the same political party. In 1934, Congress directed the FPC to conduct a national survey of electricity rates. In 1935, the Federal Power Commission's role was expanded by the Federal Power Act (49 Stat. 803), which gave the FPC the mandate to ensure that electricity rates were "reasonable, nondiscriminatory and just to the consumer." The agency was given jurisdiction over interstate and wholesale transactions and transmission of electric power. One of its initiatives was the integration of local utilities into regional systems to improve efficiency. The Natural Gas Act of 1938 (52 Stat. 821) gave the FPC jurisdiction over the natural-gas pipeline industry, as the commission was charged with regulating the sale and transportation of natural gas. This included the licensing of gas pipelines and approving the location of onshore liquefied natural gas (LNG) facilities. The legislation was prompted by a growing concern, raised in a 1935 Federal Trade Commission report, about the control over the natural-gas market being exercised by the 11 interstate pipeline companies. The commission was given the authority to set "just and reasonable rates" for the transmission or sale of natural gas in interstate commerce. In 1940, Congress gave the commission the authority to certify and regulate natural-gas facilities. The commission's authority to set rates for gas crossing state lines was confirmed by the U.S. Supreme Court in 1954 in Phillips Petroleum Co. v. Wisconsin (347 U.S. 672), where the Court held that the FPC had jurisdiction over facilities producing natural gas that was sold in interstate commerce. In a 1964 case, FPC v. SoCal Edison (376 U.S. 205), the Court decided that the FPC had jurisdiction over intrastate sales of electric power that had been transmitted across state lines. In 1976, in an effort to stimulate natural gas production, the Federal Power Commission tripled the ceiling price for newly discovered gas. Then, the FPC was abolished in 1977, and its responsibilities were transferred to the new FERC. In 1978, Congress
enacted the National Energy Act, which included the Natural Gas Policy Act (Public Law 95–621). This legislation gave the new agency additional responsibility by unifying the intrastate and interstate natural gas markets and placing them under the FERC's authority. The law also provided for the gradual deregulation of the price of natural gas, with all controls being removed by 2000. The 1980s saw a movement toward deregulation of the natural-gas industry. With encouragement from the Reagan administration, the commission made a number of rule changes that resulted in the scaling back of regulation by the FERC of prices and marketing practices. Notably, FERC Order 436 required that natural-gas pipelines provide open access, allowing consumers to negotiate natural-gas prices with producers while separately contracting for the transportation of natural gas. Under the Natural Gas Wellhead Decontrol Act of 1989 (Public Law 101–60), the FERC ceased regulating wellhead prices on January 1, 1993, accelerating the date for decontrol established in the Natural Gas Policy Act of 1978. In 1992, Congress passed the Energy Policy Act (Public Law 102–486), which required utilities to open their transmission lines to all sellers of electricity and mandated that the FERC promulgate regulations to facilitate "open access." In 1996, the FERC issued Orders 888 and 889, which provided for the opening of electric transmission lines on a nondiscriminatory basis to generators of electric power. By giving utilities greater discretion in choosing the generators of the power that they sold to their customers, these orders paved the way for a number of states to deregulate electricity rates. In 1996, California became the first state to deregulate the retail sale of electric power. The commission investigated the California Energy Crisis of 2000–01, concluding that the crisis was caused by energy-market manipulation by Enron and other energy companies. As a result of the investigation, the commission collected more than $6.3 billion from energy companies. The Energy Policy Act of 2005 (Public Law 109–58) gave additional jurisdiction to the FERC. The agency was given authority to establish rules to prevent manipulation of electric and gas markets; to establish and enforce electric reliability standards; to
review the acquisition and transfer of electric generating facilities (as well as mergers); and to provide greater price transparency in electric and gas markets. In January 2007, the commission imposed its first penalties under the Energy Policy Act, assessing civil penalties totaling $22.5 million against five firms. The Federal Energy Regulatory Commission's present priorities include promoting the establishment of Regional Transmission Organizations (RTOs) and Independent System Operators (ISOs) to ensure access to the electric transmission grid. The commission is also working with the Canadian federal government to facilitate the construction of an Alaskan Natural Gas Pipeline, which will bring natural gas from Alaska south to the lower 48 states. The commission is presently (March 2007) chaired by Joseph T. Kelliher (appointed 2003). The other members of the commission are Suedeen G. Kelly (appointed 2004), Phillip D. Moeller (appointed 2006), Marc Spitzer (appointed 2006), and Jon Wellinghoff (appointed 2006). The commissioners serve five-year terms, and the commission chair is designated by the president. The Federal Energy Regulatory Commission is organized into nine functional offices: the Office of Administrative Law Judges; the Office of the Secretary; the Office of External Affairs; the Office of Administrative Litigation; the Office of the Executive Director; the Office of the General Counsel; the Office of Market Oversight and Investigations; the Office of Markets, Tariffs and Rates; and the Office of Energy Projects. The commission's headquarters is in Washington, D.C., and there are five regional offices, located in New York, Atlanta, Chicago, Portland (Oregon), and San Francisco. As of September 30, 2005, the commission employed 1,215 people and had a budget of $210 million. See also bureaucracy; executive agencies. Further Reading Breyer, Stephen G., and Paul W. MacAvoy. Energy Regulation by the Federal Power Commission. Washington, D.C.: The Brookings Institution, 1974; Enholm, Gregory B., and J. Robert Malko, eds. Reinventing Electric Utility Regulation. Vienna, Va.: Public Utilities Reports, Inc., 1995; Lambert, Jeremiah D. Energy
Companies and Market Reform: How Deregulation Went Wrong. Tulsa, Okla.: PennWell Books, 2006; McGraw, James H. FERC: Federal Energy Regulatory Commission. Chicago: American Bar Association, Section of Environment, Energy and Resources, 2003; Small, Michael E. A Guide to FERC Regulation and Ratemaking of Electric Utilities and Other Power Suppliers. 3rd ed. Washington, D.C.: Edison Electric Institute, 1994. —Jeffrey Kraus
Federal Reserve System The Federal Reserve System, or the Fed as it is known, is a form of central bank for the United States. The first U.S. financial institution with responsibilities similar to a central bank was the First Bank of the United States. Chartered in 1791, this bank was the brainchild of Alexander Hamilton. The Second Bank of the United States was chartered in 1816. President Andrew Jackson, believing the Bank to be the tool of elite financial interests, fought to eliminate it and successfully vetoed its recharter. From that time until the 1860s, a form of free banking system—what we today would call "Cowboy Capitalism"—dominated the financial world in the United States. From the 1860s until 1913, a system of private national banks dominated the market, but after a series of financial panics and depressions, bank failures and bankruptcies, the need for a more regulated system became obvious. In 1913, the Congress passed the Owen-Glass Act. President Woodrow Wilson signed it into law on December 23, 1913. The act, known as the Federal Reserve Act, was intended, as the act states, to "provide for the establishment of Federal reserve banks, to furnish an elastic currency, to afford means of rediscounting commercial paper, to establish a more effective supervision of banking in the United States, and for other purposes." The Fed is an independent regulatory institution. As an independent institution, it is largely immune from the normal political pressures to respond to popular or presidential wishes. It is responsible for regulating the money supply through monetary policy (fiscal policy—taxing and spending—is controlled by the president and Congress). By controlling monetary
policy, the Fed exerts its influence over the economy; it is especially conscious of inflation, and between 1980 and 2005, its major goal was to limit the money supply and control inflation. The Fed also controls the "discount rate," the rate of interest at which banks and savings and loan institutions can borrow short term from the Federal Reserve district banks. The Federal Open Market Committee of the Fed deals with monetary growth and interest-rate targets. The Fed also supervises and regulates banking institutions and attempts to maintain the stability of the nation's financial system. The Federal Reserve Board has seven members. They are appointed by the president with the advice and consent of the Senate. They serve a 14-year term. The chairman of the Fed is appointed by the president from among the board and serves a four-year term. The chair can be reappointed. The Federal Reserve System is, technically speaking, immune from direct political pressure, but there is a great deal of interaction, and presidents attempt to influence the policies of the Fed by what is referred to as "jawboning," a combination of private and public persuasion. Given the fact that presidents are held responsible for the overall state of the U.S. economy, it should come as no surprise that they take a keen interest in what the Fed does. A rise in interest rates, a fluctuation in the rate of inflation, or a slip in the supply of money can all have a significant impact on perceptions of the overall state of the economy, and if things take a downward turn, so too does the president's popularity. The Fed was set up to be immune from the pressures of a president who may have short-term interests in mind. After all, the president must maintain his or her popularity rating, and the pulse of this popularity is taken virtually every day. Likewise, the president has bills before the Congress and elections just around the corner. Further, a president is in office only a short time—four or sometimes eight years. His or her vision is short term, but the Fed is supposed to take a long-term view, considering not what is best for the political health of a sitting president but what is best for the long-term health of the U.S. economy. The short-term interests of a president often clash with the long-term interests of the economy. In such conflicts, the independence of the Federal Reserve System is designed to give it political and legal protection from the pressures of a sitting president.
In his time as the chair of the Fed, Alan Greenspan became a well-known public figure and was seen as the financial guru who helped orchestrate the economic boom of the Bill Clinton presidential years (1993–2001). Born in New York City on March 6, 1926, Greenspan served as the Fed Chair from 1987 until his retirement in 2006. It was a long and generally successful run. During his term, he was considered not only one of the most powerful men in the United States, but in the world. His influence over U.S. economic policy gave him a wide reach over the worldwide economy as well. He held his post under four presidents and is credited with keeping inflation down and productivity up. Critics often charged that Greenspan was “inflation-phobic,” and it was true that he seemed to consider inflation the chief evil to be avoided at all costs, but Greenspan had a more holistic view of the economy than some of his critics charged. What is clear is that in his time as the Fed Chair, the United States experienced a long period of economic growth without high rates of inflation, and while Greenspan cannot be given total credit for these economic boom years, clearly some of the credit must go to him for these successes. Legislators, investors, banks, and consumers hung on Greenspan’s every word, looking for clues and hints as to what policy he might promote. Would interest rates go up or down? Would the money supply shrink or grow? Is the deficit too high? Alan Greenspan as financial guru relished his role as master of the economy and played the role with skill and political deftness. In 2006, he was replaced as Fed chair by Ben Bernanke, who was appointed to the position by President George W. Bush. See also bureaucracy; executive agencies. Further Reading Epstein, Lita, and Preston Martin. The Complete Idiot’s Guide to the Federal Reserve. New York: Alpha Books, 2003; Greider, William. Secrets of the Temple. New York: Simon and Schuster, 1987; Myer, Lawrence. A Term at the Fed: An Insider’s View. New York: HarperBusiness, 2004. —Michael A. Genovese
Federal Trade Commission The Federal Trade Commission (FTC) is an independent agency established in 1914 by the Federal Trade
Commission Act. The Federal Trade Commission’s mission is to enforce antitrust and consumer protection laws. It investigates complaints against specific companies and is responsible for civil enforcement of antitrust laws. As stated on the agency’s Web page, the FTC “deals with issues that touch the economic life of every American. It is the only federal agency with both consumer protection and competition jurisdiction in broad sectors of the economy. The FTC pursues vigorous and effective law enforcement; advances consumers’ interests by sharing its expertise with federal and state legislatures and U.S. and international government agencies; develops policy and research tools through hearings, workshops, and conferences; and creates practical and plain-language educational programs for consumers and businesses in a global marketplace with constantly changing technologies.” The Federal Trade Commission is run by a fivemember board. The commissioners are appointed by the president with the advice and consent of the Senate. No more than three of the commission members may belong to the same political party. The bulk of the work done by the Federal Trade Commission is in three of its bureaus: the Bureau of Consumer Protection (designed to protect consumers against “unfair, deceptive or fraudulent practices”), the Bureau of Competition (charged with “elimination and prevention of anticompetitive business practices”), and the Bureau of Economics (which is “responsible for advising the FTC on the economic effects of its regulation of industry as well as government regulation of competition and consumer protection generally”). The Federal Trade Commission Act that created the commission was passed along with the Clayton Anti-Trust Act and marked the first major revision of antitrust law since the passage of the Sherman Anti-Trust Act of 1890. When Theodore Roosevelt became president in 1901, he was especially interested in monopolies and trusts, but Roosevelt did not object to all monopolies. In fact, he opposed the absolutist application of the Sherman Anti-Trust Act, and sought to redefine national policy toward trusts. Roosevelt wanted to distinguish between what he believed were good and bad trusts. He viewed himself as a progressive who recognized that the emergence of large-scale corporate organiza-
tions was inevitable and in some ways a good thing. Roosevelt merely wished to create a more open environment in which federal regulation ensured that the public interest was protected. He also wished to center enforcement of antitrust policy within the executive branch and not with the courts. The FTC’s predecessor, the Bureau of Corporations, was created in 1903 by legislation that Roosevelt had sought. Roosevelt’s successor, William Howard Taft was more comfortable with the judicial role in antitrust policy and became very active in trust busting. When Theodore Roosevelt decided to challenge Taft for the presidency in 1912, the door was opened for a third candidate, the Democrat Woodrow Wilson, to emerge as the frontrunner. When Wilson became president in 1913, he continued to pursue antitrust activities but with a new twist. By the time Wilson became president, antitrust law was in a state of confusion and was desperately in need of coordination. President Wilson attempted to answer the call, and this led to the passing of the Federal Trade Commission Act in 1914. This act is considered one of President Wilson’s key domestic reforms. Thus, the Federal Trade Commission was born. In the decades since the FTC’s formation, Congress has at various times passed other influential legislation that has guided the policies and oversight functions of the FTC. For example, in 1938, Congress passed the Wheeler–Lea Amendment which included a broad prohibition against “unfair and deceptive acts or practices.” In 1975, Congress passed the Magnuson–Moss Act which gave the FTC the authority to adopt trade regulation rules that define unfair or deceptive acts in particular industries. In recent decades, the FTC also has been directed to administer a wide variety of other consumer protection laws, including the Telemarketing Sales Rule, the Pay-Per-Call Rule and the Equal Credit Opportunity Act. During the late 1970s and 1980s, Congress experienced a more conservative trend toward the issue of consumer protection. Congress used the appropriations process to prohibit the FTC from spending funds on certain investigations that some members of Congress, lobbied by business interests, had opposed. However, as with other executive-branch agencies, oversight and control by Congress over certain practices is difficult at
best due to the complex nature of the policy implementation process. Today the FTC serves the consumer by promoting fair trade and consumer protection. It also promotes free and fair business competition and promotes U.S. business interests at home and abroad. While its work may not gain headlines, it remains an important tool for the U.S. government to promote economic development and growth. Consumers who want to make a complaint about fraudulent business activities can now do so by filling out an online form. The FTC Consumer Complaint Form, which can be accessed at https://rn.ftc.gov/pls/dod/wsolcq$.startup ?Z_ORG_CODE=PU01, allows citizens to make a complaint to the Bureau of Consumer Protection about a particular company or organization, and it can also be used by citizens to make a complaint about media violence. The form includes the following disclaimer: “While the FTC does not resolve individual consumer problems, your complaint helps us investigate fraud, and can lead to law enforcement action.” In 2006, the FTC released its Top Ten Consumer Fraud Complaint Categories. They include: Internet auctions (12 percent), foreign money offers (8 percent), shop-at-home/catalog sales (8 percent), prizes/ sweepstakes and lotteries (7 percent), Internet services and computer complaints (5 percent), business opportunities and work-at-home plans (2 percent), advance-fee loans and credit protection (2 percent), telephone services (2 percent), and other complaints (17 percent). In addition, the FTC has devoted many resources in recent years to deterring identity theft. As part of this effort, the FTC maintains a Web page (www.consumer.gov/idtheft) that serves as a one-stop national resource for consumers to learn about the crime of identity theft, as well as tips on how consumers can protect themselves from identity theft and the necessary steps to take if it occurs. According to the FTC, credit-card fraud was the most common form of reported identity theft, followed by phone or utilities fraud, bank fraud, and employment fraud, and the major metropolitan areas with the highest per-capita rates of reported identity theft were Phoenix/Mesa/Scottsdale, Arizona; Las Vegas/Paradise, Nevada; and Riverside/San Bernardino/Ontario, California. See also bureaucracy; executive agencies.
Further Reading Aberbach, Joel D., and Mark A. Peterson, eds. Institutions of American Democracy: The Executive Branch. New York: Oxford University Press, 2005; Federal Trade Commission Web page. Available online. URL: http://www.ftc.gov/; Lovett, William Anthony, Alfred E. Eckes, and Richard L. Brinkman. U.S. Trade Policy: History, Theory and the WTO. Armonk, N.Y.: M.E. Sharpe, 2004; Mann, Catherine L. Is the U.S. Trade Deficit Sustainable? Washington, D.C.: Institute for International Economics, 1999. —Michael A. Genovese
findings, presidential In the world of international relations, the regime of rules and laws is still emerging and cannot be compared usefully to the more comprehensive and enforceable set of domestic laws for which a police and judicial/court system are clearly established and functioning and which can more or less control the political and legal environment within a given geographical region. In international relations, a set of rules, laws and norms is still developing, and, therefore, nation states often must “fend for themselves” and use force, bribes, the threat of force, persuasion, the nascent system of international laws, international agreements, and a host of other means to gain compliance from other political actors (states). Because this international system is still characterized as “anarchic” (that is, without agreed on and mutually binding rules and enforcement mechanisms) and often is guided by force or power, nation states sometimes employ methods that are covert, clandestine, and at variance with international norms and laws. In a democratic nation-state based on the rule of law, separation of powers, and checks and balances, however, who is to authorize such questionable activities, and what oversight mechanisms might be employed to diminish the chance that power will be abused? In a harsh world where enemies and adversaries sometimes seek to harm a nation, who is to do the admittedly dirty work of meeting force with force, covert with covert, international intrigue with intrigue? For much of the nation’s history, covert operations were under the control of the president. Such secret or covert activities are not mentioned in the
U.S. Constitution, but are sometimes needed to meet the dangers of a harsh world. Thus, the constitutional base of legitimacy for such actions is suspect. Prior to World War II, such covert or secret actions were employed sparingly, and it was the president who customarily ordered and supervised them. After World War II, as the United States emerged as the world's dominant or hegemonic power, challenged by the rise of the Soviet Union, a cold war (which meant an absence of actual military engagement) between the United States and the Soviet Union developed. In this cold war atmosphere, covert activities often replaced outright war in the competition between the two rival superpowers. The first president to rely heavily on covert operations in the cold war competition with the Soviet Union was Dwight D. Eisenhower. A former general, Eisenhower knew that war between the United States and the Soviet Union was out of the question. Nuclear weapons made war too costly, and even the winner of such a contest would be devastated. Thus, war by other means emerged. Surrogate wars, such as those in Korea, Vietnam, and Afghanistan, occurred, and the two sides relied more heavily on covert or secret activities in efforts to keep the cold war cold, yet not give in to the pressures or advances of the rival. Covert operations were limited, less overt and threatening, and sometimes had the intended effect (and sometimes not). President John F. Kennedy continued the Eisenhower tradition of using covert activities in the U.S.–Soviet rivalry, but his successors, Lyndon Johnson and Richard Nixon, began to use these covert activities in a more pronounced and dangerous manner. President Nixon even began to use some of these methods—which had been reserved for foreign enemies and conducted on foreign soil—within the United States against his political rivals (whom he often referred to as "enemies" and against whom the administration made up an "enemies list"). When some of these activities came to light, critics charged that presidential power had become "imperial" and that presidents were abusing power. The twin impacts of the war in Vietnam and the Watergate scandal brought these charges to a head and resulted in a reaction, a backlash against the presidency and presidential power.
The Congress, primarily through the efforts of what became known as the Church Committee (named after Democratic U.S. Senator Frank Church of Idaho), held hearings and exposed a series of abuses of covert operations carried out by the CIA under the direction of the president. A series of reforms were instituted, designed to bring the covert operations of the federal government under congressional or democratic control, to limit the abuses that had occurred, and to make the president more accountable to the Intelligence Committees of Congress. Congress recognized that it was the president who would be, in effect, in control of covert operations—there seemed no other mechanism for the use of covert operations. It further recognized that in time, given the set of precedents that had been established, covert operations had emerged primarily as the president's responsibility. As head of the executive branch, which includes the Defense and State Departments as well as the Central Intelligence Agency and the National Security Agency, among others, the president was seen as responsible for authorizing and implementing covert and secret operations in other countries. The success of such missions is often predicated on their being kept secret, but such covert activities undermine the very concept of checks and balances, so vital to a fully functioning separation-of-powers system. So how can one give presidents the authority they need to meet the demands of a sometimes hostile world, while still maintaining accountability? The method that has evolved is the presidential finding. A presidential finding is a written directive issued by the president. Similar to an executive order, a finding usually is restricted to the foreign-policy field and often is related to the authorization of covert and/or illegal activities, authorized and approved by the president for national-security reasons. Presidential findings are usually classified. Such extralegal or extraconstitutional decrees tend to be highly sensitive and top secret. The first known use of a presidential finding is traced back to the Agricultural Trade Development and Assistance Act of 1954, which regulated sales of agricultural commodities. This finding was published in the Federal Register. But in modern usage, a finding is not published in the Federal Register and is designed to remain secret.
The modern understanding of a presidential finding can be traced to the Hughes–Ryan amendment to the Foreign Assistance Act of 1974. This amendment, designed to control what some believed to be the out-of-control use of the Central Intelligence Agency (CIA) by presidents for covert activities, prohibited the expenditure of congressionally appropriated funds by the CIA "unless and until the President finds that each such operation is important to the national security of the United States and reports in a timely fashion, a description and scope of such operation to the appropriate committees of Congress" (see Section 662 of the act). Thus, the finding was designed both to give the president enough flexibility to deal with complex national-security issues and to maintain the checks-and-balances mechanism so vital to a functioning separation-of-powers system. The president could authorize extralegal activities, but the Congress had to be informed. The Intelligence Authorization Act of 1991 established greater flexibility for presidential reporting of findings, which were to be "reported to the intelligence committees as soon as possible" after the presidential authorization but "before the initiation of the covert action authorized by the finding." However, there are still problems with reporting and oversight, problems that became more pronounced as the war against terrorism proceeded. Congress is still searching for ways to increase the national security of the United States, while also guaranteeing proper oversight and control of covert activities by the executive branch. Further Reading Inderfurth, Karl F., and Loch K. Johnson. Fateful Decisions: Inside the National Security Council. New York: Oxford University Press, 2004; Johnson, Loch K. America's Secret Power: The CIA in a Democratic Society. New York: Oxford University Press, 1991. —Michael A. Genovese
first ladies Historically, first ladies have been presidential confidantes and advisers. They have been surrogates in presidential campaigns and emissaries during presidential terms; speechwriters, editors, and coaches;
and participants in decisions relating to appointments, policies, and legislative relations. Their contributions have varied from administration to administration—and term to term—but first ladies have consistently been intensely political and partisan. Yet, there is little understanding of this post or these women, especially during the modern presidency. Admittedly, there are hidden complexities. The first lady's post is, in its origins, a cultural construct. U.S. voters have selected men, preponderantly married men, as presidents. As gender scholars have repeatedly noted, masculinity, heterosexuality, and leadership have been tightly interwoven throughout the history of the United States. Political wives—including first ladies—have had their lives shaped by demands that lead elected officials virtually to abdicate their family responsibilities and by the manners and mores that govern political campaigns and communications. Supplementing these informal factors have been changes in statutory and case law, relating to both campaigning and governing. During the modern presidency, this mixture of influences has become increasingly volatile. To date, the law governing the first lady has passed through four stages. Initially, law seemed essentially irrelevant. Because the first lady was identified simply as the president's wife, her participation in politics and governance was judged by standards of what was appropriate for a woman married to the president. When President Franklin D. Roosevelt appointed his wife, Eleanor Roosevelt, the nonsalaried assistant director of the Office of Civilian Defense (OCD) in 1941, she stressed her credentials and her desire to contribute to the war effort. Though her marriage to the president had made her the first lady and thus a public figure, she sought to sever the connection between that post and this appointment. Subsequent congressional debates relating to the first lady's OCD work were gendered and partisan but not jurisprudential. The second stage was inaugurated in 1967 with passage of the Postal Revenue and Federal Salary Act. Section 221, familiarly known as "the Bobby Kennedy law," prohibited nepotism in executive-branch employment, appointments, and promotions. The first lady could no longer receive a presidential appointment; if she was appointed, she could not receive a salary without violating the Anti-Deficiency Act of 1884. In 1977, therefore, Rosalynn
Carter could only be appointed "honorary chairman" of the President's Commission on Mental Health. In 1978, the White House Personnel Authorization Act initiated the third stage. Section 105(e) stated, "Assistance and services . . . are authorized to be provided to the spouse of the President in connection with assistance provided by such spouse to the President in the discharge of the President's duties and responsibilities. If the President does not have a spouse, such assistance and services may be provided . . . to a member of the President's family whom the President designates." For some, this law merely acknowledged the presence of the president's spouse. For others, the law recognized and supported the first lady as a presidential adviser. This disagreement continued until the Clinton administration, when it was partially resolved through litigation. In Association of American Physicians and Surgeons, Inc. et al. v. Hillary Rodham Clinton, which began the fourth stage, a majority of the U.S. Court of Appeals for the D.C. Circuit ruled that the first lady was a de facto federal official for the purposes of the Federal Advisory Committee Act. As the chair of the President's Task Force on National Health Reform, Hillary Rodham Clinton had presided over lengthy meetings with federal officials and presidential appointees. The majority opinion ensured that those meetings did not have to be opened to interest groups and that their minutes did not have to be disseminated publicly. A dissenting opinion, however, argued forcefully against any formalization of the first lady's post, maintaining that the 1978 law should be limited in its effects and the constraints imposed by the antinepotism law should be strengthened. Cumulatively, then, the law has recognized the first lady without comprehensively clarifying her status within the presidency or the executive branch. As was true for Eleanor Roosevelt, gender and partisan ideologies continue to have the greatest effect in determining the first ladies' power. Presidents' wives have typically been the effective head of household for the First Family, managed the White House as a conference center and preserved it as a national historic site, and participated in politics and governance. In doing so, they have been popularly judged as gender role models for the larger society.
As the effective head of household for the First Family, first ladies act as wives and mothers on a national stage. As wives, they are expected to facilitate their husbands' success. Mamie Eisenhower, Barbara Bush, and Laura Bush preferred to minimize public displays of their political participation, but they campaigned extensively. As mothers, first ladies are responsible for the media relations and political actions of their children, including their adult children. Before and during the presidential years, first ladies have also often had primary responsibility for family finances. Lady Bird Johnson, for example, was integral to the establishment of the Johnson media fortune, and Rosalynn Carter was a full participant in the Carter family business. As the White House manager and preservationist, the first lady must balance the mansion's dual identities. As a conference-center manager, the first lady is responsible for the scheduling and logistics of every White House event. As such, she facilitates outreach to decision makers and expresses the administration's cultural priorities. As the conservator of the White House as a national historic site, first ladies have engaged in extensive historical research and political negotiations. Jacqueline Kennedy particularly excelled in both of these roles. As first lady, she revolutionized entertainment and outreach, establishing a new standard for fine arts at the White House. She also designed and restored a number of the White House public rooms, as well as the executive residence, securing period furnishings through extensive consultations with private collectors, auction houses, museums, and governmental agencies. Familial and White House roles have drawn the first ladies into politics and policy making, but presidents' wives have often made additional and extensive contributions as presidential representatives and political entrepreneurs. Lady Bird Johnson showcased Great Society programs related to economic development, housing, tourism and travel, conservation, rural improvement, public schools and schooling, health, and urban planning. Rosalynn Carter and Hillary Rodham Clinton, as already noted, advocated on behalf of mental health and national health-care reform. First ladies have also increasingly campaigned independently in the midterm and presidential elections. Often viewed as less partisan and less divisive than their husbands, they have succeeded
in drawing support from both party loyalists and unaffiliated voters. Wife, mother, hostess, and homemaker are roles historically assigned to women. Politics and partisanship, however, have been reserved to men. Though some now perceive these undertakings as less exclusively gendered, traditional expectations endure and surface in judgments about the president's wife. If the first lady accents her contributions in the private sphere, then she is presumed to be less involved in the public sphere. Conversely, if she is a leader in the public sphere, then she is expected to be less attentive to the private sphere. Bess Truman's dedication as a wife and a mother led many to discount her contributions as a political adviser. Hillary Rodham Clinton's commitment to policy making, conversely, led many to believe that she was lacking as a wife and mother. In actuality, each first lady has set her own priorities among the roles and responsibilities that she has been assigned, working with her staff to meet demands in both the private and the public spheres. The social secretary, a post established during the Theodore Roosevelt administration, was the first federally salaried staff position created to help the president's wife successfully perform her public-sphere responsibilities. The social secretary, specifically, enabled the first lady to fulfill her responsibilities as manager of the White House, a role that has become increasingly important throughout the modern presidency. The social office was the first lady's only formal liaison to other units in the White House office through the Kennedy administration. In the Johnson administration, the first lady's press secretary became the chief of her own office, leading a staff that was distinguished by its expertise in print and broadcast media. Jacqueline Kennedy had inaugurated this post but had kept it within the social office, intending that it would function largely to protect her privacy. Lady Bird Johnson, in contrast, encouraged her press secretary to design and implement a comprehensive communications strategy. The first lady could not shift public attention away from the war in Vietnam, but her outreach did influence the domestic-policy agenda. Later in the presidential term, there was also a separate office established to focus on beautification. The social office, the office of the first lady's press secretary, and the beautification
office all served as liaisons between this first lady and officials in local, state, and national government. The next major innovation in first-lady staffing came in the Carter administration. Correspondence staff had been added during the Nixon administration, and a speechwriter had been assigned to the first lady during the Ford administration, but the formal structure had remained relatively constant: The first lady's staff had been assigned either to the social office or to her press secretary's office. Drawing on a transition report that recommended extensive reform, Rosalynn Carter created a structure with separate units for correspondence, press, projects (policy), and scheduling (later, scheduling and advance), in addition to the social office. In 1980, the first lady established a staff-director post. By the close of the Carter administration, therefore, both the president and the first lady had staff units providing specialized expertise in management, communications, correspondence, policy and politics, press, and scheduling. First ladies have subsequently refined the structure of their office without significantly altering its subunits. Staff members were first commissioned in the White House office during the Reagan administration. This was also the first administration in which a man held a senior post in the first lady's office. In the Clinton administration, even more than in the Carter administration, the first lady's policy work drew her office further into the mainstream of presidential and White House politics. Health-care task-force meetings, for example, involved the first lady's staff in consultations with staff and presidential appointees throughout the executive branch. The first lady's staff, therefore, has moved from supporting her responsibilities as the manager and conservator of the White House to facilitating and advancing her engagement in politics and policy making. The priorities of social secretaries, press secretaries, project directors, personal assistants, chiefs of staff, and others have gradually been incorporated into the post of the first lady, shaping public expectations and presidential legacies. While the first lady's post has notable continuities in its roles and responsibilities—most especially in regard to the First Family—these qualities arguably are overwhelmed by changes in the law and the political system generally, in her office and the presidency
specifically. Hillary Rodham Clinton identified Eleanor Roosevelt as her role model, but the two women existed in such different environments that the lessons of one were somewhat irrelevant to the other. Eleanor Roosevelt, after all, did not find her presidential appointment challenged in the courts. Yet Eleanor Roosevelt did encounter a great deal of popular resistance to her political and partisan work, and most modern first ladies have encountered similar controversies. With media scrutiny continuing and public outreach and communications becoming definitive tasks for the first lady, it is unlikely that these debates will lessen in the future. Instead, we should expect them to become more heated, as public conceptions of women's gender roles become more varied. If first ladies choose to acknowledge the political and partisan character of their work—or if the media draws attention to those qualities—the confrontations will become even more heated. First ladies may again be drawn into court in defense of their formal power. Gender is a fundamental aspect of human identity, and, as a widely recognized gender role model, the first lady cannot escape having her actions evaluated as explicit commentary on women's place in society. The challenge, especially for first ladies who wish to be political entrepreneurs, is to craft a communications and outreach strategy that will secure widespread and strong support for women as political and partisan actors. Further Reading Anthony, Carl Sferrazza. First Ladies. Vol. 2, The Saga of the Presidents' Wives and Their Power, 1961–1990. New York: William Morrow, 1991; Beasley, Maurine, ed. The White House Press Conferences of Eleanor Roosevelt. New York: Garland Publishing, 1983; Borrelli, MaryAnne. "Telling It Slant: Gender Roles, Power, and Narrative Style in the First Ladies' Autobiographies," 47, no. 7/8 (2002): 355–370; Burrell, Barbara. Public Opinion, The First Ladyship, and Hillary Rodham Clinton. New York: Garland Publishing, 1997; Caroli, Betty Boyd. First Ladies. Expanded ed. New York: Oxford University Press, 1995; Duerst-Lahti, Georgia. "Reconceiving Theories of Power: Consequences of Masculinism in the Executive Branch." In The Other Elites: Women, Politics, and Power in the Executive Branch,
edited by MaryAnne Borrelli and Janet M. Martin. Boulder, Colo.: Rienner, 1997; Gutin, Myra. The President's Partner: The First Lady in the Twentieth Century. New York: Greenwood Press, 1989; O'Connor, Karen, Bernadette Nye, and Laura van Assendelft. "Wives in the White House: The Political Influence of First Ladies." Presidential Studies Quarterly 26, no. 3 (1996): 835–853; various files and records relating to the first lady and the first lady's staff at the Herbert Hoover, Franklin D. Roosevelt, Harry S. Truman, Dwight D. Eisenhower, John F. Kennedy, Lyndon B. Johnson, Gerald R. Ford, Jimmy Carter, Ronald Reagan, and George H. W. Bush Presidential Libraries. —MaryAnne Borrelli
foreign policy power The United States' preponderance of power, both economic and military, gives it the tools to better achieve its foreign-policy goals. Power is the ability to achieve what one wants. One can use force or coerce another into complying with one's wishes (raw or hard power) or can induce, persuade, bribe, or attract others into complying (soft or sticky power). A nation like the United States has many resources. How wisely and how well those resources are used is what spells success or failure in the foreign-policy arena. Hard power, as Harvard University international-relations specialist Joseph Nye reminds us, is measured normally first by military might and then as a function of economic strength. By both measures, the United States is a superpower. The U.S. military dwarfs the militaries of the next 15 nations combined. The nation's economy is big, innovative, vibrant, and strong. The United States has more hard power than any other nation in the world, but as valuable as hard power may be, it is not always fungible, or convertible into achieving desired outcomes, and too naked a use of hard power often produces resentment and a backlash. Economic strength allows the projection of military and other forms of power. Using the military can be draining on an economy. For example, the war in Iraq beginning in 2003 cost hundreds of billions of dollars overall. A robust economy allows a nation to absorb the costs of empire. This is especially important
when a nation pursues a largely unilateral brand of military intervention/occupation, as President George W. Bush did in Iraq. A strong economy also may allow a nation to use economic inducements to gain compliance from other nations. That is why soft or sticky power, as it is sometimes called, is often a preferable tool. Soft power is the carrot to hard power's stick. The use of coalition building, persuasion, diplomacy, and being seen as an honest broker in dispute resolution can often achieve desired results without resorting to force. The United States' cultural penetration and the things for which it stands may also be a form of foreign-policy power. This refers to creating desire or emulation. The nation's freedoms, commitment to justice, and sense of fairness are all qualities that may attract others to the United States, and other nations may wish to emulate it. The strength of the U.S. democracy, devotion to human rights, and the rule of law all serve to draw others to the United States. That is why a nation must remain constantly vigilant in upholding these high ideals and why a country damages its power when it violates them. Diplomacy is also a valuable, even indispensable tool in foreign policy. The Department of State is the government agency with primary responsibility for U.S. diplomacy, but a wide range of government officials engage in diplomatic contacts and interaction with other nations. Alliances can also be a tool in achieving foreign-policy goals. With growing interdependence, no nation can go it alone. From pollution to immigration to drugs to terrorism, no nation is an island, alone and isolated. If the United States is to make headway in solving its and the world's problems, it must act in concert with others. The wisdom of alliances can be seen in the U.S. response to the post–World War II era. As the cold war threatened the West, the United States formed an interlocking system of alliances such as the North Atlantic Treaty Organization (NATO) to share burdens, responsibility, sacrifices, and successes. These alliances can bind the United States by compelling it to act in concert with others, but they can also help the United States with burden and power sharing. The weakness of the go-it-alone, unilateral approach can be seen when President George W. Bush decided to invade Iraq without the support of the United Nations and with only Great Britain and a
few other nations behind it. When the war dragged on and on, and as the cost of that war rose dramatically, the United States was virtually alone in bearing the financial and military burden. The use of international laws and norms can also greatly advance a nation's interests. While there is no settled system of recognized international law, there is a growing web of interconnected rules, norms, and laws that comprise a nascent system of international law. The United Nations, likewise, while somewhat weak and of limited ability, can at times serve the interests of peace and security. One of the recurring temptations of a superpower is to define power too narrowly, largely in military terms. An effective strategy uses the military where necessary but knows that to rely on force means that the argument has been lost. Intelligent leaders (sometimes in short supply) know that there are multiple tools that may be used to achieve goals, and they are flexible and smart enough to know when to use what tools. Some of the tools the United States has underused are the moral strength of example and the political strength of democracy. These should be values that attract others to the United States (sticky power), serve as models for others to attempt to follow, and be beacons of light in a dark world, but for the tools to be effective, the United States must be good and do good, which is not always easy in a dangerous world. No nation can long squander resources, and all politics is about choice. A wise nation calculates national-interest equations with resources and power to determine what options are available for gaining its objectives. A wise nation gets the most bang for its buck and wastes as little as possible. In addition, a wise nation uses a variety of different resources creatively and efficiently to optimize return on political investment. How well and wisely did the United States use its foreign-policy power in the early years of the war against terrorism? Most would argue that in the war against the Taliban government in Afghanistan, the United States, by developing a multilateral coalition and winning the war quickly and easily, did use its resources well and wisely. But when the focus turned to Iraq and the United States was almost alone in its war effort, when the war dragged on and on, as
U.S. soldiers continued to die, and as it became increasingly difficult to control events on the ground, most concluded that in this case the United States did not use its power and resources wisely or well. In leaping almost blindly into war, the United States did not anticipate the consequences of its actions, and as the insurgent war morphed into a civil war, the United States was left with no exit strategy and no winning strategy. Having thought so little about the full range of consequences before beginning the war, the administration was caught by surprise when things went from bad to worse and its optimistic war plans dissolved into the chaos and fog of the messy and costly war in Iraq. Foreign policy power comes in many shapes and sizes. There is a wide variety of resources that make up the tools of a nation's foreign-policy arsenal, but having the tools may not be enough. Nations need to integrate these tools with strategies and tactics designed to increase the national security of the nation. See also evolution of presidential power. Further Reading Carter, Ralph G. Contemporary Cases in U.S. Foreign Policy: From Terrorism to Trade. Washington, D.C.: Congressional Quarterly Press, 2004; Hunt, Michael H. The Crisis in U.S. Foreign Policy. New Haven, Conn.: Yale University Press, 1996; LaFeber, Walter. The American Age: United States Foreign Policy at Home and Abroad: 1750 to the Present. New York: W.W. Norton, 1994; Nye, Joseph S., Jr. The Paradox of American Power: Why the World's Only Superpower Can't Go It Alone. New York: Oxford University Press, 2002. —Michael A. Genovese
Hundred Days Elected in 1932 amid the economic depression that started in 1929 with the crash of the stock market, President Franklin D. Roosevelt told the nation in his inaugural address that "the only thing we have to fear is fear itself." Millions of people were out of work, desperate, and homeless. There was little hope and a great deal of fear. President Roosevelt knew that he had to get the country moving again and pick up its spirit. In Europe, the worldwide depression opened
the door to the rise of fascism in Germany and Italy, and all elected governments were in jeopardy. Could the nation pull through? Would democracy survive this test? Were drastic measures necessary? Could a constitutional democracy such as the United States solve this problem, or were the more efficient methods of an authoritarian regime required? To meet the great challenges of the Great Depression, Roosevelt launched a major legislative offensive, offering bills dealing with everything from unemployment to farm subsidies, from transportation to banking reform, from relief to Social Security. Roosevelt piled bill after bill before the legislature and pushed them through the compliant Democratic Congress. In the first hundred days of his administration, Roosevelt amassed an impressive array of legislative victories: the Emergency Banking Relief Act, the Economy Act, the Civilian Conservation Corps Reconstruction Act, the Federal Emergency Relief Act, the Agricultural Adjustment Act, the Tennessee Valley Authority Act, the Federal Securities Act, the Home Owners Refinancing Act, the National Industrial Recovery Act, the Banking Act of 1933, the Emergency Railroad Transportation Act, and the Farm Credit Act, to name just a few. While the legislative victories of the first hundred days did not end the depression, these steps did give hope to millions and relief to millions more. It was the most impressive legislative showing in history, engineered by the master politician, Franklin Delano Roosevelt. But Roosevelt also had public opinion behind him, a legislative majority of his fellow Democrats on which to draw, and a genuine emergency to animate the government. He also had a number of bills already in the congressional pipeline on which to build. It was the combination of all these factors that led to the great success of the hundred days and got the nation moving again. This era spawned a "Hallowed-be-the-President" sentiment among the public and created an "FDR halo." Roosevelt began to live in both myth and legend. He was powerful, and he was on the side of the people. He was a master politician. Today, he ranks as one of the greatest presidents in history (the other two presidents ranked in the "great" category are George Washington and Abraham Lincoln). Since the days of Roosevelt, all presidents have been hounded by the FDR standard. All are measured
against the heady days of the Roosevelt Hundred Days; all are expected to "hit the ground running" as soon as they take office, to pass bold new legislation to right the wrongs of the nation, and to keep the promises of the campaign. But—and not surprisingly—no subsequent president has been able to match FDR's success. Why? First of all, FDR was a master of the art and science of politics. Few who have followed have had his sense of how to govern. Second, the prerequisite for his success was an emergency. Few presidents have had the "luxury" of governing in an emergency when the checks and balances that inhibit a president in routine times evaporate and a door to power opens up. George W. Bush experienced this when he first took office in January 2001 with a rather weak political hand to play, but after the terrorist attack of September 11, 2001, the rally-'round-the-president momentum gave him vastly increased powers. Third, few presidents have had the legislative numbers in their favor that FDR did. After winning in 1964, Lyndon Johnson had a huge majority of his Democratic Party in the Congress, and he was able to convert these numbers into significant legislative victories, but large majorities are a rarity. Fourth, public opinion is often divided on issues, parties, and presidents. Presidents rarely have the widespread backing needed to demand bold initiatives across the board. FDR had a wellspring of popular support on which he could draw. Fifth, one hundred days is a very short time. To succeed in such a small time frame requires that the legislative "idea factory" already have a great many proposals in the pipeline. It is unfair to expect presidents to meet the FDR standard because so rarely are all the pieces in place that would allow a president truly to exercise the brand of leadership with which FDR is identified. Presidents are rarely dealt a power hand comparable to what FDR had, and while it is true that his skill and political genius allowed him to get the most out of the hand that he was dealt, it is likewise true that he was dealt an excellent power hand. If FDR created the modern presidency, and if every successor is "in his shadow," it follows that all successors would be expected to achieve much in their first 100 days. But FDR came to the presidency with a high level of political opportunity: The depression weakened the old guard and created
demands for government action; FDR was a highly skilled politician who came to office with a huge majority of his own party in control of the Congress; the public demanded action by the government; and the ideas that he would eventually adopt as his own were quite often percolating up from Congress and the states. In short, everything was in place for Roosevelt to become a powerful force within the government, and he did not disappoint. If the conditions were right, so too was the man in the White House. Roosevelt was ideally suited to the task at hand. He entered the White House determined to get the United States moving again, and he was willing to take risks, try new things, and not sit back and let events run their course. He wanted to take command and move the nation. It was almost the perfect match of person to task to opportunity level. But if all the cards were in FDR's hands, the same could not be said for any of his predecessors or his successors. FDR came to power with the political stars aligned in his favor; could the same be said for his successors? Few could match the skill of Roosevelt, and fewer still had the robust level of political opportunity that FDR faced. Thus, all of his successors would be found wanting when compared to the great Roosevelt. All such comparisons are, of course, highly unfair, but they were made nonetheless, and all of Roosevelt's successors were compelled to face the first hundred days test. All of them, not surprisingly, fell far short of FDR. In effect, they were tested on the basis of standards that virtually no one could match, and when they failed the test, as was to be expected, the press, the public, and other politicians found them lacking. On taking office, each new president is measured against the FDR hundred-days model. Reporters ask: What will the new president achieve in his or her first one hundred days? Pundits evaluate the new president on how he or she matches up to Roosevelt. The public is inclined to use the FDR measuring stick and usually finds the president lacking in that FDR quality. But if it is unfair to ask other presidents to match up to the FDR standard, it is also human nature to do so. Presidents sign on for a bumpy ride, and one of the first great hurdles they encounter is the shadow of the giant, Franklin Delano Roosevelt. Usually, that shadow swallows the new president whole.
The hundred-days standard is an unfair burden to place on a newly elected president. That is why most new presidents try to lower expectations, not raise them. But as long as they are held to this high standard, they will almost certainly be found wanting, which is not a good way to start off a presidency. See also mandate; presidential leadership. Further Reading Burns, James MacGregor. Roosevelt: The Lion and the Fox. New York: Harcourt, 1956; Freidel, Frank. Franklin D. Roosevelt. Boston: Little, Brown, 1990; Leuchtenburg, William E. The FDR Years. New York: Columbia University Press, 1995. —Michael A. Genovese
impeachment Impeachment is the political process provided in the U.S. Constitution for investigating and potentially removing a specific class of officials from public office as punishment for political offenses. It has parallels to but is wholly separate and distinct from the criminal process. The two processes are similar in that each contains two steps—instituting formal charges and holding a trial to determine the validity of those charges—but that is where the similarity ends. Impeachment occurs in the two houses of Congress, while criminal trials occur in courts. Impeachment has as its only punishment the removal from national office, while punishment for violation of the criminal law may involve fines, imprisonment, or the death penalty. Impeachment and removal can be only for "Conviction of Treason, Bribery, or other high Crimes and Misdemeanors," offenses named in Article II, Section 4, of the Constitution, and these offenses can be committed only by public officers in their official capacity. Criminal convictions are for offenses defined by legislatures (Congress or state) in statutes. Impeachment is applicable only to a handful of people, as designated in Article II—"the President, Vice-President and all civil Officers of the United States"—while criminal statutes apply to all persons residing in the applicable jurisdiction, whether that is the nation or a state. Impeachment, then, is an extraordinary process that is used rarely for a very limited group of people (president, vice president, cabinet members, and federal judges) for misconduct in their official
capacities. It carries with it a very singular penalty that has direct meaning only to those in the covered offices. The Constitution designates Congress as the institution responsible for conducting this two-step impeachment process, a function that is completely separate from its legislative duties. The House determines by a majority vote whether there are impeachable charges for official misconduct, and the Senate conducts a trial on those charges, with the chief justice of the United States presiding when the president is on trial. Conviction in the Senate is by a two-thirds vote, a high hurdle imposed by the framers to indicate how difficult and rare they intended impeachment to be. The two chambers also differed in their nature, a difference that was reflected in their institutional division of responsibilities and sequence of actions in the process. The House was closer to the passions of the people and mirrored those emotional judgments,
while the Senate was more deliberative and sober and more able to judge the long-term interests of the nation at a critical moment. In identifying the grounds for impeachment, the framers wanted these to be broader than simply the commission of a criminal offense but narrower than "maladministration," the vague standard proposed by George Mason at the Constitutional Convention. James Madison feared that the "maladministration" standard was so vague that it would make a president too vulnerable to political attack and would increase the likelihood of abuse of the impeachment process. Alexander Hamilton's view in Federalist 65 seems to capture best the essence of what the framers wanted: that the misconduct by the president must be political in nature and damaging to the nation. Of more recent vintage, the House Judiciary Committee report in the impeachment inquiry of President Richard Nixon in 1974 clarified that description further by adding that "in an impeachment proceeding, a President is called to account for abusing powers that only a President possesses."
Sketch showing the U.S. Senate as a court of impeachment for the trial of Andrew Johnson (Library of Congress)
Impeachment and removal do not preclude indictment and prosecution of a president in the criminal justice system where his or her acts are also criminal offenses, but this avenue proceeds along an entirely separate track. For example, a president who faces an impeachment inquiry in the House for obstruction of justice charges might also face charges of murder in the criminal justice system. Scholars presume that impeachment precedes any possible criminal indictment and prosecution, although there is no universal agreement on this issue. No sitting president has ever been indicted for a crime, although President Richard Nixon was designated an "unindicted co-conspirator" for his role in the Watergate cover-up in 1974. Three presidents in history have been the subject of impeachment inquiries: Andrew Johnson, Richard Nixon, and Bill Clinton. Johnson and Clinton were impeached by the House but escaped conviction in the Senate and thus remained in office. Nixon faced three articles of impeachment voted by the House Judiciary Committee, but he resigned from office before a vote on the House floor occurred. The impeachment of Andrew Johnson was rooted in the politics of Reconstruction. He was a Democratic former slave owner whom Lincoln placed on the "Union" ticket as vice president in 1864. Lincoln's assassination elevated Johnson to the presidency, where he was responsible for implementing Reconstruction. He incurred the wrath of Republicans in Congress when he vetoed the Tenure of Office Act in 1867, a veto they subsequently overrode, and then fired Secretary of War Edwin M. Stanton in 1868 after the Senate refused to approve his removal, as the act required. The House voted 11 articles of impeachment against Johnson, most focusing on his violation of the Tenure of Office Act. He was acquitted in the Senate, where the vote fell one short of the two-thirds required for conviction. In 1974, Richard Nixon was the subject of an impeachment inquiry in the House for his misuse of the office to cover up the burglary of the Democratic National Committee headquarters in the Watergate complex in Washington, D.C., on June 17, 1972, by people hired by his reelection committee.
Congress began to investigate the break-in and its connection to illegal fund-raising through hearings held by the Senate Select Committee on Presidential Campaign Activities, headed by Senator Sam Ervin of North Carolina. At the same time, Archibald Cox was appointed special prosecutor to determine if any executive branch officials should be charged with crimes. When the Ervin Committee learned of the existence of an Oval Office taping system, attention quickly turned to efforts by both the Senate committee and Cox to obtain the tapes to see if they showed a White House conspiracy to obstruct the investigations. When, after a series of court battles and the firing of Cox, Nixon refused to turn over the tapes as ordered by a federal court, a House committee began its own inquiry into the Watergate cover-up and eventually started impeachment hearings. A U.S. Supreme Court decision on July 24, 1974, ordering Nixon to provide the tapes to the special prosecutor was followed three days later by the committee's adoption of the first of three articles of impeachment against Nixon: obstruction of justice, abuse of power, and contempt of Congress. With the release by the president on August 5 of a tape that revealed that he had known of the cover-up since June 23, 1972, Nixon resigned from office only four days later rather than face certain impeachment on the House floor. The effort to impeach Bill Clinton differed from the Johnson and Nixon impeachments in that it involved private, not public, behavior, and the inquiry itself started as an investigation into Clinton's financial dealings before he became president. Only through the unusual twists and turns of the law under which an independent counsel operated did the focus ultimately turn to Clinton's personal relationship with an intern in the White House. It was his unsuccessful efforts, under oath in legal proceedings, to refrain from divulging the full details of that relationship that provided the basis for his impeachment and Senate trial. He was impeached by the House on December 19, 1998, on two counts, perjury and obstruction of justice, and was acquitted in the Senate on both charges on February 12, 1999. It should come as little surprise, then, that with only three impeachment efforts against sitting presidents in the more than 200 years since the adoption of the U.S. Constitution, none have been successful.
Conventional wisdom suggests, however, that had the effort against Nixon run its full course in the House and Senate, he most likely would have been impeached and convicted. The lesson from these presidential impeachment efforts is that the burden of proof is on the accusers, who must meet an exceptionally high standard to convict and remove a president from office. Moreover, impeachments that are politically motivated and that do not have bipartisan support are doomed to fail. When Nixon's own party members informed him that they would not support him in the House, he knew that he could not survive in office. Conversely, Clinton's party stood by him throughout the process, five Republican senators even crossed party lines to vote "not guilty" at the Senate trial, and he prevailed against a deeply partisan effort to force him from office. The effort to impeach a president is long, hard, complex, and uncertain—and that is exactly how the framers wanted it to be. Federal judges are also subject to impeachment and removal, the only constitutional process for limiting life tenure on the federal bench. The procedures are the same as those used for impeachments of presidents. There have been attempts to impeach 13 judges, mostly at the district-court level, but including one unsuccessful effort to remove a Supreme Court justice, Samuel Chase, in 1805, on charges regarded by scholars as partisan and not impeachable. The fact that this effort did not succeed demonstrated a victory for the principle of an independent judiciary and an acceptance of a narrow, rather than broad, interpretation of the scope of "impeachable offenses." Seven federal judges have been convicted and removed (and not all for misconduct in office, but some for such "crimes" as "loose morals and drunkenness" [Judge John Pickering, 1804] and "support of secession and holding of Confederate office" [Judge West H. Humphreys, 1862]), still a relatively small number, considering the thousands of federal judges who have served during the last two centuries. One senator, William Blount of Tennessee, was impeached by the House in 1797 in the very first impeachment in the United States. It precipitated subsequent wrangling concerning whether senators were "civil officers" subject to impeachment and removal under Article II, but the expulsion of Blount
from the Senate following his impeachment, along with other procedural steps, mooted further efforts to proceed to a trial in the Senate. This incident appeared to resolve in the negative the question of whether senators are civil officers subject to impeachment. States also have provisions in their constitutions for impeachment of state officials, usually judges and governors. Procedures vary, with some following the federal two-step model, but others modifying the process, even allowing for recall elections (for example, Arizona). Six governors have been impeached and removed from office, many on charges of corruption. Further Reading Abraham, Henry J. The Judicial Process, 5th ed. New York: Oxford University Press, 1986; Adler, David Gray, and Nancy Kassop. "The Impeachment of Bill Clinton." In The Presidency and the Law: The Clinton Legacy, edited by David Gray Adler and Michael A. Genovese. Lawrence: University Press of Kansas, 2002; Black, Charles L. Impeachment: A Handbook. New Haven, Conn.: Yale University Press, 1974; "Constitutional Grounds for Presidential Impeachment," Committee on the Judiciary, House of Representatives, 93rd Congress, 2nd session, 1974; Gerhardt, Michael J. The Federal Impeachment Process: A Constitutional and Historical Analysis. Princeton, N.J.: Princeton University Press, 1996; Gerhardt, Michael J. "The Impeachment and Acquittal of William Jefferson Clinton." In The Clinton Scandal and the Future of American Government, edited by Mark J. Rozell and Clyde Wilcox. Washington, D.C.: Georgetown University Press, 2000; Hamilton, Alexander, James Madison, and John Jay. The Federalist Papers. New York: New American Library, 1961. —Nancy Kassop
impoundment The impoundment of funds, that is, an executive (the president) refusing to spend congressionally appropriated funds, has a long history. Almost from the beginning of the republic, presidents have, from time to time, usually for sound fiscal reasons, withheld the spending of appropriated funds when events or needs changed. The first known case of a president impounding funds occurred when Thomas Jefferson refused to
spend funds that had been appropriated to purchase gunboats to defend the Mississippi River. Jefferson claimed that since the time the Congress had appropriated the funds, the need for them had diminished and that spending the money was no longer necessary and was in fact wasteful. That was the first of many examples of the presidential impoundment of congressionally appropriated funds. Presidents sometimes withheld funds because they believed that the spending of appropriated funds was a waste of money and was no longer necessary. President Franklin D. Roosevelt impounded funds in the emergency of World War II when he announced that the funds should be diverted from domestic spending to the war effort, and President Lyndon B. Johnson threatened to impound funds for local districts that violated the Civil Rights Act. Thus, impoundment was both a fiscal tool of management and a political tool. For some presidents, impoundment became a type of "item veto" that allowed them to pick and choose programs that they believed needed to be cut. Rarely has the impoundment of funds raised political concern. As long as the withholding of funds was done for sound reasons, as long as the president informed Congress of his actions, and as long as political civility marked the impoundment of funds, few squabbled with the president and rarely did controversy ensue. Nowhere in the U.S. Constitution is the president authorized to impound funds. Presidents have sometimes claimed that it is an implied power, part of the executive power of the president. Sound management and fiscal responsibility, presidents have claimed, justify the occasional impoundment of congressionally appropriated funds. The roots of impoundment go deep in U.S. history. As mentioned above, it is believed that the first case of the presidential impoundment of congressionally authorized funds took place in 1803 when President Thomas Jefferson refused to spend $50,000 appropriated by Congress on the grounds that the spending was no longer necessary. Jefferson's claim of efficiency won the day, and in time, other presidents occasionally would impound funds, mostly for reasons of efficiency. As long as presidents acted openly and with reasons that the Congress could accept, the impoundment of funds did not become a hot political
topic. In time, most presidents recognized that impoundment was not a political tool, nor should it be used to gain the political upper hand. Presidents recognized that the power of the purse belonged to Congress and for the most part engaged in impoundment sparingly and cautiously. Due to this caution, presidents usually got their way when they impounded funds, and the Congress rarely fought back. It was deemed a tool that a chief executive, within reason, should be able to use for reasons of management, efficiency, or sound policy, but it was not seen as a vehicle for a president who had lost in the political arena to impound funds for projects to which he objected. When that line was crossed, the political protection that the president claimed as the nation's chief executive evaporated, and political and legal battles ensued. It was not until the Richard M. Nixon presidency, when the president went beyond fiscal reasoning and began to impound funds for political and policy reasons, that the presidential impoundment of funds caused concern in the Congress. President Nixon, a Republican, faced a Congress controlled by the Democrats. The Democrats passed a series of laws to which the president objected but which, for political reasons (they were popular with the public), he felt he could not afford to veto. Therefore, losing in the legislative arena, he attempted to gain in the administrative arena what he could not get from the Congress. The president began to impound funds earmarked for a variety of Democratic priorities that he did not share. Nixon impounded billions of dollars appropriated by Congress for things such as pollution control and clean water. The president claimed that the budget was out of control and that he was attempting to control spending and inflation, but the Democrats accused him of slicing domestic programs with which he disagreed in open defiance of the stated will of Congress (and by implication, the U.S. public). President Nixon clashed with the Congress over spending priorities, and when the Congress overrode a Nixon veto of a bill designed to control water pollution, the president impounded half the allotted money. He also impounded money for low-rent housing, food stamps, mass transit, medical research, and a variety of other spending programs with which he disagreed. By 1973, President Nixon had impounded
more than $20 billion in congressionally appropriated funds. Increasingly, the Congress began to fight back. One avenue followed by members of Congress was to take the president to court. While historically the courts have been reluctant to go against claims of presidential power, in the impoundment cases, it was the courts and the Congress against the president in an area that constitutionally seemed very clear. As only Congress has the power of the purse, and as Congress had passed the spending legislation through legally appropriate means, and as the president had signed into law many of the bills whose funds he was now impounding, the courts consistently ended up siding with the Congress against the president in impoundment case after impoundment case. But Congress, recognizing that it too was at fault in this fiasco, decided that, to regain some measure of control over a budget process it had all but forfeited to the presidency over time, it would have to reform the process of federal budgeting. It did so in 1974 by passing the Congressional Budget and Impoundment Control Act. This act, among other things, established two forms of legal impoundment: deferrals, in which the president delays the spending of funds; and rescissions, in which the president decides not to spend the allocated funds. The act established methods for the Congress to vote up or down on deferrals and rescissions. The question remains, however: Is impoundment of congressionally authorized funding a legitimate management and fiscal tool that the president should wield? Few would argue that the president should have no flexibility in the occasional withholding of funds for sound policy or fiscal reasons. It is when the president, as was the case during the Nixon presidency, goes too far and blatantly uses impoundment as a political weapon against opponents that the impoundment of funds becomes a problem. Yes, presidents need flexibility, and no, they should not use impoundment to win battles on the budgetary front that they have already lost on the political battlefield. Further Reading Genovese, Michael A. The Nixon Presidency: Power and Politics in Turbulent Times. New York: Greenwood, 1990; Hoff, Joan. Nixon Reconsidered. New
York: Basic Books, 1994; Small, Melvin. The Presidency of Richard Nixon. Lawrence: University Press of Kansas, 1999. —Michael A. Genovese
Interstate Commerce Commission The Interstate Commerce Commission (ICC) was created in 1887 to oversee the provisions of the Act to Regulate Commerce, signed into law by President Grover Cleveland. The ICC was originally created to oversee the nation's railroads and was the first regulatory commission in the United States. During the 19th century, the creation and placement of railroads greatly affected settlement patterns as well as the economy of much of the country. In addition, until the early 20th century with the growth of the automobile industry, railroads were also the primary way that people traveled between cities and states. Though the bulk of the railroad system in the United States was in place by the 1880s, the national government did not regulate railroad development or the rates that railroads charged customers. Without regulation, the rail lines as well as prices were set at what companies felt the market could bear. Inequalities and abuses were rampant, with prohibitively high rates common on less-frequented lines. In fact, the Populist movement of the late 19th century drew much of its appeal from farmers who were unhappy with the lack of regulation of the railroad industry and with rate exploitation. The national government eventually moved to regulate railroads following the U.S. Supreme Court's decision in Wabash v. Illinois (1886). The Wabash case, as it is commonly known, held that it is the federal government, not state governments, that regulates interstate railroads. This case gave power only to the national government to regulate interstate shipping rates. The aftermath of this case directly resulted in Congress passing the Act to Regulate Commerce to manage railroads. The act required that rates be just and reasonable and prohibited personal discrimination, undue preference or prejudice, and pooling agreements. In addition, the act stated that all rates had to be published and followed. To enforce this act, the five-member ICC was created. The members were appointed by the president
and approved by the Senate. Other regulatory boards such as the Federal Trade Commission (1914) and the Federal Communications Commission (1934) later used the model of this commission. The role of the early ICC was to hear complaints about violations of the law; if a railroad was found in violation, the ICC could order it to stop and could also assess damages. Major early problems for the ICC arose because it could impose penalties for violations but lacked the ability to enforce sanctions. Therefore, to force a railroad company into compliance, the early ICC used the federal court system to sue railroads. Originally, the ICC's members held six-year, staggered terms with membership balanced between the two major political parties. The first chairman of the ICC was Thomas M. Cooley. Cooley's approach to ICC management was to examine each situation brought before the commission on a case-by-case basis. While this method was useful in minimizing politics, it also prevented the development of a comprehensive railroad policy. Because of several court challenges and the inability to force railroads into compliance, the ICC in its early years was not a very effective governmental agency. It was not until Theodore Roosevelt's administration that the ICC was given additional power to carry out its stated mission. In fact, by the mid-1920s, the ICC had become one of the most powerful agencies within the federal government through its expansion of power during the Progressive era. It was able to garner this power through the implementation of several congressional acts. In 1903, the Elkins Act was instrumental in beginning the growth of the ICC's power. Until then, railroads would often provide a rebate to their most-prized customers. While everyone paid the published rate, large companies would demand a rebate back from the railroad. The result was that smaller companies and farmers tended to pay far more for railroad services than major corporations. Railroad companies were actually in favor of the passage of the Elkins Act because it created truly standard rates to help insulate them from the strong-arm tactics of the monopolistic trusts of the time. However, the Elkins Act did not give the ICC control over rates or even the definition of a reasonable rate. It took the passage of the Hepburn Act of 1906 to address these shortcomings.
Under the Hepburn Act, the ICC was increased from five to seven members. The jurisdiction of the ICC was also enlarged to include such common carriers as storage facilities, ferries, sleeping-car companies, terminals, oil pipelines, depots, yards, switches, and spurs. In addition, the ICC was given the power to set maximum rates by replacing existing rates if need be. Free passes were essentially forbidden, and penalties for violations were toughened. One of the more important changes was that orders from the ICC became compulsory. If a company did not agree with a ruling from the ICC, it would have to challenge it through the judicial system. Before the Hepburn Act, the railroad companies frequently used long legal delays as a way to circumvent rulings. Following this act, any lower-court appeal was immediately sent to the Supreme Court to curtail these practices. The Hepburn Act also imposed a uniform accounting system on all railroads, with standardized reports. Finally, if a railroad appealed a decision from the ICC, the burden of proof rested on the railroad, not on the ICC. This was a major shift intended to curtail legal manipulations by the railroads via the appellate courts. To enforce the Hepburn Act, the ICC was also allowed to appoint additional examiners and agents. Between 1905 and 1909, the ICC grew from a staff of 178 to 527. The Hepburn Act was instrumental in helping to make the ICC one of the most powerful agencies within the federal government. This commission became the primary agency that governed railroads and shaped their development in the United States throughout the 20th century. To help strengthen the Hepburn Act, the Mann-Elkins Act was passed in 1910. The Mann-Elkins Act was also important for shaping the ICC. Most notably, it gave the ICC jurisdiction over telegraph, telephone, and cable lines in the United States. In fact, the ICC regulated telephone service until it was turned over to the Federal Communications Commission in 1934. The Mann-Elkins Act also established an important but short-lived Commerce Court (1910–13). The Commerce Court was a federal-level trial court with jurisdiction over cases involving orders from the ICC. It was also given judicial-review powers over decisions from the ICC. If a case in the Commerce Court was appealed, it went directly to the U.S. Supreme Court. One of the most lasting outcomes of the Mann-Elkins Act was
how it helped define the status of the ICC within the federal government: the legislative branch, not the executive or judicial branches, had oversight power over the ICC. During the Woodrow Wilson administration, U.S. Supreme Court cases affirmed that the ICC could also regulate intrastate rates that discriminated against interstate commerce. President Wilson also expanded the ICC to nine members, divided into groups so that they could handle different functions. In December 1917, President Wilson took control of rail transportation in the United States as part of the war effort. Federal control of the railroads bypassed the jurisdiction of the ICC and greatly eroded its power over the railroads in this period. Through the Transportation Act of 1920, railroads were returned to their owners, and much of the power of the ICC was restored to its prewar state, though the number of commissioners grew to 11. Under this 1920 act, the ICC was given the authority to set minimum and maximum rates for railroads and to regulate intrastate and interstate railroad rates, and it increased regulation generally. Many provisions were designed to favor the functioning of the rail system as a whole over individual companies. Several railroads were to be consolidated, and car distribution fell under ICC control. Furthermore, railroad companies could no longer build new lines or remove old ones without the approval of the ICC. The Great Depression set in motion many factors that ultimately helped to undermine the ICC. The growth of road building, the loss of passenger-train traffic (to automobiles), and the explosion of the air transportation industry seriously weakened the power of rail transportation in the United States. President Franklin D. Roosevelt attempted to move the ICC into the Department of Commerce and under executive control but ultimately failed. As other forms of transportation evolved following World War II, the ICC did not adapt well. The regulation put in place by the ICC to control corporate abuses was itself seen as too controlling of industry because it did not allow rail companies to compete freely with alternate forms of transportation. Though the ICC attempted to reform itself in the 1960s and again in the 1970s, the commission waned in power as U.S. industry and the public turned to other forms of transportation at higher rates than rail service.
The safety functions of the ICC were transferred to the Department of Transportation in 1966, but the commission kept its regulatory and rate-making powers. Jimmy Carter's administration was notable for pursuing deregulation of the industries the ICC oversaw at a more aggressive pace than any previous administration. The Staggers Rail Act of 1980 removed maximum-rate regulation from almost two-thirds of all rail rates. The goals of the Staggers Act were to deregulate the industry while increasing competition in the market. By 1995, it was clear that the ICC had outlived its usefulness as an independent commission within the federal government. The Interstate Commerce Commission Termination Act of 1995 transferred most of its authority to the Surface Transportation Board (STB). The STB is an independent agency but is affiliated with the Department of Transportation. See also bureaucracy; executive agencies. Further Reading Skowronek, Stephen. Building a New American State: The Expansion of National Administrative Capacities. Cambridge: Cambridge University Press, 1982; Stone, Richard D. The Interstate Commerce Commission and the Railroad Industry: A History of Regulatory Policy. New York: Praeger, 1991. —Shannon L. Bow
Iran-contra scandal The Iran-contra scandal occurred during the presidency of Ronald Reagan. Iran-contra refers to two separate but connected scandals, each a major problem in itself, which combined to make a constitutional crisis. It led to the conviction of several high-level Reagan administration officials (some of whose convictions were vacated on technicalities); to calls for the impeachment of Ronald Reagan; and to a political scandal that proved embarrassing to the president, damaging to the national security and reputation of the United States, and an aid to terrorism worldwide. The Iran-contra scandal occurred at a time of "divided government," when the White House was controlled by one party (in this case, the Republicans) and one or both houses of Congress were controlled by the opposition Democrats. It also occurred
in a post-Vietnam, post-Watergate age of hyperpartisanship. In such an atmosphere, it became difficult for both sides to bargain and to compromise, and presidents, in an effort to set policy in a complex political environment, began to exert unilateral and unchecked power. The rule of law gave way to the political and policy preferences of presidents, and ways were often found to govern without, or even in spite of, the stated will of the Congress. The Iran-contra scandal emerged out of this cauldron of distrust and power grabbing. It was a complex double scandal that led to a constitutional crisis and deep embarrassment for the United States and nearly led to the impeachment of Ronald Reagan. The first part of the Iran-contra scandal, the Iran part, involved the U.S. government selling arms to terrorists. In the aftermath of the Iranian revolution of the late 1970s, militant forces aligned with Iran began to kidnap and hold hostage several U.S. citizens in Lebanon, among them a CIA operative. Efforts to free the hostages met with failure, and members of the Reagan administration, led by a gung-ho lieutenant colonel named Oliver North, tried to trade weapons for some of the hostages. At first, the Iranians took the weapons but refused to release any hostages. They blackmailed the U.S. government, demanding more weapons for the yet-to-be-released hostages. Amazingly, Lt. Col. Oliver North and the other Reagan officials involved caved in and sent more arms to the Iranian terrorists. This time, a hostage was released. But a day or so later, the Iranians kidnapped another hostage (thus the Keystone Kops element of this sad and tragic affair). The Reagan administration continued to sell arms to Iran—a nation officially designated by the U.S. State Department as a state sponsor of terrorism—and the Iranians released some of the hostages. But the story did not end there. The contra part of the story relates to the efforts of the Reagan administration to overthrow the government of Nicaragua. In 1979, the corrupt Somoza government fell, and a Sandinista government was established in Nicaragua. Aligned with Cuba and espousing Marxist principles, this government was seen as a threat to the region by Reagan administration officials. They were determined to overthrow the government with the aid of local forces known as the
contras. The administration attempted to convince Congress to support its actions in Nicaragua, but public opinion and congressional support were never forthcoming. Reagan, determined not to let a Marxist government form a beachhead in the hemisphere, decided to go directly against the law (the Boland Amendment, which prohibited funding or aiding the rebels) and ordered his staff to find a way to help the contras. The "way" was to illegally take the profits from the arms sales to Iran and give them to the contras. When these illegal acts came to public light (via a story in a Middle East newspaper), a great scandal ensued. The Reagan administration bungled the investigation so thoroughly that many concluded that it intentionally mishandled events so as to give those involved time to shred evidence—and shred documents they did. With the FBI waiting outside their offices, Lt. Col. North and his secretary Fawn Hall fed document after incriminating document into the shredder, and when they ran out of time, Hall hid some documents in her undergarments. Needless to say, the investigation had been corrupted, and the conspiracy to obstruct justice was a success. After a prolonged public outcry, Reagan appointed a commission (the Tower Commission) to investigate the Iran-contra scandal. Congress also investigated the affair, as did a special prosecutor. The Tower Commission concluded that Reagan's lax management style had allowed the administration's foreign policy apparatus to be hijacked. The joint congressional committee pointed to the many examples of illegal activity but could not—due to the shredding of evidence and other problems—conclusively provide evidence that the president himself was the guilty party. The special prosecutor (Lawrence E. Walsh) was preempted in the middle of his work when Reagan's successor, former Reagan vice president and active participant in the Iran-contra scandal George H. W. Bush, pardoned former Defense Secretary Caspar Weinberger shortly before Weinberger's criminal trial was to begin (Weinberger's diary entries had implicated both the president and the vice president in the crimes of the Iran-contra scandal). In the end, the "big fish" got away. A combination of constitutional crisis and Keystone Kops comedy, the Iran-contra scandal proved to be one of the most bizarre and embarrassing
incidents in modern U.S. politics. It was a constitutional crisis because of the breakdown of the rule of law and the disregard for the separation of powers, and because U.S. foreign policy was hijacked by a band of ideological extremists who worked their way into the center of the Reagan policy team. It was a Keystone Kops comedy because this band of merry and incompetent rogues—a mad colonel, his blonde bombshell of a secretary who hid top-secret documents in her clothing, a nefarious attorney general who either inadvertently or intentionally alerted possible criminal defendants that the FBI was on their tail and that they had better shred the evidence, a president who kept changing his story over and over, hoping against hope that someone (especially the special prosecutor) would believe one of his stories—provided the nation with a madcap escapade that was too bizarre to be believable and too compelling to ignore. Had President Reagan been involved in an impeachable offense? Clearly a series of crimes were committed, but who was to blame? Who was in charge? Was Reagan the knowing mastermind of the events or an ignorant bystander? Former President Richard Nixon (according to an October 5, 1987, Newsweek story) offered this answer: "Reagan will survive the Iran-contra scandal because when push comes to shove he can say, 'I was stupid.'" After a pause, Nixon added with a sly grin: "I never had that option." Reagan's defense changed constantly: At first, he claimed, "It was a neat idea" and "It was my idea"; then, after Attorney General Edwin Meese informed him that he just might have admitted under oath to having committed an impeachable offense, Reagan went back to the special prosecutor and changed his testimony to "I knew nothing about it," to which the attorney general responded that the president had just committed perjury, another potentially impeachable offense; finally, he returned to the special prosecutor claiming, "I don't remember." Reagan's "I ordered it; I didn't order it; I didn't know about it; I just don't remember" fiasco left friends and critics baffled. Perhaps that was the president's intent. In any event, Reagan escaped impeachment and, while deeply wounded, survived politically to see the end of his term in office. And yet, the Iran-contra scandal remains a stain on his presidential reputation and a refutation of the rule of law and constitutionalism in the United States. The Iran-contra scandal
demonstrated just how vulnerable the U.S. system was to unscrupulous people who were intent on having their way in spite of the law. In setting up a government outside the government, the Reagan administration attempted to impose its will on the policy agenda in direct violation of the expressed will of Congress and the law of the land. The administration placed itself outside of, and above, the rule of law. In selling arms to terrorists, it violated both the law and common sense. But in the end it was caught in the act, and, efforts to distract and cover up notwithstanding, the sordid and illegal acts of the administration were brought to light. Further Reading Draper, Theodore. A Very Thin Line: The Iran-Contra Affairs. New York: Hill & Wang, 1991; Koh, Harold Hongju. The National Security Constitution. New Haven, Conn.: Yale University Press, 1990; Walsh, Lawrence E. Firewall: The Iran-Contra Conspiracy and Cover-Up. New York: W.W. Norton, 1997. —Michael A. Genovese
Joint Chiefs of Staff The Joint Chiefs of Staff has been defined as an executive agency, composed of the chiefs of the army, the navy, and the air force and the commandant of the marine corps, that is responsible for advising the president on military questions. Similar organizations, sometimes known in the British Commonwealth as Chiefs of Staff Committees (COSCs), are common in other nations. Today, the chiefs' primary responsibility is to ensure the readiness of their respective military services. The Joint Chiefs of Staff also act in an advisory military capacity for the president of the United States and the secretary of Defense. In addition, the chairman of the Joint Chiefs of Staff acts as the chief military adviser to the president and the secretary of Defense. In this strictly advisory role, the Joint Chiefs constitute the second-highest deliberative body for military policy, after the National Security Council, which includes the president and other officials besides the chairman of the Joint Chiefs. Unfortunately, neither definition allows for a complete or, for that matter, an adequate understanding
of this complex yet critical military organization. Like most agencies within the vast Department of Defense, the Joint Chiefs of Staff is part of the U.S. government's bureaucratic decision-making process. Is the Joint Chiefs of Staff a major contributor to national-security policy or an outdated concept in need of reorganization? The answer depends on one's perspective. The U.S. version of a Joint Chiefs of Staff resulted from pressure by the British government, a U.S. ally during World War II, which wanted the United States to operate under the same decision-making processes as the British to facilitate the war effort. An Anglo-American Combined Chiefs of Staff was formed in 1941, and the first U.S. Joint Chiefs of Staff held its inaugural meeting on February 9, 1942. This temporary body was composed of the chief of naval operations, the chief of staff of the army, the commander in chief of the U.S. fleet, and the commanding general of the army air forces. In 1942, the positions of the chief of naval operations and the commander in chief of the U.S. fleet were combined, and, shortly after, a new member joined the group, the Chief of Staff to the president. Prior to World War II, the U.S. military services operated as separate and distinct systems with unique responsibilities. The Department of the Navy was responsible for the defense of the United States on the open seas and operated as a separate service. It also deployed its own infantry, the U.S. Marine Corps. The army, with the army air corps, fell under the responsibility of the War Department and focused on land and air battles. During this time, there was no Department of Defense; thus the JCS also acted as the principal avenue for coordination between the army and the navy. As a temporary organization with no legal authority or operating procedures, the JCS of World War II operated as any new organization does, having to establish its authority and its relationships both internally and externally while handling the many crises associated with involvement in a war. Anecdotally, there are many examples of poor decisions made by the first JCS due to a lack of consensus among the strong personalities and organizational cultures of the services. One worth mentioning involved a British plea to the United States for steel to support England's war effort. This request was approved by the JCS only to have the navy veto it due to concern over the navy's ship-building capacity.
Since a unanimous agreement was required, the British request was turned down. After the war, President Harry Truman took on the reorganization of the U.S. national-security apparatus as a top priority. Dissatisfied with the nation's ability to protect itself or to provide the president with adequate national-security advice, Truman proposed a strong defense establishment with an equally strong central military authority, supported by congressional legislation. President Truman's vision included a permanent Joint Chiefs of Staff. The National Security Act of 1947 established the National Military Establishment (it would be upgraded to the Department of Defense in later legislation), the Central Intelligence Agency, the National Security Council, and the Joint Chiefs of Staff. Unfortunately, the National Security Act and the origins of the modern JCS system were marred by compromise and bureaucratic infighting. The outcome was a Joint Chiefs of Staff that was a committee of top military leaders with separate requirements and motives, each with an equal vote and veto power. Members of the JCS acted less as a corporate body and more as a group of independent military advisers. This can be understood in the context of warfare at the time. The concept of joint warfare was not considered a necessary attribute for victory on the battlefield. Naval and ground military operations were considered distinct and unique, with each requiring independent resourcing, doctrine, and strategy. Consequently, each service chief provided independent advice to decision makers. It was not unusual for all the members of the Joint Chiefs of Staff to be present at meetings to represent their respective services as only they uniquely could. There were several attempts by various presidents, primarily President Dwight D. Eisenhower, to both strengthen and consolidate the Joint Chiefs of Staff's advisory role. The intent was to make the JCS more corporate while providing one voice on military advice to the president and the secretary of Defense. Despite incremental changes to the JCS concept (voting power to the chairman, the addition of the commandant of the marine corps, and so on), up until 1986 the Joint Chiefs of Staff was characterized by competing and sometimes contradictory advice to the president of the United States and the secretary of defense. The role of the Joint Chiefs of
Staff during this period, which included such major military operations as the Korean conflict, the Vietnam conflict, and the U.S. invasion of Grenada, can in hindsight be seen as inadequate, confusing, and in some cases dangerous. One only has to look at the CIA's "Bay of Pigs" operation in 1961 for a glimpse of the problems associated with the structure and authorities of the JCS. During this operation, the JCS's deliberate lack of involvement in an obvious large-scale military operation, while seemingly providing concurrence and support to the same operation, illustrates the general modus operandi of the Joint Chiefs of Staff prior to 1986. The year 1986 ushered in legislation that fundamentally changed the structure and authorities of the Joint Chiefs of Staff. Known as the Goldwater-Nichols Act, this sweeping act required the military services to place more emphasis on joint war fighting. In 1980, the United States had experienced a military tragedy known as Desert One, the failed attempt to rescue the U.S. hostages held in Tehran, Iran. The aftermath of this operation was an increase in congressional involvement in military affairs. Recognizing that the military services were operating too independently of each other, the Goldwater-Nichols Act, ironically spearheaded by outgoing members of the Joint Chiefs of Staff, gave more authority to the chairman of the Joint Chiefs of Staff, established a vice chairman of the Joint Chiefs of Staff, required that joint-focused officers in all branches be promoted at the same rate as service-focused officers within each branch, required mandatory joint education, and, most importantly, streamlined the chain of command from the combatant commanders (with emphasis on the regional or geographic commanders) to the secretary of Defense. The chairmanship of the Joint Chiefs of Staff became, in effect, a dual-hat position. First, the chairman was given a primary role as an independent senior military adviser to the president, the secretary of Defense, and the National Security Council. With this responsibility came an enlarged Joint Staff to provide the chairman with analysis and evaluation independent of the individual armed services. Second, the more traditional role as the chairman of the Joint Chiefs of Staff was enhanced to make the chairman a leader of the JCS. However, the Goldwater-Nichols Act effectively removed the JCS from the operational military chain of command.
The employment of the U.S. military and the actual fighting of the nation's battles became the responsibility of regionally focused senior military officers known as geographical combatant commanders. This arrangement allowed the JCS to focus on the responsibilities necessary to recruit, supply, and maintain the individual armed services. It was not until Operation Desert Shield/Storm in 1990–91 that the intent of the Goldwater-Nichols Act was truly realized. That operation institutionalized the act, giving the commanding general of U.S. Central Command responsibility for the prosecution of the war with Iraq. While the JCS provided the regional combatant commander with the equipment and forces necessary to fight the war, it was the regional combatant commander who was held responsible for the conduct of the war. Today, the Joint Chiefs of Staff is responsible for the readiness of the U.S. military. As the senior officers of the individual armed services, the members of the Joint Chiefs ensure that their respective forces are financed, equipped, and trained to support the national-security requirements of the United States. Further Reading Locher, James R., III. Victory on the Potomac: The Goldwater-Nichols Act Unifies the Pentagon. College Station: Texas A&M University Press, 2002; Perry, Mark. Four Stars: The Inside Story of the Forty-Year Battle Between the Joint Chiefs of Staff and America's Civilian Leaders. Boston: Houghton Mifflin, 1989; Zegart, Amy. Flawed by Design: The Evolution of the CIA, JCS, and NSC. Stanford, Calif.: Stanford University Press, 1999. —Peter J. Gustaitis II
mandate The word mandate derives from the Latin verb mandare, meaning to command or enjoin. It is commonly used to denote legal imperatives, for instance the mandate of the U.S. Constitution to uphold certain rights or the mandate of the courts to desegregate the schools. In electoral politics, mandates refer to a command or instruction from voters to elected officials. The notion of an instruction from voters to elected officials first emerged in the 18th century for members of legislatures. Mandates referred to specific instructions regarding the way a politician should vote
on a given policy or very general boundaries outside of which a politician should not stray when dealing with new issues. Beginning with Andrew Jackson, U.S. presidents also began to define themselves as representatives of the people and associating their policy agenda with the will of the people. The concept of electoral mandates engages two major theoretical debates about elections. First, should a politician be bound by the preferences of constituents or be free to use his or her own judgment? Second, are elections mechanisms of accountability or signals about popular policy preferences? Do they simply allow voters to control officials because they can vote them out of office, or do they also express the will of the people? Politicians who claim mandates are declaring themselves to be delegates of the people, bound by their preferences. They see the election results as a reflection of voters’ policy preferences, not as a simple rejection of the incumbent politician or political party. Behavioral political scientists test the validity of electoral mandates by treating them as statements of fact about fully formed and stable public preferences. They use mass public opinion surveys to determine whether voters have opinions on policies, know the candidate’s stands on the issues, and vote on the basis of the issues. The evidence indicates that many voters are ignorant of the issues and cannot accurately identify candidates’ platforms. The evidence also suggests that voters are motivated more strongly by partisanship or the personal attributes of candidates than by their policy positions. The limited role of policy in elections, however, may say more about politicians than voters. Candidates have an incentive to be ambiguous or centrist to attract a majority of votes. Likewise, it is often advantageous for candidates to emphasize their personal qualities or use the election as a referendum on the past performance of the parties. Thus, mandates are viewed with skepticism because they rest on a story about voter intentions and campaign behavior that is not consistent with empirical evidence about voters and elections. Social-choice theorists also deny the plausibility of electoral mandates to the extent that a mandate implies a unique public preference for a specific policy. Beginning with Arrow’s theorem, scholars have proved that no method of aggregating individual preferences can both satisfy our basic criteria for fairness
and also generate a transitive social-preference ordering. Methods of aggregation, including majority rule, can yield cyclic or intransitive social orderings. Without a unique social choice, the notion of a single “public interest” is suspect. Moreover, for any given set of preferences in an electorate, different methods of aggregation, for instance plurality rule versus a Borda count, will yield different outcomes. As with sporting events, the outcome depends on how one scores the game. Other scholars have examined mandates from the point of view of politicians themselves. According to this theory of mandates, elections do constrain politicians by signaling the boundaries of public opinion. Though they do not have the individual-level survey data available to scholars, politicians act as if it is meaningful to think of policies favored by majorities of voters. They pay attention to election outcomes and believe that elections provide useful information. For instance, the interpretation of elections affects how members of Congress construct their policy agendas, committee preferences, staffs, and provisions of constituency services. Politicians make inferences about elections by relying on evidence such as patterns of party support across demographic groups, the magnitude of victory, and comparisons with elections past. For example, according to this theory, a president claims a mandate when the election signals strong public support for his or her agenda or when doing battle with Congress will shift policy outcomes closer to his or her ideal point. The signal from the electorate is strongest for those who win landslides and upset the received wisdom about groups of voters and their party loyalties. When a president believes that he or she can mobilize a majority of voters, he or she will declare a mandate, and Congress will go along; nobody wants to be on the wrong side of a winning issue. These elections tend to be landslides (large in terms of the magnitude of victory), to be national in character (the president’s party gains and controls Congress), and follow campaigns oriented around sharp differences between the parties on policy issues. Examples are Lyndon Johnson in 1964 and Ronald Reagan in 1980. In some cases, the president’s public backing is weaker, so the decision to declare a mandate depends on partisan and ideological support in Congress as
well as popular support. A president with policy preferences in between those of members of Congress and the status quo has nothing to lose by placing a policy change on the national agenda. He or she knows that he or she will encounter opposition, but the outcome will be closer to his or her own preferences. In 1992, for instance, Bill Clinton won only 43 percent of the popular vote, but Democrats controlled both the House and the Senate. The public opinion polls—and independent candidate H. Ross Perot's voters—made it clear that jobs and deficit reduction were the major issues about which voters cared. Clinton had every incentive to claim a mandate and to push his economic plan. Mandates are also frequently claimed by presidents and prime ministers in parliamentary systems. The U.S. political system differs from most other governments in industrialized democracies because the executive and legislative powers are separated. Most other democracies follow a parliamentary model that places executive power in a head of government (the prime minister) who also heads the legislature (the Parliament). Great Britain is one example. The British prime minister is selected from Parliament by the political party that wins the most seats. The prime minister appoints a cabinet whose members also belong to his or her political party in Parliament. Thus, the prime minister is the leader of his or her party and of the government. British political parties offer clear policy alternatives, and the centralized nature of parliamentary government ensures that the majority party will be equipped to carry out its policy agenda. In the context of such strong political parties, large electoral victories are readily perceived as indicating that voters endorse a particular policy agenda. If mandate claims are meaningless because of the informational shortcomings of voters and the logical problems of identifying collective preferences, then politicians are either blowing hot air or duping the public for their own selfish ends. Some view mandates as "myths" that give politicians too much power and legitimate a policy agenda at odds with the true preferences of the public. On the other hand, politicians never have perfect information about their constituents. They have an incentive to be accurate in their appraisal of public opinion because they care about their reputation and they want to be reelected. Competition between candidates and
political parties ensures that false mandate claims will be contested. President-elect John F. Kennedy, tired of questions about his narrow margin of victory, joked with reporters in 1960: "Mandate, schmandate, the mandate is that I'm here and you're not." In the aftermath of elections, politicians and members of the press try to figure out what the outcome signifies. What is unclear is whether mandate rhetoric is used to empower the public or to fool them. See also presidential elections; presidential leadership. Further Reading Conley, Patricia. Presidential Mandates: How Elections Shape the National Agenda. Chicago: University of Chicago Press, 2001; Dahl, Robert. "The Myth of the Presidential Mandate," Political Science Quarterly 105 (1990): 355–372; Ellis, Richard, and Stephen Kirk. "Presidential Mandates in the Nineteenth Century: Conceptual Change and Institutional Development," Studies in American Political Development 9, no. 1 (1995): 117–186; Fenno, Richard. "Adjusting to the U.S. Senate." In Congress and Policy Change, edited by Gerald Wright et al. New York: Agathon Press, 1986; Hershey, Marjorie. "Campaign Learning, Congressional Behavior, and Policy Change." In Congress and Policy Change, edited by Gerald Wright et al. New York: Agathon Press, 1986; Kelley, Stanley. Interpreting Elections. Princeton, N.J.: Princeton University Press, 1983; Lowi, Theodore J. "Presidential Democracy in America," Political Science Quarterly 109 (1994): 401–438; Riker, William. Liberalism Against Populism. San Francisco: W.H. Freeman, 1982. —Patricia Conley
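The aggregation problem discussed in the mandate entry above, namely that plurality rule and a Borda count can crown different winners from the very same ballots, can be made concrete with a small worked example. The following Python sketch is purely illustrative: the candidates, preference rankings, vote totals, and helper functions are invented for this illustration and are not drawn from the entry or its sources.

from collections import Counter

# A hypothetical electorate of 100 voters. Each entry pairs a strict
# preference ranking (best candidate first) with the number of voters
# holding that ranking. All figures are invented for illustration.
profile = [
    (("A", "B", "C"), 40),
    (("B", "C", "A"), 35),
    (("C", "B", "A"), 25),
]

def plurality_winner(profile):
    # The winner is the candidate ranked first on the most ballots.
    tally = Counter()
    for ranking, voters in profile:
        tally[ranking[0]] += voters
    return tally.most_common(1)[0][0]

def borda_winner(profile):
    # Each ballot awards k-1 points to its first choice, k-2 to its
    # second, and so on down to 0, where k is the number of candidates.
    scores = Counter()
    for ranking, voters in profile:
        k = len(ranking)
        for place, candidate in enumerate(ranking):
            scores[candidate] += voters * (k - 1 - place)
    return scores.most_common(1)[0][0]

print(plurality_winner(profile))  # "A" wins with 40 first-place votes
print(borda_winner(profile))      # "B" wins with 135 points, against 80 for A and 85 for C

In this invented profile, plurality rule selects candidate A while the Borda count selects candidate B, echoing the entry's observation that, as with sporting events, the outcome depends on how one scores the game.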
National Labor Relations Board The National Labor Relations Board (NLRB) is an independent federal agency created under the National Labor Relations Act (NLRA) of 1935. Its primary directive is to administer and enforce the provisions of that act—that is, to regulate collective bargaining in the private sector. Though technically part of the executive branch, the NLRB exists independently of other executive departments. Independent federal agencies are created through statutes
passed by Congress. The NLRB’s rules and regulations are equal to that of federal law. The NLRB has two primary functions that are explicitly defined. The first is the determination of whether employees wish to organize and to be represented by a union through an election process and, if so, by which one. This in effect gives workers a say in their work lives and in their economic futures. The other function is to prevent, investigate, and remedy unfair labor practices by either employers or unions. The NLRB, as currently organized, has two major bodies—the five-member governing board and the office of the general counsel. Governing board members and the general counsel are appointed by the president and confirmed by the Senate. The board by definition is a quasi-judicial body that hears and exercises jurisdiction over labor issues. The general counsel’s primary duties are to investigate and prosecute cases of unfair labor practices by industry or labor. With limited exceptions, the NLRB has jurisdiction over private-sector employers. The board also has authority in Puerto Rico and American Samoa. The NLRB’s predecessor was the National Labor Board (NLB). The NLB was created on August 5, 1933, to facilitate the resolution of labor issues that developed under the National Industrial Recovery Act (NIRA) of 1933. The NLB was a seven-member board. Three of the members represented labor and three represented industry, and it was chaired by U.S. Senator Robert F. Wagner (D-NY). The NLB was created by the administrator of the National Recovery Administration (NRA), General Hugh S. Johnson. Because the NLB was not created as a result of legislation or executive order, it lacked any real power or authority. The NIRA, a response to the Great Depression, explicitly granted the right of laborers to form unions and to seek redress of their grievances. These rights were commonly referred to as Section 7a, as that was the section of the NIRA in which they were contained. The act also required industry to engage in negotiations with those unions. It is important to note that the NIRA did not deal exclusively with unions and labor issues, but it also dealt with issues of interstate and intrastate commerce to speed the economic recovery of the United States. In 1935, the U.S. Supreme Court, in Schechter Poultry Corp. v. United States (295 U.S. 495), would later find that the
NIRA violated the commerce clause because the federal government had no authority to regulate intrastate commerce. The NIRA, along with the NLB, was struck down and at the time lacked sufficient support to be reworked into something constitutionally acceptable. Because the NLB lacked the formal authority or jurisdiction to intervene legally in labor issues, Wagner pursued legislation that would create a body that had those necessary powers. However, Wagner encountered numerous obstacles. The proposed legislation was not looked on favorably by various groups, including members of Congress. For example, Wagner’s proposed bill faced several alternatives, including a bill presented by Senator David I. Walsh (D-MA), chair of the Senate Committee on Education and Labor, which also ultimately failed. Executive Order 6073, issued on June 19, 1934, by President Franklin D. Roosevelt, replaced the NLB with the first manifestation of the NLRB. The newly created and legally authorized NLRB had three members, reduced from seven which had comprised the NLB. Those members were Lloyd K. Garrison, dean of the University of Wisconsin Law School, who served as its chair; Harry A. Millis, professor of economics at the University of Chicago; and Edwin A. Smith, Commissioner of Labor and Industry for the state of Massachusetts. Roosevelt’s executive order was considered to be flawed due to its authorization of the creation of multiple labor boards for individual sectors of industry. The boards failed to achieve uniformity in rulings regarding labor practices across industries, which resulted in extreme fragmentation in U.S. labor law. Wagner continued the pursuit for a more workable solution in light of the extreme failure of Roosevelt’s executive order. In 1935, Wagner introduced the NLRA. The NLRA, alternatively known as the Wagner Act, was signed into law on July 5, 1935, by President Roosevelt. This time, Wagner enjoyed the support of his legislation because of the volatile labor climate that existed within the United States. Physical violence had broken out at various general strikes across the country. Through the adoption of the NLRA, labor’s right to organize along with the ability to bargain collectively for better wages and working conditions were codified into law. The act also specifically outlined
five practices that were considered "unfair labor practices" on the part of industry. Additionally, the act provided for the process by which labor could elect representation. This act transformed the NLRB into its present-day form, an independent federal agency, and it gave the board formal authority to administer the various parts of the NLRA. The act also granted the NLRB the authority to determine if, in fact, labor desired to be represented by a union. Along with its administrative duties, the NLRB was given the power to enforce the provisions of the NLRA. Initially, the act was generally ignored by industry and labor. General strikes, including the Flint Sit-Down Strike of late 1936 through early 1937, continued to be used as a means of forcing industry to recognize and bargain with unions instead of working through the NLRB, as was intended. The lack of recognition of the legality of the act can be attributed to the Schechter decision, in which the U.S. Supreme Court had struck down other similar statutes on the basis that Congress did not have the constitutional authority to enact them. However, on April 12, 1937, the U.S. Supreme Court ruled in National Labor Relations Board v. Jones & Laughlin Steel Corporation (301 U.S. 1). In its 5–4 decision, the Court held that Congress had the power, under the commerce clause, to regulate labor relations. In the majority opinion, the Court held: "[Although] activities may be intrastate in character when separately considered, if they have such a close and substantial relation to interstate commerce that their control is essential or appropriate to protect that commerce from burdens and obstructions, Congress cannot be denied the power to exercise that control." The NLRA was deemed constitutional. One of the next major developments for the NLRB was the passage of the Taft-Hartley Act (THA) in 1947. Congress had come to believe that the NLRA had given undue consideration to labor. Believing that industry and the public needed protection from unfair union practices, Congress passed the THA. Not only did it restrain some of the actions of unions, it also limited the NLRB's ability to determine appropriate units for bargaining. The act also limited the NLRB to one representation election per year in a unit. The THA was vetoed by President Harry S. Truman, but the veto was subsequently overridden by Congress.
Its passage also marked the beginning of a more balanced approach and attitude by the NLRB toward labor, management, and employees. In January 1957, the U.S. Supreme Court heard arguments in Guss v. Utah Labor Relations Board (353 U.S. 1). The Court was deciding whether cases rejected by the NLRB on the basis of its jurisdictional standards could subsequently be heard by state labor-relations boards. The Court found that state boards could not hear cases rejected by the NLRB. The THA was not immune to change either. In 1959, the Labor-Management Reporting and Disclosure Act (LMRDA), also known as the Landrum-Griffin Act, was passed. Its passage was in response to egregious practices by unions against their members or potential members. The act specifically gave the general counsel of the NLRB the power to seek injunctions against unions that violated the prohibition against recognitional picketing for more than 30 days without filing a petition for representation with the board. The LMRDA also overrode the Supreme Court's Guss decision by ceding jurisdiction to state courts and labor-relations boards over cases that had been rejected by the NLRB. In time, other federal laws relating to U.S. labor relations have been passed, but they have dealt primarily with adding to, rescinding, and defining U.S. labor law and not with the operational nature of the NLRB, as the previously discussed laws had done. These other federal laws include the Americans with Disabilities Act (ADA), the Age Discrimination in Employment Act, the Antiracketeering Act, the Antistrikebreaking Act, the Civil Rights Acts, the Norris-LaGuardia Act, the Occupational Safety and Health Act (OSHA), the Racketeer Influenced and Corrupt Organizations Act (RICO), the Railway Labor Act, the Fair Labor Standards Act (FLSA), the Walsh-Healey Act, the Davis-Bacon Act, the Service Contract Act of 1965, and the Welfare and Pension Plans Disclosure Act. The NLRB does not enforce these laws, which are under the jurisdiction of other federal agencies. For example, the Department of Labor addresses issues or conflicts involving the above-mentioned OSHA and FLSA. During its 70-year history, the NLRB has processed more than 2 million cases, issued more than 65,000 published decisions in adjudicated cases, and conducted 415,000 elections involving more than 40
million workers. During fiscal year 2006, the NLRB received 182,161 inquiries from the public. However, their total case intake for the same time period was 26,723. This is a reminder that all cases are not appropriate for consideration by the NLRB or are not under the jurisdiction of the provisions of the NLRA, but those cases accepted for consideration are a reflection of the issues facing today’s workforce. The NLRB continues to protect employee rights, including the right to organize and bargain collectively. See also bureaucracy; executive agencies. Further Reading Gross, James. The Making of the National Labor Relations Board: A Study in Economics, Politics, and the Law. New York: SUNY Press, 1974; Kahn, Linda, ed. Primer of Labor Relations. Washington, D.C.: BNA Press, 1995; Morris, Charles. The Blue Eagle at Work: Reclaiming Democratic Rights in the American Workplace. Ithaca N.Y.: Cornell University Press, 2005; Schlesinger, Arthur. The Age of Roosevelt: The Coming of the New Deal. Boston: The Riverside Press Cambridge, 1958; Taylor, Benjamin, and Fred Witney. U.S. Labor Relations Law: Historical Development. Englewood Cliffs, N.J.: Prentice Hall, 1992. —Victoria Gordon and Jeffery L. Osgood, Jr.
National Security Advisor The National Security Advisor ("assistant to the president for national security") is the president's top foreign-policy adviser in the White House and is responsible for the daily management of U.S. foreign policy and national-security affairs. The president relies heavily on the National Security Advisor (NSA) to manage a number of specific national-security issues, to direct the National Security Council (NSC), and to supervise the NSC staff. On the whole, the National Security Advisor serves as an "honest broker" of the foreign-policy-making process. In other words, the National Security Advisor ensures that all reasonable courses of action are brought to the president's attention and that the advice of the president's other foreign-policy advisers is accurately conveyed. The National Security Advisor also supplies advice from the president's own perspective. Unlike the secretaries of state or
defense, who represent bureaucratic organizations, the National Security Advisor’s constituency is the president. The National Security Advisor has been the bridge that joins all the important components of the foreign-policy-making establishment, namely the intelligence community, the national military establishment, and the diplomatic community. However, the 1947 act that created the National Security Council, did not provide for the establishment of the National Security Advisor. The official position of “special assistant to the president for national security affairs” was established by President Dwight D. Eisenhower to designate the person who would be the overall director of the NSC and the president’s top White House foreign-policy adviser. The original title of the position was shortened by President Richard Nixon to “the assistant to the president for national security” or the “National Security Advisor.” The National Security Advisor monitors the actions taken by the cabinet departments in formulating and enforcing the president’s national-security goals. The National Security Advisor has a special role in crisis management since the rapid pace of events during crises often draws the National Security Advisor into an even more active role of advising the president on unfolding events. Therefore, the National Security Advisor fulfills the president’s need for prompt and coordinated action under White House management and in communicating presidential directives to the cabinet departments. For example, Condoleezza Rice, who served as President George W. Bush’s first National Security Advisor, focused more on advising the president and ensuring coordination of the policy-making process and less on advocating specific policies at the NSC. The intense involvement of the Departments of Defense and State in the global war on terrorism and missions in Afghanistan and Iraq resulted in Secretary of Defense Donald Rumsfeld and Secretary of State Colin Powell more actively involved in foreign-policy development. The National Security Advisor must therefore walk a fine line with the president. Given the demands on the president’s time, the chief executive can only deal with those problems that require a certain level of his involvement. An effective National Security Advisor should neither usurp the president’s authority
on “lower-level” issues nor consume his limited time on routine foreign-policy issues. The National Security Advisor must protect the president’s precious time and manage the constant demands of other cabinet officials and foreign leaders. Consequently, the National Security Advisor must serve as a gatekeeper on foreign-policy and nationalsecurity issues for the White House by determining who warrants influence in the foreign-policy-making process. The analysts and officials who work directly for the National Security Advisor are the NSC staff. Staff members who deal with key international issues are usually frequently experts from think tanks and academia, senior professionals from various cabinet departments, and military officers. The NSC staff oversees the day-to-day management of foreign-policy and national security for the president. Since the formal members of the NSC meet infrequently and have little direct contact with one another, the NSC staff is considered the most important and consequential component in the foreign-policy-making process. The wide-ranging duties and activities of the NSC staff result from the fact that the staff is under the control of the National Security Advisor who, in turn, works directly for the president. Even though the Secretaries of State and Defense are cabinet officials who belong to the NSC, they have little authority over the NSC staff. In time, the roles of the National Security Advisor have greatly expanded. During the Harry S. Truman administration, foreign-policy officials viewed the newly created NSC as little more than a neutral coordinator of foreign-policy information prepared for the president by formal NSC members. Now, the National Security Advisor is at the center of foreign-policy making, in particular, national security. Twenty men and one woman have served as National Security Advisor, from Robert Cutler under President Eisenhower to Stephen Hadley under President George W. Bush. Whereas some National Security Advisors have only been engaged minimally in policy advocacy (William Clark under President Ronald Reagan and Hadley under Bush), others have been active foreign-policy advisers (Henry Kissinger under Nixon, Zbigniew Brzezinski under President Jimmy Carter, and Sandy Berger under
President Bill Clinton). In 1973, Henry Kissinger actually served both as National Security Advisor and Secretary of State between 1973 and 1975 under Nixon and President Gerald Ford. Two National Security Advisors under Reagan, Robert C. McFarlane and Vice Admiral John M. Poindexter, managed and actively led questionable foreign-policy and intelligence operations. Whatever their status or level of ambition, each National Security Advisor has been confronted with the enormous task of managing the NSC Principals Committee, budget requests from foreign-policy agencies and cabinet departments, crisis management, and policy evaluations on specific topics that range from global trade to arms control. The National Security Advisor is always on the telephone with key foreign-policy officials and will occasionally host foreign representatives in his or her office in the West Wing of the White House. More activist National Security Advisors assign studies for the NSC staff for the president, read and comment on briefs and studies, distill the findings and recommendations into a report for the president, manage the flow of information on security and foreign-policy matters from cabinet officials, monitor the implementation of decisions made by the president to ensure that they are properly carried out by the bureaucracy, and convey their personal views on policy matters to the president. When Robert Cutler became National Security Advisor in the Eisenhower administration, the National Security Advisor largely served as an honest broker of the foreign-policy-making process. After Cutler, the National Security Advisor position was expanded to include more policy advocacy and administrative coordination. The National Security Advisor’s office became a source of considerable political authority during the John F. Kennedy Administration, mostly in reaction to President Kennedy’s disastrous handling of the Bay of Pigs mission. Key National Security Advisors, namely McGeorge Bundy, Walt Rostow, Henry Kissinger, and Zbigniew Brzezinski, combined their political skills with their intellectual capabilities to become close personal advisers to the president, to centralize foreign-policy making in the hands of the White House, and to concentrate political power in the hands of the NSC staff. In particular, McGeorge Bundy replaced Eisenhower’s
formal NSC system with a much more informal and decentralized process, similar to a university classroom. The power of the National Security Advisor reached its highest point during the Nixon and Carter administrations. President Nixon relied heavily on Henry Kissinger’s judgment, who proved to be very skillful in nurturing a close relationship with the president. Kissinger not only became the most influential foreign-policy maker, but he was the top public advocate for Nixon’s foreign-policy objectives. Under Carter, Zbigniew Brzezinski regularly held press conferences, employed a press secretary on the NSC staff, and possessed significant influence over the direction of Carter’s foreign-policy. Depending on the specific issue, Brzezinski was a public advocate, coordinator, administrator, policy entrepreneur, mediator, and interpreter of foreignpolicy and national-security matters. As a result of this far-reaching power in the hands of the National Security Advisors, Nixon and Carter isolated their secretaries of state. Secretaries of State William Rogers (Nixon) and Cyrus Vance (Carter) perceived both Kissinger and Brzezinski as quite aggressive in their expansion of the political roles of the National Security Advisor. This was especially the case with Kissinger, who acted simultaneously as both National Security Advisor and Secretary of State between 1973 and 1975 under both Nixon and Ford. In reaction to the model of wide-ranging power under Kissinger and Brzezinski, President Reagan preferred a less-influential National Security Advisor. Reagan’s first three National Security Advisors, Richard Allen, William Clark, and Robert McFarlane, wielded power in a more behind-the-scenes and support fashion. Each focused on buttressing President Reagan’s foreign-policy needs, relegating the role of public foreign-policy spokesperson to Secretaries of State Alexander Haig and George Shultz. Interestingly, highly publicized bureaucratic turf battles were waged between Reagan’s Secretary of Defense Caspar Weinberger and Secretary of State George Shultz. John Poindexter was probably the most unassuming, yet quietly powerful of all of Reagan’s National Security Advisors. However, the Iran-contra scandal revealed that Poindexter was operating in a
highly dubious and illicit fashion in secretly pulling the levers of power. Although Poindexter's actions dealt the office of the National Security Advisor a huge setback in terms of its political roles, Reagan's fifth National Security Advisor, Frank Carlucci, was a highly experienced official who helped restore the influence and prestige of the National Security Advisor. Carlucci rescued the National Security Advisor's office by reverting to its original role as an honest broker and manager of the foreign-policy-making process. He also moved to transfer the center of power to the Secretary of State. Interestingly, Clinton's two National Security Advisors, Anthony Lake and Sandy Berger, believed that the highly public role of the National Security Advisor should be maximized; that is, they contended that the National Security Advisor should explain U.S. foreign policy and presidential initiatives to the public and to international leaders. In particular, Berger fashioned Clinton's NSC staff into a leading force on foreign-policy and national-security matters, having transformed his office and staff into a more powerful force than Secretary of State Madeleine Albright. In Clinton's second term, especially during the NATO air war against Serbia in 1999, the power and influence wielded by Berger as National Security Advisor rivaled that of Kissinger and Brzezinski. President George W. Bush's first National Security Advisor, Condoleezza Rice, was an academic who managed an NSC system composed of such domineering Washington political figures as Secretary of State Colin Powell, Secretary of Defense Donald Rumsfeld, CIA Director George Tenet, and Vice President Dick Cheney. While Rice seemed to have been eclipsed by these highly influential foreign-policy makers, she relished her role as a behind-the-scenes coordinator who mediated bureaucratic disputes. However, Rice has taken an active role in articulating key foreign-policy measures, such as President Bush's foreign-policy doctrine of preventive and preemptive military force. In Bush's second term, Rice succeeded Powell as Secretary of State, and her deputy, Stephen Hadley, was named Bush's second National Security Advisor. As such, Hadley has generally allowed Rice to lead U.S. foreign policy actively. As a modest man in a politically significant position, Hadley thinks of
largely as an honest broker; in fact, Hadley may be the lowest-profile National Security Advisor since Robert Cutler. This may stem from the fact that Hadley is uncomfortable operating in the public limelight. At a time when terrorism and weapons of mass destruction dominate the president's agenda, Hadley appears subordinate to the Departments of State and Defense. In effect, Hadley considers himself to be a facilitator of the president's objectives, which have been shaped to a great extent by Rice and Rumsfeld. Under Hadley, bureaucratic struggles have been kept in check. However, this seems largely due to the fact that following Powell's departure in 2005, there have been few policy disagreements. As a broker, Hadley has been successful in bringing a level-headed and balanced approach to a foreign policy that has been criticized as reflecting neoconservative ideological principles. From Walt Rostow under Kennedy to Stephen Hadley today, the National Security Advisor has served both as an honest broker of the daily process and as an active foreign-policy adviser-advocate. It is clear that presidents favor both roles, although the two roles conflict with each other, and the advisor is often in conflict with other key officials, namely the secretary of state. A National Security Advisor who advances his or her views too aggressively risks losing the confidence and trust of the NSC. The most politically astute National Security Advisors have balanced these roles effectively according to the specific issue. Others have overreached and produced discord within the president's foreign-policy-making team. Further Reading Bohn, Michael K. Nerve Center: Inside the White House Situation Room. New York: Potomac Books, 2002; Brzezinski, Zbigniew K. Power and Principle: Memoirs of the National Security Adviser, 1977–1981. New York: Farrar, Straus and Giroux, 1985; Ditchfield, Christian. Condoleezza Rice: National Security Adviser. New York: Scholastic Library, 2003; Ingram, Scott. The National Security Advisor. New York: Thomson Gale, 2004; Newmann, William W. Managing National Security Policy: The President and the Process. Pittsburgh, Pa.: University of Pittsburgh Press, 2003. —Chris J. Dolan
National Security Council
The National Security Council (NSC) was created with the passage of the National Security Act of 1947. Located in the Executive Office of the President, the National Security Council is the president's principal forum for managing national-security and foreign-policy matters with his senior advisers and cabinet officials. Since its inception under President Harry S. Truman, the function of the National Security Council has been to advise and assist the president on national-security and foreign-policy issues. The National Security Council also serves as the president's primary mechanism for coordinating these issues among various government agencies. The National Security Council is chaired by the president and directed by the assistant to the president for national security (National Security Advisor), although the 1947 act did not provide for the creation of the position. The formal National Security Council structure comprises three levels: the Principals Committee, the Deputies Committee, and the NSC staff. The most senior, regularly constituted group is the Principals Committee (PC). The PC meets on a regular basis to discuss current and developing foreign issues and review and coordinate policy recommendations developed by subordinate groups and other departments and agencies. The six principal advisers are the National Security Advisor, the Secretaries of State, Defense, and Treasury, the director of the Central Intelligence Agency, and the chair of the Joint Chiefs of Staff. Other key advisers attend the PC when issues related to their areas of responsibility are discussed; for example, the U.S. attorney general, the director of the Office of Management and Budget, and the assistant to the president for homeland security. When international economic issues are on the agenda, attendees may include the secretaries of agriculture, homeland security, and commerce, the United States Trade Representative, the assistant to the president for economic policy, the White House Chief of Staff, and the vice president's National Security Advisor. Under the PC is the Deputies Committee (DC), which is responsible for directing the work of interagency working groups and ensuring that issues brought before the PC have been properly analyzed.
President Ford meets with his National Security Council to discuss the Mayaguez situation, May 13, 1975. (Ford Library)
The DC makes the bulk of foreign-policy decisions in preparation for the PC's review and the president's ultimate decision. The DC is composed of deputy-level advisers, assistant secretaries, and undersecretaries in various cabinet departments, namely the deputy secretaries of State, Treasury, and Defense, the undersecretary of state for political affairs, the undersecretary of defense for policy, the deputy attorney general, the deputy directors of the Office of Management and Budget and the CIA, the counterterrorism adviser, the vice chair of the Joint Chiefs of Staff, the White House deputy chief of staff, the deputy assistant to the president for homeland security, and the various deputy National Security Advisors. When international economic issues are on the agenda, the DC's regular membership adds the undersecretary of the treasury for international affairs, the deputy secretary of commerce, the Deputy
U.S. Trade Representative, and the Deputy Secretary of Agriculture. The DC is also responsible for coordinating various interagency committees, which are composed of experts and senior officials from the departments and agencies represented on the DC. Interagency committees are responsible for managing the development and implementation of foreign policies when they involve more than one government agency. Depending on the scope of their responsibilities, some interagency committees may meet regularly, while others meet only when developments or planning require policy synchronization. Interagency committees are organized around regional and functional issues. Regional committees are headed by assistant secretaries of state, while functional committees are headed by senior NSC officials. Regional committees include Europe and Eurasia; Western Hemisphere; East Asia; South Asia; Near East and
North Africa; and Africa. Functional committees include arms control; biodefense; terrorism information strategy; contingency planning; crisis planning; counterterrorism security; defense strategy; force structure; democracy and human rights; combat detainees (NSC); the global environment; HIV/AIDS and infectious diseases; intelligence and counterintelligence; drug interdiction; humanitarian assistance; international finance; organized crime; maritime security; Muslim world outreach; weapons proliferation and counterproliferation; space; communications; terrorist financing; global economic policy; Afghanistan operations; and Iraq policy and operations. The 1947 act also provided for a permanent staff headed by an executive secretary who is appointed by the president. The executive secretary largely assists the president and the National Security Advisor in preparing and convening meetings with other National Security Council members, monitoring meetings between the president and foreign political leaders, and scheduling presidential foreign travel. Initially, the National Security Council staff had no substantive role in policy formulation or enforcement. Over the years, however, the staff has fallen more under the supervision of the president and the National Security Advisor, who have expanded its role in participating in national security and intelligence briefings and in assisting the president in responding to congressional inquiries and preparing public remarks. Today, the National Security Council staff serves as an initial point of contact for departments and agencies that wish to bring a national security issue to the president's attention. While the National Security Council is at the center of the president's foreign-policy coordination system, it has changed many times to conform to the needs and inclinations of each president. Although President Harry S. Truman fully intended to maintain direct control of foreign policy, he rarely attended NSC meetings, which were often chaired by the Secretary of State. As a result, instead of producing coordinated, unified policies, the NSC was hampered by bureaucratic turf battles. With the outbreak of the Korean War in 1950, Truman began to convene regular meetings to develop, discuss, and coordinate policy related to the war. Truman's increased use of the
NSC system brought about procedures that have endured to the present day, including interagency committees with responsibilities for specific regional and functional areas, analysis and development of policy options, and recommendations for presidential decisions. Following the election of Dwight D. Eisenhower in 1952, the National Security Council and its staff grew in importance, size, and responsibilities. President Eisenhower’s military experience led him to establish an elaborate NSC structure centered on a planning board to coordinate policy development and an operations coordinating board for monitoring enforcement of policies. In 1953, Eisenhower also created the post of National Security Advisor to direct the NSC and staff. President John F. Kennedy was uncomfortable with the Eisenhower National Security Council system, adopting a more informal process where he would talk with individual NSC statutory members and staffers. Kennedy also created the White House Situation Room, where 24-hour communications would be maintained between the NSC and all agencies, U.S. embassies, and military command posts in the United States and around the world. President Lyndon B. Johnson continued with an informal advisory NSC system, relying on the National Security Advisor, a smaller NSC staff, and trusted friends. During the Vietnam War, Johnson instituted a “Tuesday Lunch” policy discussion with key NSC members, including the Secretaries of State and Defense, the CIA director, and the chair of the Joint Chiefs of Staff. Centralized control of foreign policy making by the National Security Council climaxed under President Richard Nixon. National Security Advisor Henry Kissinger expanded the NSC staff and put more controls in place to ensure that analytical information from the departments would be routed through his office. Kissinger would draft his own written recommendations for Nixon. This system reflected Kissinger’s dominating personality, as well as his desire to position the NSC staff as the preeminent foreignpolicy-making agency in the administration. Following Nixon’s resignation, President Gerald Ford inherited Nixon’s NSC configuration, which found Kissinger acting both as National Security Advisor and Secretary of State. To make his own imprint,
Ford did appoint General Brent Scowcroft as National Security Advisor. Kissinger maintained his role as chief foreign-policy adviser to the president, and Scowcroft coordinated analyses and policy options among the executive-branch departments and agencies. President Jimmy Carter was determined to eliminate the Nixon NSC system, believing that Kissinger had amassed too much power at the expense of other NSC members. He envisioned an NSC system focused on coordination and research, and he reorganized to ensure that the National Security Advisor would be only one of many advisers on the NSC. Carter chose Zbigniew Brzezinski as his National Security Advisor because he wanted an assertive intellectual at his side to provide him with guidance on foreign-policy decisions. Initially, Carter reduced both the NSC staff and the number of standing committees. All issues referred to the NSC were reviewed by one of two newly created committees: the Policy Review Committee (PRC) or the Special Coordinating Committee (SCC). Carter used frequent, informal meetings as a decision-making device, typically his Friday breakfasts, usually attended by the vice president, the Secretaries of State and Defense, the National Security Advisor, and the domestic-policy adviser. Carter also promoted the free flow of ideas, unencumbered by formal constraints. However, the system led to bureaucratic battles between Brzezinski and Secretary of State Cyrus Vance on key issues such as arms control, the Iranian revolution, and the hostage crisis. Significant changes were made to the National Security Council system by President Ronald Reagan. Reagan established three senior-level informal interdepartmental groups on foreign, defense, and intelligence issues, chaired by the Secretaries of State and Defense and the director of the CIA. Under the groups, a series of assistant secretary-level groups, each chaired by the agency with particular responsibility, dealt with specific foreign-policy issues. The NSC staff was responsible for the assignment of issues to the groups. During 1985 and 1986, the NSC and its staff took a particularly activist role in setting policy toward Latin America and the Middle East. However, its activism led to the Iran–contra scandal, which was one of the
lowest points in NSC history. The scandal was the product of illegal NSC efforts to develop a secret policy in which the United States would provide arms to Iran in exchange for its resistance to the Soviet Union and its assistance in freeing U.S. hostages held by extremist groups in Lebanon. National Security Advisor McFarlane and Admiral Poindexter, who succeeded him in December 1985, coordinated the policy. The arms-for-hostages effort eventually became connected, through the transfer of funds generated by the arms sales, with the NSC staff's ardent support for the Nicaraguan contras in their civil war against the left-wing government of Nicaragua. Investigations by the Tower Commission, Congress, and a special prosecutor found that the NSC staff, the president, the National Security Advisors, and the heads of particular NSC agencies were complicit in coordinating the illicit activities. President George H. W. Bush made many changes to the National Security Council. Two of the most significant were the enlargement of the policy review group and delegation of responsibility for the Deputies Committee to the deputy National Security Advisor. President Bush brought deep experience to the NSC with his appointment of General Brent Scowcroft as National Security Advisor. Scowcroft had served in the Kissinger NSC, had been President Ford's National Security Advisor, and had chaired the president's board examining the Iran–contra scandal. The NSC also maintained good relationships with the other agencies, in particular the Department of State. The Bush NSC oversaw many critical events, such as the collapse of the USSR, the reunification of Germany, the U.S. invasion of Panama, the Tiananmen Square massacre, and the Persian Gulf War. President Bill Clinton created a National Security Council system that included a greater emphasis on economic issues in the formulation of foreign policy. Clinton added the Secretary of the Treasury, the U.S. Trade Representative, the U.S. Ambassador to the United Nations, the White House chief of staff, and the newly created assistant to the president for economic policy (national economic adviser) to the NSC Principals Committee. The national economic adviser would serve as a senior adviser to coordinate foreign and domestic economic policy through a newly created
National Economic Council (NEC). Among the most urgent issues the NSC dealt with were crises in Bosnia, Haiti, Iraq, and Somalia, illegal drug trafficking, UN peacekeeping, strategic-arms control policy, human rights and trade with China, global environmental affairs, ratification of the Chemical Weapons Treaty, NATO enlargement, and the Middle East peace process. The political roles of the NSC have been influenced by historical developments revolving around the pursuit of key national interests. For example, during the Clinton administration, the NSC focused increasingly on the role of economic matters and international trade in U.S. foreign policy. Historically, economic issues were handled by the NSC staff, the Treasury Department, and the Council of Economic Advisers. The dynamism and complexity of globalization and international economics led Clinton to create the national economic adviser and the National Economic Council, which was modeled on the formal structure of the NSC. The George W. Bush administration continued this practice of recognizing the importance of economic issues in foreign policy by appointing economic specialists to the NSC staff and promoting cooperation between the NSC and NEC principals committees. Interestingly, formal meetings of the National Security Council have been rare in most administrations because presidents tend to prefer more informal consultations with select NSC members. Moreover, presidents seem more inclined to manage foreign policy and national security through direct meetings with cabinet officers and key advisers and through committees with substantive responsibilities. However, this pattern of infrequent NSC meetings changed in the wake of the terrorist attacks of September 11, 2001, and the subsequent military operations in Afghanistan and Iraq. Like President Truman during the Korean War, President George W. Bush has convened formal meetings of the NSC on a regular basis to formulate policies for conducting the war on terrorism, key military campaigns, and weapons counterproliferation. Since then, the Bush NSC has met at least weekly at the White House or through the use of the Secure Video-Teleconference Service
when the president is traveling or at his Texas ranch. Further Reading Best, Richard A. The National Security Council: An Organizational Assessment. New York: Nova Science Publishers, 2001; Johnson, Loch K., and Karl Inderfurth, eds. Fateful Decisions: Inside the National Security Council. Oxford, England: Oxford University Press, 2003; Menges, Constantine C. Keepers of the Keys: A History of the National Security Council from Truman to Bush. New York: HarperCollins, 1992; Rothkopf, David J. Running the World: The Inside Story of the National Security Council and the Architects of American Power. New York: PublicAffairs, 2005; Stewart, Alva W. The National Security Council: Its Role in the Making of Foreign Policy. New York: Vance, 1988. —Chris J. Dolan
Nuclear Regulatory Commission
The Nuclear Regulatory Commission (NRC) is an agency of the federal government. It regulates and licenses the use of all nuclear energy in the United States and is designated to serve the public interest and protect public health and safety, as well as environmental safety. The NRC licenses companies to build and operate nuclear-energy-producing facilities (reactors) and is also responsible for license renewal. The NRC sets industry safety standards and makes rules and regulations that cover the operations of these facilities. It also has responsibility for inspecting nuclear facilities in the United States. The stated mission of the NRC is to "license and regulate the Nation's civilian use of byproduct, source, and special nuclear materials to ensure adequate protection of public health and safety, promote the common defense and security, and protect the environment." Created in 1975 under the Energy Reorganization Act of 1974, the NRC supplanted the old Atomic Energy Commission (AEC), the federal agency responsible for the regulation of the atomic/nuclear industry since 1946. The original AEC was designed to separate the military from the commercial use of atomic energy. Today, while the NRC is
responsible for the civilian and commercial use of nuclear power, it also has some say in the military application of nuclear energy, especially as safety and environmental concerns are raised. Overall, control of nuclear weapons passed to the Energy Research and Development Administration (ERDA), which was also established by the Energy Reorganization Act of 1974. In 1977, the ERDA became part of the newly created Department of Energy. A major focus of the regulatory programs of the NRC (and its predecessor, the AEC) has been the prevention of a major nuclear-reactor accident. The NRC has issued a series of requirements in an attempt to protect the public and to make sure that a massive release of radiation from a power reactor would not occur. This issue was especially important during the late 1960s and early 1970s as both the number and size of nuclear plants being built increased; this generated intense interest among members of Congress, environmentalists, and the media. The NRC dealt directly with the issue of public safety in March 1979 when an accident occurred at Unit 2 of the Three Mile Island plant in Pennsylvania, melting about half of the reactor's core and, for a time, generating fear that widespread radioactive contamination would result. The crisis ended without a major release of dangerous forms of radiation, but as a result of the accident, federal officials developed new approaches to nuclear regulation. Afterward, the NRC placed much greater emphasis on operator training and "human factors" in plant performance, severe accidents that could occur as a result of small equipment failures (as occurred at Three Mile Island), emergency planning, and plant operating histories, among other issues. Nuclear energy remains a very controversial means of weaning the United States off its addiction to expensive and politically unpredictable supplies of oil and other fossil fuels. In late 2005 and early 2006, as the supply of oil became more politically variable and as gas prices shot up, there were renewed calls for the development of more nuclear power plants to supply the United States with a less politically vulnerable source of energy. However, political opposition to new nuclear power plants remained a significant hurdle, as nuclear energy continued to be a controversial political issue in the United States.
The NRC is run by a five-member commission. Commission members are appointed by the president with the advice and consent of the Senate and serve five-year terms. The president designates one member of the commission as chair. The current chairman of the Nuclear Regulatory Commission is Dr. Dale E. Klein, whose term ends on June 30, 2011. The budget of the NRC is slightly more than $900 million (based on the fiscal year 2008 request), and the NRC staff includes more than 3,500 full-time employees. Headquartered in Rockville, Maryland, the NRC also maintains four regional offices, located in King of Prussia, Pennsylvania; Atlanta, Georgia; Lisle, Illinois; and Arlington, Texas. These regional offices oversee the more than 100 power-producing reactors and nearly 40 non-power-producing nuclear facilities across the United States. Their mission focuses on the control and safety of nuclear reactors, storage, the safety of nuclear materials, and the disposal and transportation of spent and active nuclear materials. In addition, an On-Site Representative High-Level Waste Management Office in Las Vegas, Nevada, maintains information associated with the proposed high-level waste repository at Yucca Mountain. The NRC also has a Technical Training Center in Chattanooga, Tennessee, to provide training for its staff in various technical disciplines that are associated with the regulation of nuclear materials and facilities. Since the terrorist attacks of September 11, 2001, the safety issues surrounding nuclear energy have also become national-security issues, as nuclear facilities may be attractive targets for terrorists. In addition, federal officials fear that extremist terrorist groups might attempt to use so-called radioactive "dirty bombs" in terrorist efforts in the United States and abroad. In early 2007, an undercover investigation conducted by the U.S. Government Accountability Office (GAO) found that the NRC had provided a license to a fake company that would have allowed it to purchase radioactive materials needed to make a dirty bomb. Only 28 days after the fake company submitted its application, the NRC mailed the license to a post-office box in West Virginia. The investigation by the GAO showed that much work needs to be done by various federal agencies, the Nuclear Regulatory Commission
included, to prevent further acts of terrorism in the United States. See also bureaucracy; executive agencies. Further Reading Loeb, Paul Rogat. Nuclear Culture: Living and Working in the World's Largest Atomic Complex. New York: Coward, McCann & Geoghegan, 1982; Titus, Costandina. Bombs in the Backyard: Atomic Testing and American Politics. Reno: University of Nevada Press, 1986; U.S. Nuclear Regulatory Commission Web page. Available online. URL: http://www.nrc.gov/; Webb, Richard E. The Accidental Hazards of Nuclear Power Plants. Amherst: University of Massachusetts Press, 1976. —Michael A. Genovese
Occupational Safety and Health Administration
The Occupational Safety and Health Administration (OSHA), a program agency within the U.S. Department of Labor, was created by an act of Congress, the Occupational Safety and Health Act (84 Stat. 1590), which was signed into law by President Richard M. Nixon on December 29, 1970. The agency began operations on April 28, 1971, and absorbed the Bureau of Labor Standards on May 1, 1971. OSHA's mission is to "assure the safety and health of America's workers by setting and enforcing standards; providing training, outreach, and education; establishing partnerships; and encouraging continual improvement in workplace safety and health." OSHA and the 24 states that have established their own job safety and health programs have more than 2,100 inspectors, as well as investigators, engineers, physicians, educators, standards writers, and other technical staff. The federal agency, as well as the state programs that it approves and monitors, establishes protective standards, enforces the standards, and provides technical assistance and consultation to employers and employees. While OSHA was established in 1971, the government's role in promoting industrial safety and health dates back to the 1930s. A Division of Labor Standards was established in the Department of Labor by a departmental order in November 1934. The agency was established shortly after reports were
released by the National Child Labor Committee on children involved in industrial accidents in Illinois, Tennessee, and Wisconsin, and by the U.S. Department of the Interior’s Bureau of Mines on accidents in metallurgical works in the United States. The purpose of the bureau was to promote industrial safety and health; develop national standards for labor legislation and labor-law administration; coordinate the enforcement of labor, wage, and occupational safety and health laws; and advise on child labor and youth employment issues. In 1936, Congress passed the Walsh–Healey Public Contracts Act (41 U.S.C. 35), which, among other provisions, established safety and sanitation standards for employees of federal-government contractors where the value of the contract was greater than $10,000. The Industrial Division of the Department’s Children’s Bureau was transferred to the Division of Labor Standards in 1946. The division was renamed the Bureau of Labor Standards by Secretary of Labor General Order 39 on February 17, 1948. In 1970, Congress enacted the Occupational Safety and Health Act to protect workers and worker safety by ensuring that employers provided their employees with a work environment that was free from dangers to their safety and health. In the statute, Congress declared “that personal injuries and illnesses arising out of work situations impose a substantial burden upon, and are a hindrance to interstate commerce in terms of lost production, wage loss, medical expenses, and disability compensation payments” (29 U.S.C. 651). The Occupational Safety and Health Administration was established to prevent work-related injuries, illnesses, and deaths by issuing and enforcing standards for workplace safety and health. The assistant secretary of labor for Occupational Safety and Health is responsible for management of OSHA. The law also encouraged states to develop and operate occupational safety and health programs of their own. The federal agency is responsible for reviewing their plans and monitoring their actions. The law also authorized OSHA to provide the states with up to 5 percent of their operating costs for the administration of these plans. As of this writing (March 2007), 24 states, Puerto Rico and the U.S. Virgin Islands operate their own program. The first assistant secretary of labor for Occupational Safety
and Health was George Guenther (1971–73), who had been the director of the Labor Standards Bureau, which was absorbed by OSHA. Guenther decided to emphasize the agency’s responsibilities for health. The first standard established by the agency was for asbestos fibers. Early in its history, OSHA was criticized for promulgating confusing and economically burdensome regulations. OSHA’s regulations were seen as costly to businesses, which were being required to adapt existing equipment to meet new standards and to implement other costly hazard controls. The agency’s regulations that required employers to provide training, communications concerning the existence of hazards to employees, and extensive documentation to prove compliance were also seen as very expensive for business. There were also those who questioned whether some of the Permissible Exposure Limits (PEL) established for hazardous materials were actually safe. For example, OSHA adopted a PEL of 52 micrograms of hexavalent chromium per cubic meter of air as a level adequate to prevent injuries to workers exposed to chromium. This standard was adopted without review, as it was based on a standard recommended by the American National Standards Institute in 1943. In 1972, consumer advocate Ralph Nader released a report entitled “Occupational Epidemic” in which he charged that the agency was failing to deal with illness and injury on the job. He claimed that the Department of Labor was a “hostile environment” for the enforcement of the Occupational Safety and Health Act. John Stender, a union official, replaced Guenther in 1973. Under Stender, the agency revised many of the much-maligned standards that had been put in place during its first two years and wrote many new workplace standards. However, the agency found itself criticized for ineffective enforcement. Another blow to the agency took place during the investigation of the Watergate scandal when a memorandum, written by Guenther in 1972, suggested that the agency would scale back its enforcement so as not to alienate business supporters of President Nixon’s reelection. This led some critics of the agency to charge that its enforcement efforts were influenced by politics. Stender resigned from
OSHA in July 1975 and was eventually replaced by Morton Corn, a professor of occupational health and chemical engineering at the University of Pittsburgh. While Corn attempted to make changes at the agency, it became enmeshed in election-year politics. President Gerald Ford, who succeeded Nixon, was challenged for the Republican presidential nomination by Ronald Reagan, an ardent critic of government regulation. In an appearance in New Hampshire, Ford said that some business executives wanted to “throw OSHA into the ocean.” When the agency announced that it would delay the release of new standards for asbestos, ammonia, arsenic, cotton dust, and lead until after the election, the decision was criticized by leaders of organized labor. Notwithstanding this early criticism, the agency’s budget and staff grew substantially during the 1970s. By 1980, more than 2,900 people worked for OSHA. During Jimmy Carter’s Presidency (1977–81), Eula Bingham, the agency head and a toxicologist, emphasized making employees more aware of their exposure to toxic chemical hazards. She also focused on reducing occupational lead exposure. President Ronald Reagan, who was elected in 1980, sharply reduced OSHA’s budget and staff. Under Thomas Auchter (1981–84), the agency focused on “regulatory relief,” by encouraging voluntary compliance and instituting a less punitive approach to violators. Auchter’s actions were criticized by the AFL–CIO, the American Public Health Association, the Sierra Club, and others who saw “regulatory relief” as an “assault” on worker safety. By 1987, OSHA’s staff had been reduced to 2,200, making enforcement more difficult. There were some significant rule makings during this period, notably the Hazard Communication Standard (issued in 1983 as 29CFR1910.1200), and the Grain Handling Facilities Standard (issued in 1987 as 29CFR1910.272). The Reagan Administration also established a Voluntary Protection Program (VPP), where employers establish a safety-and-health management system. By 2006, there were more than 1,400 VPP worksites in the United States. During George H. W. Bush’s presidency (1989– 93), OSHA issued its Process Safety Management Standard (29CFR1910.119), which was intended to
reduce large-scale industrial accidents. Bush's Labor Secretary, Elizabeth Dole, pledged in August 1990 to take "the most effective steps necessary to address the problem of ergonomic hazards on an industrywide basis." However, the administration would take no action. The agency's budget was increased during the first two years of Bill Clinton's presidency (1993–2001), and the focus of the agency shifted to "stakeholder" satisfaction by providing employers with guidance on compliance. The election of a Republican-dominated Congress led to additional budget cuts, a reduction in staff, and efforts to limit the agency's ability to issue new standards. Among the statutes intended to curb OSHA and other regulatory agencies was the Small Business Regulatory Enforcement Fairness Act of 1996 (Public Law 104–121). This legislation made it easier for small businesses to challenge regulations and decisions by regulatory agencies, mandated that agencies assist small business with regulatory compliance, and provided for Congress to review new regulations and, by joint resolution, overrule those that it disapproved. In 2000, OSHA issued its ergonomics standard, in the face of opposition from the U.S. Chamber of Commerce and the National Association of Manufacturers. In March 2001, Congress voted to rescind the standard, and President George W. Bush signed the measure into law. Under the Bush administration, OSHA issued voluntary guidelines rather than new standards. The agency also emphasized voluntary compliance with OSHA standards and shifted resources away from enforcement. In 2004, the General Accounting Office issued a report questioning the effectiveness of these voluntary programs. In the aftermath of Hurricane Katrina in 2005, OSHA enforced clean-up regulations to help prevent illnesses that may be contracted from mold, fungi, and contaminated water. The agency also provided outreach training in the impacted areas. The current assistant secretary of labor (March 2007) for Occupational Safety and Health is Edwin G. Foulke, Jr., who took office in 2006. The agency's budget for the 2007 fiscal year is $472.4 million, and it employs 2,100 people. The Occupational Safety and Health Administration maintains a
number of advisory groups. These include the National Advisory Committee on Occupational Safety and Health; the Advisory Committee on Construction Safety and Health; the Maritime Advisory Committee on Safety and Health (which was reestablished in July 2006 after its charter had expired on April 1, 2005); and the Federal Advisory Committee on Occupational Safety and Health, which covers federal government agencies. Under the Department of Labor's strategic plan, "OSHA's regulations will continue to be developed or revised under the agency's focused regulatory agenda. DOL will continue to direct inspections and outreach at establishments and industries with the highest injury, illness, and fatality rates and will respond to complaints of serious workplace hazards." Since the formation of OSHA in 1971, the workplace death rate has been reduced by 50 percent, and occupational injury and illness rates have declined by 40 percent. During the same period, the number of employees in the United States increased from 56 million workers to 105 million workers. OSHA has been criticized for the ineffectiveness of its sanctions, especially its criminal sanctions. The agency can pursue criminal charges (a misdemeanor) when a willful violation of its standards results in the death of an employee. With the Democrats retaking control of Congress in the 2006 midterm elections, efforts to revise the Occupational Safety and Health Act to impose stiffer sanctions on OSHA violators are likely to occur. See also bureaucracy; executive agencies. Further Reading Daugherty, Duane. The New OSHA: Blueprints for Effective Training and Written Programs. New York: American Management Association, 1996; Linder, Marc. Void Where Prohibited Revisited: The Trickle-Down Effect of OSHA's At-Will Bathroom-Break Regulation. Iowa City: Fanpihua Press, 2003; McCaffrey, David Patrick. OSHA and the Politics of Health Regulation. New York: Plenum Press, 1982; McGarity, Thomas O., and Sidney A. Shapiro. Workers at Risk: The Failed Promise of the Occupational Safety and Health Administration. Westport, Conn.: Praeger, 1993; Mogensen, Vernon, ed. Worker Safety
Under Siege: Labor, Capital, and the Politics of Workplace Safety in a Deregulated World. Armonk, N.Y.: M.E. Sharpe, 2006. —Jeffrey Kraus
Office of Management and Budget
The Office of Management and Budget (OMB) is the president's chief administrative tool to help him manage the budget and policy process in the White House. It was created by President Richard Nixon in 1970 when he issued an executive order renaming the former Bureau of the Budget the OMB. Thus, the history and functions of the OMB stretch back to an earlier process of institutional development in the White House. The U.S. Constitution gives Congress complete power to tax and spend, and in early U.S. history, the president's control over the budget process was tenuous at best. Federal departments and agencies went directly to Congress for their appropriations. The president submitted departmental spending requests for the year as part of his annual message, but, generally, the separate appropriations committees in Congress worked directly with agency heads without consulting the president. The result was a wasteful and corrupt process in which interest groups and their government allies did as they chose. James K. Polk was the first president to direct the budget process, reviewing all budget requests and trying to enforce some measure of fiscal discipline on executive-branch departments. Presidents Rutherford B. Hayes and Grover Cleveland also attempted to rein in spending by vetoing excessive appropriations, but their efforts did not have a lasting effect. The process as it developed allowed no way to set or balance priorities. Later presidents in the Progressive era recommended a national budget system controlled by the president. President William Howard Taft even created a government commission to advocate for this view. Congress resisted, but rising deficits during World War I and the general Progressive goal of more efficient and scientific management prompted Congress to act, passing the Budget and Accounting Act of 1921. President Warren G. Harding signed the bill into law, beginning a long process of increasing institu-
tionalization and centralization of power in the White House. The Budget and Accounting Act of 1921 gave the president the responsibility to prepare and submit to Congress a single executive budget. To help the president with this new task, the act also created the Bureau of the Budget (BOB), located in the Treasury Department. The president now had legal authority to oversee the allocation of funds in his branch. Agencies had to go through a central entity, BOB, for their requests, not congressional committees. The Budget Bureau evaluated the agency budget requests, adjusting them to meet the president’s goals and combining them into a comprehensive executive budget. It also reviewed the testimony of executive-branch officials and helped coordinate the overall administration budget strategy. Congress had expanded the president’s power at the expense of its own and given the president an institutional tool to exercise that power. The Budget and Accounting Act is considered by many to be the seminal event leading to the much more powerful administrative presidency later in the 20th century. In 1939, Congress passed the Reorganization Act, allowing the president to create by executive order the Executive Office of the President (EOP). Franklin Roosevelt used this power to move several agencies, including the Budget Bureau, under his direct control to help him coordinate and manage the work of the executive branch and create significant policy-planning and development powers in the White House. President Richard Nixon renamed and restructured the Budget Bureau as the Office of Management and Budget in 1970, greatly expanding its power and making it even more responsive to the president’s wishes. Nixon was suspicious of his own bureaucracy, and his reorganization of the principal budgeting agency was an attempt to further centralize power and control in the White House. Nixon ensured this by placing political appointees in control of the organization. The OMB director is one of the president’s close advisers, enjoying the title “assistant to the president.” The OMB performs several functions in the executive branch. The most obvious function connected to its former incarnation as the Bureau of the Budget
is its task of screening and coordinating the budget requests of executive branch departments and agencies. OMB officials examine all department proposals to make sure that they are in line with the president’s program. The office can also resolve disputes among agencies and ultimately put the president’s stamp of approval on the entire executive budget that he submits to Congress. Congress retains its full constitutional powers over the budget process, so it is not obligated to accept the president’s budget proposals, and in particularly conflictual times the president’s budget has been called “dead on arrival.” As presidents have sought greater control over the budget process since the 1970s, this task has become more politicized. Political appointees rather than civil servants make the most important decisions, and this can come at the expense of the “rational” and “neutral” quality the Progressive reformers originally sought. OMB officials have been known to alter or suppress technical analyses when those reports run counter to the administration’s larger policy objectives. The OMB also performs legislative clearance in which the office screens proposed legislation from executive-branch departments and agencies to determine whether it is in accord with the president’s program. At one time, this function was critically important in developing the president’s legislative program. The old Bureau of the Budget solicited ideas from the various departments for new programs that might be included in such key messages as the annual State of the Union Address. As presidents centralized more control in the White House, however, department-generated proposals declined in importance. Expanded White House capabilities enabled presidents to construct their legislative programs within the White House. The OMB still screens legislative proposals from the bureaucracy, but those proposals rarely have a large influence in the president’s larger program. The OMB also recommends whether bills passed by Congress and waiting presidential action should be signed or vetoed. This is known as the enrolled bill process. Typically, the OMB takes legislation passed by Congress and circulates it to all appropriate departments and agencies for advice. Those entities will
make recommendations, accompanied by draft signing or veto statements, which the OMB will summarize. Since the president has only 10 days to sign or veto legislation, this process is performed within five days. The OMB sends the file, along with its own recommendation, to the president. The more unified the advice, the more likely a president is to take it, but the most important player in this process—the entity whose advice the president most often follows—is the OMB. The OMB also engages in what is known as administrative clearance. This is a process in which the OMB evaluates the rules and regulations of executive-branch departments and agencies to see whether they are consistent with the president’s desires. President Ronald Reagan issued executive orders 12291 and 12498, requiring all executive-branch departments and agencies to submit proposed regulations to the OMB for review to make sure they lined up with administration policy. Departments now had to submit draft regulations to the OMB, which returned them with suggestions after consulting with those who would be affected by the proposed regulation. The OMB’s new focus on cost-benefit analysis—analyzing whether new regulations would be cost effective—gave the president a powerful influence over how laws passed by Congress are implemented. In fact, members of Congress have protested that OMB practices violate constitutional procedures that govern the enactment of public policy. Presidents from both parties have chosen to maintain control of the regulatory review process, even as the details have changed. The principal conflict concerning the evolution of the OMB concerns the tension between the office’s professional obligations and its political obligations. As part of its original mandate, the old Bureau of the Budget was designed to help the president facilitate the executive budget process—to give him greater ability to coordinate executive-branch policy and budget requests. This task was performed by civil servants who were supposed to manage this process in a professional and objective fashion. As the presidency acquired more administrative powers, however, especially in the era of divided government when presidents often faced a Congress controlled
by the opposition party, chief executives had an incentive to use their institutional tools to control policy development and implementation. As the OMB increased its contingent of political appointees who were answerable to the president, the office itself became a key tool and resource for the president to pursue political objectives. This tool has enabled presidents to further their agenda often without congressional participation. Critics argue that this development has come at the cost of truly independent analysis. See also executive agencies. Further Reading Arnold, Peri. Making the Managerial Presidency. Princeton, N.J.: Princeton University Press, 1986; Stockman, David. The Triumph of Politics: How the Reagan Revolution Failed. New York: Harper & Row, 1986; Warshaw, Shirley Anne. The Domestic Presidency: Policy Making in the White House. Boston: Allyn and Bacon, 1997. —David A. Crockett
pardon (executive)
Presidents have very few powers that they can exercise independently of Congress and that are immune from court review. Most of the president's constitutional powers are shared with Congress, especially the Senate, which has advice and consent authority over many presidential appointments. The system of separation of powers, the sharing and fragmentation of political authority, and the checks and balances embedded in the U.S. Constitution create a system of government in which the branches must work and cooperate with each other to make policy legitimately. One of the few exceptions to this is the independent and exclusive power the president has: the power to pardon. The president's pardon power is grounded in Article II, Section 2 of the U.S. Constitution, which reads in part that "he shall have the Power to Grant Reprieves and Pardons for Offenses against the United States, except in Cases of Impeachment." Thus, the only limiting factor in the granting of a presidential pardon can be found in cases of impeachment. In all other cases, the pres-
ident has clear and absolute authority to grant pardons. While there are no constitutional or legal limits on the president’s pardon power, there are certain political limits that most presidents would be hard pressed to challenge. Pardons can be controversial, and a president who, early in a term, grants a particularly unpopular pardon may face a voter backlash against himself or the party. That is why most controversial pardons come during the last days of a presidency when there is unlikely to be a significant political backlash. The pardon power has roots in the concept of executive clemency that came to the colonies through English practice and law. In the seventh century, during the time of King Ine of Wessex and shortly thereafter in the time of Ethelred, there developed a tradition wherein the king could grant pardons after conviction. The law allowed a convict to “flee to the king” to give him back his life or freedom, and while rarely exercised, this power became part of the king’s legal authority. King Cnut expanded the pardon power, issuing a proclamation offering mercy to those who would swear to renounce lawlessness and observe the law. The codes of William the Conqueror and his son Henry I further extended this power but with a twist. Henry I used the pardon power in exchange for compensation by the guilty party. This led to a backlash as accusations of abuse of power followed Henry I’s unseemly efforts. Pardons were initially intended as an act of grace, allowing the king to employ the royal prerogative of mercy to those deemed deserving, but even after the time of Henry I, kings often used the pardon for personal, financial, or political gain. Pardons were often gained at a price, sometimes in exchange for agreeing to enter military service. King Edward I in fact openly offered pardons virtually to anyone willing to enter the service in times of war. The nascent Parliament began to complain about the abuse of the pardon power and finally, in 1389, enacted a statute that prevented issuing pardons for certain serious crimes. It was a bold effort but proved unsuccessful as the king routinely ignored the statute. During the next several decades, Parliament tried to
limit the scope of the pardon power, but kings steadfastly refused to succumb. It was not until the late 17th century and the controversial case of the earl of Danby that real inroads were made to limit the authority of the king to grant pardons. Thomas Osborne, earl of Danby and lord high treasurer of England from 1673 to 1679, was to be impeached for allowing the English minister in the court of Versailles to make decisions beyond his authority. King Charles II came to Danby’s rescue, claiming that Danby was doing the bidding of the king. Questions were raised about the king’s version of events, and the Parliament continued to press the issue. The king issued a pardon to Danby, and Parliament faced the question: Could a pardon stop an impeachment inquiry? Amid the warnings of a parliamentary revolt, the king withdrew the pardon, and Danby was sentenced to the Tower of London. Emboldened by the Danby affair, Parliament, in 1700, issued the Act of Settlement which declared that no pardon shall be issued in cases of impeachment. This would greatly influence the framers of the U.S. Constitution, as they too included such a provision in the Constitution. During the prerevolutionary war era in the colonies, the Crown delegated royal authority to the governors who exercised power in the Crown’s name. This included the pardon power. In time, some state constitutions gave the pardon power to a general court or some other judicial body. By the time of the Revolutionary War, virtually all states had in their constitutions some provision for the pardon power. When the framers met in Philadelphia in 1787 to write a new Constitution for the nation, they were unsure of what to do about the power to pardon. Neither the Virginia Plan nor the New Jersey Plan (the two leading plans for a new constitution) mentioned the pardon power, but delegate Charles Pinckney of South Carolina introduced a proposal for the executive to have the power to pardon, except in cases of impeachment. With very little debate, the convention accepted Pinckney’s proposal. Few controversial pardons were granted prior to the Civil War, but some general amnesties caused some concern. It was not until the Civil War that
deep controversy engulfed the pardon and amnesty arenas. President Abraham Lincoln granted amnesty to those who had "participated in the existing rebellion." This caused a backlash, as many felt that such treason should be punished, not "rewarded." In spite of several early legal and political challenges, the pardon power has remained intact. In 1833, the U.S. Supreme Court, in United States v. Wilson (7 Pet. 150, 161, 1833), defined the pardon power as an "act of grace" that was presidential in scope. In 1855, Ex parte William Wells (59 U.S. 307, 311, 1855) declared that a pardon implies "forgiveness, release, remission," but in 1927, the Court in Biddle v. Perovich (274 U.S. 480, 486, 1927) regarded a pardon as "the determination of the ultimate authority that the public welfare will be better served by inflicting less than what the judgment fixed." This shift from grace and forgiveness to public policy and the public good had little impact on the president's power to pardon, but it marked a shift in the understanding of the role and the purpose of the pardon power. Normally, there are five stages to the pardon process. Each stage is supervised by a pardon attorney from the Office of the Pardon Attorney, a part of the Justice Department. This five-stage process includes (1) application; (2) investigation; (3) preparation; (4) consideration and action; and (5) notification. Sometimes this process is short-circuited, as when the president has a particularly strong interest in a case. Ultimately, it is up to the president to grant or deny a pardon, and recommendations from the pardon attorney are not in any way binding on a president. Since 1900, roughly 75,000 requests for pardons and commutations of sentence have been processed by the Office of the Pardon Attorney. Slightly more than 20,000 of these requests have been granted. While most pardons do not attract media or public attention, some pardons have been particularly controversial. President Gerald Ford's pardon of former President Richard Nixon in 1974 was particularly controversial, as Nixon had only recently resigned from office in the heat of the Watergate scandal, and the House Judiciary Committee had held hearings on impeachment and voted to recommend to the full House the impeachment
of the president. Nixon resigned the presidency on August 9, 1974, just ahead of a full House vote on impeachment, and his hand-picked successor, President Ford, provoked a storm of controversy when a few weeks later, on September 8, 1974, he gave Nixon a "full, free, and absolute pardon . . . for all offenses against the United States . . . that he has committed, or may have committed or taken part in during the period from January 20, 1969, through August 9, 1974 [the time of his presidency]." Some critics accused Ford and Nixon of a preresignation deal wherein Nixon would leave office and then Ford would grant him a pardon, but no evidence has surfaced to lend credence to that charge. One of the more controversial pardons in history was the Christmas Eve pardon by President George H. W. Bush of former secretary of defense Caspar Weinberger. During the presidency of Ronald Reagan, a scandal known as the Iran–contra affair rocked the nation. Members of the Reagan administration had tried to sell arms to Iran in exchange for the release of U.S. hostages held by Iranian-backed groups in Lebanon. This was a clear violation of U.S. law, as Iran was officially on the government's list of terrorist nations. The swaps took place, and the profits from the arms deal went, again illegally, to the rebels fighting the Marxist government in Nicaragua. When this scandal made news, indictments and investigations followed. One of the most interesting of the court cases concerned former secretary Weinberger, who kept diaries of the events under investigation. Weinberger's diaries were to be used in his defense, and, according to sources close to the case, they contained entries revealing that then vice president George H. W. Bush may have committed perjury. As the Weinberger case came to trial, President Bush issued a December 24, 1992, pardon to Weinberger. Bush was set to leave office a few weeks later. Critics argued that in issuing the pardon, Bush may in effect have been indirectly pardoning himself, since, had the Weinberger diaries implicated him, he might well have faced criminal charges. At the end of his presidency, President Bill Clinton also granted several controversial pardons, including the pardon of white-collar fugitive Marc
Rich. Charged with an illegal oil-pricing scheme, Rich went into hiding in Switzerland to avoid U.S. prosecution. Marc Rich’s former wife was a prominent Democratic Party fund raiser who lobbied Clinton for the pardon. The president granted it, circumventing the normal Justice Department pardon process and acting on his own (which as president he had full authority to do). A political firestorm followed, but in spite of the controversy, the pardon, as distasteful as it may have been, was perfectly legal. These cases serve as examples of the independent and unchecked power that the president possesses in the pardon arena. While a pardon may be controversial and may strike many as inappropriate, it is one of the few unchecked and absolute (except in cases of impeachment) powers possessed by the president and by the president alone. Neither the courts nor Congress may interfere with this power, and the only way to change this is to amend the Constitution. Throughout the nation’s history, there have been only a handful of truly controversial and suspect pardons. Generally, presidents have been true to the intent of the framers in granting pardons, and while the controversial pardons attract a great deal of media and public attention, the overwhelming majority of pardons granted conform in style, process, and intent to the wishes of the framers of the U.S. Constitution. Though particular pardons may be politically controversial and perhaps even politically as well as legally unwise, the president remains on firm constitutional ground in granting them. In this, the president has no rivals, and the courts have given the president wide-ranging powers in this realm. There are few real limits on the president’s power to grant pardons, and short of amending the Constitution, there is likely to be little that can or will be done to limit the president’s authority in this area. Further Reading Adler, David Gray. “The President’s Pardon Power.” In Inventing the American Presidency, edited by Thomas E. Cronin. Lawrence: University Press of Kansas, 1989; Genovese, Michael A., and Kristine Almquist. “The Pardon Power Under Clinton: Tested but Intact.” In The Presidency and the Law: The Clinton Legacy, edited by David Gray Adler
and Michael A. Genovese. Lawrence: University Press of Kansas, 2001. —Michael A. Genovese
presidency
Article II of the U.S. Constitution created the first strong executive in the history of democratic republics. Indeed, the founding fathers had no historic precedent for a strong executive working in any type of democracy, so this creation was entirely their own. They could not borrow from history, but they could look at what in their experience did not work well, the British king, and at what did work well, the few strong governors in the colonies. After the excesses of the state legislatures in the decade under the Articles of Confederation, many younger founding fathers began to understand that executives were not the only institutions prone to tyranny. When they thought of separating the powers of government into three branches to secure liberty, they knew that they needed a strong executive to counter a strong legislature. Toward this end, Article II established a presidency that was relatively weak, to appease the political leanings of that generation of Americans, but one with vague powers and much room to grow, so that strength would not reside in the legislature alone. The presidency is a one-person office, which was not necessarily a given. During the Constitutional Convention, a number of possibilities were considered, including a plural executive, which would have included members from each region of the nation. The convention also considered requiring an executive council to approve decisions, following the example of the British monarchy. The founders, however, decided that in the interest of improved efficiency and accountability, a single presidency was a better idea. In this way, one person makes the decisions, and, thus, whatever the outcome, one person is credited or blamed for the choice. Presidents were to be chosen by an electoral college for a term of four years from a national constituency. In this, the presidency is unique in the United States. No other elected official has as his or her constituents every citizen. The electoral college, one of the few parts of the Constitution that escaped serious criticism in the Federalist–Anti-Federalist debate, was to
be selected in whichever manner the state legislatures chose. Originally, presidents were not limited in the number of terms for which they could be elected, but that was changed by the Twenty-second Amendment, ratified in 1951, which limited presidents to two terms. Each president must be at least 35 years old, a resident of the United States for 14 years, and a natural-born citizen. These are the only de jure requirements for the president, but de facto, there have been more. To date, every president has been male and white. Every president has been a Protestant, except John F. Kennedy, who was Catholic. They have all been married except for James Buchanan, who never married, and Grover Cleveland, who married while in office, and they have all been career public servants. The only president in the 20th century who was not previously elected to some office such as senator or governor was Dwight D. Eisenhower, who had served as a general of the army during World War II. Presidential elections are grand national events that involve every state and many localities. Elections have been decided in a small number of states, sometimes coming down to specific counties. They are run by local governments and focus the entire nation’s energy on one political outcome. In many ways, choosing the president is the way the nation comes together. Because large coalitions must be built to select an executive, people from all over the country talk to one another. The authors of the Federalist defended Article II in essays 67 through 77 and argued that the president was not similar to the powerful monarch whom the colonies had just thrown off and was not even all that similar to the strongest governor in the states at the time, New York’s governor. They argued that the president had four ingredients that were necessary for a strong-enough executive to check the powers of the legislative branch. First, the presidency would have unity, meaning that there was no worry about accountability being shared among more than one official. As discussed above, it was important to have one person as president so that all credit and blame could be laid at one person’s feet. Second, the president’s term was four years long, which was much longer than the terms of many of the states’ elected officials at the time. The Federalist argued that the longer term contributed to the president’s ability to get things done, while multiple reelection potential kept the
president honest. Basically, the longer he or she is in office, the more the president can deliver on promises made during the campaign. The more often he or she can run for office, the more the president is checked by the democratic process. The third ingredient was adequate provision for support to do the job. Congress is not allowed to reduce or increase the pay of the president while he or she is in office, thus keeping the principle of separation of powers intact. As Congress cannot offer a pay raise or threaten a pay cut, the president is independent. Finally, the president was given competent powers to secure the national interest. First, he or she is commander in chief of the armed forces, which ensures that the military is led by a civilian and protects against military coups. The president also has the power to obtain opinions from the department heads in writing, which allows him or her to implement laws passed by the legislature. He or she also has the pardon power, which has been used to heal the nation in times of trouble. The president is in charge of negotiating treaties so that there is one clear diplomat for the United States. Finally, the president has the power of appointment, which allows him or her to make decisions about people that are not open to lengthy debate. The Senate confirms these appointments and ratifies the treaties, but for the most part the president gets to lead the agenda in making these types of decisions. The president is given power from the vesting clause in Article II of the Constitution: “The executive power shall be vested in a President of the United States.” This phrasing is deliberately vague and ambiguous. As a result, presidents have over time expanded the executive power to mean much more than it did under President George Washington. The difference between this vesting clause and that of Article I, where the founders explicitly state that Congress receives only the powers “herein granted,” suggests that the presidency was granted latitude to evolve over time. The founders really were not sure what they wanted and thus created a substitute for a king that was palatable to the people. They drew their ideas for the executive branch from English commentators, political philosophers, historians, colonial experience, and revolutionary experience. The idea of a presidency is a
dynamic one that is still evolving today. The presidency, while structurally similar to that of the early nation, is very different in practice given today’s higher demands and expectations. Another power that has evolved over time is executive prerogative, a concept that originates with John Locke. Prerogative is the power to take extraordinary actions without explicit legal authorization in emergencies. For example, in the post-9/11 political world, President George W. Bush made some expansive decisions without specific legal authorization; these could be attributed to prerogative. Exactly what power the president commands has been debated by scholars and practitioners alike for two centuries. Alexander Hamilton argued that all executive powers belonged to the presidency, given the vesting clause, while Thomas Jefferson argued that anything outside the Constitution was not allowed. Some have argued that the power comes from the United States being a world power. Basically, the stronger the nation is internationally, the more power its leader accrues. As the United States transformed in the 20th century from a weak nation to a world superpower, something was needed to counteract the localist tendencies of the Congress. Into that vacuum came the president. On the other hand, the United States is an economic power, and some have argued that the president’s power depends on his or her ability to command vast sums of money. A famous argument about the president’s power comes from Richard Neustadt, who claims that the power of the president is the power to persuade. Neustadt argues that the president operates in a shared-powers institutional arrangement with men and women whose political futures do not necessarily depend on the president. Consequently, he or she has to persuade these people to do what he or she wants. Stephen Skowronek argues instead that the president’s power depends on his or her location in political time. If a president is opposed to a regime that is vulnerable, then he or she may reconstruct a whole new regime. Such a president is given vast amounts of power, as were Thomas Jefferson, Andrew Jackson, Abraham Lincoln, and Franklin D. Roosevelt. On the other hand, if a president is committed to a regime that is vulnerable, he or she has the absolute nadir of presidential
power, as was the case with John Quincy Adams, Franklin Pierce, Herbert Hoover, and Jimmy Carter. Thus, even the most persuasive president, if committed to a vulnerable regime, would not have much power. Another argument about presidential power is that the president has garnered vast power from connecting directly with the country’s people. Theodore Lowi argues that democracy has come to be seen as what the president can deliver, so the president appeals directly to the people. If they are with the president, then he or she can do nearly anything he or she wants. Once they desert the president, he or she is alone and watches his or her power shrivel. Thus, the power of a president depends on his or her ability to communicate with a mass audience and on the events of his or her time not moving public opinion too far against him or her. The president is very influential throughout the constitutional system of U.S. government. He or she has legislative power as well as executive power. The president can veto legislation that he or she does not like. The president gives a State of the Union address every year that helps set the legislative agenda for Congress. The president can get Congress to pass legislation of which it is unsure by making direct appeals to the citizens. In many ways, the presidency in its new powerful guise upsets the separation-of-powers principles set forth at the founding. The president is also able to claim executive prerogative very easily depending on the situation; indeed, when is the U.S. government not in an extraordinary situation? Whenever any threat comes along, the president can claim vast power to protect the nation, power that affects the way people live. Presidents can also issue executive orders that have the power of law without congressional input. Whenever the president wants to accomplish something, he or she can issue a directive that the judiciary generally upholds. Thus, the president can in effect make law as well as enforce it. The president is the focus of democracy and is at the center of the U.S. national government. He or she is the national leader, and no other elected official rivals his or her civic leadership while he or she is president. The president is also the greatest potential threat to the democratic
features of government since he or she has the most potential power. Further Reading Fatovic, Clement. “Constitutionalism and Presidential Prerogative: Jeffersonian and Hamiltonian Perspectives,” American Journal of Political Science 48, no. 3 (July 2004): 429–444; Genovese, Michael A. The Power of the American Presidency 1789–2000. New York: Oxford University Press, 2001; Hamilton, Alexander, John Jay, and James Madison. The Federalist. Edited by George W. Carey and James McClellan. Indianapolis, Ind.: Liberty Fund, 2001; Lowi, Theodore. The Personal President: Power Invested, Promise Unfulfilled. Ithaca, N.Y.: Cornell University Press, 1985; Mayer, Kenneth R. “Executive Orders and Presidential Power,” The Journal of Politics 61, no. 2 (May 1999): 445–466; McDonald, Forrest. The American Presidency: An Intellectual History. Lawrence: University Press of Kansas, 1994; Milkis, Sidney M., and Michael Nelson. The American Presidency: Origins and Development, 1776–1990. Washington, D.C.: Congressional Quarterly Press, 1990; Neustadt, Richard E. Presidential Power and the Modern Presidents: The Politics of Leadership From Roosevelt to Reagan. New York: The Free Press, 1990; Skowronek, Stephen. The Politics Presidents Make: Leadership from John Adams to Bill Clinton. Cambridge, Mass.: The Belknap Press of Harvard University, 1997; Spitzer, Robert J. The Presidential Veto: Touchstone of the American Presidency. Albany: State University of New York Press, 1988; Weisberg, Herbert F. “Partisanship and Incumbency in Presidential Elections,” Political Behavior 24, no. 4, Special issue: Parties and Partisanship, Part Three (December 2002): 339–360; Young, James Sterling. “Power and Purpose in the Politics Presidents Make,” Polity 27, no. 3 (Spring 1995): 509–516. —Leah A. Murray
presidential corruption
No political system is scandal-proof. Regardless of time, place, type of political system, culture, or legal environment, the problem of political corruption is as persistent as it is vexing. Where there is power, where
there are resources, where the temptations seem worth the risk, the threat of corruption exists. No system has mastered the problem of eliminating corruption, and, it is safe to say, none could. But some systems seem more prone to corruption than others. In the United States, the problem of presidential corruption highlights how the structural design of the U.S. system of government shapes leadership behavior. Much depends on how one defines corruption and on the political and cultural biases one brings to the subject, but regardless of such factors, one thing remains clear: political corruption and abuse of power are real, universal, and problematic. They exist in the United States and in all political systems. The founders of the U.S. Constitution were well aware of the persistence of political corruption and knew that systematic corruption, or what Thomas Jefferson referred to as “a long train of abuses,” could undermine stability or even lead to revolution. How did the framers view corruption, and what were the means that they created to combat corruption and the abuse of power? In the prerevolutionary period, two schools of thought prevailed, with the influence of civic republicanism dominating and classical liberalism serving as a secondary influence. After the revolution, in the period of the writing of the Constitution, these traditions switched places, with liberalism becoming the dominant perspective and civic republicanism slipping into a secondary position. Thus, the Constitution is liberal first and foremost and is republican only in a secondary sense. The civic republican influence, so prevalent in the pre-Revolutionary War period, has a long and distinguished pedigree. Its goal is to build civic virtue, to develop the character of citizens and leaders alike, and to achieve (or at least approach) human excellence. The goal or end of government was to promote virtue. In the prerevolutionary period, civic virtue emerged as a prerequisite for the development of republican government. In such a system, five interrelated factors become important: virtue, community, concern for the common good, the public interest, and a robust view of the citizen. Virtue is the compass, a guide for public action.
Community, in the communitarian sense, leads to unity and social cohesion. This leads to a concern for the common or community good, which requires a heightened concern for pursuing the public—or common—interest over the private individual interest. This in turn requires an invigorated, civic-minded citizenry. In such a system, corruption was diminished because one could teach virtue to the leader. The assumption that virtue could be taught, that virtue would serve a limiting function for the leader, and that virtue would produce good behavior animates the civic republican tradition where questions of corruption are concerned. After flirting with a civic republicanism that is grounded in virtue, citizenship, and community, the founders made a slow shift toward liberalism. By the time of the writing of the Constitution, republicanism gave way to classical liberalism with its concern for liberty, property, and the pursuit of self-interest. It was a shift in approach from government as “freedom to” accomplish community goals to “freedom from” government intrusion. Liberalism had emerged, not to replace republicanism but to rise above it in influence and importance. Thus, there were always dual influences, with liberalism exerting dominance as the fervor of revolution gave way to the responsibilities of governing the new nation. As colonial America matured, and as issues of individual liberty, commerce, and property rose in importance, the ideas of John Locke found a fertile soil in which to take root and grow, especially among those brought together in Philadelphia to revise the Articles of Confederation. In Philadelphia, they discarded the Articles and wrote an entirely new framework for government, one in which the drive for civic republicanism gave way to the more powerful forces of classical liberalism. If the founders were suspicious of (or dismissed as unattainable) the central tenets of classical republicanism (virtue and community) and embraced a new liberal basis for government, yet were still influenced somewhat by republicanism as a subordinate philosophy, how were they to reconcile these two competing and contradictory foundations of politics? The answer was found in the “new science of politics.”
Since virtue was not enough and democracy was somewhat dangerous, but since embracing an autocratic model was unacceptable, the framers were forced to conceptualize a new science that would govern politics. The dramatic break with the past reflected a new, even radical reformulation of the foundations of government, one resting on interest and virtue, but not in equal proportions. Clearly interest, at least in the minds of the framers, preceded virtue, and while Gordon Wood, John Diggins, and J. G. A. Pocock argue that by 1787, “the decline of virtue” meant that the new republic created a system “which did not require a virtuous people for its sustenance,” it is clear that they overstate the case. The framers did not totally reject virtue; they merely felt that people could not rely on virtue to triumph over interests. Therefore, interest became, in their minds, the dominant force that animated human behavior, and virtue was a secondary though not unimportant force. The new science of politics would not rely on the hope that virtue would triumph. The framers were too “realistic” for such an illusion. The framers were less concerned with the way men (and at that time they meant males) ought to live than with how they did live. They were less concerned with shaping character than with dealing with man “as he actually is.” What then is this “new science of politics,” and how did it affect presidential corruption? Grounded in an empiricism that saw itself as based in a realistic, not a utopian, conception of human nature, the framers assumed that man was motivated not by ethics or virtue but by self-interest. To form a government with such baggage, the framers drew not on the language of ethics but on the language of natural science. A balance or equilibrium kept order; appeals to justice or virtue did not. Thus, interest must counterbalance interest, ambition must check ambition, factions must counteract factions, and power must meet power. This was a mechanical, architectural, or structural method. In the new science of politics, the framers saw a rational ordering in the universe that could be transferred to the world of politics. As John Adams notes, governments could be “erected on the simple principles of nature,” and Madison wrote in similar Newtonian terms of constructing a government so
“that its several constituent parts may, by their mutual relations, be the means of keeping each other in their proper places.” If the people were not expected to be virtuous, neither were their rulers. If the ruler could not be trained in virtue, what prevented abuse of power and corruption? To answer this question, the framers relied on the mechanical devices of Newtonian physics, their new science of politics. When the framers of the Constitution met in Philadelphia to, among other things, invent a presidency, they harbored no illusions about changing human nature so as to produce a virtuous ruler. Between the time of revolutionary fervor and the late 1780s, a shift in outlook had occurred: “The ‘spirit of ’76,’ the pure flame of freedom and independence, had been replaced by the structures of political control.” Assuming that “a human being was an atom of self-interest,” how could a government be formed that both had the power to order events and yet did not threaten individual liberty? How could one energize the executive and yet, in the absence of virtue, hold the executive in check? How could one control the abuse of power and executive corruption? Following the logic of the new science of politics, the framers embraced an essentially structural or architectural model that set power against power and ambition against ambition. If greed or self-interest was to be expected of mere mortals, why not unleash man’s material proclivities and allow self-interest to prevail, even demand that it do so? In this way, a rough balance of powers could be achieved, an equilibrium that, as Alexander Hamilton notes in the Federalist, would prevent “the extremes of tyranny and anarchy” that had plagued republics of the past. These ancient republics did not understand the new science of politics, which revealed the answer to achieving both responsibility and energy in the executive. The framers knew that, as Madison would warn, “Enlightened statesmen will not always be at the helm,” and in the absence of virtuous rulers, only a properly constructed state could control the ambitions of power-hungry rulers. While elections were to serve as one control, Madison knew that “experience has taught mankind the necessity of auxiliary precautions.” For the framers, the primary precaution was found in the way the three branches of government were arranged so as to
allow ambition to emerge but to set ambition against ambition. “The great security against a gradual concentration of the several powers in the same department consists in giving to those who administer each department the necessary constitutional means and personal motives to resist encroachments of the others.” If the president was to be able to stand up to and check threats from the legislature, the office had to be independent of the Congress in selection: thus the electoral college, through which neither the people (fears of democracy ran strong among the framers) nor the legislature could dominate the executive. Mere “parchment barriers” were not enough. The president had to have his own independent source of power: independence from the legislature in both selection and constitutional authority gave the president (and, conversely, Congress) the means to stand firm and check potential abuses by the other branches. The separation of powers was thus a controlling element, but it was also a moderating force. Since cooperation was necessary if politics were to be seen as legitimate, the overlapping and sharing of powers meant that, as Madison noted, “neither party being able to consummate its will without the concurrence of the other, there is a necessity on both to consult and to accommodate.” Realizing that enlightened statesmen would not always rule, the framers set up a separation-of-powers, checks-and-balances system that encourages the emergence of institutional self-interest, where ambition was made to counteract ambition and law was the basis of action. This institutional equilibrium was built into the very structure of the government and was thus a mechanical means of controlling power. The framers, rather than attempt to elevate man, assumed that humans were corrupt. They built a system of government around that assumption. Even with the best of intentions and training, people were people and needed to be controlled. By setting up institution against institution (but ironically requiring each institution to cooperate and/or work with other institutions), the framers gave in to their jaundiced view of human nature and decided to check vice with vice, not with virtue. The system therefore
could create “energy” in the executive as a means of checking legislative energy. The framers did not abandon the rhetoric of civic republicanism. In fact, several of their references speak directly to virtue, as was the case in Federalist 57, where James Madison wrote that the “aim of every political constitution is, or ought to be, first to obtain for rulers men who possess most wisdom to discern, and most virtue to pursue, the common good of the society; and in the next place, to take the most effectual precautions for keeping them virtuous whilst they continue to hold their public trust.” But republican rhetoric aside, the framers then went on to develop a structural, not character-based, means for reducing the threat of abuse and corruption. After all, as Federalist 15 reminds us: “why has government been instituted at all? Because the passions of men will not conform to the dictates of reason and justice, without constraint.” These “passions” and interests are rooted in human nature and are immutable. In Federalist 10, Madison presents a negative view of man’s nature, reminding us that “the latent causes of faction are sown in the nature of man” and that “the most common and durable source of factions has been the various and unequal distribution of property.” In this, the framers were influenced heavily by Montesquieu, who wrote that “Every man who possesses power is driven to abuse it; he goes forward until he discovers the limits.” In like fashion, Madison, in Federalist 51, reminds us that “If men were angels, no government would be necessary. If angels were to govern men, neither external nor internal controls on government would be necessary. In framing a government which is to be administered by men over men, the great difficulty lies in this: you must first enable the government to control the governed; and in the next place oblige it to control itself.” Of course, Madison and the other framers realized that some virtue was necessary, lest the system collapse under the weight of constant power plays and pettiness. But they were convinced that one could not rely on virtue. Thus, “auxiliary precautions,” structural devices, were paramount. Paramount, but
insufficient. Constitutional structures were not infallible, and human weaknesses could undermine mechanical devices. What forces held the executive in check? For Madison, (1) “some virtue” was necessary, though of only limited utility; (2) “parchment barriers” (Federalist 48) were useful (but insufficient); and (3) “auxiliary precautions” (Federalist 51) were necessary and ultimately the best hope. While no panacea could be invented, Madison saw the interaction of these three forces (primarily the third) as diminishing the opportunities for corrupt leaders to abuse power. Madison acknowledged that “there is a degree of depravity in mankind which requires a certain degree of circumspection and distrust,” but he insisted that “there are other qualities in human nature, which justify a certain portion of esteem and confidence,” and he wrote that “republican government presupposes the existence of these qualities in a higher degree than any other form.” “Were the pictures,” he noted, “which have been drawn by the political jealousy of some among us, faithful likenesses of the human character, the inference would be that there is not sufficient virtue among men for self-government, and that nothing less than the chains of despotism can restrain them from destroying and devouring one another.” Likewise, Alexander Hamilton wrote in Federalist 75 that “the history of human conduct does not warrant” an “exalted opinion of human virtue” and added that “the supposition of universal venality in human nature is little less an error in political reasoning than the supposition of universal rectitude. The institution of delegated power implies that there is a portion of virtue and honor among mankind, which may be a reasonable foundation of confidence.” How well has this model worked in practice? Historically, only five presidencies have been considered highly corrupt (those of Ulysses S. Grant, Warren G. Harding, Richard M. Nixon, Ronald Reagan, and Bill Clinton), and none of these presidents so jeopardized the nation that there was a threat of collapse or revolt. In effect, the efforts of the framers in combating presidential corruption have been comparatively successful.
Further Reading Berns, Walter. Making Patriots. Chicago: University of Chicago Press, 2001; Dunn, Charles W. The Scarlet Thread of Scandal: Morality and the American Presidency. Lanham, Md.: Rowman & Littlefield, 2000; Hamilton, Alexander, John Jay, and James Madison. The Federalist. Edited by George W. Carey and James McClellan. Indianapolis, Ind.: Liberty Fund, 2001. —Michael A. Genovese
presidential election
The method of selecting the president was debated at the Constitutional Convention, with the major alternatives being direct popular election, selection by Congress, and the electoral college system. Direct popular election was eliminated because the founders were afraid that the public would be easily fooled by opportunistic politicians. Direct popular election would also have disadvantaged the smaller states and the southern states, whose large enslaved populations could not vote. The founders eliminated selection by Congress on the grounds that an executive selected by Congress would not be properly independent of the legislature. Furthermore, presidential selection might be so divisive an issue for members of Congress that it would interfere with other business. The electoral college was more complicated, but it was thought to be less susceptible to corruption and would preserve the independence of the three major branches of government. The founders believed that such an indirect method of selection would lead to both higher-quality candidates and a more thoughtful evaluation of the strengths and the weaknesses of the candidates. Article II of the U.S. Constitution lays out the electoral college system. Each state legislature determines how to select a number of electors equal to the state’s total number of senators and representatives. The electors meet in their respective states, and each votes for two people, at least one of whom is not from the elector’s own state. The votes are certified and sent to the president of the Senate, who opens the certificates and counts the votes before members of the Senate and the House of Representatives. The majority winner is president. If no candidate receives a majority of
electoral college votes, the decision goes to the House of Representatives. Each state delegation in the House is allowed to cast one vote. The person receiving the most votes is president, and the first runner-up is vice president. The Senate serves as a tiebreaker. There are currently 538 electoral college votes (equal in number to 100 senators, 435 representatives, and three votes for the District of Columbia), so a candidate for president needs 270 votes to win. The distribution of votes across states shifts after each census since electoral votes are apportioned to states according to their representation in Congress, which depends upon their population. The vast majority of states have a winner-takes-all method of assigning electors to candidates; the presidential candidate who wins the most votes in a state wins all of the electoral college votes of that state.
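The winner-takes-all allocation and the 270-vote majority threshold can be illustrated with a brief, purely hypothetical sketch in Python (the state names, apportionments, and statewide winners below are invented for illustration and are not drawn from any actual election):

# Python sketch of winner-takes-all electoral vote counting (hypothetical data)

electoral_votes = {"State A": 55, "State B": 38, "State C": 29, "State D": 3}   # hypothetical apportionment
state_winner = {"State A": "Candidate X", "State B": "Candidate Y",
                "State C": "Candidate X", "State D": "Candidate Y"}             # hypothetical statewide winners

def tally(electoral_votes, state_winner):
    """Winner-takes-all: the statewide popular-vote winner receives every electoral vote of that state."""
    totals = {}
    for state, votes in electoral_votes.items():
        winner = state_winner[state]
        totals[winner] = totals.get(winner, 0) + votes
    return totals

totals = tally(electoral_votes, state_winner)
majority_needed = sum(electoral_votes.values()) // 2 + 1   # with the actual total of 538 votes, this threshold is 270
for candidate, votes in totals.items():
    status = "majority reached" if votes >= majority_needed else "short of a majority"
    print(candidate, votes, status)

Because each candidate's total is built state by state, a candidate can reach the majority threshold while losing the nationwide popular vote, which is the scenario described in the historical cases that follow.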
In 1796, John Adams and Thomas Jefferson were candidates for president for the Federalist and Democratic-Republican Parties. John Adams received the largest number of electoral votes and was declared president, while his opponent Thomas Jefferson received more votes than Adams’s own vice presidential running mate and was declared vice president. In the election of 1800, Thomas Jefferson and his vice presidential candidate Aaron Burr received the same number of electoral college votes, and the election was thrown into the House of Representatives. The Twelfth Amendment was then passed so that electors now choose the president and vice president on separate ballots. The election of 1824 is the only election thus far where no presidential candidate received a majority of electoral college votes. Though Andrew Jackson won the most popular votes (41.3 percent) in a four-way race, John Quincy Adams (who received 31 percent of the popular vote) was ultimately chosen to be president by the House of Representatives. In three other cases, the winner of the popular vote did not become president. In 1876, Republican Rutherford B. Hayes became president over the popular vote winner and Democratic candidate Samuel Tilden when several states sent in double sets of electoral college returns and a bipartisan commission awarded the disputed votes to Hayes. In 1888, Republican Benjamin Harrison was declared president after winning the electoral college vote even though his Democratic opponent Grover Cleveland narrowly won the popular vote. The most recent case occurred in the presidential election of 2000 when Republican candidate George W. Bush and Democratic candidate Al Gore both needed the 25 electoral college votes from the state of Florida to win the presidency. George W. Bush’s margin of victory in the state of Florida stood at a mere 325 votes after a machine recount of ballots in all 67 Florida counties. The Gore campaign asked for a manual recount in several counties. Some voters claimed that the voting machines were so clogged that they could not fully punch out the appropriate hole in their ballot. Other voters claimed that they had been so confused by the design of the ballot (the so-called butterfly ballot) that they mistakenly voted for Reform Party candidate Pat Buchanan or punched their ballot twice. When Florida’s Republican secretary of state used the machine-tabulated vote count to certify the vote and assign Florida’s electoral college votes to Bush, the Gore team took their case to the Florida Supreme Court. The Florida Supreme Court ordered a manual recount of all ballots where no vote for president had been recorded. The Bush team then appealed to the U.S. Supreme Court, which ordered a stop to the manual recount. Bush won the electoral college vote by a margin of 271 to 267; he lost the popular vote by a half-million votes. Throughout our nation’s history, there have been movements to abolish the electoral college. Critics maintain that the presidential selection system should not allow for the selection of a candidate who did not win a majority of the popular vote. They argue that rural states are overrepresented since every state has at least three electoral college votes no matter what its population. Third parties are discouraged because third-party candidates have a hard time winning any electoral college votes in a winner-takes-all system. Finally, critics have argued that the electoral college discourages voter turnout. If a state is overwhelmingly Democratic or Republican, voters in the minority party have almost no chance of affecting the outcome. On the other hand, some like the fact that
the electoral college system encourages a two-party system because it forces third parties to join forces with one of the major parties, widening their appeal. The electoral college also forces candidates to campaign in multiple states and regions rather than simply focusing on high-population states or cities. Finally, minority groups can be influential if their numbers are high enough within states but would not be as influential in a popular-vote contest across states. In time, presidential elections have witnessed a dramatic increase in the participation of ordinary citizens in every stage of the electoral process— nomination, campaigning, and voting. For instance, the Constitution does not discuss how presidential candidates will be nominated. Early in the nation’s history, candidates were put forward by caucuses of the political parties in Congress. By 1832, political parties held national nominating conventions to pick the candidates. Delegates to party conventions usually were selected by local officials or appointed by governors. Party leaders as opposed to rank-and-file voters played the greatest role in selecting the presidential nominees. National party conventions involved bartering and vote trading among party leaders and multiple roll call votes to select the nominee. By 1960, however, presidential primaries gained acceptance as the way to select each party’s nominee. Today, the major political parties hold primaries or caucuses in every state. To win their political party’s nomination, candidates must compete in state-by-state elections held among voters within their political party. Rank-and-file voters in each party—particularly those participating in primaries early in the process—play a major role in selecting the eventual nominee. In a series of elections, the candidates work to accumulate a majority of delegates to their national convention. Though the delegates are free to switch their allegiance at the convention, they tend to stick with their original preferences. The conventions showcase the nominee and party platform, and the primary results are not overturned. The U.S. electorate has become more numerous and diverse. In 1828, most property requirements for white men over the age of 21 were abolished. In 1920, the passage of the Nineteenth
Amendment gave women the right to vote. In 1870, the Fifteenth Amendment to the Constitution prohibited the denial of the right to vote on the basis of race. People of color were still effectively denied the right to vote, however, by states and localities that instituted literacy requirements or the payment of poll taxes. Only with the passage of the Twenty-fourth Amendment in 1964 and the Voting Rights Act of 1965, roughly 100 years after the end of slavery, did African Americans gain an effective right to vote. In 1971, the Twenty-sixth Amendment lowered the voting age from 21 to 18 in federal, state, and local elections. Despite hard-fought efforts to expand the vote, however, little more than half of the eligible adult voting population typically turns out to vote in presidential elections. This relatively low turnout is often attributed to the complexities of voter registration and the fact that Election Day is always a regular working day rather than a holiday. The advent of the mass media, as well as the changes in the nomination process, led presidential candidates to develop their own campaign staffs, employing pollsters, fund raisers, media consultants, and policy experts. Campaigns today cost a great deal of money as candidates fight first for their party’s nomination and then to win the general election. In the early 1970s, Congress passed the Federal Election Campaign Act (FECA) and established the Federal Election Commission to limit and regulate campaign contributions and make fund-raising information publicly available. The law was partly intended to equalize spending by the major political parties. Individual contributions are encouraged, as presidential candidates receive matching public funds for all individual contributions up to $250. Candidates who accept public funding must abide by spending limits set by the law, except in the case of “soft money.” National party committees may spend an unlimited amount of funds to get out the vote; state and local party committees and issue advocacy groups (“527 committees”) can spend money on issue advertisements and television commercials. All help (or hurt) the presidential candidates without being direct expenditures of the presidential candidates themselves. Scholars have long sought to understand how elections change public policy and shift the balance
of power between the major political parties. Realignment theory examines which elections constitute a critical break with the past. Realigning elections or eras are marked by intense disruptions of traditional patterns of voting behavior, high levels of voter participation and issue-related voting, and ideological polarization between political parties. They result in durable changes in the party balance of the electorate. The party holding the presidency tends to shift. Whether focusing on changes in the terms of political conflict or changes in the balance of partisan identification in the electorate, the eras surrounding the presidential elections of 1828, 1860, and 1932 are considered realigning or “critical” elections. Democrats dominated the presidency and Congress in the period from 1828 to 1860. Power shifted to the Republican Party from 1860 to 1932. The election of Franklin D. Roosevelt in 1932 returned the Democrats to power. Most elections are not realignments, yet they are consequential for changes in the direction of public policy. Presidents who win landslide victories after campaigns where the candidates differed sharply on the issues are likely to claim mandates for policy change. Similarly, when one of the major political parties wins both the presidency and control of both chambers in Congress, they are likely to challenge the status quo or dig in their heels to maintain it. When the presidential and congressional elections yield divided government, policy change proceeds at a more incremental pace. Whatever the circumstances of their election, individual presidents have the power to change the course of both foreign and domestic policy. For this reason, the character of the electoral process and the quality of voter decision making will continue to be the object of electoral reform. Further Reading Abramson, Paul R., John H. Aldrich, and David W. Rhode. Change and Continuity in the 2004 Elections. Washington, D.C.: Congressional Quarterly Press, 2005; Bartels, Larry. Presidential Primaries and the Dynamics of Public Choice. Princeton, N.J.: Princeton University Press, 1988; Burnham, Walter Dean. Critical Elections and the Mainsprings of American Politics. New York: Norton, 1970; Campbell, Angus, Philip Converse, Warren Miller, and Donald Stokes.
The American Voter. New York: John Wiley & Sons, 1960; Conley, Patricia. Presidential Mandates: How Elections Shape the National Agenda, Chicago: University of Chicago Press, 2001; Polsby, Nelson, and Aaron Wildavsky. Presidential Elections. 10th ed. New York: Chatham House Publishers, 2000; Mayhew, David. Electoral Realignments. New Haven, Conn.: Yale University Press, 2002; Sundquist, James L. Dynamics of the Party System. Rev. ed. Washington, D.C.: The Brookings Institution, 1983; Wayne, Stephen J. “Presidential Elections and American Democracy.” In The Executive Branch, edited by Joel Aberbach and Mark Peterson. New York: Oxford University Press, 2005; Wayne, Stephen J. The Road to the White House 2004. Belmont, Calif.: Wadsworth Publishing Co., 2003. —Patricia Conley
presidential inaugurations
Every four years, the United States celebrates the swearing in of a president. The inauguration has become a ceremony of great significance that attracts intense media and public attention. It sometimes marks the transition of power from one party to another, often is the setting for the establishment of new priorities for the nation, sometimes signals a new direction for the United States, and just as often is of only negligible impact. The United States does not have a royal family, and many look to the president for symbolic representation as one might look to a queen or a king. The president serves both as head of government (a political role) and as head of state (a symbolic role) for the nation. In this way, the presidential inauguration serves as a signal event in the symbolic representation of the nation as a whole. Its pomp and pageantry are in part meant to unify, bringing the nation together as one in the hope of new life for the nation. There have been more than 50 inaugural ceremonies, although few were memorable. We remember George Washington’s because it was the first; Andrew Jackson’s because the postceremonial White House reception degenerated into a raucous party; William Henry Harrison’s because he caught pneumonia giving his inaugural address in a driving rainstorm and died a month later; and Jimmy Carter’s because he eschewed the usual presidential limousine and walked from the Capitol to the White House.
Bill Clinton, standing between Hillary Rodham Clinton and Chelsea Clinton, takes the oath of office of the president of the United States, January 20, 1993. (Library of Congress)
There have been only a few memorable inaugural addresses. George Washington gave the shortest, a mere 135 words. Franklin D. Roosevelt’s fourth address, delivered in 1945 during World War II, lasted only six minutes. William Henry Harrison, of pneumonia fame, gave the longest, nearly two hours and 9,000 words, became ill afterward, and died weeks later. Only a few speeches are remembered for their historical significance or eloquence. Both of Abraham Lincoln’s inaugural addresses were memorable, especially his second. Franklin D. Roosevelt reminded a nation gripped by a devastating economic depression that it had nothing to fear “but fear itself,” and John F. Kennedy spoke of the dreams of a nation at the peak of its power, imagination, and optimism, with the famous line “Ask not what your country can do for you; ask what you can do for your country.”
Some inaugurations were modest, others fit for a king. Ronald Reagan’s 1981 inauguration was, at more than $10 million, the most expensive in history until George W. Bush’s 2001 ceremony, and the 2005 inauguration set yet another spending record. This conspicuous display of consumption and fiscal exhibitionism contrasts sharply with Thomas Jefferson’s inauguration, after which he walked back to his boardinghouse and had to wait in line for his supper. More than mere presidential oath taking, inaugurations are a celebration of hope and renewal and of opportunity and change. The oath itself is a mere 35 words: “I do solemnly swear (or affirm) that I will faithfully execute the office of President of the United States, and will to the best of my ability preserve, protect, and defend the Constitution of the United States.” George Washington added “so help me God,” a tradition that continues to this day. Franklin Pierce, in 1853, was, for religious reasons, the only president
to “affirm” rather than swear, and Pierce’s vice president William Rufus King stands alone as the only person to take the oath of office abroad. King was ill and could not travel, and so a special act of Congress allowed him to take the oath in Havana, Cuba. He died six weeks later. Every president except John Quincy Adams placed his hand on the Bible when taking the oath (usually, the Bible is open to a page with a passage of special significance to the new president). Adams used a volume on constitutional law that Chief Justice John Marshall had given him. Marshall, incidentally, swore in more presidents (nine) than any other chief justice. It is customary for the chief justice to administer the oath to an incoming president. There are exceptions, however. Usually, these exceptions arose when emergencies such as the death or the assassination of a president occurred, and the new president had to be sworn in quickly. After Warren G. Harding’s death, Calvin Coolidge was sworn in by his father, a notary public and justice of the peace. The first inauguration took place in 1789 in the temporary capital of New York. Washington’s second inauguration, in 1793, took place in Philadelphia, by then the seat of government, and John Adams, the second president, was also inaugurated in Philadelphia, in 1797. All the rest, with the exception of some of the unelected or “accidental” presidents, took place in Washington, D.C. Gerald Ford took his oath in the East Room of the White House shortly after Richard Nixon’s resignation because of the Watergate scandal, preferring a quiet ceremony in the aftermath of a constitutional crisis. Eight swearing-ins were held inside the Capitol; most of the others were held on the East Portico of the Capitol. In 1981, Ronald Reagan moved the ceremony to the west front of the Capitol. Inaugurations are now held in January. They used to be held in March (presidents were constitutionally mandated to begin their term in office on March 4), but it was felt that there was just too much time between an election and the swearing-in of the new president, and the Twentieth Amendment to the U.S. Constitution, ratified in 1933, shortened both the gap and the time a sitting president served in a “lame duck” capacity. Now, presidents begin their term on January 20. This gave the president-elect less time to prepare to govern, as the transition was considerably shortened.
This became an especially pressing problem after the 2000 presidential election, when a dispute over the votes in the state of Florida prevented a winner from being declared until roughly five weeks after Election Day. Parades down Pennsylvania Avenue evolved slowly; the first full-fledged parade was held in 1829 in celebration of Andrew Jackson’s victory. Inaugural balls as a practice also evolved slowly. After the disputed election of 1876, in which Rutherford B. Hayes received fewer popular votes than Samuel J. Tilden yet won the election (and earned the nickname “RutherFraud”), the new president decided it might seem inappropriate to hold a grand public display and did not hold a ball or a public ceremony. Instead, he took the oath at a private ceremony inside the White House. Presidential inaugurations can be a time of national pride and unity. Thomas Jefferson used the occasion to try to bridge the partisan divide plaguing a nation deeply split between Federalists and Jeffersonians. Clearly, the inaugural message of George W. Bush will be remembered for its “let us come together” theme, delivered after his election was disputed in the Florida vote recount (and after he lost the popular vote to Democratic candidate Al Gore). The United States is at times a politically divided nation, and to govern more effectively, the president needs to evoke themes of national unity. Other presidents have used the occasion to ask for early support and bipartisanship. Such pleas often fall on deaf ears and do not lead to a very long presidential honeymoon. Full of pomp and ceremony, part celebration and part “new beginning,” presidential inaugurations are a time when, quite often, power changes personal and partisan hands. It is done in peace and relative tranquility. No armed forces are needed to usher in a changing of the political guard. The change in leadership takes place as a result of votes, not force. The presidential inauguration is today taken for granted. Its style and significance are assumed and have taken on a kind of ordinariness. And yet, it is quite extraordinary that in this nation power so easily and smoothly changes hands. Transferring power from one person to another, one party to another, in so peaceful a manner is nothing short of remarkable. It is a tribute to the strength of political
democracy in the United States as well as to the character of its people. Further Reading Boller, Paul F., Jr. Presidential Inaugurations: Behind the Scenes—An Informal, Anecdotal History from Washington’s First Election to the 2001 Gala. New York: Harcourt, Inc., 2001; Library of Freedom. The Inaugural Addresses of the Presidents. New York: Gramercy Books, 1995. —Michael A. Genovese
presidential leadership
Few presidents are considered to have been highly successful. Only George Washington, Abraham Lincoln, and Franklin D. Roosevelt are generally ranked as “great” presidents. Most are not able to overcome the lethargy built into the separation-of-powers system and cannot bring sufficient skill to bear on the limited opportunities they face. To be successful, a president must be a jack-of-all-trades and a master of all. It is rocket science. In fact, it is more difficult than rocket science because in science there are laws and rules that apply, rules that do not always work in the world of presidential politics. Since power floats and is so elusive in the United States, a power vacuum is often the natural order, and someone or something fills that vacuum. The vacuum is most often and easily filled by those who wish to protect the status quo. In the United States, there is a great deal of negative power and there are multiple veto points, but in normal times, there are few opportunities to promote change. By what means can a president fill that power vacuum? At times, that vacuum can be filled with presidential leadership. Presidential leadership refers to more than mere officeholding. Leadership is a complex phenomenon revolving around influence—the ability to move others in the leader’s desired direction. Successful leaders are those who can take full advantage of their opportunities, resources, and skills. A president’s level of political opportunity is the context in which power operates. It is measured by such factors as the margin of the president’s victory in the last election; the issues on which the president ran in that election; the size of the president’s party in Congress; issue
ripeness; public mood; and the level of political demand for action. Resources are a function of the power granted to the president. Constitutionally, the president has limited power. Further, the structure of government, the separation of powers, also reflects limited presidential authority. Skill is what each individual president brings to the complex task of governing. A successful president confronts his or her opportunities and converts resources to power, but this conversion is not automatic. To convert resources to power requires political skill. The president’s skills—his or her style, experience, political acumen, political strategy, management skill, vision, ability to mobilize political support, build coalitions, develop a consensus, and his or her character traits and personal attributes—provide a behavioral repertoire, a set of competencies or skills on which a president may draw. The president has vast responsibilities with high public expectations but limited power resources. Ordinarily, that would be a recipe for political disaster, but as an institution, the presidency has proven to be elastic; it stretches to accommodate skilled leaders in situations of high opportunity, but it can also contract to restrict less skilled presidents. While the framers of the U.S. Constitution sought a governing structure that was slow, deliberative, and interconnected, they also wanted a government of energy, but a specific type of energy—energy that results from consensus, coalitions, and cooperation. The model on which this is based is one of consensus, not fait accompli; influence, not command; agreement, not independence; and cooperation, not unilateralism. If the model of leadership built into the presidency is one of cooperation and consensus, what paths are open to a president in her or his efforts to lead? How can a skilled president take a relatively limited office and exert strong leadership? What are the fuels that ignite presidential power, and what are the preconditions of successful presidential leadership? Apart from the level of political opportunity, two that are especially important are vision and skill. These two elements provide the foundation on which the president’s potential for leadership largely rests.
Perhaps the most important resource a president can have is the ability to present a clear and compelling vision to the public and to the Congress. A well-articulated, positive vision that builds on the past, addresses the needs of the present, and portrays a hopeful, optimistic image of a possible future opens more doors to presidential leadership than all the other presidential resources combined. A compelling vision energizes and empowers, inspires and moves people and organizations. A president with a powerful vision can be a powerful president. Yet few presidents can fully use what Theodore Roosevelt referred to as "the bully pulpit" to develop a public philosophy that animates governing. If presidents wish to be change agents and not merely preside over events, however, they must use the bully pulpit to promote a moral and political vision in support of their programs and policies. A visionary leader, similar to what James MacGregor Burns calls a transforming leader, gives direction to an organization, gives purpose to action, and creates new hope. Such leaders are both instruments for change and catalysts of change.

The public seems to assume that political skill is all that is necessary for a president to be successful. If only we could get another Franklin D. Roosevelt in the White House, the thinking goes, we could make things work. While skill is important, skill alone is not enough. Even the most skilled leaders face formidable obstacles on the path to power. Skill helps to determine the extent to which a president takes advantage of, or is buried by, the political conditions she or he faces. President Ronald Reagan used to refer to the "window of opportunity"—his way of talking about how open or closed the conditions for exercising presidential leadership were. The image is apropos. A highly skilled president who faces a closed window—such as the opposition party's controlling the Congress during a period of economic troubles in which the president's popularity is low—will be very limited in what she or he can achieve. By contrast, a president of limited skill who has an open window of opportunity will have much greater political leverage, even though her or his skill base is smaller. President George W. Bush was an object
of much media ridicule before September 11, 2001, but after that tragedy, his window of opportunity— and his power—increased dramatically. It was not that Bush became more skilled overnight; it was a change of conditions that opened a window to power. What skills are most useful to a president? Political experience is often seen as a requirement for effective leadership, and while this sounds like common sense, the correlation between experience and achievement is not especially strong. Some of our most experienced leaders were the biggest disappointments (for example, Lyndon Johnson and Richard Nixon). But in general, one could argue that more experience is better than less experience. While Washington, D.C., “amateurs” (Jimmy Carter, Ronald Reagan, Bill Clinton, and George W. Bush) have made serious blunders in part due to their lack of experience, experienced political hands (George H. W. Bush had arguably the best resume in Washington) have often done poorly despite their rich backgrounds. This suggests that while experience can be very helpful, there are other factors that also determine success or failure. The timing of politics also matters. When new legislation is introduced, when the public appears ready to accept change, when the Congress can be pressured to act, when the president leads and when she or he follows, and when she or he pushes and when she or he pauses, all contribute to the success/ failure equation. A president’s sense of political timing, part of the overall “power sense” all great leaders have, helps her or him know when to move and when to retreat, when to push and when to compromise. The president’s most important job, as stated earlier, is to articulate a vision for the nation’s future. Presidents must identify the national purpose and then move the machinery of government in support of that vision. Goal-oriented presidents sometimes control the political agenda; they take charge and are masters of their fate. To control the agenda, a president must (1) develop and articulate a compelling vision; (2) present a series of policy proposals designed to achieve that vision; (3) sell that program to the public and Congress; and (4) place emphasis on the presentation of self and programs. If accomplished
with skill, luck, and good timing, this program may allow the president to control the political agenda and thereby succeed. The foundation on which the U.S. system of government was founded is based on consensus and coalition building. Consensus means agreements about ends; coalitions are the means by which the ends are achieved. Since power in the U.S. system is fragmented and dispersed, something (usually a crisis) or someone (usually the president) has to pull the disparate parts of the system together. Thus, power can be formed if the president has a clear, focused agenda and can forcefully and compellingly articulate a vision and if the public is ready to embrace that vision. If a president can develop a consensus around this vision, she or he can then muster the power to form the coalitions necessary to bring that vision to fruition. Leading public opinion is believed to be one of the key sources of power for a president, but the ability to move the nation and generate public support takes time and effort, there is no guarantee of success, and support is not automatically conferred at the inauguration, nor can it always be translated into political clout. Presidents spend a great deal of time and effort trying to build popular support both for themselves and their programs, but their efforts often come to very little. A president with popular support (and other skills to complement popularity) can exert a great deal of pressure on Congress and is more likely to get her or his program passed. For this reason, presidents are keenly aware of fluctuations in popularity, and they routinely engage in efforts aimed at self-dramatization in the hope of increasing their ratings. Some believe that popularity is a very convertible source of power; that presidents can convert popularity into congressional votes, better treatment by the media, and less criticism by political opponents. While solid evidence to support these claims is hard to come by, one must remember that, in politics, perception counts more than reality. If a president is popular, she or he might be able to convince the Congress that it is dangerous to go against her or him. In the aftermath of the September 11, 2001, attack, President George W. Bush’s popularity soared to the 90 percent range. This led Democrats and others to be
very cautious in their criticism of the president and softened their opposition to some of his policies. Later, when his popularity ratings dropped, political opposition increased. Can presidents convert popularity into power? Sometimes, but not always. Political scientist George C. Edwards III is skeptical regarding the president’s ability to convert popularity into political clout. After a statistical study of the relationship of popularity to power, Edwards concluded that while popularity may help, there is no certainty that presidents who are popular will get their legislative programs through Congress. How can presidents lead public opinion? President Franklin D. Roosevelt asserted that “all our great Presidents were leaders of thought at times when certain historic ideas in the life of the nation had to be clarified,” and his cousin President Theodore Roosevelt observed, “People used to say of me that I . . . divined what the people were going to think. I did not ‘divine’ . . . I simply made up my mind what they ought to think, and then did my best to get them to think it.” Citizens demand a great deal of their presidents. Expectations are high, but power resources are limited. This puts presidents in a vise. As their constitutional power is not commensurate with the demands and expectations placed upon them, they turn to the creative force of presidential leadership in attempts to close this expectation/resource gap. They rarely succeed. The forces against them are, in any but a crisis, too strong, and their resource base is too limited. Even the creative use of presidential leadership cannot long close the expectation/resource gap. In this way, presidents all face an uphill battle in their efforts to govern. Further Reading Cronin, Thomas E., and Michael A. Genovese. The Paradoxes of the American Presidency. 2nd ed. New York: Oxford University Press, 2004; Edwards, George C, III. At the Margins: Presidential Leadership of Congress. New Haven, Conn.: Yale University Press, 1990; Genovese, Michael A. The Power of the American Presidency, 1787–2000. New York: Oxford University Press, 2001. —Michael A. Genovese
presidential succession
What happens if the office of president of the United States is left vacant due to the death, disability, resignation, or impeachment and conviction of the president? The U.S. Constitution, in Article II, Section 1, states that "the same [the office of President of the United States] shall devolve on the Vice President." There was some confusion concerning the meaning of "the same." Did this mean the office itself with its title and powers or merely the powers and duties of the office? Would the vice president become the president fully and completely, or would he or she become the acting president?

An orderly and fair succession process is essential to the legitimacy of the office of the presidency and to the political system overall, and as the political stakes are also very high, the succession process has at times become contentious and testy. After all, no democratic system can long survive if it is deemed by the voting public to be illegitimate and lacks authority. Therefore, methods must be in place for the replacement of a political leader should the office become, for any reason, vacant. A stable, orderly, and transparent system of leadership succession thus becomes essential to a stable and legitimate political system. This was a problem in the early republic, when the president was selected as the highest vote-getter in the electoral college, with the runner-up becoming vice president. But with the development of political parties, the president might be of one party and the vice president of the other. In fact, this happened with the second president of the United States, John Adams, a Federalist. His vice president was Thomas Jefferson, a Jeffersonian Democratic-Republican. Such a situation was deemed dangerous by some and inappropriate by most, and soon the Constitution was changed to better reflect the changing political realities of political parties and the development of party tickets for president and vice president. These constitutional changes took place in the wake of the contentious election of 1800, when both Thomas Jefferson and his intended vice presidential candidate, Aaron Burr, received the same number of electoral votes. Here, the decision of
who would be the next president went to the House of Representatives, which, after a heated battle, chose Jefferson as president. But these early battles would not be the end of the matter. Soon, further questions of succession and legitimacy would take center stage.

The dilemma over the meaning of "the same" was resolved when, in 1841, President William Henry Harrison died shortly after taking office. Harrison, the oldest elected president to that date, gave a lengthy inaugural address during a rainstorm and soon became ill. He died a month later. Vice President John Tyler took the oath of office as president and demanded that he assume all the powers of office and that he be called president and not acting president (some critics referred to Tyler as "Your Accidency"). While there was some dissension and political opponents attempted to prevent Tyler from assuming office as "president," after much wrangling, Tyler managed to establish that the vice president became the president with full powers, honors, and rights when elevated to the position. That issue has not been debated since. But there were other issues and questions that remained unanswered. What would happen if both the presidency and the vice presidency were vacant? A line of succession needed to be established. Again, Article II, Section 1 of the Constitution provides for this, giving to the Congress the authority to establish a line of succession to determine "what officer shall act as President, and such officer shall act accordingly." Again, did this give the officer the powers of office or title as well?

In 1792, Congress passed the first of a series of succession acts. It provided that the president pro tempore of the Senate would be next in the line of succession, followed by the Speaker of the House of Representatives. Then an interim election was to be held to determine a new president. In 1886, Congress changed the order of replacement and placed cabinet officers in line to succeed the president. Here the line of succession was based on the order in which the departments were created, thus placing the secretary of state next in line. This act also eliminated the special interim election. After World War II, President Harry Truman urged the Congress to again change the line of succession so that the Speaker of the House and then the
president pro tempore of the Senate would be placed ahead of the cabinet. In 1947, Congress passed its revision to the succession acts and followed Truman's recommendation.

In 1967, the Twenty-fifth Amendment to the Constitution was ratified. This amendment gave the president the right to fill a vacant vice presidential office with his choice of replacement, given the consent of both houses of Congress. This occurred in 1973 when Vice President Spiro Agnew was forced to resign his office in the face of a political scandal and a court plea of nolo contendere (roughly meaning "I do not contest" the charges). President Richard M. Nixon, also under a cloud of scandal in the Watergate crisis, filled the vacancy by appointing Representative Gerald R. Ford (R-MI) to the vice presidency. Given that there was talk of impeaching Nixon, the president was severely limited in whom he could nominate for vice president. After all, many believed that Nixon might actually be appointing his replacement as president, and in the heated atmosphere of Watergate, few Democrats (who controlled both houses of Congress) held Nixon in high regard, and they were suspicious of any Nixon nominee. Ford, at that time the minority leader in the House of Representatives, was one of the few prominent Republicans who could have been approved by the Democratic majority, and Nixon appointed him vice president. Shortly thereafter, President Nixon resigned when faced with the certainty of impeachment and conviction, and Gerald Ford was elevated to the presidency, the nation's only "unelected" president. Ford then appointed New York governor Nelson Rockefeller to fill the open office of vice president, making the entire ticket an "unelected" presidency and vice presidency. While some questioned the legitimacy of this, in the end there were no serious legal or constitutional challenges made to the arrangement. Ford and Rockefeller filled out the unexpired portion of Richard M. Nixon's term.

The Twenty-fifth Amendment also granted the president the right to turn over the powers of the presidency temporarily to the vice president. This provision has been put into effect on several occasions when presidents were temporarily unable to govern during, for example, medical operations.
Today, the presidential line of succession is fairly stable and noncontroversial. In effect, most of the "kinks" have been worked out of the system, and both the line of succession and the process are widely accepted and clearly defined. This helps create a more stable and orderly political environment and diminishes the opportunity for foul play or political intrigue that could easily jeopardize the security and political safety of the nation. Further Reading Kinkade, Vance R., Jr. Heirs Apparent: Solving the Vice Presidential Dilemma. Westport, Conn.: Praeger, 2000; Natoli, Marie D. American Prince, American Pauper: The Contemporary Vice Presidency in Perspective. New York: Greenwood Press, 1985; Sindler, Allen. Unchosen Presidents. Berkeley: University of California Press, 1976. —Michael A. Genovese
President's Daily Briefing (PDB)
In a complex, sometimes dangerous, often confusing world, the president needs all the good information he or she can get to make decisions that are designed to advance the national interests and security of the United States. Presidents have advisers, government agencies, and a host of information sources designed to keep the president abreast of changes in the international arena, and while there is clearly no shortage of information, there are often problems of how best to organize and present information to the decision maker, what to include and what to leave out, what trends seem to be coming to a head, and which ones seem to be fading in importance. In sheer volume, the president has far more information than he or she needs or could possibly use or process. But there is always an issue of transmitting the right information to the right individuals in the right way at the right time. If given too late, disaster may result; if given in a way that does not highlight the potential consequences of actions, it might lead to mistakes; if a "sign" is ignored or downplayed, a crisis may develop; if given in a one-sided or biased way, an ideological blindness may intrude into the decision-making process.
Different presidents develop different methods of staying in touch with political reality. President George H. W. Bush was referred to as the "Rolodex president," a reference to his propensity for going through his Rolodex and placing phone calls to various individuals who might shed light on a situation or problem. President Bill Clinton was a voracious reader who gobbled up data at an amazing pace. President George W. Bush relied heavily on a very few trusted and ideologically compatible advisers. But all presidents rely on one method of getting up to speed on the changes and threats that may have occurred overnight: the President's Daily Briefing (PDB).

The President's Daily Briefing, or President's Daily Brief, is a document given every morning to the president with a summary of important events that require the president's attention. Beginning in 1964, the PDB was compiled by the Central Intelligence Agency, with other agencies adding material from time to time. The PDB was often delivered to the president by the CIA director himself, and the director might discuss with the president the impact of certain events and the need for presidential attention or action. On occasion, the director of the CIA would bring colleagues to the briefing who might be able to shed additional light on significant controversies or problems outlined in the PDB. In 2005, responsibility for putting the PDB together and delivering it to the president was transferred to the newly established post of the director of national intelligence. Often, the PDB is the opening of a conversation among high-ranking executive branch officials regarding what action to take or what issues to pursue. Primarily intended as an early warning mechanism, alerting the president to emerging international threats and developments, the PDB is also delivered to other high-ranking officials. While there is no statutory requirement as to who shall receive the PDB, and it is up to the president to decide who is included on the distribution list, the secretary of State, the president's national security advisor, and the secretary of Defense are often among the recipients. The PDB is "top secret" and on very rare occasions will be designated for the sole use of the president with "for the President's eyes only" stamped
on the cover. On May 21, 2002, then White House press secretary Ari Fleischer called the PDB "the most highly sensitized classified document in the government." The National Intelligence Daily, which contains many of the same reports and documents as the PDB, is issued to a much wider audience within the administration. All former presidents are entitled to receive copies of the PDB after it is delivered to the sitting president.

Public attention was focused on the PDB in the aftermath of the September 11, 2001, terrorist attacks against the United States. Investigators, looking into the government's response to terrorist threats, requested access to a controversial PDB dated August 6, 2001, five weeks before the 9/11 attacks. This PDB contained a memo with the alarming title, "Bin Laden Determined to Strike in US." After much pressure, the administration declassified the August 6, 2001, PDB. George W. Bush was the first president in the history of the United States to declassify and release a PDB. While the threat was clear, the administration argued that it was not "new" and that there were no actionable dates or locations to follow up with, and thus the threat was taken seriously but not taken literally. In the aftermath of the release of the August 6, 2001, PDB, the fallout for the administration was minimal. The contents of the PDB are as follows:

PRESIDENT'S DAILY BRIEFING: August 6, 2001 (Declassified and Approved for Release, 10 April 2004). Bin Laden Determined to Strike the US: Clandestine foreign government and media reports indicate Bin Laden since 1997 has wanted to conduct terrorist attacks in the US. Bin Laden implied in US television interviews in 1997 and 1998 that his followers would follow the example of the World Trade Center bomber Ramzi Yousef and "bring the fighting to America." After US missile strikes on his base in Afghanistan in 1998, Bin Laden told followers he wanted to retaliate in Washington, according to a ■■■■■■■ service. An Egyptian Islamic Jihad (EIJ) operative told an ■■■■ service at the same time that Bin Laden was planning to exploit the operative's access to the US to mount a terrorist strike. The millennium plotting in Canada in 1999 may have been part of Bin Laden's first serious attempt to implement a terrorist strike in the US. Convicted plotter Ahmed Ressam has told the FBI
that he conceived the idea to attack Los Angeles International Airport himself, but that Bin Laden lieutenant Abu Zubaydah encouraged him and helped facilitate the operation. Ressam also said that in 1998 Abu Zubaydah was planning his own attack. Ressam says Bin Laden was aware of the Los Angeles operation. Although Bin Laden has not succeeded, his attacks against the US Embassies in Kenya and Tanzania in 1998 demonstrate that he prepares operations years in advance and is not deterred by setbacks. Bin Laden associates surveilled our Embassies in Nairobi and Dar es Salaam as early as 1993, and some members of the Nairobi cell planning the bombings were arrested and deported in 1997. Al-Qa'ida members—including some who are US citizens—have resided in or traveled to the US for years, and the group apparently maintains a support structure that could aid attacks. Two al-Qa'ida members found guilty in the conspiracy to bomb our Embassies in East Africa were US citizens and a senior EIJ member lived in California in the mid-1990s. A clandestine source said in 1996 that a Bin Laden cell in New York was recruiting Muslim-American youth for attacks. We have not been able to corroborate some of the more sensational threat reporting, such as that from a ■■■■■■■■ service in 1998 saying that Bin Laden wanted to hijack a US aircraft to gain release of "Blind Shaykh" Umar Abd al-Rahman and other US-held extremists. For the President Only. 6 August 2001.
Further Reading Inderfurth, Karl F., and Loch H. Johnson. Fateful Decisions: Inside the National Security Council. New York: Oxford University Press, 2004; Johnson, Loch H. America’s Secret Power: The CIA in a Democratic Society. New York: Oxford University Press, 1991. —Michael A. Genovese
regulation
The federal government, more specifically, the executive branch of the government, is responsible for executing the laws of Congress. As the legislative branch, Congress makes all laws, but laws are not self-executing, nor are they always clear regarding how
they are to be implemented. Thus, some amount of executive discretion is necessary in bringing the will of Congress to life. The responsibility for this element of governance falls on the executive branch of government, and in performing this function, the executive branch relies on the permanent government—the bureaucracy—to implement laws. To do so, the executive branch must establish rules and regulations that are to apply to the implementation of the law. The bureaucracy implements laws and regulations. By implementation, we mean employing the skill and resources necessary to put policy into action. It brings to life the will of Congress and the president. Laws and regulations are not self-executing. They must be made to happen. Bureaucrats do this by implementing policy, and they implement policy through issuing regulations.

Beginning in the 1970s, there was a political backlash against what was perceived as excessive and intrusive government regulation, especially of business. Regulations can be quite irksome, especially to those being regulated. Many business groups fought back and supported politicians who wanted to limit or roll back the regulations placed on them. This in turn led to demands from environmental and other groups for more regulations and restrictions on the unfettered activities of business. The tension caused by this push and pull in the area of government regulations became one of the central political clashes of the 1980s and has continued unabated into the 21st century. It shows no sign of letting up any time in the near future. Remember that much is at stake in this political and economic battle. Businesses have much to gain or lose financially, and environmental groups, for example, see the decline of regulation as putting environmental disasters on the horizon. The modern battle over regulation is about power, money, and the future of the nation.

A regulation is a legal rule established by the government (often through administrative agencies) designed to control or impact certain policy areas and supported by fines or penalties in cases where regulations are violated. The body of rules and regulations makes up what is referred to as administrative law. These rules and regulations are usually designed to promote safety, ensure fair and open competition, and promote honest dealings. They involve the details of governing. Since most laws are written in general
terms and are sometimes ambiguous, someone must establish rules and standards that will govern the implementation of the law. The issuance of regulations does this. They set rules, guidelines, and legally binding limits. They attempt to promote forms of behavior and standards and practices that might not otherwise occur. One of the most recognizable forms of regulation can be seen in the antipollution regulations imposed by the government on industries. If left to their own devices, many business enterprises might find it economically disadvantageous to clean up the pollutants that their enterprise produces and emits into the atmosphere. Thus, the government intervenes and establishes rules and regulations governing the amount and type of pollutants that can be released into the atmosphere. Where these rules are violated, the government can impose fines and legal penalties.

Of course, there are heated political battles concerning the establishment and the enforcement of such antipollution regulations, and industry after industry hires attorneys, lobbyists, and advocates to make their case before Congress, the executive branch, and government agencies. These industry agents also often come with campaign contributions that are aimed at influencing the choices of government officials. Such efforts sometimes backfire as congressional scandals concerning the inappropriate or illegal uses of money in the political process are occasionally exposed. The stakes in these ventures are very high. If a government regulation on pollution is imposed, it could cost the business significantly. If the regulations are lifted, the business could save significant amounts of money. Thus, it should not be surprising that the politics of government regulation is often tinged with the breath of scandal and the influence of big money. For a politician to be openly antiregulation is often seen as an overt effort to attract campaign funds, and this message is not lost on industry lobbyists whose job it is to sniff out those politicians who are most likely to be friendly to industry. Ironically, most of these contributions are, in the strictest sense, legal, and while this may undermine the faith of citizens in their democracy, raising questions and accusations of "government for sale to the highest bidder," the lawmakers who benefit from the largesse of the lobbyists' campaign contributions are precisely the ones who write the laws and regulations that govern
campaign contribution law. When the foxes are in charge of the henhouse, do not expect the hens to be happy—or safe.

As we know, political battles do not end when a bill becomes a law. No, the politics and the battles continue, although in a different arena—the arena of implementation. Implementation, it is often said, is politics by other means. The vagueness of many laws allows wide latitude for bureaucratic interpretation, and Congress delegates some discretion to the bureaucracy. This discretion often means that even as they implement laws, bureaucrats make policy. As the administrative wing of the executive branch of government, the bureaucracy also performs the ongoing and routine tasks of government. In implementing and administering laws and regulations, bureaucracies also interpret the law. This adjudication function makes the bureaucrat a judge of sorts, as bureaucrats determine violations of rules and apply penalties. Of course, as laws are not self-executing, neither are they self-explanatory. They need to be put into action, the intent of the law must be interpreted, and the means of implementation must be decided. This gives the bureaucracy a good deal of administrative discretion.

Perhaps the most controversial of the bureaucratic tasks involves regulation. Bureaucrats are rule makers, and the development of formal rules is called regulation. The government regulates a variety of activities. All government regulations are published in the Federal Register (which has about 60,000 pages of rules each year). The Administrative Procedure Act, passed in 1946, requires that the government announce any proposed regulation in the Federal Register, hold hearings, allow the public and interest groups to register complaints and suggestions, research the economic and environmental impact of new proposed rules, consult with elected officials, and then publish the new regulations in the Federal Register. Imagine for a moment a world without government regulations. To the libertarian mind, such a thought is a utopian paradise of less government and more freedom, but most of us rely on the government and on government regulations (as imperfect as they are) to oversee the safety of our food supply, the practice of many of the professions, the safety of products and transportation, and a host of other activities. In a modern, complex, industrialized world and in a global
economy, it seems reasonable to assume that some measure of regulation is not only necessary but a public good. The ongoing complaint that government is too intrusive or “regulation–happy” led to a deregulation movement in the 1970s and 1980s. Some of the deregulation worked well; some did not. This led to a reregulation movement, especially where health and safety issues were concerned. See also administrative presidency. Further Reading Josling, Timothy Edward, Donna Roberts, and David Orden. Food Regulation and Trade: Toward a Safe and Open Global System. Washington, D.C.: Institute for International Economics, 2004; Kahn, Alfred E. The Economics of Regulation: Principles and Institutions. Cambridge, Mass.: MIT Press, 1988; Kahn, Alfred E. Lessons From Deregulation: Telecommunications and Airlines after the Crunch. Washington, D.C.: AEI-Brookings, 2004. —Michael A. Genovese
removal power
The U.S. Constitution provides clear governance on the matter of appointments, but it is silent on the issue of removal, except for impeachment. The question of whether an appointment made by the president with the advice and consent of the Senate could be terminated by the executive without the approval of the Senate has provoked discussion and debate for more than two centuries. Alexander Hamilton seemingly had answered the question in Federalist 77: "It has been mentioned as one of the advantages to be expected from the cooperation of the Senate, in the business of appointments, that it would contribute to the stability of the administration. The consent of that body would be necessary to displace as well as to appoint. A change of the Chief Magistrate, therefore, would not occasion so violent or so general a revolution in the offices of the government as might be expected, if he were the sole disposer of offices. Where a man in any station had given satisfactory evidence of his fitness for it, a new President would be restrained from attempting a change in favor of a person more agreeable to him, by the apprehension that the discountenance of the Senate might frustrate
the attempt, and bring some discredit upon himself. Those who can best estimate the value of a steady administration, will be most disposed to prize a provision which connects the official existence of public men with the approbation or disapprobation of that body, which from the greater permanence of its own composition, will in all probability be less subject to inconsistency than any other member of the government.” If Hamilton’s Federalist essay, like its counterparts, reflected what scholar Edward S. Corwin called “the mature conclusions of the Convention,” some explanation—not yet supplied—is necessary to account for the fact that James Madison, in the debates in 1789 about a removal clause in the statute creating the Department of Foreign Affairs, argued for a broad removal power for the president. The great debate on the removal of the secretary of State, known as the “Decision of 1789,” swirled about the question of whether the removal power was an intrinsically executive function and thus beyond the need for Senate approval. In a long address to his colleagues in the House of Representatives, Madison maintained that the president did not require Senate approval to remove an official who had been appointed with its consent. Madison’s position reflected several concerns. He observed that the Constitution vested the executive power in the president, subject to particular exceptions such as the Senate’s participation in the appointment power. Congress, he noted, could not extend or modify the president’s constitutional authority. Moreover, the placement of the removal power in the executive promoted presidential responsibility and accountability for the conduct of department heads. The president’s duty under the Take Care Clause implied the authority to remove department heads. If the president improvidently removed an official without warrant, he would be vulnerable to “impeachment and removal from his own high trust.” By the end of the debate, Congress had come to view the removal power as an incident of executive power. Accordingly, it approved flaccid legislation that would apply to the Departments of Foreign Affairs, War, and the Treasury. Under the statute, subordinate officers would assume custody of all records whenever the secretary “shall be
removed from office by the President of the United States.” Madison’s concerns involved department heads and not necessarily subordinate officials. Thus, Congress did not acknowledge a presidential removal power that enveloped all administrative officials. When, for example, he addressed the comptroller of the Treasury, he stated that the office was not purely executive in nature but that it embodied a judiciary quality as well that urged that the officer “should not hold his office at the pleasure of the Executive branch of the Government.” Hamilton later revised the representations that he made to the people in Federalist 77 when, in 1793, he attributed the removal power to the broad executive power vested in the office of the presidency. Writing as Pacificus, he explained that with the exception of the Senate’s role in the appointment power and the treaty power and the sole authority of Congress to declare war and to issue letters of marque and reprisal, that “the executive power of the United States is completely lodged in the President. This mode of construing the Constitution has indeed been recognized by Congress in formal acts, upon full consideration and debate; of which the power of removal from office is an important instance.” The debate on the repository and scope of the removal power continued to smolder in the young republic. In 1833, President Andrew Jackson claimed authority to remove cabinet officials without senatorial approval. Jackson removed Secretary of the Treasury William J. Duane for refusing to remove funds from the Bank of the United States for the purpose of depositing them in state banks. Jackson then appointed Roger B. Taney to the office during a congressional recess, and Taney carried out Jackson’s orders. The Senate retaliated against the recess appointment. When it came back, the Senate refused to approve Taney, which forced him out of the office, and it followed up by rejecting his nomination for Associate Justice of the Supreme Court. Taney was confirmed in 1836 for the position of Chief Justice of the United States. Jackson was censured by the Senate in 1834. The motion drew the support of Daniel Webster, John C. Calhoun, and Henry Clay. It provided that the president had “assumed upon himself authority and power not conferred by the Constitution and
the laws, but in derogation of both.” Jackson responded with an energetic rebuttal. He urged the Take Care Clause which, he argued, made him “responsible for the entire action of the executive department.” As a consequence, he possessed the right to remove agents who did not implement his orders. Three years later, the Senate expunged its resolution of censure. The unsettled nature of the removal power reached a boiling point in the presidency of Andrew Johnson. Johnson was impeached for violating the 1867 Tenure of Office Act, which prohibited the removal of a cabinet officer before his successor had been nominated and approved by the Senate. Johnson escaped conviction in the Senate by a single vote. He had vetoed the act on the grounds that it encroached on the president’s removal authority, as determined in the 1789 congressional debate. Congress, which had been in deep conflict with Johnson, produced the votes to override his veto. The first penetrating judicial examination of the removal power was conducted in the landmark case, Myers v. United States (1926), although the courts had previously ruled on the issue. In Marbury v. Madison (1803), Chief Justice John Marshall wrote that the president’s discretion over an office ran until an appointment was made. After that, “his power over the office is terminated in all cases, where by law the officer is not removable by him. The right to the office is then in the person appointed, and he has the absolute, unconditional power of accepting or rejecting it.” Marshall’s decision suggested that the president’s removal power could be circumscribed by statute. In 1903, in Shurtleff v. United States, the U.S. Supreme Court held that Congress may by statute restrict the president’s removal power to specified causes. In Wallace v. United States (1922), the Court held that the limitations imposed on the president’s power to remove an army officer do not apply when the removal is approved by the Senate through its consent to a new appointment to the post. The trajectory of the removal power, which had exalted a legislative role, changed abruptly in the Myers case. Chief Justice William Howard Taft advanced an overly broad view of the removal power in an opinion that asserted an unrestricted removal power in the face of statutory limitations. Taft wrote that the “Decision of 1789” put the
question beyond doubt: The power to remove officials appointed by the president and the Senate is "vested in the president alone." Taft's assertion that the president might remove any executive officer was tempered by the Court less than a decade later in Humphrey's Executor v. United States (1935). There, the Court limited the president's removal power to purely executive officers; it did not extend to the removal of quasi-legislative or quasi-judicial officers. The Court held that the removal of those officials could be conditioned by statutory measures. In Wiener v. United States (1958), the Court went a step further in holding that the removal power could be restricted even in the absence of statutory provisions. As a matter of course, Congress has by legislation protected members of commissions, special prosecutors and independent counsels who investigate scandals involving the presidency, and officials who serve in agencies that are legislative in nature, such as the comptroller general. In a case arising out of the Watergate scandal, a district court in Nader v. Bork (1973) held that President Richard Nixon's decision to fire Special Prosecutor Archibald Cox was illegal since the removal violated the department's regulations governing the special prosecutor. The Ethics in Government Act of 1978 prohibited the removal of a special prosecutor, later termed the independent counsel, except for cause as specified in the legislation. The act was upheld by the Supreme Court in Morrison v. Olson (1988). The controversy surrounding the removal power reflects the need to strike a balance between the presidential power required to control the administrative establishment and the congressional power to establish offices and stipulate the powers and responsibilities of those who fill them. Presidents will contend that they need the authority to remove officials to force them to do their bidding. History has favored that view, although courts have drawn the line where the threat of the Damoclean Sword would undermine the independence of independent agencies. Further Reading Corwin, Edward S. The President: Office and Powers, 1787–1984. 5th ed. Revised by Randall W. Bland, Theodore T. Hindson, and Jack W. Peltason. New
York: Columbia University Press, 1984; Fisher, Louis. Constitutional Conflicts between Congress and the President. 4th rev. ed. Lawrence: University Press of Kansas, 1997. —David Gray Adler
renditions, extraordinary
During the war against terrorism, which began for the United States with the terrorist attacks of September 11, 2001, President George W. Bush maintained that the United States was engaged in a new war that required new policies. This would not be a traditional war fought against an established and recognized nation-state but a conflict against shadowy and loosely aligned terrorist cells that knew no true national boundaries, were located in no one identifiable place or nation, and could move from region to region and country to country. In effect, these terrorist cells had no home and no set base but were flexible and mobile. This made it harder to identify and capture or kill the enemy, as the old rules and traditional methods of war might not be as well suited to this new type of war. New means of fighting would have to be devised, methods that were different from the means and methods of fighting a war against a traditional nation-state. This new type of war called for a new approach to fighting. The old "rules" evolved over time, in nation-to-nation conflicts—they fit the old style of war—but what were the appropriate rules and limits on this new type of war? For better or for worse, they would have to be made up as the war proceeded. This led to some questionable if not unfathomable actions on the part of the United States, actions that sometimes blurred the lines between those who were supposed to defend the rule of law and international agreements, and the terrorists, who seemed to follow no rules except to do maximum damage to their perceived enemies. Critics of the Bush policies pointed to the curtailment of civil rights and liberties of U.S. citizens; the detention of "enemy combatants" who were denied access to attorneys, were not charged with crimes, and were not allowed access to courts; the establishment of the Guantánamo Bay detention center; the use of torture; the misuse of information to create the impression that our supposed enemies were engaged in activities that made war against those individuals/
nations (for example, Saddam Hussein and Iraq) more likely; and other activities. While President Bush believed that he was within his rights to defend the nation, others saw the president as engaging in questionable, sometimes illegal, and sometimes ill-advised efforts to fight the war against terrorism. These controversies dogged President Bush, and as his popularity declined and as the war in Iraq proved more difficult, costly, and ruinous than imagined, an expanding chorus of criticism followed the president and his controversial policies.

A detainee at Guantánamo Bay (GITMO) prison in Cuba (David P. Coleman/U.S. Department of Defense)

One of the most controversial of the Bush antiterrorism efforts was the use of what was called "extraordinary rendition." In wars, mistakes are made, and even the best and brightest may go too far. One such excess that came back to haunt the United States was the policy of extraordinary rendition. Kept secret initially and denied when exposed, this policy ended up both embarrassing the United States and undermining the moral claims that were so often used and
so powerful in the initial stages of the war against terrorism.

Rendition is a quasi-legal term meaning the surrender or handing over of a person charged with a crime; usually such a person has an outstanding arrest warrant. Extraordinary rendition has no clear basis in international law but is the term that is used to describe the kidnapping of a suspect who is then transported to another location for interrogation and sometimes torture. Extraordinary rendition is thus a euphemism for what, by its true description, is a crime in international law. Extraordinary renditions became more common in the aftermath of the September 11, 2001, terrorist attacks against the United States, when the Bush administration, hoping to get tough on terrorism, began to employ this technique against suspected terrorists. In an extraordinary rendition, a suspected terrorist may be assaulted, stripped, bound and gagged, then put into a van, taken to an isolated airstrip, flown into a country where he or she is interrogated away from the watchful eye of a court or human-rights organization, and sometimes tortured. The United States thus turns over suspects to third parties who sometimes employ torture. This "torture by proxy" allows the United States to maintain that its hands are clean while still drawing the alleged benefits of the use of torture, but even using third-party surrogates to torture is illegal under international law: Such renditions violate the United Nations Convention Against Torture, to which the United States is a signatory. Article 3 of this convention states: "No State Party shall expel, return or extradite a person to another State where there are substantial grounds for believing that he would be in danger of being subjected to torture. For the purpose of determining whether there are such grounds, the competent authorities shall take into account all relevant considerations including, where applicable, the existence in the State concerned of a consistent pattern of gross, flagrant or mass violations of human rights." Further, the UN Declaration on the Protection of All Persons from Enforced Disappearances of 1992 states that "any act of enforced disappearance is an offence to human dignity," which "places the persons subjected thereto outside the protection of the law and inflicts severe suffering on them and their families. It constitutes a violation of the rules of international law guaranteeing, inter alia, the right to
recognition as a person before the law, the right to liberty and security of the person and the right not to be subjected to torture and other cruel, inhuman or degrading treatment or punishment. It also violates or constitutes a grave threat to the right to life.” The Rome Statute of the International Criminal Court defines as a crime against humanity the “enforced disappearance of persons” as “the arrest, detention or abduction of persons by, or with the authorization, support or acquiescence of, a State or a political organization, followed by a refusal to acknowledge that deprivation of freedom or to give information on the fate or whereabouts of those persons, with the intention of removing them from the protection of the law for a prolonged period of time.” When media reports hinted at a program of extraordinary rendition, the administration was quick to deny that such tactics were employed, but as the investigations proceeded, mainly by the European press and politicians, evidence emerged suggesting that indeed there may have been a number of cases of extraordinary rendition. This became especially pressing when several men who claimed to have been kidnapped by U.S. agents, flown to other countries, tortured, then, after it was realized they were not the right suspects, released, came forward and pressed their cases against the United States in courts. Proponents argue that extraordinary rendition is a useful, even an essential tool in the war against terrorism. They claim that such renditions allow the government to scoop up dangerous terrorists who might give the United States important information that can head off future terrorist attacks against the United States and its allies. Critics argue that such renditions are illegal, unethical, and anathema to U.S. values and that torture is notoriously untrustworthy and ineffective as an interrogation device. In 2005 and 2006, European Parliamentary inquiries that were held in relation to extraordinary renditions led to a series of indictments in European courts against CIA officials who were accused of abducting people from several European nations and taking them to unnamed locations for interrogation and perhaps torture. The administration sent Secretary of State Condoleezza Rice to several European capitals to counter the public-relations disaster that was occurring, and while the secretary insisted that the
United States was following accepted procedures in international law, she was unable to head off the criticism or the court inquiries. At the time of printing, these cases were still pending. Further Reading Bonner, Raymond. “The CIA’s Secret Torture.” New York Review of Books, January 11, 2007; Grey, Stephen. Ghost Plane: The True Story of the CIA Torture Program. New York: St. Martin’s Press, 2006. —Michael A. Genovese
Securities and Exchange Commission
The Securities and Exchange Commission (SEC) was created by the Securities Exchange Act of 1934. That act was one of the key pieces of legislation passed as part of President Franklin D. Roosevelt's New Deal, a response to the economic depression of 1929. The SEC was created to oversee the stock and financial markets of the United States. Today, it is also responsible for regulating the securities industry and has been the premier regulatory agency in overseeing the U.S. economy. Its mission is to "protect investors, maintain fair, orderly, and efficient markets, and facilitate capital formation."

Prior to the stock market crash of 1929, there was very little push for federal regulation of the securities market. This was especially true during the boom years of the post–World War I era. The individualistic culture of the United States, with its "rags to riches" stories, inspired those wishing to make a financial killing in the market and insulated the economy from pressures to regulate business. The stock market crash of 1929 had a devastating effect on the economy of the United States. After the boom years of the Roaring Twenties, people thought that the good times would go on and on, but in 1929, the stock market crashed, causing the rest of the economy to tumble as well. The nation's Gross National Product declined, productivity dropped, and unemployment, which was at 3.1 percent in early 1929, jumped to more than 25 percent by 1933. Bank failures (in 1931 alone, more than 2,300 banks failed), bankruptcies, and poverty all rose dramatically. And not only was the United States affected, but a worldwide depression also occurred. The level of human suffering worldwide was devastating.
How could this have happened? While the causes may be varied, the government of Franklin D. Roosevelt led the assault against the depression with what became known as the New Deal. The New Deal was a series of policies and programs designed to make the nation work again and to stem the tide of the depression. Among the many efforts to turn the nation around, the government believed that a stable and more transparent system of economic regulation would boost the confidence of investors and help revive the sluggish economy. Wild speculation, ineffective regulations, rules without teeth—all, the Roosevelt administration believed, had contributed to the depression, and their persistence prevented recovery. To turn things around, one of the policies promoted by President Roosevelt was the creation of a Securities and Exchange Commission. The SEC was responsible for enforcing many of the key pieces of New Deal legislation, such as the Securities Act of 1933, the Securities Exchange Act of 1934, the Public Utility Holding Company Act of 1935, the Trust Indenture Act of 1939, and the Investment Company Act of 1940. The creation of the SEC represents part of the drive to move away from the less-regulated market system that, it was believed, led to gross speculation and other maladies that contributed to the economic depression of 1929. The SEC and other efforts marked the rise of a more transparent, regulated, and open market system for the United States. It was intended not only to force compliance with rules and regulations relating to the market economy but also to restore confidence in the markets. By guaranteeing greater transparency and compliance with even-handed rule making, it was believed that investors would feel greater confidence in the honesty and integrity of the system and thus be more likely to invest in the nation's economy.

The SEC has five commissioners. They are appointed by the president (who also designates one member of the commission as chairman) with the advice and consent of the Senate to staggered terms lasting five years. As nonpartisanship (or bipartisanship) is considered an essential ingredient in upholding the integrity of this regulatory institution, by law, no more than three commissioners can belong to the same political party. The job of the commissioners includes interpreting federal securities laws, amending existing rules, proposing new rules to
address changing market conditions, and enforcing rules and laws. The SEC is separated into four divisions, including the Division of Corporate Finance (which oversees corporate disclosure of important information to the investing public), the Division of Market Regulation (which establishes and maintains standards for fair, orderly, and efficient markets), the Division of Investment Management (which oversees and regulates the $15 trillion investment management industry and administers the securities laws that affect investment companies, including mutual funds and investment advisers), and the Division of Enforcement (which investigates possible violations of securities laws, recommends commission action when appropriate, either in a federal court or before an administrative law judge, and negotiates settlements on behalf of the commission). The first chairman of the Securities and Exchange Commission was the controversial Joseph P. Kennedy, the father of John F. Kennedy. Joseph P. Kennedy was appointed by President Franklin D. Roosevelt, who saw Kennedy as much as a rival as an ally, and it is believed that Roosevelt appointed Kennedy to the post to reward him as well as to keep an eye on him and keep him out of the national political limelight. Kennedy was an aggressive administrator and helped establish the SEC as a viable regulatory agency within the federal government. The current chairman of the Securities and Exchange Commission is Christopher Cox, appointed by President George W. Bush in 2005. Cox, a former Republican congressman from Orange County, California, is the 28th chairman of the SEC. The SEC is headquartered in Washington, D.C., with regional offices in 11 cities, including: Los Angeles, San Francisco, New York, Boston, Philadelphia, Denver, Fort Worth, Salt Lake City, Chicago, Miami, and Atlanta. In addition, there are 18 separate offices within the SEC, including: the Office of Administrative Law Judges, the Office of Administrative Services, the Office of the Chief Accountant, the Office of Compliance Inspections and Examinations, the Office of Economic Analysis, the Office of Equal Employment Opportunity, the Office of the Executive Director, the Office of Financial Management, the Office of the General Counsel, the Office of Human Resources, the Office of Information Technology, the Office of the Inspector General, the Office of International Affairs, the Office
of Investor Education and Assistance, the Office of Legislative Affairs, the Office of Public Affairs, the Office of the Secretary, and the Freedom of Information and Privacy Act Office. Approximately 3,100 employees make up the staff of the SEC, and while the overall number of employees is small compared to other executive branch agencies, the number of divisions and offices within the SEC mirrors the overall complexity of the federal bureaucracy. Today, in a global economy, the Securities and Exchange Commission plays an expanded role in economic regulation and oversight. The role of the SEC today remains one of regulating the securities markets and protecting the interests of investors, but it now has a more global reach and interest. Following the scandals of companies such as Enron and WorldCom, Congress enacted legislation in 2002 (the Sarbanes-Oxley Act) to increase the policing powers of the SEC. See also executive agencies. Further Reading Geisst, Charles R. Visionary Capitalism: Financial Markets and the American Dream in the Twentieth Century. Westport, Conn.: Praeger, 1990; Matthews, John O. Struggle and Survival on Wall Street: The Economics of Competition Among Securities Firms. New York: Oxford University Press, 1994; Samuelson, Paul A., and William D. Nordhaus. Economics. 18th ed. Boston: McGraw-Hill Irwin, 2005; U.S. Securities and Exchange Commission Web page. Available online. URL: http://www.sec.gov; Wolfson, Nicholas. Corporate First Amendment Rights and the SEC. New York: Quorum Books, 1990. —Michael A. Genovese
selective service system
Nations exist in an international setting that is characterized by nascent rules and regulations that guide and direct the behavior of states. These international laws and regime norms do not form a superstructure of the rule of law, but they are often strong inducements guiding behavior in the international arena. However, there is no established and enforced international system of rules and laws that binds nations and forms a robust and enforceable body of international law. The term international community presupposes global governance and international laws that bind
nations together in a mutually beneficial set of binding laws, but anarchy is more characteristic of the international environment than are rules and regulations that are abided by and enforced by a superstructure of a United Nations or an accepted mechanism of international law. In this setting, nations feel that they are threatened and rely on self-enforcing mechanisms such as treaties, alliances, and their own military and economic power to protect themselves from potentially hostile nations. This state of affairs has induced nations to establish standing armies and to develop weapons systems designed to deter other nations and to protect themselves from potential adversaries. One of the chief means of deterrence is to have a large standing army, trained and prepared for combat. Nations do this because, in the absence of a defined and enforceable system of international law, each nation feels responsible for its own national defense. In a world where power or the threat of power often helps to determine international behavior, nations feel that they must arm themselves and be prepared for war in the hope that this preparation will lead to peace. Because, in a world characterized by an emerging but limited set of international laws and norms, each state must, in effect, fend for itself (there are, of course, alliances and coalitions of states), the possibility of using force is always on the table. Thus, nations form military, militia, or police forces designed to protect and defend the nation in a world often characterized as anarchistic. There are essentially two ways to ensure a large standing army: an all-volunteer force or a conscripted army. The United States has had both. Depending on the times and the demands of those times, the United States has employed all-volunteer forces but during major conflagrations has often resorted to drafting or conscripting an army. Compelling the youth of the nation (usually the male youth) to serve in the military is common in many nations, but it asks a great deal of citizens to sacrifice several years of their lives and potentially to face death at the hands of the enemy in defense of the nation. In the early days of the republic, the nation neither needed nor wanted a large standing army. The framers of the nation feared that a large standing army would encourage military and imperialistic adventurism, and thus warned the new
nation to avoid possessing large armies. But as the nation developed, many argued that a larger military was necessary to protect U.S. economic enterprises abroad and to protect the security of the nation from external threats. But it was not until after World War II that a truly large and permanent military was deemed necessary for the nation. After the Second World War, the United States became, more by default than by design, the dominant or hegemonic power of the West, and with the start of the cold war against the Soviet Union, the need for a large standing army became more pronounced than ever. Still, the threats and warnings of the framers haunted policy makers in this era of superpower competition. When, more than a decade after the end of the cold war in 1989, the war on terrorism engulfed the nation, policy makers were once again persuaded to place emphasis on security and military affairs, and the need for a large standing army again became pronounced. But how, in a democracy, with free people, can you persuade the youth of the nation to give up several years of their lives, and perhaps their lives, in serving the nation in the military? During the war in Vietnam the nation relied on a conscripted army, but as protesters questioning the legitimacy and the morality of the war took to the streets, President Richard Nixon went from a conscripted army to an all-volunteer army. This helped stem the tide of protest but only postponed the debate concerning the best type of army for a democracy to have in a dangerous world. With the war against terrorism, this debate flared up again, and when the military could not meet its recruitment goals amid a failed and unpopular war in Iraq in 2005–06, the question of what type of military was best for a democracy that was also the world’s only superpower again came to public and policy-makers’ attention. The Selective Service is the federal agency that administers the draft and the laws relating to military registration and service. Currently, with an all-volunteer army, all males between the ages of 18 and 26 are required to register with the selective service for possible induction into the military service. Only the Congress has the legal authority to call citizens into service, but during the Civil War, President Abraham Lincoln, without the consent of Congress,
called up soldiers into the service of the nation to fight the war. During the war in Vietnam, the selective service became a symbol of the injustice of the war and a target for protests. The selective service system came to represent nearly everything that was deemed wrong with the United States. As the war in Vietnam dragged on and as protests against the war accelerated, the selective service system became the focal point of much of the protest. It was not uncommon to protest or to hold demonstrations outside a draft board or selective service office, and at times these protests degenerated into violence, as police sometimes clashed with protesters. As a powerful symbol, selective service was a rallying point that drew protesters against the war in Vietnam. When, in the middle of the war, President Richard Nixon shifted from a conscripted army to a lottery system and an all-volunteer army, he helped water down the complaints and the protests about the selective service system. Some of the protesters who were now assured that they would not be called into military service dropped out of the protest movement, and some of the movement’s momentum and fervor were diminished. As the president continued to wind down the war, the absence of the selective service system as a target allowed the president a bit of political breathing room. In 2004–05, as the war in Iraq dragged on and the ranks of the military dwindled below necessary levels, there was renewed talk of reviving the draft and employing the selective service system as a way to boost enlistment, but renewing the draft was such a contentious political issue that even if the government felt a need to do so, it would have been hard-pressed politically to make the case. As soon as trial balloons were sent up, they were shot down, and it became politically too difficult to reinstate a draft. This put the George W. Bush administration in a difficult position: It was engaged in a robust and global war against terrorism and it had ambitions to expand that war, but the war in Iraq was not going well, enlistments were down, it was losing political support in the nation, and it did not have the troops to “take care of business.” While reinstating a draft may have been the preferred goal, the administration was in too weak a position politically even to try to do this, and as each new trial balloon was shot down, the adminis-
tration became more and more aware of just how unpopular renewing the draft would be. While, today, the selective service system attracts little attention and little protest, if the nation were to change to a conscripted military, clearly, the selective service system would again be an attractive target for protests. Further Reading Kessinger, Roger. U.S. Military Selective Service Act. Boise, Idaho: Kessinger Publishing, 1991; U.S. Congress, Senate Judiciary Committee. Selective Service and Amnesty: Hearings Before the Subcommittee on Administrative Practice. Washington, D.C.: United States Congress, 1972. —Michael A. Genovese
signing statements
Constitutionally, the president has limited powers, especially when compared to those powers enumerated for Congress, the more powerful government institution. Article I of the U.S. Constitution grants to the Congress all legislative power (subject to a potential veto by the president that the Congress has the authority to override)—the power of the purse; the power to declare war, raise armies, regulate commerce; and a host of other powers that make the Congress constitutionally the most powerful branch of government. Article II, the executive article, reserves surprisingly few independent powers for the president. The president shares most of his powers with the Senate and in purely constitutional terms is rather limited and weak compared to the Congress, and yet, public demands and expectations place a great deal of hope on the president to solve the nation’s problems. Thus, there is a power/expectation gap: The president’s powers are limited but the demands or expectations on the office are enormous, and given that presidents are judged virtually every day of the year by poll after opinion poll, it should not surprise us that presidents are especially sensitive to fluctuations in their popularity and attempt to do things—from image management to achieving policy results—that will elevate their stature and give them higher popularity ratings. But not all presidents can achieve this goal.
Presidents search for ways to close this power/expectation gap. They seek ways to expand their limited power to meet the excessive demands of the public. President Thomas Jefferson added the leadership of a political party to the presidential arsenal; President Andrew Jackson added the claim that he was the voice of the people; President Abraham Lincoln added the crisis/war authority to the office; President Teddy Roosevelt focused on using the “bully pulpit” in his search for added powers; President Woodrow Wilson added the leadership of international opinion, or world leadership, to the presidential tool kit; and President Franklin D. Roosevelt added the managerial revolution to the office. Other presidents added other powers to the arsenal of presidential authority, and at times the Congress fought back—sometimes with success but more often with disappointing results for congressional power and authority. Overall there has been a trend towards more, not less, presidential power. Over time, presidents have used a “scaffolding” approach whereby each constructed a new layer or level of power that a future president might use, thus enlarging the office by adding scaffolding to the old edifice of power. In recent years, a new source of authority has been claimed by presidents: the signing statement. This device has not been tested in the courts, and thus we do not know if it will stand the test of constitutionality (it probably would not, but one cannot be certain), and so for now, it is a contested and potential source of added power for the president, one that in the coming years is likely to face a court test to determine its legitimacy. In using signing statements, presidents hope to have a greater influence over how laws are interpreted and implemented. The signing statement, as it has emerged in recent years, is a type of “line-item veto” (the line-item veto was approved by Congress but was declared unconstitutional by the U.S. Supreme Court during the Clinton presidency). The signing statement has no foundation in law or statute but is a claimed right (implied or inherent) of the president both to express his own views of what a law means and to direct how the agencies of the federal government should implement said law. Initially, these signing statements were of little concern and were largely ceremonial and noncontroversial, but in recent
years, they have taken on a new, more political, and policy-related dimension as presidents attempted to alter the meaning of laws passed by Congress by using these signing statements as a way of nudging legislative interpretations in the president’s desired direction. Customarily, when a president signs a significant piece of legislation into law, he or she will issue a signing statement giving reasons for supporting the bill. In the 1980s, President Ronald Reagan began to use signing statements in a new and more controversial way, directing agencies to implement the new law in a manner described in the signing statement. On occasion, this interpretation was at variance with the intent of the Congress. Reagan thus enlarged the administrative presidency and used it to influence policy, going around the Congress. It was part of a bigger strategy by the Reagan administration to use administrative or managerial devices to achieve policy ends when it was not successful at obtaining Congress’s approval. During the presidency of George W. Bush, the signing statement took another turn as President Bush used these statements, on occasion, to announce that he would not be bound by the bills, or portions of the bills, that he was signing into law. This unusual interpretation, in effect, gave the president a line-item veto, something that the Supreme Court struck down as unconstitutional during the Clinton years (Clinton v. City of New York, 1998). Where the Reagan signing statements were a new use of presidential power, they were not as bold as the Bush signing statements, and thus, where Reagan was able to “fly below the congressional and public radar screens,” the Bush administration’s bold and controversial use of signing statements, initially under the radar screen, became front-page news when Bush used these statements to undermine and reverse congressionally authorized policies. In his first term, President Bush never vetoed a single piece of legislation. He did, however, issue 505 constitutional challenges or objections, via signing statements, to bills which he then signed into law. Often, these objections were relayed to the executive agencies that were responsible for implementing the laws, along with directions on how these agencies were to follow the instructions of the president, even when these instructions seemed to go
against the laws as written by Congress. A series of reports in The Boston Globe brought signing statements to the attention of the public, and, in recent years, there has been a great deal of congressional and scholarly examination of the utility as well as the legality of signing statements. Some of the Bush administration signing statements dealt with rather mundane and noncontroversial matters, but on occasion, these signing statements dealt with significant and controversial issues, as when President Bush signed the defense appropriations bill but announced in his signing statement that his administration was not bound by the law’s ban on torture. In the war against terrorism, the administration engaged in what critics called the “unacceptable use of torture” to attempt to gain information from suspected terrorists. Senator John McCain (R-AZ), a former prisoner of war in Vietnam, led a congressional effort to ban the use of torture by the United States and its agents. President Bush lobbied Congress to not attach the torture-banning amendment to the Defense Appropriations Bill, and the vice president, Dick Cheney, met with McCain several times to try to persuade the senator to drop the amendment—the president even threatened to veto the bill if it contained the torture ban—but McCain refused to back down, and the Congress did pass the amendment, making it a part of the appropriations bill. The president, rather than veto the bill, signed it into law, but he included with his signature a signing statement in which he announced that he was not bound by any provisions in the bill which he believed inappropriately restricted the authority of the president to fight the war against terrorism. Thus, Bush claimed a type of line-item veto of bills he signed, something the U.S. Supreme Court had already banned as unconstitutional. Critics charged a presidential sleight-of-hand in this, but the president claimed that the Congress did not have the authority to restrict the commander in chief in a time of war and further said that he would not be bound by the law that he had just signed. It set up a potential constitutional showdown between the president and Congress over the issue. Scholars and legal experts immediately weighed in on the constitutionality of the signing statement as interpreted by the Bush administration. To some extent, there was a partisan element in the evalua-
tions of the Bush signing-statement logic, with critics of the Bush policies lining up against the president and his allies and friends backing the president, but once the partisan brush was cleared, the consensus that emerged was that the president—this one or any other—does not have the authority to apply selectively only those laws with which he or she agrees and to ignore provisions in the law with which he or she disagrees. The president’s constitutional remedy is to veto legislation with which he disagrees. He has neither the line-item-veto option nor the signing-statement option to shape the meaning of the law. The president is constitutionally obligated to “take Care that the Laws be faithfully executed” (Article II, Section 3 of the U.S. Constitution)—even when he or she disagrees with certain provisions of the law. The president is but one branch of the three-branch system of government of the United States, and he or she can no more ignore laws than can the average citizen. In the United States, no one is above the law, and it is not up to the president alone to determine what is and is not the law. It is a decidedly shared responsibility, and the president must veto legislation to which he or she objects (for whatever reason). He or she cannot merely announce that he or she does not like this or that provision of a law and that, therefore, he or she is not bound to follow or faithfully execute that law. There was a time, during the golden age of kings, when the king was judge, jury, legislator, and executive—the entire package. The United States fought a revolution to overthrow that system and replace it with a republican government based on the rule of law. In that system, the president cannot adhere selectively to some laws and violate others. The president’s advocates claimed that in wartime, the rules change and that the president possesses a greater reservoir of powers, and while that may be politically true, it is constitutionally suspect. Nowhere in the Constitution is the president given any extra or prerogative powers in crisis or war. Thus, the claim that the modern presidential use of signing statements has a statutory or constitutional basis is unfounded. Is there a constitutional or statutory source for these signing statements? No. The authority, if it exists at all, may be an implied power deriving from the president’s “executive authority,” but even here, claims of a legal basis for the signing statement are very weak, especially if they negate the intent of
Congress as expressed in a legitimate piece of legislation that obtains the president’s signature. How, after all, can the president sign a piece of legislation into law and, at the same time, argue that it is not the law? The presidential claim of signing-statement authority is undermined further by the Supreme Court decision that the line-item veto is unconstitutional. Further, in July 2006, a task force of the American Bar Association, empowered to report on the alleged legality of these signing statements, concluded that they are “contrary to the rule of law and our constitutional system of separation of powers,” and yet, the signing statement awaits a true constitutional test before the Supreme Court, and the issue will not be settled until that time. In fact, this issue may even remain unsettled after a Court decision. In a system of shared and overlapping powers, few constitutional issues or questions are ever settled for good. That is part of the messiness, as well as the beauty, of a constitutional democracy in action. Further Reading Cooper, Phillip. By Order of the President. In The Presidency and the Constitution: Cases and Controversies, edited by Robert J. Spitzer and Michael A. Genovese, New York: Palgrave Macmillan, 2004; “Examples of the President’s Signing Statements,” The Boston Globe, April 30, 2006; Garber, Marc N., and Kurt A. Wimmer. “Presidential Signing Statements and Interpretations of Legislative Intent.” Harvard Journal on Legislation 24 (1987): 363–395; Rogers, Lindsay. “The Power of the President to Sign Bills after Congress Has Adjourned.” Yale Law Journal 30 (1920): 1–22. —Michael A. Genovese
solicitor general, U.S. The solicitor general is the third-ranking official in the U.S. Department of Justice, following the U.S. attorney general and the deputy attorney general. Outside of federal judges, the solicitor general is the most important legal position in the government, and it is the only higher-level federal post that statutorily requires the occupant be “learned in the law.” He or she is appointed by the president and must be confirmed by the Senate. The solicitor
general serves at the pleasure of the president and can be removed whenever the chief executive so chooses. The office of solicitor general was created in 1870, and there have been a total of 44 individuals who have served as solicitor general. Among members of the legal community, the office of solicitor general is considered to be one of the most prestigious and wellregarded legal establishments for which an attorney can work. It is a relatively small entity consisting of approximately 25 to 30 lawyers and additional administrative staff. Several former solicitors general have gone on to become U.S. Supreme Court justices (for example, William Howard Taft, Stanley Reed, Robert Jackson, and Thurgood Marshall), and one became president and then Chief Justice of the United States (William Howard Taft). Many nationally prominent lawyers have served in this position, such as John W. Davis, Erwin Griswold, Archibald Cox, Robert Bork, Kenneth Starr, Walter E. Dellinger III, and Theodore Olson. This post has often been referred to both as “the lawyer for the executive branch” and “the 10th Supreme Court justice.” The most important responsibilities of the solicitor general consist of the following: directly arguing cases before the Supreme Court when the U.S. government is a party to a case; deciding the cases lost by the federal government in the district courts that will be appealed to the courts of appeals; and deciding the cases lost by the federal government in the courts of appeals that will be appealed to the Supreme Court. Thus, the solicitor general is the gatekeeper for all appellate litigation involving the federal government because it is this office that controls all the decisions to litigate on the part of the U.S. government. The solicitor general or his or her deputies serve as the legal advocate for all cases where officers or agencies of the federal government are parties before the Supreme Court. With a few exceptions, the solicitor general directs and must approve personally any appeal by the federal government to the Supreme Court. The office of the solicitor general also has the duty of supervising all litigant briefs and amicus curiae (friend of the court) briefs that the federal government files with the Supreme Court. These amicus curiae briefs are submitted by the solicitor general either in support of or in opposition to the positions of parties who are
requesting review by the Court or who already have cases being heard before the Supreme Court. Due to its close and unique relationship with the Supreme Court, the office of solicitor general holds a special place in its various interactions with the justices. Scholarly studies have clearly shown that this special relationship manifests itself in the justices being quite likely to agree to hear cases where the U.S. government has filed a petition to the Supreme Court requesting review (formally known as requesting a writ of certiorari from the Supreme Court, from the Latin “to be informed”). The Supreme Court accepts at least 70 percent of the cases where the solicitor general is involved as the petitioning party. Furthermore, and reinforcing this deferential trend of the Supreme Court toward the solicitor general, when the federal government is one of the two litigants in a case, the government is more likely to win than lose at the final outcome when the Supreme Court renders its binding “decision on the merits,” as it is officially known. Empirical studies indicate that the side of a dispute that the solicitor general contributes amicus curiae briefs on behalf of has an increased likelihood of winning at the final outcome as well. These findings raise the question of what explains this success of the solicitor general with the Supreme Court. Several theories have been posited by scholars to explain these patterns. One school of thought argues that the extensive litigation experience of the solicitor general and related staff before the Supreme Court best explains the high success rates. Supreme Court analysts commonly refer to litigants who have appeared before the Court in a variety of cases as “repeat players”—and the theory goes that the solicitor general is the archetypal repeat player and uses that experience to his or her advantage. The office of solicitor general has a record of appearances before the Supreme Court that clearly is unequaled by any other litigant. This considerable background and history with the Supreme Court has allowed the office to build an extraordinarily high level of expertise in terms of crafting effective legal briefs (both litigant and amicus curiae at the application stage and full-hearing stage) and putting forth efficacious oral arguments, particularly compared to other litigants. Another theory contends that the solicitor general represents the federal govern-
ment and that the Supreme Court would be loath to ignore the wishes and preferences of such an enormously important entity in the governance and life of the nation. A third theory asserts that the solicitor general is strategic in choosing the cases to litigate directly or to support indirectly by amicus, and thus he or she decides based on the probability of success in the Supreme Court. In other words, the solicitor general assesses and tactically calculates the potential “lay of the land” of the Supreme Court with a particular case and then decides accordingly whether or not to try to advance it. These selection strategies operate to reinforce the tendency toward success for the solicitor general. A fourth theory focuses on the distinctive relationship between this office and the Supreme Court and the faith that the justices have in the solicitor general to help them manage their caseloads efficiently. Distilled, the justices utilize the solicitor general as an important prioritizing filter to help separate cases that deserve attention from those that do not. The agenda space for the Supreme Court is highly constrained, and the justices essentially assume that the government’s attorney will not waste such space on insignificant cases or less-than-compelling disputes. The solicitor general serves at the pleasure of the president and can be removed from that post at the president’s discretion for whatever reason. The average tenure in office for a solicitor general is approximately two years. A solicitor general can be fired summarily and let go from the administration or, alternatively, elevated by the president to an even more important and prestigious position in the government, such as a seat on the Supreme Court. This need for the solicitor general to keep his political patron, the president, happy with his work and performance, coupled with his role as an “officer of the court,” can lead to a potentially challenging tension for the solicitor general in the form of role strain. Put simply, the solicitor general may be pulled at times in opposite directions as he or she tries to fulfill both the job of advocate for the administration’s policies on the one hand and, on the other, the duties of a legal professional on whose judgment and work the Supreme Court (and the Congress at times as well) depend heavily in a variety of ways. The challenge here for
the solicitor general is the identification of the “client” whom he or she is serving and of how that client’s interests should drive his or her actions. Resolving these questions for a solicitor general necessarily influences the reputation of that office in the eyes of the other leading political actors in the system. A prime example of this challenge of role strain and how excessive politicization of the office of the solicitor general can lead to disruption of important relationships was found in President Ronald Reagan’s administration. The Reagan White House wished to roll back what it considered to be ill-advised liberal policies, programs, and Supreme Court decisions. Along these lines, scholarly analysis has noted that the Reagan administration quite aggressively utilized the office of solicitor general in the attempted advancement through the courts of its conservative policy agenda. The use of the solicitor general in this more politically forceful way constituted an alteration from the traditional orientation of nonpartisan restraint historically seen in this office. This aggressive pushing of Reaganite policies worked to reduce the faith and the trust that the Supreme Court had commonly held in the solicitor general’s judgment, recommendations, and legal arguments. Subsequent presidents recognized this problematic shift and loss of credibility of the solicitor general and, as a result, generally have retreated from this type of partisan strategy with the solicitor general. This type of tension is endemic to lawyers at all levels. Who is the exact master whom they are to serve—the immediate client, the court, the law, or notions of justice—and how best to resolve the conflicting pressures stemming from those considerations? For the solicitor general, it is even more of a challenging question to resolve due to their myriad responsibilities and the prominence of their post. In the cases before them, do they slavishly follow the specific preferences of the president and executive branch officers and agencies, or should they be guided more by their broader obligations in terms of the Supreme Court, the Congress, the law, or the people? Analysts contend that this ongoing tension between law and policy, and where that line is to be drawn, is structured effectively by the legal professionalism of the members
of the office of the solicitor general as well as by the legal processes they must follow. In other words, as this argument goes, by virtue of their professional and legal obligations, the solicitor general generally functions to moderate the political extremism that is found commonly in all presidential administrations. Further Reading Baum, Lawrence. The Supreme Court. 8th ed. Washington, D.C.: Congressional Quarterly Press, 2003; Caplan, Lincoln. The Tenth Justice. New York: Vintage Books, 1988; Epstein, Lee, and Thomas G. Walker. Constitutional Law for a Changing America: Rights, Liberties, and Justice. 5th ed. Washington, D.C.: Congressional Quarterly Press, 2004; Pika, Joseph A., and John A. Maltese. The Politics of the Presidency. Rev. 6th ed. Washington, D.C.: Congressional Quarterly Press, 2005; Salokar, Rebecca Mae. The Solicitor General: The Politics of Law, Reprint. Philadelphia: Temple University Press, 1994. —Stephen R. Routh
State of the Union Address
Presidents love State of the Union Addresses and for good reason: They are guaranteed a national audience on a national stage and a moment when their leadership is on display for all to see. They stand before the mighty—the full Congress, the U.S. Supreme Court, the Joint Chiefs of Staff, and most of the cabinet—as one who is mightier still, speaking as only the president can, in his capacity as both head of state and head of the government, to sustained and thunderous applause. Presidents also undoubtedly dread State of the Union Addresses and for many of these same reasons. The magnitude of the event also magnifies any errors, real or imagined; there is an ever-present threat of delivering an obviously designed applause line that is received in stony silence; and finally, the faces of friend and foe alike will also be televised, sometimes looking contemptuous, sometimes merely bored. More often than not, however, presidents use this occasion to their advantage. Think of Ronald Reagan, who used his first State of the Union Address in 1982 to focus on the national economy, setting the agenda
for his first term and incidentally beginning the practice of referring to exemplary people in the gallery as illustrations of his points. Think of Bill Clinton, who, by refusing even to acknowledge the Lewinsky scandal, effectively ended the debate over whether his presidency was, for all intents and purposes, over. The State of the Union Address is one of the few actions (nominations are another) and the only speech required by the U.S. Constitution. (Article II, Section 3 states that the president “shall from time to time give to the Congress Information of the State of the Union, and recommend to their Consideration such Measures as he shall judge necessary and expedient.”) George Washington began the practice of supplying this information in the form of a yearly message. Thomas Jefferson, a notoriously poor public speaker, discontinued this practice, submitting the annual message, as it was then called, as a written document. Woodrow Wilson revived the annual message as a public speech in 1913, and most presidents since have delivered it orally. Calvin Coolidge was the first president to have his speech broadcast via radio in 1923, and Harry Truman’s 1947 State of the Union was the first one televised. In 1936, Franklin D. Roosevelt moved the speech from day to evening so that more people could listen. The U.S. Constitution does not mandate the scheduling of the address, but it is now usually given in January or early February. When the president was inaugurated in March, the State of the Union was generally delivered in December, and more annual messages were given in December than in any other month. At the president’s request, Congress sends an invitation to the White House (the president may not appear on the floor of Congress without an invitation), which the president then accepts, and the speech is scheduled. These speeches are examples of how the federal government’s informal party system and formal separation of powers work. As the president has increased his role in the legislative process, the State of the Union address has become more significant, as it is the one public occasion where the president can articulate his legislative goals and agenda formally, and Congress, through its applause (or lack thereof), can publicly demonstrate its support or opposition to the president. The party that does not occupy the White House also now follows the State of the Union
President George W. Bush delivers the annual State of the Union address (Getty Images)
Address with a televised response to the speech, given by a rising star or a presidential hopeful, in which the out party makes its case to the U.S. public. While the television audience for that response is generally lower than that for the State of the Union Address, the media coverage of the opposition speech allows the out party to voice its concerns and articulate its preferred agenda. The State of the Union Address, as it has been called since the middle of Franklin D. Roosevelt’s administration, is always given in the House chamber, reflecting its importance as a message given through Congress to the people of the United States. Members of the House, the Senate, and the Supreme Court all attend, as do all but one member of the president’s cabinet (one member remains in a secure location to provide leadership in case of catastrophe). It is the only time when all members of all three branches of government appear together.
The State of the Union Address has always been considered important. Historian Charles A. Beard called it our “one great public document,” but since the beginning of the 20th century, it has increased in importance and in the amount of attention given to it. Presidents may reserve the announcement of major legislative initiatives for the State of the Union— Harry Truman, for instance, called for ending discrimination against African Americans in an annual message, and Lyndon Johnson declared “War on Poverty” in 1964. They are also a chance for presidents to introduce symbolic elements, as when Franklin D. Roosevelt announced the “Four Freedoms” in his 1941 address or when Bill Clinton declared that “the era of big government is over” in 1996. Media attention is commensurate with the speech’s policy and symbolic importance. The State of the Union is now carried live on all major networks, and the media speculate for days in advance on the content of the speech and the reception it is likely to
receive. While the actual audience for the speech is relatively low and may indeed be diminishing, the substance of the address receives so much media attention that many scholars believe the indirect effect remains strong. At a minimum, the president’s public approval will usually rise after the speech, although there is very little evidence that these speeches are successful in changing public opinion on specific issues. These addresses are often considered long, but the longest predates television—in 1946, Harry Truman’s speech was some 25,000 words; the shortest remains George Washington’s 833 word speech, given in 1790. Most State of the Union Addresses run at least an hour long and average several thousand words. The State of the Union Address is an important link between the executive branch and the legislative branch of government. It has a major impact on policy making, for the address details the president’s legislative priorities for the coming year and gives the Congress a “laundry list” of items to which they may—or may not—choose to respond. While some scholars argue that presidential speech in general and the State of the Union in particular are limited in their ability to reach or influence the mass public, others argue that there is evidence that presidents who can garner public support through such activities as the State of the Union Address are more likely to succeed in Congress than presidents who are less successful in earning public support for their policies. Karlyn Kohrs Campbell and Kathleen Hall Jamieson provided a starting point for studying the content of Annual Messages and State of the Union Addresses by identifying generic characteristics. State of the Union Addresses can be characterized by three processes: (1) public meditations on values, (2) assessments of information and issues, and (3) policy recommendations. In the course of executing these three processes, presidents “also create and celebrate national identity, tie together the past, present, and future, and sustain the institution of the presidency.” Campbell and Jamieson note further that the messages they studied (1790– 1989) were typically conciliatory in tone because they sought cooperation with Congress to enact legislation.
Even presidents who are considered poor communicators take advantage of these occasions. In speaking as the sole voice of the nation, presidents also become in effect our national historians, reflecting on the past in ways that allow them to define and shape both the present and the future. Generally speaking, these speeches change slightly during the course of any presidential administration. Those that are given early in a president’s term focus on goals and are, unsurprisingly, forward looking. Those given near the middle of an administration look both forward and back, as the president seeks to offer a unified and coherent view of his or her time in office. Those given at the end of a presidency recapitulate the actions of the administration as presidents make arguments for their place in history. The final State of the Union may serve as a farewell address, in which the president steps away from the role of president and offers “nonpolitical” wisdom to the audience. These speeches have changed in time as well, with 20th-century addresses being more anti-intellectual, more abstract, more assertive, more democratic, and more conversational. As the executive branch seeks more primacy over the other branches of government, this is reflected in State of the Union Addresses. Presidents increasingly avoid conciliatory language, and have become more assertive. Contemporary presidents also seek to use these speeches as a way of sharpening their identification with the people of the United States and heightening their role as the sole voice of the nation and the people’s representative. In this, they are a prime example of the rhetorical presidency or the tendency of presidents to take their case directly to the people, hoping to encourage congressional action through the vehicle of public support. Many presidents have found that this is more effective when they take aggressive stands against either Congress or the federal government as a whole. These speeches are powerful affirmations of the strength and power of the presidency and of the nation as a whole—especially since they now garner international as well as national audiences and commentary. Surprisingly, there are few memorable moments in these addresses, as presidents seek to focus on the speech’s deliberative function—the details of policy—rather than aspiring to the poetic or
inspirational language more associated with ceremonial occasions. As deliberative speeches, they generally involve a brief articulation of national values (such as freedom or equality) and seek to connect those values to the president’s preferred policy choices. Presidents thus use these occasions to articulate their sense of the national mission and the national destiny and to connect that understanding to specific political actions. There is some fear that the State of the Union Addresses both add to and exemplify dangerous levels of presidential power in that they allow us to believe that governmental power is placed largely in the hands of the chief executive to the exclusion of the other branches—who are reduced to being merely another audience to the president’s speech making. Too much focus on the president can be misleading, for the executive branch must work in concert with the Congress. These speeches are also powerful reminders of that fact as well, for all of the attention the president receives on these occasions. For the president appears before Congress to request action in the form of legislation; he cannot demand it. Thus, these speeches can also be considered profoundly democratic. For to the extent that democracy requires links between the actions of public officials and the citizens they serve, such links are only provided through the words of those officials given to a public that listens attentively. See also bully pulpit. Further Reading Campbell, Karlyn Kohrs, and Kathleen Hall Jamieson. Deeds Done in Words: Presidential Rhetoric and the Genres of Governance. Chicago: University of Chicago Press, 1990; Cohen, Jeffrey E. Presidential Responsiveness and Public Policy-Making: The Public and the Policies That Presidents Choose. Ann Arbor: University of Michigan Press, 1997; Edwards, George C., III. On Deaf Ears: The Limits of the Bully Pulpit. New Haven, Conn.: Yale University Press, 2003; Kernell, Samuel. Going Public: New Strategies of Presidential Leadership. 3rd ed. Washington, D.C.: Congressional Quarterly Press, 1997; Lim, Elvin T. “Five Trends in Presidential Rhetoric: An Analysis of Rhetoric from George Washington to Bill Clinton.” Presidential Studies Quarterly 32
(2002); Teten, Ryan L. “Evolution of the Modern Rhetorical Presidency: Presidential Presentation and Development of the State of the Union Address.” Presidential Studies Quarterly 33 (2003): L 333–346. —Mary E. Stuckey
transitions
How well (or poorly) a new president starts her or his term in office goes a long way in indicating how well her or his presidency will go. A president who starts off well can seize the initiative, establish her or his agenda as the public conversation for the nation, keep opponents at bay, and signal that she or he is a political force with which to be reckoned. Experts believe that it is important to start well, to “hit the ground running,” as presidential scholar James P. Pfiffner has said, and not to stumble out of the presidential starting gate (or “hit the ground stumbling”). The good start often begins before a president even takes office—in the transition. A good transition often leads to taking office on good terms, to a smooth “honeymoon” period, and to a successful first “Hundred Days,” and these ingredients often lead to a more successful presidency. Therefore, starting off on the right foot is a key to successful presidential leadership, and much of the hard work in preparing to govern takes place during the transition period. Most presidential insiders know that it is in the beginning of a president’s term that she or he often has the most leverage or political clout with which to gain compliance in the separated system of U.S. government. It is at this time that opponents handle the new president more gently as they take her or his measure, when the Congress is still trying to see how the new president will lead, when the public is optimistic that she or he will solve many of the nation’s problems, and when the news media goes a bit easier on her or him. Presidents tend to propose more key new pieces of legislation in the early period and are more likely to meet with legislative success in the early days of a new administration. This is what makes the transition to office so central to successful governing. The transition is the relatively short time between the presidential election and the taking of the oath of office. The election is held in early Novem-
ber, and the oath of office is taken in mid-January of the following year. Thus, the president-elect has only a very limited time—roughly 10 or 11 weeks—in which to make a variety of important decisions that will have a dramatic impact on her or his ability to govern. The switch from campaigning to governing is a taxing one, and many administrations fail to shift successfully from campaign mode to governing mode. After a usually hard-fought and exhausting campaign, one in which the focus is on one thing and one thing only—winning—the president-elect and her or his top transition team must immediately shift gears and return to work with barely time for a rest. In the roughly 11-week period between election and assuming office, the president-elect must make some of the most important decisions that will have a profound impact on her or his ability to govern. A cabinet must be selected, as must a top staff and a White House Chief of Staff; legislative and budget priorities must be set; a management style must be decided on, as must an approach to foreign policy, a strategy for governing, and much, much more. If a new president and her or his team make significant errors in this period, they may never be able to recover. If the president’s team appears confused and is unable to dominate the agenda, they may not have a second chance to take command of the government. Therefore, the work done in the brief period known as the transition may well determine the scope and the nature of power for the entire presidential administration. That is why each new administration takes the transition so seriously—because it is so serious. The financial cost of the transition has risen dramatically during the past several decades. In 1952, Dwight D. Eisenhower had a transition team of about 100 at a cost of approximately $400,000. By 1968, Richard Nixon’s transition team cost about $1.5 million. In 1988, George H. W. Bush’s transition team numbered about 300 and cost more than $3.5 million. In 2000, the transition to office of George W. Bush was abbreviated due to the contested nature and uncertain outcome of the 2000 presidential election. Although the transition was shorter than most, the Bush team, led by vice president-elect Dick Cheney, did a very effective job in a short space of time in putting the pieces of the new administra-
tion together. Much of the cost of the transition is paid by taxpayers, but in recent presidential transitions, the party or the candidate has added to the funds available for transitions, making them even more costly but also, at times, more highly professional. In 2000, the transition cost approximately $8.5 million, with about half provided by the U.S. government. Timing is a key to successful politics. The “when” of politics matters greatly: When major legislation is introduced, when the public is ready to accept change, when the Congress can be pressured to respond favorably to the president’s initiatives, when the president leads and when she or he follows, when to push and when to retreat. A sense of political timing, part of the overall “power sense” that all great leaders have, can help a president know when to move and when to pull back, when to push and when to compromise, and when to attack and when to bargain. As should be clear, a good start is a key element of political success, and during the transition, some of the most important work of the administration is done. In this preparation stage before taking office, the president-elect lays the groundwork for much that is to follow and sets the tone that shapes the way others see the new administration. Much like the journalist’s key questions, these five W’s (who, what, when, where, and why) and an h (how) of politics set the stage not only for how the administration will operate but also for the way the public, Congress, and the news media will view and respond to the new administration. Presidential transitions have differed dramatically in style, approach, and success. The Bill Clinton transition was slow, awkward, and drifting. This was seen as a weakness and was seen as an opening by the Republicans and the news media to jump on the new president earlier than is customary. Thus Senate minority leader Bob Dole (R-KS) led a highly orchestrated series of early assaults on the new president, designed to undermine his leadership. Likewise the news media, sensing disarray and wishing to show bipartisanship after Republican attacks on the media, jumped all over themselves in an effort to find negative things to say about Clinton. This seriously wounded the new president and made governing that much more difficult. Even members of his own party
tried to “roll” the president as they too saw him as weakened and vulnerable. The transition of George W. Bush was highly unusual due to the bizarre nature of the postNovember presidential indecision regarding just who the new president would be. As the November election did not determine—at least not right away—a winner of the contest, a postelection battle ensued over the ultimate outcome of the contest. When, more than a month later, George W. Bush was finally declared the winner, he had precious little time to engage in the normal type of transition, but in this, Bush was aided by the experienced hand of his vice president-elect, Dick Cheney, an old Washington, D.C., insider. Cheney orchestrated a brief but fairly smooth transition, put the cabinet and top staff in place, and got the new administration off to a good start. This allowed Bush to start his term on the right foot and gave him the opportunity to focus attention on his key legislative priorities: tax cuts and education reform. The transition is thus important because, as James P. Pfiffner points out, “power is not automatically transferred, but must be seized.” Pfiffner further notes that while the authority of the presidency is transferred with the presidential oath of office, the power of the presidency must be earned, worked for, fought for, and won. Thus Pfiffner suggests that to seize power, presidents must adopt a strategic approach to the transition, one that is highly organized, strategic, leaves little to chance, and deals self-consciously with seizing and using of political power. To fail in this endeavor is to fail to govern effectively. The presidential transition is a key to determining the success or failure of a new administration. Getting off to a good start is essential, and it is in the transition that the groundwork is laid for the start of the new presidency. Further Reading Brauer, Carl M. Presidential Transitions. New York: Oxford University Press, 1986; Burke, John P. Becoming President: The Bush Transition, 2000–2003. Boulder, Colo.: Rienner, 2004; Pfiffner, James P. The Strategic Presidency: Hitting the Ground Running. Chicago: Dorsey, 1988. —Michael A. Genovese
treaty making The constitutional treaty-making process involves two different agencies of the federal government— the president and the Senate. According to Alexander Hamilton in Federalist 75, treaties are neither purely executive nor purely legislative. They are essentially contracts or agreements between foreign powers “which have the force of law, but derive it from the obligations of good faith.” Because treaties involve relations with foreign governments, they require the participation of the president. However, because they also have the force of law—because they are, in fact, part of the “supreme law of the land,” according to Article VI of the U.S. Constitution—they require the participation of the legislature. The two branches perform different functions in the treaty-making process, however, much as they do in lawmaking. In lawmaking, for example, Congress is the active agent, drafting legislation, then deliberating about it, and ultimately sending it on to the president for signature or veto. Thus, the president’s primary constitutional role in lawmaking is to check Congress’s power. Similarly, the president is the active agent in treaty making, ultimately sending on a completed treaty to the Senate for ratification. Thus, the Senate’s primary constitutional role is to check the president’s power. The president’s position as the active agent in treaty making becomes clear by examining the qualities the office brings to foreign policy by virtue of the fact that the presidency is a unitary office. James Madison and Hamilton both argue for the importance of energy in government, and Hamilton in Federalist 70 makes a strong case for energy in the executive—a quality that comes from being a unitary actor. Positive qualities that come from this structural fact, according to Hamilton, include “decision, activity, secrecy, and dispatch.” There was a sense on the part of the framers of the Constitution that foreign affairs required a firm hand, and that was most likely to be found in the executive branch. John Jay makes the argument most powerfully in Federalist 64 that secrecy and dispatch—or speed—are required sometimes in treaty negotiations (and foreign-policy in general). Circumstances may require secret negotiations, and Jay argues that foreign powers are more likely to trust one person with authority than an entire Senate. In the same way, treaties may require speed
in negotiation, and Jay argues that such a quality is more likely to be found in one person than in a plural entity. In fact, the very qualities that make the Senate an outstanding deliberative body may work against its performing the functions that are essential for effective foreign policy. At the same time, the framers did not want the president to have an unfettered hand when it comes to foreign policy. They understood that someone who served as president would not be in the office for life and therefore might be tempted to use power for personal gain, perhaps at the expense of the national interest. Hamilton makes the point in Federalist 75 that it is unwise to trust to “superlative virtue” when self-interest is more powerful. The Senate, then, serves as a watchdog over the president, guarding the public interest. Just as the presidency’s unitary structure gives it certain advantages in foreign policy, so the Senate’s qualifications and structure enable it to guard the public interest. The framers believed that the long terms in the Senate shielded senators from popular passions and empowered them to employ caution and steadiness in their deliberations. They also believed that the term length and size of the Senate allowed its members to possess greater knowledge of foreign policy, as well as “a nice and uniform sensibility to national character.” Both Hamilton and Jay argue that the Senate is uniquely qualified to perform the ratification role, as opposed to the House of Representatives. Treaties may take a long time to negotiate, and the structure of the House—a much larger body with membership constantly changing through frequent elections—makes it inadequate to the task. Senators have the time to study the issues, to retain the institutional knowledge of events, and to make well-informed decisions based on the common good. The result is a complex process that attempts to combine several necessary features—the secrecy and speed that come from the presidency with the deliberation and steadiness that come from the Senate. Treaties require a two-thirds vote in the Senate to be ratified. Because treaties involve relations with foreign nations that might involve national security, the framers wanted a high threshold to guard against corruption. They believed it would be virtually impossible for both the president and two-thirds of the Senate to be corrupted at the same time about the same thing. Of course, this high threshold makes
it essential for a president to account for the constitutional math when pursuing a treaty. It is very rare for a president’s party to control two-thirds of the Senate, so a president almost always has to ensure that his or her efforts enjoy some measure of bipartisan support. Although there is no formal role for the House of Representatives in treaty making, its influence may be felt indirectly due to its power to appropriate necessary funds to implement treaties. There is no constitutional requirement for presidents to consult with the Senate prior to making treaties, but the language in Article II suggests that course of action. The language concerning presidential appointments, for example, says that the president “shall nominate, and by and with the advice and consent of the Senate, shall appoint” various officials, leaving the nomination power solely in the president’s hands. The language concerning treaty making, however, is slightly different. There, the Constitution says that the president “shall have Power, by and with the Advice and Consent of the Senate, to make Treaties, provided two-thirds of the Senators present concur.” George Washington understood this difference in language to imply that joint action on treaties was appropriate while respecting the different strengths the two branches bring to the process. Senators also believe that they have the ability to make changes or recommendations to treaty drafts. Positive examples of this type of cooperation include the drafting of the United Nations Charter and the North Atlantic Treaty. Negative examples of treaty-making failure have been quite consequential. No doubt the most famous example of a failed treaty-making effort was Woodrow Wilson’s attempt to win ratification of the Treaty of Versailles following World War I. By the time the war ended, Wilson’s Democratic Party had lost control of both houses of Congress. Even when Democrats controlled the Senate, they never had the necessary two-thirds majority to ratify a treaty without Republican support. With Republicans in charge following the 1918 midterm election, Wilson would have to take account of their concerns. Despite the need for a supermajority, Wilson’s treatment of the Senate throughout the treaty-making process harmed the ratification effort. Wilson needed to address valid Senate concerns about the treaty, but his demeanor toward the Senate exacerbated interbranch tensions and contributed to the
treaty’s eventual defeat. Wilson rejected any significant Republican participation in the treaty process. He refused to provide a copy of the draft when requested by the Senate. He rejected any consideration of Senate reservations to the treaty. He toured the country in an attempt to mobilize popular support for the treaty and persistently refused to compromise, despite pleas from his own allies to do so. All attempts to ratify the treaty failed, and the result was a lack of U.S. involvement in the League of Nations, hampering its effectiveness. More recent treaty failures include the Comprehensive Test Ban Treaty and the Kyoto Treaty on global warming, both negotiated during the presidency of Bill Clinton. Clinton signed the nuclear testing treaty despite Senate objections, which centered on preserving the right to test and on the difficulty of verification. With the Senate in Republican hands, Clinton was well short of the necessary two-thirds vote for ratification. He sent the treaty to the Senate in 1997, but the Senate refused to debate it. Then, when administration forces attempted to rush the process in 1999, the treaty went down to defeat. Clinton never submitted the Kyoto Treaty on global warming to the Senate because that body voted unanimously, prior to the treaty’s negotiation, for a resolution stating the sense of the Senate that the United States should not be party to a plan that did not enforce its provisions on developing countries or that might harm the U.S. economy. Thus, the treaty was never ratified and never became part of the supreme law of the land, and President George W. Bush withdrew from the protocol. One constitutional question about treaties that has arisen several times is whether the president can terminate a treaty unilaterally without the consent of the Senate. Since it takes a two-thirds vote in the Senate to ratify a treaty, some argue that termination of a treaty requires similar involvement by that body. President Jimmy Carter, however, terminated a defense treaty with Taiwan when he recognized mainland China in 1978. Some senators, including Barry Goldwater, objected and filed suit, but the U.S. Supreme Court rejected the suit, and the Senate could not agree on a united response. Similarly, President George W. Bush withdrew from the Anti-Ballistic Missile Treaty, forged in 1972 with the Soviet Union, when he chose to continue development of antimissile
technology in the wake of the terrorist attacks on 9/11. Although some members of Congress protested, there was little to be done. Congress’s primary tool to fight such actions lies with its power of the purse—it can choose to restrict or prevent spending on presidential initiatives. The president has several options available to him or her to handle treaty defeats. One alternative is to recast a failed treaty as a joint resolution of Congress. A treaty requires a two-thirds vote of the Senate, but a joint resolution requires only a simple majority vote of both chambers of Congress. President John Tyler resorted to this option in 1845 when his treaty to annex Texas failed in the Senate. Presidents took similar actions to annex Hawaii in 1898 and to authorize construction of the St. Lawrence Seaway in 1954. Presidents can also make use of executive agreements to bypass the treaty process. Executive agreements are normally made to implement treaty requirements, but presidents have used them to make foreign policy as well. Famous examples of executive agreements include Franklin Roosevelt’s Lend-Lease arrangement with Great Britain prior to U.S. entrance into World War II and the Yalta agreement dividing Germany into postwar occupation zones toward the end of that war. The U.S. Supreme Court has ruled that presidents have the power to make such agreements, but they may not supersede the law, and Congress can more easily renege on them than it can on treaties. Congressional pressure can convince presidents to pursue the treaty route when members of Congress feel strongly enough about the issue. See also foreign-policy power.
Further Reading
Fisher, Louis. The Politics of Shared Power: Congress and the Executive. 4th ed. College Station: Texas A&M University Press, 1998; Hamilton, Alexander, James Madison, and John Jay. The Federalist Papers, Nos. 62–64, 70, 75. Edited by Clinton Rossiter. New York: New American Library, 1961.
—David A. Crockett
unitary executive
The scope and limits of a president’s executive and war powers are somewhat ill defined. Historically, the president has been granted wide authority to
meet crises and wars, yet such authority is not absolute. U.S. Supreme Court Justice Frank Murphy, in Hirabayashi v. U.S. (1943), wrote that war gives the president “authority to exercise measures of control over persons and property which would not in all cases be permissible in normal times,” but that such powers are not without limits. The Supreme Court reminded us in United States v. Robel (1967), “[E]ven the war power does not remove constitutional limitations safeguarding essential liberties,” and nearly 40 years later, Justice Sandra Day O’Connor wrote in Hamdi v. Rumsfeld (2004) that “a state of war is not a blank check for the president” and that the commander in chief powers do not give the president authority to “turn our system of checks and balances on its head.” The post-9/11 presidency of George W. Bush was a muscle-flexing, assertive, and, in many ways, unilateral presidency. The opportunity to exercise power afforded the administration in the aftermath of the September 11, 2001, terrorist attack went, at first, virtually unchallenged. The “rally ’round the flag” effect of the attack on the United States opened a wide door to power, and the Bush administration was anything but shy about using that power to impose its will both at home and abroad. What is the alleged source of this authority? In response to the terrorist attack against the United States, the Bush administration argued that a new approach to foreign policy was required to meet new dangers. The cold war was over, and the international war against terrorism had begun. The president declared war against terrorism, the USA Patriot Act was passed, a Department of Homeland Security was established, a doctrine of “first strike” or preventative/preemptive war was adopted, a war against the Taliban government in Afghanistan took place, the al-Qaeda terrorist network was pursued, and a war against Saddam Hussein in Iraq was launched. Critics charged that the president’s actions threatened the separation of powers, the rule of law, and checks and balances, and while Bush was not the first president to move beyond the law, his bold assertion that the rule of law did not bind a president in time of war marked a new approach and was a grave challenge to the U.S. Constitution and the separation of powers.
At first, efforts to justify the Bush administration’s seizure of executive power were restricted to the president’s repetition of the administration’s mantra: “I’m a war president.” That mantra, coupled with several notable efforts by key administration officials, such as Attorney General John Ashcroft, to paint critics as weak on terrorism, unpatriotic, aiding the enemy, even traitorous, all but silenced early critics. President Bush was comfortable exercising a swaggering style of leadership that has been called the “Un-Hidden Hand” style of presidential leadership. At first, it worked, but in time, boldness proved insufficient. But what were the underpinnings of the presidential boldness? In essence, the intellectual pedigree for the Bush administration’s expansive view of executive power can be seen in what is called the unitary executive (some members of the administration referred to it as the “New Paradigm”). While the administration rarely provided a comprehensive defense of its actions, we can nonetheless lay out the arguments that the administration would make in defense of its aggressive use of executive power. The unitary executive is a constitutional model of presidential power that posits that “all” executive powers belong exclusively to the president. In its most expansive form, the unitary executive is detached from the separation of powers and checks and balances and thus stands in contradiction to the original model of constitutionalism as envisioned by the framers. The unitary executive consists of seven parts: (1) executive prerogative, based on John Locke’s Second Treatise; (2) “energy” in the executive, grounded in Alexander Hamilton’s defense of presidential authority; (3) the “coordinate constitution” view of the Constitution, where the “executive power” is fused with the “commander in chief” clause; (4) the doctrine of “necessity,” as practiced by Abraham Lincoln during the Civil War; (5) the “constitutional dictatorship” as described by Clinton Rossiter in Constitutional Dictatorship; (6) precedent, that is, past practices of U.S. presidents; and (7) supporting court decisions. Regarding John Locke’s “executive prerogative,” while the word emergency does not appear in the Constitution, some scholars suggest that the Founders did envision the possibility of a president exercising “supraconstitutional powers” in a time of national
emergency. Historically, during a crisis, the president assumes extraconstitutional powers. The separate branches—which, under normal circumstances, are designed to check and balance one another—usually will defer to the president in times of national crisis or emergency. The president’s institutional position offers a vantage point from which he or she can more easily exert crisis leadership, and the Congress, Court, and public usually will accept the president’s judgments and power seizures. In most instances, democratic political theorists have seen a need to revert to authoritarian leadership in times of crisis. John Locke called this executive “prerogative”; to Rousseau, it is an application of the “general will.” In cases of emergency, when extraordinary pressures are placed on democratic regimes, many theorists suggest that these democratic systems—to save themselves from destruction—must embrace the ways of totalitarian regimes. Laws decided on by democratic means may be ignored or violated under this concept. As Locke wrote, in emergency situations, the Crown retains the prerogative “power to act according to discretion for the public good, without the prescription of the law and sometimes even against it.” While this prerogative could properly be exercised only for the “public good,” one cannot escape the conclusion that for democratic governments this is shaky ground on which to stand. And what if an executive acts wrongly? Here Locke is forced to abandon secular concerns, and he writes that “the people have no other remedy in this, as in all other cases where they have no judge on earth, but to appeal to Heaven.” Most scholars of the presidency and the Constitution conclude that the framers invented an executive with limited authority grounded in a separation and sharing of power under the rule of law. But one of the few framers who called for a more expansive presidency was Alexander Hamilton. Elements of Hamilton’s case for an energetic presidency can be found in Federalist 70. It reads in part: “There is an idea, which is not without its advocates, that a vigorous executive is inconsistent with the genius of republican government. . . . Energy in the executive is a leading character in the definition of good government. It is essential to the protection of the community against foreign attacks: It is not less essential to the steady administration of the laws, to the protection of property
against those irregular and high-handed combinations, which sometimes interrupt the ordinary course of justice, to the security of liberty against the enterprises and assaults of ambition, of faction and of anarchy. . . . A feeble executive implies a feeble execution of the government. A feeble execution is but another phrase for a bad execution: And a government ill executed, whatever it may be in theory, must be in practice a bad government. . . . Taking it for granted, therefore, that all men of sense will agree in the necessity of an energetic executive; it will only remain to inquire, what are the ingredients which constitute this energy. . . . The ingredients, which constitute energy in the executive, are first unity, secondly duration, thirdly an adequate provision for its support, fourthly competent powers.” But it must be noted that an energetic presidency is not an imperial presidency, or at least it should not be, and Hamilton’s energetic executive is but a part of the framers’ story. In combining two constitutional provisions, the executive power clause and the commander in chief clause (both in Article II), advocates of the unitary-executive theory of presidential power see a geometric expansion of executive authority where the parts, when added together, are considerably greater than the whole. Conveniently forgotten is the fact that the president takes an oath of office to “preserve, protect, and defend the Constitution of the United States.” He or she must therefore “take Care that the Laws be faithfully executed,” even the laws with which he or she may personally disagree. Some administration officials see presidential authority in times of war as creating an executive of virtually unchecked power. A September 25 Office of Legal Counsel memo argues that “These decisions [in wartime] under our Constitution are for the President alone to make.” Other memos suggest that the president may determine what is lawful or unlawful and that neither the Congress nor the courts have the authority to review presidential acts in a time of war. But such an expansive reading violates both the spirit and the letter of the law, and the Supreme Court, in cases such as Hamdi v. Rumsfeld (2004) and Rasul v. Bush (2004), and the Congress, in efforts such as its ban on the use of torture (a bill the president signed, though in a signing statement he argued
that while he was signing the bill into law, he did not consider himself bound by the law that he had just signed), have attempted to reclaim some of the power that was lost, delegated, ceded, or stolen. Perhaps no other claim by the Bush administration resonates as powerfully as the “necessity” argument. The old Roman adage Inter Arma Silent Leges (in wartime, the laws are silent), while not constitutionally valid, still holds persuasive power. Abraham Lincoln relied on the doctrine of necessity during the Civil War, arguing to a special session of Congress on July 4, 1861: “The attention of the country has been called to the proposition that one who is sworn to ‘take care that the laws be faithfully executed,’ should not himself violate them. Of course some consideration was given to the questions of power, and propriety, before this matter was acted upon. The whole of the laws which were required to be faithfully executed, were being resisted, and failing of execution, in nearly one-third of the States. Must they be allowed to finally fail of execution, even had it been perfectly clear, that by the use of the means necessary to their execution, some single law, made in such extreme tenderness of the citizen’s liberty, that practically, it relieves more of the guilty, than of the innocent, should, to a very limited extent, be violated? To state the question more directly, are all the laws, but one, to go unexecuted, and the government itself go to pieces, lest that one be violated? Even in such a case, would not the official oath be broken, if the government should be overthrown, when it was believed that disregarding the single law, would tend to preserve it?” Lincoln believed that it was the union (the nation) that above all else had to be preserved, because without that union the constitution and the rule of law would be meaningless. In an 1864 letter to Albert Hodges, a Kentucky newspaper editor, Lincoln gives his rationale for the exercise of extraordinary presidential power, writing: “I am naturally anti-slavery. If slavery is not wrong, nothing is wrong. . . . And yet I have never understood that the Presidency conferred upon me an unrestricted right to act officially upon this judgment and feeling. It was in the oath I took that I would, to the best of my ability, preserve, protect, and defend the Constitution of the United States. I could not take the office without taking the oath. Nor was it my view that I might take an oath to get power, and break the oath in using the power. I understood, too, that in ordinary civil administration
this oath even forbade me to practically indulge my primary abstract judgment on the moral question of slavery. . . . I did understand however, that my oath to preserve the constitution to the best of my ability imposed upon me the duty of preserving, by every indispensable means, that government— that nation—of which that constitution was the organic law. Was it possible to lose the nation, and yet preserve the constitution? By general law life and limb must be protected; yet often a limb must be amputated to save a life; but a life is never wisely given to save a limb. I felt that measures, otherwise unconstitutional, might become lawful, by becoming indispensable to the preservation of the constitution, through the preservation of the nation. Right or wrong, I assumed this ground, and now avow it. I could not feel that, to the best of my ability, I had even tried to preserve the constitution, if, to save slavery, or any minor matter, I should permit the wreck of government, country, and Constitution all together.” Even the “small-government” advocate Thomas Jefferson resorted to a “necessity” argument in his constitutionally questionable purchase of the Louisiana Territory. Jefferson wrote: “A strict observance of the written law is doubtless one of the high duties of a good citizen, but it is not the highest. The laws of necessity, of self-preservation, of saving our country when in danger, are of higher obligation. To lose our country by scrupulous adherence to the written law, would be to lose the law itself, with life, liberty, property and all those who are enjoying them with us; thus absurdly sacrificing the end to the means.” Only if acts are truly necessary to preserve the nation can a president act in ways beyond the scope of the Constitution. He or she is a servant of the law and the Constitution, even as he or she acts beyond their literal scope, never claiming an inherent power to go beyond the law. Lincoln believed that the authority of the government during a crisis was the authority to act in defense of the nation, even as he acknowledged that he was venturing onto congressional territory and skating on thin constitutional ice. He never claimed that all authority was his, but only that, in a crisis, the doctrine of necessity embodied authority in the government—authority that the president brought to life. He suggested that acts “whether strictly legal or not, were ventured upon,
under what appeared to be a popular demand, and a public necessity. It is believed that nothing has been done beyond the constitutional competency of Congress.” Thus, in a legitimate emergency, the people demand that the president act, and the president’s actions are constitutionally permissible if the Congress maintains its authority to ultimately control and limit the actions of a president. “Must,” he asked, “a government, of necessity, be too strong for the liberties of its own people or too weak to maintain its own existence?” He also said that “Actions which otherwise would be unconstitutional could become lawful if undertaken for the purpose of preserving the Constitution and the Nation,” and he posed the dilemma as an analogy: “Often a limb must be amputated to save a life; but a life is never wisely given to save a limb.” In similar fashion, President Franklin D. Roosevelt, in the early response to the crisis of the Great Depression, confronted the task ahead by admitting that some of his actions might go beyond expectations. In his March 4, 1933, first inaugural address, Roosevelt sounded the call for expanded power: “It is to be hoped that the normal balance of executive and legislative authority may be wholly adequate to meet the unprecedented task before us. But it may be that an unprecedented demand and need for undelayed action may call for temporary departure from that normal balance of public procedure. I am prepared under my constitutional duty to recommend the measures that a stricken nation in the midst of a stricken world may require. These measures, or such other measures as the Congress may build out of its experience and wisdom, I shall seek, within my constitutional authority, to bring to speedy adoption. But in the event that Congress shall fail to take one of these two courses, and in the event that the national emergency is still critical, I shall not evade the clear course of duty that will then confront me. I shall ask the Congress for the one remaining instrument to meet the crisis, broad Executive power to wage a war against the emergency, as great as the power that would be given to me if we were in fact invaded by a foreign foe.” Contemporary defenders of the rule of law and constitutionalism are rightly concerned with such visions of expansive power, but adherents to the doctrine of necessity do not see the Constitution as a
suicide pact, viewing it instead as “bendable” in times of crisis. However, as Justice David Davis wrote in Ex parte Milligan (71 U.S. 2, 18 L. Ed. 281, 1866): “The Constitution of the United States is a law for rulers and people, equally in war and in peace, and covers with the shield of its protection all classes of men, at all times, and under all circumstances. No doctrine involving more pernicious consequences was ever invented by the wit of man than that any of its provisions can be suspended during any of the great exigencies of government. Such a doctrine leads directly to anarchy or despotism, but the theory of necessity on which it is based is false; for the government, within the Constitution, has all the powers granted to it, which are necessary to preserve its existence. . . .” Justice Robert Jackson’s concurring opinion in the Youngstown case likewise rebuts the necessity doctrine. Jackson wrote that the framers “knew what emergencies were, knew the pressures they engender for authoritative action, knew, too, how they afford a ready pretext for usurpation. We may also suspect that they suspected that emergency powers would tend to kindle emergencies.” Overall, the courts have not served as a very effective check on presidential power. While there have been times when the courts were willing to stand up to the president (for example, some of the Civil War cases, early in the New Deal era, late in the Watergate period, and late in the war against terrorism), in general, the courts have tended to shy away from direct confrontations with the presidency and were often willing to defer to or add to the powers of the presidency. Defenders of the powerful presidency gravitate toward one court case in particular, United States v. Curtiss-Wright Export Corp. (1936). In that case, Justice George Sutherland, drawing on a speech in the House of Representatives by John Marshall, referred to the president as “the sole organ” of U.S. foreign policy. While Sutherland’s “sole-organ” remark was a judicial aside (dicta), it has become the unofficial executive-branch mantra for the president’s bold assertion of a broad and unregulated power over foreign affairs. Scholars, however, have found little in Curtiss-Wright to rely on in defense of the robust presidency, and, other than to defenders of presidential power, this case is not seen
as significant in granting presidents expansive powers. It may be of comfort, but only small comfort, to defenders of presidential power and exclusivity in foreign policy. Scholar Clinton Rossiter’s Constitutional Dictatorship is a modern treatment of the same problem that democratic theorists have attempted, with little success, to solve. The constitutional dictatorship is an admission of the weakness of democratic theory and of its failure to cover the full range of governing requirements. To save democracy, we must escape from it; to protect democracy, we must reject it. In cases of emergency, we must reject democracy for the more expedient ways of a dictator. In this manner, democratic theory opens the door to a strong power-aggrandizing executive. Of course, nowhere in the Constitution is it specified that the president should have additional powers in times of crisis, but history has shown us that in times of crisis or emergency, the powers of the president have greatly expanded, and while Justice Abe Fortas writes that “Under the Constitution the President has no implied powers which enable him to make or disregard laws,” under the microscope of political reality we can see that this is precisely what U.S. presidents have done, often without consequence. The result of this view of an enlarged reservoir of presidential power in emergencies has been characterized by Edward S. Corwin as “constitutional relativity.” Corwin’s view sees the Constitution as broad and flexible enough to meet the needs of an emergency situation. By this approach, the Constitution can be adapted to meet the needs of demanding times. If the times call for quasi-dictatorial action by the executive, the Court might well find this acceptable. Advocates of the president’s position argue that there is sufficient precedent to justify President Bush’s activities. Lincoln during the Civil War, Woodrow Wilson in World War I, Franklin D. Roosevelt in the Great Depression and World War II, and others paved the path that Bush is following. However, so too did Richard Nixon, and while his acts are condemned, his “when the president does it that means it is not illegal” motto lives on in the Bush administration. Precedent is an uncertain guide in the war against terrorism. After all, this is a war without end. Even if one is tempted to give President Bush some
leeway in this, a permanent imperial presidency would do such constitutional violence to the U.S. system as to force the abandonment of constitutional government in favor of something more closely resembling one-man rule. It would come full circle from a revolution against the executive tyranny of the king of England to the embrace of imperial rule in the modern era. Presidents Abraham Lincoln, Woodrow Wilson, and Franklin D. Roosevelt were said to have exercised legitimate emergency powers in times of crisis, but other presidents such as Richard Nixon and Ronald Reagan also attempted to grab extraconstitutional power and were rebuffed and condemned. What made Lincoln and FDR heroes and Nixon and Reagan usurpers? The predicate is a legitimate and widely recognized crisis. Only when there is a genuine emergency can a president attempt to exercise extraconstitutional power. Second, the other branches and the public must be willing to cede to the president these extraconstitutional powers. Third, the president must remain willing to bow to the will of Congress if it chooses to set policy or to limit the president’s exercise of power, and the president cannot use secrecy and distortion to hide from congressional scrutiny. In general, Lincoln and FDR followed these guidelines; Nixon and Reagan did not. And what of the case of George W. Bush in the post-9/11 era? Bush may have had the predicate, but he was reluctant to place himself within the rule of law, bowing only when his popularity plummeted to the 30 percent range, the courts chided him on several occasions, and the Congress belatedly asserted some authority. Until then, he exercised extraconstitutional power and claimed that his acts were not reviewable by Congress or the courts, often cloaking his actions in secrecy and duplicity. Such a bold and illegitimate interpretation of the president’s powers is unsupportable in law or history. In an age of terrorism, is the unitary executive or the imperial presidency a permanent fixture of our political system, or can we strike the right balance between the needs of presidential power and the demands of the rule of law? We need a presidency both powerful and accountable, robust yet under the rule of law. But how to achieve such ends? The answers to these questions will shape the nation for the coming generation and determine whether the
experiment in self-government was a fool’s game or the solution to our problems.
Further Reading
Locke, John. Treatise on Civil Government and a Letter Concerning Toleration. New York: Appleton-Century-Crofts, 1937; Posner, Richard A. Not a Suicide Pact. New York: Oxford University Press, 2006; Rossiter, Clinton. Constitutional Dictatorship: Crisis Government in the Modern Democracies. Princeton, N.J.: Princeton University Press, 1948; Yoo, John. The Powers of War and Peace. Chicago: University of Chicago Press, 2005.
—Michael A. Genovese
United States Trade Representative
The Office of the United States Trade Representative (USTR) was created in 1962 by an act of Congress. The U.S. Trade Representative is authorized to negotiate trade agreements for the United States. In 1974, Congress made the office a cabinet-level agency within the Executive Office of the President (EOP). The 1974 act also gave the U.S. Trade Representative additional powers and responsibilities for coordinating U.S. trade policy. As the face of the United States in international trade negotiations, the U.S. Trade Representative plays an important role in advancing the economic as well as the political interests of the United States. In the early 1960s, the United States was concerned with sluggish economic growth and questions relating to its trade position internationally. A huge trade deficit had emerged, and the economies of Europe and Japan were growing and recovering from the devastation of World War II to become trade and economic rivals to the United States. Some of these nations began to impose restrictive protectionist policies, effectively freezing out U.S. products. In an effort to overcome these obstacles, President John F. Kennedy asked the Congress for authority to begin reciprocal tariff talks with European nations. The Trade Expansion Act of 1962 gave the president this authority and also created the Office of Special Trade Representative to lead such talks on behalf of the United States. These talks were held under the General Agreement on Tariffs and Trade (GATT).
In granting the president this new expanded authority, Congress was hoping not only to promote economic growth but also to combat the Soviet Union and develop stronger ties with the European allies. The U.S. government believed that, by moving toward a free-trade regime, the United States would achieve both economic and political gains, especially in its competition with the Soviet Union. By the 1970s, as global trade increased, Congress passed the Trade Act of 1974. In 1980, President Jimmy Carter, using an executive order, renamed the agency the Office of the United States Trade Representative and increased the visibility and authority of the U.S. Trade Representative. When the Soviet Union collapsed in 1991, a new free-trade regime emerged with a push from Presidents George H. W. Bush and Bill Clinton, culminating in the passage of the North American Free Trade Agreement (NAFTA) under President Clinton (which took effect on January 1, 1994). The U.S. Trade Representative was at the center of this push for the passage of NAFTA. In addition, a number of multi- and bilateral trade agreements have fostered more open and freer trading across the globe. In the course of recent decades, a free-trade ethos has been growing internationally. The operating assumption behind this drive to spread a free-trade regime globally, led by the United States under Presidents George H. W. Bush and Bill Clinton and continued by President George W. Bush, is that the more open markets there are, the more freely trade can flow, and the better off all are from a financial and economic perspective. It is in some ways a trickle-down approach based on the view that a rising tide lifts all boats. The key nations of the industrialized world have been promoting a limited brand of free trade for several decades now, and many of those in the less-developed world have yet to reap the supposed rewards of the free-trade regime. Clearly a free-trade regime benefits the developed world; it remains to be seen whether it will also be a boon to the less-developed world. The mission statement of the Office of the United States Trade Representative reads: “American trade policy works toward opening markets throughout the world to create new opportunities and higher living standards for families, farmers, manufacturers, workers,
consumers, and businesses. The United States is party to numerous trade agreements with other countries, and is participating in negotiations for new trade agreements with a number of countries and regions of the world.” The agency is responsible for developing and coordinating U.S. international trade, commodity, and direct-investment policy and overseeing negotiations with other countries. The U.S. Trade Representative is a member of the president’s cabinet who serves as the president’s principal trade adviser, negotiator, and spokesperson on trade issues. The Office of the United States Trade Representative is housed within the Executive Office of the President, and the USTR also serves as vice chairman of the Overseas Private Investment Corporation (OPIC), is a nonvoting member of the Export-Import Bank, and is a member of the National Advisory Council on International Monetary and Financial Policies. Leadership in the World Trade Organization (WTO) is also an area of policy oversight for the USTR. The USTR also maintains a close relationship with Congress; five members from each house of Congress are appointed under statute as official congressional advisers on trade policy, and the USTR provides detailed briefings on a regular basis for the Congressional Oversight Group, a new organization composed of members from a broad range of congressional committees. The USTR has offices in Washington, D.C., and Geneva, Switzerland. The agency’s work is divided into five categories: bilateral negotiations (which includes the Americas, Europe and the Mediterranean, North Asia, South Asia, Southeast Asia and the Pacific, and Africa); multilateral negotiations (which includes the World Trade Organization, multilateral affairs, and the United Nations Conference on Trade and Development); sectoral activities (which includes agriculture, services, investment, intellectual property, manufacturing and industrial affairs, government procurement, environment, and labor); analysis, legal affairs, and policy coordination (which includes general counsel, economic affairs, and policy coordination); and public outreach (which includes congressional affairs, public/media affairs, and intergovernmental affairs and public liaison). The USTR’s Geneva Office handles general WTO affairs, nontariff agreements, agricultural policy, commodity policy, and the Harmonized Code System. The Geneva deputy
USTR is the U.S. ambassador to the WTO and, on commodity matters, to the United Nations Conference on Trade and Development. The Geneva staff represents U.S. interests in negotiations and in other contacts on trade and trade policy in both forums. See also bureaucracy; cabinet; executive agencies.
Further Reading
Lovett, William Anthony, Alfred E. Eckes, and Richard L. Brinkman. U.S. Trade Policy: History, Theory, and the WTO. New York: M.E. Sharpe, 2004; Mann, Catherine L. Is the U.S. Trade Deficit Sustainable? Washington, D.C.: Institute for International Economics, 1999; Office of the United States Trade Representative Web page. Available online. URL: http://www.ustr.gov.
—Michael A. Genovese
veto, presidential
The presidential veto is the power of the president to prevent legislation passed by Congress from being enacted into law. When presidents veto a bill, they return it, unsigned and with a written explanation for the veto, to the house of Congress where the bill originated. Some of the most important and controversial actions taken by presidents revolve around vetoes and veto threats. Far from being an incidental power, the veto was central to the expansion of executive authority over legislative matters in the 19th century, and it is pivotal for modern presidents as they seek to shape the flow of legislation, especially when Congress is controlled by a political party different from that of the president. According to the procedures described in Article I, Section 7 of the U.S. Constitution, when Congress passes a piece of legislation and presents it to the president in a form referred to as an “enrolled bill,” the chief executive faces four possible actions: (1) sign the bill into law within 10 days of bill presentation (not including Sundays); (2) withhold presidential signature, in which case the bill automatically becomes law after 10 days; (3) exercise the regular or return veto by (a) withholding the presidential signature and (b) returning the bill “with his Objections to that House in which it shall have originated . . .”; or (4) withhold presidential signature at a time when
“Congress by their Adjournment prevent its Return” within 10 days of having been presented a bill. This final option, called the “pocket veto,” kills the legislation without returning the bill to Congress. In the case of a return veto, Congress has the option of trying to override the veto, which succeeds if two-thirds of both houses vote to do so. If an override vote is successful, the bill is enacted into law despite the president’s opposition. Yet successful overrides are unusual. One indication of the veto’s power is the fact that, of the more than 1,400 regular bill vetoes from 1789 to 2006, only about 7 percent have been overridden successfully. In addition, presidents throughout history have exercised more than 1,000 pocket vetoes, although some of these bills were reintroduced and passed ultimately by Congress. The Constitution’s founders did not view the veto power merely as a negative block. Instead, they envisioned the veto as a positive and constructive power that presidents could use to improve legislation that was enacted hastily or unjustly or was of dubious constitutionality. This is why the Constitution requires the president to justify his or her vetoes in writing to Congress. Three other important facts about the veto power can be seen from its placement in the Constitution. First, the power is described in Article I, which is otherwise devoted to the powers of Congress; other presidential powers are described in Article II. This placement reflects the fact that the veto power is actually a legislative power, even though it belongs to the executive. Second, the veto is a classic example of constitutional checks and balances, as the veto makes the president an essential player in the enactment of legislation. Just as the presidential veto is a check on Congress’s lawmaking power, so, too, is Congress’s power to override a veto a countercheck on executive power. Third, even though modern presidents possess vast new powers unrelated to the Constitution, the constitutionally based veto power continues to be vitally important as a source of executive authority. In the early decades of the country’s history, presidents used the veto sparingly, yet not without controversy. President George Washington vetoed only two bills in his eight years in office. Both were upheld. President Andrew Jackson sparked outrage from his opponents by vetoing 12 bills during his two terms. His 1832 veto of a bill to continue the Bank of the
United States infuriated his foes and was the pivotal issue in that year’s presidential election. President John Tyler invited political fury largely through his 10 vetoes; in fact, Tyler became the first president to face impeachment charges, which arose in part because of allegations that he had used his veto power improperly (although the vote in the House of Representatives to bring impeachment charges failed). Tyler was also the first president to have a veto successfully overridden by Congress. Veto use exploded after the Civil War largely because of congressional enactment of many private bills sponsored by members of Congress trying to obtain relief for specific individuals seeking pensions and other private benefits, mostly from Civil War–related claims, that only Congress could provide. Many of these claims were suspect, and post-Civil War presidents often vetoed them by the hundreds. From the first veto in 1792 until 1868, presidents vetoed 88 bills. From then until 2006, almost 2,500 bills were vetoed. President Grover Cleveland holds the record for most vetoes per year in office (73), whereas President Franklin D. Roosevelt holds the record for most total vetoes—635 bills in his 12-plus years in office. Thus, more-frequent veto use after the Civil War reflected greater acceptance of the president’s right to veto bills and greater presidential involvement in legislative affairs. Early in the 19th century, when Congress dominated the national government, many in Congress considered it improper for presidents to so much as express opinions in public about legislation pending before Congress, as this was seen as improper interference with Congress’s constitutional lawmaking power. Yet as vetoes increased, Congress found it necessary to solicit the president’s views on pending legislation openly since Congress realized that it was wasting considerable time and effort if it enacted bills that the president would simply veto. These inquiries concerning the president’s legislative preferences opened the door to more formal and detailed requests from Congress that presidents state their legislative wishes so that they might be used to shape legislation more to the president’s liking and therefore avoid vetoes. By the middle of the 20th century, presidents were routinely submitting to Congress detailed annual legislative agendas that now form the basis for most important legislative activity. Despite congressional acceptance
of the right of presidents to veto any bills they choose, vetoes have continued to provoke political controversy. The veto is more likely to be important under two circumstances: when party control of the executive and legislative branches is divided, an arrangement that has become common since the 1960s, and when presidents either do not have their own affirmative policy agendas to push in Congress or are so weakened politically that they fail to advance those agendas. The veto’s continued importance also reminds us that constitutionally based powers continue to be important to presidents in the modern era. The existing veto power requires the president either to veto or to approve an entire bill as it is presented to the executive by Congress; that is, unlike 43 of 50 state governors, the president may not veto items or parts of bills. There was no discussion of an item veto at the Constitutional Convention of 1787, but this does not mean that the founders were unaware of the idea. The man who presided over the convention, George Washington, said in 1793 that “From the nature of the Constitution, I must approve all parts of a Bill, or reject it in toto.” As early as the 1840s, reformers argued that the president should have an item veto to excise wasteful or unnecessary spending provisions—so-called “pork-barrel” provisions—or other riders added to legislation. In addition, the existing all-or-nothing veto has meant that presidents are often forced either to sign a bill that includes objectionable parts or to veto a bill that has positive elements within it. Opponents of an item veto have argued that it would simply give the president one more weapon in an already ample arsenal and that it would not result in greater fiscal responsibility since presidents are as interested in pork barrel as members of Congress. Further, many have noted that the all-or-nothing nature of a veto has not prevented presidents from using the power successfully to force Congress to change even a small part of a bill about which presidents have raised objections. For example, President Washington’s second veto was of a bill with which he otherwise agreed but that included a provision that the president opposed—to reduce the size of the already small national army. After Washington noted his objection to this one provision, Congress passed the same bill but with the one
objectionable provision removed. Washington then signed the new bill. Presidents from Washington to the present have used the veto power in just this fashion. In addition, a mere threat from a president to veto a bill may be sufficient to persuade Congress to remove an offending provision. Such “veto bargaining” has become key to presidential–congressional relations. The item-veto power first appeared in the Constitution of the Confederacy, written in 1861 at the outset of the Civil War, although Confederate President Jefferson Davis never actually used the power. After the Civil War, most states voted to give some kind of item-veto power to their governors. President Ulysses S. Grant was the first president to ask for a presidential item veto. Since then, many presidents have echoed the same call. The idea received renewed attention in the 1980s when President Ronald Reagan argued strongly for the power. In 1996, Congress passed the Line Item Veto Act. Despite the law’s name, it did not technically grant the president a true item veto, as most agreed that a full-blown item veto could be given to the president only through constitutional amendment. Instead, the 1996 law gave the president “enhanced rescission” power, meaning that, beginning in 1997, presidents could block certain limited categories of dollar-amount spending or limited tax breaks within five days of signing any bill with such provisions in it. Any spending measures canceled in this way would remain canceled unless Congress attempted to resurrect them with a new piece of legislation, passed by simple-majority vote. The president could then veto these new bills, which in turn could then be overridden by two-thirds vote of both houses. In the fall of 1997, President Clinton used this power to block a total of 82 items found in 11 pieces of legislation. The constitutionality of this de facto item-veto power was challenged in court. In the case of Clinton v. City of New York (1998), a six-member majority of the U.S. Supreme Court ruled that the power in question had the effect of allowing the president to rewrite legislation, in violation of the Constitution’s presentment clause, even though the bills in question technically became law with presidential signature before the president then excised specific spending items. In striking down this power, the Supreme Court argued that it violated the “finely wrought” procedures by
which a bill becomes a law as described in Article I, Section 7 of the Constitution. While efforts to give the president some kind of item veto continue, few expect Congress to muster the necessary two-thirds vote in both houses in support of amending the Constitution. The other major veto controversy of recent years has involved efforts by presidents to expand the pocket-veto power. The pocket veto is different from the regular or return veto in that the pocket veto is absolute—that is, Congress has no opportunity to override a pocket veto. Instead, if Congress wishes to try to overcome a pocket veto, it must begin all over again by reintroducing the pocket-vetoed bill as a new bill when it reconvenes and run the bill through the entire legislative process. While not impossible, this process is much more difficult and complicated for Congress than dealing with the regular or return veto. More important, the Constitution’s founders emphatically and repeatedly rejected the idea of giving the president an absolute veto, based on America’s frustrating experiences with the absolute vetoes exercised by British monarchs and colonial governors when America was still a British colony. Instead, the founders preferred to give Congress the final say through the power of veto override. Why, then, was the president given the pocket veto at all? The answer is this: The pocket veto was inserted to guard against the possibility that Congress would try to duck a presidential veto entirely by passing a bill that it knew or suspected the president would oppose but then quickly adjourning to avoid a veto. Since the regular veto can only occur if Congress receives the returned bill along with the president’s objections to it, there can be no veto if Congress is not present to receive a bill that has been vetoed. Such quick adjournments had actually occurred in some state legislatures before the writing of the Constitution. Thus, the pocket veto was included to prevent a bill from automatically becoming law despite the president’s objections but without the president’s signature after 10 days. The pocket veto can only be used when two circumstances exist: when Congress is in adjournment and when the return of a bill to Congress cannot occur. It might seem as though these two conditions would always occur together. For most of the 19th century, when Congress often met for only a few months out of the year and travel and communications were slow
and unreliable, this was true. By the middle of the 20th century, however, Congress was meeting nearly year-round. It also began to designate legal agents (normally the clerks of the House and the Senate) to receive presidential vetoes and other messages when it adjourned. A parallel practice has also been employed for many decades by presidents, who designate the White House Office of the Executive Clerk to legally “receive” enrolled bills from Congress and other messages while the president is away. So, for example, if the president is traveling abroad when bills are presented to the White House, the 10-day clock is not considered to begin until the president returns, even if the president is gone for weeks. After having been used successfully on many occasions, this arrangement of having agents of Congress receive vetoed bills was challenged in 1970 when President Richard Nixon claimed to pocket veto a bill during a six-day Christmas recess (the 10th day fell during the six-day period). That veto’s constitutionality was challenged in court, and two federal courts held that Nixon’s action was not, in fact, a pocket veto as Congress had properly designated agents to receive veto messages. In other words, President Nixon could and should have used a regular veto, according to the courts. Presidents now routinely return veto messages to congressional agents during intra- and intersession adjournments, leaving the pocket veto to be used only during sine die adjournments at the end of a two-year Congress.
Further Reading
Cameron, Charles M. Veto Bargaining: Presidents and the Politics of Negative Power. New York: Cambridge University Press, 2000; Spitzer, Robert J. The Presidential Veto: Touchstone of the American Presidency. Albany, N.Y.: SUNY Press, 1988; Watson, Richard A. Presidential Vetoes and Public Policy. Lawrence: University Press of Kansas, 1993.
—Robert J. Spitzer
vice president
The U.S. vice presidency is a rather peculiar office. For most of the nation’s history, the office was held in low esteem (in many ways, the vice presidency was seen as a national embarrassment) and served as the punch line of many
political jokes. Even occupants of the office itself were drawn to the inevitable self-deprecating depictions of their dilemma. Daniel Webster, when offered the position in 1848, responded, "I do not propose to be buried until I am dead." At the Constitutional Convention in 1787, the vice presidency was created almost as an afterthought. Late in the convention, the delegates decided to elect the president by an electoral college and not by the legislature, the people, or the state legislatures. This created support among the delegates for a standby to the office of the presidency, a vice presidency, in case something happened to the president and a stand-in president became necessary. "I am nothing," admitted the first vice president, John Adams, "But I may be everything," and that just about sums up the situation of the vice presidency. The office has few responsibilities and virtually no independent power, but in an instant, the occupant may become president. In a letter to his wife Abigail, Adams wrote (December 19, 1793): "My country has contrived for me the most insignificant office that ever the invention of man contrived or his imagination conceived." Franklin D. Roosevelt's vice president John Nance Garner once said that "the vice presidency of the United States isn't worth a pitcher of warm spit." (Garner actually used another word instead of "spit," but an enterprising journalist, in recording the quote, cleaned up the language a bit.) Selecting a vice president has always been a tricky proposition. In the early years of the republic, the runner-up in the electoral college vote became vice president automatically (at that time, the president and vice president did not run together on a single ticket), but with the rise of political parties, this made for electoral trouble. The Twelfth Amendment changed that. From that point on, political parties ran a ticket with both a designated presidential and a vice presidential candidate running together. This made the vice presidential selection the responsibility of a presidential candidate, and at first, the number-two person on the ticket was usually chosen for electoral reasons. Quite often, a northerner chose a southerner; someone from one wing of the party chose a vice president from the other wing of the party, and so on. In the past 20 years, this strategy has been revised, and regional or ideological balance has
been less significant. Only one woman has ever been selected as a major party vice-presidential candidate: In 1984 Democratic nominee Walter Mondale selected Representative Geraldine Ferraro, a three-term member of Congress from New York, as his running mate. The Mondale-Ferraro ticket lost to President Ronald Reagan and his vice president, George H. W. Bush. Constitutionally, the vice presidency is a bit of an anomaly. It has very little constitutional power and only a relatively minor political role. While the vice president serves as president of the Senate, the position has become more a symbolic than a substantive charge. Additionally, the vice president serves as a member of the National Security Council, chairs several advisory councils, is often a diplomatic representative of the United States to other nations, may serve as a top presidential adviser, may be an administration's liaison with Congress, and works in the vineyards of political-party development and fund raising; in reality, most of these tasks depend on the wishes and the good will of the president on whom the vice president always depends. The legitimacy of the office rests on two thin reeds: The office is granted institutional legitimacy in that it stems from the U.S. Constitution, but the office's political legitimacy depends on the wishes of the president alone. The vice presidency is in practice whatever the president decides it will be. Throughout most of the nation's history, that has been very little. Constitutionally, the vice president has two main responsibilities: to preside over the Senate, where he can cast a tie-breaking vote (vice presidents in modern times have presided over the Senate very infrequently), and, of course, to succeed the president in the event of death, resignation, or removal from office. Following ratification of the Twenty-fifth Amendment in 1967, the vice president also becomes the acting president if the president is "unable to discharge the powers and duties of his office." This is done either through the president informing the Speaker of the House of Representatives and the president pro tempore of the Senate of his or her inability to execute the duties of the office or through the vice president "and a majority of either the principal officers of the executive departments or of such other body as Congress may by law provide" determining that the president's disability prevents him or her from serving. For example, in 1985, dur-
ing the Reagan presidency, a temporary power turnover occurred when President Reagan underwent surgery. In addition, the Twenty-fifth Amendment provides an opportunity to fill a vacancy in the office of the vice president. Prior to 1967, when a vacancy would occur, the position would remain vacant until the next election. For example, when Lyndon Johnson succeeded to the presidency following John F. Kennedy's assassination in November 1963, the vice presidency remained vacant until Johnson was elected in his own right with running mate Hubert H. Humphrey in 1964. The Twenty-fifth Amendment now takes precedence over the Presidential Succession Act of 1947, which comes into play only if both the presidency and the vice presidency become vacant at the same time. Under the Presidential Succession Act, cabinet members follow the vice president, the Speaker of the House of Representatives, and the president pro tempore of the Senate, based on the date their offices were established. The first four cabinet members in line for succession include, in order: secretary of state, secretary of the treasury, secretary of defense, and the U.S. attorney general. Historically, the primary role of the vice presidency has been as a president-in-waiting. This places the vice president in a peculiar position—becoming important only if the president dies or is unable to serve. Nine vice presidents (John Tyler, Millard Fillmore, Andrew Johnson, Chester Arthur, Theodore Roosevelt, Calvin Coolidge, Harry Truman, Lyndon Johnson, and Gerald Ford), roughly one in five, have risen to the presidency upon the death or resignation of a president. The office is sometimes seen as a political stepping-stone to the presidency; however, only five vice presidents have been elected president, and George H. W. Bush was the first since 1836 elected directly to the presidency. The other four vice presidents elected to the office of the presidency include John Adams, Thomas Jefferson, Martin Van Buren, and Richard Nixon. In recent years, however, the power, the influence, and the political significance of the vice presidency have grown considerably. The office has become institutionalized with a large staff and significant political and governmental responsibilities. The rise of the vice presidency can be traced to the presidency of Jimmy Carter. President Carter, a former governor
from Georgia, had no Washington, D.C., experience, but his vice president, Walter Mondale, was a senator from Minnesota who was considered to be a Washington veteran and insider. Carter relied on Mondale for advice and counsel and drew his vice president into the decision-making orbit. At this point, the Beltway crowd began to take the vice presidency more seriously. Carter's successor, Ronald Reagan, another Washington, D.C., outsider and state governor (from California), also brought in a more experienced vice president, George H. W. Bush, and gave him an added role as a presidential adviser. Bush had been a congressman, chairman of the Republican National Committee, U.S. envoy to China, and director of the Central Intelligence Agency prior to becoming Reagan's vice president. When Bush ran for president in 1988, he chose J. Danforth Quayle, a young and somewhat inexperienced U.S. senator from Indiana, as his running mate. Quayle quickly became known for his public-speaking gaffes, especially his much-publicized failure to spell "potato" correctly at a grade-school photo opportunity (he insisted that a student add an "e" onto the end of the word). Bush did not give Quayle a great deal of responsibility within his administration, and as a result, the vice president did not play a prominent role in the day-to-day operation of the Bush White House. Bill Clinton, yet another D.C. outsider and former governor of Arkansas, brought his vice president, Tennessee senator Al Gore, close to the center of power, and Gore became the most influential vice president in history—up to that point. When former Texas governor George W. Bush became president, he relied on an experienced veteran of two administrations (Gerald Ford and George H. W. Bush) and a former member of the House of Representatives to be one of his closest policy advisers. To some, Dick Cheney played the role of copresident to yet another state governor who had no real experience in dealing with Congress or all of the other many political actors in the nation's capital. Cheney was the most influential vice president in U.S. history, managing to influence the inexperienced Bush in ways large and small. Jokes circulated around Washington that if anything happened to Cheney, Bush might have to become president. The jokes were stinging but only marginally off the mark. Unlike most of his predecessors,
Cheney found himself to be within the White House inner circle, and unlike all of his predecessors, he actually played a key role in the decision-making process. Further Reading Goldstein, Joel K. The Modern American Vice Presidency. Princeton, N.J.: Princeton University Press, 1982; Light, Paul C. Vice-Presidential Power. Baltimore, Md.: Johns Hopkins University Press, 1984; Patterson, Bradley. Ring of Power. New York: Basic Books, 1988; Walch, Timothy, ed. At the President's Side: The American Vice Presidency in the Twentieth Century. Columbia: University of Missouri Press, 1997. —Michael A. Genovese
war powers
There is, in the U.S. Constitution, no mention of the term war powers, which applies to a group of powers vested principally in Congress but also in the president, to preserve, protect, and safeguard the United States. The war powers of the nation encompass the various facets of national security, including the authority to maintain peace, declare neutrality, initiate military hostilities, and prosecute war. The "war power," the authority to commence war or lesser military hostilities, John Quincy Adams observed, "is strictly constitutional." It is derived from the war clause of the Constitution (Article I, Section 8), which provides: "The Congress shall have power . . . to declare war [and] grant letters of marque and reprisal." This provision vests in Congress the sum total of the nation's war-making powers. Accordingly, the courts have held that it is the sole and exclusive province of Congress to move the country from a state of peace to a state of war. As James Madison explained, it falls to Congress alone to "commence, continue and conclude war." Toward that end, Congress is equipped with several specific powers to initiate and conduct war making: to raise and support armies and provide and maintain a navy, to make regulations governing the land and naval forces, to call forth the militia, and to provide for organizing, arming, and disciplining the militia. In this manner, as Alexander Hamilton explained in Federalist 23, "Congress have an unlimited discretion to make requisitions of
men and money; to govern the army and navy; to direct their operations." The president's authority is drawn from the commander-in-chief clause. Hamilton explained in Federalist 69 that the president is constitutionally authorized to repel sudden attacks against the United States. As commander in chief, he is "first General and admiral" of the military forces, and he is to conduct war when "authorized" by Congress. The president has no constitutional authority to initiate military hostilities, and he is subject to statutory command. Few decisions in the life of a nation rival in importance those that involve war and peace and national security. For the framers of the Constitution, the pursuit of an efficient design for foreign affairs and war making was an animating purpose of the Constitutional Convention. They might well have embraced and implemented the English model for foreign affairs, one that concentrated power in the executive and exalted the values of unity, secrecy, and dispatch, but they did not. Guided by their desire to establish a republican form of government, they sharply rejected the executive model for foreign affairs despite their familiarity with the warnings issued by John Locke, among others, that separation and dispersal of war and foreign-relations powers invited chaos and disaster. But the founders, who represented a generation that lived in dread fear of a powerful executive and the method of executive unilateralism, ignored the threats and dire warnings and engineered a radically new design for foreign affairs that was grounded on the concept of collective decision making, the cardinal principle of republicanism, which champions discussion and debate and conflict and consensus as a method of establishing programs, policies, and laws. In its essence, it rests on the idea that the combined wisdom of the many is superior to that of one. At bottom, the framers' blueprint for foreign affairs reflects an effort to apply the principles and elements of constitutionalism, separation of powers, checks and balances, and the rule of law to foreign affairs as rigorously as they applied them to domestic affairs. This design represented a distinctively American contribution to politics and political science, a new vision that embraced the values of the New World and repudiated the outdated, shopworn values of the Old World that were associated with monarchy and unilateral executive powers.
The debates on the war clause reflect the framers' commitment to collective decision making. Edmund Randolph's proposal on June 1 to create a national executive, which would possess the executive powers of the Continental Congress, spurred discussion about the repository of the war power. Charles Pinckney expressed his concern that "the Executive powers of [the existing Congress] might extend to peace and war which would render the Executive a Monarchy, of the worst kind, to wit an elective one." Fellow South Carolinian John Rutledge "was for vesting the Executive power in a single person, tho' he was not for giving him the power of war and peace." James Wilson sought to allay their concerns. He observed that "making peace and war are generally determined by writers on the Laws of Nations to be legislative powers." In addition, "the Prerogatives of the British Monarchy" are not "a proper guide in defining the executive powers. Some of the prerogatives were of a legislative nature. Among others that of war & peace." James Madison agreed that the war power was legislative in character. There was no vote on Randolph's motion, but the discussion reflects an understanding that the power of war and peace—that is, the power to initiate war—did not belong to the executive but to the legislature. On August 6, the Committee of Detail circulated a draft constitution that provided that "The legislature of the United States shall have the power . . . to make war." This was strikingly similar to the provisions of the Articles of Confederation, which granted to the Continental Congress the "sole and exclusive right and power of determining on peace and war." When the war clause was placed in discussion on August 17, Pinckney objected to vesting the power in Congress: "Its proceedings were too slow. . . . The Senate would be the best depositary, being more acquainted with foreign affairs, and most capable of proper resolutions." Pierce Butler stated that he "was for vesting the power in the President, who will have all the requisite qualities, and will not make war but when the nation will support it." Butler's view alarmed Elbridge Gerry, who said that he "never expected to hear in a republic a motion to empower the Executive alone to declare war." There was no support for Butler's position. The proposal of the Committee of Detail to clothe the legislature with the authority to make war was
unsatisfactory to Madison and Gerry, and they persuaded the delegates to substitute declare for make, thus “leaving to the Executive the power to repel sudden attacks.” The meaning of the motion was clear. Congress was granted the power to make—that is initiate—war; the president could act immediately to repel sudden attacks without authorization from Congress. There was no objection to the sudden-attack provision, but there was some question as to whether the substitution of declare for make would effectuate the aims of Madison and Gerry. Roger Sherman of Connecticut approved of the motion and said that the president should “be able to repel and not to commence war.” Virginia’s George Mason also supported the proposal and added that he was opposed to “giving the power of the war to the Executive, because [he was] not [safely] to be trusted with it. . . .” The Madison-Gerry motion was adopted by the convention. Only one delegate—Pierce Butler—advanced the notion of a presidential power to initiate war. However, by the end of the August 17 debate on the war clause, he clearly understood the convention’s decision to place the war power under legislative control, as evidenced by his motion “to give the Legislature the power of peace, as they were to have that of war.” The motion, which represented a sharp reversal on Butler’s part, drew no discussion, and it failed by a unanimous vote. In all likelihood, it was viewed by delegates as utterly superfluous given the understanding that the war power encompassed authority to determine both war and peace. The debates and the vote on the war clause demonstrate that Congress alone possesses the authority to initiate war. The war-making authority was specifically withheld from the president; he was granted only the authority to repel sudden invasions. Confirmation of this understanding was provided by ratifiers in various state conventions. For example, James Wilson, a delegate and one of the leading authors of the Constitution, told the Pennsylvania ratifying convention: “This system will not hurry us into war; it is calculated to guard against it. It will not be in the power of a single man, or a single body of men, to involve us in such distress; for the important power of declaring war is vested in the legislature at large.” The grant to Congress of the war power meant that Congress, alone, possesses the authority to initiate military hostilities on behalf of the people of the
United States. At the time of the framing of the Constitution, the verb declare enjoyed a settled understanding and an established usage under international law and English law. Since 1552, it had been synonymous with the verb commence; they both denoted the initiation of hostilities. The framers were familiar with the practice. Accordingly, war may not be lawfully commenced by the United States without an act of Congress. The Constitution does not require a congressional declaration of war before hostilities may be commenced lawfully but merely that war must be initiated by Congress. Given the equivalence of commence and declare, it is clear that a congressional declaration of war would institute military hostilities. According to international-law commentators, a declaration of war was desirable because it announced the institution of a state of war and the legal consequences it entailed—to the adversary, to neutral nations, and to citizens of the sovereign initiating the war. In fact, this is the essence of a declaration of war: Notice by the proper authority of its intent to convert a state of peace into a state of war. But all that is required under U.S. law is a joint resolution or an explicit congressional authorization of the use of force against a named adversary. This can come in the form either of a declaration "pure" and "simple" or a conditional declaration of war. There are also two kinds of war: those that U.S. courts have labeled perfect or general and those labeled imperfect or limited. Three early U.S. Supreme Court cases held that only Congress may initiate hostilities and that it is for Congress to determine whether a war is perfect or imperfect. The Constitution, then, grants to Congress the sum total of the offensive powers of the nation. Consistent with this constitutional theory, the framers gave Congress the power to issue "letters of marque and reprisal." Harking back to the Middle Ages when sovereigns employed private forces to retaliate for injuries caused by the sovereign of another state or his subjects, the practice of issuing reprisals gradually evolved into the use of public armies. By the time of the Constitutional Convention, the power to issue letters of marque and reprisal was considered sufficient to authorize a broad spectrum of armed hostilities short of declared war. In other words, it was regarded as a species of imperfect war and thus within the province of Congress.
For the first 150 years of the nation's history, the government, with few exceptions, adhered to the constitutional design for war making. In 1793, war broke out between Great Britain and France. President George Washington declared that the Treaty of Alliance of 1778 did not obligate the United States to defend French territory in America, and he issued a Proclamation of Neutrality. Whether the power to proclaim neutrality belonged to the president or to Congress was debated by Hamilton and Madison. Hamilton, writing under the pseudonym "Pacificus," sought to defend the proclamation: "If the legislature have the right to make war on the one hand—it is on the other the duty of the Executive to preserve Peace till war is declared." For Hamilton, that duty carried with it the authority to determine the extent of the nation's treaty obligations. In response, Madison maintained that if the proclamation was valid, it meant the president had usurped congressional power to decide between a state of peace and a state of war. Despite this difference, both agreed that the power to commence war is vested in Congress. Moreover, throughout their lives, both Hamilton and Madison maintained the doctrine that it is Congress's responsibility alone to initiate hostilities. In 1794, Congress passed the Neutrality Act, signed by President Washington, which established, as Madison had argued, that it is for Congress to determine matters of war and peace. Early presidents refused to claim constitutional authority to initiate military hostilities. Presidents George Washington, John Adams, Thomas Jefferson, James Madison, James Monroe, and Andrew Jackson recognized Congress as the repository of the war power and refused to initiate hostilities without authority from Congress. Contrary to the claim that Adams engaged in an exercise of unilateral presidential war making in the Quasi-War with France (1798–1800), the facts demonstrate that the war was clearly authorized by Congress. Congress debated the prospect of war and passed some 20 statutes permitting it to be waged. Moreover, Adams took no independent action. In Bas v. Tingy (1800), the U.S. Supreme Court held that the body of statutes enacted by Congress had authorized imperfect, or limited, war. In Talbot v. Seeman (1801), a case that arose from issues in the Quasi-War, Chief Justice John Marshall wrote for the majority of the Supreme Court: "The whole powers of war being, by the constitution of the United
States, vested in Congress, the acts of that body can alone be resorted to as our guides in this inquiry.” In Little v. Barreme (1804), Marshall emphasized that the president, in his capacity as commander in chief, is subject to statutory commands. There was no departure from this understanding of the war clause throughout the 19th century. In 1846, President James K. Polk ordered an army into a disputed area between Texas and Mexico; it defeated the Mexican forces. In a message to Congress, Polk offered the rationale that Mexico had invaded the United States, which spurred Congress to declare war. If Polk’s rationale was correct, then his actions could not be challenged on constitutional grounds, for it was well established that the president had the authority to repel invasions of the United States. If, however, he was disingenuous—if he had in fact initiated military hostilities—then he had clearly usurped the war-making power of Congress. Polk made no claim to constitutional authority to make war. The House of Representatives censured Polk for his actions because the war had been “unnecessarily and unconstitutionally begun by the President of the United States.” Representative Abraham Lincoln voted with the majority against Polk. As president, it is worth noting, Lincoln maintained that only Congress could authorize the initiation of hostilities. None of his actions in the Civil War, including the appropriation of funds from the U.S. Treasury or his decision to call forth regiments from state militias, each of which was eventually retroactively authorized by Congress, constituted a precedent for presidential initiation of war. Moreover, in the Prize Cases (1863), the U.S. Supreme Court upheld Lincoln’s blockade against the rebellious Confederacy as a constitutional response to sudden invasion, which began with the attack on Fort Sumter. The Supreme Court stated that the president, as commander in chief, “has no power to initiate or declare war either against a foreign nation or a domestic state.” Nevertheless, in the event of an invasion by a foreign nation or state, the president was not only authorized “but bound to resist force by force. He does not initiate the war, but is bound to accept the challenge without waiting for any special legislative authority.” Until 1950, no president, no judge, no legislator, and no commentator ever contended that the presi-
dent has a legal authority to initiate war. Since then, however, a steady pattern of presidential war making has developed, from the Korean War and the Vietnam War to U.S. incursions in Grenada, Panama, Somalia, Kosovo, Afghanistan, and Iraq. Revisionists, including presidents, academics, and journalists, have defended the legality of these acts principally by invoking the commander-in-chief clause. The clause, however, confers no war-making power whatever; it vests the president with the authority to repel invasions of the United States and to direct war, as Hamilton explained, when authorized by Congress. The assertion of a presidential power to initiate hostilities would eviscerate the war clause. Aggressive presidential claims to the war power for the past half-century have succeeded in practice because Congress has not defended its constitutional powers. As a consequence, presidential war making, composed of one part usurpation and one part abdication, has become the norm, if not the law, in the United States. Further Reading Adler, David Gray. "The Constitution and Presidential Warmaking: The Enduring Debate." Political Science Quarterly 103 (1988): 1–36; Fisher, Louis. Presidential War Power. 2nd ed. Lawrence: University Press of Kansas, 2004; Wormuth, Francis D., and Edwin Firmage. To Chain the Dog of War. Dallas, Tex.: Southern Methodist University Press, 1986. —David Gray Adler
Watergate
The scandal known as Watergate was the most significant political scandal in U.S. history. In its wake, one president was compelled to resign from office, a vice president resigned in a separate scandal, several cabinet officials were convicted and spent time in jail, and several of the president's top aides, including his White House chief of staff, chief domestic adviser, and legal counsel, were also convicted and did jail time. It led to an era of harsh, hyperpartisan politics and a slash-and-burn attitude in which the politics of personal destruction dominated the political culture. It had, and still has, a tremendous impact on the U.S. political system. It all started with a break-in at the
President Richard Nixon leaving the White House after resigning in the wake of the Watergate scandal (Getty Images)
Democratic National Committee headquarters in the Watergate complex in Washington, D.C., in the summer of 1972. Richard Nixon was elected president in 1968 on a promise to end the controversial war in Vietnam, but as president, he extended before he ended that war. Hounded by antiwar protesters, the Nixon White House went into a defensive posture and tried to undermine the protest efforts and silence critics. His predecessor, Lyndon B. Johnson, had been forced to forgo seeking reelection because of the antiwar sentiment in the nation. Nixon was determined not to let this happen to him. Initially, these efforts involved trying to plug leaks of controversial information. To do this, the White House formed "the Plumbers" (to plug leaks). In time, leak plugging gave way to more controversial and illegal activities. As his first term drew to a conclusion, President Nixon focused on reelection. He was pulling U.S.
troops out of Vietnam, had opened doors to China, and was developing détente with the Soviet Union. In many respects, it was an impressive record of innovation and daring—especially in foreign affairs. It seemed like a strong record on which to seek reelection, but he was hounded by the prospect of losing the 1972 election. He was determined not to become yet another victim of the antiwar protests and began to engage in secret efforts to undermine the Democratic candidates who might run against him. These White House “dirty tricks” leveled against the strongest of the potential Democratic rivals the president might face in the 1972 race proved very successful, and in the end, Nixon faced Senator George McGovern (D-SD), the liberal antiwar candidate and a man whom Nixon knew he could beat. In effect, McGovern was the weakest candidate in the Democratic Party, and unless something drastic happened, Nixon faced the possibility of a landslide electoral victory.
During the 1972 presidential race, a potential disaster for the president occurred when five men were arrested (others, more closely linked to the Nixon White House, were later arrested in connection with this crime) inside the Democratic National Committee headquarters, located in the Watergate office complex in Washington, D.C. Eventually, these men were traced to the Nixon White House, and the prospect of implicating those close to the president, or even the president himself, became a real possibility. Almost immediately, a cover-up was devised. It was designed to limit damage to the president, keep the burglars quiet, and contain the problem until the November election. This strategy proved successful, as Nixon won a landslide electoral victory over George McGovern in the November 1972 election. Everything seemed to be going well for the president, but before long, the conspiracy and the cover-up began to unravel. Newspaper stories in The Washington Post by reporters Bob Woodward and Carl Bernstein and stories that began to crop up elsewhere began to tighten the noose around the Oval Office. Members of the administration began to talk to prosecutors, hire lawyers, and threaten to blow open the cover-up. One of the first of the "big fish" to turn on the president was his White House counsel, John Dean. Dean made a deal with prosecutors and told his story to a Senate committee headed by Sam Ervin (D-NC). Dean accused the president of being involved in a cover-up and a conspiracy. It was a devastating set of accusations, but there was little proof since it was Dean's word against the word of the president of the United States, and at that point, most citizens were willing to believe the president. When it was revealed that President Nixon had been tape-recording his conversations in the White House and elsewhere, it became possible to see definitively who was telling the truth. A battle for control of the tapes ensued. The Senate wanted them, as did the special prosecutor, Archibald Cox, who had been appointed to investigate the Watergate break-in. So too did the criminal court headed by Judge John Sirica, who was presiding over the Democratic National Committee headquarters break-in case, but President Nixon resisted turning over the tapes.
In the meantime, the scandal was drawing closer and closer to the president. Several of his top aides were indicted, including the former attorney general and his chief of staff, but for the president, it all seemed to hinge on the tapes. When the U.S. Supreme Court finally agreed to hear the case concerning control of the tapes, the end of the controversy seemed near. On July 24, 1974, the U.S. Supreme Court handed down its 8-0 decision in United States v. Nixon (Associate Justice William Rehnquist recused himself from the decision since he had served in the Nixon Justice Department prior to his appointment to the high court). The president would have to turn over the tapes. On the same day, the House Judiciary Committee, headed by Peter Rodino (D-NJ), began to debate articles of impeachment against President Nixon. Three days later, the committee voted 27-11 to recommend to the full House that President Nixon be impeached for obstruction of justice. Shortly thereafter, two more articles of impeachment passed the committee. When the president finally released the White House tapes, his fate was sealed. These tapes revealed that the president of the United States had committed several illegal acts, and the evidence was irrefutable. A few days later, on August 9, 1974, President Richard M. Nixon resigned the presidency. A brief chronology of the Watergate scandal does not, however, reveal the true level of corruption of the Nixon White House. For that, we must remember that Watergate is an umbrella term under which a variety of corrupt and illegal acts are encompassed. Essentially, there were three different but related conspiracies at work here. First was the Plumbers conspiracy, which involved political crimes aimed at "getting political enemies." These included illegal wiretapping, the break-in at the office of Daniel Ellsberg's psychiatrist in an effort to get "dirt" on Ellsberg (Daniel Ellsberg is the man who released the Pentagon Papers [a government report outlining the history of U.S. involvement in Vietnam, which the Nixon White House attempted to suppress] to the press), and use of government agencies to go after people on the Nixon "enemies list."
The second conspiracy was the reelection conspiracy. This involved efforts to undermine the Democratic primary so as to hurt the candidacies of the stronger Democratic rivals and elevate the weaker candidates. It also involved extorting campaign contributions from individuals and corporations, laundering money, and sabotaging the electoral process through forgery, fraud, and dirty tricks. The third conspiracy was the cover-up conspiracy. Almost immediately after the break-in at the Democratic National Committee headquarters, a criminal conspiracy to cover up the crime began. Designed to mislead investigators, this conspiracy involved destruction of evidence, perjury, lying to Congress, and the defiance of subpoenas. In these conspiracies, how much did the president know, and how deeply involved was he? As the White House tapes reveal, the president was very deeply involved in many of the crimes of Watergate, including the paying of hush money to criminal defendants (in a March 21, 1973, White House tape, the president is told that it could take "a million dollars" to keep the defendants quiet, to which the president says, "We could get that. If you need the money, I mean, uh, you could get the money . . . you could get a million dollars. And you could get it in cash. I, I know where it could be gotten. . . . I mean it's not easy, but it could be done." The defendants' silence was later bought) and the cover-up of a crime (on June 23, 1972, shortly after the break-in at the Democratic National Committee headquarters, the president and H. R. "Bob" Haldeman, his chief of staff, are discussing the investigation of the break-in, and Haldeman tells the president that "the FBI is not under control," and the president and Haldeman discuss ways to keep the FBI out of the investigation, agreeing to have the CIA tell the FBI not to investigate in certain areas. The president then orders Haldeman to have CIA director Richard Helms tell the FBI that "the President believes that it is going to open the whole Bay of Pigs thing up again. And that they [the CIA] should call the FBI in and [unintelligible] don't go any further into this case period!"). There is, however, no direct evidence that President Nixon ordered the break-in at the Democratic National Committee headquarters in the Watergate complex.
The Watergate scandal had a profound impact on U.S. politics and on U.S. political culture. It spawned a cynicism and culture of slash-and-burn politics that has lasted more than a quarter-century. It further divided the political parties and helped usher in a hyperpartisanship that made compromise and accommodation more difficult. It opened up the press to more investigative reporting, but rather than focusing on uncovering public crimes and abuses of power, investigative reporting has evolved into a more personalistic and more intrusive outing of the private lives and habits of elected officials. The Watergate scandal also led to a series of presidency-bashing and presidency-curbing laws that tried to control the imperial presidency that had been created out of the Vietnam War and Watergate. Congress passed the Case Act (1972), the War Powers Resolution (1973), the Congressional Budget and Impoundment Control Act (1974), the Federal Election Campaign Act Amendments (1974), the National Emergencies Act (1976), the Government in the Sunshine Act (1976), the Foreign Corrupt Practices Act (1977), the Ethics in Government Act (1978), the Presidential Records Act (1978), and the Foreign Intelligence Surveillance Act (1978). Individually, each of these acts had the effect of shrinking the power of the presidency. Collectively, they were designed both to return some powers to the Congress and to enchain the presidency in a web of legal restrictions that were to put an end to the imperial presidency. Clearly, the Watergate scandal was the most serious example of presidential corruption in U.S. history (the other serious cases of presidential corruption come from the presidencies of Ulysses S. Grant, Warren G. Harding, Ronald Reagan, and Bill Clinton). President Nixon attempted to defend his acts in a television interview with British journalist David Frost in 1977. In the interview, Nixon said that "when the President does it, that means it is not illegal." Such a defense flies in the face of the U.S. Constitution and the rule of law. The basis on which the United States was founded is that no person is above the law, not even the president, and while some of the post-Watergate efforts to chain the power of the presidency may have gone too far, clearly they were aimed not only at punishing Nixon but also at reclaim-
ing some of the lost or stolen powers that had gone to the executive branch during the previous 50 years. As a nation, the United States still suffers from the crimes of Watergate. That era created a brand of politics that was more harsh, more partisan, more personal, and more divisive than any period since the Civil War era. It is a brand of politics that is still with us today and may be with us for some time to come. In this sense, we are still paying for the crimes of Watergate. Further Reading Dean, John. Blind Ambition: The White House Years. New York: Simon and Schuster, 1976; Emery, Fred.
Watergate: The Corruption of American Politics and the Fall of Richard Nixon. New York: Touchstone Books, 1994; Friedman, Leon, and William F. Levantrosser, eds. Watergate and Afterward: The Legacy of Richard M. Nixon. Westport, Conn.: Greenwood Press, 1992; Genovese, Michael A. The Watergate Crisis. Westport, Conn.: Greenwood Press, 1999; Jeffrey, Harry P., and Thomas Maxwell-Long, eds. Watergate and the Resignation of Richard Nixon: Impact of a Constitutional Crisis. Washington, D.C.: Congressional Quarterly Press, 2004; Kutler, Stanley I., ed. Abuse of Power: The New Nixon Tapes. New York: Free Press, 1997. —Michael A. Genovese
JUDICIAL BRANCH
administrative law
Administrative law is a body of law that governs the ways in which regulatory agencies make and enforce policy. Regulatory agencies are created by enabling legislation passed by Congress and are subject to review by the legislative, executive, and judicial branches. Administrative law guides the actions of regulatory agencies by defining each agency's jurisdiction and function with respect to the other branches of government and society outside of the government. While administrative agencies are not created by the U.S. Constitution as the other branches of government are, they are vital for the government and the nation to function efficiently. One of the largest debates in the field of administrative law is the balance between democracy on the one hand and bureaucracy on the other. Whereas Congress and the president are elected officials, bureaucrats are isolated from public opinion and the politics of the other branches. Bureaucrats, who populate administrative agencies, are highly skilled professionals and experts who provide expertise and guidance to elected officials on various policy matters about which those elected officials do not have the time or the resources to become experts themselves. The constitutional branches of the national government, including the unelected judiciary, grant discretion to bureaucrats to make and implement public policy because of their expertise. One way to think of administrative law is to think of it as the branch of public law that applies to administrative agencies and the other branches' associations with administrative agencies. This essay provides an overview of administrative law and the constraints it places on both administrative agencies and the three branches of government as outlined in the U.S. Constitution.
From 1816 to 2001, the number of federal government employees increased from 4,837 to more than 2.6 million. The administrative courts in the Social Security Administration adjudicate more than 300,000 cases a year, which is more than 10 times the caseload of all federal judges combined, and it is predicted that by 2023, there will be another 266 new federal agencies created. The creation and growth of administrative agencies reflects a changing perception of what government ought to do. Not until 1887 with the creation of the Interstate Commerce Commission (ICC) was there even such a thing as an independent regulatory agency. The creation of the ICC was a reaction to those who were concerned about the abuses that occurred within the free-market economic system. Since 1887, there have been five episodes of expansion, each of which is a function of historical circumstances, changing attitudes, and government cooperation. From 1887 to 1900, administrative agencies were created to control monopolies and trusts. There was a boom in the industrial and banking sectors that led to corruption and monopolies, which resulted in unfair business practices. There was a demand placed on the government to step in and right the system. The government reacted by creating administrative agencies
to oversee private-sector transactions. From 1906 to 1915, the government was again called on to check the practices of private industry but this time on behalf of consumers, who were concerned about the quality of the products and safety of the workplace. Upton Sinclair, in The Jungle, painted a bleak picture of the Chicago meatpacking industry during this time, and his assessment could have been applied to almost any sizable U.S. city and any industry. The New Deal was not only an economic stimulus package enacted by President Franklin D. Roosevelt; the years from 1930 to 1940 also saw the nation's third period of administrative expansion as a result of FDR's New Deal policies. The New Deal was an effort to bring the country out of economic depression, which was the result of the stock-market crash in 1929. As people lost their jobs and poverty swept over the nation, the populace began to demand that the government become more interventionist than it had ever been before. This period of expansion saw the creation of such agencies as the Tennessee Valley Authority and the Social Security Administration. The effects of this expansion period are still felt, and these efforts were continued in the fourth stage of expansion, which lasted from 1960 to 1979. Much of the nation at this time could remember the Great Depression, but the nation was still dealing with issues of racial and gender equality that required government aid to correct. In an effort to make the United States cleaner, healthier, and safer, administrative agencies were created at a record pace, which led into the fifth stage of administrative expansion, which is not expansion at all but a move toward deregulation. The changing attitudes of the nation and the lack of a great catastrophe such as the Great Depression have led to a move away from administrative expansion and increased rule by free-market mechanisms. This era of deregulation may soon come to an end as a result of corporate scandals, terrorism, and the natural disasters that have shaped the national outlook in the early 21st century. There are four general types of administrative agencies: independent regulatory agencies, quasi-independent regulatory agencies, executive departments, and commissions. Independent regulatory agencies (IRAs) are public agencies that perform powerful regulatory functions and lie outside of the executive departments, which places them beyond
the president's jurisdiction. IRAs utilize both their rule-making and adjudicative authority to regulate specific industries. Examples of IRAs include the Federal Trade Commission (FTC), Securities and Exchange Commission (SEC), Federal Communications Commission (FCC), and the Federal Reserve Board (the Fed). Like many agencies, IRAs were created as a response to environmental factors. Congress, in response to a specific set of problems, creates agencies to handle matters with which it may not have the time or expertise to deal. Likewise, when Congress perceives the federal judiciary as an inadequate outlet, the likelihood of creating an IRA increases. To help isolate bureaucrats within the IRA from political pressure, heads of these agencies are appointed by the president for fixed terms and are confirmed by the Senate. The only way an IRA head can be removed is if he or she fails to do his or her job. Quasi-independent regulatory agencies deviate only subtly from IRAs, but the subtle differences make quasi-IRAs more susceptible to political pressures and oversight from both elected and appointed officials. For instance, the Federal Aviation Administration (FAA) was created as an IRA, but it is housed in the Department of Transportation (DOT); as such, its independence is quite limited. Agencies such as the Environmental Protection Agency (EPA) and the Food and Drug Administration (FDA) are categorized in the U.S. Government Manual as IRAs, but their independence is limited since the heads of these agencies can be removed for reasons other than cause. Executive departments carry a high level of prestige since their heads hold cabinet-level positions. There are currently 15 executive departments, which include the Department of Agriculture, the Department of Commerce, the Department of Defense, the Department of Education, the Department of Energy, the Department of Health and Human Services, the Department of Homeland Security, the Department of Housing and Urban Development, the Department of the Interior, the Department of Justice, the Department of Labor, the Department of State, the Department of Transportation, the Department of the Treasury, and the Department of Veterans Affairs. Most of these departments have been created to help the president implement and
refine new policies. For instance, as a result of the passage of the Homeland Security Act of 2002, the Department of Homeland Security was created to help the president implement the law passed by Congress. In addition to these high-profile agencies and departments, a great number of agencies exist on a smaller scale in more obscure policy areas. These commissions, such as the Small Business Administration or the American Battle Monuments Commission, are created as a response to special and narrow demands and as a result have smaller staffs and budgets that reflect their narrower focus. Not all of these commissions are obscure, however; the U.S. Postal Service and the National Aeronautics and Space Administration (NASA) fall into this category. While Congress passes bills, presidents sign bills into law and issue executive orders, and courts hand down decisions, administrative agencies make rules. Rule making is the method by which agencies make policies that have the force of law. According to the Administrative Procedure Act of 1946, a rule is "an agency statement of general or particular applicability and future effect designed to implement, interpret, or prescribe law or policy." Rules are the products of administrative action and are issued by agencies, departments, and commissions. Sections 551 and 553 of the Administrative Procedure Act (APA) require administrative agencies to follow certain procedures when making rules. There are two prominent methods of rule making. The first is the APA's notice-and-comment rule making. In this scenario, administrators draft and propose rules that are then published in the Federal Register. After the proposed rules have spent at least 30 days in the Federal Register, the administrative agency must revise them based on the recommendations of interested parties who have read the rules there. Once revisions are made, the rules are placed back into the Federal Register, where they stay and take effect after another 30-day waiting period. In 1936, the government began to list administrative rules in the Federal Register; it began to categorize them in 1938 in the Code of Federal Regulations, which is organized under 50 titles corresponding to different areas of law and public policy. The number of rules and regulations published in each of these books has increased exponentially since their cre-
ation. For instance, the number of rules published in the Federal Register in 2001 totaled 19,643 compared to slightly more than 6,000 in 1988. Likewise, the agriculture section of the Code of Federal Regulations had grown to 10,406 pages, compared with only 1,174 pages in 1938. These are commonly used measures to indicate the increased rule-making function of administrative agencies. The second method of rule making is the formal method, which follows a trial-like procedure. Whereas the notice-and-comment method of rule making is the quasi-legislative method used by administrative agencies, there is a quasi-judicial method known as adjudication. Sometimes, in the course of rule making, Congress requires that agencies engage in quasi-judicial hearings to clarify a specific rule or regulation. When a dispute arises and agencies are called on to make rules through adjudication, there is an administrative trial in which there are two parties who are alerted by official notice. Then there are pleadings and a discovery process followed by an examination and a cross-examination, much like what occurs in a real trial. In these administrative trials, there are no juries, and the judges are public administrators. While quasi-judicial hearings make up only a small share of an agency's workload, in 1983 there were more than 400,000 cases filed in agencies, compared to only 275,000 filed in all of the U.S. district courts combined. Aside from these two recognized methods of rule making, there are many loopholes to the official methods of rule making that give agencies a high degree of flexibility and discretion. The Administrative Procedure Act permits many of these loopholes due to its vagueness. Due to the large number of rules passed informally, a large number of rules are challenged through the federal court system, thus allowing federal courts to play a role in outlining agency discretion. Agency discretion is categorized as informal administrative acts that are within an administrator's formal authority. Administrators use their discretion either to act on policies that they favor or to decline to act, closing the door on policies that they oppose. Since there are so many agencies dealing with such a great number of issues, it is impossible for the judiciary and Congress to provide extensive oversight. In
fact, the inability of the three branches of government to deal with so many issues adequately is the very reason administrative agencies were created in the first place. Therefore, much of what agencies do happens out of the view of the other branches of government. The extent to which administrators ought to be given discretion is a contentious debate that has implications for democratic governance. An increase in agency discretion means a decrease in congressional authority and oversight of administrative agencies. This, as mentioned in the opening, is at the heart of the democracy/bureaucracy debate. Proponents of administrative discretion argue that bureaucrats are better equipped to deal with certain policy matters and therefore do a better job. The opposing argument is that too much agency discretion violates democracy's central tenet of government by elected officials. There is no clear answer as to whether the U.S. Supreme Court and Congress are acting to expand or restrict agency discretion. For instance, in Securities and Exchange Commission v. Chenery Corporation (1947), the U.S. Supreme Court decided that when a statute requires an agency to proceed under Section 553 of the APA and the agency ignores the statute and decides to deal with an issue through adjudication, the courts will not interfere. Also, when an agency is required by statute to provide a hearing on the record but instead chooses to promulgate a general rule and deny the required hearing to whoever fails to meet the terms of the rule, the courts will not interfere with the agency's choice. This was the opinion handed down by the Supreme Court in U.S. v. Storer Broadcasting Company (1956). Agency discretion has been limited by such Supreme Court decisions as Citizens to Preserve Overton Park, Inc. v. Volpe (1971) and Dunlop v. Bachowski (1975), but other cases decided by the Supreme Court in recent years have allowed Congress to grant increasingly broader discretionary authority to administrators. This brief discussion has focused on only a few dimensions of administrative law: the history and growth of administrative agencies, the structure of the U.S. bureaucracy, rule making and adjudication in administrative agencies, and agency discretion. The point that should be stressed
is that administrative law is a complex system of formal and informal powers that provides the parameters within which administrative agencies operate. Its complexity is due to the dynamic nature of administrative powers that are a function of changing policy demands; that is to say, as the public demands government action, policies are created. These policies then lead to changes in administrative agencies as a result of changing administrative law, thus throwing into question the validity of the assumption at the foundation of the bureaucracy/democracy dichotomy. Further Reading Kerwin, Cornelius. Rulemaking: How Government Agencies Write Law and Make Policy. 3rd ed. Washington, D.C.: Congressional Quarterly Press, 2003; Warren, Kenneth F. Administrative Law in the Political System. 4th ed. Boulder, Colo.: Westview Press, 2004. —Kyle Scott
American Bar Association
The American Bar Association (ABA) is the largest voluntary professional association in the world. With more than 400,000 members, the ABA provides law-school accreditation, continuing legal education, information about the law, programs to assist lawyers and judges in their work, and initiatives to improve the legal system for the public. The stated mission of the ABA is to be the national representative of the legal profession, serving the public and the profession by promoting justice, professional excellence, and respect for the law. ABA membership is open to lawyers who have been admitted to practice and are in good standing before the bar of any state or territory of the United States. Those eligible to join the ABA as associates include nonlawyer judges, federal court executives, bar-association executives, law-school educators, criminal justice professionals, members of administrative agencies, industrial organization economists, law-office administrators, legal assistants, law librarians, and members of ABA-approved law school boards of visitors. Members of the legal profession in other nations who have not been admitted to the practice of law in the United States can become international associates.
The ABA was founded on August 21, 1878, in Saratoga Springs, New York, by 100 lawyers from 21 states. At the time, lawyers were generally sole practitioners who had trained as apprentices; as a result, there were no codes of ethics or other guidelines by which to govern the profession. After its creation, the ABA sought to develop a national code of ethics for the legal profession, as well as to provide a forum for discussing intricate issues involving the practice of law. The first ABA constitution, which is still mostly in effect today, defined the purpose of the ABA as being for "the advancement of the science of jurisprudence, the promotion of the administration of justice and a uniformity of legislation throughout the country." By the 20th century, the ABA had begun to expand both its membership and its programs. The organization adopted codes of ethics for the legal profession, promoted the independence of the judicial branch, and enacted standards for legal education to be followed by accredited law schools. The ABA's model code of professional conduct has been adopted by 49 states and the District of Columbia; California is the only state that has not officially adopted the ABA's model, yet its state bar relied on major provisions of the model in creating its own code of conduct for the legal profession. In 1925, African-American lawyers formed the National Bar Association since they were not allowed to join the ABA. However, since the 1960s, the ABA has had much greater diversity in its membership, and approximately 50 percent of all attorneys in the United States are members. Today, the ABA's Law Student Division has close to 35,000 members. The organization is quite active in public outreach programs to help educate citizens about a variety of legal topics, including consumer legal issues, children and the law, domestic violence, family law, human rights, immigration, protection for online shopping, and tax tips. The ABA's Division for Public Education sponsors a Law Day each year to focus on the U.S. "heritage of liberty under law and how the rule of law makes our democracy possible." For its members, the ABA provides professional development through continuing legal education and professional networking, a variety of resources to attorneys practicing in specific fields of law or in the public or private sector, and public-service opportunities.
The official publication of the organization is the ABA Journal. ABA members can join sections of the organization devoted to specific legal topics; these sections hold their own meetings and also publish a variety of newsletters and magazines for members. The ABA has a House of Delegates that acts as the primary body for adopting new policies and recommendations for official positions. Currently, the ABA headquarters are located in Chicago, Illinois. Past presidents of the ABA include political and legal luminaries such as former U.S. president and Chief Justice of the United States William Howard Taft and Chief Justice of the United States Charles Evans Hughes. In recent decades, the ABA has taken a stand on such controversial public-policy issues as favoring abortion rights and opposing capital punishment. As a result, many conservative politicians have denounced the organization for having a liberal political bias, and the conservative Federalist Society publishes its ABA Watch twice a year to report on political and policy-related activities by the ABA. Like any interest group that represents a particular profession, the ABA can play an important role in the policy-making process. However, unlike most other professional organizations, the ABA can and does play a prominent and specific role in nominations to the federal judiciary and, in particular, to the U.S. Supreme Court. In 1956, the ABA's Standing Committee on the Federal Judiciary began assessing nominees to the Supreme Court. At that time, the committee would rate nominees as either "qualified" or "unqualified." This process became an important political element for a president in his selection of justices since having a nominee's professional credentials labeled as "unqualified" would make confirmation difficult in the Senate. Today, the organization rates Supreme Court nominees as "well qualified," "qualified," or "not qualified." Not all presidents have submitted names to the ABA when making a nomination to the Supreme Court, and some ratings by the committee have been quite controversial. Two of President Richard Nixon's nominees, Clement Haynsworth and G. Harrold Carswell, were rejected by the Senate (many senators viewed them as unqualified to serve on the
high court) in spite of the ABA's claim that both were qualified. Later nominations by Nixon were not submitted, although the ABA continued to provide its own rating. The ABA has never given a "not qualified" rating, but a less-than-unanimous vote among the committee members for a qualified rating can cause a public stir during the confirmation hearings and weaken the nominee's chances of receiving the seat on the bench. Both Robert Bork, nominated by President Ronald Reagan but not confirmed in 1987, and Clarence Thomas, nominated by President George H. W. Bush and confirmed in 1991, failed to receive unanimous votes from the ABA committee. Even though the votes against the two nominees constituted a small minority of the committee (four voted against Bork, and two voted against Thomas), the results gave opponents important ammunition in a confirmation process that is now driven by interest-group participation and news-media coverage. As a result, many Republican lawmakers in recent years became convinced that the ABA had taken too liberal a stance on judicial nominees, and they began to give less credence to the ratings. In 2001, the George W. Bush administration announced that it would not submit the names of judicial nominees to the ABA, eliminating its official role in the nomination process. However, the ABA still continues to assess the nominees and sends its reports to the Senate Judiciary Committee. The ABA is also active in other aspects of the judiciary, particularly at the state level regarding the selection of state judges. The ABA's Commission on the 21st Century Judiciary has taken a strong stance regarding states that elect their judges, advocating limits on campaign spending and reduced partisanship in the electoral process. A report released by the commission in 2003 suggests that reforms are needed to preserve the independence and the integrity of state court systems by changing the process in states that elect their judges or where judges retain their seats through retention elections.
The Judicial Branch. New York: Oxford University Press, 2005; O'Brien, David M. Storm Center: The Supreme Court in American Politics. 6th ed. New York: W.W. Norton, 2003. —Lori Cox Han
amicus curiae The term amicus curiae is Latin for "friend of the court" and refers to a person or group that contributes legal arguments in a court case without being a party to the litigation. In centuries past, the amicus was a lawyer offering disinterested aid to the court. In current U.S. legal practice, amici are almost always organized interests or government entities who offer support to one of the parties in the case. Courts generally require permission for amicus briefs from the parties in a case, though today, this is usually granted almost reflexively. Amicus participation usually takes the form of written briefs only; oral argument by an amicus is rare, though an exception more often is made in the case of the federal government as amicus. Indeed, the solicitor general of the United States plays a special role as amicus, often choosing to contribute an amicus brief in cases where the federal government is not a party but the solicitor general's office or the White House believes that an important interest is at stake. In such cases, recent solicitors general have been accused of sometimes compromising credibility with the court by taking politicized positions. However, the U.S. Supreme Court may itself call for the views of the solicitor general (referred to as a "CVSG"), essentially inviting an amicus brief from the federal government. The solicitor general nearly always responds to such invitations, and in such cases, the government's brief often comes very close to the old model of a disinterested friend of the court, not really a partisan of either side but rather an advocate for the rational application and development of the law. Such amicus briefs seem to be given great weight by the Court. Amicus support has long been common in matters of constitutional law but now is equally likely from large corporations, trade associations, and other interest groups with an economic or social interest in shaping the law. Modern amicus participation,
especially at the U.S. Supreme Court, is generally a well-orchestrated affair, with litigants actively seeking amicus support and then brainstorming with several amici over which parties should make which arguments. The role of amici is sometimes compared to that which lobbyists play for legislators, including providing information about the preferences of other political actors. Amicus briefs often contribute unique arguments, but they also commonly reiterate themes of the brief of the party they support. Indeed, studies suggest that majority opinions of the U.S. Supreme Court only rarely adopt arguments from amicus briefs that were not raised in the briefs from the parties, suggesting that the influence of amici is not primarily a function of any new arguments that they contribute. Several studies of such issue areas as obscenity or death-penalty cases found that the presence of an amicus brief on a given side increased the chances of victory for that position in the U.S. Supreme Court. However, other studies have challenged this consensus, concluding that when one controls for issues and ideology, support from an amicus curiae has limited impact on the chance of success in the Supreme Court. To the extent that there is an impact, amicus briefs by institutional litigants and experienced lawyers appear to carry the most weight. The U.S. Supreme Court fills most of its docket through discretionary choice, and it seeks cases that have the greatest political and legal salience. Amicus briefs in support of a request for review (called a petition for a writ of certiorari) can be a signal or "cue" as to how important a case will be in the view of organized interests, including government entities. The willingness of interest groups to expend resources suggests that a case raises controversial and significant legal questions, which increases the prospects that the case will be accepted for a full hearing on the merits. The presence of amicus briefs also is correlated with dissenting and concurring opinions, further evidence that amicus filings track the salience and controversial nature of a case. Organized interest groups strategically file amicus briefs in cases that they believe offer the greatest opportunity for them to influence the outcome and reasoning in a court opinion. They focus on cases in
which they think the justices are in need of or are open to new information. However, interest groups that rely on a mass membership also consider the impact of amicus participation on the group’s ability to attract and retain membership support. Thus, such groups also choose cases that promise high visibility and the best prospects to claim success. Scholars have long studied amicus participation at the U.S. Supreme Court, and many have also turned their attention to amicus curiae in the federal courts of appeal and in state supreme courts. Though amicus participation is much lower in federal circuit courts and state courts than at the Supreme Court, research demonstrates that there is increasing and significant amicus participation in cases that advocates believe are desirable as a policy vehicle. By the middle of the 1980s, interest-group participation as amici in state supreme courts had noticeably increased, reflecting an increasing focus on state courts as policy makers. Studies by Songer and others of the success of amici curiae in state-supreme-court decisions on the merits (including simple won/loss ratios, success in “matched pairs,” and multivariate logit models of the relationship between amicus support and the success of litigants) suggest that amicus support is significantly related to the likelihood of success of the supported litigants. It may be that amicus support is a cause of litigant success, though it also may be that cases believed to be successful attract more amicus support. The variety of institutional mechanisms reflected in the various state supreme courts also provides a test of strategic behavior by amici. For example, a study of the implications of an informational theory of goal-oriented litigant behavior in state supreme courts demonstrates how differences in judicial selection lead to differences in argumentation in briefs. Thus, amicus briefs offer more arguments related to institutional concerns of governors and legislatures in states where justices are selected and retained in office by those actors; such policy arguments are salient in those states since they help justices retain office. On the other hand, where judicial office is a function of partisan popular elections, justices are more likely to value information about the policy implications of their decisions on the public. Amicus support can increase significantly the prospects of disadvantaged litigants in contests
with adversaries with superior litigation capacity. Research has repeatedly confirmed scholar Marc Galanter’s observation that the “haves” rather consistently defeat the “have nots” in court, largely because businesses and other “repeat players” have numerous advantages in litigation over individual, “one-shot” parties. However, research also indicates that amicus support can even the odds for one shotters and that disadvantaged litigants with group support were substantially more successful than similar litigants without group support. In fact, individuals with interest group support may even have a higher rate of success than corporate litigants lacking group support. See also opinions, U.S. Supreme Court. Further Reading Collins, Paul M., Jr. “Friends of the Court: Examining the Influence of Amicus Curiae Participation in U.S. Supreme Court Litigation.” Law and Society Review 38, no. 4 (2004): 807–832; Comparato, Scott A. Amici Curiae and Strategic Behavior in State Supreme Courts. Westport, Conn.: Praeger, 2003; Hansford, Thomas G. “Lobbying Strategies, Venue Selection, and Organized Interest Involvement at the U.S. Supreme Court.” American Politics Research 32, no. 2 (2004): 170–197; Krislov, Samuel. “The Amicus Curiae Brief: From Friendship to Advocacy.” Yale Law Journal 72, no. 4 (1963): 694–721; Martinek, Wendy L. “Amici Curiae in the U.S. Courts of Appeals.” American Politics Research 34 (2006): 803–824; Sheehan, Reginald S., William Mishler, and Donald R. Songer. “Ideology, Status, and the Differential Success of Direct Parties before the Supreme Court.” American Political Science Review 86 (1992): 464–471; Simpson, Reagan William, and Mary R. Vasaly. The Amicus Brief: How to Be a Good Friend of the Court. 2nd ed. Chicago, Ill.: American Bar Association, 2004; Songer, Donald R., and Ashlyn Kuersten. “The Success of Amici in State Supreme Courts.” Political Research Quarterly 48, no. 1 (1995): 31–42; Songer, Donald, Ashlyn Kuersten, and Erin Kaheny. “Why the Haves Don’t Always Come Out Ahead: Repeat Players Meet Amici Curiae for the Disadvantaged.” Political Research Quarterly 53, no. 3 (2000): 537–556. —Ronald L. Steiner
associate justice of the Supreme Court The U.S. Constitution provides for "one supreme Court" in Article III, Section 1, and obliquely refers to the office of chief justice by stating that he or she shall preside at impeachment trials of the president (Article I, Section 3, Clause 6). The only reference in the document to associate justices, however, is in the appointments clause. There, the Constitution declares that the president "shall nominate, and by and with the advice and consent of the Senate, shall appoint . . . Judges of the supreme Court" (Article II, Section 2, Clause 2). How many associates shall sit on the Court and exactly what their duties shall be, however, are not specified in the Constitution. It has been left to statutes to set the number of justices, and since 1869, Congress has provided that the Court shall be composed of a chief justice and eight associate justices (Act of April 10, 1869, ch. 22, 16 Stat. 44). However, it was not always so. The Judiciary Act of 1789 originally set the Court's membership at six justices, including the chief justice. Since that time, Congress has varied the number of justices from a minimum of five to a maximum of 10. Occasionally, Congress has used its power to alter the number of justices to exert influence over the policy direction of the U.S. Supreme Court. For example, Congress reduced the number of justices to ensure that President Andrew Johnson (whose views on Reconstruction were opposed by the Congress) would not have the opportunity to name a justice, even if one of the sitting justices left the Court. Whenever there is a vacancy on the Court, the president must decide on a nominee for the position. The president submits that name to the Senate, which must give its "consent" before the nominee will be permitted to take his or her seat. The Senate Committee on the Judiciary will conduct hearings on the nomination, which, since President Dwight D. Eisenhower's nomination of Justice John Marshall Harlan, have included testimony by the nominee. Committee hearings often involve the senators asking questions designed to discover how the nominee would decide cases once on the Court, but most nominees answer only in vague generalities. The committee so far has not insisted on answers, and several nominees have maintained that it is inappropriate for nominees to commit themselves to particular decisions
before considering an actual case. Nevertheless, justices have tremendous power to shape the law, and for that reason, many senators believe it their duty to "consent" to a nomination only if the nominee will issue acceptable decisions. Once the Judiciary Committee acts on the nomination (by recommending that the whole Senate confirm the nominee, by recommending that the whole Senate defeat the nomination, or by deciding not to issue a recommendation), the nomination goes to the full Senate. When a vote is taken in the Senate, the nominee must receive a simple majority for confirmation. If he or she receives it, then he or she will take the oath of office and become a justice. If the Senate defeats the nomination, the president must select a new nominee. Once on the Court, a justice will remain an associate justice unless nominated again by the president to undergo a second appointment as chief justice. Five chief justices—John Rutledge, Edward White, Charles Evans Hughes, Harlan Stone, and William Rehnquist—previously served as associate justices, though the Senate never confirmed Rutledge's elevation (he was chief justice only briefly as a "recess appointee"). Hughes was nominated for chief justice several years after he had resigned as an associate justice to run for president. It may appear odd that Congress should be able to exercise so much power over the U.S. Supreme Court as to determine both the number of justices by statute and the identities of those justices by the Senate's power to approve nominations, but once a nominee has taken his or her seat and becomes a justice, there is very little that Congress (or anyone else) can do to control his or her official actions. The Constitution insulates all federal judges from political pressure by providing that Congress may not reduce the salary of any sitting judge and may remove a judge only by impeachment because justices hold their offices for "good Behaviour" (Article III, Section 1). In practice, this has meant that justices serve for life unless they voluntarily elect to leave the Court; only one justice—Associate Justice Samuel Chase—has been impeached, and the Senate acquitted Chase in the impeachment trial. Chief justices lend their names to the Courts over which they preside (which is why we refer to the
“Warren Court” or the “Rehnquist Court”), but that does not mean that they have always been the most influential or most respected justices on the Court. Of the 110 men and women who have served on the Supreme Court, 98 have served as associate justices, and throughout the Court's history, many associates have made lasting impacts on the judiciary and the nation through their service. The associate justices John Harlan I, Oliver Wendell Holmes, Louis Brandeis, Hugo Black, William Douglas, Felix Frankfurter, Robert Jackson, John Harlan II, William Brennan, Thurgood Marshall, and Antonin Scalia have all had a historic influence on the law and the nation by bringing their talents and widely varying judicial philosophies to the Court. Though the chief justice possesses some administrative authority and is the symbolic head of the judicial branch, in practice the Court operates as "nine law firms"—independent chambers where each justice deliberates on cases, even as the Court must come together to issue decisions. By no means are associate justices responsible to the chief justice; one story, as told in the autobiography of Justice William O. Douglas, has it that the cantankerous associate justice James McReynolds, on being summoned to the courtroom by Chief Justice Hughes, instructed the messenger to "[t]ell the Chief Justice that I do not work for him" and arrived 30 minutes later. The Constitution does little to delineate the powers of the justices, either individually or collectively as a Court. The justices decide about 85 cases per year, and they choose those cases from among approximately 7,500 cases in which a litigant files a petition for certiorari asking the Court to hear the case. The justices select cases that present important issues of federal law and are especially likely to take cases presenting issues on which different lower courts have disagreed. It takes the votes of four justices to grant certiorari and to bring the case to the Court. Beginning on the first Monday in October each year, the Court hears oral argument in each of the cases that it has selected for decision. The justices usually hear two cases per argument day in public session in the Supreme Court building across from the Capitol in Washington, D.C. There are usually six argument days per month, with cases heard on Monday, Tuesday, and Wednesday for two weeks, followed
by two weeks without argument. Lawyers typically receive 30 minutes in which they present their case and answer questions from the justices. Each justice is free to ask whatever questions of counsel will assist that justice in deciding the case. Each justice's vote on cases is worth the same as any other's, and the Court operates for the most part by majority rule—it takes five justices to agree on the result in a case, though there often is not a majority that agrees with any one rationale for reaching that result. The opinion of the Court may be written by any of the justices. After the Court discusses each argued case in conference, the justices' votes are tallied. The most senior justice in the majority assigns the opinion of the Court, and the most senior dissenting justice assigns the writing of the principal dissent. Seniority is determined by years of service, except that the chief justice is the most senior regardless of his or her years of service. Once an opinion is assigned, the justice receiving the assignment drafts the opinion and circulates the completed draft to the other justices. Those justices may offer comments or suggestions for improvement and may even tell the author that they will not join their names to the opinion unless certain changes are made. Each justice is free to write an opinion in any case in which he or she sees fit to do so, and there have been times when every one of the justices has written an opinion (for example, New York Times Co. v. United States, 403 U.S. 713, 1971). Opinions that agree with the result that the Court reaches are "concurrences" or "concurring opinions," while opinions disagreeing with the result the Court reaches are "dissents" or "dissenting opinions." In the very early years of the Court, it was standard practice for each justice to write his or her own opinion in each case (the practice known as seriatim opinion writing). Since the chief justiceship of John Marshall (1801–35), however, the Court typically issues an opinion that commands the agreement of a majority, or at least a plurality, of the justices. Perhaps the most important way in which justices show their independence from other justices (including the chief justice) is to dissent from the decisions rendered by the rest of the Court. In calling attention to the missteps of his or her colleagues, a dissenting justice, according to Charles Evans
Hughes, makes "an appeal to the brooding spirit of the law, to the intelligence of a future day, when a later decision may possibly correct the error into which the dissenting judge believes the court to have been betrayed." Some dissents have ensured their authors' places in history through their wit, foresight, wisdom, or literary style. Examples include the first Justice Harlan's dissent in Plessy v. Ferguson, 163 U.S. 537 (1896); Justice Holmes's in Lochner v. New York, 198 U.S. 45 (1905) and Abrams v. United States, 250 U.S. 616 (1919); Justice Jackson's in Korematsu v. United States, 323 U.S. 214 (1944); the second Justice Harlan's in Reynolds v. Sims, 377 U.S. 533 (1964); Justice Black's in Griswold v. Connecticut, 381 U.S. 479 (1965); and Justice Scalia's in Morrison v. Olson, 487 U.S. 654 (1988) and Planned Parenthood v. Casey, 505 U.S. 833 (1992). At other times, concurrences have shaped the law as much as or more than opinions of the Court. The best examples of this phenomenon are the second Justice Harlan's concurrence in Katz v. United States, 389 U.S. 347 (1967) and Justice Jackson's concurrence in Youngstown Sheet & Tube Co. v. Sawyer, 343 U.S. 579 (1952). When all the justices have completed their writing and each has come to a decision about which opinions to join, the decision is announced. Cases can be decided any day that the Court is scheduled to hear argument, though at the end of the Court's term in May and June, the Court usually issues opinions only on Mondays. The Court announces decisions in public session, and the author of the Court's opinion reads part of the opinion to the assembled spectators. Authors of dissents or concurrences may read their opinions as well. Copies of the decisions are made available to the public through the Public Information Office and on the Court's Web site: http://www.supremecourtus.gov. Some associate justices have gone beyond merely protesting the errors committed by the rest of the Court and have done their part to lead the Court in the directions that they believe to be appropriate. Since at least the Taney Court in the 1830s, associate justices have occasionally played a lead role in directing the Court's jurisprudence. In recent times, owing to his substantial legal talents and great personal charm, Associate Justice William Brennan often was successful at urging the Court to move the law in directions that he favored. Other associate justices
have been pivotal in determining case outcomes owing to their positions as "swing justices," whose decisions tip the balance between blocs of liberal and conservative justices. In recent years, Associate Justices Lewis Powell, Sandra Day O'Connor, and Anthony Kennedy have been the swing justices because each has often provided the fifth vote for decisions that were sometimes conservative and sometimes liberal. Though an associate justice's duties deciding cases at the Supreme Court require much of his or her time, each justice has the additional task of serving as a "circuit justice" for one or more of the 13 judicial circuits throughout the country. In this capacity, the circuit justice is charged with deciding whether to grant interim relief, such as a stay of a lower-court order, in a case pending in that circuit "in order to preserve the jurisdiction of the full Court to consider an applicant's claim on the merits." (Kimble v. Swackhamer, 439 U.S. 1385, 1385, 1978, Rehnquist, in chambers). More permanent relief may be granted only by the whole Court. (See Blodgett v. Campbell, 508 U.S. 1301, 1993, O'Connor, in chambers; see also R. Stern et al., Supreme Court Practice, Washington, D.C.: Bureau of National Affairs, 8th ed. 2002, sections 17.1–17.21). Further Reading Abraham, Henry J. Justices, Presidents, and Senators: A History of U.S. Supreme Court Appointments from Washington to Clinton. New York: Rowman & Littlefield, 1999; Douglas, William O. The Court Years 1939–1975: The Autobiography of William O. Douglas. New York: Random House, 1980; Hall, Kermit, ed. The Oxford Companion to the Supreme Court of the United States. 2nd ed. New York: Oxford University Press, 2005; Hughes, Charles Evans. The Supreme Court of the United States. New York: Columbia University Press, 1928; O'Brien, David M. Storm Center: The Supreme Court in American Politics. 7th ed. New York: W.W. Norton & Co., 2005; Rehnquist, William H. The Supreme Court: How It Was, How It Is. New York: William Morrow, 1987; Schwartz, Bernard. Decision: How the Supreme Court Decides Cases. New York: Oxford University Press, 1996; Schwartz, Bernard. A History of the Supreme Court. New York: Oxford University Press, 1993; Segal, Jeffrey, et al.
The Supreme Court Compendium: Data, Decisions, and Developments. 4th ed. Washington, D.C.: Congressional Quarterly Press, 2006; Stern, Robert, et al. Supreme Court Practice. 8th ed. Washington, D.C.: Bureau of National Affairs, 2002; Urofsky, Melvin, ed. Biographical Encyclopedia of the Supreme Court: The Lives and Legal Philosophies of the Justices. Washington, D.C.: Congressional Quarterly Press, 2006. —Michael Richard Dimino, Sr.
capital punishment Capital punishment, often referred to as the death penalty, is the execution of a person by the state as punishment for a crime (sometimes referred to as a capital crime or capital offense); the state tries and convicts the offender and then puts him or her to death. Very few industrial nations allow the death penalty for any crimes. In fact, it has been abolished in Australia, New Zealand, Canada, virtually all of Europe, and most of Latin America. In 2005, 42 countries had a constitutional ban in place against the death penalty. The United States, however, does allow for the death penalty (as does Japan). The decision on whether to impose the death penalty is a state decision, and states vary in their application of this punishment. The U.S. courts have been involved in determining whether capital punishment violates the constitutional ban against "cruel and unusual punishment." The Eighth Amendment to the U.S. Constitution states: "Excessive bail shall not be required, nor excessive fines imposed, nor cruel and unusual punishment inflicted." Thus the courts have had to decide if the death penalty falls within the ban against such a punishment. Since 1977, the courts have allowed states to impose capital punishment, holding that it is not cruel or unusual. In that time, the death penalty has been carried out more than 1,000 times in the United States. Twelve states have abolished the use of the death penalty. Virtually every other industrialized democracy condemns the use of the death penalty, and in this regard, the United States, as one of the few such nations that still imposes it, faces a great deal of international criticism and condemnation.
The word capital comes from the Latin caput, meaning "head"; the term thus refers, literally or figuratively, to a punishment in which the head is removed from the body. Historically, capital punishment has taken many forms: decapitation, asphyxiation, drowning, burning at the stake (at one time common for religious crimes), boiling to death, crucifixion, dismemberment, disembowelment, drawing and quartering, electrocution, gassing, hanging, impalement, lethal injection, walking the plank, poisoning, shooting, and stoning, among other methods. Some are considered crueler than others, and the more gruesome have generally been banned in modern times. In medieval Europe, the nobility were executed in as painless a way as possible, while the lower classes often met with harsher means of execution, frequently as part of a public spectacle. Today, critics of the death penalty point to statistics showing that a disproportionate number of those sentenced to capital punishment come from the lower classes and minority groups.
In the United States, a total of 38 states, as well as the federal government, allow capital punishment for the offense of capital murder. The state of Texas, considered "ground zero" for capital punishment, greatly outpaces the other 37 states in its use of the death penalty. From a constitutional perspective, the capital punishment debate deals with the issue of whether the death penalty constitutes cruel and unusual punishment, which is prohibited by the Eighth Amendment. The U.S. Supreme Court initially dealt with this issue in Furman v. Georgia (1972), a case in which five justices voted to strike down Georgia's death-penalty statute as unconstitutional. Four years later, the death penalty was reinstated by the Supreme Court in Gregg v. Georgia (1976). By then, 35 state legislatures, including Georgia's, had rewritten their death-penalty laws, and the Court upheld the constitutionality of the death penalty in the 1976 case by approving a bifurcated trial for capital crimes. In the first stage, guilt is determined in the usual manner. The second stage deals with the appropriate
sentence, and the jury must find that certain circumstances of the case meet the statutory requirements. Automatic appeal to the state supreme court is also provided. In other cases, the Supreme Court has refused to allow states to execute criminals convicted of lesser crimes than first-degree murder. A defendant found to be insane cannot be executed, and in Atkins v. Virginia (2002), the Supreme Court also barred the execution of mentally retarded defendants. Passage of the 1994 Crime Bill also allowed for the use of the death penalty in additional types of crimes, including treason, murder of a federal law-enforcement official, and kidnapping, carjacking, child abuse, and bank robbery that result in death. For years, Court rulings supported capital punishment for juvenile offenders 16 and older who were tried as adults, but in Roper v. Simmons (2005), the Court barred the execution of offenders who were under 18 at the time of their crimes. According to a 2004 report from Amnesty International, a group that opposes imposition of the death penalty, roughly 74 nations still allow for the death penalty, but in recent years, 25 countries actually have imposed capital punishment. The country that most often imposes the death penalty is the People's Republic of China. Next on the list is Iran, followed by Vietnam, the United States, Saudi Arabia, Pakistan, Kuwait, Bangladesh, Egypt, Singapore, Yemen, and Belarus. According to a report from the United Nations Secretary-General, the nation with the highest per capita use of the death penalty is Singapore, with 13.57 executions per one million in the population. At the end of 2006, the hanging of former Iraqi dictator Saddam Hussein made international headlines. Many of those in the United States who oppose the death penalty point to this list of nations and ask whether this is the kind of company with which the United States should be associated. The list raises questions about the use of capital punishment and compels reflection on how defensible the death penalty is when it places the United States alongside nations with very questionable human-rights records. There are two main arguments for the death penalty: It is justified punishment for serious crimes, and it serves as a deterrent. The first argument is a normative question and is open to individual choice and opinion. The second argument, that having a death penalty deters serious crimes, has been the subject of
numerous studies, and to date, there is at best a very tenuous link between a reduction in crime and the death penalty. Arguments against the death penalty often revolve around religious reasons: Virtually all major religions, including Christianity and Buddhism, reject the use of capital punishment. Others object to the death penalty on the grounds that "killing someone who kills someone to show that killing is wrong" defies logic and common sense. If killing is wrong, does state sponsorship make it right? The great novelist Victor Hugo said it best: Que dit la loi? Tu ne tueras pas! Comment le dit-elle? En tuant! (What does the law say? You will not kill! How does it say it? By killing!). Popular attention to capital punishment often centers on the taking of a life by the state in a controversial case that receives significant media attention, and it is also drawn to certain movies that deal with the death penalty. In 1995, the movie Dead Man Walking, directed by Tim Robbins and starring his real-life companion Susan Sarandon as Sister Helen Prejean, a Roman Catholic nun who took on the case of convicted murderer Patrick Sonnier (played by Sean Penn), brought the issue to a wide audience. A few years later, the movie The Green Mile (1999), starring Tom Hanks, also focused attention on the death penalty in the United States. The 2003 movie The Life of David Gale, starring Kevin Spacey as a tireless worker against the death penalty who is then convicted of murder and sentenced to death, also dealt with this controversial issue. In spite of the great controversy surrounding capital punishment, public-opinion polls continue to suggest that a majority of the population still supports the death penalty, especially in cases of murder. The Supreme Court has on occasion determined that under certain conditions, the death penalty is cruel and unusual punishment, but this has not prevented states that wish to retain the death penalty as a potential punishment from imposing it on convicted offenders. Further Reading Banner, Stuart. The Death Penalty: An American History. Cambridge, Mass.: Harvard University Press, 2003; Bedau, Hugo Adam, and Paul G. Cassell. Debating the Death Penalty: Should America Have Capital Punishment? The Experts on Both Sides Make Their
Best Case. New York: Oxford University Press, 2004; Mandery, Evan J. Capital Punishment: A Balanced Explanation. Sudbury, Mass.: Jones and Bartlett, 2005; Sarat, Austin. When the State Kills: Capital Punishment and the American Condition. Princeton, N.J.: Princeton University Press, 2001; Stack, Richard A. Dead Wrong: Violence, Vengeance, and the Victims of Capital Punishment. Westport, Conn.: Praeger, 2006. —Michael A. Genovese
chief justice of the United States While the nine members of the U.S. Supreme Court are equal in their ability to participate in selecting and deciding cases, the chief justice plays a special role in the functioning of both the Supreme Court and the federal judiciary as a whole. This can be seen first and foremost in the official title of the position, "chief justice of the United States" (as opposed to what most people believe is the correct title, "chief justice of the Supreme Court"). It is the responsibility of the chief justice to deliver an annual "state of the judiciary" message, as well as to chair the Judicial Conference and deliver the opinions from that conference on legislative issues to the Congress. In addition, the constitutional duty of presiding over Senate impeachment trials of a president falls to the chief justice (most recently, William Rehnquist presided over the impeachment trial of Bill Clinton in early 1999). Within the Supreme Court, the chief justice presides over all public sessions and supervises the administrative functions of the Court. While in session, the chief justice takes the center chair while the senior associate justice sits to his or her right, the second senior to his or her left, alternating right and left by seniority. The chief justice also plays a distinct role in deciding which justice will author an opinion; if the chief justice is part of the majority, then he or she decides who will author the opinion. If not, then the justice with the most seniority within the majority will decide (and either the chief justice or the most senior justice can select him- or herself to write the opinion). The chief justice can also shape the Court's decision-making process through his or her leadership style and the level of collegiality among the justices, both important factors in setting the tone and determining working relationships on the Court.
Chief Justice John Marshall (Alonzo Chappell, Collection of the Supreme Court of the United States)
To date, a total of 17 men have served as chief justice of the United States. Under the first chief justice, John Jay (a contributor to the Federalist), the earliest sessions of the Court were devoted to organizational proceedings. The first cases did not reach the Supreme Court until its second year of existence in 1791, and the justices handed down their first opinion in 1792. Between 1790 and 1799, the Court decided only about 50 cases and made few significant decisions. As a result, the first justices complained that the Court had a limited stature; they also stated their displeasure with the burdens of "riding circuit" under primitive travel conditions (under circuit riding, each justice was assigned a regional circuit and was responsible for meeting twice a year with district court judges to hear appeals of federal cases). Jay, concerned that the Court lacked prestige, served as envoy to England in 1794 while still chief justice and resigned in 1795 after being elected governor of New York. Jay was followed in the chief justiceship by John Rutledge, who
served for one term, and then Oliver Ellsworth, who stayed in the position for five years. Like Jay, both men had been appointed by President George Washington, but all three failed to make a grand mark on the position. Despite the pleading of President John Adams, Jay could not be persuaded to accept reappointment as chief justice when the post again became vacant in 1800. Adams instead appointed John Marshall of Virginia to be the fourth chief justice. With Marshall at the helm, the Supreme Court began to take a much more prominent role in U.S. government. Marshall left a tremendously important legacy for the Court as an institution and authored some of the most important decisions ever handed down by the justices. Unlike his predecessors, who had served only briefly on the Court, Marshall remained chief justice for 34 years and five months, the longest tenure of any chief justice to date. Marshall is revered as the greatest chief justice in the history of the Supreme Court and dominated the Court in a way that has since been unmatched. The Marshall Court was aggressive in its assertion of power, granting extensive authority not only to the federal government but also to the Court itself. Without a doubt, the most important decision ever handed down by the Court came in 1803 when, in his majority opinion in Marbury v. Madison, Marshall established the Supreme Court's power of judicial review. Marshall would later use that power in another important case, McCulloch v. Maryland (1819), in which the Court upheld both the "supremacy" and "necessary and proper" clauses of the U.S. Constitution in ruling that the state of Maryland could not tax a federal bank. Marshall's greatest legacy lies not only in shaping the position of chief justice but also in advancing policies that he favored to strengthen the national government during its earliest days. Roger B. Taney, who succeeded Marshall as chief justice and held the position from 1836 until 1864, differed from Marshall in many ways. Whereas Marshall was a supporter of a strong national government, Taney supported states' rights. As a justice, he was also more restrained and redefined Marshall's strong nationalist view to allow dual federalism, with states maintaining rights over many social and economic matters. Taney is best remembered for the infamous decision in the 1857 Dred Scott v. Sandford case in
which he proclaimed that blacks could not be citizens of the United States and that slaves were property whose ownership was protected by the U.S. Constitution. The decision inflamed public opinion over the issue of slavery and played a role in the election of Abraham Lincoln as president in 1860, followed by the start of the Civil War in 1861. Following the Marshall and Taney courts, in which each chief justice served many years and left lasting legal legacies, a chief justice of the same significance would not emerge again until the mid-20th century. Earl Warren, the former Republican governor of California, barely had time to acclimate himself to being the new chief justice in 1953 before he had to confront one of the most important cases in the Court's history in Brown v. Board of Education (1954), which overturned the 1896 decision in Plessy v. Ferguson by declaring that separate but equal was not equal and that the states must end racial segregation in public schools. Many other significant rulings came from the Warren Court prior to the chief justice's retirement in 1969, as he greatly expanded civil rights and civil liberties through his leadership on the Court. Many legal scholars believe that the Warren Court revolutionized constitutional law and U.S. society with important rulings on privacy rights, criminal procedures that protected the rights of the accused, equal voting rights, and many other policy areas. The liberal jurisprudence of Warren was followed by attempts to restore "law and order" by the Burger Court. Warren Burger became chief justice in 1969 as the first of four appointments by Republican President Richard Nixon, but the Court did not completely reverse the liberal judicial activism of the Warren years. With both liberals and conservatives on the bench during the Burger years, the new chief justice was only able to make minor adjustments with his rulings, while this court also added several activist decisions of its own, including cases dealing with abortion (Roe v. Wade in 1973), affirmative action (Regents of the University of California v. Bakke in 1978), and campaign finance reform (Buckley v. Valeo in 1976). While presidents are always concerned with leaving a positive legacy for their political beliefs through their Supreme Court appointments, former chief justice William Rehnquist is an excellent example of
positive legacy building for a president. First nominated as an associate justice by Richard Nixon in 1971 and confirmed in early 1972, Rehnquist was a strong advocate of the law-and-order, states' rights approach to the U.S. Constitution that Nixon favored. Within three years of Rehnquist's confirmation to the Court, Nixon had resigned from office in 1974 because of the Watergate scandal. Nixon died 20 years later in 1994, and Rehnquist, following his elevation to chief justice by President Ronald Reagan in 1986, still remained one of the most powerful men in Washington. Prior to his death in 2005, more than 33 years after his initial nomination to the Court and 31 years following Nixon's resignation, Rehnquist was closing in on the record for longest-serving member of the Supreme Court. Nixon, obviously, had chosen well for the purposes of a long-lasting political legacy. Given how long a justice can serve or how powerful the Court can be in setting certain public policies, that is no small accomplishment. President Ronald Reagan seized the opportunity to make the Court more conservative and restrained in its decisions with three appointments during the 1980s, as well as by elevating William Rehnquist to chief justice in 1986. While, in general, the Rehnquist Court provided a stronger view of states' rights and reversed earlier rulings in regard to criminal procedures that protected the accused, the Court overall had a mixed record. Certain swing voters on the Court, most notably Associate Justice Sandra Day O'Connor (also a Reagan appointee), were able to uphold abortion rights and affirmative action, for example, and in perhaps the most famous case heard during the Rehnquist era, the chief justice who was known for advocating judicial restraint became part of the 5-4 majority that ended the 2000 presidential-election controversy. The irony of Bush v. Gore (2000) is that five justices who usually endorsed judicial restraint used the power of the Supreme Court to overturn the Florida supreme court's decision to hold a statewide recount of the ballots. The decision, in effect, awarded both Florida's electoral votes and the presidency to George W. Bush. With his confirmation in 2005 as chief justice at the age of 50, John Roberts is expected to guide the nation's highest court for several years or even decades to come. After graduating from Harvard
Law School, Roberts clerked for then–associate justice William Rehnquist, whom he would ultimately replace as chief justice. He also held positions within both the Justice Department and the White House during the Reagan years and served as a deputy solicitor general from 1989 to 1993 while George H. W. Bush was president. President George W. Bush initially appointed him to the U.S. Court of Appeals for the District of Columbia Circuit in 2003, prior to nominating him to the Supreme Court (Roberts was first nominated to replace retiring associate justice Sandra Day O'Connor but became the nominee for chief justice following Rehnquist's death a few months later, before Roberts had been confirmed as an associate justice). For many Court observers, Roberts is viewed as similar to Rehnquist in his judicial philosophy—a conservative jurist who more often than not believes in judicial restraint, relies on precedent, and is protective of states' rights. Whether the Roberts Court is viewed as activist or restrained in its rulings and what impact the Court may have on public policies will in part be determined by future vacancies on the Court and whether they occur during the remaining years of the Bush presidency or during the term of the next occupant of the Oval Office. It is also too early to tell if Roberts, as chief justice, will have as large an impact on the federal government and its policies as some of his predecessors, such as Marshall, Taney, and Warren, but as with any chief justice, the potential is there for a long and significant tenure on the nation's highest court. Further Reading Abraham, Henry J. Justices, Presidents, and Senators: A History of U.S. Supreme Court Appointments from Washington to Clinton. Lanham, Md.: Rowman & Littlefield, 1999; Baum, Lawrence. The Supreme Court. 8th ed. Washington, D.C.: Congressional Quarterly Press, 2004; Flanders, Henry. The Lives and Times of the Chief Justices of the Supreme Court of the United States. Ann Arbor, Mich.: Scholarly Publishing Office, University of Michigan Library, 2005; O'Brien, David M. Storm Center: The Supreme Court in American Politics. 7th ed. New York: W.W. Norton, 2005; Official Web site of the Supreme Court of the United States. Available online URL: www.supremecourtus.gov. —Lori Cox Han
constitutional law The U.S. system of jurisprudence and the categorization of laws predicated on it are rooted in a fundamental distinction between constitutional law and ordinary law. Constitutional law comprises the overriding definitional politico-juridical principles that create, govern, and legitimize the various types of ordinary law, which are wholly subordinate to it. As such, laws exist according to a two-tier schema whose viability is confirmed by the prioritization of constitutional law over ordinary law. This bifurcated structure necessitates the adoption of or adherence to constitutional laws that create or affirm the institutions that, in turn, create or affirm relevant ordinary laws. In the United States, the distinction between constitutional law and ordinary law is arguably the most conspicuous and rigorous among industrialized nations, not least because of the implementation of a written constitution with unique procedural safeguards whose intent is to preserve the subordination of ordinary law to constitutional law. U.S. constitutional law encompasses the U.S. Constitution, its amendments, and the interpretive growth from more than 200 years of U.S. Supreme Court opinions. Ordinary law flows from this corpus of higher law and includes statutory law, common law, administrative and regulatory law, equity, and so on. Although the division of laws into constitutional and ordinary, or nonconstitutional, realms is common in today's industrialized countries and has acquired nominal legitimacy in even some authoritarian societies, it reflects a comparatively recent development in Anglo-American political and legal thought. The history of Western civilizations, though replete with examples of polities whose acceptance of the supremacy of law has become legendary, has largely been characterized by the absence not only of a distinction between constitutional and nonconstitutional law but also of any recognition of fundamental law per se. In fact, a verifiably modern concept of fundamental law did not enter the Anglo-American strain of political and legal discourse until the 17th century with the ascendancy of the notion of an ancient constitution. Even so, the idea that constitutional and nonconstitutional law were distinct did not gain significant exponents until the 1770s.
Prior to American experiments in constitution building, Western legal philosophy was inexorably influenced by, among others, Aristotle and Machiavelli. They had relegated the law to a secondary concern compared to other governing authorities. Neither thinker expanded the prevailing conception of a constitution beyond its equation with the properties and characteristics that are associated with the term polity. Certainly, neither philosopher conceptualized a constitution in a manner that could have accommodated the notion of fundamental law within it because neither man would have found the concept of fundamental law meaningful or relevant. The concept of fundamental law was part of a greatly expanded notion of constitutionalism that was embedded in 17th-century English interpretations of republicanism, so constitutionalism and, by extension, jurisprudence necessarily assume a more prominent role in the story of the development of republican discourse at that juncture. In 17th-century English republicanism, the law assumed a pivotal role not only as a vehicle for the authorized use of political power but also as a hermeneutic framework through which English republicans discussed, understood, and clarified the universal propositions that defined English political institutions. Despite the expansion of the law’s function through continual reinterpretations of republicanism to accommodate increasingly robust conceptions of the law, English jurisprudence was invariably linked to the Aristotelian belief that the law is a tool whose existence is determined by the functional dictates of the English polity. This relegation of the law to secondary status would persist until the American framers (of the Constitution) subverted Aristotelian notions of constitutionalism by embracing a positivist conception of the law. Contrary to the traditional dictates of Aristotelian jurisprudence, the framers’ positivist jurisprudence endowed the law with a power whose authority and legitimacy did not derive from a political entity but from a type of positive law, or fundamental law. The framers of the U.S. Constitution invested in it the power to authorize, legitimate, and stabilize a constitutionally ordered government that it created and to protect that government from subversive interests. The framers’ notion of a constitution-centered republic included a fundamentally restructured jurisprudence.
That jurisprudence was based on the conviction that law, as the expression of the will of the citizenry as sovereign, has an inherent value apart from its utility as a political expedient. According to the framers' constitutional vision, the law truly became king, especially in its capacity as the expression of the consensual will of the people acting in their constituent capacity. Because the old political discourse recognized only ordinary legislative activity or, as Bruce Ackerman has argued, a one-track system of politics, no special and separate mechanism existed, and none was needed, aside from the regular legislative apparatus to effectuate constitutional change. Under dominant contemporary English conceptions of constitutionalism, such a mechanism was unnecessary and would have been superfluous because a naturally defined sociopolitical structure determined its corresponding constitution. The framers' new discourse recognized a higher level of lawmaking and enshrined constitutional lawmaking by the citizenry in its role as a constituent assembly as a uniquely post-Lockean feature of U.S. constitutionalism. Thus, as Ackerman has indicated, a two-track system of politics was authorized by the Constitution, and such a system necessitated a prioritization of laws according to their level of operation in this two-track system. Since a legal mechanism was invoked to frame a government, a certain type of law became superior to the political structure that it created. The emergence of a new political and legal discourse in the late 1780s created a need and provided a context for the elevation of constitutional law above ordinary law, a move that previously would have been prohibited or, more accurately, illogical and impossible. As Alexander Hamilton underscores in Federalist 33, under the new discourse, a rejection of the supremacy of constitutional law would have been untenable. Thus, by the end of the 1780s, most political observers had realized that, as Hamilton contended, "the prior act of a superior" constitutional authority "ought to be preferred to the subsequent act of an inferior and subordinate" legislative authority. The implications of Hamilton's statement can be enhanced through legal scholar H. L. A. Hart's conception of legal positivism, which bears an uncanny
resemblance to the jurisprudence of the framers’ Constitution. One of the hallmarks of Hart’s positivism is the distinction between constitutional and ordinary—statutory and common—law, a distinction which is a consequence of the existence of what Hart labels a “rule of recognition,” which is a “rule for [the] conclusive identification of” all subordinate rules. Hart explains that a “rule of recognition, providing the criteria by which the validity of other rules of the system is assessed, is . . . an ultimate rule” since, in a system such as the one the framers created, one rule must, ipso facto, always be “supreme.” A rule “is supreme if rules identified by reference to it are still recognized as rules of the system, even if they conflict with rules identified by reference to [an]other criteri[on],” such as, for example, custom, precedent, or a state constitution. However, “rules identified by reference to the[se] latter [criteria] are not so recognized if they conflict with the rules identified by reference to the supreme criterion.” The Constitution was, indeed, such a supreme criterion since the Constitution was the framers’ rule of recognition. Hart’s conception of the law underscores the framers’ conviction that constitutional law was something posited by an authorized representative of the will of a sovereign citizenry. The framers’ reconceptualization of constitutional law as an independent and definite source of authority, legitimacy, and stability allowed them to erect a constitutionally ordered republic whose claims to authority, legitimacy, and stability rested on the will of a constituent assembly in its role as a constitutional legislator, not as an ordinary legislator. Of course, the procedural differences intended to separate constitutional from ordinary legislation, and the attendant substantive differences between constitutional and ordinary law have not been as readily identifiable with respect to their practical manifestations. The constitutional transformations and evolutionary adjustments especially during the past 70 years have frequently mitigated, if not undermined, these apparent differences. The structural needs of an industrialized society coupled with the political exigencies of partisan policy making have been frustrated by the limitations of a Constitution crafted in a world that knew neither industrialization nor professional politics.
From a purely theoretical perspective, the establishment of a constitutional republic according to a two-tier system of politics would ensure and preserve the fundamental tenets of that system by insulating it from the vagaries of history and the designs of partisan political actors seeking short-term solutions to long-term structural problems. Unfortunately, the pace of sociocultural and economic change in particular during the century or so following the election of President Andrew Jackson in 1828, and the resulting societal pressures exerted on a constitutional framework originally erected by a largely rural people, compelled adjustments and adaptations that did not always recognize the ostensibly clear division between constitutional and ordinary law. As both Bruce Ackerman and Howard Gillman have shown, many of the constitutional innovations during the Progressive era, and especially during the New Deal, not to mention the 70 years since, could hardly be described as having followed the constitutional rather than the ordinary legislative track as prescribed by the framers. The urgency generated by the depression of the 1930s and the social upheavals of the 1950s and 1960s was not responsive or accommodating to an amendment process that worked slowly and whose viability relied on the formation of supermajorities that did not exist. Neither the U.S. Supreme Court nor the broader nation whose Constitution the Court was charged with interpreting could afford to postpone the erection of a much-needed welfare state or a system of constitutional protections for minorities at critical periods over the past 70 years. So, the courts, Congress, and every president since Franklin D. Roosevelt have used constitutional expedients such as the commerce clause and the Fourteenth Amendment to tailor an 18th-century Constitution ratified by a mostly rural citizenry to the requirements of an industrialized, urban society whose political will had become too fickle and too demanding to embrace the measured, deliberate tempo of constitutional politics. The reality of the situation is that the U.S. republic may not have survived the shocks and dislocations of the 20th century had it maintained a strict allegiance to the theoretical separation between constitutional and ordinary law. The modern welfare state, protections for minorities
and disadvantaged groups, the expansion of military capabilities, and numerous other hallmarks of an industrialized America exist because the three branches of government, with the tacit approval of the voters, effected a fundamental reinterpretation and restructuring of the Constitution without invoking the constitutional procedures designed for such purposes. Specifically, the expansion of presidential powers, the Supreme Court's decisions on matters ostensibly not addressed by the Constitution, and the pressure on congressional legislators to secure short-term political solutions that breach constitutional barriers have all produced de facto constitutional law through ordinary or nonconstitutional processes. Through it all, the U.S. political system appears to have maintained at least a nominal fidelity to the rudimentary difference between constitutional and ordinary law. That fidelity has often enabled U.S. politicians, and particularly members of the Supreme Court, to appreciate and uphold the symbolic if not practical distinction between the two. Though the theoretical differences between constitutional and ordinary law are resolutely clear to adherents and students of those differences, the practical realities described above have prevented a complete or even sustainable separation of constitutional lawmaking from ordinary lawmaking and will continue to do so as the United States inevitably grows in complexity. See also opinions, U.S. Supreme Court. Further Reading Ackerman, Bruce A. We the People: Foundations. Cambridge, Mass.: Harvard University Press, 1991; Caenegem, R. C. van. An Historical Introduction to Western Constitutional Law. Cambridge, England: Cambridge University Press, 1995; Fuller, Lon L. The Morality of Law. New Haven, Conn.: Yale University Press, 1964; Gillman, Howard. The Constitution Besieged: The Rise and Demise of Lochner Era Police Powers Jurisprudence. Durham, N.C.: Duke University Press, 1993; Hart, H. L. A. The Concept of Law. Oxford, England: Oxford University Press, 1961; Kelly, J. M. A Short History of Western Legal Theory. Oxford, England: Oxford University Press, 1992. —Tomislav Han
contract law In general terms, contracts are viewed as promises that the law will enforce. The definition of a contract has not changed since the 18th century: a contract is an agreement voluntarily entered into by two or more parties in which a promise is made and something of value is given or pledged. In the United States, to obtain damages for breach of contract, the injured party may file a civil lawsuit, usually in a state court, or use a private arbitrator to decide the contract issues at hand. To be viewed as a legally binding contract, a promise must be exchanged for adequate consideration, which is a benefit received or a detriment incurred that reasonably and fairly induces a party to make the promise. In most cases, contracts are governed by common law and state statutory law. Most principles of the common law of contracts are found in the Restatement (Second) of Contracts, which is published by the American Law Institute. The body of statutory law that governs many categories of contracts can be found in the Uniform Commercial Code, whose original articles have been adopted in most states. Certain business sectors or other related activities can be highly regulated by state and federal law. Historically speaking, the importance of contract law as a constitutional issue dates back to the founding era in the United States. There is a strong constitutional relationship between civil liberties and economic liberties since both ask the same fundamental question: To what extent can government enact legislation that infringes on personal rights? In addition to other liberties, people have always held economic well-being as a high priority. As a result, even though private property is not mentioned in the U.S. Constitution, the U.S. Supreme Court has often heard constitutional challenges in which individuals claim that their personal economic liberties have been violated by government actions. The Court must determine how much power federal and state governments have over such specific economic issues as altering freely made contracts, seizing property, or restricting private employment agreements about wages and hours. These issues also involve the conflict between the interests of the individual and the common good. The framers of the Constitution believed that liberty was inseparable from the protection of private
property and that the states, not the federal government, posed the greatest threat. Under the Articles of Confederation, the founders believed that the states had crippled both the government and the economy. As a result, a matter of debate at the Constitutional Convention in 1787 focused on creating a national government strong enough to protect the economic interests of the ruling elite. The framers, who represented the propertied elites of the era, were concerned that the unpropertied masses might succeed in taking control of state legislatures and using their numerical strength to advance their own interests, such as doing away with debts or taxing various industries. From the start, U.S. society valued commercial activity to support its market-based economy. In most cases, the people have always expected their government to honor and enforce contracts, which is exactly how the framers viewed the issue as well. As a result, the framers drafted the contract clause, one of the most important constitutional provisions during the early years of the nation. Article I, Section 10 prohibits any state from passing any law “impairing the Obligation of Contracts.” As drafted in Philadelphia during the convention, the Constitution represented the economic interests of the delegates by strengthening the federal government’s control over economic activities. For the framers, the right to enter into contracts was an important freedom closely tied to the right of private property. The ownership of private property implied the right to buy, sell, divide, occupy, lease, and/or use it however the owner saw fit. The framers were greatly influenced by the writings of both John Locke and Sir William Blackstone on this issue. According to Locke in the Second Treatise of Government, property was considered an extension of liberty, and in Blackstone’s Commentaries on the Laws of England, property was an “absolute right, inherent in every Englishman.” In the early decades of the nation, the contract clause was one of the most litigated constitutional provisions, and members of the U.S. Supreme Court throughout the 19th and early 20th centuries became defenders of property rights and economic liberties. The contract clause was most fiercely defended during the era when John Marshall served as Chief Justice (1801–35) since the Supreme Court at that time
believed that the nation’s interests could be best served if the federal government rather than the states became the primary agent for economic policy making. In addition, the Marshall Court also provided a broader interpretation of the contract clause to include public contracts (those between government agencies and private individuals) and safeguarded what is known as “vested interest” in private property. The Doctrine of Vested Rights, as it evolved through early Supreme Court rulings on the contract clause, viewed it the obligation of the judicial branch to protect contract rights from legislative impairments and, according to legal scholar Edward S. Corwin, set out “with the assumption that the property right is fundamental” and “treats any law impairing vested rights, whatever its intention, as a bill of pains and penalties, and so, void.” Fletcher v. Peck (1810) became the first important ruling of the Marshall Court on the issue of the contract clause, and it was also the first case in which the Court ruled a state law to be unconstitutional. Known as the famous “Yazoo case,” the issue emerged from the sale of land in the Yazoo River territory (now a part of Mississippi) in 1795 by the Georgia state legislature. The legislature had awarded a land grant giving the territory to four companies, but many of the legislators had accepted various bribes prior to their decision. A scandal erupted as a result, and many incumbents were voted out of office the following year. The newly elected legislature then voided the law and declared all rights and claims to the property to be invalid. In 1800, John Peck acquired land that had been part of the original legislative grant and then sold the land to Robert Fletcher three years later, claiming that past sales of the land had been legitimate. Fletcher argued that since the original sale of the land had been declared invalid, Peck had no legal right to sell the land and thus committed a breach of contract. The question then arose as to whether or not a contract could be nullified by a state legislature. In its ruling, the Court unanimously determined that a state could not nullify a public contract. The Court held that since the land had been “passed into the hands of a purchaser for a valuable consideration” legally, the Georgia legislature could not take away the land or invalidate the contract. The Court also pointed out that the Constitution did not permit bills of attainder (the act of a legislature to
declare someone guilty of a crime and/or provide punishment without benefit of a trial) or ex post facto laws (laws that retroactively seek to change the consequences of acts already committed). Another important ruling from the Supreme Court involving the contract clause came in 1819 in Dartmouth College v. Woodward. Dartmouth College's charter had been granted by King George III in 1769. In 1815, the New Hampshire legislature attempted to invalidate the charter to convert the school from a private to a public institution. The Dartmouth College trustees objected and sought to have the actions of the legislature declared unconstitutional. In the majority opinion written by Chief Justice Marshall, the Court ruled in favor of the college and invalidated the actions of the state legislature, protecting contracts against state encroachments. The Marshall Court during this era also declared that in the absence of federal action, state bankruptcy laws were constitutional and that a corporation's charter was to be considered a contract and therefore free from state intervention. Rulings during this time also showed the complexity in the relationship between "vested rights" and "community interests." In a mature democratic society, sometimes the protection of a vested right can harm the community interests, an issue that would be considered in later cases before the Court. When John Marshall died, President Andrew Jackson appointed Roger B. Taney chief justice, and a major shift in the constitutional interpretation of the contract clause and many other constitutional provisions began. The Taney Court (1836–64) took a more balanced view of the contract clause, which allowed states some increased freedom to exercise their police powers. In Charles River Bridge v. Warren Bridge (1837), the Court's ruling showed that the right of contract is never an absolute guarantee and that the community at large also has rights that must be represented. The case stemmed from the building of two bridges, both contracted through the Massachusetts state legislature. The first was built in 1785 by the Charles River Bridge Company to connect Boston to Charlestown. The second, the Warren Bridge, was built 40 years later in close proximity to the first and caused the proprietors of the first bridge to sue for breach of contract. The Court ruled in favor of the proprietors of the second
bridge, balancing the rights of property against the rights reserved to states. In the opinion written by Taney, the Chief Justice stated, "While the rights of private property are sacredly guarded, we must not forget that the community also have rights, and that the happiness and well being of every citizen depends on their faithful preservation." During the post–Civil War period, the Supreme Court moved further away from protecting contract clause claims. In Stone v. Mississippi (1880), the Court ruled that states have police powers that allow them to regulate the health, safety, morals, and general welfare of their citizens. The Court upheld the right of a state to cancel a previously approved contract for a state lottery, stating that a legislature, by means of contract, cannot bargain away the state's police powers if a new legislature then wants a change in policy. The contract clause found almost no protection during the Depression/New Deal era of the 1930s, with the Court's interpretation suggesting that the Constitution must bend in the face of national crisis. In Home Building & Loan Association v. Blaisdell (1934), the Court upheld as constitutional state intervention protecting homeowners against banks foreclosing on their homes for late mortgage payments. The contract clause saw a small resurgence in the 1970s with a more conservative-leaning Supreme Court, but challenging state laws on these grounds remains a difficult task. Further Reading Epstein, Lee, and Thomas G. Walker. Constitutional Law for a Changing America: Institutional Powers and Constraints. 5th ed. Washington, D.C.: Congressional Quarterly Press, 2004; Fisher, Louis. American Constitutional Law. 5th ed. Durham, N.C.: Carolina Academic Press, 2003; O'Brien, David M. Constitutional Law and Politics. Vol. 1, Struggles for Power and Governmental Accountability. 5th ed. New York: W.W. Norton, 2003; Stephens, Otis H., Jr., and John M. Scheb II. American Constitutional Law. 3rd ed. Belmont, Calif.: Thomson, 2003. —Lori Cox Han
district courts U.S. District Courts are the primary courts of original jurisdiction in the federal court system. They
owe their existence to Article III of the U.S. Constitution: "Section 1. The judicial power of the United States shall be vested in one U.S. Supreme Court, and in such inferior Courts, as the Congress may from time to time ordain and establish." Article III was debated in the Constitutional Convention of 1787 as well as immediately thereafter during the ratification process in the states. It was again debated in the First Congress when the legislative branch set about to build the structure of the judiciary and establish the limits of its jurisdiction or power to hear cases and resolve disputes. The opponents of ratification, called antifederalists, focused their criticism of Article III on the implied power of the judiciary to overrule both the executive and the legislative branches. They saw both a threat to preexisting state judicial systems and dangers to individual rights. The proponents of the Constitution, Alexander Hamilton in particular, emphasized the need for a strong national judiciary that would be empowered to review state and federal laws and make certain that they were in conformity with the Constitution. He argued that the judiciary lacked both "sword and purse," namely the power of the executive to enforce orders and of the legislature to finance them, and, therefore, was the weakest branch of the proposed government. The opponents sought to maintain the separate state jurisdictions as the inferior court system, with the only federal or national court being the Supreme Court. All matters would, under their proposals, originate in state court and be ruled on in an appeal to the Supreme Court as the court of last resort. Hamilton and others promoted a completely independent national court system with jurisdiction separate from the state courts. As the Constitution was presented to the states for ratification, opponents focused on the lack of an itemization of rights and the potential dangers of Article III. One scholar determined that 19 of the 123 amendments proposed by the state ratifying conventions and 48 of 173 amendments proposed in the first session of Congress sought significant changes in Article III. Both during the constitutional ratification debate and in the First Congress, the antifederalists used the peril to individual rights as the focus of their criticism of Article III of the Constitution. Because of Article III's language, the business of the First Congress included the creation of the federal court system, the so-called
inferior courts, and the drafting and passage of the Bill of Rights. Congress drew from colonial experience and the writings of the times to form the system. The majority in the First Congress rejected the adoption by the national government of the existing states’ judiciaries. The idea of a separate district-court system, a product of legislative compromise, appeared earlier in Hamilton’s Federalist 81. “The power of constituting inferior courts is evidently calculated to obviate the necessity of having recourse to the Supreme Court in every case of federal cognizance. It is intended to
enable the national government to initiate or authorize, in each state or district of the United States, a tribunal competent to the determination of matters of national jurisdiction within its limits. . . . I am not sure, but that it will be found highly expedient and useful, to divide the United States into four or five or half a dozen districts, and to institute a federal court in each district, in lieu of one in every state. The judges of these courts, with the aid of the state judges, may hold circuits for the trial of causes in the several parts of the respective districts. Justice through them may be administered with ease and despatch; and appeals may be
safely circumscribed within a narrow compass. This plan appears to me at present the most eligible of any that could be adopted; and in order to it, it is necessary that the power of constituting inferior courts should exist in the full extent in which it is to be found in the proposed Constitution.” Opponents once again pressed for changes that included guaranteeing a jury trial in both civil and criminal cases, restricting federal appellate jurisdiction to questions of law and thereby preventing them from reversing the jury verdict, eliminating or seriously limiting congressional authority to establish lower federal courts, and eliminating the authorization for federal diversity jurisdiction. The debate also included the jurisdiction of the district courts. Originally, all agreed, based on Revolutionary War experience, that the courts must have admiralty jurisdiction. Ships seized during the war were prizes over which the states fought. The earliest forms of circuit courts, independent of the states, were those that resolved these and other admiralty issues. The issues associated with the power or jurisdiction of the district courts went through much of the same process. The Constitution gave the federal courts jurisdiction over cases involving federal questions, diversity of citizenship generally, and specifically over matters involving ambassadors, admiralty, and controversies where the federal government was a party. Hamilton and, in the beginning, James Madison believed that a bill of rights was unnecessary, asserting that it was implicit in the Constitution and that every state constitution had such a bill already. Ratification was achieved, in part, on the representation that the final version of the Constitution would be amended in the First Congress with the addition of a bill of rights. Madison drafted and worked to achieve its adoption in the First Congress. While struggling over the Bill of Rights, the same First Congress took up the issue of the formation of the judiciary. It was the Senate that addressed that issue first. The Senate formed a committee, which included Senator Oliver Ellsworth of Connecticut and Senator William Paterson of New Jersey, to draft Senate Bill No. 1, the bill creating the federal judiciary. George Washington, as president, signed the Judiciary Act of 1789 into law and submitted the names of his nominees for federal-court justices on the same day, September 24, 1789.
The Judiciary Act of 1789 created a system of lower federal courts designed to exist side by side with the preexisting state-court system. In the Senate debate and elsewhere, the fear was that the new independent federal judiciary might threaten state courts and restrict certain civil liberties. In response, Congress created a three-tier judiciary based on a compromise that reflected state interests. The geographic boundaries of federal districts and circuits followed state boundaries, and the new district judges were required to be residents of their districts. It allowed state courts to exercise concurrent jurisdiction over many federal questions. It also required the federal courts to select juries according to the procedures used by the district's state courts and guaranteed the right to trial in the district where the defendant resided. The practice of senatorial courtesy, not specified in the Constitution or by statute, linked the political and legal ties of state and judiciary as well. Finally, it imposed a high monetary threshold for causes taken on appeal to the circuit courts, thereby protecting small debtors and those who could not afford to travel. The act provided for two types of trial courts: district courts and circuit courts, the latter having a limited appellate jurisdiction. It also made specific provisions for the Supreme Court, defined federal jurisdiction, and authorized the president to appoint marshals, U.S. attorneys, and a U.S. attorney general. It created 13 district courts, one for each of the 11 states that had ratified the Constitution, plus separate districts for Kentucky and Maine (then parts of other states); these courts heard admiralty cases, forfeitures, penalties, petty crimes, and minor cases brought by the United States. Congress authorized a district judge for each district but varied the salaries based on the length of the coastline. For example, Delaware's judge received a salary of $800 per year, while South Carolina's judge, with a longer coastline to cover, received $1,800. The districts were included in three circuits: Eastern, Middle, and Southern, based on the military administrative divisions used in the Revolutionary War. District courts were held to four sessions per year, and circuit courts were to sit twice a year in designated cities in each district. For each circuit court's session, there were to be three judges: two Supreme Court justices and a district-court judge. Over the years, the Supreme Court justices felt that the travel associated with circuit-court duties was too burdensome. Likewise, as more states joined
the Union, more circuits and districts were created, making the tasks even more difficult. The number of circuits reached its 19th-century high in 1855 when Congress created the 10th circuit, the California circuit. Congress had been enlarging the Supreme Court to accommodate the needs of the circuits; the Court reached its maximum size of ten justices in 1863, and its membership was fixed at nine by the Judiciary Act of 1869. Since then, there have been nine Supreme Court justices, and the old circuit courts were eventually replaced by the U.S. Courts of Appeals. There are 94 federal judicial districts, including at least one district for each state, the District of Columbia, and Puerto Rico. Three territories of the United States, the Virgin Islands, Guam, and the Northern Mariana Islands, have district courts that hear federal cases, including bankruptcy cases. Bankruptcy courts are separate units of the district courts. Federal courts have exclusive jurisdiction over bankruptcy cases, and no bankruptcy case may be filed or heard in state courts. There are two special trial courts that have national jurisdiction in certain types of cases: the Court of International Trade and the U.S. Court of Federal Claims. The former hears cases involving disputed international trade and customs issues. The latter hears cases involving monetary claims against the U.S. government. These include contract claims, condemnation claims, and a variety of other issues. District judges usually concentrate on managing their court's caseload, conducting trials and hearings, and writing decisions involving both civil and criminal cases. Over the years, their workload has been alleviated by the creation and expansion of the roles of both magistrate and bankruptcy judges. There is also a Federal Circuit, whose court of appeals is based in Washington, D.C., and hears certain types of cases from all over the country. Each district has from two to 28 judges. Their jurisdiction includes claims made under federal law, civil claims between citizens of different states if the amount in controversy exceeds $75,000, and, generally, whatever else Congress deems appropriate. On occasion, Congress has created law and assigned its enforcement to the federal courts, such as when it established protections against civil rights violations. Trials are conducted both with and without empaneled juries. Over the years, Congress has made many changes in the structure and personnel of the district courts. It has added to the jurisdiction and attempted to reduce it. Throughout
it all, the three-tier system, with the district courts at its base, has remained intact. Over the years, Congress has also expanded the role of the U.S. magistrate judges to permit them to conduct preliminary hearings, set bail, assist district judges in complex cases, and try some cases by agreement of counsel. Likewise, bankruptcy judges now preside over cases involving bankruptcy. U.S. district court judges review the decisions of magistrate and bankruptcy judges on appeal by an aggrieved party. Attorneys seeking to practice in the U.S. district courts must be admitted to practice in a state, be nominated by a member of the bar, and maintain an office in the district. Admission pro hac vice (for a particular case or occasion) is available on application. Further Reading Baum, Lawrence. The Supreme Court. 8th ed. Washington, D.C.: Congressional Quarterly Press, 2004; Elliot, Jonathan. Debates on the Adoption of the Federal Constitution in the Convention Held in Philadelphia in 1787 with the Diary of the Debates of the Congress of the Confederation as Reported by James Madison. Vol. 5. New York: Burt Franklin, 1966; Friedman, Lawrence M. A History of American Law. 2nd ed. New York: Simon and Schuster, 1985; Hamilton, Alexander, et al. The Federalist Papers. Edited by Clinton Rossiter. New York: Penguin Books, 1961; Goebel, Julius, Jr., ed. The Oliver Wendell Holmes Devise History of the Supreme Court of the United States. Vol. 1, Antecedents and Beginnings to 1801. New York: Macmillan, 1971; Haskins, George L., and Herbert A. Johnson, eds. The Oliver Wendell Holmes Devise History of the Supreme Court of the United States. Vol. 2, Foundations of Power: John Marshall, 1801–15. New York: Macmillan, 1981; Madison, James. Notes of Debates in the Federal Convention of 1787. Athens: Ohio University Press, 1966; Schwartz, Bernard. A History of the Supreme Court. New York: Oxford University Press, 1993; Warren, Charles. The Supreme Court in United States History. Vol. 1, 1789–1835. Boston: Little, Brown, 1926; U.S. Courts. Available online. URL: http://www.uscourts.gov (the official Web site of the federal judiciary); Federal Judicial Center. Available online. URL: http://www.fjc.gov (the research and education center of the federal judicial system established by Congress in 1968); U.S. Constitution. —Roger J. Cusick
federal judges The first judges appointed to serve in the federal government were those nominated by President George Washington on September 24, 1789, the same day he signed into law the Judiciary Act of 1789. Article III of the U.S. Constitution created the federal judiciary and authorized Congress to establish a system of inferior courts. Once the Constitution, with Article III, was ratified, the debate concerning the existence and jurisdiction of a federal court system of inferior courts shifted to Congress. With the passage and enactment of the Judiciary Act of 1789, Congress established the system of district and circuit courts that were independent of the state court systems. Judges were then appointed to each district and circuit in accordance with the language of Article III, Section 1 of the Constitution (Article III judges): "The Judges, both of the supreme and inferior courts, shall hold their offices during good behavior, and shall, at stated times, receive for their services, a compensation, which shall not be diminished during their continuance in office." During the Constitutional Convention, the framers had debated the positions of judges in the national government as well as the judiciary generally. The language of Article III that was settled on was sufficiently vague to generate debate prior to ratification. The proponents of the Constitution, known as the Federalists, addressed the federal judiciary in an effort to allay any fears that developed during the colonial period. The colonial experience at the hands of the British judiciary had been oppressive, and the experience under the Articles of Confederation after the Revolutionary War had also been poor. Alexander Hamilton, as a proponent, had argued in Federalist 78 through 81 that the judiciary must be neutral and independent. In comparing the judiciary to the legislature and executive, Hamilton argued that it was no threat to the liberties of citizens of the republic. The judges identified by Article III would constitute the weakest branch of government, despite the ability to interpret law and reconcile conflicts between several statutes and between a statute and the Constitution. The power of the judiciary was self-limiting, according to Hamilton, as it lacked both the powers of the executive and those of the legislature: ". . . The judiciary, from the nature of its functions, will always be the least dangerous to the political rights of the Constitution; because it will be least in a capacity to
annoy or injure them." Compared to those other branches, "The judiciary, on the contrary, has no influence over either the sword or the purse; no direction either of the strength or of the wealth of the society, and can take no active resolution whatever. It may truly be said to have neither force nor will but merely judgment. . . . It proves incontestably that the judiciary is beyond comparison the weakest of the three departments of power; . . ." Hamilton also argued for life tenure or service during good behavior rather than any limited term in office for the judges. Life tenure provided both independence and the ability to attract the most talented practitioners. An independent judiciary must not fear dismissal by a disappointed executive or a disapproving public or its legislature. The federal judiciary must also be secure in its salary and be protected from reductions during service for the same reasons. Hamilton effectively argued in favor of strengthening the judiciary by showing its weaknesses. Likewise, he emphasized the need for independence and freedom from the constrictions that the legislature might impose upon the judges' salaries. By fixing the compensation of the judges, the Constitution protected independence: "In the general course of human nature, a power over a man's subsistence amounts to a power over his will. And we can never hope to see realized in practice the complete separation of the judicial from the legislative power, in any system which leaves the former dependent for pecuniary resources on the occasional grants of the latter." After ratification, Article III issues and the federal judiciary came before the First Congress for a similar debate. In the Judiciary Act of 1789, Congress provided the detailed organization of the federal judiciary and of Article III judgeships that the Constitution had sketched only in general terms. The debate was between those who wanted the federal courts to exercise the full jurisdiction allowed under the Constitution and those who opposed any lower federal courts or proposed restricting them to admiralty jurisdiction. The act, a compromise, established a three-tier judiciary, separate and distinct from the state systems. The first U.S. Supreme Court consisted of a Chief Justice and five Associate Justices. In each state and in Kentucky and Maine (then parts of other states), a federal judge presided over a U.S. district court,
which heard admiralty and maritime cases and some other minor cases. The middle tier of the judiciary consisted of U.S. circuit courts, which served as the principal trial courts in the federal system and exercised limited appellate jurisdiction. These were presided over by designated Supreme Court justices and a local district-court judge. The Judiciary Act of 1789 established the basic multitier court structure operating alongside state courts that exists today. Article III judges include the justices of the Supreme Court, the judges of the U.S. courts of appeals, and the U.S. district court judges. They are nominated by the president, confirmed by the Senate, and serve for life on the condition of good behavior. They can only be removed from office through impeachment. This requires a vote of impeachment by the House of Representatives for treason, bribery, or other high crimes and misdemeanors, followed by a trial before the Senate; a two-thirds vote of the Senate is required for conviction and removal. (The chief justice presides over a Senate impeachment trial only when the president is being tried.) There have been 13 impeachments resulting in seven convictions, four acquittals, and two resignations since the founding era. Federal judges' salaries cannot be reduced during their term of service. There are no designated age or other constitutional conditions for service as a federal judge. The youngest federal judge was Thomas Jefferson Boynton, who was 25 years of age when President Abraham Lincoln issued him a recess appointment to the U.S. district court for the southern district of Florida on October 19, 1863. The youngest judge appointed to a U.S. Court of Appeals was William Howard Taft, who was 34 years old when he was appointed on March 17, 1892. Joseph Story was the youngest Justice of the Supreme Court; he was 32 when appointed as an Associate Justice on November 18, 1811. The longest-serving judge was Joseph Woodrough, who remained on the bench until the age of 104 as a senior judge on the U.S. Court of Appeals for the eighth circuit. A judge who has reached the age of 65 (or has become disabled) may retire or elect to go on senior status and continue working. The original district courts were each assigned one judge. With the growth in population and litigation, Congress has periodically added judgeships to the districts, bringing the current total to 667, and each district has between two and 28 judges. Congress created the position of commissioner in the 1790s. Commissioners served the federal judges
by assisting in ministerial tasks and preliminary matters, as assigned to them by each district judge. They were paid a fee, and the method of selection varied from court to court. By the 1960s, members of Congress and the Judicial Conference of the United States saw a need to create a uniform system of selection, to standardize salaries, and to relieve the congestion of the federal-court dockets by expanding the judicial responsibilities of the commissioners. Congress established the position of magistrate in the Federal Magistrates Act of 1968. The act created the new title and authorized magistrates to conduct misdemeanor trials with the consent of the defendants, to serve as special masters in civil actions, and to assist district judges in pretrial and discovery proceedings as well as in applications for posttrial relief. It also authorized the district judges to assign additional duties that were not inconsistent with the Constitution and preexisting laws. Full-time magistrates are appointed under this act by district judges and serve for eight-year, renewable terms. There are also part-time positions with four-year, renewable terms. In 1976, Congress expanded their authority by granting them the power to conduct habeas corpus hearings. The Federal Magistrates Act of 1979 gave them the power to conduct civil trials and misdemeanor trials on the condition that the parties and defendants, respectively, consent. The Judicial Improvements Act of 1990 formally named them magistrate judges. There are now 466 full-time and 60 part-time magistrate judges. Bankruptcy judges began as bankruptcy referees under the Bankruptcy Act of 1898. They were appointed by district judges to oversee the bankruptcy cases in district courts. In time, Congress gradually increased their duties. The title was changed to bankruptcy judge in 1973. The Bankruptcy Reform Act of 1978 established bankruptcy courts in each judicial district with bankruptcy judges appointed by the president and confirmed by the Senate for a term of 14 years. In 1982, the Supreme Court decided that it was unconstitutional for Congress to grant bankruptcy jurisdiction to independent courts that were composed of judges who did not have the protections of Article III. It postponed its application to allow Congress an opportunity to restructure the bankruptcy courts. The Bankruptcy Amendments and Federal Judgeship Act of 1984 made the courts of appeals responsible for the appointment of bankruptcy judges and declared that the bankruptcy
judges shall serve as judicial officers of the U.S. district courts, which are established under Article III. To deal with the Supreme Court decision, Congress explicitly reserved for the district courts certain jurisdiction over bankruptcy proceedings. In the years since 1978, there have been several efforts to extend to bankruptcy judges the protections of life tenure and immunity from reduction in salary provided by Article III of the Constitution. None have been successful, and the bankruptcy judges continue to serve without the protections of Article III of the U.S. Constitution. Further Reading Elliot, Jonathan. Debates on the Adoption of the Federal Constitution in the Convention Held in Philadelphia in 1787 with the Diary of the Debates of the Congress of the Confederation as Reported by James Madison. Vol. 5. New York: Burt Franklin, 1966; Friedman, Lawrence M. A History of American Law. 2nd ed. New York: Simon and Schuster, 1985; Hamilton, Alexander, et al. The Federalist Papers. Edited by Clinton Rossiter. New York: Penguin Books, 1961; Goebel, Julius, Jr., ed. The Oliver Wendell Holmes Devise History of the Supreme Court of the United States. Vol. 1, Antecedents and Beginnings to 1801. New York: Macmillan, 1971; Haskins, George L., and Herbert A. Johnson, eds. The Oliver Wendell Holmes Devise History of the Supreme Court of the United States. Vol. 2, Foundations of Power: John Marshall, 1801–15. New York: Macmillan, 1981; Madison, James. Notes of Debates in the Federal Convention of 1787. Athens: Ohio University Press, 1966; Schwartz, Bernard. A History of the Supreme Court. New York: Oxford University Press, 1993; Warren, Charles. The Supreme Court in United States History. Vol. 1, 1789–1835. Boston: Little, Brown, 1926; U.S. Courts. Available online. URL: http://www.uscourts.gov (the official Web site of the federal judiciary); Federal Judicial Center. Available online. URL: http://www.fjc.gov (the research and education center of the federal judicial system established by Congress in 1968); U.S. Constitution. —Roger J. Cusick
Foreign Intelligence Surveillance Act Court In the aftermath of the national intelligence abuses that occurred during the Richard Nixon
administration in the late 1960s and early 1970s, Congress attempted to establish a means of oversight and a mechanism to limit the political and domestic uses and abuses of the U.S. intelligence services as well as federal law-enforcement agencies by passing the Foreign Intelligence Surveillance Act (FISA) in 1978. Ironically, FISA and the court it set up have had exactly the opposite impact. FISA created the Foreign Intelligence Surveillance Court for the purpose of removing sensitive and potentially controversial national-security intelligence-gathering decisions from the hands of politicians. Specifically, the act lays out procedures for physical and electronic surveillance and for the collection of "foreign intelligence information" between or among "foreign powers." After the many abuses in this area from the 1960s and 1970s, Congress felt that there was too great a temptation to use the intelligence-gathering agencies of the government for personal, political, or international purposes that would be inappropriate. Thus, Congress brought the courts into the process in an effort to guarantee that when politicians wanted to engage in what might be questionable or even illegal activities, there was a judicial check and balance embedded in the process. The court was to oversee requests by federal agencies for surveillance warrants directed against suspected foreign-intelligence agents operating inside the United States. After the terrorist attacks against the United States on September 11, 2001, the act's provisions were strengthened and extended to cover a wider range of activities, specifically including antiterrorist efforts among the covered categories. Yet at a time when the George W. Bush administration could have had even greater flexibility built into the FISA process, it declined even to attempt to convince a compliant and willing Congress to change the law governing these activities. A secret court was thus established in 1978 that allowed the government to circumvent the existing constitutional protections that guaranteed rights based on the First, Fourth, Fifth, and Sixth Amendments to the U.S. Constitution. Under FISA procedures, all hearings are conducted in secret, and results are not open to public scrutiny or serious congressional oversight. While the intent of this legislation was to keep abuses under control, ironically, it opened the
door for a series of later abuses in the wake of the tragedy of September 11, 2001. The court meets in a sealed room atop the Justice Department building in Washington, D.C. All court proceedings are strictly secret. Today, this court issues more surveillance and physical-search orders than the entire federal judiciary combined. It has become a rubber stamp for virtually any and all requests for surveillance, thereby undermining the legislative intent of the act that created it. Congressional oversight of the FISA court is virtually nonexistent. The only information that FISA requires the court to present to Congress is the number of surveillance orders that the court approves each year. The court as originally constituted consisted of seven federal judges chosen from the federal district courts by the chief justice of the United States; the USA PATRIOT Act of 2001 expanded its membership to 11 judges. Each member serves a nonrenewable term of up to seven years, and the terms are staggered so that the court's membership turns over gradually. If the FISA court refuses to approve a surveillance request, there is an appeals process to a separate review panel and ultimately to the U.S. Supreme Court. Secrecy and nonreviewability have come to characterize this court's activities, and in an age of terrorism, the court has become the government's vehicle to wiretap and to conduct searches and surveillance without a traditional criminal warrant. This has led to a series of concerns regarding the erosion of constitutional rights and guarantees as the court has extended its reach into criminal cases and other areas. Given the lack of oversight, this has raised legal as well as democratic concerns. The FISA court is virtually a rubber stamp for the executive. Of the several thousand applications for wiretaps, only a handful have been rejected by the court. If the FISA court is so compliant, why did the Bush administration violate the law (according to most legal experts) and bypass the FISA court in approving a plan for domestic wiretapping and surveillance? The administration claimed that the FISA requirements were too cumbersome and that speed and secrecy were of the utmost importance in efforts to stop terrorists. However, the FISA law allows the administration to wiretap without a warrant for 72 hours (while the paperwork is completed), so the
turnaround time in the court’s decision is relatively brief. When the Bush administration’s domestic surveillance and wiretapping program was exposed in newspaper articles, a public outcry ensued, including criticisms from both Democrats and even some Republicans in Congress. The administration insisted that the program was necessary and limited and served to protect the security of the nation. They also asserted that by writing about the program, the newspapers were endangering national security by alerting the terrorists that they were being wiretapped, and yet, it is hard to imagine that the terrorists did not know they were likely being wiretapped, so it is hard to see that telling the terrorists what they already knew endangered national security. When the stories broke in December 2005 by the New York Times and about the domestic wiretap program and in mid-2006 by USA Today that the National Security Agency (NSA) was collecting information on millions of private domestic calls, the administration was compelled to go public with its case for the wiretap program. General Michael Hayden, the head of the NSA, who was in 2006 appointed head of the Central Intelligence Agency by President Bush, defended the program, arguing that the FISA procedures were too burdensome and cumbersome and interfered with efforts to go after the terrorists. The administration sent its top legal experts out to defend the program, but a series of challenges placed the issue in the lap of Congress. There was pressure to hold public hearings and change the law, but the administration put a “full court press” on the Republicancontrolled leadership of Congress, and no hearings were held. In the end, the Republicans in Congress announced a deal wherein the administration would not hold hearings on the program but where the White House would expand the scope of information to seven, instead of two, members of Congress. It was a political victory for the administration but was seen by many as a defeat for civil libertarians and ruleof-law advocates. The primary concern of the domestic wiretap program is that U.S. citizens are being wiretapped against the law and that the administration is claiming powers that would place it above the law. The president’s claims of authority and of “necessity” here clash with the Constitution and the law. It is one of the
ongoing dilemmas of the age of terrorism. Just how many of the constitutional rights that are guaranteed to U.S. citizens can be curtailed in our efforts to combat terrorism? What is the proper balance between protecting liberties and rights on the one hand and efforts to thwart terrorism on the other? There are no easy answers to these questions. Can a national antiterrorist state coexist with a constitutional democracy? While there will always be tensions between rights and the needs of the state to protect citizens from terrorist threats, and while some tradeoffs may be necessary, it is clear that a court with such great power and so little oversight violates virtually every precept of constitutional government and citizen rights. That the FISA court has grown in importance without a public debate over its need or its merits does not allow us to escape the fact that for such a court to exist within a constitutional democracy raises troubling questions and serious concerns. Further Reading Genovese, Michael A. The Power of the American Presidency, 1787–2000. New York: Oxford University Press, 2001; ———, and Robert J. Spitzer. The Presidency and the Constitution: Cases and Controversies. New York: Palgrave Macmillan, 2005; Yoo, John. The Powers of War and Peace: The Constitution and Foreign Affairs After 9/11. Chicago: University of Chicago Press, 2005. —Michael A. Genovese
judicial branch The judicial branch is the third and final branch of government enumerated in the U.S. Constitution. However, we should not assume that its location in the text necessarily implies any inferiority between or among any other branch or branches constituted by the framers. In time, the federal judiciary has been an extremely important driver of constitutional change, but at other periods in U.S. history, it has taken a relative back seat to the legislative branch and the executive branch. It has been lauded, as it was during the height of the Civil Rights Movement, for landmark cases such as Brown v. Board of Education in 1954 (overturning the doctrine of “separate but equal” racial legislation) and Baker v. Carr in 1962 (establishing the principle of “one man, one vote),
and it also has been derided, as it was before the Civil War, in cases such as Dred Scott v. Sanford in 1856 (denying African Americans citizenship) and, during the beginning of the 20th century, in cases such as Lochner v. New York in 1905 (overturning a maximumhours law for bakers). Alexander Hamilton characterized the federal judiciary as “the least dangerous branch;” political scientist Robert Dahl saw the Supreme Court as an institution that, while sometimes out of touch with national political majorities, nevertheless did reflect majoritarian sentiment over time. Contrarily, legal scholar Alexander Bickel called the Court a “countermajoritrian” institution whose function, whether good or bad, is potentially undemocratic in the sense that it sometimes acts to thwart majority will. The judicial branch, then, is both a disputed product of its constitutional design as well as the larger political contexts in which its constitutional provisions are implemented. Most important, though, the judicial branch, like its counterparts—the legislative and executive branches—is dependent on the other branches to exercise their judicial powers. Article III of the U.S. Constitution constitutes national judicial power. Even before ratification, though, the concept of judicial power had had several inherent powers that in many ways still function to define it today. First, judicial power implies the ability to interpret standing law, which includes constitutional provisions, national and state legislation, and court precedent in its case law or common-law forms. Second, judicial power assumes a necessary element of independence in the judiciary’s actions and decisions. This independence can be seen in a few ways. Aside from the fact that the judicial branch is constituted separately from the other two branches of government, which makes it capable of exercising checks and balances against the other two branches, Section I further empowers the judiciary by insulating federal judges by providing them with lifetime appointments “during good behavior” and providing salaries for their services which cannot be diminished while they hold their appointments. The lifetime appointments of all federal judges, especially U.S. Supreme Court justices, is perhaps the most important element of judicial independence and of the judicial power overall. The idea of lifetime appointments is representative of the founder’s design not
only to limit national power by separating the three branches of government but also to design each branch's duration of appointment as an incentive to maintain fidelity to the office in a representative democracy. The president's four-year term and mode of election through the electoral college are designed both to produce a president who is representative of the will of a majority of the people in their capacity as state citizens and to provide the president with enough time (at least four years) to carry out his or her goals. In the legislative branch, the two-year term for members of the House of Representatives is designed to maintain a close and dependent relationship between representative and constituency, and the Senate's six-year term is designed to produce senators who are less dependent on the immediate will of the people but still ultimately responsible to their constituencies at election time. Thus, we can see that the framers had a conception of the relationship between "time" and representation: the more dependent on the people, the shorter the time between elections. Lifetime appointments of members of the judiciary, then, while still conforming to democracy and republicanism in that judges are nominated by the president and approved by the Senate, are explicitly designed to provide them with a permanent degree of separation from the temporal will of the people whom they serve. While this reality might seem counterintuitive and even undemocratic, it is consistent with a conception of constitutionalism that limits government while still providing it with the requisite power to carry out the will of the people. "Judging" is thus normatively conceived to be a function of government that requires a large degree of separation and independence from popular whim and opinion to produce legal reason. In reality, though, lifetime appointments can create disjunctions within political coalitions over time. For example, a majority of the so-called "Lochner era" Supreme Court received their lifetime appointments during the late 19th and early 20th centuries, when the United States was experiencing unprecedented economic and industrial growth. Not surprisingly, these judges, who were appointed by presidents and confirmed by senators who shared a commitment to foster this economic growth, for the most part shared these coalitional aspirations. However, these justices
continued to remain on the Court after the economic collapse of the Great Depression, which ushered in an entirely new political coalition led by President Franklin Delano Roosevelt. Roosevelt's "Court Packing Plan" of 1937 shows how a wide ideological divide between the Court and the other branches can have a salient effect on U.S. politics more broadly. The Court of 1937 had for many years been hostile to state and federal legislation designed to rein in the growing industrial economy of the early 20th century and to increase regulation of the economy in response to the stock market crash of 1929. Faced with a hostile Court, Roosevelt proposed legislation that would have allowed him to appoint an additional justice for every sitting justice over the age of 70 who declined to retire, effectively increasing the Court's size. In many ways, though, the Court backed down, and between 1937 and 1995 the U.S. Supreme Court did not strike down a single piece of national legislation regulating the economy as beyond Congress's commerce power. A third inherent function of judicial power is the ability to hear "cases and controversies" consistent with judicially created modes of legal reasoning and methods within the jurisdiction of the federal courts. However, the meaning of cases and controversies, as well as the conception of jurisdiction more generally, has been interpreted differently over time. Traditionally, cases are justiciable when they possess one or more of the following characteristics. The first is adverseness, or evidence that the parties really exist and really disagree about a point of law. This requirement is also reflected in the Court's time-honored prohibition of "advisory opinions," like those sought from the justices by Alexander Hamilton and Thomas Jefferson during George Washington's presidency and declined by the Court. Those bringing cases must also have standing, which is a showing that the individual bringing the case to court has actually suffered a "loss" that is real. Traditionally, this has meant physical or monetary loss, but the last few decades have produced a more liberal conception of standing in the form of class-action lawsuits and lawsuits brought in the name of the "public interest." The doctrines of "ripeness" and "mootness" also govern cases. Ripeness is achieved only if the harm is realized at the time of the case, and cases are moot (and hence nonjusticiable) if the injury has been thwarted (for example, affirmative-action cases involving petitioners who claim they were denied admission but have since matriculated).
Finally, cases cannot be decided if they contain "political questions" that, in the judgment of the Court, should be left to the legislative or executive branches. Luther v. Borden (1849) is an early example in which the Supreme Court refused to decide whether Rhode Island's state government was legitimate, arguing that the "Republican Guarantee Clause" of Article IV was better left to Congress to interpret. However, since the 1960s, the Court has interpreted "political questions" very narrowly as it agreed to enter the political thicket of voting-rights and legislative apportionment cases. Aside from the rules governing justiciable cases and controversies, the judiciary is limited further in its jurisdiction by Sections 1 and 2 of Article III. Section 1 vests the judicial power in "one Supreme Court, and in such inferior Courts as the Congress may from time to time ordain and establish." Section 2 provides that "In all Cases affecting Ambassadors, other public Ministers and Consuls, and those in which a State shall be Party, the supreme Court shall have original Jurisdiction. In all the other Cases before mentioned, the Supreme Court shall have appellate Jurisdiction, both as to Law and Fact, with such Exceptions, and under such Regulations as the Congress shall make." These two limits structure the judiciary within the Constitution's overall design of separation of powers as well as checks and balances. Quite simply, Article III only mandates that there be one Supreme Court; all other inferior federal courts, then, are purely creatures of congressional design. This means that Congress has the power to create, as well as eliminate, any and all federal courts except the Supreme Court. Moreover, the size of the Court—currently set at nine—is also not fixed. It is possible, as we saw during the Roosevelt administration and prior to that during Reconstruction, for Congress to propose changes in the size and number of federal courts. The original/appellate jurisdiction distinction further limits the Court. Only those types of cases specifically mentioned can be heard by the Court in the first instance; every other case can reach the Court only on appeal, through writs of error or certiorari. The last clause of this section, though, illustrates the vast power that Congress possesses over the judiciary. The "under such regulations" clause allows Congress to determine what kinds of federal cases are to be heard by
the Court. Habeas corpus jurisdiction (the ability of individuals to challenge the legality of their detention in court), for example, was taken away from the Court during Reconstruction for 17 years (1868–85). Habeas cases could still be heard by lower federal courts, but as we have seen, if Congress chose to eliminate those courts, it could effectively control almost any kind of case that the judiciary could hear. While the federal courts can exercise judicial review over federal and state legislation, Congress, if it so chooses, can also exercise its power to limit the judiciary's jurisdiction and cognizance a great deal. The most powerful tool of the judiciary, though, is interestingly one that is not enumerated in the Constitution: judicial review, or the power of the judiciary to decide on the constitutionality of state and federal legislation. Though judicial review as we know it today had actually been exercised for many years even before ratification, judicial scholars mark its birth in Marbury v. Madison (1803). In question was the constitutionality of a portion of the Judiciary Act of 1789, in which Congress structured the nascent federal judiciary and the types and kinds of cases it could hear. Chief Justice John Marshall ruled that a portion of this act that had granted the Court original jurisdiction in some types of cases was unconstitutional, or "void," in his words. While Congress did have the power to structure the judiciary in many ways, Congress did not have the power to change the original and appellate jurisdictions as enumerated in Article III. This could only be done through a constitutional amendment and not by an "ordinary" act of the legislature. Marshall proclaimed that "It is emphatically the province and duty of the judicial department to say what the law is. . . . If, then, the courts are to regard the constitution, and the constitution is superior to any ordinary act of the legislature, the constitution, and not the ordinary act, must govern the case." This construction by Marshall not only envisions the Constitution as "higher law" but also suggests that conflicts between acts of legislatures and the Constitution are to be resolved by the judiciary. Judicial review, as explained above, while potentially powerful, is inherently limited. As this entry has shown, the judiciary is neither all-powerful nor impotent. Instead, the interaction among all three
branches structures the political and legal context in which the judiciary acts. While the Court saw an unprecedented rise in power and prestige in the second half of the 20th century as it helped to eradicate decades of racial discrimination, voting abuses, and criminal-procedure deprivations, the overall reach of the Supreme Court is always potentially determined by nonjudicial actors. It is quite possible that if a new majority on the Court becomes significantly at odds with the legislative or executive branch in the near future, we could see important limitations placed on the Court through the processes laid out in Article III. Further Reading Bickel, Alexander. The Least Dangerous Branch: The Supreme Court at the Bar of Politics. 2nd ed. New Haven, Conn.: Yale University Press, 1986; Corwin, Edward S. The Constitution and What It Means Today. 11th ed. Princeton, N.J.: Princeton University Press, 1954; Dahl, Robert. "Decision-Making in a Democracy: The Supreme Court as National Policy-Maker." Emory Law Journal 50 (2001): 563; Hamilton, Alexander, James Madison, and John Jay. The Federalist Papers. Edited by Clinton Rossiter. New York: The New American Library, 1961, Nos. 78–83; Lutz, Donald S. The Origins of American Constitutionalism. Baton Rouge: Louisiana State University Press, 1988; O'Brien, David M. Storm Center: The Supreme Court in American Politics. 7th ed. New York: W.W. Norton, 2005; Rosenberg, Gerald N. The Hollow Hope: Can Courts Bring About Social Change? Chicago: University of Chicago Press, 1991. —Justin J. Wert
judicial philosophy Over the years, and particularly due to the overt politicization of the judicial confirmation process during the past few decades, the concept of judicial philosophy has become relevant as an apparent predictor of a prospective judge's stand on potentially justiciable issues. More broadly, judicial philosophy refers to the ostensibly systematized, if not rational, set of standards, criteria, or principles according to which judges decide legal cases. As such, though it inevitably involves the analysis and assessment of descriptive factors and parameters, judicial philosophy largely
reflects the normative determinants that drive judicial decision making. Since at least the failed confirmation of Judge Robert Bork in the late 1980s for a seat on the U.S. Supreme Court, a federal nominee’s judicial philosophy has been the central component of de facto litmus-testing intended to qualify, or disqualify, the nominee based on future voting probabilities. Despite the obvious importance of judicial philosophy as a variable in legal decision-making processes, efforts to define the concept have often been vague and misleading, especially in their inability to describe the differences between judicial philosophy and related concepts, such as jurisprudence (or, more appropriately, jurisprudential philosophy). Most frequently, discussions of judicial philosophy are practically indistinguishable from those concerning jurisprudential philosophy, and the two terms are, thus, used interchangeably in many contexts. On the whole, the term judicial philosophy is favored by those whose concern is the specific voting behavior, either past or future, of a particular judge, whereas the more academic term jurisprudence is utilized by those whose focus is the doctrinal significance of the decisions that arise from that behavior. Nevertheless, despite the manifest similarities, both substantive and otherwise, between judicial philosophy and its apparent cognates, the concept of judicial philosophy is sufficiently distinct from these others, such as jurisprudence, to warrant separate classification and elucidation. In fact, judicial philosophy is an umbrella concept that encompasses many of these cognates, but it also transcends them through its more inclusive or general character. At the most rudimentary level, judicial philosophy comprises at least four interrelated yet theoretically discrete elements: jurisprudence, legal hermeneutics, political ideology, and what can best be categorized as judicial ethics. Each of these elements contributes to a judge’s overall judicial philosophy, and though it would be tempting to consider one or two as predominant or determinative in some sense, each judge’s or judicial nominee’s judicial philosophy represents a unique and, therefore, unpredictable mixture of these elements and other seemingly lesser factors. From an abstract perspective, jurisprudence arguably constitutes the most significant element of the above four, though its contribution to the mix is
obscured frequently by the political implications of the other three. Specifically, jurisprudence is used here to refer to those theoretical questions regarding the law that seek to address the ontological determinants of legal theory. In other words, with respect to the overall topic of judicial philosophy, jurisprudence concerns itself with the ontology (the nature of existence or being) of the law by confining its focus to inquiries about the nature, causes, circumstances, and defining principles of the law’s existence as law and not, as such, with the substance, meaning, or relevance of specific laws, regulations, or standards enacted as manifestations of that theoretical existence. Consequently, jurisprudential inquiries seek to provide a justification for a methodology and rationale that support a more or less unified and consistent approach to the conceptualization of general legal problems according to recognizable theoretical criteria. Practically speaking, such inquiries have endeavored to produce explanations of the power, authority, and legitimacy of the law by identifying the ontological sources through which the law originates and continues to exist. These efforts, though legion from a historical point of view, can be grouped into four general categories with respect to the practice of law during the past hundred years or so. Perhaps the oldest of the four is represented by adherents of natural-law jurisprudence. The idea that natural law of one sort or another is the ultimate source of derivative positive laws can be traced back more than 300 years and has an impressive intellectual lineage in the United States. Natural-law jurisprudence was one of the dominant doctrines among legal practitioners and political actors in colonial North America, particularly during the first two-thirds of the 18th century. Natural-law theories ultimately surrendered their preeminent status to those of legal positivism, but their cultural influences and doctrinal impact have been hard to ignore during the past 200 years, and they still continue to exert a powerful hold over a substantial minority of legal scholars, jurists, and politicians in the Anglo-American world, especially among those who consider themselves to be comparatively conservative in some fashion. The notion that the legitimacy of the U.S. Constitution and the subsidiary laws that exist as practical manifestations of it is guaranteed through an ontologically antecedent set of
political and legal principles of political morality resonates quite strongly with many people. For a growing minority, those principles are either linked to or established by a broader system of theologically determined norms that inextricably bind religion and politics in a way that makes others uncomfortable, but for most proponents of natural-law jurisprudence, it would be disingenuous to assert a causality of this type. Despite the profound historical importance of the natural law tradition in the United States, the vast majority of the nation’s jurists and politicians espouse some variant of legal positivism. Legal positivism is not the oldest jurisprudential doctrine in the United States, but it has become the predominant one among legal scholars, judges, lawyers, and politicians of all stripes. This doctrine, or set of doctrines, has such a well-established and secure pedigree for numerous reasons, among which is the fact that the U.S. Constitution was the creation of an unmistakably positivist outlook. In fact, many of the determinants of positivism eventually defined by legal scholar H. L. A. Hart in his path-breaking work on the topic can be found in the Constitution as envisioned by its framers. Legal positivism was accorded a pride of place, theoretically speaking, that has stacked the deck against insurgent theories since the founding. Although it would be too crude to claim that the Constitution represents the simple concept of law as the command of the sovereign, such a concept is conspicuously present and seems to underscore the entire constitutional structure. If the Constitution presupposes that the law is something posited by those who possess sovereignty and the authority to govern, then the conclusion that the Constitution ipso facto defines law according to legal-positivist principles is difficult to escape. This, of course, is not the only reason why legal positivism enjoys doctrinal superiority in the United States, but the U.S. system of justice does appear to bias the practice of law in a way that assumes the legitimacy of legal positivism a priori. Furthermore, aside from the associated abstract ontological questions about the law as law, the Constitution does declare that, in the United States, constitutional and ordinary (statutory and otherwise) law originates with the people, that is, the sovereign.
As should already be clear from the discussion of natural law, none of the above is intended to imply that the status of legal positivism in the United States has been unchallenged or unrivaled. Indeed, one of the most formidable challenges to legal positivism, particularly in its incarnation through legal-formalist approaches during the Gilded Age and the Progressive era, has been what ultimately emerged as legal realism. Legal realism has represented a consistent and powerful alternative to the determinism and ostensibly unworkable foundationalism of legal positivism. Especially during the first third of the 20th century, legal realism posed a tangible threat to the perceived legitimacy of legal positivism as a viable doctrine, and it offered insights into nearly insoluble contemporary social and political dilemmas in that its basic tenet rested on the belief that positive laws (such as statutes or other codes) did not determine the outcome of legal disputes. Rather, judges were the ultimate determinant of the law through their rulings. By the end of World War II, though the purely intellectual appeal of legal realism seemed unabated among some progressive circles, its practical applications and relevance in the legal profession had been somewhat marginalized. Throughout the remainder of the 20th century and during the early 21st, legal realism's popularity and utility were confined largely to the academic realm, where legal-realist doctrines continued to attract strong support, particularly among a cadre of social scientists in key disciplines. At the risk of minimizing the substantial contributions of a doctrinal approach that has made its presence felt for more than a century in various legal environments, legal realism was always more effective as a critique of positivism and traditional doctrinal postures than as a consistent and coherent program for the practice of law within the structural constraints of the U.S. legal system. As a result, to the nonacademic legal profession itself, legal realism has remained an intellectual curiosity without a serious following among practitioners of the law. Much the same could be said about the last and newest group of doctrinal approaches within jurisprudence. Like their legal-realist counterparts, postmodern approaches to law have enjoyed relative success among academics but have witnessed only a grudging acceptance among nonacademic practitioners,
without even the ephemeral real-world penetration that legal realism secured for itself. Critics of postmodernist jurisprudence have been unusually threatened by the apparent implications of doctrinal positions whose central contentions revolve around the claim that absolute and epistemological foundations of the law are a fiction of Enlightenment-centered and modernist obsessions with objectivity and truth. Despite the prominence of self-proclaimed neopragmatists such as Richard Posner, postmodernism has gained few practical adherents outside academe in its struggle for legitimacy. Scholars such as Stanley Fish have greatly enhanced our understanding of legal processes and the law generally, but the U.S. public and the practicing legal community have been loath to validate approaches that appear to undermine the very foundations on which our political culture is erected. In the end, jurisprudence may have a greater effect on the development of judicial philosophy than any other single element, but because most practicing jurists in this country subscribe to some variant of legal positivism, much of the differentiation in this regard comes from the other elements. A closely related element, which is for all intents and purposes the philosophical obverse of jurisprudence, aims to answer epistemological questions as opposed to ontological ones. If jurisprudence can be thought of as the ontology of the law, then legal hermeneutics should be viewed as the epistemology (the origin and nature of human knowledge) of the law. By concentrating on the various problems associated with the determinacy, discoverability, and definability of the law, legal hermeneutics seeks to formulate systematic approaches to the interpretation of the law. Over the years, various theories of legal interpretation have gained currency among legal practitioners so that, despite the rather wide acceptance of positivist ontological doctrines, the legal profession has not witnessed a corresponding uniformity regarding hermeneutic or epistemological theories. Since at least the end of the 19th century, little consensus has existed about the most appropriate or effective method of legal interpretation. Numerous methodologies and theoretical frameworks have generated a dizzying array of interpretive paradigms, but they can be divided into two general categories, from which further divisions can be identified. Thomas C. Grey
referred to these two categories as interpretivism and noninterpretivism, though this essay utilizes the terms in a much more inclusive sense. Interpretivism refers to those hermeneutic approaches that are text centered; that is, it includes interpretive methodologies whose justification for existence is the conviction that the meaning and relevance of texts are discovered or constructed by focusing on the interpretation of the text per se and the specific intent, historical or otherwise, of the author. Moreover, despite the fact that interpretivism recognizes the presence and the influence of subsidiary, ancillary, or otherwise complementary documents and influences, it restricts the boundary of legitimate interpretive activity to the words of the target text itself. As such, interpretivism rejects the idea that, obvious issues of textual determinacy aside, language is radically indeterminate or that interpretation is therefore unbounded or unconstrained. On the other hand, noninterpretivism includes approaches whose interpretive scope is relaxed to include sources, texts, or other influences beyond the text in question itself. This may entail the use and interpretation of supplementary or complementary materials that are deemed to be central to an understanding or relevance of the target text so that the target text remains the central focus of any hermeneutic activity and the additional materials are used for clarification, elucidation, or elaboration of some sort as necessary. Noninterpretivism also accommodates the epistemological side of natural-law jurisprudence, and, in this guise, since principles antecedent to or independent of legal texts themselves legitimize those texts, the law must be interpreted in accordance with the dictates of such higher principles. Alternatively, noninterpretivism can involve the utilization of radically subjective approaches that shun the target document altogether and merely rely on it and its provisions as rationalizations for the fulfillment of nakedly political objectives. In this extreme formulation, noninterpretivism is based on the assumption that the essential indeterminacy of language prevents bounded and constrained interpretation based on commonly identifiable standards. At the very least, nonetheless, all forms of noninterpretivism share the conviction that language is, to some extent, indeterminate and that such indeterminacy belies the discovery or construction of meaning
through a reliance on the words of the target text alone. Among the most noteworthy variants of interpretivism has been the textualism or literalism of people such as Hugo Black (who believed in, among other things, an absolutist approach to interpreting the First Amendment) who have argued that words possess an absolute, self-evident meaning that is discoverable and unambiguous and whose interpretation involves the conscientious application of the meaning of the words to all interpretive settings. Accordingly, meaning and relevance are qualities or properties that inhere in the text itself and whose permanence is unaffected by the practical circumstances in which they are invoked. Within the interpretivist tradition, one of the more famous approaches has been originalism, which claims that the original intent of the authors of a legal text should and does constitute the regulating and controlling interpretive criteria according to which the meaning and relevance of legal texts are determined. So, for instance, exponents of originalism assert that the intentions of the framers of the Constitution are intrinsically regulative and normative and that their understanding of the text should govern our interpretations of it. One of the most unusual if not utterly atypical examples of interpretivism is neopragmatism. In fact, most writers would disagree with the classification of neopragmatism as interpretivism, not least due to neopragmatism’s rejection of the idea that meaning and relevance are discoverable because they are inherent in the text itself. By maintaining that interpretation assigns, creates, or constructs meaning and relevance, neopragmatism seems to contradict one of the central tenets of interpretivism. Nevertheless, because neopragmatists share the interpretivist conviction that the interpretation of authorial intent lies at the heart of any interpretive activity, neopragmatist approaches are ispo facto text based, which places them under the interpretivist umbrella. By refuting the notion of radical indeterminacy and the related idea that interpretation is unbounded and unconstrained, neopragmatists have displayed an affinity for interpretivism and a fundamental incompatibility with noninterpretivism. The differences between interpretivism and noninterpretivism are often slight, and even the espousal of extratextual foundations does not necessarily pre-
clude interpretation that is inherently bounded and constrained by a priori norms. Such is the case with natural-law-based theories of interpretation, inasmuch as they are rooted in the conviction that interpretation is ipso facto determinate and bounded by the political and legal principles whose existence is antecedent to that of the laws that must be interpreted. For example, natural-law scholars such as Hadley Arkes believe that certain tenets of political morality, whose existence is evidenced in such documents as the Declaration of Independence and the Magna Carta, validate the foundational principles of government defined in the Constitution and that, therefore, the Constitution must be interpreted pursuant to those tenets of political morality, which endow the Constitution with meaning. So interpretation in this example does transcend the text, but it does not transcend the fundamental beliefs represented by the text. Within a real-world context, most noninterpretivists, such as William Brennan, do not reject the precedence and priority of the actual legal text. Their central contention is that legal texts are living documents whose meaning and relevance must change along with the settings in which they are interpreted. Furthermore, those texts must be able to incorporate and confront the historical developments that ensue subsequent to the creation of those texts. So to determine or to discover the proper meaning and relevance of legal texts in evolving contexts with differing circumstances, interpreters must widen their hermeneutic scope to include the supporting texts and materials that clarify and elucidate the aforementioned developments and their impact on target texts. Obviously, as indicated, there are those who have forwarded the more radical proposition that language, legal and otherwise, is radically indeterminate and that, thus, interpretation is nothing more than the pursuit of bald political agendas unconstrained by definable legal foundations, but the bulk of noninterpretivists do not fall into this camp. In addition to jurisprudence and interpretation, political ideology manifestly affects a person’s judicial philosophy. Customarily, we have split political ideology crudely along a conservative–progressive divide, but that is not very instructive in this setting. Although it is not at all uncommon to assume that a judge’s putative conservatism or progressivism almost
inevitably will lead to particular juridical outcomes, history has provided us with far too much evidence to the contrary. Oliver Wendell Holmes, Earl Warren, and David Souter are but three examples of the fact that assuming the existence of determinate correlations between political ideology and judicial philosophy is not empirically warranted. Yet, political ideology does play a role in the shaping of an individual’s judicial philosophy, so this factor must be considered. At the most unsophisticated level of generalization, we can presume that judges who align themselves along the right side of the political spectrum will be more sympathetic to the priorities of local and state instead of federal government and that they will frequently look askance at regulatory measures and initiatives. We also assume that those kinds of judges will often err on the side of law and order in the arena of domestic security and police powers and that many of them will not be inhospitable toward the state endorsement, if not promotion, of a comparatively clear set of sociocultural values. On the other side, we are comfortable with the assumption that judges who label themselves as denizens of the left of center will be likely to validate government policies that intend to regulate economic activity or redistribute resources in a way that appears less regressive. We may be correct in presuming that such judges will more vigorously enforce the rights of alleged criminals and attempt to curb the overextension of coercive capabilities by the state and they may be more willing than others to acknowledge the existence of rights, privileges, immunities, and liberties that are not positively recognized by foundational legal documents. Evidently, it is possible to hypothesize in this fashion ad infinitum and to propose myriad possible correlations with respect to the relationship between political ideology and judicial philosophy, but the limitations of such conjecture are clear. Unfortunately, the most definite and determinate statement that can be made regarding this relationship is that political ideology will undoubtedly shape a judge’s judicial philosophy and that historical evidence supports the validity that the above observations support in many cases. However, the exact nature of that relationship is neither definable nor discoverable, and it appears to be susceptible to the kind of variation
that would render further conclusions unreliable. Therefore, it would be foolish and simplistic to assume that a particular ideological outlook corresponds to a determinate judicial philosophy, but the connection between the two is nonetheless strong, if not always uniform. Of the four major elements that compose judicial philosophy, the most problematic to classify in a useful and understandable manner is judicial ethics. In fact, the term judicial ethics may be somewhat misleading due to its prevailing popular connotations, but if we conceptualize ethics in a more or less classical sense as a discipline that addresses questions not only of goods as substantive objectives but also of the behavioral norms animated by those objectives, then the concept of judicial ethics has relevance in this discussion. Pursuantly, the arena of judicial ethics includes the norms, standards, and criteria according to which judges make decisions about the proper role of the judiciary and, by extension, jurists in our system of government. These decisions manifest themselves as specific behavioral postures that have often been labeled as either activist or restraintist in nature, but such labels can inhibit an adequate understanding of judicial behavior by posing a false dichotomy between judicial activism and judicial restraint. The core issue here is not whether a judge is intrinsically activist or restraintist because such a formulation of the problem puts the cart before the proverbial horse. Rather, the critical issue concerns a judge’s ethical convictions regarding the legitimate ends of juridical activity in a constitutional system such as ours. If those ethical convictions lead a judge, á la Oliver Wendell Holmes or Felix Frankfurter, to conclude that the purposes of the judiciary are only served and fulfilled through deference to the democratically defined objectives that emerge from legislative institutions, then what is commonly referred to as restraint must govern judicial behavior, regardless of the advisability or desirability of those objectives according to ideological criteria. On the other hand, if the judiciary is viewed as an affirmative agent or author of change or the protector and even arbiter of objectives essentially independent of and antecedent to legislative processes, as has been the case with judges as different as William Rehnquist and Earl Warren, then judges
will be inclined to act in ways that are perceived as activist. A popular yet mostly false rule holds that progressive judges are essentially activist and that conservative ones are mostly restraintist. However, conservative judges, such as Rehnquist and also, on occasion, Antonin Scalia, have been just as likely as their putatively progressive colleagues to embrace activism in pursuit of ostensibly viable doctrinal objectives. Likewise, though so-called liberals (or, more appropriately, progressives) are customarily portrayed as inherently activist, history has provided us with plenty of examples, such as Frankfurter or liberal judges whose behavior has been anything but activist. All in all, much the same conclusion can be drawn about the relationship between judicial ethics and judicial philosophy as was true of the presumed correlation between political ideology and judicial philosophy. An inextricable link binds the two in a way that the existence of the relationship is undeniable, but that relationship has not been characterized by the kind of uniformity that would make generalizations about it reliable. Anyone searching for a formulaic description of the dynamic among the aforementioned elements that compose judicial philosophy will not find it in this essay. Although these four components are clearly among the most significant of the many factors that shape the development of judicial philosophy, that development is marked more by differentiation and individuation than by categorization and conformity according to repeatable or predictable patterns. This is not meant to imply that no conclusions can or should be drawn regarding perceived cause and effect or the apparent nature of the dynamic among the various elements that influence judicial philosophy, but such conclusions can only be provisional and tentative and not universal, either from a descriptive or normative perspective. Further Reading Arkes, Hadley. Beyond the Constitution. Princeton, N.J.: Princeton University Press, 1990; Bickel, Alexander. The Least Dangerous Branch: The Supreme Court at the Bar of Politics. New Haven, Conn.: Yale University Press, 1962; Bork, Robert H. The Tempting of America: The Political Seduction of the Law. New York: Free Press, 1990; Fish, Stanley. Doing What
Comes Naturally: Change, Rhetoric, and the Practice of Theory in Literary and Legal Studies. Durham, N.C.: Duke University Press, 1989; Friedman, Lawrence M. A History of American Law. New York: Simon and Schuster, 1985; Grey, Thomas C. “Do We Have an Unwritten Constitution?” Stanford Law Review 27 (1975): 703–718; Hall, Kermit L. The Magic Mirror: Law in American History. Oxford, England: Oxford University Press, 1989; Hart, H. L. A. The Concept of Law. Oxford, England: Oxford University Press, 1961; McAffee, Thomas B. “Prolegomena to a Meaningful Debate of the ‘Unwritten Constitution’ Thesis.” Cincinnati Law Review 61 (1992): 107–169; Posner, Richard A. The Problems of Jurisprudence. Cambridge, Mass.: Harvard University Press, 1990. —Tomislav Han
judicial review Judicial review is the power of the judicial branch to declare actions of the legislative branch and the executive branch invalid on the grounds that they violate the U.S. Constitution. This power is not explicitly granted to the U.S. Supreme Court by the Constitution. Article III lays out the structure and jurisdiction of the federal judiciary and specifies the original and appellate jurisdiction of the Supreme Court, but it never states that the Court has the power to invalidate actions of the other branches of government. However, much as Article II vests "the executive power" in a president without much elaboration, so too does Article III vest "the judicial power" in a supreme court, leaving the interpretation of what exactly constitutes "the judicial power" somewhat open to debate. The argument for judicial review is first and most powerfully made by Alexander Hamilton in Federalist 78 (though the Anti-Federalists also anticipated its use and warned against it). The principal theme of his argument at the outset is not judicial review but tenure during good behavior. Federal judges hold their jobs for life unless convicted of improper behavior in an impeachment trial. Hamilton argues that the major reason for this tenure is to enable the Court to resist external pressure when performing its function of adjudication. In a famous statement, Hamilton labels the Court the branch "least dangerous to the political rights of the Constitution." This is so because by itself
the Court can do nothing to endanger our political rights. The two traditional powers of government are the power of the purse and the power of the sword. The power of the purse is the power to tax and pass laws, which is controlled by Congress. The power of the sword is the power to act and enforce the law, which is found in the executive branch. By contrast, the Court has no direct influence in these areas. It has only the power to exercise its judgment, but it must rely on the other two branches to make its judgments effective. The framers also believed that it was likely that the three branches of government would attempt to encroach on each other’s functions and prerogatives. Because the Court was the weakest of the three branches, it was quite possible that Congress or the president would invade its arena. Many framers, in fact, believed that Congress would be the most powerful branch in a republican system of government. Hamilton saw this possibility as very dangerous, for although the Court by itself can do nothing to endanger political rights, if its functions were combined with one of the other two branches, the result would be hazardous. One has only to imagine what would happen if the branch that has the power to make or enforce a law also has the power to declare the meaning of the Constitution and how its actions fit within that meaning. The only way to protect the Court, then, and the liberty of U.S. citizens is to give the Court adequate independence, and the only way to do that is to give its members permanent tenure. It is at this point in his argument that Hamilton introduces the concept of judicial review (though he does not call it that). He sees the need for the weakest branch of government to be given protection from the other branches so that it can do its job—and that job is the power to declare laws and other actions invalid if they contradict the Constitution. Hamilton does not link this assertion to any specific clause in the Constitution itself—his argument rests on the logic of the Constitution’s status as a fundamental law that constrains and limits the power of government. This argument merits elaboration. Hamilton makes the point that the federal constitution is limited—that it limits the power of the government, particularly the legislative branch. The Constitution is an example of fundamental law—it is a higher law than the normal legislation passed by
Congress for it is the Constitution that gives Congress its authority to make laws. By specifying the powers and authority of government, the Constitution limits those powers and authority. Thus, no law passed by Congress contrary to the Constitution can be considered valid. This might happen because Congress passes a law that it thinks is constitutional but in fact is not. More critical, Hamilton acknowledges the possibility that a legislative majority could pass a law targeting a minority in the community for unjust treatment. In other words, it is possible in a democratic system of government for the people’s representatives to attempt to infringe on the rights and liberties of other citizens. Therefore, the nation needs some agency to determine whether a law is unconstitutional or not. Hamilton argues that Congress cannot perform this function for it is too likely to make its decisions based on popular opinion. How can a majority that is making a law targeting a minority’s rights or liberties be trusted to render justice on the question of whether or not the law it is passing is constitutional? Thus, the Court has the power to interpret laws and to declare the sense of the law. Hamilton argues that this power of judicial review does not make the Court superior to Congress; instead, it makes the power of the people superior to both. Although Congress represents the will of the people in a republican system of government, the highest will of the people is embodied in the Constitution— the fundamental law that was drafted and ratified by “we the people.” That highest expression of the popular will should always prevail over temporary and transitory passions. It is for this reason that the Court requires permanent duration—to give the justices the independence they need to resist the other branches of government or popular opinion in general and to be able to adhere inflexibly and uniformly to the rights and liberties in the Constitution. This argument also demonstrates that the Court’s primary objective of protecting liberty and individual rights contributes to the stability of the nation by countering unconstitutional changes in the law and to popular opinion by protecting the highest law established by the people. Hamilton’s argument for judicial review did not become a reality until the Supreme Court claimed it as a power. The Court was a comparatively weak
branch of government in the early years of the republic, just as Hamilton suggested. Its first Chief Justice, John Jay, negotiated a treaty with Great Britain while on the bench and then left the position to serve as governor of New York, hardly the traditional career ladder witnessed among federal judges today. When John Marshall became Chief Justice, however, he transformed the power of the Court by claiming precisely the power that Hamilton had described years earlier. The court case that established the power of judicial review was Marbury v. Madison. Decided in 1803, the case centered on the failure of Secretary of State James Madison to deliver the commissions for several last-minute appointments made at the end of John Adams's administration. Although Marshall argued that Marbury and several others were entitled to their commissions, he also said the Court had no power to compel the government, through a writ of mandamus, to deliver them. Congress had expanded the Court's original jurisdiction unconstitutionally in the Judiciary Act of 1789 by giving it the power to issue such writs, and it was that clause that Marshall declared invalid. His argument is remarkably Hamiltonian. He states that the Constitution is "a superior, paramount law, unchangeable by ordinary means." Because of that, any "act of the legislature, repugnant to the constitution, is void." In one of the most important statements in constitutional law, Marshall then argues that "it is emphatically the province and duty of the judicial department to say what the law is." If there is a conflict between ordinary legislation and the Constitution, "the court must determine which of these conflicting rules governs the case." That is "the very essence of judicial duty," and the contrary argument would amount to "giving to the legislature a practical and real omnipotence." In more than two centuries, the Supreme Court has struck down some 160 acts of Congress. It has also invalidated numerous state laws and presidential actions. Because of its position as a branch that interprets the Constitution, sometimes making decisions that run counter to majority sentiment, it is imperative that the Court be sound in its reasoning for the sake of its own legitimacy. Its real power lies in the force of its arguments. When the Court ventures into controversial areas, such as slavery in Dred Scott v. Sandford (1857) or abortion rights in Roe v. Wade
(1973), it can prompt a public backlash that exacerbates the problem the Court thought it was resolving. If the Court is exceptionally vigorous in striking down legislation as the Supreme Court was in invalidating numerous New Deal proposals in President Franklin D. Roosevelt’s first term, then it can prompt strong political attacks by the other branches of government. Also, the Court’s decisions are not self-executing; they require the cooperation of the elective branches of government. Perhaps the most famous example of the Court’s inherent weakness is the apocryphal story of President Andrew Jackson’s reaction to Chief Justice John Marshall’s decision to order the restoration of Cherokee lands. Jackson is reported to have replied: “John Marshall has made his decision. Now let him enforce it.” It is this inherent weakness, based on the Court’s lack of enforcement ability, which testifies to the Court’s weakness as an institution. The Court’s use of judicial review, however, has often provoked charges that it is too powerful—that it is an “imperial judiciary.” Many of the political arguments about the proper role of the Court and its use of judicial
review center on disputes about how it has exercised this power anticipated by Hamilton and claimed by Marshall. See also opinions, U.S. Supreme Court. Further Reading Bickel, Alexander. The Least Dangerous Branch. Indianapolis, Ind.: Bobbs-Merrill, 1962; Hamilton, Alexander, James Madison, and John Jay. The Federalist Papers. Edited by Clinton Rossiter. New York: New American Library, 1961, No. 78. —David A. Crockett
jurisdiction In law and in politics, jurisdiction refers to the authority of a given entity to decide legal matters and administer justice within a specific area of legal responsibility. Within the U.S. system of government, jurisdiction separates the duties and responsibilities of each branch of government and distinguishes the powers that rest in the hands of the national and
subnational government structures. Questions of jurisdiction fuel many political debates, and in fact the very concept of jurisdiction was the most contentious issue in the debate surrounding the ratification of the U.S. Constitution. There are three types of jurisdiction: geographical, subject matter, and hierarchical. Geographical jurisdiction simply means that an entity has jurisdiction over a particular matter if that matter arises within a specific geographic area. For instance, a state court of last resort in California (which would be the California Supreme Court) has geographical jurisdiction over the entire state, whereas a trial court in California (such as a superior court) only has geographic jurisdiction in a particular city or county. Similarly, the U.S. Congress has geographic jurisdiction over the entire United States, whereas the single-house legislature in Nebraska can only make laws that apply to the state of Nebraska and its citizens, as defined by the geographical boundaries that set it apart from other states. Subject-matter jurisdiction divides law into the categories of civil law and criminal law. Civil matters are those in which a dispute exists between private parties, and criminal matters are those in which a law has been broken and the government is a party in the case. If there is a dispute between person A and person B concerning whether person A owes person B money, this is a civil dispute in which civil law has jurisdiction. If person A kills person B because of the dispute, then it becomes a criminal matter under the jurisdiction of criminal law. In some U.S. states, there are separate courts established to hear civil cases and criminal cases. There are federal courts, known as Article I courts, that only have a single area of subject-matter jurisdiction, such as tax, patent, or bankruptcy law. These courts were established by Congress, which is why they are referred to as Article I courts, to help relieve the workload of the other federal courts. Moreover, Article I courts deal with technical matters that not all judges or juries are equipped to handle. Thus, the Article I courts have judges with an expertise in a specific area of law that enables them to make well-informed decisions on the matters brought before them. However, most state and federal courts have jurisdiction over both civil and criminal matters. Article III of the U.S. Constitution describes the 11
areas of subject-matter jurisdiction for the U.S. Supreme Court, the circuit courts of appeals, and the federal district courts. Hierarchical jurisdiction ranks the courts and other political entities according to their authority. The highest court in a state's hierarchy is the state court of last resort, followed by the intermediate appellate court, and finally the trial courts. At the federal level, the highest level in the judicial hierarchy is the U.S. Supreme Court, followed by the circuit courts of appeals, and ending with the federal district courts. Each of these courts has its jurisdiction laid out in general terms by Article III of the U.S. Constitution. Legislation can modify the jurisdiction of the courts. Acts passed by Congress, such as the Judiciary Act of 1789 and the Court of Appeals Act of 1891, are examples of Congress's ability to control the structure and jurisdiction of the federal courts. There are a number of considerations that account for a court's placement in the jurisdictional hierarchy, the first being whether the court has appellate jurisdiction, original jurisdiction, or both. The Constitution states that the Supreme Court is to deal with "cases and controversies" stemming from the U.S. Constitution. As such, the Supreme Court can hear cases that are appealed to it from the lower-level courts, thus giving it appellate jurisdiction, and the U.S. Supreme Court also has original jurisdiction in some cases. Original jurisdiction means that certain cases are heard first at the level of the Supreme Court. These cases include legal disputes involving foreign diplomats such as ambassadors, ministers, and consuls, as well as legal disputes between states. In the entire history of the Court, it has heard only roughly 200 original jurisdiction cases, and most dealt with border disputes between states prior to the 20th century. Under its appellate jurisdiction, the Court receives roughly 7,000 cases per year on appeal from lower courts. Thus, appellate jurisdiction gives a court authority over courts that have only original jurisdiction. The U.S. federal district courts have only original jurisdiction, whereas the circuit courts of appeals have appellate jurisdiction only. Thus the circuit courts of appeals are placed above the federal district courts since they can hear and overturn appeals coming from the federal district courts, but the circuit
courts are placed beneath the Supreme Court since the decisions handed down by circuit courts can be appealed to and overturned by the Supreme Court. The circuit courts of appeals do not have original jurisdiction. The Supreme Court also separates itself from the circuit courts of appeals since it has a discretionary docket. Like 39 of the 50 state courts of last resort, the U.S. Supreme Court can decide which cases it chooses to hear and which to ignore. The circuit courts of appeals have no such discretion over their dockets, giving them a mandatory docket. Furthermore, the U.S. Supreme Court has geographic jurisdiction over the entire United States, the circuit courts have geographic jurisdiction only within their circuits, and the federal district courts have geographic jurisdiction only within their districts. Districts are geographically smaller than either of the other two geographic jurisdictions. Most states have a judicial structure that is similar to the national model. First, like the national scheme, there is a hierarchy in the state judicial structure. There is the trial level, which has original jurisdiction in civil and criminal matters. If there is a decision handed down by the trial court with which one of the sides does not agree, the decision may be appealed. In 39 of the 50 states, there is an intermediate appellate court to which a case from the trial court level is appealed. In the 11 remaining states, an appealed case goes directly to the state court of last resort, which is similar to the U.S. Supreme Court. In all cases of capital punishment, there is a mandatory appeal that goes directly to the state court of last resort. Jurisdiction is synonymous with power. A court possesses jurisdiction over matters only to the extent that the Constitution or legislation grants the court jurisdiction. The ability to hear and decide cases is the ultimate determiner of a court's power, as that is its sole responsibility. Thus, Congress, having been given control over certain aspects of the federal judiciary's jurisdiction, can limit or expand judicial power by manipulating that jurisdiction. At the same time, the U.S. Constitution lays out specific areas of jurisdiction that limit Congress's ability to control the courts. Since the federal courts, particularly the U.S. Supreme Court, are in charge of interpreting the Constitution, they can control their own jurisdiction in many cases. Many
times, political battles will be fought over jurisdictional boundaries. Further Reading Carp, Robert A., and Ronald Stidham. The Federal Courts. 4th ed. Washington, D.C.: Congressional Quarterly Press, 2001; O'Brien, David M. Storm Center: The Supreme Court in American Politics. 6th ed. New York: W.W. Norton, 2003. —Kyle Scott
law clerks Law clerks are the short-term support personnel to judges and courts who assist in legal research, writing, and decision making. Clerks are generally young, academically accomplished legal assistants fresh out of law school. Though their roles can vary, there is remarkable similarity in the duties performed by clerks employed at the local, state, and national levels. At the U.S. Supreme Court, the role of clerks has generated a fair amount of controversy, from who gets selected to the influence they have on the Court's internal workings. Because of this controversy and because most of the research on clerking has focused on the nation's highest tribunal, Supreme Court clerks will be the primary focus of this entry. Still, much of what is true of high-court clerks also applies to clerks on the lower federal courts as well as state and local courts. Because modern formal legal education did not take hold until the late 19th century (Harvard Law School adopted the case method in 1870), the path to a career in the law traditionally began with study and practice under a legal professional. As a result, the position of law clerk was held by aspiring lawyers who learned the law by apprenticing with an attorney or judge. As formal legal education became more common, clerkships were transformed into advanced-learning forums to be held after obtaining a law degree. Clerks went from career appointees to working for only one or two years with a judge before departing for positions in academia, government, and private practice. Though clerkships were born out of the apprentice model of legal education, the expansion of clerks at all levels of courts has largely been due to workload pressures: As courts have handled a greater number of cases, the numbers of both judges and clerks have expanded over time, yet their
responsibilities have not always developed purely as a result of workload pressures. Instead, seemingly unrelated institutional changes in the way that courts conduct their business have given rise to increasing clerk responsibility and influence. As a general rule, the most desirable and prestigious clerkships have been held by the top graduates of the top law schools such as Harvard, Yale, Chicago, Columbia, Stanford, Virginia, and Michigan. Historically, clerkships have been the province of white males from upper socioeconomic classes. Yet women and minorities have made inroads into this institution just as they have increasingly populated law schools and the legal profession as a whole. Lucille Lomen was the first female law clerk to serve at the Supreme Court, working for Associate Justice William O. Douglas in 1944. Yet it was not until two decades later that another female clerk was selected, when Margaret J. Corcoran was chosen by Associate Justice Hugo Black in 1966. The key event that triggered the regular selection of at least one female law clerk per term was future associate justice Ruth Bader Ginsburg's impassioned oral arguments as an attorney in the sex discrimination cases of the early 1970s. By the start of the 21st century, women routinely made up 40 percent of the Supreme Court's clerking corps. African Americans and other minorities have also had some success at obtaining clerkships, beginning with William Coleman's selection by Associate Justice Felix Frankfurter in 1948. Still, 56 years later, there had been a total of fewer than two dozen African-American Supreme Court law clerks. During the same period, there were about the same number of gay and lesbian Supreme Court law clerks. Supreme Court law clerks typically spend one year clerking at one of the U.S. Courts of Appeals for one of the top "feeder" judges who routinely place their clerks at the Supreme Court. For example, Courts of Appeals judges J. Michael Luttig, Laurence Silberman, and James Skelly Wright placed more than 30 of their clerks with Supreme Court justices. Indeed, clerks have become increasingly partisan over time, and ideology is a key factor in clerk selection. Application cues such as membership in liberal or conservative organizations and clerkships with feeder judges are the ideological signals used by Supreme Court justices when considering their potential clerks.
Initially, Supreme Court law clerks only studied and briefed the petitions that came to the Court—certiorari petitions, or cert petitions for short—as a way of learning the law. The exercise rarely if ever helped their justices, who met in regular private conferences with each other to discuss each petition. Yet as the number of petitions grew, Chief Justice Charles Evans Hughes decided in the mid-1930s to end the practice of the justices formally discussing each petition. Instead, it was up to the individual justices to decide for themselves which cases should be formally considered. Justices turned to their clerks for help in analyzing the petitions and asked for their recommendations as to whether each case ought to be heard by the Supreme Court. In the early 1970s, a number of the justices decided to pool their clerks to reduce what they felt was a duplication of effort. Prior to the creation of the "cert pool," each of the nine justices had one of their clerks write a memo on each case. Eventually, every justice joined the pool except Associate Justice John Paul Stevens. The justices in the cert pool shared a single pool memo on each petition, written by a clerk for one of the eight participating justices, thereby reducing duplicated effort and freeing up clerks for other work—namely opinion writing. The cert pool is controversial, and even the justices who participate are glad that Justice Stevens—who remains outside the pool as of this writing—provides a check on the single pool memo. When Stevens leaves the Court, it is likely that the justices will revisit the way they consider the nearly 10,000 petitions they receive each year. An almost completely hidden yet crucial part of the clerks' job is the ambassadorial role played out through the clerk network: the informal process of information gathering, lobbying, and negotiation that goes on among clerks from different justices. Prior to the completion of the Supreme Court building in 1935, every Supreme Court clerk worked at the home of his or her justice and rarely if ever saw the other clerks. By the time all of the justices and their staffs had moved into their own suites of offices at the new building in 1941, the clerks knew each other, routinely lunched together, and began to discuss the cases on which they were working. The justices quickly recognized that the clerk network could be used to gain information on what was happening in other chambers, which in turn helped them make decisions and form coalitions.
The justices became so reliant on the clerk network that they eventually established a separate, enclosed eating space in the Court cafeteria reserved specifically for the clerks so that they could speak with each other undisturbed and without fear of eavesdropping tourists, attorneys, and members of the press corps. For example, a memo from one of Associate Justice Harry Blackmun's clerks to her boss demonstrates how important the clerk network is to the decision-making process: "Last week . . . the Chief circulated a memo to Justice White. . . . This is sure to stir up trouble. . . . Justice O'Connor has sent Justice White a memo asking for several changes. . . . According to [O'Connor's clerk] . . . Justice O'Connor is still inclined to wait and has no plans to join anytime soon. . . . I understand from Justice White's clerk that Justice White has no intention of removing the references . . . in his draft. . . . I will keep you posted." Perhaps the most controversial part of a clerk's job is drafting the opinions that are issued in the names of their judges. As with the agenda-setting process, initially Supreme Court clerks drafted the occasional opinion as an exercise in learning the law. Justices rarely if ever used any of these clerk-written drafts in their own work. Early clerks did fill out footnotes to the opinions written by their justices. Today, it is rare for justices to draft their own opinions and routine for clerks to do all of the writing. The cause of this dramatic change was the decision by Chief Justice Fred Vinson, later continued by Chief Justice Earl Warren and all subsequent Chief Justices, to distribute opinion writing evenly among all nine justices. Prior to Vinson's 1950 shift to opinion equalization, opinions were assigned according to the speed at which justices completed them. Therefore, speedy writers such as Associate Justices William O. Douglas and Hugo Black wrote far more opinions of the Court than did their more deliberate colleagues such as Felix Frankfurter and Frank Murphy. When Vinson and later Warren practiced the equality principle in assigning opinions, the slower writers were forced to rely on their clerks to keep up the pace. Over time, a norm of clerk-written opinions developed that continues to this day.
changes. To be sure, clerks cannot write whatever they please, and judges generally provide direction and carefully read over the drafts written by their clerks. Still, judicial opinions are far different from speeches or op-ed pieces that are regularly ghostwritten for public officials. In the law, word choices and phrases are often crucial, so that a seemingly harmless phrase, such as "exceedingly persuasive justification," can be placed into an opinion by a clerk at one time and years later become a crucial test in sex discrimination law. Furthermore, the more it is understood that clerks are doing the drafting, the greater the possibility that the words and phrases used will lose authority with litigators, lower-court judges, and government officials responsible for implementing the decisions. In all, the institution of law clerk has undergone dramatic transformations over time. What began as an apprenticeship in learning the law has become a crucial part of the judicial process. Clerkships are highly competitive, coveted positions that lead to prestigious careers in the academic, government, and private law fields. As the responsibility and influence of clerks have grown over time, a concomitant danger of unelected, unaccountable clerks overstepping their bounds has also escalated. For the Supreme Court's first century, the justices alone conducted the Court's business. In the century since, they have added three dozen clerks and ceded much of the Court's work to them. Because of this radical development, it has become increasingly important not only for the justices and clerks themselves but also for Congress and the people of this country to be vigilant against apprentices who might be tempted to put on the robes of the master and try their hand at legal sorcery. Nothing less than the legitimacy of courts and judges is at stake. See also associate justice of the Supreme Court; chief justice of the United States. Further Reading Best, Bradley J. Law Clerks, Support Personnel, and the Decline of Consensual Norms on the United States Supreme Court, 1935–1995. New York: LFB Scholarly Publishing, 2002; Hutchinson, Dennis J., and David J. Garrow, eds. The Forgotten Memoir of John Knox: A Year in the Life of a Supreme Court
Clerk in FDR's Washington. Chicago: University of Chicago Press, 2002; Lazarus, Edward. Closed Chambers: The First Eyewitness Account of the Epic Struggles Inside the Supreme Court. New York: Times Books, 1998; Peppers, Todd C. Courtiers of the Marble Palace: The Rise and Influence of the Supreme Court Law Clerk. Stanford, Calif.: Stanford University Press, 2006; Ward, Artemus, and David L. Weiden. Sorcerers' Apprentices: 100 Years of Law Clerks at the United States Supreme Court. New York: New York University Press, 2006; Woodward, Bob, and Scott Armstrong. The Brethren: Inside the Supreme Court. New York: Simon and Schuster, 1979. —Artemus Ward
maritime law Maritime law, also referred to as admiralty law, is private international law governing relationships between private entities that operate vessels on the oceans. It is different from the Law of the Sea, a body of public international law dealing with navigational rights, mineral rights, and other relationships between nations. Maritime law covers such issues as shipping, commerce, seamen's safety, towage, docks, insurance, canals, recreation, pollution, and security, including piracy. Many trace the international roots of maritime law to the Rhodians, who lived in the eastern Mediterranean Sea. Although no direct records of their system of maritime law exist, we have references to it in both Greek and Roman law. While some argue that Rhodian law never existed or was never the basis for Roman maritime law, as others have posited, what is known with certainty is that Roman law created the basis for some of our modern conceptions of maritime law. A second international source comes from the maritime law of Oléron, an island off the coast of France. Eleanor of Aquitaine, queen of Henry II and mother of Richard I (the Lionheart), established this maritime code for the seafaring people of the island. Richard I then brought this legal system to England in about A.D. 1190. These laws were codified into a comprehensive English text called the Black Book of the Admiralty in 1336. Over the centuries, the English also borrowed heavily from the Marine Ordinances of
Louis XIV, the laws of Wisbuy, and the maritime laws of the Hanseatic League. Admiralty law in the United States originated from the British admiralty courts established in the American colonies. The British had established maritime courts in North America as early as 1696 and by 1768 had established Vice Admiralty courts in Halifax, Boston, Philadelphia, and Charleston. There are no records of maritime law being discussed during the Constitutional Convention, and the U.S. Constitution itself mentions admiralty and maritime law only briefly. By 1778, there were admiralty courts in all 13 states, and the Judiciary Act of 1789 placed admiralty courts under the jurisdiction of the U.S. federal courts. In the United States, maritime and admiralty jurisdiction enjoyed a broad conception at the time of the writing of the Constitution. Congress gave federal district courts exclusive original jurisdiction over all civil cases of admiralty and maritime jurisdiction, including seizures, navigation, and trade, both locally and on the high seas. U.S. maritime law was to be based not only on English law but also on the customs and maritime courts of all nations, especially those of continental Europe. In Article III of the U.S. Constitution, the terms admiralty and maritime jurisdictions are used without differentiating between them. In the original English context, admiralty cases referred to local incidents involving policy regulations of shipping, harbors, and fishing. Maritime cases referred to issues arising on the high seas, a term that in the present context refers to the area beyond 12 nautical miles from a country's coastline. In De Lovio v. Boit (1815), Justice Joseph Story, sitting as a circuit justice, defined the scope of federal admiralty jurisdiction by stating that it extended to all areas "which relate to the navigation, business, or commerce of the sea." U.S. federal courts were given jurisdiction over maritime law in Article III, Section 2 of the Constitution; in addition, the evolution of the statutes involved has led to state courts sharing some responsibility in maritime matters. In fact, state courts have concurrent jurisdiction as to most admiralty matters. However, in rem actions, or cases brought against property itself, can be heard only in federal courts. Prize cases, although rare throughout the 20th century, are heard in federal courts and involve cases that are
brought after the seizure of an enemy ship or cargo. When states have an overriding interest in the outcome, such as a case that threatens a state's environment due to pollution or toxic spills, the case may be heard in state court. Prior to the Civil War, there was a debate in the United States as to the appropriate scope of federal maritime law. The English tradition had been for the admiralty courts to limit themselves to disputes arising solely on the high seas, such as collisions, salvage, and seamen's claims for wages. The English admiralty courts had no jurisdiction over contracts. However, the U.S. maritime courts slowly expanded their power, based in part on the great importance of shipping and maritime activity to the United States. The Supreme Court did decide in 1848 that the federal district courts had jurisdiction in personam—directed at the person—as well as in rem—directed toward property. This means that cases can be brought directly against a ship, as if it were a person. Because the ownership of a vessel can be difficult to ascertain, or because there might be a risk of the ship leaving a district, seizing and suing the ship directly has evolved as a way to ensure that legal matters can be properly adjudicated. In The Lottawanna (1875), the Supreme Court established that states should not regulate maritime law, as that would defeat the purpose of the Constitution in creating uniformity among the various states. Today, however, state tribunals have authority to hear many maritime cases, and plaintiffs ("suitors") are often able to seek remedies under state law. Congress has asserted its authority to create and change maritime law as needed based on the commerce clause (Article I, Section 8, Clause 3 of the U.S. Constitution), which empowers Congress "To regulate Commerce with foreign Nations, and among the several States, and with the Indian Tribes." Subsequent cases, however, have grounded Congress's power to change maritime law not only in the commerce clause but also in the admiralty and maritime clause of the Constitution and the necessary and proper clause. The Supreme Court has also set out the areas over which admiralty courts have jurisdiction: shipbuilding contracts, for example, are outside maritime-court jurisdiction, as are contracts to sell a vessel. Cases
involving the Great Lakes, any navigable waterway connecting U.S. states, marine insurance, contracts to charter a vessel, and contracts to repair ships are all under the jurisdiction of maritime courts. Federal courts, according to Exxon Corp. v. Central Gulf Lines, Inc. (1991), have admiralty jurisdiction to protect maritime commerce. Under admiralty law, a ship's flag is one of the prime determinants of the source of applicable law. For example, a ship flying the U.S. flag in the Mediterranean Sea would be subject to U.S. law. While there have been exceptions to this rule, such as in cases involving collisions or those involving the slave trade, the issue of flag origination and the right to hoist a flag are of prime importance in settling which country has jurisdiction over a maritime case. There are many different components to maritime law. Maritime liens can arise in favor of banks that have lent money for the purchase of ships, seamen who are owed wages, and companies that supply any part of the necessities of seagoing travel, as well as those that have provided cargo. When a ship enters or leaves certain state ports, it is required to have a pilot on board. In instances of mishap at the hands of a pilot, the ship owner is not said to be responsible. The ship itself is described as being at fault and charged with a maritime lien that is enforceable in court. Other instances of a lien include collision and personal injury or salvage. Responsibility for maritime collisions is based on the fault principle, in that a colliding vessel is not found responsible for damage to another ship or object such as a wharf or a jetty unless the collision is caused by deficiency or negligence. Generally, the burden of proof lies with the moving vessel. In the United States, when two vessels were both to blame for a collision, the total damages traditionally were divided equally, regardless of each ship's individual degree of fault, although the Supreme Court replaced this rule with apportionment in proportion to fault in United States v. Reliable Transfer Co. (1975). The right of salvage is another special property of maritime law. When a ship or a crew saves maritime property from loss or damage, the crew is entitled to a reward. The amount of the reward is based on several factors, including the extent of the effort required to recover the goods, the skill and energy put forward in the salvage effort, the amount of money or goods involved, and the risks incurred during the salvage operation.
For tort cases to be heard in an admiralty court, the Supreme Court ruled in Executive Jet Aviation, Inc. v. City of Cleveland (1972) that a tort must "bear a significant relationship to traditional maritime activity." Therefore, cases in which swimmers injured other swimmers or airplanes crashed into lakes were not within the jurisdiction of admiralty courts; however, collisions between recreational vessels on navigable waters are within admiralty jurisdiction. There is no right to a jury trial in maritime cases, a custom that survives from Roman law, except in the case of a seaman's personal injury suit. The British admiralty courts, following the Roman precedent of disallowing jury trials in maritime cases, were most likely set up in great numbers in the American colonies to avoid the resistance of American juries in the Revolutionary War period. State courts allow jury trials based on the laws of each particular state. State courts hearing admiralty cases must abide by federal admiralty and maritime law codes, unless there is no congressional legislation, in which case states can hear cases and punish offenses. Limitation of liability is a key doctrine restricting ship owners' responsibilities after the loss of a ship. In essence, if a ship is lost in a circumstance that is beyond what a ship owner could reasonably expect, the owner can limit his or her liability to the value of the ship. In the United States, the limit is the value of the ship and the earnings from the voyage on which it was engaged. In the United Kingdom and other countries that have ratified the Convention on Limitation of Liability for Maritime Claims (1976), the limit is a set value based on the net tonnage of the vessel, regardless of its actual value. A ship owner can limit liability for losses due to the negligence of the crew but not for his or her own personal negligence. Modern maritime insurance appears to have made these limitation rules obsolete, but few countries have been willing to change rules that, in effect, acknowledge the potentially hazardous conditions of the shipping industry. Maritime law in many ways is a form of international law in that courts often look to the maritime laws of another country when there is trouble reaching agreement in their own courts. However, while there are international conventions to codify certain aspects of maritime law, each country has created its own maritime law as it sees fit. Individual country
maritime courts are moving to restore some of the uniformity that existed in previous times and are using international organizations to coordinate their efforts. The International Maritime Committee, commonly called by its French name, Comité Maritime International (CMI), is a private organization composed of the maritime law associations of more than 40 countries. This group was founded in 1897 and was responsible for many of the early writings on maritime law. The CMI still draws up drafts of international conventions and submits them to the Belgian government, where official country delegates discuss each CMI draft and then either ratify or reject it for international use. Since its first meeting in 1958, the International Maritime Organization (IMO) has carried out many of the duties and functions the CMI had undertaken. The IMO, formerly called the Inter-Governmental Maritime Consultative Organization (IMCO), is an agency of the United Nations that originally was formed in 1948. Among its many current duties, the IMO is charged with improving safety at sea, facilitating seafaring commercial activity, and creating regulations on maritime pollution. Since September 11, 2001, the IMO increasingly has addressed issues of security at sea, with the International Ship and Port Facility Security Code going into effect on July 1, 2004. The Maritime Law Association (MLA) of the United States was founded in 1899, two years after the founding of the CMI. It is a platform for the discussion of new maritime laws and the uniform interpretation of existing law. It operates in conjunction with the American Bar Association and is a member of the CMI. The MLA has standing committees that deal with such issues as cruise lines and passenger ships, fisheries, marine ecology, marine criminal law, insurance, and recreational boating, among others. U.S. maritime law is a constantly evolving body. In an increasingly globalized world, there is a need for greater conformity and adherence to internationally recognized maritime conventions to ensure protection of maritime rights among the major seafaring nations. While many complicated issues remain to be resolved in the area of maritime law, greater coordination of national laws will yield increased protection of both seamen and ship owners.
Further Reading Comité Maritime International's Web site. Available online. URL: http://www.comitemaritime.org; Gilmore, Grant, and Charles L. Black, Jr. The Law of Admiralty. Mineola, N.Y.: Foundation Press, 1975; International Maritime Organization's Web site. Available online. URL: http://www.imo.org; Lucas, Jo Desha. Cases and Materials on Admiralty. Mineola, N.Y.: Foundation Press, 1987; Maritime Law Association of the United States' Web site. Available online. URL: http://www.mlaus.org; Robinson, Gustavus H. Handbook of Admiralty Law in the United States. St. Paul, Minn.: West Publishing Co., 1939. —Peter Thompson
opinions, U.S. Supreme Court In the judicial system of the United States, an opinion is the ruling, as written and published, of a higher court that establishes a new legal precedent or overturns existing case law to create a new precedent. At the federal level, it is the job of the U.S. Supreme Court to base decisions on the merits of the case, with two separate components—the immediate outcome for the individuals involved in the case and a statement of general legal rules that will be used by lower courts in future cases. It is important to note that an opinion in this sense does not denote the view of an individual but represents the collective opinion and outcome of the court itself regarding a particular case. For the Supreme Court, the opinion serves as the basis for how the members of the Court have interpreted the U.S. Constitution in regard to a specific law passed at the federal, state, or local level. Many decisions that are reached by higher courts, including the Supreme Court, result in what is known as a memorandum opinion. This type of opinion clarifies how a law applies to the particular case and does not set precedent, but it can affirm or reverse the decision of a lower court. Unlike Congress, where specific procedures and rules exist for the introduction, debate, and vote of a particular bill, and where the deliberation of a piece of legislation is public, the Supreme Court has very different norms for the decision-making process. It is the job of an appeals court to determine whether the law was applied correctly or whether errors occurred in the trial that would invalidate the verdict, thus requiring a new trial. When the justices agree to hear a case,
each side presents new briefs to the justices. In an attempt to influence the justices’ decision, interest groups also submit amicus curiae briefs (known as friend-of-the-court briefs) to argue for one particular side in the dispute. An oral argument is then scheduled for the case, and in most cases, each side receives 30 minutes to make its case before the justices. Because the case is an appeal, no new evidence is introduced, and justices take this opportunity to question the attorneys representing each side. The oral argument is the only public aspect of the process, witnessed only by those people sitting in the chamber that day. Following the oral argument, the Court must then reach its decision and write its collective opinion. A tentative conference vote is taken by the justices after hearing a case, and then drafts of opinions are circulated among the justices until each decides how to rule. Since the votes are tentative until the final decision is handed down, the process of circulating drafts is an important one as it gives justices time to make compelling arguments to their colleagues in an attempt to gain support for their positions. Clerks, who work for the individual justices, play an integral role in writing draft opinions by helping justices with research on important legal precedents. All justices have unique styles and approaches to writing opinions, and this part of the job is the most time consuming for those serving on the Supreme Court. In cases receiving full consideration, the Court provides both its decision (the actual vote of the justices) and opinion (the written explanation that guides lower courts on the issue). Four types of opinions can be issued: majority, plurality, concurring, and dissenting. A majority opinion is written when five or more justices are in agreement on the legal basis of the decision. If the Chief Justice is part of the majority, then he decides who will author the opinion. If not, then the justice with the most seniority within the majority will decide (and either the Chief Justice or the most senior justice can select themselves to write the opinion). A plurality opinion is handed down when a majority of justices agree on a decision but do not agree on the legal basis, so the position held by most of the justices on the winning side is called the plurality. A concurring opinion is written when a justice votes with the majority but disagrees with the legal rationale behind it and therefore writes his or
her own opinion. A dissenting opinion allows a justice to explain the reasoning for disagreeing with those in the majority. In many cases throughout the Court's history, a dissenting opinion in one case has later become the basis for the majority opinion in a subsequent case. For example, Associate Justice John Marshall Harlan provided the lone dissent in the 1896 ruling in Plessy v. Ferguson, arguing against the majority that believed that the doctrine of "separate but equal" was constitutional in regard to race. In his dissenting opinion, Harlan wrote, "Our Constitution is color-blind, and neither knows nor tolerates classes among citizens. In respect of civil rights, all citizens are equal before the law." This dissent would provide the legal rationale nearly 60 years later in the 1954 Brown v. Board of Education case that declared legally segregated public schools (that is, separate but equal) unconstitutional. While the justices explain their decision within the written opinion that will be handed down later in the term, the actual behind-the-scenes process of how justices decide a case remains for the most part a mystery to the public. The Court is certainly a legal branch in that its procedures are based first and foremost on the law and judges within the legal system are expected to follow precedent and apply the law. However, it is unrealistic to expect the justices to be completely objective and immune from the political process in deciding cases. Therefore, it is important to note that the Supreme Court is both a legal and a political branch, since the justices can never be completely isolated from other political actors. Similarly, the opinions of the Court are not self-executing. Justices are often mindful when writing their opinions that they need the compliance of many other political actors at the federal, state, and/or local level to implement their rulings. The Supreme Court also follows strict procedures regarding what are considered "justiciable controversies." Litigants must be real and adverse and must not represent a hypothetical legal situation. The Court also avoids providing advisory opinions on issues not raised in the case at hand; this tradition dates back to the earliest years of the Supreme Court under the first Chief Justice, John Jay. Justices have, however, often relied in their opinions on what is known
as dicta, which are statements of personal opinion or philosophy that are not necessary to the decision. This practice is viewed as a justice's opportunity to give legal advice either to the lawyers in the case or to the policy makers who created the law (like members of Congress or the president). The decisions of the justices are announced in the Supreme Court chamber. In most cases, only the outcomes of the cases are announced, although in some controversial cases, part or even all of the opinion is read aloud by one of the justices. The opinions of the Supreme Court are published officially in a set of case books called the U.S. Reports. The U.S. Reports are compiled and published for the Court by the Reporter of Decisions. Page proofs prepared by the Court's Publications Unit are reproduced, printed, and bound by private firms under contract with the U.S. Government Printing Office. The Supreme Court's opinions and related materials are disseminated to the public through four printed publications and two computerized services. Prior to the publication of the U.S. Reports, the Court's official decisions appear in three temporary printed forms: bench opinions (which include printed copies of the opinions made available to the public and the press through the Court's Public Information Office on the day the opinion is announced, as well as electronic versions available on the Court's Web page and through the Court's subscriber service); slip opinions (a corrected version of the bench opinion that is posted on the official Web site of the Supreme Court at www.supremecourtus.gov); and preliminary prints (which are brown, soft-cover "advance pamphlets" that contain, in addition to the opinions themselves, all of the announcements, tables, indexes, and other features that will be included in the U.S. Reports). See also judicial review. Further Reading Baum, Lawrence. The Supreme Court. 8th ed. Washington, D.C.: Congressional Quarterly Press, 2004; Fallon, Richard H., Jr. The Dynamic Constitution: An Introduction to American Constitutional Law. New York: Cambridge University Press, 2005; McCloskey, Robert G., and Sanford Levinson. The American Supreme Court. 4th ed. Chicago: University of Chicago Press, 2004; O'Brien, David M. Constitutional Law and Politics, Vol. 1, Struggles for Power and
Governmental Accountability. New York: W.W. Norton, 2005; O’Brien, David M. Storm Center: The Supreme Court in American Politics. 7th ed. New York: W.W. Norton, 2005. —Lori Cox Han
plea bargaining Plea bargaining, a common practice in trial courts, is the process by which a defendant in a criminal case pleads guilty to a crime in return for a lesser sentence or some other consideration from the court. Usually, a plea bargain will result in a lesser sentence than the defendant would have received if convicted after trial. There are three types of plea bargains. In a "charge" bargain, the defendant pleads guilty to a lesser charge than the one originally filed. For example, a defendant might plead guilty to assault instead of aggravated assault with a firearm. The end result would be a lesser sentence for the defendant. The second type of plea bargain is a "count" bargain, in which the defendant pleads guilty to one count or charge and the prosecutor drops the remaining charges. This, too, results in a lesser sentence. The third and most common form of plea bargaining is a "sentence" bargain, in which the defendant pleads guilty with the promise of a specific sentence in return. For example, a habitual offender might face a prison sentence of 64 years for possession of 500 grams of cocaine if he or she goes to trial, but if he or she pleads guilty, a sentence of 24 years with the possibility of parole after serving only 16 years will be imposed. A long-standing practice in U.S. courts, plea bargaining has existed since at least the late 19th century. Plea-bargaining practices vary widely, but it is common for as many as 95 percent of criminal cases to be resolved by a plea bargain, and rates differ from jurisdiction to jurisdiction. There is some disagreement as to exactly why plea bargaining is so prevalent, but each participant in the courtroom work group—prosecutors, defense attorneys, and judges as well as defendants—gains from the reduced time required to settle the case and the mutually agreed-on outcome. Plea bargaining is often viewed critically by the public, which sees it as a practice that lets criminals off easy because they are not receiving the sentence
that they should. It is politically popular to propose banning plea bargaining. Indeed, in the past, states such as Michigan have instituted plea bargain bans for certain types of crimes. The state of Alaska went even further, issuing a statewide ban on plea bargaining. Attempts to limit plea bargaining have usually proven difficult to achieve, however, because of the internal dynamics in trial courts. Plea bargaining is viewed very differently by the participants in the trial courts—the judges, the prosecutors, and the defense attorneys—than it is by the public. Within the trial court, prosecutors, judges, and defense attorneys spend their days in a frenetically paced environment of processing one case after another. After working in the criminal courts for even a few months, prosecutors and defense attorneys come to the conclusion that most of the defendants with whom they are working are, in fact, guilty. As a result, it quickly becomes apparent to participants that it is a substantial waste of limited resources to provide a full trial for the vast majority of cases. It is a much better use of the court's time to reserve trials for those cases that raise either substantial legal questions or honest questions of doubt regarding the facts. It makes little sense to try a "dead-bang" case, where the prosecution and defense both know that the only result of the trial will be a conviction. Working together on a daily basis causes the courtroom work-group members to develop shared norms—or what is often referred to as a "local legal culture"—about how to dispose of particular types of cases. Not only do norms suggest which types of cases should be tried and which should be resolved by a guilty plea, but the participants in the courtroom work group also develop a sense of the "worth" of a case. For example, they often establish "going rates" for a particular crime. While the statute might say a habitual offender should receive a 60-year sentence, the work group members often have a sense that a 24-year sentence is a just and appropriate sentence for a particular type of case. Thus, as long as the plea bargain comes close to the "going rate," the members of the courtroom work group believe that justice has been served and that the court has been spared the time and expense of an unnecessary trial. In this sense, the defendant is not "getting off easy"; he or she is "getting what he or she deserves."
Defendants may have a very different view of plea bargaining from that of the courtroom work group. A defendant is faced with a dilemma: whether to take a plea bargain or to exercise the constitutional right to a trial. Defendants who take the "bargain" will face a lesser sentence than they would if convicted at trial. In some courts, this is referred to explicitly as the "trial penalty." Some judges have even been reputed to say, "If you take some of my time, I'll take some of yours." Yet this is not the only possible outcome. A defendant who goes to trial and is acquitted goes free. Thus, the defendant has to evaluate whether the risk of conviction outweighs the chance of going free. The defendant's decision to accept a plea bargain or to take a chance at trial is complicated by the fact that many defendants have public defenders for counsel, who are normally plagued with heavy caseloads and have limited time to prepare for any one client. As a result, public defenders may pressure defendants to take the plea: "This is a good offer—if we go to trial, we are looking at a much longer sentence, and the judge is not likely to show any mercy." The first-time offender's experience with plea bargaining is often very different from that of the repeat offender. The first-time offender probably has a view of the justice system as one where he or she is innocent until proven guilty—and that guilt is proven through a trial. The first-time offender probably does not understand the workings of the court and may be more reluctant to take a guilty plea. The repeat offender probably has a more jaded view of the system, knowing that most cases are resolved by a guilty plea and having little faith in the public defender's office. The repeat offender will try to get the best "deal" he or she can, depending on the strength of the evidence against him or her. There are consequences of taking a guilty plea of which many defendants may not be fully aware. First, when the defendant takes a plea, he or she is admitting guilt and eliminating any possibility of appeal. Only a defendant who is convicted at trial has a right to appeal. Second, in many cases, the defendant might be told, "Take the plea bargain and you can go home today—with no jail sentence, just probation." Yet most first-time offenders do not realize that by taking that option, they are incurring monthly probation costs that, if they fail to pay, can result in
extending the term of their probation. For the poor defendant, this can be a recipe for bankruptcy. Critics of plea bargaining argue that the justice system is more concerned with quickly processing cases than with providing justice. The most commonly heard reason for plea bargaining is to save time and to prevent the court system from collapsing on itself if every case were brought to trial. Defenders of the system often fail to articulate that guilty pleas with lesser sentences are appropriate responses to cases where the defendant is willing to admit guilt and to forgo the expense of a trial. Critics also argue that the prevalence of plea bargaining creates a very real risk that innocent defendants will take a guilty plea instead of taking their chances at trial. For example, the innocent defendant could rationalize that by taking a guilty plea, the process will be over. He or she will be able to go home, serve a term of probation, and go back to his or her life. If the case is brought to trial, the risk of a jail or prison term is very real. It might be easier to take the plea than to face possibly several years of incarceration. Such a scenario is a terrifying prospect for critics of plea bargaining, who believe that the very notion of a fair and equitable system of justice depends on avoiding such an outcome. Yet there is little research on the prevalence of innocent defendants pleading guilty. Further Reading Carns, Theresa White, and John A. Kruse. "Alaska's Ban on Plea Bargaining Reevaluated." Judicature 75 (April–May 1992): 310–317; Casper, Jonathan D. American Criminal Justice: The Defendant's Perspective. New York: Prentice Hall, 1972; Eisenstein, James, and Herbert Jacob. Felony Justice. Boston: Little, Brown, 1977; Eisenstein, James, Roy B. Flemming, and Peter F. Nardulli. The Contours of Justice: Communities and Their Courts. Boston: Little, Brown, 1988; Feeley, Malcolm. The Process Is the Punishment. New York: Russell Sage Foundation, 1979; Friedman, Lawrence M. "Plea Bargaining in Historical Perspective." Law and Society Review 13 (1979): 247–259; Heumann, Milton. Plea Bargaining: The Experiences of Prosecutors, Judges, and Defense Attorneys. Chicago: University of Chicago Press, 1978; Heumann, Milton, and Colin Loftin. "Mandatory Sentencing and the Abolition of Plea
Bargaining: The Michigan Felony Firearm Statute." Law and Society Review 13 (1979): 393–430; Mather, Lynn. Plea Bargaining or Trial? Lexington, Mass.: Lexington Books, 1979. —Michael C. Gizzi
political questions doctrine The political questions doctrine is a judicially created rule by which courts decline to exercise jurisdiction over a legal claim because its resolution requires a discretionary judgment that is best left to the elected representatives of the people rather than an application of law. As Chief Justice John Marshall explained in Marbury v. Madison (1803), "[Q]uestions, in their nature political, or which are, by the constitution and laws, submitted to the executive, can never be made in this court." The origins of the rule trace back at least as far as the British case of The Nabob of the Carnatic v. East India Company (1791 and 1793), where it was held that "for their political acts states were not amenable to tribunals of justice." In the early case of Luther v. Borden (1849), Chief Justice Roger B. Taney invoked the principle, holding that a dispute over which was the true government of a state was exclusively a question for Congress under the terms of the republican-form-of-government clause of Article IV and that its "decision is binding on every other department of the government, and could not be questioned in a judicial tribunal." The modern political questions doctrine is articulated in Baker v. Carr (1962), where the U.S. Supreme Court described it as a function of the constitutional separation of powers and practical constraints on the exercise of judicial power. The Court held that a political question existed when any of six conditions were "prominent on the surface of [a] case": "[1] a textually demonstrable constitutional commitment of the issue to a coordinate political department; or [2] a lack of judicially discoverable and manageable standards for resolving it; or [3] the impossibility of deciding without an initial policy determination of a kind clearly for nonjudicial discretion; or [4] the impossibility of a court's undertaking independent resolution without expressing lack of the respect due coordinate branches of government; or [5] an unusual need for unquestioning adherence to a
political decision already made; or [6] the potentiality of embarrassment from multifarious pronouncements by various departments on one question." Legal scholar Alexander M. Bickel famously argued that the doctrine was inherently "flexible" and that it reflects "the Court's sense of lack of capacity" and "the anxiety, not so much that judicial judgment will be ignored, as that perhaps it should but will not be." In Baker v. Carr itself, the Supreme Court conceded that the doctrine emerged over time out of "attributes which, in various settings, diverge, combine, appear, and disappear in seeming disorderliness." Indeed, the label political questions doctrine is somewhat misleading. First, as scholars and courts often note, the judiciary deals with political issues all the time—not everything that involves political issues is a nonjusticiable political question. Second, there is no single doctrine but rather a variety of constitutional, institutional, and prudential considerations that the courts cite when deciding whether to decide a case. In the earliest cases in which the doctrine appears, a political question is treated as a matter of constitutional text and history, while the later cases come to rely increasingly on prudential considerations to counsel judicial abstention in defense of the legitimacy of courts. Such doctrinal evolution is part of why the political questions doctrine is a source of great controversy. Constitutional scholar David M. O'Brien has observed that "the doctrine's logic is circular. 'Political questions are matters not soluble by the judicial process; matters not soluble by the judicial process are political questions.' " Others, such as noted international law scholar Louis Henkin, have gone so far as to wonder whether the doctrine exists at all, and legal scholar and lawyer Erwin Chemerinsky speaks for many who think that the doctrine should not exist, complaining that it is "inconsistent with the most fundamental purpose of the Constitution: safeguarding matters from majority rule." Thus, in Powell v. McCormack (1969), the Supreme Court held that there was no political question bar to its consideration of a challenge to the exclusion of a member-elect by the U.S. House of Representatives, because there was no exclusive commitment of the question of a member's qualifications to the Congress itself. However, in Nixon v. United States (1993), the Court found a political question because there was a
textually demonstrable commitment of the issue of impeachment to the Senate and also a lack of judicially discoverable and manageable standards for resolving a claim brought by an impeached federal judge against the Senate's process. The political questions doctrine also has been invoked in cases involving the president's power to abrogate treaties, the commander-in-chief power to commit troops, the date at which a state of war exists, congressional authority over immigration and the regulation of aliens, whether or not to claim jurisdiction over disputed territory, and the power to recognize the authority of foreign-treaty negotiators. As this list indicates, foreign affairs and national security are matters often treated within the scope of political questions. Thomas Franck argues that the origins of the political questions doctrine are to be found in the concession by the early "fragile federal judiciary" that it could not reach into foreign affairs, the province of the more powerful political branches. The Baker Court pointed to the special context of foreign relations, where disputes "frequently turn on standards that defy judicial application, or involve the exercise of a discretion demonstrably committed to the executive or legislature; but many such questions uniquely demand single-voiced statement of the Government's views." In Oetjen v. Central Leather Co. (1918), the Court presaged the first Baker element, holding that "[t]he conduct of foreign relations of our Government is committed by the Constitution to the Executive and Legislative—'the political'—departments of the Government, and the propriety of what may be done in the exercise of this political power is not subject to judicial inquiry or decision." However, cases arising under international law are within the power of the judiciary, and under the Alien Tort Claims Act and other federal statutes, courts are authorized by the Congress to hear certain actions that implicate the law of nations. Thus, the courts do not invoke the political questions doctrine routinely or lightly to refuse to hear a case over which they are otherwise satisfied with their jurisdiction. For example, the Supreme Court rejected an executive branch bid to invoke the political questions doctrine in a complaint against an executive agency decision under a statute authorizing the
agency to certify that Japanese whaling practices undermined the effectiveness of international treaties. The Court reasoned that it had the responsibility to interpret treaties and statutes—and the intersecting obligations of the two—even where that implicated foreign relations. "[U]nder the Constitution," wrote the Court in Japan Whaling Assn. v. American Cetacean Society (1986), "one of the Judiciary's characteristic roles is to interpret statutes, and we cannot shirk this responsibility merely because our decision may have significant political overtones." In recent years, scholars have come to wonder whether cases such as Japan Whaling might not represent a practical repudiation of the political questions doctrine by the contemporary courts. Baker v. Carr has itself been labeled the "beginning of the end" of the political questions doctrine, given that its very precise definition of political questions was offered in the service of a case that found no political question in the previously contentious debate over the justiciability of apportionment and electoral districts. Most scholars and practitioners believe that the political questions doctrine is largely out of favor in the Supreme Court, even with respect to foreign-affairs controversies, though it still makes an occasional appearance in lower courts in cases involving foreign relations. Further Reading Barkow, Rachel E. "More Supreme than Court? The Fall of the Political Question Doctrine and the Rise of Judicial Supremacy." Columbia Law Review 102 (2002): 237; Bickel, Alexander M. The Least Dangerous Branch: The Supreme Court at the Bar of Politics. 2nd ed. New Haven, Conn.: Yale University Press, 1986; Chemerinsky, Erwin. Interpreting the Constitution. New York: Praeger, 1987; Fisher, Louis, and Nada Mourtada-Sabbah. Is War a Political Question? CRS Report RL 30687. Washington, D.C.: Congressional Research Service, 2000; Franck, Thomas M. Political Questions/Judicial Answers: Does the Rule of Law Apply to Foreign Affairs? Princeton, N.J.: Princeton University Press, 1992; Henkin, Louis. "Is There a Political Question Doctrine?" Yale Law Journal 85 (1976): 597; Mourtada-Sabbah, Nada, and Bruce E. Cain, eds. The Political Question Doctrine and the Supreme Court of the United States. Lanham, Md.: Lexington Books/Rowman & Littlefield,
2007; O'Brien, David M. Storm Center: The Supreme Court in American Politics. 7th ed. New York: W.W. Norton, 2005. —Ronald L. Steiner
precedent Precedent is law that is created by judges when they interpret the U.S. Constitution or a statute or when they apply legal principles to the facts of a case. When judges make a ruling in a case, they will frequently write an opinion that explains the basis for their decision. Opinions that answer new or important legal questions are published in bound volumes called law reporters, which are printed by the federal or state governments. These published opinions are known as precedent or case-law and provide guidance to later judges who must rule on similar legal issues or in cases with similar facts. Precedent can be either "binding" or "persuasive." Binding precedent is case-law that judges are required to follow. The principle of binding precedent is also known as stare decisis—a Latin phrase meaning "to stand by things decided." Persuasive precedent is case-law that is influential but that judges are not obligated to follow. Whether precedent is binding or merely persuasive depends primarily on whether the court that is dealing with the legal issue the second time is within the jurisdiction of the court that created the precedent. In the federal court system, for example, U.S. Supreme Court decisions are binding precedent on the Supreme Court itself and on all other federal courts; decisions of a circuit court of appeals are binding on the circuit that created the precedent and on the federal district courts within that circuit. However, case-law from one circuit court of appeals would only be persuasive precedent in a different circuit and in the district courts within that sister circuit. The idea that judges should be bound by prior decisions first began to develop in England in the late 1700s and grew out of the principles of natural-law theory. Natural-law theorists believe that the law exists independently of human constructs such as politics and societal norms. Accordingly, the application and interpretation of the law should not be susceptible to the biases and personal opinions of the individual judge who hears a case. Because stare decisis
binds judges, it limits their ability to shape the law to their personal beliefs. The framers of the U.S. Constitution were heavily influenced by natural-law theory and by the English legal tradition and adopted the concept of stare decisis when creating the judicial branch of the federal government. Alexander Hamilton discussed the importance of precedent in his Federalist 78—one of a series of articles published in 1787 and 1788 that are known collectively as the Federalist—encouraging ratification of the Constitution. Hamilton wrote: "To avoid an arbitrary discretion in the courts, it is indispensable that [judges] should be bound down by strict rules and precedents. . . ." Limiting arbitrariness and insulating the law from the biases and personal opinions of individual judges gives legitimacy to both the laws and the courts. Laws are, in essence, a set of rules by which citizens must live. For the rule of law to be legitimate, the laws must be consistent, must be predictable, and must be applied equally to all citizens. If precedent did not exist, the meaning of laws would be susceptible to constant change and reversal depending on the particular judge deciding the case. As the late Supreme Court Justice Lewis Powell explained, without stare decisis, "the Constitution is nothing more than what five Justices say it is." Precedent gives legitimacy to the law by providing consistent, predictable application. Precedent also gives legitimacy to the courts because it prevents judges from using their position and authority to promote personal or political agendas. In addition to providing legitimacy to the laws and the judicial system, stare decisis also serves the practical purpose of efficiency because judges do not have to redecide every legal dispute that comes before them. For example, in 1803, the Supreme Court decided the case of Marbury v. Madison and held that the Constitution gives the federal courts the authority to exercise judicial review. Without precedent, each time someone brought a lawsuit alleging that a federal law was unconstitutional, the courts would have to reconsider the issue of whether they have the authority to hear the case. However, because judges are human and fallible, they sometimes interpret the law incorrectly. Tension arises between these different benefits of stare decisis when courts must deal with wrongly decided
precedent. On the one hand, stability would be undermined if judges constantly reversed precedent to correct perceived mistakes. The legitimacy of the courts would also be compromised if precedent is easily or frequently reversed, because the law then becomes no more than what the particular judge who hears a case says it is. On the other hand, stare decisis is based on the theory that the law exists independently of judges; therefore, judge-made law should not be allowed to trump the true meaning of the law. The legitimacy of the rule of law would also be compromised if judges were bound permanently by precedent that is unjust. Generally, the authority to reverse prior precedent resides with the highest court in the judicial system. Most lower-court judges will not disturb the decision of a higher court, even if a judge believes that the precedent was wrongly decided. The Supreme Court's decision in Brown v. Board of Education of Topeka (1954) offers an example of how courts have dealt with wrongly decided precedent in the past. In 1896, the Supreme Court decided Plessy v. Ferguson and held that state laws permitting segregation of public facilities—in that case, railroad cars—were permissible under the Constitution so long as the separate facilities were of equal quality. This became known as the doctrine of "separate but equal." As a result of the Court's ruling in Plessy, states were able to enact laws permitting segregated school systems. When Linda Brown—a black third-grade student in Topeka, Kansas—was denied admission to a white public elementary school, her father brought a lawsuit in federal district court on her behalf, arguing that segregation in schools violated the equal protection clause of the Fourteenth Amendment to the Constitution. The district court found that segregated public education did have a detrimental effect on black students. However, the court also concluded that the segregated school facilities were substantially equal. As a result, the district court held that it was bound by Plessy and the doctrine of separate but equal and ruled in favor of the school district. The case was appealed to the Supreme Court, and the nine justices, led by Chief Justice Earl Warren, unanimously decided to reverse Plessy. The Court held that segregation in public schools is inherently unequal and therefore unconstitutional. The
decision was based on the fact that, even though the physical school facilities may have been equal, segregation created a feeling of inferiority among the black students, which hindered their educational development. The Court went on to write that this conclusion was supported by psychological research that was not known in 1896 when the Court decided Plessy. It is important to note, however, that Brown did not end segregation completely. Rather, the decision only reversed Plessy in cases of public education. The Supreme Court's approach in Brown reflects the one still taken by the Court today when dealing with the question of whether to reverse precedent. Generally, the Court will not reverse prior case-law that the justices may consider to have been decided wrongly unless there is new information or experience that offers evidence on which to base reversal. In the case of Brown, that new information was scientific research that showed the harmful psychological effects of segregation. Reversal is not the only tool that courts use to deal with wrongly decided precedent. The courts can also limit the binding power of precedent by "distinguishing" the prior case-law. When courts distinguish precedent, they are saying that the facts of the precedential case are so different from the facts of the case being decided that the holding of the prior case does not apply. In this way, the courts can free themselves from the binding effect of the precedent without disturbing the stability that stare decisis provides. Courts are not the only entities with the power to reverse precedent; legislatures can reverse case-law that interprets a statute incorrectly by passing a new law. Precedent interpreting the Constitution can also be reversed by constitutional amendment. The question of what to do about wrongly decided precedent continues to cause conflict between judges and legal scholars who subscribe to two competing theories of constitutional interpretation. On one side of the debate are "originalists" who view the Constitution like a contract and believe that its text should be interpreted according to the meaning that the words had at the time the Constitution was ratified in 1788. As a result, the meaning of the Constitution does not change over time. This, in the view of the originalists, preserves the integrity of the Constitution by protecting it from constant reshaping by judges.
An opposing approach to constitutional interpretation is the theory of the living Constitution. Under this theory, the meaning of the Constitution's words evolves over time along with U.S. cultural, moral, and societal norms. In this view, if the Constitution does not evolve, it will become an outdated document that has little relevance to modern life. The two theories frequently come into conflict when the courts deal with legal disputes over the existence of rights and liberties that are not enumerated specifically in the Constitution. Originalists are more willing to reverse decisions that they believe create new rights that are not found within the text of the Constitution. Those who believe in an evolving Constitution are more likely to adhere to stare decisis when the precedent has become integrated into existing societal norms. This conflict is very visible in cases involving the precedent set out in Roe v. Wade (1973), which holds that women have a constitutionally protected right to have an abortion. Supporters of Roe argue that the Constitution can be interpreted to create a generalized right of privacy from which flow reproductive rights, including the right to an abortion. Critics of Roe argue that the opinion does not have a basis in the text of the Constitution, which does not mention the right to an abortion, reproductive rights, or a general right to privacy. The question of whether Roe should or can be reversed has become a key area of focus during the confirmation process of more recent Supreme Court nominees. This occurred most notably during the 1987 Senate confirmation hearings of Robert Bork. Bork, an originalist, faced vigorous opposition from politicians and activist groups who believed that he would vote to reverse Roe if he were on the Supreme Court, and his nomination ultimately was rejected by the Senate. The issue of Roe's precedential weight arose in a new form during the 2005 confirmation hearings of Chief Justice John Roberts and the 2006 confirmation hearings of Associate Justice Samuel Alito, Jr., when both nominees were asked whether they believed in the idea of "superprecedent." Also referred to as super–stare decisis or bedrock precedent, superprecedent is case-law that is considered to be integrated so firmly into the law that it is irreversible. The theory was first presented in an opinion written by Judge J. Michael Luttig of the U.S. Court of Appeals for the Fourth Circuit in the case of Richmond Medical Center for Women v. Gilmore (2000). The case involved a suit to prevent the enforcement of a Virginia state law banning certain types of abortion procedures. Judge Luttig wrote that the holding in Roe has become "super–stare decisis with respect to a woman's fundamental right to choose whether or not to proceed with a pregnancy." He based his opinion on the fact that the Supreme Court had consistently reaffirmed Roe in its decisions in Planned Parenthood of Southeastern Pennsylvania v. Casey (1992) and in Stenberg v. Carhart (2000). Although the theory of superprecedent originated out of the Roe debate, the concept has expanded beyond the issue of reproductive rights. Many legal scholars consider such cases as Marbury v. Madison, discussed above, and Miranda v. Arizona (1966), which requires criminal suspects to be informed of their constitutional right against self-incrimination and right to an attorney, to be superprecedent. Because the concept is new, there is no clear formula yet for how precedent becomes elevated to the level of superprecedent. Some scholars have suggested that precedent becomes "super" when it is repeatedly reaffirmed over time by Supreme Courts composed of different justices. Other scholars suggest that precedent can become irreversible when the holding has become so deeply embedded in U.S. culture that reversal would cause great societal disruption, thereby undermining the very stability of the legal system itself. Critics of super–stare decisis argue that a wrongly decided precedent should not be adhered to merely because prior courts have reached the same wrong conclusion or because change would be disruptive. As they note, at the time the Supreme Court decided Brown v. Board of Education, Plessy v. Ferguson could have been considered superprecedent because it had been consistently reaffirmed by different Supreme Courts, and desegregation would—and did—cause serious societal disruption. There will always be a tension between the rule of precedent and the rule of law, and the courts must constantly strike a balance between rejecting precedent too freely and adhering to a precedent
unyieldingly. Moderation is essential when dealing with stare decisis because either extreme undermines the legitimacy of the courts and the rule of law. See also opinions, U.S. Supreme Court. Further Reading Farber, Daniel A. "The Rule of Law and the Law of Precedents." Minnesota Law Review 90 (2006): 1173; Lee, Thomas R. "Stare Decisis in Historical Perspective: From the Founding Era to the Rehnquist Court." Vanderbilt Law Review 52 (1999): 647; Powell, Lewis F., Jr. "Stare Decisis and Judicial Restraint." Washington & Lee Law Review 47 (1990): 281; Scalia, Antonin. "The Rule of Law as a Law of Rules." University of Chicago Law Review 56 (1989): 1175. —Denis M. Delja
tribunals, military In the aftermath of the September 11, 2001, terrorist attacks on the United States, President George W. Bush, among other things, established procedures for the creation of military tribunals that were intended to place on trial enemy combatants and prisoners of war who were captured in the war against terrorism. In authorizing the creation of these military tribunals, President Bush was following in the footsteps of several of his predecessors, but there were new twists to the president's efforts that complicated matters for the constitutional relationship between the presidency, Congress, and the federal courts as well. In creating these military tribunals, the executive branch neither consulted with nor received authorization from Congress, nor did it seek the assistance of the Armed Services Committees or the Judiciary Committees of either house of Congress. Nor did it seek the input of the judge advocate general offices of the military services. The decision to create these military tribunals was made by the executive alone, and this raised several questions regarding the legitimacy as well as the legality of these tribunals. President Bush's creation of the military tribunals was modeled after the 1942 tribunals established by President Franklin D. Roosevelt, whose legality the U.S. Supreme Court upheld in Ex parte Quirin (1942). President Bush presumed, relying on various legal opinions,
that his military commissions or tribunals would also pass legal muster. However, that was not quite the case. Since the end of the First World War, executive-branch officials have been moving away from statutory authorizations in the creation of military courts, commissions, and tribunals and have added civilian control to these military courts. By going around the Congress, the executive branch skirts the separation of powers and checks and balances and claims independent authority over military law and the dispensation of justice. Further, the standards of justice often are weakened, as military tribunals do not have the same high standards of proof or rules of evidence as do civil courts. Thus, questions of legitimacy, legality, civil liberties, and the abuse of power are raised in the establishment of executive-dominated military tribunals. While the U.S. Constitution seems clearly to grant Congress the power to create and set rules for military tribunals, in the modern era, this function has been subsumed under the presidency, and presidents, relying on claims that the commander-in-chief clause of the Constitution empowers presidents to control virtually all aspects of war, have wrested this authority from Congress, which is often all too willing to cede or delegate this authority to the executive. Thus, a combination of presidential capaciousness and legislative acquiescence has led to the executive controlling most aspects of military law and the creation of rules governing military tribunals. While military tribunals became a source of much controversy in the aftermath of the September 11, 2001, attacks against the United States, it should be remembered that similar tribunals have a deep and rich history in the U.S. past. For example, similar tribunals were used often during the Civil War. During this time, President Abraham Lincoln, on his own claimed authority, suspended the writ of habeas corpus and authorized the imposition of military law in several regions of the country. He did this without the authorization of the Congress, but Congress did eventually give legislative approval of these measures—after they had already been implemented. In April 1861, while Congress was in recess, President Lincoln issued a series of emergency proclamations authorizing the calling up of the state militias, an increase in the size of the regular military,
and the suspension of habeas corpus. When Congress came into special session in July of that year, it approved Lincoln's actions, thereby giving legislative sanction to the president's already-taken emergency measures. Some of Lincoln's actions were challenged in court, and in rulings handed down both during and after the war, federal judges found several of the president's actions to be unconstitutional. The case most relevant to the issue of military tribunals is Ex parte Milligan (1866), in which the Supreme Court, deciding the case after the Civil War was over, found the president's creation and use of military tribunals illegal because the cases should have gone through the civil courts that were operating at the time. In 1864, Lambdin P. Milligan was arrested by military authorities on charges of conspiracy. Milligan was found guilty by a military tribunal and sentenced to be hanged. Understandably concerned with this decision, Milligan appealed, with his case eventually reaching the Supreme Court. By the time Milligan's case came before the Court, the war was over. The Court decided that as long as civil courts were open and operating, the military tribunals were unjustified. If the decision in Ex parte Milligan (1866) seemed to put an end to the use of military tribunals, that simply was not how things turned out. Military tribunals continued to be used in the South under Reconstruction while the region was still under martial law. Related to the Milligan case was Ex parte Merryman (1861), which was decided by Chief Justice Roger B. Taney while sitting as a circuit judge. John Merryman, a secessionist, was arrested by military authorities and imprisoned. He was refused a writ of habeas corpus and sued the government. Chief Justice Taney ordered the military commander to bring Merryman to his court, but the commander refused. Rather than attempting to serve the commander (his agents were refused entry into the prison), Taney chose to write an opinion in which he rejected the government's claim and argued that the military officer had no right to arrest and detain a U.S. citizen independently of a judicial authority. President Lincoln merely ignored Taney's decision. The next major controversy involving military tribunals occurred during the Second World War when eight Nazi saboteurs were arrested on U.S. soil. President Franklin D. Roosevelt issued Proclamation
2561, setting up a military tribunal and denying the eight men access to U.S. civil courts. The burden of proof in the tribunal was far lower than in a civil court, and the creation of the tribunals was challenged in court. Eventually, the Supreme Court took on the case and decided in Ex parte Quirin (1942) that the tribunals were legal. This was not the only military tribunal in World War II, and as legal scholar Louis Fisher points out in his book, Military Tribunals and Presidential Power (2005), the Court was often willing to cede to the president the authority to control military law and justice. Military tribunals again became controversial during the war against terrorism. With the U.S. response to the September 11, 2001, attacks in the form of war against the Taliban government in Afghanistan well under way, President George W. Bush ordered the creation of military tribunals to be used to put on trial prisoners, both military combatants and enemy noncombatants, under the control of the United States. Bush's attorney general, John Ashcroft, claimed that the president had full authority under his commander-in-chief powers to establish such tribunals and that "Congress has recognized this authority, and the Supreme Court has never held that any Congress may limit it." In reality, Congress has never recognized a unilateral presidential authority to create such tribunals, and the Court has often held that Congress has the authority to create these tribunals. It should not surprise us that several challenges to the Bush administration's claims made their way through the federal court system and that the Supreme Court was finally called on to settle these legal and political disputes. Several cases relating to the war against terrorism and the creation of executive-based military justice have recently reached the Supreme Court. In Hamdi v. Rumsfeld (2004), the Court, in an 8-1 vote, held that the detention of Yaser Esam Hamdi (arrested in Afghanistan) without a trial and without access to civil courts and their protections was not an exclusively presidential decision and that the courts did have the power to determine the status of enemy combatants. In this case, Associate Justice Sandra Day O'Connor wrote that "a state of war is not a blank check for the President." Deciding that an enemy combatant must receive notice of the factual basis for arrest, the Court rejected the executive's claim of independent and
nonreviewable authority over the status of enemy combatants. In a more direct challenge to the creation of military tribunals, Hamdan v. Rumsfeld (2006), the Supreme Court held that the executive-established military tribunals were not constitutional. The president was forced to go back to Congress for legal authorization of the tribunals, which he received. This reestablished the role of Congress in the creation of military tribunals and struck a blow against the unilateral power of the presidency, even in times of war. As one can see, the federal court system has played both sides of the issue at different times, but the overwhelming weight of the judicial evidence is that the Supreme Court has most often determined that it is Congress, based on its Article I authority, that has the right and the power to create military tribunals. The president may not, the Court has usually held, create such tribunals on his own exclusive claimed authority, and the tribunals must be under the control of the Congress. Also, while Louis Fisher would recognize that “Judicial rulings during World War II provided disturbing evidence of a Court in the midst of war forfeiting its role as the guardian of constitutional rights,” he further recognized that the Court also has a history of standing up to presidents even in times of war and defending constitutional principles and rule of law. In the fog of war, courts, like citizens, may sometimes lose their way, but in calmer times, the Court has been quite willing to stand up to pretensions of presidential power that are not based in law or the constitution. Military tribunals constitutionally may only be created by Congress, and the president, even in the capacity of commander in chief, may not unilaterally and independent of the authorization of Congress create such tribunals (World War II cases notwithstanding). The rule of law was created for calm as well as for stormy seasons, and regardless of claims by the executive, it is Congress that is authorized to create military tribunals. In the heat of war, constitutional corners are often cut, but claims of executive authority in war and in peace must face challenges from the courts, and in the case of military tribunals, the courts have been fairly consistent, though not absolute, in siding with the Congress in the struggle for power that so often characterizes the separation of powers in the United States.
Further Reading Detter Delupis, Ingrid. The Law of War. New York: Cambridge University Press, 2000; Fisher, Louis. Military Tribunals and Presidential Power. Lawrence: University Press of Kansas, 2005; ———. Nazi Saboteurs on Trial: A Military Tribunal and American Law. Lawrence: University Press of Kansas, 2003; ———. Presidential War Power. Lawrence: University Press of Kansas, 2004. —Michael A. Genovese
U.S. Court of Appeals for the Armed Forces The U.S. Court of Appeals for the Armed Forces exercises worldwide appellate jurisdiction over active-duty military personnel of the armed forces of the United States (U.S. Air Force, U.S. Army, U.S. Coast Guard, U.S. Marine Corps, and U.S. Navy) and other persons who are subject to the Uniform Code of Military Justice (Title 10, U.S. Code, Chapter 47). This includes the commissioned corps of the U.S. Public Health Service and the National Oceanic and Atmospheric Administration Commissioned Corps, whose members are subject to the code when they are either militarized by executive order or detailed to any of the armed forces (UCMJ, Article 2(a)(8); 10 U.S.C., section 802). The court is an independent tribunal established by Congress under its power, found in Article I, Section 8 of the U.S. Constitution, "to constitute tribunals inferior to the supreme court." A 1989 U.S. Senate Committee Report described the court's role, stating that it "regularly interprets federal statutes, executive orders, and departmental regulations. The Court also determines the applicability of constitutional provisions to members of the armed forces. Through its decisions, the Court has a significant impact on the state of discipline in the armed forces, military readiness, and the rights of service members. The Court plays an indispensable role in the military justice system." The jurisdiction of the U.S. Court of Appeals for the Armed Forces, which is found in Article 67(a) of the UCMJ, includes four types of cases: cases in which a death sentence is affirmed by a court of criminal appeals (after a court-martial is held, the case is reviewed by the convening authority, the officer who originally referred the case for trial; that officer can then, in
specified cases, refer the matter for review by one of the intermediate courts: the army court of criminal appeals, the navy–marine corps court of criminal appeals, the air force court of criminal appeals, or the coast guard court of criminal appeals); cases reviewed by a court of criminal appeals that a judge advocate general forwards to it for review; cases reviewed by a court of criminal appeals in which, on petition of the accused, the court has granted review; and, in its discretion, the court considers original petitions for extraordinary relief, including, but not limited to, writs of mandamus, writs of prohibition, writs of habeas corpus, and writs of error coram nobis, as well as petitions seeking review of denials of such relief by an inferior court (28 U.S.C., section 1651(a)). The court is composed of five civilian judges appointed by the president of the United States, subject to the advice and consent of the U.S. Senate, for 15-year terms. To ensure that the judges "be appointed from civilian life," Article 142 of the Uniform Code of Military Justice provides that individuals who have retired from the armed services after 20 years or more of active service "shall not be considered to be in civilian life." When hearing cases, all five sit as a single panel. If one of the active judges is unable to participate in a case, the chief judge may recall a retired judge of the court to sit as a senior judge. If a senior judge is not available, the chief judge may request that the Chief Justice of the U.S. Supreme Court designate a judge of the U.S. Court of Appeals or a judge of the U.S. district court to sit with the court. Courts-martial are judicial proceedings conducted by military authorities under 10 U.S.C. sections 801–946. Courts-martial were first authorized by the Continental Congress in June 1775 during the American Revolution for the Continental Army. The Continental Congress established 69 articles of war to govern the Continental Army. On April 10, 1806, Congress enacted 101 articles of war that would apply to both the army and the navy. Until the 20th century, U.S. military justice was based on the articles of war and the articles for the government of the navy. Until 1920, court-martial convictions were reviewed by a commander in the field or the president of the United States (acting in his capacity as commander in chief), depending on the severity of the sentence
or the rank of the military personnel involved. For example, in his book, Don’t Shoot That Boy!: Abraham Lincoln and Military Justice (1999), Thomas P. Lowry found 792 Civil War era courts-martial records in the National Archives that bore a notation or endorsement by President Abraham Lincoln. Lowry found that Lincoln pardoned all 78 soldiers sentenced to death for sleeping at their posts. Following World War I in the Act of June 4, 1920, Congress required the army to establish boards of review, consisting of three lawyers, to review cases where the death penalty, dismissal of an officer, dishonorable discharge, or a prison sentence had been imposed. Other cases would be reviewed by the office of the judge advocate general. The system of military justice would again come under review after World War II. During the war, there were more than 1.7 million courts-martial conducted by U.S. military authorities, many of which were conducted with lawyers acting as either presiding officers or counsel. Studies conducted by military authorities and civilians concluded that there were a number of problems with these proceedings, including the potential for improper command influence. In 1948, Congress, under its authority “To make rules for the government and regulation of the land and naval forces,” reformed the articles of war (Public Law 80–759). One of the more significant changes was the creation of a judicial council of three general officers to consider cases involving sentences of death, life imprisonment, or dismissal of an officer. Cases could also be referred to this council by a board of review or the judge advocate general. James Forrestal, the first secretary of the Department of Defense, appointed Harvard law professor Edmund Morris Morgan to lead the Committee on a Uniform Code of Military Justice to review the existing army and navy justice systems and to recommend a unified system of military justice. The other members of the committee were Gordon Gray, assistant secretary of the army; W. John Kenney, undersecretary of the navy; Eugene M. Zuckert, assistant secretary of the air force, and Felix E. Larkin, an assistant general counsel in the office of the secretary of defense. Morgan’s committee recommended a unified system of military justice that would apply to all branches of the armed services. The committee also recommended that attorneys serve as presiding officers
and counsel and that an independent civilian appellate court be created. The Uniform Code of Military Justice (UCMJ), based largely on the Morgan Committee's recommendations, was enacted by Congress on May 5, 1950, signed into law by President Harry S. Truman, and became effective on May 31, 1951 (Public Law 81–506). The law superseded the articles of war, the articles for the government of the navy, and the disciplinary laws of the coast guard. Article 67 of the UCMJ established the court of military appeals as a three-judge civilian court. The court would be the court of last resort for most cases arising out of the military justice system. In 1968, the court would be redesignated as the U.S. Court of Military Appeals. In 1983, Congress authorized direct appeal to the U.S. Supreme Court of cases decided by the U.S. Court of Military Appeals, with the exception of cases involving denial of a petition for a discretionary review. The court, in U.S. v. Matthews, 16 M.J. 354 (1983), held that the military capital-sentencing procedures were unconstitutional for failing to require a finding of individualized aggravating circumstances. The military's death penalty was reinstated the following year (1984) when President Ronald Reagan signed an executive order establishing detailed rules for capital courts-martial. In 1989, Congress expanded the court's membership from three to five. In 1994, the court was given its present name, the U.S. Court of Appeals for the Armed Forces. In 2004, the court, in U.S. v. Marcum (60 M.J. 198, 2004), refused to strike down the armed-forces ban on consensual sodomy by letting stand the conviction of Air Force Sergeant Eric Marcum. Rather than consider the military's sodomy law (Article 125 of the UCMJ), the court upheld Marcum's conviction on the grounds that his partner was in his chain of command, a violation of military protocol. The current (January 2007) judges of the court are: Chief Judge Andrew S. Effron, James E. Baker, Charles E. Erdmann, Margaret A. Ryan, and Scott W. Stucky. Until 1992, the chief judge of the court was appointed by the president from among the sitting judges for a five-year term. Since that time, the position of chief judge has rotated among the members of the court, based on seniority, to the most senior judge who has not yet served as chief judge.
The court meets primarily in Washington, D.C. Since October 1952, it has been housed at the federal courthouse located at 450 E Street NW. Prior to that time, the courthouse (which was opened in 1910) housed the U.S. Court of Appeals for the District of Columbia Circuit. Through its Project Outreach, the court periodically holds oral arguments at law schools, military bases, and other public facilities. Since 1976, the official decisions of the court have been published in the Military Justice Reporter (M.J.), a publication of West. Before that time, the Court's decisions were published in the Court Martial Reports (C.M.R.), which was published by the Lawyers Co-operative Publishing Company. See also judicial branch. Further Reading Lurie, Jonathan. Military Justice in America: The U.S. Court of Appeals for the Armed Forces, 1775–1980. Lawrence: University Press of Kansas, 2001; ———. Pursuing Military Justice: The History of the U.S. Court of Appeals for the Armed Forces, 1951–1980. Princeton, N.J.: Princeton University Press, 1998; ———. Arming Military Justice, Vol. 1: The Origins of the United States Court of Military Appeals, 1775–1950. Princeton, N.J.: Princeton University Press, 1992; Nufer, Harold F. American Servicemembers Supreme Court: Impact of the U.S. Court of Military Appeals on Military Justice. Washington, D.C.: University Press of America, 1981. —Jeffrey Kraus
U.S. Court of Federal Claims The United States Court of Federal Claims is a court that has jurisdiction over money claims against the federal government based on the U.S. Constitution, federal laws, executive regulations, or contracts. The court also has equitable jurisdiction over bid protests and vaccine compensation. The court is an independent tribunal established by Congress under its power, found in Article I, Section 8 of the U.S. Constitution, “to constitute tribunals inferior to the supreme court.” The present court was established by Congress in October 1982 as the U.S. Claims Court by the Federal Courts
Improvement Act (Public Law 97–164). It replaced the trial division of the Court of Claims, which had been established in 1855. The court was designated as the U.S. Court of Federal Claims by the Federal Courts Administration Act of 1992 (Public Law No. 102–572). Approximately 25 percent of the court's docket consists of tax-refund suits. Breach of contract claims make up about one-third of the court's workload, which also includes cases involving claims of back pay made by federal civilian and military personnel, intellectual property claims against the federal government, federal takings of private property for public use (eminent domain), as well as cases brought by persons who were injured by childhood vaccines. Claims have a statute of limitations of six years. Cases are brought by individuals, domestic and foreign corporations, state and local governments, Indian tribes and nations, and foreign nationals and governments. Either house of Congress may refer to the chief judge a claim for which there is no legal remedy, seeking findings and a recommendation as to whether there is an equitable basis on which Congress itself should compensate a claimant. While most lawsuits against the federal government for money damages greater than $10,000 must be tried in this court, the U.S. district courts have exclusive jurisdiction over tort claims and have concurrent jurisdiction (with the U.S. Court of Federal Claims) over tax refunds. A contractor with a claim against the federal government can choose between filing a lawsuit with the court or with an agency, the Board of Contract Appeals. All trials in this court are bench trials, without juries. Appeals of decisions by the U.S. Court of Federal Claims are heard by the U.S. Court of Appeals for the federal circuit. Since 1987, a number of cases have been resolved under an alternative dispute resolution (ADR) process established by the court. As noted in the court's Second Amended General Order No. 40 (2004), the program was put in place when "the court realized that rising litigation costs and delay were inherent in the traditional judicial resolution of complex legal claims." Under this program, cases may be disposed of by one of three methods: by a "settlement judge," one of the judges of the court, who assesses the case and recommends a settlement to the parties; by a minitrial, an informal proceeding in which the parties try to reach a settlement; or by a "third party neutral," an attorney experienced in alternative dispute resolution who meets with the parties to try to facilitate a settlement. The court's history dates back to 1855, when Congress established the Court of Claims as an Article I court (10 Stat. 612). Before this court was established, money claims against the federal government were presented by petition to Congress. In creating the court, Congress gave it jurisdiction to hear cases based on a law, a regulation, a federal-government contract, or referral from Congress or one of the executive departments of the government. The court would report its findings to Congress and prepare bills for payments to claimants whose claims it had approved. Congress would then review the case and make the appropriation to pay the claim. The court was originally made up of three judges who were appointed by the president (subject to the advice and consent of the Senate) for life. The judges had the authority to appoint commissioners to take depositions and issue subpoenas. In 1863, Congress enacted the Abandoned and Captured Property Act. The law allowed the owners of property seized in the Confederate States by the Union Army to seek compensation through the court. Later that year, Congress enacted legislation (12 Stat. 765) giving the court the power to issue final judgments, ending the need for the court to forward its findings to Congress for final disposition. Instead, the court would forward bills to the U.S. Department of the Treasury for review and payment. Following a U.S. Supreme Court decision in Gordon v. United States (1864), in which the Supreme Court held that it lacked jurisdiction over Court of Claims cases because the Court of Claims' decisions were subject to review by an executive department, Congress passed legislation (14 Stat. 9) ending review of Court of Claims cases by the Treasury. In 1880, Congress granted the court jurisdiction over claims against the District of Columbia in areas of public works, property damage, and welfare services (21 Stat. 284). In 1887, Congress enacted the Tucker Act (24 Stat. 505), which expanded the court's jurisdiction by giving it authority over money "claims founded upon the Constitution." This meant that the
court would hear eminent-domain cases. In 1891, Congress granted the court jurisdiction over claims due to property taken by Indians. In 1925, Congress changed the court's structure by authorizing it to appoint seven commissioners who were empowered to hear evidence and report their findings to the court. The five judges of the court would then serve as a board of review. In 1948, the court was renamed the U.S. Court of Claims, and in 1953, Congress passed a law giving this Court of Claims status as an Article III court. In 1973, as the caseload continued to grow, the judges (who now numbered seven) began to review cases in panels of three. In 1982, Congress abolished the Court of Claims and transferred its original jurisdiction to the new U.S. Claims Court and its appellate jurisdiction to the U.S. Court of Appeals for the federal circuit, which assumed the old Court of Claims appellate functions as well as those of the Court of Customs and Patent Appeals (Public Law 97–164). The National Childhood Vaccine Injury Act of 1986 (Public Law 99–660) gave the court jurisdiction over claims made by individuals who were injured by vaccines. Cases are reviewed by special masters, who determine whether claimants are eligible for compensation under the program. Payments are made, based on a schedule, from the Vaccine Injury Compensation Trust Fund. In 2002, in response to a growing number of claims alleging that vaccines had caused autism, an autism omnibus program was established to deal with these claims. The court is composed of 16 judges nominated by the president of the United States, subject to the advice and consent of the U.S. Senate. Unlike judges of courts that are established under Article III of the Constitution, who have lifetime tenure, judges on this court are appointed for 15-year terms. The chief judge is designated by the president. The current (February 2007) judges of the court are: Chief Judge Edward J. Damich (appointed 1998 and designated chief judge by President George W. Bush in 2002), Christine Odell Cook Miller (appointed 1983; reappointed 1998), Marian Blank Horn (appointed 2003), Francis M. Allegra (appointed 1998), Lawrence M. Baskir (appointed 1998), Lynn J. Bush (appointed 1998), Nancy B.
Firestone (appointed 1998), Emily C. Hewitt (appointed 1998), Lawrence J. Block (appointed 2002), Mary Ellen Coster Williams (appointed 2003), Charles F. Lettow (appointed 2003), Susan G. Braden (appointed 2003), Victor J. Wolski (appointed 2003), George W. Miller (appointed 2004), Thomas C. Wheeler (appointed 2005), and Margaret M. Sweeney (appointed 2005). The senior judges of the court are Thomas J. Lydon, John Paul Wiese, Robert J. Yock, James F. Merow, Reginald W. Gibson, Lawrence S. Margolis, Loren A. Smith, Eric G. Bruggink, Bohdan A. Futey, and Robert H. Hodges, Jr. The court's special masters are Gary J. Golkiewicz (chief special master), George L. Hastings, Laura D. Millman, Richard B. Abell, John F. Edwards, Patricia Campbell-Smith, Christian J. Moran, and Denise K. Vowell. The court meets primarily in Washington, D.C., and it is housed at the Howard T. Markey National Courts Building located at 717 Madison Place, Northwest. However, cases are heard at other locations that are more convenient to the parties involved. See also judicial branch; jurisdiction. Further Reading Citron, Rodger D. "Culture Clash? The Miller and Modigliani Propositions Meet The United States Court of Federal Claims." The Review of Litigation 22 (2003): 319–354; Stinson, David B. The United States Court of Federal Claims Handbook and Procedures Manual. 2nd ed. Washington, D.C.: Bar Association of the District of Columbia, 2003; U.S. Court of Federal Claims Bar Association. The United States Court of Federal Claims: A Deskbook for Practitioners. Washington, D.C.: U.S. Court of Federal Claims Bar Association, 1998; Williams, Greg H. World War II Naval and Maritime Claims Against the United States: Cases in the Federal Court of Claims, 1937–1948. Jefferson, N.C.: McFarland and Company, 2006. —Jeffrey Kraus
U.S. Courts of Appeals Every court system is thought to need a court which must decide all cases brought to it so that all litigants
can have at least one appeal “as of right.” In the United States, the U.S. Constitution does not provide explicitly for this right of appeal, but we have established intermediate appellate courts with mandatory jurisdiction, which means that they must decide all cases brought to them, unlike the U.S. Supreme Court with its discretion to pick and choose the cases it will hear. These intermediate appellate courts—located in the judicial hierarchy between the trial courts and the highest court of the jurisdiction—are considered to have “error-correction” as their primary task, as opposed to “lawmaking,” which is supposed to be the Supreme Court’s primary function, although other appellate courts also partake in it. In the U.S. federal court system, the intermediate appellate courts are the U.S. Courts of Appeals, sometimes referred to as the Circuit Courts of Appeals. These courts are crucial because, for all but the very few decisions that the U.S. Supreme Court reviews, they are the “court of last resort;” that is, most cases end there because, of the many thousands of cases appealed to the courts of appeals, few go further, either because they are not taken to the Supreme Court or, when a losing party does seek Supreme Court review, the justices decline it by denying the petition for certiorari. Some nations have a single nationwide intermediate appellate court, but in the United States, there are 12 separate circuits, one for the District of Columbia and 11 others, each of which covers several states. These courts of appeals have general appellate jurisdiction; that is, they have the authority to hear all types of cases brought on appeal from the district courts, in addition to appeals from decisions of some of the federal regulatory agencies. There is also the Court of Appeals for the federal circuit, a separate semispecialized nationwide court which has exclusive appellate jurisdiction over patent and trademark cases, which come from the U.S. district courts, as well as appellate jurisdiction over federal personnel disputes, certain monetary claims against the government, veterans’ claims, and appeals from the military courts, which come to it from a variety of specialized lower courts. The 11 geographically based circuits vary considerably in size and in number of judgeships. They range from the First Circuit (eastern New England
and Puerto Rico), which has only six judgeships, to the Ninth Circuit (the entire West Coast, plus Montana, Idaho, Nevada, Arizona, Hawaii, and the Districts of Guam and the Northern Marianas), which has 28 judgeships. Most of the other circuits, while varying in geographic span, have between 12 and 15 judges. The boundaries of most of the circuits have remained the same since the Evarts Act of 1891 created the courts of appeals, but the Tenth Circuit (some of the Rocky Mountain states) was carved out of the Eighth Circuit early in the 20th century, and the “old Fifth” Circuit, which spread from Texas through Florida, was divided in the 1980s. Efforts are undertaken regularly to divide the Ninth Circuit, often based on claims that it is “too big,” but the claims are usually driven in fact by partisan considerations such as conservatives’ dislike for decisions supporting environmental statutes or liberal positions on social issues. In addition to the courts’ regular (“active-duty”) judges, considerable help in deciding cases is provided by senior judges, those who have taken a “semiretired” status that allows them to continue to hear cases while new judges are appointed to their former positions. Some courts of appeals also utilize district judges from within the circuit, sitting “by designation” as members of court of appeals panels, to help decide cases. Visiting judges from other circuits also participate in the decision making. Judges of the courts of appeals are appointed in the same way as other Article III (lifetime or “good behavior”) federal judges—on nomination by the president and confirmation by the Senate. Although court of appeals judges serve the entire circuit and not just a single state, it is understood that the judgeships will be allocated among the states in a circuit so that, for example, all the judges in the Second Circuit could not be from New York State, but at least some would have to be from Connecticut and Vermont. This means that the practice of senatorial courtesy, in which senators of the president’s party from a state supposedly are able to determine who is nominated, applies to the courts of appeals. The nominations to the courts of appeals have become quite highly contested in recent decades, particularly once President Ronald Reagan began to select nominees in considerable measure on the
basis of their (conservative) ideology. Since that time in the 1980s, the tension between the Republicans and Democrats in the Senate has risen considerably over quite a number of court of appeals nominations, with each party claiming that the other party has delayed or blocked nominations, although some nominations continue to be approved routinely. The governance structure of the U.S. courts of appeals is simple. There is a chief judge, usually the active-status judge with the most seniority on the court, except that one cannot become a chief judge after reaching age 65 and one cannot serve as chief judge on reaching age 70. Court-of-appeals judges meet regularly to determine matters of policy for the court, and a circuit council composed of an equal number of circuit judges and district judges plays the same role for the circuit as a whole, as well as dealing with matters of judicial discipline. Day-today operations of the courts of appeals are under the control of the clerk of court and the clerk’s staff, and circuitwide matters are handled by a circuit executive and staff. An important fact about the courts of appeals is that, unlike the situation in the Supreme Court where all the justices are in the same building, courts of appeals judges have their chambers throughout the circuit and come together to hear arguments and for court meetings. For most of the cases, the judges sit in three-judge panels, whose membership rotates from month to month; thus Judge A may sit with Judge B and C one month but may sit with Judges F and J the next. One result is that, after a three-judge panel has met to consider a case initially, the judges communicate primarily by email systems or by telephone. We speak of an appeals court “hearing” a case, but the judges decide a large proportion of cases without oral argument by the parties’ lawyers and rule on the basis of the lawyers’ briefs. A very large portion of courts of appeals cases result in so-called “unpublished” dispositions—given that name because, prior to electronic legal databases, they did not appear in the formal volumes of the court’s decisions. While these decisions are binding on the parties, they are not “precedential”, that is, they may not be cited in subsequent cases. (They can now be cited but still are considered nonprecedential.) Preparation of “unpublished” decisions is thought to be easier than pub-
lished opinions because the former are intended only for the parties. “Unpublished” dispositions, which are now handed down in more than four-fifths of cases decided by the U.S. courts of appeals, are used primarily, although not only, in relatively routine, noncomplex cases, particularly in those which receive no oral argument. Although the court of appeals might use an unpublished opinion in reversing a district court or regulatory agency, reversals are more likely to occur in published opinions. Likewise, dissents, while rare in the courts of appeals, are more frequent in published opinions than in unpublished dispositions. Unlike the Supreme Court, where we are accustomed to see a divided court with justices filing separate concurrences and dissenting opinions, dissents—indeed, any separate opinions—are relatively rare in the U.S. courts of appeals. A major reason is that most of the cases considered by these courts are either simple—remember that the court of appeals must decide all cases brought to it—or legally straightforward, controlled by a prior ruling of the circuit or by Supreme Court cases. However, divisions do occur, and they are more likely when the whole court sits. Those divisions are often along ideological lines, with “liberal” judges (often appointed by Democratic presidents) on one side and “conservative” judges, often Republican appointees, on the other. A ruling by a three-judge panel is the decision of the court of appeals, and later panels are expected to follow it. Only the entire court—the court sitting en banc—can overrule circuit precedent. (In the Ninth Circuit, a “limited en banc” is used, with a smaller number of judges drawn by lot to serve as the en banc court.) The courts of appeals sit en banc for only a small number of cases and are expected to reserve such sittings for cases of greatest importance. However, the courts most often sit en banc if a judge of the court has persuaded his or her colleagues that a panel’s ruling conflicts with other decisions of the court (intracircuit conflict) or simply that the panel “got it wrong.” Neither senior circuit judges, district judges, nor out-of-circuit visitors may vote on whether a case should be heard en banc, and the latter two may never sit on an en banc court. When a petition to rehear a case en banc is denied, some judges may dissent, and at times they are said to
be attempting to get the Supreme Court’s attention by doing so. Cases decided en banc are more likely than cases from panels to be taken to the Supreme Court, and they are more likely to be granted review by the justices. Some court of appeals judges object to sitting en banc because of the large expenditure of judicial resources necessary to bring together the judges, who otherwise could be resolving cases, and some think it also delays cases reaching the Supreme Court when the parties would take the cases there in any event. Although the Supreme Court decides only a small number of cases each term—less than 80 in recent years—the largest proportion have been
brought from the U.S. courts of appeals. In particular, the justices decide many cases in which one or more courts of appeals decide an issue one way and one or more decide it the other way (intercircuit conflicts); indeed, a major cue for the Supreme Court granting review is the presence of such a conflict. Other cues that are said to alert the justices to the significance of cases from the courts of appeals are division within those courts (the presence of a dissent) and a court of appeals’ reversal of a district court or a regulatory agency. In the cases that the justices do decide to review, the Supreme Court is more likely to reverse than to affirm the courts of appeals; the justices on average affirm only about
one-third of the cases that they decide. However, the rate at which they reverse varies from one circuit to another, in some years approaching 100 percent of the cases brought from a circuit court. These reversals are thought to be important well beyond resolving the specific dispute between the parties, as the Supreme Court’s rulings are precedent for the nation—and thus all the courts of appeals—to follow. See also judicial branch. Further Reading Coffin, Frank M. On Appeal: Courts, Lawyering, and Judging. New York: W.W. Norton, 1994; Cohen, Jonathan Matthew. Inside Appellate Courts: The Impact of Court Organization on Judicial Decision Making in the United States Courts of Appeals. Ann Arbor: University of Michigan Press, 2002; Hellman, Arthur D., ed. Restructuring Justice: The Innovations of the Ninth Circuit and the Future of the Federal Courts. Ithaca, N.Y.: Cornell University Press, 1990; Howard, J. Woodford, Jr. Courts of Appeals in the Federal Judicial System: A Study of the Second, Fifth, and District of Columbia Circuits. Princeton, N.J.: Princeton University Press, 1981; Songer, Donald R., Reginald S. Sheehan, and Susan B. Haire. Continuity and Change on the United States Courts of Appeals. Ann Arbor: University of Michigan Press, 2000; Schick, Marvin. Learned Hand’s Court. Baltimore, Md.: Johns Hopkins University Press, 1970. —Stephen L. Wasby
U.S. Supreme Court According to Alexander Hamilton in Federalist 78, the U.S. Supreme Court was to be "the least dangerous to the political rights of the Constitution" of the three branches of government, given that it had "no influence over either the sword or the purse" and that it would "have neither force nor will, but merely judgment." The U.S. Supreme Court is the only court established in Article III of the U.S. Constitution. The framers designated this court to be vested with the judicial power of the U.S. government, as well as "such inferior Courts as the Congress may from time to time ordain and establish." Article III states that a supreme court will exist and that other federal courts may be established as needed by Congress. The
only other constitutional provisions include the tenure of federal judges, who remain on the bench “during good Behaviour,” and the jurisdiction of the Supreme Court, which determines when the Court has the authority to hear a particular case. The Constitution left Congress to determine the exact powers and organization of the judicial branch as a whole. As a result, the establishment of a federal judiciary was a high priority for the new government, and the first bill introduced in the U.S. Senate became the Judiciary Act of 1789. The Supreme Court first assembled on February 1, 1790, in the Merchants Exchange Building in New York City, which was then the nation’s capital. Under the first Chief Justice, John Jay, the earliest sessions of the Court were devoted to organizational proceedings. The first cases did not reach the Supreme Court until its second year, and the justices handed down their first opinion in 1792. Between 1790 and 1799, the Court handed down only about 50 cases and made few significant decisions. The first justices complained that the Court had a limited stature, and they also greatly disliked “riding circuit,” which required the justices to endure primitive travel conditions while visiting their assigned regional circuit where each was responsible for meeting twice a year with district court judges to hear appeals of federal cases. Originally, the Court had a total of six members, and Congress changed that number six times before settling at the present total of nine in 1869. All nine justices (eight associate justices and the Chief Justice) are appointed to life terms. Justices and federal judges can be impeached by a majority vote in the House of Representatives and removed by a two-thirds majority vote in the Senate. No Supreme Court justice has ever been removed from office; all have served until retirement or death. In 1805, Associate Justice Samuel Chase was impeached but not removed from the bench due to political opposition to his legal decisions. A total of 17 men have served as Chief Justice, and a total of 98 men and women have served as associate justices, with an average tenure on the bench of 15 years. Tradition remains an important feature of the Supreme Court, with many traditions dating back to its start in 1790. The Supreme Court term begins on the first Monday of October and usually lasts until
late June or early July. During the summer months, the justices continue to consider new cases for review and to prepare for upcoming cases in the fall. During the term, justices are either “sitting” to hear cases and deliver opinions or on “recess” when they write their opinions and deal with other court business. Sittings and recesses alternate roughly every two weeks. Public sessions are held from 10 a.m. until 3 p.m. on Mondays, Tuesdays, and Wednesdays, with a one-hour lunch recess at noon. At the start of the session, the Supreme Court marshal announces the entrance of the justices at precisely 10 a.m. Everyone in the chamber must rise at the sound of the gavel and remain standing until the justices are seated following the traditional chant: “The Honorable, the Chief Justice and the Associate Justices of the Supreme Court of the United States. Oyez! Oyez! Oyez! All persons having business before the Honorable, the Supreme Court of the United States, are admonished to draw near and give their attention, for the Court is now sitting. God save the United States and this Honorable Court!” No public sessions are held on Thursdays or Fridays when justices meet to discuss the cases that have been argued and to vote on petitions for review. Other traditions include the seating arrangement in court, a custom throughout the federal judiciary, with the nine Justices seated by seniority on the bench—the Chief Justice takes the center chair while the senior associate justice sits to his or her right, the second senior to his or her left, alternating right and left by seniority. Justices have also worn black robes while in court since 1800, and white quill pens are placed on counsel tables each day. The “conference handshake,” which has been around since Chief Justice Melville W. Fuller started the practice in the late 19th century, calls for each justice to shake hands with each of the other eight to remind them to look past any differences of opinion and to remember the Court’s overall purpose. The Supreme Court has a traditional seal, similar to the Great Seal of the United States, which is kept by the clerk of the Court and is stamped on official papers. When a vacancy occurs on the Court, a potential justice is first nominated for the position by the president. Once the president makes a nomination, the Senate Judiciary Committee considers it. If the committee approves, the nomination then goes to the
entire Senate, with confirmation occurring by a simple majority vote. In modern times, nominees have also been scrutinized closely by the American Bar Association (which gives an unofficial rating of well qualified, qualified, or not qualified), the legal community as a whole, and the news media, and nominees must also undergo a background check by the FBI. Through 2006, a total of 151 nominations have been made to the Court, with only 12 rejected by a vote in the Senate. In addition, seven individuals have declined the nomination (something that has not occurred since 1882), and eight nominations were withdrawn, usually due to impending defeat in the Senate and/or negative public reaction. During the confirmation hearings, nominees are expected to answer questions by the Senate Judiciary Committee. This became an accepted practice in 1955 during the confirmation of President Dwight Eisenhower's nominee John M. Harlan. Most confirmation hearings are routine and draw little public attention. However, in recent years, a few notable exceptions have occurred, including President Ronald Reagan's nomination of Robert Bork, a conservative appeals-court judge, in 1987 and President George H. W. Bush's nomination in 1991 of Clarence Thomas, an appeals-court judge and former head of the Equal Employment Opportunity Commission who was accused of sexual harassment. Presidents have faced a vacancy on the Court an average of once every two years. Prior to 2005, when Associate Justice Sandra Day O'Connor retired and Chief Justice William Rehnquist died, the nine justices had worked together since 1994, the longest a particular group of justices had been on the bench together since the 1820s. As a result, President Bill Clinton had only two appointments to the Court during his eight years in office, and President George W. Bush did not see a vacancy occur until early in his second term. Some presidents are simply not lucky in this regard—President Jimmy Carter had no appointments to the Court during his four years in office—and some have better luck—President Gerald Ford was president for only 29 months but made one appointment to the Court. Since justices serve a life term, this is an important opportunity for presidents to enjoy a lasting political legacy long after they leave office.
The political climate plays a large role in who is selected, as do race, gender, religion, and geography. Nominees are always lawyers, although this is not a constitutional requirement, and many have attended top law schools (such as Harvard, Yale, Stanford, or Columbia). Previous jobs usually include appellate judgeships (state or federal), positions within the executive branch (often in the Justice Department), or high elected office. Most nominees are older than 50, and a majority are from upper- or upper-middle-class families. As the highest court within the federal judiciary, the Supreme Court has tremendous influence over the U.S. legal system as well as the political process. In deciding cases, federal judges must follow constitutional law, statutory law, and administrative law. More than 7,000 civil and criminal cases, on average, are appealed to the Supreme Court each year from the various state and federal courts, but the Court only handles roughly 200 of those cases. The Constitution states that the Court is to deal with "cases and controversies" stemming from the U.S. Constitution. The Court has both original and appellate jurisdiction. Original jurisdiction means that certain cases are heard first at the level of the Supreme Court. These cases include legal disputes between states or those involving foreign diplomats. In the entire history of the Court, it has heard only roughly 200 original jurisdiction cases, and most dealt with border disputes between states prior to the 20th century. The Court accepts most of its appellate cases by granting a writ of certiorari, which means that the Court is giving permission for a losing party to bring its case forward for review. Under what is known as the rule of four, at least four of the nine justices must agree to grant a writ of certiorari to a particular case. A case stands a better chance of being selected when the U.S. government requests the appeal, which occurs through the U.S. solicitor general's office (lawyers in this office represent the government in cases before the Court). As part of its job to deal with "cases and controversies," the Court has developed the tradition of resolving broad legal issues as opposed to minor or technical legal questions. As a result, most cases that come before the Court raise major constitutional issues or are cases from a lower court that conflict with a previous Supreme Court ruling. In recent
years, the Court has averaged roughly 100 cases to which it grants full consideration, meaning that the Court will receive new briefs from the participants, hold oral arguments, and then hand down a decision with a full explanation of its ruling within a written opinion. Other cases receive summary consideration, meaning that no new briefs or oral arguments are necessary, and the Court will either vacate the lower court ruling and/or remand the case back to the original trial court. Some cases receiving summary consideration produce per curiam decisions, where the Court hands down a brief unsigned opinion from the entire Court as opposed to an opinion authored by one particular justice. The Court must be concerned with achieving compliance among other political actors when a decision is handed down. This is an important factor that justices must consider when deciding a case. The Court is a countermajoritarian branch of the federal government, which means that it often overturns the decisions of officials in the legislative and executive branches, both of which are made up of elected officials who must be more responsive to the voters. The justices are often at the mercy of elected officials, whose actions they just deemed unconstitutional but who then must follow along with the new legal precedent handed down by the Court. Since the court has “neither force nor will, but merely judgment,” as stated in the Federalist, other political actors must implement the Court’s policies. These include actors at the federal and state level, particularly other judges, legislators, and executive branch administrators. In this regard, the Court’s reputation among other government officials as well as the U.S. public is crucial. The reverence that the Court enjoys goes a long way to achieving compliance to its rulings. An enduring controversy regarding the proper role of the Supreme Court in the U.S. constitutional system of government surrounds the Court’s role in policy making. Due to vast changes in social and economic circumstances during the 20th century, all three branches of the federal government began to play a much larger role in all areas of the policymaking process. Two distinct yet related schools of thought exist on the role of the Court as a policy maker—judicial activism and judicial restraint. Supporters of the doctrine of judicial activism argue that justices should develop new legal princi-
ples when a compelling need exists, going beyond merely interpreting the law to participating actively in making new law. Judicial activists can adhere to either a liberal or conservative political ideology. Liberal activists view the Constitution as a broad grant of freedom to citizens against government interference, particularly where civil rights and civil liberties are concerned. Conservative activists, like those on the Supreme Court in the early 1930s that struck down key provisions of President Franklin D. Roosevelt’s New Deal program, prefer to let the states, not the federal government, regulate many social and economic affairs. Both sets of activists share in their willingness to overlook the literal words of the Constitution in pursuit of specific policy outcomes. The doctrine of judicial restraint supports the adherence to precedent and respect for the legislature as the proper governing body to determine public policy. According to this view, judges should not seek new principles that can change the existing law but rather look only to precedent to interpret cases. Advocates of judicial restraint also argue that leaving policy making to elected officials maintains the legitimacy not only of the Court as an institution but of a republican form of government. Those who favor judicial restraint also tend to favor the doctrine of original intent, which is a method of constitutional interpretation that seeks to understand the intent of the framers of the Constitution. Some who advocate judicial restraint call themselves strict constructionists. Strict constructionists believe that all that is necessary to interpret the Constitution is a literal reading of the document within its historical context. U.S. citizens have experienced both activist and restrained Supreme Courts at various points in U.S. history. Further Reading Abraham, Henry J. Justices, Presidents, and Senators: A History of the U.S. Supreme Court Appointments from Washington to Clinton. Lanham, Md.: Rowman & Littlefield, 1999; Baum, Lawrence. The Supreme Court. 8th ed. Washington, D.C.: Congressional Quarterly Press, 2004; McCloskey, Robert G. The American Supreme Court. 2nd ed. Revised by Sanford Levinson. Chicago: University of Chicago Press, 1994; McGuire, Kevin T. Understanding the U.S. Supreme Court: Cases and Controversies. New York: McGraw-Hill, 2002; O’Brien, David M. Storm
Center: The Supreme Court in American Politics. 7th ed. New York: W.W. Norton, 2005; Silverstein, Mark. Judicious Choices: The New Politics of Supreme Court Confirmations. New York: W.W. Norton, 1994. —Lori Cox Han
U.S. Tax Court The U.S. Tax Court is a specialized federal court of record (which means that everything that occurs in the Court is recorded) created by the U.S. Congress by virtue of Congress’s power under Article I, Section 8, “To constitute Tribunals inferior to the supreme court.” The function of this court is to provide a judicial forum to taxpayers challenging rulings made by the Internal Revenue Service (IRS). The court’s jurisdiction includes the authority to hear tax disputes concerning notices of deficiency, notices of transferee liability, certain types of declaratory judgment, readjustment and adjustment of partnership items, review of the failure to abate interest, administrative costs, worker classification, relief from joint and several liabilities on a joint return, and review of certain collection actions. These matters are first heard by an impartial officer from the IRS office of appeals who is required to issue a determination on the issues raised by the taxpayer at a hearing. If the taxpayer wishes to appeal the decision of the hearing officer, they must file a petition with the court within 90 days of receiving a statutory deficiency notice from the IRS. As of this writing (January 2007), taxpayers pay a filing fee of $60 for bringing an action. Taxpayers bringing cases to the court do not have to make payment to the IRS until their cases are heard. While the court is the venue for most tax cases, it does not have jurisdiction over all tax matters. For example, the court does not have jurisdiction over federal excise taxes. Taxpayers may also bring an action in one of the U.S. district courts or the U.S. Court of Federal Claims. However, litigants choosing these venues for their litigation would be required first to pay the tax and then to file a lawsuit seeking recovery of the contested amount. Tax matters brought before the U.S. district courts involve tax refunds or criminal tax cases. Unlike the U.S. Tax Court, the taxpayer can request a jury trial in the district
court. Tax-refund suits can also be brought by taxpayers in the U.S. Court of Federal Claims, which hears cases involving monetary claims against the federal government. The taxpayer may decide to have the case heard under procedures for so-called "small tax cases" (known as "S cases") if the amount in question is $10,000 or less for any calendar year. In these cases, the decision of the court is final. Other cases may be heard on appeal by the U.S. Courts of Appeals and subsequently by the U.S. Supreme Court if that court grants a writ of certiorari. Very few tax cases are heard by the U.S. Supreme Court, so a U.S. court of appeals is usually the highest court in which a tax case will be heard. The court issues two types of decisions: regular (or reported) decisions and memorandum decisions. Regular decisions involve new areas of tax law or new interpretations of the Internal Revenue Code and are published. Memorandum decisions deal with interpretations of facts rather than law and are not officially published. The court was originally established as the U.S. Board of Tax Appeals, an independent agency in the executive branch, by the Revenue Act of 1924 (43 Stat. 336). The agency conducted hearings and issued redeterminations in cases appealing adverse income, profit, estate, and gift tax decisions of the commissioner of Internal Revenue. In 1942, Congress enacted the Revenue Act of 1942 (56 Stat. 957). This legislation established the Tax Court of the United States, an independent agency within the executive branch, superseding the U.S. Board of Tax Appeals. The legislation also abolished the Processing Tax Board of Review, which had been established in the Department of the Treasury in 1936 (49 Stat. 1748) to hear appeals from decisions by the commissioner of Internal Revenue involving processing payments under the Agricultural Adjustment Act. The functions of this agency were transferred to the Tax Court of the United States, which also had jurisdiction over cases arising from determinations of tax deficiencies by the commissioner of Internal Revenue and appeals of decisions by the commissioner of claims in applications for refunds of excess profit taxes. The court also held redetermination hearings in cases involving excess profits on war contracts during the Second
World War. It consisted of a presiding judge and 15 judges. The court was given its present name by the Tax Reform Act of 1969 (83 Stat. 730), which also established it in the judicial branch as an Article I court. The Tax Court consists of 19 judges, appointed by the president, subject to the advice and consent of the U.S. Senate. Unlike judges who are appointed to courts established under Article III of the U.S. Constitution, who serve "during good behavior," judges of the U.S. Tax Court are appointed for 15-year terms. The chief judge of the court is elected by the members for a two-year term. The judges are expected to be experts on the tax laws and the Internal Revenue Code. The court also includes a number of senior judges and 14 special-trial judges. Senior judges are judges who have retired but continue to perform judicial duties and can be recalled to active duty. The 14 special-trial judges are appointed by the chief judge and serve at the pleasure of the court. They hear cases and then make recommendations, which are passed on to the regular judges for review. In March 2005, the U.S. Supreme Court directed the Tax Court to release reports of the decisions made by special-trial judges (Ballard et ux. v. Commissioner of Internal Revenue, 544 U.S. 40, 2005). In the case in question, the appellants believed that special-trial judges had ruled in their favor, only to be reversed by the Tax Court judge who had reviewed their cases. The Tax Court refused to provide the taxpayers with the reports of the special-trial judges, and the taxpayers then brought actions in the Seventh and Eleventh Circuits of the U.S. Courts of Appeals, and the cases eventually reached the U.S. Supreme Court. Associate Justice Ruth Bader Ginsburg, writing for the majority, stated that "The Tax Court's practice of not disclosing the special trial judge's original report, and of obscuring the Tax Court Judge's mode of reviewing that report, impedes fully informed appellate review of the Tax Court's decision." Following the Supreme Court decision, the court revised its rules of procedure so that the reports of the special-trial judges are now made public. While the court's "principal office" is located in the District of Columbia, Tax Court judges may sit "anyplace within the United States" (26 U.S.C. 6212).
The court’s Washington, D.C., court house is located at 400 Second Street NW. The court also maintains a field office in Los Angeles, California. Tax Court judges travel throughout the United States to hear cases, and in 2005, the court scheduled hearings for a number of cases scheduled to be heard in New Orleans following Hurricane Katrina. During a typical term, the Tax Court will hold sessions in approximately 40 cities. The U.S. Tax Court is unique in that a person who is not admitted to practice as an attorney may be admitted to practice before the Tax Court as a representative of taxpayers by applying for admission and passing an examination administered by the court. Despite this anomaly, most of the individuals practicing before the court are licensed attorneys (who are not required to sit for an examination) who specialize in taxation. When trials are conducted, the cases are heard by a judge in a “bench trial” (which means no jury is present). The chief judge is currently (as of January 2007) John O. Colvin. The other judges of the court are Carolyn P. Chiechi, Mary Ann Cohen, Maurice B. Foley, Joseph H. Gale, Joseph Robert Goeke, Harry A. Haines, James S. Halpern, Mark V. Holmes, Diane L. Kroupa, David Laro, L. Paige Marvel, Stephen J. Swift, Michael B. Thornton, Juan F. Vasquez, Thomas B. Wells, and Robert A. Wherry. The current senior judges are Renato Beghe, Herbert L. Chabot, Howard A. Dawson, Jr., Joel Gerber, Julian I. Jacobs, Arthur L. Nims, III, Robert P. Ruwe, and Laurence J. Whalen. The current special-trial judges are chief special trial judge Peter J. Panuthos, Robert N. Armen, Lewis R. Carluzzo, D. Irvin Couvillion, John F. Dean, Stanley J. Goldberg, and Carleton D. Powell. Further Reading Crouch, Holmes F. Going Into Tax Court. Saratoga, Calif.: Allyear Tax Guides, 1996; Dubroff, Harold. The United States Tax Court: An Historical Analysis. Chicago: Commerce Clearing House, 1979; U.S. Congress. House Committee on Ways and Means. Organization and Administration of the United States Tax Court: Hearing Before the Committee on Ways And Means, House of Representatives, 96th Congress, 2nd Session, April 1, 1980. Washington, D.C.: U.S. Government Printing Office, 1980. —Jeffrey Kraus
writ of certiorari A writ of certiorari (from the Latin "to be informed") is an order by the U.S. Supreme Court directed to a lower court to send up the records of a case to which the Supreme Court has decided to grant review. This constitutes the most common avenue by which the Supreme Court exercises its discretion in choosing whether to give a full hearing to an appeal of a lower-court ruling. This appellate jurisdiction, which makes up almost the entirety of the cases with which the Court deals, consists of legal disputes in which a party asks the Supreme Court to review a decision handed down by a state supreme court or a lower federal court. A party or litigant argues that there has been an error of law of some type committed in a lower-court decision. There are three paths of appeal to the Court: appeal as a matter of right; certification; and certiorari. The first two, appeal as of right and certification, entail cases of a very narrow nature and thus make up only a tiny portion of the appellate cases which the Supreme Court decides. Of note, the granting of a writ of certiorari is entirely discretionary on the part of the Court, and it is this type of appeal that makes up almost all of the nearly 8,000 cases a year that apply for Court review. Essentially, the Court sets its own agenda to a certain extent by choosing the types of cases and issues that it wishes to resolve. The Supreme Court formally deciding to accept a case for review is known as granting a writ of certiorari—colloquially referred to as "granting cert." The process of potentially granting cert begins in the Court's docket, where these thousands of cases sit requesting review. The docket of the Supreme Court is screened by the justices and their clerks for appellate review. From these cases in the docket, the Court will grant cert to a very small number and place them on its calendar for oral arguments and then eventually render its final decision on a case, referred to as a "decision on the merits." In the 1980s, between 120 and 145 cases a year received cert and had oral arguments heard before the Court. In the late 1990s, the Court started to hear fewer than 100 a year, and in the mid-2000s, roughly 80 to 85 cases a year have received a full hearing. The Court's term begins the first Monday in October and remains in session generally until early July.
To start the process of hearing a case, the Supreme Court must decide the cases that are worthy of cert. Every Supreme Court justice uses law clerks to sift through the cases in the docket. Some justices rely only on the clerks’ memos, while others may become more directly involved. The justices generally use what is called the “cert pool” where all the clerks of the various justices summarize the cases and give their recommendations on whether a case should be granted cert or not. Beginning with Chief Justice Warren Burger in the 1970s, a discuss list was used and continues today. The Chief Justice circulates the list of cases he or she wants to hear, and the other justices can add to it. There are typically 40 to 50 cases on the list for the weekly conference meeting of the justices where they talk about these matters and they present their respective views. A dead list has also developed over time, and it consists of cases that are deemed to be not worthy of discussion and not deserving of cert. Only about 25 to 30 percent of cases in the docket reach the discuss list, and most of the cases on the discuss list are not granted cert. In the justices’ conference meeting, this discuss list generates a core set of cases about which the justices of the Court talk to consider possibly granting cert. It is at this initial conference discussion on a case that the justices vote whether to grant cert or not. It takes at least four justices to grant cert—the Rule of Four. The granting of cert results in the Supreme Court giving a full hearing to a case by subsequently hearing oral arguments from litigants’ attorneys and taking litigants’ legal briefs, as well as amicus curiae (friends of the court) briefs submitted by people or groups that have interests in the outcome of the case but who are not the actual litigants. The granting or not of a writ of certiorari serves as the essential gate-keeping function of the Supreme Court. This raises the important question of “how do the justices decide to decide?” In other words, what makes a case “certworthy” or not? What are the primary considerations of the justices when selecting cases that help us understand both what issues will be resolved and the leading motivations of justices’ decision making in this area on the Court? Scholars in a variety of studies have found that there are two general types of factors that influence the justices’ case-
selection decisions: legal and political. Legal considerations consist of "principled," more law-based rationales as opposed to political factors that go beyond purely jurisprudential concerns and bring into play the justices' extralegal preferences. Both sets of factors are discussed below. To start, the initial legal considerations for granting cert by the Court are questions of jurisdiction, justiciability, and standing of the parties in a case. Does a case fall within the Supreme Court's appellate jurisdiction—that is to say, does the Supreme Court have the authority to hear and decide the case? The Court's appellate jurisdiction consists of the following: all decisions of federal courts of appeals and specialized federal appellate courts; all decisions of state supreme courts concerning issues of federal law or the U.S. Constitution; and decisions of special three-judge federal district courts, typically dealing with election matters. Is a case justiciable by the Court—that is to say, is there a real dispute or a true "controversy" in a "case," as the Constitution mandates in Article III? Does the party in a case requesting certiorari have standing to bring that case to the Supreme Court—that is to say, does that party have a right to bring legal actions because that party is directly affected by the issues raised in the case? A party bringing the litigation must have a personal stake in the outcome of the case and/or controversy—there must be some type of bona fide injury to the litigating party. Beyond these more procedural legal considerations are other legal concerns affecting "certworthiness" that scholars refer to as the Supreme Court's self-limiting doctrines. These comprise the following: no advisory opinions; mootness; ripeness; political questions; and conflict between federal circuit courts of appeals. The first three, dealing with advisory opinions, mootness, and ripeness, stem from questions related to justiciability. The Supreme Court will not issue advisory opinions on hypothetical or conjectured disputes—a case to be granted cert must possess a true and live dispute between two parties with some injury to one of the parties. The Court will also not hear cases that are technically or legally moot. In other words, the dispute must be alive by the time the case reaches the Court—the "controversy" cannot have come and gone. Related to the mootness doctrine is the question of ripeness. Ripeness is tap-
ping the notion that a case should not be reviewed by the Court too early, before the facts have solidified sufficiently. In other words, a case is assessed as not having ripened enough for the Court when that case has not yet turned into a clear-cut controversy; instead, it possesses an ongoing and fluid factual situation. The ripeness doctrine is notably vague in that it is not clear how justices on the Court determine exactly when a case has evolved enough for adjudication. The other two major legal considerations are political questions and conflict between different federal circuit courts of appeals. The Supreme Court will not hear cases that involve political questions—the invoking of this self-limiting doctrine dates back to the early 1800s and the Marshall Court. Here, the Court believes that the resolution of some cases best rests with the elected branches, not the courts. This doctrine has typically been drawn on in cases dealing with disputes between the legislative branch and the executive branch of the federal government or disputes between the federal government and the states. However, it remains somewhat murky precisely what constitutes a nonjusticiable political question. The last legal consideration in granting cert for justices on the Court is the presence of conflict in case decisions on very similar issues between circuits in the federal courts of appeals. Contradictory decisions by federal circuits serve as a red flag to the Supreme Court on the need for the justices to resolve an apparent confusion in the law among lower federal judges and to promote consistency throughout the country. Empirical studies show that the presence of different circuits ruling differently on similar cases does increase the probability of an appeal of such a case being heard by the Court. Scholars have concluded that legal considerations are not the only factors that influence the process by which justices decide what cases to review—political influences also play an important role here. Many cases in the Court's docket fulfill the legal criteria laid out above, so scholars have attempted to map out in greater detail the influences that more fully account for justices' voting to grant cert. Studies have found that the following four political influences are some of the most significant along these lines: cases that raise an important national
policy issue; the ideology and political preferences of the justices; type of litigant involved in a case; and amicus curiae briefs. Great weight is given to cases that raise important national policy issues. Does a case involve a question that the political system wants and needs resolved? The Court recognizes its limited time and resources on its agenda space, so the justices will seek out cases that provide the more compelling policy and political ramifications for the country at large. The ideological predispositions and political preferences of justices on the Court also affect their willingness to grant cert to some cases over others. Both liberals and conservatives on the Court tend to want to hear cases that contain judicial decisions they wish to reverse. In other words, generally speaking, liberal justices will vote to hear cases that were previously decided in a conservative direction, and conservative justices will vote to hear cases that were previously decided in a liberal direction. For example, the predominantly liberal Warren Court (1953–69) in the 1950s and 1960s was well known for protecting criminal defendants in its decisions. It was five times more likely to grant cert in a case appealed by a criminal defendant than the subsequent (and more conservative) Burger Court (1969–85) and Rehnquist Court (1986–2005). The Burger Court tried to push back what it considered to be the excesses of the Warren Court in this area of the law. Quite unlike the Warren Court, the Burger Court was three times more likely to grant cert when a criminal case was appealed by the state. Scholarly studies have found that the type of litigant involved in a case clearly has impact in the granting of cert. One of the most successful litigants in obtaining consistently granted cert is the U.S. solicitor general, the executive-branch official who essentially serves as the federal government’s attorney and advocate before the Supreme Court. Full review is granted by the Court at least 70 percent of the time when the U.S. government is a party to a case and the solicitor general submits a request on the government’s behalf to have that case heard. Other “repeat players”—those litigants who have been before the Court before in a variety of cases and who have built a history of useful Court experience and reputation— are also favored in the granting of cert but not in as dominating a fashion as the solicitor general. Leading
examples of repeat players include state governments and nongovernmental parties such as the American Civil Liberties Union, the U.S. Chamber of Commerce, the National Rifle Association, and the AFL-CIO. Least favored by the justices are prisoners and "one-shotters"—those appearing before the Court for the first time. The last political factor consists of amicus curiae briefs, also known as friend-of-the-court briefs. Amicus briefs are briefs submitted by a person or group who is not a direct party to a case but who wishes the Court to be informed of their views on whether a case should be heard and, if a case is heard, their preferences and rationales on the ultimate outcome. It is less common to have amicus briefs submitted at the cert stage than at the full hearing/oral arguments stage. Studies have shown that the presence of such briefs does significantly increase the probability of the Court deciding to hear a case.
Theorizing suggests that the number of amicus briefs and the identity of their sponsors signal to the justices the salience and importance of the issues at stake in a case. Further Reading Abraham, Henry J. The Judicial Process: An Introductory Analysis of the Courts of the United States, England, and France. 7th ed. New York: Oxford University Press, 1998; Baum, Lawrence. The Supreme Court. 8th ed. Washington, D.C.: Congressional Quarterly Press, 2003; Epstein, Lee, and Thomas G. Walker. Constitutional Law for a Changing America: Rights, Liberties, and Justice. 5th ed. Washington, D.C.: Congressional Quarterly Press, 2004; Pacelle, Richard L., Jr. The Transformation of the Supreme Court's Agendas: From the New Deal to the Reagan Administration. Boulder, Colo.: Westview Press, 1991; Perry, H. W. Deciding to Decide. Cambridge, Mass.: Harvard University Press, 1991. —Stephen R. Routh
PUBLIC POLICY
aging policy
Although one ages from birth throughout adulthood, the term aging policy describes the governmental programs and benefits aimed at senior citizens. Aging policy makes up more than a third of the annual federal budget, and only defense spending surpasses it. The largest components of aging policy are Social Security, Medicare, and Medicaid. However, aging policy also encompasses such issues as age discrimination in employment, private pensions, taxes, Supplemental Security Income for the poor, housing and reverse mortgages, nursing home regulation, and the renewal of driver's licenses. The age of 65 is generally used to denote one's status as a senior citizen for most programs and policies. There is nothing specific about this chronological marker that suddenly makes an individual old or elderly. Rather, the conventional use of 65 in aging policy stems from the creation of Social Security in 1935, which adopted age 65 for the receipt of benefits. In 1935, policy makers were merely following the age used in the German pension program developed in the 1880s. In addition, age 65 was considered favorable for the policy purposes of Social Security. Enough people would live to 65 that the public would willingly pay into the program on the expectation that they, too, would live to receive benefits. But given the life expectancy of 61 in 1935, not everyone would reach age 65, which was financially advantageous, as many would pay into the program who never received benefits. Thus, age 65 developed as the nearly universal qualifier for most aging policy.
Aging policy is in large part driven by demography, which is the study of population. In 1900, the elderly numbered only 2.4 million, constituted 4 percent of the population, and had very few government policies directed toward them. In comparison, today's elderly number approximately 35 million, constitute 12.4 percent of the population, and have an entire smorgasbord of age-related policies. Many of these policies are a response to demographic trends within the elderly population. For example, the fact that older women outnumber men (65 males to 100 females over age 65) prompts the need for public policies of caregiving support and nursing home regulations. Older men are likely to be married and thus have a live-in spouse available for caregiving. An older woman, on the other hand, is likely to be widowed, so she will need to look outside the family for caregiving support. Another demographic trend is that the fastest growing age group is those 85 and older. This again prompts policy needs, as this oldest group has a greater need for health care and medications, financial support due to outliving one's savings and assets, and caregiving assistance. As a group, senior citizens are highly politically active. Seniors have one of the highest levels of voter turnout and are the ones most likely to vote in nonpresidential as well as local elections. People over 65 are far more likely than other age groups to belong to a political party, vote in a primary election, and
contribute to a political campaign. Hence, seniors are a favored target of politicians seeking votes and campaign funds, and politicians in return are more than willing to protect aging policies and benefits. Few politicians will risk taking on the elderly by suggesting reductions in benefits or massive reforms in aging policy, and almost none will do so in an election year. Aging policy has long been described as the third rail of politics. It is not only the political activism of the elderly that promotes aging policy; political power also stems from AARP (formerly the American Association of Retired Persons), an interest group dedicated to furthering aging issues. With 35 million members (all aged 50 and over) and its own zip code in Washington, D.C., AARP is the behemoth of interest groups. As a nonprofit, nonpartisan interest group, AARP does not support or oppose political candidates or donate financially to political campaigns. But few aging policies get enacted without the blessing of AARP, which constantly lobbies on behalf of the elderly. As an interest group, AARP provides material, solidaristic, and purposive incentives to its membership. AARP offers a wide array of material benefits, including prescription drug discounts, travel programs and discounts, insurance, safe driver programs, and affiliated programs with companies offering discounts on their products to AARP members. Belonging to a group of like-minded people is a solidaristic benefit, and AARP: The Magazine, claiming the world's largest circulation, promotes the perspective of an aging community joined by common concerns, ideas, and history. The purposive benefits are the increases in aging programs obtained, and the cutbacks warded off, through AARP lobbying. But like any interest group, AARP suffers from free riders who benefit from its political activism without officially becoming members. The largest public policy program devoted to aging is Social Security. Created in 1935 as part of President Franklin D. Roosevelt's New Deal programs, Social Security provides financial support to retired seniors and their spouses, the disabled, and survivors of an eligible employed worker. The three requirements for receiving Social Security are that the worker paid into the program for at least 40 quarters over a lifetime, meets the retirement test of no
longer working at a primary job, and meets the eligibility age (originally 65 but now gradually increasing to age 67). Roosevelt intentionally used an insurance model to create the public perspective that workers paid in and therefore were entitled to benefits, forestalling any future policy makers from eliminating the program. Today’s workers pay 6.2 percent of their salary to Social Security (up to a maximum earning of $94,200 in 2006), and their employer also pays 6.2 percent of the worker’s salary, for a total of 12.4 percent per worker. Social Security contributions are compulsory and also portable, in that the worker can take contributions to this government program to his or her next job. Upon retirement and reaching the eligibility age, a worker can begin to receive Social Security benefits, which then continue for the rest of his or her life. A worker can opt for early retirement at age 62 and receive 80 percent of the full benefit amount for the rest of his or her life. The exact amount received from Social Security is based on the worker’s lifetime contribution, with the maximum benefit in 2006 of $2,053 a month. The average Social Security benefit is $1,002. Individuals receiving Social Security benefits do get an annual raise with a cost of living adjustment (COLA). Social Security was never intended to be the sole or even the primary financial support in retirement. Instead, aging financial policy was conceived as a three-legged stool of private pension, savings, and Social Security. Social Security is an intergenerational transfer whereby current beneficiaries are paid from the contributions of current workers and any surplus goes into the Social Security Trust Fund. Despite workers paying into Social Security over their employment history, the average worker earns back in benefits his or her entire contribution amount in 6.2 years. The retired person’s Social Security benefits do not stop then but rather continue for the rest of his or her life. The success of an intergenerational transfer program depends on demographics and the ratio of workers to retired beneficiaries. When Social Security was created, there were 40 workers for each retiree receiving benefits. With the decrease in the birth rate, the ratio has fallen to 3.3 workers per retiree, with further decreases projected when members of the baby boom generation (those born between 1946 and 1964)
retire. Some of this decline is offset by the increase in women working, which was not adjusted for in the 1935 numbers. Needless to say, with the coming retirement of the baby boomers (which is a huge group estimated to be approximately 76 million people and nearly 29 percent of the entire U.S. population), there are grave concerns about the future of Social Security. Once the baby boomers retire by 2020, those over the age of 65 will number approximately 54 million and could constitute 16 percent to 21 percent of the population (depending on the birth rate from now to 2020). This will dramatically shift the dependency ratio and cause great havoc with the intergenerational transfer model of Social Security. As yet, no major reforms to Social Security have been enacted. This is in part due to policy makers’ reluctance to displease seniors or the baby boomers, due in part to the higher voter turnout rates for this segment of the population. The other major aging policy is Medicare, enacted in 1965 as part of President Lyndon B. Johnson’s Great Society programs, which is the federal program that provides health care to seniors. It is an entitlement program, which means that anyone meeting the requirements of eligibility to now receive Social Security can also receive Medicare benefits without any consideration of financial need. Current workers pay 1.45 percent of their income in a Medicare tax, with that same amount matched by their employer. Medicare beneficiaries have monthly premiums, copays, and deductibles. As a program, Medicare is geared toward acute care of treating a temporary episodic condition and less on chronic care requiring prolonged assistance. Medicare is comprised of Parts A, B, and D. Part A is automatically available to a person eligible for Medicare benefits. It covers the costs of a hospital stay, medical tests, and medical equipment. The older patient pays a $952 deductible (as of 2006) on entering the hospital each quarter, and Medicare pays for the rest of the hospital stay. Part B covers doctor visits and is optional coverage that seniors can opt for by paying a fee of $78 a month (as of 2006) deducted from their Social Security check. The beneficiary has a $110 annual deductible and pays 20 percent of the physician charges. Enacted in 2006, Part D provides coverage of prescription medications for those who receive Medi-
care. The beneficiary pays a monthly premium for the type of coverage selected and an annual deductible of $250. Part D is often described as having a "donut hole" in its coverage. Medicare pays 75 percent of drug costs from $250 to $2,850 and then 95 percent of costs after the beneficiary has spent $3,600 out of pocket on his or her prescription medications. This gap in coverage, after total drug costs pass $2,850 but before out-of-pocket spending reaches $3,600, is the donut hole (see the illustrative sketch at the end of this entry). Coverage of prescription drugs under Medicare was a long-debated policy before it was enacted. One of the policy concerns is that the actual program costs are unknown. The annual cost of Part D will be determined by the health needs of the beneficiaries and the drugs prescribed by their physicians. Seniors clearly have the largest usage of prescription medications, but the extent of their prescribed drug use, and thus the cost of Part D, is yet to be determined. While not a specific program for the older population, Medicaid has become one of the most important to seniors and their families. Medicaid, also created in 1965, is a state and federally funded health care program for the poor of all ages. This is a means-tested program, so to qualify for coverage an individual's income and assets must be below the level determined by each state. Most states use the poverty line, or a threshold such as 150 percent of the poverty line, as the income level for Medicaid eligibility. Although there are state variations, generally Medicaid covers the costs of doctor visits, hospital stays, medical equipment, and prescription drugs for those who qualify. Medicaid has become increasingly significant for older persons because it covers the costs of continuing care in a nursing home. Medicare, on the other hand, only covers nursing home expenses for a limited stay immediately on discharge from an acute hospital. While only 4 percent of people over the age of 65 are in nursing homes at any one point in time, there is a 40 percent chance that a senior will spend some time in a nursing home during his or her lifetime. Many older persons enter a nursing home paying out of pocket for their care. However, the average nursing home costs more than $35,000 a year, which quickly diminishes older persons' savings and assets. Often older persons will spend down (deplete their assets) paying for their nursing home costs and then
qualify for Medicaid. At this point, Medicaid will pay for the nursing home and other health-care costs. Obviously, Social Security, Medicare, and the long-term care costs of Medicaid are an enormous outlay of federal dollars for aging programs. These huge costs and the impending aging of the baby boomers into senior citizens have prompted attention to intergenerational equity. Now the questions of aging policy have become, among others, what do American citizens owe older persons, can American citizens afford to continue the same level of benefits, and how should the government allocate resources across generations. When the last of the baby boomers has its 65th birthday in 2020, there will be 54 million senior citizens. This looming magnitude of older persons assures that intergenerational equity will dictate aging policy for decades to come. Further Reading Altman, Stuart, and David Shactman, eds. Policies for an Aging Society. Baltimore, Md.: Johns Hopkins University Press, 2002; Burkhauser, Richard. The Economics of an Aging Society. Malden, Mass.: Blackwell Publishing, 2004; Hudson, Robert. The Politics of Old Age Policy. Baltimore, Md.: Johns Hopkins University Press, 2005; Koff, Theodore, and Richard Park. Aging Public Policies: Bonding across the Generations. Amityville, N.Y.: Baywood Publishing, 2000; Kotlikoff, Laurence J. and Scott Burns. The Coming Generational Storm. Cambridge, Mass.: MIT Press, 2004. —Janie Steckenrider
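The Social Security payroll-tax arithmetic described in this entry can be illustrated with a minimal sketch in Python, assuming the 2006 figures cited above (a 6.2 percent tax on the worker and an equal 6.2 percent on the employer, levied on earnings up to $94,200); the $50,000 salary used in the example is hypothetical.

def social_security_contribution(annual_salary):
    """Return (worker share, employer share, combined) for one year,
    using the 2006 parameters cited in this entry."""
    WAGE_CAP = 94_200   # maximum taxable earnings in 2006
    RATE = 0.062        # paid by the worker and, separately, by the employer
    taxable = min(annual_salary, WAGE_CAP)
    worker = RATE * taxable
    employer = RATE * taxable
    return worker, employer, worker + employer

# Hypothetical worker earning $50,000: each side pays about $3,100,
# or roughly 12.4 percent of salary combined.
print(social_security_contribution(50_000))   # approximately (3100.0, 3100.0, 6200.0)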
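The Medicare Part D cost sharing described in this entry can be sketched the same way. The function below is a rough illustration built only from the figures given above (a $250 deductible, 75 percent coverage of drug costs between $250 and $2,850, and 95 percent coverage once out-of-pocket spending passes $3,600); it ignores the monthly premium and is not a definitive benefit calculator.

def part_d_out_of_pocket(total_drug_costs):
    """Estimate a beneficiary's annual out-of-pocket drug spending (premiums
    excluded) under the 2006 benefit structure described in this entry."""
    DEDUCTIBLE = 250          # beneficiary pays the first $250
    INITIAL_LIMIT = 2_850     # Medicare pays 75 percent of costs from $250 to $2,850
    CATASTROPHIC_OOP = 3_600  # after $3,600 out of pocket, Medicare pays 95 percent

    oop = min(total_drug_costs, DEDUCTIBLE)
    remaining = max(total_drug_costs - DEDUCTIBLE, 0)

    # Initial coverage: beneficiary pays 25 percent of costs up to the initial limit.
    initial_band = min(remaining, INITIAL_LIMIT - DEDUCTIBLE)
    oop += 0.25 * initial_band
    remaining -= initial_band

    # Donut hole: beneficiary pays 100 percent of costs until out-of-pocket
    # spending reaches the catastrophic threshold.
    gap = min(remaining, max(CATASTROPHIC_OOP - oop, 0))
    oop += gap
    remaining -= gap

    # Catastrophic coverage: beneficiary pays 5 percent of any further costs.
    return oop + 0.05 * remaining

print(part_d_out_of_pocket(5_000))   # 3050.0 for a hypothetical $5,000 in annual drug costs

Under these assumptions, a beneficiary with $5,000 in total drug costs would pay roughly $3,050 out of pocket, which shows how quickly spending accumulates once the donut hole is reached.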
arms control In the lexicon of the new millennium, the term arms control is frequently equated with the “war on terrorism,” defined by President George W. Bush’s quest to prevent countries such as Iraq, Iran, and North Korea from developing weapons of mass destruction (WMDs) and systems for their delivery in the wake of the terrorist attacks on New York and Washington, D.C., on September 11, 2001. The United States, unable to secure approval from the UN Security Council, invaded Iraq in March 2003 with a nominal international “coalition of the willing” and ousted Iraqi dictator Saddam Hussein. Bush, supported by British prime minister Tony Blair, believed intelli-
gence information corroborated allegations that Hussein had amassed weapons of mass destruction, such as the poison gas he had used against the Kurdish minority in northern Iraq in 1988, and intended to use them against America and its allies. Following the invasion, no such weapons were discovered. A decade earlier, in the Persian Gulf War precipitated by Iraq's invasion of the emirate of Kuwait, the United States led a military coalition backed by the United Nations that successfully restored Kuwait's independence in spring 1991. As part of the ceasefire agreement with Hussein following Operation Desert Storm, the victorious coalition imposed "no-fly zones" in the north and south of Iraq. The objective of this policy of containment was to prevent Hussein from engaging in military maneuvers and to curb his ability to threaten his neighbors or develop WMD. The United States's successive invasions of Iraq, first in 1991 and again in 2003, accentuate a growing concern among American presidents, as well as leaders of international organizations such as the United Nations, about the proliferation of nuclear, chemical, and biological weapons with the potential to devastate civilian populations, especially by Islamic fundamentalist regimes in the Middle East such as Iran, which have professed their unyielding animus towards the West and have taken steps to produce material that could be used for the production of nuclear weaponry. Preventing the proliferation of nuclear weapons in the developing world has been a major concern of the United Nations for several decades and took on added importance after India and Pakistan, bitter rivals, each conducted their own tests of nuclear bombs in 1998. The United States was a signatory to the 1968 Nuclear Non-Proliferation Treaty (NNPT), which banned the transfer of nuclear weapons technology by the nuclear powers of that era—China, France, the Soviet Union, the United Kingdom, and the United States—to developing nations. Nonetheless, the United States did not ratify the Comprehensive Test Ban Treaty of 1996, which sought to preclude all nations from conducting nuclear weapons testing. India and Pakistan declined to sign the treaty, and Israel, though a signatory, has not ratified it. And North Korea withdrew from the NNPT in 2003 and now claims to have nuclear weapons. Neither bilateral nor multilateral
Soviet leader Mikhail Gorbachev and President Ronald W. Reagan sign the Intermediate-Range Nuclear Forces Treaty. (Collection of the District of Columbia Public Library)
talks with the Communist regime in Pyongyang have borne fruit in convincing the North Korean leadership to abandon its nuclear weapons program, and in summer 2006 the North Korean government threatened to test a long-range intercontinental ballistic missile capable of reaching U.S. territory (Alaska or Hawaii). The Chemical Weapons Convention of 1993, signed by more than 170 countries including the United States, went into effect in 1997. The convention created an independent agency, the Organization for the Prohibition of Chemical Weapons, based in The Hague, Netherlands, to monitor stockpiles and compliance with the treaty. However, Syria and North Korea—two countries that are thought to have chemical weapons production facilities—are not signatories to the 1993 convention, which also does not cover biological weapons (e.g., viruses and other organisms, such as anthrax).
Without a doubt, the stockpiling of nuclear arms by the United States and the Soviet Union in the cold war era, which lasted from the end of World War II in 1945 to the fall of the Berlin Wall and the collapse of communism in Eastern Europe in 1989, was of paramount importance to all U.S. presidents. Nuclear arms control dominated bilateral relations between the two superpowers for four decades. Both the United States and the Soviet Union began to amass nuclear weapons on their own soil and array such weapons in countries within their spheres of influence following World War II—for the United States the North Atlantic Treaty Organization (NATO) countries of Western Europe and for the Soviet Union the Warsaw Pact nations of Eastern Europe. The superpowers’ mutual reliance on nuclear weaponry as their central means to deter an attack or invasion, as well as their mutual distrust of one another, led U.S. and Soviet leaders to wrestle with
ways to ensure a nuclear holocaust could be avoided while simultaneously dissuading potential aggression. Deterrence theory, based on the notion of mutually assured destruction (MAD), posited that the risk of nuclear war could be diminished if neither superpower had a strategic advantage in launching a first strike against the other. The certainty that a first strike would be met with massive retaliation and equivalent damage and civil casualties produced a stalemate, which, paradoxically, was thought to promote stability. To this end, advocates of détente—a relaxation of tensions between the superpowers—sought to negotiate treaties that would lessen the probability of a nuclear conflagration by reducing certain types of weapons, banning other types of destabilizing weapons altogether, and creating a framework of inspections in each nation to build confidence and create transparency so that the doctrine of MAD was secure. President Dwight David Eisenhower (1953–61) led early efforts to avoid an arms race with the Soviet Union. Eisenhower, a fiscal conservative, believed that unfettered spending for nuclear weapons would bankrupt the U.S. economy. In 1953, Eisenhower announced his “New Look” policy, proposing “more bang for the buck”—investing in fewer intercontinental ballistic missiles (ICBMs) that could reach the Soviet Union but ones capable of massive retaliation should the Soviet Union decide to strike the United States. The New Look policy emphasized equipping ICBMs with nuclear warheads with substantially larger destructive potential (i.e., yields of more than 100 kilotons, about 10 times larger than the bombs dropped on Hiroshima and Nagasaki, Japan, during World War II). The policy was entirely consistent with the tenets of MAD. At the same time, Eisenhower sought to forge a relationship with Soviet leader Nikita Khrushchev, who succeeded Joseph Stalin in 1953. In 1959, Eisenhower launched a “Crusade for Peace,” planned to visit the Soviet Union, and announced a summit in Paris, France, aimed at reaching an arms reduction treaty. Khrushchev walked out of the 1960 summit when Eisenhower first denied and then refused to apologize for spying on the Soviet Union through the use of high-altitude U-2 spy planes. That year the Soviets shot down a U-2 spy plane and captured its pilot, Francis Gary Powers, who was put on trial before being expelled
for a Soviet agent in 1962. The failed summit was potentially one of the greatest missed opportunities for a substantial reduction in U.S. and Soviet arms. Without a doubt, the Cuban missile crisis proved the most destabilizing moment for MAD during the cold war. The two superpowers seemingly stood on the brink of nuclear war in what is often referred to as the “missiles of October.” In July 1962, the Soviet Union, in all likelihood as a response to U.S. deployment of medium-range nuclear weapons in Turkey, began a program to situate its own medium-range nuclear missiles on the island nation of Cuba, just 90 miles from the U.S. coast. In an extremely tense two weeks from October 16 until October 28, 1962, President John F. Kennedy and his advisers confirmed the presence of the Soviet missiles in Cuba and settled on a restrained though risky policy that included a naval blockade of the island. Soviet ships heading to Cuba eventually turned away. Through private channels, Kennedy agreed, as a face-saving measure for Soviet leader Nikita Khrushchev, to remove the U.S. missiles from Turkey within six months once the Soviets abandoned their missile plans for Cuba. Khrushchev did order the removal of the Soviet missiles from Cuba. In late November 1962, Kennedy ended the naval blockade of the island, and the crisis ended peacefully. In the aftermath of the Cuban missile crisis, U.S.-Soviet arms control negotiations stalled, though some innovations, such as the establishment of a telephone “hotline” between U.S. presidents and Soviet leaders, underscored the recognition by both sides of the need to avert future crises. By the early 1970s, President Richard Nixon negotiated the Anti-Ballistic Missile (ABM) Treaty with the Soviets and sought limits on nuclear weapons through the Strategic Arms Limitation Treaty (SALT I). The ABM Treaty ensured that neither country would construct a nationwide antimissile defense system protecting its cities and military sites, which would have given a strategic advantage to one country. The ABM Treaty remained in effect for 29 years after its ratification by the U.S. Senate in 1972. President George W. Bush ultimately withdrew the United States from the ABM Treaty in late 2001, citing the need to establish a national antimissile defense system to thwart terrorists who might come into possession of nuclear weapons or other WMD.
President Ronald Reagan broke with his cold war predecessors and took a different approach to arms control issues with the Soviet Union, which he called an “evil empire.” Like Kennedy in 1960, who alleged a “missile gap” in the United States’ retaliatory capacity in the event of a first strike by the Soviet Union, Reagan campaigned for an increase in nuclear weapons stockpiles to counter a perceived Soviet advantage, particularly with respect to intermediate-range weapons stationed in Warsaw Pact nations and aimed at Western European countries. In 1983, Reagan won German chancellor Helmut Kohl’s support for placing intermediate-range Pershing II missiles in West Germany to counter Soviet SS-20 missiles in Eastern Europe. Reagan also marshaled congressional approval for the development and deployment of MX missiles (short for “missile experimental”)—long-range ICBMs that could carry multiple nuclear warheads—and increased deployments of submarine-based Trident II missiles and cruise missiles. Reagan’s “reversal” of positions on the arms buildup began with Mikhail Gorbachev’s accession to the post of general secretary of the Communist Party of the Soviet Union in 1985 following the death of Konstantin Chernenko. Reagan’s advocacy of the Strategic Defense Initiative (SDI), a space-based anti-ballistic missile system, troubled Gorbachev immensely, as it threatened to undermine the ABM Treaty. Many analysts also believed that whether the program was achievable was beside the point, since Gorbachev concluded that the Soviet Union could not compete with it technologically or economically. In the fall of 1986, Reagan and Gorbachev met for a summit in Reykjavík, Iceland. Reagan refused to concede on SDI, dubbed by the media (often pejoratively) “Star Wars.” While the summit was a short-term failure, it did pave the way for the Intermediate-Range Nuclear Forces (INF) Treaty of 1987. The INF Treaty eliminated Soviet SS-20 missiles in Eastern Europe, Pershing II missiles in NATO countries, and ground-launched cruise missiles. Reagan also opened negotiations with the Soviets on the first Strategic Arms Reduction Treaty (START), which President George H. W. Bush completed and signed in 1991. Together, the INF and START agreements limited the number of nuclear weapons the United States and the
Soviet Union could possess by phasing out stockpiles of medium- and long-range missiles. By the time President George H. W. Bush took office in 1989, domestic upheaval in the Soviet Union was swelling. By the end of the year, with Gorbachev’s announcement of the “Sinatra Doctrine”—that the Soviet Union would not intervene in the internal affairs of Warsaw Pact nations and would allow them to determine their own political and economic fates— arms control issues faded from public concern. The symbolic fall of the Berlin Wall signaled the death knell for communist regimes in Eastern Europe. Moreover, by 1991, Gorbachev’s economic and political reforms—perestroika and glasnost, respectively— hastened the ultimate dissolution of the Soviet Union in late 1991. Still, arms control remained a priority issue for the administration of George H. W. Bush. Bush brokered congressional support for an aid package for the Soviet Union to ensure that nuclear weapons under the control of the Russian military be decommissioned and not find their way to international black markets, where rogue states or terrorists might be able to purchase them. Nuclear weapons in former Soviet republics, such as Ukraine, became the purview of the newly independent countries of the Commonwealth of Independent States (CIS), with which the Bush and Clinton administrations sought bilateral dialogue. See also defense policy; foreign policy. Further Reading Levi, Michael A., and Michael E. O’Hanlon. The Future of Arms Control. Washington, D.C.: Brookings Institution Press, 2005; Maslen, Stuart. Commentaries on Arms Control Treaties. New York: Oxford University Press, 2005. —Richard S. Conley
collective bargaining
Collective bargaining is the formal process of contract negotiations between an employer and representatives of its unionized employees. The goal of this process is to reach a long-term agreement regarding wages, hours, and other terms and conditions of employment. While bargaining, both parties are legally obligated to meet at reasonable times and negotiate in good faith.
However, neither party is obligated to agree to a proposal or required to make a concession that they feel is unfair. When the negotiations end in an agreement, the result is often referred to as a collective bargaining agreement. When the negotiations fail to end in an agreement, a strike or lock-out may be used by one party to gain negotiating leverage over the other. In rare cases, the employer may choose to go out of business rather than reach an agreement with its employees. In order for collective bargaining to occur, two important criteria must be met. First, the employees must be organized under one labor union that represents them in the bargaining process. Second, there must be the need for a new collective bargaining agreement, because either the union was recently certified by the employees or the current collective bargaining agreement is about to expire. The process of collective bargaining brings a private form of government to a unionized company. Like the government of the United States, a collective bargaining agreement results in a two-party system, with a legislative function, an executive function, and a judicial function. The legislative function occurs in the bargaining process itself. Just as legislators attempt to reach an agreement on new laws, the parties of collective bargaining try to reach an agreement on a new labor contract. Commonly negotiated terms involve wages, hours of work, working conditions and grievance procedures, and safety practices. The parties may also negotiate health care and retirement benefits, supplementary unemployment benefits, job characteristics, job bidding and transfer rights, and the seniority structure. Once the parties come to an agreement, the executive function of the collective bargaining process comes to life. Both parties are legally required to execute the terms of agreement. For instance, if the agreement stipulates that new employees must be paid an hourly wage of $35.00 per hour, the employer must pay exactly $35.00 per hour, no more, no less. Similarly, if the agreement requires employees to work at least 40 hours per week, then the employees must work at least 40 hours per week. If either party does not execute its side of the agreement properly, then the judicial function of the collective bargaining process is initiated. Both the agreement itself and the applicable laws specify what
actions can be taken to resolve improper execution of the contract. These actions may include the use of grievance boards, mediators, or the state or federal courts. There is evidence that labor unions began bargaining collectively in the United States prior to 1800. However, collective bargaining was not introduced into the American legal system until 1806, when the Philadelphia Mayor’s Court found members of the Philadelphia Cordwainers guilty of conspiracy for striking. For more than 100 years, there were no formal laws that created legal boundaries of collective bargaining; there were only legal precedents. The first law to recognize the role of the union as a bargaining agent was the Clayton Antitrust Act of 1914. In it, Congress wrote that unions and their members shall not “be held or construed to be illegal combinations or conspiracies in restraint of trade under the antitrust laws.” It was also intended to prohibit federal courts from issuing orders to stop legal strikes. Despite the intent of the Clayton Antitrust Act, the U.S. Supreme Court quickly negated much of the law by arguing that federal courts maintained the power to stop strikes that might eventually be deemed illegal. The Norris-LaGuardia Act of 1932 made it much more difficult for federal courts to issue orders against a union and its leaders in order to stop strikes. It also clearly defined labor disputes and made it easier for workers to join unions and bargain collectively. It is important to note that the Norris-LaGuardia Act did not prevent state courts from issuing orders to stop strikes, only federal courts. However, many states did enact similar laws in the ensuing years that resolved this loophole. The key piece of collective bargaining legislation was and still is the Wagner Act of 1935, better known as the National Labor Relations Act (NLRA). The centerpieces of the NLRA are the five key duties it places upon employers: employers must (1) not interfere with the employees’ choice to form or run a union, (2) not interfere with the union’s affairs, (3) not discriminate between union and nonunion employees, (4) not discriminate against employees who accuse the employer of unfair labor practices under the law, and (5) bargain collectively in good faith with the representatives chosen by the employees.
The NLRA provides a legal framework for employees to choose their bargaining representative via a vote by secret ballot. It also established the National Labor Relations Board (NLRB), which is responsible for enforcing the NLRA and levying penalties as necessary. While the NLRA is a broad piece of legislation, it specifically excludes employees of the federal government from its protections. The Labor Management Relations Act (LMRA), also known as the Taft-Hartley Act of 1947, established a broad set of rules for collective bargaining. While the NLRA has historically been described as prounion, the LMRA is often viewed as its proemployer successor. The latter upholds and reinforces many of the provisions of the former, but does so in a way that shifts some of the power in the bargaining process from the unions to the employers. For instance, the LMRA adds regulations that make it easier for employees to choose not to collectively bargain and to decertify their union as their bargaining representative. In addition, this act provides the right to free speech to employers, who were previously given only the opportunity to express their opinion about union elections as long as doing so was not perceived as threatening or otherwise improper. It also specifies that employees can only vote to certify a union once a year. There are two important laws on the books that act as substitutes for the NLRA. The Federal Labor Relations Act (FLRA) of 1978 provides much more limited rights for employees of the federal government than those provided under the NLRA, with the exception of postal employees. As President Ronald Reagan once said, “Government cannot close down the assembly line. It has to provide without interruption the protective services which are government’s reason for being.” The FLRA provides federal government workers with the ability to bargain collectively but makes it illegal for them to strike. The Railway Labor Act (RLA) of 1926 applies specifically to railway and airline employees. It aims to allow for collective bargaining in a way that minimizes the disruption of air and rail transportation. Collective bargaining agreements covered by the RLA never expire. Rather, the parties involved may attempt to renegotiate the contract after its ending date if they feel an inequity exists in the agreement.
The intention of this feature of the RLA is to ensure continuity of service. Without a fixed ending date, strikes and lockouts can be more easily avoided. Bargaining is completed under indirect governmental supervision, and the president of the United States has the authority to temporarily force striking or locked-out laborers back to work. Most people are familiar with collective bargaining because of its impact on the professional sports industry. Fans of every major sport in the United States have suffered through a lock-out or a strike. Players in the National Football League struck in 1982 and then once again in 1987. Major league baseball fans were angered by a strike in 1981, then another in 1994, which caused the cancellation of the 1994 playoffs and World Series. The owners of the National Basketball Association locked out the players at the start of the 1998–99 season, resulting in a 50-game season rather than the usual 82 games. Finally, a lock-out at the start of the 1994–95 season in the National Hockey League was followed by another 10 years later, forcing the cancellation of the entire 2004–05 season. Labor disputes in sports are far less damaging to society than those in other industries. For example, the strike called by the operators of New York City’s buses and subways in 2005 was one of the most costly in history. The strike began on the morning of December 20, in the midst of the holiday shopping season, and lasted until the afternoon of December 22. By many estimates, the strike cost the city, its businesses, and its citizens in excess of $1 billion per day. The true costs of a large strike are rarely just monetary. A 1981 walk-out by the air traffic controllers nearly shut down the nation’s air transportation system and resulted in 11,359 controllers being fired. A strike by Boston’s police officers on September 9, 1919, put Bostonians through a terrifying night of murders, looting, and riots before the National Guard restored order the next day. However, the number of major work stoppages per year, defined by the United States Bureau of Labor Statistics (BLS) as strikes or lock-outs involving more than 1,000 employees, has been falling since 1974. The BLS reported just 22 major work stoppages in 2005, compared to 424 in 1974. Overall, union representation and the resulting number of employees covered by a collective bargaining
agreement in the United States have been falling since the 1950s. The decline in union membership may be attributed to changing labor laws, the shifting of manufacturing jobs to countries with less expensive labor, an improvement in working conditions and wages, and a negative public perception of unions. In the airline industry, which has historically been the most organized sector of the economy, union representation has fallen from a peak near 50 percent down to about 39.5 percent. The automotive industry has also seen a significant decrease in the number of its employees covered by collective bargaining agreements, as foreign manufacturers enter the marketplace and American companies outsource the manufacturing of their parts. However, some industries have not seen a decrease in union representation of employees in recent years. Due to the changes in the American health-care system, union membership among nurses, lab technicians, medical scientists, and occupational and physical therapists is increasing. Hotel and casino employees have experienced a steady rise in union membership as hotels and hotel chains grow larger and their demand for skilled labor grows as well. While collective bargaining is not as prominent in the American economy as it once was, it is still a fixture of the employment landscape that will be around for years to come. See also Department of Labor; labor policy. Further Reading Clark, Paul F., John T. Delaney, and Ann C. Frost. Collective Bargaining in the Private Sector. Champaign, Ill.: Industrial Relations Research Association, 2002; Herman, E. Edward. Collective Bargaining and Labor Relations. Upper Saddle River, N.J.: Prentice Hall, 1997; Hilgert, Raymond L., and David Dilts. Cases in Collective Bargaining and Labor Relations. New York: McGraw-Hill/Irwin, 2002. —David Offenberg
Consumer Price Index
The Consumer Price Index (CPI) is a measure of the average change in the prices paid over time by urban households for a market basket of goods and services. Published monthly by the Bureau of Labor Statistics (BLS), the Consumer Price Index is calcu-
lated for two groups: households of clerical workers and wage earners (CPI-W) and for all urban consumers (CPI-U), which includes all employees in the CPI-W along with professionals, managers, technical workers, the self-employed, and the unemployed. These groups account for 32 percent and 87 percent of the population, respectively. The index is published using unadjusted and seasonally adjusted data, whereby seasonal indexes control for changes that occur at the same time and magnitude every year, such as holidays and climate patterns. In 2002, the Bureau of Labor Statistics introduced a new revisable index, the chained Consumer Price Index for all consumers (C-CPI-U), which is also published monthly but allows for two subsequent rounds of revision annually before being finalized. The Consumer Price Index has three major uses: as an economic indicator, as a deflator of other economic series, and as a means of adjusting dollar values. As the most widely used measure of inflation, the Consumer Price Index both influences the formulation and tests the effectiveness of government economic policy, including monetary policy. As a deflator, the Consumer Price Index is used to translate retail sales, hourly and weekly earnings, and components of national income accounts into inflation-free dollars. Finally, and possibly most importantly to U.S. citizens, the Consumer Price Index functions as the basis of indexation arrangements for consumers’ income payments, levels of government assistance, and automatic cost-of-living adjustments for millions of employees. According to the U.S. Department of Labor, changes in the Consumer Price Index affect the incomes of 80 million workers, 48 million Social Security beneficiaries, 20 million food stamp recipients, and 4 million civil service retirees and survivors. The Consumer Price Index also influences the choice of income tax brackets, the Department of the Treasury’s inflation-indexed government debt and inflation-protected bonds, and many private labor contracts as well. Calculation of the Consumer Price Index begins with the selection of the market basket of consumer goods and services, including food, housing, clothing, transportation fees, health and dental care, pharmaceuticals, and other items generally purchased for day-to-day living. One quarter of the market basket is updated each year, producing a full
rotation of all items every four years. The prices of these goods and services are collected in 87 geographic areas across the nation, including the country’s 31 largest metropolitan areas. Taxes are included in the index, as they are additional expenses incurred by consumers. Most prices are obtained by trained Bureau of Labor Statistics representatives who make telephone calls or personal visits to approximately 50,000 housing units and 23,000 retail establishments. Retail outlets may include catalog vendors or Internet stores in addition to traditional brick-and-mortar businesses. Prices of fuel and a few select items are collected monthly in all locations, while prices for the rest of the sample goods and services are collected every month in only the three largest metropolitan areas and every other month in the remaining areas. Each component of the Consumer Price Index market basket is assigned a weight to reflect its importance in consumer spending patterns. These expenditure weights, along with the choice of market basket items, are derived from the Consumer Expenditure Survey, which is based on a representative sample of households. The weights are updated every two years. Local data are aggregated to form a nationwide average, but separate indexes are also published by region, by population size, and for 27 local areas. The index measures changes in price relative to a specified reference date, defining the Consumer Price Index to be equal to 100 in a reference base period. Currently, the reference base period is 1982–84. For example, the CPI-U in May 2006 was 202.5, meaning that prices had increased 102.5 percent since the reference base period. The Consumer Price Index is also used to determine the inflation rate, or the change in the price level from one year to the next. To calculate the inflation rate, the prior year’s index is subtracted from the current year’s index, the difference is divided by the prior year’s index, and this number is multiplied by 100 to generate a percentage. For example, the CPI-U in May 2005 was 194.4. Using this information and the May 2006 index, the inflation rate for this period can be calculated as (202.5 − 194.4) ÷ 194.4 × 100 = 4.17 percent. (A short worked sketch of this arithmetic appears at the end of this entry.) The price level has increased every year since 1975, though the rate of increase has changed from
being rapid during the early 1980s to slower during the 1990s. Between 1975 and 2005, the inflation rate averaged 5 percent a year, though it occasionally exceeded 10 percent and once was as low as 1 percent. Recently, increases in the cost of oil have driven up transportation costs, with energy prices in May 2006 up 23.6 percent over one year prior. This, combined with a reported 1.9 percent increase in the price of food and 2.4 percent increase in all other items, is responsible for the 4.17 percent inflation rate. Given that the Consumer Price Index has many practical uses with significant implications, measurement accuracy is of extreme importance. If the index is biased or provides a mismeasured rate of inflation, millions of workers and welfare recipients will be disproportionately compensated in cost-of-living adjustments. According to the Boskin Commission assigned to examine potential bias in the Consumer Price Index, if the index reported a change in the cost of living just 1 percentage point over the true value from 1997 to 2006, it would cost the government approximately $135 billion in deficit spending in 2006. Besides this cost of overcompensation, the government also uses the Consumer Price Index to maintain price stability. Costly efforts to avoid nonexistent increases in inflation, along with the cost of unanticipated inflation, make a bias in the Consumer Price Index in either direction cause for concern. Given the importance of accuracy in measuring inflation, areas of potential bias in the Consumer Price Index have been examined by numerous economic experts. The consensus is that the Consumer Price Index tends to overstate inflation, although the size of the bias has been estimated at between 0.3 and 1.6 percent. Averaged across all studies, the Consumer Price Index probably overvalues inflation by around 1 percent. There are five main sources of bias in the Consumer Price Index: substitution, quality change, new item, new outlet, and weighting bias. Substitution bias is when the index overstates changes in the cost of living by ignoring substitutions that consumers make in response to a change in relative prices. For example, if the price of chicken rises faster than the price of beef, consumers tend to buy more beef and less chicken. This bias can occur both across items
(substituting beef for chicken) and within items (substituting generics for brand-name items) and may account for roughly one-third of the upward bias in the Consumer Price Index. The Bureau of Labor Statistics has made several changes in an attempt to correct this source of bias, such as the introduction of the Chained CPI in 2002 and the use, since 1999, of an aggregation formula that assumes a certain amount of substitution. The next source of bias in the Consumer Price Index is the effect of quality changes. New models of cars and televisions generally cost more than the versions they replace, although this improvement in quality is not measured by the index. Therefore, a price rise that is, in fact, a payment for improved quality might be misinterpreted as inflation. In an attempt to correct this bias, the Bureau of Labor Statistics has used econometric models to estimate the value of different item characteristics in the market. This helps to clarify the difference between an increase in price and an increase in quality and is used for items like computers, televisions, refrigerators, DVD players, and college textbooks. The impact of new goods falls along the same lines, as it is challenging to ascertain the effect of the introduction of new items on welfare. The Bureau of Labor Statistics often faces difficulty in classifying new goods into preexisting categories, creating occasional long lags between their first appearance in the market and inclusion in the market basket. For example, the Bureau of Labor Statistics was criticized for long delays in adding cell phones and home computers to the index. The problem arises from the uncertainty of whether a newly introduced good will become a typical consumer expenditure or never amount to much, be it a new form of video recorder or a “ropeless” jump rope. The Bureau of Labor Statistics faces a trade-off between accepting a delay in the inclusion of essential new goods and incorporating new goods into the index that fail in the marketplace and must later be removed. The next source of bias is the somewhat recent proliferation of discount outlets. As prices rise, consumers tend to shop at discount stores more frequently. Currently, when new outlets enter the Consumer Price Index sample, any difference in price between new and old is attributed to a difference in quality, which may not always be the case. The growth
of discount chains itself suggests an outlet substitution, or a shift in consumer buying patterns. Thus, the Consumer Price Index may either overstate inflation by not properly accounting for the same-quality but less-costly discount alternatives, or understate inflation by dismissing the decrease in price as driven entirely by quality. Finally, the method by which weights are assigned to goods may cause bias in the Consumer Price Index. The expenditure weights are derived from a Bureau of Labor Statistics consumer survey composed of an interview and personal diary. The interview and diary are printed on large paper and are 143 and 67 pages long, respectively. With hundreds of questions to answer, survey respondents may not provide truthful and precise information. Respondents may purposely misreport purchases to avoid subsequent questions, have inaccurate recall, and deliberately exclude unattractive purchases such as alcohol. The expenditure weights play a vital role in the calculation of the Consumer Price Index and must be as close to the true values as is feasible in order to properly estimate changes in the price level. One suggestion for improvement is the use of scanner data, which could provide information about prices and quantities at the time of purchase. Scanner data would present its own set of problems, though, as bar codes change frequently, the cost to purchase data from private firms may be prohibitive, and many goods and services do not have bar codes. Simplifying the questionnaire may also improve the quality of information, although at the cost of losing data. The Bureau of Labor Statistics is continuing to investigate this issue. Although the Consumer Price Index is not currently a perfect measure of inflation, and a perfect measure may be an impossible feat, it nevertheless provides a reasonable estimate of price level changes and affects millions of people. The Bureau of Labor Statistics is dedicated to resolving sources of bias in the index and will continue to improve the accuracy of the Consumer Price Index. See also fiscal policy. Further Reading Abraham, Katharine. “Toward a Cost-of-Living Index: Progress and Prospects.” Journal of Economic Perspectives 17, no. 1 (Winter 2003): 45–58; Boskin,
M. J., E. R. Dulberger, R. J. Gordon, Z. Griliches, and D. W. Jorgenson. Toward a More Accurate Measurement of the Cost of Living. Final report to the U.S. Senate Finance Committee from the Advisory Commission to Study the Consumer Price Index. Washington, D.C.: U.S. Government Printing Office, 1996; Lebow, David, and Jeremy Rudd. “Measurement Error in the Consumer Price Index: Where Do We Stand?” Journal of Economic Literature 41, no. 1 (March 2003): 159–201; United States Department of Labor, Bureau of Labor Statistics Web site. Available online. URL: http://www.bls.gov/cpi/home.htm. —Jennifer Pate Offenberg
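The index and inflation-rate arithmetic described in this entry can be summarized in a short illustrative sketch. In the Python fragment below, the three-item market basket, its prices, and its quantities are hypothetical values chosen only for illustration, and the function names are likewise invented for this sketch; the BLS's actual sampling, weighting, and aggregation procedures are far more elaborate. Only the May 2005 and May 2006 CPI-U readings (194.4 and 202.5) are taken from the entry itself.

```python
# Hedged sketch only: a toy fixed-basket price index and the inflation-rate
# formula from this entry. The basket, prices, and quantities are hypothetical;
# only the CPI-U values 194.4 (May 2005) and 202.5 (May 2006) come from the entry.

def fixed_basket_index(base_prices, current_prices, base_quantities):
    """Cost of the base-period basket at current prices, relative to its
    cost at base-period prices, scaled so the base period equals 100."""
    base_cost = sum(base_prices[item] * qty for item, qty in base_quantities.items())
    current_cost = sum(current_prices[item] * qty for item, qty in base_quantities.items())
    return 100 * current_cost / base_cost

def inflation_rate(prior_index, current_index):
    """Percentage change in the price level between two index readings."""
    return (current_index - prior_index) / prior_index * 100

# Hypothetical three-item basket; base-period quantities serve as the weights.
base_quantities = {"chicken": 10, "beef": 4, "gasoline": 50}
base_prices = {"chicken": 2.00, "beef": 5.00, "gasoline": 2.20}
current_prices = {"chicken": 2.60, "beef": 5.10, "gasoline": 2.90}

print(f"Fixed-basket index (base period = 100): "
      f"{fixed_basket_index(base_prices, current_prices, base_quantities):.1f}")

# The entry's worked example: (202.5 - 194.4) / 194.4 * 100 = 4.17 percent.
print(f"Inflation rate, May 2005 to May 2006: {inflation_rate(194.4, 202.5):.2f} percent")

# Substitution bias in miniature: chicken's price rises faster than beef's, so
# consumers shift toward beef. An index built on the quantities they actually
# buy in the current period rises less than the fixed-basket index above.
current_quantities = {"chicken": 7, "beef": 6, "gasoline": 50}
substituted_index = 100 * sum(
    current_prices[i] * q for i, q in current_quantities.items()
) / sum(base_prices[i] * q for i, q in current_quantities.items())
print(f"Current-quantity (Paasche-style) index: {substituted_index:.1f}")
```

The gap between the first and last index numbers printed by the sketch mirrors the substitution bias discussed above: a fixed-basket calculation holds quantities constant and so registers a larger price increase than a calculation based on the quantities consumers actually purchase after shifting toward relatively cheaper items.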
defense policy
Defense policy refers to decisions and actions that seek to protect the interests of the United States. While homeland defense, or the protection of U.S. territory and borders, represents the most basic meaning of defense policy, the term also encompasses international actions that serve to further U.S. security. American defense policy has evolved gradually in the past 200 years, often in proportion to the expansion of the United States’ role in the world that originated in the late 19th century. During the cold war, the United States institutionalized the development of defense policy by creating a formal executive bureaucracy to assist the president in making defense decisions. In the aftermath of the terrorist attacks of September 11, 2001, the United States made significant changes in its defense policy infrastructure to adapt to the needs of the 21st century. In the century after its inception, U.S. defense policy concentrated primarily on establishing the international legitimacy of the new nation and protecting its borders, which expanded steadily throughout the contiguous United States. In 1803, Thomas Jefferson nearly doubled the territory of the United States through the Louisiana Purchase, which ensured control of the Mississippi River and its trade routes. The War of 1812 narrowly but definitively established the independence of the new nation from Great Britain. The Monroe Doctrine of 1823 expanded U.S. defense policy from the country to the hemisphere with its famous declaration that “We should consider any attempt on [the Europeans’] part to extend their system to any portion of this hemisphere as dangerous
to our peace and safety.” President James K. Polk expanded U.S. borders westward in the Mexican-American War of 1846–48 through the annexation of the territory that would become California and New Mexico. Texas and Oregon also became part of the United States in the 1840s. The expansion of the United States across the continent in this period would come to be known as Manifest Destiny. By the end of the 19th century, the growing economy in the United States spurred a greater interest in international affairs, in part to find new markets for trade but also for political reasons. As Frederick Jackson Turner wrote, “at the end of a hundred years of life under the Constitution, the frontier has gone, and with its going has closed the first period of American history.” Manifest Destiny now would extend beyond the Western Hemisphere, making the United States a world power and increasing its defense commitments. In the Spanish-American War of 1898, the United States gained control of Cuba and Puerto Rico in the Caribbean and also Guam and the Philippines in the Pacific. Yet the United States remained ambivalent over its responsibilities for collective defense vis-à-vis its allies. It did not enter World War I until 1917, three years after the global conflict began, and then only after German submarine warfare violated the rights of neutral countries such as the United States. Although American defense interests had grown, defense was still defined largely in national terms. The first U.S. effort to incorporate collective security into defense policy failed miserably. After World War I, President Woodrow Wilson launched a grassroots campaign to build support for the Treaty of Versailles, but the treaty failed to garner a two-thirds vote in the Senate, falling short by seven votes. Consequently, the United States did not participate in the League of Nations. For the next decade, U.S. defense policy focused primarily on protecting economic opportunities and limiting military spending. The United States hosted an international conference on naval disarmament in 1921–22, which limited the naval power of the United States, Great Britain, Japan, France, and Italy. As Adolf Hitler rose to power in Germany in the 1930s, the United States passed neutrality laws four times to ensure that it would not participate in the burgeoning conflict. After World War II began, the United States provided some aid to
its allies through “cash and carry” and Lend-Lease programs, but its defense policy remained narrowly focused. Only after Japan attacked Pearl Harbor on December 7, 1941, did fighting in World War II become part of American defense policy. The Allied victory renewed questions about American global responsibilities in defense policy. While defense spending dropped sharply following World War II, U.S. security interests had expanded considerably with the origins of the cold war. The Truman Doctrine and Marshall Plan illustrated U.S. commitment to defending itself and its allies from the encroachment of communism. To assist the president in making defense policy, Congress passed the National Security Act of 1947, which created the National Security Council, the Department of Defense (previously the Department of War), and the Central Intelligence Agency, and formally authorized the positions of the Joint Chiefs of Staff. The United States institutionalized the development of defense policy to ensure that its wide-ranging interests in the cold war would be pursued fully and systematically. U.S. defense policy during the cold war can be defined broadly as containment of communism, though important variations emerged in different administrations. John Lewis Gaddis writes that U.S. defense policy in the cold war shifted regularly between “symmetrical” strategies, which aimed to meet any challenge posed by the Soviets regardless of cost, and “asymmetrical” strategies, which focused on selective interests and sought to control costs. The Truman administration’s initial containment policy focused primarily on political and economic interests, although it did emphasize collective security with the creation of the North Atlantic Treaty Organization (NATO). After the Korean War began, Truman sharply increased defense spending, and U.S. interests in the cold war were more broadly defined. President Dwight D. Eisenhower reined in defense spending with his “New Look” policy, while Presidents John F. Kennedy and Lyndon B. Johnson pursued a policy of “Flexible Response” that again expanded U.S. interests and costs. In the administration of President Richard Nixon, the United States made significant advances in reducing threats to its defense by renewing ties with both China and the Soviet Union. President Jimmy Carter
tried to continue détente, but his efforts halted after the Soviet invasion of Afghanistan in 1979. President Ronald Reagan initially viewed the Soviet Union suspiciously, calling it an “evil empire” and increasing defense spending so the United States would be prepared to meet any threat posed by the communist superpower. In particular, Reagan initiated the Strategic Defense Initiative (SDI), popularly known as the “Star Wars” plan, which aimed to create a defense shield to protect the United States from attack. While Reagan steadfastly maintained his dedication to SDI, in his second term he also began to pursue arms control negotiations with the new Soviet leader Mikhail Gorbachev, who came to power in 1985. The two leaders eventually participated in four summit meetings and signed the Intermediate-Range Nuclear Forces Treaty in 1987. The United States also restructured its defense policy apparatus with the Goldwater-Nichols Act of 1986, which gave more power to the chairman of the Joint Chiefs of Staff as well as to regional military commanders. The end of the cold war prompted a reassessment of U.S. defense policy in the 1990s. The “new world order,” as President George H. W. Bush famously called it, permitted nations to work together in ways not possible during the cold war. When Iraq invaded Kuwait in the summer of 1990, the United States and the Soviet Union stood together in opposing the aggression. Bush successfully negotiated a United Nations resolution supporting the use of force against Iraq, and Congress ultimately passed a joint resolution supporting the use of force just days before the Gulf War began. Thus, the United States developed both internal and allied coalitions that viewed Saddam Hussein’s actions as threats to the international order and their own defense interests. Defense policy took a secondary role in the administration of President Bill Clinton because the public and the president were concerned foremost about the economy. Without immediate threats to U.S. security, U.S. defense policy lacked clear direction. Humanitarian interventions in Somalia and Haiti and NATO interventions in Bosnia and Kosovo served interests other than American defense and prompted many debates about U.S. defense needs in the post–cold war era. The Clinton administration tried to replace the containment strategy of the cold war with a strategy of “democratic enlargement,” which focused
on enacting economic reform in other nations through free markets as a means of promoting democracy. Although the phrase did not serve to replace “containment,” it did illustrate how defense policy in the 1990s focused more on common economic interests with other nations than on traditional security concerns. When George W. Bush assumed the presidency in 2001, he made some important changes in defense policy, most notably by announcing that the United States would withdraw from the 1972 Anti-Ballistic Missile (ABM) Treaty so it could pursue national missile defense freely. Bush also declared that the United States would work to contain proliferation of nuclear weapons and other weapons of mass destruction. At the same time, Bush promised to limit U.S. defense commitments, especially in the area of nation building. However, the terrorist attacks of September 11, 2001, recast the focus of defense policy to homeland security, an issue that had not commanded public attention since the cold war. Just as defense policy in the 19th century referred to protection of U.S. borders, so, too, does the term today signify foremost protection of U.S. territory. While global concerns in pursuing the continuing campaign against terrorism remain part of American defense policy, the need for homeland defense is sharply etched into the public conscience and will remain so for the foreseeable future. For this reason, the Department of Homeland Security was created in 2002, merging 22 previously independent government agencies, including the Coast Guard, Secret Service, and Customs and Border Protection. Recommendations from the commission that investigated the September 11, 2001, attacks also spurred significant changes in U.S. intelligence gathering, most notably by the creation of a Director of National Intelligence to centralize information gathering among 16 independent intelligence agencies. The National Security Strategy (NSS) of 2002 clearly outlined the multipronged strategy of the United States to prevail in the war on terror. Most controversially, the 2002 NSS left open the possibility that the United States might respond preemptively to anticipated attacks by an enemy, “even if uncertainty remains as to the time and place of the enemy’s attack.” Although the United States built an international coalition to implement this strategy in waging
war against Iraq in 2003, many states, including traditional U.S. allies, as well as the United Nations, opposed the war. Preemption remains a component of U.S. national security strategy, though its application again in the near future is uncertain. As presidents develop American defense policy in the coming years, they will have to balance the interests of the United States with the concerns of U.S. allies. In particular, questions about U.S. intervention, especially preemptive or preventive action, will require both domestic and international justification. While the definition of defense policy remains the same as it was in the early days of the republic, the audience that witnesses, and participates in, the practice of American defense is much larger. See also foreign policy. Further Reading Gaddis, John Lewis. Strategies of Containment: A Critical Appraisal of Postwar American National Security Policy. Rev. ed. New York: Oxford University Press, 2005; LaFeber, Walter. The American Age: United States Foreign Policy at Home and Abroad. 2nd ed. New York: W.W. Norton, 1994; The National Security Strategy of the United States, September 2002. Available online. URL: http://www.whitehouse.gov/nsc/nss.pdf. Accessed July 1, 2006; The 9/11 Commission Report: Final Report of the National Commission on Terrorist Attacks upon the United States. New York: W.W. Norton, 2004; Schulzinger, Robert D. U.S. Diplomacy Since 1900. 5th ed. New York: Oxford University Press, 2001. —Meena Bose
diplomatic policy
Diplomacy is communication between international actors that seeks to resolve conflicts through negotiation rather than war. Occasionally, this term refers to the management of international relations in general. Diplomatic policy, however, is the means by which states pursue their national interests. While some see diplomacy as attempting to achieve stability, peace, order, justice, or the distribution of wealth, many others see it as a means to instill one’s ideology or viewpoint or to cover for indiscretions or unsavory positions. Others have seen it simply as a precursor to more forceful action.
Diplomatic policy since the 17th century has generally been practiced by an elite diplomatic corps set apart from others in society. These elite spoke French and practiced the standard international relations concept of the balance of power. After World War I (1914–18), President Woodrow Wilson sought to create a more open diplomatic system that would not lead states into war. Large state bureaucracies increasingly relied on a meritocracy approach instead of the previous reliance on social class and old school ties. These social ties, however, remained through informal networks in government. Despite its origins in carrying reports back and forth and engaging in negotiation, diplomatic policy has come to employ persuasion and force, promises and threats, as well as what modern-day practitioners refer to as signaling. It is worth examining a few of these tools and tactics diplomacy has at its disposal as well as specific cases in U.S. diplomatic politics. Three broad groupings of diplomatic forms include coercion, inducements, and co-option. More generally, these are frequently referred to as sticks, carrots, and soft power. Coercion, or threat of using “sticks” in international relations, involves threatening the use of hard power. Power, or the ability to get another country to do what it otherwise would not have done, is classified as hard power when military effort is involved. This is directly punishing another country for its behavior. This can be through the use of military force of differing levels or economic sanctions. Military force runs the gamut from signaling dissatisfaction with another country by publicly displaying force, such as military maneuvers or exercises, to small air strikes and raids, to large invasions and occupations. While total war is risky and rare in international relations, lower-level threats of military power are common in diplomatic negotiation. Military threats are frequently left in the background as unspoken signals as to what is coming if countries do not agree. While condemning the American war in Iraq in 2003, French president Jacques Chirac did acknowledge that the threat of force was a necessary aspect of U.S. diplomacy. Economic sanctions have proven to be more problematic than military force. While this may be because of their ineffectiveness, it may also be because of
their misapplication. Economic sanctions, such as embargoes, blockades, or fines or levies against another country, may lack the directed threat of military power. While military force can destroy the opposing armed forces and force pain and suffering on a population, armed forces, and leadership, economic sanctions may find it more difficult to coerce all groups in a society. Economic sanctions may by necessity need to be more focused on different groups in a society in order to avoid punishing innocent civilians. In addition, economic sanctions are difficult to maintain, especially in a multinational setting. Since there is economic profit to be made through trade and investment as well as issues of national interest at stake, countries will frequently attempt to skirt the sanctions or become exempt from their effects. During the late 1990s, when the United States and the United Kingdom sought the imposition of economic sanctions on Iraq to force the destruction of Iraq’s weapons of mass destruction program, Jordan and Turkey regularly received exemptions from the United Nations Oil for Food program. Arguing that it was a matter of vital national security, Jordan and Turkey were allowed to gain access to Iraqi oil shipments and were generally exempted from international sanctions. Lastly, economic sanctions may be imposed with unrealistic expectations and be ill advised in many crisis situations. Economic sanctions might prevent another country from developing certain military capabilities, but they are unlikely to force a government to resign from power. Because of their misapplication, economic sanctions may be doomed to fail before they begin. It is up to the diplomat to convince other countries to cooperate with economic sanctions. A second type of diplomatic policy involves offering inducements. Frequently referred to as “carrots,” these can include trade and investment incentives as well as official foreign aid in order to persuade others to change policy positions. While this is frequently seen as an alternative to the use of military force, it can be used in conjunction with threats of military force. The “Big Three” European Union countries— France, Germany, and the United Kingdom—offered Iran economic inducements to stop its uranium enrichment program, while U.S. President George W. Bush
did the same in 2005. President Bush’s offer of inducements—selling airplane parts and aiding Iran in its bid to join the World Trade Organization—was coupled with the threat of censuring Iran in the UN Security Council, which entailed the possibility of economic sanctions. The United States used inducements in the form of economic aid during the cold war to fight the spread of communism. While two main countries, Israel and Egypt, have long maintained their position as the largest recipients of U.S. foreign aid, other countries receive more or less aid based on U.S. national interests. Currently, U.S. aid has focused on countries engaged in the “Long War” or the “War on Terror.” Afghanistan, Pakistan, Sudan, and Indonesia have therefore received greater attention than they previously did. Since President Bush’s announcement in his 2003 State of the Union speech of a five-year, $15-billion initiative to combat AIDS, U.S. aid has flowed to countries at the heart of the AIDS crisis, such as Haiti and South Africa. Previous targets for U.S. aid have been central and eastern European countries following the end of the cold war, as well as the Balkan region during the disintegration of Yugoslavia in the 1990s. Currently, the largest recipient of U.S. foreign aid is Iraq, as the United States seeks to rebuild the country after the 2003 Iraq war. Foreign aid takes many forms, but common varieties include military aid, bilateral economic assistance, economic aid to support political and military objectives, multilateral aid, and humanitarian assistance. Bilateral development aid occupies the largest share of U.S. assistance programs and is focused on long-term sustainable progress. The lead agency involved is the United States Agency for International Development (USAID). Money frequently goes to projects involved in economic reform, democracy promotion, environmental protection, and human health. HIV/AIDS projects also fall under this area, as does the Peace Corps. The second-largest area of U.S. aid is military aid. Military aid peaked in 1984 and has declined since then as a percentage of the overall aid budget. Military aid can take the form of grants or loans to foreign powers to procure U.S. military hardware, training given to foreign military officers and personnel, and aid for peacekeeping operations around the world.
The third-largest area of U.S. aid is economic aid that is used in support of U.S. political and security objectives. This includes money to support Middle East peace proposals as well as those focused specifically on the war on terror. Aid has recently increased in areas including narcotics, crime, and weapons proliferation. Congressional actions such as the Nunn-Lugar Amendment have helped pay for the removal and destruction of nuclear weapons in the former Soviet Union. In addition, biological and chemical weapons are targeted for dismantling, and antiproliferation efforts have increased. The fourth area of U.S. aid is humanitarian aid, constituting roughly 12 percent of the U.S. aid budget. This aid money is used to alleviate short-term or immediate humanitarian crises. A growing area here is food aid, carried out under the auspices of the Food for Peace program. In addition, U.S. programs send U.S. volunteers out to provide technical advice and train others in modern farming techniques. Lastly, the United States contributes money to multilateral assistance programs. Aid is sent to the United Nations, including the United Nations Children’s Fund (UNICEF) and the United Nations Development Programme (UNDP), as well as multilateral development banks such as the World Bank. A third type of diplomatic policy is to attract or co-opt other countries. Frequently referred to as soft power, this policy seeks to make other countries want what you want. Countries promote their values, culture, or political institutions in the hopes that others will emulate them. Instead of coercing countries through force, soft power attracts countries by example. Open political systems, where citizens are free to say and do what they will, are held up as examples others should copy in their own societies. Shared values, such as adherence to law, open trading systems, and respect for human rights, are also promoted as the high point of state evolution. In order to promote soft power, countries often rely on public diplomacy. Public diplomacy is the act of promoting a country’s interests, culture, and institutions by influencing foreign populations. In essence, it is a public relations campaign carried out by a country to attract like-minded countries and sway those with differing opinions. Tools used to this end include international broadcasting, such as through
the Voice of America radio network, education, sports, and cultural exchanges, as well as international information programs that some have called “propaganda activities.” President Woodrow Wilson created the Committee on Public Information during World War I in order to disseminate information overseas. During World War II, President Franklin D. Roosevelt established the Foreign Information Service to conduct foreign intelligence and disseminate propaganda. In 1942, the Voice of America program was created and first broadcast in Europe under the control of the Office of War Information. The cold war saw the mobilization of American resources with the goal of swaying other countries against communism while reassuring American allies. Granted congressional authority under the U.S. Information and Educational Exchange Act of 1948, international broadcasts accelerated to counter the Soviet information campaign in Europe. The CIA established the Radio Free Europe and Radio Liberty programs in 1950 and beamed pro-Western information into Eastern Europe and the Soviet Union. While reorganization has cut some programs and solidified control under the State Department, Congress and others looked to increase the programs after the terrorist attacks of September 11, 2001. The war on terror pointed to the need for an information campaign. The 1990s saw a decrease in the overall level of funding for public diplomacy, while at the same time international favorable ratings of the United States were high. Since 9/11 and the wars in Afghanistan and Iraq, there has been a downturn in international opinion of the United States. Recognizing the need for a revamped international information campaign, the George W. Bush administration increased funding to the State Department, and the public diplomacy budget has seen a steady increase since 2001. Recent information programs geared toward the Middle East include an Arabic language magazine and a Persian Web site. Cultural exchanges have been carried out with the Iraqi National Symphony as well as Arab women with activist and political backgrounds from 15 countries who traveled to the United States in 2002. In addition, the United States operates an Arabic-language television station, Al-
Hurra, and maintains radio stations in both Arabic and Persian. Along with an increase in spending, the Bush administration attempted to address perceived shortfalls in public diplomacy with high-profile moves. Secretary of State Colin Powell made an appearance on Music Television (MTV) in February 2002, answering questions from young people from around the world. MTV reached 375 million households in 63 countries at the time. A second act was the appointment of Karen Hughes to the position of undersecretary for public diplomacy and public affairs at the State Department in July 2005. Hughes, counselor to President Bush for his first 18 months in the White House, as well as a communications consultant to the president during the 2004 election campaign, has been a long-time adviser and confidante to President Bush. Her appointment to this position was seen as a recognition by the president of the importance of combating a declining U.S. image abroad. A frequent criticism of public diplomacy and soft power in general is whether it works and how much is needed to counter negative images. While Congress and former public diplomacy officials maintain that academic exchanges, increased through the use of scholarships, and overseas academic programs such as sponsored lectures and the building of libraries are necessary, others have pointed to the perceived lack of success. While Egypt receives one of the largest shares of U.S. aid, the Egyptian population overall has a negative view of Americans. Clearly, more needs to be done to change this perception. Diplomatic policy does not function in a vacuum, and the different aspects of diplomatic policy, whether hard or soft power, cannot exist on their own. Striking the proper balance between inducements, coercion, and attraction is a difficult task. As many politicians have recognized, carrots without sticks may be taken advantage of, while sticks without the promise of rewards may also signal a deadend policy. Further Reading Barston, Ronald P. Modern Diplomacy. London: Longman, 1997; Hamilton, Keith, and Richard Langhorne. The Practice of Diplomacy. New York: Routledge, 1994; Nye, Joseph. The Paradox of American
Power. New York: Oxford University Press, 2002; United States Under Secretary for Public Diplomacy and Public Affairs. Available online. URL: http://www.state.gov/r/. Accessed June 25, 2006. —Peter Thompson
disability policy Disability policy in the United States has followed changes in thinking about people with disabilities (hereafter PWDs). Although the stages overlap and are not distinct, a medical focus was followed by a focus on rehabilitation followed by a civil rights focus. Policy used to be focused on compensation for an impairment’s effects; it is now directed toward full social participation by PWDs. The changing focus affected policy debates in employment, transportation, housing, education, civil rights, health care, and other areas. Some policy makers increased emphasis on PWDs in the policy process, not just as passive recipients of public policies. Policy challenges remain, as is evident in deliberations over disaster preparedness, affordability and universal design of buildings, acute and long-term care, assistive technology, and access to recreation. Disability is defined differently by policy makers with different purposes. Education laws and policies usually set forth a list of conditions. Civil rights laws such as the Americans with Disabilities Act (ADA) refer to “substantial limitation of a major life activity.” These words were carried over from amendments to the Rehabilitation Act defining a “handicapped individual.” Beyond obvious examples like inability to walk or speak, policy makers debated the meaning and application of the definitional phrase. The United States Supreme Court drew a narrow interpretation, but sponsors of the ADA such as Representative Steny Hoyer (D-MD) insisted that the Supreme Court’s interpretation was “not what we intended.” Analysts with a social conception of disability insisted that policy must include restoration of the ADA and include citizens with a variety of physical, mental, cognitive, and sensory conditions. Policy would need to counter “disablement,” by which social and economic barriers were imposed on PWDs. Policy making was greatly affected by adoption of the Rehabilitation Act in 1973, the Individuals with Disabilities Education Act in 1975 (initially the Edu-
cation of all Handicapped Act) and the Americans with Disabilities Act in 1990. Other important federal legislation included the Developmental Disabilities Act in 1963, and as recently amended and reauthorized in 2000, the Developmental Disabilities Assistance and Bill of Rights Act; the Architectural Barriers Act of 1968; the Developmental Disabilities Services and Facilities Construction Act of 1970; Project Head Start (changes in 1974 required 10 percent of participants to be disabled children); the Air Carrier Access Act of 1986; the Technology-Related Assistance Act of 1988; and the Workforce Investment Act of 1998. State and local policy making in education, employment, transportation, and medical care also affects PWDs, and most federal acts rely on state and local policy makers for their implementation. The Rehabilitation Act’s role has increased over time. Section 504 of the act has been used to advance nondiscrimination and access to government offices and to entities such as schools that receive government funding. Implementation of the Rehabilitation Act was accelerated by sit-ins at federal office buildings, first in San Francisco and later in Washington, D.C., in 1977. A more recent (1998) significant addition to the act was section 508 promoting access to electronic and information technology (e.g., requiring federal Web sites to be accessible to blind users of screen readers unless it would be an “undue burden” to do so). The ADA has reflected the strengths and weaknesses of federal disability policy. Most policy-making debate has involved application of the first three of the ADA’s five titles. Title I deals with employment and provides for nondiscrimination. Under title I, some disabled people are entitled to provision of auxiliary aids (such as adaptive telephones, listening devices, or furniture) from employers. Title II refers to nondiscrimination in services provided by state and local governments. It includes provisions promoting access to transportation. In the 1999 U.S. Supreme Court decision Olmstead v. LC and EW, the Court held that title II’s nondiscrimination guarantee extended to the right of institutionalized individuals to live in a less restrictive community environment. Title III refers to “public accommodations” such as stadiums and shopping malls. They, too, are obligated to ensure nondiscriminatory access to services. The act was adopted on July 26, 1990, and its standards
under titles II and III were applicable from January 26, 1992. It included qualifiers relating to such factors as “fundamental alteration,” “undue hardship,” historic buildings, and pre-ADA construction. These qualifiers have often been misinterpreted as exemptions; instead, they call for application of different standards. Increasingly, policy recognizes many PWDs’ desire to live and work with a disability. This is a change from the Revolutionary War, in which an “Invalid Corps” retained soldiers who did not prefer immediate discharge and a pension, and even from the Social Security Act. Under that act, many PWDs under 65 receive payments either under the Supplemental Security Income (SSI) program or under Social Security Disability Insurance (SSDI, providing a higher degree of benefits based on years in the workforce). Although definitions of disability used by policy makers vary widely, the PWD population has always been great, and the proportion of the population affected by disability policies (often because of employment or the disability of a family member) much greater still. In the 2000 census, 34.8 million people 16 and older in the United States reported a physical disability, 26.8 million reported difficulty going outside, 16.5 million reported a sensory disability, 14.6 million reported a mental disability, and 11.3 million reported a self-care disability. The census used overlapping categories. Below age 16, many more Americans would be in the “mental disability” category. That category combined cognitive and psychiatric disabilities. Overall, the U.S. PWD population was 19.3 percent of the 257 million people (the 5 and older noninstitutionalized population). An active disability rights movement contributes to the shaping of policy. Centers for Independent Living work with the U.S. Rehabilitation Services Administration to provide employment services. Federally mandated protection and advocacy organizations (many affiliated through the National Disability Rights Network) interact with state and local officials. Groups such as ADAPT (initially American Disabled for Accessible Public Transit, now extended to other disability issues including personal assistant services) exert pressure on state and federal policy makers. The American Association of People with Disabilities was founded in 1995
and seeks to influence policy on behalf of the PWD community, often through mobilizing its members. Prominent public interest law firms including the Disability Rights Education and Defense Fund and Disability Rights Advocates seem to influence policy in this area. Making of disability policy in the United States happens at every level of government. Cabinet departments of Health and Human Services, Justice, Transportation, Housing and Urban Development, Labor, and Education are extensively involved; on particular issues, parts of the Homeland Security, Commerce, Defense, and State Departments are involved as well. Often, disability policy making comes from independent agencies such as the Federal Communications Commission, the Social Security Administration, the Equal Employment Opportunity Commission, and the Access Board. Since 1978, the National Council on Disability, now an independent federal agency, has offered advice to policy makers on a wide range of disability issues. Disability policy making has been spurred by an active disability rights movement. Within the Department of Education, the Office of Special Education and Rehabilitative Services (OSERS), supplemented by state and local policy makers, plays a major role in implementing the Individuals with Disabilities Education Improvement Act (2004, IDEA). The essentials of education policy under the act (and its predecessors, going back to the Education of All Handicapped Act of 1975) were zero reject (no matter how severe the disability, the federal government must ensure education); free and appropriate public education (FAPE); nondiscriminatory assessment; individualized educational program (IEP); least restrictive environment (LRE); and due process / procedural safeguards (involving parents’ and students’ participation). The federal share of funding for programs under IDEA has consistently been less than targeted. Under the Developmentally Disabled Assistance and Bill of Rights Act of 1975 (and 2000) and the No Child Left Behind Act of 2001, the federal government became involved in withdrawing or giving funds to institutions according to prescribed guidelines. Policy makers’ expressed goal has become to decrease the gap between education of PWDs and education of nondisabled children.
The Department of Justice has a Disability Rights Section, which has brought cases on service animal access, movie theater seating, building construction, and other issues. Titles II and III of the ADA are enforced partly through complaint mechanisms. The Disability Rights Section occasionally pursues litigation or mediation on behalf of individuals who filed complaints or may institute proceedings itself. Chiefly because of resource limitations, the section is only able to pursue a small fraction of alleged ADA violations. An interagency Architectural and Technical Barriers Compliance Board (Access Board) creates guidelines for the Architectural Barriers Act and the 1990 Americans with Disabilities Act. The Access Board was created in 1973 under the Rehabilitation Act. The Equal Employment Opportunity Commission (EEOC) administers the ADA’s title I. Complaints to the EEOC may involve discrimination in hiring or firing or a failure to provide auxiliary aids. The overwhelming majority of title I complaints have been rejected, and when complaints are pursued through litigation by the commission, they have encountered mixed success. Because of delays and limited remedies, the EEOC handles only a small portion of the cases of employment discrimination. A few others are handled by state agencies, including, for example, California’s Department of Fair Employment and Housing. The U.S. Department of Labor, along with state governments and nongovernmental actors (for-profit and nonprofit), administer policies seeking to address high rates of PWD unemployment (40 to 70 percent, according to different sources). Programs such as the “Ticket to Work” were designed to address the disincentives to disability employment from loss of other benefits. Since the administration of President Franklin Delano Roosevelt, policy makers have encouraged private initiatives to employ some disabled people at very low wages. A later version of earlier policies is the Javits-Wagner-O’Day (JWOD) network. Lax monitoring and occasional fraud have characterized this program designed eventually to boost disability employment. The JWOD Committee for Purchase from People Who Are Blind or Severely Disabled includes representatives from the Department of
Labor, the Department of the Army, the Department of Education, the Department of Agriculture, and elsewhere. The Federal Communications Commission (FCC) shapes changes in telephone, television, and radio services. Section 255 of the Communications Act of 1934, as amended, deals with “access by persons with disabilities.” The ADA’s title IV addresses “services for hearing-impaired and speech-impaired individuals.” The FCC administers policy on captioning of television broadcasts, telecommunications for the deaf, and other issues. Federal housing policy concerning PWDs focuses on nondiscrimination, especially through the Fair Housing Act Amendments of 1988. A landlord may not discriminate by demanding a higher security deposit, prohibiting necessary disability-related alterations, or prohibiting a PWD’s needed service animal. Although not federally mandated, universal design and “visitability” (accessibility to visitors) are promoted by the organization Concrete Change. Visitability policy is incorporated in municipal ordinances in Long Beach, California, and elsewhere. Disability advocates castigated the U.S. Department of Homeland Security’s failure to take disability into account during the 2005 evacuations in New Orleans and the Gulf Coast area from Hurricane Katrina. Following a 2004 executive order, the Interagency Coordinating Council on Emergency Preparedness and Individuals with Disabilities was established. The U.S. Department of Transportation has primary responsibility for implementing the Air Carrier Access Act. The act includes rules that new aircraft with more than one aisle must have accessible restrooms and rules on wheelchair storage, seating, and boarding. The act was passed in response to increased activism by such organizations as Paralyzed Veterans of America. Many of the initial piecemeal policies to promote accessible public transit became outdated with passage of the ADA. Many other concerns remain, however, including accessibility in bus and rail transportation. Transportation policy making has thus been characterized by PWD pressure at local, state, and federal levels. The organization ADAPT’s last two letters initially indicated its focus on “public transportation.” Today, ADAPT’s name and focus extend to other disability
rights topics, but transportation access is still a major concern. The U.S. Department of Transportation issued regulations to improve transit access, first under section 16a of the 1970 Urban Mass Transit Act and later under the Rehabilitation Act. The regulations applied to recipients of federal funds. With adoption of the ADA, lifts or ramps were routinely installed on fixed-route buses. As the 21st century began, ADAPT, Centers for Independent Living, and policy makers confronted the issue of long-term care, and particularly of personal assistance. At the federal and state levels, escalating Medicaid and Medicare expenditures (administered through the U.S. Department of Health and Human Services) were consequences of institutionalization (e.g., in state hospitals and nursing homes) and a rising life expectancy. Medicaid and Medicare are used by many PWDs to obtain health care. An "institutional bias" has meant widespread reliance on nursing homes, mitigated by the increasingly common use of waivers. The Social Security Administration is an independent federal agency. It administers Supplemental Security Income (SSI) and Social Security Disability Insurance (SSDI), relied upon by many PWDs who are not employed or whose wages are low enough to qualify. Some PWDs receive higher income with SSDI, based on contributions during prior employment. Disability policy is sometimes made inadvertently when other policy concerns are addressed. For example, the Deficit Reduction Act (2005) included a section on Money Follows the Person, rewarding state programs that allowed for consumer choice with Medicaid long-term care funds. Since the program might allow Medicaid recipients to live outside more expensive nursing homes, cost savings resulted. Government cutbacks enabled this disability-related program but have jeopardized others. PWDs' education remains underfunded, and programs to improve technology access, employment, and transportation have been implemented at a slower pace because of financial constraints. Many of the bureaus that make disability policy are ill-equipped to address the great social inequalities confronting PWDs. Pressure from the disability rights movement and help from key allies inside and outside government will therefore continue to play a vital role.
Further Reading Blanck, Peter, Eve Hill, Charles D. Siegal, and Michael Waterstone. Disability Civil Rights Law and Policy. St. Paul, Minn.: Thomson-West, 2004; Colker, Ruth. The Disability Pendulum: The First Decade of the Americans with Disabilities Act. New York: New York University Press, 2005; Colker, Ruth, and Adam A. Milani. Everyday Law for Individuals with Disabilities. Boulder, Colo.: Paradigm Publishers, 2006; Krieger, Linda Hamilton. Backlash against the ADA: Reinterpreting Disability Rights. Ann Arbor: University of Michigan, 2003; O’Brien, Ruth. Crippled Justice: The History of Modern Disability Policy in the Workplace. Chicago: University of Chicago, 2001; Scotch, Richard K. From Good Will to Civil Rights: Transforming Federal Disability Policy, 2nd ed. Philadelphia: Temple University Press, 2001; Shapiro, Joseph P. No Pity: People with Disabilities Forging a New Civil Rights Movement. New York: Times Books, 1993; Switzer, Jacqueline Vaughn. Disabled Rights: American Disability Policy and the Fight for Equality. Washington, D.C.: Georgetown University Press, 2003. —Arthur W. Blaser
drug policy The modern fight (approximately 1980 to the present) against the selling and use of illicit drugs such as marijuana, heroin, cocaine, and methamphetamines— commonly referred to as the “war on drugs”—has been at the forefront of U.S. domestic policy for the past two and a half decades. The war on drugs, as it has been waged in the modern era, is perhaps best encapsulated in federal and state laws that have imposed more severe punishments on drug sellers and drug users and in the billions of dollars in taxpayer money used to thwart the supply of drugs in the market. The significant resources used to combat the problem are a direct result of the significant social costs illicit drugs impose on American society. Drugs and drug use are often blamed for causing criminal activity, the break-up of families, loss in work productivity, and the spread of diseases such as HIV/AIDS. Although considerable attention is given to illicit drug use today, it was not always viewed as a major social problem. In 1900 drug use was an issue that
A member of the U.S. Coast Guard protects more than 11.5 tons of cocaine before a press conference. (Coast Guard)
failed to reach the government’s policy making agenda, as few saw it as a serious social problem. Opium and its derivatives, morphine, heroin, cocaine, and cannabis (marijuana) were all legal substances and freely available to whoever chose to use them. For example, cocaine was a widely used ingredient in soda-pop (Coca-Cola until 1903), medicine, and alcoholic beverages such as wine. This laissez-faire approach to drugs during this period did not come without costs, as common use of such drugs led to addiction. It is estimated that 2 to 4 percent of the American population was addicted to morphine with many of those being middle- and upper-class citizens. The political climate surrounding drugs and drug use began to change shortly thereafter as antidrug crusaders were able to make a convincing case that the growing drug epidemic was a threat to society. As a result, support grew for drug prohibition policies and policies that punished those who sold, possessed, or used cocaine or heroin. Between 1900 and the
1930s, an important political shift occurred in connection to the drug issue—for the first time, curbing the use of drugs became a legitimate goal of national government policy. These changes in the public’s perception of drugs and the social costs connected to them fostered a wave of legislation at both the federal and state levels of government aimed at reducing the sale of drugs. The first federal drug legislation, known as the Harrison Act, was enacted in 1914. The Harrison Act prohibited the sale of heroin, cocaine, and their derivatives except by a physician’s prescription. In comparison with later policies, this legislation was relatively lenient on drug users themselves, as drug use was not considered a crime under the new law; users were simply asked to turn to doctors for prescriptions to buy them. However, by the early 1930s, under considerable political pressure from the Department of the Treasury, Congress expanded the reach of antidrug
laws from those that focused solely on drug sellers to those that punished drug users as well. In 1937, Congress passed the Marijuana Tax Act, which placed federal regulations on marijuana for the first time. During the 1950s, fears about communism and organized crime were tied in political rhetoric to the drug issue, which enabled the Boggs Act of 1951 and the Narcotics Control Act of 1956 to pass by wide legislative majorities. Both of these pieces of legislation dramatically increased the penalties for violating federal drug laws. The Boggs Act imposed a mandatory two-year sentence for a first conviction of possession, five to 10 years for a second offense, and 10 to 20 years for third-time offenders. The Narcotics Control Act increased the mandatory minimum penalties even further for second- and third-time offenders. Importantly, this era, in which illicit drug use was outlawed and the sanctions for use were slowly ratcheted up, laid the foundation for the modern-day war on drugs. The current drug war primarily follows a deterrence-based approach whereby policies are designed to limit the supply of drugs into the marketplace through increased drug enforcement efforts and to impose tough sanctions (via incarceration) to punish those who sell or use drugs. Together, these policies aim to remove drug offenders from the larger community and deter would-be drug users from becoming involved with drugs in the first place. This approach to the drug problem has led to a dramatic explosion in federal and state tax dollars spent on eradicating drug supplies in the United States, steep increases in the number of individuals arrested and incarcerated on drug-related charges, and longer mandatory sentences for drug dealers and users. Following the deterrence-based approach, for example, led to dramatic increases in appropriations earmarked for the bureaucratic agencies most closely involved in drug enforcement. The Drug Enforcement Administration (DEA), the main federal agency charged with the eradication of drug supplies in the United States, saw its budget increase from $215 million to $321 million between 1980 and 1984. Antidrug funds allocated to the Department of Defense more than doubled from $33 million to $79 million, and the Customs Service's budget increased from $81 million to $278 million during this same period.
The focus given to eradication of drug supplies through tough enforcement had negative implications for those executive agencies designed to reduce the demand for drugs through treatment and antidrug education. Between 1981 and 1984, the budget of the National Institute on Drug Abuse was reduced from $274 million to $57 million, and the antidrug funds allocated to the Department of Education were cut from $14 million to $3 million. By 1985, 78 percent of the funds allocated to the drug problem went to law enforcement, while only 22 percent went to drug treatment and prevention. The Anti-Drug Abuse Act of 1986 was a major piece of legislation that symbolizes much of the deterrence-based policy approaches to the fight against drugs. This bill increased federal funds toward narcotics control efforts, instituted the death penalty for some drug-related crimes, and established tougher drug sentencing guidelines. These sentencing guidelines included new mandatory sentencing laws that force judges to deliver fixed sentences to individuals convicted of a drug crime regardless of other possible mitigating factors. Congress had initially intended for these mandatory sentencing laws to apply primarily to so-called drug “king pins” and managers in large drug distribution networks. However, analysis of sentencing records shows that only 11 percent of federal drug defendants are high-level drug offenders. During President George H. W. Bush’s administration (1989–93), a record 3.5 million drug arrests were made, causing a significant strain on federal and state prison populations, as more than 80 percent of the increase in the federal prison population between 1985 and 1995 was a result of drug convictions. More recent statistics show similar trends. In 2002 (the most recent data available), approximately one-fifth (or 21.4 percent) of state prisoners were incarcerated on drug-related charges. Moreover, drug offenders, up 37 percent, represented the largest source of jail population growth between 1996 and 2002. Given these trends, what specific set of forces helped produce U.S. drug policies that place such a heavy emphasis on reducing drug supplies and issue stringent punishments on both drug sellers and users? As mentioned earlier, part of the answer can be traced back to the first half of the 20th century, when illicit drugs were first outlawed and longer sentences were
imposed for violating federal drug laws. But the policies connected to the highly salient, modern-day war on drugs are dramatically different from those of the earlier era, both in terms of the amount of tax dollars spent on drug enforcement and in the number of people arrested and incarcerated on drug-related offenses. To better understand the current state of drug policy, it is useful to look back to presidential politics of the late 1960s. It was in the 1968 presidential election that the drug issue jumped onto the national political agenda and precipitated the buildup to the drug policies of today. Seeking to gain stronger political support in the southern states, the Republican Party, along with its 1968 presidential candidate, Richard Nixon, began highlighting issues such as crime and welfare—both issues appealed to southern white voters. Using campaign symbols that had strong racial undertones, the Republicans portrayed blacks and other minority groups as instigators of crime, an important social problem that Nixon pledged he could help solve if elected president. In priming social issues like crime, the Republicans were able to split much of the New Deal coalition by getting many poor, rural, southern white voters with hostilities toward blacks to switch their party allegiances away from the Democratic Party, which in the end helped solidify Nixon's slim electoral victory in 1968. Using strong rhetorical language while in office, President Nixon cast drugs and drug use as major causes of criminal activity in America, and ever since, the two issues have been closely intertwined in political debate. Not only did drug use cause more crime, according to Nixon, but the two issues were also closely connected in that they shared "root" causes that had to be dealt with in similar fashions. More specifically, Nixon and other ideologically conservative political leaders espoused the idea that crime and drug use were a result of individual failure—drug use was a result of poor choices made by individuals. This "individualist" view of drug use was exemplified several years later by Republican president Ronald Reagan in a nationally televised speech in which he stated, "Drug users can no longer excuse themselves by blaming society. As individuals, they are responsible. The rest of us must be clear that we will no longer tolerate drug use by anyone." The drug and crime issues viewed from this perspective
were in stark contrast to the traditional liberal Democratic perspective, which tended to blame drugs and crime on systemic factors such as poverty and homelessness. As is the case with any public policy problem, the factors that are widely understood to cause a problem also help structure the solutions to it. Drug policy over the past two and a half decades has largely been shaped in an era in which criminal activity and drug use have been viewed by many citizens and elected leaders in both the White House and Congress in very individualistic terms. Given this, it is no coincidence that deterrence-based policies such as increased drug enforcement, incarceration, and mandatory sentencing—policies that try to get people to think twice before they use drugs and punish severely those who do—have become pillars of America's drug policy. Others argue, however, that incarcerating large numbers of individuals for drug possession, creating mandatory prison sentences, and spending a large proportion of funds on cutting drug supplies are misplaced priorities. Instead, it is argued that scarce resources should be used to fund drug policies driven by the presumption that drug abuse is a medical problem and that policies should therefore promote treatment, not punishment. Motivated by stresses caused by overcrowded prisons, high prisoner recidivism rates, and growth in the use of highly addictive drugs such as methamphetamine, some U.S. states appear to be rethinking their drug policies (which over the past two decades have largely followed the deterrence-based approach taken at the federal level) and have begun to place greater emphasis on drug treatment. In 2000, voters in California, the state that had the highest rate of incarceration for drug users in the nation, enacted the Substance Abuse and Crime Prevention Act (Proposition 36) using the state's initiative process. This initiative requires that individuals arrested and charged with nonviolent drug offenses be placed into drug treatment instead of prison. Estimates suggest that this policy saved California taxpayers $1.4 billion over the program's first five years, largely because of reductions in prison costs resulting from fewer people being sent to prison on drug charges. The initiative remains controversial, however, as opponents argue that the law is too lenient on drug
offenders and that treatment success rates have been far too low. Using the significant discretion states have traditionally had under the U.S. Constitution to define criminal law and protect the health and safety of their citizens, U.S. states have also passed additional drug reform laws. The state laws that have gained the most attention are those that permit the use of marijuana for medical purposes. Since 1996, 11 states (Alaska, Arizona, California, Colorado, Hawaii, Maine, Maryland, Nevada, Oregon, Vermont, and Washington) have removed state-level penalties for marijuana use by medical patients who have a doctor's recommendation. These state laws allow marijuana to be used to treat certain diseases and to help patients cope with pain. The extent to which patients and doctors are protected under medical marijuana laws varies by state. States' medical marijuana laws conflict with federal drug policy, which does not recognize a medical use for marijuana and mandates that the drug cannot be used under any circumstances. These differences between states' medical marijuana laws and federal drug law came to a head in 2005, when the question of whether federal law, as the higher authority, overrides California's medical marijuana law reached the United States Supreme Court. In Gonzales v. Raich (No. 03-1454), the Court ruled in favor of the federal government, holding that the federal government could prosecute individuals for using marijuana even in those 11 states that explicitly permit it. The majority of the Court based its decision on the power of Congress to regulate interstate commerce. This ruling remains controversial, but the different approaches states have taken in their fight against drugs illustrate how drug policy in the United States, when examined more carefully, is becoming more diversified than is often believed. Without a doubt, drug abuse and the costs it imposes on society will remain problems that the government will be asked to solve in the years ahead. However, what is not known is the direction and form drug policies will take. Will drug policy continue to place greater emphasis on punishment of drug sellers and users, or will future policy place greater emphasis on prevention and drug treatment? In the end, these
are political questions that will be decided by legislative bodies at the federal, state, and local levels of government.
Further Reading Beckett, Katherine. Making Crime Pay: Law and Order in Contemporary American Politics. New York: Oxford University Press, 1997; Bertram, Eva, Morris Blachman, Kenneth Sharpe, and Peter Andreas. Drug War Politics: The Price of Denial. Berkeley: University of California Press, 1996; Longshore, Douglas, et al. "Evaluation of the Substance Abuse and Crime Prevention Act." University of California, Los Angeles, Integrated Substance Abuse Programs, 2005; Meier, Kenneth J. The Politics of Sin: Drugs, Alcohol and Public Policy. Armonk, N.Y.: M.E. Sharpe, 1994; Tonry, Michael. "Why Are U.S. Incarceration Rates So High?" Crime and Delinquency 45, no. 4 (1999): 419–437; U.S. Sentencing Commission. "Mandatory Minimum Penalties in the Federal Justice System: Special Report to Congress." Washington, D.C.: Government Printing Office, 1991. —Garrick L. Percival
education policy Education policy has been one of the top domestic issues in America for most of the past two decades. Citizens, the news media, and politicians across the ideological spectrum regularly call for improvements in the nation’s education system. Many states and the federal government are responding with reform proposals in the hope of improving the quality of education for all students to deal with international competitiveness concerns, as well as to address the achievement gap faced by low-income and minority students. Education policy is significant both for its efficiency and equity dimensions. Most American concerns about competing in economic terms with other nations, which were focused on Japan in the 1980s and now on China and India in the future, emphasize the fact that education is critical to building an intelligent, flexible workforce. At the same time, the huge achievement gaps in America limit the ideal that education can be the force that propels low-income children into success in the “American Dream” in their economic futures.
One of the most significant influences on the development of the contemporary reform movement was the publication of A Nation at Risk, a 1983 U.S. Department of Education report that indicated that the United States was falling behind other nations in educating its children. The report called for the establishment of academic standards as a means toward improving the education system and prompted widespread education reform in the states. Calls for educational reform, taking various forms, have continued since 1983. But while many reforms have been tried, few have demonstrated the ability to either improve average levels of performance or decrease the large achievement gap between higher-income students and lower-income students. Although there is generally widespread consensus that the nation’s public education system needs to be improved, the appropriate method of reform is the source of fierce political disagreements. The education system is incredibly large and fragmented and fraught with politics and various philosophical opinions regarding how children learn. Some of the most pressing current topics include federalism issues, the No Child Left Behind (NCLB) Act, standardized testing, curriculum policy, school choice, and school finance. Control of the American education system has historically been in the hands of local governments, with the state and federal governments playing very limited roles. Although responsibility for education is ultimately vested in state governments by their state constitutions, it is often argued that local governments are better suited to make education decisions for local students in areas such as bilingual education, special education programs, curriculum, textbooks, and funding. Over the past several decades, however, there has been an increase in state and federal responsibility for public education, both in terms of funding, especially at the state level, and more recently the establishment of standards and other requirements. Beginning in the 1950s, the federal government increased its share of responsibility for funding public education. In the 1980s and 1990s, state governments began instituting standards-based reform, whereby states specify subject matter that students are to be taught and expected to learn. Even with such standards, local school districts were still given the discre-
tion to determine how to reach such goals. In recent years, however, the federal government has passed comprehensive legislation, drastically increasing its role in the nation’s education system. The No Child Left Behind Act of 2001, discussed below, is a clear example of the recent shift in power toward the federal government, even though it gets its authority through the focus of funding low-income students. Calls for national standards or requirements, such as those mandated by NCLB, generate considerable opposition on the grounds that the local districts (or at least the states) are the more appropriate arenas for such decisions to be made. Even with these higher levels of government involved, American local school districts have more power over education than in most other countries. The NCLB of 2001 is a reauthorization of the Elementary and Secondary Education Act (ESEA), which was initially enacted in 1965. In 2002, the Education Commission of the States called it “the most significant federal education policy initiative in a generation.” The complex and comprehensive legislation requires states to pursue more stringent accountability systems and annual testing of all students in grades three through eight, as well as requiring “highly qualified” teachers. The legislation mandates school report cards and data reporting that include performance by race, income, and gender. According to the legislation, states must make progress in raising students’ proficiencies in certain areas as well as in closing the gap between advantaged and disadvantaged students. Under NCLB, there are a number of progressively more important consequences for schools and districts that do not achieve the required progress. Students whose schools are placed on “improvement status” must be given school choice options and must be provided “supplemental education services,” such as tutoring services, during the second year of improvement status. Furthermore, districts must take corrective actions such as decreasing funding, restructuring, and replacing school staff. In exchange for these stringent expectations, the legislation provides increased federal funding for education and some flexibility for using such federal dollars. The implementation of NCLB has proven quite controversial. Supporters believe that it is a major step toward focusing attention on the need for all
children to achieve. Some critics are opposed to extensive testing of students, while others focus on the high costs involved with implementing the legislation. Others are concerned with the potential for adverse consequences resulting from the school choice provisions in the legislation. In any event, states and districts are currently struggling to meet the requirements imposed by the legislation. Standardized testing refers to the use of largescale achievement tests to measure students’ mastery of designated subject matter. Such tests may also come with incentives or consequences, such as sanctions for schools with low or stagnant test scores, which add a “high-stakes” element. Examples of “high-stakes” tests include high school graduation tests, competency tests for grade level achievement, and tests used to rate schools with such categorizations as “failing” or “successful.” All of these “high-stakes” tests are tied to accountability—holding students, teachers, and schools accountable for learning at a designated level. Although standardized testing has played a role in education in the United States since at least the 1920s, current usage differs dramatically from earlier purposes. For example, the main purpose of testing in the 1950s and 1960s was to measure and monitor individual student performance and coursework. Today, the tests are also used for accountability measures, including evaluating schools’ effectiveness, as noted above. As with NCLB more generally, contentious disagreements surround the role of standardized testing. Critics object to the particular standards themselves, the federal government’s role in stipulating standards, and the types of assessments used to measure the standards. Some also argue that the greater focus on test scores shifts priorities to “teaching to the test” and potentially creates incentives for cheating. The public, however, has generally been in favor of accountability standards and testing. While certain core curriculum elements have always been part of American education—“reading, writing, arithmetic”—knowledge advances rapidly, and debates arise about what should be emphasized and how it should be taught. There is a broad notion that American education has bypassed some of the “basics” as it has become more inclusive of different perspectives and that some rigor has been lost in the
process. While the education expert Richard Rothstein and others emphasize that this comes partly from inaccurate perceptions of a prior “golden age” of American education, it has spawned a “back-tobasics” argument about what students should be taught. E. D. Hirsch’s “Core Knowledge” curriculum is one manifestation of this. What began as an idea that schools should share a well-designed curriculum with proper sequencing to provide all students with a shared knowledge foundation has evolved into a curriculum outlining specific material to be taught at each grade level. One of the features of American federalism is that there is no national curriculum. States set curriculums or allow local districts to do so. As a result, there is considerable variation compared to systems in other countries. The federal government has made attempts to create national standards in education, but such efforts are generally criticized (and thus far, neutralized) on the grounds that they go against the strong tradition of local control. Major textbook publishers probably play an implicit role in partially standardizing curriculums across states, as do the requirements of national college entrance exams such as the SAT and ACT. Curriculum policy, however, is currently undergoing a shift away from predominantly local control toward a greater influence by the states and federal government. NCLB requires that states align their standardized tests with state curriculum policy. Along with the resulting increased role of the states, there has been a noted shift toward a greater focus on math, reading, and science, the subjects at the core of NCLB. In fact, schools that fail to make “Adequate Yearly Progress” (AYP) in these areas are subject to strict sanctions, including school restructuring. Some critics have argued that NCLB’s focus on these areas has resulted in a “narrowing” of curriculum in public schools, including a decline in classroom time for other areas, such as the arts, social studies, and foreign languages. One major ongoing debate centers on whether students should be taught specific facts and knowledge, or whether more emphasis should be placed on problem solving and ways of learning to think. The former approach is likely to align more closely with standardized testing expectations. That is, agreement is easier on a set of things to know and test, while
advocates of the latter approach stress that it can lead to more innovative thinkers who are more likely to be interested in “life-long learning,” compared to the “drill-and-kill” approaches. School choice generally refers to the ability of parents to select a school for their child to attend, as opposed to the traditional assignment of a public school based on residency within particular school zones. Middle- and upper-income families have long exercised a form of school choice based on residential mobility. In recent decades, a number of different school choice options have emerged across the states, including open enrollment, charter schools, and vouchers. Options such as open enrollment within the public sector are long standing and relatively uncontroversial. More recent choices, such as charter schools and vouchers, are more controversial. Charter schools, which emerged in the early 1990s, are publicly funded schools that operate under a charter from an authorizing entity or board and may be managed by nonpublic entities such as private companies or nonprofit organizations. Charter schools are accountable to the charter granting entity but have far greater discretion in the operation of the school than traditional public schools. Their enrollment (and thus survival) is based entirely on parents choosing to send their children to the school. Voucher programs allow parents to apply some of the public money allocated for a child’s education toward private school tuition. To date, there are only a handful of voucher programs across the nation, including those in Milwaukee, Cleveland, and Washington, D.C., and they have faced significant challenges in the courts. Although voucher programs were originally advocated by free-market conservatives and Catholics, recent supporters include more minorities and parents of students attending unsuccessful urban schools. In fact, most of today’s voucher programs are targeted to such students, as opposed to universal voucher programs available to all students. Voucher programs can take various forms; policy decisions include whether to include religious schools, the amount of the voucher, eligibility requirements for participation (e.g., low-income families, failing schools), and accountability standards. School choice has a diverse mix of supporters, including individuals who view choice as a means to
improve education for underserved populations (e.g., minority and low-income children), free-market conservatives opposed to the bureaucracy of public education, members of the education establishment who desire autonomy from administration, and individuals seeking to support religious education. School choice opponents argue that many of these options have a harmful effect on the public school system by taking away funds from public education or “creaming” the most successful students from traditional public school. Opponents have also argued that bureaucracy is an effective structure for managing the diverse problems and issues within public schools. Opponents of vouchers, specifically, have claimed that such programs violate the First Amendment’s separation of church and state doctrine when religious affiliated schools are permitted to participate. Taken as a whole, school choice is a complex issue that has come to the forefront of education policy in recent years by emphasizing bottom-up parental accountability rather than top-down district control over student assignment. School funding for K-12 education represents the largest component of most state budgets. The federal government, states, and local governments share the costs of financing K-12 education, with more than 90 percent of the burden typically split relatively evenly between state and local governments. One of the significant debates regarding school finance concerns the need for greater spending for education. Although there are often demands to increase education funding, increased funding does not seem necessarily to result in greater performance. Also notable are the debates concerning “equity” and “adequacy” in school finance. Because a large portion of funding comes from local districts, there is often a gap between education spending levels in high-income versus low-income communities. Over the past 30 years, there have been calls for equity in school finance as a means to reduce or eliminate the spending gap between wealthy and low-income school districts. Although the idea of distributing education dollars equally to all schools and districts is attractive, from a political standpoint, it is difficult to put an upper limit on school districts’ spending. In recent years, there has been a subsequent shift in focus from equity to adequacy. States have faced lawsuits, which argue that spending is too low,
782 ener gy policy
or not adequate, for students to meet educational standards. The question then becomes whether all districts are receiving and spending sufficient funds to provide students with an education enabling them to reach achievement standards, as opposed to whether the funding is equal across districts. NCLB standards are now used to support some of these adequacy suits. Another relatively new development in school finance is the proposal for “weighted student formulas” (WSF) as a means of distributing school finance. This concept recognizes that it costs more money to educate some students than others. For example, educating students with special needs, from lowincome families, or with limited English ability may involve additional costs compared with students not facing such challenges. The idea behind WSF is that schools use specific weights, or funding dollars, based on certain student characteristics. The funding then follows that student to his or her particular school. Schools with relatively more challenging student populations are given larger sums of money to help them achieve. Proponents of WSF often suggest granting more flexibility to principals, in combination with this new method for distributing funds, as a means for improving education. Clearly, there are a number of important debates at the forefront of education policy in the United States today. Improving the education system in general and closing achievement gaps between advantaged and disadvantaged students are worthy yet difficult goals. This article has discussed some of the major issues surrounding federalism issues, the No Child Left Behind Act (2001), standardized testing, curriculum decisions, school choice, and school finances. Prior to K-12 education, Americans are focusing more attention on the value of preschool programs, especially for low-income children, but have yet to devote significant public resources to them. At the same time, after K-12 education, American higher education remains at the forefront of the world, mainly due to strong emphases on research, competition, and choice. Further Reading Hochschild, Jennifer, and Nathan Scovronick. The American Dream and the Public Schools. New York: Oxford University Press, 2003; Moe, Terry M., ed. A
Primer on America's Schools. Stanford, Calif.: Hoover Institution Press, 2001; Peterson, Paul E., ed. Our Schools and Our Future: Are We Still at Risk? Stanford, Calif.: Hoover Institution Press, 2003; Peterson, Paul E., and David E. Campbell, eds. Charters, Vouchers, and Public Education. Washington, D.C.: Brookings Institution Press, 2001; Ravitch, Diane, ed. Brookings Papers on Education Policy. Washington, D.C.: Brookings Institution Press, 1998–2005; Schneider, Mark, Paul Teske, and Melissa Marschall. Choosing Schools: Consumer Choice and the Quality of American Schools. Princeton, N.J.: Princeton University Press, 2000. —Aimee Williamson and Paul Teske
energy policy As a preeminent industrial power, the United States is dependent on fossil fuels in order to meet its energy needs and to sustain its economy. Whereas petroleum remains the major source of energy for automobiles, coal is the primary energy source for electricity. While the United States is a storehouse of coal and has considerable natural gas reserves (along with a supply of natural gas from Canada), it has become heavily dependent on foreign sources of oil from unstable and problematic regions around the world. Fossil fuels constitute approximately 85 percent of total U.S. energy supplies, with petroleum accounting for about 40 percent and coal and natural gas about 22 percent. Alternative sources of energy represent only a fraction of the U.S. energy pie. While nuclear power provides less than 10 percent of U.S. energy needs, renewable energy sources (e.g., solar, wind, biomass) make up less than 5 percent of the total. In many ways, energy policy is about conflicting goals. Policy makers must ensure that the country has a secure source of energy while focusing on development of new sources on one hand and conservation of energy sources, protection of public health, and ensuring environmental quality on the other hand. The United States has relied on a variety of domestic energy sources over its more than 200-year history, including coal, timber, and water power, among others. By the 1950s, several developments began to have a profound impact on American society, including the rise of an automobile society and the expansion of the highway system, that required more fossil
Oil rigs undergo repairs after Hurricane Katrina, Galveston, Texas. (Getty Images)
fuels and new consumer demands that resulted in an increase in electricity use in households and industry. Although the United States produced twice as much oil as the rest of the world as late as the 1950s, it lost its self-sufficiency in oil production a decade later and self-sufficiency in natural gas by the 1980s. By the end of the 20th century, the U.S. was importing approximately 65 percent of its oil needs, and it is expected that U.S. imports of foreign oil will continue to increase rather than decrease in the foreseeable future. During the 19th century until the mid-20th century, coal was a major source of energy for an industrializing United States. Historically, coal was an important energy source especially for industry and transportation. Although coal has been replaced by oil as the dominant fossil fuel used in the United States, it remains an important part of the energy mix for the nation. However, it is also a very serious source of pollution. While natural gas has been characterized as a “clean” source of energy, it also has potential problems for environmental quality. Environmentalists
have raised concerns about the impact on wildlife and habitat, for instance, in the process of exploration and production of natural gas as a major energy source. During the last three decades of the 20th century, the United States experienced several crises involving the security of the nation’s energy supply and price increases that demonstrated the vulnerability of Americans’ lifestyle that has been based on cheap energy sources. Moreover, by the early part of President George W. Bush’s second term in office in 2005 and 2006, gas prices averaged close to $3 per gallon in some parts of the country. Given this background, the extent to which conservation and the use of alternative sources of energy should be used has divided Democrats and Republicans in Congress and candidates for the presidency. Moreover, organized interests have exercised their power in the energy debate. Over the last three decades or so, a common theme in the energy debate has been the extent to which the United States should increase or decrease its production and consumption of fossil fuels. During
the Richard Nixon (1969–74), Jimmy Carter (1977– 81), and Bill Clinton (1993–2001) years in the White House, there was an effort to reduce dependence on foreign oil, although each administration pursued a different approach to the problem. During the Nixon administration, the Organization of Petroleum Exporting Countries (OPEC) cut domestic production and exports to developed countries, including the United States. While Congress established a Strategic Petroleum Reserve, President Nixon proposed “Project Independence,” which sought to reduce foreign imports of oil while increasing domestic production and pushing for more nuclear power. In contrast, Jimmy Carter, who also faced an energy crisis in 1979–80, pushed for a “National Energy Plan” that emphasized conservation measures and encouraged research and development of alternative sources of energy. He also worked with Congress to establish the new Department of Energy. In a speech to the nation in 1979, he referred to the energy crisis and reducing U.S. dependence on foreign oil as the “moral equivalent of war.” A decade and a half later, the Bill Clinton–Al Gore team pushed an energy policy that was concerned with the reduction of carbon emissions that had a negative effect on global climate, and they encouraged continued research and development of alternative sources of energy. In 1997, Vice President Al Gore signed the Kyoto Protocol that committed the United States and other nations to mandatory cuts in carbon dioxide emissions. In contrast to the Nixon, Carter, and Clinton efforts to reduce the country’s dependence on fossil fuels and pursue a strategy that included conservation and alternative energy sources, Presidents Ronald Reagan, George H. W. Bush, and George W. Bush emphasized development and production over conservation and alternative sources of energy. Reagan pursued a policy that encouraged continued use of fossil fuels and opening up federal lands for more exploration and production. President George H. W. Bush did little to reduce U.S. dependence on foreign oil, and he supported opening up Alaska’s Arctic National Wildlife Refuge (ANWR) to oil exploration. Although he supported modest incentives for renewable energy sources, he moved the United States further along in an attempt to reduce carbon emissions when he signed the Global Climate Change Conven-
at the Earth Summit in Rio de Janeiro in 1992. However, he used his influence to revise the guidelines and timetables of the international agreement so that they reflected voluntary rather than mandatory efforts. President George W. Bush has promoted an energy policy that reflects the approach of President Ronald Reagan, namely, an emphasis on production and development over alternative energy sources and conservation. The Energy Policy Act of 2005 ensured that the United States would continue to grow more dependent on foreign oil as the president continued to push for oil exploration and drilling in ANWR, while little was done to promote conservation, improve fuel efficiency, or encourage alternative sources of renewable energy. Although they provide only a small proportion of total U.S. energy needs, nuclear power and alternative (renewable) sources of energy have the potential to become the focus of increasing attention as policy makers, interest groups, and citizens debate the future of U.S. energy policy. Like fossil fuels, both nuclear and renewable energy sources have problems. For nuclear power, safety and storage issues are the primary concerns of opponents. Although nuclear power was touted in the past as the energy source of the future, environmental and public health crises involving the nuclear accidents at Three Mile Island in Pennsylvania in 1979 and Chernobyl in Ukraine in 1986 led to political problems for the nuclear power industry. The question of how and where to store used nuclear materials remains another issue. Spent nuclear materials (e.g., used fuel rods from power plants) must be placed in a safe location in order to protect citizens and the environment. The difficulty of deciding where to store these materials has been demonstrated by the political conflict that arose between federal authorities and state officials and citizens in Nevada, where Yucca Mountain was designated as the federal storage site. However, in recent years the United States, along with the other members of the Group of 8 (Canada, France, Germany, Italy, Japan, Russia, and the United Kingdom), has indicated that nuclear energy must play a part in the future energy mix. Alternative, renewable energy sources have much potential: they contribute fewer harmful pollutants, and they are widely available. Among these energy sources are solar, wind, hydroelectric, and geothermal power, biomass, and
hydrogen fuel cells. Energy can be produced from the Sun (solar radiation) in several ways, including the use of rooftop solar panels and photovoltaic cells. Although the United States was once a leader in promoting solar energy, Japan and members of the European Union have surpassed it in promoting this source of energy. Wind power has been harnessed in the United States through windmills for more than a century. In more recent years, especially after the oil crisis of the early 1970s, increased federal funding was directed toward obtaining energy from this source through new wind technology. However, as oil prices stabilized in the 1980s, federal funding for continued research and development of wind power decreased. Consequently, wind power remains a small fraction of total energy production. The largest renewable source of electricity in the United States is water. Waterways provide the energy for hydroelectric plants to produce relatively inexpensive power for homes and industry. On one hand, hydroelectric power produces little pollution; on the other hand, flora and fauna can be adversely affected by the construction of dams. Energy obtained from underground steam is known as geothermal power. In contrast to solar and wind power, which have lacked strong federal support for research and development, geothermal power is one area of the energy pie in which the United States has assumed a global leadership position. Biomass is another example of renewable energy, whereby organic material (e.g., wood, plants, and animal fat) is converted into a source of energy. For instance, with the increase in prices at the gas pump for American consumers, ethanol, which is produced from corn, has attracted increasing attention as a gasoline additive, since it has been found to reduce the pollutants produced by burning traditional gasoline in cars and trucks. As a nonpolluting source of energy, hydrogen fuel cells may be the energy source of the future. The process of producing energy from hydrogen yields only heat and water. Hydrogen is captured from a source such as water or natural gas and then separated, stored, and used in a fuel cell to produce electricity. However, the process itself needs an energy source. If fossil fuels rather than nonpolluting energy sources (e.g., solar, wind, or hydroelectric) are used as the
energy source for producing hydrogen, this promising new source of energy will itself become the center of future debates about appropriate uses of energy. Future U.S. energy policy faces four major challenges: geopolitics (political instability and competition for supplies), political conflict over oil drilling in ANWR, pollution, and global warming. First, unlike the past, when the United States could count on reliable sources of energy from oil-rich countries, existing petroleum sources are beginning to peak, and potential new opportunities for obtaining petroleum lie in deep water and near politically sensitive areas in the Caspian region and Africa. Moreover, the rise of China and India is already producing concerns about increasing demands on the world’s oil supply. China, in particular, has the potential to become a major rival of the United States and the European Union as it seeks to diversify its energy sources in order to sustain its economic growth. Second, ANWR has become an arena of political conflict, pitting Senate Republicans against Senate Democrats and environmentalists over support for oil exploration on Alaska’s sensitive, pristine North Slope. The debate involves several concerns, including the amount of oil available, the impact on the environment and wildlife, and the extent to which the oil would be used for domestic purposes rather than for export. Third, air pollution and acid rain will continue to be problems as long as fossil fuels constitute the majority of energy use in the United States. While progress has been made in improving air quality in major metropolitan areas, air quality problems have become conspicuous in the nation’s national parks. Further, coal-fired power plants in the Midwest emit sulfur dioxide, which forms sulfuric acid (a source of acid rain) and has created environmental problems in the Northeast and Canada. Finally, the scientific community, both in the United States and internationally, has reached a consensus that human activities are having a negative impact on global climate. The United States is the largest producer of the greenhouse gases that contribute to global warming, a problem that scientists warn might have catastrophic consequences for the planet unless action is taken very soon. In fact, climatic
change is already having an effect on habitat, weather, wildlife, and coral reefs. Although the global scientific community has articulated its concerns about the impact of the continued use of fossil fuels on the global climate, President George W. Bush rejected the Kyoto Protocol in March 2001 and substituted a voluntary effort to address U.S. carbon emissions. There is concern among the global community that without leadership by the United States, the effort to address global warming will be difficult at best. As U.S. policy makers move forward into the 21st century, they are faced with many difficult choices regarding energy policy. One option is to continue with more exploration and production of fossil fuels. A second is to place more emphasis on conservation measures. A third is to push alternative, renewable sources of energy in an effort to address the
energy needs of the nation. On one hand, some observers have argued that it will take a mix of these three options to sustain the economy of the United States and meet the energy needs of the American people. On the other hand, as petroleum reserves shrink, policy makers may well be forced to place more emphasis on new technologies and alternative sources of energy. Further Reading Duffy, Robert J. Nuclear Politics in America. Lawrence: University Press of Kansas, 1997; Energy Information Administration, Department of Energy. Available online. URL: www.eia.gov. Accessed June 20, 2006; Hoffman, Peter. Tomorrow’s Energy: Hydrogen, Fuel Cells, and the Prospects for a Cleaner Planet. Cambridge, Mass.: MIT Press, 2001; Melosi, Martin V. Coping with Abundance: Energy and Environment
in Industrial America, 1820–1980. New York: Newberry Awards Records, 1985; Morton, Rogers C. B. “The Nixon Administration Energy Policy.” Annals of the American Academy of Political and Social Science 410 (January 1973): pp. 65–74; Prugh, Tom, Christopher Flavin, and Janet L. Sawin. “Changing the Oil Economy.” In State of the World 2005: Redefining Global Security. Washington, D.C.: Worldwatch Institute, 2005; Roberts, Paul. The End of Oil: On the Edge of a Perilous New World. Boston: Houghton Mifflin, 2004; Smil, Vaclav. Energy in World History. Boulder, Colo.: Westview Press, 1994; Wirth, Timothy E., C. Boyden Gray, and John D. Podesta. “The Future of Energy Policy.” Foreign Affairs 82, no. 4 (July/August 2003): pp. 132–155; Yetiv, Steve. Crude Awakenings: Global Oil Security and American Foreign Policy. Ithaca, N.Y.: Cornell University Press, 2004. —Glen Sussman
entitlements Entitlements are federal government programs that require payments to any individuals or organizations eligible to receive benefits defined by law. There are many different types of entitlements, though most of the entitlement expenditures of the federal government are distributed to the most vulnerable individuals in society—the poor, disabled, and elderly. Consequently, in addition to providing a legal right to payments for eligible beneficiaries, many entitlements carry a moral obligation to those in need. Moreover, some of the most costly entitlement programs, such as social security and Medicare, are supported in public opinion polls by large majorities of Americans and are bolstered by powerful interest groups. Since entitlements are products of legislation, entitlement benefits can only be increased or reduced either by changing existing law or by adopting new law. Reducing entitlement benefits through legislative reforms has proven to be difficult, though there is a compelling case for cutting entitlement spending. Entitlement expenditures have been largely responsible for the long-term growth in federal government spending since the mid-1960s, and the greatest budgetary effects are yet to come. Spending projections for meeting retirement and health care obligations of
the burgeoning “baby boom” generation over the next 50 years are literally unsustainable under existing law. Ultimately, entitlement benefits will need to be reduced or additional taxes will need to be raised in order to cover the expected growth of entitlement spending. A basic understanding of entitlements requires an introduction to the variety of entitlement programs, the development of entitlement legislation and the causes of spending growth, future projections of entitlement spending, and the challenge of entitlement reform. Entitlement programs are typically classified as either “means tested” or “non–means tested.” Means tested programs take into account an individual’s financial need, whereas non–means tested programs distribute benefits regardless of an individual’s financial need. Means tested entitlements include such programs as Medicaid, Supplemental Security Income (SSI), food stamps, student loans, and unemployment compensation. Non–means tested programs include Social Security, Medicare, government pensions, military retirement, and veterans’ benefits. These programs vary in terms of their size, complexity, and the constituencies they serve. The largest entitlement, in terms of both cost and number of beneficiaries, is Social Security, which provides benefits for retirees and the disabled as well as benefits for their dependents and survivors. In 2005, Social Security paid benefits totaling $521 billion to more than 48 million people. Medicare, the health insurance program for people 65 years of age or older, is the second-largest entitlement program, covering benefits of more than 42 million people at a cost of $333 billion in 2005. Medicaid, the health insurance program for low-income individuals, is the third most costly program; it served 44 million people at a cost of $181.7 billion in federal expenditures. These three programs alone consumed 42 percent of all federal spending and about 71 percent of all entitlement spending in 2005. Programs such as unemployment compensation, food stamps, government pensions, military retirement, student loans, and veterans’ benefits are geared toward smaller constituent groups. All entitlement programs contain an array of details that define eligibility and benefits, though some are more complex than others.
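The relative scale of these programs can be worked out directly from the figures just cited. The short sketch below, written in Python, uses the 2005 numbers given in this entry to derive rough per-beneficiary averages and the totals implied by the stated percentages; the derived totals are back-of-the-envelope estimates, not official budget figures.

```python
# Back-of-the-envelope arithmetic using the 2005 figures cited above; the
# derived totals are implied by the stated percentages, not official numbers.
programs = {                       # (outlays in billions of dollars, beneficiaries in millions)
    "Social Security": (521.0, 48.0),
    "Medicare": (333.0, 42.0),
    "Medicaid": (181.7, 44.0),
}

big_three = sum(cost for cost, _ in programs.values())   # about $1,036 billion

for name, (cost, people) in programs.items():
    print(f"{name}: roughly ${cost / people * 1000:,.0f} per beneficiary per year")

# The entry states these three programs were about 42 percent of all federal
# spending and about 71 percent of all entitlement spending in 2005.
print(f"Implied total federal spending: ${big_three / 0.42:,.0f} billion")
print(f"Implied total entitlement spending: ${big_three / 0.71:,.0f} billion")
```

Dividing the combined cost of the three programs by the stated 42 percent share implies total federal spending of roughly $2.5 trillion in 2005.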
Several programs, including Medicaid, food stamps, and unemployment compensation, depend on contributions from state governments. The origins and development of entitlements are as various as the programs themselves, though they typically emerge from crises, broad public concerns, and/or the innovations of policy makers or well-organized groups. Social Security began as a modest program under the Social Security Act of 1935 during the Great Depression to provide income security to aged people who could no longer work to make a living. The Social Security Act of 1935 also created Aid to Dependent Children (ADC), later changed to Aid to Families with Dependent Children (AFDC). ADC provided cash benefits to families with children who had lost a primary income earner. Social Security benefits increased with amendments to the Social Security Act in 1950 and the addition of disability insurance in 1954. But the largest growth in entitlement programs occurred in the 1960s and 1970s during the Great Society era and its aftermath. Medicare and Medicaid were created in 1965, along with several smaller programs, such as food stamps and the Guaranteed Student Loan program. From 1967 to 1972, Congress and the president (both Lyndon B. Johnson and Richard Nixon) passed several increases in Social Security retirement and family support benefits. Two major enhancements in 1972 capped off this period of program expansion: Supplemental Security Income (SSI), a program to assist poor elderly, blind, and disabled individuals, and automatic cost-of-living adjustments (COLAs) to retiree benefits. COLAs guaranteed that retiree benefits would increase with the rate of inflation, thus ensuring that recipients’ purchasing power would not be eroded by economic forces that increased the prices of goods and services. As large deficits emerged in the 1980s and 1990s, policy makers generally stopped adding new entitlement benefits. In fact, on several occasions Congress and the president enacted legislation that reduced spending on farm subsidies, veterans’ benefits, food stamps, government pensions, Medicare, Medicaid, and even Social Security. Though many of these cuts were modest, all of them were politically difficult to enact, and some amounted to very significant policy changes. In 1996, for instance, Congress
and President Bill Clinton approved a welfare reform law that eliminated the entitlement status of AFDC and replaced it with a block grant to states called Temporary Assistance for Needy Families (TANF). Despite attempts to control spending, one consequence of the program expansions of the 1960s and 1970s has been the growth of entitlement spending as a percentage of all federal spending. In order to make this point, it is helpful to identify three broad spending categories of the federal budget. First, discretionary spending refers to spending for domestic and defense programs that are subject to annual appropriations approved by Congress. Thus, if Congress wants to increase spending for homeland security, or raise the salaries of civil servants, or cut spending for after-school enrichment programs, it may do so. Literally thousands of line items for discretionary programs are adjusted annually through the appropriations process. A second category is mandatory spending, which covers entitlements. Mandatory programs are not subject to annual appropriations; the amount spent on entitlement programs is determined by how many individuals or institutions qualify for the benefits defined by legislation. The third category is interest on the national debt; when the budget is in deficit, the Department of the Treasury needs to borrow money to pay the bills, and it must, of course, pay interest on that debt. In 1964, prior to the creation of Medicare and Medicaid and the expansions of Social Security, mandatory entitlement spending accounted for 34 percent of all federal spending; by 2005, mandatory entitlement spending had grown to about 58 percent of all spending. Thus, while Congress cut some benefits in the 1980s and 1990s, it did not do nearly enough to halt the upward growth in entitlement spending. The shift from a budget based primarily on discretionary programs to a budget driven by entitlements has profound implications for spending control. Since discretionary programs can be adjusted annually in the appropriations process, Congress can, at least theoretically, control spending from year to year. But entitlement spending is uncontrollable so long as the law defining benefits does not change; spending for entitlements depends mainly on the number of eligible beneficiaries, the types of benefits offered, and
numerous uncontrollable forces, such as the state of the economy, demographic changes in the population, and the price of health care. If the economy goes into a recession, claims for means tested entitlements—food stamps, unemployment insurance, and Medicaid—increase. If the number of retirees increases, if people live longer, or if inflation rises, expenditures for Social Security will grow. Medicare, one of the most expensive and fastest-growing entitlements, provides a good example of the difficulties of controlling entitlement spending. Over the past 30 years, increases in health-care costs well above the rate of inflation accounted for the dramatic growth in public health programs. When health inflation rises in a given year, the president and Congress cannot simply decide to spend less. Under existing law, doctors and hospitals are entitled to be reimbursed, and beneficiaries are entitled to medical services and treatment. Total annual spending on Medicare depends on the costs of those services and the number of eligible Medicare beneficiaries who use the health-care system. Thus, in order to reduce Medicare spending, the laws specifying eligibility and benefits must be changed first, which means reducing benefits, increasing the costs to senior citizens, or cutting reimbursements to doctors and hospitals. Though Congress and the president have made such changes from time to time, the effects on total spending are overwhelmed by the general increase in health-care spending. Thus, the rapid growth in entitlement spending began as a result of policy changes in the 1960s and early 1970s, but policy makers generally stopped adding more entitlement benefits by the mid-1970s. The growth in overall entitlement spending after 1974 resulted from demographic, economic, and social trends, as well as health-care cost inflation. Even though overall entitlement spending grew faster than discretionary spending in the 1980s and 1990s, it grew at a slower pace than in the 1960s and 1970s, with Medicare and Medicaid the notable exceptions.
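To see why this kind of spending rises even when no new benefits are added, consider a stylized projection. The sketch below is purely illustrative: the starting enrollment, per-beneficiary cost, enrollment growth rate, and excess health-cost inflation are hypothetical assumptions, not official estimates, but they show how the combination of more beneficiaries and faster-than-inflation cost growth compounds over a decade.

```python
# Stylized projection: total spending = enrollees x cost per enrollee.
# All numbers are hypothetical assumptions for illustration only.
enrollees = 42_000_000        # starting enrollment
cost_per_enrollee = 8_000     # starting annual cost per beneficiary (dollars)
enrollment_growth = 0.02      # assume 2% more beneficiaries each year
excess_cost_growth = 0.03     # assume costs rise 3 points faster than inflation

spending = enrollees * cost_per_enrollee
print(f"Year 0:  ${spending / 1e9:,.0f} billion")

for year in range(1, 11):
    enrollees *= 1 + enrollment_growth
    cost_per_enrollee *= 1 + excess_cost_growth
    spending = enrollees * cost_per_enrollee

print(f"Year 10: ${spending / 1e9:,.0f} billion (in today's dollars, with no change in benefits)")
```

Under these assumptions, spending grows by roughly two-thirds in real terms over ten years without any legislative change, which is the sense in which entitlement spending is described as uncontrollable.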
Entitlements are projected to grow dramatically in the future as the baby boomers retire and make unprecedented claims on retirement benefits and the public health-care system. From 2010 to 2030, the number of individuals over the age of 65 will double, and the percentage of people over the age of 65 will increase from 13 to 19 percent of the population. As a result of this demographic shift, by 2030 Social Security, Medicare, Medicaid, and interest on the national debt will consume virtually every dollar of expected revenues under existing law. The long-term budget outlook for entitlement spending was compounded in 2003, when Congress and President George W. Bush enacted the Medicare Modernization Act (Medicare Part D), which provided prescription drug coverage to Medicare-eligible individuals. As of January 2006, about 22.5 million of the 43 million Medicare recipients had enrolled in Medicare Part D, and the program is expected to cost $558 billion over its first 10 years and to grow even more rapidly thereafter. David Walker, comptroller general of the Government Accountability Office, has been the most prominent recent voice among public officials declaring the projected path of entitlement spending growth “unsustainable.” If nothing is done to slow the rate of growth in the big entitlement programs, the next generation of workers and their children will face massive tax increases, a reduction in their standard of living, or both. Entitlement reform advocates say it is economically, fiscally, and morally unacceptable not to change this course. The next generation should not be saddled with the excesses of previous generations, especially when the problems are clear. But the prospects for reining in entitlement spending are complicated by practical considerations, moral claims, and political forces. Despite the massive total cost of financing Social Security, the average monthly benefit is just over $1,000 per retiree. The good news is that a small average reduction in benefits would generate large budget savings, but the bad news is that many retirees depend on every dollar of Social Security for subsistence. Meanwhile, Medicare and Medicaid are essential programs for the millions of Americans, now and in the future, who need access to the health-care system. Advocates of Social Security, Medicare, and Medicaid point out that these programs have rescued tens of millions of senior citizens from a life of poverty in old age. Any cut, particularly for low-income recipients, would be a step backward in terms of addressing the needs of the elderly.
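The tension between large aggregate savings and small individual benefits can be made concrete with a rough calculation. The sketch below assumes, hypothetically, that all of the roughly 48 million beneficiaries cited earlier in this entry receive about $1,000 per month (in reality, benefit levels vary by beneficiary category) and asks what a 1 percent across-the-board reduction would save in a year; both the uniform benefit and the 1 percent cut are illustrative assumptions.

```python
# Back-of-the-envelope savings from a small, hypothetical benefit reduction.
beneficiaries = 48_000_000        # figure cited earlier in this entry
avg_monthly_benefit = 1_000       # "just over $1,000" per retiree, per the entry
reduction = 0.01                  # a hypothetical 1 percent across-the-board cut

annual_outlays = beneficiaries * avg_monthly_benefit * 12
savings = annual_outlays * reduction
print(f"Approximate annual outlays: ${annual_outlays / 1e9:,.0f} billion")
print(f"Savings from a 1% reduction: ${savings / 1e9:,.1f} billion per year")
```

Even a 1 percent trim yields several billion dollars a year in savings, yet for an individual retiree it amounts to only about $10 a month, which is why the politics of such cuts turn on who bears them rather than on their aggregate size.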
Importantly, Social Security presents a relatively simple set of solutions compared with Medicare and Medicaid. Demands on Social Security are fairly easy to calculate, given average life expectancies and readily available demographic data, and the alternatives for cutting spending are clear enough. Increasing the retirement age, reducing the amount of benefits, and changing the way inflation adjustments are calculated are a few notable changes that could produce savings. The costs of Medicare and Medicaid, on the other hand, are tied to the costs of health care in general. Thus, while government reforms such as reducing fraud and waste and developing a more competitive pricing structure will reduce spending, the key to controlling the costs of Medicare and Medicaid is to contain health-care costs in general, a more vexing challenge for policy makers. The political obstacles to entitlement reform are formidable. Public opinion polls repeatedly show that Americans oppose cuts in Social Security, Medicare, and, to a lesser extent, Medicaid. Support for these programs is broad and deep; there are no clear divisions across party lines or among age groups. Younger individuals are more inclined to support private accounts as a substitute for the current Social Security program, but they do not support spending cuts. Moreover, entitlements are supported by powerful interest groups. The American Association of Retired Persons (AARP), which spearheads a coalition of senior citizen groups, has over 35 million members, a membership equal to roughly one of every four registered voters. More specialized groups—hospitals, nursing homes, doctors, health maintenance organizations (HMOs), insurance companies, and now drug companies—all have a stake in the outcome of policy changes. Entitlement reform is certainly possible; after all, we have examples from the past, but the political opposition should not be understated. Thus, we are left with a complicated and challenging puzzle: How does the federal government meet its legal obligations and deliver the necessary benefits to individual recipients of popular programs while also addressing the inevitable imbalance between entitlement spending and projected tax revenues? Addressing the problem will require considerable leadership in order to build a consensus that balances the claims of multiple constituencies. Indeed, the lives of virtually
every American over the next 50 years will depend on the answer to this question. See also welfare policy. Further Reading Derthick, Martha. Policymaking for Social Security. Washington, D.C.: Brookings Institution Press, 1979; Kotlikoff, Laurence J., and Scott Burns. The Coming Generational Storm. Cambridge, Mass.: MIT Press, 2004; Light, Paul. Still Artful Work: The Continuing Politics of Social Security. New York: McGraw-Hill, 1995; Moon, Marilyn, and Janemarie Mulvey. Entitlements and the Elderly. Washington, D.C.: Urban Institute Press, 1996; Peterson, Peter G. Running on Empty. New York: Farrar, Straus & Giroux, 2004; Samuelson, Robert J. The Good Life and Its Discontents. New York: Random House, 1995. —Daniel J. Palazzolo
environmental policy Environmental policy in the United States has evolved over time, as have other policy areas, but its development was hastened by dedicated activists and by shifting public opinion responding to an escalating environmental crisis. The United States was an early adopter of aggressive policies and institutions to control pollution, first through command-and-control regulations and later through voluntary and market-based policies. The roots of contemporary environmental policy can be found in the nation’s preindustrial past. An anthropocentric view of nature prevailed as the United States grew from an agricultural society into a fully industrialized country. The policy emphasis remained on conservation of natural resources for future extraction and continued economic growth, rather than on protecting nature for its own sake. This conservation ethic led to the nation’s first national parks during the late 19th century and the founding of the U.S. Forest Service in 1905. Gifford Pinchot, the first chief of the Forest Service under President Theodore Roosevelt, saw the environment primarily as a source of raw materials to satisfy human needs and sought to manage those materials as efficiently as possible. As the conservation movement broadened and influenced natural resource policy decisions, a rival perspective emerged in the form of the preservation movement.
[Figure: A general view of the air pollution over downtown Los Angeles, California (Getty Images)]
An early leader was John Muir, who founded the Sierra Club in 1892 in response to the devastation of the Sierra Nevada foothills in the frenzy of the California gold rush. Preservationists promoted a biocentric approach to land management. The biocentric approach shunned the purely instrumental view of nature and sought to place human values into a larger context, whereby such goals as pristine ecosystems and saving species from extinction have inherent value. The public increasingly valued national forests and grasslands as a source of recreation and enjoyment. In response to this evolving public view, Congress directed the Forest Service to manage public lands for multiple uses and benefits in addition to sustained yields of resources. In the area of pollution control, little progress was made prior to 1970. The patchwork of state pollution control laws was weak and ineffective, and federal attempts to give them teeth were largely unsuccessful. However, the 1960s saw a growing appetite
for government action to solve domestic problems as well as a series of ecological catastrophes, which placed the environment squarely on the national agenda by the end of the decade. The major shift in the role of the federal government began in the 1960s, when the environment emerged as a national policy issue. Air and water quality had been in decline for decades until the urgency of the environmental crisis was raised by several focusing events, such as the publication of Rachel Carson’s Silent Spring in 1962 and the Santa Barbara oil spill and the fire on the Cuyahoga River in 1969. These developments led to the first Earth Day in 1970. Originally conceived as a teach-in by Wisconsin senator Gaylord Nelson, the event drew 20 million Americans to demonstrations across the country demanding that the federal government take action to deal with the environmental crisis. After Earth Day raised the consciousness of the American public and its policy makers, the focus of the environmental movement shifted to Washington, D.C. Republican president
Richard Nixon and Democratic congressional leaders vied to demonstrate which was the “greener” of the two political parties. Determined not to be upstaged by his Democratic rivals, President Nixon created the U.S. Environmental Protection Agency (EPA) by executive order and signed the first of several major environmental statutes into law, including the Endangered Species Act. The National Environmental Policy Act (NEPA) requires government agencies to prepare environmental impact statements before undertaking major projects and allows the public to challenge those actions on environmental grounds. Throughout the “environmental decade” of the 1970s, Congress would enact other major laws regulating different forms of pollution. Regulation of air pollution was federalized with the enactment of the Clean Air Act (CAA) Amendments of 1970. This landmark statute authorized the EPA to set national ambient air quality standards for particulate matter, sulfur dioxide, nitrogen dioxide, volatile organic compounds, ozone (which contributes to smog), and lead. The act created 247 air quality control regions around the country and required the states to submit implementation plans to reach attainment of the standards. States were given a five-year deadline to reduce their emissions by 90 percent, although that deadline was repeatedly rolled back. The law also established auto emission standards for the first time. Water quality was the focus of the next major environmental statutes enacted by Congress. The Clean Water Act (1972) authorized the EPA to set national water quality standards with the goal of making the nation’s rivers and streams fishable and swimmable. The act established a pollution discharge permit system and funded grants to help municipalities build water treatment plants. Recognizing that the Clean Water Act did not go far enough to protect public health, Congress passed the Safe Drinking Water Act (SDWA) in 1974. The SDWA authorized the EPA to set drinking water standards and funded additional grants to upgrade community water systems. After addressing air and water concerns, Congress set its sights on threats from toxic and hazardous materials. The first action was the Resource Conservation and Recovery Act of 1976. This law
actually was intended to promote recycling. However, it authorized the EPA to regulate the storage, transportation, treatment, and disposal of hazardous waste. The same year, Congress enacted the Toxic Substances Control Act, which allows the EPA to ban or regulate any chemicals presenting an “unreasonable risk of harm” to human health or the ecosystem and requires the testing of new chemicals before they go on the market. The last major environmental statute was enacted as the environmental decade closed in 1980. Congress passed the Comprehensive Environmental Response, Compensation and Liability Act (Superfund) to address the problem of abandoned hazardous waste sites, such as the New York community of Love Canal, which had been constructed on a decades-old chemical waste pit. The law required the EPA to create a list of abandoned hazardous waste sites and rank them by their risk to human health, and it provided a $1.6 billion fund to clean up the most dangerous sites. By 2002, more than 50,000 sites had been identified, and the Superfund National Priority List included 1,291 sites. The cost of toxic mitigation in some high-profile contamination cases, such as Glen Avon, California, and Times Beach, Missouri, contributed to a backlash against environmental regulation in the 1980s. When President Ronald Reagan took office in 1981, he made no secret of his hostility to government regulation and used executive powers to stymie the implementation of environmental policy. Reagan issued an executive order requiring a cost-benefit analysis for every major regulatory action, cut the EPA’s budget, and appointed people known for their antienvironmental views to his administration, most notably Interior Secretary James Watt and EPA Administrator Anne Burford. This decade also gave rise to the Wise Use movement. Seeking to reverse the decades-old multiple-use policy of land management, Wise Use supporters were motivated by an ideological view that the most efficient decisions about public lands were made at the local level by those who used their resources. With the changing climate in Washington, D.C., the environmental movement went into a defensive mode. The movement’s strategy shifted from legislation to litigation. With fewer allies in the White House and Congress, environmentalists turned to the courts
to block the weakening of the major laws that were created during the 1970s and to fight for their enforcement. The 1980s saw a rapid growth in environmental interest group membership as well as in the number of grassroots environmental organizations, which provide information to policy makers and the electorate, mobilize voters, and force implementation of environmental laws through citizen suits. Policy makers responded to public demands for increased environmental protection during the 1990s, though not at the level witnessed in the 1970s. When the Clean Air Act was amended in 1990, the revised act reflected congressional frustration with the EPA’s lack of progress in implementing the law. The 1990 amendments included specific air quality goals and deadlines for nonattainment areas, replacing the original act’s language that cities make “reasonable further progress” toward attainment of their goals. Another provision required the EPA to begin regulating 189 additional hazardous air pollutants, a mandate that carried deadlines and hammer clauses to ensure its implementation. The limitations of command-and-control policies had become apparent by the 1990s, and voluntary and market-based systems of pollution control became increasingly popular. One provision of the 1990 CAA amendments created a market for trading sulfur dioxide emissions, which contribute to acid rain. Under the trading system, industries and coal-fired utilities were given one permit for every ton of sulfur dioxide emissions, with the intent of reducing the number of permits by half over 20 years. Each participant had the freedom to reduce its own emissions ahead of schedule and sell the excess permits to others in the market. The success of the program has made it a model for tradable emissions markets on the local level and in other countries.
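The economic logic of such a trading system can be illustrated with a toy example. The sketch below is entirely hypothetical: two utilities with different cleanup costs each hold permits covering only part of their emissions, and the one that can cut pollution cheaply does extra cleanup and sells its spare permits to the one that cannot, so the overall cap is met at a lower total cost. The firms, abatement costs, and permit price are invented for illustration and do not describe the actual sulfur dioxide market.

```python
# Toy illustration of a tradable-permit (cap-and-trade) market.
# Firms, abatement costs, and the permit price are hypothetical.

cap_per_firm = 80          # permits issued to each firm (tons of SO2 allowed)
emissions = {"Utility A": 100, "Utility B": 100}            # uncontrolled emissions (tons)
abatement_cost = {"Utility A": 200, "Utility B": 600}       # dollars per ton of cleanup
permit_price = 400                                          # dollars per permit on the market

# Without trading, each firm must cut its own excess (20 tons apiece).
no_trade_cost = sum((emissions[f] - cap_per_firm) * abatement_cost[f] for f in emissions)

# With trading, the low-cost firm (A) cuts 40 tons and sells 20 spare permits to B.
cost_a = 40 * abatement_cost["Utility A"] - 20 * permit_price   # abates extra, earns permit revenue
cost_b = 20 * permit_price                                      # buys permits instead of abating
print(f"Total cost without trading: ${no_trade_cost:,}")
print(f"Total cost with trading:    ${cost_a + cost_b:,} (same 40 tons removed overall)")
```

In this toy case the same total tonnage of pollution is removed, but the combined cost falls by half, which is why trading systems are described as meeting a fixed cap more efficiently than uniform command-and-control requirements.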
The focus of most environmental policy making had been on domestic problems prior to the 1990s, but Congress began to address global and transboundary problems as well. In response to the 1987 Montreal Protocol, the 1990 CAA amendments listed specific chemicals that deplete the ozone layer and included a provision to phase out their production and use. Other, more modest goals were accomplished by the decade’s end: The Food Quality Protection Act was passed in 1996, stricter air quality standards for ozone and particulate matter were promulgated by the EPA in 1997, and President Bill Clinton placed large portions of public lands off limits to development by designating new national monuments and wilderness areas and by setting aside 65 million acres of public land as roadless, a move that enraged many western politicians. The first term of President George W. Bush (2001–05) was reminiscent of Reagan-era environmental policies and sparked a resurgent environmental movement. After taking office, Bush reversed many of his predecessor’s regulatory actions, such as a new arsenic standard for drinking water (later reinstated). Bush also reversed a campaign pledge to begin regulating carbon dioxide, a gas that contributes to global warming. He also denounced the Kyoto Protocol and withdrew the United States from the agreement to reduce greenhouse gas emissions. The administration’s land management policies were just as retrograde: Bush supported initiatives in Congress to open the Arctic National Wildlife Refuge for oil exploration and initiated his Healthy Forests plan, which opened new areas of national forest to logging in the name of preventing forest fires. Further Reading Cahn, Matthew A. Environmental Deceptions: The Tension between Liberalism and Environmental Policymaking in the United States. Albany: State University of New York Press, 1995; Davies, J. Clarence, and Jan Mazurek. Pollution Control in the United States: Evaluating the System. Washington, D.C.: Resources for the Future, 1998; Dietrich, William. The Final Forest: The Battle for the Last Great Trees of the Pacific Northwest. New York: Simon & Schuster, 1992; Kettl, Donald F., ed. Environmental Governance: A Report on the Next Generation of Environmental Policy. Washington, D.C.: Brookings Institution, 2002; Rosenbaum, Walter A. Environmental Politics and Policy. 6th ed. Washington, D.C.: Congressional Quarterly Press, 2006. —David M. Shafie
federal debt and deficit There is a difference between the federal debt and the federal deficit. The debt is the accumulated total of all deficits. Each year the federal government spends trillions of dollars on programs and other
spending projects. When it spends more money than it takes in, that is called a budget deficit. The U.S. federal government began its existence with a large debt. During the Revolutionary War, the nascent government borrowed money from France and anyone else who might be of service to the colonies. (France was very willing to help finance the American Revolution because it was a heated enemy of the British, and the enemy of my enemy, so the saying goes, is my friend.) The Continental Congress also borrowed money, in the form of what we today might call war bonds, from the colonists themselves. It was not uncommon for a patriotic colonist to mortgage his farm or land and buy these bonds. The promise was that after the war, the new government would pay back the colonists. But after the war was won, the new federal government, under the Articles of Confederation, did not have the power to tax. When France came to the new government demanding repayment of its debt, the U.S. government was unable to meet its responsibilities. The same was true when the farmers of the United States went to the government demanding payment of the bonds they had bought. This caused the new government significant problems. Its major ally, France, was demanding the money that the United States owed, and the farmers were losing their property because of the failure of the new government to keep its word and repay the debt owed. In several states, minirebellions occurred. The most famous of these is referred to as Shays’s Rebellion. Daniel Shays was a former captain in the Revolutionary army. He led a group of disgruntled farmers against the courts and the federal arsenal in Springfield, Massachusetts, in an attempt to shut down the government. This and the minirebellions that took place in other states compelled the states to rethink the wisdom of the Articles of Confederation and greatly contributed to the movement to jettison the articles and write a new constitution for the nation. In the new government created by the U.S. Constitution, taxing power was conferred on the new Congress. The new government had the means, but as one will see, not always the will, to pay the debts owed. Almost every year (the recent exceptions were the last few years of the Bill Clinton presidency), the federal government runs up a deficit as expenditures
exceed revenues. If one were to add up all the deficits, the total is what is often referred to as the national debt. The debt is the total amount of all the deficits, or the amount the government has borrowed and thus owes to American citizens, banks, other nations, and the bond market. The government must also pay interest on the debt. In 2005, the federal government spent $352 of each taxpayer’s money on interest payments on the national debt; interest payments do not pay back the principal of the loan. Today, the Department of the Treasury’s payment of interest on the national debt alone makes that budget item the third-largest expense in the federal budget, behind only spending on the Department of Defense and the Department of Health and Human Services. Currently, the national debt is roughly $8.2 trillion (as of 2007), and it grows by roughly $2.3 billion per day. The population of the United States is estimated at nearly 300,000,000 people. If the national debt were divided up to estimate each citizen’s share, it would come out to a bill of roughly $27,500 per person. And the debt continues to grow. There is also concern about business and private debt: the overall debt of U.S. businesses and households is estimated to be greater than the federal government’s debt.
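These figures can be checked with a few lines of arithmetic. The sketch below, written in Python, uses the numbers cited in this entry (an $8.2 trillion debt, a population of roughly 300 million, and growth of about $2.3 billion per day); the short series of annual deficits at the top is hypothetical and is included only to show how yearly deficits accumulate into the debt total.

```python
# Illustrative only: hypothetical annual deficits (in billions of dollars)
# accumulate into a debt total; the debt is simply the running sum of deficits.
annual_deficits = [158, 378, 413, 318, 248]   # hypothetical five-year series
debt = 0.0
for year, deficit in enumerate(annual_deficits, start=1):
    debt += deficit
    print(f"Year {year}: deficit ${deficit}B, accumulated debt ${debt:,.0f}B")

# Figures cited in this entry (circa 2007):
national_debt = 8.2e12          # $8.2 trillion
population = 300_000_000        # roughly 300 million people
daily_growth = 2.3e9            # debt grows about $2.3 billion per day

# Roughly $27,000 per person, consistent with the per-person figure above.
print(f"Per-person share of the debt: ${national_debt / population:,.0f}")
print(f"Annual growth implied by the per-day figure: ${daily_growth * 365 / 1e9:,.0f} billion")
```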
When did the debt-deficit dilemma rise to near-crisis proportions? The federal government has always had difficulty living within its means. In the 1930s and 1940s, the federal government, in an effort to fight first the Great Depression and then World War II, ran up huge deficits. In 1945, the federal debt and deficit were at alarmingly high levels, with the debt exceeding 100 percent of the gross domestic product. But this reflected emergency spending, and soon the budget deficit-debt crisis was brought under greater control. By the 1950s, the trend lines were all going in the right direction, even as the federal government continued to have difficulty keeping revenues and expenditures in line. In fact, the trend lines continued to move in the right direction until the late 1970s and 1980s. It was during the administration of President Ronald Reagan (1981–89) that an explosion of spending and cuts in taxes threw the debt-deficit balance dramatically out of line. All the trend lines started moving at a fast pace in the wrong direction. President Reagan’s budget proposals as approved by Congress, which included an increase in defense spending, a cut in taxes, and a reduction in domestic program spending, contributed to a dramatic increase in both the federal debt and yearly budget deficits. When Reagan took office in 1981, the United States was the world’s largest creditor nation. When he left office eight years later, the United States was the world’s largest debtor nation. It was left to Reagan’s successors to clean up the economic mess left behind. President George H. W. Bush, faced with this harsh economic reality, was compelled to break the famous campaign pledge he made in 1988 (“Read my lips: No new taxes!”) and reached a compromise with the Democratic-controlled Congress to raise taxes in 1990 in an attempt to reduce the national debt and deficit. This was a small but significant step toward righting the economic imbalance. As economic indicators and the overall state of the economy began to improve by 1992, President Bill Clinton (1993–2001) and Congress were able to bring the imbalance further under control, even running a budget surplus toward the end of the 1990s. And while both Bush and Clinton may have paid a political price for getting the budget crisis under control, putting more discipline into federal spending programs clearly had a very positive impact on the debt-deficit crisis. The presidency of George W. Bush, propelled by the response to the terrorist attacks of September 11, 2001, and Bush’s insistence on a substantial tax cut, once again led to large deficits and a significant increase in the national debt. Bush recommitted the economic sins of his political hero, Ronald Reagan, and called for both a huge increase in defense spending (to fight the war against terrorism) and a large tax cut. This led to negative economic consequences for the debt-deficit dilemma. Many economists were alarmed at the explosion of the deficit and warned of significant negative consequences if the U.S. government did not get it under control. Those concerned that large deficits will be a significant drag on the overall economy argue that deficits are unfair in that they let the current generation spend government revenues but leave the bill behind for the next generation to pay. This allows the current generation to consume more but pay less. The deficit also reduces the amount of money that is saved and invested. This means less capital and potentially higher interest rates.
Higher interest rates draw more foreign investment into the United States but may lead to larger trade imbalances (which are already at high levels). Those who are less concerned about the rise in the debt and deficit argue that the evidence of negative consequences is by no means clear and that the United States has nonetheless done well within the broader economic realm even while amassing large budgetary deficits. Clearly, questions of intergenerational justice are raised when the current generation lives well but passes an enormous debt on to its children. The preamble to the U.S. Constitution states that “We the people of the United States, in Order to form a more perfect Union, establish Justice, insure domestic Tranquility, provide for the common defence, promote the general Welfare, and secure the Blessings of Liberty to ourselves and our Posterity . . .” (emphasis added). Many citizens would argue that the current generation should act more responsibly in regard to the economic well-being of future generations and not leave them mired in debt. See also budget process; income taxes. Further Reading Cavanaugh, Francis X. The Truth about the National Debt: Five Myths and One Reality. Cambridge, Mass.: Harvard Business School, 1996; Manning, Robert D. Credit Card Nation: The Consequences of America’s Addiction to Credit. New York: Basic Books, 2000; Rivlin, Alice M. Reviving the American Dream: The Economy, the States, and the Federal Government. Washington, D.C.: Brookings Institution Press, 1992. —Michael A. Genovese
fiscal policy Fiscal policy is the term used to describe the taxing and spending decisions governments make. When we begin to list what governments do, on the domestic level or in the international arena, it becomes clear that most government actions require spending money. Increased police protection, for example, requires spending money to hire more police officers and to provide them with the equipment necessary to do their jobs. Increasing the quality of education requires spending money for administrators to monitor levels of quality as well as spending to make certain educators are properly trained and
updated on curricula and that schools have all the resources necessary to provide an environment conducive to learning. Making the decision to go to war involves spending a great deal of money: soldiers and other military personnel must be paid; they require equipment as well as medical supplies, food, and water; and the weapons soldiers use must be replenished or replaced, depending on the type of weapon used. Most government actions, then, require spending money. The sources of government revenue include taxes, investments, fees, and borrowed money, otherwise known as debt. As a government begins to decide what actions to take, it must consider the sources and amounts of revenue it has. All of this heightens the politics of fiscal policy, because taxes are not popular among citizens, and many people disagree over the direction of government actions and the allocation of resources for those actions. Generally speaking, people want government services, such as well-maintained roads and protection, but they do not want to pay taxes to support those services. Questions that are intrinsic to fiscal policy include: What actions should a government take? How much support for each action exists among the citizenry? How much money should be spent on this action? If the action requires additional revenue, from where will it come? Another key question that must be addressed within the context of American government is what level of government should administer a given policy. Our government is federalist, meaning that we have one national government and 50 subgovernment units, called states. Each state in turn has many subgovernment units, known as counties, parishes, or boroughs, and within those, cities and towns. The U.S. Constitution provides broad responsibilities for the national government and reserves all other powers (and responsibilities) to the states. Because of constitutional phrases that direct the national government to “promote the general welfare” and “make all laws necessary and proper” to carry out its duties, citizens and political leaders debate which responsibilities belong at the national level and which should be the sole responsibility of the states. In general, the national government provides all the services directed in the U.S. Constitution and subsidizes the costs of other government services that are provided by the
states. Education, some health care, transportation, welfare, and senior and children’s services are examples of such government services. Some government services are provided by cities but subsidized by the federal government, such as infrastructure preservation or renovation. Fiscal policy, then, consists of three intergovernmental relationships: national to state, national to local, and state to local jurisdictions. When one government subsidizes the cost of providing a service, it does so through a grant. Federal grants have two main purposes: to direct policy implementation and outcomes in the states and to strengthen the fiscal capacity of states to provide services. Throughout the last century, many examinations of federalism have considered the effectiveness of federal grants. Federal grants have followed three paths: categorical grants, in which the grantor government places restrictions on and remains an active principal in grant implementation; block grants, in which the grantor government establishes parameters for the grant and monitors the outcomes of the grant; and general revenue sharing, in which the grantor government provides resources for the general use of the subgovernment, often with wide parameters established for the use of the grant. Each type of grant has its own political constituency and, necessarily, its detractors. A categorical grant may be open or closed; that is, it may or may not have upper limits on the amount subgovernments may receive. Many categorical grants require matching funds from subgovernments or have maintenance of effort (MOE) requirements. Some grants may be established as entitlements, which pose a threat of budget deficits, as programs in this category are, by nature, open-ended. Block grants enable the national government to promote goals broadly, while also funding programs in a targeted fashion, through formulas that favor states with the greatest needs. States often prefer block grants and general purpose fiscal assistance because these allow the greatest flexibility for local spending preferences. Federal officials argue from time to time for an increase in block grants in order to limit uncontrolled spending on matching funds and to reduce duplication in categorical grants. Despite trends toward block grants and general fiscal assistance, the national government continues to provide the bulk of its fiscal assistance in the form of categorical grants.
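The mechanics of a matching requirement can be shown with a minimal sketch. The federal-state split, the dollar amounts, and the cap used below are all hypothetical, chosen only to illustrate the difference between an open-ended categorical grant and a closed (capped) one.

```python
# A minimal sketch of how a matching requirement splits program costs.
# The 75/25 federal-state split, the amounts, and the cap are hypothetical.

def split_cost(total_cost, federal_match_rate, federal_cap=None):
    """Return (federal_share, state_share) for a categorical grant.

    An open-ended grant has no cap; a closed grant caps the federal share,
    shifting any remaining cost onto the state or local government.
    """
    federal_share = total_cost * federal_match_rate
    if federal_cap is not None:
        federal_share = min(federal_share, federal_cap)
    return federal_share, total_cost - federal_share

# Open-ended categorical grant with a hypothetical 75 percent federal match:
fed, state = split_cost(10_000_000, 0.75)
print(f"Open-ended: federal ${fed:,.0f}, state ${state:,.0f}")

# The same program under a closed grant capped at $6 million of federal money:
fed, state = split_cost(10_000_000, 0.75, federal_cap=6_000_000)
print(f"Closed:     federal ${fed:,.0f}, state ${state:,.0f}")
```

The open-ended version is what makes entitlement-style categorical grants a threat to budget discipline: as the program's total cost grows, the federal share grows with it automatically.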
Before welfare was changed to nonentitlement status and made a block grant, only 16 percent of federal domestic assistance was given in the form of block grants, and these were largely for transportation or community development. There is considerable overlap in government grants. Within the last decade, the Government Accountability Office (GAO) reported more than 160 programs in existence for job training and more than 90 programs for childhood development. Proposals to consolidate grants argue that the change would increase efficiency and promote state innovation to solve problems of poverty and economic development. Many studies have suggested, in particular, that grants geared toward poverty elimination only increase dependency and that they should therefore be eliminated or changed. Perhaps more important, when the federal government is faced with pending budget cuts, one compromise that can be reached to gain state support for reduced funding is to consolidate grants and to change their structure from categorical to block, thus giving states a dominant role in implementing the grants. The nature of federal funding streams thus involves a balance between giving federal policy makers greater control of implementation and outcomes through categorical grants and increasing the efficiency and innovation of state programs, not to mention the political support of governors and other state policy makers. Discussion of fiscal policy should also include the political manipulation of grants to various congressional districts. Congress may decide the locational recipient for a grant, such as a military base, water project, or dam, but in many cases, Congress provides skeletal guidelines for a grant program (e.g., eligibility criteria, limits on grant amounts), leaving the bureaucracy to decide grant recipients, award amounts, and other conditions of the grant. Congressional legislators can better demonstrate their efforts in the states through categorical grants and therefore have often preferred these, particularly when they are politically dependent on district constituencies, as opposed to state or local policy makers, for support. Grants may be implemented more effectively and have a greater stimulus effect when the congressional oversight committee for the authorizing agency has a strong interest in the grant. Agency
monitoring is more intense when Congress has an active interest in the policy outcomes of a grant. Increased agency monitoring subsequently has been found to trigger increased effort on the part of the subgovernment recipient to fulfill the mission of the policy. While agency administrators have the authority to approve projects for congressional districts, skepticism exists about the influence that the president or Congress has in the decision and release of grants. Given the need to strengthen constituent support, agencies can assist members by reminding them that support for projects of poor quality can ultimately harm, rather than garner, constituent support. But what about political influences on the granting of high-quality projects? Political effects in grant distribution appear to occur most often during the initial years of a grant program and when a grant appears to be threatened. Politics affects district projects, but only in subtle ways. Bureaucrats are able to insulate their agencies from illegal political influences by allowing the release of grants to be politicized. Grants being considered in election years are usually hurried for release in the electoral season if the grants are viable and would normally have attained approval. Grants for the districts of key members and members of the president’s party are given priority when possible. Fiscal policy addresses community needs in a couple of ways. The most needy metropolitan areas have their projects funded regardless of the president in office because they meet grant qualifying criteria, such as having populations of 500,000 or more, being located in a targeted Northeast or Midwest frostbelt region, or meeting “most distressed” economic criteria. Cities’ needs are also met because of congressional district concerns and, most importantly, because these cities have a dedicated lobbyist stationed in Washington, D.C., who works to identify grants, package the grant applications, and make both federal agencies and their congressional delegations aware of the cities’ needs. Cities lacking grant-writing and lobbying skills often are overlooked unless they meet other grant priority criteria outlined above. Research on fiscal policy has often concentrated on the effectiveness of grants in motivating the actions of the recipient government. Much of this literature is based on the principal-agent theory, which states
that the principal (in this case the government providing the grant) must understand how to motivate the agent (in this case the government receiving the grant). Considerations include whether the grants that require subgovernments to match resources or to maintain funding efforts actually create a stimulus effect, whereby the local government becomes committed to enacting the new policy and eventually internalizes the need for government involvement in the given area. Some evidence suggests that categorical grants with matching requirements do create a stimulus effect for state and local spending. A similar theory exists with respect to block and general revenue sharing grants. This thesis is known as the flypaper effect. It postulates that lump-sum grants will trigger state and local spending. Included in this thesis is the fiscal illusion hypothesis, which states that a voter will approve of increased state and local spending, even if this results in an increase in taxes, because he or she perceives the greatest burden in providing the good to have shifted away from him or her. More explicitly, the voter believes that if federal grants are subsidizing the good, then his or her cost (via state and local taxes) is minimized. A countertheory to the flypaper effect suggests that federal grants, whether in the form of categorical, block, or general revenue sharing, provide no stimulus, but rather are used to replace state and local funds that might have otherwise been committed to the issue area. Some evidence suggests that state and local funding has actually diminished in the areas of education and poverty relief once federal funds became available. When federal grants create incentives for increased funding on the state and local levels, those jurisdictions increase their expenditures in the given policy areas. However, when provided the opportunity to receive federal funding without matching requirements, state and local governments typically decrease their expenditures, treating federal funds as fungible resources. If a state or local government has internalized the need for a policy, it generally remains committed to funding the policy even when federal grants diminish. Fiscal policy, then, is intrinsic to all other public policies. Fiscal policy determines the directions governments will go in taking action to provide certain services. It determines how the government will
generate revenue to fund its actions. It also determines what level of government will provide the funding, or a portion of the funding, for the service, and what level of government will be responsible for implementing and monitoring the policy. In the last 20 years, the United States has moved toward block grants for social services such as welfare. It has devolved most of the policy responsibility to the states and is expanding this policy to Medicaid. The economist Richard Musgrave contends that a central question must be asked when considering fiscal policy and the responsibility for providing a government service: What is the highest level of jurisdiction that will be affected by policy outcomes? The answer, he indicates, should dictate which government should act as the principal in initiating funding and monitoring implementation and outcomes of the policy. A state may not be affected by whether a city erects a traffic light, for example. However, the nation is affected if all its citizens are not self-sustaining. Therefore, the federal government has a direct interest in education and welfare policies. As the debate continues over what level of government is appropriate for providing services, it is imperative that citizens weigh all possible outcomes before informing their representatives of their preferences for shifting responsibilities among levels of government. Further Reading Anagnoson, J. Theodore. "Federal Grant Agencies and Congressional Election Campaigns." American Journal of Political Science 26, no. 3 (August): 547–61; Cammisa, Anne Marie. Governments as Interest Groups: Intergovernmental Lobbying and the Federal System. Westport, Conn.: Praeger Publishers, 1995; Chubb, John E. "The Political Economy of Federalism." American Political Science Review 79 (December): 994–1015; Conlan, Timothy. From New Federalism to Devolution. Washington, D.C.: Brookings Institution Press, 1998; Early, Dirk. "The Role of Subsidized Housing in Reducing Homelessness: An Empirical Investigation Using Micro Data." Policy Analysis and Management 17, no. 4 (Fall 1998): 687–696; Hanson, Russell, ed. Governing Partners: State-Local Relations in the United States. Boulder, Colo.: Westview Press, 1998; Hedge, David. "Fiscal Dependency and the State Budget Process." Journal of Politics 45 (February 1983): 198–208; Hofferbert,
Richard, and John Urice. "Small-Scale Policy: The Federal Stimulus Versus Competing Explanations for State Funding of the Arts." Journal of Political Science 29, no. 2 (May): 308–329; Logan, Robert R. "Fiscal Illusion and the Grantor Government." Journal of Political Economy 94, no. 6 (1986): 1,304–1,317; Megdal, Sharon Bernstein. "The Flypaper Effect Revisited: An Econometric Explanation." Review of Economics and Statistics 69, no. 2 (May 1987): 347–351; Murray, Charles. Losing Ground: American Social Policy, 1950–1980. New York: Basic Books, 1984; O'Toole, Laurence J., ed. American Intergovernmental Relations. Washington, D.C.: Congressional Quarterly Press, 2000; Rich, Michael. "Distributive Politics and the Allocation of Federal Grants." American Political Science Review 83, no. 1 (March 1989): 193–213; Welch, Susan, and Kay Thompson. "The Impact of Federal Incentives on State Policy Innovation." American Journal of Political Science 24, no. 4 (November 1980): 715–729; Wood, Dan B. "Federalism and Policy Responsiveness: The Clean Air Act." Journal of Politics 53 (August 1991): 851–859; Zampelli, Ernest M. "Resource Fungibility, The Flypaper Effect, and the Expenditure Impact of Grants-in-Aid." Review of Economics and Statistics 68, no. 1 (February 1986): 33–40. —Marybeth D. Beller
foreign policy Foreign policy encompasses the decisions, actions, and communications that the United States takes with respect to other nations. Foreign policy matters range from informal discussions between diplomats to summit meetings between heads of state. American foreign policy interests have expanded greatly since 1789, when the United States was most concerned about protecting its territory and borders. In the 21st century, as the United States assumes the leading role internationally in combating terrorism, it works closely with other nations to protect its political, economic, and military interests. While foreign policy in the early days of the American republic was often a choice, today interacting with other nations is, in effect, a necessity. American foreign policy in the 18th century is perhaps best defined by President George Washington’s farewell address to the nation in 1796. Circu-
lated via newspapers, the address clearly and concisely summarized the first president's views on the health and future of the American republic. In foreign affairs, Washington counseled caution foremost; while he supported trade with other nations, he warned that the United States must "steer clear of permanent alliances." Washington did not advocate isolationism for the United States, but he did state that close ties to other nations could hinder pursuit of U.S. interests. In calling essentially for unilateralism in American foreign policy, Washington captured the unique position that the United States faced at the time with respect to other nations. Its geographic separation from the established states of the time meant that the United States possessed the luxury of choosing when it would engage in foreign affairs. Consequently, American foreign policy in the 19th century frequently overlapped with protection of U.S. national security. When the United States did engage with other nations in the 19th century, it sought primarily to secure or expand its territorial borders. The War of 1812 with Great Britain reinforced American independence, and afterward the United States was careful not to become entangled in European politics. In 1821, Secretary of State John Quincy Adams explicitly stated in a Fourth of July address that America "goes not abroad, in search of monsters to destroy. She is the well-wisher to the freedom and independence of all." Two years later, President James Monroe announced in his annual state of the union message that any efforts by other states to gain power in the Western Hemisphere would be construed "as dangerous to [American] peace and safety." Although the Monroe Doctrine has gained popular currency as a U.S. commitment to protecting the independence of other states in the Western Hemisphere, at the time it was primarily a defensive posture to ensure that Europe did not interfere with U.S. interests. American foreign policy in the 19th century extended primarily to ensuring its own liberty and independence, not to actively promoting those values for new revolutionary states. Perhaps the most significant feature of American foreign policy in the 19th century was its focus on U.S. expansion. The Louisiana Purchase in 1803 doubled the size of the United States and opened the nation to westward expansion.
The journalist John L. O'Sullivan later described this expansion as the nation's manifest destiny, and the concept aptly captured the American perspective on the right to grow and develop as a nation. The Mexican-American War of 1846–48 further expanded U.S. territory, establishing the southern boundary of Texas and extending the U.S. border to California. But the Civil War of the 1860s halted manifest destiny and forced the United States to focus on ensuring its internal stability and security before engaging in foreign affairs again. When both the Atlantic and Pacific Oceans bordered the territorial United States, the next logical step for a growing industrial power was to pursue global expansion. The United States embarked on this journey with the Spanish-American War of 1898, defeating Spain and gaining control of Cuba, Puerto Rico, Guam, and the Philippines in just four months. After the war, however, members of Congress debated whether the United States had a duty, or even a right, to take control of overseas territories, and what responsibilities would follow. In 1904, President Theodore Roosevelt unabashedly asserted the U.S. right to engage in international affairs with his statement that the United States would intervene in the Western Hemisphere whenever it saw evidence of "chronic wrongdoing" within states. The Roosevelt Corollary to the Monroe Doctrine expanded U.S. foreign policy interests greatly; whereas previously the United States had focused on defending the region from outside influence, now it was willing to act, in Roosevelt's words, as "an international peace power." Roosevelt exuberantly demonstrated his willingness to employ such power in the region and beyond, building the Panama Canal and negotiating control from a newly independent Panama; winning the Nobel Peace Prize for successfully orchestrating peace talks in the Russo-Japanese War of 1904–05; and showcasing U.S. naval power by sending the Great White Fleet around the world. The U.S. role as a global power in the early 20th century was marked most clearly by its entry into World War I. Although President Woodrow Wilson had pursued diplomatic and military interventions in Latin America, particularly Mexico and the Dominican Republic, he initially sought to steer clear of the growing conflict in Europe. In fact, Wilson campaigned for reelection in 1916 with the slogan "He kept us out of war." In 1917, however, due to
continuing provocations, the United States declared war on Germany. In his address to Congress requesting a declaration of war, Wilson announced an agenda that went far beyond protecting U.S. interests, proclaiming that "the world must be made safe for democracy." This far-reaching goal indicated that the United States now had not only the right, but indeed also the obligation, to assist other nations in pursuing peace, liberty, and democracy. After World War I, Wilson immersed himself in the peace treaty negotiations, focusing in particular on the creation of a League of Nations, which would prevent future world wars. But Wilson's refusal to address congressional concerns about U.S. commitments in an international organization resulted in the failure of the Senate to ratify the Treaty of Versailles. For the next decade, American foreign policy interests would pull back sharply from Wilson's ambitious agenda. As Europe inched toward another global conflict in the 1930s, the United States again initially tried to maintain a neutral position. Congress passed several neutrality laws in the 1930s, forbidding the United States from assisting either side. Although President Franklin D. Roosevelt provided some assistance to the Allied Powers, he promised in his 1940 campaign that "Your boys are not going to be sent into any foreign wars." After becoming the first president to win election to a third (and later fourth) term, Roosevelt declared that the United States must be "the great arsenal of democracy," providing arms and supplies to the Allies without engaging directly in the war. Public and congressional sentiment favored this cautious strategy; in August 1941, a vote to extend the draft, originally enacted in 1940, passed the U.S. House of Representatives by just one vote. But the Japanese attack on Pearl Harbor on December 7, 1941, galvanized the United States to enter the war, which it did the very next day. The Allied victory over the Axis Powers in 1945 raised the question of what role the United States would play in the postwar world. Learning from the experiences of Woodrow Wilson, Roosevelt had supported, during World War II itself, the creation of an international organization to promote peace, and the United Nations came into existence in 1945. The United States also was committed to rebuilding Japan and Germany for economic and security reasons. Most significantly, however, the need for a continued
U.S. presence in global affairs became evident with the origins of the cold war. The cold war was fundamentally an ideological struggle between the United States and the Soviet Union over democracy versus communism. From its beginnings to the dissolution of the Soviet Union in 1991, the United States practiced a policy of containment, albeit with many modifications across administrations. As defined by foreign policy expert George F. Kennan, the strategy of containment aimed to limit Soviet influence to existing areas, but it did not promote U.S. aggression against the Soviet Union. Rather, the premise of containment was that ultimately the internal flaws within communism would cause it to fall apart of its own accord. The cold war became a hot war in many places, notably Korea, Vietnam, and Afghanistan, but when the two superpowers came closest to military conflict during the Cuban missile crisis in 1962, they ultimately defused tensions. The end of the cold war is credited to many factors, including the U.S. defense buildup, with which the Soviet Union could not compete, the commitment of President Ronald Reagan to ridding the world of nuclear weapons, and the leadership of Soviet leader Mikhail Gorbachev, who loosened restrictions on the economy and public discourse, but its peaceful conclusion was by no means predictable. The role of the United States as one of two superpowers during the cold war fostered some significant changes in the American foreign policy process. Most significantly, the National Security Act of 1947 created the National Security Council, the Central Intelligence Agency, the Joint Chiefs of Staff, and the Department of Defense (which replaced the Department of War). All of these agencies served to increase the power of the president in American foreign policy, sometimes at the expense of Congress. While Congress largely deferred to the president in the early part of the cold war, the Vietnam War prompted a resurgence of congressional engagement in foreign affairs. Most importantly, Congress passed—over President Richard M. Nixon’s veto—the War Powers Resolution in 1973, which aimed to restrict the president’s ability to send troops abroad for extended periods without congressional approval. But no president has recognized the War Powers Resolution as constitutional, and Congress
has refrained from trying to force a president to comply with its provisions. The end of the cold war raised many new questions about American foreign-policy power and interests. President George H. W. Bush defined the era as a “new world order,” but that concept of nations working together to implement the rule of law seemed to apply primarily to the 1991 Persian Gulf War. Subsequent U.S. interventions in Somalia, Haiti, and Bosnia raised much more thorny questions about when the United States should send troops to other nations and why. In particular, the role of the United States in leading humanitarian relief efforts was widely debated, especially after U.S. soldiers were killed in a firefight in Mogadishu, Somalia, in October 1993. Just months later, the United States declined to intervene in the civil war in Rwanda, as no national security interest was at stake, though hundreds of thousands of people were killed in mere weeks. While the United States also witnessed some important foreign policy successes in the 1990s, notably the passage of the North American Free Trade Agreement and the 1998 peace accords in Northern Ireland, the post–cold war era was largely defined by questions about America’s role in the world. The U.S. foreign policy agenda came into sharp relief with the terrorist attacks of September 11, 2001. The United States immediately condemned the attacks and quickly built an international coalition to combat terrorism. While the coalition worked together closely during the war in Afghanistan in 2001, divisions soon became evident as the United States began to consider waging war against Iraq. In the summer of 2002, President George W. Bush explicitly stated that the United States would not hesitate to take action against a potential aggressor, and the president’s case for “preemptive” war was incorporated into the 2002 National Security Strategy. When the Iraq war began in 2003, the United States had the support of many allies, termed the “coalition of the willing,” but it lacked support from the United Nations. This conflict raised charges of American unilateralism as well as concerns about rebuilding Iraq. In the United States, public and congressional support for presidential leadership in foreign policy was strong after 9/11, although some of that support has dissipated as the Iraq war continues.
Ultimately, American foreign policy leaders continue to wrestle with the same questions that concerned George Washington more than two centuries ago. The need to engage with other nations for commercial reasons is clear, and, of course, advances in technology mean that the United States no longer enjoys geographic isolation. Nevertheless, debates continue about how engaged the United States needs to be in world affairs and how closely it must work with other nations to pursue its aims. No president can ignore American foreign policy, but the United States enjoys the luxury of making its foreign policy choices from a position of strength and with a wide degree of independence. See also defense policy; diplomatic policy. Further Reading Ambrose, Stephen E., and Douglas G. Brinkley. Rise to Globalism: American Foreign Policy Since 1938. New York: Penguin Books, 1997; Jentleson, Bruce W. American Foreign Policy: The Dynamics of Choice in the Twenty-First Century. 2nd ed. New York: W.W. Norton, 2004; McDougall, Walter A. Promised Land, Crusader State: The American Encounter with the World Since 1776. Boston: Houghton Mifflin, 1997; Smith, Tony. America’s Mission: The United States and the Worldwide Struggle for Democracy in the Twentieth Century. Princeton, N.J.: Princeton University Press, 1994. —Meena Bose
Great Society The Great Society is the name given to a series of programs and laws established during the administration of Lyndon B. Johnson (1963–69). President Johnson called for a Great Society to be formed in a 1964 speech at the University of Michigan. In it, he stated that he wanted people from throughout the United States to come together in seminars and workshops to address social and economic problems and to form a Great Society. The white middle class was enjoying considerable prosperity at the time of the Great Society speech. However, minorities and poor Americans were not enjoying the prosperity of their fellow citizens. The poverty rate in 1963 was 23 percent but had been as high as 27 percent in 1959. The Civil Rights move-
ment that began in the 1950s had success in making white Americans throughout the United States aware of the inequalities faced by African Americans in employment opportunities, in exercising their right to vote, and in societal segregation. As the Civil Rights movement increased awareness among whites of the inequality of life for African Americans, it simultaneously empowered many African Americans to demand government changes in order to provide equality of conditions. Urban riots over housing, employment, and education inequalities frightened many whites. Empirical evidence suggests that fear of black aggression did result in a response from policy makers to create better housing and school environments and to increase welfare benefits. The Great Society movement was not driven purely by racial altruism. Awareness of inequalities, then, had been developing for some time before the Great Society speech. President John F. Kennedy had not experienced the devastating poverty of Appalachia until he campaigned for president in 1959 and 1960. This region of the country had a per capita income that was 23 percent lower than the national average in 1960. One-third of all Appalachians lived in poverty. As president, he formed a commission, the President's Appalachian Regional Commission (PARC), to study and make recommendations on alleviating poverty in this area. The PARC Report was presented to President Johnson in 1964. The obvious disparity faced by the poor and minorities needed to be addressed. The national mourning of President Kennedy's death in November 1963 helped create an overwhelming Democratic victory in the 1964 election. President Johnson carried the election with 61 percent of the vote, the highest share of the popular vote yet recorded in a presidential race. This emboldened Johnson to propose many policy changes to move the United States toward racial and economic equality and to expand cultural and educational quality; 96 percent of his policy proposals were passed by Congress. During the Great Society era, Congress passed several laws and established programs in the areas of racial equality, social and economic equality, education, environmental improvement, and culture. Medicare, the nation's health care program for senior citizens, was established during the Great Society era in 1965, and Medicaid, the national health care program for the poor, was expanded to include
all families who qualified for Aid to Families with Dependent Children, commonly referred to as welfare. Other major pieces of legislation are outlined below. The Civil Rights Act of 1964 banned discrimination based on race, color, religion, gender, or national origin. It ended public segregation, the practice of having separate facilities for blacks and whites, largely in the South. While enforcement of the act was not uniform, this legislation paved the way for an end to the inferior treatment that African Americans in particular had faced. One year later, President Johnson signed Executive Order 11246 (supplemented in 1967 by Executive Order 11375, which added sex discrimination), further prohibiting discrimination in hiring practices (a protection first established by President Kennedy under Executive Order 10925 in 1961) and requiring contractors earning more than $50,000 per year from the federal government to have written affirmative action policies for recruiting and hiring minorities in order to integrate their workforces. The Voting Rights Act of 1965 suspended literacy tests and similar devices that had been used in southern states to discourage African Americans from voting; the poll tax in federal elections had already been abolished by the Twenty-fourth Amendment in 1964, and the Supreme Court struck down state poll taxes in 1966. The act also required covered jurisdictions, mostly in the South, to obtain federal approval ("preclearance") from the Department of Justice before changing their voting rules. Moreover, federal examiners could take over voter registration in jurisdictions where fewer than half of the voting-age population had registered or voted in the 1964 election. The Hart-Celler Act of 1965 abolished the national-origins quota system that had restricted immigration since the 1920s, so that immigrants were no longer restricted based on race or ethnicity. Previous immigration policy gave strong preference to northwestern Europeans. This act greatly increased migration to the United States from Asia and Latin America. In 1968, another civil rights act was passed. This act is popularly known as the Fair Housing Act because it banned discrimination in the rental, sale, or financing of property. It also prohibited threatening, coercing, or intimidating any person seeking to rent or purchase property. Prior to the Fair Housing Act, many deeds contained restrictive covenants stipulating that the property could never be sold to African Americans, and some even prohibited the sale of property to Jewish people.
President Johnson’s War on Poverty was a critical part of the Great Society movement. This call for economic and social equality for society’s disadvantaged was initially made in the president’s 1964 State of the Union Address. The speech stimulated the creation of many agencies and programs to reduce poverty. The Appalachian Regional Commission, the nation’s only truly federal agency, was formed and given authority to fund projects to improve the health, economic development, highway system, water, and sewage conditions of Appalachia. The Appalachian Regional Commission is governed equally by a federal commissioner and the governors of the 13 states that make up the Appalachian region. The Economic Opportunity Act of 1964 helped to establish community action programs to address the needs of people for education, skill development, and employment. The Job Corps was created to help disadvantaged youths to develop skills. The Neighborhood Youth Corps helped teenagers and young adults to acquire summer employment, and Upward Bound was established to send poor high school students to college. The Food Stamp Program became a permanent program, and Head Start was expanded from a summer only to a year-round program. Culturally, many advances were made during the Great Society years. The Corporation for Public Broadcasting, which oversees public radio and public television, was established, along with the National Endowment for the Humanities, the National Endowment for the Arts, the Kennedy Center, located in Washington, D.C., and the Hirshhorn Museum and Sculpture Garden, part of the Smithsonian Institution, also located in Washington, D.C. The prevailing criticism of the Great Society programs is that its work resulted in an explosion in the welfare rolls. Aid to Families with Dependent Children (AFDC) was altered significantly in the Public Welfare Amendments of 1962, when Congress enacted inducements for states to provide services to AFDC clients that would lead their clients to selfsufficiency. If states provided these services, as approved by the Department of Health, Education, and Welfare, the federal government agreed to pay 75 percent of the service costs. This was a dramatic increase in AFDC funding for the states. Federal funding for normal program services was based on a formula that matched state money. In the enacting
legislation, the funding match to states was one-third. Eventually, the formula was changed to provide greater assistance to poorer states: For each AFDC beneficiary, the federal government would match state spending (up to one-third of the first $37.00 in 1967; up to five-sixths of the first half of the average state payment by 1975), and then an additional proportion of state spending, depending on the state’s per capita income (ranging from 50 to 65 percent). The business community embraced the Great Society programs during the 1960s on political and economic grounds. Of political concern was the anxiety that voters would perceive an alignment between the business community and the Republican Party and therefore vote in favor of liberals who wanted to expand social programs in a time of economic recession and high unemployment, such as that experienced in the 1950s. Corporate leaders such as the CEOs of Xerox, Ford, and Chase Manhattan adopted policies emphasizing corporate responsibility to assist in societal development. Many businesses aligned themselves with Presidents Kennedy and Johnson for economic reasons as well. Government programs that resulted in transfer payments often benefited the business community, particularly in the areas of housing development and job training. Additionally, as long as the tax burden for welfare came from society at large, the business community would not be targeted for assisting in unemployment or health care for the disenfranchised. Soaring welfare rolls throughout the 1960s escalated concerns over the structure of the program and its ability to help the poor become self-sufficient. Combined state and federal expenditures on AFDC rose from $1 billion in 1960 to $6.2 billion in 1971. Changes in AFDC, from expansion of eligibility to institutionalized patients and two-parent households, as well as the elimination of residency requirements in the United States Supreme Court ruling in Shapiro v. Thompson (1969), resulted in a tripling of the AFDC caseload between 1960 and 1974. Of particular alarm was the growing awareness that young unmarried girls were having babies and qualifying for public assistance and housing, often continuing a cycle of poverty as their circumstances prevented them from moving toward self-sufficiency. Attempts to promote self-sufficiency through work requirements for AFDC clients had shown little
success. The Work Incentive (WIN) Program of 1967 tied welfare benefits to work by requiring local offices to refer qualified adult participants for training and employment. Exempt from the requirements were women whose children were under six years of age, or those whose pending employment was determined to be adverse to the structure of the family, a determination that was made by caseworkers on a subjective basis. Though day care provisions were included in the same AFDC amendments that established the WIN program, funding was inadequate, the work requirements were laxly enforced, and the jobs to which clients were referred generally provided only superficial training. Support for Great Society programs waned during the presidencies of Richard M. Nixon (1969–74) and Gerald R. Ford (1974–77). President Ronald Reagan (1981–89) curtailed a good deal of federal spending, including negotiating a 50-percent decrease in funding for the Appalachian Regional Commission. In 1996, AFDC was replaced with a block grant program, Temporary Assistance for Needy Families, which set a 60-month lifetime limit on cash assistance to poor families. Some of the programs have remained, however. Head Start and Upward Bound continue to receive funding, as do Medicare and Medicaid, although President George W. Bush signed legislation in 2005 that altered Medicare payments, and Medicaid funding on the federal and state levels continues to diminish. In 2006, Congress passed and President Bush signed legislation reauthorizing the 1965 Voting Rights Act. Funding for the arts, humanities, and public broadcasting has diminished but continues to enjoy some support at the federal level. See also entitlements; welfare policy. Further Reading Appalachia: A Report by the President's Appalachian Regional Commission (The PARC Report), 1964. Available online. URL: http://www.arc.gov/index.do?nodeId=2255. Accessed July 20, 2006; Blank, Rebecca, and Ron Haskins, eds. The New World of Welfare. Washington, D.C.: Brookings Institution Press, 2001; Derthick, Martha. The Influence of Federal Grants. Cambridge, Mass.: Harvard University Press, 1970; Executive Order 10925, 1961. Available online. URL: http://www.eeoc.gov/abouteeoc/35th/thelaw/eo-10925.html. Accessed July 20, 2006;
Executive Order 11246, 1965. Available online. URL: http://www.eeoc.gov/abouteeoc/35th/thelaw/eo-11246.html. Accessed July 20, 2006; Fording, Richard C. "The Conditional Effect of Violence." American Journal of Political Science 41, no. 1 (January 1997): 1–29; Jansson, Bruce S. The Reluctant Welfare State. 4th ed. Belmont, Calif.: Wadsworth, 2001; Jennings, Edward T. "Racial Insurgency, the State, and Welfare Expansion: A Critical Comment and Reanalysis." American Journal of Sociology 88 (May 1983): 1220–1236; Jennings, Edward T., Jr. "Urban Riots and Welfare Policy Change: A Test of the Piven-Cloward Theory." In Why Policies Succeed or Fail. Sage Yearbooks in Politics and Public Policy, vol. 8, edited by Helen M. Ingram and Dean E. Mann. Beverly Hills, Calif.: Sage Publishers, 1980; Noble, Charles. Welfare As We Knew It. New York: Oxford University Press, 1997; President L. B. Johnson's Commencement Address at Howard University, "To Fulfill These Rights," 1965. Available online. URL: http://www.lbjlib.utexas.edu/johnson/archives.hom/speeches.hom/650604.asp. Accessed July 20, 2006; Quadagno, Jill. The Color of Welfare: How Racism Undermined the War on Poverty. New York: Oxford University Press, 1994; Stefancic, Jean, and Richard Delgado. No Mercy: How Conservative Think Tanks and Foundations Changed America's Social Agenda. Philadelphia: Temple University Press, 1996; United States Bureau of the Census, Poverty Rates over Time. Available online. URL: http://www.census.gov/hhes/www/poverty/histpov/hstpov3.html. Accessed July 20, 2006; Weaver, R. Kent. Ending Welfare as We Know It. Washington, D.C.: Brookings Institution Press, 2000. —Marybeth D. Beller
gun control Gun control is one of the most enduringly controversial issues in modern American politics, yet it has deep historical roots. Guns have long been a source of American violence, but they are also inextricably intertwined with the Revolutionary and frontier traditions, cultural and recreational activities, and American mythology. Gun-related mayhem has been far more evident in large urban areas than in American frontier life, where the fanciful gun-toting, shoot-em-up existence portrayed in Hollywood movies was far less prevalent in real life. And while most assume that gun controls
(regulations pertaining to gun ownership or operation imposed by some level of government) are an artifact of the late 20th century, strict gun controls existed throughout American history, even extending back before the Revolutionary era. In recent decades, the political and policy debate over gun control and gun violence has intensified, while the nature of the controls contemplated has centered on a set of relatively modest and limited changes. Government-enacted gun control policy extends back to the colonial era. From that point through the early Federalist period in America, firearms possession was regulated in two primary ways. One type of regulation required eligible males to own guns as part of their responsibility to their service in local militias, even though there was a chronic shortage of working firearms from the colonial period until after the Civil War. In 1792, Congress passed the Uniform Militia Act, which required a militia-eligible man to “provide himself with a good musket or firelock, a sufficient bayonet and belt, two spare flints, and a knapsack, a pouch with a box therein to contain not less than twenty-four cartridges . . . each cartridge to contain a proper quantity of powder and ball.” Within the next two years, all 15 states passed similar measures, yet they lacked enforcement power, and these laws were widely ignored. In addition, states often reserved the right to take, or “impress,” these guns if they were needed for defense. The other type of early gun control law barred gun ownership to various groups, including slaves, indentured servants, Native Americans, Catholics or other non-Protestants, non–property-owning whites, and those who refused to swear oaths of loyalty to the government. Laws barring distribution of guns to Native Americans were among the first such measures. As early as the 1600s, persons discovered selling guns to Indians could be subject to death. Pennsylvania went further than other states to take guns away from citizens deemed disloyal when it passed the Test Act in 1777, which specified that those who refused to swear an oath of allegiance to the government would be disarmed, referring specifically to “persons disaffected to the liberty and independence of this state.” According to one historian, this law disarmed up to 40 percent of the state’s adult white male population. Further, the government conducted periodic gun censuses both before and after
[Photo caption: Evidence connected to the Columbine massacre on display for the first time (Getty Images)]
the adoption of the U.S. Constitution of 1787. In 1803, for example, Secretary of War Henry Dearborn coordinated the most extensive and thorough such gun census ever conducted up until that time, concluding that about 45 percent of all militiamen had “arms,” or about a quarter of the white male adult population. A similar census seven years later produced about the same results. Two types of events spurred the frequent calls for tougher gun laws in the 20th century: the spread and fear of gun-related crime and the assassinations of political leaders and celebrities. Despite enduring popular support for tougher gun laws, new federal gun regulations have been infrequent and limited in scope. The first modern push for gun control laws arose from the Progressive Era. A dramatic rise in urban crime in the late 1800s, linked to the proliferation of handguns that were heavily marketed by gun companies to urban populations, prompted citizen groups, newspaper editors, and other civic leaders to press for new regulations. In 1903, for example, the New
York City police estimated that at least 20,000 citizens in the city carried handguns on a regular basis. Gun crimes received extensive press coverage, and states and localities throughout the country enacted laws barring the carrying of concealed weapons. The federal government did not intervene in early gun control policy efforts, based on the prevailing sentiment of the time that gun regulatory decisions should be left to the states and localities. In several legal challenges to gun regulations, however, the United States Supreme Court upheld the constitutionality of such regulations and established that the Constitution’s Second Amendment (the “right to bear arms”) only applied to citizens when in the service of a government-organized and -regulated militia (e.g., Presser v. Illinois, 1886; U.S. v. Miller, 1939). Until 2008, no gun control law had ever been declared unconstitutional as a violation of the Second Amendment. Among the earliest and most sweeping of these new state laws was that enacted in New York State in 1911. Spurred by spiraling urban violence and the attempted assassination of New York City mayor
William J. Gaynor in 1910, the Sullivan Law (named after the state senator who championed the bill) subjected the sale, possession, and carrying of deadly weapons to strict regulation. In particular, pistol carrying was strictly licensed, with violation elevated to a felony by the new law. The 1920s ushered in a new era of freedom but also one of alcohol prohibition and a concomitant rise of illegal alcohol production and smuggling, which in turn accelerated the rise of organized crime tied to highly profitable bootlegging. As rival criminal gangs jockeyed for control of the enormous illegal market, crime-related violence rose. At the same time, pressure on the national government mounted as more civic leaders demanded a coordinated federal response. In 1922, for example, the American Bar Association commissioned a study that concluded that 90 percent of all murders nationwide occurred with handguns. The organization then endorsed a nationwide ban on the production and sale of handguns and handgun ammunition, except for law enforcement. As early as 1921, the Senate Judiciary Committee held hearings on a bill to bar the interstate shipment of handguns, with a few exceptions. The measure was pushed annually from 1915 until 1924, but it was always killed in committee. A similar fate met most other federal gun control efforts during this period. By the late 1920s and early 1930s, crime escalated, the Great Depression set in, and newspapers reported gangland killings and the growing popularity among gangsters of a hand-held machine gun first developed for use in World War I, the Tommy Gun. In mob-run Chicago, for example, bootlegger Hymie Weiss and his mob attacked rival Al Capone's gang headquarters in 1926, firing thousands of rounds into the building. Capone escaped; a few weeks later he sought revenge, killing Weiss and his accomplices. In a single month in 1926, 215 Chicago gangsters were murdered, with another 160 killed by the police. The public watched with horror and fascination as newspaper headlines covered events from the St. Valentine's Day Massacre to the crime sprees of Bonnie and Clyde, Pretty Boy Floyd, and John Dillinger, whose subsequent shooting death at the hands of government agents also made front pages. As if to punctuate the country's gun crime worries, an unemployed anarchist fired five shots at president-elect
Franklin D. Roosevelt, who was visiting Florida early in 1933, narrowly missing him but fatally wounding Chicago mayor Anton Cermak. The assassin had bought his .32 caliber revolver at a local pawn shop for $8. At the federal level, the first successful effort to enact gun policy began with a 10 percent excise tax on guns in 1919 and a 1927 law prohibiting the sale of handguns to private individuals through the mail. The rise of gangsterism and the election of President Franklin D. Roosevelt in 1932 spurred enactment of the first significant national gun measure, the National Firearms Act of 1934, which strictly regulated gangster-type weapons, including sawed-off shotguns and machine guns. This initial measure also included a system of handgun registration, but that provision was stripped out of the bill by gun control opponents. The Federal Firearms Act of 1938 established a licensing system for gun dealers, manufacturers, and importers. America's involvement in World War II turned the nation's attention and resources to the war effort. After the war, millions of returning soldiers who had for the first time experienced gun use while in military service helped spawn a rise in gun ownership, mostly for hunting and sporting purposes. The relative prosperity and stability of the 1950s pushed crime issues to the back burner. No new federal gun control laws reached the president's desk until 1968, when a five-year push for tougher laws culminated in the enactment of the Gun Control Act. Momentum for new controls took shape in the aftermath of the assassination of President John F. Kennedy in November 1963. His assassin, Lee Harvey Oswald, had purchased a rifle through interstate mail and used it to kill the president by firing three times from the sixth floor of a building along the president's motorcade route through downtown Dallas, Texas. By the mid-1960s, escalating crime rates and the spread of urban disorder raised new fears about spiraling gun violence. Such fears peaked in 1968, when urban rioting continued and both the civil rights leader the Rev. Martin Luther King, Jr., and Senator Robert F. Kennedy were assassinated. Those two murders provided the final impetus for passage of the Gun Control Act. The law banned interstate shipment of firearms and
ammunition to private individuals; prohibited gun sales to minors; strengthened licensing and recordkeeping requirements for dealers and collectors; extended regulations to destructive devices including land mines, bombs, hand grenades, and the like; increased penalties for gun crimes; and regulated importation of foreign-made firearms. Cut from the bill was the original proposal, backed by President Lyndon Johnson, to enact blanket gun registration and licensing. The next major gun law enacted by Congress, the Firearms Owners Protection Act of 1986 (also called the McClure-Volkmer bill), rolled back many of the provisions of the 1968 law at a time when anticontrol forces, led by the National Rifle Association (NRA), exerted great influence over Congress and the presidency of Ronald Reagan. It allowed interstate sale of long guns (rifles and shotguns), reduced record keeping for dealers, limited government regulatory powers over dealers and gun shows (in particular, limiting inspections of gun dealers to one a year), and barred firearms registration. Highly publicized incidents of mass shootings in the late 1980s and 1990s, combined with the election of gun control supporter Bill Clinton to the presidency, resulted in a new and successful effort to enact gun laws. Yielding to public pressure, Congress enacted the Brady Law in 1993 and the Assault Weapons Ban in 1994. Named after Reagan press secretary James Brady, who was seriously wounded in the 1981 assassination attempt against Reagan, the Brady Law required a five–business-day waiting period for the purchase of a handgun, during which time local law enforcement authorities were to conduct background checks on purchasers to weed out felons, the mentally incompetent, and others barred from handgun possession; increased federal firearms license fees; financed improved record keeping; and called for implementation of the National Instant Criminal Background Check System (NICS) in 1998. Since then, handgun sales can be completed as soon as the check inquiry is cleared. Dealers have up to three days to verify that the applicant is eligible to purchase a handgun, although 95 percent of all purchases clear within two hours, according to the FBI. From 1994 to 2001, the Brady Law stopped about 690,000 handgun purchases, representing about 2.5 percent of all handgun purchases.
In 1994, Congress enacted a ban on 19 specified assault weapons plus several dozen copycat models, which were distinguished from other semiautomatic weapons by virtue of their distinctive military features, including a more compact design, short barrels, large ammunition clips, lighter weight, pistol grips or thumbhole stocks, flash suppressors, or telescoping stocks (traits that facilitate concealability and "spray fire"). The law also exempted from the ban 661 specifically named weapons. According to a U.S. Department of Justice study, after the ban's enactment, assault weapon crimes dropped from 3.6 percent of gun crimes in 1995 to 1.2 percent in 2002. The federal ban was imposed for a 10-year period and lapsed in 2004 when Congress failed to renew the law. In 1997 and 1998, the country's attention was riveted by a series of seemingly inexplicable schoolyard shootings committed by school-age boys in small cities, towns, and rural areas around the country, culminating on April 20, 1999, when two high school boys brought four guns to Columbine High School in Littleton, Colorado, and began shooting. When they were done, 12 students and one teacher had been killed in the space of less than 15 minutes; 23 others were wounded. As police closed in on 18-year-old Eric Harris and 17-year-old Dylan Klebold, the two turned the guns on themselves. In the aftermath of the incident, national shock and outrage put unprecedented pressure on Congress to respond. The leadership in the U.S. Senate yielded to national pressure despite the fact that its Republican leaders opposed new gun control measures. On May 20, 1999, the Senate passed a bill that would have required background checks for sales at all gun shows, flea markets, and pawn shops (closing the "gun show loophole"); revoked gun ownership rights for those convicted of gun crimes as juveniles; imposed tougher penalties on juvenile offenders who used guns in crimes and on those who provided such guns to juveniles; required that locking devices or boxes be sold with all new handguns; blocked legal immunity for those who sold guns to felons; and banned the import of high-capacity ammunition clips (those that could hold more than 10 bullets). The Senate-passed bill was defeated in the House of Representatives by a coalition of pro–gun control representatives
who considered a compromise bill too weak and anti–gun control representatives who opposed any new controls. The following Mother's Day, in May 2000, more than 700,000 protestors staged the Million Mom March in Washington, D.C., in support of stronger gun laws and against gun violence. The George W. Bush presidency was highly sympathetic to foes of stronger gun laws. The Bush administration supported the NRA's top legislative priority, enacted in 2005, a bill to grant the gun industry and gun dealers immunity from lawsuit liability, making the gun industry unique in possessing such a protection. The Bush administration also restricted access by law enforcement to gun purchase data and opposed efforts to regulate civilian access to high-powered sniper rifles. Interest in stricter gun control laws was renewed following the shooting massacre at Virginia Tech in Blacksburg, Virginia, in April 2007. However, even in the face of such tragedies, in 2008 the Supreme Court ruled in District of Columbia v. Heller that the Second Amendment protects an individual's right to possess a firearm for personal use. National gun control policy is implemented by the Bureau of Alcohol, Tobacco, Firearms, and Explosives (ATFE), which has had only limited success in fully enforcing national gun laws. Enforcement lapses have resulted from legislative restrictions on its authority, budget cutbacks, a tarnished reputation resulting from its handling of the confrontation with the Branch Davidian compound in Waco, Texas, in 1993, and political opposition and criticism from the NRA. Further Reading Cook, Philip, and Jens Ludwig. Gun Violence: The Real Costs. New York: Oxford University Press, 2000; DeConde, Alexander. Gun Violence in America. Boston: Northeastern University Press, 2001; Spitzer, Robert J. The Right to Bear Arms. Santa Barbara, Calif.: ABC-CLIO, 2001; Spitzer, Robert J. The Politics of Gun Control. Washington, D.C.: Congressional Quarterly Press, 2004; Uviller, H. Richard, and William G. Merkel. The Militia and the Right to Bear Arms. Durham, N.C.: Duke University Press, 2002. —Robert J. Spitzer
health-care policy Health-care policy includes actions that governments take to influence the provision of health-care services and the various government activities that affect or attempt to affect public health and well-being. It can be viewed narrowly to mean the design and implementation of federal and state programs that affect the provision of health-care services , such as Medicare and Medicaid. It also can be defined more broadly by recognizing that governments engage in many other activities that influence both public and private health care decision making, such as funding health science research and public health departments and agencies, subsidizing medical education and hospital construction, and regulating food, drugs, and medical devices. Even environmental protection policies, such as clean air and water laws, are an important component of public health. Health-care policy is a relatively recent endeavor for the U.S. government. What we consider to be the core of health-care policy emerged in the United States only after the 1930s, with the idea of health insurance. Individuals could take out an insurance policy, much as they did for their lives, houses, or cars, that would defray the cost of health care should an illness develop or an injury occur. Today most people are insured through their jobs, and the insurance policies cover routine medical services as well as preventive health care. Others are covered through the federal Medicare and Medicaid programs or through the Veterans’ Health Care System. The United States relies largely on the private market and individual choice to reach health-care goals. That is, most health-care services are provided by doctors and other medical staff who work in clinics and hospitals that are privately run. The U.S. government plays a smaller role than is found in most other developed nations, where national health insurance programs are common. The result is a health-care system that is something of a hybrid. It is neither completely private nor fully public. One consequence of this approach to health care is that some 45 million individuals in the United States, or 18 percent of the nonelderly population, have no health insurance. Health care can be expensive. This affects the choices that both government and employers make in providing health care insurance and services. Moreover, general health-care costs have been rising
sharply in recent years, well above the inflation rate, and drug costs have been rising even more steeply. As costs continue to increase, it is a sure bet that government health care budgets will be under severe pressure, and most employers will be forced to pass along their own added burden to employees. Those employees will likely find themselves paying more for health insurance and also receiving fewer benefits. No other area of public policy reaches so deeply into the personal lives of Americans as health care and how to pay for it. For some, it is literally a matter of life and death, and for many more access to health care can significantly affect the quality of their lives. Government policies influence not only access to and the quality of health services across the country, but also the pace of development and approval of new drugs and medical technologies and the extent of health research that could lead to new life-saving treatments. Whether the concern is periodic medical examinations, screening for major diseases, or coping with life-threatening illnesses, health-care policy decisions eventually affect everyone, and often in ways that are not equitable. The U.S. health-care system is widely recognized as one of the best in the world in terms of the number of physicians per capita, the number of state-of-the-art hospitals and clinics, and the number of health-care specialists and their expertise. The United States also has a large percentage of the world's major pharmaceutical research centers and biotechnology companies, which increases the availability of cutting-edge medical treatments. Despite these many strengths, however, a World Health Organization study in 2000 ranked the nation only 37th among 191 nations, even though it spent a higher percentage of its gross domestic product (GDP) on health care than any other country. Such findings reflect the highly unequal access of the population to critical health-care services, from prenatal care to preventive screening for chronic illnesses. The poor, elderly, minorities, and those living in rural areas generally receive less frequent and less adequate medical care than white, middle-class residents of urban and suburban areas. Much of the contemporary debate over health-care policy revolves around the major federal and state programs and the ways in which they might be changed to improve their effectiveness in delivery of
health-care services , constrain rising costs, and promote equity. As is usually the case in American politics, there are often striking differences between liberals and conservatives and between Democrats and Republicans on these issues. Liberals and Democrats tend to favor a stronger governmental role in health-care insurance, in part to reduce current inequities in access to health-care services and because they see such access as a right that should be guaranteed by government and not subject to the uncertainties of market forces. Conservatives and Republicans tend to believe reliance on the private sector and competition among health-care insurers and providers is preferable to having government do more. Consideration of the major federal programs illustrates the challenge of changing health-care policy. Medicare is the leading federal program. It was approved by Congress in 1965 to help senior citizens, defined as those 65 years of age and older, to meet basic health care needs. It now includes those under age 65 with permanent disabilities and those with diabetes or end-stage renal disease. As of 2006, Medicare had some 40 million beneficiaries, a number certain to rise dramatically in the years ahead as the baby boom generation (those born between 1946 and 1964) begins reaching age 65. In effect, Medicare is a national health insurance program, but only for a defined population—senior citizens. Medicare offers a core plan, called Medicare Part A, that pays for a portion of hospital charges, with patients responsible for a copayment. Most Medicare recipients also select an optional Part B, which offers supplemental insurance for physician charges, diagnostic tests, and hospital outpatient services. In 2006, the cost to individuals was about $90 per month. Many health-care costs are not covered by either Part A or Part B of Medicare, which led Congress in 2003 to add a new Part D to cover a sizeable portion of prescription drug costs. The Republican majority in Congress designed the new program to encourage competition among private insurance companies and to promote individual choice among insurers. Critics complained, however, that the program was made unnecessarily complex and rewarded insurance companies with substantial benefits while doing little to control health-care costs. Debate over the future of Medicare is likely to continue, especially in light of projections of higher demand for its services and ris-
ing program costs. It is also likely to reflect partisan and ideological differences over health-care policy. Medicaid is the other major federal health-care program. Also established in 1965, Medicaid was intended to assist the poor and disabled through a federal-state program of health insurance. It does so by setting standards for hospital services, outpatient services, physician services, and laboratory testing and by sharing costs of health-care services for program recipients with the states. The states set standards for eligibility and overall benefit levels for the program, and both vary quite a bit from state to state. In 2003, Medicaid provided for some 54 million people, a number expected to increase to 65 million by 2015. In 1997, Congress approved a State Children’s Health Insurance Program (SCHIP), which was designed to ensure that children living in poverty had medical insurance. As is the case with Medicaid, the federal government provides funds to the states, which the states match. The states are free to set eligibility levels. Some 2 million children are covered under the SCHIP program who would not be eligible under Medicaid. Except for education, Medicaid is the largest program in most state budgets. In response to the soaring number of Medicaid recipients and rising costs, in 2005 Congress approved broad changes that give states new powers to reduce costs by imposing higher copayments and insurance premiums on recipients. States also were given the right to limit or eliminate coverage for many services previously guaranteed by federal law. Even before the new law, many states were reconsidering how they structure their programs and what services they could afford to provide. For example, some states have tried to reduce the use of expensive nursing homes and to foster health care in the home or community. Many states also have given greater attention to detection of fraud and abuse on the part of service providers, which some analysts have estimated to cost as much as 7 percent of the entire Medicaid budget, and much higher in some states, such as New York, that have done little to control these costs. The third major federal program is in many ways one of the most successful and yet not as visible as Medicare and Medicaid. The Veterans Health-care system is designed to serve the needs of American veterans by providing primary medical care, specialized care, and other medical and social services,
including rehabilitation. The Veterans Health Administration operates veterans' hospitals and clinics across the nation and provides extensive coverage for veterans with service-related disabilities and disease, particularly for those with no private health-care insurance. In 1996 Congress substantially expanded the veterans' health programs. The new health care plan emphasizes preventive and primary care but also offers a full range of services, including inpatient and outpatient medical, surgical, and mental-health services; prescription and over-the-counter drugs and medical supplies; emergency care; and comprehensive rehabilitation services. At the request of senior military leaders, in 2000 Congress approved another health care program for career military personnel. It expands the military's health plan, known as TriCare, to include retirees with at least 20 years of military service once they become eligible for Medicare. TriCare pays for most of the costs of medical treatment that are not covered by Medicare. This brief description of major federal health-care programs only hints at the major challenges that health-care policy faces in the years ahead. Perhaps the most important is what to do about rising costs. The Centers for Medicare and Medicaid Services estimates that health care expenditures will grow at some 7 percent annually, rising from $1.9 trillion in 2005 to $3.6 trillion by 2014. Drug costs are expected to grow at an estimated 9 to 12 percent annually over the next decade. The federal government has projected per capita expenditures for health care to rise from $6,432 in 2005 to $11,046 in 2014. Hence, there is likely to be considerable pressure on both public and private payers to cover these accelerating costs of health care. Yet such demands come at a time of substantial federal budget deficits, similar constraints on state budgets, and a public reluctance to see tax rates increase. What is the best way to deal with the predicament? Two broad trends suggest possible solutions. One is the effort to encourage individuals in both public and private health-care insurance programs to rely on health maintenance organizations (HMOs) or other so-called managed-care programs. These are designed to promote cost-effective health-care service by encouraging regular screening exams, limiting access to costly services and specialists, and providing for lower fees that are negotiated with service providers
(e.g., for physician or hospital services). Managed care now dominates the U.S. health-care system, and there is little doubt that it saves the nation billions of dollars a year in health-care costs. Despite some misgivings by the public, particularly during the 1990s, managed care is likely to continue its dominant role. Its future may well include additional economy measures, such as restrictions on drug coverage or access to specialists. The other major solution to rising health-care costs is to encourage greater reliance on preventive health care, that is, on promotion of health and prevention of disease. If individuals are given incentives to take better care of themselves throughout their lives, they are likely to be healthier and require less medical care than would otherwise be the case. Preventive health-care measures include regular physical examinations and diagnostic tests; education and training in diet, exercise, and stress management; and smoking cessation programs, among others. For example, routine screening for serious diseases such as diabetes and high blood pressure could lead to earlier and more effective treatment. Improved health-care education could lead individuals to better control their diets and make other lifestyle choices that can improve their health. Two of the most obvious concerns are smoking and diet. Smoking accounts for more than 440,000 deaths annually in the United States, making it the single most preventable cause of premature death. Secondhand smoke takes an additional toll, particularly in children. About half of those who smoke die prematurely from cancer, heart disease, emphysema, and other smoking-related diseases. Smoking cessation at any age conveys significant health benefits. Diet is equally important. The U.S. surgeon general has reported that if left unabated, the trend toward an overweight and obese population may lead to as many health care problems and premature deaths as smoking. About 30 percent of those age 20 or older, some 60 million people, are obese. Another 35 percent of the adult population are overweight, and the number of young people who are overweight has tripled since 1980. Being overweight increases the risk of many health problems such as hypertension, high cholesterol levels, type 2 diabetes, heart diseases, and stroke.
What can be done to halt or reverse the trend? A change in the American diet would be one helpful action, as would other changes in lifestyle, such as regular exercise. The U.S. population increasingly has consumed foods high in calories, fat, and cholesterol, and most Americans also fall well short of the recommended levels of physical exercise and fitness. Changes in diet and exercise can come as individual choices, but governments can also help, as can private employers. Many school districts, for example, have improved nutrition in cafeterias and limited high-calorie food and drinks in their vending machines. Governments at all levels and employers as well have tried to educate people on diet and exercise, though much more could be done. As this overview of health-care issues suggests, there is a clear need to assess the effectiveness, efficiency, and equity of all the programs and activities that constitute health-care policy in the United States. The high costs of health care alone suggest the logic of doing so, especially in terms of the standards of efficiency and cost that are applied to all public policy areas. But health-care policy also affects individuals so directly and in so many important ways that a search for better policies and programs is imperative to ensure that all citizens have reasonable access to the health care they need and that such care be of high quality. There is no one right way to change health-care policy, and solutions will emerge only after the usual process of public debate and deliberation. The suggested readings and Web sites listed below provide essential information about health-care policy. They also can assist individuals in analyzing policies and programs and seeking creative solutions to better meet these needs. See also Department of Health and Human Services; entitlements; welfare policy. Further Reading American Association of Health Plans. Available online. URL: www.aahp.org. Accessed August 11, 2006; Bodenheimer, Thomas S., and Kevin Grumbach. Understanding Health Policy. 3rd ed. New York: McGraw Hill, 2001; Centers for Medicare and Medicaid Services, Department of Health and Human Services. Available online. URL: www.cms.hhs.gov. Accessed August 11, 2006; Hacker, Jacob S. The Road to Nowhere: The Genesis of President Clinton's Plan for
Health Security. Princeton, N.J.: Princeton University Press, 1997; Health Insurance Association of America. Available online. URL: www.hiaa.org. Accessed August 11, 2006; Kaiser Family Foundation. Available online. URL: www.kff.org. Accessed August 11, 2006; The Kaiser Network. Available online. URL: www.kaisernetwork.org. Accessed August 11, 2006; Kraft, Michael E., and Scott R. Furlong. Public Policy: Politics, Analysis, and Alternatives. 2nd ed. Washington, D.C.: Congressional Quarterly Press, 2007; Patel, Kant, and Mark E. Rushefsky. Health Care Politics and Policy in America. 3rd ed. Armonk, N.Y.: M.E. Sharpe, 2006 —Michael E. Kraft
housing policy
The philosophical foundation justifying the investment of public resources by government in the provision of shelter for its citizens can be found in the democratic theory of John Locke, who argued that the fundamental purpose of government and the basis of its legitimacy to exercise authority is the protection of its citizens. Though even prior to the 20th century it was widely believed that access to some form of shelter was essential for protection, the capitalist economic system Locke embraced emphasized something more. While socialist ideologies stress communal ownership, capitalism depends on the existence of private property, leading the founders of the American government, who were well versed in Locke's theory, to argue that the basis for citizenship is property ownership, especially of a home, and it is the duty of government to protect and promote that ownership. Thus, in capitalist democracies such as the United States, an individual's material, social, and mental well-being tends to be associated with home ownership. Consequently, political leaders in the United States have developed a housing policy that not only emphasizes the widespread availability of shelter, usually affordable rental housing, but the promotion of home ownership as well. What has made this policy contentious is the extent to which government has gone to achieve these goals by using incentives to shape the housing market and regulate the detrimental impact the market may have on society. Because middle- and upper-class individuals and families already possess the assets and creditworthiness to obtain home mortgage loans,
government housing policy has necessarily been directed at low-income families, often including racial minorities, in urban centers where housing has often been scarce and expensive to obtain. Housing policy has also become deeply intertwined with issues of urban renewal, the gentrification of traditionally ethnic neighborhoods, and the overall health of entire communities because the value of property ownership not only provides individuals with financial assets, but also directly affects the value of the surrounding properties. Federal housing policy is best understood from three perspectives: provision of publicly funded rental housing, promotion of home ownership by insuring higher risk home mortgage loans, and laws requiring banks to make more mortgage loans available irrespective of income and race. Though the provision of shelter is perhaps easier to justify as a basic human right than home ownership, the government’s provision of affordable housing to the poor by dramatically increasing the availability of rental housing for low-income individuals and families has perhaps been the most controversial aspect of housing policy. A late addition to President Franklin D. Roosevelt’s New Deal programs, the Housing Act of 1937 sought to increase the number of affordable rental housing units in urban centers by either renovating existing units or helping to finance the construction of new units. The policy was actually designed to serve two purposes. The first was to provide shelter for the thousands of families made poor and homeless as the Great Depression put people out of work, drove up the number of foreclosures, and reduced the availability of mortgage loans. But it was also designed to stimulate the job-producing construction industry by providing money for building and renovation. To obtain widespread political support and local assistance in administration, Roosevelt chose to provide federal money directly to state approved but locally operated public housing authorities charged with identifying building sites and managing apartment complexes. The economic boom following World War II helped fuel the construction of large numbers of public housing projects in major cities. Local control in site selection, however, allowed communities fearful of the impact low-income housing might have on property values and quality of life to use advocacy with city halls and zoning laws to keep these projects
out of the suburbs, a practice that concentrated many public housing complexes in the older inner cities rather than in middle-class neighborhoods and is widely believed to have contributed to the emergence of ghettos. Poorly funded by Congress and most local governments, and with little effort made to make sure that facilities were well maintained or met basic safety standards, public housing projects developed reputations for terrible living conditions, poor management, urban blight, and drug dealing. So widely was this believed that by the 1970s it became a social stigma to live in the "projects." In 1968, the Lyndon B. Johnson administration, as part of its War on Poverty and Model Cities programs, vastly expanded federal public housing projects under the new U.S. Department of Housing and Urban Development (HUD). More federal funds became available, many new units were planned to be built or renovated, and even the Federal Housing Administration was directed to begin underwriting funding for public housing. Federal funding for housing programs and most urban renewal programs in general began to change radically with the Richard Nixon administration's greater emphasis on federalism and local policy control. The funds for these programs were pooled together into Community Development Block Grants (CDBGs), giving local politicians more flexibility to divide the money up between programs so that program priorities would match local needs. The flip side, of course, was a reduction in funding guarantees for individual programs such as public housing. Nixon's Housing and Community Development Act of 1974 also created the Section 8 Program (named after its section number in the U.S. Code), designed to subsidize the rent of low-income tenants in nongovernment-supported apartment housing, with the amount of the subsidy paid to the property owner on behalf of the tenant calculated as the difference between the unit's fair market rent and 25 percent (later 30 percent) of the tenant's income. Public housing programs and Section 8 subsidies and indeed all of HUD came under assault in 1981, when President Ronald Reagan made drastic cuts across the board in domestic spending. His actions were bolstered by a growing public perception that all public housing had done was, at best, to concentrate the poor in decaying jobless neighborhoods beset with drugs and alcohol, or, at worst, give free
hand-outs that encouraged the poor to remain poor rather than try to find jobs and elevate themselves out of poverty. Though Congress later resisted deeper cuts in these programs and even restored some of what had been lost, construction of new housing projects virtually ceased in the early 1980s. President George H. W. Bush and his HUD secretary, Jack Kemp, however, took a somewhat more benign view of public housing, if still grounded in the idea of helping the poor to help themselves. Though funding for and availability of public housing was arguably far less than the demand during the Reagan and Bush years, Kemp did come into HUD with a signature selfempowerment plan. His Housing Opportunities for People Everywhere, or HOPE, program was an ambitious plan to provide federal support for tenants wishing to collectively purchase their public housing complex. He hoped that tenants would ultimately buy their own apartments and become home owners and that this would give low-income families a greater sense of commitment to both their properties and to their communities, though critics claimed that such a program could only benefit those low-income families fortunate enough to have steady incomes. The Bill Clinton administration’s rather lukewarm interest in and financial support for housing policy caused this and the more traditional programs to stagnate, so that the current state of public housing policy is not dramatically different from that of the 1970s. Even before creating public housing policy, the Franklin Roosevelt administration tried to boost the sagging housing construction industry and address the growing problem of homelessness in the 1930s by helping the battered financial industry make more mortgage loans. Falling incomes had left many individuals and families unable to qualify for standard home mortgage loans or prevent their current homes from being foreclosed, leaving a housing market so small that both the financial and construction industries were in danger of collapse. The only way to bolster the market was by reducing the risk to banks of loan default by promising that the government would cover a bank’s loss if a customer did default. Thus, Congress created the Federal Housing Administration (FHA) in the National Housing Act of 1934 to use the resources of the federal government to underwrite mortgage loans considered high risk by the banking industry. To further boost the lending indus-
try, the Roosevelt administration also created the Federal National Mortgage Association, or FannieMae, to purchase FHA-secured mortgages from banks, bundle them, and then resell them in a special market to other financial institutions. By purchasing the mortgages in this “secondary” market, FannieMae provided lenders with greater liquid assets that could in turn be reinvested in more loans. Although the FHA enjoyed tremendous political support from the banking and home building industries and many members of Congress, it may have also contributed to inner-city decline and further concentration of the poor. Even with the federal government guarantee and a secondary mortgage market, financial institutions were often still unwilling to make loans to the very poor, in some cases using appraisal standards in the 1940s that bordered on discriminatory. The result was a continued neglect of the poor in the inner cities, where the only source of credit, when there was actually housing to buy, came from loan sharks. The Lyndon Johnson administration attempted to refocus the direction of the FHA in the 1960s by requiring it to help underwrite loans for the construction and renovation of subsidized rental housing in the inner cities. Johnson also sought to further expand mortgage availability by allowing FannieMae to resell non–FHA-backed loans, handing off that responsibility to the new Government National Mortgage Association (GinnieMae). Unfortunately, lax oversight of the FHA’s work in subsidized housing, both in the terms of the mortgage loans and the quality of the housing produced, led to not only very poor-quality housing being built but large-scale fraud from developers and local politicians. Johnson’s attempt to refocus housing policy on ending racial segregation and providing federal assistance to the poor brought the issue of discrimination to the forefront of housing policy. The result was adding a stick to the array of carrots government already offered to the lending industry. Urban community activists organizing in the 1960s as part of the Civil Rights movement accused the mortgage lending industry of deliberately marking off sections of cities where racial minorities were concentrated in red marker (“redlining”). Congress responded by passing the Fair Housing Act and the Equal Credit Opportunity Act, the so-called fair lending laws, making it illegal to discriminate in the granting of mortgage
credit on the basis of race and giving the U.S. Department of Justice and bank regulators the authority to prosecute lending institutions found to be engaging in mortgage loan discrimination. The problem with the fair lending laws has been the identification of discrimination and enforcement. Claiming that bank regulators were unwilling to put time and resources into investigation and prosecution of the financial industry for redlining, advocates successfully persuaded Congress to pass the Home Mortgage Disclosure Act of 1975, or HMDA, requiring all lenders to document the number of mortgage loans made and where they were made. (HMDA data can be found at http://www.ffiec.gov/hmda.) The public availability of this data would, they hoped, pressure banks into making more widespread loans and pressure regulators to prosecute if they did not. When the early data appeared to show lending patterns favoring middle- and upper-income white neighborhoods, either in the suburbs or gentrified urban centers, advocates pressed their accusations of discrimination against the banking industry. Again Congress responded by passing the Community Reinvestment Act of 1977 (CRA) requiring all lending institutions to reinvest a portion of their assets, especially home mortgage loans, in all communities from which they solicited deposits, which often included low-income and minority neighborhoods. Failure to comply with the law and to document their actions with HMDA data would require financial regulators to lay sanctions on the banks and even deny applications to merge and acquire other institutions. Together HMDA and CRA have been the most effective tools urban advocates and bank regulators have to push the lending industry into increasing its efforts to make home ownership more widely available to the poor, though recent changes in banking laws under the Gramm-LeachBliley Act of 1999 may have undermined them by allowing banks to acquire and move assets into affiliate insurance and investment institutions not covered by these laws. Perhaps the greatest ongoing challenge in federal housing policy is continuing to find ways to provide public funds to support programs that provide shelter to the less fortunate while at the same time providing opportunities for these same individuals to climb out of poverty and own homes of their own. At the same time, government must also balance this need against
the health of the financial industry; pressuring lenders into making too many high-risk loans could have far-reaching consequences to the overall availability of credit and the viability of this industry. That said, discrimination in mortgage lending on the basis of race and income as well as the concentration of the poor into ghettos remain the paramount issues of concern for lawmakers and public housing advocates. Though large-scale statistical studies using HMDA data have not turned up clear evidence of racial discrimination in mortgage lending, it is worth noting that African Americans make up only 13 percent of the population of the United States, and whites 77 percent in 2005 (according to 2005 U.S. Census Bureau data) and that 49.3 percent of individuals and families served by public housing programs are black and only 46.5 percent are white, according to HUD's Fiscal Year 2005 Annual Report on Fair Housing. Furthermore, in 2003, a total of 1,793,000 families were reported living in public housing, 50 percent of whom had an annual income of less than $10,000 (the median annual income was $9,973) according to the 2003 American Housing Survey in the United States. The differences in home loans financed by the Federal Housing Administration are somewhat less stark. Here, 78.6 percent of all FHA-insured single-family loans in 2005 were to whites, while African Americans received only 14.1 percent. In terms of annual income, 41 percent of all families receiving government-insured mortgage loans (from the FHA and other programs) reported making less than $10,000, though the median income was $12,918. Finally, recent years have seen a growing concern regarding subprime lending to low-income urban families, or loans with high interest rates and other fees offered to customers considered to be high risk, which places a tremendous strain on families with variable income streams as they attempt to pay back loans that can border on usury. With these apparent disparities, the government's housing policy is likely to remain politically controversial well into the 21st century. Further Reading Bradford, Calvin. "Financing Home Ownership: The Federal Role in Neighborhood Decline." Urban Affairs Quarterly 14 (March 1979): 313–336; Bratt, Rachel G. "Housing for Very-Low Income Households: The
Record of President Clinton, 1993–2000.” Report W02-8, Joint Center for Housing Studies, Harvard University, 2002; Calem, Paul S., Jonathan E. Hershaff, and Susan M. Wachter. “Neighborhood Patterns of Subprime Lending: Evidence from Disparate Cities.” Housing Policy Debate 15, no. 3 (2004): 603– 622; Dye, Thomas R. Understanding Public Policy. 2nd ed. Englewood Cliffs, N.J.: Prentice Hall, 1972; Hays, R. Allen. “Ownership and Autonomy in Capitalist Societies.” In Ownership, Control, and the Future of Housing Policy, edited by R. Allen Hays. Westport, Conn.: Greenwood Press, 1993;———. The Federal Government and Urban Housing. 2nd ed. Albany: State University of New York Press, 1995; Kleinman, Barbara A., and Katherine Sloss Berger. “The Home Mortgage Disclosure Act of 1975: Will It Protect Urban Consumers From Redlining?” New England Law Journal 12, no. 4 (1997): 957–989; Munnell, Alicia H., Geoffrey M. B. Tootell, Lynn E. Browne, and James McEneaney. “Mortgage Lending in Boston: Interpreting HMDA Data.” American Economic Review 86 (March 1996): 25–53; Santiago, Nellie R., Thomas T. Holyoke, and Ross D. Levi. “Turning David and Goliath into the Odd Couple: The Community Reinvestment Act and Community Development Financial Institutions.” Journal of Law and Policy 6 (Fall 1998): 571–651; Squires, Gregory D., and Sally O’Connor. Color and Money: Politics and Prospects for Community Reinvestment in Urban America. Albany: State University of New York Press, 2001; U.S. Census Bureau. American Housing Survey for the United States in 2003. Washington, D.C. 2003; U.S. Department of Housing and Urban Development. The State of Fair Housing. Report by the Office of Fair Housing and Equal Opportunity. Washington, D.C., 2005. —Thomas T. Holyoke
immigration At first, immigration to the United States was relatively unregulated, as there was plenty of room in America for newcomers and no perceived threat to those already here. In many states, especially the newer western ones, immigrants were given the right to vote and hold public office before becoming citizens (as long as the individual had declared his or her intention to become a citizen). No restrictive
federal immigration laws were passed until the Immigration Act of 1875, with the notable exception of the 1808 ban on the importation of slaves. As relatively large numbers of Irish and German Catholics, many of them desperately poor, began to immigrate to the United States in the late 1820s and 1830s, anti-immigrant sentiment, combined with the costs of supporting the poor, led cities and states to start using legislation to try to stem the tide. Some eastern states passed head taxes (e.g., $2 a head in Massachusetts) for all passengers, to be paid by the owners of immigrant vessels. Some eastern states passed immigration laws in the 1840s, usually to limit the inflow of immigrants from Ireland and Germany, but these laws were invalidated by the United States Supreme Court in the Passenger Cases (1849), which declared the regulation of immigration to be a federal power under the commerce clause. By 1890, immigration to the United States shifted from "filling up" the New World to the importation of labor to an industrialized nation. At the same time, there was a shift in immigration from "traditional" countries (England, Ireland, Germany, France, and the Scandinavian countries) to those of
east central and southern Europe. Immigration boomed at the turn of the 20th century as America recruited laborers from even more remote rural regions in eastern Europe. Because they came to work, not to live, many returned home after a relatively short period (five years or less). Nevertheless, immigration policy since the turn of the century has often aimed, either explicitly or implicitly, to encourage immigration by “desirable” races and ethnicities while discouraging or banning members of “undesirable” groups. In the post–Civil War period, various immigration laws were passed to keep out foreigners deemed undesirable for reasons other than race or ethnicity (although some of the excluded categories had racial underpinnings): prostitutes and felons (1875); lunatics, idiots, and persons likely to become public charges (1882); victims of loathsome or dangerous diseases, polygamists, and persons convicted of misdemeanors involving moral turpitude (1891); epileptics, professional beggars, procurers, anarchists, and advocates of political violence (1903); imbeciles, feeble-minded persons, and persons with mental defects (1907); and illiterates (1917).
In the 1840s and early 1850s, anti-immigrant sentiment led to the emergence of the American (Know-Nothing) Party, whose major platform was restrictive immigration and naturalization laws. After the Civil War ended in 1865 and the Fourteenth Amendment to the U.S. Constitution, which established a uniform national citizenship, was ratified in 1868, Congress changed the nation's naturalization laws to allow for the naturalization of "white persons and persons of African descent," pointedly excluding Asians. This was then used to bar Asians from entry to the United States on the grounds that they were ineligible for citizenship. The Page Act of 1875 added felons and any women "imported for the purposes of prostitution" to the list of those barred from entering and was particularly aimed at Asian women. In the early 1870s, anti-Asian nativism triggered by Chinese immigration was rampant in both political parties. Racial prejudice, combined with fears of economic competition by white working men in the West, led soon afterward to overwhelming bipartisan support for the Chinese Exclusion Act of 1882, which almost completely shut off immigration from China and also barred thousands of Chinese who had left the country temporarily from ever returning. The act was upheld by the U.S. Supreme Court, which ruled in a series of cases that the exclusion of a particular "class" of immigrant was constitutional. It was finally repealed in 1943. Asian immigration was so severely restricted that for most of America's history, Asians have represented less than 1 percent of the total population. In 1980, that increased to 1.5 percent, and by 2005 census estimates, people of Asian descent now make up 4.2 percent of the population. In 1876, the U.S. Supreme Court ruled in Henderson v. Mayor of New York that the federal government, not state and local governments, had authority over immigration. Federal head taxes followed soon afterward, as did regulation to bar certain types of prospective immigrants (such as criminals, those likely to become a public charge, those suffering from contagious diseases, and those who failed to meet certain moral standards, such as polygamists). However, very few immigrants were turned away. After President William McKinley was assassinated by an anarchist in 1901, the list of those excluded from
immigrating was expanded in 1903 to include anarchists and other political radicals. By 1920, more than a third of the U.S. population consisted of immigrants and their children, and anti-immigrant sentiment increased in response to perceived threats to the American way of life and economic threats to laborers, as well as racism against nonwhites and "inferior" Europeans such as Slavs and Jews. Public nativism was reflected in Congress by various attempts, led by Massachusetts senator Henry Cabot Lodge, to require prospective immigrants to pass a literacy test. Congress approved Lodge's bill several times (1897, 1913, 1915, and 1917), and it was vetoed each time by Presidents Grover Cleveland, William Howard Taft, and Woodrow Wilson. However, in 1917, fueled by World War I nationalism, Congress voted to override the president's veto (287-106 in the House, 62-19 in the Senate). The 1917 act required literacy, although not necessarily in English, and also barred all Asian immigrants (the "barred zone") except Filipinos (considered American nationals in the wake of the Spanish-American War) and Japanese (in recognition of the Gentlemen's Agreement of 1907–08, whereby Japan agreed not to issue passports to laborers and to restrict immigration to family reunification). After the war ended, the media warned of a coming flood of undesirable immigrants fleeing war-ravaged Europe; despite the lack of any evidence of such a flood, Congress voted to severely restrict immigration using a system of quotas based on the number of people from each nation already in the United States (the Dillingham plan, after its author, Vermont Republican Senator William P. Dillingham). The new laws were supported by domestic labor unions, which feared foreign competition, and by large firms hoping to shift from white to black labor, which they viewed as docile and unlikely to organize. President Wilson pocket-vetoed the first Dillingham bill just as his second term expired, but it was quickly reenacted in a special session called by the new president, Warren G. Harding, who signed it. The Dillingham plan (the Quota Law of 1921) set an overall limit on immigration of about 355,000. It allowed unlimited immigration from the Western Hemisphere (e.g., Canada and Mexico) and allowed a tiny quota for Japanese but no other Asians (except, of course, Filipinos), and Europeans were given a limit of 3 percent annually of the
number of foreign-born Europeans in the country, assigned in proportion to the nationality recorded in the 1910 census. Congress extended the bill for two years in 1922. Although immigration regularly exceeded these statutory limits, it was dramatically reduced. The 1924 Immigration Act (the Reed-Johnson Act) set up a two-stage system of quotas. Until 1929, the base was shifted from the 1910 census figures to the 1890 census, and quotas were reduced from 3 percent to 2 percent, with an overall limit of only 165,000. The second phase shifted the base back to the first census of 1790 and further tightened Asian exclusion by abrogating the Gentlemen's Agreement with Japan and barring all Japanese. These shifting census dates were intended to increase quotas assigned to "desirable" nations, such as the British Isles, Germany, and Scandinavia, while reducing quotas for "undesirables." The national-origin quota system of the Immigration Act of 1924 remained in effect until 1965, although it was changed slightly over the years. In 1943, Chinese exclusion was repealed and a quota granted. In 1946, quotas were given to the Philippines and India. All other Asians, however, were still denied admission. In 1952, Congress passed the McCarran-Walter Act, which ended racial and ethnic exclusions in immigration and naturalization law. However, national origin quotas remained in effect using the 1920 census. President Harry S. Truman vetoed the bill, objecting to maintenance of the quota system, but Congress overrode his veto. Nevertheless, between 1952 and 1965, the proportion of immigrants coming from Europe declined, while the proportion from Asia and Latin America rose considerably. Many of these were refugees. Another notable aspect of the McCarran-Walter Act, related to the anticommunist mania of the times, was a ban on immigration by lesbians and gay men, with the rationale that they were particularly vulnerable to communism and blackmail. The exclusion was repealed with the Immigration Act of 1990 (which also removed the ban on former members of the Communist Party); homosexuals are still disproportionately barred from immigration due to exclusion based on HIV infection and cannot take advantage of family unity visas for their partners.
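The quota arithmetic behind these laws was simple, even if its application was contentious. The following is an illustrative sketch only, using hypothetical census counts rather than any actual nationality's figures:

\[
\text{annual quota under the 1921 law} = 0.03 \times \text{foreign-born of a given nationality in the 1910 census}
\]
\[
\text{annual quota under the 1924 act} = 0.02 \times \text{foreign-born of that nationality in the 1890 census}
\]
\[
\text{e.g., } 0.03 \times 400{,}000 = 12{,}000 \text{ admissions per year, versus } 0.02 \times 200{,}000 = 4{,}000
\]

Because many southern and eastern European communities were far smaller in 1890 than in 1910, the lower percentage and the older census base worked together to shrink their quotas most sharply, which was precisely the intent described above.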
As the Nazis rose to power in Germany and Europe saw the outbreak of World War II, many Jews tried to immigrate to the United States to escape the Holocaust. Most were turned away. Americans tended to believe such efforts would compromise America’s neutrality, and Americans also feared that a mass influx of Jews would threaten American values. Even when news of the wholesale slaughter of Jews in eastern Europe became common knowledge, politicians offered words of concern but not action to save those in the death camps. President Franklin D. Roosevelt’s State Department consistently made it difficult for Jewish refugees to obtain entry into the United States, even sponsored scholars and children. Although even a much more liberal refugee policy could not have saved the 6 million individuals killed in the Holocaust, many more could have been saved with less racist and anti-Semitic policies. The Immigration Act of 1965 ended the national origins system, instead stressing family reunification and occupational skills. Quotas by country of origin were replaced by hemispheric caps, including for the first time a ceiling on immigrants from the Western Hemisphere. A transitional component required that 40 percent of visas issued go to Irish immigrants. However, the long-term consequence has been a shift toward third world countries, including Asia, Africa, Latin America, and the Caribbean. The family reunification clauses have brought the demographic composition of immigrants closer to that of native born (more women and children). The law set aside a set portion of admissions for refugees, but large numbers of Cubans and then Southeast Asian refugees (in the wake of the Vietnam War) illustrated the unworkability of this system. In response, Congress passed the Refugee Act of 1980, which allowed refugee aliens of political and related persecution to apply for the first time for asylum in the United States. Today, most legal immigrants stay in the United States permanently rather than returning to their country of origin. Most labor immigration today is illegal, or undocumented. These laborers, like their European counterparts a century ago, tend to remain in the United States only temporarily. Some cross the border surreptitiously; others overstay their tourist or student visas. In the 1980s, nativist attitudes reemerged, this time focusing on the issue of illegal
immigration and specifically on the issue of economic competition from Mexicans. The 1986 Immigration Reform and Control Act (IRCA) included an amnesty provision for those who had been in the United States continuously for five years, requirements that employers verify the eligibility of all newly hired employees, sanctions for employers who knowingly hired illegal aliens, and an agricultural guest worker program for California and Texas. The enforcement provisions, however, were intentionally weak, allowing continued use of illegal immigrant labor by employers with little difficulty. IRCA granted amnesty to more than 2.7 million people, including more than 2 million Mexicans. Despite fears that IRCA would lead to more illegal immigration, evidence from border apprehensions suggests otherwise. And despite hopes that IRCA would reduce the flow of unauthorized aliens, again, data from the Immigration and Naturalization Service (INS) show no such impact. In the wake of the 9/11 terrorist attacks, Congress approved additional categories of banned immigrants as part of the USA PATRIOT Act of 2001, including aliens suspected of
being involved in terrorist activity or who have publicly endorsed terrorism. In 2005, Congress again moved to reform immigration, with a focus on undocumented (illegal) immigrants. In the House of Representatives, James Sensenbrenner (R-WI) won approval for a bill focusing on securing the border and making it a felony to help illegal immigrants. The bill sparked massive protests by immigrants across the country. In the Senate, John McCain (R-AZ) and Edward Kennedy (D-MA) pushed for an alternative approach that included an amnesty similar to that of IRCA, a new guest-worker program, and tougher enforcement of laws against hiring undocumented workers. Estimates in the spring of 2006 put the size of the unauthorized immigrant population at 11.5 to 12 million. As the election of 2006 approached, no compromise was in sight, as Democrats and Republicans squabbled over how best to reform immigration policy. The political controversy over how to regulate immigration persisted in 2007 even after the Democratic Party won back both houses of Congress in the 2006 midterm elections, as this
promises to be an ongoing issue in American politics for a long time. Further Reading Archdeacon, Thomas J. Becoming American: An Ethnic History. New York: Free Press, 1983; Chan, Sucheng, ed. Entry Denied: Exclusion and the Chinese Community in America, 1882–1943. Philadelphia: University of Pennsylvania Press, 1991. Daniels, Roger. Coming to America: A History of Immigration and Ethnicity in American Life. 2nd ed. Princeton, N.J.: Perennial, 2002; Daniels, Roger. Asian America: Chinese and Japanese in the United States since 1850. Seattle: University of Washington Press, 1988; Diner, Hasia R. The Jews of the United States, 1654 to 2000. Berkeley: University of California Press, 2004; Feingold, Henry. Bearing Witness: How America and Its Jews Responded to the Holocaust. Syracuse, N.Y.: Syracuse University Press, 1995; Johnson, Kevin R. The “Huddled Masses” Myth: Immigration and Civil Rights. Philadelphia: Temple University Press, 2004; Orrenius, Pia M., and Madeline Zavodny. “Do Amnesty Programs Reduce Undocumented Immigration? Evidence from IRCA” Demography 40, no. 3 (August 2003): 437–450; Pew Hispanic Center. “Estimates of the Unauthorized Migrant Population for States Based on the March 2005 CPS.” Fact Sheet, April 26, 2006. Available online. URL: http://www.pewhispanic.org. Accessed June 23, 2006; Portes, Alejandro, and Robert L. Bach. Latin Journey: Cuban and Mexican Immigrants in the United States. Berkeley: University of California Press, 1985; Reimers, David M. Still the Golden Door: The Third World Comes to America. 2nd ed. New York: Columbia University Press, 1992. —Melissa R. Michelson
income taxes Income taxes are essential to modern public finance. In the United States, the federal and many state governments depend on them as their main source of revenue. However, income taxes are more than just exactions from the earnings of individuals and corporations needed to pay for public expenditures. Politicians have come to use them to address myriad policy issues. Especially at the national level, income taxes are now the primary devices employed to both
manage the macroeconomy and instill ideological principles into public policies. Indeed, few policies so arouse partisan passions and affect so many people as do income taxes. Republicans and Democrats clash routinely over who should bear the burden of such taxes as well as their imputed effects on work effort, savings and investment, entrepreneurship, and the federal budget. As such, the following discussion briefly touches on both the mechanics and the politics of income taxes. The goal is to convey their many economic, political, and ideological aspects. The federal income tax, first made legal by a Constitutional amendment in 1913, did not become a “mass” tax—affecting the vast majority of income earners—until World War II, when the urgent need for revenues demanded that nearly everyone help pay for the war effort. Over the ensuing decades, as America’s new role in world leadership and certain domestic issues demanded greater budgetary and other commitments, the federal income tax became not just an essential revenue source but a tool for macroeconomic and social policy making. This only complicated the two most important questions for tax policy makers, namely, what constitutes “income” and how best to tax it. Parenthetically, these questions are relatively recent in the history of public finance. Taxes have been levied for thousands of years. For the most part, they have been excise, or sales, taxes of some sort, charged on purchases or similar transactions. The main source of revenue for the U.S. federal government prior to the income tax was tariffs, sales taxes applied exclusively to goods imported from abroad. State and local governments in the United States have long used sales taxes on domestic goods and services as well as property taxes assessed against the value of one’s home to pay for their spending, namely public education and infrastructure. Only in the last century has the notion of taxing a person’s annual income and accumulated wealth become accepted as legitimate at both the state and federal levels. As highlighted below, this notion arose largely out of a sense of fairness in the wake of marked income inequality wrought by the Industrial Revolution. Tariffs and other sales taxes were considered unfair, or “regressive,” because they were paid largely by the poor and working classes. Today, at both the national and state levels,
income taxes, often highly “progressive,” with tax rates increasing with income levels, serve the dual purpose of raising revenues and redistributing tax burdens upward, or toward the wealthier income earners. As everyone from high school and college students working part time to Fortune 500 CEO’s can attest, the primary source of income open to tax stems from wages and salaries—payments for hours of labor supplied on the job. However, the federal income tax base, or the total amount of taxable sources, does not stop there. Other key sources stem from “capital,” or “investment,” income. These include dividends and interest paid annually by corporations and capital gains accrued on certain assets such as homes, small businesses, farms, and investment portfolios (i.e., mutual funds, stocks and bonds, etc.). In addition, such types of income as rents and royalties, alimony, unemployment compensation, Social Security payments, and pensions are revenue sources for the federal government. Congress, the branch constitutionally charged with writing tax laws, regularly redefines these and other sources as “taxable” income. This entails delineating how much of each type of income is legitimately open to taxation as well as the rate, or the percentage, at which the tax is applied. For instance, almost all wages and salaries are open to taxation. These are taxed at increasing, or graduated, rates ranging from 10 percent to 35 percent, meaning, on every additional, or marginal, dollar of wages and salaries earned, the federal tax takes anywhere from 10 to 35 cents. Conversely, 85 percent of Social Security payments are open to taxation, while only a fraction of capital gains income is subject to tax and often at noticeably lower rates. As tens of millions of taxpayers know, the annual endeavor to fill out their income tax returns involves calculating, first, their “adjusted gross income” by adding up all of the income from these many sources. Then, they subtract certain allowable deductions and exemptions for one’s self and one’s spouse and other dependents, thereby yielding their “taxable” income. They then apply the appropriate tax rate or rates to their taxable income, arriving at their final tax obligations. Many taxpayers find that they have overpaid during the year and are entitled to a “refund,” while others find that they still owe taxes. Enforcement of the federal income tax, or
Internal Revenue, Code is the responsibility of the Internal Revenue Service (IRS). As mundane and haphazard as all this might appear at first, the power to delineate what represents taxable income and how much to tax gives Congress enormous political power. No other public policy has come to so routinely attract the efforts of lobbyists seeking “loopholes,” ideologues seeking “fairness” or “efficiency,” and politicians seeking influence and reelection as has federal income tax policy, for no other public policy has come to so regularly affect hundreds of millions of income earners and voters, as well as tens of trillions of dollars in commerce. Indeed, this coalescence of political and economic forces is responsible for increasing the number of pages of the Internal Revenue Code from barely a dozen in 1913 to several thousand today. First, lobbyists and the industries and other groups they represent are largely responsible for “loopholes,” or stipulations in the tax code that exclude certain income from the federal tax base. The ability to convince Congress to grant or maintain such a legal exclusion, or deduction, from the tax code is highly prized. For instance, realtors hire lobbyists to jealously protect the ability of home owners to deduct their annual mortgage payments from their adjusted gross income, thereby reducing their taxable income. Lobbyists for restaurant owners and workers fight just as assiduously to protect the ability of corporations to deduct from their tax bills the costs of business lunches and dinners. Even state and local government representatives lobby on behalf of the deduction for interest earned on state and municipal bonds. Second, individuals and organizations more ideological than mercenary in nature solicit Congress to change the tax code on philosophical grounds. Those from a left-of-center, or liberal, perspective believe staunchly in the aforementioned notion of progressive taxation. For example, if someone who makes $10,000 pays $1,000, or 10 percent of his or her income in taxes, then someone making $100,000 ought to pay, not 10 percent, or $10,000, but, say, $20,000, or 20 percent in taxes. Those making several million dollars ought to pay a still higher share in income taxes. To these thinkers, income taxation, especially using progressively higher marginal tax rates, is key to redistributing income earned toward
the poorer members of society. More conservative thinkers counter by arguing that such moralistic methods of taxation run the risk of becoming counterproductive. They point out that richer income earners have a higher “tax elasticity,” meaning they are more sensitive than lower-income earners to higher tax rates and have the ability, through accountants and tax lawyers, to shield much of their income from higher tax rates. Or, if necessary, these wealthier income earners have the capacity to simply not undertake taxable activity in the first place. Thus, conservative thinkers argue that keeping tax rates low across all income levels is a more efficient way of helping society overall. Promoting risk-taking and entrepreneurship by allowing individuals and businesses to keep more of the rewards of their efforts leads to a bigger and wealthier economy. As will be seen below, this “equity versus efficiency” debate has largely characterized, in one form or another, the tax politics in America for more than a century. Finally, although these diverse pressures culminate in often contradictory demands, politicians quickly discovered the benefits of indulging them. The promises of legislating legal favoritisms into the tax code can garner many members of Congress immense campaign contributions and even votes on election day. Others enjoy the ability to use the income tax to engage in “social engineering,” granting benefits to or imposing costs on certain types of economic and other behavior. For example, deductions and other breaks have been granted to individuals and organizations who donate to charities, to parents with children in college, to home owners who insulate their windows and attics, and, more recently, to soldiers serving in Iraq and Afghanistan. In short, the power to tax and to exempt from tax, whether for parochial, ideological, or political reasons, remains one of the greatest prerogatives in all of policy making. It has redefined the role of the income tax from simply raising adequate revenue to fine tuning the macroeconomy and instilling social justice. In fact, the modern federal income tax was more the product of a broad regional and ideological movement than of any economic or budgetary need. By the late 19th century, American wealth was concentrating in the Northeast as the steel, railroads, and banking industries came of age. Their benefactors in Washington, D.C., the Republicans, served their
interests by keeping tariffs high, thus protecting such industries from foreign competition and shifting the burden of federal taxation onto workers and consumers. In the 1890s, Progressive Era Republicans representing farmers and ranchers in the Midwest joined forces with Democrats, largely from the agricultural South, in repeated attempts to lower tariff rates and instead enact taxes directly on the accumulating wealth in the Northeast. Indeed, for the next generation this tariff–income tax relationship would define the American political economy. After winning control of the national government in 1892, the Democrats, aided by Progressive Republicans, lowered tariff rates and introduced the first peacetime income tax in 1894. However, just a year later, the U.S. Supreme Court ruled the income tax unconstitutional on the grounds that its burden would not be apportioned evenly across all states. After numerous political setbacks, the Progressive-Democrat coalition finally prevailed in 1913, when the Sixteenth Amendment to the U.S. Constitution was ratified. It gave Congress the capacity to tax any kind of income without consideration to burden across the states. Later that same year, the first constitutionally sanctioned income tax was enacted as an amendment to a much larger tariff reduction bill by the new Democratic president and congressional majority. The first federal income tax law was but a few pages and had a top rate of just 7 percent. Thanks to very high exemption levels for individual income earners and their family members, it applied to less than 2 percent of all income earners. Even after America’s entry into World War I four years later, when the top rate was raised to almost 80 percent, only a small fraction of income earners paid any income tax. During the 1920s, when the Republicans returned to power, tariff rates were raised substantially, while income tax rates were sharply reduced. In the 1930s, the Democrats became the new majority party in the wake of the Great Depression and duly lowered tariffs and raised income tax rates to unprecedented levels, largely in a class war to punish the rich, whom they blamed for the economic calamity. It was not until the onset of World War II a decade later that the federal income tax became ubiquitous. The federal government needed more revenues to pay for the costs of this total war. The existing system of waiting for
taxpayers to pay their annual tax bills was no longer practical. In 1942, Congress enacted the system we know today called “withholding.” Primarily on wages and salaries, this system requires employers to withhold from each paycheck a certain amount of money that goes directly to the federal government. At the end of each year, taxpayers calculate their precise tax bills and, if they paid too much during the year, they get a refund of the difference. Otherwise, they must pay the balance due by April 15. When combined with the sharply lower exemption levels, the income tax thus became a “mass” tax, affecting upward of 80 percent of all income earners. Also, in the aftermath of the war, the Democratic Congress and President Harry S. Truman enacted the 1946 Employment Act. Following the intellectual tenets of the Keynesian economic paradigm, which argued that national governments should actively correct flaws in the private sector, this law basically committed the federal government to use all necessary fiscal (i.e., taxing and spending) and monetary policies to keep the national economy from falling back into a depression. In doing so, this act defined yet another responsibility for the income tax. Along with raising adequate revenue and redistributing incomes via top marginal rates that then exceeded 90 percent, the federal income tax would now be used to tame the business cycle. This macroeconomic role was most thoroughly developed in the 1960s, when the Democrats, led by Presidents John F. Kennedy and Lyndon B. Johnson, used major income tax cuts to jump-start an ailing economy and then tax increases to tame a growing inflation in consumer prices. By the late 1970s, this inflation had combined with the graduated income tax to create rising real tax burdens for many Americans. As inflation raised their incomes on paper, it pushed them into higher tax brackets. This phenomenon, known as “bracket creep,” thus robbed them of real buying power while increasing their real tax burdens (a numerical illustration appears at the end of this entry). In 1980, Republican presidential candidate Ronald Reagan promised large cuts in income tax rates for all taxpayers as part of a conservative reaction to what was increasingly seen as a federal government grown too unwieldy and expensive. During his tenure in the White House, Reagan won enactment of two historic tax bills. Together, they reduced all tax rates, includ-
ing bringing down the top marginal rate from 70 percent in 1981 to 28 percent by 1988, in exchange for the elimination of many tax loopholes. Reagan and his supporters credited these major changes in income tax policy for ushering in the booming 1980s. But his critics blamed them for unprecedented federal budget deficits. In the 1990s, Democrat Bill Clinton and a Democratic Congress reversed this downward trend in tax rates, claiming the Reagan tax cuts had unfairly favored the wealthy. Then, in 2001 and 2003, the Republican Congress worked with President George W. Bush to cut tax rates once again, claiming that lowering tax rates was more conducive to economic growth. In short, income taxes are far more than exactions from Americans’ paychecks. They are the product of many diverse political, economic, and ideological considerations. After nearly a century, the federal income tax has become part of American life. It now affects almost all income earners, and politicians have come to use it to address numerous policy issues. Its ubiquity has, in fact, caused it to become highly complex and often quite burdensome to taxpayers. In the past few years, Congress has considered replacing it with a national consumption tax, which would tax only what individuals and corporations purchase, rather than what they earn. Another option is a “flat” tax, basically the existing income tax, but with no or just a few deductions. While neither is likely to be enacted any time soon, they are the latest attempts to define “income” and prescribe how “best” to tax it. See also fiscal policy. Further Reading Birnbaum, Jeffrey H., and Alan S. Murray. Showdown at Gucci Gulch: Lawmakers, Lobbyists, and the Unlikely Triumph of Tax Reform. New York: Random House, 1987; Conlan, Timothy J., Margaret T. Wrightson, and David Beam. Taxing Choices: The Politics of Tax Reform. Washington, D.C.: Congressional Quarterly Press, 1990; Leff, Mark. The Limits of Symbolic Reform: The New Deal and Taxation, 1933–39. New York: Cambridge University Press, 1984; Pechman, Joseph. Federal Tax Policy. 5th ed. Washington, D.C.: Brookings Institution Press, 1987; Rosen, Harvey. Public Finance. New York: McGraw Hill, 2004; Stein, Herbert. The Fiscal Revolution in America. Chicago: University of Chicago Press, 1969; Stockman, David.
The Triumph of Politics: How the Reagan Revolution Failed. New York: Harper & Row, 1986; Wilson, Joan Hoff. American Business and Foreign Policy 1920– 1933. Lexington: University of Kentucky Press, 1971. —Alan Rozzi
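The “bracket creep” dynamic described in this entry can be illustrated with a small numerical sketch. The two-rate tax schedule, wage level, and inflation rate below are hypothetical assumptions chosen only to show the mechanics, not a reproduction of any actual tax table:

    # Hypothetical illustration of "bracket creep" (all figures are assumptions).
    # Pay rises 10 percent with inflation, so real pre-tax income is unchanged,
    # but more of it falls into the higher, unindexed bracket.
    def tax(income, bracket_floor=30000.0, low_rate=0.15, high_rate=0.30):
        # Income up to the bracket floor is taxed at low_rate, the rest at high_rate.
        below = min(income, bracket_floor)
        above = max(income - bracket_floor, 0.0)
        return below * low_rate + above * high_rate

    inflation = 0.10
    wage_before = 30000.0
    wage_after = wage_before * (1 + inflation)

    real_after_tax_before = wage_before - tax(wage_before)
    real_after_tax_after = (wage_after - tax(wage_after)) / (1 + inflation)

    print(real_after_tax_before)              # 25500.0
    print(round(real_after_tax_after, 2))     # 25090.91

Even though the worker’s real pre-tax pay is unchanged under these assumptions, real after-tax income falls, which is the loss of real buying power the entry describes.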
Keynesian economics
The term Keynesian economics has several meanings. All stem from ideas of the British economist John Maynard Keynes (1883–1946), but not from all of his ideas. Specifically, it refers to a type of macroeconomics that rejects “classical economics” and suggests a positive role for government in steering a market economy via its effect on aggregate expenditures on goods and services. Classical economics dominated the profession from its birth until the 1930s. This school argued that rapid adjustment of prices—Adam Smith’s “invisible hand”—prevents economic recessions and ends stagnation. If serious unemployment persists, it is due to nonmarket factors, such as governments or labor unions. Laissez-faire policies were recommended to allow the economy to supply as many products as possible, attaining what is now called potential output or full employment. For many, the Great Depression sounded the death knell for classical theories: During the early 1930s, the U.S. and the world markets collapsed. In 1933, U.S. unemployment approached 13 million workers, 25 percent of the labor force. Worse, for some countries, depressed conditions began soon after World War I. Many began to see stagnation as normal. Classical economists advocated wage and price cuts, but increasing numbers of people sought more timely solutions. Different schemes were proposed, including faster increases in the money supply, government investment in public works, fascist-style government-business cartels, and Soviet-type economic planning. In Germany, Nazi economics minister Hjalmar Schacht’s policies were “Keynesian” (broadly defined), involving government spending on infrastructure and militarism. Others had developed Keynesian ideas before Keynes, including Michał Kalecki of Poland. The most influential, however, was Keynes, who enjoyed great prestige at the time. It helped that he steered a
middle path, breaking with classical laissez-faire but advocating a relatively minor increase in the role of the government compared to fascism and Soviet planning. Keynes’s General Theory (1936) sparked a movement embraced by many of the younger generation of economists. Rejecting the classical postulate that market economies operated only at full employment, he aimed to understand the unemployment equilibrium that he saw as characterizing the 1930s. Though using theory, Keynes aimed to understand the real world. Because classicals saw economic problems as cured in the long run, he stressed that “In the long run, we are all dead.” To Keynes, price adjustment can fail to solve the problem of excess saving. The classicals saw increased saving as boosting the supply of funds available to borrow, so the price of borrowing (the interest rate) falls. In turn, businesses and individuals borrow all the new funds to finance fixed investment in new factories, housing and so on. This increases expenditure by precisely the same amount that saving rose. Any demand shortfall due to saving is thus exactly cancelled out by fixed investment expenditure. To Keynes, fixed investment decisions were primarily determined by expectations of future profitability, not by interest rates. Investment was thus not moved significantly by saving. This breaks a key link in the classical chain. Further, it is not simply saving and investment that determine interest rates. The amount of money circulating and liquidity preference (the desire to hoard money) can be crucial: In a period of turmoil, people hold money as a safe asset, propping up interest rates and discouraging investment rises. Excess saving means that people are abstaining from purchasing goods and services. If persistent, the demand shortfall for most products implies unwanted accumulations of inventories, so firms reduce production and employment, cutting incomes. This in turn causes saving to fall, since people are less able to save. It is thus not adjustment of interest rates but an adjustment of income and employment—a recession— that allows achievement of equilibrium, the equality of saving and investment. This attainment then ends any interest rate adjustment. Crucially, it occurs not at full employment, but with deficient-demand (“cyclical”) unemployment, below potential.
To Keynes, uncertainty was the fundamental problem. Suppose one saves more to buy a house in the future. Nobody else knows why the saving has increased. Doing so lowers demand for current products without simultaneously creating demand for future houses (which would induce investment). A central concept is the income-spending multiplier, which says that an initial change in spending (consumer demand, fixed investment, etc.) induces a larger change in income and output. A fall in investment, for example, reduces expenditure, revenues, and consumer incomes, pushing spending downward again. This again cuts production, employment, incomes, and so forth. Each step is smaller than the previous, so eventually the process stops at a new equilibrium level of income and output (a simple numerical sketch of this process follows below). To Keynes, classical wage-cutting policies to end unemployment were self-defeating because they decrease income and demand. By hurting profitability and production, this decreases the demand for labor, encouraging unemployment. Further, economist Irving Fisher had argued (1933) that steadily falling prices (deflation) would deepen recessions. Although Keynes’s main book is not generally about policy, it was used to guide it. Since he had no control over his “brand name,” much policy after Keynes is not “truly” Keynesian. However, our focus is on how his ideas were used, even if poorly. If recessions do not solve themselves, the government can provide a helping hand for Smith’s invisible hand. Increasing the economy’s quantity of money would inspire business to spend more on investment. While classicals assumed that inflation would inevitably result, Keynesians saw that as happening primarily once full employment was achieved. But to many during the 1930s, this approach failed. One cliché was that “You can’t push on a string.” The special Keynesian solution was fiscal policy, using the government’s budget (expenditures and taxes) to change the nation’s total spending. Classicals advocated a balanced budget (avoiding government borrowing). But in a recession, Keynesians argued that raising the government deficit (either by raising government outlays or cutting taxes) would stimulate the economy. After this “pump-priming” or “jump-starting,” the private sector could take care of itself.
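The income-spending multiplier mentioned above can be sketched with hypothetical numbers. The marginal propensity to consume of 0.8 and the $100 billion fall in investment are illustrative assumptions, not figures from Keynes; the point is only that the successive rounds of induced spending sum to the initial shock times 1/(1 − MPC):

    # Illustrative sketch of the income-spending multiplier (hypothetical numbers).
    mpc = 0.8              # assumed marginal propensity to consume
    initial_drop = 100.0   # assumed initial fall in investment, in billions

    total_change = 0.0
    round_effect = initial_drop
    for _ in range(1000):          # each successive round of spending is smaller
        total_change += round_effect
        round_effect *= mpc

    print(round(total_change, 1))          # about 500.0
    print(initial_drop / (1 - mpc))        # closed form: 100 / (1 - 0.8) = 500.0

Under these assumptions, a $100 billion drop in investment ends up reducing income and output by roughly $500 billion before the process settles at a new, lower equilibrium.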
Thus, for many pundits and politicians, “Keynesianism” and “deficit spending” became synonymous. But this was not true: The “balanced budget multiplier theorem” said that a rise in government purchases stimulates the economy even if there is an equal rise in taxation. Second, amplifying the multiplier process, fiscal stimulus under recessionary conditions encourages private fixed investment (the accelerator effect). Business optimism, use of existing capacity, and cash flow relative to debt service obligations can all rise. Government and the private sector can be complementary rather than in conflict. Further, the theory of functional finance, developed by Abba Lerner, argued that the government debts arising due to deficits (to solve recessions) could be reduced by raising taxes (or cutting expenditures) in periods when inflation was threatening. Finally, even with deficits, fiscal policy can have the same impact as private investment in terms of raising an economy’s ability to produce. Government deficits can be used to finance projects that businesses shun, for example, investment in infrastructure (roads, airports, etc.), education, basic research, public health, disaster relief, and environmental clean up. In practice, fiscal policy was not used to end the depression: It was not Keynes (or Roosevelt) who did so, but war. General military buildup implied expansionary fiscal policy, stimulating the U.S. and world economies. U.S. unemployment fell to 670 thousand (or 1.2 percent) in 1944. Conscious and active fiscal policy was not used in the United States until decades later, in 1964–65, after President John F. Kennedy had appointed Walter Heller and other advocates of the “new economics” (practical Keynesianism) to his Council of Economic Advisors. Unfortunately, because it took so long for this policy to affect the economy, it encouraged the undesired results. It had been proposed to help the slow economy of 1960 but ended up reinforcing the inflationary results of the Vietnam War spending surge in the late 1960s. The next active fiscal policy was contractionary: President Lyndon B. Johnson raised taxes in 1968, futilely trying to slow inflation. This revealed another limitation of fiscal policy: Temporary changes in government budgets may have little or no effect. Together with the long lags of the 1964–65 tax cuts, this undermined the popularity of active fiscal policy—and
“fine-tuning” the economy—except during emergencies. That, of course, does not mean that politicians avoided it. The two most important recent cases of active fiscal policy came from “conservative” presidents who might have been expected to oppose Keynesianism. Ronald Reagan’s tax cuts and military buildup implied fiscal stimulus, as did George W. Bush’s tax cuts and wars. On the other hand, Bill Clinton, sometimes described as a “liberal,” raised taxes to create a budget surplus, even though Keynesians predicted recession. Clinton’s tax hikes were followed by a demand boom. It is possible, however, that his fiscal austerity eventually contributed to the recession of 2001. Active policy has been rare. Fiscal policy’s role has mostly been passive. The large military budget of the cold war era and various domestic programs acted as a balance wheel, moderating fluctuations in total spending. Second, “automatic stabilization” occurs because deficits rise in a recession: Tax collections fall, while transfer payments such as unemployment insurance rise. These forces encouraged the general stability of the U.S. economy after World War II. Almost as soon as it was born, challenges to Keynesianism arose. Many argued that the economy would automatically recover from depression. The “real balance effect” meant that falling prices would raise the real stock of money (real balances), raising individual wealth, causing rising expenditure, and canceling out recessions. Though most see this as too small to solve a full-scale depression, it does indicate that Keynes’s unemployment equilibrium was not really an equilibrium. Movement toward the “true” equilibrium can take much too much time, however, so that Keynesian policies may still be needed. Further, it is common to simply assume that money wages do not fall quickly in response to unemployment (as seen in most evidence). Given this, practical Keynesian policies can be applied. Another challenge was monetarism, led by Milton Friedman. Originally, the debate centered on the relative strength of monetary and fiscal policy, with Friedman favoring the former. This was mostly a false debate, since Keynes had always recognized the role of monetary policy. The problem for Keynesians was that it did not seem to work under the specific conditions seen during the depression. It did work under
the “normal” conditions seen during the 1950s and after. In their 1963 book Monetary History of the United States, Milton Friedman and Anna J. Schwartz argued that the depression was a failure of monetary policy: The Federal Reserve allowed a “great contraction” of the money supply. While Keynes had blamed the market economy for the depression’s persistence, Friedman blamed government (Federal Reserve) policy for its origin. Practical Keynesianism began to decline about the time that President Richard Nixon proclaimed that “we are all Keynesians now.” It had presumed that a rise in aggregate demand would not only lower unemployment but pull up prices and, if done too much, cause steady price rises (inflation). This was described by the Phillips Curve, a “trade-off” between unemployment and inflation. To some, the government could choose the best combination of these evils using demand-side policy. But the 1970s saw both inflation and unemployment rise (stagflation), so this policy could not work. Any given unemployment rate was associated with more inflation than before (and vice-versa). The persistent inflation of the late 1960s had become built into inflationary expectations and the price-wage spiral in the early 1970s, implying inflationary persistence. Second, oil shocks of 1973–74 and 1979–80 meant higher inflation. The trade-off had not been abolished but seemed useless for achieving the best combination of evils. Many economists embraced the “natural rate of unemployment” hypothesis that Edmund Phelps and Milton Friedman proposed in 1967: In the long run, demand-side variables (the government’s budget and the money supply) have no effect on the unemployment rate. If the unemployment rate is kept below its “natural rate,” this encourages rising inflationary expectations and accelerating inflation, until money’s purchasing power is destroyed. If unemployment stays high, inflation expectations decrease until deflation occurs. Only at the natural rate would inflation maintain a steady pace. Demand-side efforts to hold unemployment at any other rate could work only in the short run. Friedman did not propose that policy makers try to find this rate. Rather, he suggested that a “steady as she goes” policy of constant and slow increases in the
money supply would allow the economy to find it on its own, while avoiding the extremes of unreasonable inflation and depression. This was the opposite of fine-tuning. Taking Friedman further, the new classical school of Robert Lucas et al. argued that demand-side policies could not work even in the short run: The level of output was determined by price adjustment, just as for the classicals. In response, the “new Keynesian” (NK) school arose. The new classicals had attacked the practical Keynesians’ and Monetarists’ assumption of price and money wage stickiness as theoretically weak. So the NKs developed theories of wage and price inertia (rejecting Fisher’s and Keynes’s view that price adjustment could be disastrous). On the other hand, the NKs generally opposed government deficits. The success of the NKs is seen in President George W. Bush’s appointment of N. Gregory Mankiw to chair the Council of Economic Advisors. Another limitation of Keynesianism is that demand-side policies work best for a large economy that operates relatively independently of the rest of the world, as with the United States during the 1950s and 1960s. It does not apply well at all to a small open economy: The multiplier and accelerator effects leak out to the rest of the world, while supply shocks such as those due to rising oil prices are more likely. A small country also has a very difficult time having interest rates different from those of the rest of the world. In recent decades, the United States has become increasingly like one of those economies. Due to international exchange regime changes, monetary policy has also been strengthened compared to fiscal policy since the 1970s. Thus, macropolicy-making power has shifted from government to the Federal Reserve. Former Federal Reserve chairman Alan Greenspan was considered the economic “maestro” from 1987 to 2006. Interestingly, Greenspan maintained the activist spirit of early Keynesianism, even “fine-tuning” the economy at times. Further Reading Fisher, Irving. “The Debt-Deflation Theory of Great Depressions.” Econometrica 1, no. 4 (October 1933): 337–357; Friedman, Milton. “The Role of Monetary Policy.” American Economic Review 58, no. 1 (March 1968): 1–17; Friedman, Milton, and Anna Jacobson
Schwartz. Monetary History of the United States, 1867–1960. Princeton, N.J.: Princeton University Press, 1963; Gordon, Robert J. Macroeconomics. 10th ed. Boston: Pearson, 2006; Keynes, John Maynard. The General Theory of Employment, Interest, and Money. Available online. URL: http://marxists.org/ reference/subject/economics/keynes/general-theory/ index.htm. Accessed June 22, 2006;———. “The General Theory of Employment.” Quarterly Journal of Economics 51, no. 2 (February 1937): 209–223; Leijonhufvud, Axel. On Keynesian Economics and the Economics Of Keynes: A Study in Monetary Theory. New York: Oxford University Press, 1968; Mankiw, N. Gregory. “New Keynesian Economics.” In The Concise Encyclopedia of Economics. Available online. URL: http://www.econlib.org/LIBRARY/Enc/ NewKeynesianEconomics.html Accessed June 22, 2006; Meltzer, Allan H. “Monetarism.” In the Concise Encyclopedia of Economics. Available online. URL: http://www.econlib.org/LIBRARY/Enc/Monetarism. html. Accessed June 22, 2006. —James Devine
labor policy
A nation’s labor policy may be characterized as its deliberate interventions in the labor market to accomplish national goals. Labor market interventions are considered justified because of the perceived failure of the market to achieve its major goal of efficiently allocating national labor resources and incomes to the satisfaction of policy makers or the major stakeholders in the labor market. But that goal is perceived differently by two of the major stakeholders: workers and employers. Workers perceive the legitimate goals of labor policy to be official acts to improve their tangible and intangible work conditions. Employers perceive the legitimate goals of labor policy to be official acts to help them maximize profitability by minimizing costs. As a consequence, labor policy may be seen in a more dynamic light as the prevailing balance in this struggle to achieve these conflicting perceptions of the policy goals. The outcome of the conflict between the goals of workers (which necessitate the right of workers to organize to gain increases in labor’s share of production) and the goals of employers (which necessitate restrictions on the activities of workers’ organizations in
their attempt to enhance labor’s share) has swung back and forth like a pendulum. Some labor historians remark that unlike other industrially advanced countries, the United States had little that could be called a “labor policy” before the New Deal era that began in the 1930s. The definition implicit in this use of the term, however, limits the labor policy domain to the written body of laws, administrative rules, and precedents employed by government to intervene in the labor market. Others broaden that domain to include all the institutions that have emerged to accommodate the government’s laws and rules, including the processes by which policy proposals are created, placed on the national political agenda, and subsequently made into law or defeated. In this latter view, the labor policy domain includes not only the prevailing laws, rules, and precedents but the current official attitudes and behaviors that reflect favor or disfavor of employer or employee goals. The above distinction is not a trivial one. In the second view, the failure of the founders of the United States to acknowledge the equality of black slaves and, to some extent, women and commoners, whom they regarded as beneath their own elite station, could be considered an early “labor policy” that continues to influence the attitudes and behaviors of many toward labor, and especially the working poor, in the United States today. Labor’s declining share of national income and the backlash against policies such as equal employment opportunity and affirmative action in recent decades are considered by some as just deserts after the period of forward movement between the Great Depression of the 1930s and the decline of the Civil Rights era in the 1970s. A review of the state of workers’ rights in those periods reveals evidence that the policy swings have indeed been wide. Labor policies went through a cycle of considerably enhancing workers’ rights during and after the 1930s but have largely reversed that trend of late. Yet, despite the elitism of the founders, most of those who fought and died in the Revolution were commoners, that is, farmers, workers, and some former slaves. Indeed, the importance of common people for the Revolution is seen in the reception
of Thomas Paine’s pamphlets, which were written for the masses and not for the elite. Their tremendous sales indicate the level of interest the average person had in the emerging ideology of independence. During this period, there were numerous instances of workers uniting to better their condition. While Thomas Paine may have envisioned a social policy that ideally would have been based on greater equality, in reality the Revolution resulted in an elitist system that favored the wealthy upper classes. Major victories regarding work conditions were won by labor in the years following, especially in the northern states in the period described as Jacksonian democracy, but these failed to balance the growing impact of the vast disparities between the elite and the common worker in income and wealth. The basis for later gains in tangible labor rights at work and rights of unions to organize was laid between World War I and the New Deal. In 1913, a bill creating a cabinet-level Department of Labor was signed into law; its secretary was given power to “act as a mediator and to appoint commissioners of conciliation in labor disputes.” The next year saw the passage of the Clayton Act, which limited the use of injunctions in labor disputes (but the U.S. Supreme Court found in 1921 that the Clayton Act did not protect unions against injunctions brought against them for conspiracy in restraint of trade and, in another decision, that laws permitting picketing were unconstitutional under the Fourteenth Amendment). In 1919, American Federation of Labor (AFL) president Samuel Gompers made a visionary recommendation for labor clauses in the Versailles Treaty that ultimately created the International Labor Organization (ILO). This was to have profound implications later for labor rights as inalienable civil rights. One way to gain an appreciation of the increase in the scope of labor’s fortunes is to review the typical labor economics textbook summary of legislation that gave form to current labor policy: the U.S. Supreme Court’s decision in Coronado Coal Co. v. UMWA (1922), in which the United Mine Workers’ strike action was held not to be a conspiracy to restrain trade within the Sherman Anti-Trust Act; the Norris-LaGuardia Act of 1932, which increased the difficulty for employers to obtain injunctions and
declared that Yellow Dog contracts (prohibiting employees from joining unions) were unenforceable; the Wagner Act of 1935 (also known as the National Labor Relations Act), which guaranteed labor the right of self-organization and to bargain, prohibited a list of “unfair labor practices” on the part of employers, established the National Labor Relations Board with authority to investigate unfair labor practices and made strikes by federal employees illegal; the Taft-Hartley Act of 1947, which established a final list of unfair labor practices, regulated the internal administration of unions, outlawed the closed shop but made union shops legal in states without “right to work” laws, set up emergency strike procedures allowing an 80 day cooling off period, and created the Federal Mediation and Conciliation service; the Fair Labor Standards Act of 1938, which abolished child labor, established the first minimum wage, and institutionalized the eight-hour day; and the Landrum-Griffin Act of 1959, which required regularly scheduled elections of union officers, excluded Communists and felons from holding office, held union officers strictly accountable for union funds and property, and prevented union leaders from infringing on worker rights to participate in union meetings. The Landrum-Griffin Act already begins to reveal the reversal of the pendulum swing away from the worker organizations. In particular, business has found it increasingly easy to change work rules, especially in the older factories of the Frost Belt, and to free corporate resources for even greater southern and global expansion. These setbacks for labor were accompanied by an equally devastating reversal of the hard-won gains by civil rights leaders for African American workers that have remained so visible in the stubborn differentials in black-white wages and unemployment rates. Before the New Deal, there was no effort to equalize opportunities for African Americans or other minorities. The government was opposed to any program of assistance to the destitute freed slaves during the Reconstruction Era and a half century thereafter. But later, the Great Depression, the maturation of civil rights organizations, and the New Deal’s changes in American principles of labor policy laid the foundation for a policy shift toward a concept of proportional racial representa-
tion in employment. At the federal level, in particular, government contracting rules moved between World War II and the early 1960s from an equal treatment model of nondiscrimination to race-conscious proportionalism. This era ended with the U.S. Supreme Court’s application of strict scrutiny standards to racial preference in the rulings in both the Richmond v. J.A. Croson Co. (1989) and Adarand Constructors Inc. v. Pena (1995) affirmative action cases. Some writers observe that it may be time to cash in Samuel Gompers’s idea of an appeal to the International Labor Organization to recognize labor rights as human rights. The U.S. reluctance to ratify ILO conventions no. 87 and 98, recommended by the secretary of labor in 1949 and the solicitor of labor in 1980, concerning the freedom of association and right to bargain collectively, is considered part and parcel of the mindset that led to opposition to the International Criminal Court, the refusal to sign the Kyoto Agreement on global warming, the unwillingness to join a global ban on land mines, and the war in Iraq, all examples of American “exceptionalism,” which even former secretary of state George Shultz concludes “erodes U.S. moral authority abroad.” The number of writers who continue to voice their concerns with this mindset is shrinking, and they have been overwhelmed by the large and growing number of corporately funded “think tanks” that justify the employer’s view of the appropriate goals of American labor policy. The United States seems to be entering a long phase in which the view of the goal of the employee is completely deconstructed, along with all vestiges of the New Deal. See also collective bargaining. Further Reading Bluestone, Barry, and Bennett Harrison. The Deindustrialization of America. New York: Basic Books, 1982; McConnell, Campbell, and Stanley L. Brue. Contemporary Labor Economics. New York: McGraw Hill, 1988; Reynolds, Lloyd G., Stanley H. Masters, and Colletta H. Moser. Labor Economics and Labor Relations. Upper Saddle River, N.J.: Prentice Hall, 1998; Wilson, William J. The Truly Disadvantaged. Chicago: University of Chicago Press, 1987. —Robert Singleton
minimum wage
The minimum wage is the lowest hourly rate or wage that may be paid to a worker. Often, minimum wages are determined by a labor contract or union contract that is the product of collective bargaining between the employer and the employees (usually through their union). Some minimum wages are established by state laws, and the federal government has also established a national minimum wage, below which it is illegal to pay employees for their work. Should the federal or state governments set minimum wage standards? Many business leaders assert that the only mechanism that should set wages is the free market. They argue that a government that sets wages and is intrusive in the workings of business hurts the economy. Let the free market of capitalism guide wages, they argue, and a fair wage will be established. While this position has its adherents, most today side with the federal government setting minimum wage standards, and their argument is that it is good for workers (giving them a living wage) and good for business (because these workers reinvest their wages into the community and into the economy by buying food, clothes, appliances, cars, etc.). Today, there is widespread support for the setting of a government standard minimum wage but also widespread argument over the level at which that minimum wage should be set. The Fair Labor Standards Act (FLSA), passed in 1938, is the overarching enabling legislation that established a national minimum wage. The U.S. Department of Labor’s Employment Standards Administration is responsible for enforcement of the federal minimum wage law. In June 1933, President Franklin D. Roosevelt, early in his administration, attempted to deal with the Great Depression by, among other things, calling for the creation of a minimum wage for hourly work. He stated: “No business which depends for existence on paying less than living wages to its workers has any right to continue in this country. By business I mean the whole of commerce as well as the whole of industry; by workers I mean all workers—the white-collar class as well as the man in overalls; and by living wages I mean more than a bare subsistence level—I mean the wages of decent living.” Following President Roosevelt’s lead, Congress passed the National Industrial Recovery Act.
However, in 1935, the U.S. Supreme Court declared the National Industrial Recovery Act unconstitutional, and the fledgling minimum wage was abolished. The minimum wage was reestablished in 1938 as part of the Fair Labor Standards Act, and the Supreme Court did not invalidate this effort. In 1938, the national minimum wage was started at 25 cents an hour. In 1991, it was set at $4.25 an hour. As of 2006, the federal minimum wage was set at $5.15 an hour. Thus, a full-time worker who is the sole earner in a family of four still falls below the national poverty level. For the most part, the politics of the minimum wage has been a partisan divide between the Democratic Party, which, representing the interests of the worker, has often attempted to push up the minimum wage, and the Republican Party, which, representing the interests of business, often tries to keep the minimum wage down. Democratic president Harry S. Truman often said, “The Republicans favor a minimum wage—the smaller the minimum the better.” Does a higher minimum wage help or hurt the overall economy? Those who support a higher minimum wage argue that on both fairness and economic grounds, a higher minimum wage is good for the country. Fairness demands that those who work full time earn a wage that allows them to live with dignity. If a full-time worker earns wages that are below the poverty line, that is both unfair and unwise. On economic grounds, they argue that a higher minimum wage puts more money in the hands of those who are most likely to spend it on the necessities of life, thereby putting more money into the economy and, by buying more goods, helping local businesses and adding tax revenues to the public till. There is, they further argue, a widening gap between workers and managers that can be socially as well as politically dangerous if it continues to grow. It is, they argue, a win-win situation and makes moral as well as economic sense. Those who oppose a higher minimum wage argue that it is bad for business, especially small and struggling businesses, and might drive them to bankruptcy, thereby hurting the economy as well as eliminating jobs. In what direction does the evidence point? For the most part, when the minimum wage increases
slowly and steadily, it actually serves to benefit the economy. There is no solid systematic evidence to support the proposition that increases in the minimum wage have a negative impact on the economy. Who benefits from an increased minimum wage? Roughly 7.4 million workers earn minimum wages (roughly 6 percent of the workforce). More than 70 percent of these are adults, and about 60 percent are women. And while the minimum wage has not kept pace with inflation (if adjusted for inflation, the current minimum wage would have to be raised above $7.00 per hour), there is some evidence that trying to match the minimum wage to the rise in the rate of inflation would further benefit not only those earning minimum wages but the overall economy as well. Is the minimum wage a “living wage”? That is, are those who earn the minimum wage at or above the federal poverty level? There is a difference between the minimum wage (the lowest level a full-time employee can earn) and a living wage (the amount of money needed to live by minimal standards of decency). Many scholars argue that the minimum wage is not a living wage. A full-time worker (someone who works 2,080 hours per year) earning the federal minimum wage would earn $10,712 a year, significantly below the federal poverty line of nearly $15,000 per year. When adjusted for inflation, the minimum wage reached its peak in 1968, toward the end of the Great Society era of President Lyndon B. Johnson, who attempted to wage a War on Poverty through his domestic policy agenda. Some states have established minimum wages that exceed the federal standard. By law, states can set the minimum wage at whatever level they like. Today, roughly half the nation’s population resides in states where the state’s minimum hourly wage exceeds the federal level. Most of the industrialized nations of the world have a minimum wage, and most of those countries have minimum wage levels that are higher comparatively than that of the United States. Most of these nations also have more expansive social welfare spending programs than exist in the United States. It is believed that New Zealand was the first nation to establish a type of minimum wage when it passed the Industrial Conciliation and Arbitration Act, allowing the government to establish wages in industries. Two
years later, the state of Victoria in Australia established similar acts. In 2006, the minimum wage issue became something of a political football, as, in an election year, the Democrats pressed the Republican congressional majority to raise the minimum wage, and the Republicans, calling the bluff of the Democrats, proposed legislation to do so. But there was a catch: Attached to the minimum wage bill was a massive tax reduction measure as well. This became unacceptable to the Democrats, and eventually, the minimum wage bill failed. Then each side blamed the other for its failure. In the blame game, each party felt it had a case to make, and as the 2006 mid-term elections approached, the minimum wage issue became a source of rancor and debate. However, in 2007, the newly Democratic-controlled Congress passed legislation, signed by President George W. Bush, raising the federal minimum wage to $7.25 by 2009. See also fiscal policy. Further Reading Adams, Scott, and David Neumark. A Decade of Living Wages: What Have We Learned? San Francisco: Public Policy Institute of California, 2005; Andersson, Fredrik, Harry J. Holzer, and Julia I. Lane. Moving Up or Moving On: Who Advances in the Low-Wage Labor Market? New York: Russell Sage Foundation, 2005; King, Mary C., ed. Squaring Up: Policy Strategies to Raise Women’s Incomes in the United States. Ann Arbor: University of Michigan Press, 2001; Kosters, Marvin H., ed. The Effects of Minimum Wage on Employment. Washington, D.C.: AEI Press, 1996. —Michael A. Genovese
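The annual-earnings arithmetic running through this entry can be reproduced in a few lines. The 2,080-hour work year and the $5.15 and $7.25 hourly rates come from the entry itself; the $15,000 poverty threshold is the approximate figure the entry cites and is used here only for illustration:

    # Annual earnings for a full-time worker at a given hourly wage.
    HOURS_PER_YEAR = 40 * 52                 # 2,080 hours in a full-time work year
    POVERTY_LINE_FAMILY_OF_FOUR = 15000.0    # approximate figure cited in the entry

    def annual_earnings(hourly_wage):
        return hourly_wage * HOURS_PER_YEAR

    for wage in (5.15, 7.25):
        earnings = annual_earnings(wage)
        print(wage, earnings, earnings >= POVERTY_LINE_FAMILY_OF_FOUR)
    # 5.15 -> 10712.0, below the threshold; 7.25 -> 15080.0, roughly at the threshold

At $5.15 an hour, full-time work yields the $10,712 figure discussed above, well under the cited poverty line for a family of four.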
New Deal
The New Deal refers to a series of laws passed and programs established during President Franklin D. Roosevelt’s administration (1933–45) that concentrated federal government investment in the welfare of citizens. The economic devastation that occurred in the United States as a result of the Great Depression in 1929 garnered support for an increased effort on the part of the federal government. Before the New Deal, social programs had largely been the
[Illustration: A poster for Social Security, 1935 (Library of Congress)]
responsibility of states and local governments. The effects of the Great Depression created widespread poverty and despair, which led the middle classes to support social programs funded on a federal level. Consider this contrast: The U.S. census reports that in 2003 (the most recently available data), 12.5 percent of all Americans lived in poverty. In 1932, nearly 17 percent of all Americans lived in poverty, and 25 percent of adults who might normally be employed were out of work. Among the programs established during the New Deal were unemployment compensation, public housing, vocational education and rehabilitation, employment services, and aid programs for cities and rural areas. Social programs for the poor, elderly, and disabled were also established. Most of these pro-
grams operated in the form of categorical grants from the federal government to the states and local governments. The acceptance of resources by local governments often entailed conditions under which the resources were to be used and had the result of local governments relinquishing control of many policies to the national government. New Deal programs typically left the responsibility for program adjudication to local jurisdictions, requiring program accountability and often fiscal commitments from local governments. The New Deal also established laws that improved working conditions for most employees and took bold steps to save banks and improve financial investments. One of the most dangerous effects of the Great Depression was a run on banks. In a loss of confidence when the stock market crashed in October 1929, investors withdrew their money from banks, and banks that had loaned money did not always have enough cash on reserve to provide investors with their full deposit. Knowledge of this prompted even more people to withdraw their savings, causing banks around the nation to close. By 1933, governors in many states had ordered banks to close. Two New Deal actions saved American banks. The first was the establishment of the Federal Deposit Insurance Corporation (FDIC) to insure bank deposits. Once investors knew their money was safe, they began to deposit money in banks again. The Emergency Banking Act of 1933 provided a mechanism for reopening banks under the authority of the U.S. Department of the Treasury, authorizing the Treasury to provide loans to banks when necessary. The National Labor Relations Act of 1935, also known as the Wagner Act, established the National Labor Relations Board and gave it authority to investigate and make determinations regarding unfair labor practices. The act also granted employees in most private sector occupations the right to organize and to engage in collective bargaining. Included in this act is the right to strike and to engage in peaceful activities in support of demands associated with workers’ employment. Another law regulating the workplace was the Fair Labor Standards Act of 1938, which established a federal minimum wage standard and set a standard of 40 hours per week as the full-time limit for employees, with guarantees of overtime pay of one and one-half
times the pay rate for employees exceeding that limit. It also established specific conditions under which children could be employed. Children under the age of 14 may not be employed by noncustodians. Between the ages of 14 and 16, a child may be employed during hours that do not interfere with schooling. The act gives authority to the secretary of labor to establish which occupations may be hazardous to the health or well-being of a child and to prohibit children under the age of 18 from being employed in those occupations. The Works Progress Administration, generally referred to as the WPA, was established in 1935 to nationalize unemployment relief. This program created employment for largely unskilled workers, building roads, dams, sewage lines, government buildings, and so on. Females employed by the WPA were most often assigned work sewing bedding and clothes for hospitals and orphans. Both adults in a two-parent family were not encouraged to seek work with the WPA. Wages paid by the program varied depending on the worker’s skill and the prevailing wage in the area in which a person lived. No one was allowed to work more than 30 hours per week. At its peak, the WPA employed 3.3 million Americans. The U.S. entry into World War II halted the WPA and other employment agencies begun during the New Deal, as men left to go to war and other men and women went to work in factories to build munitions. Perhaps the most well-known aspect of the New Deal was the passage of the Social Security Act of 1935, which established federal social insurance programs for targeted populations on an entitlement basis. This is significant in three key ways. First, by creating entitlement programs, the government accepted responsibility for the well-being of the populations targeted in the act. Second, by establishing thresholds for assistance qualifications, the government committed to providing assistance to all people who met the threshold, thereby committing budgetary resources on a long-term basis. Third, this act took authority for poverty relief from the states. Some congressional debate took place over the loss of control faced by the states, but ultimately the act passed. The Social Security Board established regional offices to provide guidance and to monitor the states’ implementation of relief programs. Program audits in the
states were required to be performed annually by accountants, causing many social workers to complain because they perceived the emphasis on poverty relief had shifted from client care to fiscal management. The Social Security Act of 1935 divided relief into three classes: relief for the aged, relief for the blind, and relief for dependent children. Thus, the program design immediately established separate constituencies that would grow and organize individually rather than collectively. The public perception of aid preceded categorization of constituency. While the aged and blind are held in the public eye as deserving of aid, poor children are viewed as the products of undeserving parents. Therefore, aid to the poor from the beginning has met limited support. Race was an obstacle that threatened support for the entire act. Southern Democrats threatened to break from their northern wing and join Republicans if the act was to include mandatory assistance for blacks. The dilemma was resolved by eliminating agricultural laborers and domestic workers (the dominant forms of employment for African Americans) from social security eligibility and turning poverty assistance qualification determination over to the states. Social Security established a pension for senior citizens over the age of 65. The pension was funded by employment taxes paid by workers and employers, based on 1 percent of the wages earned by the employees. The Social Security trust fund began to collect taxes in 1937, and benefits were released by 1940, giving the pension program time to develop a reserve. The assistance program for needy children was called Aid to Dependent Children (ADC). It differed from other programs in the Social Security Act in that it allowed states to establish eligibility criteria as well as cash benefit levels, with no minimum set by the federal government. ADC was a categorical grant program. The federal government agreed to match state spending by 30 percent. In addition to committing funds for the program, states had to agree to have a central office and an appeals policy that would be evenly applied to applicants and clients. Some states adopted eligibility requirements that the client home be “suitable” and that parents be “fit.” Such vague criteria often resulted in elimination
of African Americans from eligibility. Other southern states would end ADC subsidies to African American families during farming season or if elites were in need of domestic servants. One of the more noted criteria was the “man in the house” rule, which stated that benefits could be reduced or denied if a man not legally responsible for the children was living in the house. In extreme cases, police were often hired by the local welfare offices to visit client homes in the middle of the night to ascertain whether a man was cohabitating. By the middle of the 1970s, courts had struck down many of these regulations, suggesting issues of morality, residency requirements, and parental or housing fitness should not preclude needy children from receiving assistance because they violated federal or constitutional law. Many in the business community were originally reluctant to support statewide public assistance. As states moved to adopt widows’ pensions, however, businesses joined the push for nationalization of the program. If all states were required equally to contribute, no business would suffer a tax disadvantage. One chief negotiating principle that Roosevelt established with the New Deal was that no government program could compete with the private sector. With respect to ADC and later AFDC (Aid to Families with Dependent Children), the expanded version of ADC, many businesses often supported the programs in order to demonstrate corporate responsibility for poverty assistance and thereby garner public support. Strategically, the Committee on Economic Development convinced many businesses and business organizations, including the National Chamber of Commerce and the National Association of Manufacturers, that it was a smart strategic move to increase consumer demand and to keep welfare policy nationalized because a systematic policy was more reliable than individual state plans that had formerly relied on general revenue funds for poverty assistance. Furthermore, many in private industry succeeded in increasing their business by providing support services connected to poverty relief programs. Unions did not rally in support of federalization of antipoverty assistance or the expansions of this program that followed. Many union leaders viewed the program through the lens of territoriality: To the extent that labor was dependent on union negotia-
tions for unemployment and health compensations, the unions could remain strong. The restriction of membership in many unions to whites also diminished support for poverty assistance, particularly when it was perceived that these programs might benefit African Americans at the expense of white taxpayers. Criticisms of the New Deal are that it favored the poor and disadvantaged and was too prolabor in its outlook. Other criticisms are that it created a rise in “big government.” The realities are that the New Deal altered the nature of federalism in the United States by expanding the responsibility of the federal government. The appearance of a booming national government is attributable to the decline in the gross national product (GNP) and a rearrangement of jurisdictional responsibilities. National government spending as a percentage of GNP appeared to grow because GNP fell from $97 billion to $59 billion from 1927 to 1932 (the arithmetic is sketched briefly at the end of this entry). The national government increased its powers in domestic policies in response to the crisis brought about by the Great Depression. In considering jurisdictional spending shares of all nonmilitary expenditures, state percentages remained relatively constant (20 percent prior to 1932 and 24 percent after 1940), while local government spending drastically decreased (50 percent prior to 1932 and 30 percent after 1940), and national government shares increased (30 percent prior to 1932 and 46 percent after 1940). Many of the New Deal programs have continued to this day, although they have been revisited and altered by Congress from time to time. Social Security and laws protecting workers continue, as does the FDIC. ADC expanded from the 1950s up to 1996, when the program ceased to exist as an entitlement and became a time-limited block grant program known as Temporary Assistance for Needy Families (TANF), which gives states considerable responsibility in deciding program rules and administration. Supplemental Security Income (SSI) continues the assistance for the aged, blind, and disabled that began with the Social Security Act of 1935. See also entitlements. Further Reading Derthick, Martha. The Influence of Federal Grants. Cambridge, Mass.: Harvard University Press, 1970; DiNitto, Diana M. Social Welfare: Politics and Public
Policy. 6th ed. Boston: Pearson, 2007; Noble, Charles. Welfare As We Knew It. New York: Oxford University Press, 1997; Quadagno, Jill. The Color of Welfare: How Racism Undermined the War on Poverty. New York: Oxford University Press, 1994; Schneider, Anne, and Helen Ingram. “Social Construction of Target Populations: Implications for Politics and Policy.” American Political Science Review 87, no. 2 (June 1993): 334– 347; Skocpol, Theda. Social Policy in the United States: Future Possibilities in Historical Perspective. Princeton, N.J.: Princeton University Press, 1995; The Fair Labor Standards Act of 1938. Available online. URL: http://www.dol.gov/esa/regs/statutes/whd/0002.fair.pdf Accessed July 20, 2006; The National Labor Relations Act of 1935. Available online. URL: http://www.nlrb .gov/nlrb/legal/manuals/rules/act.asp. Accessed July 20, 2006; United States Census Bureau. Available online. URL: http://www.census.gov/. Accessed July 20, 2006; Wallis, John Joseph. “The Birth of Old Federalism: Financing the New Deal.” Journal of Economic History 44, no. 1 (March 1984): 139–159; Weaver, R. Kent. Ending Welfare As We Know It. Washington, D.C.: Brookings Institution Press, 2000. —Marybeth D. Beller
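The denominator effect noted in this entry, in which spending appears to grow as a share of a shrinking GNP, can be seen with a two-line calculation. The GNP figures are the ones cited in the entry; the constant $3 billion spending level is a hypothetical placeholder chosen only to isolate the effect of a falling GNP, not a historical estimate:

    # If spending is unchanged while GNP falls, spending as a share of GNP rises.
    spending = 3.0                       # assumed constant spending, in billions
    gnp_1927, gnp_1932 = 97.0, 59.0      # GNP figures cited in the entry, in billions

    share_1927 = 100 * spending / gnp_1927
    share_1932 = 100 * spending / gnp_1932
    print(round(share_1927, 1), round(share_1932, 1))   # about 3.1 versus 5.1 percent

The same dollar amount thus looks roughly two-thirds larger relative to the 1932 economy than relative to the 1927 economy, without any rise in spending at all.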
public assistance
Public assistance refers to a division of social welfare programs that are need based. In order to qualify for the benefits of these programs, then, recipients must meet minimum income eligibility standards. The term public assistance typically is used to mean welfare, a cash assistance program for families available only to low-income parents who have custodial care of their children. Public assistance also includes public housing programs, food stamps, reduced-price or free school meals, Medicaid, and Supplemental Security Income. Public assistance policy in the United States has changed dramatically over the course of the last century. What began as locally funded and administered poverty assistance programs developed into federal programs, often jointly funded and jointly administered, with states retaining discretion only for eligibility and benefit determination. The current welfare program receives federal funding and has federal regulations, but the bulk of the administration has returned to the states and, in some cases, to the local
level. Throughout this change from decentralization to centralization and back again, one constant remains. Assumptions that poor personal behavior choices cause economic dependence have led to a series of attempts to control personal behavior through welfare policy regulations. With the exception of Massachusetts, which established poverty and housing assistance as early as 1675, most states began in the 19th century to provide limited social assistance via housing for the deaf, blind, and insane as well as housing for felons. Governments held no legal responsibility for providing assistance: The 1873 U.S. Supreme Court case The Mayor of the City of New York v. Miln established that New York had the right to bar the poor from migrating into the city. By 1911, some states started to authorize local governments to direct aid to support mothers of young children, at times providing state aid to assist in the effort. Poverty relief gained wide acceptance as a role for state or local governments through widows’ pension programs. These programs were designed to keep children at home rather than in orphanages by providing subsidies to their mothers. The programs were selective in their eligibility and requirements for clients: Illegitimate and black children were often not served, and requirements could be placed on mothers for socially suitable work they might obtain to supplement the assistance. By 1920, 40 states had established some form of a widows’ pension program, generally implemented at the local level. This assistance was not universally implemented and could have eligibility requirements: California, for example, established a three-year residency requirement in 1931. As late as 1934, only half of the counties in the United States provided some form of aid to mothers. Many businesses became reluctant to support these growing programs, in large part because they were not universally accepted, and, therefore, states and counties with programs disadvantaged their businesses through heavier tax burdens. Many businesses operating across state lines and those faced with competition from out-of-state companies began to lobby for a policy shift; nationalization of poverty relief for women and children would result in a level playing field for the private sector because funding appropriations would become universal.
The Social Security Act of 1935 established federal social insurance programs for targeted populations and did so on an entitlement basis. This is significant in three key ways. First, by creating entitlement programs, the government accepted responsibility for the well-being of the populations targeted in the act. Second, by establishing thresholds for assistance qualifications, the government committed to providing assistance to all people who met the threshold, thereby committing budgetary resources on a long-term basis. Third, this act took authority for poverty relief from the states. Some congressional debate took place over the loss of control faced by the states, but ultimately the act passed. The Social Security Board established regional offices to provide guidance and to monitor the states’ implementation of relief programs. Program audits in the states were required to be performed annually by accountants, causing many social workers to complain because they perceived the emphasis on poverty relief had shifted from client care to fiscal management. The first national assistance program for needy children, Aid to Dependent Children (ADC), was established by the Social Security Act of 1935. It differed from other programs in the Social Security Act in that it allowed states to establish eligibility criteria as well as cash benefit levels, with no minimum set by the federal government. Another New Deal public assistance program was public housing. The Department of Housing and Urban Development (HUD) today provides funding for local governments to construct apartment complexes for the poor. Additionally, housing vouchers are made available to families who can rent HUD-approved homes and apartments, known as Section Eight housing. These vouchers pay the rent. Neither public housing apartments nor Section Eight housing is available to all who qualify or need this assistance. Waiting lists for assistance take months and even years in some cities throughout the United States. President Harry S. Truman signed the School Lunch Act of 1946, which initiated the movement to provide meals to children. The program began as a federal categorical program, with states appropriating the bulk of the funding but matched by federal dollars. In return, states agree to provide meals that meet federally mandated nutrition guidelines and to offer
reduced-price or free meals to low-income children. This has been one of the most popular public assistance programs. It has expanded over the years and now provides breakfast as well as lunch to school children who qualify. In 1950, ADC was expanded to provide assistance for the caretaker parent. The expanded program became known as Aid to Families with Dependent Children (AFDC). Under AFDC, administrative rules were established at the federal level, and states bore the responsibility for implementing the rules and making program decisions, including the level at which the program would be administered. Congress increased the federal match to 50 percent, up from an original grant of 33 percent, with a grant limit of $6 for the first child and $4 for each subsequent child. Congress also required that states begin to provide services to clients in order to help move them from poverty to self-sustaining employment. This requirement had no specific administrative directives, though, and implementation by the states was very weak. The program expanded again in 1961, when AFDC was changed to include two-parent families wherein one parent was unemployed but had a history of working. The new component, known as AFDC-UP, met with some resistance from local office administrators who disagreed with the extension of benefits. AFDC-UP was implemented as a state option. By 1988, only 25 states had adopted the AFDC-UP provision. The Food Stamp Act of 1964 nationalized a program that had been piloted from 1939 to 1943 and again from 1961 to 1964. Part of the New Deal included a pilot food stamp program to help the thousands who were recovering from the devastating effects of the Great Depression. The start of World War II largely eliminated this need, as many men joined the armed services and other men and women went to work in munitions factories to support the war effort. After his campaign visits to West Virginia in 1960, President John F. Kennedy restarted the pilot program to alleviate the widespread poverty he found. This pilot program expanded to 22 states, serving 380,000 citizens by 1964. The Food Stamp Act allowed states to determine eligibility and gave them responsibility for processing applications and monitoring the program. By 1974, when all states had
implemented food stamp programs, 14 million Americans received food stamps. AFDC was altered in another significant way in the Public Welfare Amendments of 1962, when Congress enacted inducements for states to provide services to AFDC clients that would lead them to self-sufficiency. If states provided these services, the federal government agreed to pay 75 percent of the service costs. This was a dramatic increase in AFDC funding for the states. Federal funding for normal program services was based on a formula that matched state money. In the enacting legislation, the funding match was one-third. Eventually, the formula was changed to provide greater assistance to poorer states: For each AFDC beneficiary, the federal government would match state spending (up to one-third of the first $37.00 in 1967; up to five-sixths of the first half of the average state payment by 1975) and then an additional proportion of state spending, depending on the state’s per capita income (ranging from 50 to 65 percent). Soaring welfare rolls throughout the 1960s escalated concerns over the structure of the program and its ability to help the poor become self-sufficient. Combined state and federal expenditures on AFDC rose from $1 billion in 1960 to $6.2 billion in 1971. Changes in AFDC, from expansion of eligibility to institutionalized patients and two-parent households, as well as the elimination of residency requirements by the U.S. Supreme Court ruling in Shapiro v. Thompson (1969), resulted in a tripling of the AFDC caseload between 1960 and 1974. Attempts to promote self-sufficiency through work requirements for AFDC clients had shown little success. The Work Incentive (WIN) Program of 1967 tied welfare benefits to work by requiring local offices to refer qualified adult participants for training and employment. Exempt from the requirements were women whose children were under six years of age and those whose pending employment was determined to be adverse to the structure of the family, a determination that was made by caseworkers on a subjective basis. Medicaid, the health insurance program for the poor, began in the 1960s. This program provides basic health care to poor children, adults, and seniors. It is a federal program in which the national government
meets state appropriations on a three-to-one ratio, with states setting parameters for eligibility and determining, within federal guidelines, what services will be offered. Further centralization of public assistance came in 1974 through the consolidation of Aid to the Blind, Aid to the Permanently and Totally Disabled, and Aid to the Elderly into a new program, Supplemental Security Income (SSI), which was assigned to the Social Security Administration for operation. This program was designed to supplement income for poor disabled adults and children as well as poor senior citizens. The federal government provided the funding for SSI, established eligibility criteria, and set benefits for SSI recipients, thus removing state authority from poverty relief for clients qualifying for this program. Many states argued that eligibility for SSI was too broad and encouraged abuse of the system because adults suffering from alcohol or drug addiction and children diagnosed with behavioral disorders could qualify for benefits. The Food Stamp Act of 1977 streamlined the process for dispensing food stamps while establishing penalties for adult recipients who quit their jobs. The most controversial part of the act was to allow stores to return up to 99 cents in change to recipients who paid for food with a food stamp. This process did reduce the paperwork involved in reimbursing stores for the amounts that did not equal whole dollars, but it also provided a mechanism for the poor to “cash out” some of their food stamps. Many critics claimed this was an abuse of the system. The Food Stamp Program was reduced during the 1980s by increasing enforcements and penalties. States were now allowed to require job searches by adult recipients of food stamps and to increase disqualification periods for adults who quit their jobs. The Family Support Act (FSA) of 1988 made AFDC-UP mandatory for states, expanding benefits to include two-parent families in which the primary wage earner was unemployed. The FSA gave states the option of limiting cash assistance to six out of every 12 months for two-parent families but mandated that states provide Medicaid to these families year-round. The FSA also began the JOBS program (Job Opportunities and Basic Skills) and increased enforcement efforts for child support. In addition to
benefiting two-parent families, the law benefited the states: Administration money for the JOBS program was funneled to the states, and states retained child support payments from noncustodial parents for families receiving AFDC benefits, passing through $50.00 of the support funds per month to families. The strength of JOBS was to be an emphasis on education and training, implemented through coordination of programs at the local level. The new emphasis on work activity added a component that increased the likelihood of families succeeding in maintaining economic self-sufficiency. Medicaid and child care benefits were extended to families for one full year after they left welfare. Exemptions from work activities for single mothers and for mothers with children under three, however, reduced the effectiveness of integrating the poor into the workforce. Ultimately, AFDC rolls increased, and the JOBS program began to be viewed as unsuccessful. President Bill Clinton came into office in 1993 having campaigned to “end welfare as we know it.” While his administration did send a proposal to the 103rd Congress, the proposal did not arrive until June 1994, when election concerns, health care reform, gun control, and other items could easily claim precedence on the legislative calendar. The 104th Congress forged two welfare reform bills that were vetoed by the president. In those two bills, the Republican governors and congressmembers agreed to block grant welfare with a flat rate of funding for five years and to end the entitlement to benefits while retaining the right of states to establish eligibility criteria. In 1996, Wisconsin governor Tommy Thompson, chair of the National Governors Association, brought the governors back together to work on a bipartisan proposal for reforming AFDC. The proposal kept the block grant and flat rate funding initiatives agreed to earlier, with the addition of federal funding to reward states that lowered their welfare rolls. Additionally, the maintenance of effort (MOE) required of states was reduced to 80 percent, and states were allowed to keep surplus funds created by declining rolls. The governors brought their proposal to Capitol Hill in February of that year and worked with Congress on a modified bill. In the end, the governors agreed to accept a provision that would apply sanctions to recipients who did not meet work requirements. Requirements of the new
welfare program that had been proposed in the two earlier bills remained options for the states to adopt. These included the child cap provision, which allows states to eliminate or reduce the benefit increase for children born nine months or more after an adult beneficiary enters the program; time limits lower than the federal 60-month limit; work requirements that exceed the federal rules; and flexibility in adopting a sanctions policy. The bill became known as the Personal Responsibility and Work Opportunity Reconciliation Act (PRWORA), which replaced AFDC with Temporary Assistance for Needy Families (TANF). By 1994, the number of Americans receiving food stamps had grown to 28 million. The PRWORA included a provision that allowed states to restrict food stamps to three out of 36 months for healthy adults without dependent children who were not working at least 20 hours a week. The economic prosperity of the 1990s helped more Americans to become self-sufficient, and food stamp usage declined every year for seven straight years. Since 2000, food stamp usage has been on the rise. The U.S. Department of Agriculture reported that in 2004, 38 million Americans qualified for food stamps, although only 23 million received them. Poverty in the United States has increased every year since 2000. The public assistance programs that are in place may not meet the needs of the poor as affordable housing shortages increase, Medicaid services decrease, and cash assistance for poor families remains capped at a 60-month lifetime limit.
Further Reading
Blank, Rebecca, and Ron Haskins, eds. The New World of Welfare. Washington, D.C.: Brookings Institution Press, 2001; Cammisa, Anne Marie. Governments as Interest Groups: Intergovernmental Lobbying and the Federal System. Westport, Conn.: Praeger Publishers, 1995; Conlan, Timothy. From New Federalism to Devolution. Washington, D.C.: Brookings Institution Press, 1998; Edin, Kathryn, and Laura Lein. Making Ends Meet: How Single Mothers Survive Welfare and Low-Wage Work. New York: Russell Sage Foundation, 1997; Gallagher, L. Jerome, et al. “One Year after Federal Welfare Reform: A Description of State Temporary Assistance for Needy Families (TANF) Decisions as of October 1997.” Available online. URL: http://www.urban.org. Accessed
June 15, 2006; Hanson, Russell, ed. Governing Partners: State-Local Relations in the United States. Boulder, Colo.: Westview Press, 1998; Jansson, Bruce S. The Reluctant Welfare State. 4th ed. Belmont, Calif.: Wadsworth, 2001; Marmor, Theodore R., Jerry L. Mashaw, and Philip L. Harvey. America’s Misunderstood Welfare State: Persistent Myths, Enduring Realities. New York: Basic Books, 1990; Murray, Charles. Losing Ground: American Social Policy, 1950–1980. New York: Basic Books, 1984; Noble, Charles. Welfare As We Knew It. New York: Oxford University Press, 1997; Quadagno, Jill. The Color of Welfare: How Racism Undermined the War on Poverty. New York: Oxford University Press, 1994; Rochefort, David A. American Social Welfare Policy: Dynamics of Formulation and Change. Boulder, Colo.: Westview Press, 1986; Skocpol, Theda. Social Policy in the United States: Future Possibilities in Historical Perspective. Princeton, N.J.: Princeton University Press, 1995; Thomas: Legislative Information on the Internet. Available online. URL: http://thomas.loc.gov. Accessed June 15, 2006; United States Department of Agriculture Office of Analysis, Nutrition, and Evaluation. Food Stamp Program Participation Rates, 2004. Available online. URL: http://www.fns.usda.gov/oane/MENU/Published/FSP/FILES/Participation/FSPPart2004-Summary.pdf. Accessed June 15, 2006; Weaver, R. Kent. Ending Welfare As We Know It. Washington, D.C.: Brookings Institution Press, 2000; Wright, Deil S. Understanding Intergovernmental Relations. North Scituate, Mass.: Duxbury Press, 1978.
—Marybeth D. Beller
public debt
The “public” or “national debt” should instead be called the “government debt.” This is because it is a debt owed by the government, rather than by the public or the nation (i.e., the citizens). From the perspective of citizens, much or most of the government’s debt is an asset. That is, U.S. citizens own most Treasury bills, notes, and bonds along with the ever-popular U.S. Savings Bonds. Most references to government debt concern only the federal debt, because state and local governments borrow much less than the federal government. In the early 2000s, for example, the debt of the state and local governments totaled only about 20
percent of the total for government, even though their expenditure was about 64 percent of the total. The small size of this debt arises because most states and municipalities are constitutionally required to balance their budgets (i.e., to not borrow). However, they make an exception for capital expenditure, that is, public investment in real assets such as schools and infrastructure (roads, bridges, sewers, etc.). This last point reminds us that the government has assets along with debts. At the end of 2005, the federal government’s total liabilities of $9.5 trillion were offset by only $608 billion in financial assets. In addition, there are nonfinancial (real) assets, such as buildings, military bases, and mineral rights, adding up to about $3.2 trillion. On balance, federal net worth (assets minus liabilities) equaled about −$5.7 trillion: It is negative. Why is the government not insolvent or bankrupt, as a private business would be? As “The Budget of the United States Government: Analytical Perspectives 2007” notes: “The [federal] Government . . . has access to other resources through its sovereign powers. These powers, which include taxation, will allow the Government to meet its present obligations and those that are anticipated from future operations.” Because of its ability to tax the citizenry and the long-term strength of the U.S. economy, the federal government is extremely unlikely to go bankrupt. Further, the U.S. federal debt involves an obligation to pay using U.S. dollars. Thus, the government can (through the agency of the Federal Reserve) arrange to have paper money printed and used to pay its debts. Because the U.S. dollar is currently used as a world currency, people around the world are willing to accept its paper money. However, beyond allowing normal demand-side growth, this option is usually avoided in order to prevent excessive inflation and unwanted declines in the price of the dollar relative to other currencies. Its abilities both to tax and to issue currency make the federal government much less likely to go bankrupt than private-sector debtors and state and local governments. During the Great Depression of the 1930s, more than 2,000 local governments became insolvent. This vulnerability is one reason why these governments have positive net worth. At the end of 2005, state and local governments had $2.6 trillion in liabilities and
$2.2 trillion in financial assets. Their real assets added up to about $4 trillion. At the end of 2005, the federal debt was about $7.9 trillion. Of this, only about 58 percent was owed to people and organizations outside the federal government, since the Federal Reserve and other government-sponsored organizations held many of the government’s IOUs. About 15 percent of this was owned by state and local governments, including their pension funds. The rest was held by banks, insurance companies, mutual funds, and individuals. Of the privately held government debt at the end of 2005, 45 percent was directly owned by the “foreign and international” sector. The actual number may be higher to the degree that foreigners own U.S. banks and the like, which in turn own U.S. government debt. In recent years, as a result of the large U.S. balance of trade deficit, this percentage has been rising. Similarly, the percentage of U.S. private debt that is foreign-owned has risen. Government debts should be distinguished from the similar-sounding government deficits. A deficit refers to the situation of the government budget, that is, to the flow of money out of the government to buy goods and services (or to transfer to individuals or corporations) minus the inflow of money from tax revenues and fees. A deficit occurs when spending exceeds revenue inflow, while a budget surplus occurs when revenues surpass spending. On the other hand, the government debt refers to an outstanding pool of obligations to others. When the U.S. government runs a deficit (i.e., spends more than the revenues received), its debt increases. Government surpluses reduce its debts (as in the late 1990s). Put another way, government debts are accumulated deficits (and are reduced by surpluses). A balanced budget leaves government debt unchanged. A federal debt of $7.9 trillion is extremely hard to understand. To do so, we must put it into context: The size of the government debt must be corrected for the effects of inflation and the growth of the economy. Both of these corrections are usually done by dividing the debt by gross domestic product (GDP), that is, the size of the U.S. market economy during a year. This gives a rough feeling for the size of the debt relative to the potential tax collections, that is, how well the nation can cope with the debt.
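The debt-deficit distinction and the GDP scaling described above can be summarized in a brief sketch of the arithmetic. The notation (D for the debt outstanding in year t) is illustrative rather than official budget terminology, and the dollar figure is simply the end-of-2005 federal debt cited above:

\[
D_t \;=\; D_{t-1} \;+\; \underbrace{(\text{outlays}_t - \text{revenues}_t)}_{\text{deficit (negative if a surplus)}},
\qquad
\text{debt ratio}_t \;=\; \frac{D_t}{\text{GDP}_t},
\qquad \text{e.g., } D_{2005} \approx \$7.9\ \text{trillion}.
\]

Read this way, a balanced budget (outlays equal to revenues) leaves the debt unchanged, while the debt ratio can still fall whenever nominal GDP grows.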
Between 1946 and the late 1970s, the ratio of the privately held federal debt to GDP generally fell, mainly because of the growth of nominal GDP and secondarily because of the small size of deficits (and some rare surpluses). A small deficit makes the ratio’s numerator rise less than its denominator, so that the ratio falls. The 1980s saw a rise in this ratio, primarily due to the Reagan-era tax cuts and military spending increases, along with back-to-back recessions early in the decade. (Tax revenues usually fall as incomes fall in recessions, while transfer payments such as unemployment insurance benefits rise.) This debt-GDP ratio stopped rising during George H. W. Bush’s administration. Then, during President Bill Clinton’s second term, the privately owned federal debt shrank compared to GDP to about 33 percent due to tax increases, budget surpluses, and a booming economy. But under President George W. Bush, recession, tax cuts, and military spending increases meant that the federal debt rose to about 38 percent of GDP during the early months of 2004. It is expected to rise more due to promised tax cuts, further spending on the wars in Iraq and Afghanistan, and the steeply rising cost of the Medicare and Medicaid programs. In addition, rising interest rates (seen in the mid-2000s) imply that the government must make larger interest payments on its outstanding debt than in the past. However, in the near future it is not expected to attain anything close to the 106 percent reached at the end of World War II. Should the government have a debt, especially a large one? The answer partly depends on one’s political-economic philosophy. The “classical” economists (starting with Adam Smith in 1776 and persisting to this day) opposed any increase in government debt. Beyond the necessary functions of national defense, law and order, and the enforcement of contracts, the government was seen as a parasitic growth on the economy and society. Typically, however, exceptions were made during times of war, allowing deficits and rising debt. Nonetheless, advocates of classical economics are more likely to favor such ideas as adding a federal “balanced budget amendment” to the U.S. Constitution. Modern views, influenced by Keynesian economics, are more nuanced. In “functional finance,” an increase in government debt (i.e., running a
government deficit) is tolerable if benefits exceed the costs. Start with the latter. The burdens of the government debt have often been exaggerated. The “burden of the debt to future generations” refers to principal and interest payments on that debt that would be paid by future generations. Because these payments are to that same generation of people (or a subset), there is no purely intergenerational debt burden. The debt—the principal—does not have to be paid off. Instead of reducing the number of outstanding bonds, the federal government has such a good credit rating that it can easily borrow new money to replace the old bonds with new ones, “rolling over” the debt. This can be a problem if the government’s debt is extremely high relative to its ability to tax. In this case, its credit rating falls, and the government has to pay higher interest rates on new borrowings. This has never happened to the federal government. In fact, despite the extremely high federal debt after World War II, the United States enjoyed a long boom of GDP growth. This was a period when many middle-class people owned government bonds, a very safe kind of asset, boosting their economic security and purchasing power. The situation in which the government’s debt grows too high and hurts its credit rating usually has happened due to war and civil war. It has primarily been a situation of countries other than the United States, especially poor countries whose currency is not generally acceptable in the world market. In normal times, the burden of the debt primarily consists of the interest payments that must be made (unless bankruptcy is declared). It is true that most of these payments are to residents of the United States. However, these payments put a restriction on the use of tax revenues. A rise in interest payments implies that a government must either cut other types of outlays or raise taxes in order to keep the budget in balance. In 2005, net interest payments represented about 7.4 percent of government outlays (and 1.5 percent of GDP). These ratios would be higher if interest rates were higher: If rates were similar to the average for 1979–2005, then the percent of payments going to net interest would be about 12.5 percent, and the percentage of GDP would be 2.5 percent.
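The interest-burden figures just cited are simple ratios; the sketch below is a simplification (a single average interest rate on the outstanding debt), not the method used in the budget documents:

\[
\text{net interest}_t \;\approx\; \bar{r}_t \times D_{t-1},
\qquad
\frac{\text{net interest}_t}{\text{outlays}_t} \approx 7.4\%\ \text{in 2005},
\qquad
\frac{\text{net interest}_t}{\text{GDP}_t} \approx 1.5\%\ \text{in 2005}.
\]

Because net interest scales with the average rate \(\bar{r}\), a higher rate raises both shares roughly in proportion, which is the logic behind the counterfactual 12.5 percent and 2.5 percent figures above.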
These interest payments primarily go to those who are already wealthy (who are the main owners of federal IOUs). This can intensify existing inequality in the distribution of income, which has already been trending upward during the last 30 years or so. Similarly, much of the interest is paid to those outside the United States. (The percentage of U.S. government interest payments going to foreigners rose steeply in the mid-2000s, attaining 34 percent in 2005.) This implies that the United States must produce more output beyond that needed for domestic use, benefiting those who have lent to the U.S. government. This is also true for interest payments to the rest of the world by private sector debtors. It is also possible that government borrowing competes with private sector borrowing over the available supply of funds. This means that some private sector spending, including investment in factories and the like, may not happen. That is, private fixed investment can be crowded out by increased government borrowing (deficits), which might hurt the growth of the economy’s potential. On the other hand, as the Keynesian school points out, increased government borrowing can stimulate aggregate demand. First, rising incomes increase the amount of saving and thus the funds available for borrowing by both the private sector and the government. Second, if business spending on fixed investment (factories, machinery, etc.) is blocked by unused productive capacity, excessive corporate debt, and pessimistic expectations about future profitability, the rise in aggregate demand can encourage (“crowd in”) private fixed investment. Thus, the problem of crowding out is crucial only when the economy is already operating near full employment. Next, if increased government borrowing causes higher interest rates, this encourages an inflow of funds from the rest of the world to buy both U.S. dollars and U.S. assets. The resulting rise in the dollar exchange rate hurts the competitiveness of U.S. exporters while encouraging U.S. imports. The benefits of increased debt depend on how the government uses the borrowed money. As noted, even classical economists saw winning a war as a good reason for deficits. If IOUs are incurred entirely for waste, bureaucracy, or gambling, however, it can be a
disaster. If they are acquired in a way that encourages the economy and the tax base to grow, it is much like the case of private investment in a factory, since the project can pay for itself. (No one complains about private borrowing that goes into productive investment.) Since the 1920s, many have argued that deficits should finance projects usually not profitable for private business, such as investment in infrastructure, general education, public health, cleaning up the effects of environmental destruction, basic research, and the like. In general, this investment complements private sector activities and can help the economy’s ability to supply GDP (its potential output). As noted, Keynesian economics also sees rising government debt accumulation as providing fiscal stimulus that helps to reverse serious recessions and end economic stagnation. This is most appropriate, as suggested, if the economy is not already near full employment, so that the crowding-out problem is minimal. This kind of policy is like an investment when it is perceived that the private sector would leave a lot of resources unused. The new supply-side economics, on the other hand, sees tax cuts (which also increase government indebtedness) as encouraging extraordinary economic growth by unleashing the private sector, raising the economy’s ability to supply. This school rejects the idea of using increased government expenditure and, in fact, wants to decrease that spending. The supply-side view assumes that “getting the government out of the private sector’s business” will unleash productivity and creativity, raising potential tax revenues. In fact, some supply-siders (e.g., Arthur Laffer) argued that tax revenues may actually rise when tax cuts are instituted because this “unleashing” effect is large. Both the Keynesian and modern supply-side schools see special tax cuts for business fixed investment as beneficial. However, most research on this subject suggests that the positive demand- and supply-side effects of such cuts are quite limited. This is because fixed investment decisions are generally made based on long-term corporate plans, in which after-tax profits play only one part. As noted, both the supply-side and demand-side benefits of fiscal stimulus can be cancelled out if government expenditure is wasteful or benefits only the
cronies of insider politicians or those politicians themselves. Government investment may be in “pork barrel” projects that benefit no one but vested interests. In fact, one of the arguments in favor of a balanced budget amendment to the U.S. Constitution is that having such a rule keeps the politicians from abusing their power in this way. Of course, such an amendment may make it extremely hard for the government to deal with depressions or such disasters as the destruction of much of New Orleans by Hurricane Katrina in 2005. Instead, many call for more popular control of and involvement in the business of governing. See also fiscal policy.
Further Reading
Eisner, Robert. How Real Is the Federal Deficit? New York: The Free Press, 1986; Federal Reserve, Flow of Funds, tables L.105 and L.106. Available online. URL: http://www.federalreserve.gov/releases/Z1/Current/. Accessed June 18, 2006; Gordon, Robert J. Macroeconomics. 10th ed. Boston: Pearson, 2006; U.S. Council of Economic Advisors. Economic Report of the President. Available online. URL: http://www.gpoaccess.gov/eop/. Accessed June 18, 2006; U.S. Office of Management and Budget. Budget of the United States Government: Analytical Perspectives. Fiscal Year 2007. Available online. URL: http://www.gpoaccess.gov/usbudget/fy07/pdf/spec.pdf. Accessed June 18, 2006.
—James Devine
public utilities
Public utilities are very large operating organizations that provide the basic infrastructures for public services, including electricity, telecommunications, natural gas, energy, water and sewage operations, and in some instances cable television (CATV). In the United States, these public services are generally provided by private firms that are regulated by the states and by the federal government. Often in other countries but only occasionally in the United States, these public utility organizations are publicly owned. The “public” aspect comes from the fact that most people in society receive necessary services from these organizations, often on a daily basis. In political terms, as large entities public utilities have considerable political influence themselves, and the manner in which
they are regulated or operated also has important public policy consequences. Public utilities tend to have social and economic characteristics that traditionally have differentiated them from other industries and consequently justified government involvement or regulation. Public utility firms use their networks for product and service distribution over specific geographic areas (often states or metropolitan areas in the United States). They face very large initial fixed costs to provide the facilities necessary to start providing services, which then gives them elements of a “natural monopoly,” making it difficult for competitors to meet their low prices since the competitors would be faced with duplicating all of that large initial fixed investment. Public utilities also provide what have become quite essential services for most Americans, for which there are usually no direct substitutes. As a result, in economic terminology, the demand for their services is highly inelastic, giving them at least the opportunity to raise their prices to monopoly levels absent regulation or other controls. Public utility firms often also produce what economists call significant positive and negative externalities in the course of providing their services. For example, the fact that telephone companies connect all Americans provides enormous positive network externalities to all users. On the negative side, electricity-generating public utilities have traditionally been large sources of pollutants of various kinds. The conventional pricing mechanism of the market generally fails to take account of these externalities, so it may be difficult to induce the utility to produce the level of service at a price that is socially desirable. Over time, policy makers have also come to believe that guaranteeing universal access to a basic level of these utility services (water, communications, and energy) is essential both to protecting equal opportunity for individuals and to meeting the basic necessities of modern life. Some utility services are also deemed critical elements of the infrastructure of local economic development. There are two policy approaches commonly used for overcoming these economic and social concerns related to public utilities: regulation and nationalization (or municipalization). The politics of each can be complex. In the United States, in contrast to most
other nations, governments’ policy choices generally have been to regulate privately owned utility firms. Some American cities own their electricity firms, and U.S. water utilities are frequently publicly owned. In most other countries, however, the dominant historical pattern was to nationalize all public utilities and run them as government entities. Over the past two decades, more nations have adopted a U.S. model of regulating private entities and encouraging some competition among them. Regulating the industries through rule making and administrative procedures leaves the public utility firms owned by private corporate shareholders who seek to earn profits. In the United States, based on constitutional notions of interstate commerce and the Tenth Amendment, such regulation has usually been shared by the states for the intrastate portion of public utility services and by the federal government for the interstate elements. Intrastate electricity and intrastate telecommunications services are overseen by state regulatory bodies, state public utilities commissions (PUCs). Federal regulatory bureaus, such as the Federal Energy Regulatory Commission (FERC) and the Federal Communications Commission (FCC), regulate public utility activities involving interstate commerce. There is no clear line between intra- and interstate commerce, and technological change and legal interpretation have, over time, generally moved that line in a direction favoring more federal control. Most regulation is at the state or even local level in the case of water and sewage utilities and CATV companies. Regulators oversee the prices that utilities can charge to various classes of customers (residential, business, and commercial); the determination of total revenue needs so that firms can earn adequate profits to generate further capital investment; the sanctioning of entry, exit, and expansion for particular services; safety and services issues; and the territorial limits in the industry. Regulatory agencies also usually prescribe uniform accounting systems and procedures, conduct accounting and management audits, supervise utility financial practices, and examine both quantity and quality of services provided. Governmental regulation is generally carried out through rule making and formal hearings and
procedures, in which all the interested parties participate. These parties generally include consumer groups, government advocates, and representatives of business user groups, but, most importantly, the public utility firms themselves. Some political scientists argue that public utilities have often been able to “capture” their regulators using their advantages of information and resources and that they therefore gain very favorable regulation that ensures profits and bars entry by potential competitors. In most other nations, rather than regulating private enterprises, the political response to the rapid development of public utilities was to establish them as government-owned entities. These public utilities often became the largest employers in many countries. The utilities and their labor union organizations often wielded considerable political power, and sometimes policy decisions seemed to favor the interests of the employees over those of consumers. Thus, a literature emerged that argued that these firms should be “privatized” and regulated more like the U.S. model, so that they would provide services more efficiently. As a result of those arguments plus pressure from international trading groups such as the World Trade Organization, in the past 25 years several large nations have privatized or partially privatized many of their public utility organizations. Privatization has often been coupled with the growing trend toward greater competition in these industries. Some of the technological and engineering aspects that initially gave rise to public utilities as “natural monopolies” have changed over time. For example, wireless communications reduced the monopoly status of landline telecommunications utilities. As a result of these technological changes, some portions of the telecommunications and electricity industries have been “deregulated.” Deregulation usually means that government allows competition in some segments of the public utilities marketplace by other firms, and in return government relaxes the degree of regulation over the incumbent public utility firm. While deregulation in its most extreme form can mean the complete elimination of regulatory oversight over
private firms in these industries, in most cases it has not yet gone that far. We can identify two distinct periods of regulatory regimes in the modern history of U.S. public utility regulation: government-oriented and market-oriented regimes. During the first half of the 20th century, the idea that a public utility is a natural monopoly and that regulation is the best substitute for competition dominated legislative and regulatory practices. The legal cornerstone for modern U.S. public utility regulation was laid in 1877 in the U.S. Supreme Court case of Munn v. Illinois regarding the right of the state to regulate rates charged by public warehouses. The Munn v. Illinois decision did not assert that warehousing was a state function that should be taken over by government. However, the Court clearly ruled that the economy would be disrupted if a monopoly firm could impose an unjust pricing burden on those customers who had no choice but to use its service. So the Court encouraged legislatures to take a larger role in regulating rates and services in the name of the public interest. At the federal level, Congress established the first regulatory agency in 1887, the Interstate Commerce Commission, the members of which were appointed experts. At that time, the railroads were the dominant public utility, and they had achieved monopoly pricing power as well as disproportionate political power in the United States due to their enormous wealth. Thus, the regulation of railroads was a major political and electoral question, much more so than utility regulation is today, when it is generally viewed as more of a technical issue unless prices are skyrocketing or major changes are taking place. After some initial uncertainty about the powers that Congress had actually intended to delegate to appointed regulators, in 1890 the U.S. Supreme Court affirmed that the “reasonableness of utility rates” could be the subject of regulatory and judicial review. Several states had earlier experimented with “weak” regulatory commissions with limited powers, such as Massachusetts in the 1860s, but in 1907 the states of Wisconsin, New York, and Georgia established “strong” regulatory commissions, with jurisdiction and power over telephone, telegraph, gas, electric, and water companies. Other states followed
rapidly. State regulatory commissions were widely regarded as the means by which the public was protected from excessive rates, unsafe practices, and discriminatory treatment by monopolies whose services were increasingly required by a growing American middle class. The period from the 1920s through the 1970s was fairly stable and quiet in terms of public utility regulation in the United States. Services were expanded to reach most Americans partly because of a policy goal of achieving “universal service” and partly because prices were mostly falling during this period. New technologies lowered prices further, public utility firms earned solid profits, and, apart from some financial and regulatory issues during the Great Depression of the 1930s, public utilities prospered. Most Americans were happy with their services. This stable relationship was altered significantly in the 1960s and 1970s. In the electricity industry, many new controversies developed around nuclear power plant siting, energy shortages, skyrocketing prices, and pollution problems. At the same time, the technologies of telecommunications changed considerably, eventually leading to the 1984 break-up of AT&T’s monopoly. In short, a series of new disruptions and issues led to greater interest in the decisions made by public utilities and focused greater political attention on these industries and their regulation. The fallout from these events led to a more market-oriented regime in the 1980s and 1990s. More policy makers became convinced that deregulation and more market-oriented regulations would enhance service quality and lower prices. So-called “Chicago school” economists promoted a vision that public utilities were no longer natural monopolies and that competition could flourish in most of these industries. More recent debates have centered on how far deregulation should go in the public utility arena and what forms of continued regulation make the most sense. Mainly due to technological changes such as wireless communications and the Internet, telecommunications has changed a great deal. Deregulation and change have been slower in electricity policy, in part due to continued problems with pollution, higher input prices for oil and other energy sources, and policy failures and scandals such as Enron that led to
the politically unacceptable blackouts of power in California in 2001. The regulation of public utilities in the United States is a blend of political decision making and technical decisions based on expertise. Regulatory commissions, both state PUCs and federal regulatory agencies, are mostly made up of officials with some degree of expertise who are appointed, respectively, by governors and presidents. The public utility firms often go before these regulators to seek rate increases or other changes, and then a quasi-judicial administrative proceeding takes place. Utilities hold the advantage of having the greatest stake in the decisions, leading them to spend money on lobbying politicians, paying high-level consultants and experts to testify, and providing the most information in the process. Utilities are also powerful politically because they are critical to future job development; they have many employees, most of whom are tied to a particular state; they have large investments in that state; and they make large campaign contributions. Their usual opponents, who may include potential competitor firms, consumer groups, and others, now can marshal more resources than they did prior to the 1960s, but they are still frequently overwhelmed by the large public utilities. The regulators, who are appointed in 39 states by the governor but elected in 11 others, are mostly insulated from direct political influences, but studies have shown that they are subject to broad political influence in some states. The outcomes of regulatory decisions are usually very complex, lengthy documents with considerable technical detail about accounting, economic, and engineering questions that are beyond the ability of most laypersons and voters to comprehend. Thus, public utility regulation can most accurately be described as a blend of political and technical decision making. Though the services of public utilities are essential to daily survival for most Americans, they are often taken for granted, like the “pipes” and “wires” that lie underground as the infrastructure for these services. This is true except on those rare occasions when they fail, such as blackouts, when the salience of policies surrounding public utilities rises substantially. A number of policy trends, including deregulation, privatization, concern about local and global
pollution, energy prices, and concerns about energy independence, have become intertwined with public utility issues in recent years, raising their importance to policy makers and the American political process.
Further Reading
Crew, Michael, and Richard Schuh, eds. Markets, Pricing, and the Deregulation of Utilities. Boston: Kluwer, 2002; Gormley, William. The Politics of Public Utility Regulation. Pittsburgh: University of Pittsburgh Press, 1983; Pierce, Richard. Economic Regulation: Cases and Materials. New York: Anderson Publishing, 1994; Teske, Paul. Regulation in the States. Washington, D.C.: Brookings Institution Press, 2004.
—Junseok Kim and Paul Teske
reproductive and sexual health policy
Campaigns for and against the legalization of abortion, public health interventions aimed at reducing the spread of HIV/AIDS, prosecutions of pregnant women under novel extensions of drug trafficking and child endangerment laws, and debates about the appropriateness of vaccinating teenage girls against the virus that is linked to cervical cancer: Each of these examples hints at the breadth of reproductive and sexual health, as well as the intense political struggle with which it is often associated in the United States today. Yet, to understand this area of politics, attention must be paid to historical trends, connections between issues, and how larger power struggles have structured reproductive and sexual health. While often reduced to a delimited topic or group of topics such as abortion or HIV/AIDS, reproductive and sexual health issues are many, varied, and highly interconnected in practical terms. Reproductive health issues are those that revolve around procreative capacities, while sexual health issues are primarily associated with sexual behavior and norms. Some scholars and practitioners create a conceptual distinction between these two sets of issues, yet, in practice, they are deeply intertwined. For example, societal norms concerning sex outside of marriage, between members of different racial or ethnic groups, or between individuals of the same sex can greatly impact the extent to which funding or services are provided for those struck by sexually transmitted
infections, the way in which the children of members of those groups are viewed and potentially provided for under child welfare programs, and the degree to which these acts are socially proscribed or criminalized. Other understandings of reproductive and sexual health are organized around the application or development of codified principles based in either U.S. constitutional doctrine or larger international human rights agreements. A focus on reproductive and/or sexual rights and duties seeks to ensure that both states and private entities respect individuals’ and groups’ actions and needs in these areas. Many of those doing practical advocacy or theoretical work on reproductive and sexual health focus their efforts in this way, seeing reproductive and sexual rights both as essential to individual autonomy and as a necessary precondition for the exercise of a host of other citizenship and human rights. Understood in this broader and more practically based way, the parameters defining the set of issues thought of as related to reproductive and sexual health are not fixed, solid, or naturally constituted and so, from a substantive perspective, vary over time and place as new issues are engaged or one issue or another gains greater salience. In the U.S. context, reproductive and sexual health includes, among other issues, contraception and family planning, prenatal care, sexually transmitted disease care (including HIV/AIDS), infant mortality, unintended pregnancy, sex education and information, abortion, sexual violence, notions of sexual pleasure, sexual behavior, adoption practices, fertility treatment, stem cell research, reproductive cloning, and cancers of the reproductive system, arguably extending as far as more general health access and beyond. While these topics make up the substance of reproductive and sexual health, to study the politics surrounding these issues, focus must be put on the interactions of power through which various actors and institutions engage questions of reproduction, sexual behavior, and health. This can include the factors contributing to the salience of particular issues at a given historical moment, the formal policy making through which funding for particular services is determined, the processes of legalizing and/or criminalizing certain behaviors and rights, as well as the more subtle workings of power, and the means through
which societies define healthy and/or moral behaviors and practices. The primary patterns of the politics of reproductive and sexual health in the U.S. context are change in salient issues over time; interconnection among issues; structuring through the social and political formations of gender, sexuality, race, class, and nation; and the harnessing of reproduction and sexuality to meet larger social and political goals. First, in the contemporary period, a number of controversial issues have gained saliency from among the broader array of reproductive and sexual health issues, such as abortion, HIV/AIDS, and, to a lesser extent, same-sex marriage and stem cell research. Other issues, such as infant mortality, broader health care access, and contraception, have occasionally received attention, but they have not consistently attained a high level of media, governmental, or popular interest in the contemporary moment. Yet, the list of the most controversial issues has not remained static over time. A long-term view allows us to see the historically specific and transitory nature of what is considered reproductive and sexual health politics. For example, early in U.S. history, abortion was legal under common law until the point of “quickening,” that moment when a pregnant woman first felt fetal movement. Reliance on folk methods and a lack of distinction between abortion and contraception during the early stages of pregnancy did not lend themselves to abortion being a particularly salient political issue at this time. Today’s notions of abortion and fetal viability would be quite foreign to early American women, who, along with most health care providers, spoke of the desire to restore a woman’s menstrual cycle rather than aborting a fetus per se. In the late 1850s, however, even this early type of abortion began to be criminalized by the burgeoning medical profession and political elites in state governments. While in many cases exceptions were made for situations in which a woman’s life was at risk, doctors’ policies and states’ regulations as to the conditions that would justify such a therapeutic abortion varied widely. During the time abortion was largely criminalized in the United States, women and their partners still sought out and many procured abortion procedures, mostly beyond public view, from variously skilled and sympathetic doctors, midwives, and other specialized practitioners. In the mid-1950s,
women, joined by some medical practitioners, clergy, and other allies, began the modern push for access to abortion. The landmark 1973 Roe v. Wade decision by the U.S. Supreme Court located a woman’s right to abortion under the right to privacy but also contributed to a relatively successful countermovement seeking to regulate and recriminalize the procedure. The attention to the fetus brought on by this powerful movement has also led to a new arena of political conflict related to reproductive and sexual health, wherein fetal rights are being asserted, most often in opposition to the rights of women. Recent years have seen new interpretations of child endangerment and drug trafficking laws, the granting of benefits and services to the fetus, the creation of separate penalties for the murder of a fetus, and an increased level of controversy in debates about stem cell research. HIV/AIDS and same-sex marriage have also become highly salient issues in contemporary U.S. politics. As noted earlier, there is a clear practical connection between issues that concern reproductive capacity and those that deal with sexual behavior and norms. However, it is becoming clear that the interconnection extends to the level of individual issues. It is this interconnection that is the second overall pattern in U.S. reproductive and sexual health politics. For example, the above discussion of abortion’s changing saliency through time only begins to hint at the interplay between abortion and other reproductive and sexual health issues. According to women’s reports, unintended pregnancy, the necessary precursor to most abortion procedures, is the result of many factors. Economic factors include the inability to support large families, the need to work outside the home, and the lack of reliable financial support from a partner. Regardless of one’s views, historians have documented the longstanding efforts of women and their partners to control procreation, albeit limited by a persisting lack of access to safe and effective contraceptive methods. These efforts notwithstanding, many American states, and with the 1873 Comstock Law, the federal government, sought to restrict access to contraceptives and family planning information. These restrictions were not fully invalidated for married couples until the 1965 U.S. Supreme Court case Griswold v. Connecticut and for single individuals through
Eisenstadt v. Baird seven years later. Notably, the link between contraception and abortion was even made through formal jurisprudence, as the right to privacy upon which the Supreme Court based its decision in Roe found its start in these cases concerning contraception. Similar connections can be found between many different reproductive and sexual health issues; action or inaction regarding one issue can greatly impact others. For example, access to prenatal care can greatly affect rates of infant mortality, and public health interventions and modifications to individual sexual practices can curb not only the spread of HIV/AIDS but also rates of other sexually transmitted diseases and unintended pregnancy. Although largely occluded in mainstream political discourse and policy making, the practical interconnections of reproductive and sexual health issues persist nonetheless. While the connection among issues within reproductive and sexual health politics must often be excavated, the connection between this politics and larger social and political formations is more easily discernible. The ways in which gender, sexuality, race, class, and immigrant status, individually and in conjunction, have served as salient factors contributing to the shape and resolution of the politics of reproductive and sexual health are the third pattern in the U.S. context. In how the procreation of different groups is viewed, and in the social norms and legislation concerning the appropriateness of various sexual behaviors among different individuals and groups, there is no default position or natural formulation; all are structured by relations of power. Decisions on whether to provide and how to regulate certain types of reproductive and sexual health services are fundamentally political acts. Indeed, reproductive and sexual health politics are experienced and conceptualized differently based on one’s position, and public health policy has not often treated these disparate situations equally. While reproductive and sexual health is relevant to men and women, the definition of this policy area as primarily a set of women’s issues in a society in which women do not hold political power equal to men contributes to the shape that reproductive and sexual politics has taken in terms of funding, prioritization of problems, and more. Heterosexuality’s normative status has also affected reproductive and sexual
health through the sorts of public health interventions deemed appropriate and the degree to which sexual behavior that falls outside this norm is regulated. The impact of the social and political formations of race, immigrant status, and class on reproductive and sexual health is evident in, for just several examples, the practices of breeding and family separation in America's version of chattel slavery, eugenically motivated efforts to reduce the childbearing of those argued to be mentally, racially, or economically unsuitable parents, and the discriminatory policies that barred Japanese and Chinese women from emigrating to join their husbands and start families in the United States. In addition, while the ability to terminate an unintended pregnancy is seen by some as the most important reproductive and sexual health guarantee, for others the ability to bear children has also been difficult to attain. Longer-term contraception and permanent sterilization were used coercively among certain social groups, including black women, Native American women, prison inmates, and those deemed to be physically or mentally disabled. While such women have managed to find ways to use these technologies for their own family planning aims, the legacy of their coercive use remains in many communities. In the contemporary United States, a host of reproductive and sexual health disparities persists, evading individual- or group-based explanations. Infant mortality, unintended pregnancy, sterilization, HIV/AIDS diagnosis, and death all occur at disparate rates along race and class lines. As well, the focus on reproductive and sexual rights as a necessary precondition to exercising other types of rights is no panacea for those also marginalized by race, class, or immigrant status. It is not simply that gender, sexuality, race, class, and nation impact the shape of reproductive and sexual health from a structural perspective. A fourth, related pattern throughout the U.S. history of reproductive and sexual health has been the purposeful use of this politics as an instrument to further other social and political goals. This can be seen in the way in which the procreation of certain groups has been encouraged or discouraged and in the manner in which sexual interactions and marriage have been regulated on the basis of race and sex, for just two examples.
During slavery, regulations governed the racial status and ownership of children born to African slaves. Breeding practices and other mechanisms encouraged childbearing; each furthered the economic interests of slaveholders and deemphasized the bond between enslaved parents and their children. Today, the procreation of women of color as well as poor, young, and unmarried women is not deemed by many to fulfill societal goals and thus is discouraged. Similarly, in the latter half of the 19th century, the doctors who sought to bring abortion within their purview and beyond the control of women and “irregular practitioners,” such as midwives, were spurred by professional and social aims to strengthen the medical profession’s place in society and by gender norms that privileged physicians’ (primarily men’s) judgment over women’s. The efforts of their state government elite allies overlapped; their motives to restrict access were aimed primarily at reducing abortions among white middle- and upper-class women. Under this eugenic logic, it was these women’s reproduction (and not that of racial and ethnic minorities, immigrants, lower-class, or disabled individuals) that was deemed essential to the progress of the American nation. Similarly, the sexual relations and possibilities for marriage for couples who found themselves positioned on different sides of the color line have also been highly regulated by many states throughout U.S. history. These unions are often seen as potentially undermining the racial and therefore social order. Gender and sexuality factor in as well, as the sanctity of heterosexual marriage has been contested, but most often protected, as seen in recent court decisions. In addition, only recently have laws reached the books of every state allowing for the possibility of prosecuting men who engage in forced sexual relations with their wives. Struggle over sexual and gender norms thus characterizes recent political struggles, as reproduction and sexuality are harnessed in battles over larger social goals. Further Reading Luker, Kristen. Abortion and the Politics of Motherhood. Berkeley: University of California Press, 1984; McLaren, Angus. A History of Contraception: From Antiquity to the Present Day. Oxford: Blackwell Pub-
lishers, 1992; Miller, Alice M. “Sexual but Not Reproductive: Exploring the Junction and Disjunction of Sexual and Reproductive Rights.” Health and Human Rights 4, no. 2 (2000): 69–109; Reagan, Leslie J. When Abortion Was a Crime: Women, Medicine, and the Law in the United States, 1867–1973. Berkeley: University of California Press, 1998; Roberts, Dorothy. Killing the Black Body: Race, Reproduction, and the Meaning of Liberty. New York: Vintage Books, 1997; Roth, Rachel. Making Women Pay: The Hidden Costs of Fetal Rights. Ithaca, N.Y.: Cornell University Press, 2000; Shapiro, Ian. Abortion: The Supreme Court Decisions, 1965–2000. Indianapolis: Hackett Publishing Company, 2001; Solinger, Rickie. Pregnancy and Power: A Short History of Reproductive Politics in America. New York: New York University Press, 2005. —Amy Cabrera Rasmussen
secrecy The first principle can be stated simply: Information should be kept from those who do not have a right to know it. A second principle that frequently guides the behavior of both private and governmental officials is to keep information from those who might use it to harm the official’s interests. Both lead inevitably to conflicts and problems for democratic governance. The U.S. government uses three formal categories to classify information as confidential (the lowest level of sensitivity), secret, or top secret (the most sensitive category). Individuals inside and outside the government are given security clearances that allow access to particular levels of secret materials. The standards and procedures for classifying secrets, declassifying old documents, and granting access to information were most recently defined in Executive Order 13292 issued by President George W. Bush in 2003. Most secret materials involve intelligence agencies, military plans or operations, or relations with other countries. The major offices within the Department of Defense and the various intelligence agencies have developed their own detailed procedures and policies for classifying information, the specifics of which are often classified as secret. The principle of classification is straightforward: The greater the potential damage to important national interests, the more secrecy
is necessary. But the application of that principle is not automatic; someone must make a judgment about how potentially damaging a piece of information is and what interests are threatened. The difference between protecting a presumed national interest, or protecting an agency of the government from looking bad, or protecting the personal or political interests of key officials is ultimately determined by people who sometimes have a vested interest in equating threats to their own interests with threats to the national interest. An example may be the Department of Defense decision in mid-2006 to stop issuing quarterly reports to the U.S. Congress on the number of Iraqi units who were fully trained and deemed capable of operating without extensive support from U.S. forces. The Pentagon argued that this information was always meant to be secret and had been misclassified in the past. Critics in Congress argued that it was the awkward fact that the number of fully capable Iraqi units was declining and the government of Iraq seemed further away from taking over from American forces that motivated the sudden classification of the information. The concept of executive privilege is also a key to understanding government secrecy. Presidents have consistently claimed the right to keep documents and information secret from Congress on the basis of the separation of powers as defined in the U.S. Constitution. While executive privilege is not directly mentioned in the Constitution or defined in laws, it has been claimed by every American president to keep information away from either Congress or the public. Claims of executive privilege also raise the question of when information really deserves to be kept secret and when the claim is being used to cover up situations or avoid political embarrassment. While it is obvious that secrets ought to be kept secret, there are three fundamental issues that any system of secrecy necessarily raises: conflict between competing rights, confusion of political harm with danger to vital governmental functions, and efficiency. The most common and potentially explosive conflicts over who has a right to know which pieces of information occur between the executive branch of the U.S. government and Congress and between the executive branch and the media. The right of the executive branch to formally classify information as
secret and withhold it from others is based on both constitutional and statutory law. At the same time, the right of Congress to oversee and monitor the executive is rooted in the U.S. Constitution. More often than not, Congress tends to accept executive assertions of secrecy to protect national security or the closely allied principle of executive privilege. But there are times when a congressional committee or Congress as a whole demands access to information to assure that laws are being properly executed. The result is confrontations pitting an executive branch contention that some information is too sensitive to be released to Congress, even with existing safeguards and procedures, against congressional insistence that the real secret being protected is bureaucratic bungling or misbehavior. The historical record contains a number of examples in which it is clear that executive secrecy was primarily aimed at hiding uncomfortable facts. Congress is not the only source of challenges to executive secrecy. A corollary of the freedom granted to the press in the First Amendment is the responsibility of the media to serve as a watchdog for citizens and help make government open and transparent. The desire of the media to know what is going on and the desire of the government to keep secrets are frequently at odds. The Freedom of Information Act (FOIA) of 1966 was an attempt to clarify the right of the media and ordinary citizens to know what the government was up to and the right of government agencies to keep secrets. While most observers feel FOIA improved public access to the workings of government, many feel that because it is the government agency that created the secret classification in the first place that interprets the requirements of FOIA in a given case, the balance of power still swings too heavily to the preservation of secrecy at the cost of transparency. National interests, ideological or political interests, and personal interests are not clear and distinct categories; conflicts and confusions among interests are inevitable. Decisions about secrecy are made by human beings with multiple goals, from serving their country, to protecting the president and his administration, to advancing the interests of their agency or political party, to enhancing or preserving their own reputation. The guiding principle in classifying secrets is the extent to which information can be harmful to
the national interest. It is all too easy for an appointed official to think that anything that might be used by political opponents to embarrass the president or the president’s political party is harmful to the nation as a whole and try to use security classifications to keep embarrassing truths under cover. Managers of executive agencies may be inclined to equate their agency’s interest with the national interest and feel that anything that puts them in a bad light is a harmful secret that ought to be kept from Congress or the press. In order for intelligence agencies to “connect the dots” and correctly analyze situations, it is necessary to first “collect the dots.” Unless the relevant information flows quickly to those who have to analyze it and assemble an overall picture, it will be impossible to produce an estimate of what the real threats to the United States are in a timely and useful fashion. But a critical strategy for preserving overall secrecy is to limit access to information on a “need to know” basis. In planning a complex military operation, for example, each participant will be told only what she or he appears to need to know to carry out her or his part of the plan. The plan is compartmentalized so that only a handful of people at the top of the hierarchy have an overall picture. The more complex the plan, the more difficult it is to be sure that each participant knows what they need to know and the more difficult it is to avoid problems or even failure because critical information was not correctly shared. In a complex bureaucracy like those constituting the national security system, such as the Central Intelligence Agency, the Defense Intelligence Agency, the FBI, the National Security Agency, among others, the strategy of compartmentalizing information to keep secrets is referred to as “stove piping.” Information and secrets travel up to the top of a bureaucratic agency rather than being broadcast across boundaries to other bureaus and agencies. The consequences can be extraordinarily serious, as detailed by the Report of the 9/11 Commission. The pursuit of secrecy within agencies can interfere with effective action across agencies. Leaks, the release of information to unauthorized recipients, are endemic in complex organizations such as governments. The term leak is value laden and highly negative. When officials brief reporters “off the record” or “on background,” or
pass on information and talking points to their friends and allies in Congress or the media, they do not define what they are doing as "leaking." A leak is a release of information that the person in charge does not appreciate. Leaks sometimes violate formal secrecy rules and most often undermine the desire of someone to keep information out of public view. Leaks can be intended to achieve one or more purposes: mobilize public opposition to (or support for) a proposed policy, embarrass political rivals within an administration, or reveal corruption or bungling within an agency. Revealing misconduct within the government is known as whistle blowing and is a major source of information for both Congress and the media. Reactions to leaks are ultimately political, depending on whether one approves or disapproves of the effects of the leak. Every administration has strenuously attempted to keep some things secret. Some presidents, for example Richard Nixon and George W. Bush, have presided over administrations that placed a particularly strong emphasis on keeping a great deal of information secret. Further Reading Committee on Government Reform, U.S. House of Representatives. On Restoring Open Government: Secrecy in the Bush Administration. Washington, D.C.: Government Printing Office, 2005; Melanson, Philip. Secrecy Wars: National Security, Privacy and the Public's Right to Know. Dulles, Va.: Potomac Books, 2002; National Commission on Terrorist Attacks. The 9/11 Commission Report: Final Report of the National Commission on Terrorist Attacks upon the United States. Authorized edition. New York: W.W. Norton, 2004; Roberts, Alasdair. Blacked Out: Government Secrecy in the Information Age. New York and London: Cambridge University Press, 2006. —Seth Thompson
social security “Social security” can encompass an enormous variety of programs to maintain income and provide for health. Abroad, it can even include education, training, and job security. In the United States, the meaning generally is considerably narrower.
Technically, the federal program called Social Security could apply to any program that related to the Social Security Act of 1935 with its subsequent revisions. That, however, would include unemployment insurance and means-tested programs such as Supplemental Security Income (SSI) and Medicaid. Americans generally do not consider Social Security to include these. In both popular and professional usage in the United States, Social Security refers to contributory social insurance requiring contributions from workers and providing benefits without regard to need. Thus, it refers to the programs of the Social Security Administration (SSA) that provide old-age, survivors’, and disability benefits (OASDI). The Social Security Administration no longer administers Medicare, the program that provides health coverage largely for those 65 and over, but Medicare also, quite clearly, is contributory social insurance. Its financing mechanism is similar to that of OASDI, which involves two trust funds, one for OASI and another for disability benefits. Medicare is financed through a third trust fund. Certainly, it could be considered “social security,” but it presents serious long-term financing issues that OASDI does
not. Normally, Medicare is considered separately—as it must be for any meaningful evaluation of long-range fiscal prospects. As a rule, then, Social Security refers to the contributory programs designed to provide income support. Thus, Social Security here refers specifically to OASDI; the phrase Social Security and Medicare is used when the discussion includes health coverage. Social Security funding comes through Federal Insurance Contributions Act (FICA) taxes. The worker pays 6.2 percent, while the employer matches the paycheck deductions, paying an equal 6.2 percent. The amount subject to FICA tax is capped. For 2005, the maximum amount subject to FICA taxes was $90,000. The cap rises each year based on inflation. A similar tax funds Medicare. The employee pays 1.45 percent, and the employer pays an equal amount. There is no cap; the tax is due from both worker and employer on the full amount paid. Social Security and Medicare taxes go into trust funds, where they finance current benefits and administrative costs. Both Medicare and Social Security
operate at astonishingly low expenses for administration. For every dollar that comes in, more than 99 cents goes out in benefits. Administrative expenses of less than 1 percent of income reflect administrative efficiency that no private program can come close to matching. Social Security taxes coming into the trust funds currently are far beyond the amount needed to pay benefits. The surplus funds by law are invested in government securities that regularly pay interest into the trust funds. The bonds in the trust funds have the same claim on the U.S. Treasury as any other Treasury bond. They are “only paper” in the sense that a $20 bill or a U.S. Savings Bond is “only paper.” All have value because they have the backing of the U.S. government. Although critics of Social Security usually portray the system as a retirement program only, it is far more. Nearly one-third of the system’s benefit payments go to younger people. In addition to retirement benefits for the elderly, the system pays benefits to qualifying widows and to children of deceased workers. There are also benefits to workers who become disabled and to their dependents. Additionally, there are benefits to workers’ spouses. Social Security’s benefits are protected against inflation; benefits will never lose their purchasing power. Another strength of the program is that the system provides retirement benefits for the life of the beneficiary. A retiree cannot outlive Social Security’s payments. The United States tends to be tardy in adopting social programs. When Congress passed the Social Security Act in 1935, although there had been some previous programs that were narrower in focus such as military pensions, it was America’s first step toward universal social insurance. Another 30 years passed before Medicare’s adoption in 1965, and its passage came only after one of the fiercest political battles in this country’s history. Even then, Medicare was limited almost entirely to the elderly. Germany had put a comprehensive program in place in the 1880s, and other European countries had quickly followed the German example. To be sure, there had been pressures previously. Former president Theodore Roosevelt, in his famous Bull Moose campaign of 1912, had strongly advocated comprehensive social insurance, including health care. The
United States, however, often moves slowly on such issues. The world’s wealthiest country remains virtually alone in the industrial world in failing to provide a system of comprehensive health care for its entire population. Instead, a system of private health insurance provided through employers has evolved. As health-care costs increase, fewer employers offer such coverage, leaving more Americans without access to regular health care. The result has been that by nearly any measure of health-care results, from access, to infant mortality, to longevity, the United States suffers in comparison with other industrial countries—and in fact, it rates poorly even when compared with some countries that generally are thought to be “third world.” Companies that do offer health-care coverage are finding it increasingly difficult to do so. The costs are seriously hampering their ability to compete with other countries in which the health-care costs are widely distributed throughout society, rather than falling almost exclusively on employers. Business interests in other countries frequently were in the forefront of those who advocated for government health coverage. In general, the American business community has yet to do so. Because a healthy workforce is essential, however, and because the burden of providing health benefits is becoming more than they can bear, it is likely that American business leaders will someday advocate government health coverage as good for business. It may even be that American business will come to confront in providing pensions what it now faces with regard to health care. As more and more corporations find it difficult or impossible to meet their commitment to provide retirement benefits to their workers, there could ultimately come to be support for increasing Social Security, rather than truncating or privatizing it. The Social Security Act of 1935 provided only retirement benefits. Although FICA taxes started in 1937, the first retirement benefits were to be delayed until 1942. In 1939, however, Congress amended the act, changed its character, and began payments in 1940. In addition to retirement benefits, the 1939 amendments provided for benefits to spouses, dependent children, and dependent survivors of deceased workers. In 1950 with the strong support of President
Harry S. Truman, Congress expanded the program to include the self-employed. Demonstrating that support for Social Security was not limited to Democrats, President Dwight D. Eisenhower, a fiscally conservative Republican, signed into law the 1956 amendments that added disability benefits, a huge expansion. In 1965, President Lyndon B. Johnson (LBJ) signed legislation for which he and his predecessor, John F. Kennedy, had fought strenuously. LBJ’s signature brought the greatest expansion in Social Security’s history by adding health benefits for the aged, Medicare. Until 1972, benefit increases depended on congressional action. That year, President Richard Nixon signed into law provisions to index benefits to inflation, providing automatic annual increases. By 1977, it had become clear that the formula used to index benefits was flawed. Because of what came to be called “stagflation” (a combination of a stagnant economy, stagnant wages, and inflation), benefits had risen too fast in relation to incoming revenues. Amendments in 1977 tried to correct the formula, protecting those who had been receiving benefits based on the old formula. This led to the so-called notch baby issue when new beneficiaries received benefits lower than their immediate predecessors. When Ronald Reagan became president in 1981, he moved immediately to slash Social Security drastically. He had long been a foe of social insurance and had energized the opposition that had existed since the beginning. As early as the 1950s, he had argued that it should be privatized. His actions as president led to a political firestorm. Although he did succeed in trimming the program somewhat, he had to promise not to attack Social Security again. He kept his promise, but his long history of hostile rhetoric had at last made it politically possible to criticize and to some extent even to attack the program directly without paying a political price. By 1982, a cash flow problem had developed, and there were fears that without revisions, Social Security’s income would be inadequate to pay full benefits by late 1983. As many experts have noted, the system’s troubles were far less severe than the news media—which often reported dire predictions from enemies of the system as though they were absolute fact—portrayed them.
Reagan honored his promise and called together a bipartisan commission to recommend action. He named Alan Greenspan as its head. The commission took only months to issue its report. The members unanimously rejected means testing or radical revisions and gave a full vote of confidence to Social Security's fundamental principles. Among the recommendations were subjecting half of Social Security's benefits to income tax except for low-income recipients (previously, all Social Security benefits were free from income tax), with receipts to be directed to the trust funds. All new federal workers were to be incorporated into the system. The cost-of-living adjustment would be delayed for six months, and already scheduled tax increases were to take effect sooner. Congress accepted the recommendations and raised the age for full retirement gradually from 65 to 67. Those born in 1938, for example, reached the age for full retirement upon turning 65 and two months. Those born in 1960 or later had to reach age 67 to qualify for full benefits. Reagan signed the 1983 amendments, and the trustees then projected that Social Security would be sound for the full period of its long-range projection, 75 years. In 1993, President Bill Clinton signed legislation increasing the amount of benefits subject to income tax. Since then, only 15 percent, rather than 50 percent, may be excluded. The trustees are political appointees, not unbiased experts. They include the secretary of the Treasury, the secretary of labor, the secretary of health and human services, the commissioner of Social Security, and two members from the public. Their reports for nearly two decades have made three projections: Alternative I, Alternative II (the "Intermediate Projections"), and Alternative III. Their "Intermediate Projections" for years have forecast future deficits. This is strange, because the 1983 projections called for a slight surplus, and the economy's actual performance has been far better than the 1983 projections had assumed it would be. What, then, happened to the projected surplus? Actually, nothing happened to it. The trustees simply began to use more pessimistic calculations. Their Alternative I projections are more optimistic and call for no difficulty in the future. The economy's actual performance has consistently been much closer to the assumptions underlying the Alternative
I projections than to the Intermediate Projections, yet only the pessimistic Intermediate Projections receive publicity. This is largely the result of a well-financed propaganda campaign admittedly designed to undermine public confidence in Social Security. The libertarian Cato Institute, funded lavishly by investment bankers and various Wall Street interests, has been quite active in this regard, as have many other organizations. Among the most successful has been the conservative Concord Coalition, which has managed to convince many in the public and in the news media that it is interested only in "fiscal responsibility" and has no political agenda. Experts recognize that long-range projections on such complicated issues as Social Security are no more than educated guesses. Yet the news media and many policy makers assume the Intermediate Projections are precise and unquestionable. Yet the year that they project for depletion of the trust funds varies from report to report and since 1983 has never been sooner than 32 years away. No projection can be accurate over such a long period. It would be foolish to make radical revisions on such weak premises. Nevertheless, there is widespread sentiment that Social Security must be "reformed." Supporters of the system have proposed a variety of measures that would increase income, such as raising or eliminating the cap on income subject to FICA taxes or reducing benefits. In the 1990s, there were many proposals to subject benefits to means tests, or to "affluence tests." Such proposals have fallen into disfavor. Supporters have come to recognize that they would destroy Social Security by converting it into a welfare system, which would have little or no political support. Social Security's enemies have turned their attention instead toward privatization, or carving "personal accounts" out of the system. Most prominently, President George W. Bush, the first president to speak openly in opposition to the principles of Social Security, favors privatization. Even Reagan had cautiously phrased his comments to disguise his opposition. George W. Bush also has suggested radical cuts in benefits for all but the poorest Social Security recipients. His proposals have generated furious opposition.
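The payroll-tax arithmetic described earlier in this entry can be illustrated with a brief sketch. The Python fragment below is purely illustrative and is not an official calculator; the 2005 rates and wage cap are those cited above, and the function name is hypothetical.

    # Illustrative sketch of the FICA arithmetic described in this entry (2005 figures).
    OASDI_RATE = 0.062        # worker's share; the employer pays a matching 6.2 percent
    OASDI_CAP = 90_000        # 2005 maximum earnings subject to the OASDI tax; the cap rises each year
    MEDICARE_RATE = 0.0145    # worker's share of the Medicare tax; no cap applies

    def worker_fica(wages):
        # Hypothetical helper: a worker's annual FICA payment (the employer pays the same again).
        return round(OASDI_RATE * min(wages, OASDI_CAP) + MEDICARE_RATE * wages, 2)

    print(worker_fica(50_000))    # 3825.0: 6.2 percent plus 1.45 percent of the full $50,000
    print(worker_fica(500_000))   # 12830.0: OASDI stops at the $90,000 cap; only Medicare applies above it

On these figures, a worker earning $50,000 pays 7.65 percent of wages, while one earning $500,000 pays under 2.6 percent—the regressive pattern that the reform discussed next would address.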
Social Security could indeed benefit from a progressive reform, one that would protect benefits while introducing a modicum of progressivism into its funding. Benefits have always been progressive. They replace a higher portion of a low-income worker’s income than of one earning more. The taxation mechanism, though, in levying a flat tax but exempting all income above a certain amount, has been regressive. This should be corrected by removing the cap on taxable income and exempting the first $20,000 of earnings from FICA taxes on workers—employers would continue to pay the tax from the first dollar of wages. Other changes could enhance the trust funds. Among them would be dedicating an estate tax to fund Social Security. There are serious threats to America’s financial future. The astronomical deficits that Presidents Reagan and George W. Bush have generated are the issue. Social Security is not the problem. The charts predicting trouble reflect Social Security, Medicare, Medicaid, and interest on the national debt. A close look reveals that Social Security’s effect—even with the pessimistic projections—is tiny. The trouble comes from Medicare, Medicaid, interest on the national debt, and the assumption that “the days of 50 percent to 60 percent marginal tax rates are over.” The troubles can be made to evaporate, but that assumption must be discarded. Social Security’s surplus should be dedicated to paying down the national debt (a version of the “lockbox”), thus sharply reducing interest payments. There was progress paying down that debt with Clinton’s balanced budgets until Bush cut taxes and restored deficits. A reform in America’s health care delivery system would provide better and more accessible care more efficiently, as other countries do, and would reduce health-care costs. It would greatly assist American business. Eliminating President Bush’s tax cuts and ensuring that the wealthy pay a greater share will be essential. Business thrived after Presidents George H. W. Bush and Clinton raised taxes. A move toward enhancing Social Security would also aid business. The suggested reforms could permit a doubling of Social Security benefits, thus freeing employers from the burden of providing pensions, a burden that many companies are finding it impossible any longer to shoulder. Expanding Social
Security and Medicare would not only protect America’s people, it would make American business more competitive. See also entitlements; New Deal. Further Reading Baker, Dean, and Mark Weisbrot. Social Security: The Phony Crisis. Chicago: University of Chicago Press, 1999; Ball, Robert M., with Thomas N. Bethell. Straight Talk about Social Security: An Analysis of the Issues in the Current Debate. New York: Century Foundation Press, 1998; Béland, Daniel. Social Security: History and Politics from the New Deal. Lawrence: University Press of Kansas, 2005; Benavie, Arthur. Social Security under the Gun: What Every Informed Citizen Needs to Know about Pension Reform. New York: Palgrave Macmillan, 2003; Eisner, Robert. Social Security: More Not Less. New York: Century Foundation Press, 1998; Gladwell, Malcolm. “The Moral Hazard Myth: The Bad Idea behind Our Failed Health-Care System.” The New Yorker, 29 August 2005, 44–49; Hiltzik, Michael A. The Plot against Social Security: How the Bush Plan Is Endangering Our Financial Future. New York: Harper Collins, 2005; Kingson, Eric R., and James H. Schulz, eds. Social Security in the 21st Century. New York: Oxford University Press, 1997; Lowenstein, Roger. “A Question of Numbers.” The New York Times Magazine, 16 January 2005, 41ff; Skidmore, Max J. Social Security and Its Enemies: The Case for America’s Most Efficient Insurance Program. Boulder, Colo.: Westview Press, 1999. —Max J. Skidmore
supply-side economics Supply-side economics may be the oldest school of modern economics. Its ideas go back to Adam Smith, whose Wealth of Nations (1776) emphasized the growth of the productive capacity (or potential) of a market economy: Expanding the extent of the market allows greater division of labor (specialization), raising labor productivity (goods or services produced per unit of labor). Promoting a nation's prosperity thus involves unleashing markets (laissez-faire policies). Despite antagonism toward government (then run by a king), Smith suggested possible positive roles for it, for example, in education.
Both markets and government are part of supply-side economics. In fact, it is hard to define a supply-side economics “school,” since all economics invokes supply. Many thus restrict the term to refer to views that argue for promoting supply using tax cuts, that is, “new” supply-side economics, associated with economist Arthur Laffer and President Ronald Reagan. Even given this definition, some quibble about what “true” supply-side economics is. Instead of that issue, this essay will focus on the contrast between new supply-side economics and “traditional” supply-side economics embraced by most economists. This involves any government policies aimed at increasing potential or that actually does so in practice. It goes back millennia, including the Roman Republic’s draining of swamps to promote health. Today it centers on the notion that many projects needed to encourage potential growth are public goods or other goods that private enterprise will not produce (unless subsidized) because profitability is so low. These include investment in infrastructure, public health, basic research, homeland security, disaster relief, and environmental cleanup. They might also include privatization of government services and the creation of artificial markets (as for electricity). Such projects can also stimulate demand. If the economy starts with high cyclical unemployment, government investment lowers it, but if the economy is already near (or attains) full employment, it encourages inflation, because it typically pumps up demand much more quickly than supply. Such investment also typically raises the government deficit (all else constant), since few benefits accrue as tax revenues, especially in the short run. Most importantly, such projects (including some involving privatization) may be in “pork barrel” schemes promoting the fortunes of only a minority of politicians, their districts, and financial backers, but not those of the citizenry as a whole. So there may be unwanted distributional impacts, unfairly helping some and hurting others. Thus, traditional supply-side economics requires an active and empowered citizenry to monitor it, along with a clear consensus about national goals. Traditional supply-side economics can involve tax cuts. Standard theory says that any tax entails two types of burdens. For a tax on a specific type of item
(an excise or sales tax), there are first direct burdens on the buyers and sellers. This is what they pay, directly or indirectly, to the tax collector. Second, to the extent that they can avoid exchanging the items, some trades never occur. This excess burden loss reduces the extent of the market and can restrict specialization and hurt productivity. Lowering taxes can thus have general benefits: For example, cutting the payroll tax workers pay could not only increase after-tax pay and (perhaps) after-tax firm revenues, but also might increase employment by abolishing some excess burden. If employment rises, so does output. Most economists see the supply response to changed wages as very low, however, so this effect is minor. Cutting taxes implies forgoing some possible benefits. These include funds for programs that the electorate and/or politicians want; cutting taxes raises the government deficit (all else constant). Second, some taxes, called "sin taxes," are on products such as alcohol and tobacco that the government and/or voters deem undesirable. Thus, the excess burden cost must be compared to the benefits of discouraging "bad" behavior. Regarding the new supply-side economics, this school went beyond these traditional verities. It developed during the 1970s, partly in response to stagflation, when Keynesian and monetarist policy tools were widely seen as failing. Further, inflation pushed many into higher tax brackets, provoking the "tax revolt" (e.g., California's Proposition 13 in 1978). Though some adherents were former Keynesians, new supply-side economics applied several kinds of non-Keynesian economic thought, particularly Austrian economics and new classical economics. Different versions of new supply-side economics were developed and popularized by economists such as Laffer, Robert Mundell, and Norman Ture; journalists such as Jude Wanniski, Paul Craig Roberts, and George Gilder; and politicians such as Ronald Reagan and Jack Kemp. The term itself was coined by Wanniski in 1975, while his The Way the World Works (1978) presented many of its central ideas. Unlike traditional supply-side economics, new supply-side economics did not center on the specifics of government spending. Rather, the focus was on the incentive effects of the tax system. New supply-side
economics argues that tax cuts can boost aggregate supply by reducing excess burden. In addition, many adherents invoke Say’s “law,” that is, that actual and potential output always roughly coincide (because aggregate demand failure is transitory). In theory, however, new supply-side economics could instead bring in Keynesian or monetarist demand theories. Other supply-siders advocate a gold standard in international exchange, likely to impose deflationary demand constraints. Central is the marginal tax rate of the U.S. federal personal income tax. The marginal tax rate is the ratio of the increase of an individual’s tax obligation to the increase in that person’s taxable income. In 2006, a single individual earning up to $7,550 in taxable income pays 10 percent of any increase to the Internal Revenue Service. For taxable incomes above $7,550 and up to $30,650, 15 percent of any rise goes to tax. The marginal tax rate rises from 10 percent to 15 percent, as for any progressive tax but unlike a “flat tax.” The marginal tax rate differs from the percentage actually paid: Someone earning $10,000 pays 10 percent of the first $7,550 and then 15 percent on the next $2,450, implying an average tax rate of 11.2 percent. To new supply-side economics, rising marginal tax rate creates an incentive to avoid raising one’s taxable income, just as excise taxes discourage exchanging of taxed items. Increasing the marginal tax rate can cause people to leave the paid workforce (e.g., those whose spouses work for money income), stretch out vacations, retire at younger ages, and/or avoid overtime work. Some might avoid risky business opportunities while avoiding taxes via accountants and tax shelters or evading them through illegal activities. Cutting marginal tax rates has the opposite effect, but new supply-side economics does not stress tax cuts for the poor. To James Gwartney, an economist who advocates new supply-side economics, tax cuts have larger incentive effects with higher initial marginal tax rates, as for richer folks. A 50 percent tax rate cut for a single individual in the 35 percent bracket (receiving $336,550 or more in 2006) allows him or her to keep $17.50 more of any $100 of extra income earned, but a 50 percent tax rate cut for the 10 percent bracket allows an individual to keep only $5 more of any extra $100.
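A short sketch can reproduce this marginal-versus-average arithmetic. The Python fragment below is illustrative only: it encodes just the two 2006 single-filer brackets quoted above, not the full federal schedule, and the function name is hypothetical.

    # Illustrative only: the two lowest 2006 single-filer brackets cited in this entry.
    BRACKETS = [(7_550, 0.10), (30_650, 0.15)]   # (top of bracket, marginal rate)

    def tax_owed(taxable_income):
        # Hypothetical helper: tax under the partial schedule above.
        tax, lower = 0.0, 0.0
        for top, rate in BRACKETS:
            if taxable_income <= lower:
                break
            tax += rate * (min(taxable_income, top) - lower)
            lower = top
        return round(tax, 2)

    tax = tax_owed(10_000)      # 0.10 * 7,550 + 0.15 * 2,450 = 1,122.50
    print(tax, tax / 10_000)    # an average rate of about 11.2 percent, though the marginal rate is 15 percent

It is the marginal rate, not the average rate, that new supply-side economics treats as the relevant incentive.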
Thus, to Gwartney, tax cuts for the rich mean that “given the huge increase in their incentive to earn, the revenues collected from taxpayers confronting such high marginal rates may actually increase.” In sum, tax cuts, especially for the rich, can encourage work, saving, and investment, which in turn can raise potential, the tax base, and tax revenues. Laffer and other economists hoped that this would raise the total tax base so much that total revenues would rise despite across-the-board tax rate cuts. Gwartney argues, however, this is cancelled out by weak incentive effects of such cuts for low-income people. David Stockman, the Reagan-era budget director, famously admitted that for the Reagan administration, new supply-side economics doctrine “was always a Trojan horse to bring down the top rate.” The “supply-side formula was the only way to get a tax policy that was really trickle down,” he continued. “Trickle-down theory” is simply the view that giving benefits to rich people automatically provides similar improvements to everyone. But as John Kenneth Galbraith once quipped, that theory is “horse and sparrow” economics: “if you feed enough oats to the horse, some will pass through to feed the sparrows.” But some argue that supply-side economics involves more than advocating tax cuts for the rich. Some reject the association with “trickle down.” For some, Reagan was no supply-sider, and no “pure” new supply-side economics policies have ever been implemented. However, it is worth evaluating the success of pro-rich tax cuts, the main policy associated with new supply-side economics. Before becoming Reagan’s vice president, George H. W. Bush dubbed this school “voodoo economics”: He saw it as ineffective, promising a “free lunch” (something for nothing). This represents the opinion of most economists. First, there is a fundamental hole in supply-side economics logic. Reducing the marginal tax rate, no matter how much, does not always increase the amount of work, saving, or investment. New supply-side economics emphasizes the substitution (incentive) effect: If one’s after-tax wage rises by 10 percent, that increases the benefit of labor relative to leisure, encouraging more labor time. But economists’ income effect goes the other way: Receiving more income per hour, one can get the same income
with roughly 9 percent fewer hours. Even better, one can work the same hours as before and get 10 percent more income. The individual gets something for nothing even if society does not. The same income effect counteracts any extra incentive to save or invest due to a falling marginal tax rate. Gwartney points to three cases in which new supply-side economics policies have been applied: during the 1920s under Treasury Secretary Andrew Mellon, after the 1964–65 tax cuts by President Lyndon B. Johnson, and under Reagan. However, other economists have pointed to other reasons, usually demand stimulus, why prosperity and tax revenue increases occurred after these tax cuts. Further, they point to very prosperous periods with high marginal tax rates (the Dwight Eisenhower years of 1953 to 1961, for example) and even after increases in taxes on the rich (the Bill Clinton years of 1993 to 2001). In the latter case, labor productivity (a key supply-side variable) grew at a clearly increased rate. Many credit the Internet and similar high-tech inventions for this surge, but these resulted much more from government programs than from tax cuts. It is true that hours of paid work per year have generally increased between 1980 and 2000, especially for women. This may not be due to supply-side economics policies, however, since it could easily reflect efforts to "make ends meet" in the face of stagnant real family incomes (seen during this period) and the general movement of women into the paid workforce (which preceded supply-side economics). Further, increased work hours are not always desirable, as they can cut into time for leisure, family, and community activities. Did tax cuts result in increased saving? In the aftermath of the Reagan tax cuts, it was instead consumption that rose. The era of supply-side economics in general corresponds to rising consumer spending relative to incomes. On the other hand, fixed investment (supposedly encouraged by supply-side economics tax cuts) generally fell during the Reagan (1981–89) and George H. W. Bush (1989–93) years, but then rose during the anti–supply-side economics Clinton years. In sum, the benefits of the supply-side economics program are mixed at best. The microeconomic analysis above suggests that new supply-side economics tax cuts might cause
inflation. Under Mellon, these did not happen due to gold standard discipline, while monetary policydriven recessions prevented inflation under Reagan. Inflation did rise under Johnson, but that was partly due to increases in military spending. Further, the government deficit rose (as a percentage of gross domestic product) under Johnson, Reagan, and George W. Bush, though much of these increases can be blamed on military buildups. Under Mellon, deficits were avoided by cutting government expenditure. On the other hand, Clinton’s anti–supply-side economics tax hikes led to falling deficits— and actual surpluses. Some advocates of new supply-side economics cite the new classical economist Robert Barro’s work, arguing that deficits have no negative effect: Because they are seen as implying future taxes, that encourages saving. But as noted, the Reagan tax cuts did not encourage saving. More see deficits as imposing a needed discipline on government. Seeing any government programs beyond law enforcement and the military as inherently wasteful, these authors want to “starve the beast”: Tax cuts lead to deficits and thus to spending cuts. As a result of these and other critiques, many see supply-side tax cuts as a “special interest” program, akin to pork barrel spending, in this case benefiting only society’s upper crust. They have definitely had distributional effects, reinforcing the already existing rise in the gap between the incomes of the poor and rich during the last 30 years. However, new supplyside economics does not call for “an active and empowered citizenry” or “a clear consensus about national goals.” Shunning democracy, new supply-side economics instead sees the market as the measure of all things, favoring laissez-faire over all. This survey only skimmed the surface. It is no substitute for a fullscale test of new supply-side economics theories versus mainstream alternatives. Further Reading Gordon, Robert J. Macroeconomics. 10th ed. Boston: Pearson, 2006; Greider, William. “The Education of David Stockman.” The Atlantic, December 1981: 19– 43; Gwartney, James. “Supply-Side Economics.” In The Concise Encyclopedia of Economics. Available online. URL: http://www.econlib.org/library/Enc/ Supply SideEconomics.html. Accessed June 18, 2006;
Roberts, Paul Craig. “What Is Supply-Side Economics?” In Counterpunch. Available online. URL: http:// www.counterpunch .org/ roberts02252006 .html. Accessed June 18, 2006; Wanniski, Jude. The Way the World Works: How Economies Fail—and Succeed. New York: Basic Books, 1978. —James Devine
telecommunication policy Who owns the broadcast airwaves? Why is government regulation of print media different from that of broadcast media? What does it mean to broadcast "in the public interest?" These questions and more lie at the core of telecommunications policy in the United States. This essay will probe these questions in order to explore the relationship between the national government in Washington, D.C., and private broadcasters, a relationship that has been in place for more than 70 years. In the early 20th century, radio burst upon the scene as a powerful device to communicate to thousands of people widely dispersed across the country. Some radio stations could transmit their signal so that it reached audiences many states away, while a number of local broadcasters, often working from the basements of their homes, would build a transmitter that reached audiences just several blocks away. Whether the signal was powerful or weak, it became immediately clear that someone needed to police the radio transmissions, since many of these signals were crossing one another, leading to a garbled transmission at the other end. Thus, those who owned commercial radio stations found they were losing money because listeners were tuning out. The major commercial radio broadcasters began to lobby the government for some form of intervention that would set aside space on the broadcast spectrum for their use only. In exchange, government would control who received a license, and more important, it would control competition. In an early attempt to regulate the telecommunications sector, the Radio Act of 1927 failed to address the needs of commercial broadcasters, prompting the U.S. Congress (with the backing of President Franklin D. Roosevelt) to pass the first massive legislation to deal with telecommunications. The Federal Communications Act of 1934 not only was a major consolidation of the telecommunications industry
but also was notable for how quickly it was passed into law. Roosevelt requested that Congress take action on the issue in January 1934, and by June 1934, it was on his desk for him to sign. The act created a new regulatory agency that would oversee telecommunications policy in the United States. Prior to the act, oversight was shared between the Department of Commerce and the Interstate Commerce Commission (ICC). With this act, the shared oversight was now combined into the Federal Communications Commission (FCC), and it was charged to act in the “public interest,” a term that was not defined by Congress or the president and still varies today. The FCC is an independent regulatory agency, which ideally means that it is designed to be free from congressional or presidential political pressure. It consists of five members, three of whom are appointed by the president and two who are selected by the opposition party. All are confirmed by the Senate and serve five-year terms, at which time they may be nominated again by the president. The president also designates one of the commissioners to serve as the chairperson of the FCC. This person will establish the agenda for the FCC, deciding whether proposed rules or changes to rules will receive one hearing or many. The chair also tends to be the public face of the FCC, often interacting one-on-one with the news media. The FCC, as a regulatory body, has “quasilegislative, executive, and judicial” functions. That means it has the ability to write regulations as well as to implement the laws passed by Congress and signed by the president. It executes those regulations to ensure that all entities under the FCC’s jurisdiction are behaving as they should, and finally it has the judicial power to punish those who violate the rules established by the agency. With the implementation of the Federal Communications Act and establishment of the FCC, the government became the owner of the broadcast spectrum, controlling which entities—individuals or corporations—could “use” a portion of the “public airwaves.” This interaction between broadcasters and the government takes the form of licensing. Potential broadcasters must file for a license with the FCC in order to reserve a place on the broadcast spectrum (radio and television). Once a license is granted, it
must be renewed every eight years. During the process of granting a license (or even renewing one), the FCC reserves time for public comment, which allows individuals to come forward and speak in favor of or against a particular applicant. Once an application is granted, it is rarely revoked. However, when granted a broadcast license, the licensee agrees to abide by FCC rules and regulations. As an independent regulatory agency, the FCC uses the license as a means to regulate the telecommunications sector. Those who receive a license to broadcast agree to abide by certain government regulations. There are hundreds of regulations that deal with all types of telecommunications issues—for instance, whether cable providers may offer telephone service or whether telephone companies may offer high-speed internet access. Rather than address all types of regulations, it will be more helpful to look at the major regulations that interact with the political process. Indecency and obscenity have never been protected forms of “speech” or “press” that deserve First Amendment protections. The FCC defines indecency and obscenity in much the same way as does the U.S. Supreme Court. The difference, however, lies in the definition of “community standards.” In general terms, the courts have ruled that when determining whether something is obscene, you needed to look, in part, at the standards of the particular community, thus leaving it to the local communities to determine for themselves what sorts of speech and press are appropriate. In the case of federal regulations, the FCC gets to determine the meaning of “community standards,” which has meant that indecency and obscenity have varied greatly over time. For instance, there was a period in broadcasting when women on television were prohibited from displaying their navels on the screen. But as standards in the country changed, they often were reflected in a loosening of what was permissible on the television or radio. In the 1970s and 1980s, as more and more communities gained access to cable and then satellite television and radio, the broadcast entities that still broadcast “over the air” urged (or pushed) the FCC to relax standards in order to compete. Cable and satellite technology is largely free from FCC regulations of indecency and obscenity because consumers must
pay a fee for access to the signal and do not rely on the public airwaves to reach consumers. In the 1990s, it appeared that the FCC had completely withdrawn from oversight of indecent material on television or radio. Most notably on radio, the rise of “shock jocks” such as Howard Stern continued to push the envelope of indecency as far as possible, sometimes too far, prompting fines from the FCC. For radio personalities such as Stern, the fines that he did receive from the FCC were a drop in the bucket compared to the money he made by attracting advertising dollars because his show was so successful. In fact, most of the fines he received were displayed with pride as a marketing gimmick in order to attract more people to listen to Stern and encourage others to push the envelope of indecency. In 2004, the upper limit was reached when pop stars Janet Jackson and Justin Timberlake, performing during the halftime at the Super Bowl (which is watched by millions), ended their performance with Ms. Jackson revealing a portion of her breast. Immediately, the switchboards at CBS (the station that broadcast the Super Bowl in 2004) lit up with protests from individuals all over the country. The next day, letters and phone calls poured in to Congress and the FCC demanding action to end the smut. The FCC immediately reacted to the public pressure. Howard Stern was run off of broadcast radio and to satellite radio (he signed a multimillion dollar deal with Sirius Satellite Radio). Clear Channel, which had profited mightily off shock jocks, was now facing enormous and unprecedented fines. For instance, the FCC leveled more money in fines during the first four months of 2004 than it had for the previous 10 years. Congress got into the act as well. In 2006, it raised the amount the FCC could fine by 10-fold over the previous amount. Congress passed and President George W. Bush signed the Broadcast Decency Enforcement Act, which raised the fines from $32,500 per incident to $325,000 per incident. Since the news media are in private hands, access to the news media, particularly during an election, can mean the difference between winning and losing. The government worried that those who owned radio or television stations could influence the outcome of an election by either denying a candidate access to or charging one candidate less than other candidates for advertising. A requirement in receiv-
the license is that broadcasters not favor one candidate running for office over another. Therefore, when a candidate buys time to air a political ad, he or she pays the same price as every other candidate who buys time to run a political ad. But what happens when the news covers the incumbent during an election at staged events? Would the station be forced to provide the challenger the same amount of time for free? This issue did come before the FCC, and Congress amended the Federal Communications Act of 1934 to provide an exemption for news programs and public affairs interviews, such as those on the Sunday talk shows Meet the Press on NBC and Face the Nation on CBS. The idea behind the Fairness Doctrine was to protect those singled out by broadcasters for scorn as well as to force broadcasters to explore the complexities of controversial issues in order to create and sustain a deliberative, democratic public. The first part was tested when the author of a book about former presidential candidate Barry Goldwater was singled out for scorn by a conservative minister in a radio broadcast. The minister believed that the book had cost Goldwater the election in 1964, and he used the airwaves to vent his anger. The author of the book asked the station for an equal amount of time to reply and was denied. In the U.S. Supreme Court case Red Lion Broadcasting Co. v. FCC (1969), the Supreme Court ruled that the FCC and Congress have the authority to order a privately owned broadcast station to provide an equal amount of time for rebuttal to an individual the station has singled out for scorn. There is no similar provision for the print media if it singles out an individual on its editorial page, for instance. The second part arose from a complaint to the FCC in the 1940s against the owner of three radio stations who ordered his news staff to "slant, distort, and falsify" news against politicians he did not like. When his listeners complained, he responded that he could do whatever he wished because he owned the stations. The FCC responded with an order requiring all licensees that took up political, social, cultural, or religious controversies to explore all sides of those issues and to devote a fair and balanced amount of time to them. In 1987, the Reagan administration generated a great deal of controversy when it pushed the FCC
to repeal this part of the Fairness Doctrine. The administration argued, with considerable evidence, that this part of the Fairness Doctrine actually neutered most broadcast stations because, rather than spend the time, money, and effort to cover all sides of controversial issues, stations simply avoided covering anything complex or controversial. Thus, rather than creating an informed citizenry, the doctrine was actually hurting it by producing milquetoast coverage that informed no one. This decision has remained controversial largely because of the effect it had on talk radio. Once this portion was removed, political talk radio proliferated throughout the country, catapulting conservative personalities such as Rush Limbaugh and G. Gordon Liddy into the center of many national debates. Conservatives and conservative ideas have flourished on talk radio since the 1987 action, with great effect on politics. For instance, when Newt Gingrich in 1995 became the first Republican Speaker of the House in 40 years, he credited political talk radio with helping him spread the message of conservative Republicans throughout the country. The Federal Communications Act continues to serve as the guiding force behind telecommunication policy in the United States. The act was updated from time to time by amendment to meet changing technological needs. In the 1990s, however, it became apparent that the law would need a major overhaul to meet dramatic changes in the ways in which Americans communicated and in the ways in which Americans listened to radio or watched television. In addition to these changes, there were also a variety of negative externalities from new technology such as the Internet that affected existing laws. For instance, the ease with which pornography could be viewed online ran up against state and local laws governing obscenity. To meet these changes, the Republican Congress and the Clinton administration worked on and finally passed the largest set of changes to telecommunication policy since the Federal Communications Act of 1934. This amendment to the 1934 law was known as the Telecommunications Act of 1996 (hereafter Telecommunications Act). Unlike the Federal Communications Act of 1934, the Telecommunications Act took nearly a year to
pass. It was introduced in Congress in March 1995 and signed into law by Clinton in February 1996. Despite the fact that this law represented the most significant overhaul of telecommunication policy in more than 60 years, most Americans saw, heard, or read very little about the changes being made, mostly to the benefit of the telecommunications industry. Instead, most Americans, along with most of the mainstream news media in the United States, were distracted by a peripheral "dog and pony" show that in the end would matter very little to the daily lives of Americans—and certainly would matter very little compared to the massive centralization in the telecommunications industry that would create "media giants" at the expense of competition. During the long debate over the Telecommunications Act, the media fixated on two particular areas that generated a great deal of discussion and conflict, centered mostly on the issues of indecency on television and obscenity on the Internet. The first, indecency on television, largely stemmed from parents who felt that their children were exposed to too much programming that was meant for adults. Further, because many children of single or working parents ("latchkey" kids) were left alone at home after school, parents cried out for some method of controlling what their children watched until an adult could get home and monitor it. In response, Congress added a provision to the law that mandated that a programmable computer chip, called the "V-chip," be installed in all new television sets manufactured after January 2000. The V-chip allowed parents to lock out programming they found unsuitable for their children. The television industry also adopted an industry-backed rating system that flagged programming containing violence or sexual situations and shows suitable only for mature audiences. The second, blocking obscenity on the Internet, received the lion's share of the coverage related to the Telecommunications Act. Congress, along with the public, faced nightly newscasts of stories involving nudity, pornography, and obscenity on the Internet; more troublesome, all of this indecency was at the fingertips of children. There were stories in which children, innocently researching information for school projects, inadvertently put in either a wrong name or a wrong suffix (.com instead of .gov) and
were taken to Web pages that featured hardcore pornography. Thus, the new Republican majority decided to do something to combat the easily accessible pornography on the Internet. The Communications Decency Act (hereafter the CDA) had been a stand-alone bill that was folded into the Telecommunications Act to ensure passage. The CDA made it a crime to "knowingly transmit by any telecommunications device any obscene or indecent message to any recipient under 18 years of age." It further made it a crime to transmit anything that was patently offensive to anyone under 18 or to display such material in a way that those under 18 years of age would be able to view it. Violations of the CDA carried a possible jail term of two years. All the time the Telecommunications Act was traveling through the legislative process, the focus of the press and even of various public interest groups was on this tiny provision, widely thought to be unconstitutional if it became law. Web pages carried a little blue ribbon displaying the owners' support for individual rights against government intrusion into what an individual reads or views in the privacy of his or her home. Once Clinton signed the Telecommunications Act into law, the CDA was immediately challenged in federal district court in Philadelphia. In the case Reno v. ACLU (1997), the United States Supreme Court found that the CDA was unconstitutionally broad. In attempting to limit obscene materials, the law also criminalized perfectly legal communication. The Court nonetheless left room for Congress to write a narrower law (and when Congress did so with the Child Online Protection Act, that successor statute was challenged in Ashcroft v. ACLU [2002, 2004] and ultimately blocked by the courts as well). After the Supreme Court rendered its decision, a cry of victory went out among the variety of groups that had been fighting the government over the CDA, and these groups were widely covered by the press. What did not receive coverage from the press was the massive centralization in the telecommunications industry, not to mention the handover of the new digital spectrum for free. The new act would allow single companies to own radio and television stations as well as newspapers in some cities. An agreement worked out between Congress and the Clinton administration was to apply the relaxed ownership
regulations to radio as a test case to determine whether competition would increase. The answer was almost immediately clear. Radio stations all across the country were gobbled up by large regional and national corporations. One company, Clear Channel, went from a small regional player in Texas that owned 43 radio stations prior to passage of the act to a national giant, owning 1,200 stations by 2000. Together, Clear Channel and Viacom, another large telecommunications giant, came to control nearly half of what the country listens to on the radio. The number of dominant communications companies in the United States also rapidly declined. In the early 1980s, when communications scholar Ben Bagdikian wrote the first edition of Media Monopoly, he warned of the dangers to democracy when roughly 50 large corporations owned the means of communication and information. After the passage of the Telecommunications Act, that number dropped to just five. When communications and information are controlled by just a few large corporations, there are potential dangers to the needs of a democracy. Among those dangers is the homogenization of information, which creates a "sameness" wherever one goes and dries up the uniqueness of local culture. Further, homogenization leaves minority communities stigmatized, as they are either never represented in this communications monopoly or their representation is skewed or negative. For instance, scholars who study how local television news treats African Americans have found that, night after night, crime on the local news has a black face. In 2003, when the FCC announced that it would move to deregulate ownership of television, there was a massive outpouring of opposition from an American public disgusted by what had happened to radio (the loss of alternative stations and the rise of profit-driven shock jocks). Prior to the June 2003 decision (a process in which the FCC attempted to limit public input by holding just one hearing), a record 750,000 letters of protest flooded the commission. Despite these numbers, the FCC voted to deregulate the remaining telecommunications sector. But before the regulations could be implemented, those who had sent letters to the
FCC also began to pressure Congress, which moved to overturn the FCC decision and restore the pre-June rules; this congressional pressure, along with a lawsuit in the federal courts, forced the FCC to return to the drawing board and try a second time. Further Reading Bagdikian, Ben. The New Media Monopoly. Boston: Beacon Press, 2004; McChesney, Robert. The Problem of the Media: U.S. Communication Politics in the Twenty-First Century. New York: Monthly Review Press, 2004. —Christopher S. Kelley
transportation policy The transportation system in the United States is one of the most extensive and complex in the world. It consists of a wide variety of transportation modes that currently are used to move people and commodities primarily within and between the metropolitan areas in the United States, where about 85 percent of the population lives. This enormous and complicated transportation system is the product largely of governmental policies and programs that have been the outcomes of American public policy making processes that are based on four key elements: federalism, distributive politics, the principle that taxes levied on transportation usage should be spent exclusively on that function (i.e., user fees), and controversy dealing with the desirable degree of private sector versus public sector funding and operation of the American transportation system. The complexity of the American transportation system has been to a large extent due to the principle of federalism, which has resulted in all levels of government—national, state, regional, and local— becoming involved in transportation spending and decision making. At the national level, the U.S. Department of Transportation, which was created as a cabinet-level executive department in 1966 to assume the functions that had been under the authority of the undersecretary for transportation in the U.S. Commerce Department, is the primary agency in the federal government with the responsibility for shaping and administering transportation policies and programs involving all trans-
portation modes except water. For the nation's inland waterways, the U.S. Army Corps of Engineers is in charge. It operates locks and dams and maintains navigation channels throughout the nation. The U.S. Department of Transportation consists of the office of the secretary and 11 individual operating administrations: the Federal Aviation Administration, the Federal Highway Administration, the Federal Motor Carrier Safety Administration, the Federal Railroad Administration, the National Highway Traffic Safety Administration, the Federal Transit Administration, the Maritime Administration, the Saint Lawrence Seaway Development Corporation, the Research and Special Programs Administration, the Bureau of Transportation Statistics, and the Surface Transportation Board. The Homeland Security Act of 2002 authorized the establishment of the Department of Homeland Security, which in 2003 assumed the management of the U.S. Coast Guard and the Transportation Security Administration, formerly Department of Transportation operating administrations. The administrative officials in the Department of Transportation work closely with their counterparts at both the state and local levels, including officials in each state department of transportation as well as administrative officials of transportation-related agencies in regional, county, municipal, and special district governments. The numerous interactions between the Department of Transportation administrators and the state and local transportation administrators largely involve issues related to federal categorical grants (which provide financial assistance to state and local governments for planning, building, and repairing transportation infrastructure) and administrative regulations that state and local governments must follow if they are to be the recipients of these federal grants. American transportation policy making also is complicated by its distributive rather than redistributive nature at the state and national levels. Distributive domestic policy making is considered low-level politics in that it is seldom of an ideological nature, largely involves relatively low levels of conflict, does not transfer resources from one segment of society to another, and seldom engages the focused attention of public opinion leaders in the mass media or of the highest level of state and
national governmental officials (i.e., the president, state governors, top legislative leaders, or the secretaries of cabinet-level departments). In the development of transportation policies at the state and national levels, the fundamentally distributive nature of the political process features a large amount of lobbying on the part of a wide range of interest groups directed at both low-level government officials on legislative subcommittees and at low- to mid-level bureaucrats in transportation administrative units. The public policy outcomes of these lobbying efforts are usually compromises and largely incremental adjustments to ongoing transportation policies rather than fundamental policy changes based on systematic, comprehensive planning. To a considerable extent, state and national transportation policy can be thought of as "pork-barrel politics" in that it is largely the accumulation of many
individual monetary grants provided to lower levels of government and subsequent payments often made to private sector contractors to build and repair highways and mass transit systems. Unlike distributive transportation policy making, redistributive domestic policy making in the United States (often involving the formulation of social welfare and environmental policies) is usually thought of as high-level politics in that it is typically characterized by high levels of ideologically based conflict; involves the transfer of resources from one group of people to another; holds the attention of major public opinion leaders in the mass media, high-level government officials, and top executives of large corporations and nonprofit organizations; and frequently is heavily influenced by the comprehensive, long-range analytical research efforts of professional planners.
Exactly how these four elements of the policy making process (federalism, distributive politics, the reliance on user fees, and disagreements concerning the proper role of the private sector versus the public sector) have affected the transportation system in the United States can be seen through an examination of the nature of the past and present American air travel, railroad, highway, and mass transit public policies and programs. Air travel is becoming increasingly important for moving both people and commodities in the United States. Although most commodities are transported between metropolitan areas either by truck or railroad, air travel is the most frequently used mode of travel by people for nonauto trips between metropolitan areas. Each day in the United States, approximately 2 million people take more than 20,000 commercial flights. Also, overnight shipping of small packages increasingly has come to play a central role in the American economy. Metropolitan areas depend largely on federal grants and locally generated revenues to build, expand, and operate their airports. The federal government taxes every passenger ticket, and these funds go into the airport and airways trust fund, which is a dedicated source of funds. This revenue source, exceeding $6 billion annually, includes approximately $4 billion that is used to operate the nation’s air traffic control system and $2 billion for federal categorical grants for which local governments actively compete. These federal grants must be used to help fund airport improvements. Federal law also permits local governments to levy a head tax on each passenger and requires that these funds be used for airport programs. In addition to federal assistance, metropolitan airports generate a great deal of money locally. Airlines pay local governments to land their planes, rent gates, and lease office space at metropolitan airports. Passengers pay to park their cars and to buy food and other items at the airport. Money is also collected from taxi, shuttle, and car rental companies that use airport facilities. These locally generated revenues usually are sufficient to cover all the airport’s operating costs and costs related to airport infrastructure improvement and still provide enough surplus money that can be transferred to the local government’s general fund to pay for other
public services. Because airports are big money makers and usually thought to be essential for economic growth in a metropolitan area, many local officials, business leaders, and other local citizens promote airport improvements and expansion. However, while the economic benefits of an airport expansion tend to be widely dispersed throughout a metropolitan area, the resulting quality-of-life costs associated with noise, air pollution, and highway congestion tend to be highly concentrated in the neighborhoods immediately surrounding an airport. Consequently, most proposals to significantly expand a metropolitan airport produce intense local political controversies. The federal government’s influence on air travel in the United States has been based not only on its provision of categorical grants to local governments for airport improvements but also on its exercise of regulatory authority. The federal government began issuing administrative regulations involving air travel in 1926, when President Calvin Coolidge signed into law the Air Commerce Act. Over time, a large number of economic regulations were mandated primarily by the Civil Aeronautics Board (CAB), and air travel safety regulations were mandated primarily by the Federal Aviation Administration (FAA). The numerous economic regulations included antitrust oversight, consumer protection, airline economic fitness, the awarding of routes to airline companies, and requirements concerning the prices that airlines could charge. Because these economic regulations were widely viewed as limiting the entry of start-up airlines into lucrative air travel markets and reducing economic competition, President Jimmy Carter signed into law the 1977 Air Cargo Deregulation Act and the 1978 Airline Deregulation Act. These laws phased out the Civil Aeronautics Board’s authority over fares, routes, and airline mergers. Also, pursuant to the 1978 act, the Civil Aeronautics Board itself ceased operations and shut down in 1984. In contrast to the widespread opposition aimed at the federal government’s economic regulatory activity, civilian airline interest groups have tended to mobilize considerable support for the FAA’s safety regulations, in that civilian airline companies tend to believe that their economic growth is based on the American public’s perception that it is safe to fly. Therefore, currently
the federal government’s air travel regulations largely involve safety issues. On September 11, 2001, terrorists commandeered four commercial passenger jet airliners and crashed two of them into the World Trade Center in New York City and one of them into the Pentagon in Arlington, Virginia, and the fourth airliner crashed in a field in rural Pennsylvania. Approximately 3,000 people were killed. Responding to this new form of terrorism, Congress passed and President George W. Bush signed into law in November 2001 the Aviation and Transportation Safety Act. This legislation created the Transportation Security Administration (TSA), which was charged with increasing security at the nation’s airports and other transportation venues. In 2002, TSA was incorporated into the new cabinet-level Department of Homeland Security. Prior to the 20th century, state and federal governmental efforts to develop a national transportation system focused largely on canals and railroads. One of the most important elements of the nation’s transportation infrastructure during the 19th century was the Erie Canal, which opened in 1825. New York was responsible for the planning, financing, construction, and operation of the Erie Canal. Widely viewed as the most important engineering marvel of its day in the United States, the original Erie Canal extended 363 miles from the Hudson River to Lake Erie and included 83 locks, with a rise in water level of 568 feet. The Erie Canal was an immediate financial and commercial success. The toll revenue provided more than enough to pay off the canal bonds held by private investors, and the cost of transporting goods from the Midwest to the Atlantic Ocean harbors was greatly reduced. The Erie Canal was instrumental in uniting the United States from east to west and in opening European markets to American grains. The first railroad in the United States was the Granite Railway Co. Three miles in length and pulled by horses, it began operating in 1826 and was used to carry granite blocks in Quincy, Massachusetts. The first steam locomotive was put into service by the Baltimore and Ohio Railroads in 1830. It was capable of hauling 36 passengers at 18 m.p.h. Although some state governments provided subsidies, private corporations built, operated, and owned the railways
in the United States until the 1970s. Despite resistance by state governments and railroad companies, the federal government first began to play a major role in the development of the national railroad network in 1862. In that year, President Abraham Lincoln signed into law the Pacific Railway Act, subsidizing the construction of the Transcontinental Railroad, which was finished in 1869 and owned and operated by private corporations. The federal government significantly increased its oversight of the evolving national transportation system in 1887, when President Grover Cleveland signed into law the Interstate Commerce Act, creating the Interstate Commerce Commission to regulate the rates that railroads could charge for shipping freight, railroad service schedules, and railroad mergers. In the 1950s and the 1960s, the privately owned railroads in the United States decided that transporting freight was more lucrative than carrying passengers, and they began to cut back passenger service. So that railroad passenger service between the major metropolitan areas would not be eliminated, Congress passed the Rail Passenger Service Act in 1970, creating a semipublic corporation: Amtrak (officially the National Railroad Passenger Corporation). Amtrak took over nearly all intercity passenger service in the nation; with 182 trains and approximately 23,000 employees, it annually transports more than 20 million passengers between 300 American cities. Initially, Amtrak's operations were intended to become self-supporting, but this has never happened. Consequently, the federal government provides approximately $600 million in annual subsidies. The state governments also make some contributions. In return, state governments participate in deciding which routes will receive what levels of passenger service. Because Amtrak has always run significant budget deficits and requires substantial governmental subsidies to continue operating, it has been criticized by opponents as a waste of taxpayers' money. However, supporters of Amtrak argue that focusing on its inability to be self-supporting is inappropriate in that both highways and air travel in the United States are heavily subsidized by the federal government. Criticism of Amtrak, its supporters contend, is a reflection of the disjointed nature of transportation public policy in the United States, which is biased toward auto and air travel and biased against travel by rail. They argue that travel by train should be seen as one of the
essential components of a balanced, comprehensive national transportation policy. By the early 1970s, six railroads in the Northeast and Midwest that were heavy freight haulers had entered bankruptcy. Among the reasons for their economic difficulties were competition from trucks (which were indirectly subsidized through their use of the federally funded Interstate Highway System) and governmental economic regulations that made it difficult for railroads to respond effectively to changing market conditions. Due to declining freight revenues, the six railroads deferred maintenance and allowed their tracks and equipment to deteriorate. As a result, businesses increasingly turned to trucking companies for more cost-effective transportation of freight. Recognizing the national importance of these six railroads, the federal government created the Consolidated Rail Corporation (ConRail) in 1974 and began appropriating the necessary funds to rebuild track and to purchase new locomotives and freight cars or repair existing ones. Also, federal government economic regulations were loosened beginning in 1980, giving railroads more flexibility to compete with trucks. By 1981, ConRail was making enough income that it no longer required federal funding. In 1987, the federal government sold its ownership interest in ConRail through a public stock offering for $1.9 billion, and the Northeast-Midwest rail freight system became a private sector, for-profit corporation. Throughout the history of the American transportation system, there has been considerable controversy focusing on the most desirable mix of private sector and public sector control and operation of the several different transportation modes. As was the case with the airlines, by the late 1970s, pressure grew for deregulation of interstate railroad freight transportation. Until the 1950s, the railroads had been the dominant transportation mode for moving freight between the nation's cities. But the dramatic growth of the trucking industry, made possible by the development of the Interstate Highway System after 1956, resulted in a significant decline in railroad profitability as the railroads' share of intercity freight hauling eroded. The negative effects of the federal government's economic regulation of railroads, which had begun with the Interstate Commerce Act of 1887, were of increasing concern to many business groups and policy makers. The result was the enactment of the Staggers Rail Act
of 1980, removing many of the federal government's controls and allowing freight-hauling railroads greater flexibility to change their practices by relying more on market forces. After the implementation of the Staggers Rail Act, railroad freight hauling became profitable again in that railroads were able to respond more effectively to their competition by adjusting rates, eliminating unprofitable portions of track, and integrating and consolidating rail networks. Concern by federal policy makers that railroads might charge some shippers exorbitant rates eventually resulted in the creation of the Surface Transportation Board to arbitrate rate disputes. Responsibility for the enormous, complex system of roads and highways in the United States is divided among federal, state, and local governments, along with the involvement of many competing interest groups, including motorists, builders, truckers, shipping companies, automobile manufacturers, taxpayers, and others. The planning, building, and financing of major roadway and highway expansions at the local level are often the subject of vigorous debate and conflict among local governmental officials and highly mobilized interest groups (largely because of the significant impacts of such programs on local land use patterns, economic development, and quality-of-life matters). However, at the state and federal levels, highway policy making is best characterized as generally featuring low-level, distributive politics: Usually conflict levels are relatively low, mass media coverage is minimal, and the dominant actors tend to be professional transportation planners, low-level bureaucrats, members of legislative subcommittees, and interest group lobbyists. At the end of their negotiations, these participants tend to reach carefully crafted agreements on highway policies that at least partially satisfy all of them. High-level state and federal governmental officials seldom become closely involved in the formulation of highway policies, and there is usually little public awareness of the commitment to a new highway program or understanding of highway financing. Although legislative formulas usually are used to help determine which state, congressional district, or local jurisdiction will receive either a federal or state categorical grant to fund a highway project, the selection of the precise location for a highway improvement project
and the awarding of a construction contract often involve partisan political considerations associated with "pork-barrel-style" politics. Two other key elements of American transportation policy making are also apparent in the design and implementation of state and federal highway policies: the heavy reliance on user fees that are dedicated (or earmarked) to fund highway projects, as well as both partnerships and tension between the private sector and public sector organizations. In 1900, almost all the roads in the United States were local, and the rudimentary intercity roads often were toll roads owned and operated by either a state government or a private sector entity. Neighborhood streets and most county roads have been the responsibility of local governments for the entire history of the nation. Although the vast majority of the American transportation system's lane miles are local streets and county roads, state highways and Interstate highway networks currently carry a majority of the traffic volume. Despite their relatively low traffic volumes, local streets are extremely important in the commercial and private lives of a local jurisdiction's residents. Local streets provide commercial, private, and emergency vehicles access to individual property parcels, and they also serve as the underground conduits for electrical wires and water, gas, and sewer pipes. Because local streets provide access to local properties whose owners are the primary beneficiaries, local governments in the United States always have largely financed the construction, improvement, and maintenance of local streets by levying taxes on parcels of property. Over time, first private sector toll roads and later highways operated by state governments began to serve a complementary transportation mission to the streets financed and maintained by local governments. Before the 20th century, many of the intercity roads and rural roads in the United States were toll roads. During the 1700s, many individuals in rural areas would add gravel to stretches of nearby roads and collect fees from people who used them. Over time, companies formed to develop and maintain larger stretches of rural roadways, and they collected tolls from users to finance such ventures. The first major toll roads in the United States were the Philadelphia and Lancaster Turnpike, built in the 1790s,
and the Great Western Turnpike, started in 1799, which stretched across much of upstate New York. The term turnpike originated because long sticks (or pikes) blocked passage of vehicles until a fee was paid at the toll booth, and the pikes then would be turned toward the toll booth so that the vehicle could pass. Toll roads in New England are still usually called turnpikes. In the mid- to late 19th century, numerous privately owned toll roads also were developed in the Midwest and the West, particularly in California and Nevada. During the first decades of the 20th century, most privately owned toll roads were taken over by the state governments, which sometimes established quasi-public authorities to build and operate toll roads. With the onset of mass production of the automobile and the increased use of the automobile for travel, both within suburbanizing metropolitan areas and between metropolitan areas, faster and higher-capacity highways were needed. In the 1920s, limited-access highways, with dual-lane roadways for traffic flowing in each direction and access points limited to grade-separated interchanges, began to be developed. By the 1950s, there were limited-access highways in many of the larger metropolitan areas of the nation, and most of them were toll roads operated by state governments. In addition to the limited-access toll roads that are funded by a dedicated revenue stream employing one type of user fee (the toll charge), state governments also have developed a much more extensive network of intercity highways that are financed largely by a second type of user fee (the gasoline tax). The state highway systems, designed for long-distance trips, higher vehicle counts, and high-speed travel, augment the enormous but relatively lightly traveled networks of local streets and county roads in the United States. While accessibility to local property parcels is the focus of the streets and rural roads controlled by local governments, travel on state highways focuses on mobility, and the primary beneficiaries are not adjacent property owners, but instead the users of the highway system—motorists, truckers, and shippers. Because the need for and costs of state highways vary largely on the basis of traffic levels, it seemed appropriate to pay for the construction and maintenance costs of the highways by charging users rather than using either property taxes or taking
ing money from the state government’s general fund. In 1918, Oregon was the first state to adopt the motor fuel tax as an alternative to the toll charge user fee. Because a fuel tax is a user fee, state governments earmarked them exclusively for transportation expenditures that primarily involved highways. This means that state transportation programs do not have to compete for appropriations with other public programs. Prior to the use of the automobile, the federal government played a very small role in highway transportation and instead focused largely on the development of canals and railroads. Despite considerable resistance by state governments to relinquish any of their authority over roadways to the federal government, in 1806, President Thomas Jefferson signed into law the first federal highway program, the National Road, which became the main route west over the Allegheny Mountains into the fertile Ohio River Valley. When completed, it stretched from Maryland to Illinois. Relying on federal funding, the National Road established an important precedent, giving the federal government the constitutional authority to provide financial support for interstate highways. In 1916, President Woodrow Wilson signed into law the Federal-Aid-Road Act, initiating the federal government’s first federal aid highway program and providing states with categorical, matching grants for the construction of highways. These federal grants were aimed at helping states construct new highways to improve mail service and for the transportation of agricultural commodities to cities. This law established the basic cooperative relationship between the federal government and the state governments for expanding the nation’s highway system that has existed from 1916 until today. Aimed at creating jobs for large numbers of the unemployed during the Great Depression of the 1930s, President Franklin D. Roosevelt launched public works programs that allocated substantial increases in federal financial assistance for the construction of highways and bridges throughout the nation. Following a sharp decline in federal aid for highways during World War II, the federal government’s funding for highway construction escalated dramatically in the 1950s in order to meet the need for many new and improved highways to accommo-
date postwar economic growth, to rapidly evacuate urban populations if atomic weapons were launched at American cities, and to more effectively move defense-related equipment and personnel throughout the nation during a future war. President Dwight D. Eisenhower signed into law the Federal-Aid Highway Act of 1956 and the Highway Revenue Act of 1956, authorizing the development of the National System of Interstate and Defense Highways and creating the Federal Highway Trust Fund. To finance the construction of an intercity highway program national in scale, the federal government chose a strategy that had been used by the state governments: the reliance on a user fee (the fuel tax) to generate a dedicated stream of revenues to be used for highway construction. The federal government’s creation of the nation’s Interstate Highway System was of central importance in the development of the American transportation system. Once the federal government dedicated itself to building an interstate highway network, the focus of the federal, state, and local policymakers for approximately the next 50 years remained directed at the completion of a vast network of freeways (also called expressways) that would have the capacity to rapidly move large numbers of automobiles and trucks. Relatively little attention was aimed at developing a more balanced, comprehensive transportation system using other modes to move people and freight. Confronted with the willingness of the federal government to pay for 90 percent of the construction costs of the Interstate Highway System (and the states providing only a 10 percent match), state and city governments primarily have chosen either to construct new expressways or improve existing ones to meet federal standards rather than allocating their resources for the development of mass transit infrastructure within metropolitan areas or for high-speed trains for transportation between metropolitan areas. Initially, the federal matching grants could be used only for the development of expressways between metropolitan areas, but once these federal funds became available for building an expressway within a city that would serve as a link in the Interstate Highway System, most local jurisdictions chose to construct new expressways rather than mass transit.
Currently, the Federal Highway Trust Fund, created in 1956, spends about $30 billion per year, and the states allocate billions more for construction and improvement of the Interstate Highway System. There continues to be strong political support in the United States for the periodic reauthorizations of federal legislation funding the Interstate Highway System. In addition to state and city government officials, who are eager to acquire these federal grants, there is extensive interest group support for building highways using federal money. The millions of drivers, automobile and tire manufacturers, oil companies, road construction contractors, and trucking companies all lobby the federal and state governments to spend generously on highways. Presently, the distribution of almost all the goods and services in the United States uses sections of the Interstate Highway System's 46,837 miles of roadways, and approximately one-third of the total miles driven by all vehicles in the United States are traveled on the Interstate Highway System. Although the Interstate Highway System has been financed largely by the federal government, it is the states that build, own, maintain, and operate this enormous network of highways. However, the federal government establishes standards such as pavement depth, the width of lanes, and signage design, and it also coordinates the planning of the system. Because of the dominant role that the federal government has in financing the Interstate Highway System, it has acquired considerable influence in getting state governments to enact legislation that sometimes is only indirectly related to the operation of the Interstate Highways and to the federal government's authority to regulate interstate commerce based on the commerce clause of the U.S. Constitution. By threatening to withhold federal matching grants for highways, the federal government has gotten states to adopt national speed limits that stayed in effect between 1974 and 1995, to raise the legal drinking age to 21, to disclose the identities of sex offenders, and to lower the legal intoxication level to 0.08 percent blood alcohol. This type of federal pressure directed at state governments has been controversial. Critics argue that the federal government's threats to withhold federal highway funds unless states enact legislation that is only slightly connected to the Interstate Highways significantly
alter the balance of authority between the states and the federal government by infringing on states' rights and expanding the authority of the federal government. Supporters, however, maintain that this strategy is effective in getting states to pass much-needed uniform legislation dealing with important domestic public policy issues. Because a state government retains the option of forgoing federal matching grants for highways rather than passing the legislation the federal government requires, the U.S. Supreme Court has upheld this type of conditional funding as a permissible exercise of Congress's spending power. Although over the years vehicular transportation using the nation's streets and highways has become the top transportation priority of the federal and state governments, in recent years there has been growing attention directed toward expanding and improving the mass transit infrastructure in the large metropolitan areas of the United States. As was the case historically with the ownership and operation of the nation's toll roads and railroads, the early mass transit systems (the original subways, streetcars, and buses) usually were owned by private sector companies. But as the populations in American cities became less dense and as automobiles became increasingly popular, mass transit companies could no longer attract enough riders to make a profit. By the 1960s, most of the large-city mass transit infrastructure had been abandoned by the private sector and taken over by local government jurisdictions (often mass transit special districts or authorities). Because the operating costs could not be met by relying only on the revenues from fares paid by the riders, and also because local governments faced considerable difficulty raising enough locally generated tax revenues to adequately subsidize mass transit, many officials of large American cities began to lobby federal and state government officials to pay for part of the costs of public mass transit by using the dedicated funds acquired through federal and state motor fuel taxes. A coalition of urban mass transit backers was able to convince Congress to pass the Urban Mass Transportation Act of 1964, which for the first time provided some federal assistance in the form of matching grants to states and cities for the construction of urban
public or private rail projects. The Urban Mass Transportation Administration (now the Federal Transit Administration) also was created. In 1974, Congress extended federal assistance to cover operating costs as well as construction costs of urban mass transit systems. It was not until 1982, however, that Congress changed the funding formula of the earmarked gasoline tax of the Highway Trust Fund to allow these funds to be used to provide federal financial support for mass transit projects as well as highway construction. Although state governments with relatively small urban populations and relatively large rural populations, along with the highly influential coalition of highway interest groups, opposed the use of motor fuel taxes to subsidize urban mass transit systems, a new political movement that began to mobilize in the early 1970s was by the 1980s sufficiently strong to convince Congress that some of the gasoline tax revenues should be made available for urban mass transit. This new political movement was fueled by those who believed that the development of more mass transit in urban areas would help reduce Americans' reliance on the automobile and thereby help abate traffic congestion and air pollution. People also promoted mass transit instead of the expansion of the nation's network of urban expressways in order to save green space and reduce the loss of housing units and historic buildings in cities. Downtown business groups supported more federal funding for mass transit because they contended that expressways facilitated flight to the suburbs and the growth of suburban shopping malls. Many professional urban planners and transportation planners began to promote the development of mass transit as a means to reduce urban sprawl, regenerate blighted neighborhoods of large cities, and provide mobility to those who cannot afford or are physically incapable of using an automobile. This new political movement also included people who maintained that spending more federal funds for mass transit rather than for highways would eventually mean that the nation would be able to reduce its dependence on foreign oil. Currently, the federal government provides approximately $6 billion a year in matching grants of up to 80 percent to state and local governments for mass transit. Most of the federal money must be spent for capital expenditures (buying new equipment or
building new rail lines) for a wide range of mass transportation services, including city and suburban bus and paratransit, cable cars, subways, heritage streetcar systems, elevated rapid transit, and light rail and commuter rail services. Federal allocations, however, have not increased enough to meet many of the requests for grants by metropolitan areas intending to expand their public transit services. The competition among metropolitan areas for the receipt of federal matching grants to develop light rail lines has been particularly intense in recent years. Federal funding for air, water, railroad, highway, and mass transit is allocated by Congress for several years at a time. These federal transportation funds presently are allotted to state and local governments using three approaches. First, most of the federal grants are allocated to state and local governments for different categories of transportation services based on relatively complex and inflexible legislative formulas. Second, some federal transportation grants are earmarked for specifically designated highway, rail, and bus projects favored by members of Congress for their districts (a decision-making approach involving distributive politics and often criticized as "pork-barrel spending"). Third, congressional reauthorizations of federal transportation funding in 1991, 1998, and 2005 have given more flexibility to state governments and local-level metropolitan planning organizations (MPOs) to determine precisely how they will allocate a portion of their federal transportation funds among a mix of different transportation programs. This third approach was introduced in 1991 with the enactment of the Intermodal Surface Transportation Efficiency Act (ISTEA). Once-fierce competitors—trucking firms, railroads, airlines, and barges using the nation's waterways—increasingly have been cooperating in the movement of freight. In response to this intermodal trend, Congress (through enactment of the 1991 ISTEA as well as its subsequent reauthorizations in 1998 and 2005) has provided metropolitan areas with the flexibility to use a portion of their federal funds to redesign their transportation infrastructure to make for more effective air-water-rail-truck connections. The 1991 ISTEA and its reauthorizations also gave metropolitan regions more flexibility to shift some of their federal funds for highways to mass transit (or vice versa) and to use these
funds for other types of special transportation programs aimed at reducing traffic congestion and air pollution. As a condition for receiving federal transportation funding, local governments since 1991, acting through their MPOs, must develop more comprehensive and more balanced regional transportation plans that provide ample opportunities for public participation and take into account land use and environmental factors. Although local governments in metropolitan areas currently possess increased control over the use of a portion of their federal transportation funds, and although the Federal Highway Trust Fund no longer is used to exclusively fund highways, the proportion of federal funding that is spent for mass transit presently is equal to only about 25 percent of the amount of federal money allocated for highway programs. And while a diverse set of interest groups has been politically mobilized since the early 1970s presenting competing requests for federal funding to be used to finance a wide range of transportation modes, the highway interest group still appears to have formidable strength. The consequence, according to many critics of American transportation policy, is that the United States continues to rely too heavily on automobiles and trucks using a vast network of highways to move people and freight, and it remains without an adequately balanced, comprehensive transportation system. At the beginning of the 21st century, the United States is faced with a complex and daunting set of transportation-related problems and challenges. The volume of traffic on the nation’s highways continues to increase rapidly, resulting in mounting traffic congestion. Will federal, state, and local governments respond by expanding the highway network? Or will they design and implement new transportation policies that use more intelligent, computerized technologies to manage traffic flows more efficiently on existing major urban arteries and expressways? Will they also envision walking, cycling, and mass transit as viable alternatives to the automobile? The American transportation infrastructure is aging, and the obsolescence of much of it is likely to produce substantial economic costs and reduce the quality of life throughout the nation. Presently, a majority of federal and state transportation funds is
generated by the increasingly precarious approach of relying on dedicated gasoline (or fuel) taxes levied on a per-gallon basis and not as a percentage of the market price motorists pay for fuel. As a result of energy conservation measures and the increased fuel efficiency of newer vehicles, federal and state gasoline tax receipts have hit a plateau and have begun to erode when adjusted for inflation. Will the state and federal governments respond by raising gasoline tax rates? Or will they design, test, and employ new funding mechanisms, including funding transportation improvement programs through the use of general obligation bonds, developing more toll roads, and introducing congestion pricing (charging motorists for driving on high-traffic routes or at popular times in order to obtain additional public revenues and reduce traffic volume)? Because federal and state government funds have been disproportionately allocated to the construction of highways, the resulting urban sprawl has reduced the amount of open space in metropolitan areas, worsened air quality, and contributed to the growing spatial mismatch between suburban job centers and low-income inner-city residents, making it very difficult for inner-city workers to gain access to metropolitan labor markets. Will urban sprawl, which has made large numbers of both residents and commuters in metropolitan areas dependent on their cars for virtually all their transportation needs, continue to be promoted by transportation policy in the United States during the early decades of the 21st century? Or will American transportation policy increasingly provide funds to build a variety of state-of-the-art mass transit systems and thereby help shape a new approach to urban land use patterns, fostering denser, more economically efficient, and more environmentally sound residential and commercial development and providing better connections between places of employment, residential units, and all the amenities of metropolitan areas? Further Reading Balaker, Ted, and Sam Staley. The Road More Traveled: Why the Congestion Crisis Matters More Than You Think, and What We Can Do About It. Lanham, Md.: Rowman & Littlefield, 2006; Dilger, Robert Jay. American Transportation Policy. Westport, Conn.: Praeger, 2002; Downs, Anthony. Still Stuck in Traffic:
Coping with Peak-Hour Traffic Congestion. Washington, D.C.: Brookings Institution Press, 2004; Hanson, Susan, and Genevieve Giuliano, eds. The Geography of Urban Transportation. New York: Guilford Press, 2004; Katz, Bruce, and Robert Puentes. Taking the High Road: A Metropolitan Agenda for Transportation Reform. Washington, D.C.: Brookings Institution Press, 2005. —Lance Blakesley
welfare policy Nobody likes welfare much, not even those who receive its benefits directly. Liberals criticize welfare as providing inadequate support for poor families with children and for requiring recipients to submit to intrusive investigations by zealous caseworkers. Conservatives believe welfare encourages dependency on government assistance by otherwise fully capable adults and that it offers perverse incentives leading to increased family breakups and out-of-wedlock births. Politicians from both the Democratic and Republican Parties have often sought to capitalize on the public’s low esteem for welfare, and there have been many calls for reform. Perhaps none has been as memorable as Bill Clinton’s 1992 presidential campaign pledge to “end welfare as we know it.” With few constituencies providing political support and many calling for its overhaul, it should not be surprising that in 1996 “welfare as we knew it” was effectively dismantled and replaced with a new program of assistance for poor families with children. What should be surprising, however, is that “welfare as we knew it” remained structurally intact for 61 continuous years despite repeated efforts to “reform” it or replace it with an alternative program of federal support for the poor. Welfare represents an unusual political paradox: a program that is intensely unpopular and yet unusually resistant to efforts to reform or replace it. To understand the roots of this paradox, one must have a basic understanding of the original structure of welfare, its place within the broader welfare state, and the impact of these origins on welfare’s subsequent political development. Welfare’s political development, as we shall see, was driven in great part by vast changes in the size and character of its recipients in conjunction with fundamental changes in Ameri-
can racial and gender relations. As the original welfare program became increasingly unfit for the challenges of a new social and political context, the calls for reform grew. Political efforts to reform welfare, however, repeatedly failed due to a number of political challenges inherent to welfare. It took a significant shift in partisan and institutional politics to lay the groundwork for a restructuring of federal welfare. Welfare generally refers to government programs providing financial assistance to poor or low-income individuals or families. Most often, when political leaders discuss welfare they are focused on Temporary Assistance for Needy Families (TANF), the successor to the Aid to Families with Dependent Children (AFDC) program, which provides cash assistance for poor and low-income families with children, almost all of whom are fatherless families. TANF is a means-tested program, meaning that only families below a certain income level and with limited assets are eligible to receive the assistance. In addition to TANF, there are other means-tested federal and state programs for the poor, including both cash and in-kind assistance. The latter refers to the provision of directly usable goods or services that the family or individual would otherwise have to purchase. The most prominent of these is the Food Stamps program, first established in 1961 and expanded significantly in the early 1970s, which provides stamps for people who qualify because of their lack of income or low incomes and which are redeemable for groceries. The income ceiling for Food Stamps eligibility is significantly higher than for TANF, and many recipients earn incomes above the poverty line. Medicaid, established in 1965, provides health insurance for the poor and people with low incomes. The Earned Income Tax Credit (EITC), originally established in 1975 and significantly expanded in 1993 and 2001, provides low-income families who pay no federal taxes with a cash refund nonetheless, varying in amount depending on the size of the family and the level of income. Today, the EITC is larger than federal welfare under TANF, aiding more people and costing more federal dollars, and many of its recipients are above the official poverty line. The EITC enjoys political support, as opposed to welfare, because it is provided through the tax code and is connected to employment.
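Because the EITC’s structure (a credit that phases in with earnings, reaches a maximum, and then phases out) is easier to see in numbers than in prose, here is a minimal sketch of such a schedule. The phase-in rates, maximum credits, and phase-out thresholds below are invented for illustration only and do not correspond to the actual EITC parameters for any tax year or family size.

    # Stylized refundable earned-income credit: phases in with earnings,
    # plateaus, then phases out. All rates and thresholds are hypothetical.

    def stylized_eitc(earnings, num_children):
        # Invented schedule parameters keyed by number of children.
        params = {
            0: dict(phase_in=0.08, max_credit=500,   phase_out_start=8_000,  phase_out=0.08),
            1: dict(phase_in=0.34, max_credit=3_000, phase_out_start=19_000, phase_out=0.16),
            2: dict(phase_in=0.40, max_credit=5_000, phase_out_start=19_000, phase_out=0.21),
        }
        p = params[min(num_children, 2)]
        credit = min(earnings * p["phase_in"], p["max_credit"])
        if earnings > p["phase_out_start"]:
            credit -= (earnings - p["phase_out_start"]) * p["phase_out"]
        return max(credit, 0.0)

    # A family with two children at several earnings levels.
    for earnings in (5_000, 15_000, 25_000, 40_000):
        print(earnings, round(stylized_eitc(earnings, 2), 2))

The same shape explains why the credit is described as tied to employment: a family with no earnings receives nothing, the credit grows with work up to a ceiling, and it fades out as income rises toward the middle of the distribution.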
Aside from these better-known programs of federal assistance, there is a much less familiar network of aid, including federal tax credits, smaller federal assistance programs, state and local programs of assistance, and private nonprofit and employee benefits. The federal government provides billions of dollars in assistance for different categories of recipients through a vast array of tax credits, such as the EITC. Some of these tax credits are received by low-income and working-class populations, but most of them go to middle- and upper-class income levels. There is also a range of smaller federal programs assisting poor and low-income people with in-kind aid for basic needs such as housing and nutrition. Many states and localities (counties, cities, and towns) offer their own programs of assistance for various recipient categories, including poor single men, many of whom are homeless. Private charities also provide a range of services and assistance, ranging from small locally based organizations to large national organizations such as Catholic Charities, which has a budget larger than $3 billion annually. Finally, the vast majority of U.S. citizens receive health, disability, and retirement benefits through their private employers, benefits that are usually provided by welfare state programs in other advanced industrial democracies. The United States remains the only advanced industrial democracy that does not provide universal health insurance. TANF, however, is the program that is generally referred to when the term welfare is used. TANF was established in 1996 as the centerpiece of the Personal Responsibility and Work Opportunity Reconciliation Act (PRWORA) (P.L. 104–193). It was the successor to the AFDC program, which was created as part of the public assistance title of the 1935 Social Security Act (Title IV). The Aid to Dependent Children (ADC) program, the original federal “welfare” program, provided assistance for fatherless children as part of this legislation. In 1959, the adult caretaker of the children assisted by ADC was included in the grant, and in 1962 the name of the program was changed to Aid to Families with Dependent Children, reflecting this addition. ADC/AFDC provided financial assistance for poor mothers and their children as a federally guaranteed entitlement for 61 continuous years. The replacement of AFDC by TANF was much more than a simple reform of the existing welfare
program. First, the entitlement status of welfare was replaced by a federally capped block grant to states ($16.4 billion annually from 1996 to the present). Second, for the first time in the history of federal welfare, there were time limits established for receiving welfare: Recipients were limited to five years of welfare over their lifetime. In addition, the new welfare program requires that adult recipients of TANF be engaged in a state-authorized work activity after receiving welfare for two years. Third, welfare reform also devolved a great deal of administrative discretion over the design and shape of state welfare programs from the federal government to state governments. Aside from designing the shape of their own programs, states can decrease the lifetime limit, and almost half the states have opted for shorter lifetime limits. States are also permitted to shorten the time period before requiring recipients to work, but more than half use the federal maximum of 24 months. In 2005, as part of the Deficit Reduction Act of that year, the TANF program was reauthorized with some minor changes. Federal welfare was established as a response to the economic collapse of the late 1920s and early 1930s. The Economic Security Act of 1935, commonly known as the Social Security Act (SSA), set up the basic structure of the new federal welfare state. The structure of the welfare state established by this landmark legislation was enormously important to the subsequent politics of welfare. The SSA established the major programs of the federal welfare state, setting up two tiers of assistance. The top tier included politically popular social insurance programs, such as Social Security, and was financed and administrated completely by the federal government. The programs in this tier enjoy political legitimacy due to their financing mechanism: payroll taxes matched by employers, often viewed as contributions or even premiums. The second tier public assistance programs, in contrast, were structured as federal-state partnerships, with federal financing contingent on state government contributions. States were limited by loose federal guidelines but were given responsibility for setting benefit levels and eligibility requirements within these guidelines, resulting in wide variation in benefit levels and eligibility tests nationally. These programs required recipients to be below a basic level of income—hence their identity as means-tested pro-
grams. The original federal means-tested public assistance programs included ADC, Aid to the Blind (AB), and, after 1939, Aid to the Permanently and Totally Disabled (APTD). By the mid-1950s, ADC began to overshadow the other programs in its tier and in the 1960s totally eclipsed them in size and political significance. The federal-state structuring of welfare was very important in shaping its political trajectory in the decades following World War II. Prior to the 1960s, many states used the administrative discretion they enjoyed as part of this structure to limit their AFDC caseloads and benefits. ADC/AFDC benefits were notoriously low in southern states and highest in the Northeast and in California. Southern states also used their administrative powers under the program to limit eligibility, keeping their caseloads artificially low while effectively discriminating against African-American families who would otherwise have been eligible for benefits based on income and family structure. These practices were successfully challenged in the political atmosphere of the mid-1960s, partly in response to the Civil Rights revolution. As a result, the federal government increasingly exercised tighter oversight over state welfare administration, prohibiting unfair and/or arbitrary rules for determining eligibility, including those that resulted in racial discrimination. As a consequence, there were rapid increases in the size of the AFDC caseload. Between 1950 and 1970, the number of ADC recipients grew by 333 percent, with the bulk of this increase taking place in the years between 1965 and 1970. In the 1980s, President Ronald Reagan and other prominent conservatives sought to devolve federal control over numerous assistance programs to state governments. This devolution agenda accelerated considerably once the Republican Party achieved majority status in the House of Representatives in 1994 for the first time in 40 years. With welfare reform a central part of the new House Republican agenda—the “Contract with America”—states were to become the central authorities in designing federally funded welfare programs. This shifting of administrative control is a signature aspect of welfare politics: first from the states to the federal government in the late 1960s and 1970s, and then from the federal government back to the states in the late 1980s and 1990s.
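The federal TANF rules summarized earlier in this entry (a 60-month lifetime limit and a work requirement after 24 months of assistance, both of which states may shorten) can be sketched schematically as below. This is an illustration of the rule structure only, not an actual eligibility determination, and the 48-month state limit shown is a hypothetical example.

    # Schematic sketch of the TANF time-limit rules described above:
    # a 60-month federal lifetime cap and a work requirement after 24 months,
    # both of which states may shorten. Not an actual eligibility system.

    FEDERAL_LIFETIME_CAP_MONTHS = 60
    FEDERAL_WORK_TRIGGER_MONTHS = 24

    def tanf_status(months_received, state_lifetime_cap=60, state_work_trigger=24):
        lifetime_cap = min(state_lifetime_cap, FEDERAL_LIFETIME_CAP_MONTHS)
        work_trigger = min(state_work_trigger, FEDERAL_WORK_TRIGGER_MONTHS)
        return {
            "eligible_for_cash_aid": months_received < lifetime_cap,
            "work_activity_required": months_received >= work_trigger,
            "months_remaining": max(lifetime_cap - months_received, 0),
        }

    # A hypothetical state that opted for a 48-month lifetime limit.
    print(tanf_status(30, state_lifetime_cap=48))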
In the 1960s, the characteristics of the welfare caseload changed considerably as well. ADC was originally modeled on the Mothers’ Pensions programs of the 1920s. It was intended to enable primarily white widowed mothers to remain home with their children. However, the proportion of black ADC recipients increased to about 40 percent by 1961, where it remained until 1996. Since the 1996 reform, the greatest proportion of welfare “leavers,” former recipients who have left the welfare rolls for work or other forms of support, have been white. This has left the remaining welfare population increasingly composed of African-American and Latino families. In addition to these racial changes, the majority of families on welfare today are no longer headed by widows, and the percentage of out-of-wedlock births to mothers on AFDC increased 25 percent between 1950 and 1960. Within this context, since the 1960s public criticism of welfare has increased dramatically. As a consequence, beginning with President John F. Kennedy’s public assistance reforms in 1962, welfare has been the target for federal reform every four or five years. Significant reforms to AFDC were achieved in 1962, 1967, 1981, and 1988. In 1962, President Kennedy embraced a rehabilitation approach to welfare and succeeded in adding social services supports to augment AFDC. The purpose of these supports was to help move recipients successfully into employment. In 1967, a congressional coalition of conservative Democrats and Republicans sought to establish tough work requirements as part of AFDC but succeeded only partially with the establishment of the Work Incentive Program (WIN). WIN permitted states to exempt large proportions of their caseload from participation in this program of job training and work involvement, and as a result national participation rates were very low. In 1981, President Ronald Reagan was able to pass significant rule changes for eligibility determinations and other minor aspects of AFDC. Together, these changes significantly reduced the size of the welfare caseload for the first time in more than 20 years. Reagan achieved this success as part of a much larger budget reconciliation package. In 1982, when he sought a more direct reform of welfare—to have states assume full financial and administrative control over AFDC in exchange for the federal government assuming full
control over Medicaid (also a federal-state partnership)—his proposal failed. Indeed, more ambitious efforts to reform AFDC directly have repeatedly been met with political failure. Presidents Richard Nixon and Jimmy Carter both proposed comprehensive reforms of welfare and made these singularly important pieces of their domestic policy agenda. In both cases they failed to achieve their goals. In 1988, congressional leaders and the Reagan administration achieved what was then touted as a major welfare reform: the Family Support Act (FSA). However, in subsequent years it became increasingly clear that FSA was just another minor reform of the strangely politically persistent AFDC program. The 1988 FSA provided new federal monies to assist states in encouraging and training welfare recipients for work under the Job Opportunities and Basic Skills (JOBS) program. However, JOBS was never fully implemented. In order for states to access the federal financing for new job training programs for welfare recipients, they were required to match the financing. Because state governments were unwilling or unable to spend the voluntary matching grant monies necessary to access federal dollars available under the legislation, most of the federal financing available was never spent. Still, FSA was politically significant as a culmination of almost a decade’s efforts to shift the purpose of federal welfare away from supporting poor children in fatherless families and toward encouraging self-sufficiency for poor single mothers through work. FSA was understood at the time to reflect a new consensus on welfare, emphasizing the contractual responsibilities of welfare recipients rather than the social services supports that were emphasized in the Kennedy-Johnson years. There was a growing consensus in the 1980s that welfare recipients should be required to work or prepare for work and that states should be given greater latitude in designing their own welfare programs. These principles laid the necessary groundwork for the landmark welfare reform of 1996. Welfare’s unusual capacity to resist successful reform despite its political weakness poses a unique political paradox. Welfare’s political roots were from a time when women stayed home with their children, and racial segregation and other forms of discrimination were widely tolerated. The original purpose and structure of welfare accommodated these social reali-
ties. America’s social context changed dramatically in the years following World War II—segregation was overturned, discrimination was rejected by majorities, and women were increasingly likely to participate in the labor force. Welfare, however, remained intact: a program structured for the world of the 1930s but somehow persisting as the federal government’s main tool for addressing poverty among poor families. The conflict between the changed social context and the U.S. welfare state’s original structure engendered increasing levels of political friction. But any reforms of AFDC seemed to exacerbate other problems: Increasing benefits made welfare too expensive and more attractive, and decreasing benefits made it too hard for recipients to care for their children adequately. National reformers were frustrated in their efforts to “end welfare as we knew it.” By 1996, however, riding a growing consensus concerning welfare reform and within a context of important political changes that had become manifest in Congress and in presidential politics, a policy making window, as political scientist John Kingdon might describe it, opened. The successful restructuring of federal welfare in 1996 was achieved because of the intersection of a growing consensus on welfare, reflected in FSA, with significant changes in federal politics. The establishment of a Republican majority in the House of Representatives in 1994 was of central importance. This majority was unusually cohesive and sought the ending of AFDC and a return of control over welfare to state governments. At the same time, the Democratic president, Clinton, in political trouble and facing an upcoming election, had yet to deliver on his popular 1992 campaign promise to “end welfare as we know it.” Together, these circumstances catapulted welfare reform to the top of the national agenda between 1994 and 1996, altering the potential for assembling a majority congressional coalition supporting welfare reform. The resulting replacement of AFDC by TANF in 1996 represented a watershed in U.S. welfare policy and politics. As a result of this landmark legislation, welfare is no longer an entitlement; adult recipients can no longer count on assistance being provided indefinitely, and work is a requirement for most adults receiving welfare. States have far more power in shaping welfare programs than the federal government, with a resulting wide variety of policies across the
United States. Many policy makers see this transformation as an unadulterated success, with caseloads reduced in size by more than 50 percent between 1996 and 2001. Others argue that the experience of many families making the transition off welfare to work frequently leaves those families worse off, with children in such families facing a greater likelihood of experiencing hunger, lack of needed health care, and bouts of homelessness. TANF faces many of the problems of the past while encountering new challenges as it moves into the future. The 1996 law mandated a reauthorization of TANF by 2002, but it took Congress until May 2005 to provide that reauthorization, partially because of difficulties in achieving legislative consensus on how strongly the federal government should require states to move larger and larger numbers of their welfare recipients into work. Moreover, at the state level, the early successes in moving large percentages of welfare recipients from welfare to work seem to have cooled off since 2002, as those who remain on welfare face multiple barriers to successful self-sufficiency: educational deficits, mental illness, substance abuse problems, experience with domestic violence, and so on. These challenges suggest that states will require additional funds to assist their remaining welfare caseloads in making successful transitions from welfare to work. The difficulties in achieving even a simple reauthorization of TANF, a program widely viewed as a virtually unalloyed success, suggest that the future of U.S. welfare
policy and politics will continue to be conflict ridden, politically divisive, and resistant to effective reforms. See also Great Society; New Deal. Further Reading Cammisa, Anne Marie. From Rhetoric to Reform? Welfare Policy in American Politics. Boulder, Colo.: Westview Press, 1998; Gilens, Martin. Why Americans Hate Welfare: Race, Media, and the Politics of Antipoverty Policy. Chicago: University of Chicago Press, 1999; Gordon, Linda. Pitied but Not Entitled: Single Mothers and the History of Welfare. Cambridge, Mass.: The Belknap Press of Harvard University Press, 1998; Hacker, Jacob S. The Divided Welfare State: The Battle over Public and Private Social Benefits in the United States. New York: Cambridge University Press, 2002; Katz, Michael B. The Price of Citizenship: Redefining the American Welfare State. New York: Henry Holt & Co., 2001; Lieberman, Robert C. Shifting the Color Line: Race and the American Welfare State, Cambridge, Mass.: Harvard University Press, 1998; Skocpol, Theda. Protecting Soldiers and Mothers: The Political Origins of Social Policy in the United States. Cambridge, Mass.: The Belknap Press of Harvard University Press, 1992; Weir, Margaret, Ann Shola Orloff, and Theda Skocpol, eds. The Politics of Social Policy in the United States. Princeton, N.J.: Princeton University Press, 1988; Weaver, R. Kent. Ending Welfare as We Know It. Washington, D.C.: Brookings Institution Press, 2000. —Scott J. Spitzer
STATE AND LOCAL GOVERNMENT
board of education
Boards of education, commonly referred to as school boards, were a natural development of the growth of communities and localized structures of government in the United States. School boards date back to colonial times, and as the nation grew, the education of children was viewed as a local responsibility. Kindergarten through 12th-grade education became the quintessential “public good.” In the parlance of classical economics, a public good is one that is characterized by “jointness of supply” and the impossibility of excluding anyone from consumption. With regard to education, this meant that schools were typically funded by local taxation, and all children of the community were to be served by the school. Along with the need for public finance came the need for public control and accountability. The school board, which was either elected or appointed by mayors or other executives, became the institutionalized method of democratic oversight and control of a particular school district. As an institution of democratic control and oversight, most school boards are popularly elected and generally function much like a legislature, taking on responsibility for bureaucratic oversight, policy development and initiatives, budget oversight, and setting broad goals for the school district. Early on, the school board was also responsible for working with teachers to ensure that the curriculum met the needs of the community. The content of education is, of course, one of the toughest questions a society must answer, and any society that claims to be democratic must decide the best possible way to educate citizens about the values of democracy. Because education shapes the moral character of future generations, determining who should exercise control over the curriculum is a difficult task and, at times, highly political. Parents, the state, and professional educators are all interested in the content of the character of a child as he or she progresses through the public school system. By the beginning of the 20th century, there were literally thousands of school districts that had emerged throughout the United States, and virtually all of the districts were subject to local control. School boards were the principal method of governance. Despite the large number of school districts, there was surprising consistency in the subjects that were taught from community to community. As Diane Ravitch has noted, most public schools emphasized reading, writing, basic math, patriotism, citizenship, and a moral code that was generally endorsed by the community. The schools were an integral part of the community and reflected the beliefs of the members of the community. Indeed, schools were often the focus of various kinds of community associations, and schools often served as the meeting place for various other community organizations. Thus, one would expect that there would be consensus about the instructional materials and the values taught in the schools. As in The Music Man, the school board was conceived to be the protector of the morality of the community. Therefore, when Professor Hill came to town, his
first task was to see that the board “sang in harmony.” If members of the school board were not “bickering,” then they would be more likely to bring the community together. In this idealized view, the school board merely reflects the dominant view of the community. Ravitch goes on to point out that the similarity between what was taught from district to district was reinforced by textbook publishers interested in developing texts that would appeal to the largest number of districts. However, after the turn of the century, it became clear that there were a number of competing values that the schools were being called upon to teach. Several trends contributed to the decline in the consensus over the individual values that should be taught. First, the Progressive Era movement was interested in the professionalization of government, particularly civil service, and this concern carried over to education. Often, in cities, school boards were under the control of the political ward system, and Progressives introduced professional norms for teachers, norms for bureaucracy, unionization of teachers, and a major reemphasis on curriculum. All of these parts of the Progressive program worked against the idea of local control and the political harmony within the district that the school board was supposed to maintain. Second, the neighborhood school, because of its extended organizations such as the Parent Teacher Association (PTA) and after-school activities, has always provided a natural meeting place for parents. In many communities, there was an overlapping membership between church, school, and neighborhoods, and dense social networks began to form. Schools naturally became a location for both social interaction and often for political organization. At the same time, as the population became more mobile, new families moved into the school districts, bringing with them different priorities and values. The consequences were often conflict over the content of the curricula and the emergence of the school as a natural base from which political action emanated. Face-to-face encounters generate, as political scientists have noted, “social capital” that fuses both norms of trust and civic awareness, thus making possible the transition from voluntary to political organizations. Because conflicts are expressed in elections, school boards, in some communities, no longer sang in partisan harmony but were the center of intense
political conflict. The school board increasingly dwelt in a vortex of multiple groups and interests that were often in conflict with one another. The political environment that the school board finds itself in consists of the following stakeholders: the state bureaucratic apparatus, teachers, local administrators, textbook companies, parents, and taxpayers in general. The effectiveness of any board is dependent on its success in marshalling political support within the community. Therefore, the school board must build political coalitions among politically diverse groups in order to maintain a smoothly functioning school system. If it is true that the conditions for democratic control over schools are a function of a strong, accountable school board, then school boards must respond to the general public interest rather than private interests. Yet, in general, because of the nature of coalition building, highly organized interest groups are often able to gain control over a school board. Declines in voting turnout also contribute to the dominance of particular groups and specialized interests. Conflict within a school district is usually expressed in electoral terms, and contested elections are normally a part of a cultural conflict or a conflict over the values taught in the public schools. Low turnouts in most school board elections indicate that, most of the time, the public is willing to defer to the financial and oversight expertise of the board. Most disputed elections are about values, and curiously, voting turnout does not seem to increase during contested elections. Rather, conflict seems to occur among dominant community groups. Boards in districts where there is sharp cultural conflict normally end up being controlled by one faction or another. Under these conditions, boards can lose their sense of democratic accountability. Most analysts over the past 20 years have identified the conflict in public schools as a conflict over precisely which values will be taught to the children. On one hand, certain groups believe that parental values should dominate the educational process. This view has led many to argue for various “voucher schemes” as a way of allowing parents to send children to schools that generally reflect their own values. The underlying idea is that the public schools have somehow abandoned a commitment to community values. The neighborhood school, so the argument goes, has increasingly become dominated by
the values of professional educators, administrators, and various social engineers. On the other hand, many professionals as well as lay people adopt the view that the purpose of the public schools is to teach values that will lead children to become good citizens in a liberal polity. These values include toleration, respect for the diversity of opinion, reflectiveness, autonomy, and critical thinking. Given the potential conflict over values, it is not difficult to see why, in diverse districts, school board elections often become heated arenas of political conflict. The state, through the school board, is often intruding on the life of the child, particularly with respect to values. If parents within a district who are already partially mobilized disagree with the values emphasized by the school, conflict will often center on school board members. Debates over such value-based issues as sex education, the role of religion, recognition of different lifestyles, and the presumed political biases of texts are not uncommon in some school districts. From 1995 to 2000, there were approximately 30 recall elections in school districts nationwide. In each of these cases, some conflict over values was involved in motivating the recall movement. In one view, the state, in order to reproduce itself, must make sure that certain democratic values are transmitted to students. On the other hand, some claim that the family unit best handles certain areas of instruction. The problem with democratic education is that it is not possible to amalgamate all possible values within a single curriculum. No single distribution of K-12 education will satisfy all parents. Voucher schemes cannot solve this embedded problem, since students who remain in the local school will still be subject to a curriculum that some feel is biased or inadequate. School boards are instrumental in maintaining the legitimacy of the public school. This is a particularly difficult task due to the political cleavages that have developed around what values should be taught in the schools. These divisions have been made even more acute in many districts due to various social problems that confront teachers and administrators on a daily basis. The problems, in any district, may include the influx of non–English-speaking immigrants, fractured families, more mothers in the workforce, racism, the increasing influence of television
and popular media, drug and alcohol addiction, and the general decline in civic culture. The politicization of these social and cultural issues has forced school boards to deal with increasingly partisan conflict. In many instances, boards, which historically have been nonpartisan, have been forced to side with various partisan factions in order to maintain equilibrium in a school district. Under these circumstances, boards seem to function best where they are most democratic. When openness, deliberation, and compromise are values adopted by the board, the community becomes more “harmonious.” The board must be able to represent broad community interests rather than particular partisan factions. See also Department of Education; education Policy. Further Reading Barber, Benjamin B. An Aristocracy of Everyone: The Politics of Education and the Future of America. New York: Ballantine Press, 1992; Burns, Nancy. The Formation of American Local Governments: Private Values in Public Institutions. Oxford: Oxford University Press, 1994; Chubb, John E., and Terry M. Moe. Politics, Markets, and American Schools. Washington, D.C.: Brookings Institution, 1990; Danzberger, Jacqueline P., Michael W. Kirst, and Michael D. Usdan. Governing Public Schools: New Times, New Requirements. Washington, D.C.: Institute for Educational Leadership, 1992; Davis, Mike. Prisoners of the American Dream. London: Verso Press, 1999; Friedman, Milton. Capitalism and Freedom. Chicago: University of Chicago Press, 1962; Fullinwider, Robert K., ed. Public Education in a Multicultural Society. Cambridge: Cambridge University Press, 1996; Gutmann, Amy. Democratic Education. Princeton, N.J.: Princeton University Press, 1987; Hirschman, Albert O. Exit, Voice and Loyalty: Responses to Decline in Firms, Organizations, and States. Cambridge, Mass.: Harvard University Press, 1970; Jeffe, Sherry Bebitch. “Bilingual Bellweather?” California Journal. (January 1998): 39; Levinson, Meira. The Demands of Liberal Education. Oxford: Oxford University Press, 1999; Matthewson, Donald J. Cultural Conflict and School Board Recall Elections. Paper presented at the annual meeting of the American Political Science Association, Philadelphia, Penn, 28–31 August 2003; McGirr, Lisa. Suburban
884 boar d of elections
Warriors: The Origins of the New American Right. Princeton, N.J.: Princeton University Press, 2001; Peterson, Paul E. City Limits. Chicago: University of Chicago Press, 1981; Putnam, Robert D. Bowling Alone: The Collapse and Revival of American Community. New York: Simon & Schuster, 2000; Piven, Frances Fox, and Richard A. Cloward. Why Americans Still Don’t Vote: And Why Politicians Want it That Way. Boston: Beacon Press, 2000; Ravitch, Diane. Left Back: A Century of Battles over School Reform. New York: Simon & Schuster, 2000; Riker, William. Liberalism against Populism. Lone Grove, Ill.: Waveland Press, 1982; Thoburn, Robert. The Children Trap. Fort Worth, Tex.: Dominion Press, 1988; Sharp, Elaine B., ed. Culture Wars and Local Politics. Lawrence: University of Kansas Press, 1999; Walker, Jack. Mobilizing Interest Groups in America: Patrons, Professions and Social Movements. Ann Arbor: University of Michigan Press, 1991. —Donald J. Matthewson
board of elections The board of elections (sometimes called “election commission” or “election board”) is responsible for planning and implementing all stages of every local, state, and federal election held in a state. Boards of elections perform a variety of election-related duties, from registering voters and candidates to counting and verifying the final ballots cast. Every board has a variety of responsibilities, but the main purpose is to ensure that every primary and general election is conducted legally and uniformly across the state. The power to determine how elections are conducted is left primarily to the states, as stated in Article I, Section 4, of the U.S. Constitution: “The times, places and manner of holding elections for Senators and Representatives, shall be prescribed in each state by the legislature thereof; but the Congress may at any time by law make or alter such regulations, except as to the places of choosing Senators.” The U.S. Constitution gives states the power to determine how each will organize and implement its elections, with minimum interference from the federal government. While state legislatures create their own set of election laws, another government agency is needed to perform the administrative duties of implementing these laws. Therefore, every state (as well as
the District of Columbia) has its own board of elections to ensure that every election is held in accordance with that state’s law. States may also have boards of elections at lower levels of government, such as a local or county board of elections. While many local and county election boards have many of the same duties and functions as state boards, this entry mainly focuses on those at the state level. The state’s secretary of state and/or a committee of several board members often head a state’s board of elections. Appointment to a board varies from state to state depending on what the state’s constitution or laws mandate. For instance, sometimes the governor appoints all the members to the board, while in other states, members may be appointed by the two major political parties or by some combination of both governor and party appointments. Boards have a tremendous workload and rely on the assistance of full- and part-time staff members. Just as board memberships vary in size, so do staff sizes. The staff is responsible for the day-to-day administrative functions and assists board members in everything from answering telephones to creating reports on current election-related legislation being considered. Because the U.S. Constitution leaves the responsibility of conducting elections to the states, states’ election laws vary across states. However, all must still abide by federal law. For example, consider voter registration. The U.S. Constitution grants the right to vote to all citizens over the age of 18 regardless of race, gender, or creed. However, states determine the voter registration requirements for citizens. One state may require that voters register weeks prior to an election, while another may allow registration on the same day of an election. Both registration rules are completely legal as long as the state does not impose unreasonable registration requirements, such as a literacy or aptitude test. Every state has its own set of laws governing how elections are to be run, and each state has created a board of elections to ensure that elections are held in accordance with state (and federal) law. While states’ election laws differ, the main responsibilities and duties of each state’s board are the same and include (but are not limited to) the following. Implement all state and federal election laws: Every state has its own constitution that defines
how elections are to be conducted. The board’s main responsibility is to oversee the election process and ensure that every election is legally conducted according to the state’s law. Sometimes boards may formulate their own laws to submit to the legislature for consideration, but mostly state legislatures (and sometimes the U.S. Congress) create (or revise) election laws. It is up to the board to see that these new laws are followed and to communicate to local election boards any changes that may alter how elections are conducted. While the Constitution gives states the power to determine how to run their elections, there are times when the federal government passes new legislation that all states must adopt, though they are given some leeway as to how the new law is to be administered. It is then the states’ responsibility to implement the new federal law or act, and the boards promulgate all necessary rules and regulations to meet new federal requirements. An example of boards implementing federal law is the National Voter Registration Act (NVRA) of 1993 (also known as the “Motor Voter Act”). To help increase the number of citizens registered to vote, NVRA requires that states register voters in one of three places: during registration for a new (or renewed) driver’s license, in all offices that use state funds to provide assistance to persons with disabilities, and/or through state mail-in forms. While some states are granted exemption from this law, 44 states and the District of Columbia are responsible for meeting the requirements of the NVRA, and each state’s board of elections must demonstrate to the federal government that its state complies with the act. Register all individuals and parties involved with elections: Every state’s board is responsible for maintaining current records of any person(s) and group(s) involved with elections, from registered voters and candidates to lobbyists and political action committees. When an individual submits registration materials to become a voter or candidate for office, it is the duty of the board to ensure that all paperwork is properly completed and filed. It is also the board’s responsibility to certify official lists of all candidates in a particular election. The board collects records of campaign contributions from individuals and political action committees and tracks each can-
didate’s contributions. Boards of elections keep updated records on individuals and parties involved in elections to ensure that all participants are conducting themselves in a legal manner. Gather and disseminate information: Boards of elections are responsible for making sure that all elections are conducted legally, and sometimes state legislatures revise or create new election laws or the courts make a ruling on how a particular election law should be executed. Because these events may significantly alter how boards operate, they must constantly monitor all proposed election-related laws and court proceedings that may impact their role. Such legal proceedings may include journals and acts of the state legislature, political practice pledges and ethics reports filed by elected state officials, or federal laws. Throughout the year, the board must then pass along this information to local election boards if the legislation and/or court rulings will alter the election process. While boards collect information from legislatures and courts, they may also serve as an information source for them. For example, a state legislator may wish to revise an old election law and ask the state’s board of elections if the proposed law adheres to current state and federal law. Board of elections members may also give their expert input as to the feasibility of various proposals and/or may testify in front of committees or legislatures when a proposed election law is being debated. They may also be asked to testify in cases involving an illegal election activity. Boards of elections also collect information from and serve as a resource for those directly involved in elections. During campaigns, candidates, political parties, and political action committees spend thousands (sometimes even millions) of dollars on advertising, staff, travel, and numerous other campaign-related activities. To ensure that there is no illegal financial activity, all must file finance reports with the board of elections to demonstrate exactly how they spend their funds. Boards collect and file these reports and check for any suspicious activity. They may also be asked to provide this information to the public, such as community watchdog groups or the media. The board also advises potential candidates of the qualifications and requirements for running for office both prior to and during a campaign.
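The report-checking duty described above amounts to simple bookkeeping: aggregate what each candidate has reported and flag anything outside the rules. The sketch below shows that idea in miniature; the $2,000 per-contribution limit and the sample records are hypothetical, since actual limits and report formats vary from state to state.

    # Minimal sketch of contribution tracking: total the reported contributions
    # per candidate and flag any single contribution over a limit.
    # The limit and the sample records are hypothetical.

    from collections import defaultdict

    CONTRIBUTION_LIMIT = 2_000  # hypothetical per-contribution limit

    reports = [
        {"candidate": "Smith",  "donor": "A. Jones", "amount": 500},
        {"candidate": "Smith",  "donor": "B. Lee",   "amount": 2_500},
        {"candidate": "Garcia", "donor": "C. Park",  "amount": 1_200},
    ]

    totals = defaultdict(float)
    flagged = []
    for r in reports:
        totals[r["candidate"]] += r["amount"]
        if r["amount"] > CONTRIBUTION_LIMIT:
            flagged.append(r)

    print(dict(totals))   # total reported contributions per candidate
    print(flagged)        # contributions exceeding the hypothetical limit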
Boards also serve as an information source for the general public. Each board has its own Web site with information on its services, records of previous election results, campaign finance reports, and so on. Interested individuals can obtain various election-related materials, from voter registration forms to campaign finance reports. Monitor and certify elections: During elections, the board must monitor each voting location to ensure that the election is being run legally and uniformly. To make sure that this happens, before each election the board carefully tests voting devices and certifies that each is operating accurately. When new voting devices are proposed, such as updated machines or voting online, the board also tests these devices before approving them for use. The board also trains and certifies local election officials prior to an election. At the conclusion of an election, the board certifies the total number of ballots cast and reports any ballots that were printed and delivered to the polls, disqualified, or unused as well as any overvotes or undervotes. The board certifies all final election outcomes, whether it is the winner of a particular race or the result of a referendum. Investigate cases: Sometimes allegations arise of unlawful activity that occurred during a campaign or election, such as bribery, voter fraud, illegal campaign contributions, false signatures on petitions, and so on. Such allegations must first be filed with the board, which then investigates the case. If the board finds the allegations to be false, the case is dropped; otherwise, the board moves forward with the case and takes action, whether it be disqualifying a petition or candidate or even taking court action against an individual or group. As explained above, boards of elections have a wide variety of duties to perform, but their most important function is ensuring that elections are run uniformly and fairly. Sometimes, however, states have abused their authority and used their law-making powers to prevent particular groups from participating in elections. Or there may have been widespread cases of voter fraud on an election day. At times like these, the federal government (and sometimes the U.S. Supreme Court) has had no choice but to interfere and impose federal election laws on the states. Such actions have created tension
between the federal and state levels of government and have raised questions about the legality of lawmakers in Washington, D.C. and nonelected justices imposing new laws in an area in which the states have sovereignty. These actions have also led to problems for boards of elections in terms of implementing federal laws and programs with little financial aid to properly do so. The most notable example of states abusing their sovereignty in elections is the period when states used election laws to prevent African Americans from voting. After the fifteenth Amendment to the U.S. Constitution was ratified in 1870, making it illegal to deny persons the right to vote on account of race, many states (especially those in the South) used their election laws as a tool to prevent minorities from voting. They created restrictions in the voting registration process, such as requiring a poll tax to be collected or a literacy test to be passed. Given the higher poverty and illiteracy rates of minorities, many African Americans could not meet the strict requirements and were prevented from voting. This went on for years with little interference from the federal government because legally, states could create their own election laws. However, in 1965, Congress passed and President Lyndon B. Johnson signed into law the Voting Rights Act (VRA), which was designed to put a stop to states conducting elections that would prevent minorities from voting. Among the provisions of the act, states were no longer allowed to require unreasonable registration requirements, and it also required several states and counties that had a long history of racial discrimination to obtain preclearance from the U.S. Department of Justice before any electionrelated changes were made, such as redrawn district lines or voting procedures. While many states challenged the VRA in the courts, in 1966, the U.S. Supreme Court upheld the constitutionality of the act in the case South Carolina v. Katzenbach (383 U.S. 301). This act was significant for state boards of elections because it is this agency that is responsible for implementing the act, from reviewing all proposed election-related changes to submit for preclearance to investigating cases of illegal activity. The VRA and its aftermath was also important for boards because this is when the federal government and court system began to become more
bonds (local government) 887
directly involved with an area over which states previously had complete sovereignty. This remains one of the biggest problems that boards of elections currently face and has resulted in far more federal oversight than boards have ever experienced. Currently, many states’ boards are struggling with implementing the Help America Vote Act (HAVA) of 2002 and adjusting to the additional federal oversight the act has brought. After the 2000 U.S. presidential election, when the outcome was delayed due to the contested vote in the state of Florida, numerous problems emerged across the nation, such as misread ballots and voter registration fraud. Voting device malfunctions in states such as Florida made many citizens and policy makers push for a new policy to make voting more uniform across states for federal elections. In 2002, Congress passed and President George W. Bush signed into law HAVA, which aims to alleviate some of the problems of 2000 and make voting more uniform and accessible. However, HAVA presents a major problem for boards of elections because now the federal government has more oversight of federal elections than ever before. HAVA established the U.S. Election Assistance Commission (EAC), which serves as a “clearinghouse” for federal elections and establishes standards each state is to abide by in order to meet the requirements of HAVA. The EAC directly reports to Congress, thus making state boards of elections more subject to federal oversight. While it is unclear precisely what HAVA’s effect on states’ boards of elections will be in the future, this act signals that the federal government is continuing to take a more active role in supervising an area once completely controlled by the states—elections. For state boards of elections, this means two things: first, that the agency now has an additional branch of government to report to, and second, that the era of complete state sovereignty over elections may be over. How will boards change in their day-to-day operations, now that they must report to the federal government? If states do not meet federal objectives, will the federal government take over the board’s role? Will new agencies such as the EAC be effective in their oversight? Will the national government continue to take more control over elections or return to state sovereignty in this area? In the future, boards of elections will have to address these questions and
learn how to work with the federal government and agencies in order to meet new federal objectives. See also campaign finance (state and local) Further Reading “About the National Voter Registration Act.” U.S. Department of Justice, Civil Rights Division. Available online. URL: http://www.usdoj.gov/crt/voting/nvra/activ _nvra.htm#1993. Accessed July 25, 2006; “Help America Vote Act of 2002”. Federal Election Commission. Available online. URL: http://www.fec.gov/hava/hava. htm. Accessed July 25, 2006; “Introduction to Federal Voting Rights Laws.” U.S. Department of Justice, Civil Rights Division. Available online. URL: http://www. usdoj.gov/crt/voting/intro/intro.htm. Accessed July 25, 2006; U.S. Election Commission Web site. Available online URL: www.eac.gov. Accessed July 25, 2006; U.S. Federal Election Commission Web site. Available online. URL: www.fec.gov. Accessed July 25, 2006. —Carrie A. Cihasky
bonds (local government) A bond is a certification of debt issued by a government or corporation in order to raise money. An investor who purchases a bond (bondholder) is essentially loaning money to the issuing organization (issuer) in exchange for a promise to pay a specific amount of interest periodically and to repay the principal in a lump sum on the maturity date of the bond. Local government bonds are debt obligations issued by subnational government units such as states, counties, cities, tribes, or other local special units (for example, school, utility, fire protection, redevelopment, or water conservation districts). Local government bonds, regardless of the actual issuing unit, are called “municipal” or “munis” by investors to distinguish them from corporate and federal Treasury bonds. Bond issues are the single most important method local governments use to acquire private funds to finance public projects. Municipal bonds are generally issued to fund long-term projects such as road construction, power plants, water facilities, and education infrastructure, although they may also be used for short-term emergency spending needs such as natural disasters. The earliest example of the use of bonds to fund public projects was by England in the 1770s for toll
roads. In the United States, a revenue bond was issued in the 1800s to finance the New Orleans port, and the first recorded municipal bond was by New York City in 1812. Following a debt crisis in 1837 and widespread state bond defaults in the 1840s, municipal debt rose rapidly as restrictions were placed on state spending. The heavy reliance on municipal bonds for financing many state projects combined with a financial panic in 1873 led to widespread local government debt defaults. By the early 1900s, municipal bonds emerged as a relatively secure form of investment. In 1902, the combined debt of all state and local governments was about $2.1 billion. Local debt decreased after the Great Depression and stayed relatively low throughout the two world wars. However, after World War II, increases in population, migration to urban centers, and changes in the transportation and housing markets generated huge demands for public services. By the late 1960s, local debt had increased to $66 billion, to $361 billion in 1981, and exceeded $1 trillion in 1998. Throughout the 1990s, local government debt remained relatively level, at an annual average of $1.09 trillion. With increased devolution of federal government services to state and local levels, there has been a corresponding increase in local government debt, with total local debt rising to $1.85 trillion in 2005. Approximately 61 percent of all subnational government debt is at the state level and 39 percent with municipal or special government units. There are two primary types of municipal bonds. They differ according to the mechanism used to commit to repay the debt. The first, general obligation bonds (also known as full-faith-and-credit), attach a legal claim by the bondholder to the revenue of the issuer. If an issuer defaults on debt obligations, the bondholder can claim payment from local revenue sources, typically local property, sales, or income tax revenues. Since they are issued against a broad revenue base, general obligation bonds are traditionally used to finance projects that benefit an entire jurisdiction, such as free-access highways, water systems, and fire protection. The second type, revenue bonds, are secured by attaching a claim to user fees or other specific dedicated tax revenues, typically directly related to the service provided by the project funded by the bond, such as higher education, toll roads, water services, and health facilities. In 2004, revenue
bonds accounted for about 61 percent of all long-term municipal debt, and general obligation bonds 39 percent. Nonguaranteed, or limited-liability, bonds, which lack any claim on local revenue sources, can also be issued. While presenting a higher risk to investors and requiring higher interest payments by issuers, these avoid legal claims to future revenue streams and permit projects to be funded based on their own merit. Municipalities may also use combinations of general obligation and revenue bonds to enhance creditworthiness. Bond issuers use a variety of credit enhancements to add additional security guarantees in order to reduce the interest rates a borrower pays. For example, bonds may include a state credit guarantee, a legal commitment by the state government to pay the debt of a local government issuer in case of default. Private banks can also agree to provide a promise of repayment of local government debt with a bank letter of credit. Municipal-bond insurance can also be purchased by local governments to guarantee the bond repayment. Other bond varieties include structured financing, which combines traditional bonds with derivative products such as futures, options, and swaps. This allows an issuer to include expectations about long-term interest rates to improve bond marketability. Local governments may also issue what are known as municipal notes or commercial paper for short-term borrowing, typically for financing cash management needs and budget shortfalls. A long-standing doctrine of intergovernmental tax immunity, dating in federal income tax law to the era of the Sixteenth Amendment to the U.S. Constitution, holds that some activities of state and local governments are immune from taxation. This prevents state and federal taxes on the interest local governments pay to investors for borrowing. This tax-free income has a number of effects. Local governments can borrow at interest rates lower than those in the private market as private investment flows to tax-free municipal bonds rather than taxed private bonds. Since investors use local bonds both as a tax avoidance strategy and as an investment, it also means there is more capital available to local governments than there would otherwise be on the merits of the project alone. Finally, the use of bond investments as a tax avoidance strategy decreases the tax revenue available to the federal government. Whether this represents a serious market inefficiency
or an appropriate method of funding public projects remains a continuing area of debate. The 1986 Tax Reform Act is the most comprehensive law regulating the current municipal bond market. It created two categories of municipal bonds: those funding taxable private activities and those issued for public purposes, which remained tax-exempt. Debt issues for some privately owned assets remain tax-exempt, such as the construction of multifamily dwellings for affordable housing, hazardous waste facilities, airports and other mass commuting facilities, and some student loans. Bonds issued by any public purpose special district remain tax-exempt. Purchasing a bond is an investment and always includes risk to the investor that the issuer will default on the debt. This risk is reflected in the interest paid to the investor, and the greater the risk, the higher the interest rate an investor expects. Investing in government bonds typically carries lower risk than investing in private corporate bonds, since governments rarely go bankrupt. A number of commercial bond rating agencies have emerged to assess the investment risk of both private and public bond issues. The three most common are Moody's Investors Service, Standard & Poor's, and Fitch Ratings. The bond rating scale used by Standard & Poor's is as follows:

AAA (highest rating): The capacity of the bond issuer to repay debt is extremely strong.
AA: The bond issuer's repayment capacity is high.
A: While the bond issuer's repayment capacity remains strong, it is susceptible to fluctuations in the health of the general economy.
BBB: Generally adequate financial commitment by the issuer, but bonds in this category are subject to changing economic conditions.
BB, B, CCC, CC, and C: Significant risk, and the financial capacity of issuers to repay is limited.
D: Bond payment is in default.

Bond ratings are based on a variety of factors, including the debt repayment history of the issuer, the degree of professionalism within a local government unit, its overall financial health, the amount of political control over spending, the health of the local economy, and the size of potential revenue sources. U.S. Treasury bonds are regarded as fully secure (AAA), since there is little likelihood that the federal government will go bankrupt. In 2004, the ranking for major metropolitan areas according to Standard & Poor's ranged from AAA for cities such as Indianapolis, Indiana, Seattle, Washington, and Min-
neapolis, Minnesota, to a low of BBB- for Pittsburgh, Pennsylvania, and Buffalo, New York. Bond ratings signal the financial health of a local government and have important political consequences. The 2003 recall election of California governor Gray Davis was partially spurred by Fitch lowering the state's bond rating from A to BBB due to fiscal mismanagement and a $38.2 billion budget deficit. Having a good rating, however, is no guarantee to an investor. In 1994, despite a high bond rating, Orange County, California, filed for bankruptcy in the largest-ever municipal bankruptcy. While there are disagreements among public finance scholars, there are some general guidelines regarding what constitutes appropriate issuing of debt by local governments. Since a commitment of funds today with a promise of repayment with interest in the future places a burden on future budgets, it is generally considered good practice for the users of a public project to be the ones who bear the burden of the debt. As a rule, bonds should not be issued over a time period beyond the life of the project that the debt funds. Because of this intertemporal element, bonds can be politically attractive as a way to fund services for constituents today while placing the financial burden of repayment on future generations. To control this, many jurisdictions require general obligation bonds to be approved by voters. Another mechanism to control political overuse is the requirement of pay-as-you-go financing, whereby projects are paid for out of annual appropriations rather than bond issues. However, this can be just as problematic as the misuse of the bond market. Pay-as-you-go financing may place a heavy tax burden on current residents of an area, even if some may leave a jurisdiction and no longer receive the benefits from a project. Since annual appropriation budgets cannot typically afford the large up-front construction cost of many public works, it may discourage otherwise useful projects. Funding high initial construction costs from annual funds can produce tax rate instability, with high rates during the early construction phase of a project and lower rates afterward, even though the project produces the same level of benefits over its lifespan. Municipal bonds are not listed along with corporate and stock investments in most financial sections. In order to check a bond price, it is often necessary to consult a specialized bond dealer or association.
There are a number of critical pieces of information necessary for understanding the municipal bond market. Below is an illustration of the information given in a typical bond price quote.

Issue    Coupon    Mat.        Price    Chg.    Bid yield
Okla     7.700     01-01-22    104¼     ...      7.35
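As a purely illustrative aside, the fields of such a quote can be treated as a small data structure. The short Python sketch below is not drawn from any bond-market software; the field names are invented for this example, and each field is explained in the paragraph that follows.

quote = {
    "issue": "Okla",         # issuing entity (the Oklahoma Turnpike Authority in this example)
    "coupon": 7.700,         # annual coupon rate, as a percent of par value
    "maturity": "01-01-22",  # date on which the bond expires and par is repaid
    "price": 104.25,         # current price, as a percent of par (104 1/4)
    "change": None,          # "..." in the quote means no change from the previous day
    "bid_yield": 7.35,       # yield if the bond is held to maturity
}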
While corporate and federal bonds are typically sold in $1,000 denominations, municipal bonds are issued in increments of $5,000, though prices are quoted per $1,000 of face value for the purpose of comparison. This $1,000 value is known as the par (or face) value of the bond. On the date of maturity, this is the amount that will be repaid in full by the issuer. In the example, "Issue" indicates the bond issuing entity, in this case the Oklahoma Turnpike Authority (Okla), a regional transportation special government unit. "Coupon" reports the coupon rate as a percent of par value. The bond is paying a 7.700 coupon, so it pays 7.7 percent of $1,000, or $77. "Mat." reports a maturity date of 01-01-22, meaning that the bond expires on January 1, 2022. "Price" is the current price to purchase one bond issue and is likewise expressed as a percent of par value. The reported price of 104¼ represents a current price of 104¼ percent of $1,000, or $1,042.50. "Chg." is the change in price from the previous day, with "..." meaning that the bond did not change value. When change does occur, it is rounded to increments of 1/8th of a percent (0.125 percent) and reported as a positive or negative value. "Bid yield" refers to the yield earned if the bond is held until maturity. The market value of a bond is composed of two components: the present value of the coupon (a periodic payment made to the bondholder) plus the present value of the amount initially borrowed (the principal paid as a lump sum at maturity). The value of both components depends on the interest rate (the amount the issuer pays for using the money) and the maturity date (the time when the bond expires). Calculating the present value of a bond is done with the bond-pricing equation, expressed as

\[
\text{Bond Value} = C\left[\frac{1 - \frac{1}{(1 + r)^{t}}}{r}\right] + \frac{F}{(1 + r)^{t}}
\]
where C = coupon payment, r = interest rate, t = periods to maturity, and F = par value. Suppose a bond pays an annual coupon of $40, the interest rate is 8 percent, the bond has 10 years to maturity, and its par value is $1,000. The value of that bond is then calculated as

\[
\text{Bond Value} = 40\left[\frac{1 - \frac{1}{(1.08)^{10}}}{0.08}\right] + \frac{1{,}000}{(1.08)^{10}} \approx 731.59
\]
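The figures in this example can be verified with a few lines of code. The following Python sketch is illustrative only (the variable names are invented here); it simply restates the quote arithmetic and the bond-pricing equation above.

PAR = 1_000.0                                   # par (face) value used for quotation

# Reading the sample quote: coupon and price are quoted as percentages of par.
coupon_payment = 0.077 * PAR                    # 7.7 percent of par = $77.00 per year
dollar_price = 1.0425 * PAR                     # 104 1/4 percent of par = $1,042.50

# Worked example: $40 annual coupon, 8 percent interest rate, 10 years, $1,000 par.
rate, periods = 0.08, 10
coupon_pv = 40 * (1 - 1 / (1 + rate) ** periods) / rate   # present value of the coupons, about $268.40
principal_pv = PAR / (1 + rate) ** periods                # present value of the par repayment, about $463.19
bond_value = coupon_pv + principal_pv

print(f"${coupon_payment:.2f}  ${dollar_price:.2f}  ${bond_value:.2f}")
# Prints $77.00  $1042.50  $731.60; rounding the two present-value components
# to cents before adding gives the $731.59 figure cited in this entry.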
It is worth $731.59 to an investor purchasing that bond today. There are a number of contemporary debates about and challenges to municipal bond markets. A perennial controversy is the division between private and public activities. The most recent example is the practice of funding sports stadiums for attracting private sports teams using public debt. Other researchers have commented on the relative ease of using the bond markets to fund new projects that have high returns, compared to the difficulty in maintaining existing services. Thus, new cities with expanding public works find capital easily, while older cities are forced to look for other revenue sources. State legislation that places limits on local property tax revenues has forced local governments to innovate new means of raising capital at the same time more service responsibilities, such as welfare and health services, are being devolved to local governments. The tax-free nature of municipal bond investments also continues to generate debate. Bond investments are particularly appealing to investors in high tax brackets as a tax shelter, which creates an incentive to lobby local government officials to pursue particular bond investments. Municipal bonds have played an important role in the United States system of federalism by allowing local government units both the power and the responsibility to take on debt for public purposes. This interaction between private financial markets and public economies has had important implications for the incentives of local political officials. The system of bond ratings compels local officials to maintain fiscal discipline and reputation within the private market. Because of its long history of federalism, the U.S. municipal bond market is the most sophisticated
and developed in the world. To increase the funds for infrastructure development as well as improve governance by local officials, other countries are increasingly looking at the U.S. municipal bond market as a model. Municipal bonds will continue to evolve in both form and function in reaction to fluctuations in the availability of funds from state and national government, constituent demands, legal structures, and private capital markets. See also budgets, state and local. Further Reading Fortune, P. “The Municipal Bond Market, Part 1: Politics, Taxes and Yields.” New England Economic Review (Sept./Oct 1991): 13–36;———. “The Municipal Bond Market, Part II: Problems and Policies.” New England Economic Review (May/ June 1992): 47–64; Hillhouse, A. M. Municipal Bonds: A Century of Experience. New York: Prentice Hall. 1936; Mikesell, J. L. Fiscal Administration: Analysis and Applications for the Public Sector. Belmont, Calif.: Thompson/Wadsworth. 2007; Monkkonen, E. H. The Local State: Public Money and American Cities. Stanford, Calif.: Stanford University Press, 1995; Wesalo Temel, J. The Fundamentals of Municipal Bonds. New York: Wiley & Sons. 2001. —Derek Kauneckis
budgets, state and local For state and local governments, as Alexander Hamilton noted, money is the lifeblood of politics. State and local budget decisions determine how much one pays for a parking ticket or for property taxes on a business. State budgets determine how strong a reputation one’s public schools and state university will enjoy. Budget choices determine whether potholes will be fixed, whether a road will be increased to four lanes, and whether one will pay admission to city, county, or state parks. Budgets are an accounting of revenues collected from different sources and expenditures on various items. A budget is at the same time a proposal, a plan of action, and a means for accounting for money already spent. Budgets at the state and local government levels are determined by politics such as election results, state and local political cultures, and a variety of other factors such as the wealth
of a state, the health of the economy, and demographic trends. Under the U.S. Constitution, federalism guarantees that states have independent political power and their own constitutions. Under most state constitutions, local and regional government (such as counties) are created by the state and can be changed by the state. Some large cities such as New York, Chicago, and Los Angeles have budgets bigger than some of the smaller states. While sharing a number of characteristics with the U.S. federal budget, state and local budgets are different in several meaningful ways. First, the federal budget is proportionately larger than all state and local governments combined. In the early 21st century in the United States, federal spending at the national level constituted about 20 percent of gross domestic product (GDP), the sum of goods and services produced by the economy. All state and local spending combined made up approximately 10 to 12 percent of GDP, meaning the public sector in the United States was about one-third of the total economy. One main difference is the responsibility of the federal government for national defense, which cost more than $400 billion annually in 2006–07. Another main difference between the different levels of budgeting is that the federal budget is a tool for managing the economy. At the state and local levels, budgets rarely are large enough to shape the overall economy of the state. However, policies on taxes and spending can determine the level of state services, the favorability of the state toward businesses, and other factors that influence the state and local economies. State budgets differ from the federal budget in terms of rules and procedures as well. In Washington, D.C., Congress has constitutional control of the power of the purse. While the balance shifted to a stronger presidential role in budgeting in the 20th century, the president does not have the degree of budget power of state governors. Most governors possess the “line-item veto.” That means that a governor can veto a small part of a spending bill rather than have to veto an entire bill like the president. This is an important power for governors, often envied by presidents. Along another dimension, however, states are more constrained than the federal government. Most states have a balanced budget requirement, which means they are not allowed to spend more
than they take in except for borrowing for some capital projects such as university buildings or infrastructure projects. The federal government, however, can spend more than it takes in and supplement it by borrowing from the public (U.S. Savings Bonds, for example) and foreign nations. Unlike the states and most local governments, the U.S. government has a large national debt as a result of running budget deficits over the years. Budget processes in state and local governments generally run on annual or biennial cycles, including regular steps that determine where revenues will come from and how they will be spent. Because one year’s budget looks similar to the budget the year before, some scholars have suggested that budgeting is “incremental.” However, when examining budgets over periods of several years or following major changes or crises such as Hurricane Katrina in 2005, budgets can change quite quickly and dramatically. About half the states have a biennial budget—that is, a budget that last two years. The other states and most local governments have an annual budget. The budget cycle at the state and local levels generally consists of executive formulation, legislative approval, and administrative implementation, possibly followed by an audit. This entire cycle can take as much as three years, meaning that at any one time, several budgets are in different stages of development, enactment, and approval. Budget formulation is generally controlled by the executive branch: the governor or mayor and staff. Most states have a central budget office that assists the governor in assembling the budget proposal to send to the legislature. Citizens can check the Web site of their state or city government to find out what entity is responsible for budget preparation. Important parts of state and local budgeting are economic estimates and revenue projections. Since state and local governments are generally limited to spending what comes in, the most important thing is to determine how robust tax collections will be. During an economic boom, with housing prices soaring, revenues can grow rapidly, allowing spending on new programs. In times of economic downturn, however, revenues may plummet, requiring painful cuts in education, health care, and other popular programs. Most state and local governments try to maintain some kind of “emergency,” or “rainy day,” fund for
such contingencies, but it is difficult to resist the current spending pressures from various interests. As the budget is being prepared in the executive branch, agencies, departments, and various public entities such as universities make requests for allocations for the coming year or biennium. These entities generally are advocates for greater spending— “claimants”—and they are supported by various interests in the state or local jurisdiction. Much of what government does helps people (health and senior citizen programs), protects people (police, fire departments, and the National Guard), or makes their lives easier (transportation, highways) and is therefore popular with constituencies. As a result, there are heavy political pressures for greater spending in the budget process. That makes it necessary for the governor or mayor and budget office to act as guardians, or “conservers,” in the process and say “no” to increased spending. That is necessary because although many programs are popular and effective, people generally do not like to pay taxes, and some programs are wasteful and ineffective. Budgeting, then, is a struggle between the parts and the whole of the budget. Many participants are trying to increase their piece of the pie, while those who have responsibility for the totals must try to resist pressures to spend more and more. This dynamic between spenders and savers takes place both in executive formulation and legislative approval. When the governor, mayor, or county executive finally prepares a set of budget requests, they are submitted to the legislative body: the state legislature, city or county council, or board of aldermen. At this stage of state and local budgeting, legislative hearings are usually held in which the administration is called on to defend its requests, and various interests can testify for or against the budget. Law enforcement leaders may show up to argue for more money for police. Local activists may oppose cutbacks in day care programs or senior citizen activities. This is the stage at which politics and partisanship become important. In the states, governors usually have greater success with their budgets if one or both houses of the state legislature are of the same political party. Many municipalities are nonpartisan, but the political relationship between the mayor and the town council can be crucial in determining whether the executive proposals are accepted or amended.
Individual budget rules are very important and vary between states and municipalities. These can be researched online by examining the budget process in a specific jurisdiction or state. In virtually all states and municipalities, the legislative body must give final approval to the budget. At that point, it goes back to the executive for the execution of the budget—actually cutting the checks, filling the potholes, and spending the money. At the state level, there are different agencies and departments, including the central budget office, that participate in the implementation of the enacted budget. The state or city treasurer’s office often plays an important role in this. Revenues must be collected, whether from property taxes, state income taxes, speeding tickets, or lottery ticket proceeds, and deposited in the treasury. Again, budgeting is very dependent on estimates, which are informed predictions of what money will come in and what must be spent. State and local governments have procedures for monitoring budgeting and making slight adjustments if necessary. Running a shortfall can make the next budget cycle very difficult. What forces shape state budgets, and how can differences be explained? The 50 states have different histories, levels of wealth and population, and political cultures. States such as Wyoming and Alaska are resource-rich, and state coffers overflow when energy prices are high. In Alaska, citizens pay low taxes, and each receives a check from the state each year from oil and gas royalties. Compare that to California, the most populous state, which often runs shortfalls as large as many billions of dollars in an economic downturn. Some states are more politically liberal than others and may be willing to extend state services even if it means higher taxes. In more conservative states, it is usually more difficult to start new programs. State electoral laws are an important factor in budgeting. Western states that allow citizen ballot measures such as the referendum and initiative have in recent years seen the rapid growth of “ballot box” budgeting. Voter-approved measures that either limit how states can raise revenues or mandate spending for certain popular purposes, such as health care or education, have proliferated in recent years. While often well intentioned, the result is to make budgeting much harder for the elected representatives in both the executive branch and legislative branch.
In the early 21st century, state and local budgeting remains highly influenced by what goes on in Washington, D.C. Federal mandates, which involve Congress passing laws requiring states to do certain things, often without supplying the money, have become more prevalent and more resented by states. No factor is more influential on state budgets than the Medicaid program. Medicaid helps low-income and indigent citizens receive health care, but the costs are increasing so rapidly that they are crowding out other important outlays in state budgets. Mandates affect local governments as well, such as the No Child Left Behind Act, which required changes in education spending by state and local governments without receiving what was promised from Washington. Fiscal federalism, along with the close financial interrelationship among the budgets of all levels of governments, is the reason that state and local budgets cannot be understood without understanding the role of the national government. Despite constitutional guarantees of independent political power, Congress often uses money (or the threat of withholding it) to get states to do what it wants. This was the case when the federal government threatened to withhold federal highway funds from any state that did not raise its legal drinking age to 21 during the 1980s. State and local governments retain ultimate control over their own budgets, but those budgets are also highly determined by what the federal government does and the larger national political and economic climate. Further Reading Rubin, Irene. The Politics of Public Budgeting. Washington, D.C.: Congressional Quarterly Press, 2005; National Conference of State Legislatures, “Fundamentals of Sound State Budgeting Practices.” Available online. URL: www.ncsl.org/programs/fiscal/fpfssbp.htm. Accessed July 16, 2006; Kelley, Janet M., and William C. Rivenback. Performance Budgeting for State and Local Government. Armonk, N.Y.: M.E. Sharp, 2003. —Lance T. LeLoup
campaign finance (state and local) For the most part, state and local governments are free to set their own campaign finance regulations.
Federal laws apply only to candidates for Congress and the presidency, although states and localities are bound by the First Amendment and U.S. Supreme Court decisions applying it to campaign finance regulations. Given this latitude, states vary significantly in the types of rules governing campaign finance. Some of this variation is due to differences in running for office. For example, campaigning for the Wyoming state legislature is quite different from campaigning for mayor of New York City. But much of the variation is a result of differences in the willingness to implement reforms; some states and localities have been more amenable to experimentation with campaign finance regulations to make elections fairer and to limit either the appearance or reality of corruption that can stem from contributions to candidates for public office. The most common campaign finance regulation is disclosure—requiring candidates to state contribution and expenditure information. All states and localities have some type of disclosure requirement, although what is required to be disclosed varies significantly. For example, about half the states require the occupation and employer of the contributor to be listed, while the other half require only his or her name and address. Also, about 10 states do not require independent expenditures to be disclosed. Some states require campaign disclosure statements to be filed electronically—making the data easily available—while others require paper filings, severely limiting the utility of the data for the press, public, and researchers. Beyond simply requiring candidates to disclose their campaign finance activity, many states and localities also place limits on the amount of money an individual, labor union, or business can contribute to candidates and political parties. For example, Los Angeles limits contributions to city council candidates to $500 during primary elections and $500 during runoff elections. Contribution limits often vary across different offices (for example, gubernatorial candidates usually have different limits than state legislative candidates) and sometimes vary across the type of contributor (for example, individuals and businesses may have different limits). Unlike on the federal level, in most states, corporations and labor unions can give directly to candidates and parties.
Contribution limits are meant to reduce the overall amount of money spent in elections, diminish the influence of large donors, and force candidates to rely on small contributions from “average” citizens rather than large contributions from wealthy donors and political action committees. Research on contribution limits has generally found that they are not effective at accomplishing these goals. There are many ways donors get around contribution limits, the most prominent of which are independent expenditures that cannot legally be limited. Also, contribution limits, rather than forcing candidates to seek funds from “average” citizens, may prompt them to rely on lobbyists who will assist in fundraising by collecting large numbers of relatively small donations from their clients and delivering them to the candidate (a process referred to as “bundling”). Thus, there is minimal evidence that contribution limits reduce the overall amount of money raised by candidates or prevent wealthy individuals and organizations that wish to spend large sums of money influencing elections from doing so. A less common reform, but one that has more promise than contribution limits, is the public financing of campaigns. The basic idea is to replace privately raised money with public money in order to eliminate the corrupting influence of the former. All public financing programs are voluntary; as the Supreme Court ruled in Buckley v. Valeo (1976), government cannot prevent candidates from raising funds or using their own money to finance their campaign. A common form of public financing is through a system of matching funds, whereby private money raised by candidates is matched with public money in exchange for candidates agreeing to certain restrictions. This is partial public funding, in that candidates still raise some of their funds from private sources. The cities of New York and Los Angeles, along with a handful of states, have matching funds programs. In Los Angeles, candidates who join the matching funds program agree to a limit on expenditures, agree to a limit on the use of personal funds, and also are required to participate in debates. To qualify, a city council candidate must raise $25,000 from individuals in sums of $250 or less (there are higher requirements for mayoral candidates). Once they qualify, they can have contribu-
tions from individuals matched on a one-to-one basis, meaning if they receive a $250 contribution, they can receive $250 in matching funds from the city. There is a limit to the overall amount of matching funds the candidate can receive that varies depending on the office and whether it is a primary or runoff election. New York City structures its program in a similar way, except they have a four-to-one match: for every $250 contribution, candidates can receive $1,000 in matching funds. Like Los Angeles, New York City imposes an expenditure limit on participants and will match funds only up to a certain amount. The record on matching funds programs has been mixed. On one hand, both Los Angeles and New York City have high participation rates: The majority of serious candidates in both cities accept matching funds. Further, there have been examples of candidates who would not have been able to mount a serious campaign without public funding. So matching funds programs have increased the number of candidates able to run for office (although exactly how many is hard to tell). On the negative side, some candidates have rejected public funding with few negative repercussions. Most notable were Richard Riordan and Michael Bloomberg, successful mayoral candidates in Los Angeles and New York, respectively. Both men financed their own campaigns with personal wealth and significantly outspent their publicly funded opponents. While both New York City and Los Angeles have provisions that benefit program participants when faced with a high-spending opponent, these provisions simply were not enough. Mike Woo, Riordan’s main opponent in the 1993 mayoral election, was outspent by $4 million, while Mark Green, Bloomberg’s 2001 opponent, was outspent by more than a 4 to 1 ratio. Further, it does not appear that matching funds programs have reduced the overall amount of private money raised; public money has been used to supplement private money rather than replace it. That said, it is unclear whether matching funds programs have altered the sources from which candidates receive funds or whether they rely less on “established interests” and more on “average citizens.” Another way public financing has been implemented is through “clean money” programs. These are similar to matching funds programs in that they
provide public money to candidates in exchange for their agreement to abide by certain restrictions. The difference is that private donations do not get matched; all private money (except for a handful of small donations) is prohibited, and candidates receive a lump sum to run their campaigns. In other words, it is a full public financing program rather than just a partial one. The program works like this: To qualify for public funds, candidates need to raise a certain number of small (typically $5) contributions from individuals within the jurisdiction. This requirement is meant to demonstrate public support for the candidate ( justifying the use of public funds) as well as weed out “vanity” candidates who are not serious about the election. Once candidates raise enough $5 contributions, they are given a lump sum of public funds to run their campaigns. The amount they receive varies depending on the type of election (primary or general), the level of opposition they face, and whether their opponents are also participating in the clean money program. They are prohibited from raising additional private money or using their own funds during the campaign (although sometimes they are allowed to raise small sums at the beginning of the campaign). Maine was an early adopter of clean elections in 1996, and other states (e.g., Arizona and Vermont) have followed suit. Most states limit clean money to state legislative races. Early indications from Maine and Arizona were promising: There appeared to be more candidates running for office, and elections were generally more competitive, one of the central goals of clean money reforms. However, isolating the effects of clean money from other factors (such as term limits, which can also increase competitiveness) is problematic, and some studies have found that clean elections have a minimal effect on competitiveness. Also, many candidates choose not to accept clean money and usually are able to raise significantly more than their clean money opponents (although the percent of candidates refusing public money varies significantly from state to state). If candidates sense that accepting clean money puts them at a disadvantage to their opponents, they are less likely to participate in the program even if they support its goals. Independent expenditures may also create problems for clean money regimes. While some states provide additional money to clean money
candidates when they are opposed by independent expenditures, the existence of substantial independent expenditures—both in favor of and opposed to clean money candidates—may create anomalies in the funding formula that undermine the fairness of the system. In general, public financing programs suffer from two major problems. First, because they are required to be voluntary, candidates can opt out, limiting the effectiveness of the program. None of the existing public financing programs prevents candidates from raising large sums from interest groups and wealthy individuals if they so choose. Second, most of the public financing programs are underfunded, creating a competitive disadvantage for candidates who accept public funds. There is often significant public opposition to using taxpayer money to fund political campaigns, and thus politicians are often stingy with funding these programs. Even when they initially fund public financing programs at appropriate levels, they often lose their value over time through inflation. For example, the amount of public matching funds candidates in Los Angeles can receive is the same now as it was in 1993, when it was first implemented, despite the fact that the cost of campaigning has increased significantly. Without adequate funding, not only do candidates have an incentive to reject public funding, but those who do join are likely to be significantly outspent by their nonparticipating opponents. If states and localities increased funding for their public financing programs, it could provide incentives for more candidates to participate and level the playing field between candidates who are publicly funded and those who are not. In conclusion, states and localities have experimented with different types of campaign finance regulations in an effort to increase competition, reduce the cost of elections, and limit the influence of wealthy donors and interest groups. There have been laboratories in which different types of campaign finance regulations have been tried and tested. Assessing whether these experiments are successful, however, is quite difficult due to the difficulty of isolating the effects of various reforms and a lack of usable data. Despite these hurdles, as more research is conducted on state and local campaign finance reform, we will gain a better understanding of the
impact of these reforms and their effectiveness at accomplishing their goals. See also campaign finance. Further Reading Gierzynski, Anthony. Money Rules: Financing Elections in America. Boulder, Colo.: Westview, 2000; Gross, Donald A., and Robert K. Goidel. The States of Campaign Finance Reform. Columbus: Ohio State University Press, 2003; Malbin, Michael J., and Thomas L. Gais. The Day after Reform. Albany, N.Y.: Rockefeller Institute Press, 1998; Schultz, David, ed. Money, Politics, and Campaign Finance Reform Law in the States. Durham, N.C.: Carolina Academic Press, 2002; Thompson, Joel, and Gary F. Moncrief, eds. Campaign Finance in State Legislative Elections. Washington, D.C.: Congressional Quarterly Press, 1998. —Brian E. Adams
charter, municipal and town City, county, and town charters are legal documents adopted by governments, usually through a public vote that grants the local government limited autonomy to manage the public’s affairs. The presence of a charter is referred to as “home rule.” (For the purposes of this essay, city and county governments are referred to as municipal governments.) Charters must be permitted by state law. Five states, including Alabama, Hawaii, Nevada, New Hampshire, and North Carolina, have no provisions for home rule. Nine other states limit home rule to certain categories of cities only. States also permit counties home rule, though only 28 states permit county home rule. County charters are also frequently limited to certain categories of counties. Municipal charters appear to be similar to the U.S. Constitution. Both establish the powers of their respective governments. Both describe the structure of the government and generally establish limits of governmental authority. They also determine fundamental questions concerning elections and the duties of major officeholders. However, there are fundamental differences between municipal charters and the U.S. Constitution. City charters have very limited legal standing. Municipalities are always subordinate to state laws
and rules. States reserve the right to take away city and county charters, and the state retains significant control over local governments through legislation. It is common, for instance, for the state to limit the taxing authority of cities and counties. Certain classes of city employees are often governed by state laws rather than city ordinances. Police officers, for instance, are often governed by state codes, and the city is often limited in its ability to hire and fire police chiefs. State and federal legislation often requires cities and counties to follow prescribed rules and regulations. California cities, for example, are required to meet state requirements for the recycling of refuse material, and federal and state legislation provides rules for health standards, air pollution control, and affirmative action. Cities and counties are best viewed as “creatures” of states, whereas the federal government is sovereign. The powers of municipalities are limited to those described in state law and subject to state review. The legal basis for limited autonomy, referred to as Dillon’s rule, was first enunciated in 1868 and is generally unchallenged today. As stated by Judge John Dillon of the Iowa Supreme Court, the powers of municipalities are limited to “(f)irst those granted in expressed words; second those necessarily or fairly implied in or incident to the power expressly granted; third those essential to the accomplishment of the declared objects and purposes of the corporation.” While challenges to this notion occur, the principle appears to be well established in case law. One might ask, therefore, of what value are city and county charters? They provide municipal governments with control over some important matters, such as the form of government, the duties of elected and appointed officials, and the structure of election districts. At the city’s options, other provisions can and generally are included. This essay examines the major provisions of municipal charters and looks at the advantages and disadvantages of charters. City charters have several common features. They describe the form of government, which includes the powers of major appointed and elected officials. Particularly significant is the presence of several very different forms of government reflected in city charters. Because of the presence of charters, the organization of government in cities contains greater variety than occurs at other levels of government. Normally,
cities and counties follow one of four generally established structures of government. The earliest and still a major form of municipal government is referred to as the weak mayor plan. In weak mayor governments, an elected city council is the major policy making body. A mayor is separately elected with some independent powers described in the charter, but major decisions remain with the council. Major appointments of appointed officials, for instance, are the prerogative of the council. In most weak mayor cities, the mayor is elected citywide. Council members are elected either citywide or in election districts. The strong mayor form of government provides an elected mayor with significant control over the day-to-day running of city government. This includes appointment of city employees and significant discretion over the organization of departments within the city. The city council in a strong mayor city exercises general oversight of important public policies and can pass ordinances subject to the mayor's veto. The commission form of government elects city department heads directly and creates a city council composed of the elected heads. Thus, voters elect a police chief, fire chief, public works chief, and so on. The charter creates a council composed of these officials and gives them major responsibility for running the affairs of the city. Under a commission charter, there is usually no mayor. The council-manager form of government elects a city council, usually few in number, and grants it major oversight of city policy but not of city administration. Day-to-day management is delegated to an appointed city manager, who is expected to be a professional administrator selected for his or her knowledge and skills. The council-manager structure assumes that city council members are part-time officials. They are expected to have very limited powers over city management and frequently are not involved in the hiring and firing of city employees. Usually, city council members are elected at large on a nonpartisan ballot. The mayor is a member of the council, selected by council members, with no duties beyond presiding at city council meetings and representing the city at public functions. Each city adopts its charter with these general models in mind but often with significant variations. Some cities, for instance, refer to the major
appointed official as the “city administrator” and reserve some of the powers usually held by the city manager to the council. Cities often vary the kinds of election districts and choose to add other elected administrators. A second feature of charters describes the election systems of the city. Some cities permit partisan elections, while others require that elections be explicitly nonpartisan. Nonpartisan elections are required in all council-manager cities and are common in all but the largest cities. The nature of election districts also varies. Traditionally, city council members were elected in wards, small divisions of the city designed to reflect differences in racial and ethnic composition. Proponents of the council-manager structure advocate atlarge elections whereby each city council member is elected by the entire city electorate. The number of city council members also is described in the charter. Some cities elect more than 50 city council members, other cities as few as five. Council-manager cities generally contain five to seven council members. Some cities elect the council members at large but require some or all to live in prescribed districts. Cities frequently elect some staff members in addition to the city council and mayor. City treasurers are commonly elected separately. Some cities elect their city clerks and city attorneys. The city charter describes these positions and often includes the qualifications for officeholders. City charters are often long and detailed documents that usually include numerous additional provisions. Charters describe the structure and duties of various city commissions and special districts. The charter of Los Angeles, for example, describes a system of advisory neighborhood councils. Charter provisions frequently respond to special interests of particular politicians or citizen groups. The City of Milwaukee charter states that “Depots, houses or buildings of any kind wherein more than 25 pounds of gun powder are deposited, stored or kept at any one time, gambling houses, disorderly taverns and houses or places where spirituous, ruinous or fermented liquors are sold without license within the limits of said city are hereby declared and shall be deemed public concerns or common nuisances.” The Houston, Texas, charter until recently included a provision banning the playing of hoops in city streets, a common form of children’s play in the 1890s.
Charters are legally subordinate to state constitutions and legislation, and in most states, the state legislature and the courts severely curtail city powers. Despite these limitations, charters serve a number of important purposes and are often of significant benefit to the residents. Charters do permit some degree of response to the interests of the citizens. They can provide greater representation of neighborhoods or more significant influence of the city as a whole. By varying the kind of charter system, the city can respond to varying goals and interests. Variations in charters have encouraged innovation in government management. The council-manager government is frequently credited with improving efficiency and generating innovations, particularly in public management. The city manager profession has often been in the forefront of changes in public administration practice. Without the centralized authority and tradition of hiring professional city managers, cities would be less able to attract skilled practitioners. The flexibility of charters and the ability of cities to change them has allowed cities to respond to pressures for changes in the policy process. As demands for neighborhood representation come to the fore, charters can be changed to encourage greater diversity of representation. With increasing concern for central policy direction and more central management, cities have changed their charters to provide the mayor with more authority. Commission and council-manager structures have often followed evidence of corruption and mismanagement in some cities. Last, the presence of different forms of city organization has permitted analysts to evaluate different structural arrangements and thereby test some basic propositions of political science. Does the representation of racial and ethnic minorities increase with more city council members or district elections? Does efficiency improve with central authority and professional management? Do nonpartisan elections encourage different kinds of decisions than partisan elections? By testing such propositions, political scientists know more about the effects of different forms of organization and representation on public policy. Critics cite several problems that occur because of the charter system. The charter system places emphasis on the city as defined by city boundaries,
not the urban area of which it may be a part. Because of the emphasis on the interests of specific residents, it encourages the creation of smaller cities rather than metropolitan governments. Critics argue that the home rule system encourages separating cities within an urban area, and it makes cooperation among cities more difficult. Suburbs, for instance, often use the presence of a charter to attack attempts to share the burdens of policies that are regional in nature. Taxing residents to pay for areawide problems, for instance, can be viewed as an infringement on home rule. Because the creation of small communities is relatively easy, the home rule system promotes the separation of residents within communities, encouraging communities that are stratified by wealth, race, and ethnicity. The presence of the charter and the term home rule may give residents the impression that the city has more control over its destiny and more autonomy than is actually present. This may lead to disillusionment on the part of residents when they attempt to change policy. Municipal charters determine the primary organizational features of American cities. They spell out the important structural features of the government and the election system. They provide the residents with some degree of control over public policy, though it must be emphasized that control ultimately resides with the state government. Charters have encouraged innovation and provided a variety of kinds of city government that permit one to analyze the value of different structural arrangements. The charter system, however, is often cited as an impediment to intergovernmental cooperation, and it may encourage the stratification of cities by class and race. Further Reading Frug, Gerald E. City Making: Building Communities without Walls. Princeton, N.J.: Princeton University Press, 1999; Ross, Bernard H., and Myron Levine. Urban Politics: Power in Metropolitan America. 6th ed. Itasca, Ill.: Peacock, 2001; Saltzstein, Alan. Governing America’s Urban Areas. Belmont, Calif.: Wadsworth-Thompson, 2003; Syed, Anwar. The Political Theory of American Local Government. New York: Random House, 1966. —Alan Saltzstein
city manager Under a council-manager structure of government, the city council (legislative branch) appoints a city manager to serve as the chief executive and administrative officer of the city. The city council also oversees the city manager and has the ability to fire him or her. The city manager appoints and removes department heads and sees that a city’s ordinances are enforced. In addition, the city manager is usually highly educated and trained and is considered a professional. In fact, 63 percent of all city managers held at least a master’s degree by the turn of the 21st century. Most city managers hold master’s degrees in public administration, urban planning, or public policy. In the classic council-manager structure of government, there is no separately elected mayor to function as chief executive. In such traditional council-manager cities, the city council selects the mayor, who is also a member of the city council. Change has occurred, however, whereby many mayors in council-manager cities are now directly elected. The council-manager structure of government was part of Progressive Era reforms in the early 20th century. Ending boss rule and electing honest men to city office were the original goals of the reformers. Reformers also desired the council-manager form of government because they believed a city manager would impartially and rationally administer public policy established by a nonpartisan city council. Thus, they believed that the city manager would be apolitical. Also, during this time period business and corporate ideals were very popular, and the council-manager structure of government reflected these ideals. Many saw the council-manager structure as introducing businesslike efficiency in government. Early writers compared a community’s citizens to stockholders in a business, the city council to a firm’s board of directors, and the city manager to a corporate manager. In addition, Woodrow Wilson argued that a politics-administration dichotomy existed. Therefore, in a traditional council-manager system, the city council should dominate politics and policy, and the city manager should dominate the administration and implementation of policy. In fact, the council-manager system does not work in this way, and city managers themselves as well as others discredit the politics-administration
dichotomy. City councils do not focus exclusively on politics or policy, and city managers do not focus exclusively on administration or implementation. For a long time, city managers have been pulled between the roles of technician and agent of the council on one hand and politician and policy leader on the other. Although Progressive Era reformers viewed the traditional city manager as a politically neutral administrative expert, the modern city manager is not apolitical. In fact, the modern city manager is deeply involved in policy making and politics. The policy role includes control over the council agenda and policy initiation and formulation. There has been a significant increase in the percentage of city managers who perceive the policy role to be the most important of three roles of policy, politics, and administration. However, the city council does have final say over policy making. The city manager influences public policy but should not determine it. For example, it would be inappropriate and in some cases unethical for a city manager to take policy making credit away from the council. Likewise, city councils expect the city manager to accept blame for a failed policy. City managers also serve a political role as brokers or negotiators of community interests. In this role, city managers spend most of their time sharing knowledge, educating, negotiating among various nongovernmental groups and individuals within the city, and encouraging communication by linking people. They attempt to resolve conflicts and create compromises. Therefore, city managers are not political in the sense of building a constituency, but they are political in the sense of trying to build consensus. Currently, the boundary line between the roles of the city manager and city council is increasingly blurred, but the two continue to have shared but distinct responsibilities. Their interaction might be referred to as an activist-initiator pattern. In other words, city council members are active policy proponents. City managers are actively involved in developing middle-range, broad-range, long-term, and citywide proposals. City manager involvement in policy is not new. What is new is the need for city managers to be the source of broad policy initiation. At times, city managers may be bureaucratic entrepreneurs who create or exploit new opportunities to push their ideas forward. Bureaucratic entre-
preneurs promote and implement policy innovations. If a community has a mayor, it is significantly less likely that a city manager will emerge as an entrepreneur. In addition, entrepreneurial city managers are more likely to emerge in cities whose local public sector workers are highly paid. A more heavily unionized local municipal workforce reduces the chances that an entrepreneurial manager will emerge. Finally, very weak taxpayer groups or very weak municipal unions increase the probability that an entrepreneurial manager will emerge. Leadership is the key factor that pushes entrepreneurial city managers to promote their ideas. The dominant approach used by most entrepreneurial city managers is teamwork. Handling issues quietly, behind the scenes, is the main strategy employed. Entrepreneurial managers need to be salespeople for new policies both to elected officials and citizens. Another key aspect of the city manager’s role involves whether the city council can maintain control over him or her in an efficient manner. Principalagent theory is helpful in examining this issue. Under principal-agent theory, the city council—the principal—has political incentives to control the city manager—its agent. According to principalagent theory, an agent is passive and undynamic. In addition, some may view the city manager also as a principal, because he or she has responsibility for subordinate employees, his or her agents. Examining the city manager’s power as agent, one finds that he or she has the power of policy implementation and also participates deeply in other aspects of the policy process. The power that the city manager has to appoint and remove department heads varies. City managers often discuss these decisions with the city council. In addition, the city manager is likely to develop the budget in consultation with the city council. Most city managers have significant autonomy in managing administrative operations and making staffing decisions. The city council may also influence the city manager. The council’s ability to hire, retain, and fire the city manager is a key example. The council also has authority over city managers through performance evaluations. Likewise, the council has authority to make adjustments to the city manager’s salary. In reality, however, few city councils monitor city managers
to any sizeable degree. As a result, city managers may be agents, but their principals (city councils) appear to exercise little formal oversight. In fact, the relationship between the city manager and city council is often viewed as cooperative. James M. Banovetz asserts that four sets of public attitudes toward government have profoundly affected city managers. These sets of public attitudes include an early 20th-century idea that "government is corrupt," which stemmed from machine politics. From 1915 to 1935, the public believed that "government should be limited." During this period, the council-manager plan was sold to the public on the basis of its parallels with contemporary business models and the politics-administration dichotomy. Progressive Era reformers made the case for needing a city manager. The public believed that "government is paternalistic" from 1935 to 1965. During this period, city managers supervised a growing list of programs and responded to a wider set of citizen demands. In addition, the council-manager plan's popularity grew rapidly during this and the following period. From 1965 to the present, the public has believed that "government is excessive." During this period, people have viewed the government's role in society as too pervasive. Banovetz also offers the possibility that future city managers will represent a synthesis of the administrative officer and the policy activist-leader. This is a much different conception from others' vision that the city manager's focus will continue to move further toward policy and political roles. Opponents of the council-manager plan question whether the plan and, therefore, the city manager can survive. Critics contend that the council-manager form of government is, among other criticisms, elitist, unresponsive, and insulated, with a leadership void and no capacity to manage conflict. The criticism that this structure of government is elitist stems from its characterization during the Progressive Era reform movement. During the reform movement, elected officials were drawn from the upper strata of the community, and the mayor was not directly elected. Currently, the mayor is usually directly elected, although this may lead to a decline in the power of the city manager. Changes in the council-manager structure of government to increase responsiveness have also been made. In addition to a directly elected mayor, the
council is more diverse, with many cities switching from at-large to district council elections. Proponents of the council-manager form of government argue that its key features can remain intact. Although the city council now does more administration and management and the city manager does more in the way of mission formulation, the basic division between the city manager and city council is not being lost. City managers continue to look to the city council for direction, even though they must do more to frame options and press the council for resolution. City council members continue to rely on the city manager and staff for professional support at the same time that they want to be broadly involved themselves. The council also respects the manager’s position as head of the municipal organization. In addition, the council seeks to be informed about administrative and managerial decisions. As a third argument, Frederickson, Johnson, and Wood contend that the council-manager label has become meaningless, because the direct election of a mayor in a traditional council-manager structure of government adds a powerful element of political leadership. It also changes the role and functioning of the city manager and the relationship between the city manager and city council. For this reason, they call council-manager cities with directly elected mayors “adapted administrative” cities. In addition, they categorize council-manager cities with more than half of their council elected by district as “adapted administrative” cities. This essay has addressed the role of the city manager, changes in the city manager’s role, and changes in the council-manager structure of government. Council-manager cities will likely continue to evolve, and, therefore, so will the role of the city manager. Further Reading Banovetz, James M. “City Managers: Will They Reject Policy Leadership?” Public Productivity and Management Review 17, no. 4 (1984): 313–324; Caraley, D. City Governments and Urban Problems: A New Introduction to Urban Politics. Englewood, N.J.: Prentice Hall, 1977; Frederickson, H. George, Gary Alan Johnson, and Curtis Wood. “The Evolution of Administrative Cities.” In The Adapted City: Institutional Dynamics and Structural Change. Armonk, N.Y.: M.E. Sharpe,
2004; Nalbandian, John. “The Contemporary Role of City Managers.” American Review of Public Administration 19, no. 4 (1989): 261–278; Newell, Charldean, and David N. Ammons. “Role Emphases of City Managers and Other Municipal Executives.” Public Administration Review 45 (May–June 1987): 246–253; Selden, Sally, Gene A. Brewer, and Jeffrey L. Brudney. “The Role of City Managers: Are They Principals, Agents, or Both?” American Review of Public Administration 29, no. 2 (1999): 124–148; Stillman, Richard J. “The Origins and Growth of the Council-Manager Plan: The Grass-roots Movement for Municipal Reform.” In The Rise of the City Manager: A Public Professional in Local Government. Albuquerque: University of New Mexico Press, 1974; Svara, James H. “Is There a Future for City Managers? The Evolving Roles of Officials in Council-Manager Government.” International Journal of Public Administration 12, no. 2 (1989): 179–212; ———. “The Shifting Boundary between Elected Officials and City Managers in Large Council-Manager Cities.” Public Administration Review 59, no. 1 (1999): 44–53; Teske, Paul, and Mark Schneider. “The Bureaucratic Entrepreneur: The Case of City Managers.” Public Administration Review 54, no. 4 (1994): 331–340; Watson, Douglas J., and Wendy L. Hassett. “Career Paths of City Managers in America’s Largest Council-Manager Cities.” Public Administration Review 64, no. 2 (2004): 192–199. —Susan E. Baer
constitutions, state A constitution is an official document that establishes the rules, principles, and limits of a government. While Americans largely revere the U.S. Constitution as sacrosanct, most are completely unaware that every state in the union has its own constitution as well. The general purpose of state constitutions is similar to the federal Constitution. They describe the structure of government, the powers of officials, and the process of amending the document itself. They also delineate the individual rights recognized by the state, while conferring various obligations and responsibilities on the state. Like the U.S. Constitution, state constitutions shape the environment in which government takes place by establishing what governments must do and what they may not do. However, state constitu-
tions differ substantially from the U.S. Constitution, and they differ considerably from one another. The basic framework, however, is fairly uniform. Like the U.S. Constitution, most state constitutions begin with a preamble. Minnesota’s is typical: “We, the people of the state of Minnesota, grateful to God for our civil and religious liberty, and desiring to perpetuate its blessings and secure the same to ourselves and our posterity, do ordain and establish this Constitution.” They are intended as broad statements of general purpose, and efforts to add more specific goals have not been successful. In 1976, for example, South Dakota voters overwhelmingly rejected a proposal that would have added the phrase “eliminate poverty and inequality” to its constitution’s preamble. The first actual article of most state constitutions contains a list of fundamental individual rights. This is a clear contrast to the U.S. Constitution, in which nearly all rights have been added as amendments to the original document. Most of the rights listed are similar to those in the U.S. Constitution, such as freedom of speech and freedom of religion, or restrictions forbidding cruel and unusual punishment. However, many state constitutions contain more rights than those listed in the federal Constitution. A total of 19 states specifically provide for equal rights by gender, and nearly 40 states recognize various guarantees to victims of crime. At least 11 states clearly establish a right to privacy (which has been a thorny issue at the federal level) by including passages like this from Montana’s constitution: “The right of individual privacy is essential to the well-being of a free society and shall not be infringed without the showing of a compelling state interest.” Some rights were recognized by states long before they were legally established nationally. Wyoming, for example, extended voting rights to women in 1869, more than half a century before the Nineteenth Amendment to the U.S. Constitution was ratified in 1920. Even more notably, Vermont’s prohibition of slavery—“All persons are born equally free and independent . . . ; therefore no person . . . ought to be holden by law, to serve any person as a servant, slave or apprentice . . . unless bound by the person’s own consent”—appears in its constitution of 1777. Declarations of rights are typically followed by a statement addressing the division of powers. All states
share the same essential separation of powers framework as the national government, but whereas the U.S. Constitution largely implies this division, most state constitutions express it clearly, as in Idaho’s: “The powers of the government of this state are divided into three distinct departments, the legislative, executive and judicial, and no person . . . charged with the exercise of powers properly belonging to one of these departments shall exercise any powers properly belonging to either of the others.” Lengthy descriptions of the functions, structure, powers, qualifications, and selection process of the three branches usually follow. Every state but one establishes a bicameral legislature; only Nebraska’s constitution specifies that “the legislative authority of the state shall be vested in a Legislature consisting of one chamber.” The number of legislators, their terms of office, and their qualifications for office are all spelled out in state constitutions, and there is wide variance in all of these areas. The structure of the executive branch varies across states, as well. In many states, governors are elected separately from other executive officials, such as the secretary of state and the attorney general, effectively creating a plural executive. Likewise, state constitutions establish matters such as whether governors (and other executive branch officials) shall be term limited. Finally, the rules pertaining to the judicial branch are spelled out, including the important matter of selection. Unlike federal judges, most state judges are elected to office, as prescribed by the rules in the state constitution, though the actual election procedures vary substantially. In some states, there is an ordinary election with competing candidates; in others, judges are initially appointed to office and later face a retention election, in which voters decide whether they should remain on the bench. State constitutions also contain provisions for the impeachment of public officials. Even here, though, there is some variance, as the constitution of Oregon does not permit ordinary impeachment but notes that “incompetency, corruption, malfeasance or delinquency in office may be tried in the same manner as criminal offenses, and judgment may be given of dismissal from office.” The institutional differences between the states and the federal government can be consequential, but the purposes of these sections in the constitutions themselves are essentially the same. That cannot be
said of the substantive provisions pertaining to policy matters found in every state constitution. Unlike the U.S. Constitution, state constitutions commonly address rules of taxation (and exemptions to taxation), health care and welfare issues, environmental issues, and a host of other policies, from abortion to lotteries. Providing education is a constitutional obligation in nearly every state; passages such as this one in Kentucky’s constitution are almost universal: “The General Assembly shall, by appropriate legislation, provide for an efficient system of common schools throughout the State.” By contrast, the U.S. Constitution generally avoids any mention of specific issues. Political scientist Christopher Hammons has found that 39 percent of state constitutions deal with these kinds of subjects, compared to only 6 percent of the U.S. Constitution. State constitutions also establish procedures of direct (participatory) democracy (if any), and they determine the rules regarding local governments. Finally, like the U.S. Constitution, state constitutions outline the process for amending the documents themselves. There are several methods for amending state constitutions, but the most common by far is the legislative proposal. If both chambers of the legislature agree to an amendment (usually by a supermajority vote of two-thirds or three-fifths), the measure is placed before voters for their approval. (Voter approval is not required in Delaware.) Some 18 states permit voters to amend their state constitutions directly, without involving the legislature at all, through a constitutional initiative process. In recent years, this procedure has resulted in a number of constitutional bans on gay marriage, similar to the one in Ohio’s constitution: “Only a union between one man and one woman may be a marriage valid in or recognized by this state and its political subdivisions.” A number of states take the amendment process even further and require that voters be asked every 10 or 20 years whether to hold a constitutional convention to review the entire document. While wholesale changes to constitutions are infrequent, amendments are quite common, and the vast majority of those proposed by legislatures are ratified. The typical state constitution is more than three times longer than the federal one; even the shortest state constitution (Vermont’s) is longer than its national counterpart. The specific policy provisions discussed
above explain at least part of the reason for the greater length. These provisions can be quite narrow and precise; from the Alabama constitution: “The legislature may . . . provide for an indemnification program to peanut farmers for losses incurred as a result of Aspergillus flavus and freeze damage in peanuts.” New York’s constitution meticulously details the maximum length and width of ski trails on specific mountains. Another reason for the length of state constitutions is that they are so frequently amended. While the U.S. Constitution has been amended only 27 times, the average state constitution has roughly 120 amendments. California’s constitution has been amended more than 500 times since its inception in 1879. Another key difference is longevity. Most states have replaced their entire constitution at least once since their entry into the union, and many states have done so multiple times. Louisiana’s current constitution is its 11th. Conventional wisdom has long held that the lack of permanency is an inevitable by-product of excessive specificity, the argument being that such constitutions rather quickly become obsolete and must be replaced. The “father” of the U.S. Constitution himself, James Madison, was an advocate of a short constitution focusing on institutions and more general principles. He believed that brevity was essential for the document to be durable and therefore to provide for the necessary stability of the government. But Christopher Hammons has recently argued that longer constitutions are actually more durable than shorter ones, demonstrating convincingly that the state constitutions that have been lengthier and more specific have actually survived longer than the briefer versions. He concludes that “a particularistic constitution may better represent the people that live under it, reflecting the changing policy preferences of a diverse people.” If true, it would suggest that political stability is actually enhanced by constitutions that address narrower issues than what Madison believed prudent. There is indeed wide variation across state constitutions, a result of differences in history, culture, politics, and even geography. In New Mexico, where more than 40 percent of the citizens are Latino, the constitution contains a provision dealing with bilingual education. The constitution of Illinois was written in 1971 and distinctly reflects the environmental consciousness of the time: “The duty of each person is to provide and maintain a healthful environment for the
benefit of this and future generations.” Colorado’s constitution has an entire article dealing with mining and irrigation, while Hawaii’s mandates that “the state shall provide for a Hawaiian education program consisting of language, culture and history in the public schools.” Even broader institutional differences across state constitutions can sometimes be explained by such factors. State constitutions written during the Progressive Era are the most likely to permit direct democracy, such as the recall of elected officials. Similarly, the New England tradition of town hall democracy is evident in the constitution of Vermont, the only state that mandates biennial elections for the governor and both chambers of the state legislature. Still, despite the fact that the differences across state constitutions can be quite significant, their similarities are far greater. The specifics may vary in nontrivial ways, but the framework described above is fairly standard, and that is the more important point to emphasize. State constitutions will never receive a fraction of the attention that the U.S. Constitution does, and it is hard to argue that it should be otherwise. States are important in our federal system, but the U.S. Constitution is the key document in American democracy. State constitutions have neither its historical value nor its status as a model for nascent democracies worldwide. Moreover, as Saffell and Basehart correctly point out, they are “painfully boring documents,” lacking “stirring phrases” or “eloquent prose.” Nevertheless, for the same reasons that American government cannot be understood apart from the U.S. Constitution, an appreciation of state constitutions is essential for understanding government and politics at the state level. Further Reading Hammons, Christopher W. “Was James Madison Wrong? Rethinking the American Preference for Short, Framework-Oriented Constitutions.” American Political Science Review 93 (1999): 837–849; Maddex, Robert L. State Constitutions of the United States. 2nd ed. Washington, D.C.: Congressional Quarterly Press, 2005; Rutgers Center for State Constitutional Studies. Available online. URL: http://www-camlaw.rutgers.edu/statecon/. Accessed June 20, 2006; Saffell, David C., and Harry Basehart. State and Local Government: Politics and Public Policies. 8th ed. Boston: McGraw Hill, 2005; Smith, Kevin B., Alan Greenblatt, and John Buntin. Governing States and Localities.
Washington, D.C.: Congressional Quarterly Press, 2005; Tarr, G. Alan. Understanding State Constitutions. Princeton, N.J.: Princeton University Press, 1998. —William Cunion
correction systems Correction systems are an integral part of the criminal justice system in the United States and involve agencies and personnel at the national, state, and local levels of government. However, the bulk of law enforcement duties in the United States is carried out by state and local agencies and officials. Prisons and jails are the twin pillars of state correction systems, which can also be extended to include state courts of law and probation and parole agencies. The primary purpose and function of state correction systems is to help protect public safety and welfare by arresting and charging individuals involved in criminal activity and placing under supervision or removing from the larger community individuals who have been convicted of committing a criminal offense in a court of law. At the local level of government, sheriffs, city police, and other law enforcement officials play a critical role within states’ correctional systems. These officials’ primary job is to arrest those believed to be perpetrators of crimes or civil violations. County and district attorneys prosecute those charged with crimes, and state courts decide the guilt or innocence of those charged with criminal violations. After conviction in a state court of law, criminal offenders are placed under the supervision of probation departments or, in many cases, sentenced to incarceration in state prisons. In 2004, nearly 7 million people in the United States were under some form of correctional supervision; the greatest number of these were on probation. Probation is court-ordered community supervision of criminal offenders by a probation agency that requires offenders to adhere to specific rules of conduct. Another type of community supervision is called parole. Parolees are individuals who are conditionally released to community supervision by a parole board decision or by mandatory conditional release after serving a prison term. Parolees are subject to being returned to jail or prison for rule violations or other offenses.
A more severe criminal sanction involves incarceration, or the confinement of criminal offenders in prisons or jails. Prisons are federal or state correctional facilities that confine offenders serving a criminal sentence of more than one year. Jails are correctional facilities found at the local level of government (typically county governments) that are reserved for confining individuals awaiting trial or sentencing or those who are waiting to be transferred to other correctional facilities after conviction. According to the most recent statistics available (2003), there are just under 700,000 people confined in jails, while 1.25 million are confined in state prisons. Although each state’s correctional system is designed to maintain public safety and welfare, which is a much-valued and popular goal in any democratic society, corrections-related policies have been at the center of domestic political disputes over the past three decades. The crux of the political debate involves concern over the increasingly punitive nature of corrections policies since the 1980s. The punitive nature of corrections policies is perhaps best exemplified by the use of the death penalty and the enactment of mandatory sentencing and three strikes laws. Those in favor of tough correctional policies argue that threats of long-term incarceration or execution help reduce crime by deterring potential criminals and repeat offenders from committing criminal acts because they can be certain of their sentence if
caught. Others suggest that punitive measures do little to deter crime, have led to overcrowding in jails and prisons, and place a large strain on states’ fiscal resources. One of the defining characteristics of correction policies in the United States is the high rate of imprisonment. Incarceration rates in federal and state prisons, 738 per 100,000 persons in 2005, have reached unprecedented levels, making imprisonment much more likely in the United States than in any other Western democracy. According to the Bureau of Justice Statistics, if incarceration rates remain unchanged in the United States, an estimated one of every 15 persons will serve time in prison in their lifetime. Not only are imprisonment rates in the United States the highest among Western democracies, but correction policies across the United States also tend to be more punitive in other respects. Among advanced Western democracies, only the United States retains and uses the death penalty. At the state level, the presence and use of the death penalty varies across state correction systems because state governments have significant discretion in deciding not only whether to enact a death penalty law, but among those states that have such a law, how often criminal offenders are sentenced to death. These differences in states’ propensity to enact death penalty laws and the rate at which executions are carried out are largely determined by the level of political support for the death penalty among citizens, elected officials, and judges within each state. Currently, 38 of the 50 U.S. states retain the death penalty, and their combined death row population totals more than 3,000. Of the 16 states that carried out death penalty executions in 2005 (all by lethal injection), Texas had the greatest number, with a total of 19, while Connecticut, Florida, Maryland, Delaware, and Mississippi executed the fewest, with one each. The increasingly punitive nature of corrections policy in the United States is also reflected in the growth and diffusion of mandatory sentencing and three strikes laws. The move toward mandatory minimum sentences for specific crimes began as early as the 1970s, gained speed during the “drug war” of the 1980s, and culminated in the spread of three strikes legislation in the 1990s. Under mandatory sentencing, judges are required to impose minimum periods
of incarceration for a specific crime. By 1994, all 50 states had adopted some version of mandatory sentencing laws for certain crimes. In a similar vein, three strikes laws require judges to sentence criminal offenders convicted of three felonies to a mandatory and extended period of incarceration. The severity of sanctions imposed by three strikes laws is perhaps best exemplified in California, which enacted its three strikes law in 1994. Under the California law, a criminal offender convicted of a felony who also has two prior convictions for “serious or violent felonies” is sentenced to three times the normal presumptive term (to be served consecutively), or 25 years to life, whichever is longer. Three strikes laws diffused dramatically across the U.S. states during the early to mid-1990s; 24 states adopted three strikes laws between the end of 1993 and the end of 1995. If the rise in imprisonment rates, three strikes and mandatory sentencing laws, and the use of the death penalty are indicative of an overall trend toward tougher sanctions and punishments for criminal behavior in the United States since the early 1980s, a key question remains: What exactly is driving these punitive policies? One strong possibility is that crime rates have significantly increased in the United States over the past two decades, leading policy makers and citizens alike to support greater efforts at putting criminal offenders behind bars and expanding the use of “get tough on crime” policies. However, research suggests that this is not the case. In fact, the rise in punitive correction policies has been largely independent of the crime rate in the United States. Thus, if the rise in punitive corrections policies is not directly connected to the crime rate, what is the answer? The driving force appears to be political in nature. To better understand how politics has played such a critical role in the rise in tougher corrections-related policies, it is helpful to take a look back at recent political history. Corrections and public safety policy before the early 1960s was generally viewed as a subject for practitioners and technocrats—those who had the greatest amount of expertise on the subject—while elected leaders gave it little public attention. However, as crime rates began to rise during the early 1960s, the issue became central to national partisan politics in the 1964 presidential election. During this
time, the Democratic and Republican Parties appeared to follow two divergent paths over how best to solve the crime problem. Republicans, focusing on “wedge” issues such as crime and welfare in an attempt to secure votes from Democratic-leaning voters, tended to blame rising crime rates on bad choices made by individuals, lenient judges, and soft punishments. Because of this, the Republican policy prescription followed a deterrence and incapacitation approach that promoted increased arrests, higher rates of incarceration, stricter probation monitoring, and mandatory sentencing aimed at dissuading criminal activity by removing offenders from the larger community. In contrast, liberal Democrats viewed rising crime as a result of structural problems such as urban poverty and lack of job skills. They promoted policies aimed at the “root” causes of crime and not tougher sanctions. When Ronald Reagan was elected president in 1980, crime control policy became further detached from its technical, less political roots of the 1950s and came front and center in America’s “morality politics” fight of the 1980s. In the process, the crime issue became closely connected to an ongoing cultural debate in the United States about what social ideals were most valued and what it meant for citizens to lead “just” and “righteous” lives. As part of this debate, the crime issue has often been portrayed by elected leaders, interest groups, and the media in broad, politically polarizing ways and has been used to help define what it means for individuals to act responsibly or irresponsibly and what constitutes moral and immoral behavior. Moreover, there was a renewed dissatisfaction with what was perceived to be the leniency of judges and the sense that convicted felons were not receiving sufficiently harsh treatment for the crimes they were committing. In short, the debate surrounding the crime issue became emotionally charged, and, as a result, support for punitive policies such as increased arrests, more incarcerations, and longer sentences, all driven by emotional and symbolic rhetoric, became the norm. Not wanting to be caught on the “wrong” side of a salient political debate laced with moralistic terms, risk-averse Democrats concluded that the only way to defend against law and order politics as defined in the 1980s and into the 1990s was to “get to the right of Republicans.” In other words, many Democrats felt
that in order to gain public support on the crime issue and win elections, Democrats had to be as tough as or tougher than Republicans on the issue. This shift created a convergence on the crime issue among Democrats and Republicans, lending enough political support to form legislative majorities in support of tough corrections policies at both the federal and state levels of government. The rise in the number of people behind bars in the United States, a growing number of whom are serving mandatory sentences, has important spillover implications for states’ correctional systems. One of the most prevalent problems is overcrowding in prisons and jails. With more prisons across the states filled beyond their designed capacity, some have argued that states need to allocate more money for new prison construction. This is much easier said than done, however, as construction of new prisons often comes to a grinding halt due to local political concerns and the NIMBY (not in my back yard) problem. The NIMBY problem occurs in situations in which the public supports a policy in a general sense—for example, the building of a new prison—but does not want a new prison built in its neighborhood. Thus, in many local communities, building new prisons is widely unpopular because of concerns about local public safety. Some states’ inability to sufficiently increase prison capacity to meet the demand has forced prisons to release some less-violent prisoners back to the community before they have served their full sentences. Many argue that simply building more prisons to ease the problem of overcrowding is misguided. Critics of new prison construction often point to the high fiscal costs connected to state correction systems, noting that rising prison costs have helped drain state coffers strapped for money needed to pay for other valued public services such as education, health care, and protecting the environment. Instead of building more prisons, it is argued that policy makers need to find ways to reduce existing prison populations. The high rate of recidivism among offenders released from prison after serving their sentences is one area that has received attention recently. Although tracking recidivism rates can be technically difficult, the latest data from the Bureau of Justice Statistics shows that approximately 25 percent of prisoners released from prison are rearrested and returned to prison with a
new sentence three years after being released. To help reduce this “revolving door” effect, advocates of corrections reform suggest that correction systems need to do a better job of preparing prisoners for reentry into community life by providing prisoners greater access to drug rehabilitation, job training, and other education programs while they serve out their sentences. Raising the likelihood that released prisoners will become productive members of society may serve as a long-term solution to the prison overcrowding problem. The extent to which prisons provide rehabilitation and reentry programs to prisoners varies widely. Some prisons have high-quality reentry programs that are used by a large number of prisoners, others have programs that are poorly run and underused, while still others have no programs at all. States’ fiscal capacities and support for rehabilitation programs among elected leaders and prison officials explain much of the variation in access to prisoner reentry programs across states’ correctional systems. Whether status quo correctional policies remain or whether significant reform is in the making is, in the end, a highly emotional political question that will continue to play a prominent role in domestic policy debates in the United States in the coming years. Further Reading Beckett, Katherine. Making Crime Pay: Law and Order in Contemporary American Politics. New York: Oxford University Press, 1997; Lin, Ann Chih. Reform in the Making: The Implementation of Social Policy in Prison. Princeton, N.J.: Princeton University Press, 2000; Pastore, Ann L., and Kathleen Maguire, eds. Sourcebook of Criminal Justice Statistics. 31st ed. Available online. URL: http://www.albany.edu/sourcebook/. Accessed July 29, 2006; Smith, Kevin B. “The Politics of Punishment: Evaluating Political Explanations of Incarceration Rates.” Journal of Politics 66, no. 3 (2004): 925–938; Tonry, Michael. “Why Are U.S. Incarceration Rates So High?” Crime and Delinquency 45, no. 4 (1999): 419–437. —Garrick L. Percival
county government The county is a unit of local government. Other local units include cities, towns, townships, and special
districts. Unlike cities and towns, counties are not municipal corporations. They are not formed by means of local residents sending forth petitions to the state government. Counties are subunits of states, with their territorial limits and governmental powers defined by state governments. There are 3,034 counties in the United States. County-type governments are called parishes in Louisiana and boroughs in Alaska. There are no county governments in Connecticut and Rhode Island. In addition, there are 34 consolidated city-county governments. Examples of this type include Baltimore, Maryland; Denver, Colorado; Indianapolis, Indiana; Jacksonville, Florida; Nashville, Tennessee; New York, New York; Philadelphia, Pennsylvania; Saint Louis, Missouri; and San Francisco, California. There is a mixture of form and scope among the counties in the United States. Counties vary greatly in physical size and population. The smallest county in terms of physical size is Arlington County, Virginia, which occupies 26 square miles. The biggest is North Slope, Alaska, which occupies 87,860 square miles, which is slightly larger than Great Britain. In terms of population, the smallest is Loving County, Texas, which has 67 residents. The largest is Los Angeles County, California, which has 9,519,338 inhabitants. If Los Angeles County were a state, it would rank ninth in population. Despite this great range in physical size and population, the average county contains approximately 600 square miles—a square with 24 miles on each side— and less than 50,000 inhabitants. The modern county in the United States has its origin in the English shire. An earl ruled the shire with the assistance of the shire court. The principal officer of the shire court was the “shire reeve,” the ancestor of the modern office of “sheriff.” The sheriff collected taxes and presided over the shire court. After the Norman Conquest in 1066, the shires took on the French name counties, and earls were stripped of their office, leaving the title as one signifying nobility and not necessarily the holding of an office. Further change greeted the English county when King Edward III (1312–77) added the office of justice of the peace. A justice of the peace exercised executive powers in a county, replacing many of the duties previously performed by the sheriff. When English government was transferred to the American colonies, the county government structure
was stronger in the South than in the Mid-Atlantic and New England colonies. While many county officials in the colonial period were appointed, the early 19th century saw significant reforms of county officer selection. Commissioners, clerks, coroners, sheriffs, and justices of the peace became elected offices in many states. This development continued to the point that many counties have several directly elected executives today. The relationship between the county and state was firmly established by what came to be known as Dillon’s rule. In the Iowa case of Merriam v. Moody’s Executors (1868), Justice John Dillon of the Iowa Supreme Court established the doctrine that counties and all other local units of government had no inherent sovereignty and were subject to the will of the state legislature for their power and authority. The Progressive Era of the late 19th and early 20th centuries had as one of its goals the modernization of county government. This could be achieved, so the argument went, through the replacement of many directly elected officers with appointed professionals, increased reliance of county officials on salaries instead of collected fees for income, and the advancement of the concept of “home rule.” The home rule movement was a response of sorts to Dillon’s rule. A state legislature could choose to allow a county the opportunity to exercise home rule, through which the county could exercise broad governing powers. California was the first state to adopt home rule provisions, and in 1913 Los Angeles County became the first county to exercise home rule. Today, approximately 2,300 counties in 38 states exercise some form of home rule. The next great change came in the form of the automobile and the post–World War II rush into suburbia. As people moved in great numbers to unincorporated areas of counties, they came to expect the same level of services they received in cities. This forced counties to enter into policy areas that went beyond those normally associated with counties. County governments got into the business of building parks, libraries, and even hospitals and expanded their role in other policy areas. The post–9/11 era has brought great challenges to county government as well. The county is a natural level of government for emergency planning, since it has the potential to coordinate the responses of many
municipalities and act as an intermediary between smaller local units and the state and federal governments. The aftermath of Hurricane Katrina revealed all too well a fundamental truth about local government: When things go well, very few notice a job well done. When things go poorly, national attention can be thrust upon officials who, before the incident, were probably unknown outside a county. The cooperation of all local units in the face of disaster is vital. Such cooperation can be useful in day-to-day affairs of county governments as well. Many counties are also members of regional associations, such as a council of governments. For example, most counties in California belong to regional associations. These organizations are quasigovernmental units devoted to cooperation and discussion on issues of regional concern, with mass transit being a prominent example. Counties within a regional association send representatives to serve on a board of directors, share research data, conduct studies, and resolve intercounty disputes. The two largest associations in California are the Southern California Association of Governments and the Association of Bay Area Governments. The Southern California association is composed of representatives from Imperial, Los Angeles, Orange, Riverside, San Bernardino, and Ventura Counties. The Bay Area group is composed of representatives from Alameda, Contra Costa, Marin, Napa, San Francisco, San Mateo, Santa Clara, and Solano Counties. There are three main types of county government. The most common is the commission model. Approximately 1,600 counties have a commission form of government. Such counties are governed by a board of commissioners, also known as supervisors in some states. This board typically has the power to enact ordinances, approve budgets, and appoint nonelected county officials. The standard board has three to five members who are elected to four-year terms. This type of government is generally found in counties with smaller populations. The second-most-frequent form of county government is the commission-administrator model. This form of government is similar to the council-manager form of government adopted by many cities and is a result of the reform-oriented mission of the Progressive Era.
The goal of this movement—with respect to local government—was to replace the tangled intricacies and intrigues of partisan political machines with technically adept cadres of professional administrators, thus hoping to separate the politics of local government from the policy outputs of local government. The Progressive belief was that formally trained experts would make better allocation decisions than elected officials who might be compromised by close ties to special interests. In fact, departments of political science, as they exist in colleges and universities in the United States, owe their origins to the desire by the Progressives to create an academically trained elite devoted to policy implementation. Accordingly, in the commission-administrator form of government, the administrator is not elected. The administrator is selected—and removed, if necessary—by the county commission. While the commission sets the overall goals, the administrator is in charge of the day-to-day affairs of policy implementation. Like city managers, however, county administrators often find themselves tangled in the web of local politics. While the Progressive Era formed, in part, as a response to corrupt local practices, the rise in appointed positions since the advent of the Progressive Era has bureaucratized most of the local government system and placed local constituents even further from the decision-making process. The third and final form of county government addresses, to some extent, this concern by providing for an elected executive. Approximately 400 counties use the commission-executive form of government. This form is a prominent one in counties with larger populations. In addition, Arkansas, Tennessee, and Kentucky require counties to elect executives. A county executive is elected directly by the voters. Like a president or governor, the executive prepares the budget, appoints and removes nonelected county officials, makes policy recommendations, and has a veto power over ordinances passed by the county commission. In addition to the commission, administrator, and executive, there are other officials common to county government in the United States, whatever form the government may take. The traditional county officials trace the origins of their offices back to Great Britain and are as follows: sheriff, prosecutor, attorney, clerk, treasurer, assessor, and coroner. Quite often, these
officials are directly elected and hold significant autonomy to make and implement policy. In addition, counties may avail themselves of further offices, which can be either appointed or elected. Some examples are as follows: register of deeds, auditor, road commissioner, surveyor, engineer, and school superintendent. The desire by counties to have several directly elected executive officials, known as a plural executive, stems from a movement known as Jacksonian Democracy, named after President Andrew Jackson (1829–37), who is credited with inspiring the rise of mass political participation in the United States. Local and state governments that have plural executive structures, in which there are several directly elected executive officers, are legacies of Jacksonian Democracy. In fact, the Progressive movement formed, in part, to deal with the perceived excesses of Jacksonian Democracy. In some county governments, an appointed administrator—a legacy of the Progressive movement—serves alongside an elected sheriff, prosecutor, or clerk, officials whose legacy is derived from Jacksonian Democracy. The very structure of county government reveals the history of American democracy and the tensions that have developed along the way. The functions of county government have changed over the years as well, but many areas for which counties are responsible today have remained similar to those that existed in the colonial period: courts, jails, law enforcement, records, roads, social services, and taxes. New areas of county responsibility in the 21st century include, but are not limited to, the following: consumer protection, economic development, fire protection, health care, recreation, sanitation, transportation, and water quality. In addition, some counties have contracts with local governments within their boundaries to share the costs for certain services. This is called the Lakewood Plan, after the California city that initially adopted this method of providing services. The most common policy areas that fall under the Lakewood Plan are fire and police services. The state or federal government mandates most of a county’s policy responsibilities. Social services account for more than 60 percent of county expenditures, while education is next, with a little more than 10 percent. This mandated spending restricts the
budget flexibility of the modern county, and even though state aid often does not cover the costs of state mandates, state revenues are a significant source of county funds. A recent study by the U.S. Census Bureau estimates that 46 percent of county revenue is derived from state governments. Property taxes account for 20 percent of revenues. Charges and fees account for another 20 percent, while sales taxes account for 5 percent. Federal aid and other revenue sources make up the balance. In order to maintain the flow of state and federal aid, many counties are even willing to retain the services of professional lobbyists who will bring a county’s needs to the attention of key policy makers. In addition, the National Association of Counties, founded in 1935, represents the broader interests of counties at the federal level. The modern American county faces a variety of challenges. Counties are forced to confront the frightening possibility of terrorist attacks, yet, at the same time, counties must also consider the most desirable location for a park or library. On one hand, they may be dealing with the aftermath of a natural disaster while, on the other hand, dealing with the complexity of water drainage and its effect on an endangered species. While it is true that cities, states, and the federal government deal with complex issues as well, no other level of government is required to do more while strapped with a frustrating web of constraints imposed on it by other levels of government. In many ways, county governments are the unsung heroes of the American political system. Further Reading Austin, David. “Border Counties Face Immigration Pressure: Locals Shoulder Growing Law Enforcement Costs.” American City and County 121, no. 4 (2006): 24; Benton, J. Edwin. “An Assessment of Research on American Counties.” Public Administration Review 65, no. 4 (2005): 462–474; Bowman, Ann O’M., and Richard C. Kearney. State and Local Government. 3rd ed. Boston: Houghton-Mifflin, 2006; Coppa, Frank J. County Government: A Guide to Efficient and Accountable Government. Westport, Conn.: Praeger, 2000; “Local Leaders Lean on D.C. Lobbyists.” American City and County 118, no. 2 (2003): 10; National Association of Counties. An Overview of County Government. Available online. URL: http://www.naco.org/Content/NavigationMenu/About_Counties/County_Government/Default271.htm.
Accessed July 25, 2006; National Association of Counties. History of County Government. Available online. URL: http://www.naco.org/Template.cfm?Section=History_of_County_Government&Template=/ContentManagement/ContentDisplay.cfm&ContentID=14268. Accessed July 25, 2006; Ostrom, Vincent, Robert Bish, and Elinor Ostrom. Local Government in the United States. San Francisco: ICS Press, 1988; Swartz, Nikki. “Housing Boom Spells Property Tax Gloom: Local Governments Try to Give Residents Some Relief.” American City and County 120, no. 11 (2005): 20–22; Wager, Paul. County Government across the Nation. Chapel Hill: University of North Carolina Press, 1950. —Brian P. Janiskee
county manager (executive) county government has a long and important history in the American political system and can be traced back even before the American founding to its use in England as the administrative arm of the national government (known in England as the “shire” rather than the “county”). The first county government in the United States was formed at James City, Virginia, in 1634; 48 of the 50 states now use the county form of government, although Alaska calls them “boroughs” and Louisiana calls them “parishes.” The remaining two states, Connecticut and Rhode Island, although divided into geographic, unorganized areas called “counties” for the purposes of elections, do not have functioning county governments according to the U.S. Census Bureau. As in England, counties in the United States operate primarily as the administrative arm of the state in which they are found, but many also function as local public service providers with responsibilities for economic development, hospitals, parks, libraries, and many other traditional local services. According to the most recent census, there are 3,033 counties across the country, with Hawaii and Delaware having the fewest (three each) and Texas the most (254). In addition, currently there are 33 consolidated city-county governments whereby cities and counties have combined their governments for a variety of reasons, including economies of scale, elimination of duplicated services, and the simple fact that some cities have grown so large that their boundaries
are practically coterminous with the surrounding county. (Jacksonville-Duval, Florida, is an example of this consolidated type of structure.) Further, counties vary widely in geographic size and population. Counties range in area from 26 to 97,860 square miles (Arlington County, Virginia, represents the former, and the North Slope Borough, Alaska, represents the latter) and from populations of 67 to more than 9.5 million (Loving County, Texas, and Los Angeles County, California, respectively). Despite the range in size, the majority of counties (75 percent) still have populations less than 50,000. Three primary forms of county government exist in the United States: commission, commission-administrator/manager, and council-executive. (The last of these is the focus of this entry, but it cannot be fully understood without some attention paid to the other two.) The commission form is the oldest and most traditional model for county government. This model is characterized by an elected county commission (sometimes known as a board of supervisors) that has both legislative authority (e.g., adopt ordinances, levy taxes, and adopt a budget) and executive or administrative authority (e.g., hire and fire employees and implement policy). Importantly, many significant administrative responsibilities are performed by independently elected constitutional officers such as the county sheriff, county clerk, and county treasurer. In short, the commission form embraces an elected board that has both legislative and administrative authority alongside a number of other elected positions that possess independent administrative authority in specific task areas. The second form, the commission-administrator, is similar to the standard commission form in many respects with the exception that the elected board of county commissioners appoints an administrator (or manager) who serves at the pleasure of the elected commission. As part of the municipal reform movement of the late 1800s and early 1900s, government reformers argued for increased professionalism in county government, with more effective administration and clearer accountability to the voters. This, it was noted, could be accomplished by hiring a professional administrator who would separate him- or herself from many of the political functions of county government and, instead, would concentrate his or her energies on service management. This appointed
administrator relieves the commission of many of the day-to-day responsibilities of running county government and is often vested with significant powers to hire and fire department heads, oversee policy implementation and coordination, and develop and implement the county budget. This new form of county government was first adopted by Iredell County, North Carolina, in 1927. The third form of county government, the council-executive, has as its most distinguishing feature an elected executive (or administrator). Among all three models, the principle of separation of powers is most clearly reflected in this governing structure. In its purest form, there are two distinct “branches” of county government, the legislative and the executive. The elected county commission heads the legislative branch of county government and, as a result, maintains legislative authority. In the second branch, there is an elected executive who heads the executive branch and maintains executive or administrative authority but who also traditionally has the ability to veto ordinances passed by the county board of commissioners. This form of government most closely resembles that found at the national level, where Congress has legislative authority and the president has executive authority and can veto legislation found unacceptable. Here, too, the county commission (similar to Congress) can override an executive veto, typically by a supermajority vote. Not unlike the commission-administrator form, the county executive has general responsibility for the day-to-day operations of county administration (again, outside those areas for which other elected constitutional officers have responsibility, e.g., the elected sheriff or county assessor). In addition, the executive often represents the county at ceremonial events, has responsibility for providing expertise and advice to the larger elected commission, and must oversee implementation of county policy more generally defined. Again, similar to the commission-administrator, the elected executive has the power to hire and fire department heads under his or her control (often with the consent of the commission). Although the standard commission form is in use in the majority of counties in the United States, many are shifting their structure to include either an appointed administrator (commission-administrator) or an elected executive (commission-executive). In
part, this is due to the increasingly complex local government landscape, in which the need for a professional administrator is becoming more apparent. In larger, more densely populated urban counties, day-to-day administrative functions can no longer all be handled effectively by an elected board of commissioners. Indeed, Arkansas, Kentucky, and Tennessee mandate that counties have a separate executive elected by voters. Supporters of the commission-executive form of county government argue that there are several distinct advantages to this institutional arrangement. Among the most prominent is the idea that an elected executive is more responsive to the public interest than an appointed (or absent) chief administrative officer. Because he or she is answerable at the next election, an elected county executive will be more mindful of the public’s concerns. Second, an elected executive provides much-needed political leadership in a diverse community. This executive is less likely to resign (or be fired) during a political crisis. Third, of the three forms, this model most clearly reflects a system of checks and balances. Through the veto (and veto override), each “branch” can express its vision of policy that it believes is in the best interest of the community. Fourth, counties with an at-large elected executive have a single individual (the elected county executive) who represents the entire county. For good or for bad, this single individual can be a powerful voice of community government. Among the arguments against an elected executive, one of the most important is that it is often difficult to find a single individual who has both excellent political and administrative skills. Although the two skill sets are not mutually exclusive, as the demands placed on county governments increase, it is often a full-time job to keep up with the latest public management practices and technology developments. In order to address this challenge, many counties provide a professional administrative staff that can offer support to the elected executive. This institutional arrangement combines many of the advantages of the elected executive along with the skills of a professional administrator. Counties, regardless of form, remain a valuable feature of American government. When citizens demand more political responsiveness and increased administrative professionalism and expertise from
their public officials, the elected county executive will remain an important option in county government. Further Reading Braaten, Kaye. “The Rural County.” In Forms of Local Government, edited by Roger L. Kemp. Jefferson, N.C.: McFarland & Co., 1999; DeSantis, Victor S. “County Government: A Century of Change.” In International City Managers Association Municipal Yearbook 1989. Washington, D.C.: ICMA; Duncombe, Herbert Sydney. “Organization of County Governments.” In Forms of Local Government, edited by Roger L. Kemp. Jefferson, N.C.: McFarland & Co., 1999; Fosler, R. Scott. “The Suburban County.” In Forms of Local Government, edited by Roger L. Kemp. Jefferson, N.C.: McFarland & Co., 1999; National Association of Counties. “The History of County Government.” Available online. URL: http://www.naco.org/Content/NavigationMenu/About_Counties/History_of_County_Government/Default983.htm. Accessed July 17, 2006; Salant, Tanis J. “Trends in County Government Structure.” In ICMA Municipal Yearbook 2004. —Robert A. Schuhmann
governor The governor is a state’s chief executive officer, directly elected by the people in all 50 states and in most cases for a four-year term. Generally speaking, governors play the same role within their states that the president plays on the national stage, setting the state’s agenda, acting as its main spokesperson, and managing crises. However, the wide variance across states in terms of the formal powers of governors makes it somewhat difficult to generalize about the office. In the country’s early years, most governors had few powers, a clear reflection of the suspicion of executive power shared by most Americans at the time. Under English rule, the king had appointed colonial governors, who could essentially ignore the legislative will of the people’s representatives. Between the perceived abuses by the British Crown and his appointed governors, many early Americans equated executive power with tyranny and sought to limit the opportunity for such abuses. As a result, the early state constitutions invested the legislative branch
Governors Tim Pawlenty (speaking, R-MN), Joe Manchin (D-WV), and Jennifer Granholm (D-MI) at a meeting of the National Governors Association, July 21, 2007 (National Governors Conference)
with nearly all power; in nine of the original 13 states, the legislature itself elected the governor (and in most cases for one-year terms). In only three states, South Carolina, New York, and Massachusetts, did governors have the authority to veto legislation, and South Carolina rescinded that authority by 1790. In addition, governors had little control over state agencies, as most legislatures retained the power to appoint individuals to these posts. Most indicative of the widespread distrust of executive power, the first state constitutions of both Georgia and Pennsylvania established a committee, rather than a single governor, to serve as the executive branch. The rise of Jacksonian Democracy in the 1820s restored some legitimacy to executive power, and the populism associated with this movement led to the direct election of governors in most states. This
change provided governors with a base of power outside the legislature and generally added to the prestige of the office, though officeholders continued to face significant institutional restraints, including the inability to veto legislation in most states. Somewhat ironically, the populist reforms also undermined gubernatorial power by establishing separate elections for other state executive positions, such as attorney general and secretary of state, a tradition that continues to the present in most states. Nearly every state has rewritten its constitution since this era, and the powers of governors have waxed and waned according to the mood of the electorate at the time. During periods of perceived legislative inefficiency, governors were granted greater authority. At times, when corruption was seen as
a key problem, governors were stripped of powers that could potentially be abused. There may also be cultural factors that led some states to expand or restrict executive power independently of the broader trends of national public opinion. As a result, state constitutions vary widely in terms of the specific powers of governors, and these differences make it difficult to generalize about governors’ powers presently. Nevertheless, it is clear that governors have far more power than their early American counterparts did. Scholars have identified several formal powers (those that are constitutional or granted by statute) shared by all governors to varying degrees. Among these are tenure potential, appointment powers, budgetary authority, veto power, and judicial power. Regarding tenure potential, balancing the need for a strong executive against the risks of abuses by that individual creates a challenge in determining the appropriate length of a governor’s term in office and whether he or she should be limited in the number of terms he or she can serve; 48 states currently establish four-year terms for governors, modeled after the presidency. New Hampshire and Vermont allow governors to serve only two-year terms, consistent with the “town hall” democratic spirit in those states. A large majority of states (36) limit their governors to two consecutive terms, and Virginia permits the governor only a single four-year term. The remaining 11 states, which can be found in all regions of the country, allow an unlimited number of four-year terms for officeholders. The increasing powers of governors can be seen clearly in the trends pertaining to tenure potential. As recently as 1960, 15 states limited governors to a single four-year term, and another 16 states established only two-year terms for their chief executives. All else being equal, longer terms mean more independence and provide more opportunities for leadership. Although most of the early governors lacked the authority of the appointment power entirely, all governors today have some ability to appoint individuals to administrative agencies. Even here, though, substantial disparities exist across states. For example, the governors of New Jersey and Hawaii appoint officials as significant as attorney general and treasurer, major positions that are elected separately in
most states. Other governors have much less discretion in this area. In North Carolina, for instance, top officials in the departments of labor, education, and agriculture are all elected statewide. Even in that case, the governor does hold substantial appointment powers, as North Carolina’s governor appoints individuals to serve on more than 400 boards and commissions. And governors are generally empowered to appoint temporary replacements to fill vacancies in a wide variety of offices, including, in most states, the U.S. Senate. In nearly all cases, gubernatorial appointments require the approval of the state legislature or some other body acting on its behalf. The importance of the appointment power is often underappreciated. The people who head such agencies inevitably retain some degree of discretion, and their administrative choices can sometimes produce very different outcomes. A governor with strong ideological views on welfare, for example, will find that his or her policy vision is more fully realized if he or she can appoint a like-minded individual to direct those programs. The logic is most clearly seen with respect to judicial appointments. Unlike federal judges, most state-level judges are elected to office, but in six states, governors have the power to nominate individuals to judgeships. Although these nominees must be confirmed by the state legislature, it is easy to see how a governor could have a strong influence on the legal process as a result of his or her powers of appointment. When considering a governor’s budgetary authority, the growth of the role of the governor in the budget process strongly mirrors the same pattern at the federal level. Legislative bodies long held the responsibility for submitting a budget to the executive, who could then accept or reject the offer from the legislature. Governors possessed limited authority in such a process. In the 1920s, around the same time that Congress delegated the budgetary responsibility to the president, most state legislatures were likewise assigning budgetary power to governors. Centralizing the process in the governor’s office arguably makes the budget more coherent and certainly makes it reflect the governor’s priorities more closely. There is again some variance across states. The state legislature in West Virginia, for example, may not increase the amount of the governor’s proposal. But there is much less variance across states in
this area than in other areas of power. For the most part, governors present a budget to the legislature, which is then authorized to modify it as the majority wishes. Of course, the legislature is constrained by the fact that governors may reject the legislature’s counterproposal. The veto power, which is the ability to reject legislative acts, is probably the most significant formal power held by governors. Although all legislatures have the authority to override a veto, the supermajority that is in most cases required means that overrides rarely occur. An extreme example of this was in New Mexico, where Governor Gary Johnson had pledged to veto any bill that increased the size of government; of the more than 700 bills he vetoed in the 1990s, the legislature was able to override only one. By itself, the veto power is incredibly strong, but the governors of 43 states have an additional power to veto portions of bills, a power known as the line-item veto. This authority emerged in response to budgetary crises during the Great Depression and was intended to control excessive spending. In recent years, the line-item veto has extended far beyond mere appropriations, and governors have been remarkably bold and creative in exercising this authority. Most notably, Wisconsin governor Tommy Thompson (1987–2001) wielded his veto pen very freely, striking brief passages or even single words such as “not” from legislation, essentially reversing the meaning. In 2006, Governor Ernie Fletcher of Kentucky vetoed a portion of a budget bill that would have required the election of judges in that state; by doing so, Fletcher retained for himself the power to nominate judges. Whether he should have been praised for his clever exercise of the line-item veto or criticized for subverting the will of the people as expressed through the legislature is largely a matter of perspective, one that illustrates the contradictions of executive power. In a sense, governors also have the ability to veto decisions of the judiciary by issuing pardons or reprieves for crimes. Strictly speaking, governors possess the power of clemency, which encompasses a family of actions that reduce the penalties for a crime. The most extraordinary recent example of this authority was the 2003 decision by Illinois governor George Ryan to commute the sentences of all 167 death row inmates in that state to life in prison. The risk of
abuse here is obvious and not at all hypothetical: In 1923, Oklahoma governor John Walton pardoned hundreds of criminals and was later convicted of taking bribes from most of them. Walton’s case is unusual but not unique. A series of Texas governors granted thousands of pardons between 1915 and 1927, eventually prompting lawmakers to establish a Board of Pardons and Paroles to review such cases. Even today, there is an independent clemency board in that state that can reject any gubernatorial act of clemency. Other states established similar restrictions on governors. In California, the state supreme court must approve any reprieves granted to recidivist felons. Additionally, many states require that governors provide the legislature with a written explanation of every act of clemency. While governors have a substantial amount of latitude in this area, most of them face some restrictions or regulations regarding their clemency power. While there are substantial differences across states in terms of the powers of governors, there is considerably less variance in terms of the expectations of the officeholder. Generally speaking, the key responsibilities of a governor include serving as the state’s chief legislator, crisis manager, and main spokesperson. As chief legislator, the governor plays a role in the legislative process very similar to the president’s. Formally, governors recommend legislation through a “state of the state” address, and as explained above, they propose the state’s budget and possess the powerful tool of the veto. But their actual responsibilities in lawmaking are much greater than these formal powers suggest. As former North Carolina governor Terry Sanford (1961–65) explained, “Few major undertakings ever get off the ground without his support and leadership. The governor sets the agenda for the public debate; frames the issues; decides on the timing; and can blanket the state with good ideas by using his access to the mass media.” Though legislators obviously initiate most bills, major proposals are likely to begin in the executive branch. While representatives often focus on narrower concerns affecting their own districts, governors have a less parochial perspective, which helps centralize the legislative agenda in the governor’s office. Additionally, since many legislatures are part time, governors in those states have an increased opportunity to dictate the state’s agenda. To
be sure, governors do need the support of legislatures to enact their programs, which is often a difficult task, particularly if the legislature is controlled by the opposing party. Nevertheless, just as the president is the political center of the national government, governors are the most important political figures in their states, and productive lawmaking depends on their leadership. Governors must also play the role of crisis manager. From natural disasters to urban riots, governors often find themselves facing situations they had not anticipated. As with presidents, governors are often defined by their responses to such crises. Formally, governors have the authority to act as commander in chief of the state’s National Guard (though the guard may be federalized, in which case the governor is no longer in charge). Ideally, the governor can employ the guard to quell disturbances, provide emergency supplies, and generally restore peace. However, the best-known examples of such actions are now seen quite negatively. In 1957, Arkansas governor Orval Faubus (1955–67) ordered units of the National Guard to Little Rock’s Central High School to maintain peace in the town by preventing African American students from entering the building, even though the U.S. Supreme Court had ordered that they be admitted. More than a decade later, Ohio governor James Rhodes (1963–71, 1975–83) deployed nearly 1,000 National Guard troops on the campus of Kent State University to restore order following several days of antiwar protests. On May 4, 1970, confused troops opened fire on student protestors, killing four. Though both of these governors would serve additional terms in office, their historical reputations are defined by these events, which were largely beyond their control. The role of chief spokesperson has both substantive and symbolic significance, occasionally at the same time. On the substantive side, as explained above, governors are expected to propose solutions to the major problems facing the state and to articulate those proposals to lawmakers and citizens. They are also expected to negotiate with companies and even foreign countries to locate businesses in their states, and most governors now have offices in Washington, D.C., for the purpose of lobbying the federal government to advance the state’s interests. Symbolically, governors represent the state at
major ceremonial functions, such as dedications of new hospitals and social events for distinguished guests. While such functions may seem trivial, they are very time-consuming and an indispensable part of the job. The substantive and symbolic aspects of this role sometimes merge when a state faces a crisis. During the Civil Rights movement, southern governors such as Ross Barnett of Mississippi (1960–64) actively led the (unsuccessful) attempt to prevent James Meredith from becoming the first black student to enroll at the all-white University of Mississippi. By contrast, Georgia governor Jimmy Carter (1971–75) signified a new era of civil rights by declaring in his inaugural address that “the time for racial discrimination is over.” Like presidents, governors have considerable political power, which has increased substantially since the early days of the country, but also like presidents, governors ultimately have more responsibility than formal powers. They are expected to do far more than they can do alone and depend heavily on a cooperative legislature to enact their programs. Some governors, however, have more powers to facilitate governing than others due to factors such as longer tenure potential or broader line-item veto authority. Regardless of their formal powers, though, governors’ leadership skills matter tremendously, as they are positioned to set the state’s legislative agenda and to establish the moral tone for the state. Thus, it may be appropriate that they are the ones held responsible for a state’s well-being, even if they are not fully invested with the power to ensure it. Further Reading Behn, Robert D., ed. Governors on Governing. New York: Greenwood Press, 1991; Beyle, Thad. “The Governors.” In Politics in the American States: A Comparative Analysis. 8th ed., edited by Virginia Gray, Russell L. Hanson, and Herbert Jacob. Washington, D.C.: Congressional Quarterly Press, 2003; Beyle, Thad L., and Lynn R. Muchmore, eds. Being Governor: The View from the Office. Durham, N.C.: Duke Press Policy Studies, 1983; Dresang, Dennis L., and James J. Gosling. Politics and Policy in American States and Communities. 6th ed. Boston: Allyn & Bacon, 2006; Ferguson, Margaret R., ed. The Executive Branch of State Government. Santa Barbara, Calif.: ABC-CLIO, 2006; Saffell, David C., and Harry
Basehart. State and Local Government: Politics and Public Policies. 8th ed. Boston: McGraw-Hill, 2005; Smith, Kevin B., Alan Greenblatt, and John Buntin. Governing States and Localities. Washington, D.C.: Congressional Quarterly Press, 2005; National Governors Association. Available online. URL: www.nga.org. Accessed May 23, 2006. —William Cunion
initiative (direct democracy) Historically, the initiative process and other tools of direct (participatory) democracy were instituted in response to corrupt politicians who demonstrated that they could not be trusted to promote the public good. Initiatives and referendums were first adopted in South Dakota in 1898, and early use of these political innovations was championed by what were considered radical political groups at the time, such as the Populist Party and the Socialist Party. In the next two decades, these came to be advanced by politicians and others associated with the Progressive Era, a chief concern of which was the intrusion of big business into the political process to the detriment of working people. Other turn-of-the-century concerns advanced through the initiative process included Prohibition and women’s suffrage. While the actual use of these tools declined during the 20th century, they have recently experienced a significant comeback, with around 100 now appearing during each two-year election cycle. It is no longer the case that initiatives are the tools of the average citizen. Rather, they seem increasingly to receive their support from big business and other special interests, from which the people once sought protection through these means. The initiative is a tool of direct democracy whereby citizens can voice their preferences on issues of public concern directly through their votes on ballot initiatives, rather than through their representatives; other tools include referendums, or plebiscites, and recall elections. Use of mechanisms of direct democracy initiated by citizens serves to strengthen popular sovereignty, though these tools also present challenges to good governance in the pluralist setting, because directly appealing to and giving expression to the wishes of the people constitutes populism, not necessarily democracy.
The initiative process allows citizens to raise legislative proposals and place them on the ballot. In about 24 states and the District of Columbia, an initiative may be placed on the ballot by an organized group with a sufficient number of citizens’ signatures or by the state legislature or governor. A direct initiative provides the public with the ability to raise legislative proposals and vote to approve them, bypassing the legislature, whereas indirect initiatives are submitted by the voters to the legislature and can be submitted directly to the voters only thereafter. In many cases, legislators or the governor have the power to sponsor ballot initiatives to further those ends they believe would not command a majority in their legislative assembly. When the government sponsors a ballot initiative, it risks the vote turning into a popular plebiscite on the government itself, especially when it goes down to defeat. As many as 17 states allow constitutional initiatives whereby the people can amend the state constitution. In all cases where the citizens propose legislation, there is a numerical requirement for getting the issue placed on the ballot, usually expressed as a percentage of the electorate or voters in the most recent regular general election. Article I of the U.S. Constitution prohibits popular initiatives in regard to federal legislation, as this is a congressional power that cannot be delegated. A referendum, or plebiscite, is a form of direct democracy in which legislative or constitutional measures are referred by the legislature to the voters for ratification. This is the usual form in which amendments to state constitutions occur and often the manner in which important categories of local government regulations such as high budget items or redevelopment plans are approved. The referendum allows for greater political participation and can provide opportunities to educate and inform voters regarding important public issues. However, the referendum process relies on popular will, and voters may not be informed beyond how the measure has been marketed to them; the process can also be manipulated by officeholders to advance a favored political agenda, avoid responsibility for crafting sound legislation, or stall their political opponents. Ballot initiatives and referendums are often used to pass laws regulating conduct that is disapproved by many people, such as drinking, drug use, or sexual
behavior. While initiatives and referendums may on occasion supplement representative democracy in cases of persistent legislative gridlock on important issues such as taxation, the populism of these tools may lead to abuse by unscrupulous politicians who seek power for themselves and not necessarily to effect sound public policy. In the past, democracy by plebiscite has provided a fig leaf for demagogues and dictatorships because of the appearance of political leaders appealing to and responding to popular will, which they have already formed to suit their unjust and undemocratic aims. The right to recall elected government servants gives the public an additional degree of control over them beyond the ability to vote them out of office at each regular election as their terms of office expire. A successful recall election dismisses an elected politician from office before his or her term of office has expired and so provides a potent tool of control over politicians in the democratic setting. Any serious threat of a recall gives a politician a clear warning of popular dissatisfaction with him or her and a clear indication of intent to remove him or her from office. Still, there may be a political motive behind encouraging a recall, one that only later becomes clearly apparent. The initiative and other tools of direct democracy do, however, present challenges to orderly government by the people. While some people may believe that majority rule, which in some form is the standard by which direct democracy operates in the United States, is always appropriate and legitimate, constitutional expert Cass Sunstein argues that it is, in fact, a caricature of rule by the people. In Sunstein’s opinion, democracy works best when it is deliberative, and the role of the Constitution and its scheme of accountability in government is to ensure that constitutional protections of individual rights are respected even in the face of popular will to the contrary. While democracy implies rule of the people, deliberative democracy insists that a self-governing citizenry form and promote its preferences in an orderly way, one with the liberty and equality of all citizens in mind. Hence, some of the issues of public morality that are often the subjects of popular initiatives and referendums are constitutionally troublesome because there may well be no filter in place to refine popular opinion to ensure it is in keeping with the
constitutional order. This becomes particularly vexing when popular will is moved to act on behalf of tradition, when the constitutional order would be better served by interrogating a traditional practice or understanding. When initiatives are passed that exceed constitutional limits, it becomes the job of the courts to check the popular will, though this resolution brings the courts into the political arena, where their perceived legitimacy is at its lowest. In the republican democracy the founders worked out, sovereignty rests with the people, though this power is exercised through a system of governmental institutions that share this one fount of power and are characterized by checks and balances. While the initiative and other tools of direct democracy are not inimical to the American political regime, when the system is working properly, recourse to them need be made only infrequently. Further Reading DuBois, Philip L., and Floyd Feeney. Lawmaking by Initiative: Issues, Options, and Comparisons. New York: Algora Publishing, 1998; Ellis, Richard J. Democratic Delusions: The Initiative Process in America. Lawrence: University Press of Kansas, 2002; Sunstein, Cass R. Designing Democracy: What Constitutions Do. Oxford: Oxford University Press, 2001. —Gordon A. Babst
intergovernmental relations For many years, government practitioners, the public, and even those in the academic community frequently used the terms federalism and intergovernmental relations interchangeably. However, conventional wisdom about the supposed synonymous meaning of these two terms eventually gave way to the understanding that they each refer to and describe two different, albeit related, phenomena. On one hand, federalism refers to the legal relationships between the national government and the states. On the other hand, intergovernmental relations can be conceptualized as the relations that occur or result from the interaction (both formal and informal) among popularly elected and/or full-time employed officials of different levels of government. Indeed, both William Anderson and Deil Wright emphasize that the concept
920 int ergovernmental relations
of intergovernmental relations has to be understood in terms of human behavior. More precisely, there are no intergovernmental relations but only relations among the officials in different governing units. In its simplest terms, intergovernmental relations are the continuous, day-to-day patterns of contact, knowledge, and evaluations of the officials who govern. Other reasons exist for preferring intergovernmental relations to federalism. First, intergovernmental relations both recognize and analyze interactions among officials from all combinations of governmental entities at all levels, while federalism (although not precluding state-local links) has historically emphasized national-state relationships. Second, the intergovernmental relations concept transcends the mainstream legal focus found in federalism and includes a rich variety of informal and otherwise submerged actions and perceptions of officials. In addition, the concept contains no hierarchical status distinctions. That is, although it does not exclude the existence of such power differences, neither does it imply, as the concept of federalism often does, that the national level is the presumed superior. Finally, the concept of intergovernmental relations is more conducive to understanding and explaining how public policy is formulated and implemented.
In an effort to understand how the intergovernmental relations concept plays out or to simplify the complexities and realities of governance where several governments are involved, scholars have developed three models. These include the coordinate-authority, inclusive-authority, and overlapping-authority models. It would appear, based on endorsements from scholars, that the overlapping-authority model is the most representative of intergovernmental relations practice. The model is typically depicted as three overlapping circles representing national, state, and local governments, and the overlay among the circles suggests three characteristic attributes of the model: Large areas of governmental operations involve national, state, and local units (or officials) concurrently; the areas of autonomy or single-jurisdiction independence and full discretion are relatively small; and the authority and influence available to any one jurisdiction (or official) is considerably restricted. The restrictions create an authority pattern best described as bargaining. Bargaining typically is defined as “negotiating the terms of a sale, exchange, or agreement.” Within the framework of intergovernmental relations, sale is much less pertinent than exchange or agreement. More specifically, many areas of intergovernmental relations involve exchanges or agreements. A case in point is when the national government makes available a myriad of assistance programs to states and their local governments in exchange for their agreement to implement a program, carry out a project, or engage in any one of a wide range of activities. Naturally, as part of the bargain, the government receiving assistance (usually financial, but not necessarily) must typically agree to conditions such as providing matching funds or in-kind work and satisfying accounting, reporting, auditing, and performance stipulations. Students of American government and politics have identified seven phases, or periods, of intergovernmental relations. Each of these phases (some of which overlap) will be briefly described below with reference to the following: What policy areas dominated the public agenda? What dominant perceptions or mindsets did the chief intergovernmental relations participants have? What mechanisms or techniques were used to implement intergovernmental actions and objectives? What is known as the conflict phase (the 1930s and before) is the early period of intergovernmental
relations that focused on pinpointing the proper areas of governmental powers and jurisdiction and identifying the boundaries of officials’ actions. This emphasis operated at the state and local levels as well as between national and state governments. During this period, Dillon’s rule became synonymous with state supremacy in state-local matters and was used to identify the exact limits of local government authority. The national, state, and local government officials who sought exact specification of their respective powers assumed that the powers would be mutually exclusive. Furthermore, officials appeared to have expected opposition and antagonism to be part of the usual process of determining who was empowered to do what. Identifying roles and spelling out clear boundaries (typically through court interpretation of statutes and regulatory authority) were major features of the conflict period. The well-known U.S. Supreme Court case of McCulloch v. Maryland (1819), for instance, early on clearly instituted the conflict-oriented pattern in one policy area with specific relevance for intergovernmental relations—finances. Although this case is perhaps better known for its interpretation of the U.S. Constitution’s Necessary and Proper Clause and particularly for sustaining the power of the national government to establish a bank, it firmly established the precedent that state laws are invalid (null and void) if they conflict with the national government’s delegated powers, laws enacted by Congress, or federal treaties. The preoccupation with separating or sorting out powers gave rise to the metaphor “layer-cake federalism” to describe exclusive or autonomous spheres for national, state, and local governments. What is known as the cooperative phase occurred between the 1930s and the 1950s. Although there was always some degree of intergovernmental collaboration during the 19th and 20th centuries, such collaboration typically was not a significant or dominant feature in American political history. However, there was one period in which complementary and supportive relationships were most pronounced and had notable political consequences. That period is the cooperative phase, which was prominent for about two decades. The principal issues of concern to the country during the period were the alleviation of the wide-
spread economic suffering that occurred during the Great Depression and responses to international threats such as World War II and the Korean Conflict. Therefore, it was logical and natural for internal and external challenges to national survival to result in closer contact and cooperation between public officials at all levels of government. This increased cooperation took several varied forms, most notably such innovations as national planning, tax credits, and formula-based grants-in-aid. The principal intergovernmental relations mechanism as well as the main legacy of this period was fiscal. Substantial and important financial relations were firmly established and were the harbingers of more to come. Subsequently, these relations inspired a new metaphor of intergovernmental patterns—the much-publicized “marble cake” expression (popularized and elaborated on by Morton Grodzins)—as contrasted with the layer cake conception of the previous period. Grodzins argued that government operations in the United States were wrongly depicted as a three-layer cake. “A far more accurate image,” he said, “is the rainbow of marble cake, characterized by an inseparable mingling of differently colored ingredients, the colors appearing in vertical and diagonal strands and unexpected whirls [sic].” From Grodzins’s perspective, the U.S. system of governance should be viewed as one of shared functions in which “it is difficult to find any governmental activity which does not involve all three of the so-called ‘levels’ of the system.” Supportive of these shared functions was an implicit (and sometimes explicit) mood and pattern of behavior among participants—collaboration, cooperation, and mutual and supportive assistance. What is known as the concentrated phase occurred between the 1940s and 1960s. During the presidencies of Harry S. Truman, Dwight D. Eisenhower, and John F. Kennedy, intergovernmental relations became increasingly specific, functional, and highly focused, that is, concentrated. Between 1946 and 1961, 21 major new grant-in-aid programs were created, nearly doubling the total enacted in the Great Depression era. With this expansion of categorical grant programs, increasing attention was paid to service standards. To that end, administrative rules and regulations rather than statutes began to
govern such things as award criteria, reporting, and performance requirements. These expanded grant-in-aid programs focused on two prominent problem/needs areas: capital works, public construction, and physical development; and middle-class service needs. Examples include airports, hospitals, highways, slum clearance and urban renewal, schools, waste treatment, libraries, and urban planning. Despite substantial federal government involvement in local affairs via such programs, substantial local political control was both encouraged and practiced. In this respect, the techniques of grants, service standards, construction projects, and the like matched the local tradition of initiative and voluntary participation. Moreover, intergovernmental relations techniques were consistent with middle-class values of professionalism, objectivity, and neutrality and therefore gave the appearance that objective program needs rather than politics were being served. These major political values coincided in 1946 with the reorganization of Congress and the creation of congressional standing committees with explicit program emphases. The latter of these happenings was to have significant intergovernmental relations implications. Simply stated, these congressional committee patterns soon became the channels for access and leverage points for influencing program-specific grants. Some have used the “water taps” metaphor to describe the intergovernmental relations process during this period. Because of the flow of influence combined with the concentrated, or focused, flow of funds in the 1946 to 1961 period, the national government had become an established reservoir of financial resources to which a rapidly increasing number of water taps were being connected. Funds flowed best when those most knowledgeable (the program professionals) turned on the numerous “spigots,” and federal funds typically went directly to states that could then release them in part or in whole to local governments. Although cooperation was commonplace during this period, it occurred in concentrated and selectively channeled ways. And it was during this phase that the interconnectedness and interdependency of national, state, and local relations were confirmed and solidified.
What is known as the creative phase occurred during the 1950s and the 1960s. The foundation for the creative phase can be traced back to the cooperative and concentrated periods of intergovernmental relations, in that it stressed the need for decisiveness in politics and policy as well as the articulation of national goals. The term creative is commonly associated with this period partly because of President Lyndon B. Johnson’s use of the slogan “creative federalism” and partly because of the many new innovative programs in intergovernmental relations. Three intergovernmental relations mechanisms were characteristic of this period. First, comprehensive local, areawide, or statewide plans had to be submitted and approved prior to the receipt of any federal grant funds. Second, extensive use was made of project grants, whereby grant proposals had to be submitted in a project-type format. Project grants not only involve extensive and often elaborate proposals or requests but also give much greater discretion to grant administrators than do formula grants, in which statutory or administrative formulas determine recipient entitlements. In addition, public participation was strongly encouraged through the insertion of the “maximum feasible participation” requirement in legislation. This involvement of clients in program operations and administrative decisions often introduced a significant and unsettling element into intergovernmental programs. The period is perhaps best known for the proliferation of grant programs, hence the employment of the metaphor “flowering.” The chief policy issues and major themes addressed by this creative activism were twofold: an urban-metropolitan focus and attention to the disadvantaged through antipoverty programs and aid-to-education funds. By 1969, there were an estimated 150 major programs, 400 specific legislative authorizations, and 1,300 federal assistance activities. Of the 400 specific authorizations, 70 funneled money directly to local governments, thus bypassing the states. In dollar magnitude, federal grants jumped from $4.9 billion in 1958 to $23.9 billion in 1970. Over this same time span, state aid to local governments also increased from $8.0 billion to $28.9 billion. The significant increase in project grants and the amount of money accompanying them compared to formula grants led to a noticeable change in the atti-
tudes and behavior of intergovernmental relations participants. Specifically, a grantsmanship perspective grew rapidly and widely. Playing the federal grant “game” became a well-known but time-consuming activity of mayors, city managers, county administrators, school officials, governors, and particularly program professionals. The competitive phase of intergovernmental relations occurred during the 1960s and the 1970s. This phase was distinguished by the escalation of tensions that was fueled by the proliferation of federal grants, the clash between program professionals and participation-minded clients, and the intractability of domestic and international problems. In addition, by the late 1960s, it was becoming increasingly apparent that much of the whirlwind legislation of the Great Society had fallen far short of achieving the lofty goals set for it. Issues related to bureaucratic behavior, administrative competence, and implementation became dominating concerns. Perhaps the most daunting concern was the lack of coordination within and among programs and within and among levels of government. Other festering concerns related to program accomplishment, effective service delivery, and citizen access. In such a political climate, candidates for public office at the national, state, and local levels focused attention on organizational structures and relationships that either hindered or helped the provision of goods and services. The period was also marked by a sharply different approach with regard to appropriate intergovernmental relations mechanisms. Pressure mounted to change and even reverse previous grant trends. One idea put forward was the consolidation of the many grant programs under the rubric of block grants and “special” revenue sharing. General revenue sharing was proposed by President Richard M. Nixon as a means of improving program effectiveness and strengthening state and local governments, especially elected officials in their competition and disagreements with national bureaucrats. Nixon also sought mightily to slow down or even reverse the flow of grant funds by the frequent use of impounding. On the national administrative front, efforts were made to encourage metropolitan and regional cooperation (under Office of Management and Budget circular A-95) and reorganize government agencies.
The unwarranted disagreement, tension, and rivalry that earned this period the label of competitive intergovernmental relations were summarized best by the late senator Edmund Muskie (D-ME): “The picture, then, is one of too much tension and conflict rather than coordination and cooperation all along the line of administration—from top Federal policymakers and administrators to the state and local professional and elected officials.” Yet the competition differed in degree, emphasis, and configuration from the interlevel conflict of the older layer cake phase. First, there was competition between professional program specialists or administrators (national, state, and local) and state and local elected and appointed officials. Second, there was competition among several functional program areas (e.g., highways, welfare, education, health, and urban renewal), whereby like-minded program specialists or professionals, regardless of the level of government in which they served, formed rival alliances. These cross-cutting rivalries and fragmentation prompted former North Carolina governor Terry Sanford to suggest the “picket fence” metaphor as descriptive of this period of intergovernmental relations. The calculative phase of intergovernmental relations occurred during the 1970s and the 1980s. This period was marked by the out-of-control finances and near-bankruptcy of New York City during the mid-1970s. These problems were not confined to New York City, however, but rather reflected broader societal and political problems across the nation. The problems included lack of accountability of public officials, bankruptcy and fiscal stress, unwise and heightened dependency on federal aid, a perceived overbearing and meddlesome role of federal authorities, and the loss of public confidence in government and government officials generally. While intergovernmental relations during this period still tended to revolve around federal aid to state and local governments, they were practiced, according to Deil Wright, from a calculative perspective and contained only the surface trappings of federalism. State and local governments had the appearance of making important choices, but the choices in reality were few and elusive. The major choice was whether to participate in federal assistance programs. Simply stated, there was a greater tendency to estimate the “costs” as well as the benefits of getting a federal
grant. If they decided to take part in federal programs, a larger array of more limited options was available. These choices, however, were constrained mainly, if not exclusively, by nationally specified rules of the game. This is why “façade federalism” has been chosen as a metaphor characterizing this phase of intergovernmental relations. The perceptions of intergovernmental relations participants of this period can be summarized as gamesmanship, fungibility, and overload. Gamesmanship means that intergovernmental relations players used various strategic “games” to achieve desired ends. One case would be grantsmanship that, although identified with the creative phase, was perfected through the competitive and calculative phases to the point that some of the rules (see Wright for examples) by which local officials played the game had been codified. Fungibility refers to the ability of state and local governments to shift or exchange resources received from the federal government for one purpose to accomplish another purpose. State and local governments often used general revenue sharing and block grants to reduce the amount of their own resources devoted to nationally assisted programs. Finally, overload refers to the belief that democratic governments have been expected to do more than they are capable of doing in an effective, efficient, and low-cost manner. Moreover, this also implies an increase in excessive regulation on the part of all governments. Several mechanisms were used to implement intergovernmental relations activities in the calculative phase. One was the channeling of more federal aid on a formula basis as entitlements and in the form of general aid and block grants to states and their local governments. Associated with the mechanism of general aid is the technique of bypassing, whereby federal funds would go directly to local governments without having to pass through state coffers. Loans (e.g., to New York City and to students) constituted a third intergovernmental relations mechanism in the calculative phase. Regulation (in the form of grant guidelines in the Federal Register, grant law, and crosscutting requirements) was the final implementing mechanism. The contractive phase of intergovernmental relations began during the 1980s and continues today. This most recent period will probably be remem-
bered as one in which federal aid was shrinking, local autonomy was eroding, and court decisions and congressional legislation constricted the range of action of state and local governments. It is also likely that this time will be associated with the increasing tendency of governmental agencies at all levels to enter into contracts for the purchase and delivery of services. At present, it is uncertain whether this phase is still evolving or if it has come to a close and a new phase is emerging. Four major intergovernmental relations problems seem to have confronted public officials during this period. First, all levels of government have been preoccupied with borrowing and budget balancing as the size and persistence of the federal deficit has loomed large and has cast a foreboding shadow over the current and long-term intergovernmental relations fiscal scene. Second, federal deficits and conservative politics have resulted in significant cuts and changes in federal aid to state and local governments. Third, the federal courts and nonelected officials have increasingly directed their attention to detailed, specific, and judgmental policy actions of state and local officials (e.g., schools, prisons, and mental health facilities) and have found numerous faults with their actions. Moreover, these nonelected, nonconstitutional officials have sought to convene, cajole, and convince popularly elected officials into following the “best courses of action” in resolving disputes. Fourth, federal mandates in the form of court orders, congressional statutes, and administrative regulations abound and are identified by Joseph Zimmerman, “as the principal irritant of American intergovernmental relations” in recent years. Managing and complying with these mandates have presented state and local officials with a significant challenge both administratively and financially. Several terms seem to capture the perceptions of participants during this most recent phase of intergovernmental relations. Contentiousness, disagreement, and even confrontation between federal and state and local officials often characterize these interactions. Furthermore, state and local officials have come to the realization that the “good old days” of bountiful federal grant money and a nonintrusive federal government are over and that they should be wary of an overbearing federal government. These perceptions have produced a sense of defensiveness
and distrust on the part of officials at all levels of government and subsequently have resulted in more litigation. In response to an obvious deterioration of intergovernmental relations, President Ronald Reagan was aggressive in efforts to reshape, reform, and restore national-state relationships through decentralization of the federal system. The instruments by which intergovernmental relations activities were implemented during the contractive phase reveal some novel elements as well as some links to prior intergovernmental relations phases. Statutes and court decisions were prominent in the first phase of intergovernmental relations, and it is no surprise to see them during the last phase, given the strong “us against them” perceptions held by all participants. Three intergovernmental relations mechanisms (information sources, negotiated dispute settlement, and privatization) appear to be new. Problem solving has been revolutionized as a result of changes in computer technology (particularly the sharing of information among like-minded intergovernmental relations actors) and the development of new social technology (for example, mediation). Finally, privatization, through the encouragement of competition and innovation, has given state and local officials in particular the opportunity to provide services in a more efficient, effective, and cost-effective manner. While it is uncertain whether we have entered into an eighth phase of intergovernmental relations, it is certain that intergovernmental relations are destined to evolve and change in the years ahead. Moreover, it is a given that the tone of intergovernmental relations will have a significant and lasting effect on policy making at all levels of government. Will state and local governments be restored as equal or near-equal partners and exert more influence on their destinies? What impact will looming national and international events and problems have on shaping the basic contours of American intergovernmental relations? No crystal ball can provide answers to these and other questions, but we can say with some modicum of certainty, using history as a predictor, that intergovernmental relations will continue to vacillate between varying levels of cooperation and conflict. Further Reading Anderson, William. Intergovernmental Relations in Review. Minneapolis: University of Minnesota Press,
1960; Benton, J. Edwin, and David R. Morgan. Intergovernmental Relations and Public Policy. Westport, Conn.: Greenwood Press, 1986; Elazar, Daniel J. American Federalism: A View from the States. 3rd ed. New York: Harper & Row, 1984; Glendening, Parris N., and Mavis Mann Reeves. Pragmatic Federalism: An Intergovernmental View of American Government. 2nd ed. Pacific Palisades, Calif.: Palisades Publishers, 1984; Nice, David, and Patricia Fredericksen. Politics of Intergovernmental Relations. Chicago: Nelson-Hall, 1995; O’Toole, Laurence J. Jr., ed. American Intergovernmental Relations. 4th ed. Washington, D.C.: Congressional Quarterly Press, 2007; Sanford, Terry. Storm over the States. New York: McGraw-Hill, 1967; Wright, Deil S. Understanding Intergovernmental Relations. 3rd ed. Pacific Grove, Calif.: Brooks/Cole Publishing, 1988. —J. Edwin Benton
justices of the peace A justice of the peace is a judge of a court that has limited jurisdiction. These positions are occasionally called municipal or magistrate judgeships, but they are functionally the same. A justice of the peace usually presides over a court that hears traffic violations, misdemeanor cases, and other petty crimes. In some states, the justice of the peace is given authority over cases involving small debts, landlord and tenant disputes, and other small claims court proceedings. They are also known to perform weddings. The justice of the peace position differentiates itself from nearly every other legal role in society in that its practitioners do not need to have a law degree or any formal legal training. The history of the justice of the peace begins in 1195, when Richard I (“the Lionheart”) of England first commissioned knights to keep the peace in troublesome areas. They were responsible for making sure the laws were upheld. They were commonly referred to as keepers of the peace. In 1327, an act declared that “good and lawful men” were to be appointed to “guard the peace.” These men were called conservators of the peace or wardens of the peace. It was not until 1361, during the reign of King Edward III, that the position became known as justice of the peace. The position was primarily occupied by the
gentry, or land-owning nobles, and to a lesser extent those of a good family. As the English immigrated to North America, many of their institutions were recreated and established to simulate the motherland. Aiding this process was the fact that the indigenous inhabitants were either integrated into the new society or ejected from the area and that they did not already have formal systems of law in place. Many of these early immigrants were entrepreneurs from the English upper class, which meant that they tended to be considered gentry. Others came because they were more interested in the adventure of the New World, particularly the younger generation. Soon after the first groups of English came over, James I granted charters that would establish the colonies of North America. These charters dictated that English common law and institutions would be established. Within the colonies, the English gentry usually administered law and government. Many of these colonial leaders had already served as justices of the peace in England and felt it blasphemous to suggest governing the colonies in a manner unlike those used in England. The first justice of the peace in North America was appointed in Virginia in 1634. These justices of the peace closely resembled their counterparts in England. They were expected to be “men of substance and influence” and “impartial to rich and poor . . . and free from hatred or malice.” Justices of the peace were not paid for their services and were often men of great respect. Their word alone held authority. They were often officers in the militia and also wardens in the local church. Few were educated in law but used their knowledge of the world and their neighbors to make judgments they deemed fair. Justices, particularly in the early years, could also serve as coroners and responded to a variety of complaints and demands of every nature. In Massachusetts, a justice of the peace was also expected to organize the local militia. Of course, this was not the case in every colony. In some colonies, justices of the peace were not “men of substance and influence” even though that was the expectation. The justices of the peace in New Hampshire, Georgia, and North Carolina were renowned for their ignorance of law. Colonial records in New Hampshire and Georgia indicate that there were only a few qualified lawyers in the colonies throughout the
18th century. New York suffered from problems of a different nature. Originally a Dutch colony until it was officially ceded to England in 1667, New York experienced judicial chaos as the colony debated which country’s laws were to be followed. Additionally, a number of New York’s justices of the peace could not read or write, causing concern over the fairness of their decisions. Throughout their history, justices of the peace have been primarily laymen not specifically trained in the legal profession. In the period of colonial development, lawyers were widely despised. For instance, Georgia wanted to be a “happy and flourishing colony . . . free from that pest and scourge of mankind called lawyers.” The people wanted someone from within their own region whom they respected to serve as justice of the peace. Historical records indicate that areas where the justice of the peace was a local resident had the fewest complaints about the quality and decisions of their justice of the peace. Part of the reasoning behind appointing laymen as justices of the peace was that colonies, counties, and towns could not afford to pay someone with a legal background. Of course, this assumed that there were even enough legal minds available to fill the positions. Justices of the peace were also expected to be close to the public whose disputes they adjudicated. The common person had a rudimentary understanding of law, and justices of the peace were no different. As a result, they based their decisions on what they thought was right and wrong and their knowledge of the individuals involved. As the colonies grew, the role of justice of the peace became more localized. By the end of the American Revolutionary War, counties were resisting unilateral control over the justices of the peace by the colonial, or state, governors and/or legislatures. Indeed, county and local leaders resented that they were left out of the decision-making process, particularly because a number of the new justices of the peace were not from the region and thus did not know local culture and expectations. In some locales, justices of the peace were so inept that colonial or state governments were being buried in complaints. Associated with a more localized system came a shift whereby the American justices of the peace no longer resembled their counterparts in England. The position gradually became increasingly commercialized, which caused it to decline in social standing. Moreover, as the country continued to grow, each new state ush-
ushered in new cultural experiences that ultimately influenced the role of the justice of the peace. Following the Civil War and during the Industrial Revolution, the public began to demand the right to elect their own justices of the peace, as opposed to the governor or legislature making those determinations. Of course, this resulted in more commercialization and an even greater decline in the prestige of the position. As the country spread west, formal law was virtually nonexistent. The West was commonly known to be a wild and, at times, dangerous place, and in many areas, the justice of the peace was the only source of order and protection for miles. The federal government eventually recognized that justices of the peace were not sufficient to keep order in the western states and territories. Many justices of the peace were functionally replaced by federal marshals, who shared many of the same powers but had the backing of the national government. By the time America entered World War II, 47 of the 48 states still had justices of the peace. Because the new states did not have landed gentry from which to pick justices of the peace, many altered their expectations so that the position essentially required no qualifications. Indeed, in 1940, only two of the 47 states that still used justices of the peace had specific qualifications for the job. This trend continues today in states that still use a justice of the peace. For instance, in Texas, the only qualifications for an individual to hold this office are to be a citizen of the United States; to be at least 18 years of age on the day the term starts or on the date of appointment; not to have been determined mentally incompetent by a final judgment of a court; not to have been finally convicted of a felony from which the person has not been pardoned or otherwise released from the resulting disabilities; as a general rule, to have resided continually in Texas for one year and in the precinct for the preceding six months; and not to have been declared ineligible for the office. With the proliferation of the automobile, the position of justice of the peace quickly began to lose its status. Justices of the peace were responsible for adjudicating traffic violations, and the public hated this. As a result, the public began to view the justice of the peace with less and less admiration. In small towns, justices were often seen as tyrannical and corrupt. There has been a push, primarily organized by the American Bar Association, to abolish the
office of the justice of the peace. The American Bar Association, which is an organization of America's attorneys, believes that nonlawyer judges are no longer necessary because there are more people with formal legal training than ever before in American history. Many states have responded by either abolishing the position or incorporating it into other courts. Currently, the following states still rely on justices of the peace or their functional equivalent: Arizona, Arkansas, Connecticut, Delaware, Georgia, Louisiana, Massachusetts, Montana, New Hampshire, New York (in small towns and villages), South Carolina, Texas, Vermont, West Virginia, Wisconsin (depending upon locality), and Wyoming. Recently, the one function of the justice of the peace that had always generated the least controversy, the ability to conduct weddings and civil unions, has become a hot-button issue. In Massachusetts, a justice of the peace can perform same-sex marriages if religious officials are unwilling to do so; indeed, a Massachusetts justice of the peace is legally not allowed to refuse to perform a same-sex marriage. In Connecticut, justices of the peace can preside over same-sex civil unions. Further Reading Graham, Michael H. Tightening the Reins of Justice in America: A Comparative Analysis of the Criminal Jury Trial in England and the United States. Westport, Conn.: Greenwood Press, 1983; Skyrme, Sir Thomas. History of the Justices of the Peace: England to 1689. Vol. 1. Chichester, England: Barry Rose Law Publications, 1991; Skyrme, Sir Thomas. History of the Justices of the Peace: England 1689–1989. Vol. 2. Chichester, England: Barry Rose Law Publications, 1991; Skyrme, Sir Thomas. History of the Justices of the Peace: Territories Beyond England. Vol. 3. Chichester, England: Barry Rose Law Publications, 1991; Wunder, John R. Inferior Courts, Superior Justice: A History of the Justices of the Peace on the Northwest Frontier, 1853–1889. Westport, Conn.: Greenwood Press, 1979. —James W. Stoutenborough
legislative process, state and local The last few decades have witnessed great change regarding legislative bodies at the state and local levels. A report published in 1968 by the nonpartisan
public affairs forum American Assembly noted that "state legislatures have failed to meet the challenge of change because they have been handicapped by restricted powers, inadequate tools and facilities, inefficient organization and procedures." Some 30 years later, a much different picture was drawn; David Hedge wrote in 1998 that "few political institutions have experienced as much fundamental and far-reaching change in such a short period of time as have state legislatures." An instigating cause of state legislatures' enhanced power is the U.S. national government. One result of President Richard Nixon's and President Ronald Reagan's new federalism, combined with President Bill Clinton's policy of devolution (particularly with respect to welfare policy), has been to increase the power and responsibilities of state governments. At times, the devolution of power has not stopped at the state governmental level; Clinton's welfare reform allowed states to further devolve responsibility for programs down to the local level. According to the Tenth Amendment of the U.S. Constitution, powers "not delegated to the United States by the Constitution . . . are reserved to the States . . . or to the people." Hence, as a result of a combination of constitutional amendment, statutory directive, political ideology, and practical necessity, state and local governments have assumed added significance in recent years. State legislatures share many features with the U.S. Congress. Mostly, they are bicameral bodies. To be more precise, 49 states have legislatures composed of two legislative chambers; Nebraska is unique in that its legislature is unicameral. All state legislatures use a committee system, which facilitates the lawmaking process. The average state legislature is made up of 10 to 20 committees in the upper chamber and 15 to 30 in the lower chamber. It is in these smaller assemblies that legislation is written and oversight hearings are held. Local legislative bodies, whether they be city councils or county commissions, are nearly always unicameral in nature. They do, however, share a similarity with Congress and with state legislative bodies in their use of committees as a way to divide the workload among the members. The number of committees at the local level rarely equals that at the state level. State legislators and members of Congress perform similar functions: writing legislation, representing constituents, and overseeing the executive and
judicial branches. Like members of Congress, they spend much of their time engaged in constituency service, also known as casework, and bringing home pork to the district (that is, they obtain governmentally funded projects that benefit their constituents). These two tasks are often done with an eye toward reelection. At the local level, particularly in the case of county commissions but also in the case of some city councils, legislative bodies also perform executive and administrative duties. These may include appointing employees, supervising road work, and heading departments. One stark difference that exists among legislatures pertains to what is known as professionalism. This concept refers to the extent to which legislators are full time versus part time, how much they are paid, and how much staff support they have. In no state or local government does the level of professionalism equal that of Congress. However, there is great variation among states and localities regarding the components of professionalism. First consider state governments. At the upper end of the scale are the so-called professional legislatures. Legislators in states such as California and Ohio and in about seven other states are paid $50,000 to $100,000 annually for what is a full-time legislative job; they are also assisted by full-time staff. In the middle of the scale are roughly 20 states, such as Tennessee and Washington. In those states, the legislators tend to work half time, earning $12,000 to $45,000 annually. While they are assisted by professional staff, their staffers are more likely to be session-only staff (that is, the staffers are employed only when the legislature meets in session). At the lowest end of the professionalism scale are about 10 states, including Nevada and New Mexico. These also employ part-time legislators, with the pay ranging from a $144 per diem expense in New Mexico to $27,300 annually in Indiana. Staff assistance in these states can be exceptionally small. For example, Wyoming in 2003 had only 29 full-time staffers serving a legislature of 90 members. The variation that is apparent at the state level is magnified at the local level. Not surprisingly, higher levels of professionalism are more likely to be seen in the nation’s most highly populated cities and counties. The membership of state legislative bodies represents one area in which the greatest amount of change has been seen in recent decades. In terms of the
male-female breakdown, the late 1960s witnessed state legislatures consisting of approximately 4 percent women. Today that figure is greater than 22 percent. In terms of the racial breakdown, roughly 8 percent of all state legislators are African American, whereas about 3 percent are of Hispanic heritage. Similar changes are apparent with respect to local legislative assemblies. For example, in 2002 there were nearly 5,500 African-American and 2,000 Hispanic local level legislators. Finally, across state and local legislatures there are about 200 openly lesbian and gay officeholders. One of the most contentious issues concerning legislatures at any level of government concerns reapportionment and redistricting. Several factors necessitate the redrawing of district lines. These include population changes, changes in partisan control of redistricting commissions, and the impact of court cases. In the early 1960s, two key U.S. Supreme Court cases—Baker v. Carr (1962) and Reynolds v. Sims (1964)—had a dramatic impact. These cases had the cumulative effect of bringing to an end the gross malapportionment of legislative districts. For example, at the time, one state senator in California represented 14,000 rural residents while another represented 6 million residents of Los Angeles County. The Warren Court based its decision on the principle of “one man, one vote,” thereby rejecting the notion of drawing district lines based on geographic and/or governmental boundaries. The practical outcome of these decisions was to increase the representation of urban and suburban residents; these individuals had historically been underrepresented, while rural residents had been overrepresented. In the case of League of United Latin American Citizens et al. v. Perry et al. (2006), the Court reaffirmed a commitment to not allowing gerrymandered districts that corral minorities into one oddly shaped district. In the same case, the justices permitted those who control the redistricting process to redraw district lines in the middle of a decade (the traditional redistricting timeframe had been once at the beginning of a decade). As is the case in Congress, political parties play important roles in many state and local legislatures. Specific functions typically revolve around the following: the organization and leadership of the legislature, the recruitment of candidates to run in elections, and the provision of services to legislative
candidates. State legislatures vary in the extent to which one or two (or more) parties have organizational strength. For example, states such as New Jersey and Michigan exhibit strong parties on both sides of the aisle. Other states, such as Alabama and Mississippi, have only one political party that exhibits anything resembling organizational strength and political power. While parties are certainly important components of the political scene in most local governments, the nature of their involvement varies quite dramatically from that at the state level. For instance, many local elections are nonpartisan affairs. One result is that local ballots may not list a candidate's political party affiliation, something that Americans expect to see on ballots for national-level offices. Relations with the executive help explain the amount of power that state and local legislatures wield. At the national level, Americans are accustomed to electing only a president and a vice president, the former having significant veto, appointment, and budget-making powers. In the vast majority of states, there are several elected executive officials, and the governor may or may not have powers similar to those of the president. What this means is that in many states, the legislature holds as much formal power as the governor, or more. However, state legislative bodies are disadvantaged compared to the governor in those states where the legislature meets less than full time and has little staff assistance. Perhaps no single issue better illustrates the changes that have occurred over the past few decades with respect to the power of state, and in some cases local, government than welfare policy. When President Clinton signed the welfare reform law creating Temporary Assistance for Needy Families (TANF) in 1996, he gave great new powers to state governments. Individual states were granted the power to determine program qualifications and requirements that were tailored to their particular circumstances. Furthermore, more than a dozen states devolved power to regional and/or county jurisdictions. Devolution of the welfare system has led to enhanced powers for lower levels of government; it has also necessitated an expansion in the capacity of those same governments. Finally, it has necessitated greater cooperation between state and local governments, private businesses, and nonprofit organizations in an area that was once the sole province of the national government.
Finally, a discussion of state and local legislatures that is focused on change must include mention of recent efforts centered on term limits and ethics reform. The term limit movement became especially pronounced in the 1990s. During that decade, 21 states saw the advent of legislative term limits through constitutional amendment, voter initiative, or other means. Legislators are only recently experiencing the impact of that movement. Proponents argue that term limits have inhibited the careers of allegedly corrupt politicians; opponents argue that term limits have served to shift power to governors, agencies, legislative staff, and others. Term limits are in some ways related to a larger effort directed at ethics reform. This is an area that encompasses various potential actions. One such action is the prohibition of the receipt of gifts from individuals, lobbyists, and others. One assessment (Goodman et al., 1996) of all 50 state legislatures ranked only four states—Hawaii, Kentucky, Tennessee, and West Virginia—as having a strong code of ethics guidelines for their legislators; 16 states ranked at the low end of the scale. Local governments are also going through extensive changes, albeit due to different pressures and in different contexts than is the case with state governments. Urbanization causes friction among adjacent local governments—cities, townships, counties, and school districts—as the demands of their individual jurisdictions affect one another. This has led to a movement toward so-called shadow governments (such as home owners’ associations and development corporations) and regional governments. The former engender questions related to accountability and equity, while the latter bring up contentious notions of regional versus local cultures and perspectives. It is a good bet that states and localities will assume additional powers, their populations will increase, they will continue to sprawl across political boundaries, and those both within and outside legislative institutions will clamor for change. As a result, these legislative bodies will evolve in unforeseen ways as the 21st century progresses. Further Reading Beyle, Thad L, ed. State and Local Government 2004–2005. Washington, D.C.: Congressional Quarterly Press, 2004; Goodman, Marshall, Timothy
Holp, and Karen Ludwig. “Understanding State Legislative Ethics Reform.” In Public Integrity Annual, edited by James S. Bowman, Lexington, Ky.: Council of State Governments, 1996; Hedge, David. Governance and the Changing American States. Boulder, Colo.: Westview Press, 1998; Jewell, Malcolm E., and Marcia Lynn Whicker. Legislative Leadership in the American States. Ann Arbor: University of Michigan Press, 1994; Morehouse, Sarah McCally, and Malcolm E. Jewell. State Politics, Parties, and Policy. Lanham, Md.: Rowman & Littlefield, 2003; Rosenthal, Alan. The Decline of Representative Democracy: Process, Participation, and Power in State Legislatures. Washington, D.C.: Congressional Quarterly Press, 1998; Van Horn, Carl E., ed. The State of the States. Washington, D.C.: Congressional Quarterly Press, 2006. —Barry L. Tadlock
legislatures, state All 50 states have legislative bodies similar in form and function to the federal legislature. All state legislatures except one (Nebraska) are bicameral in nature. Meeting in the state capital either annually or biennially, state legislatures propose and pass laws, provide oversight of the executive branch, and serve constituents. This discussion will emphasize features that all or nearly all 50 state legislatures have in common as well as a few key differences. The existence of state legislatures predates the American Revolution. Following the English parliamentary model, each of the colonies (later states) adopted elective legislative bodies to govern internal matters. The oldest of these, the Virginia General Assembly, has roots dating back to 1619. For most of the 18th and 19th centuries, these bodies served as the main and sometimes only conduit for representing the will of the people. During the late 19th and early 20th centuries, the franchise was expanded, the U.S. Constitution was amended to allow for direct election of senators, and the size and scope of the federal government expanded greatly, all factors that contributed to the relative decline in the importance of state legislatures as the primary voice of the people in American government. As the 20th century progressed, however, state legislatures experienced a revival. This reemergence
can be traced to two key factors. First, as the world of state politics became more complex, states increasingly turned their legislatures into full-time, professional bodies. The typical legislature at the beginning of the 20th century met every other year for 2 to 3 months and was made up of citizen-legislators who were either unpaid or received a very small stipend. Today, about two-thirds of states have semiprofessional or professional legislatures that meet every year for a significant period of time; their legislators receive a salary and have a paid staff. Even today, though, it is typically only the largest states that have all the features of a fully professionalized legislature. A second key component of change resulted from the 1962 U.S. Supreme Court decision Baker v. Carr. Prior to this Court decision, many state legislatures were unprofessional bodies dominated by rural interests. Despite increases in urbanization, many state legislatures tended to be apportioned based on a population distribution that was often many decades out of date. Since state legislatures were in charge of their own apportionment, and since entrenched rural blocs wanted to maintain their grip on power, legislatures simply refused to redraw district boundaries in a way that would more accurately reflect population changes. In many states, this led to urban areas being allotted the same number of representatives as rural regions even though the former had a population several times as large. This malapportionment was particularly bad in Tennessee, where some urban districts were 19 times more populous than the most rural ones. Based on the Tennessee experience, the Baker decision held that such representational structures could be challenged in federal court under the Fourteenth Amendment's equal protection clause. As a result of this decision and the "one person, one vote" rulings that followed it, state legislatures are now apportioned based on formulas that are much more in line with current populations. Thus, legislatures have become more responsive to the needs of the state populations as a whole, making them more relevant to the lives of citizens. The contemporary state legislator is likely to be college educated and have a professional background, such as law or business. According to the National Conference of State Legislatures, about 23 percent of state legislators are women, about 8 percent are African American, and about 3 percent are Latino. The average age for a state legislator is 53, and nationwide 50 percent are Republicans, 49 percent are Democrats, and
the remaining members are independents or members of third parties. While some legislators see their position as a stepping-stone to higher political office, many wish to retain their present position or return to the private sector after a few years in public office. Legislative campaigns vary greatly by state. According to the Institute on Money in State Politics, in New Hampshire, a state with a part-time, citizen legislature, the average campaign for the state house of representatives in 2004 raised $495. At the other end of the spectrum, that year's races for the highly professionalized California State Senate cost an average of $438,495, an amount greater than the total raised by all 815 New Hampshire House candidates combined. Compensation and responsibilities, of course, also vary. The New Hampshire representatives receive $200 per two-year term and represent about 3,100 citizens each, while the California senators receive more than $110,000 per year and represent about 850,000 Californians. New Mexican legislators receive no salary at all. The life of the typical state legislator is a busy one. Whether citizen legislators or full-time professionals, all legislators must focus on three aspects of the job: considering bills, overseeing state agencies, and serving constituents. Scrutinizing, amending, and ultimately passing or rejecting laws are the primary tasks of legislators. Most legislators run for office with a particular agenda in mind—perhaps they want to increase the quality of public schools or state highways, or they may want to rein in what they see as excessive state spending. Whatever the issues, this small number of items often becomes the subject matter for legislation the member will draft. Legislators reach the state capital with ideas for change that they developed during their campaign and are likely to find some other like-minded legislators. Although there is no formal requirement for cosponsorship of legislation, members often find it beneficial to develop their legislation in conjunction with other legislators in order to form a base of support. This is particularly necessary for large-scale and controversial legislation. Since nearly all state legislatures are bicameral, they need to find allies in the other chamber as well. Shouldering the responsibility for drafting and sponsoring a bill, seeking support for it, and seeing it through to passage is often referred to as "carrying" a bill. Without a cadre of dedicated bill carriers, most legislation dies a quiet death in committee.
While any given legislator is responsible for authoring and sponsoring only a tiny fraction of the total legislative output, he or she must weigh-in on hundreds or even thousands of potential laws each legislative session. Since no individual can develop expertise in the myriad fields of state law, legislators seek assistance and input from legislative staff and other interested parties. When considering the merits of bills, legislators often meet with lobbyists and other individuals who share their perspectives. For example, when considering a measure to fund a new state university campus, a legislator may hear from professional lobbyists representing building contractors, citizen lobbyists representing a teachers’ union, and local constituents concerned about the location of the proposed facility. Though much is made, quite rightly, of those occasions when lobbyists and legislators overstep the bounds of professionalism and enter into bribery, the vast majority of interest group activity in the legislative process is beneficial to legislators and ultimately results in laws that have been thoroughly vetted by all interested and affected parties prior to passage. Though shaping legislation is the central function of the legislator’s job, oversight and constituency service are also crucial. Legislative oversight, typically handled at the committee level, is the process of making sure that the laws passed are indeed being carried out as intended. A typical oversight activity is auditing, making sure that monies allocated by the state are being spent responsibly and in the manner prescribed by the state budget. A legislator’s constituency is the population residing in the district he or she represents. These are the people the legislator has been elected to represent, and he or she must be attuned to their needs or risk being turned out of office in the next election. In order to best understand the wishes and concerns of the constituency, a legislator often maintains an office in the home district and/or assigns staff the responsibility of responding to constituents’ concerns. These issues, ranging from the adverse effects of a law on a single constituent to the road and infrastructure needs that affect the economy of an entire community, cannot be overlooked by a responsible legislator. Responding to these needs, whether in the form of legislation or simply listening or providing needed information, can be the most personal part of a legislator’s job.
In addition to the tasks of rank-and-file legislators, legislative leaders have additional powers and responsibilities. Every legislative body elects a presiding officer, often called a speaker or president, to run the legislative process. Presiding over floor activity and implementing and following the parliamentary rules of the chamber, this individual is often responsible for making committee appointments, assigning bills to committee, setting the legislative calendar, and doling out resources such as office space and staff budgets. In some states, he or she is also a party leader, but in nearly all states the presiding officer is in communication with the elected leadership of both parties in order to identify areas of agreement and hammer out compromises when necessary. Though term limits have rendered seniority a less important qualification for leadership posts in several states, the presiding officer must be able to work well with others and must have earned the respect of his or her peers. At the heart of the legislative process lies what is arguably the most significant source of legislative power, the presiding officer’s ability to oversee the committee system. Virtually every bill must pass through one or more committees on its way to becoming a law. The strength of committees is often a function of party strength. In other words, states with a stronger party system tend to have weaker committee powers and vice-versa. Standing committees are designed to provide expertise in revising bills before they are brought to the floor. Such expertise sometimes occurs, such as when farmers serve on an agriculture committee. The presence of term limits in many states, however, and the shifting composition of committees from one session to the next make policy expertise less important and the gatekeeping function—the decision about when, whether, and how (favorably or unfavorably) to report bills to the chamber—becomes primary. In addition to standing committees, most chambers have one or more leadership oriented committees, typically dominated by the majority party. Such committees have one or more of the following functions: assigning membership to all other committees, shaping the legislative agenda by applying rules to bills, and planning policy strategy for the majority party. These functions highlight the importance of being the majority party in a legislature. For example, an important issue, such as health care reform, may
become the topic of legislation introduced by dozens of legislators from each political party. The decisions about which committee to send a bill to, under what rules, and in what order are often crucial factors in determining which bills will become law and which will wither away in committee. Though representing a separate branch of government, governors interact with state legislatures in important ways. While in all states the governor presents a budget to the legislature for review and approval, in some states legislative budget control is merely perfunctory because the governor is given primary budgetary responsibility. In other states, though, the budget is a lengthy process of give and take between the executive and legislative branches that can be one of the legislature's most important functions. The governor also has veto power in every state (North Carolina's governor, the last to receive it, gained the veto in 1996), and in most states the veto is rarely overridden due to supermajority requirements (Alabama requires only a simple majority). In 43 states, the governor also has a form of line-item veto on spending bills in addition to the general veto. One of the most significant changes in the functioning of state legislatures in recent decades has been the adoption of term limits in several states. Public scandals, mostly at the federal level, led to a national wave of anti-incumbent sentiment in the 1980s and 1990s. This resulted in 21 states adopting some form of term limit legislation between 1990 and 1995. These laws prevent an incumbent from running for reelection after a few terms in office (typically two to four terms totaling six to 12 years). Though limits on federal terms were found unconstitutional by the U.S. Supreme Court, most of the state restrictions withstood legal challenge. As a result, by the early 2000s, 16 state legislatures were affected by limitations on the length of time an individual could serve in office. The National Conference of State Legislatures notes that in 2006 alone, 268 legislators were prevented from running for reelection due to term limits. There have likely been some positive effects of these restrictions, such as driving corrupt officials from office, breaking up "good old boy" networks, and providing additional opportunities for women and minorities. Along with these positives, however, has come a slew of unintended consequences. Though the measures were designed in part to limit the influence of
special interests on the political process by preventing long-term relationships between legislators and lobbyists, the practical result has often been a career path in which termed-out legislators move into the ranks of lobbyists, using their personal contacts and knowledge of the legislative process to push their clients’ interests. Moreover, critics note that the high turnover in state legislatures has led to a lack of institutional memory and an inefficiency of process. Unelected legislative staff members are increasingly relied upon to help novice legislators figure out their jobs, and legislatures have to re-solve the same problems every few years simply because no one serving was in office the last time the issue was addressed. Finally, the evidence suggests that term limits have done little to eliminate the presence of career politicians. Those who want to remain in politics simply move from one venue to the next—from the lower chamber to the upper chamber, from elected to appointed office, and sometimes back again. It remains to be seen whether states will continue to accept these negative consequences of a well-intentioned reform, modify their term limit laws, or follow the lead of Idaho and repeal their term limits entirely. Because of their vital role in the American political process, state legislatures have become an important topic for empirical research. Though this entry has provided a brief sketch of the role of state legislatures, one can find out more about history, trends, and up-to-date statistics by consulting the work of professional political scientists such as Malcolm E. Jewell and Alan Rosenthal, academic journals such as State and Local Government Review and Publius, and the magazines Governing and State Legislatures (along with their corresponding Web sites). Further Reading Carey, John M., Richard G. Niemi, and Lynda W. Powell. Term Limits in the State Legislatures. Ann Arbor: University of Michigan Press, 2000; Jewell, Malcolm E. The State Legislature: Politics and Practice. 2nd ed. New York: Random House, 1962; Moncrief, Gary F., Peverill Squire, and Malcolm E. Jewell. Who Runs for the Legislature? Upper Saddle River, N.J.: Prentice Hall, 2001; Morehouse, Sarah McCally, and Malcolm E. Jewell. State Politics, Parties, & Policy. Lanham, Md.: Rowman & Littlefield, 2003; National Conference of State Legislatures. Available
online. URL: http://www.ncsl.org/index.htm. Accessed June 19, 2006; Rosenthal, Alan. Heavy Lifting: The Job of the American Legislature. Washington, D.C.: Congressional Quarterly Press, 2004; Rosenthal, Alan. The Decline of Representative Democracy: Process, Participation, and Power in State Legislatures. Washington, D.C.: Congressional Quarterly Press, 1998; The Institute on Money in State Politics. Available online. URL: http://www.followthemoney.org/index.phtml. Accessed June 19, 2006; Wright, Ralph G. Inside the Statehouse: Lessons from the Speaker. Washington, D.C.: Congressional Quarterly Press, 2005. —Charles C. Turner
New York City mayor Michael Bloomberg at the 40th Annual West Indian–American Day Parade, September 3, 2007 (Getty Images)
mayor A mayor is the head of a city or municipal government. Depending on the form of municipal government, a mayor can wield extensive power or play a mostly ceremonial role. Mayors are appointed by the
city council or elected by the population to serve a term (usually from two to six years). In a city with a strong mayor-council form, the mayor is usually elected at large by the population and acts as executive, administrator, and titular head of the city. The council, which is also elected by the population, forms and suggests policy but is considered weak because its decisions can be vetoed by the mayor. Also, the policy that the council forms is carried out by administrators, usually appointed by the mayor. In a weak mayor-council form of government, the council has ultimate say in the direction and administration of policy. In this form, the mayor may be selected from among the council rather than elected by the population. Therefore, the council has predominant power, and the mayor acts more as a coordinator. In the council-manager form, the mayor is mostly a ceremonial figure, and the manager performs the role of administrator; the manager theoretically is not a political figure but is hired much as a CEO is hired to run a large corporation. It is the city council that maintains the role of forming policy. This form of government is usually found in medium to small cities, and one problem with it is that it does not provide public figures with strong political power to unite citizens. In large cities, such as New York and Chicago, however, a strong mayor is usually present. The responsibilities of mayors vary based on how strong the mayors are. A strong mayor will help set policy and also serve as an administrator. Weak mayors, as noted, will often be appointed by the council but will still hold their seat on the council, giving them some oversight but leaving them as a peer. One main distinction is that a strong mayor will often have veto power over the council, but a weak mayor will not. Strong mayors appoint administrators to head the various departments in a city, such as the police, waste management, and education, and fill other positions such as treasurer and city clerk. Weak mayors do not have this appointing power; instead, the council as a whole does this. In fact, strong mayors have recently begun to take even more control. One prime example is the field of education, where some mayors have assumed political power over the public schools. In most cities, the school board is elected, but poor performance in many city schools has created community pressure for mayors to become more active. In some cities, such as Chicago and Boston, the mayor has extended city
control over the school districts (and this has been attempted recently in Los Angeles as well). Certainly, such moves are not seen as beneficial in all communities and remain controversial. The role of the mayor is also a function of the structure of the U.S. system of federalism. The federal government has its own constitutional power that is specific to it, as do the states, and there is also shared power—areas where the federal and state governments both play a role. Cities may share some political power with the states in some respects, but they normally must yield to state power. In practice, this political balance can be very difficult in big cities; the mayor and the governor must test their boundaries and strike a balance between city and state power. The line is also becoming blurrier as large cities sit within larger county areas, and once again a balance must be struck between the city government and the county government in order to avoid overlap and to minimize confrontation. One way to determine the effectiveness of mayors is to examine what is considered a successful mayor. Typically, the success of mayors has been measured by their ability to draw support from, and provide services to, their urban citizens. This can especially be seen in the long-running success of machine-style urban politics. Bosses of machines were successful because they helped bring jobs to people, brought them support when they needed it, and were personable—the people could relate to the bosses and have personal relationships with them, something that not many people can say about mayors in big cities today. Some political scientists argue that a mayor's political personality is the key to measuring success. To be more specific, the ability to relate to people, to solve their problems, and to be popular and reelected is the trademark of a successful mayor. However, others view this as too simplistic. Another theory of mayoral success considers how the mayor has governed the city and how his or her governing effectiveness changes over time. In this view, for example, a mayor who has strong support and a strong ability to govern may be successful for a period of time, until public support wanes as funding to carry out his or her vision runs dry. This theory is criticized because it posits the idea that most mayors will ultimately fail because a city is too complex,
too much of an urban jungle, to govern successfully. A third theory examines mayors within the context of their cities. The political, economic, and social environments of cities are all different. Each city is unique, and some cities are more governable than others. Therefore, this context theory determines how successful mayors are by weighing those circumstances together with the personality and ability of the mayor. More recently, scholars have begun to examine the mayoralty in comparison to the presidency, using the same types of studies and considerations. Mayors have the same responsibilities as presidents but on a different scale in many respects: They are the symbolic figure of the city, they manage or administer policy, and they also act as mediators between the city and the state and between the city and the federal government, much as a president acts as the nation's chief diplomat. Presidents are often examined in terms of the time period in which they served—presidents who serve in a crisis and handle it well are often viewed more favorably and have more potential to be successful than those who do not have such a sensational event to prove their worth. This theory divides the periods in which mayors serve into four types: reconstructive, articulation, disjunction, and preemption, which then cycle over time. Reconstructive mayors are the most likely to be successful; they are innovators, changing the system that was in place before them. Articulation mayors will be less successful, as they uphold the legacy installed before them by the innovating, or reconstructive, mayor. Disjunction mayors are the least likely to be successful, as they serve when the current system is being attacked and criticized from all sides, and they are holding on to a sinking ship. Finally, preemption mayors have some chance for success, as they challenge the existing regime before it is completely defunct, but their success is not guaranteed. Fiorello La Guardia, mayor of New York between 1934 and 1945, can be seen as one of these preemption mayors who was very successful; in fact, he has been ranked as one of the top mayors in recent history. Frank Rizzo, mayor of Philadelphia between 1972 and 1980, and Dennis Kucinich, mayor of Cleveland between 1977 and 1979, on the other hand, are often ranked as two of the worst mayors, although they, too, fall into the category of preemption mayors.
This is because their tactics to change the system failed. To be sure, the success of this type of mayor depends more on the circumstances of the city as well as the persona of the mayor. The Daleys of Chicago are also often considered successful mayors. Richard J. Daley was mayor from 1955 to 1976 and governed essentially through machine politics—he controlled the city but did so as more of a mediator, following the will of the people and not instigating policy himself. His oldest son, Richard M. Daley, elected mayor beginning in 1989, has a much different style. He is considered to be a part of the "new breed" of mayors in a way. Daley's takeover of the Chicago public school system in 1995 has been heralded as a success and also attests to the changes in mayoral leadership since the 1990s. Many other mayors have adopted this take-charge attitude. Mayor Mike White of Cleveland also initiated a takeover of the public schools, which was inspired by Daley. These big-city mayors of the 1990s and beyond are marked by their desire for efficiency in government. They have initiated reforms to cut down on wasteful spending, often turning to a market-driven approach based on competition. This can be seen in Mayor John Norquist of Milwaukee, who forced the city's Bureau of Building and Grounds to compete with private contractors; the bureau ultimately won the bids by eliminating wasteful practices and spending. Many Democrats, such as Norquist, are turning to these market-based policies traditionally associated with the political right, while many Republicans, such as Michael Bloomberg of New York, have turned to liberal tactics. This blurring of the line between parties has changed mayoral politics to focus more on the candidates and the issues instead of parties and interest groups. While this can be problematic, as not having a political base to work from can be dangerous and alienating, it appears to be the new trend. Bloomberg was reelected in 2005 despite having made many enemies of conservatives and liberals alike. These mayors seem to function more as managers do in a council-manager form, focusing on the precise administration of services and running cities like businesses. Further Reading Dye, Thomas R., and Susan A. MacManus. Politics in States and Communities. 12th ed. Upper Saddle River, N.J.: Pearson Prentice Hall, 2007; Flanagan,
Richard M. Mayors and the Challenge of Urban Leadership. Lanham, Md.: University Press of America, 2004; Holli, Melvin G. “American Mayors: The Best and the Worst since 1960.” Social Science Quarterly 78 (1997): 149–156; Stein, Lana. “Mayoral Politics.” In Cities, Politics, and Policy: A Comparative Analysis, edited by John P. Pelissero. Washington, D.C.: Congressional Quarterly Press, 2003. —Baodong Liu and Carolyn Kirchhoff
militias, state At the founding of the republic, militia meant a citizen army of the state, what we would today refer to as a National Guard (the guard was founded in 1916). Today, however, the term militia usually refers to a right-wing, antigovernment, or paramilitary organization. The word militia comes from a Latin term meaning "military service." The original understanding of the militia as a volunteer army made up of citizen-soldiers from each state formed the bedrock both of national defense in the new republic and of a temporary method of organizing a military or police force to maintain civil order in cases of threat or emergency. In this sense, a militia is distinct from the regular or permanent army. The Second Amendment to the U.S. Constitution (a part of the Bill of Rights) guarantees each state the right to form a militia and provides that the militia may "keep and bear arms." Today, some read the Second Amendment as guaranteeing individuals the right to bear arms, but the original understanding and the words of the U.S. Constitution refer to the maintenance of a militia and that militia's right to keep and bear arms. Some of the framers believed that the presence of state militias was a way to keep the federal government in check and might serve as an antidote to the accumulation of tyrannical power by the central state. If, the belief went, the states were armed and could resist potential encroachment by the central government, this might serve as a check on federal power. In Federalist 29, Alexander Hamilton wrote that "The power of regulating the militia and of commanding its services in times of insurrection and invasion are natural incidents to the duties of superintending the common defence, and of watching over the internal peace of the confederacy." He further dismissed warnings of some of the anti-Federalists that this
militia might become a threat to the republic, arguing that “There is something so far-fetched and so extravagant in the idea of danger to liberty from the militia that one is at a loss whether to treat it with gravity or with raillery; whether to consider it as a mere trial of skill, like the paradoxes of rhetoricians; or as a disingenuous artifice to instill prejudices at any price; or as the serious offspring of political fanaticism. Where in the name of common sense are our fears to end if we may not trust our sons, our brothers, our neighbors, our fellow citizens? What shadow of danger can there be from men who are daily mingling with the rest of their countrymen and who participate with them in the same feelings, sentiments, habits, and interests?” The framers of the Constitution feared a standing army. They believed it would be a threat to the republic, could be used by despots to take over the government, and might tempt ambitious leaders to venture out on imperial adventures. Therefore, the framers cautioned against maintaining a standing army. Thus, after every major military encounter in the first 150 years of the republic, the United States demobilized its military and reintegrated the armies back into the community. It was only in the past 60 years, with the advent of the cold war, that the United States maintained a standing army of any size. However, in the early years of the nation, there were still threats to safety and stability that on occasion had to be met with force. Thus, a militia was authorized to form and defend the state. Not a standing, but a temporary, army, the militia was to be called together in times of threat and quell the emergency, and then these citizen-soldiers were to return to their normal lives having served their community. In peacetime, the militias of each state were under the control of the state governor. In wartime or during a national emergency, Congress may call up the National Guard, and it is then under the control of the president of the United States. When the National Guard is “nationalized,” it is no longer under the control of the state governor but of the federal government. Today, the National Guard is funded by federal government monies. The National Guard is often called into the service of a state during natural disasters such as floods or hurricanes (e.g., in 2005 during Hurricane Katrina), during threats of civil disturbance (such as the Watts Riots in Los Angeles in 1965 as well as the riots throughout Los Angeles in 1992), and at times to serve overseas during a crisis, war, or threat-
ened emergency (for example, in the 2003 war in Iraq). In 1990, the U.S. Supreme Court, in Perpich v. Department of Defense, held that Congress may authorize the National Guard to be put on active federal duty for training and use outside the United States (in this case, for training exercises in Central America) without the consent of the state governor and without a formal declaration of war or national emergency. Given that the term National Guard has replaced militia to signify the state citizen-soldiers, the term militia has taken on a wholly new meaning. Today, militia refers to paramilitary organizations, usually right-wing and anti–federal government in sentiment, that oppose the power and reach of the federal government. These new militias became more prominent in the late 1980s and early 1990s. They communicate extensively over the Internet but also have a presence at gun shows, at rallies against the government, and in newsletters and extremist right-wing politics. Three of the seminal events in the development of the militia movement during the 1990s involved high-profile standoffs between federal government officials and antigovernment separatists. The first, in 1992, occurred at Ruby Ridge, Idaho, at the home of white separatist Randy Weaver, who had initially failed to show up in court on a gun-related charge. Weaver's wife was killed by an FBI sniper during the ensuing siege. The second standoff occurred in 1993 just outside of Waco, Texas, at what was known as the Branch Davidian complex. An exchange of gunfire when federal agents attempted to deliver a search warrant led to the killing of four federal agents and six members of the religious sect living in the complex. After a 51-day standoff with federal authorities, Branch Davidian leader David Koresh set the entire complex on fire, which killed 79 people inside. The third, in 1997, occurred in Texas when the Republic of Texas separatist movement, led by Richard McLaren, held a husband and wife hostage in the west Texas town of Fort Davis in an attempt to force state officials to release two members of the group who had been arrested. McLaren and his followers claimed that the state of Texas had been illegally annexed to the United States and was still a sovereign nation. These incidents had a powerful symbolic resonance for the militia movement, served to encourage the movement's members to identify the federal government as "the enemy," and helped persuade
members that there was a conspiracy in the federal government to destroy the freedoms (especially their perceived right to keep and bear arms) enjoyed by "free Americans." The federal government continues to monitor these groups, and occasional confrontations erupt into violence and disorder. In an age of modern warfare, is the militia necessary or even functional? Or is it an anachronism of a far different era? While constitutionally protected, militias in the traditional sense may be politically unnecessary in our age. As such, the nation might be better served by eliminating them altogether or modifying them to perform more modern tasks. But such changes would require amending the U.S. Constitution, and Americans are very reluctant to do so. For this reason, whether militias make political and military sense in the 21st century is secondary to the larger conundrum of America's deeply felt reluctance to change the Constitution. Further Reading Abanes, Richard. American Militias: Rebellion, Racism & Religion. Downers Grove, Ill.: InterVarsity Press, 1996; Cornell, Saul. A Well-Regulated Militia: The Founding Fathers and the Origins of Gun Control in America. New York: Oxford University Press, 2006; Hamilton, Neil A. Militias in America: A Reference Handbook. Santa Barbara, Calif.: ABC-CLIO, 1996; Whisker, James B. The Rise and Decline of the American Militia System. London: Associated University Presses, 1999. —Michael A. Genovese
municipal courts Municipal courts are authorized by village, town, or city ordinances (laws) and are funded by these local units of government. The courts possess limited powers and jurisdictions. More persons, citizen and noncitizen, come into contact with municipal courts than all other state and federal courts in the United States. Thus, an individual’s impression of a state’s judicial system may depend on that individual’s experience with municipal courts. More media attention and research inquiries are focused on federal and state district, appeals, and supreme courts than on municipal courts. This is unfortunate, since munici-
pal courts help to protect peace, dignity, and civilized behavior in most states. Past and present authors of state constitutions and city, town, and village charters have been generally suspicious of governmental powers. Therefore, legal and political limits are placed on the legislative, executive, and judicial departments of government. Each branch or department is expected to play a significant role, however varied, in the checking and balancing of the power of the other branches in the governing of communities. Thus, municipal courts’ powers tend to be limited and shared with city councils and boards (legislative bodies). Their powers are also shared with city managers, police departments, city attorneys, and other administrative agencies and executive bodies. Political and legal powers are widely dispersed and not concentrated in any one branch of state or local government. Sharing of power occurs when municipal court judges are appointed by a mayor (an executive) with the advice and approval of the city council or town board (the legislative branch). Those judges usually serve a term of between two and four years (although some terms may be longer) as determined by the state (or local) legislative and executive branches. Limits on tenure in office also act as a check on the courts’ and the judges’ power. checks and balances are present when a state legislature sets a fixed annual salary for municipal judges to help prevent two potentially “bad” things from occurring. First, an annual fixed salary policy prevents an angry or vindictive municipal board and/or executive from reducing a judge’s pay to punish that judge’s court decisions. Second, a fixed salary prevents an enterprising judge from padding his or her salary by imposing, collecting, and pocketing more fines and fees than would be appropriate. The political interplay of differing personal values and ambitions, the economic status of communities, and the different branches of government encroaching on the powers of each other result in different authority and jurisdiction granted to municipal courts. Some states, such as South Carolina, have laws that prevent municipal courts from having jurisdiction over civil disputes. But these same courts do have jurisdiction over violations of municipal ordinances, such as motor vehicle violations and disorderly conduct. Some states limit the municipal court’s
ability to fine or incarcerate violators by setting maximum limits of $500 for fines or 30 days for jail time or both. In this case, the legislative body is clearly checking the power of the judicial branch. On the other hand, the Philadelphia, Pennsylvania, municipal court has argued for expanded jurisdictional powers and has been granted such authority. The municipal court in Philadelphia is responsible for adjudicating civil cases in which the amount being contested is $10,000 or less for small claims, unlimited amounts in landlord and tenant cases, and $15,000 or less in school tax or real estate disputes. Philadelphia’s municipal courts may also resolve criminal cases that carry a maximum sentence of five years or less in jail. Fines are the most common form of punishment handed down by municipal judges. Municipal courts, in general, deal with less serious crimes or misdemeanors. In Texas, when municipal ordinances relating to fire safety, public health, or zoning are violated, fines of up to $2,000 may be charged, but only if authorized by the local governing unit within the municipality. This introduces yet another check on a municipal court’s ability to balance a diverse community’s needs with the public’s interests. Some portion of the monies assessed by each municipal court in fines or fees goes into the municipal government’s general fund. Some of these same dollars help to fund the municipal court and, perhaps, improve its efficiency. Therefore, there is a sharing of economic resources, legal jurisdictions, and discretion by these policy-making institutions. While municipal courts share authority in balancing and checking power, for the most part the judiciary is different from the other branches of government in a most important way. Municipal courts may not set their own legal and political agenda. Mayors, town or village board members, and municipal administrators can initiate new policies, but judges cannot. Moreover, municipal court judges cannot introduce or file lawsuits, civil actions, or misdemeanor cases on their own initiative. Municipal judges can resolve conflicts and legal infractions only in cases brought before the bench by others. This is a significant check on judicial power. The Tennessee state legislature both grants legal jurisdiction to municipal courts and takes it away, as other state legislatures do from time to time. In 2006, the Tennessee General Assembly legislated that munic-
ipal offenses, also contained in many municipal codes across the United States, may no longer be enforced in the municipal courts of that state. Thus, Tennessee municipal courts may not hear cases for violations of municipal ordinances such as requiring drivers to yield to emergency vehicles, preventing cruelty to animals, vandalism, window peeping, or possession of an abandoned refrigerator. The same legislature, however, allowed municipal court judges to issue inspection code warrants to building enforcement officers when probable cause exists that a code violation may be occurring on an individual’s private property. Legislative bodies can curb municipal judges or, on the other hand, increase their power and jurisdiction. This dynamic process balances the state and local prerogatives, at the same time checking judicial power. In other situations, however, municipal court judges may expand their powers by checking and balancing the local and state legislative and executive branches. One of the first actions taken by a newly appointed municipal court judge from Wilmington, Delaware, in 1946 was to abolish the segregated seating he found in his courtroom. Before this decision, a police officer would stand at the entrance to the municipal courtroom and direct members of the public to one side or the other depending on the color of their skin. This judicial action of racial integration upset state and local customs. It also checked legislative mandates and executive decisions heretofore imposed on municipal courts in Delaware. Municipal courts can be limited laboratories of experiment within a state. Municipal court judges obtained volunteer counsel for indigent persons accused of municipal code violations (a crime) even before the U.S. Supreme Court did so in 1963. The former municipal court system in Wilmington, Delaware, even helped set up a probation system and permitted first-time offenders to be freed on their own recognizance. Later, these procedures, initiated by the municipal courts, became state law. These events demonstrate that a balancing of power within one branch of government may provide checks and balances on other branches of government in that state. The issue of judicial independence in the separation of powers system arises in several ways
concerning municipal courts. A 2004 Missouri survey revealed that most municipal court administrators and/or clerks said they wanted greater separation from the executive branch. It seems that many court employees have to report not only to the municipal judge but also to another member of the city government’s executive branch (city manager, director of finance, city clerk, or chief of police). Some court officers hold additional titles such as police dispatcher, city clerk, or even city collector. Still others may be full-time city or village employees but do not work full time within the municipal courts. Municipal courts depend on local community funding and operate with mostly part-time judges and part-time staff. An outside observer might contend that such courts do not have sufficient capacity to manage their own internal workings. Such limitations may ultimately undermine the independence of the judicial branch at the municipal level. Under these circumstances, the responsibility (if not the obligation) of the municipal court to be an independent check on the other branches of government is jeopardized. The separation of powers principle is violated when executive branch officers disallow training funds for judges, court administrators, and clerks. Another check on judicial independence occurs when, in states such as Colorado, residency is required by a municipality. This may deprive the community of a more meritorious judge who happens to have chosen to live outside the confines of that municipality. Municipal judges in Wisconsin are required to run in an at-large community election, presumably to be accountable to the electorate. Some argue that elections make for a more independent judiciary because legislative and executive preferences may be negated or checked by voters. Others maintain that litigants in municipal courts may consist of groups or classes of people who might be unpopular with vast segments of the electorate. It could be argued that municipal judges up for reelection and mindful of the electorate’s biases might subject such litigants to maximum fines and jail sentences or discriminate against them in some other way. The influence of voters then can check and balance, at times, all three branches of government. Legitimacy, trust, and confidence in municipal courts are necessary both to the rule of law and to
the strengthening of the judicial branch within the checks and balances system. It is important that states and localities measure the current level of confidence in municipal and state courts. The nation’s most populous state, California, recently conducted a survey that disclosed that about two-thirds of the public had an overall positive opinion of the state’s courts (including municipal courts). The results signified an improvement in the opinion of courts in the eyes of average people, because in 1992, only 50 percent of Californians polled viewed the courts favorably. When challenged by the other branches of government, the courts might be able to point to public support for their performance. During 2006, Las Vegas, Nevada, evaluated its municipal court judges and found supportive evidence for an overall positive assessment of judicial performance. Around 70 percent of respondents believed that nearly all municipal court judges were familiar with the records and documents of a case. They also believed that the judges weighed all the evidence and arguments judiciously before handing down fair verdicts. The professional conduct of these Nevada municipal court judges is said to indicate that their courtrooms are characterized by a lack of bias with regard to race, gender, religion, and the contending parties. These reports from California and Nevada seem to show public support for municipal courts. On the other hand, New York State’s 300-year-old municipal court system, present in many villages and towns, has been criticized for tolerating inadequately trained judges and a lack of professional facilities. These failings diminish the judiciary’s standing in the community and limit its ability to check and balance the other two branches of government. It might even be suggested that some municipal courts are simply unable to command enough respect within their communities to bring the judiciary up to par. It has been reported that in some rural areas and smaller towns, municipal judges hold court in highway department garages, fire stations, or their homes. Moreover, these part-time judges are said to occasionally jail people illegally or deny fundamental court procedural guarantees. Some defendants allege that they were subjected to other kinds of discrimination based on race or gender. Such criticism indicates the potential for a flawed municipal court system in New York. What accounts
for these fissures in the judicial branch at the local level? Most small town municipal court judges are poorly paid, and up to three-fourths are not lawyers. Some have light caseloads, but others with heavy caseloads may be able to spend only a little time with each case that comes before them. Under these conditions, it may appear that justice is not being well served in municipal courts and that these courts function badly in the checks and balances framework. In situations similar to those mentioned above, it appears that municipal judges need more frequent training and should be required to pass more examinations before taking the bench. Requirements that municipal judgeships transition to full-time lawyers instead of nonlawyers and a call for improved supervision of municipal court judges by the state’s district or supreme court commissions seem appropriate. In some jurisdictions, providing municipal courts with more computers, digital recorders, improved facilities, annual audits, and additional staff seems justified. Municipal courts were originally established to settle local disputes in rural, small town America. There was less need for full-time judges, support staff, or well-equipped courtrooms in the early days of the republic. In the 21st century, however, with the increased or even excessive caseloads, there is more work for the municipal courts and their personnel. Many municipal judges handle 6,000 to 10,000 cases per year. If municipal courts are to remain flexible and perform the role of checks and balances within a state’s system of justice—a major contribution to the maintenance of law and order—then municipal courts must continue to be evaluated with a critical eye and supported with adequate resources by their diverse municipalities. Further Reading Gambitta, Richard A. L., Marlynn L. May, and James C. Foster. Governing through Courts. Beverly Hills, Calif.: Sage Publications, 1981; Karlen, Delmar. The Citizen in Court: Litigant, Witness, Juror, Judge. New York: Holt, Rinehart, & Winston, 1964; Meyer, Jon’a, and Paul Jesilow. “Doing Justice” in the People’s Court: Sentencing by Municipal Court Judges. Albany: State University of New York Press, 1997. —Steve J. Mazurana and Paul Hodapp
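A rough, purely illustrative calculation underscores the caseload pressure described in this entry. The 6,000-to-10,000-case range comes from the entry above; the assumed court schedule (number of sessions per year and hours per session) is hypothetical.

# Illustrative only: annual caseloads are from the entry above; the court
# schedule (sessions per year, hours per session) is an assumed example.
def minutes_per_case(cases_per_year: int,
                     sessions_per_year: int = 100,   # assumed: part-time court, roughly two sessions a week
                     hours_per_session: float = 4.0  # assumed: half-day sessions
                     ) -> float:
    """Average bench minutes available per case under the assumed schedule."""
    total_minutes = sessions_per_year * hours_per_session * 60
    return total_minutes / cases_per_year

for caseload in (6_000, 10_000):
    print(f"{caseload:>6} cases/year -> {minutes_per_case(caseload):.1f} minutes per case")

Even under these fairly generous assumptions, a part-time judge would average only about two to four minutes per case, which is consistent with the concern that heavy caseloads leave little time for each matter.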
municipal government The U.S. Constitution established a framework in which American governments have separate but not completely independent authorities. Certain functions, such as interstate commerce and national defense, are assigned to the national government, while many others are left to the states. The Tenth Amendment in the Bill of Rights asserts that “the powers not delegated to the United States by the Constitution, nor prohibited by it to the States, are reserved to the States respectively, or to the people.” However, the intergovernmental relations among federal, state, and local governments in the United States are extremely complex. Traditional federalism (dual federalism) suggests that federal and state governments operate independently within their separate jurisdictions without relying on each other for assistance or authorization. In order to cope with complicated economic and social issues, there have been several waves of institutional and ideological changes in the implementation of federalism in the United States. The New Deal period began a continuous increase in the importance of intergovernmental relationships in the United States, followed by creative federalism in Lyndon Johnson’s presidency, new federalism under Richard Nixon, and recent devolution of federal involvement. The pressures from a new global economy further complicate the relationship between different levels of government. Federal, state, and local governments are continuously looking for the optimal mechanism by which they can cooperate and serve the public interest. There are several different forms of local government in America, such as county government, city or municipal government, special districts (e.g., school, fire, and sewage), and special authorities (e.g., transit and port). Municipal governments are local governments established to serve residents within an area of concentrated population. As the United States became a highly urbanized country, with more than 80 percent of its population living in urbanized areas, its local governments have assumed a critically important role in directly serving the needs of local residents, providing a wide range of services from police, fire protection, and garbage collection to housing, transportation, and planning. According to the 2002 Census of Governments, in addition to the
federal and 50 state governments, there were 87,849 units of local governments as of June 30, 2002. Of these, 38,971 were general purpose local governments—3,034 county governments and 35,937 subcounty governments, including 19,431 municipal governments and 16,506 township governments. The remainder, which constituted more than half the total, were special purpose local governments, including 13,522 school district governments and 35,356 special district governments. To carry out the functions of local government, cities are chartered by states, and their charters detail the objectives and authorities of the municipal government. Traditionally, the state legislature is recognized as having plenary (complete) control over municipal governments except as limited by the state or federal Constitution, which is commonly referred to as “Dillon’s law,” named for Judge John Forrest Dillon, the chief justice of the Iowa Supreme Court more than 100 years ago, after he authored two seminal opinions establishing the modern rule of law by which the powers of local governments are evaluated. Dillon’s law requires that localities obtain express permission from the state before enacting certain kinds of legislation. Legislation required by Dillon’s law is often called “enabling” legislation. Under Dillon’s law, state legislatures had to devote attention to the details of local government issues and found insufficient resources and time to deal with substantial matters of state policy. On the other hand, local government did not possess sufficient authority to deal with complicated local issues. Therefore, in some states, municipal reformers created a new concept of local control, which incorporated part of the inherent right to local self-government rule yet retained a part of the sovereignty of the states. That new principle became known as home rule. In very general terms, home rule can be defined as the transfer of power from the state to units of local government for the purpose of implementing local self-government. Home rule has taken various forms around the country in the more than 40 states that have adopted it. In most states, home rule provides those local governments with some measure of freedom from state interference as well as some ability to exercise powers and perform functions without a prior express delegation of authority from the state.
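The 2002 Census of Governments figures cited earlier in this entry nest into a simple hierarchy, and the categories sum to the reported total. The short Python sketch below simply re-tallies the published counts; no numbers beyond those quoted above are introduced.

# Counts as reported by the 2002 Census of Governments (quoted in this entry).
local_governments = {
    "general purpose": {
        "county": 3_034,
        "subcounty": {"municipal": 19_431, "township": 16_506},
    },
    "special purpose": {"school district": 13_522, "special district": 35_356},
}

def total(units) -> int:
    """Recursively sum a nested dictionary of government-unit counts."""
    if isinstance(units, int):
        return units
    return sum(total(v) for v in units.values())

subcounty = total(local_governments["general purpose"]["subcounty"])  # 35,937
general = total(local_governments["general purpose"])                 # 38,971
print(f"subcounty: {subcounty:,}  general purpose: {general:,}  all local: {total(local_governments):,}")
# -> subcounty: 35,937  general purpose: 38,971  all local: 87,849

As the tally shows, the 19,431 municipal governments discussed in this entry are one category within a much larger universe of local government units.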
While their powers are derived from the state constitution and laws enacted by the legislature, municipalities themselves are created only by the request and consent of the residents within the municipal area. Communities may incorporate as cities if their residents initiate a petition conforming to state laws. For example, in California, incorporation may be initiated by resolution of a county board of supervisors or by citizen petition, which must be signed by at least 25 percent of the locally registered voters. The petition is then submitted to the Local Agency Formation Commission (LAFCO), which reviews the proposed plan for incorporation, conducts a hearing, and then approves or denies the proposal. If the petition is approved, the board of supervisors conducts a hearing and, if there is no majority protest, calls an election. The incorporation must be approved by a majority of the voters living within the proposed municipality. A municipality can also be disincorporated if petitioned by more than 20 percent of the voters, and a majority of those voting in a special election will determine the outcome. Municipal governments may legislate to protect the safety, welfare, and health of their residents. Municipal governments safeguard lives and property through fire and police protection; provide public facilities such as water, roads, and sewers; and determine local land use in a way that is most compatible with community economic, environmental, and social goals by planning and zoning. Most local governments rely heavily on intergovernmental transfers, property taxes (sometimes income taxes), and miscellaneous charges to provide services to their residents. According to the 2002 Census of Governments, intergovernmental revenue made up about 25 percent of total municipal government revenue: about 18.5 percentage points came from state governments, 4.5 points from the federal government, and the remaining 2 points from other local governments. General revenue from their own sources made up about 60 percent of total municipal revenue, with about 36 percentage points coming from taxes and the rest from charges and miscellaneous fees. The remainder of municipal finance came from utility revenues. As local government responsibilities have increased but their own revenue sources have not kept pace, partially due to declining state support and receding federal aid, some munici-
pal governments have started looking for alternative service provision to reduce their financial burdens. Some municipalities seek formal or informal intergovernmental service agreements, and others try to control costs while maintaining a high quality of services by privatizing public services, although the results of privatization or contracting out have turned out mixed. In order to secure a city’s revenue base, urban policy making has been argued to be largely determined by economic forces. Cities must compete for mobile wealth in the intergovernmental marketplace or face perpetual fiscal crises and must pursue “developmental policies”—policies that provide incentives for investors and higher-income residents to locate in the jurisdiction. Economic growth is often viewed as a compelling city interest that all citizens share, as it enhances the tax base and increases economic opportunity. Local government is primarily concerned with economic growth, and many cities have formed an apparatus of interlocking progrowth associations and government units, which is commonly referred to as “growth machines.” Local business communities are the major participants in growth coalitions, with their continuous interaction with public officials giving them systemic power, and they are assisted by lawyers, syndicators, and property brokers. There are three general types of municipal government: the mayor-council, the commission, and the council-manager. Some cities may have developed a combination of two or three of these forms. A mayor-council is the traditional form of municipal government in the United States and was used by nearly all American cities until the early 20th century. The structure of a mayor-council municipal government is similar to state and federal governments and preserves the basic separation of powers between the legislative and executive branches, with an elected mayor as chief of the executive branch and an elected council representing the legislative branch. There are two variations of mayor-council government: the weak-mayor and strong-mayor forms. Under the weak-mayor form, the council possesses both legislative and executive authority and may appoint important administrative officials and approve the mayor’s appointees. The council also exercises primary control over the municipal budget. The mayor lacks administrative power, and the
authorities are fragmented. The weak-mayor form suits only smaller and simpler governments. Under the strong-mayor form, the mayor appoints heads of city departments and other officials, has the power of veto over ordinances, and is responsible for preparing the city’s budget. The council passes city ordinances, sets tax rates, and apportions money among city departments. The strong-mayor form makes the mayor the dominant force in city government and is very popular in many large American cities. A commission form of government combines both the legislative and executive functions in one group of commissioners, usually three or more in number and elected citywide. Each commissioner supervises the work of one or more city departments. One of the commissioners may be named chairperson of the body or mayor, but the mayor normally has no more authority than other commissioners. This form provides for no separation of powers, lacks internal checks and balances, and lacks a strong chief executive. The commission form first evolved in response to a major hurricane in Galveston, Texas, as an emergency recovery mechanism and is rarely used today in American cities. A council-manager form of government consists of a small city council, usually five to seven people, that is often elected citywide and responsible for making a budget, passing ordinances, and supervising the administration of municipal government. A professional city manager is hired with full responsibility for day-to-day operations. A mayor may exist but only performs strictly ceremonial duties and has no involvement in the city’s administrative affairs. Similar to the commission form, all executive and legislative powers reside in the council alone, and no separation of powers or checks and balances is provided. The professional administrator carries out the decisions of the council, produces a budget, and supervises most of the departments. Usually, there is no set term, and the administrator serves as long as the council is satisfied with his or her work. Bringing in such a business manager–like professional administrator appears to improve the efficiency of local government, and the model has been adopted in more and more upper- and middle-class suburban cities. The last half of the 20th century witnessed significant changes of population settlement in many
advanced countries, symbolized by population spreading to low-density suburbs. In the United States, the rate of population growth in the suburbs has been more than twice that of the cities. The global economy imposes another strong challenge, forcing localities to compete in the international marketplace. All these transformations have profound consequences for local government. New regionalism has been developed as one major mechanism that helps cities to solve issues such as city-suburb disparity, unbridled sprawl, and global competition from a regional perspective. Municipalities across the United States have used various new regionalism approaches to expand their jurisdiction or reach beyond formal borders, including city-county consolidations, annexations, interlocal agreements, extraterritorial jurisdiction, and multitiered governments. New regionalism requires that communities look outward to the larger metropolis and consider their collective future. City-county consolidation—forming a unified municipal (metropolitan) government—has been one of the most debated approaches to achieving economies of scale by reducing the number of local government units. Since the consolidation of New Orleans in 1805, there have been hundreds of local government consolidation attempts, but only 34 of them succeeded. The 1960s witnessed some significant successful consolidation efforts, including the Nashville-Davidson, Tennessee, consolidation in 1962, the Jacksonville-Duval County, Florida, consolidation in 1967, and the Indianapolis-Marion County, Indiana, consolidation in 1969. There had not been a consolidation on the scale of these three until the Louisville-Jefferson County, Kentucky, governments merged into a metropolitan government in 2000. Pittsburgh, Pennsylvania, Albuquerque, New Mexico, Fort Wayne, Indiana, Buffalo, New York, Topeka, Kansas, and Des Moines, Iowa, are among several areas that have recently tried and failed to pass such legislation or are considering city-county consolidation. Proposals to consolidate a city and county government have not achieved a high rate of success (less than 15 percent) because consolidation alone does not guarantee effectiveness, efficiency, and equity in municipal governments. Many times, consolidation is just used as a political mechanism to alter
a territorial boundary and political structure for different interest groups. Besides city-county consolidation, there are other ways in which the agenda of new regionalism can be achieved. For example, the twin cities of Minneapolis and St. Paul, Minnesota, applied a multitiered approach. A tiered approach is more agile than consolidation because it allows for some problems to be managed at their most appropriate local level and for regional problems to be addressed by a metropolitan authority. A linked function approach, as was attempted in Charlotte, North Carolina, is another experiment to include a city and its county under the rubric of governance by functions. Unlike consolidation or multitiered systems, linked functions are flexible and require no new levels of government. Another route to new regionalism lies in the notion of “complex networks,” which advocates large numbers of independent governments voluntarily cooperating through multiple, overlapping webs of interlocal agreements. The Pittsburgh area has adopted various kinds of intergovernmental agreements in order to maximize efficiency through complex networks. Further Reading Leland, Suzanne M., and Kurt Thurmaier, eds. Case Studies of City-County Consolidation: Reshaping the Local Government Landscapes. Armonk, N.Y.: M.E. Sharpe, 2004; Logan, John R., and Harvey L. Molotch. Urban Fortunes: The Political Economy of Place. Berkeley: University of California Press, 1987; Peterson, Paul. City Limits. Chicago: University of Chicago Press, 1981; Savitch, H. V., and Ronald K. Vogel. “Paths to New Regionalism.” State and Local Government Review 32, no. 3 (2000): 158–168; Wilson, Woodrow. “The Study of Administration.” Political Science Quarterly 2 (1887): 197–222; Wright, Deil S. “Federalism, Intergovernmental Relations, and Intergovernmental Management: Historical Reflections and Conceptual Comparisons.” Public Administration Review 50, no. 2 (1990):168–178. —Lin Ye and Hank V. Savitch
municipal home rule Home rule refers to the concept of self-governance and the extent to which a particular level of govern-
ment has authority to govern its own affairs. Synonymous with sovereignty and autonomy, the notion of home rule can be applied to any level of government. In the context of American municipal government, home rule refers to the extent to which municipalities (commonly referred to as cities) govern their own affairs, free from interference by state or national authorities. In recent decades, municipalities in the United States have seen a tremendous erosion of their powers of home rule, particularly over fiscal matters. Today, the regulation of land use remains the primary area over which municipalities retain substantial autonomy. The U.S. Constitution makes no mention of the governments of cities, instead establishing spheres of authority for state and national governments. Prior to the mid-19th century, the debate over the status of cities remained obscure, as events in sparsely populated and geographically isolated cities and towns usually had limited importance beyond their borders. However, as cities began to industrialize in the middle to late 19th century, new economic, social, and political conditions emerged. In particular, industrialization and rapid immigration from Europe as well as internal migration from American farms overwhelmed cities with crowding, pollution, and disease, placing tremendous strain on city services and infrastructure. In order to respond adequately, cities began to petition their state governments for new powers of taxation to pay for services such as infrastructure development, public safety, and education. The crisis spawned a debate over the inherent powers of cities and forced the first serious appraisal of the legal status of cities in the American political system. Although a few legal scholars had previously wrestled with the question of municipal home rule, an 1868 ruling by Iowa Supreme Court justice John F. Dillon has largely framed subsequent debates over municipal sovereignty. Known as Dillon’s rule, the principle established that cities are legally “creatures of the state,” possessing no sovereignty independent of what their state governments permit. According to Dillon, “Municipal corporations owe their origin to, and derive their powers and rights wholly from, the Legislature. [The Legislature] breathes into them the breath of life without which they cannot exist. As it creates so it may destroy. . . .” Although Judge Dillon’s decision relegated cities to the status of supplicants to
their state governments, his argument was not well supported by the historical record. Defenders of municipal autonomy pointed out that the sovereignty of numerous American cities during the colonial period preceded the creation of their state governments. Nevertheless, the principle of Dillon’s rule became the dominant legal paradigm governing relations between states and municipalities. The question of why Dillon’s rule came to define relations between states and cities has been addressed by a number of scholars. Most agree that state supremacy was ultimately decided and enforced for political rather than constitutional reasons. Urban historian Stephen Elkin argues that Dillon’s underlying purpose was “to protect private property [from the] kind of democracy developing in cities.” For Elkin, state judicial oversight and other constraints on cities, including the requirement of balanced budgets, limits on taxing authority, and restrictions on borrowing, were imposed to protect corporate interests from the emergence of countervailing regulatory power by some reform-minded municipal governments. In addition, state officials argued that state regulations were needed, in their view, to rid cities of corrupt practices by immigrant-run urban political machines. In analyzing the politics of the home rule debate, scholars also highlight ethnic and religious conflict between Protestant native-born citizens operating at the state level and mostly Catholic and Jewish immigrants from Ireland and southern and eastern Europe who had assumed political control of cities such as Boston, New York, and Chicago. In short, Dillon’s rule enabled largely Protestant rural interests that controlled state legislatures to contain emerging political power in America’s cities and to thwart potential government interference in capital and labor relations. In doing so, these interests could ensure that cities would remain primarily instruments of unregulated capitalism rather than agents of social reform. By the early 20th century, however, urban reformers had mounted a counteroffensive. In making their case for greater home rule, urban reformers drew upon a cultural history of self-governance and deeply rooted traditions of American local control. Known as the “home rule movement” and led by disaffected urban middle-class Protestants, the effort
resulted in allowing cities to write their own constitutions (called charters) to permit increased home rule. As a result, cities in many states—mostly large cities—gained greater leeway to set up their own forms of government, conduct municipal elections, and engage in other initiatives not otherwise prohibited by state law. For example, today, cities with home rule charters will probably have greater leeway in passing living wage ordinances and establishing public financing of municipal elections than cities without home rule charters. Perhaps the most significant legal victory for the municipal home rule movement came in the 1926 U.S. Supreme Court decision Village of Euclid v. Ambler Realty Co., which essentially placed the power to regulate land use under the purview of local governments. Today, the power to regulate land development remains the most important area over which municipalities exercise home rule. As part of their strategy to win greater latitude to govern themselves, cities also pushed to establish a legal distinction between issues considered “municipal affairs” and those of “statewide concern.” However, problems quickly arose in clearly defining whether an issue was of municipal or statewide concern. For example, policy areas such as education and housing do not easily lend themselves to rigid classification as either solely municipal or statewide concerns. In practice, significant gray areas emerge in determining which level of government should ultimately decide a particular issue. What has emerged in most states is a political tension between issues of statewide concern and municipal affairs, similar to ongoing political turf battles between federal and state governments. Historically, state courts have intervened on a case-by-case basis to draw these lines, usually erring on the side of Dillon’s rule. Although the home rule movement dates from the 19th century and saw its zenith during the 1920s, several states have granted their cities home rule fairly recently, including Massachusetts (1965), North Dakota (1966), Florida (1968), Pennsylvania (1968), Iowa (1968), and Montana (1972). In 1970, Illinois gave home rule status to all cities with populations of more than 25,000 and made provisions for smaller cities to achieve home rule by citizen petition and voter approval. Some form of home rule now operates in more than half the states and two-thirds of
cities with populations of more than 200,000, although its extent varies from state to state. In recent decades, cities have seen a substantial erosion of their home rule authority, particularly over fiscal matters. For example, in 1978 California voters passed Proposition 13, which capped property taxes at 1 percent of assessed value, severely limiting the ability of cities and other local governments to raise money. Soon after, states such as Massachusetts and later Colorado passed similar laws constraining the ability of local governments to raise money. Moreover, in order to address budgetary crises during the 1990s and early 2000s, some states simply withheld funds allocated to local governments in order to cover their budget shortfalls, forcing drastic service reductions. The decline of fiscal home rule has led to a number of consequences, some predictable, some unintended. In addition to overall declining service levels, cities in California have scrambled to find alternative revenue sources. In order to maximize their tax revenue, California cities are now far more inclined to approve commercial developments, which generate the highly coveted sales tax, rather than housing, a practice known as the “fiscalization of land use.” This unforeseen result has both contributed to a shortage of affordable housing in the state and fueled historically high property values. Although cities have been most active in pressing state governments for home rule, a modest home rule movement for counties has been under way. Today, 37 states permit counties some form of home rule charter, usually more limited than that for cities. Nationwide, only about 80 of the nation’s 3,086 counties enjoy even such limited home rule. Most of these are large, urban counties, and 11 are in California. Although home rule seems inherently good in terms of democracy and local control, critics point out that it contributes to social inequality by making municipal governments responsible for policies that often have wider social implications. For example, critics especially blame home rule over land use as a primary factor in the lack of affordable housing and in particular the economic and ethnic segregation of housing patterns. In addition, in parts of the country where municipalities also govern public schools, home rule over land use has led to gross inequalities in access to adequate education for minorities and the poor. Armed with unfettered home rule authority
over land use, many cities simply refuse to provide any affordable housing at all. However, in a few states, such as Oregon and Washington, state governments have imposed fairly strict conditions that require most cities and counties to build affordable housing. Critics respond that such state interference violates local government home rule, but so far to no avail. Ultimately, the ideal of home rule, a cherished tradition of American local government, remains in perpetual tension with the notion of local governments as “creatures of the states” and the need to provide statewide solutions to issues of regional or statewide importance. Further Reading Christensen, Terry, and Tom Hogen-Esch. Local Politics: A Practical Guide to Governing at the Grassroots. Armonk, N.Y.: M.E. Sharpe, 2006; Danielson, Michael N. The Politics of Exclusion. New York: Columbia University Press, 1976; Davis, Mike. City of Quartz. New York: Vintage Books, 1990; Elkin, Stephen. City and Regime in the American Republic. Chicago: University of Chicago Press, 1987; Gottdiener, Mark. The Social Production of Urban Space. Austin: University of Texas Press, 1985; Judd, Dennis R., and Todd Swanstrom. City Politics: Private Power and Public Policy. New York: Longman, 2002; Linowes, R. Robert, and Don T. Allensworth. The Politics of Land Use. New York: Praeger Publishers, 1973; Logan, John R., and Harvey L. Molotch. Urban Fortunes: The Political Economy of Place. Berkeley: University of California Press, 1987; Ross, Bernard H., and Myron A. Levine. Urban Politics: Power in Metropolitan America. Belmont, Calif.: Thompson Wadsworth, 2006; Weiher, Gregory R. The Fractured Metropolis: Political Fragmentation and Metropolitan Segregation. Albany: State University of New York Press, 1991. —Tom Hogen-Esch
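A rough worked example helps show the fiscal incentive behind the “fiscalization of land use” described in this entry. Only the 1 percent property tax cap comes from the entry; every other figure below, including the assessed value, the city’s assumed share of the property tax, the taxable sales, and the assumed local sales tax share, is a hypothetical illustration.

# Hypothetical comparison of annual city revenue from two land uses.
# The 1% property tax cap is from Proposition 13 (cited in the entry);
# all other numbers are assumed for illustration only.

assessed_value = 500_000            # assumed assessed value of one new home
prop13_cap_rate = 0.01              # Proposition 13: property tax capped at 1% of assessed value
city_share_of_property_tax = 0.20   # assumed: the city keeps only part of the 1%; the rest
                                    # goes to the county, schools, and other local agencies

annual_taxable_sales = 10_000_000   # assumed sales at a new retail center
city_sales_tax_share = 0.01         # assumed local share of the sales tax rate

housing_revenue = assessed_value * prop13_cap_rate * city_share_of_property_tax
retail_revenue = annual_taxable_sales * city_sales_tax_share

print(f"city revenue from the home:          ${housing_revenue:,.0f} per year")   # $1,000
print(f"city revenue from the retail center: ${retail_revenue:,.0f} per year")    # $100,000

Under assumptions like these, a single retail project can yield a city far more recurring revenue than many homes, which is the incentive critics say leads cities to favor commercial development over housing.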
planning boards Planning boards, also known as planning commissions, zoning commissions, and zoning boards, serve an important local and regional government function. Boards work to advise local legislative bodies and planning departments on a wide range of land development, zoning, and land use planning issues. Additionally, planning boards are often delegated
power to make decisions on site plans and other development decisions. They are intended to serve as a link between the community at large, developers, planning professionals, and elected officials. The origin of planning boards in the United States dates back to the 1920s. Most localities in the United States today have zoning ordinances, comprehensive plans, and planning boards. Land use planning in the United States was and is largely a local matter. However, the forces that have shaped land use planning and the structure and powers of planning boards are national in scope. Land use controls have a long history in the United States. They began as soon as cities emerged in the American colonies. For example, farms, cemeteries, and similar uses of land that were once outside the city became a problem once cities grew. Farms produced animal waste that became a public health concern when city dwellers moved closer, and cemeteries were thought to produce unhealthy vapors. Cities passed laws that prohibited these land uses. Beginning in the 1890s, the Progressive movement sought to change the way cities were governed and to improve the conditions of urban life. Modern land use planning in the United States emerged out of these goals, and the creation of planning boards was part of this movement. As the conditions of urban slums were exposed, efforts were made to make cities more pleasant places to live. Parks, both large and small, emerged in cities during this time as efforts to create beautiful places and to meet the recreational needs of urban populations. These efforts, known as the City Beautiful movement, sought to improve the aesthetics of buildings in addition to building parks. Improvement of city government was also a focus of both the Progressive and City Beautiful movements. City planners emerged early in the 20th century to shape American cities, and zoning ordinances were created to define the ways that property could be used. For example, a typical zoning ordinance defines residential, recreational, commercial, industrial, and agricultural areas. The first comprehensive zoning ordinance in the United States was developed in New York City in 1916. Zoning ordinances spread quickly through American cities and developing suburbs, and by 1930 zoning had emerged as a powerful tool for planners to use in shaping the physical and human landscape of communities.
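Because a zoning ordinance of the kind described above is essentially a rulebook mapping districts to permitted uses and dimensional standards, it can be sketched as a simple lookup table. The districts, uses, and lot sizes below are invented for illustration and do not come from any actual ordinance.

# A toy zoning ordinance: each district lists its permitted uses and a
# minimum lot size. All names and numbers are hypothetical.
ORDINANCE = {
    "R-1": {"uses": {"single-family residence"}, "min_lot_sq_ft": 7_500},
    "C-1": {"uses": {"retail", "office"}, "min_lot_sq_ft": 5_000},
    "M-1": {"uses": {"light industry", "warehouse"}, "min_lot_sq_ft": 20_000},
}

def conforms(district: str, proposed_use: str, lot_sq_ft: int) -> bool:
    """Return True if a proposed use and lot size conform to the district rules."""
    rules = ORDINANCE[district]
    return proposed_use in rules["uses"] and lot_sq_ft >= rules["min_lot_sq_ft"]

print(conforms("R-1", "single-family residence", 9_000))  # True
print(conforms("R-1", "retail", 9_000))                   # False: use not permitted in the district
print(conforms("C-1", "retail", 4_000))                   # False: lot smaller than the minimum

In practice, proposals that fail such a check are the kind of matters that come before planning boards and zoning boards of appeals, whether as requests for rezoning, variances, or special permits.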
Zoning and comprehensive planning remain controversial. Supporters of zoning ordinances argue that these rules create a blueprint for the development of a community. Specifically, they promote public safety and public health, discourage overcrowding of land and overpopulation, and provide for good public services and public utilities. Furthermore, zoning protects the integrity of neighborhoods. A good example is the attempt to keep residential areas safe from the encroachment of factories and other businesses. Critics of zoning have two basic concerns. First, opponents argue that zoning, by limiting the use of land, was and continues to be a violation of property rights. A second criticism of zoning is that the regulations are set up to preserve middle- and upper-class enclaves at the expense of the rest of the community. Zoning can be used to exclude unwanted people or uses of property. The U.S. Supreme Court upheld the constitutionality of zoning in Village of Euclid v. Ambler Realty (272 U.S. 365, 1926). The basic structure of zoning and planning in the United States today still follows the recommendations developed in the 1920s by the U.S. Department of Commerce’s 1922 A Standard State Zoning Enabling Act (SSZEA) and the 1928 A Standard City Planning Enabling Act (SCPEA). By 1930, 35 states had passed versions of the SSZEA, and eventually all 50 states passed laws that allowed local and regional governments to zone property. In 47 states, the SSZEA still serves as the basis of state enabling legislation. The SCPEA proved less popular but was passed in many states, and it is still a basis of city planning throughout the United States. These two model laws recommended that states give local governments the authority to regulate land use through zoning in conformance with a comprehensive plan, recommended that public hearings be required before zoning laws were created and enforced, and recommended the establishment of planning boards to develop zoning laws and enforce them. The nation’s first planning board, the Los Angeles County Planning Commission, was created in 1922. After the U.S. Department of Commerce’s model state laws were issued, planning boards spread quickly as zoning ordinances were created. As modeled in the SSZEA and the SCPEA, a typical planning board serves a locality—a city or town—and has seven to
nine members who serve three- to four-year staggered terms. They meet twice a month, and members serve without compensation. There is a great deal of variation outside the typical structure, however. Boards also serve counties or other regional areas. Boards can be as small as five members, while others can have as many as 20 members. Terms of office range widely as well, with some members serving only two years and others serving six or more years. Some planning boards meet weekly, but most meet at least once a month. When planning board members are paid, they do the work for modest sums generally intended to cover only the costs associated with service. Boards are appointed by the community’s elected leaders and are made up of community members. Some boards include one member from the local legislative body, the city council or the town board. Planning board members often come from the business community. It is not uncommon for members to have experience in development and construction, such as builders, real estate agents, architects, and engineers. Other common professional backgrounds include lawyers and teachers. As with other public officials, the percentage of women on planning boards has grown in recent years. In most instances, planning board members are not required to get training. Training is most likely to be available in more populous communities. Planning board members also get training by attending regional or national conferences and workshops. Planning boards work with local government planning departments to carry out their responsibilities. These professional staffs vary in size and expertise. The planning department is responsible for giving advice to planning boards and processing the range of documents that are part of developments and comprehensive planning. Land use planning is still primarily a local government function undertaken with the authority of the state. The powers of planning boards vary significantly across the states and between localities within states. In some instances, planning boards serve largely in an advisory capacity to local legislative bodies, while in others, boards serve as the final decision makers on most, if not all, land use issues. Sometimes planning boards also hear appeals from property owners who are not satisfied with decisions made by other
boards, but in most localities, a separate zoning board of appeals serves this function. When planning boards emerged in the 1920s through the 1940s, they were limited to creating and administering zoning laws that restricted the use of land in a variety of ways, including primary uses of the land, the size of building lots, and how close buildings could be to the edges of lots. In recent decades, the issues that planning boards must be concerned with have changed to reflect the evolution of social beliefs and economic realities. For example, many communities have zoning for “single-family” homes. As the notion of the family has expanded, defining what a single family is has become more complex for communities. Similarly, as more Americans work for themselves and telecommute, home offices have become more prevalent. Zoning usually precluded such arrangements in areas zoned residential. A third example is the emergence of group homes and how towns can work them into zoning. Each of these examples challenges existing notions of what was acceptable traditional zoning. As zoning is a way to maintain the status quo, these new arrangements present challenges. Planning boards have also been forced to react to new social goals, often mandated by state and federal legislation, such as environmental regulation and historic preservation. Zoning laws have had to be updated with state and federally mandated environmental requirements concerning clean water and waste management. Many states require environmental impact statements for any new development. Many communities have also sought to preserve their historic assets encouraged by the passage of the National Historic Preservation Act of 1966 and other similar laws. This law created the National Register of Historic Places and required states to participate in creating the register. As a result, many local governments have created historic zoning districts that limit the ability of property owners to change historic structures and grounds. Finally, states and localities have recognized that the actions of individual communities have an impact beyond their own borders. Since 1990, nine states have instituted integrated statewide planning. This has impacted and sometimes superseded the work of local planning boards. In addition, some metropolitan areas have instituted regional planning and elimi-
nated local boards in favor of regional planning boards. Planning boards serve an important function in land use planning and development issues. At their best, they guide communities toward making well-informed choices on future development, both in comprehensive planning and on individual projects. But planning boards do not always work in this fashion. Boards can be so busy with the day-to-day work of reviewing development plans that they do not have time to consider the long-range implications of their actions. At times, boards can work to serve the interests of only segments of the community by thwarting all development, or in contrast they can be aligned too closely with developers. Even well-meaning boards can be overwhelmed by the financial resources of developers. Planning board members are largely volunteers, and localities often have only small professional development staffs. Further Reading Cullingworth, Barry, and Roger W. Caves. Planning in the USA, 2nd ed. New York: Routledge, 2003; American Planning Association. Available online. URL: http://www.planning.org. Accessed March 2, 2007. —William R. Wilkerson
property rights Few concepts have generated as much debate across the study of politics, law, economics, and philosophy as that of property rights. The commonly understood definition of a property right is the relationship between an individual and a physical object. However, scholars have long understood property rights as a bundle of rules that define the relationship between individuals and access to, control over, and use of virtually anything that produces a benefit to people, from natural resources, to written text, manufactured products, processes, and even ideas. Property rights are defined and enforced in a variety of ways, from social norms to courts, constitutions, and legislation. Examining the variety of ownership relationships in something even as simple as a plot of land helps illustrate the complexity of property rights. Generally, with land, an owner has the right to use, occupy, cultivate, transfer to others, share, rent, lease, or trade.
However, depending on the jurisdiction and resource, it may or may not include the right to extract and sell subsurface resources (such as minerals), or the right to sell the right to use a resource (such as leasing timber rights), or include rights to water that lies under or within the land or flows over it. Rights may be granted to a certain quantity of water, but there may be controls over its quality (such as the responsibility not to pollute), and restrictions may be placed on its use (agricultural versus residential use). Rights to water may be allowed to be sold for some purposes (such as agriculture) but not for others (such as habitat restoration). If it is a parcel under the riparian rights system, it would not include rights to use the land under a navigable river, although it would include the land to the river’s edge. Wildlife is probably regarded as state property and requires a permit to harvest. The title may include easements permitting neighbors access across the land and probably granting public utilities the right to inspect power lines. zoning may place restrictions on the type of activities allowed, and if in an urban area, it may not include air rights (the ability to build additional levels) and view rights (the right to prevent another property owner from obstructing a view). There is no universally accepted definition of a property right. Scholars typically examine some combination of five types of rights: access, withdrawal, management, exclusion, and alienation. Access rights are those that permit decision making about who may physically enter an area. Withdrawal rights determine how a physical unit of the resource is obtained. Management is the right to decide on the type of inputs and use patterns of an area. Exclusion refers to the right to determine who may have an access right. Alienation is the right to transfer a right via sale, leasing, or trade. There are four standard property right types, typically defined according to who holds the majority of rights. The most commonly discussed is that of private property. Private property rights are those that designate the majority of rights to an individual decision maker. For legal purposes, private property can be assigned to individuals or group entities such as corporations and government units. This legally defines the rights in relationship to other claims, so a city government may have private rights in relationship to another city government claiming the same resource, even though in both cases the property is owned by a public entity.
Private property is often contrasted with the situation of open access. Under open access systems, there are no effective rules regulating access to a resource. Garrett Hardin famously outlined the problem of open access property regimes in his article “The Tragedy of the Commons” (1968). Every individual has an incentive to use a resource before anyone else, leading to overuse and eventual ruin for all. He characterized resources such as forests, fisheries, and the global atmosphere as suffering from the tragedy of the commons. For Hardin, the tragedy could be averted only by making the resource public property under the control of a state that would then manage it according to the public good or by assigning private rights, so that the costs of wasteful overuse would fall on the individual owner, who would instead manage the resource to maximize profits. Various scholars have pointed out that many of the resources viewed as open access are actually common property. Under a system of common property, rights exist but are shared among a community of users rather than held by a single individual. The global atmosphere has very exact rights regarding use in terms of national air pollution laws, although the atmosphere is held in common among all nations. The important distinction is that if a resource is being overused, it is not that property rights do not exist, but that there may be a problem with the design or enforcement of existing rights. The same can occur with private property as well. Systems of common property are pervasive in both traditional societies and modern capitalist systems, and it is difficult to imagine effective alternative property structures for some types of resources, such as the atmosphere, outer space, many fisheries, and some hydrological resources. The final property type is that of public property, whereby the majority of rights belong to a legally recognized political unit, such as a city, county, state, or country. In some cases, public property is similar to other property right types, such as when a government unit is recognized as having the full legal rights of private property or when a resource owned by a government unit is shared among its citizens in a common property situation, such as a national park. Public properties are important for supplying resources that are regarded as necessarily open to all citizens and in many cases present their own unique management challenges.
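The “bundle of rights” typology described above lends itself to a schematic sketch: each right in the bundle can be held or not, and the standard regime types differ mainly in who holds which rights. The sketch below is a simplified illustration of that classification, not a legal model; the classification rules are stated loosely and only for clarity.

# Schematic sketch of the five rights discussed above and the four regime types.
from enum import Enum, auto

class Right(Enum):
    ACCESS = auto()      # who may physically enter
    WITHDRAWAL = auto()  # how units of the resource may be taken
    MANAGEMENT = auto()  # decisions about inputs and use patterns
    EXCLUSION = auto()   # who may hold an access right
    ALIENATION = auto()  # transfer by sale, lease, or trade

def classify(holder: str, rights: set) -> str:
    """Very rough classification of a property regime by holder and bundle held."""
    if not rights:
        return "open access (no effective rights are enforced)"
    if holder == "government unit":
        return "public property"
    if holder == "community of users":
        return "common property"
    return "private property" if Right.ALIENATION in rights else "private-like, limited bundle"

full_bundle = set(Right)
print(classify("individual or corporation", full_bundle))                # private property
print(classify("community of users", full_bundle - {Right.ALIENATION}))  # common property
print(classify("government unit", full_bundle))                          # public property
print(classify("nobody", set()))                                         # open access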
(Photograph caption: The Supreme Court has ruled that cities can seize homes for private developers. Getty Images)
Defining a property right is not merely an empty academic exercise. Much of political history can be fundamentally understood as a struggle over property right relationships and the role of the individual, community, and government in maintaining, enforcing, and distributing rights. There are two fundamental issues of debate. The first is the equity of the distribution of rights versus the efficiency of an economic system in generating wealth; the second is the rights of individuals versus the responsibilities owed to the broader society. Thinking about these issues extends as far back as Plato, who advocated common ownership as a means of overcoming distributional conflict in The Republic (370 b.c.), and Aristotle's rebuke in favor of private property. Property was considered so important that Aristotle devoted much of Politics (350 b.c.) to discussing the benefits of individual ownership and the limits necessary to promote the public good. Likewise, Roman law recognized an elaborate system
of individual private ownership alongside a system of public duties. During the Middle Ages, the lack of separation between church and state meant that all things were considered subject to the king’s ruling under authority ordained by God. The feudal land rights systems were transferred by the right of primogeniture, which required passing lands to one’s firstborn son, leading to land consolidation and the system of tenant farmers legally bound to the land known as serfdom. Modern property rights thought begins with Thomas Hobbes’s Leviathan (1651), in which property rights were the output of a sovereign whose authoritative command guaranteed the resolution of conflict and enforced rights. He writes, “my own can only truly be mine if there is one unambiguously strongest power in their realm, and that power treats it as mine, protecting its status as well.” The first break with the idea that rights are bestowed by a sovereign authority occurs with John Locke’s Two
Treatises of Civil Government (1690). Locke formulates the idea of natural rights, which holds that each person owns himself and has certain liberties that cannot be expropriated by others, not even by the sovereign authority. The input of labor toward a productive end establishes an inherent property right. This is later known as the labor theory of property and is incorporated into the work of authors as diverse as David Ricardo and Karl Marx. The sanctity of individual rights is echoed in William Blackstone's Commentaries on the Laws of England (1765–69), in which he postulates that property rights are a unified bundle under the exclusive authority of the right holder. This provides the conceptual foundation for the modern legal protection against trespass and the basis of nuisance laws. Blackstone's work was important in informing many of the writers of the U.S. Constitution and continues as a reference in judicial studies. David Hume (1711–76) rejected the idea of natural rights and proposed that property rights emerge from legal practice, social custom, and the economic necessity of limiting access to scarce goods. This became known as the pragmatic approach, and it is extended in the work of Immanuel Kant (1724–1804), who posits that because property rights place a claim on resources that limits the freedom of others and requires their respect, all rights are reliant on the mutual consent of the community and are therefore acquired, not innate. Anyone claiming a right to property necessarily accepts the legitimate right of a community to place limits in order to ensure sufficiently equitable distribution and gain general consent. The pragmatic approach is found in the American legal tradition through the work of U.S. Supreme Court associate justice Oliver Wendell Holmes. In his classic text The Common Law (1881), he rejects the natural law tradition of a comprehensive unit of absolute ownership and distinguishes the right of possession, which imposes responsibilities on others not to contradict the possessor's control over the resource, from the right of title, which imposes the expectation that others will recognize control rights even when the holder is not in immediate possession. The debate over the appropriate role of the individual property owner in relationship to society occurs in its most visible form in constitutional definitions
over the use of eminent domain and takings compensation. The Fifth Amendment of the U.S. Constitution requires that private property not be expropriated by government without compensation. Eminent domain is the legal right of a government to force the sale of a private property in order to pursue public purposes. A taking occurs when government action has deprived an individual of a recognized property right, for which the individual is therefore owed compensation. Limiting property rights when their exercise imposes a cost on others has long been recognized as a legitimate exercise of police power (the authority of government to prevent harm to its citizens). The defining case, Hadacheck v. Sebastian, 239 U.S. 394 (1915), ruled that the owner of a brickyard was not entitled to compensation when zoning by Los Angeles prohibited the activity, establishing that when a property owner's use is harmful, the use may be banned even if the ban reduces the value of the property. In Village of Euclid v. Ambler Realty Co., 272 U.S. 365 (1926), the use of zoning restrictions was determined not to constitute a taking, even when it reduced the overall value of a property. Miller v. Schoene, 276 U.S. 272 (1928), elaborated on what activities encompassed a nuisance and ruled that the higher economic value of orchards justified the destruction of ornamental cedars that hosted a fungus damaging to the orchards, even though the fungus did not harm the cedars. In Penn Central Transportation Co. v. New York City, 438 U.S. 104 (1978), the refusal to grant a permit to build office space on top of an existing building because of its historic landmark designation was ruled not a taking but rather an exercise of a community's authority to adopt regulations protecting its quality of life, even if they decrease the value of a property. In Lucas v. South Carolina Coastal Council, 505 U.S. 1003 (1992), a property owner had purchased a beachfront parcel prior to the existence of land use controls prohibiting development; the Court held that a regulation depriving an owner of all economically beneficial use of land is a taking requiring compensation, even if it was designed "to prevent serious public harm," unless the prohibited use was already barred by background principles of the state's property and nuisance law. The general understanding today is that government action in pursuit of a legitimate public interest that reduces the value of a property right is not a taking as long as the owner retains some viable use. While public projects such as highways, railroad lines, utilities, and military bases have long been allowed, the definition of public
interest has been expanded in recent years. In Hawaii Housing Authority v. Midkiff, 466 U.S. 229 (1984), the Court ruled that an act by the state legislature intended to redistribute what it saw as an oligopoly in land ownership inflating land values (with up to 72 percent of private land on one island owned by 22 individuals) did satisfy the public use doctrine and was within the state’s right to intervene to correct market failures. Kelo v. City of New London, 125 S. Ct. 2655 (2005), ruled that private land could be forcibly sold for a city industrial park, even though it included constructing a plant for a private company, since stimulating economic growth was a justifiable public end. Today, research is less concerned with broad ideological ideas than with the practical impact of property rights and their performance within political and economic systems. Most researchers would agree that when market prices closely reflect the values in society and there are functioning markets of voluntary exchange, a system of private rights will tend to be more efficient than other forms. However, when prices do not fully reflect a society’s values (such as in the case of many environmental resources or the value of equity) or when there are market failures (such as high transaction costs to exchanging goods), then private property rights can be suboptimal, and other forms of property may be preferred. With the increasing complexity of society, property rights are understood as flexible bundles that can be used innovatively by private firms, markets, courts, legislatures, and communities toward socially desirable ends. Contemporary approaches are more aware than ever of the economic, political, and ethical implications of different rule structures governing resource use. Further Reading Achian, Armen, and Harold Demsetz. “The Property Rights Paradigm.” Journal of Economic History 33 (1973): 16–27; Anderson, Terry, and Fred McChesney. Property Rights: Cooperation, Conflict, and Law. Princeton, N.J.: Princeton University Press. 2003; Becker, L. C. Property Rights: Philosophic Foundations. Henley, U.K., and New York: Routledge & Kegan Paul, 1977; Bromley, Daniel. Environment and Economy: Property Rights and Public Policy. Oxford: Oxford University Press. 1991; Cole, Daniel “New Forms of Private Property: Property Rights in
Environmental Goods.” In Encyclopedia of Law and Economics, Civil Law and Economics, edited by Boudewijn Bouckaert, and Gerrit de Geest. Cheltenham, U.K.: Edward Elgar, 2000; Cribbet, John, and Corwin W. Johnson. Principles of the Law of Property. Westbury, N.Y.: Foundation Press, 1989; Ellickson, R. C.. “Property in Land.” Yale Law Journal 102 (1993): 1,315–1,400. 1993; Ely, James E. The Guardian of Every Other Right: A Constitutional History of Property Rights. New York: Oxford University Press, 1992; Epstein, R. A. Taking: Private Property and the Power of Eminent Domain. Cambridge, Mass.: Harvard University Press, 1985; Hohfeld, W. N. Fundamental Legal Conceptions as Applied in Judicial Reasoning. Westport, Conn.: Greenwood Press, 1978; Libecap, Gary D. Contracting for Property Rights. Cambridge: Cambridge University Press, 1989; North, Douglass. Institutions, Institutional Change, and Economic Performance. Cambridge: Cambridge University Press. 1990; Ostrom, Elinor. Governing the Commons: The Evolution of Institutions for Collective Action. Cambridge: Cambridge University Press, 1990;———. “Private and Common Property Rights.” In Encyclopedia of Law and Economics, Civil Law and Economics, edited by Boudewijn Bouckaert, and Gerrit de Geest. Cheltenham, U.K.: Edward Elgar, 2000; Radin, Margaret Jane. Contested Commodities: The Trouble with Trade in Sex, Children, Body Parts, and Other Things. Cambridge, Mass.: Harvard University Press, 2001; Rose, C. “The Several Futures of Property: Of Cyberspace and Folk Tales, Emission Trades and Ecosystems.” Minnesota Law Review 83 (1998): 129–182; Sax, Joseph. “The Public Trust Doctrine in Natural Resource Law: Effective Judicial Intervention.” Michigan Law Review 68 (1970): 471–566; Weingast, Barry R. “The Economic Role of Political Institutions: MarketPreserving Federalism and Economic Development.” Journal of Law, Economics, & Organization 11, no. 1 (1995): 1–31. —Derek Kauneckis
recalls A recall election is one in which an elected official is subjected to removal from office by voters before his or her term is complete. It is the least common type of direct democracy, available for statewide officials
in only 18 states. (It is, however, fairly common at the local level). Generally, the recall works as follows: Citizens unhappy with an elected official collect signatures on a petition calling for the special election; if a sufficient number of signatures are collected, voters are asked to cast a ballot on a question such as, “Shall Governor X be removed from office?” If a majority of voters answer in the negative, the officeholder in question retains his or her seat. If the voters answer in the affirmative, the officeholder is immediately stripped of his or her position. The procedures that govern choosing a replacement vary, though the most common mechanism involves a subsequent special election. (The well-known 2003 recall election in California used a different procedure to select a replacement; it is discussed below.) Under the U.S. Constitution, federal officials are not subject to recall elections. Members of Congress may be expelled only by a vote of their colleagues, and other officeholders (e.g., the president and U.S. Supreme Court justices) can be removed from office only following impeachment. The founders’ disdain for direct democracy and their corresponding desire for independence on the part of elected representatives prevented them from instituting a recall procedure at the federal level. For them, periodic elections were the appropriate means of popular control of lawmakers. Some states have laws that appear to allow the recall of their federal officials, though it is unlikely that federal courts would permit such an effort. Even at the state level, recall procedures are fairly recent. Oregon was the first state to establish the recall possibility to statewide office, adding the provision to its state constitution in 1908. Since then, the recall has been a rarely used device at the state level. Voter dissatisfaction may be high, but the key challenge appears to be the signature requirement to place the question on the ballot. In order to guard against excessive recall elections, most states require a significant number of signatures to be collected within a specified period of time. In Kansas, for example, sponsors of a recall must obtain signatures amounting to 40 percent of those voting in the previous gubernatorial election, and they may circulate the petition for no more than 90 days—enormous barriers that have yet to be crossed, as no statewide official has ever faced a recall election in Kansas.
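As a rough illustration of the arithmetic behind such thresholds, the sketch below computes a petition requirement as a share of turnout in the prior election. The turnout figures used are hypothetical; only the percentage rates (40 percent for Kansas above, 12 percent for California discussed below) come from this entry.

```python
# Illustrative only: computing a recall petition threshold as a share of
# turnout in the previous election. The turnout numbers here are hypothetical;
# the percentage requirements (40% Kansas-style, 12% California-style) are the
# figures cited in this entry.
import math

def signatures_required(previous_turnout: int, pct_required: float) -> int:
    """Number of valid signatures needed to qualify a recall for the ballot."""
    return math.ceil(previous_turnout * pct_required)

# Hypothetical prior-election turnout figures:
print(signatures_required(800_000, 0.40))    # Kansas-style rule   -> 320000
print(signatures_required(7_500_000, 0.12))  # California-style rule -> 900000
```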
Additionally, a few states limit the grounds for a recall and mandate that a court determine whether the charges against the official comply with the state's law. Minnesota's constitution requires that a recall be permitted only in the event of "serious malfeasance or nonfeasance during the term of office in the performance of the duties of the office or conviction during the term of office of a serious crime." In most recall states, the matter is considered a political question rather than a legal one. Regardless, statewide recall elections very rarely occur. In fact, until 2003, only one governor had been successfully recalled from office: Lynn Frazier of North Dakota in 1921. (It is worth noting that Governor Evan Mecham was likely to be recalled by Arizona voters in 1988, but the state legislature impeached and removed him from office before voters had the chance.) In 2003, California voters held Governor Gray Davis accountable for a number of problems in that state. The language of the petition revealed the broad frustration with his administration; following a list of specific grievances, the petition closed by charging Davis with "failing in general to deal with the state's major problems until they get to the crisis stage. California should not have to be known as the state with poor schools, traffic jams, outrageous utility bills, and huge debts . . . all caused by gross mismanagement." The signature requirements in California for placing a recall vote on the ballot are relatively low compared to other states; those seeking the recall must obtain the signatures of 12 percent of the number who voted in the previous statewide election (in this case, about 900,000). Moreover, California does not limit the grounds on which a recall may be held. Even these modest requirements are normally stringent enough to keep the question off the ballot. More than 100 previous attempts had been made to recall California governors, but every one had failed for lack of signatures. In the 2003 case, an enormous contribution of $2 million by Congressman Darrell Issa provided the means to collect the necessary signatures. The special election took place on October 7, 2003, and voters faced a two-part ballot. First, voters were asked, "Shall Gray Davis be recalled (removed) from the Office of Governor?" Second, voters were
asked to select one name from a list of 135 potential replacements. (In the event that a majority of voters answered no to the first question, the second question would be ignored.) Some 55 percent of voters answered the first question in the affirmative, and 49 percent of voters (a plurality) selected actor Arnold Schwarzenegger to replace Davis. Other notable recall elections occurred at the local level. In 1978, Cleveland voters initiated a recall petition to oust mayor Dennis Kucinich after he fired the city’s police chief live on local television. Kucinich survived the recall by less than 250 votes out of more than 120,000 cast. San Francisco mayor Dianne Feinstein faced a recall election in 1983 after proposing a ban on handguns. A handful of activists took advantage of the city’s lenient recall procedures—the petition required the signatures of only 10 percent of those who voted in the previous election—to bring the matter to the voters. Feinstein shrewdly fought the effort by arguing that the recall was “an invitation to chaos” and added that “there is no candidate against whom to compare my record.” Her campaign worked; more than 80 percent of voters elected to keep her in office. Gray Davis notwithstanding, most successful recalls involve acts of malfeasance rather than issue differences. Even many supporters of the recall device concede that it should be used only in unusual cases, not for mere policy differences. Feinstein alluded to this in her campaign to retain her seat: “Orderly government cannot prevail on the shifting sands of a recall brought, not because of any corruption or incompetence, but because of a difference of opinion on an issue.” A notable recent example of a successful recall illustrates the point. In 2005, voters in Spokane, Washington, recalled Mayor Jim West amid charges of having offered internships and other perks to teenagers via the Gay.com Web site. Other charges surfaced that West had been accused of molesting young boys in the 1970s and 1980s. Following a campaign that focused entirely on these matters rather than policy issues, 65 percent of Spokane voters opted to remove West from office. Arguments for and against the recall device are fairly straightforward. On the plus side, the recall provides voters with a method of ensuring continuous accountability. Since many elected offices are for rather long terms—most governors serve four-year
terms, for example—there is ample opportunity for them to ignore the will of the voters between elections. Supporters of the recall argue that elected officials are obliged to follow the wishes of their constituents and that the long period between elections inappropriately allows those wishes to be ignored. If elections are about placing individuals in office to represent the voters, recall advocates argue, it follows that the voters should not have to wait until a predetermined date to remove unresponsive officials. Voters are especially justified in removing corrupt or incompetent officials as soon as possible. In addition, some maintain that the recall reduces the influence of special interests, mainly by keeping officials’ loyalties tethered to the general public even between elections. As with other forms of direct democracy, however, there is competing evidence that organized interests increasingly dominate the recall process; a very small number of wealthy individuals were certainly crucial in securing the necessary petition signatures in California, for example. Less-common arguments in favor of the recall include the claim that it is a more efficient process than impeachment, since it is unencumbered by partisanship. Finally, some argue that it encourages healthy citizenship by giving voters a strong incentive to monitor public affairs regularly. All of these arguments speak broadly to the desirability of maximizing the democratic control of elected officials. Opponents of the recall process tend to focus on a model of representation that permits independence on the part of the elected official. Governing, the argument goes, requires difficult choices whose benefits are often evident only in the long run. If officeholders are constantly worried that each and every decision they make might subject them to the immediate judgment of the voters, they might be less inclined to take the appropriate political risks. This is essentially the argument that prevailed among the founders at the time of the drafting of the U.S. Constitution. There are a number of lesser arguments against the recall, also. Some contend that there is a moral hazard that results from not forcing voters to live with their mistakes. Political scientist Larry Sabato articulated this point in an interview with Campaigns and Elections magazine shortly after the California recall: “It is a good lesson for people, elections matter, you
need to cast your ballot carefully. It is not a bad thing to force people to live through the consequences." Others have pointed out that recall election procedures can invite genuinely bizarre outcomes. In the California election, the large number of candidates made it possible for Davis to be removed from office with more votes of support than his successor received. Additionally, a group of political scientists reported in PS that language barriers make it very difficult for non-English-speaking voters in such elections; evidently, recall is a word that does not translate easily. They also found that voter competence to handle the recall question was not high, even among those who do speak English. In their survey, they discovered that 6 percent of those who preferred Davis nevertheless expressed support for the recall, leading these researchers to conclude that "some voters believed that in order to vote for Davis, they needed to vote yes on the recall."
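The oddity noted above, that an incumbent could be removed while drawing more support than the winning replacement, follows directly from the arithmetic of the two-part ballot. The sketch below uses invented vote totals rather than the actual 2003 returns to show how this outcome can arise.

```python
# Hypothetical two-part recall ballot, illustrating how an incumbent can be
# removed even while drawing more support than the winning replacement.
# All vote totals below are invented for illustration.

recall_votes = {"yes": 5_100_000, "no": 4_900_000}   # Part 1: remove the incumbent?
replacement_votes = {                                 # Part 2: choose a successor
    "Candidate A": 4_300_000,
    "Candidate B": 3_200_000,
    "Candidate C": 2_000_000,
}

removed = recall_votes["yes"] > recall_votes["no"]
winner, winner_votes = max(replacement_votes.items(), key=lambda kv: kv[1])

if removed:
    print(f"Incumbent removed despite {recall_votes['no']:,} votes of support.")
    print(f"{winner} wins the replacement race with {winner_votes:,} votes.")
    # Here 4,900,000 "keep" votes exceed the winner's 4,300,000; this is the
    # oddity critics of the California procedure point to.
```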
optimistically that an educated electorate might also minimize the need for recall elections by making wise selections in the first place. Further Reading Bowler, Shaun. “Recall and Representation: Arnold Schwarzenegger Meets Edmund Burke.” Representation 40, no. 3 (2004): 200–212; Cronin, Thomas E. Direct Democracy: The Politics of Initiative, Referendum, and Recall. Cambridge, Mass.: Harvard University Press, 1989; Felchner, Morgan E. “Recall Elections: Democracy in Action or Populism Run Amok?” Campaigns and Elections 1 (June 2004); Maskell, Jack. “Recall of Legislators and the Removal of Members of Congress from Office.” Report of the Congressional Research Service, 2003. Available online. URL: http://lugar.senate.gov/CRS%20reports/ Recall_of_Legislators_and_the_Removal_of_Members _of_Congress_from_Office.pdf. Accessed May 20, 2006; Saffell, David C., and Harry Basehart. State and Local Government: Politics and Public Policies. 8th ed. Boston: McGraw Hill, 2005. —William Cunion
referendums A referendum is a type of ballot measure, a special kind of election in which voters decide directly on an issue rather than voting for candidates for office. Referendums are similar to, but distinct from, initiatives. Both involve direct votes on issues, but referendums are held in response to an act of a legislature, while initiatives are citizen-driven proposals that do not involve the legislature. There are two major types of referendums. A popular referendum occurs when citizens collect signatures on a petition to refer a specific piece of legislation to the people for them to accept or reject. A legislative referendum occurs when lawmakers submit a legislative act for voter approval. Popular referendums are available to voters in 24 states, while legislative referendums can be held in all 50 states. The U.S. Constitution does not permit referendums (or initiatives) at the national level. Most of the founding fathers were highly skeptical of direct democracy, preferring a representative system that allows lawmakers to act somewhat independently of public opinion. James Madison most clearly defended
this argument in his essay Federalist 10, in which he claimed that the effect of delegating decision making to representatives would be to “refine and enlarge the public views, by passing them through the medium of a chosen body of citizens, whose wisdom may best discern the true interest of their country, and whose patriotism and love of justice will be least likely to sacrifice it to temporary or partial considerations. Under such a regulation, it may well happen that the public voice, pronounced by the representatives of the people, will be more consonant to the public good than if pronounced by the people themselves, convened for the purpose.” Like many of the founders, Madison was concerned about the possibility of the “tyranny of the majority” and more generally that the passions of the people would interfere with their reason. Some opponents of the Constitution, however, pointed out that the Revolution itself was the product of ordinary citizens and contended that more democracy is always the essential tool to control the tyrannical impulses of elected officials. The debate continues to resonate to this day, as arguments for and against direct democracy reflect these competing concerns. The primary argument in favor of direct democracy is very simple: It gives the people more control over the decisions of their government. Referendums provide an opportunity for citizens to reject decisions of their representatives. In an age in which elected officials are widely seen as unresponsive or “out of touch,” it is not surprising that most people strongly support the referendum process. The Initiative and Referendum Institute reports the results from a 2000 survey that found that those who supported the process outnumbered opponents in all 50 states by at least 30 percent and in 17 states by a margin of more than 50 percent. Moreover, they found that the highest level of support occurred in states in which ballot measures are most frequent. Support for the process is not merely abstract; a number of studies have found that voter turnout is higher in elections in which an initiative or referendum appears on the ballot. Along the same lines, supporters of direct democracy often contend that the process curbs the excessive power of organized interest groups. The argument begins from the premise that legislators are more beholden to so-called special interests than to voters, and as a result, laws reflect the small but pow-
erful groups rather than the electorate as a whole. Direct democracy, according to this line of thinking, provides a vital opportunity for voters to protect their own interests. As a rationale for direct democracy, this logic was a favorite of Progressive Era reformers in the early 20th century; as Robert LaFollette explained, “The forces of the special privileges are deeply entrenched. Their resources are inexhaustible. Their efforts are never lax. Their political methods are insidious. It is impossible for the people to maintain perfect organization in mass. They are often unaware and are liable to lose at one stroke the achievement of years of effort. In such a crisis, nothing but the united power of the people expressed directly through the ballot can overthrow the enemy.” More recently, some have argued that organized interests have subverted this goal by seizing control of the process. From the cumbersome process of obtaining signatures on petitions to the expense of television advertising, such groups are able to “buy” elections. Journalist David Broder sums up this perspective: “Though derived from a reform favored by Populists and Progressives as a cure for specialinterest influence, this method of lawmaking has become the favored tool of millionaires and interest groups that use their wealth to achieve their own policy goals.” This is still a matter of some controversy in the literature, though in the most comprehensive study of the initiative process, John Matsusaka concluded flatly that he was “unable to find any evidence that the majority dislikes the policy changes caused by the initiative.” Thus, if the critics are correct, interest groups have not only hijacked the tools of direct democracy, they have taken control of democracy itself. So if voters like the opportunity to vote directly on issues, and it gives them more of what they want, what could possibly be the problem? One potential concern involves the rights of minorities, though this is less of a concern with referendums than with initiatives, since referendums require the legislature to act first. Thus, referendums threaten the rights of minorities only if the legislature has already chosen to expand rights. Initiatives more clearly jeopardize minorities, as there is no intermediary body to resist a tyrannical majority—just as Madison warned. Because
the legislature must act first, referendums generally do not pose an unusual threat to those in the minority. A rare example of this possibility nearly occurred in Alabama in 2000. An anachronistic provision of the state’s constitution prohibited interracial marriage, and lawmakers in both chambers unanimously agreed to amend the constitution to remove the ban. Although the provision had been unenforceable since the U.S. Supreme Court’s ruling in Loving v. Virginia (1967), the constitutional change had to be ratified by the state’s voters, just as any other amendment to the constitution. Voters approved the repeal, but with only 60 percent of the vote; two of five Alabama voters elected to keep the unenforceable ban on interracial marriage in their state’s constitution. Although the effect would have been merely symbolic, a modest change in the vote would have demonstrated the risks that referendums pose to minorities. Other arguments against referendums require less speculation and raise significant questions about the desirability of this process. The key concern expressed by most opponents of direct democracy is that voters lack the competence to make rational decisions on complex policy issues. A wide body of political science literature has repeatedly affirmed that most citizens are highly uninformed about political issues and that they often hold extremely inconsistent opinions based on very few considerations. With this in mind, critics argue that such voters will be susceptible to manipulation by clever ad campaigns or slogans and will ultimately support measures that “sound good” without having seriously considered the potential costs of the proposal. In this light, reconsideration of Matsusaka’s optimistic conclusion is warranted. After all, just because voters are happy with their decisions does not necessarily indicate that they would have made the same choices were they fully aware of the competing arguments. Voter ignorance may simply continue beyond the election itself. The problem of voter competence is not merely abstract. The terminology used in ballot measures can easily result in voter confusion. Consider the Alabama example again. This is the language as it appeared on the ballot: “Proposing an amendment to the Constitution of Alabama of 1901, to abolish the prohibition of interracial marriages.” To the careful, educated reader, the meaning is clear, but it is not
hard to imagine how the double negative in this sentence might confuse some voters. Such problems are a unique danger of direct democracy; representative democracy may have problems, but legislators at least know what they are voting on. Some scholars, however, have found that voters are surprisingly successful at employing various information shortcuts to make smart decisions on even the most complicated ballot measures. In a widely cited article, Arthur Lupia demonstrated that California voters were able to navigate a multitude of insurance reform proposals not by acquiring a large amount of information about the proposals themselves, but by making use of various cues in the public debate. For example, an otherwise uninformed voter might correctly oppose a measure supported by the insurance industry even if he or she does not know its details. Endorsements from trusted public officials and established groups might have the same effect, and the warnings of an utterly irrational electorate may well be overstated. Nevertheless, if voters must rely on elected officials and interest groups for the referendum process to work properly, the key arguments in favor of the direct vote are severely compromised. Direct democracy, including the referendum, is becoming increasingly popular. There were more than four times as many ballot measures at the state level in the 1990s as there were in the 1960s, and there is no sign that this trend is waning. It is not uncommon for voters in some states to cast ballots on more than a dozen separate measures; the voter “pamphlet” for the 2004 election in California ran more than 150 pages, driven mainly by analysis of 15 ballot measures. There is even renewed interest in the idea of a national referendum. One subject that has long been a popular one with supporters of direct democracy is war. During the period between the world wars, a number of prominent Americans advocated a Constitutional amendment that would require a popular vote before committing the country to war. President Franklin D. Roosevelt put a quick end to the proposal in 1938 by expressing strong opposition to the idea. During our current time of war, this idea serves as a tangible case illustrating the pros and cons of the referendum process. If Americans were to vote directly on our involvement in a foreign war, the outcome might well result in a deci-
sion that more accurately reflects current public opinion, but it is easy to see how such a vote might not be “more consonant to the public good.” Further Reading Broder, David. Democracy Derailed: Initiative Campaigns and the Power of Money. New York: Harcourt, 2000; Cronin, Thomas E. Direct Democracy: The Politics of Initiative, Referendum, and Recall. Cambridge, Mass.: Harvard University Press, 1989; Gerber, Elisabeth R. The Populist Paradox: Interest Group Influence and the Promise of Direct Legislation. Princeton, N.J.: Princeton University Press, 1999; Lupia, Arthur. “Shortcuts Versus Encyclopedias: Information and Voting Behavior in California Insurance Reform Elections.” American Political Science Review 88 (1994): 63–76; Madison, James. Essay 10, The Federalist Papers. New York: Bantam Books, 1982; Matsusaka, John G. For the Many or the Few. Chicago: University of Chicago Press, 2004. —William Cunion
special districts Local governments can be placed into two broad categories. There are general purpose governments and special district governments. General purpose governments are responsible for an array of public services. Examples of general purpose governments include counties, cities, towns, and townships. Special district governments have narrow jurisdictions and are quite often funded through special taxes. They provide such services as electrical power, economic development, fire protection, higher education, hospitals, parks, libraries, utilities, mosquito control, real estate development, school construction, sanitation, and transportation. There are approximately 35,000 special district governments in the United States. Special districts that have the ability to charge fees for their services are known as enterprise districts. Those that must rely on general revenues alone, such as property taxes, are known as nonenterprise districts. However, in addition to their fee-based revenue, enterprise districts can also receive appropriations from the general revenue fund of counties or cities. Special districts that have their own governing boards are known as independent districts, while those that are governed
by an existing general purpose government are known as dependent districts. Special districts are a major component of the American federal system. Their influence has increased over the past 40 years. The scope of their activity increased more than 150 percent from 1957 to 1992, more than any other type of government. State laws vary, but for the most part, a city or county may create a special district according to provisions detailed by statute. Private citizens may also, of their own accord, petition the state government for the creation of a district. Special districts are created for a variety of reasons. It may be the case that a general purpose government has failed to provide a certain service. On the other hand, there could be a desire to take a policy issue out of the controversial arena of local politics and make policy making less visible. Another reason for the creation of special districts is the need to allow unincorporated areas to provide services usually associated with urban areas or to provide for regional services whose provision does not fit into the framework of existing municipal boundaries. A special district could also be created to satisfy conditions of a federal grant. The most pressing reason for the creation of special districts, however, is the desire of local officials to get around tax and expenditure limitations (TELs) that have been placed on them by state governments or through citizen initiatives. Political analyst Donald Axelrod refers to special districts as “political bomb shelters.” They operate below the surface and provide shelter for government activities that would otherwise come under greater public scrutiny. Given the low visibility of special districts, it is not surprising that voter participation in elections for special district governing boards is generally quite low, with 5 percent turnout being considered high. In addition, special districts can be difficult to monitor for those who are supposedly in charge of them. The directors of the Brentwood Recreation and Park District in Contra Costa County, California, did not discover that the government of which they were technically in charge went out of existence until seven months after the fact. The sheer amount and variety of local governments is breathtaking. An American would achieve no small feat by simply locating all the different units of local government that exercise authority in a given
area, which could be more than a dozen. As opposed to city or county limits that are marked by road signs, the boundaries of special districts are not obvious to passersby. Locating and charting the special districts that have jurisdiction over one's life and property could be a daunting task, and even if one did have a professional background in surveying or geography and were able to identify and draw the boundaries for all the governments in one's area, there is no guarantee that the results of one's labors would provide a clearer picture. It is not uncommon for special districts to have unusual shapes and confusing boundaries. On the other hand, special districts can provide public services according to the unique characteristics of local demand. In addition, the costs of public services are directly linked to the benefits. Furthermore, because they are specific entities devoted to particular concerns, they are responsive to their constituents. It can be argued, however, that the myriad special districts in America contribute to an inefficient and overlapping system of providing public services. They hinder regional planning and, due to the sheer number of governmental units, lessen accountability. This is especially apparent when one ponders the massive reserves that have been accumulated by many special districts. There are four main schools of thought with respect to the proliferation of special districts. These are the institutional reform, critical political economy, public choice, and metropolitan ecology schools. The institutional reform school is a legacy of the Progressive Era movement, which was most active from the late 19th to the early 20th centuries. Theorists in this area call for the integration of smaller units of local government into larger and more efficient macrogovernments. Institutional reform theorists prefer strong regional governments—metropolitan or county—that would provide an efficient and accountable means to implement administrative policies. Of course, this would involve the elimination of special districts. Special districts lead to the fragmentation of government, which, so the argument goes, leads to a decrease in both the capabilities of general purpose governments and public accountability. Most citizens are not aware of the special district governments
in their area until a tax bill arrives. Even then, it is quite possible that taxes for special districts are hidden in consolidated bills of another government that serves as the collection agency for the special district. In this way, according to John C. Bollens, a leading expert on government reform, special districts are “phantom governments.” Another manifestation of this lack of accountability is the potential for corruption. Special districts may be created with the sole purpose of providing employment opportunities for governing body members and their relatives and friends. The approach of the critical political economy school closely resembles that of the institutional reform school. The main differences are the greater emphasis of the critical political economy scholars on questions of equity and a lower level of concern over bureaucratic efficiency. Most critical political economy scholars write from a Marxist perspective that looks on the proliferation of special districts as another manifestation of the overall capitalist structure. In other words, it is the desire for capital development that drives the creation of special districts. Like the institutional reform scholars, critical political economists look on special districts as phantom governments designed to hide the true source of power within a community. The desired reform of the critical political economy theorists would be a stronger government that would address social inequalities in a direct manner. Those in the public choice school look on the large number of special districts from a decidedly more positive perspective. The thousands of special districts—many overlapping and duplicating one another—allow citizens to enjoy an enormous level of choice when it comes to the desired amount and type of public services. The prevailing theoretical perspective of the public choice school is that the liberty of the individual is the paramount concern of the community. Accordingly, local government should become a political marketplace in which citizens act as consumers of government services and select from a variety of special districts that provide particular services. The most influential work in the public choice school is Charles Tiebout’s groundbreaking 1956 article “A Pure Theory of Local Expenditures.” Tiebout argued that, given the proper conditions, society
would benefit from a large number of local governments competing with one another for residents. In his “pure theory,” Tiebout assumed that consumers had perfect information regarding the revenues and expenditures of each local government. In addition, he assumed that each community had some idea regarding its optimum size. And, finally, he assumed that each community would seek to attain its optimum size. To the extent that these conditions are present in a system of local government, it is possible, so the argument goes, that a society could benefit from a marketplace of many local governments. The main tenet of the metropolitan ecology approach is that a system of government is the result of a community’s political culture, the institutional and legal framework within which governments are created. Each community’s political culture is somewhat unique and has an inherent dignity and identity. If one starts with Alexis de Tocqueville’s premise that the tendency toward a centralized administration— as embodied in both the institutional reform and critical political economy positions—is irresistible in a democracy, short of a countervailing attachment to local communities, the only possible alternative to centralized soft despotism is some variant of the metropolitan ecology approach. In short, one must recognize the clear political fact that special districts will be created where the rules favor their creation. Despite the scholarly controversies over special districts, these narrow-purpose governments are embedded within the American system of federalism. Eliminating special districts would require a sustained political movement that would rival the great reform movements in American history. It is unlikely that such a movement will take hold in the near future. Special districts offer local policy makers— and perhaps groups of private citizens—financial flexibility, political cover, and the appearance of decentralized decision making. Special districts serve a pressing need in today’s political environment. Further Reading Axelrod, Donald. Shadow Government: The Hidden World of Public Authorities—and How They Control Over $1 Trillion. New York: John Wiley & Sons, 1992; Bollens, John C. Special District Government in the United States. Berkeley: University of California Press, 1957; Bollens, Scott A. “Examining the Link
between State Policy and the Creation of Local Special Districts.” State and Local Government Review 18 (1986): 117–124; Blair, George S. Government at the Grass Roots. Pacific Palisades, Calif.: Palisades, 1981; Burns, Nancy. The Formation of Local Governments. New York: Oxford University Press, 1994; Danielson, Michael N. Metropolitan Politics. Boston: Little Brown, 1965; Foster, Kathryn A. “Specialization in Government: The Uneven Use of Special Districts in Metropolitan Areas.” Urban Affairs Review 31 (1996): 283–313; Hawkins, Robert B., Jr. Self Government by District: Myth and Reality. Stanford, Calif.: Hoover Institution Press, 1976: MacManus, Susan A. “Special District Governments: A Note on Their Use as Property Tax Relief Mechanisms in the 1970s.” Journal of Politics 40 (1981): 1207–1214; Stephens, G. Ross, and Nelson Wilkstrom. “Trends in Special Districts.” State and Local Government Review 30 (1998): 129–138; Tiebout, Charles M. “A Pure Theory of Local Expenditures.” Journal of Political Economy 64 (1956): 416–424; Tocqueville, Alexis de. Democracy in America. Translated, edited, and introduced by Harvey C. Mansfield and Delba Winthrop. Chicago: University of Chicago Press, 2000. —Brian P. Janiskee
state courts When discussing policy making, the courts rarely come up, unless it is the U.S. Supreme Court. Much confusion abounds when it comes to discussions of state courts, what they do, their structures, and their role in the policy process. The United States has a two-pronged judiciary, the national and the state, which reinforces the principle of federalism at the heart of the U.S. Constitution and the Tenth Amendment. This essay will discuss the organization and jurisdiction of state courts and the different selection and retention methods in the states. There are no two state judicial structures that are identical, but there are some similarities that the states share. First, like the national scheme, there is a hierarchy in the state judicial structure. There is the trial level, which has original jurisdiction in civil and criminal matters. The trial level also offers jury trials to those who do not choose to enter plea agreements. If there is a decision handed down by the trial court that one of the sides does not agree with, the decision
may be appealed. In 39 of the 50 states, there is an intermediate appellate court that a case from the trial court is appealed to. In the 11 remaining states, an appealed case goes directly to the state court of last resort, which is similar to the U.S. Supreme Court. In all cases of capital punishment, there is a mandatory appeal that goes directly to the court of last resort, regardless of a state’s intermediate appellate court. (In two states, Texas and Oklahoma, there are two distinct courts of last resort, one for criminal matters and one for civil matters.) There are three types of jurisdiction: geographical, subject matter, and hierarchical. Geographical jurisdiction simply states that a court has jurisdiction over a particular case if that case is within that court’s geographic area. For instance, a court of last resort in California has geographical jurisdiction over the entire state, whereas a trial court in California has geographic jurisdiction only in a particular city or county. Subject matter jurisdiction divides courts between civil law and criminal law. Civil law covers matters in which a dispute exists between private individuals, and criminal matters are those in which a law has been broken and the government is a party in the case. Hierarchical jurisdiction ranks the courts according to their authority. The highest court in the state’s hierarchy is the state court of last resort, followed by the intermediate appellate court, followed by the trial courts. Of course, the highest court in the United States is the Supreme Court. There are five primary ways vacancies on a state court are filled. The first is through direct partisan election, in which the voters elect judges to office for a fixed term. The second is the nonpartisan election; this method of selection is the same as the first except the candidates are not allowed to disclose their party affiliation. Third, the merit selection, or the Missouri plan, has become the most popular form of judicial selection and retention. The first stage of merit selection occurs when the governor or judicial nominating commission appoints a judge for a fixed term, usually four years. After the initial term is over, the judge faces a retention election. A retention election is when the voters choose, by voting for or against the candidate, if they want that judge to continue serving. In a retention election, the judge does not face an opponent. Instead, the voters choose to either retain or expel the current judge. The last two methods of
judicial selection are gubernatorial and legislative appointment. That is, either the governor or the state legislature will appoint a judge to the bench for a fixed term or in some cases until the age of 70. Because there are generally three levels to the state judiciary, each level can have its own method of selection. Some states select judges to the intermediate appeals court by a different method than their state court of last resort. There is an interesting history of why different states have different methods of selection and retention, which is discussed below. The method by which judges take their offices is a contentious issue at the state and national level. There have been a number of debates, especially recently, about the role the American Bar Association (ABA) plays in Supreme Court appointments and in the way the president selects appointments. But at the national level, there is no move to revise the system in any dramatic way, such as allowing the population to vote for the newest nominee. At the state level, however, reform efforts are more dramatic. In 1878, the American Bar Association began working for a fair and just judiciary by seeking to reform the method of judicial selection. Before American independence, the king of England chose the judges in the colonies. As a move away from monarchy, the state legislatures began appointing judges once independence was won. But by the 1830s, Jacksonian Democracy took hold, and states began to elect their judges through popular election. Election was the popular form of judicial selection, particularly for judges in the court of last resort, until the end of the Civil War, when dissatisfaction set in among the constituency who felt that judges were making their decisions along partisan lines. The late 1800s saw a shift toward nonpartisan judicial elections and systems that were precursors to the current merit plan. In 1906, the ABA, in reaction to legal scholar Roscoe Pound’s speech at the annual meeting of the ABA, began looking for a better solution than nonpartisan elections, and momentum gained for the merit plan. The merit plan began taking form in 1913 with the Kales plan, named after Albert M. Kales, who was one of the founding members of the American Judicature Society, which involved a nominating commission and noncompetitive retention elections. Although California was the first state to use retention elections, it was not until Missouri adopted a merit
plan in 1940 that the selection method gained popularity. The merit plan continues to be the centerpiece for judicial reform movements. Bar associations began to spring up wherever there was corruption or the perception of an unfit judiciary. This was particularly true in New York, whose infamous Tammany Hall political machine would commonly place party loyalists in judge seats. In fact, the involvement of the St. Louis Bar Association, the Missouri State Bar Association, and the Lawyer’s Association of Kansas City that led to Missouri adopting the merit plan was in response to what was felt was a corrupt judicial selection process. This story is replicated across the country and recounted in many case studies. State and local bar associations play a large role in reforming the method of judicial selection. The importance of the varying state legal structures is seen in how judges make decisions. What we know is that the institutions within which an actor operates play a large role in determining what gets done. Thus, a judge who is elected will hand down different decisions or act differently than a judge who is appointed for life tenure. Furthermore, the selection and retention method of a state has political implications for the legislature and the governor, in that appointed states often seem more partisan in disputes over judgeships and thus judges are more ideologically motivated than those judges in an elected system. And while judges in an elected system may be more responsive to voter preferences, these judges may not be qualified to make difficult legal decisions. State courts are an often ignored component of the American political system, yet ignoring the state courts is to ignore a vital component of the system. Collectively, the state courts decide far more cases than courts in the federal system and thus potentially have a greater impact on policy and the day-to-day operation of individuals’ lives. Further Reading Gray, Virginia, and Russell L. Hanson, eds. Politics in the American States. 8th ed. Washington, D.C.: Congressional Quarterly Press, 2004; Langer, Laura. Judicial Review in State Supreme Courts: A Comparative Study. Albany: State University of New York Press, 2002. —Kyle Scott
state courts of appeals State courts of appeals represent the second of three tiers within most state judicial systems. In total, 39 states use an intermediate appellate court to help resolve appeals originating from trial level courts. In states without intermediate appellate courts, appeals fall entirely on state supreme courts. Through their existence, state appellate courts serve to relieve the burden that state supreme courts once confronted as they sought to settle challenges to lower court decisions. Looking across the states, state appellate courts have become an important institution in an era when state courts have had difficulty responding to rising quantities of civil and criminal appeals. Originating with judicial reforms initiated in the 1960s, state courts of appeals represent attempts by state reformers to alter the professionalization of state courts. Reformers argued that state courts were overloaded and slow, additionally criticizing the level of efficiency in confronting legal challenges. At the same time that reformers of state judiciaries directed their attention toward state methods of judicial selection, they also sought reforms throughout the states that restructured the process of appeals. Today, state courts are highly professionalized institutions collectively handling the demands of more than 100 million cases a year. Federal courts, in comparison, process just over 1 million cases per year, suggesting that state courts are much more integral to the lives of most Americans. Changes adopted in the 1960s were an effort to increase the level of professionalization of state courts, both making state courts more efficient and increasing their ability to handle a greater capacity of cases. Intermediate appellate courts provided a new structure within state judicial systems for accomplishing both objectives. The impact has been greater authority for many state judges, from the trial level to state supreme courts, and greater ability to handle legal demands as they arise. Another impact has been the reorganization of state court systems. Prior to reforms, many state court systems were inconsistently structured with overlapping jurisdictions and no uniform order of appeal. Problems originating from nonconsolidated courts included trial courts with overlapping boundaries, varying rules of procedure by individual courts, and limited jurisdiction over lower courts. With confusing litigation procedures,
both inefficiency and slowness developed, leading to efforts by reformers to create additional procedures or structures to alleviate these problems. Intermediate appellate courts affected the judicial landscape by providing increased direct supervision over trial courts with both limited and general jurisdiction. State appellate courts imposed judicial hierarchy and provided judicial structure so appeals could proceed in a logical order. Additionally, as state appellate courts developed in many states, they limited the autonomy of lower court judges and helped provide stricter boundaries for trial courts. Through streamlining the litigation process and providing clear paths for appeal from trial courts to state supreme courts, state courts have become vastly more efficient. Related to case quantity, the judicial framework in many states faced crises of management. State judicial systems were overburdened by large quantities of lawsuits and unable to process appeals in a timely or efficient manner. Prior to reforms, state supreme courts represented the last and only appellate courts in most states. By creating a new tier between lower trial courts and state courts of last resort, a lower form of appellate court was created, allowing state supreme courts to implement discretionary dockets. Thereafter, state supreme courts accepted only cases that deserved consideration at the highest level of review. Additionally, state courts of appeal allowed direct and more immediate oversight of lower trial courts, ensuring more efficiency and stricter standards. Effectively, this intermediate review has substantially relieved caseload pressures, allowing state supreme courts to concentrate on controversial areas of law, while lower appellate courts have assumed the responsibility for most appeals. In 1962, only 14 states operated lower appellate courts. By 1980, reflecting the reforms of the 1960s and 1970s, 27 states had established lower appellate courts. Today, two-tier appellate systems are the norm, operating in 39 states. Additionally, lower appellate courts now hear every form of policy common to state courts. From criminal appeals ranging from embezzlement to sexual assault to civil appeals including First Amendment claims and employee injury claims, lower appellate courts are responsible for hearing and often resolving a variety of judicial policies. Importantly, lower appellate courts typically decide cases using three-judge panels as an alternative
to sitting en banc, when the entire court is present, as state supreme courts typically do. Therefore, where lower appellate courts are composed of six or more judges, different cases can be resolved simultaneously while placing fewer burdens on the entire court. While the federal courts are largely uniform, with structural similarities within each level of the judiciary, state courts vary substantially. Whether evaluating state supreme courts, intermediate appellate courts, or trial courts, the states offer a variety of arrangements. Nowhere is this more evident, or more controversial, than in state methods of judicial selection. While federal judges are selected by the president of the United States with the advice and consent of the Senate, the states offer five primary forms of judicial selection. Broadly, these forms of judicial selection divide between forms of election and appointment. Elective forms of selection include both partisan and nonpartisan elections. Partisan elections are similar to most other elections, with candidates using party identification. Generally, candidates are designated as a Republican, a Democrat, or a third-party candidate and benefit from such identification during elections. While popular during the 19th and early 20th centuries, purely partisan judicial elections are used today by only seven states. Nonpartisan elections differ by not allowing party identification. Candidates for office are forbidden from mentioning their partisan affiliation and must run for office by focusing on issues rather than partisan platforms. Preferences for nonpartisan elections emerged as opponents of political party activity within judicial selection sought to remove corruption and partisan influence from judicial service. Currently, nonpartisan elections are the more popular form of judicial election, with 12 states using elective methods that exclude partisan activity. Appointive methods of selection include three primary types. Traditionally, the most popular appointive method of selection was executive appointment. In a process very similar to selection within the federal judiciary, governors are responsible for selecting candidates, often with the required consent of the upper house of the state legislature. Like the president, governors select nominees based on several criteria, including partisanship, policy preferences, and familiarity. Today,
just three states use executive forms of appointment, and these states are entirely located in the Northeast. Another traditional form of selection is legislative appointment. Now limited to Virginia and South Carolina, legislative appointment involves selection of judicial candidates by the state legislature. Candidates are generally former or current state legislators, creating a close relationship between the state legislature and the judiciary. Merit selection represents the final form of state judicial selection. Currently the most popular method and used in 17 states, merit selection seeks to balance independence and accountability. Judges in this method are typically selected by a judicial nominating commission that delivers three nominees to the state executive. The state executive then selects one candidate, who serves an initial term of one or two years. Following this initial term, judges run unopposed in a retention election. If retained by a majority of the electorate, judges then serve an extended term. Advocated since the 1930s, merit selection has been adopted by an increasing number of states. In total, each form of judicial selection represents an attempt by state actors, including political parties and interest groups, to create some balance between independence and accountability for judges while in office. Other characteristics of state courts of appeals include size of office, the number of circuits or divisions, jurisdiction, term length, selection of chief judges, and mandatory retirement. As noted above, court size remains important for state courts because office size relates directly to the number of appeals processed each year. Across the states, the sizes of state courts of appeals range from three judges in Alaska and Idaho to 105 judges in California. In the 39 states with lower appellate courts, the demand on appellate courts varies: states such as Alaska and California differ greatly in population and in the number of appeals filed. Related to office size is the quantity of circuits or divisions that each intermediate appellate court must oversee. In many states, a single appellate circuit covers the entire state; however, 20 states have multiple divisions. Several courts of appeals have five or six circuits, and New Jersey operates 15 separate circuits. The purpose of multiple circuits, much as with the U.S. courts of appeals, is to provide
specific channels of appeals based on territory or geography. Territorial differences may also affect the operation of lower appellate courts, as each court may retain a unique approach to judicial policy. Larger states such as California and Texas may contain substantial regional differences that emerge within the courts based on judge or court preferences. Where circuits are organized geographically, litigants have designated places to file their appeals. In addition to office size and the division of labor, state courts of appeals also divide responsibilities based on policy jurisdiction. Four states have appellate courts that focus entirely on criminal appeals. Of these states, both Texas and Oklahoma have created separate criminal courts of last resort. In Oklahoma, criminal appeals move directly from trial-level courts to the court of criminal appeals. Additionally, Alaska's court of appeals hears only criminal appeals, while civil appeals move directly to the Alaska Supreme Court. The courts of appeals that are entirely civil in nature are those of Alabama and Oklahoma. Tenure of service also varies dramatically throughout the states. The most common term length for lower appellate court judges is six years, yet the minimum term is four years and the maximum term is life tenure in Massachusetts. Additionally, in several states, primarily those with merit selection, judges must first serve a shorter initial term. Overall, term lengths vary substantially, with some states favoring shorter terms and others longer terms. Like methods of selection, length of term has a substantial effect on the approach that judges take toward service. Where terms are shorter, judges may be more attentive to political pressures related to reselection, while longer tenures afford greater freedom from external pressures. Like the federal courts, state courts of appeals designate individual judges to lead their respective courts. Often designated the "chief judge," these individuals are responsible for managing the court, including how cases are heard, which judges hear each case, and how opinions are structured. States use several approaches to select chief judges. The most common method for selecting chief judges is peer voting, used in 21 states. Chief judges in these states are selected by a vote of the state court of appeals. In many states, the eventual
judge selected is the most senior judge on the court. Chief judges are also chosen by standard forms of selection, including executive appointment, legislative appointment, merit selection, and elections. States also use seniority and appointments by the prior chief judge as sources of selection. Once selected, a chief judge's tenure in that role ranges from one year to the remainder of his or her time on the court, with elected courts generally allowing shorter terms as chief judge than appointed courts. Terms for chief judges, like those for associate judges, matter greatly for the approach they take toward office. When serving longer terms, chief judges are granted the authority to shape policy and the direction of the court with few threats of reprisal by those external to the court. In relation to retirement, judges on state courts of appeals leave office both voluntarily and involuntarily. Almost 60 percent of lower appellate courts have mandatory retirement rules, whereby judges may serve only until a specific age. In the 23 states with age restrictions, 70 years of age is the most common mandatory age of retirement, with an additional three states allowing service until 75 years of age. Acting as the center tier of 39 state judicial systems, state courts of appeals are granted the important responsibility of resolving many legal conflicts. This important task allows intermediate appellate courts to place their imprint on the legal rationale of cases that follow. When areas of law are unclear and when state supreme courts or the U.S. Supreme Court have yet to become involved in an area of policy, substantive doctrine is shaped by the actions and opinions of intermediate appellate courts. As important is the impact of lower appellate courts on state supreme courts. In the 11 states without intermediate appellate courts, state supreme courts serve as the first and only outlet for appeal. In these states, state high courts are given the task of resolving all legal disputes as they emerge on appeal. Without a discretionary docket, state supreme courts are obligated to evaluate a greater percentage of appeals than in states with discretionary dockets, potentially overburdening a state supreme court. On the other hand, in states with lower appellate courts, state supreme courts are granted the ability to determine which appeals are most important. This factor more than any other means that lower appellate courts are valued for their ability to
make the appellate system more efficient and reduce the burdens previously imposed on state supreme courts. As mentioned above, following reforms and reorganization, state courts of appeals provided a mechanism for addressing a growing quantity of legal appeals while reducing pressures on other tiers of the judiciary. Additionally, state courts of appeals possess varying structures, ranging from method of selection to the quantity of judges present within an appeal. Related to these factors, state laws and constitutional provisions have created systems of the judiciary that vary throughout the states, allowing appellate courts and judges to become more or less adept at handling legal challenges as they emerge. Further Reading Hall, Melinda Gann. “State Judicial Politics: Rules, Structures, and the Political Game.” In American State and Local Politics: Directions for the 21st Century, edited by Ronald E. Weber and Paul Brace. New York: Chatham House Publishers, 1999; Meador, Daniel J., and Frederick G. Kempin. American Courts. St. Paul, Minn.: West Publishing, 2000; Segal, Jeffrey A., Harold J. Spaeth, and Sara C. Benesh. The Supreme Court in the American Legal System. New York: Cambridge University Press, 2005; Sheldon, Charles H., and Linda S. Maule. Choosing Justice: The Recruitment of State and Federal Judges. Pullman: Washington State University Press, 1997; Tarr, G. Alan. Understanding State Constitutions. Princeton, N.J.: Princeton University Press, 1998. —Brent D. Boyea
state government
In the United States, the bulk of responsibility for governing and executing policies originally lay with the various states. States, being outgrowths of the English colonial governments, were seen as the entities to which citizens and elected officials turned when conducting their political affairs. This is not to say that the national government played no role in governing the newly formed nation, but initially, especially under the Articles of Confederation, the states were dominant in terms of seeing to it that citizens' needs were met. As it became clear that the confederacy that was created under the Articles of
Confederation was in jeopardy of breaking up because of the disproportionate strength given to the state governments, it was decided that a new form of national government was needed. This new form of government, organized under the U.S. Constitution, granted the central government considerably more power as the confederal form of government was replaced with the federal system we are familiar with today. Immediately following the adoption of the U.S. Constitution and in concert with various broad interpretations of the new powers of the federal government, the state governments began to slowly cede control over the formulation and design of public policy in the new nation. With the adoption of the Constitution, however, the states did not sit back and obediently accept the actions and policies of the federal government. Competing philosophies concerning the proper roles of the state and federal governments led to a number of clashes during the early 1800s, most notably regarding the federal government’s stance on the Alien and Sedition Acts, various tariffs, and a series of decisions surrounding the issue of slavery. The philosophical differences between those who favored stronger state governments and those who favored a stronger federal government were loosely tied to partisan affiliations and were largely resolved with the Civil War and the Union victory in that conflict. Following the Civil War, the federal government again exerted its influence over the states, particularly in the South, where the short-lived experience with Reconstruction served to remind state governments in the South that they were subordinate to the federal government in Washington, D.C. The late 1800s saw the maturation of the Industrial Revolution in the United States. A new economy based on new technology and rapid change began to replace the more agrarian and rural society with which Americans had been familiar. Because of the pluralistic nature of American government, these new industries and corporations were able to significantly influence governmental policy at both the national and state levels. Citizens in various regions of the nation, particularly in the West, in the Midwest, and on the Plains, began to react to the influence wielded by these new players by organizing locally, with the ultimate goal of influencing state government so that their interests would be heard. The populist and
Progressive Era movements were the outgrowths of these local organizing activities, and the citizens of these regions ultimately had notable successes in regulating the new economy. Specifically, the newly formed groups were able to infiltrate local and state governments, subsequently regulating the railroads and other large industries. Furthermore, these reform movements led to the adoption of direct democracy at the state level, as exemplified by the adoption of the initiative, referendum, and recall in a number of states. These modes of direct democracy gave citizens much more control over their state governments than they had had in the past. While the federal government remained relatively strong during this period, and while the move toward direct democracy was not adopted at the national level, state governments were central to producing and maintaining the atmosphere of reform that slowly percolated up to the federal government in the early 1900s. With the advent of two major international conflicts, World Wars I and II, the United States again experienced great changes socially and politically. In order to succeed in these two conflicts, it was necessary for the federal government to further centralize its powers so that a truly national war effort could be made. The national income tax was imposed as a way for the U.S. government to support its expansion and growth; very few states imposed similar taxes, and the states’ powers in relation to the federal government continued to slip as a result. Along with the two world wars, the United States was also faced with the Great Depression. The Great Depression furthered the need for the federal government to expand its scope so that the United States would be able to pull itself out of the disastrous economic situation of the late 1920s and early 1930s. State governments were virtually bankrupt, and it was up to the federal government to provide relief to the people of the states. The Federal Emergency Relief Act, through which the federal government provided millions of dollars to the cash-strapped states, was implemented. Also, federal programs such as the Agricultural Adjustment Act and the Civilian Conservation Corps were set up to help the American people by providing economic opportunities for citizens. Thus, state governments were forced to accept supporting roles in the recovery process, and the Great Depression effectively
served to secure into the future the federal government's dominance over the states as a multitude of federal programs were put in place. The power of state governments in relation to the federal government reached low tide in the 1960s as the federal government put forth policies that obligated states to adhere to a range of federal laws. U.S. Supreme Court rulings such as Brown v. Board of Education, which was handed down in 1954 and enforced throughout the 1960s and 1970s, forced state and local governments to reform and integrate their public schools so as to give children of all races equal opportunities in school. Additionally, Congress passed the Civil Rights Act of 1964 and the Voting Rights Act of 1965, both of which were designed to ensure fair treatment of racial minorities by state governments. An earlier U.S. Supreme Court decision in 1962, Baker v. Carr, compelled states to reapportion their state and national legislative districts to guarantee equal representation of citizens in the nation's legislatures, and state governments worked throughout the decade to conform to the Supreme Court's decision. While most of these policies were specifically aimed at reducing the levels of segregation and discrimination in American society, these federal government mandates, taken together, served to directly influence the activities of state and local governments as the federal government oversaw the execution of these policies at the state level. In fact, state governments to this day are expected to comport with the federal government's guidelines put forth in all of the aforementioned policies. Since the late 1960s and early 1970s, there has been a reaction to the seemingly relentless expansion and growing authority of the federal government. Generally associated with the Republican Party, the movement to reduce the influence of the federal government over state government affairs was aided by the election of Ronald Reagan to the presidency and the relative successes of the Republican Party at the national level throughout the 1980s. This decade saw the federal government consolidate a large number of categorical grants into a smaller number of block grants. This was seen as a victory for state governments, as block grants give states much more discretion over the ways in which they spend and use the money they receive from the federal government.
Also in the 1980s, there was a push to give control over such large programs as Medicaid and welfare to the states, though this push stalled and ultimately failed. Republican candidates for office in the 1990s consistently advocated states' rights in their campaigns as a way to appeal to those voters who viewed the federal government as bloated and overly intrusive in state affairs. With the adoption of such policies as welfare reform in 1996, passed by a Republican Congress and signed by Democratic president Bill Clinton, it appeared that the nation was ready to return substantial governmental responsibilities to the states. Termed devolution, or new federalism, the movement to give more power back to the states seemed an inevitability in the late 1990s. Since then, however, some scholars have cast doubt on the federal government's actual willingness to return control over policies to the state governments, as it seems that welfare reform in 1996 was the only substantial policy in which control was devolved. This criticism has become more pronounced since the terrorist attacks of September 11, 2001, because of the subsequent growth of the federal government. Nonetheless, policy scholars and political scientists have continued to view devolution as a real phenomenon that has altered the balance of power, once again, between the federal and state governments. The brief, oversimplified description of historical trends in state government given above provides the context for discussing states at the beginning of the 21st century. The philosophical rift between those who favor states' rights and those who view the federal government as superior continues to frame politics and government in the United States. Undoubtedly, state governments play a much more prominent role in the everyday lives of ordinary citizens than the federal government, though their impact is not always acknowledged by scholars and the news media. The overall size of state governments and their associated budgets is considerably larger than that of the federal government, and the frequency with which citizens come into contact with their state governments far surpasses the frequency with which citizens come into contact with the federal government. Anyone who has obtained a driver's license, who has been pulled over for speeding, who has had to obtain state certifications, or who has personally
met a state representative can attest to this fact. Part of the reason for the relatively large size of state governments has been the move toward professionalization in recent decades. Professionalization refers to the movement by state governments to modernize their political affairs and conduct them in a more efficient and standardized fashion. Theoretically, a more professionalized government will produce and execute policies in a manner that is fair and equitable, as government officials are given more resources with which to work. Also, government officials are expected to be more accountable in a professionalized government, as citizens are provided with the tools to oversee their government. Features of government that signify professionalization include, among others, long terms of office, high pay for those in government, increased funding for staffing and research, clear ladders of hierarchy, clear leadership, open meeting laws, high levels of technology usage, and the efficient provision of services to citizens. The increased professionalization of state governments has generally been associated with the increasing number of responsibilities that have been thrust upon the states; that is, in order to meet demands, the states have had to upgrade their operations. Partly for this reason, political scientists as well as other social scientists have begun to view state governments as not only interesting but also very important in terms of policies and their impacts on the people of the states. Thus, the interaction of devolution and increased state government capacity has enhanced the image and standing of state governments. As a result, particularly in the field of political science, some researchers have begun to refocus their efforts to study government and politics at the state rather than the national level. Devolution and the centrality of state government to the lives of citizens are not the only factors that have drawn researchers to state government, however. Political scientists have also begun to rediscover the benefits of studying state governmental institutions and state politics with a comparative approach. The 50 states provide researchers with 50 similar yet distinct political environments with which to make comparisons and draw conclusions, giving researchers statistical and theoretical leverage when asking important questions about
government and politics. For instance, a researcher interested in studying legislative behavior has the option of focusing on the U.S. Senate and the U.S. House of Representatives and drawing conclusions based on the actions of 535 members from the two chambers. Alternatively, that same researcher could research state legislatures, which altogether have more than 7,000 members, and could also have 99 different legislatures from which to base assumptions and draw conclusions (49 of the 50 states have a bicameral state legislature; Nebraska is the lone state with a unicameral state legislature). Statistically speaking, the researcher studying state legislatures has much more power and theoretical validity than the researcher studying the U.S. Congress because of the ability to work with larger sample sizes. This same logic applies also to researchers interested in studying executives, courts, and bureaucracies. Of course, this is not to say that researching government and institutions at the national level does not provide us with an understanding of government in general; obviously, the national government is very important, and researching it helps scholars better understand government and politics. However, studying state governments helps us understand government and politics much more fully due to our ability to compare across states and institutions. The American states occupy a unique position in the American political system. Originally designed to serve as the loci of government activity in the United States, states have experienced periods and eras in which they have seen their powers expand and contract. Though the federal government today is dominant, the relative powers of state governments are on the rise. In response to their new responsibilities, the states have modernized and professionalized their governments and the delivery of services to citizens to meet new demands. Surely, if historical trends are any indication, the powers of the state governments will continue to fluctuate into the future; regardless of the directions in which state governmental powers move, the states will undoubtedly maintain their distinct position in the American federal system. For this reason, the states will provide an ideal location for political scientists to research government and politics in the years to come.
Further Reading Brace, Paul, and Aubrey Jewett. “The State of State Politics Research.” Political Research Quarterly 48 (1995): 643–681; Jewell, Malcolm E. “The Neglected World of State Politics.” Journal of Politics 44 (1982): 638–657; Mooney, Christopher Z. “Why Do They Tax Dogs in West Virginia? Teaching Political Science through Comparative State Politics.” PS: Political Science and Politics 31 (1998): 199–203; Smith, Kevin B., ed. State and Local Government 2005–2006. Washington, D.C.: Congressional Quarterly Press, 2006; Van Horn, Carl E. The State of the States. 4th ed. Washington, D.C.: Congressional Quarterly Press, 2006; Weber, Ronald E., and Paul Brace, eds. American State and Local Politics: Directions for the 21st Century. New York: Chatham House, 1999. —Mitchel N. Herian
state judges
By some estimates, there are more than 30,000 judges in the United States. The vast majority of those judges serve on state courts of some type. In the United States, almost all judges are lawyers first before becoming a judge. The 50 states (plus the District of Columbia) have each established their own system of state courts, with each state choosing its own court organizational structure and judicial selection system. This essay will explore some of the various methods for categorizing state judges.
and courts of limited jurisdiction. The term jurisdiction refers to the power of a court to hear a case. Around 90 percent of state trial courts are courts of limited jurisdiction, which means that the judge hears a very narrow range of cases. Some examples of state courts of limited jurisdiction include traffic courts, small claims courts, juvenile courts, family courts, housing courts, probate courts, drug courts, courts for minor crimes, and other courts with narrowly defined jurisdictions. Depending on the court and the specific state, some judges serve on these courts of narrow jurisdiction for their entire careers, but many judges rotate among these courts of limited jurisdiction. There are more than 9,200 judges on state courts of general jurisdiction that generally hear major criminal and civil cases. Courts of general jurisdiction can hear any case that is not specifically assigned to a court of limited jurisdiction. There are more than 14 times as many judges on state courts of general jurisdiction as there are federal trial judges. Judges who serve on appellate courts hear appeals filed by those who have lost cases in a trial court. Most states have a two-tiered system of state appellate courts, with the court of last resort in the state generally referred to as the state supreme court. Oklahoma and Texas have two state supreme courts, one for criminal cases and one for civil cases. The courts of last resort in Maryland and New York are called the courts of appeals, while the court called the “supreme court” in the state of New York is actually a trial court. Between the trial courts and the state courts of last resort are the intermediate appellate courts, although not all states have this middle layer of appellate courts. State appellate judges serve on panels to hear appeals. Judges on intermediate state appellate courts usually hear cases in panels of three judges, while generally state supreme courts (courts of last resort) have seven judges who all hear each case. Appeals are filed through written briefs, and generally all communication with an appellate court is done in writing. Appellate judges may hold oral arguments on their cases, but the purpose of these oral arguments is for the judges to clarify the assertions made in the written briefs submitted to the court. Appellate judges communicate their decisions through written opinions. One judge generally writes the opinion for the majority on the appellate court, while those judges
with minority views often file dissenting opinions. If an appellate judge agrees with the majority’s outcome in a case, but for different reasons, he or she can also generally file a concurring opinion clarifying their reasoning in the case. For the most part, appellate judges determine questions of law that become precedent for all trial judges hearing cases within the appellate court’s jurisdictional boundaries. For example, the majority decision of a state court of last resort on a question of state law is binding precedent for all trial and appellate judges within that state. Judges on state appellate courts spend a great deal of time researching the law and writing their opinions, with far less time actually spent on the bench hearing oral arguments in the cases before them. States differ a great deal in the selection system they use for state judges. There are generally five models of state judicial selection systems. Some states use several models of judicial selection, depending on the level of judge being selected. These models of judicial selection approach the main principles of judicial independence and judicial accountability differently. Models that favor judicial independence give the state judges long terms and attempt to insulate them from political pressures. In models that favor judicial accountability, the judges face the voters after serving relatively short terms. Thus, these judges are accountable to political majorities in their communities. The five general categories of state judicial selection systems are appointment by the governor, appointment by the state legislature, partisan election, nonpartisan election, and the so-called Missouri plan, including retention elections. Unlike their federal counterparts, who are appointed by the president and confirmed by the Senate for life terms, the American Judicature Society in 2004 estimated that 87 percent of the nation’s 1,243 state appellate judges must stand for some type of election. The first model for state judicial selection, appointment by the governor, most resembles the federal judicial selection system. In states such as Massachusetts and New Hampshire, the governor appoints all judges in the state system, with confirmation by the elected but obscure governor’s council. In these states, judges serve life terms until the mandatory retirement age, usually 70. In states such as South Carolina and Virginia, state judges are appointed by the state legislature. In neither of these
models do the judges face any type of election system. These models promote the principle of judicial independence. The second two models for state judicial selection involve the direct election of judges. In a partisan election system such as those used in Texas and Pennsylvania, candidates for judge run on a partisan ballot just like candidates for any other elected office in the state. These judges serve short terms, typically four years. The candidates for judge must campaign and raise campaign funds just like any other candidate, and the political parties play a key role in these contests. Reelection rates are generally high for these judges, but their reelection efforts can turn on the decisions they made previously in cases before them. In nonpartisan elections such as those held in Wisconsin and Minnesota, the party affiliations of the candidates for judge do not appear on the ballot. However, the candidates for judge who appear on the general election ballot are often chosen through partisan primary elections. In states such as Ohio, even though the party label for each judicial candidate does not appear on the general election ballot, the political parties widely advertise which judicial candidates are members of their parties. Thus, the differences between partisan and nonpartisan elections seem small. These models promote the principle of judicial accountability. The final model attempts to balance the often conflicting goals of judicial accountability and judicial independence. The so-called Missouri plan is a product of the Progressive Era and is the model of judicial selection most used by the states today. Under this model, the governor appoints a judicial selection commission, usually made up of members from each political party and from the state bar association. For each judicial opening, the commission typically submits a list of three names to the governor, and the governor must appoint one of these individuals to be the judge. The new judge then serves a set number of years on the bench, typically seven years. At the end of that fixed term of years, the judge then faces a retention election. The only question on the ballot for the voters is whether or not to retain that judge on the bench. If the voters choose not to retain a judge or if there is an opening for any other reason, the whole process begins anew with the judicial selection commission. Although the so-called merit selection
commissions can be used with any of the five models of state judicial selection, the key difference between the Missouri plan and the other models is the fact that under the Missouri plan judges must face a retention election. The Missouri plan therefore attempts to promote both judicial independence and judicial accountability. It is seen as a compromise between these two competing goals. In general, state judges spend most of their time on a variety of tasks and duties. These responsibilities include adjudication tasks such as presiding over trials, legal research, and legal writing. Judges also are involved in a variety of negotiation procedures as they try to settle cases before them without formal litigation. Judges also have administrative responsibilities, including hiring and supervision of staff in many states. Judges also must reach out to the other branches of government and to the community at large for a variety of reasons. In states that promote the value of judicial accountability, the judges must also spend part of their time raising campaign funds and other election-related tasks. State judges generally receive little special training for their often difficult and stressful jobs. Further Reading Baum, Lawrence. American Courts: Process and Policy. 5th ed. Boston: Houghton Mifflin, 2001; Brace, Paul R., and Melinda Gann Hall. “Is Judicial Federalism Essential to Democracy? State Courts in the Federal System.” In The Judicial Branch, edited by Kermit L Hall and Kevin T. McGuire, New York: Oxford University Press, 2005; Langer, Laura. Judicial Review in State Supreme Courts: A Comparative Study. Albany: State University of New York Press, 2002; Tarr, G. Alan. Judicial Process and Judicial Policymaking. 4th ed. Belmont, Calif.: Thomson Wadsworth, 2006. —Mark C. Miller
state representative
A state representative is an elected official whose responsibility is to help make public policy that will best meet the needs of the citizens of the district from which he or she is elected. The state representative helps to make public policy through the legislative process as a member of the state legislature.
The position of state representative is constitutionally mandated. The U.S. Constitution declares in Article IV, Section 4, that each state shall have a republican form of government. It refers to state legislators' duties twice. In Article I, Section 4, the Constitution states that the state legislature shall designate the "times and places and manner for holding elections for [federal] Senators and Representatives." In Article II, Section 1, it states that the state legislature shall direct the manner in which electors to the electoral college shall be selected. State constitutions direct the process for electing state representatives, including the length of term in office, term limits, if any, the length of the legislative session, and the process for determining compensation. All states except Nebraska organize their state legislature into two groups, an upper house, called the senate, and a lower house, called the house of delegates, house of representatives, general assembly, or assembly. State representatives, then, are given a title corresponding to the legislative house to which they are elected. Representatives elected to the upper house are referred to as senators, and representatives to the lower house are referred to as delegates, representatives, or assemblypersons, depending on their state. For the remainder of this article, members of the lower houses will simply be referred to as delegates, and the lower houses will be referred to as houses. State senators generally have longer terms than delegates. In 34 states, senators are elected for four-year terms, while delegates are elected for two-year terms. The remaining 16 states have identical lengths of terms for both houses, although the length of those terms varies: 11 states have two-year terms for all state representatives, and five states (including Nebraska) have four-year terms for all representatives. A total of 18 states have imposed term limits for their state representatives. Arizona, Colorado, Florida, Maine, Montana, Ohio, and South Dakota allow representatives in either house to serve eight consecutive years, while Louisiana, Utah, and Wyoming have a limit of 12 consecutive years. Idaho also has an eight-year consecutive limit, but within a 15-year period. Arkansas, California, Michigan, Missouri, Nevada, Oklahoma, and Oregon impose term limits ranging from six to 12 years, with a lifetime ban on serving in that office again.
The size of the legislative body also differs from state to state. Generally, the senate is one-third the size of the house. The average senate has 40 members, and the average house has 112 members. New Hampshire has the largest house, with 400 members, while Nebraska, the only state with a unicameral legislature, also has the fewest number of state representatives, 49. It should also be noted that Nebraska is the only state that elects state representatives in a nonpartisan race. State representatives are classified into three categories that reflect the state constitutional design of the legislature: professional, hybrid, and amateur, or citizen, legislators. These categories refer to the length of time that a legislature is in session each year, the salary paid to legislators, and the amount of staff each legislator has. Six states, Massachusetts, Michigan, New Jersey, Ohio, Pennsylvania, and Wisconsin, have annual legislative sessions that exceed 300 calendar days. On the other end of the spectrum are six states whose legislatures are biennial: Arkansas, Montana, Nevada, North Dakota, Oregon, and Texas. Staff load can vary widely among legislatures. New York employs nearly 4,000 people to staff its legislature, while Vermont employs fewer than 60 legislative staff. Professional legislatures typically have nine personnel per representative; hybrid legislatures have three, and amateur legislatures have on average one staff member per representative. The level of professionalization of a state legislature is critical to understanding the ability of the state representative to make public policy and to understanding the attraction that the position of state representative has for citizens contemplating public service. Longer legislative sessions provide more time for legislators to debate public policy. Therefore, a wider variety of policies may be considered, and more time can be spent on the details of the policy when a legislative body is full time. Staffing is also very important. Staff can help legislators by researching policies that exist in other states and researching the federal rules regarding policies that states are mandated to support, such as Medicaid and Temporary Assistance to Needy Families. Staff can also research the various facets of policy proposals so that the state representative can be more informed of decisions he or she is about to make.
The level of staff and the length of session can increase the capacity of the legislature to act as an equal branch of government to the state governor in terms of policy making. When the legislature and governor are both fully informed about policies and policy alternatives, each branch is better equipped to debate what outcomes are best for the state. Fulltime state representatives are also able to engage in greater oversight of state agencies to ensure that the bureaucracy is implementing policy in the manner prescribed by the legislature. Staffing levels are critical to a state representative in another way. In addition to making public policy, many state representatives engage in constituency services. These services include helping constituents in their interactions with state agencies and submitting requests for district area appropriations. Often referred to as “pork,” such appropriations help to construct or maintain schools, senior citizen centers, parks, bridges, and roads as well as a host of other projects that benefit the public. State representatives with more personnel can provide increased support to constituents who need help with state agency interactions. The staff can also meet with constituents and research the needs for appropriations. Compensation is a critical issue in the case of professional legislatures. State representatives who have careers outside the legislature are not able to spend as much time performing legislative duties as those whose sole employment is as a state representative. Compensation may also help to determine the incentive candidates have for seeking office. Research has found that full-time legislatures with moderate to high salaries may provide a greater incentive for middle-class citizens to seek office. The reason for this is that a person who commands a high salary may not be willing to sacrifice that income in order to become a full-time legislator at a lower salary. Legislatures that meet for shorter sessions may be more attractive to professionals who are self-employed or who can take extended leaves of absence for public service. Business professionals, attorneys, and some state teachers fall into these categories. The average salary paid to representatives in professional legislatures is $68,599 a year; in hybrid legislatures the average falls to $35,326 per year, and amateur legislatures on average pay their representatives $15,984 per year.
Because there is a class bias between the parties, Morris Fiorina, a political scientist, has argued that full-time legislatures become disproportionately Democratic. Even while the trend clearly is toward professionalizing legislatures, in 2005, 20 states had a majority of Republican state representatives, and another 12 states were evenly split between Democratic and Republican representatives. As in most sectors of the United States, white men have dominated the field of state representatives. This, too, is changing. Women now make up 22 percent of all state legislators. In Arizona, Colorado, Delaware, Nevada, Vermont, and Washington, women hold more than 33 percent of the state representative seats. African Americans hold 8 percent of all state representative seats, while Hispanics make up only 2 percent of state legislatures. Given that women make up 51 percent of the population, African Americans represent 12 percent of the population, and Hispanics make up nearly 13 percent of the population, it is obvious that women and minorities continue to be underrepresented. The nature of representation must be considered when discussing state representatives. A good deal of research and theory has been devoted to studying representation. The most obvious concern for equal representation comes in the form of geographic representation. This idea posits that the needs of people differ from area to area. Farmers have different needs than miners, for example. Each state forms specific districts for the purpose of electing representatives. Each district has approximately the same number of people and must have contiguous and natural boundaries, so that fairness exists in selecting a state representative and no group is favored or rendered unable to elect the representative of its choice. States have outlawed the practice of gerrymandering, which is constructing a district to disproportionately favor a political party or group of people. In addition to geographic representation, symbolic representation is also a concern. It was noted above that state representatives are primarily white males. If a state had true symbolic representation, the demographics of that state would be mirrored in the legislature. For example, given the national demographics noted above, to be symbolically representative the U.S. Congress would have the following composition: 51 percent would be women, 12 percent
would be African American, 13 percent would be Hispanic, 1 percent would be American Indian, and 4 percent would be Asian American. The assumption underlying the call for symbolic representation is that state representatives who are female or Hispanic will have different points of view than representatives who are male or white. Research on symbolic representation has shown that minorities do change the nature of legislative bodies. An increased presence of women state representatives results in additional introductions of bills addressing the needs of children, health care, and education. Little research exists on the effect of Hispanics in state legislatures. What is known about African-American state representatives is that when they are elected, it is typically from urban districts. African-American support for a candidate is generally taken as a signal that the candidate is liberal, and therefore conservatives fail to support the candidate. African Americans do not enjoy broad symbolic representation, but in districts where they are politically active, evidence suggests that their representatives, regardless of race, do respond to their demands. The style of a representative's policy decision making has also been given considerable thought in the literature on representation. It was noted at the beginning of this essay that a state representative is charged with making public policy that will best meet the needs of the district he or she represents. This process has been classified into two categories. The first, the delegate style of representation, relies on constant interaction between a representative and his or her constituency. Through this interaction, citizens inform their representative of their policy preferences. The state representative then introduces legislation desired by constituents and votes according to the demands of the majority of those people in his or her district. The second form of representation is known as trusteeship. In this mode, a state representative makes decisions based on what he or she considers best for the district and the state. The relationship that this type of representative enjoys with constituents is one of trust. Citizens do not play an active role in the formation of policy but provide feedback on their approval of the representative's actions at the ballot box. State representatives, then, are constitutionally mandated positions in which people are elected by
specific districts to make public policy on behalf of their constituents. The representatives are elected to the senate or house in the state legislature, where they meet in session to make policy and conduct constituent services. Most state representatives are white males who have professional backgrounds as attorneys or businessmen, although increasingly minorities and people from other professional and nonprofessional backgrounds are being elected into public service. See also state and local legislative process. Further Reading Dresang, Dennis L., and James J. Gosling. Politics and Policy in American States and Communities. 5th ed. New York: Pearson Longman, 2006; Fiorina, Morris P. “Divided Government in the American States: A Byproduct of Legislative Professionalism?” American Political Science Review 91 (1994): 148–55; Gray, Virginia, Russell L. Hanson, and Herbert Jacob. Politics in the American States: A Comparative Analysis. 7th ed. Washington, D.C.: Congressional Quarterly Press, 1999; Herring, Mary. “Legislative Responsiveness to Black Constituents in Three Deep South States.” Journal of Politics 52 (1990): 740–758; Hill, Kim Quaile, and Patricia A. Hurley. “Dyadic Representation Reappraised.” American Journal of Political Science 43, no. 1 (January 1999): 109–137; National Conference of State Legislatures, Full and Part Time Legislatures. Available online. URL: http://www.ncsl.org/programs/ press/2004/backgrounder_fullandpart.htm. Accessed July 22, 2006; National Conference of State Legislatures, Legislative Budget Procedures. Available online. URL: http://www.ncsl.org/programs/fiscal/lbptabls/ lbpc2t2.htm. Accessed July 22, 2006; Pitkin, Hanna F. The Concept of Representation. Berkeley: University of California Press, 1972; Ray, David. “The Sources of Voting Cues in Three State Legislatures.” Journal of Politics 44 (1982): 1074–1087; Thomas, Sue. “The Impact of Women on State Legislative Policies.” Journal of Politics 53 (1991): 958–975; United States Constitution. Available online. URL: http://www.law.cornell .edu/constitution/constitution.overview.html. Accessed July 22, 2006; U.S. Bureau of the Census, Population Factfinder. Available online. URL: http://factfinder. census.gov/. Accessed July 22, 2006; Weber, Ronald. “Presidential Address: The Quality of State Legislative
Representation: A Critical Assessment." Journal of Politics 61, no. 3 (August 1999): 609–627; Weisberg, Herbert, Eric Heberlig, and Lisa Campoli, eds. Classics in Congressional Politics. New York: Longman Publishing Group, 1999; Weissberg, Robert. "Collective v. Dyadic Representation." American Political Science Review 72, no. 2 (June 1978): 535–547. —Marybeth D. Beller
state senator
Americans are both blessed and cursed to be represented by three levels of government—local, state, and national—to say nothing of the numerous special districts and other governments that fall somewhere in between. Most Americans are familiar with at least the name of their representatives at the national level. Likewise, their local representatives may be familiar to them because they are physically closer to them and may have even had direct contact with them. State legislators, however, are often caught somewhere in between. They do not have the personal connection with their constituents that local representatives have, and they do not receive the volume of media coverage that their national counterparts enjoy. Despite their relative anonymity, state legislators are extremely powerful figures in American politics. Particularly in the face of devolution, a movement to return power to the states, state legislators will only increase in power in the future. In all states but one, legislators are elected to one of two houses, either the state house (sometimes called the assembly or legislature) or the state senate. (The exception to this rule is Nebraska, which has a unicameral rather than a bicameral legislature.) This essay will consider the people who serve in the senates, people who are generally called state senators. Unlike the U.S. Senate, where all senators serve under identical institutional structures, state senate structures vary across states. As a result, examining state senates allows us to better understand how institutional structure affects senator behavior. Consequently, the essay will focus on the ways senators are similar across contexts as well as the ways in which they vary across states. In the end, this should provide a better understanding of the motivations and actions of state senates than could be achieved by examining similarities alone.
Most people are familiar with the basic structure of the U.S. Senate. Exactly 100 senators represent all 50 states and serve for six-year terms. In contrast, state senate terms are no longer than four years, and in many states, senators serve for two-year terms. State senates are also much smaller than their national counterpart, ranging from 20 members in Alaska to 67 in Minnesota. Chamber size is always lower in the state senate than in the lower house of the legislature. Because of wide variations in population and chamber size, the number of constituents per senate district varies considerably from state to state, ranging from a low of about 13,000 in North Dakota to a high of almost 878,000 in California. While the goals and basic jobs of state senators look similar across states, these different institutional contexts provide very different constraints and incentives. In most legislative bodies, the three most important institutional structures for keeping order are legislative leadership, committees, and political parties. State senates include all three of these structures, although their particular functions and powers vary considerably from state to state. For instance, all state senates have leaders, but the number of senate leaders and their specific powers vary from state to state. Similar to the U.S. Senate, most state senates are headed by a single leader elected from the general population, not the senate membership. Generally, the lieutenant governor (similar to the vice president of the United States) occupies this position. While in most states the lieutenant governor holds little power, in some states, he or she can exercise tremendous power over the legislative process. In a sizeable minority of states, senate leaders are elected by the entire body. Similarly, while every state senate has committees, the number of committees, their specific powers, and their rules regarding their make-up vary from state to state. In some states, they are the key to the legislative process, while in other states, committees are comparatively less powerful. Finally, most studies of the U.S. Senate suggest that parties help form ready-made coalitions and structure voting patterns. Recent work suggests that in the 49 states where senators are elected with party labels (Nebraska has nonpartisan elections for state legislators), these party labels help organize government in much the same way they do in Washington.
Three of the most important ways that the legislative environment for state senators varies are in session length, salary, and staff. In some states, such as California, the senate stays in session throughout the year, whereas in other states, such as New Hampshire, the legislature meets for only a few days every year. In states with longer sessions, senators tend to make higher salaries. For instance, in California, where senators are in office year round, they receive about $99,000 a year in salary. New Hampshire senators, with their short sessions, receive only $100 a year. Senators and legislators from the lower house make identical salaries except in Virginia where senators make an additional $360. These trends are not only interesting by themselves but affect the occupational makeup of the senate. In legislatures with longer sessions and higher salaries, senators can afford to not hold other jobs— they are more likely to be professional politicians. This trend is particularly pronounced among women, who are less likely to hold outside careers than their male counterparts. Although there is considerable variation in the types of outside careers held by state senators, occupations that allow greater flexibility, such as lawyers, farmers, and professors are generally the best represented in the state senates. State senators also vary considerably in their staff resources. Some states, such as California, New York, and Florida, give their senators considerable staff resources to aid them with casework, reelection, and lawmaking. Other states, such as Vermont, New Hampshire, and New Mexico, offer no personal staff, instead spreading very few staff members across all senators and senate committees. Staff resources, salary, and session length generally vary together and are frequently combined to form a measure of legislative professionalism that can be used to summarize the institutional capacity of the legislature. Many scholars have noted the increasing professionalization of state senates (and state legislatures more generally). More and more states are adding more staff, increasing session length, and raising salaries to meet the increasing demands of serving in the state senates. Like many political reforms, the professionalization trend has both intended and unintended consequences. Professionalism increases the capacity of state senators and legislators, much as intended. Professionalism also
increases contact between senators and their constituents, leads to policies that are more representative of public opinion, increases legislative efficiency, and increases per capita government spending. Although most of the institutional features that affect state senators remain constant over years, term limits stand as a notable exception. Term limits were introduced in California, Colorado, and Oklahoma in 1990 to remove the ills of election from the senate and to reduce careerism in state politics. Since then, 21 states have passed initiatives to limit the terms of state senators and representatives; six of these have since been repealed, leaving 15 states that currently employ some type of term limit. The maximum number of terms a senator can serve in term limited states varies from eight on the low end to 12 on the high end. In 1996 (the first year senators were affected by term limits), four senators were termed out. The numbers picked up to 22 by 1998. By 2006, 76 senators were termed out and forced to either move onto other political bodies or retire from political careers altogether. There is little doubt that term limits have changed the incentives for running for office as well as the behavior of senators who serve. The early evidence suggests that these incentives do little to alter the demographic makeup of the senate—they do not create an environment more (or less) conducive to electing women, minorities, younger people, or people from different occupations. Term limits do, however, shift representational patterns in noticeable ways. Senators who serve in term limited senates can more easily ignore constituent opinion because they do not depend on constituents for reelection. As a result, policy coming out of these senates may not reflect the will of the people in the same way it would in a non– term limited senate. In addition, the reduced institutional memory of term limited senators increases the power of other political actors, including interest groups, governors, and the media. All U.S. senators are elected in the same manner. As most Americans know, U.S. Senators run in traditional first-past-the-post single-member district elections with staggered terms. In other words, candidates from each party run in a primary. If they win the primary, they then run for the general election. Voters have the choice to vote for one of the candidates running for office. While most state senate elections
work similarly, all the senators in West Virginia and most of the senators in Vermont are elected differently. These senators are elected in multimember districts where voters vote for more than one legislator at a time. Multimember districts are used less often than they once were, but they are still an important part of state senate electoral politics in some states. The evidence suggests that multimember districts increase the representation of women and decrease the representation of minorities. In addition, because of the different incentives for voting, multimember districts tend to produce more ideologically extreme senators. Multimember districts not only affect voters but also affect the ways senators do their jobs. They alter patterns of representation and lead to partnerships between senators who share district boundaries, even if they are of different parties. The final important consideration in state senate elections is the cost of campaigning. Campaign costs in state senate elections, as in most political campaigns, have skyrocketed in recent years. This has occurred as state senate elections have shifted from being relatively unprofessional, inexpensive, localized affairs to expensive races that are taking on increasing national importance. For instance, Joel Thompson and Gary Moncrief note that political action committee contributions to candidates for the North Carolina legislature jumped more than 400 percent in a 10-year period. The overall cost of campaigning has risen by a similar magnitude. These staggering increases in campaign costs produce different incentives to run for office and may lead to fewer nonprofessional politicians running for office. To combat this trend, many states have passed stricter campaign finance laws to reduce the cost of campaigning and attempt to return state senate campaigns to their less expensive pasts. Thus far we have discussed the institutions state senators operate in, but we have left the demographic makeup of state senators unaddressed. Although the American political system generally does a poor job of representing the demographic makeup of America, state governments are slightly more representative than their federal counterparts. Approximately 21 percent of state senators are women, as opposed to a slightly more representative 24 percent of legislators in the lower house. According to data from the Center for the Study of American Women and Politics,
these numbers have been increasing over time. Not only have women been elected in greater numbers in state senates than in the U.S. Senate, they have also been represented in greater numbers within the leadership. Indeed, women have served in leadership positions in state senates since the 1930s, much sooner than they achieved leadership roles within the U.S. Senate. These data about representation raise the question of whether gender makes a difference in the attitudes and behaviors of state senators or their constituents. The evidence generally reveals that it does. Female legislators propose different bills and exhibit different voting patterns than their male counterparts. Female legislators also perform more casework and place more emphasis on communicating with their constituents. Finally, there is some evidence that electing female state senators can increase the political efficacy of women in the state. The data on the representation of African Americans and other minority groups reveal similar patterns. Virtually no minority group is elected in numbers approaching parity with its numbers in the general population. Further, because the state senate represents the upper house of the state legislature, state senators are less likely to hail from minority groups than their counterparts in the lower chamber. Nonetheless, by 2002 there were more than 150 African American and 59 Hispanic state senators—far surpassing the numbers in the U.S. Congress. Despite these gains, black state legislators report a lower quality of legislative life than their white counterparts. Once again, there is evidence that minority representation matters for the way minorities view state government and for the ways legislators act in office. Black state senators propose and pass different policies than their white counterparts. Similarly, Latino senators act differently in office and are more likely to introduce bills of interest to the Latino community—even when controlling for other factors. State senators range considerably in age. While some are young and at the early stage of a long political career, others serve in the legislature toward the end of their professional lives. One factor that affects age is the minimum age one can run for the senate, which varies considerably from state to state. The minimum age ranges from 18 in some states to 30 in others. These minimum ages are often higher than
and never lower than the requirements to serve in the lower house of the legislature. In addition to age requirements, state senators are required to be residents of the district they plan to serve, although the length of that residency varies from state to state. In the end, state senators are politicians who share similar goals across contexts but whose actions are systematically affected by the interplay of institutions found in the state. While they may appear similar to U.S. senators at first glance, the accuracy of this statement varies by state. See also state and local legislative process. Further Reading Hamm, Keith E., and Gary F. Moncrief. “Legislative Politics in the States.” In Politics in the American States: A Comparative Analysis. 8th ed., edited by Virginia Gray and Russell Hanson, Washington, D.C.: Congressional Quarterly Press, 2004; Button, James, and David Hedge. “Legislative Life in the 1990s: A Comparison of Black and White State Legislators.” Legislative Studies Quarterly 21 (1996): 199–218; Jewell, Malcolm E. Representation in State Legislatures. Lexington: University Press of Kentucky, 1982; Rosenthal, Alan. The Decline of Representative Democracy. Washington, D.C.: Congressional Quarterly Press, 1998; Rosenthal, Alan. Heavy Lifting: The Job of the American Legislature. Washington, D.C.: Congressional Quarterly Press, 2004; Squire, Peverill, and Keith Hamm. 101 Chambers: Congress, State Legislatures, and the Future of Legislative Studies. Columbus: Ohio State University Press, 2005; Thompson, Joel A., and Gary F. Moncrief. Campaign Finance in State Legislative Elections. Washington, D.C.: Congressional Quarterly Press, 1998. —Christopher A. Cooper
taxation, state and local A tax is an involuntary fee assessed to persons or businesses as required for financial support of a government. Taxes have deep roots in American history and culture and are often at the center of contemporary politics. In a variety of forms, taxes provide the primary source of revenue for all levels of government. Even state and local governments, which rely heavily on intergovernmental transfers of funds for numer-
ous purposes (from highways to Medicaid), derive the majority of their revenues from taxes. It may be too much to say that society could not exist without taxes, but it seems clear that society as we know it would not exist without them. Governing an industrialized state in the modern world is an enormously costly endeavor, and the federal system places much of the burden on states and localities. Currently, state and local governments spend approximately $2 trillion on thousands of programs and services, and lawmakers face two separate challenges in their efforts to raise the necessary funds to pay these staggering costs. The most obvious is the fiscal challenge: how to raise sufficient revenue. Simply raising tax rates does not always produce the desired effect. While economists disagree sharply on the specifics, most agree that too high a tax burden can actually decrease revenues, especially at the state and local levels, since individuals and businesses may respond to high taxes by relocating to places with lower rates. States and communities frequently take advantage of this dynamic by offering tax incentives to attract industry. As a result, fiscal issues alone make setting tax policy a very difficult task for officials. Making it even more difficult is the political challenge. Though no one likes to pay taxes, Americans are unusually hostile to them. This cultural trait may date back to early American history. Not only was American society built on a philosophical foundation of property rights, but a number of key events of that era, including the Boston Tea Party, Shays' Rebellion, and the Whiskey Rebellion, were all tax revolts. Less dramatically, this legacy survives, as large majorities of Americans believe that their taxes are too high, even though the typical tax burden in the United States is lower than in most other developed nations. Recent political battles over taxes have essentially been debates about which candidate would cut taxes more. One rare exception reveals why: In 1984, Democratic presidential nominee Walter Mondale unequivocally declared during the campaign that he planned to raise taxes and went on to lose all but one state (his home state of Minnesota) in that year's election. Historically, politically, and even culturally, Americans are fiercely antitax. One is tempted to say that Americans cynically oppose any tax that they are required to pay, but a
more honest assessment suggests that their main concern is fairness, though what is “fair” is not at all obvious. Broadly speaking, taxes can be progressive, proportional, or regressive. A progressive tax is one in which an individual pays a higher percentage of his or her income in taxes as income increases. A tax that levies a 10 percent charge to a person making $25,000 a year but a 20 percent charge to a person making $50,000 a year is an example of a progressive tax. A proportional tax, sometimes called a flat tax, is one that assesses the same percentage to all individuals regardless of income. For example, a community that required its residents to pay 1 percent of their income to the local government would be assessing a proportional tax; wealthier people would pay far more in terms of absolute dollars (a person making $100,000 would pay $1,000, while a person making $10,000 would pay only $100), but the percentages themselves are flat. Finally, a tax is considered regressive if it costs a greater percentage of income as one’s income decreases. A $50 tax to renew a driver’s license is sharply regressive; even though they pay the same amount, the fee represents a much greater percentage of the income of a typical college student than it does for billionaire and Microsoft founder Bill Gates. While Americans generally support the concept of progressive taxes, their deeply held value of property rights limits their confiscatory tendencies. Proportional taxes have a growing and committed group of supporters, but somewhat surprisingly, most kinds of taxes are at least potentially regressive. It is beyond the scope of this essay to fully consider the reasons why Americans have such a complex view of tax fairness, but it is worth mentioning the effect: Americans generally consider something to be unfair about every kind of tax, and the resulting hostility is tangible and widespread. Lawmakers thus face a terribly difficult puzzle trying to find ways to raise the necessary funds to pay for the goods and services people expect from their governments while satisfying the political pressures to keep taxes low. A 2002 public opinion poll of Pennsylvania voters illustrates the problem: Majorities favored increasing spending on education and for prescription drug coverage for the elderly but also opposed any increases in sales taxes or income taxes. The response has been the creation of a dizzying array of taxes at the state and local levels and,
more recently, an increasing reliance on games of chance to generate revenue. There are dozens of kinds of taxes, but the vast majority of revenues is generated from three main types: sales, income, and property. On the whole, the bulk of state tax revenue comes from the sales tax, which is a percentage surcharge attached to the purchase of a consumer good. The typical state sales tax rate is around 5 percent, though variance across states is very wide; five states (Alaska, Delaware, Montana, New Hampshire, and Oregon) have no state sales tax whatsoever. In total, state sales taxes account for about a third of all tax revenue at the state level. Additionally, most states allow localities to assess an additional sales tax, and thousands of communities do add an extra percent or two to finance local government operations. In major cities, a total sales tax of 8 percent or more is not uncommon. Most critics of sales taxes argue that they are regressive, and indeed a pure sales tax hits the poor especially hard. A 5 percent tax on $100 worth of groceries might be negligible to most people but would be quite burdensome to a family living below the poverty level. Hence, most states exclude "necessities" from sales taxes, including food, prescription drugs, and utilities, and in some cases, clothing as well. But too many exemptions would interfere with the fiscal goal of raising sufficient revenue, and the results of such choices are often comical. In New York, Kool-Aid is taxable, but Ovaltine is not. The tax commissioner of Ohio told Governing magazine that he keeps two bottles of Snapple on his desk to illustrate the absurdity; the fruit punch is taxable, but the iced tea is exempt, as only the latter is classified as food. Even with the exemptions for necessities, most economists contend that the sales tax is at least somewhat regressive. Wealthier people may pay more sales tax in absolute dollars as a result of more spending on consumer goods, but the net effect is disproportionate, as middle- and lower-income individuals pay a greater percentage of what they earn. Hence, the sales tax is politically unpopular, and attempts to increase it are usually rejected by voters. The sales tax faces other fiscal and political challenges as well. Despite the potential for revenue, very few states tax services, and everything from legal work to haircuts is largely free from taxation. Business owners have successfully argued that taxing such services
would discourage their use and thereby harm the economy. Lawmakers have also expressed concerns about the possibility that such taxes would put their state at a competitive disadvantage. Internet sales have also drastically cut into state sales tax revenues. As a result of a 1992 U.S. Supreme Court ruling, Internet sales are largely free from sales taxes. Some states attempt to collect such taxes voluntarily by asking citizens to report such purchases on their tax return form, but it is hard to imagine that is an effective means of collecting in full. But whatever the problems with the sales tax, the alternatives are even less attractive. The other primary source of tax revenue for states is the income tax. This is, of course, the tax levied on the income of an individual (or on the profits of a business), assessed as a percentage of that income. This is the main source of tax revenue for the federal government and for about 25 percent of state tax revenue as well. Although the income tax is a staple of modern American life, the complex rules make the federal income tax difficult for many Americans to understand; add to that the variety of tax systems across the states, and the state income tax is almost incomprehensible. Many states have graduated, progressive tax rates that mirror the federal system, though the rates themselves are substantially lower than the federal rates. In Louisiana, for example, the first $10,000 of income is taxed at 2 percent, the next $40,000 is taxed at 4 percent, and anything more than $50,000 is taxed at 6 percent. A handful of states have flat (proportional) rates of income taxation. Residents of Illinois, for instance, are required to pay the state 3 percent of their income, regardless of how much they earned. Finally, there are several states that do not tax individual incomes at all, including Alaska, Florida, Nevada, South Dakota, Texas, Washington, and Wyoming, while New Hampshire and Tennessee limit their state income taxes to dividends and interest income only. The income tax is the most likely tax to be progressive, and to the extent that Americans believe that it is equitable for wealthier individuals to pay a greater share of the tax burden, it may be the closest thing to a “fair” tax. But even so, the politics are very complicated—how progressive is too progressive? Is it excessive for an individual to pay 20 percent of his or her income to the state? What about 40 percent?
And at what point does that individual make the decision to move to a state with lower taxes? The corporate income tax is especially vulnerable to this problem. Even though it is probably the most politically palatable tax, competition among states makes it difficult for lawmakers to impose high tax burdens on corporations for fear that they will relocate to a taxfriendlier state. As a result, the corporate income tax is not a major source of revenue for states. A very small number do generate significant revenue from the corporate income tax, but the national average is less than 6 percent. Both the sales and income taxes are constrained by fears of interstate competition, but at least one tax is exempt by definition from that concern. The property tax is the most significant source of revenue at the local level. It is levied as a percentage of the value of real property, such as a house. The value of the property is determined by a tax assessor who typically bases his or her appraisal on the selling price of nearby similar homes. It is a crucial tax in nearly all states because it is usually the main source of funds for public education. But it is enormously unpopular, with the federal income tax as its only rival for “worst tax,” according to recent public opinion polls. Not only is it complicated, but since most people pay it in a lump sum, it is considerably more painful than an income tax that is paid through weekly withholdings. Moreover, the property tax is potentially very regressive. Since homes tend to increase in value at a rate far exceeding increases in income, the same percentage rate of the property tax will result in consistently higher tax bills for home owners. Seniors on fixed incomes are unusually vulnerable, and many states have instituted caps that will freeze a property’s tax rate based on a formula tied to its purchase value. In addition, nearly all states provide other kinds of exemptions to low-income or disabled home owners. Frustration with property taxes in California in the late 1970s was so high that it led to an open revolt at the ballot box. In 1978, voters there approved a ballot measure that cut property tax rates dramatically and limited the ability of the state legislature to reverse the decision at any future date. Several other states soon followed suit, and revenues from property taxes declined sharply. In the 1950s, localities generated approximately 50 percent of their total revenues
through property taxes; today that figure is less than 30 percent and dropping steadily. Even with the protections for the poor, Americans seem to have a special antipathy for the property tax. Lawmakers do have other tax options available to them, such as the excise tax, which is a special kind of sales tax that adds an additional surcharge to specific items, most commonly cigarettes, alcohol, and gasoline. Such taxes are used in every state, though the rates vary widely. Rhode Island assesses a tax of $2.46 on every pack of cigarettes, while South Carolina charges only seven cents (the national average is 79 cents). A few states also benefit from the severance tax, which is a fee charged to industries for the extraction of natural resources, such as oil, coal, and timber. Alaska is so rich in such resources that its severance tax finances most of the cost of governing the state, and there is no sales tax or income tax there. States also tax most licenses and permits, large cash gifts, and even death (the estate tax). These are just a sample of the numerous other kinds of taxes that states and localities use. Ultimately, though, no matter how the tax burden is distributed and no matter what or who is taxed, Americans are extremely suspicious that they are paying more than their fair share. They still want the government to provide services, but they oppose nearly every effort to pay for them. Games of chance have provided lawmakers with a way out of this paradox. In the past 30 years, all but a few states have legalized lotteries and/or casinos as a way of generating revenues without raising taxes. In 2001, Americans legally gambled more than $700 billion; taxes on gambling receipts are so high in Nevada that it does not need to tax the income of its residents. Lotteries are even more common. Americans spend roughly $25 billion a year on lotteries, producing sizable revenues for states. Critics contend that games of chance are seductive, short-term solutions that carry their own hidden costs, such as increases in bankruptcies and divorces. But supporters counter that unlike taxes, games of chance are completely voluntary. It is beyond the scope of this essay to elaborate on that debate, but in an antitax society, even if the long-term problems are severe, it is unsurprising that lawmakers would gravitate toward such a solution. Walter Mondale can attest to the perils of the alternative.
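The distinctions among progressive, proportional, and regressive taxes, and the bracket arithmetic behind a graduated income tax, can be made concrete with a short sketch. The following Python fragment is offered only as a hedged illustration, not as a rendering of any state's actual tax code: the bracket figures echo the Louisiana example cited above, the 3 percent flat rate echoes the Illinois example, the $50 fee echoes the license-renewal example, and the sample incomes are hypothetical.

```python
# Illustrative sketch of the tax-structure arithmetic described in this entry.
# Bracket, rate, and fee figures mirror the examples given above; the sample
# incomes are hypothetical and chosen only for illustration.

def graduated_income_tax(income):
    """Graduated tax: 2% on the first $10,000, 4% on the next $40,000,
    and 6% on anything above $50,000."""
    brackets = [(10_000, 0.02), (40_000, 0.04), (float("inf"), 0.06)]
    tax, remaining = 0.0, income
    for width, rate in brackets:
        portion = min(remaining, width)   # income falling in this bracket
        tax += portion * rate
        remaining -= portion
        if remaining <= 0:
            break
    return tax

def flat_income_tax(income, rate=0.03):
    """Proportional ('flat') tax: the same rate regardless of income."""
    return income * rate

FLAT_FEE = 50  # a fixed charge, such as a driver's license renewal fee

for income in (25_000, 50_000, 100_000):  # hypothetical incomes
    graduated = graduated_income_tax(income)
    flat = flat_income_tax(income)
    print(f"income ${income:>7,}: graduated {graduated / income:5.2%}, "
          f"flat {flat / income:5.2%}, $50 fee {FLAT_FEE / income:5.2%}")
```

On these assumed figures, the graduated schedule's effective rate rises with income, the flat rate holds constant, and the fixed fee claims a shrinking share of income as income grows, which is precisely the progressive, proportional, and regressive pattern the entry describes.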
Further Reading Barrett, Katherine, et al. "The Way We Tax: A 50-State Report." Governing. Available online. URL: http://governing.com/gpp/2003/gp3intro.htm. Accessed June 19, 2006; Brunori, David. State Tax Policy: A Political Perspective. Washington, D.C.: The Urban Institute Press, 2005; Saffell, David C., and Harry Basehart. State and Local Government: Politics and Public Policies. 8th ed. Boston: McGraw-Hill, 2005; Slemrod, Joel, ed. Tax Policy in the Real World. New York: Cambridge University Press, 1999; Smith, Kevin B., Alan Greenblatt, and John Buntin. Governing States and Localities. Washington, D.C.: Congressional Quarterly Press, 2005; Thorndike, Joseph J., and Dennis J. Ventry, Jr., eds. Tax Justice: The Ongoing Debate. Washington, D.C.: The Urban Institute Press, 2002. —William Cunion
town meeting The town meeting is the purest, most direct form of democracy practiced in the United States. Town meetings have been used for more than 300 years and are most often practiced in small towns in the New England area. The town hall meeting raises several key questions about both the proper functioning of democracy and the effective functioning of modern government. At what level is politics most democratically and most effectively practiced? Can a nation that is serious about political democracy reject the small town, small government model on which the town hall meeting is based? Can a large or extended republic truly embrace democratic methods and means? Can a superpower truly be a democracy? And at what level is democratic politics best practiced, the local or the national level? A democracy is more than merely voting from time to time. And while this minimalist view of democracy is easy to practice and asks very little of citizens, most of those who study democracy argue that such a narrow definition does a disservice to the more robust and comprehensive view of democracy that the town hall meeting symbolizes. A pure, or participatory, democracy asks a lot of citizens, including time, attention, thought, and commitment. That is why many citizens prefer the minimalist view of
democracy—it is easy to practice and asks very little of the citizen. But if the minimalist view asks little, it returns little as well. The more robust, or participatory, definitions of democracy may demand more of the citizen, but they come with the promise of delivering more as well. To participatory democrats, the value of democracy is not merely in the way it sets up and guides power, but in the effect it has on the citizen who practices democracy. A fully practicing participant in the community is enlivened, developed, and enhanced by the very practice of democracy. He or she becomes more self-aware, more in command of life, more involved with the community, in short, more wholly and fully enriched by the experience of self-governing. He or she becomes more independent and more alive, more in touch with neighbors, and more in touch with the concerns of the community. A fully practicing democrat is a whole person. That is the way the ancient Greeks imagined the impact of democracy on the citizens of Athens. It is the goal of the town meeting form of democracy as well. When the fever of revolution caught hold in the American colonies in the late 1700s, democracy was more an abstract idea than a tangible product. The ideas and ideals that animated the American Revolution were democratic in sentiment but never fully articulated or tangibly described. Democracy was an ideal, something to strive for, but no clear roadmap existed. The revolutionary and democratic sentiments presented by Thomas Paine in his influential pamphlet Common Sense (first published in January 1776) and elaborated on by Thomas Jefferson in the Declaration of Independence (July 1776) contained grand but vague references to what form this new democratic government they were advocating might take. After the Revolution had been won, the hard work of translating these bold ideas into a workable form of government took center stage. How democratic should this new government be? What role should the common man play in this new government? Who should rule, and by what rules? With the Revolution won, the framers were first governed by the Articles of Confederation, a distinctly states’ rights document that left the federal government feeble. Later, a new federal Constitution was adopted giving the central government new and more significant powers. But lost in this
transition was the ideal of democracy. This new Constitution was decidedly not democratic. In fact, it created a republic. So what happened to the notion of democracy? If the framers abandoned the hope of democracy, there were others who kept the dream alive. In fact, virtually from the beginning of the republic, many, especially in the small towns of New England, kept democracy alive and well in the form of what were called town meetings, the truest and in some ways last form of pure democracy still practiced in the United States. There is no “one size fits all” town meeting. And while the basic concept of the town hall or town meeting is the same—gather together the citizens in a small area, talk about the issues of the day, and make decisions by directly voting on public policy issues—the format of the meetings varies significantly. Different towns have different rules and regulations, usually found in the city bylaws or charter. Usually these rules are written, but in some cases they are a function of tradition and customs and are thus less formal. Generally, the town meeting format is restricted to towns with fewer than 12,000 inhabitants. They mimic the forms of Athenian democracy practiced more than 2,000 years ago. With few exceptions, anyone may speak as long as he or she is a registered voter in the town, and voting is also open to all registered voters. There is usually an agenda that is called a “warrant” that is distributed before any town meeting so that citizens may know what decisions will be made at the town meeting. Most town meetings are open to all citizens who live in the town. They usually decide three things: the salaries of elected officials, the town budget, and the local statutes or laws. While town meetings are fairly common in small towns in New England, they are rarely practiced in other regions of the nation. There is a significant difference between representative government and direct democracy. In representative systems, the citizens elect others to serve as representatives or intermediaries who are to work on their behalf. In direct democracy, of which a town meeting is but one example, the people are personally and directly responsible for decision making. Proponents of the town meeting stress the participatory elements of the meeting and the democratic elements of the practice. It truly is the purest form
of democracy practiced in the United States and speaks to a more genteel and nostalgic view of the potentialities of democracy in America. These town meetings allow citizens to become decision makers, bringing democracy up close and personal. They make the citizens responsible for the government. At their best, they encourage participation and responsibility taking by citizens and thus develop a more sophisticated democratic citizenry. Critics of town meetings argue that even with this more direct form of participation, special interests tend to dominate communities, as they are better organized, better funded, and more committed to getting the “goods” that government hands out. Critics also claim that participation in town meetings tends to be low, undermining the core belief that these town meetings are about participation of the citizens in government. James Madison, in Federalist 55, argued that “In all numerous assemblies, of whatever characters composed, passion never fails to wrest the scepter from reason. Had every Athenian citizen been a Socrates every Athenian assembly would still have been a mob.” This fear of the mob, of the potential for the passions of the citizens to become inflamed, animated the framers to eschew direct democracy and embrace instead a form of representative democracy whereby the whims and passions of the citizens would be filtered through the representative assemblies and might thus temper the whims and passions of the mob. While it is true that only occasionally do these town meetings exhibit the highest forms of participation, rhetoric, and decision making, they do represent a form of democracy worth maintaining. For all their faults—and every form of democracy and every form of government have their faults—they represent part of the tradition and heritage of democracy as practiced in its many forms and varieties in the United States. Faults notwithstanding, the town meeting is an honored and honorable form of direct democracy and one of the few still practiced in the United States today. Further Reading Fishkin, James S. Democracy and Deliberation: New Directions for Democratic Reform. New Haven, Conn.: Yale University Press, 1991; Goebel, Thomas. A Government by the People: Direct Democracy in
America, 1890–1940. Chapel Hill: University of North Carolina Press, 2002; Haskell, John. Direct Democracy or Representative Government? Dispelling the Populist Myth. Boulder, Colo.: Westview Press, 2001. —Michael A. Genovese
urban development The term urban development encompasses myriad theoretical orientations on the origin and evolution of cities. Urban development is interdisciplinary in nature because of the many academicians and professionals who work within it and who bring to it their unique expertise from diverse fields—sociology, psychology, criminology, ecology, political science, history, economics, finance, planning, engineering, landscape, architecture, and geography. While this specialization causes fragmentation, it also has led to enriched approaches to understanding the city, its inhabitants, and its problems. At the turn of the 20th century, the United States was moving from an agrarian society toward industrialization. Production by individual skilled craftsmen and artisans lessened as mass production increased. Cities faced rapid growth, in part as a result of people moving from farms to cities and due to an increasing influx of immigrants. At the same time, improvements in electricity, transportation, and communication were occurring. Demands were placed on cities, and individual cities dealt with those demands such as overcrowding, pollution, and public health concerns in a variety of ways. Physically, the cities grew and developed in a variety of directions, some more orderly than others. If we try to explain or understand that growth or the patterns of development, we have many models of urban development to consider. Among the models are classic location theory and central place theory. Classic location theory suggests firms locate based on how best to minimize costs of land, labor, and capital. Cities vary in their locational advantages. Central place theory is based on a hierarchical ordered classification of cities. High-population areas with greater demand will have higher-order industries or facilities present or will have a greater range of these available. So, for example, we will not find a museum or a hospital in every city, but even the smallest of cities will
have a post office and grocery store. Other theorists have developed typologies, or classification systems, to describe cities based on specific factors such as whether the city has an economic focus on manufacturing, mining, government, business, high-tech industries, education, military, tourism resorts, retirement, or some combination of two or more of these sectors. Other models suggest that cities compete in a competitive marketplace and must strive to promote economic growth. Therefore, decision makers must give priority to policies that promote growth and are very constrained in their choices. Yet another approach suggests that internal political forces shape urban development choices, and what occurs is no more than the outcome of which political leader is successful in winning support for his or her agenda. Under this scenario, there is assumed to be great decision making latitude given to those political leaders. Under regime theory, it is suggested that in addition to local elected officials, there are more complicated public-private interactions at work that involve informal arrangements between and among business leaders, church leaders, union interests, the media, and other interest groups that in turn affect urban development. Thus, with regime theory it is through the study of community power structures that one gains an understanding of the decisions involved in shaping the urban form. Another more comprehensive model suggests urban development is shaped by a combination of market conditions— attracting investment; intergovernmental support— planning, land use controls, infrastructure, and housing; popular control systems—public participation; and local culture—what the citizens value. Urban development depends on the action or lack of action by others—on choices and on choices forgone. At the most local level, it involves all aspects of regulation, annexation, investment, and the location and availability of social services and schools. The actions taken by investors, citizens, and elected officials affect the outcomes of urban development. At the local level, the role of government in urban development includes the functions of planning, regulating, historic preservation, farmland preservation, developing desirable patterns of development, and reducing disparities among citizens. A city must plan and consider whether the results of its plans will have
the intended outcomes. A city must anticipate change and be ready to match available actions with impending problems or opportunities. A city must recognize the interdependent nature of its actions and that there are some things it cannot change. A look at any city of even modest size reveals the presence of zoning ordinances, subdivision regulations, and a comprehensive plan, all of which help guide local officials in keeping pace with development demands and maintaining a high quality of life for their citizens. Typically, a comprehensive plan will include an analysis of the future demands of demographics, housing, and economic conditions; an analysis of environmental and cultural elements unique to the community; an analysis of community facilities and services, such as fire stations, schools, parks and recreation facilities, libraries, and hospitals; an analysis of infrastructure, including public utilities, water supply, waste management, and technology; an analysis of transportation, including roadways, rail, water, and air travel; and an analysis of land use patterns. Usually, 20-year projections and recommendations for addressing each of these elements will be described in the comprehensive plan. This plan may include how local programs, activities, and land development regulations will be initiated, modified, or continued, and the plan usually addresses how each of these elements may complement or conflict with plans by single or special districts within the municipality, such as water, sewer, and fire districts; adjacent local governments; and regional and state agencies. Most importantly, a comprehensive plan must involve public participation in order to reconcile the long-term needs of its citizens with the short-term wants of its elected officials. It must be recognized that not all citizens will be in the same position to participate. Ideally, the comprehensive plan will serve to enhance or create vibrant neighborhoods within a city that are sustainable, diverse, democratic, socially and environmentally just, and well balanced. At the state level, urban development is affected by the state's policies on taxation, regulation, education, economic development, and transportation. Often the state serves to administer funds that are passed through from the federal government to the local jurisdictions, which in turn affects urban development.
In other instances, the state may provide direct funding for infrastructure improvements, for example, or may institute a policy that has direct implications on the climate for doing business within that state. Historically, after World War II, the United States saw the development of federal programs that helped with improving home ownership rates through financing and housing construction and road construction that increased migration to the suburbs. More recently, at the federal level, there has been a focus on urban renewal through planning, housing, and community development programs. Federal funding through the Department of Housing and Urban Development supported programs to improve communities through, for example, community development Block Grants, the HOME program (which provides a federal block grant to state and local governments designed exclusively to create affordable housing for low-income households), and the Low Income Housing Tax Credit program. Depending on the particular program, aid may be to communities for infrastructure improvements or to individuals in the form of housing vouchers or rehabilitation of housing, or in the form of tax incentives to leverage private investment. Other funding programs are focused more toward economic development projects through, for example, the designation of enterprise zones (which help to promote job creation and capital investment in areas of economic distress). As demographic and economic changes—such as increased immigration, an aging population, the formation of smaller households, decline in manufacturing jobs, decentralization, and globalization— continue to occur in the United States, it will be important for all levels of government to continue to play a role in assisting cities to reach their economic potential. It is also important that government address the inequalities that continue to exist among cities and among citizens. Government must think about the services it provides, the mix of services provided, and where these services are provided. A list compiled of core concerns in the study of cities includes evolution; culture and society; politics and government; economics, finance, and regional science; space and city systems; megacities; planning, design, landscape architecture, and architecture; race, ethnicity, and gender relations; and problems,
including politics, poverty, overcrowding, discrimination, and crime. Suggestions to address some of these concerns include enhancing innovative sectors of the urban economy, transforming the physical landscape, growing the middle class through educational opportunities, revitalizing cities through support of low-wage workers, and creating neighborhoods of choice. Over time, the study of urban development has focused on or been entwined with patterns of development, the environment, design and planning, growth, suburbanization, renewal and redevelopment, sprawl, sustainability, resurgence, and the inequities in society, such as poverty and racial segregation, that result from the realities of urban development. While this list is not exhaustive, future trends will indubitably include or build upon studying and understanding all of these issues and their influences on urban development as it continues to shape how we live, learn, work, and play. Further Reading Hopkins, Lewis D. Urban Development: The Logic of Making Plans. Washington, D.C.: Island Press, 2001; Hudnut, William H. III. Halfway to Everywhere: A Portrait of America's First-Tier Suburbs. Washington, D.C.: ULI-The Urban Land Institute, 2004; Katz, Bruce. Diverse Perspectives on Critical Issues: Six Ways Cities Can Reach Their Economic Potential. Washington, D.C.: Brookings Institution, 2006; Kotkin, Joel. The City: A Global History. New York: Modern Library, 2006; Legates, Richard T., and Frederic Stout, eds. The City Reader. New York: Routledge, 2003; Logan, John R., and Harvey L. Molotch. Urban Fortunes. Berkeley: University of California Press, 1987; Peterson, Paul. City Limits. Chicago: University of Chicago Press, 1981; Savitch, H. V., and Paul Kantor. Cities in the International Marketplace: The Political Economy of Urban Development in North America and Western Europe. Princeton, N.J.: Princeton University Press, 2002; Sharp, Elaine. Urban Politics and Administration: From Service Delivery to Economic Development. New York: Longman, 1990; Stone, Clarence N. Regime Politics: Governing Atlanta, 1946–1988. Lawrence: University Press of Kansas, 1989; Wheeler, Stephen M., and Timothy Beatley, eds. The Sustainable Urban Development Reader. New York: Routledge, 2004. —Victoria Gordon and Jeffery L. Osgood, Jr.
zoning In the United States, zoning is a power vested in local governments to designate areas of a city or county for specific land uses (or zones). Zoning regulates the use of land as well as the size, bulk, and placement of buildings on lots. Constitutionally, zoning is considered a police power that cities and counties derive from their state governments. Zoning decisions are usually made by city or county planning departments staffed by professionals trained in urban planning, usually with oversight by an elected city or county board. Zoning laws are administered by building inspectors, who determine whether buildings are in compliance with zoning laws. The general idea behind zoning is to create an orderly development of land and to separate land uses thought to be incompatible. Examples include setting aside specific areas of a city or county for single-family homes, multifamily dwellings, commercial areas, industry, and agriculture. Critics of traditional, or Euclidean, single-use zoning argue that the practice limits flexibility and creativity and often leads to sterile urban environments. Zoning is the primary tool in a larger effort by local governments to regulate land development, known as general planning. General plans are documents that attempt to regulate a community’s future development. In addition to zoning areas of a city or county for specific land uses, general plans consider the need for infrastructure such as sewers, streets, and lighting as well as parks, libraries, and other public facilities. Because zoning places limits on private property rights, zoning decisions are among the most contentious issues for local governments. The granting of exceptions to zoning laws, known as variances, can sometimes set off political firestorms. Zoning decisions often pit interest groups with an interest in land development against one another. Groups typically fall into one of two camps: progrowth or slow growth. Interest groups such as developers, real estate agents, local newspapers, the business community, and public employee unions often make up powerful progrowth coalitions that seek to promote land development through permissive zoning laws. These interests typically argue that the intensification of land use will result in economic development that benefits the community as a whole.
On the other hand, home owner groups, sometimes aligned with environmental interests, tend to pursue a slow-growth (and sometimes no-growth) agenda. Slow-growth interests typically argue for restrictions on land development in order to protect themselves from quality-of-life threats such as traffic, pollution, noise, and density. Environmental interests seek to block zoning changes that threaten open space, endangered species, and other aspects of the environment. Since the 1970s, home owners and environmentalists have become increasingly important players in zoning decisions. Historically excluded from decision making that had been dominated by progrowth insiders, slow-growth activists have sought to make land use decision making more democratic. “Ballot box zoning,” whereby zoning decisions are left up to local voters instead of planning bureaucrats, is an increasingly popular way of making some land use decisions. During the 1990s, Ventura County, California, north of Los Angeles, became the poster child for ballot box zoning, with a number of municipalities adopting requirements for voter approval of important land use decisions. In 1916, New York City became the first city in the United States to adopt a comprehensive zoning law, modeled on zoning laws in Europe. Advocates of zoning in New York argued that externalities resulting from the city’s burgeoning population and industrial economy needed to be better managed. Soon after, cities around the nation began to adopt zoning as a mechanism for regulating land use. The 1926 U.S. Supreme Court case Village of Euclid, Ohio v. Ambler Realty Co. set an important precedent upholding the practice of zoning. The plaintiff in the case, a realty company, intended to develop land that it owned for commercial purposes. When the Village of Euclid rezoned the property to make it compatible with a nearby residential district, Ambler Realty Co. filed suit, arguing that its property had been taken without due process of law. In its decision, the Supreme Court argued that Euclid’s zoning law represented a constitutional use of government authority to protect public health and safety and promote order. As the Euclid case illustrates, zoning exposes a tension between a government’s duty to both protect private property and promote public health and safety. Unlike the power of eminent domain, for which the
taking of private property requires compensation, courts have ruled that zoning’s limitations on private property do not require that property owners be compensated. Scholars cite three main historical reasons for the emergence of zoning in the United States. First, zoning authority emerged within a context of a deeply rooted tradition of strong local government in the United States. In most other developed nations, land use authority is largely vested in state or national governments. Urban historians cite America’s frontier and colonial experiences and federal political structure as underlying reasons for the nation’s tradition of strong local government. Today, Houston, Texas, is the only major American city to not employ zoning, although the city’s many deed restrictions perform essentially the same function as zoning. Second, zoning emerged within the context of rapid industrialization that took place in America’s cities in the late 19th and early 20th centuries. Meatpacking, steelmaking, and a host of other heavy industries were popping up in cities, sometimes in the middle of residential neighborhoods. Combined with the absence of environmental regulation, the proximity of residential areas to severe industrial pollution prompted civic leaders to consider ways of separating homes from the workplace. Third, widespread zoning also coincided with efforts on the part of white Anglo-Saxon Protestants (WASPs) to establish moral order in American cities during a period of rapid immigration from southern and eastern Europe. During this period, known as the Progressive Era (1900 to 1925), WASP reformers succeed in passing prohibition laws, immigration restrictions, and a number of other measures intended to uphold Protestant moral values. Modern zoning laws that ban the sale of alcohol or outlaw sex shops trace their origins to this era and illustrate the relationship between zoning and morality. Much of the academic literature on zoning focuses on the use of zoning power to segregate populations by class and race, or what scholars call “exclusionary zoning.” Classic examples are suburbs that are zoned exclusively for single-family residences. Cities that ban multifamily dwellings are thus able to exclude residents who are unable to afford the price of a single-family home. Other types of exclusionary, or “snob,” zoning include large lot zoning, where
homes can only be built on large lots, usually between one-half and two acres, but sometimes more. Some zoning laws restrict the number of residents who may live in a house and even limit the number of unrelated persons who can live in a home. Ultimately, critics say, exclusionary zoning allows independent suburban jurisdictions to effectively wall themselves off from minorities and the poor, resulting in metropolitan areas characterized by unequal access to housing, jobs, and education.

In 1977, the Supreme Court upheld a Chicago suburb's zoning ordinance that prohibited multifamily housing throughout much of the city. The plaintiff in the case was a local church group that wanted to build subsidized housing in the mostly white and affluent suburb. In what became known as the Arlington Heights case, the Court ruled that zoning restrictions are legal if there is no intent to discriminate on the basis of race. The Court found that zoning laws that produced only the effect (as opposed to the intent) of racial discrimination were legal. Opponents of exclusionary zoning point out that proving discriminatory intent is an almost impossibly high legal threshold. Although federal courts have been less inclined to overturn local zoning ordinances, opponents of exclusionary zoning have won limited victories in a few states. In a series of cases in New Jersey during the 1970s and 1980s known as the Mount Laurel decisions, housing rights advocates successfully argued that the State of New Jersey could require communities to build affordable housing. Although the Mount Laurel decisions received much attention, they did not result in the construction of much affordable housing. In the end, communities simply devised creative ways of shirking their court-ordered affordable housing requirements.

Not all zoning is exclusionary. In recent years, the concept of "inclusionary zoning" has been adopted in a number of communities as a way of increasing the supply of affordable housing. Inclusionary zoning requires developers to set aside a particular number of housing units, usually between 10 percent and 20 percent, for low- and moderate-income people. In exchange, cities often give developers "density bonuses," which allow a greater number of total units to be built than existing zoning allows. In recent years, planners around the country have sought to increase densities, or "up-zone,"
some communities in an attempt to implement a planning strategy known as "smart growth." Smart growth represents a fundamental departure from traditional urban planning in that it attempts to integrate, rather than segregate, various land use elements into a community. The idea is to make communities more livable by creating mixed-use neighborhoods consisting of dense housing close to commercial districts, jobs, and public transit. Although smart growth planning has caught on in places such as downtown Portland, Oregon, and San Diego, California, it is unlikely that a downtown condo will ever replace the single-family home as the embodiment of the American dream.

Because much of the literature on zoning focuses on the politics of exclusion, still another branch of scholarship advocates the participation of higher levels of government in land use decision making. For example, rather than leaving land use decisions solely up to cities and counties, the state governments of Oregon and Washington mandate that city and county general plans adhere to a number of statewide goals, including the provision of affordable housing and the prevention of urban sprawl. In both states, opposition to state interference in land use decisions remains fierce in some circles, and there is evidence that efforts to protect open space and farmland have limited the supply and affordability of housing. However, despite these and other efforts to oversee local land use decisions, the vast majority of state governments—many of which are dominated by suburban and rural interests—allow their local governments virtually complete control over land use decision making.

Further Reading
Babcock, Richard F. The Zoning Game Revisited. Madison: University of Wisconsin Press, 1990; Burns, Nancy. The Formation of American Local Governments. New York: Oxford University Press, 1994; Christensen, Terry, and Tom Hogen-Esch. Local Politics: A Practical Guide to Governing at the Grassroots. Armonk, N.Y.: M.E. Sharpe, 2006; Danielson, Michael N. The Politics of Exclusion. New York: Columbia University Press, 1976; Davis, Mike. City of Quartz. New York: Vintage Books, 1990; Gottdiener, Mark. The Social Production of Urban Space. Austin: University of Texas Press, 1985; Judd, Dennis R., and Todd Swanstrom. City Politics: Private Power and Public Policy. New York: Longman, 2002; Linowes, R. Robert, and Don T. Allensworth. The Politics of Land Use. New York: Praeger Publishers, 1973; Logan, John R., and Harvey L. Molotch. Urban Fortunes: The Political Economy of Place. Berkeley: University of California Press, 1987; Plotkin, Sidney. Keep Out: The Struggle for Land Use Control.
Berkeley: University of California Press, 1987; Ross, Bernard H., and Myron A. Levine. Urban Politics: Power in Metropolitan America. Belmont, Calif.: Thompson Wadsworth, 2006; Weiher, Gregory R. The Fractured Metropolis: Political Fragmentation and Metropolitan Segregation. Albany: State University of New York Press, 1991. —Tom Hogen-Esch
International Politics and Economics
capitalism
Capitalism in the United States has attracted the attention of scholars and commentators across various disciplines. Perhaps no other general topic and its myriad ramifications has witnessed such diverse proliferation and application among researchers, practitioners, advocates, and critics. The idea and significance of capitalism have presented themselves through writings in economics, history, literary criticism, political science, sociology, philosophy, law, business, psychology, and many others. This is so because capitalism, as both idea and practical reality, is a fundamental component that defines and is simultaneously defined by almost everything that is American. The ways we conceptualize and animate most, if not all, of our political, economic, social, and cultural institutions are inextricably linked to the ways we conceptualize and practice capitalism. Because societal and individual notions of capitalism are so pervasive and America's devotion to capitalism is ostensibly unshakeable, discussions of capitalism, even among scholars, are frequently based on unreflective self-affirmation. So as a nation, Americans largely assume that capitalism is intrinsic and that its emergence and development in the United States were inevitable. However, the ascendancy of capitalism in the United States, though partly the product of specific structural and situational advantages that have inhered in American society since the 18th century, has not been inevitable or predictable. Indeed, contrary to long-held beliefs among economists, economic and business historians, and many political scientists, many historians and social scientists have instead argued that America's roots were decidedly precapitalist, if not actually anticapitalist.

Over the past four decades, an overwhelming amount of scholarship has fueled an academic debate about the emergence and eventual development of capitalism in the United States. On one hand, a liberalist camp led by economists, economic and business historians, and their intellectual supporters has asserted that America was "born" capitalist and that even the earliest colonial settlements were suffused with an ethos based on an eagerness to acquire property and capital (what some would call greed) that promoted success through profit seeking and free markets. On the other hand, a republicanist group of scholars most prominently represented by social and labor historians, Marxist and neo-Marxist social scientists, and their acolytes has emphasized the centrality of civic humanist and classical-republican principles among colonists and the existence of a communitarian culture that promoted common civic and economic ideals and prioritized communal enterprise over individual interest. Pursuantly, capitalism was not imminent but arose from specific circumstances created by industrialization during the 19th century, circumstances that undermined and prevented the further evolution of communitarian ideologies in the United States. Recently, despite ongoing academic controversies over the nature of capitalism in the United States,
certain interpretations have been more firmly established than others, and the definitional features of the history of capitalism in America have been elucidated. Those interpretations and the aforementioned debates have acknowledged that inquiries into the history of capitalism in the United States revolve around two basic questions. First, what is capitalism, or, more specifically, what are the defining characteristics of American capitalism? And second, when and how did capitalism develop in the United States? As should be obvious, the second question presupposes a proper and adequate answer to the first, not least because American capitalist dynamics can differ widely from those of other industrialized nations. By and large, economists agree that capitalism is an economic system dedicated to private enterprise, free markets, and the creation of profit. Some would add that an absence or minimum of governmental regulation or control is indispensable, though this notion is more an outcome of the nexus of capitalism and politics than of capitalism itself. Arguably, the sine qua non of capitalism is profit, so the fundamental impulse in capitalist societies is profit maximization. Profit is the most significant incentive for private ownership and the production of surplus value, whose eventual reinvestment enables continued growth and expansion. Although the generation of profit is conceivable and can be achieved in nonprivate environments, it is secured and logically warranted by private ownership. Capitalist economic principles have been incorporated by different cultures in numerous settings over the course of Western history, with varying results. A common theme that characterizes most of these settings has been the rise of mass industry and the dominance of the corporation, so that today’s industrial capitalism is only a remote cognate of the earliest forms of capitalism depicted by its earliest exponents. The purest forms of capitalism, something akin to the network of small-scale productive ventures envisioned by Adam Smith, generally manifested themselves during the earliest stages of capitalist development in just a handful of countries, but the realities of an industrialized world have rendered Smith’s deceptively simple vision irrelevant and almost meaningless. America in the 21st century is typical of this trend and has probably traveled further than any other
industrialized nation from Smith’s portrayal of capitalism as a system of unencumbered markets governed by efficient and equitable exchanges of goods among small-scale, proprietor-managed enterprises and rational consumers. The U.S. economy is dominated by networks of multinational conglomerates that support markets with monopolistic or oligopolistic rather than capitalist traits. In today’s America, ownership and management have long been divided and separate, so that the link between private ownership and profit has grown increasingly complicated. In fact, the traditional coupling of ownership and profit has been supplanted by a much stronger tie between management and profit. More than any other single factor, the modern corporation has allowed this transformation of American capitalism. The formation and acknowledgment of the modern corporation and its legitimization through common law doctrine and statutory provision have allowed and promoted the creation of professionally managed, externally capitalized megastructures with disproportionate power and influence over markets and market dynamics. America’s conglomerates and its executives enjoy political and economic privileges that belie the equity and balance necessitated by the original proponents of capitalism in the 18th century. This should not be construed as a suggestion that the power and status enjoyed by American corporations are symptoms of a Marxist theory of conspiracy between government and business. Nevertheless, leading businesses and their executives have access to and control over market dynamics and political processes that far outstrip their numerical or even theoretical significance. Moreover, despite the fact that capitalism seemingly demands minimal regulatory interference with markets and correspondingly necessitates governmental neutrality and objectivity toward individual competitors in the marketplace, the reality in the United States has deviated considerably from that ideal. Macroeconomic circumstances and political relationships in the United States have offered some top corporations in crucial industries protections and incentives whose purpose has been to ensure the continued viability and dominance of those corporations. For example, bankruptcy laws and proceedings reveal a decided bias that allows larger corporate entities unusual latitude in their reorganization procedures,
whereas private individuals and smaller businesses are often handicapped by those laws. At times, the government has even taken a direct role in the resuscitation of troubled industries, as evidenced by its bailouts in the automobile, transportation, energy, and banking industries—to name only a few. Subsidies to farmers, steel manufacturers, and a host of other entities prolong their longevity yet distort the balance and competitive fairness that an unfettered, equitable marketplace should confer. Preferred providers of goods and services to the government, such as government contractors and subcontractors that are considered vital to the implementation and propagation of government programs, rarely compete for lucrative contracts according to the laws of supply and demand. Those contracts are awarded in an artificially restricted marketplace that favors institutional inertia and a business-as-usual environment designed to limit free competition and traditional marketplace variety. Again, this is not intended to insinuate that government and industry actively collude to control markets in the way Marxist and neo-Marxist critics have contended, but it does demonstrate that American capitalism has evolved, or maybe derogated, from its origins to a form that bears little resemblance to the 18th-century ideal. Of course, even early American society did not precisely conform to that ideal, but, in its infancy, American capitalism manifested more of the trappings of veritable capitalism than it does today. A central focus of investigation for economists and economic historians has been the determination of when specific societies become capitalist. Many economists have argued that a society is capitalist if it exhibits sustainable per-capita income growth, and, according to that standard alone, America was indeed born capitalist. Although useful and meaningful statistical indexes are difficult to compute for the colonial period, economic data gathered by historians over the past 40 years indicate more or less consistent per-capita income growth starting during the second third of the 17th century. Such evidence notwithstanding, many scholars are loath to conclude that America was capitalist at this stage just from one statistical index based on admittedly incomplete economic data. For this and other reasons, whether America truly was born capitalist is still an open question that will
serve as intellectual fodder for ongoing debates among scholars. However, scholars now agree that even those colonial communities customarily considered communitarian havens exhibited at least some of the trappings of capitalism quite early. Stephen Innes, among others, has ably documented the presence and pervasive expansion of an acquisitive mentality among 17th-century Puritan settlements. His studies of colonial Massachusetts and Connecticut portray Puritan manufacturing and trading ventures motivated by profit and dedicated to the establishment of viable business pursuits across a region previously thought to have been concerned primarily with the implementation of a specific socioreligious vision opposed to profit seeking. Land speculation, sophisticated trading networks, and aggressive capitalization strategies were some of the hallmarks of these early communities, and, by the mid-18th century, the New England region was characterized by widespread entrepreneurship. In the middle colonies, where religious motives were less pronounced and outright economic motives for settlement in North America were more evident, the development of an entrepreneurial spirit was also obvious. Trade, shipping, and farming were especially important to the region, as were the small manufactories that emerged in cities by the latter half of the 18th century. In this area and New England, most residents were still producers, cultivating their own land and trading surplus commodities and goods produced on their farms with neighbors, merchants, and regional trading businesses. Many of the rest, located primarily in and around urban areas, were artisans who practiced a marketable skill that enabled them to survive or at least supplement farm income. At this time, up to and including the late 18th century, employer-employee relationships of the sort that evolved out of the Industrial Revolution were not a significant factor, inasmuch as the overwhelming majority of Americans, perhaps as many as 80 percent of white freemen, owned farms or at least some physical property. In the upper and lower South, the situation was problematic from an analytical perspective. Following a few generations of indentured servitude mostly in Virginia, slavery gradually replaced a system that was decreasingly attractive both to landowners and indentured servants. During the seminal period from
the 1670s to the 1720s, black slaves became the dominant source of labor for plantation owners who produced tobacco in Virginia, Maryland, and North Carolina, rice and indigo in South Carolina and Georgia, naval stores throughout the upper South, and sugar in the West Indies. American slavery, though supporting a broader economic system devoted to profit and private ownership, was hardly capitalist. It was based on exploitation and the suppression of free labor markets, and the production of agricultural staples in the South inherently favored large landowners over small farmers. With its manorial settlement patterns and paternalistic social networks based on deference and submission, the economic system in the South, especially after the advent of widespread cotton cultivation in the early 19th century, was reminiscent of feudalism. As accustomed as we have become to stories of the inevitable dominance of slavery in the antebellum South, slavery’s eventual domination of the southern economy was anything but predictable. By the 1780s, the vast profits once enjoyed by tobacco planters had shrunk severely, so that the future of tobacco cultivation in North America was in doubt. At the end of the 18th century, it would not have been unreasonable to conclude that slavery would soon disappear due to the declining viability particularly of tobacco cultivation. Of course, the invention of the cotton gin and the consequent spread of the cotton plantation throughout the deep South resurrected slavery and enshrined it as the centerpiece of the South’s economic system. In addition, cotton highlighted and further entrenched the noncapitalist features of the southern economy. The antebellum South was marked by a high concentration of wealth and income among a small percentage of large plantation owners, which produced a rigid class structure with a de facto white aristocracy increasingly alienated from the rest of southern society. Contrary to popular lore, less than 25 percent of white southerners owned slaves, and no more than 1 percent owned large plantations. Most southern whites were small farmers who, by modern standards, lived in poverty and did not benefit economically from slavery. Moreover, industrialization made comparatively little impact on the pre–Civil War South, which simply accentuated its precapitalist character. Transportation networks were
substandard, so the expansion of markets experienced in the North was reserved mainly for the large planters who sold cotton to international traders and foreign textile manufacturers.

During the late 18th and early 19th centuries, the situation in the North and also the West could hardly have been more different. Beginning in the New England and mid-Atlantic regions, a gradual process of urbanization facilitated the foundation of small factories in leading manufacturing sectors, such as textiles and clothing. Capitalization schemes became more sophisticated with the growth of more mature financial markets, and markets expanded through the proliferation of transportation networks. The steamboat and railroad enabled manufacturers to reach an ever-increasing pool of consumers. And the continued movement of residents to burgeoning cities provided the labor needed for industrial growth and diversification. By the late 19th century, most of the prerequisites for perhaps the most transformative phase of capitalist development in the United States were present in the North and West. During the first three quarters of the century, the bulk of economic growth in the United States had been underwritten by family-owned entrepreneurial ventures whose vulnerability to liabilities and relatively limited ability to raise capital governed their growth. The next phase of America's capitalist development necessarily involved the provision of tools and mechanisms through which business entities could overcome the boundaries that had traditionally defined their growth. As such, the Gilded Age galvanized a period of legal and managerial innovation that culminated in the emergence of the modern corporation. It was during this time and the decades that followed that the types of capitalist megastructures previously described in this article became the norm.

In the South, the transition to industrial capitalism was not so simple or linear. Despite the abolition of slavery mandated by the end of the Civil War through the Reconstruction amendments to the U.S. Constitution, the cultural and economic vestiges of slavery were palpable for decades thereafter and are, sadly, still recognizable. Physically, much of the South was destroyed by war, and its capacity for industrial development was minimal. Reconstruction proved to be a political failure, and the integration of millions of new African-American citizens into southern society was thwarted at every turn by the southern populace and state governments dedicated to segregation at all costs. Eventually, the economic integration of nascent southern industries, such as textiles and mining, into national economic networks promoted the movement of the southern economy toward standards established through industrialization, but this did not happen quickly or easily. Today, state-of-the-art manufacturing centers in the automobile and defense industries, for instance, can be found throughout the South, and southern cities such as Houston, Texas, and Atlanta, Georgia, compete with their northern counterparts as economic hubs. But the twin specters of slavery and industrial backwardness still cast a conspicuous shadow over southern economic development. Poverty levels are high, and lack of modernization is pervasive in too many parts of the rural South. Many residents, especially African Americans, are structurally prevented from competing fairly in the marketplace, with capital, material resources, and access to broader markets unavailable to them. Whether industrial capitalism truly exists in the American South is debatable, as is the related question of whether the South has successfully overcome its precapitalist past.

Further Reading
Bailyn, Bernard. The New England Merchants in the Seventeenth Century. Cambridge, Mass.: Harvard University Press, 1955; Chandler, Alfred D. The Visible Hand: The Managerial Revolution in American Business. Cambridge, Mass.: Harvard University Press, 1977; Galambos, Louis. The Rise of the Corporate Commonwealth: U.S. Business and Public Policy in the Twentieth Century. New York: Basic Books, 1988; Innes, Stephen. Creating the Commonwealth: The Economic Culture of Puritan New England. New York: W.W. Norton, 1995; Kulikoff, Allan. The Agrarian Origins of American Capitalism. Charlottesville: University Press of Virginia, 1992; Martin, John Frederick. Profits in the Wilderness: Entrepreneurship and the Founding of New England Towns in the Seventeenth Century. Chapel Hill: University of North Carolina Press, 1991; Montgomery, David. Citizen Worker: The Experience of Workers in the United
States with Democracy and the Free Market during the Nineteenth Century. Cambridge: Cambridge University Press, 1993; North, Douglass C. The Economic Growth of the United States, 1790–1860. New York: W.W. Norton, 1966; Wilentz, Sean. Chants Democratic: New York City and the Rise of the American Working Class, 1788–1850. Oxford: Oxford University Press, 1984. —Tomislav Han
command economy In a command economy, the government takes direct control of the economy instead of relying on individual firms or entrepreneurs to make basic economic decisions and the market to provide feedback. This economic strategy is most closely associated with communist regimes such as the former Soviet Union, China, and North Korea but has some relationship to the policies of African and Asian socialist regimes. In the comprehensive version of the command economy developed in the former Soviet Union and then exported to China, North Korea, Vietnam, and Cuba, all economic decisions are centralized in government agencies. The original goal was to avoid the exploitation believed inherent in unrestrained capitalism by assuring that economic activity met human needs. Command economies were also believed to be necessary to consolidate a revolution by giving a communist party, through the government, control over much of everyday life. A full-blown command economy, such as the one the Soviet Union attempted to create, relies on formal planning and hierarchical decision making for all economic activity. At the heart of the economy is a comprehensive five- or ten-year plan. The big plan is translated into smaller and more detailed provisions down to the level of an individual factory or collective farm. The government employee in charge of a shirt factory, for example, would be told how many shirts of various types the factory was expected to produce in the coming year, where the cloth and thread were going to come from (and how much they would cost), how many workers would be hired and what they would be paid, where the completed shirts were to be sent, and how much they were going to cost. Over time, a number of apparently unsolvable problems have emerged in command economies that
have crippled their economic performance. Chief among these have been inadequate planning, supply chain bottlenecks, quality of goods, and the phenomenon of storming. When all economic activity is driven by a central plan, a staggering array of decisions must be consciously made. Planners at various levels must establish a specific price for every raw material, semifinished product, finished product, and each component of the manufacturing process. This raises two fundamental challenges. The first is simply making all the decisions that need to be made by the central bureaucracy before anything can happen. The second fundamental flaw in the planning process was a lack of information. Decisions about how much of what kinds of goods were required, where the raw materials could be obtained and how much they were worth, how production should be distributed, and how much the final consumer would pay for a given item, to name just a few, all require a great deal of information about complex economic and human interactions. At least some of that information is unknowable ahead of time, and plans thus had to rely on guesswork and imagination. In addition, the huge bureaucracies that were necessary to make and implement plans and monitor the economy became both a physical and economic drag on the economy. Individual factories found themselves at the mercy of flawed plans. All the factories that needed coal to fire their boilers, for example, had to wait until the coal mines had been told how much coal to mine and where to ship it before they could hope to get their share. And if a factory needed cloth to make shirts, it had to wait and hope that the knitting mill got its raw materials in time and sent enough of the right kind of cloth. Quality was always a major problem for command economies. Producers were typically credited with meeting their quotas when the goods left the factory. It did not matter if the production was shoddy or did not meet the needs of the consumer, because feedback from consumers was not part of the plan. A tractor that never ran quite right and fell apart in a year counted the same as a well-built, long-lasting tractor. If the plan did not credit a factory for making spare parts, they did not get made, which made repairing broken equipment or even replacing items such as windshield wipers on cars a major headache.
Quotas for production were set for each quarter and on an annual basis. If production had lagged during the period, everyone was pressured to go all out to meet the quota, a practice known in the Soviet Union as “storming.” Workers who would normally be doing maintenance or other jobs were put on the assembly line and machinery was run overtime in an effort to meet the quota. At the end of that big push, production often slowed dramatically as machinery was taken out of service for maintenance and repairs, workers took time off to compensate for their overtime efforts, and other workers took care of various routine tasks that had been neglected. Almost inevitably, production lagged at the start of the new plan period, and storming became part of the routine. The apparently inherent fatal flaws in command economies have led most countries to pursue some level of reform. The primary motives behind the reform programs introduced in the Soviet Union by Mikhail Gorbachev and the waves of reforms beginning with Deng Xiaoping in China were to increase economic efficiency without undermining the political system. While the Soviet attempt ended in the destruction of the Soviet Union, the Chinese experiment in “market Leninism” is still evolving. Command economies are distinctive features of communist regimes. Other types of governments engage in varying degrees of economic planning and management but do not attempt to plan and regulate all the details of economic life. This important distinction is sometimes overlooked in political debates about American economic policy. Further Reading Gregory, Paul R. Behind the Facade of Stalin’s Command Economy. Stanford, Calif.: Hoover Institution Press, 2001; Pei, Minxin. China’s Trapped Transition: The Limits of Developmental Autocracy. Cambridge, Mass.: Harvard University Press, 2006. —Seth Thompson
communism

In Vilnius, Lithuania, the statue of Russian Marxist revolutionary Vladimir Ilyich Lenin is dismantled on August 23, 1991. (Getty Images)

The communist ideology stemmed from the socialist doctrine envisioned by the German economists Karl Marx and Friedrich Engels. In their writing of the Communist Manifesto in 1848, Marx and Engels demanded the elimination of the economic inequality
(unequal wealth distribution) between the lower class and the upper class. According to their theory, the only way to accomplish such elimination is through a revolution led by the poor (the workers, or proletariat) against the rich (the bourgeoisie), since the latter would never give up their wealth. This revolution, they argued, would bring social and economic justice through the achievement of a classless society. The Marxist theory of socialism found strong advocates in Russia at the beginning of the 20th century. The world's first communist regime started in the Soviet Union after the successful Bolshevik (from the Russian word for "majority") revolution led by Vladimir Lenin against the autocracy of the czar's political regime. In 1917, the Bolshevik Party, which was renamed the Communist Party, established Soviet power. This power was rooted in
the principle of the dictatorship of the proletariat (dictatorship of the many over the few). The Russian Communist movement became a model to other Communist Parties all around the world (e.g., Cuba, North Korea, Nicaragua, and Vietnam) after World War I. Among the communist parties were the Chinese, Yugoslav, Hungarian, and Czechoslovakian parties that followed Russia’s lead. Communism reached the height of its influence during the leadership of Joseph Stalin in the Soviet Union from the late 1920s to the beginning of the 1950s. Stalin imposed the principles of imperialism, agrarian collectivism, industrial centralism, and totalitarianism. During this period and after the end of World War II, the cold war started between the two superpowers, the United States and the Soviet Union. This led to the emergence of an anticommunist wave in the democratic bloc that was supported by the American government. By the end of the cold war, the power of communism ultimately collapsed and was defeated by the ideals of capitalism in 1991. Communism is a belief and political practice based on Marxist socialism and further developed by Lenin and Stalin. Although Marx stated that communism must be practiced in economically advanced and industrialized societies, the Soviet leaders put communism into practice despite the fact that their empire was not highly advanced economically. In fact, none of the states that employed communism at the time were industrialized. By contrast, industrialized societies such as the United States use capitalist values. Communists believe that they must abolish capitalism through the dictatorship of the proletariat (the working class), and that will happen through the spread of socialist ideals around the world in order to create a strong communist bloc. The basic principles of communism are 1) religious atheism. This means that the communist society must not believe in God, but only in the Communist Party. Because of this, religious groups are not free to practice their religions in communist countries. This is in direct opposition to the United States; the American government often stresses the belief in God, and it grants freedom to practice any religion; 2) dialectical materialism. As explained by Marx, this means that economic factors determine social relations, and people are subject to the process of change. Throughout the historical process, the proletariat class struggles over material
goods; and 3) socialism. This refers to the government ownership of the means of production. The communist government, in this regard, maintains central control over banking, business, housing, education, industry, medical care, and the military. Private property and rights of inheritance are abolished by the government. Property, thus, is publicly owned, and each person is paid according to his or her needs. Contrary to the principle of socialism, capitalist states advocate the individual ownership of property and means of production, but under some governmental regulations and protections against any foreign threat. Capitalism is defined as an economic system based on private ownership in all business and trade fields. Such ownership is conducted under competitive conditions and ruled by the neoliberal and free market principles (i.e., decisions of production, distribution, and pricing are made by the private owners and influenced by the forces of the international market). Capitalism is characterized by an emphasis on self-interest to maximize gain either by the owner or the worker. Its ethics were first set forth by the Scottish philosopher Adam Smith. Ironically, the term capitalism was first introduced by Marx as he attempted to define communism as both its cause and opposite. In a communist state, the Communist Party is the only ruling party. Hypothetically, it represents the majority of the proletariat, but in reality it is just a representation of the leaders of the party itself. Membership in the party provides many privileges that average citizens do not enjoy. For example, in the Soviet Union, party members could have access to foreign merchandise, travel to capitalist states, live in the best housing, and obtain prestigious educations and jobs for their children. The power of the party is of a totalitarian (nondemocratic) nature, since its main function is to exercise unlimited control over the society in all fields of life and to suppress opposition. As communist leaders believe, the party must be tightly controlled and disciplined to be able to lead the revolt against the capitalists. Stalin attempted to put this into practice by creating a repressive bureaucratic system that lasted until the end of the communist era. The hierarchy of the Communist Party began from the general secretary at the highest level and went down to the politburo, central committee,
national party congress, republic congress, district congress, and the party cells as the lowest level. The most powerful policy-making bodies of the party were the general secretary, the politburo, and the central committee. The party employed the communist concept of "democratic centralism." This means that the decisions of the higher party bodies are imposed on the lower bodies. Absolute power, nonetheless, is in the hands of the general secretary (the president). Unlike the Communist Party, American political parties have no official power to get involved in governmental policy making, and their major focus is to ensure that their candidates win elections. Moreover, the American president does not have absolute power due to the practice of checks and balances by the other governmental branches. The communist system, on the other hand, lacks such checks and balances. Returning to the example of the Soviet Union, the judicial branch was under the direct control of the Communist Party, since there were no independent courts. Legislative members were also part of the Communist Party and could not regulate the activities of the executive branch, including the party itself. Unlike American legislators, communist legislators were nominated directly by the party and, hence, were not elected by the citizens.

With the official dissolution of the Soviet Union, which had consisted of 15 republics dominated by Russia, the prominence of communism ended in 1991, mainly for economic reasons. During the era of communism, the Communist Party adopted the model of the centrally planned economy, known also as a command economy. This type of economy is completely directed by the state, which has a monopoly on making decisions regarding production and allocation of goods and services. A communist economy has some advantages for the nation as a whole, such as social welfare provided by the state (e.g., free education, free health care, and free housing for all citizens). In the case of the Soviet Union, the main source of such welfare came directly from capital investment by the government in building heavy industry. Although the Soviet Union was one of the top world manufacturers, it still lagged far behind the capitalist states that adopted the economic model of the free market. In such a market, firms compete against each other by increasing their production in
order to gain more profit. The communist economy, on the other hand, has only one big firm (owned by the government) that produces its goods in accordance with the needs of the consumers. Because the needs of the consumers were unknown, the main economic problem in the Soviet Union during the 1980s was shortages of supplies. For instance, people used to stand in long lines just to buy a pair of shoes or one piece of bread before the supply ran out. As a result, people increasingly turned to the black market. This weakened the national economy and the position of the government. Mikhail Gorbachev, the last communist president of the Soviet Union, tried to check this decline by introducing some reforms in 1987 through a program named glasnost that was already specified in his perestroika (reconstruction of the Soviet economy and politics). The basic idea of this program was to turn the communist economy from central planning to free market (e.g., allowing private ownership and foreign investments). However, it was implemented without any prior transitional plan. This led to the governmental loss of control over the national economy, particularly after the decline in tax revenues due to decentralization, and the quality of life regressed. The economic instability caused great public unrest and resentment. The resulting anger came to a head in the August coup in 1991 against Gorbachev. By the end of that year, the Soviet Communist Party had dissolved, crippling Communist Parties elsewhere. While communism still exists, it is weakened to the point of impotence. Currently, most postcommunist states are undergoing economic stagnation and facing new challenges of democratization such as multicandidate elections and freedom of the press. Perhaps these nations can work with the United States and other democratic entities to rebuild and flourish. See also ideology; socialism. Further Reading Daniels, Robert V. The Nature of Communism. New York: Random House, 1962; Edwards, Lee. The Collapse of Communism. Stanford: Calif.: Hoover Institution Press, 2000; Hyde, Douglas. Communism Today. South Bend, Ind.: University of Notre Dame Press, 1973; Ketchum, Richard. What Is Communism? New York: E. P. Dutton, 1955; Kornai, Janos.
The Socialist System. Princeton, N.J.: Princeton University Press, 1992; Meyer, Alfred G. Communism. New York: Random House, 1962; Salvadori, Massimo. The Rise of Modern Communism. New York: Holt, Rinehart, & Winston, 1963; Yoder, Amos. Communist Systems and Challenges. New York: Taylor & Francis, 1990. —Muna A. Ali
developed countries
A developed country is a country that enjoys a high standard of living and an advanced, diversified economy. A high standard of living may encompass societal factors, such as high literacy rates, education levels, and long life expectancy. An advanced, diversified economy consists of multiple sectors within a country reaching high levels of production, having a high gross domestic product (GDP), as well as generally possessing a high per capita GDP. However, a high GDP does not automatically enable a country to attain the label developed, as this economic achievement may have been attained through natural resource extraction, which is more of a short-term situation and does not reach across multiple sectors of the economy. In short, developed countries possess both economic and noneconomic factors that propel them to higher levels of development than less-developed countries.

Former descriptors of developed countries include the terms first world, industrialized countries, or even the more constraining Western countries, which leaves out developed countries in Asia and elsewhere. While these terms are still in use, the current phrase Global North creates a more neutral environment when discussing these countries. Likewise, developing countries that were once referred to as "Third World" or "nonindustrialized countries" are now known as the "Global South." While international organizations have created their own definitions or categorization of which countries are or are not developed, there is little agreement on exactly what factors are the most important in determining whether a country is developed. The United Nations does not have an established convention for designating a country "developed." However, most international organizations that study and label countries according to their levels of economic
development are in close agreement as to which countries fall into which categories. A list of which countries are and are not developed can be broadly stated. Western European countries are generally all developed countries, with Norway and Finland included on that list. The smaller countries in Europe, such as San Marino, Andorra, Liechtenstein, Monaco, and the Holy See (Vatican City), are also included as developed countries. The United States and Canada are developed, as are the Asian and Pacific rim nations of Japan, South Korea, Singapore, Australia, and New Zealand. In the Middle East, only Israel is frequently classified as a developed country.

Many other countries may claim the title of developed countries, but for various reasons, international organizations differ on their classification. Russia, while large and belonging to international groups such as the Group of Eight (G8) most-developed countries, has rampant corruption and a per capita income that places it in the category of a developing country. Russia's inclusion in the G8 and other groups is largely a cold war legacy and not an accurate appraisal of its current developmental level. Likewise, South Africa and Turkey have low per capita incomes, even though other factors may indicate that these countries should be labeled developed countries. Turkey, as a North Atlantic Treaty Organization member and possible entrant into the European Union, has internal security problems and a less developed economy, while South Africa faces one of the world's highest AIDS infection rates and a less-developed economy as well. Many of the Persian Gulf states, such as Bahrain, Kuwait, Oman, Qatar, Saudi Arabia, and the United Arab Emirates, have a high per-capita income level. However, their economies are focused on a single commodity, oil, and their noneconomic factors are generally at low levels. Similarly, many of the Caribbean nations, such as the Bahamas, Barbados, Antigua and Barbuda, Trinidad and Tobago, and Saint Kitts and Nevis, have high per capita incomes, but their economies are also concentrated in one area, tourism, and thus do not meet the requirement for a diversified economy. Lastly, Hong Kong, Taiwan, and Macau all lay claim to high levels of economic development and high per capita income levels, but their ongoing disputes with China, in which China claims these territories as part of mainland China, complicate the political and economic situation.

These discrepancies point to the need for a holistic approach to the classification of countries into developed and developing categories. While certain countries, such as China, Sri Lanka, Poland, and Cuba, might fare better under societal means of classification, such as literacy and life expectancy rates, other countries, such as Saudi Arabia and Kuwait, as mentioned above, suffer from lower rankings in these areas while maintaining higher levels of economic achievement. An outgrowth of the realization that noneconomic factors are of great importance to a country's level of development is the connection made between development and a country's regime type. There is a clear, empirical relationship between development and democracy, yet there are questions as to the direction of the causal relation. In one study, gross national product, or the value of goods and services produced by citizens of a country no matter where they are located, was cited as the key explanatory variable for whether a country will become a democracy. Economic development is not the only factor leading to democratic development, but among the countries with the lowest economic development, only a handful qualify as free or democratic. As the level of economic development increases, so does the likelihood that the countries will be democracies. But outside influences such as war, domestic instability, cultural factors, leadership, and social movements can all steer a country in a different direction.

The connection between democracy and a country being developed is a difficult one that leaves open many avenues for further research. Many authors believed that as more countries grew economically, they would also become freer in the political arena. However, many of the states with the fastest-growing economies were not democracies, which belied the idea that the gradual transition and development of states would lead to their accepting greater political participation and rights, as well as a stronger commitment to democracy. Instead, political repression and the concentration of political control among political elites have been used by many states to further their economic goals. States argue that the repression of labor groups or the suppression of
political liberties to ensure domestic stability are necessary in order to attract foreign investors who can bring needed technology, capital, and skills to developing countries. Countries such as Singapore and South Korea had long been under more oppressive regimes, if not outright martial law. On the other hand, many authoritarian states have never developed market economies or political stability. Others have achieved political stability and even some level of economic development but have seen them disappear in domestic upheavals such as coups and civil wars. Military governments are often the culprits in African and Latin American countries, as their ascension to power usually entails harsh crackdowns and a corresponding retaliation by the population, leading to domestic instability and the loss of any gains made in development.

A second case has been made concerning the confluence of development and democracy. This idea holds that while economic development is not necessarily the most important cause for achieving democracy, it is necessary to maintain democracy. Thus, democratic countries that achieve and maintain high levels of economic development will not "regress" to lower levels of development or fail as democracies. Adam Przeworski notes that no democracy with a per capita income higher than $6,055 has ever fallen. At the same time, he notes that since 1946, 47 democracies have collapsed in poor countries. While external invasion, civil war, economic depression, or crises can test a country, high levels of economic development can buffer the fallout from these incidents.

Some developing countries that are seeking higher economic growth and higher standards of living are concerned with issues that arise out of dependency theory. This idea, developed in the 1960s and 1970s, blames the developed countries for taking advantage of developing countries and keeping these developing countries in a perpetual state of exploitation. In this model, the developed core of countries, such as the United States and western Europe, purposefully exploited the natural resources and populations of the developing countries in order to enrich themselves and prevent competition to their established positions. While many developing countries have not fully accepted this theory, they have undertaken policies to try to prevent this from occurring, frequently with unfavorable consequences.
Many of these same countries have attempted to put themselves in the developed country category by using the import substitution industrialization method. Here, developing countries attempted to become self-sufficient by increasing tariffs and producing many of the goods they previously imported as well as focusing on native natural resources such as mining and agriculture. However, corruption, inefficiency, and an inability to secure stable high prices for these commodities led most of these import substitution programs into ruin. By the 1990s, many developing countries had abandoned these policies in favor of export-led programs, such as those practiced by the "Asian tigers"—Singapore, Taiwan, South Korea, and Hong Kong. The tigers followed the example of Japan, produced manufactured goods for export to the world market, and were generally successful in becoming developed countries, although China's claim on both Taiwan and Hong Kong has occasionally thrown their political stability into question.

Societal factors make up a second component that helps determine whether a country is developed. The United Nations Development Programme designed the human development index (HDI) as a combination of life expectancy, educational attainment, and an economic measure, GDP per capita. GDP per capita is a measure of the value of goods and services produced per person within the borders of a country in a given year. Much of the focus in the HDI is on societal factors and the distance between developed and developing countries. For example, expenditures on public health as a percentage of a country's GDP in the developed world are three times those of the developing world, while life expectancy is nearly 15 years more in developed countries. Many societal factors are not equal among the genders, so international agencies have broken down the HDI further by gender for the gender development index. Literacy rates, a basic indicator of educational level, are frequently lower among women than men. In addition, health-care services that primarily affect women, such as prenatal care and health care for infants, are frequently not available. As women assume greater responsibility for family and household maintenance through their roles as wage earners and landowners, gender-specific issues will increasingly come into focus.
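The entry describes the HDI only in words. As a rough illustrative sketch, not given in the original text, the Human Development Reports of this period (including the 2005 report cited below) computed the index as an equal-weighted average of three normalized component indices; the goalpost values shown here (25 and 85 years for life expectancy, 100 percent for literacy and enrollment, and $100 and $40,000 of per capita GDP at purchasing power parity) are assumptions drawn from that methodology rather than from this entry:

\[
\mathrm{HDI} = \tfrac{1}{3}\bigl(I_{\text{life}} + I_{\text{education}} + I_{\text{GDP}}\bigr)
\]
\[
I_{\text{life}} = \frac{\mathrm{LE} - 25}{85 - 25}, \qquad
I_{\text{education}} = \tfrac{2}{3}\cdot\frac{\mathrm{ALR}}{100} + \tfrac{1}{3}\cdot\frac{\mathrm{GER}}{100}, \qquad
I_{\text{GDP}} = \frac{\ln(\mathrm{GDPpc}) - \ln(100)}{\ln(40{,}000) - \ln(100)}
\]

Here LE is life expectancy at birth in years, ALR the adult literacy rate, GER the combined gross school enrollment ratio (both in percent), and GDPpc the per capita GDP in purchasing-power-parity dollars, that is, total GDP divided by population. A country near the upper goalpost on all three components has an HDI close to 1; the gender development index mentioned in the entry rests on the same components, disaggregated by sex.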
The United States has long supported international development efforts through such groups as the United States Agency for International Development. This agency, created by the 1961 Foreign Assistance Act and tracing its roots back to the Marshall Plan for post–World War II Europe, combines economic aid for development assistance with attempts at promoting stability and peace. Among the many policies it carries out are training and scholarship programs, distributing food aid, helping build infrastructure, and providing small business loans. While Congress provides the money for this and other development groups, the State Department and the White House help set the policy directions for this foreign assistance program. The Overseas Private Investment Corporation (OPIC) is another U.S. agency that provides insurance and aid to U.S. corporations looking to invest abroad. Created in 1971, this agency helps protect U.S. businesses while ensuring that they are able to invest in developing countries. The risk of political instability in developing countries is a frequent barrier to investment, so OPIC was created to manage this risk in the hope of realizing profits for American corporations while bringing much needed technology and capital to developing countries. Further Reading Dahl, Robert A. Polyarchy: Participation and Opposition. New Haven, Conn.: Yale University Press, 1971; Diamond, Larry. “Economic Development and Democracy Reconsidered.” In Reexamining Democracy, edited by Gary Marks and Larry Diamond. Newbury Park, Calif.: Sage Publications, 1992; Huntington, Samuel P. Third Wave: Democratization in the Late Twentieth Century. Norman: University of Oklahoma Press, 1991; Kegley, Charles W. World Politics: Trends and Transformation. 11th ed. Belmont, Calif.: Thomson Wadsworth, 2007; Przeworski, Adam. “Democracy and Economic Development.” In Evolution of Political Knowledge: Democracy, Autonomy, and Conflict in Comparative and International Politics, edited by Edward D. Mansfield and Richard Sisson. Columbus: Ohio State University Press, 2004; United Nations Development Programme. Human Development Report. New York: United Nations, 2005. —Peter Thompson
developing countries
The term developing countries usually refers to countries that have not experienced economic development to the scale of societies in western Europe and the United States. Most developing countries are in Asia, Africa, and Latin America. They were, at one time or another, colonized by a European power in the modern era. The colonial experience, resulting in exploitation of peoples and resources in these societies, often accounts for the low but varying levels of economic, social, and political development in these countries. In the postcolonial era, as the following essay shows, they have grappled with problems such as political corruption, inefficient governments, a high level of poverty, lack of industrialization, ethnic conflict, and social fragmentation. Nevertheless, a few of these countries also exhibit high economic growth rates, resilient political institutions, and vibrant social and cultural life.

The table below shows the vast economic and demographic differences between the developed and developing world. Most strikingly, while the low-income countries contain 36.5 percent of the world's population, their share of total world income is only 3 percent. Furthermore, 39 percent of the children under five in these countries are chronically malnourished.

KEY INDICATORS OF DEVELOPMENT

                 Population (2005)      Gross National Income                 Malnourished        Adult Literacy
                 Total      % of        $ Billion    % of      $ Per          Children under 5    (% Ages 15
                 Millions   World                    World     Capita         (%), 2000–04        and Older)
Low Income       2,353      36.5        1,363.9      3.03      580            39                  62
Middle Income    3,073      47.7        8,113.1      18        2,640          11                  90
High Income      1,011      15.7        35,528       79        35,131         3                   NA
World            6,438      100         44,983.3     100       6,987          25                  80

In the postcolonial era (after World War II), developing countries have been called by different names: poor societies, underdeveloped countries, industrializing countries, and third world countries. The last designation, in particular, was in popular usage and distinguished them from the democratic and industrialized first world and the communist second world. After the fall of communism in Eastern Europe, the term Third World became problematic.

The colonial era coincided with Europe's lead in military and navigation technologies. Beginning in the late 15th century, the two factors enabled European explorers such as Christopher Columbus and Vasco da Gama to travel great distances and establish commercial links, soon followed by political and military domination. The colonial era also coincided with the commercial and Industrial Revolutions in Europe. The newly established colonies, or commercial links, became sources of raw materials and indentured labor, especially from Africa. Existing empires or
In the postcolonial era (after World War II), developing countries have been called by different names: poor societies, underdeveloped countries, industrializing countries, and third world countries. The last designation, in particular, was in popular usage and distinguished them from the democratic and industrialized first world and the communist second world. After the fall of communism in Eastern Europe, the term Third World became problematic.

The colonial era coincided with Europe's lead in military and navigation technologies. Beginning in the late 15th century, the two factors enabled European explorers such as Christopher Columbus and Vasco da Gama to travel great distances and establish commercial links, soon followed by political and military domination. The colonial era also coincided with the commercial and Industrial Revolutions in Europe. The newly established colonies, or commercial links, became sources of raw materials and indentured labor, especially from Africa. Existing empires or political entities, such as the Incas, the Mayas, and the Aztecs in Latin America and the Mughals in India, were destroyed by colonialism and replaced either with direct European rule or with indirect rule through their commercial arms, such as the East India Company, which operated in South Asia. The Spanish and Portuguese granted independence to territories in Latin America in the early part of the 19th century, but colonial rule continued elsewhere. However, immigrant populations from Europe ruled these countries, and this period is often characterized as semicolonialism.

The 19th century was the colonial century, when most of what now constitutes the developing world came under colonial rule or, as in the case of China, was greatly affected by it. The so-called "scramble for Africa" took place in the latter half of the century, when European powers such as France, Britain, Spain, Belgium, and Holland vied with one another to occupy this continent. The United States, itself a British colony until 1776, was not a colonizer, with notable exceptions such as the Philippines (1902–42), but indirectly it asserted its dominance in various places. President James Monroe in 1823 declared to the European powers that the Americas were under U.S. influence. This move later came to be called the Monroe Doctrine. Declarations from President Theodore Roosevelt in 1904, President William Howard Taft in 1912, and President Franklin D. Roosevelt in 1933 widened the sphere of U.S. influence in the region. The United States also began to support dictatorships in Latin America and the Caribbean from the 1930s and in other parts of the world after World War II, as long as these dictators were supportive of the United States. The cold war with the Soviet Union also meant that if the United States did not support these regimes, it would lose their allegiance, as was the case with the Cuban revolution of 1959, when Fidel Castro emerged as Cuba's leader.

Meanwhile, nationalist movements emerged at the end of the 19th century and sought political freedom and sovereignty from Europe. Beginning with freedom from Dutch rule in Indonesia in 1945 and freedom from British rule in India and Pakistan in 1947, other colonial countries soon achieved independence. Most sub-Saharan African countries became independent in the 1960s. Many American ideas and historical events facilitated the nationalist movements. Of particular note were the American independence movement in the 18th century, the Civil War featuring the abolition of slavery, President Woodrow Wilson's principles for world peace, and the Civil Rights movement of the 1960s. The Harlem Renaissance influenced Leopold Senghor's vision of négritude, which argued for a distinct African identity, and people such as Martin Luther King, Jr., were received as heroes in the developing world.

The postcolonial governments of developing countries, or semicolonial governments in Latin America, built their sense of commanding purpose and legitimacy on the mandate that their elites received from leading the nationalist movements. The consensus domestically and internationally was that it was the government that would be entrusted to carry out the political and economic tasks that were necessary to improve the livelihood of people. Despite their failures and shortcomings, there was near total reverence accorded to postcolonial leaders such as Getulio Vargas in Brazil, Lazaro Cárdenas in Mexico, Kwame Nkrumah in Ghana, Julius Nyerere in Tanzania, Leopold Senghor in Senegal, Jawaharlal Nehru in India, and Ahmed Sukarno in Indonesia.
There were great expectations about the performance of these “commanding heights” governments. Most of these governments distanced themselves from Western style capitalism, which they equated with colonial exploitation of the past, inasmuch as private agricultural plantations and industries in the former colonial territories were in the hands of Europeans. Many developing countries thus drew inspiration from Soviet-style central planning, in which the government owned and controlled many means of economic production and distribution. These governments also argued that imports would make them dependent on Western countries again. Instead, they encouraged industries in their own territories to produce import substitutes or like-products. This economic strategy came to be known as import substitution industrialization, or ISI. A few countries, especially in East Asia, also experimented simultaneously with earning high revenues from their exports while restraining their imports, so that they would not lose their hard-earned foreign exchange reserves too rapidly. This strategy, styled on the high growth rates generated by Japan in the 20th century, came to be known as export oriented industrialization (EOI). It led to rapid industrialization in countries such as Singapore, Taiwan, and South Korea, which came to be known as the “Asian tigers” or newly industrializing countries (NICS). To a limited extent, Malaysia and China also mimicked this strategy. Developing countries adopted the ISI and EOI economic strategies in an effort to mimic the modernization and industrialization seen in the United States and western Europe. However, these strategies were mandated by governments instead of arising spontaneously through free market forces. Indeed, both ISI and EOI were led by the “visible hand” of governments rather than the “invisible hand” of markets. Postcolonial leaders believed that governments could help deliver to their societies modernization and industrialization experiences more rapidly than in western Europe. The following statement by India’s Nehru in 1957 is typical of this period: “Now India, we are bound to be industrialized, we are trying to be industrialized, we must be industrialized.” Quite soon, though, postcolonial countries ran into economic and political obstacles as a result of their economic strategies. ISI prioritized industry over agriculture, resulting in food shortages and,
more important, neglect of farmers, who made up almost two-thirds of total employment in these countries. Food shortages, famines, and inflation followed in many countries. ISI itself needed imported machinery and technology to be successful, which many of these countries could not afford. Countries in Latin America, for example, borrowed large sums of money from international banks to finance ISI. By the early 1980s, they reneged on these loans. The Latin American debt crisis, which began in 1982, showed the limits of ISI strategy. The political costs of the economic strategy were perhaps even higher. As governments could not meet the rising economic demands, they became increasingly populist or authoritarian. Corruption among government officials became common, and military coups followed in many countries. The United States and the Soviet Union supported many of these dictatorial regimes as their pawns in the cold war. The social fabric of many countries, already weakened by colonialism, became perhaps even more weakened in the postcolonial era. A few scholars argue that dictatorships and authoritarianism were the only instruments available to curb the civil unrest and rising demands in these countries. If so, the costs were high, indeed. Old ethnic hatreds also surfaced, especially in the “artificial” countries European powers created by joining together ethnic groups and territories that had never coexisted before. The Biafra war in Nigeria from 1967 to 1970 and the conflict between the Kikuyus and Luos in Kenya in the late 1960s are examples. Dictatorships arose, such as General Idi Amin in Uganda in 1971 and Pol Pot in Cambodia in 1975, that decimated entire populations. As many as 25,000 to 30,000 people disappeared under the military regime in Argentina in the late 1970s. By the mid-1980s, the failure of the ISI and EOI economic strategies, the developing world debt crisis, and the end of the cold war induced shifts in the politics and economics of the developing world. Politically, the last 20 years or so have witnessed the end of dictatorships in most parts of the developing world. However, fears remain that unless poverty is reduced and economic growth rates take off, these countries will revert to political instability. Economically, the import substitution strategy failed the developing world. The fall of communism also ensured that the developing world would no lon-
ger turn to central planning for economic control. Since the late 1980s, the developing world has, therefore, undertaken a number of market liberalization measures, either selling off formerly government-run enterprises to private enterprises or giving incentives to private investment, including multinational corporations. international trade also received a boost. These measures are often termed the Washington Consensus, after the blessing they received from the World Bank, the International Monetary Fund, and the U.S. government—all based in Washington, D.C. The early results from the implementation of the Washington Consensus are mixed. While most of the developing world has generated high growth rates, it is unclear if poverty has been reduced. A few scholars believe that the new policies have lifted middle classes and urban areas and depressed rural areas further. Nevertheless, countries as disparate as Brazil, Costa Rica, Botswana, and India are now generating very high growth rates. The last holdouts of dictatorships and monarchies are found in the Middle East and the Arab world. The events of 9/11 and thereafter have brought Americans increasingly close to and enmeshed in this part of the developing world. Whereas in the colonial and cold war days, the United States could exercise almost uncontested dominance in international affairs, the ironic twist of the post–cold war era is that the instability in the Arab world now continually threatens the peace of the Western world. Meanwhile, countries in East Asia that had practiced EOI have done well for themselves. A few of them, such as Singapore, Taiwan, and South Korea, are no longer termed developing countries and are counted as developed. China, with an export-oriented strategy but an authoritarian government, generates double-digit growth rates, too, but fears remain about the stresses on its political system. The best hope for peace in the developing world and its relations with the developed world lie in economic development. The goal remains elusive. Unfortunately, nearly half the world’s population still lives in poverty. Further Reading Calvert, Peter, and Susan Calvert. Politics and Society in the Third World. 2nd ed. New York: Longman, 2001; Harrison, Paul. Inside the Third World. New
York: Penguin Books, 1990; Sen, Amartya. Development as Freedom. New York: Anchor Books, 2000; World Bank. World Development Report 2008. Washington, D.C.: The World Bank, 2007. —J. P. Singh
distributive justice Distributive justice concerns the fair, just, and equitable distribution of benefits and burdens. These benefits and burdens span all dimensions of social life and assume all forms, including income, economic wealth, political power, taxation, work obligations, education, shelter, health care, military service, community involvement, and religious activities. Thus, justice arguments are often invoked in connection with minimum wage legislation, affirmative action policies, public education, military conscription, and litigation as well as with redistributive policies such as welfare, Medicare, aid to the developing world, progressive income taxes, and inheritance taxes. Distributive justice enjoys a long and honored tradition in political, economic, and social thought. It is central to Aristotle’s Nichomachean Ethics and Politics. In modern political philosophy, it has been construed in broad terms and seen as a foundation for policy formation and analysis. Michael Walzer, for example, writes that “Distributive justice is a large idea,” and for John Rawls, “Justice is the first virtue of social institutions.” Thus, it is widely regarded as an important concept and influential force in philosophy and the social sciences. This description begs the question, however, of what, exactly, constitutes a “fair,” “just” and “equitable” distribution (these terms are interchangeable). It seems that justice terminology employed with considerable flexibility, and fairness arguments are sometimes even made by both parties on opposite sides of a dispute. There are at least three reasons for this. A large part of the literature on justice involves prescriptive theories, theories that attempt to characterize a phenomenon in general terms and that concern what “ought to be.” They can be contrasted with descriptive theories, which seek to describe in general terms what “is.” Philosophers and social scientists typically propose prescriptive theories of justice as a guide for how people should behave and what policies should be enacted. One characteristic of
prescriptive theories is that they are not verifiable; since they deal with values and what one believes to be just, they cannot be empirically tested. Although good theories should have a coherent internal logic, they otherwise have great latitude to proceed from any assumption and can lead, therefore, to a wide variety of very different conclusions. A second source of variation in justice terminology refers to everyday usage and is more patterned than the differences in prescriptive theories of justice. There are different senses of justice that pertain to the specificity of ethical principles being addressed. This distinction can be traced as far back as Aristotle, who wrote that “justice and injustice seem to be used in more than one sense.” He identified justice that “is not a part of virtue but the whole of excellence or virtue” versus “justice as a part of virtue.” In other words, in a very general sense, justice refers to the whole of ethics such that “fair” can be equated with “good” and “unfair” with “bad.” Justice in this most general sense, then, is about more than the distribution of benefits and burdens but also the whole of ethics, including virtues such as honesty, courage, loyalty, and generosity. The focus here, however, is on the more narrow definition introduced at the start, both in light of the fact that most actual usage is more specific and in order to restrict attention to a tractable subject matter. Finally, justice arguments are often put forth, not to promote justice but rather to further the interests of the party employing them. Indeed, skeptics of justice often cite such self-serving arguments as evidence that justice is nothing more than a cloak for selfinterest. Nevertheless, the fact that fairness arguments are regularly advanced is evidence of their moral force: If they were merely subterfuge without any independent ethical content, surely they would cease to carry moral weight. It is now well documented that fairness biases result from the tension between justice and self-interest and that these even lead to self-deception, that is, people often form false beliefs about what is fair in order to align those beliefs more closely with their self-interest. Thus, it is important to distinguish biased views of justice associated with stakeholders, or those who have stakes in the distribution they are judging, from the unbiased justice of impartial spectators, or people who have no such stakes and evaluate fairness from a more or less
neutral stance. Whereas stakeholder views can be extremely heterogeneous due to the wide range of opposing interests that mold them, mounting research indicates that unbiased views of justice converge to fairly well defined categories. In the mid-1960s, social scientists began serious efforts to describe attitudes toward justice and their behavioral effects. This research agenda has intensified more recently and now includes work in psychology, economics, political science, and sociology. The remaining discussion is based on four elements (or forces) of justice that have been proposed to describe the existing social science evidence on unbiased views of justice. They form not only a descriptive theory of justice, but the four elements can also serve as the organizing framework for categorizing various prescriptive and descriptive theories of justice. The category equality and need includes theories that incorporate a concern for the well-being of the least well-off members of society. The most basic and probably oldest concept equates justice with equality, including equality of opportunity, proportions, and rights. The strongest notion of equality is egalitarianism, or equality of outcomes. This serves as the foundation for various prescriptive theories of justice, as well as, more recently, for descriptive theories based on experimental findings. Nevertheless, numerous studies of the distributive preferences of people demonstrate almost universal opposition to equality or near equality of income. The equality sometimes found in experimental studies in the laboratory appears to be an artifact of contextually lean experiments rather than a general preference. Nevertheless, some researchers believe equality is one of several principles people value. Much of the modern interest in justice can be attributed to the publication of John Rawls’s major work, A Theory of Justice, in 1971. This book builds on the theory of the social contract associated with John Locke, Jean Jacques Rousseau, and Immanuel Kant, and equality, duty, and need are central to it. Rawls conceives of a hypothetical original position in which people are behind a “veil of ignorance” of their places in society. Under these conditions, Rawls claims that people would unanimously choose a particular conception of justice. The greatest attention has been paid to his so-called difference principle, according to which all goods are distributed equally
unless an unequal distribution is to the advantage of the least favored. Some economists have criticized the difference principle on theoretical grounds, but various surveys and experiments also suggest that his theory is not a good description of actual values. Although Marxism is commonly thought to be concerned with injustices, equity was actually a controversial concept for Karl Marx and Friedrich Engels, who seemed to consider it a bourgeois construct. To the extent there is a Marxist theory of justice, it seems to be best summed up in the communist distributive principle that Marx (1875) endorsed, namely, “From each according to his ability, to each according to his needs!” Empirical studies reveal an expressed concern for helping those in need and demonstrate a willingness of people to sacrifice materially to realize that goal. Collectively, they suggest that the themes of equality and need can be integrated in the need principle: just allocations provide for basic needs equally across individuals. Specifically, the evidence suggests that need is one of several principles and that it tends to dominate when basic needs are endangered. The second category of theories is consequentialist. These theories share the property of reflecting a concern for the overall consequences of allocations or allocation schemes (as opposed, for example, to the intentions of the actors). These include utilitarianism and welfare economics. The former is the dominant consequentialist theory in moral philosophy, and the latter is the dominant approach in prescriptive economics. Utilitarians such as Jeremy Bentham and John Stuart Mill advocated acting so as to promote the greatest aggregate happiness. Welfare economics is derived from utilitarianism and is based on evaluating choices in terms of their consequences for “social welfare,” which, in turn, typically depends on a composite evaluation of individual welfare, or “utility.” The most widely embraced concept in economics is the Pareto principle, which endorses any change that makes someone better off without making anyone else worse off. A weaker version, called the compensation principle, approves of any change in which the gains of some are more than sufficient to compensate any and all losses of others, even if the prescribed compensation does not actually occur. The usual definition of equity in welfare economics, however, is the absence of envy criterion. In the simplest form, an
allocation is envy-free if no agent prefers the bundle of another. A review of the literature on distributive preferences indicates that people care about the happiness or subjective value derived from allocations. There is also qualified support for the Pareto and compensation principles, although this support is significantly compromised when it conflicts with other distributive goals. Absence of envy, on the other hand, is at most a second-order concern. Together, these studies show that people often seek to maximize surplus, sometimes at a monetary cost, and that this is regarded as “fair.” Efficiency in this sense does not necessarily conflict with justice but instead is itself a kind of justice, as in the efficiency principle. The common feature among the third category of equity and desert is the dependence of fair allocations on individual actions. Desert concerns which individual characteristics are relevant to justice, and equity is what the functional relationship is of individual characteristics to just allocations. The political philosopher Robert Nozick is situated at one extreme. In Anarchy, State and Utopia, he argues that justice is exclusively concerned with rights that are determined by the historical acquisition by and transfer of property among individuals. For Nozick, individual choice trumps social choice, and he believes in a limited role for government. Individuals are held responsible for everything. At the other end of the political spectrum, individual responsibility is seen as minimal and state redistribution as necessary to remedy unjust inequalities occasioned by arbitrary factors such as birth and brute luck. Although effort is commonly accepted as a reasonable basis for different allocations, John Roemer, for example, sees even effort partially as something for which a person should not be held entirely accountable. Numerous field, experimental, and survey studies have verified the importance of desert for views of justice and have established that when disagreements arise about what justice requires in specific circumstances, these can often be traced to differences in perceived responsibility. Another approach that relates individual actions to desired outcomes is equity theory. Equity theorists often trace their origins to the Nicomachean Ethics, in which Aristotle proposed proportionality as the foundation for justice. Specifically, fair outcomes for individuals are in proportion to their inputs. Inputs
are usually thought of as a participant’s contributions and outcomes as the consequences, potentially positive or negative, that a participant incurs in this connection. A significant advance for both desert theory and equity theory came with their merger, which specified that fair outcomes are in proportion to the inputs for which agents are responsible. This version, which has been called the accountability principle, or simply the equity principle, has demonstrated considerable robustness in explaining a wide range of attitudes and behaviors. These three elements of justice helped organize theories around three distinct principles of justice: the need principle, the efficiency principle, and the accountability principle. The fourth element of justice is context, which is not a principle at all but rather the means by which the relative importance of each principle is determined. The idea is that unbiased justice is a multicriterion concept that obeys general principles, but that it is also context dependent, that is, the principles of justice require a set of people and variables that the context provides. This approach provides the means to reconcile a wide range of values and behaviors that are otherwise difficult to explain. For example, in developing countries, a greater emphasis on need relative to efficiency and accountability has been identified. This is surely consistent with both the perception and reality of greater material need in those countries. The rapid growth of empirical research on distributive justice has provided a rich source of data that has informed and helped advance descriptive theories of justice. Stimulated by this work, prescriptive theorists are now beginning to employ these findings to evaluate their own theories and even to draw on empirical results to construct prescriptive theories of justice. Distributive justice can no longer be considered as an amorphous or hopelessly differentiated subject matter. Much work remains, especially in identifying the effects of context and in designing prescriptive theories, but justice has proven to be an important force that can be understood and can help decision makers understand and form policy. Further Reading Aristotle. The Nicomachean Ethics. Translated by J. A. Thomson. London: Penguin Books, 1976; Aristo-
tle. Politics. Edited by David Keyt. Oxford: Clarendon Press, 1999; Babcock, Linda, and George Loewenstein. “Explaining Bargaining Impasse: The Role of Self-Serving Biases.” Journal of Economic Perspectives 11, no. 1 (1997): 109–126; Konow, James. “Which Is the Fairest One of All? A Positive Analysis of Justice Theories.” Journal of Economic Literature 41, no. 4 (2003): 1,188–1,239; Marx, Karl. “Critique of the Gotha Programme.” In Justice, edited by Alan Ryan. Oxford: Oxford University Press, 1993; Rawls, John. A Theory of Justice. Cambridge, Mass.: Belknap Press of Harvard University Press, 1971; Walzer, Michael. Spheres of Justice: A Defense of Pluralism and Equality. New York: Basic Books, 1983. —James Konow
European Union In the aftermath of World War II, the French foreign affairs minister Robert Schuman laid the first bricks for the construction of the European Union (EU) in 1950 by declaring the necessity of establishing European economic unification as an initial step toward larger federation. The objectives of such unification were to promote peace and stabilize the economy as well as to enhance the sense of cultural similarity. As a result, a new treaty was signed in Paris in April 1951 by France, Germany, Luxembourg, Belgium, Italy, and the Netherlands. In March 1957, these countries initiated an organizational union in the form of the European Economic Community. Thereafter, several treaties were signed to guide shared economic policies, and the membership expanded to include Denmark, Greece, Spain, Portugal, Ireland, and Britain. A deeper unification was created in February 1992, when the 12 countries extended the economic arrangement to a political one by signing the Treaty of Maastricht. In October 1997, several amendments to the Treaty of Maastricht and to the previous treaties were made by the signing of the Treaty of Amsterdam. In anticipation of the expansion of the EU to 25 member states, a new treaty signed in 2001, known as the Treaty of Nice, dealt with the issues of reforming the EU institutions.

Since the EU has been guided by a series of treaties, there was a demand to construct a formal constitution for Europe. Although proposing a new constitution was not easy due to cultural and political factors, a draft of the constitution was approved by the leaders of the member states in June 2004. It defines the main objectives as the advancement of peace, the union's values, and the well-being of its people. Compared to the U.S. Constitution as drafted in 1787, the EU constitution is
a contract among the governments, whereas the American one is a contract between citizens and the government that defines the political, economic, and social values of both sides. While the American Constitution is short and flexible, the EU constitution is long and extensively detailed.
The EU constitution uniquely combines confederalism and federalism while retaining its supranational nature. Within the confederal system, the central government derives its authority from two or more sovereign states. This government has no direct effect on the citizens of these states. An example of confederalism is the United States during the period from 1781 to 1788. The original 13 states formed an agreement known as the Articles of Confederation, or the League of Friendship. The role of the central government was to declare war and to approve treaties. However, it did not have the authority to impose taxation, amend articles, or endorse treaties without the consent of the 13 states. Likewise, the political system of the EU is weak relative to the power of the member states (national governments), and it has no direct relationship with ordinary citizens. The EU, therefore, has no higher authority above the national governments, but it has a higher sense of mutual trust and cooperation.

Several aspects of federalism also characterize the EU. The federal system is a political entity that is divided into two levels: central (national government) and local (states). Each level has independent functions in some areas but still shares common interests with the other level. The central government sets the foreign and security policy. The local states set policies such as education and land use. The federal regime is characterized by a directly elected central government, single currency, formal constitution, common military, supreme court, and common tax structure. The central government, thus, has a direct relationship with the citizens at the local level. The present system of the United States is the best example of federalism. Although the central government of the United States has higher authority, it cannot amend the Constitution without the consent of three-fourths of the states, change a state's equal representation in the Senate, redefine the borders of the states, or change the tax policy of the states. Correspondingly, the EU cannot exercise central power over such issues. The federal scheme of the EU involves a single currency (the euro), the ability to negotiate with non-European countries, the directly elected legislature in the European parliament, the application of common policies, and the supremacy of EU law over national law.
The EU has five major supranational institutions through which decisions are made. Although these institutions have governing power, they still do not resemble the domestic model of governments such as the U.S. government with its formal checks and balances. Each of these institutions has its own organizational structure and functions. The European Council is a legislative institution that makes broad policy decisions during its meetings. The meetings are held in Brussels twice a year for two-day summits. They are attended by prime ministers and ministers of foreign affairs of the member states and by the president of the European Commission and one of his or her vice presidents. Such meetings allow greater understanding of different member states’ opinions and help in bringing them together before deciding certain policies. The European Parliament is the most democratic institution within the EU framework. Its members (currently 732) are directly elected by European citizens for five-year renewable terms. They meet in Strasbourg each month excluding August. The Parliament is responsible for checking the activities of the other EU institutions. It meets biannually with the Council of Ministers to make joint decisions regarding budgetary issues and proposed amendments. The seats of the parliament are divided among the member states based on population. The European Commission is an institution that resembles the cabinet of ministers at the national level. It is directed by a College of Commissioners, which currently consists of 25 members appointed by the governments of the member states. The members serve five-year terms, and one of them becomes a president for the commission. The key role of the members is to prepare proposals for new laws and policies and to ensure their enforcement. The commission’s main office is in Brussels. Under the college, the commission has directorate-generals who play the role of bureaucrats. The Council of Ministers is the major decisionmaking body of the EU. It has a colegislative role in conjunction with the European parliament. It consists of ministers from each member state who decide policies that are directly related to their ministries during their meetings in Brussels. It has nine subcouncils, each of which specializes in a particular policy area. Ministers (or secretaries) of agriculture, for
example, have their own council known as the Agriculture Council. This institution also has the Council of Presidency that is held by ministers of the member states. The presidency council represents the EU at the international venue, organizes conferences, and sponsors cultural events. The Council of Ministers is also served by the Committee of the Permanent Representatives of the Member States, known as COREPER, which is run by civil staff. The European Court of Justice, which parallels the U.S. Supreme Court in some ways, makes judgments regarding the implementation of national and EU laws. Such laws must be consistent with the EU treaties and reinforced in all the member states. The court looks over disputes concerning the EU institutions, the member states, and individuals. Nevertheless, it is not responsible for making decisions relative to the citizens’ affairs, such as cases of criminal or family law. Its membership currently includes 25 appointed judges for six-year renewable terms; each represents a different member state. Through a majority vote, the judges elect a president for their institution from among themselves each three years. The EU has undergone a series of expansions in its membership. It had 12 member states in 1992. This membership extended to include Finland, Austria, and Sweden in 1995. Thereafter, the EU membership was opened up to any European state that can prove capability to satisfy the membership conditions, known as the Copenhagen Criteria. In order to meet the conditions, the states must have democratic government, respect for human rights, adherence to the rule of law, a strong market economy, and strong public administration. By the fulfillment of these criteria, Cyprus, the Czech Republic, Estonia, Hungary, Latvia, Lithuania, Malta, Poland, Slovakia, and Slovenia joined the EU in 2004. Romania and Bulgaria were granted membership in the EU in 2007, but Turkey was not. Further enlargement is expected to include countries within the Balkan region as long as they meet the membership requirements. The process of enlargement entails some privileges to both old and new members. The new member states provide a larger market for western European products. The old member states, on the other hand, support the newer members financially (western investments and EU funding of social development).
Nonetheless, the membership of the eastern and central European countries entails several disadvantages for the older members. This is mainly because of the economic and cultural gap between both groups. The countries that are considered rich among the new members still remain poorer than the poorest old members. This heightens the concerns of the western members about the migration influx of unskilled laborers. Since most of the new members were communist, their embedded culture differs from the western one. This complicates the process of compromising the diverse opinions of the member states vis-à-vis certain policies. EU citizenship is automatically granted to any citizen of the member states. Although it does not add obligations to the citizens at the supranational level, it confers certain rights. Among these rights are the right to free movement within the EU territory (e.g., individuals are not subject to immigration rules), to social security, to fair payment for workers, and to participation in the electoral process of the European parliament. These rights require the national governments to treat citizens from other member states as equals to their own in the application of local policies. By the enjoyment of these political rights, citizens are able to take part in political decisions at the EU level. They participate in the national referendums by voting for whether to join the EU and whether to accept new treaties. They can also vote and run for candidacy for the European parliament. Voters must be 18 years old and must meet local requirements that differ from one state to another. The political participation of the European citizens can also be indirect through the role of interest groups. Like American interest groups, European interest groups play an influential role in determining certain policies. They represent various societal interests that are concerned with issues such as the environment, business, and consumerism. To advance the union’s objectives and to face internal and external challenges, the European leaders (through the EU institutions) have set out joint policies in various fields. These policies are divided into three main areas called “pillars.” The first pillar involves economic, social, and environmental policies. The second pillar concerns military and foreign policies. The last pillar consists of security policies. Regardless of their classifications, all these policies
are of great advantage to EU citizens. Among these, the economic policies are the most vital and the first to be considered. The European single market, which has been in operation since 1993, is an impressive reflection of such policies. This market eliminates all the barriers to trade and the free movement of capital, goods, labor, and services within the member states so as to be able to compete with the international market and to promote economic growth. After the establishment of the single market in 1999, the European Central Bank introduced the Euro as a new currency in a policy known as monetary union. Since 2002, the euro has replaced the previous national currencies of 12 members (Belgium, Germany, Spain, France, Ireland, Italy, Luxembourg, the Netherlands, Austria, Portugal, Finland, and Greece). Along with the economic policies, other policies have been adopted to instill a sense of “shared European identity,” such as the unified European passport and driver license. These are among many other successful policies. The EU and the United States are major allies. The United States played a significant role in protecting the EU’s security after World War II. It participated in the economic reconstruction process in Europe through its investments under the Marshall Plan. It also advocated the idea of the European Economic Community, and it was a defender of the western European states against the threat of Soviet invasion during the cold war. This protection was under the umbrella of the North Atlantic Treaty Organization (NATO). After the collapse of the Soviet Union, the first Bush administration adopted a New Transatlantic Agenda and Joint EU-US Action Plan, updated in 1995, to promote peace and democracy in Europe and around the world and to expand world trade. Since then, biannual meetings are held between the presidents of the United States, the commission, and the European Council. The meetings are also held between EU foreign ministers and the U.S. secretary of state and between the commission and the U.S. cabinet members. Despite such strong ties, there is a growing demand in the American Congress to dissolve NATO. The rationale for this demand is grounded in the fact that the EU is now more united than ever before, and there is no longer a threat from the Soviet Union that requires American engagement. In Europe, on the other hand, there is
disagreement about U.S. foreign policy in the Middle East. While Britain and eastern European members support the United States, others such as Germany and France disagree with the U.S. policy. Their concern is that the United States is endangering Europe (e.g., the Madrid train bombings). Such clashing views between the United States and the EU, nevertheless, should not take place since they both need each other to challenge the common threats of terrorism. See also diplomatic policy. Further Reading Archer, Clive, and Fiona Butler. The European Union: Structure and Process. New York: St. Martin’s Press, 1996; EUROPA. “Overviews of the European Activities: Enlargement.”Available online. URL: http://europa.eu/pol/enlarg/overview_en.htm. Accessed January 9, 2007; Gerven, Walter. The European Union: A Polity of States and Peoples. Stanford, Calif.: Stanford University Press, 2005; McCormick, John. Understanding the European Union. New York: Palgrave, 2005; McGiffen, Steven. The European Union: A Critical Guide. Ann Arbor, Mich.: Pluto Press, 2005; Nugent, Neill. The Government and Politics of the European Community. Durham: University of North Carolina Press, 1991; Roney, Alex. EC/EU Fact Book. London: Kogan Page, 1998; Warleigh, Alex. European Union the Basics. New York: Routledge, 2004. —Muna A. Ali
globalization Globalization refers to the worldwide diffusion of modes of human culture, society, economic transactions, and politics and the greater interconnectedness of the world’s peoples. It has become widely accepted that humanity has entered into a globalized era, indicating that the old division into a capitalist first world, a communist second world, and a developing third world no longer applies. Because it is a complex of different processes with a great capacity to unite, divide, and re-create all manner of human relationships and interactions, it is hard to judge whether its overall effects are benign or harmful, and in which ways. Both detractors and supporters of globalization can cite empirical evidence to bolster their claims—the for-
mer, the fact that the number of people living in poverty has not appreciably decreased, nor has the intensity of poverty, especially in rural areas and urban slums; the latter, the fact that aggregate economic data have been improving. However, the persistence of concentrations of immiseration, environmental degradation, disease, and other issues of human mortality all suggest that inequality of access to important resources remains the burden of peoples living in underdeveloped countries. The social, economic, political, and environmental consequences of contemporary globalization have already been deemed controversial yet are presently unknown. Some of the most significant questions surrounding globalization include whether it is, in truth, a new phenomenon; whether it threatens the traditional sovereignty of the nation-state and current international political practice; whether economic globalization is outpacing political globalization to the detriment of the world’s poor; and identity issues such as whether social and cultural aspects of globalization are really Americanization or Westernization in disguise, thus remaking the world in the image of its dominant countries. Globalization is not itself a new phenomenon when understood merely as the extension of longstanding patterns of interaction, albeit intensified and much more fluid, but nonetheless a quantitative and not a qualitative change. Human societies have interacted with one another often over great distances for most of recorded history, though in modern times globalization has seemed driven by predominant states, cultures, and ways of conducting business. Some international political economists argue that concerns about the negative effects of globalization are greatly exaggerated, by which is meant not merely that the beneficial effects far outweigh the negative, but that globalization is itself a misnomer because it is not at all something new but the logical outgrowth of patterns of interaction that began in premodern times. Furthermore, they argue that the greater the economic integration and the lesser the degree of political regulation, the better. But the political situation today is very different than it was during the mercantile era, for example, and calls to mind Aristotle’s admonition that relations of trade are no substitute for political agreement and community.
The difference today is that the processes of globalization occurred in an international context largely determined in the wake of World War II, whereupon ensued a widespread agreement to attempt an international legal regime that extended basic human rights to all the world’s peoples and to conduct affairs between nation-states peaceably, an effect that conditioned both superpowers during the cold war despite their stated mutual animosity. Economic integration might not even have reaped the benefits it so far has been able to were it not for the relatively peaceful, stable international system, politically speaking. By itself, an increase in commerce does not address those fundamentally divisive issues such as territorial and ethnic disputes that are arguably fed as much as abated by the accumulation of wealth on one or more sides of entrenched, historic disagreements. Globalization as the expansion of neoliberal ideology regarding economic policy recalls the old debate between followers of Adam Smith and Karl Marx, regarding whether the free market provides a sufficient form of social regulation and so makes politics irrelevant, and whether the free market will reach an exploitative zenith at which point it will be desirable to craft a global political regime encompassing all humanity to ensure that all peoples both contribute to it and are benefited by it. At present, widening trade liberalization has progressed alongside widening income disparities and ecological destruction. Understood as increasing economic interactions across international borders undertaken by private entities rather than governments, the relevance of the nation-state is seen to be in decline. Globalization today is often regarded as posing a threat to the sovereignty of the nation-state, whereas earlier globalization can be said to have enhanced it, albeit at the expense of peoples not yet organized into recognizable states. Globalization is criticized by some because it seems to have displaced politics from the driver’s seat in what counts in international affairs in favor of large corporations and their interests. By the 1990s, the economic turnover of several multinationals exceeded the gross national product of many states, calling into question the ability of those principally developing world states to control their own destiny. In addition, the control over resources such as oil and other vital commodities by multinational
companies can be viewed as a new form of imperialism that threatens smaller nations’ control over their domestic economies, making them vulnerable to world market fluctuations that they have little or no ability to control and diminishing their bargaining position. Likewise, in the absence of effective and enforceable international trade law, those large corporations that seem to have no specific country location outside of known off-shore tax havens may present resource, environmental, and other threats even to developed economies. In this context, developing world nations are dependent on alreadyestablished capital and financial markets, even as they attempt to use global communications and information networks to their advantage. The liberaldemocratic inclinations of the evolving global order, however, have the potential to condition any runaway capitalism and channel its benefits to the least advantaged, which are understood to have political and economic rights that ought to be recognized regardless of the capacity of any particular state to secure them. Use of the term globalization to describe an interconnected world in which no corner of the globe has not been penetrated by the financial institutions and multinational corporations of the industrialized West became widespread during the 1990s. Globalization today is generally taken to be driven by the expansion of free market capitalism such as characterizes advanced industrial countries into the developing world, and it is generally criticized for occurring without the benefit of a political envelope or legal regime adequate to the task of managing events and multiple actors in the fast-paced environment of the international economy. Especially in the developing world, the imposition of economic restructuring has led to peasant dispossession and displacement, just as it has weakened states and lessened their ability to effect economic regulation to secure for themselves the benefits and profits of their own resources and industries. One growing feature of globalization is the movement of peoples, whether they are professionals, refugees, migrant workers, or travelers, thus integrating habits of culture such as religion, art, and popular culture. The increasingly borderless aspect of the human community allows for a myriad of novel interactions and fusions, some of which are
resisted in the name of local or regional traditions. The sociologist George Ritzer delineates two processes involved in globalization, which he terms grobalization and glocalization, each with its own subprocesses and characteristics. Grobalization, the globalization of forms of growth, is a complex of capitalism in financial affairs, Americanization in culture, and “McDonaldization” of workplaces, which seem geared toward enhancing the profit, power, and influence of nation-states and corporations in the already developed countries. Here, the global is driving out the local and replacing indigenous processes with nothing particularly special. Glocalization, the globalization of forms of local modes of human interaction, is characterized by diversity and stresses the local independence of persons, places, things, and services from global processes. Here, the local or regional is mixing with the global to produce unique hybrids that are dispersed to other areas of the globe. These two broad processes of globalization are resulting in both more uniformity across the globe and increased attention to a Western–non-Western divide in the case of grobalization and novel forms of human interaction in the case of glocalization. The notion of multiculturalism could be analyzed from both perspectives, as the growth of a heterogenous culture that has been contributed to by a variety of local, regional, and foreign cultures, yet a homogenous worldwide culture that looks pretty much the same anywhere. Globalization is producing both complexity and variety at the same time as it is producing commonality. While through the ease of travel and the desperation for economic opportunity globalization has brought many peoples to the United States and individual Americans to all corners of the globe, it has also brought American and Western dominance to the rest of the world, a dominance in science and culture, for example, that may be as unwelcome as it is perceived to be threatening. To many, the United States represents the Western world and so has become the target of terrorist organizations that seek to abate its influence and maintain the dominance of non-Western ways, unfortunately through destructive acts facilitated by globalization, such as electronic networks of information, financial backers, and arms merchants. Power is no longer centralized in capital cities, and the importance of the center-
periphery relationship within nation-states has been eclipsed by networks of power that increase people’s capacity to access power, just as they increase a country’s security vulnerabilities to the coercive use of power. Defense against aggression such as guerrilla attacks is a feature of daily life in many places, one that challenges traditional forms of governance, encourages new political alliances, and bolsters regional interdependencies. It is unclear whether the diffusion of increasingly common habits of culture will produce a recognition of our common humanity and a political understanding to match that will benefit the world’s peoples, especially those who at present are not able to contribute to the transformations but instead are being remade by them, if not ignored owing to their economic poverty, political oppression, or lack of interest in their situation for one reason or another. Perhaps globalization’s interruptions of the ordinary, accepted, or longstanding, and multiplications of difference alongside uniformity in modes of human existence ultimately raise the most contentious sort of issue of all, identity. As even indigenous peoples are being challenged to get on board to at least have their economic activity conditioned by world demands and practices, just as those sorts of effects are increasingly perceived to present political and ethical challenges to inhabitants of the already developed countries, such as the thorny issue of sweatshops, national and other forms of identity are revealing themselves to be not as well settled as some have believed, but far more fluid and subject to conscious choice. This new space for freedom may be only slight in many cases, including places in the developed world already accustomed to certain lifestyles, prerogatives, expectations, and lacks of concern, but the potential is there for energetic supranational bodies and conscientiously chosen collective forms of determination that will work as an improvement of the human condition. Further Reading Bhagwati, Jagdish. In Defense of Globalization. Oxford: Oxford University Press, 2004; Dunning, John H., ed. Making Globalization Good: The Moral Challenges of Global Capitalism. Oxford: Oxford University Press, 2003; Pirages, Dennis Clark, and Theresa
Manley DeGeest. Ecological Security: An Evolutionary Perspective on Globalization. Lanham, Md.: Rowman & Littlefield, 2004; Ritzer, George. The Globalization of Nothing. Thousand Oaks, Calif.: Pine Forge/Sage, 2004; Steger, Manfred B., ed. Rethinking Globalism. Lanham, Md.: Rowman & Littlefield, 2004; Wolf, Martin. Why Globalization Works. New Haven, Conn.: Yale University Press, 2004. —Gordon A. Babst
international law International law in the United States has undergone changes back and forth from a marginal role, to manipulation for reasons of national interest, to respect and observation. Policy makers have given radically different interpretations of what international law is depending on how or if they sought to use it in determining U.S. policy. The U.S. Constitution includes specific mention of the “law of nations” in Article 1, Section 8, giving Congress power to “punish . . . offenses against the law of nations”; Article VI provides that treaties are the “supreme law of the land.” Beginning in the 20th century, international law has been used in U.S. courts, often along with such statutes as the Alien Tort Claims Act of 1789 (used extensively by human rights advocates after the 1970s), and the Torture Victims Protection Act, enacted in 1992 . The Foreign Sovereign Immunities Act of 1976 was used by Congress to limit the immunities that foreign sovereigns could claim under international law. At varying times, policy makers have called attention to the different sources of international law that are detailed in Article 38 (1) of the Statute of the International Court of Justice. These sources are international agreements, custom, general principles, and, as “subsidiary means” for international legal interpretation, judicial decisions and publicists’ teachings. By Article 59 of the Statute, though, the court’s own decisions are to be applied only to the particular case at issue and not to serve as precedents. So unlike in U.S. courts, where judges must make decisions consistent with mandatory precedents, judges in the International Court of Justice are not similarly obligated. How and whether analysts, advocates, and policy makers invoke international law depends on their
approach to international relations. Advocates of liberal or idealist approaches argue that, at its best, the United States fulfills a leadership role by invoking international law. Advocates of realist approaches argue that international law should have a limited role in U.S. foreign policy and should be used only to the extent that it serves U.S. national interest, defined as power. Advocates of critical, world order approaches argue that the United States often narrowly interprets international law in its self-interest; in this view, the United States should conform to changing notions of what is required, notions established by an emerging civil society. Law that is "international" is usually, but not always, distinguished as being between "states" with sovereignty over a population and territory. It may include agreements made in a region, or bilateral agreements binding two countries. It may be distinguished from "supranational" or "community" law, by which a higher organ can bind states. Terms such as world and global may describe envisioned orders that go beyond international law. "Domestic," or internal, law is often contrasted with international law, and "comparative" law compares the internal laws of two or more states. "International" law is often designated as "public" or "private," with "private" international law connoting rules on cross-border, primarily commercial, transactions, often also designated "transnational law." The Hague Conference on Private International Law, which first met in 1893, continues to deal with torts, contracts, trusts, and family legal matters. The remainder of this entry focuses on public international law. Public international law consists of formal and informal rules on war and peace, the environment, territory, human rights, the sea, and other matters. Law means rules that go beyond morality, although compliance may come from reciprocal and voluntary agreements rather than from a central authority. The agreements may be written, for example treaties with provisions on "entry into force." The Vienna Convention on the Law of Treaties of 1969 (entry into force, 1980) is a "treaty on treaties" that sets forth rules usually also followed by nonparties. (As with the other treaties mentioned in this essay, the convention "entered into force," or became legally binding on parties, only after gaining ratification by a prescribed number of parties. In the United States,
the executive branch may "ratify" a treaty only after it has been approved in the Senate by a two-thirds vote.) Within the United Nations, the International Law Commission seeks to codify (place in written form) unwritten understandings. Those understandings on matters ranging from ocean navigation to property rights reflect common obligations, not just common practices. International legal obligations may be reflected in general principles of law, for instance, that interpretation of treaties should assume an intention to obey them, and in the writings and teachings of scholarly analysts. Most agreements made across boundaries involve economic enterprises and consist of private international law. Modern public international law includes overlapping fields: law of (and during) war, international criminal law, international economic law, human rights law, international environmental law, law of the sea, territorial law, humanitarian law, refugee law, and so forth. Some accounts of public international law go back to before the Common Era and others to the origin of the European state system in the Treaty of Westphalia in 1648. Global changes with the creation of the United Nations in the mid-20th century and with economic and political globalization toward the end of the 20th century led to new ways of thinking about international law. Liberation movements and the emergence of "civil society" (characterized by nonprofit, nongovernmental actors) also challenge traditional conceptions of international law. The 20th and 21st centuries are replete with attempts to create international law. Many of the first institutions reflected Europe's world dominance; some advocates have sought to redress previous imbalances. The Hague Agreements in 1899 and 1907 brought together European and North American governments in controlling contemporary weaponry. Under the 1899 Hague Convention, the Permanent Court of Arbitration (PCA) was established at The Hague, Netherlands. The PCA continues to provide arbitrators for disputes between states, with five cases pending as of July 2006. In 1921, the Permanent Court of International Justice was founded at The Hague under Article 14 of the Covenant of the League of Nations. The Pact of Paris, or Kellogg-Briand Pact, which outlawed war as an "instrument of national
policy,” was a product of the optimism of the period around the year of its adoption, 1928. The outbreak of World War II resulted in pessimism about international legislation that was divorced from realities of power. Lawmaking through the United Nations, created in 1945, was greatly constrained by the cold war confrontation between the United States and the Soviet Union. Advocates of international legal solutions to world problems emphasized their appropriateness for the world as it ought to be, as opposed to the realities of national selfinterest. A United Nations Convention on the Prevention and Punishment of the Crime of Genocide nevertheless entered into force in 1951, following adoption by the General Assembly in 1948. The United States did not ratify the agreement (an act by the executive branch following approval by two-thirds of the Senate) until 1989. A 1948 Universal Declaration of Human Rights was passed by the United Nations General Assembly. The International Covenant on Civil and Political Rights and the International Covenant on Economic, Social, and Cultural Rights were opened for signature in 1966 and entered into force shortly after receiving their 35th ratification in 1976. The Untied States ratified the Covenant on Civil and Political Rights in 1992. Genocide, war crimes, and crimes against humanity were dealt with in the Rome Statute for an International Criminal Court in 1998. The Rome Statute entered into force in 2002. Additional United Nations human rights agreements include, among others, the International Convention on the Elimination of All Forms of Racial Discrimination (passed by the General Assembly in 1966), the Convention on the Elimination on all Forms of Discrimination Against Women (1979), the Convention Against Torture and Other Cruel, Inhuman, or Degrading Treatment or Punishment (1984), and the Convention on the Rights of the Child (1989). The United States ratified the Racial Discrimination and Torture Convention but not the others. U. S. ratifications have come with reservations, by which countries agree to treaties “except for” certain provisions. Advocates of reservations contend that treaty observance is more likely if countries are able to restrict their commitments. Examples include U.S. reservations to the International Covenant on Civil and Political Rights regarding free speech, allowing for broader protection of racist and militarist speech
than that given under the covenant, and broad application of the death penalty. Regional international legal protections of human rights are strongest in Europe but also exist in the Americas and in Africa. All three continents have courts of human rights. The United States and Canada are not parties to the American Convention on Human Rights, although cases involving the United States have been brought before the Inter-American Commission on Human Rights. The United States is a member of the Organization of American States and therefore subject to decisions of the commissioners. The United States was initially a major proponent of United Nations efforts to govern the ocean. Conventions emerged from successive United Nations Conferences on the Law of the Sea. These included the 1958 Geneva Conventions on the High Seas (entry into force, 1962), Territorial Sea, and Continental Shelf (both of which entered into force in 1964), and the 1982 Law of the Sea Treaty (entered into force, 1994). The United States is a party to the 1958 treaties but is not a party to the 1982 Law of the Sea treaty. The 1982 treaty includes an International Tribunal for the Law of the Sea, located in Hamburg, Germany, to settle disputes among parties. International environmental law includes a very successful ozone regime, including a treaty and later protocols with ever-stricter regulations on the production of ozone-depleting substances. A 1992 United Nations Framework Convention on Climate Change (UNFCCC; entry into force, 1994) seeks to regulate global warming, with specific guidelines set forth in a 1997 Kyoto Protocol (entry into force, 2005). The United States is a party to the convention but not to the protocol. Many of the environmental agreements, such as the UNFCCC and Kyoto Protocol, are reviewed at periodic conferences of the parties. International criminal law was administered after World War II at war crimes tribunals in Nuremberg and Tokyo, a major advance from attempts in Constantinople and Leipzig to prosecute war criminals after World War I. Subsequent tribunals have dealt with Rwanda, the former Yugoslavia, and Sierra Leone. In 1998, countries gathered in Rome to approve the Statute for an International Criminal Court. It did not permit reservations. In 2002, the Rome Statute entered into force. The United States is not a party.
An as yet uncompleted section of the Rome Statute will deal with aggression. Governments have been unable to agree on the relation to aggression of related terms such as terrorism, wars of national liberation, and preemptive war. Much of international law deals with whether countries are justified in going to war (jus ad bellum) and the conduct of war (jus in bello). Some uses of force have generally been legally permitted, especially self-defense and multilateral uses of force, but others, such as preemptive war, are heavily criticized by international lawyers. Because of the growing importance of human rights law, some advocates would create an exception to prohibitions on the use of force for humanitarian intervention, but most lawyers and policy makers are skeptical. U.S. officials have questioned the value of international law in promoting human rights and whether it serves U.S. interests. After the events of September 11, 2001, domestic and international critics of the George W. Bush administration charged the United States with widespread violations of international law. Administration supporters argued that an international war on terrorism justified curtailment of international human rights commitments. Increasingly, international organizations, nongovernmental organizations, and individuals have been involved in the international legal process. States remain the primary actors, and the United Nations is structured around a leading role for states. But the emergence of a global civil society, whereby individuals associate across national boundaries, poses challenges to the state system. The challenges are evident in some applications of international law, such as nongovernmental tribunals (the Bertrand Russell Tribunal and the Permanent Peoples' Tribunal), and in globalization, whether driven by popular forces or by commercial interests. See also rule of law. Further Reading Bartholomew, Amy, ed. Empire's Law: The American Imperial Project and the "War to Remake the World". London: Pluto Press, 2006; Buergenthal, Thomas, and Sean D. Murphy. Public International Law in a Nutshell. St. Paul, Minn.: West Group, 2002; D'Amato, Anthony, and Jennifer Abbassi. International Law Today: A Handbook. Eagan, Minn.: Thomson-West, 2006; Glendon, Mary Ann. A World
Made New: Eleanor Roosevelt and the Universal Declaration of Human Rights. New York: Random House, 2001; Joyner, Christopher. International Law in the 21st Century: Rules for Global Governance. Lanham, Md.: Rowman & Littlefield, 2005; Schulte, Constanze. Compliance with Decisions of the International Court of Justice. Oxford: Oxford University Press, 2004; Von Glahn, Gerhard, and James L. Taulbee. Law among Nations: An Introduction to Public International Law. 8th ed. Upper Saddle River, N.J.: Longman, 2006. —Arthur W. Blaser
International Monetary Fund (IMF) The International Monetary Fund (IMF), World Bank, World Trade Organization (WTO), and a handful of similar institutions have experienced unprecedented visibility recently, and these once obscure and relatively unfamiliar yet pivotal players in world economic affairs have become known even to those with no interest in finance and economics. During the past decade or so, the common association of the IMF with globalization initiatives has exposed the organization to unprecedented scrutiny from individuals and groups that had traditionally cared very little about the IMF and related institutions. Unfortunately, the politicization of globalization efforts and the associated resistance to those efforts from many circles has attracted controversy over the perceived role of the IMF in the global economy. Much of that controversy is fueled by misconceptions and propaganda intended to undermine liberalization initiatives around the world, and it has contributed to a widely misleading picture of what the IMF actually does and the power it has. The IMF and some of its companion organizations were created from the agreements that emerged out of the Bretton Woods Conference during the last stages of World War II. Although this gathering of world leaders was a manifestation of broader plans to establish and secure geopolitical stability on conclusion of the war, its purposes and focus were rooted in problems that were unrelated to the war itself. The Bretton Woods Conference sought to address the financial instabilities and macroeconomic deficiencies of national, regional, and global systems that produced and perpetuated the financial and economic
crises of the interwar period. More broadly, this meeting was a response to the structural and cyclical dislocations and transformations caused by mass industrialization and the consequent need to confront the inadequacies of pre-Keynesian solutions for the dilemmas of industrialization. Above all, the Bretton Woods agreements were based on the conviction that conflict is avoidable through the facilitation of international economic cooperation and increased prosperity throughout the globe and that, therefore, the removal or gradual reduction of barriers to free and stable exchange is paramount. On the whole, the participants endorsed capitalist economic principles, though they disagreed regarding the proper level of state intervention and control over economic and financial mechanisms. Nevertheless, a consensus did exist concerning the Keynesian realization that industrialized economies require at least some degree of macroeconomic management and that the structural vulnerabilities of industrialized and industrializing nations must be remedied in some manner. In part, since domestic economic and financial stability was, among other things, a function of international economic and financial stability, domestic structural weaknesses would be countered through international processes established to pursue the above goals. Despite the overarching economic objectives that animated all Bretton Woods negotiations and determinations, the primary reason for the existence of the IMF stems from financial concerns. The IMF’s companion organizations, such as the World Bank Group, were conceived as a more direct reaction to economic priorities, particularly the needs of developing and structurally deficient economies, but the IMF sought to devise a methodology and set of mechanisms through which financial stability and cooperation, and especially currency viability, could be promoted and maintained. In fact, though the IMF’s mission is broadly devoted to the implementation of free trade principles and politicoeconomic liberalization policies around the globe, its practical focus has been confined to the oversight, management, and regulation of currencies and the control of factors that ensure currency viability and financial stability. As such, its principal areas of activity include financial supervision and assessment through which
the IMF evaluates the capabilities, effectiveness, and performance of member and nonmember entities; lending and structural aid through currency support programs and financial liberalization efforts; and technical facilitation through training, knowledge management, and infrastructural development of many sorts. It is the second of these, financial restructuring and currency intervention, that has attracted so much notoriety among critics of globalization for its apparent advocacy of U.S. interests and Western cultural priorities. Regardless of its political connotations, this area of responsibility seems inherently driven by Western norms due to the assumptions upon which the IMF was founded, so criticisms such as the one above seem irrelevant. The IMF is a large organization with headquarters in Washington, D.C., and a current subscription of 185 states, each of which has a seat on the board of governors. It is headed by a managing director, who serves a five-year term and answers to a 24-member executive board. Since 2004, the managing director has been Rodrigo de Rato of Spain, but he is expected to resign before his term expires. All too often, not least because of the relative size of the American subscription quota and the resulting influence the United States has over other members, managing directors have been viewed as the hand-picked representatives of the American government, although no American has ever served as the managing director of the IMF. Despite the manifest U.S. impact on the shaping of IMF policy and its organizational governance, the extent and effectiveness of any resulting influence over other members has frequently been overstated by the IMF’s critics. The IMF frequently works very closely with the World Bank and other development based organizations, but its mandate is largely limited to the stabilization of currencies and financial systems. Through loans and other forms of assistance to countries with substantial financial problems, the IMF attempts to restore, establish, or maintain sustainable, feasible, and equitable exchange rates in international markets. It monitors current-account relationships among member states and facilitates favorable balance of trade in regional and global markets. Although its lending programs and currency measures are mostly concerned with the financial aspects of domestic and international stability, intervention and assistance is
normally contingent on structural reforms in target countries. However, it is important to remember that the IMF's involvement with development, though significant in particular regards, is secondary to its primary purpose of financial stabilization through the management of currencies and current accounts. Some scholars have argued that the effective collapse of the Bretton Woods system precipitated by the abandonment of the gold standard in 1971 rendered the IMF all but irrelevant and that, therefore, its existence is problematic if not utterly unnecessary. The Bretton Woods system was based on an adherence to a fixed exchange rate mechanism, so the transition to a floating mechanism would indeed seem to undermine the viability of an organization whose major tasks were rooted in the desire to protect an established system of fixed exchange rates. Nonetheless, if the mission of the IMF is considered more broadly within the context of general financial stability and the structures required to maintain that stability, then the so-called collapse of the Bretton Woods system merely requires a reorientation of specific policy objectives to meet the demands of a mission that has remained mostly consistent throughout the last 60 years: the stabilization and coordination of world financial markets and currencies in order to promote an internationally viable financial system that promotes efficiency through balance and sustainability of trade, payments, and capital. As has been true of the World Bank, the IMF has underwritten projects throughout the globe, especially in regions that are underdeveloped or experiencing serious structural difficulties, so it has been labeled by its critics a tool of Western expansionism and a supporter of the exploitation of developing countries by the industrialized world. To be fair, it must be admitted that the relationship between the IMF and the debtor states it assists in the developing world is intrinsically imbalanced, not least due to the economic and diplomatic leverage, to say nothing of the military might, that Western countries possess. In addition, the financial aid disbursed by the IMF does come with the proverbial strings, so that recipients of IMF financial largesse are often compelled to consider pro-Western structural reforms as the essential condition for assistance.
These observations notwithstanding, it would be altogether illogical and unwarranted to conclude that the causal links between assistance and restructuring are questionable or invalid. This essay is neither a defense nor a rejection of the IMF and its practices, so the debate between proglobalization and antiglobalization forces concerning the IMF should be resolved elsewhere, but the fact that the functional logic of the IMF depends on its ability to endorse and implement pro-Western capitalist norms cannot be forgotten. So people should not be surprised to discover that the IMF conditions its willingness to engage in specific projects on the reciprocal willingness of target states to implement those structural initiatives that will maximize the probability of success for IMF-sponsored projects. From a historical perspective, with respect to the definition and fulfillment of normative criteria, the IMF has always been an unashamedly Western club, and it has succeeded, at least in part, due to its commitment to that reality. Further Reading Federal Reserve Bulletin. Available online. URL: www.federalreserve.gov/pubs/bulletin/default.htm. Accessed July 2, 2007; Cesarano, Filippo. Monetary Theory and Bretton Woods. New York: Cambridge University Press, 2006; Gilpin, Robert. The Political Economy of International Relations. Princeton, N.J.: Princeton University Press, 1987; Krugman, Paul. Pop Internationalism. Cambridge, Mass.: MIT Press, 1997; Udell, Gregory F. Principles of Money, Banking, and Financial Markets. New York: Addison Wesley Longman, 1999; Woods, Ngaire. The Globalizers. Ithaca, N.Y.: Cornell University Press, 2006. —Tomislav Han
international trade The famous French historian Fernand Braudel once noted: "No civilization can survive without mobility: all are enriched by trade and the stimulating impact of strangers." Indeed, international trade is as old as ancient history. Despite ups and downs, international trade has continued to grow throughout history. In 2005, total world merchandise trade was more than $10 trillion, accounting for nearly 20 percent of the total economic product of the world. The U.S. share of world exports in 2005 was 8.7 percent, and
WORLD MERCHANDISE TRADE (2005)

                           Exports        Percentage    Imports        Percentage
                           ($ Million)    of Total      ($ Million)    of Total
World                      10,392,567     100           10,652,542     100
Low-Income Countries          256,378     2.5              310,841     2.9
Middle-Income Countries     2,785,199     26.8           2,551,288     24
High-Income Countries       7,351,037     70.7           7,790,420     73.1
United States                 904,289     8.7            1,732,706     16.3
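The U.S. shares and the merchandise trade deficit discussed in this entry follow directly from the table's dollar figures. The short Python sketch below is purely illustrative of that arithmetic; the variable names are invented for the example, and the numbers are simply the 2005 values from the table.

    # Illustrative only: 2005 figures from the table above, in millions of U.S. dollars.
    world_exports = 10_392_567
    world_imports = 10_652_542
    us_exports = 904_289
    us_imports = 1_732_706

    # U.S. shares of world trade (the table reports 8.7 and 16.3 percent).
    export_share = 100 * us_exports / world_exports   # about 8.7
    import_share = 100 * us_imports / world_imports   # about 16.3

    # U.S. merchandise trade deficit, imports minus exports (about $828 billion).
    merchandise_deficit = us_imports - us_exports      # 828,417 million

    print(f"U.S. export share: {export_share:.1f}%")
    print(f"U.S. import share: {import_share:.1f}%")
    print(f"U.S. merchandise trade deficit: ${merchandise_deficit:,} million")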
the import share was 16.3 percent. The table above summarizes world trade. However, international economic exchange is larger than these numbers, which do not show foreign direct investment or the trade in services. The latter includes trade in intangible products such as banking, tourism, telecommunications, and exchange of professional services. Including trade in services would add another $2.4 trillion to world trade. Economics presents trade purely in terms of an exchange of goods and services and its effects on the standards of living for people. However, religious, political, moral, and other sociocultural considerations have always been important in elevating or diminishing trade. One way of understanding these cultural considerations is to make explicit how trade is linked to everyday life and the cultural identity of people. Trade is a natural component of human interactions. A few Greek and Roman writers understood the sea to be a way of promoting human interactions and facilitating commercial exchanges. The modern belief that interaction and exchange underlie prosperity and peace can then be traced back to such ideas. The late 18th-century political economist Adam Smith’s notion of division of labor laid the basis of prosperity through trade inasmuch as he opined that gains from economic exchange accrue to those who specialize in producing things for which they are most suited. These ideas from the late 18th century formed the basis of doctrines of comparative advantage in trade in the 19th century. Similarly, political theorists had begun to argue that as nations exchanged goods, they would be less likely to go to war with each other.
This is best captured in French writer Frédéric Bastiat's words that if trade does not cross frontiers, armies will. But the case against trade is also made in economic and cultural terms. The economic rationale against trade rests on the thesis that economic specialization can make some nations too dependent on others or can result in an unequal exchange whereby one benefits at the cost of another. Cultural arguments against trade are many; the earliest ones were moral and philosophical. To the Greeks we owe the term xenophobia, or fear and dislike of foreigners. Christianity in general decried the profit motive that underlies commerce and trade. It was not until the modern era that such cultural notions regarding trade were questioned, but these arguments continue to be made. Trade wars are often portrayed in negative terms. Take, for example, the overly xenophobic tones in the United States against trade surpluses of East Asian countries. Many in the United States have fretted and fumed over Japanese trade surpluses from the mid-1970s to the mid-1990s, Chinese trade surpluses since the mid-1990s, and more recently the controversies regarding outsourcing of jobs to India. Fueling these controversies are numbers, such as the ones shown in the table above, where the total exports from the United States are $904 billion and imports are $1,733 billion, resulting in a merchandise trade deficit of $828 billion. But this trade deficit is reduced by the U.S. trade surplus in services and its earnings from its foreign enterprises. For example, around 25 percent of the total stock of foreign direct investment in the world comes from U.S. multinational corporations, which generate enormous amounts of
earnings for the country. However, foreign direct investment is usually not considered part of international trade. Historically, countries flip-flop between participating actively in international trade and following more inward-oriented strategies. In the 1800s, the United States did not actively participate in international trade, preferring instead to develop a manufacturing industry in New England. This led to a domestic conflict in the country whereby the Republican Northeast supported protectionism in trade, while the cotton-producing South and the corn-producing Midwest supported free trade. This factored into the Civil War (1861–65) that ensued between the North and South, though the major issue in the war was abolition of slavery. After the Civil War, a fall in navigation costs and improvements in agricultural technologies further made the United States quite competitive in agriculture. Cheap corn exports to Italy, for example, threw hundreds of thousands of Italian farmers out of jobs, accounting, in turn, for the first wave of Italian immigration to the United States. In the 20th century, the United States strengthened its export profile with manufactured exports, but two world wars, the interwar years, and the Great Depression were not favorable to trade. Nevertheless, it was at this time, unlike in the 19th century, that the United States began to express explicitly a preference for trade and also link it to causes of international peace. The famous Fourteen Points espoused by President Woodrow Wilson before Congress in 1918 included freedom of the seas and free trade as steps toward international peace. President Franklin Roosevelt's secretary of state, Cordell Hull, who served from 1933 to 1944, was another strong advocate of free trade and, in a Wilsonian vein, believed it to be a force of world peace. Hull also supported the moves for the foundation of the United Nations in 1945. At the end of World War II, as the global community went about designing international institutions, the effort to create one for international trade was led by the United States. The General Agreement on Tariffs and Trade (GATT), signed in Geneva in 1947 by 23 nations, became the de facto international trade institution. It is believed to have been enormously influential in
boosting international trade. Between 1947 and 1994, GATT undertook eight rounds of multilateral trade talks among its member states to reduce tariffs or customs barriers. Starting with the Tokyo Round of 1973 to 1979, GATT also undertook reductions in nontariff barriers (NTBs) among nations. These NTBs included nontransparent trade laws, quotas and other quantitative restrictions, subsidies paid to domestic producers, and discriminatory government procurement practices. GATT's Uruguay Round of trade talks, lasting eight years between 1986 and 1994, was important both for bringing new issues into the international trade agenda and also for strengthening GATT itself. The latter was accomplished by transforming GATT into the World Trade Organization (WTO), which came into being in 1995. The WTO was given some teeth by the formation of a formal dispute settlement body to adjudicate and settle trade disputes among countries. The new issues pushed by the United States revealed the sources of the country's competitive advantage in the world. These included trade in services, led by U.S. exports of telecommunications, banking, airline, hotel, and professional services. U.S. corporations also pushed for and received protections for intellectual property. The latter can be defined as creations of the human mind that go into the manufacture of any product. Intellectual property protections include patents, copyrights, and trademarks. The primary concern for the United States was global piracy of its products, ranging from luxury goods (such as fashion) to music and film videos. The ninth round of multilateral trade talks, known as the Doha Round, has been under way since November 2001. However, at the time of this writing it has been slow going, chiefly because of factors within the United States and western Europe. In the United States, Congress has been under tremendous pressure from domestic agriculture and some manufacturing sectors not to allow any further tariff and nontariff reductions. These sectors now believe that the United States would lose jobs and that net gains would be little through further liberalization. The U.S. Constitution gives Congress the power to regulate commerce with foreign nations (Article I, Section 8), so trade agreements require congressional approval. Even though historically U.S. presidents have favored free trade, Congress, in response to the local interests of its constituencies,
has been more protectionist. While Congress has approved all trade agreements submitted by the president, the fate of the Doha Round, even if it were to be concluded, remains uncertain. Opposition to trade has also built up in other parts of the world. The European Union (EU), which represents 25 European countries at the WTO, has dug in its heels on farm subsidies paid to farmers through an enormously entrenched measure called the Common Agricultural Policy (CAP). Despite calls to dismantle CAP, EU countries with powerful farm lobbies, such as France, Austria, and Poland, remain opposed. In mid-2006, the EU agreed to eliminate all farm subsidies by 2013, but since then there has been a lot of backpedaling. Meanwhile, the developing world remains divided over the issue of trade. Historically, the developing world favored protectionism in seeking to boost its own industries and reach self-sufficiency. Nevertheless, by the late 1980s, it hesitatingly began to move in the direction of free trade. Since then a few countries such as Brazil, Argentina, and China have more or less embraced free trade, while smaller and poorer countries continue to seek preferential access for their products abroad while limiting their imports. The developing world's free trade coalition is made up of a group of nearly 20 countries (called the G20), while the other countries from Africa, the Caribbean, and the Pacific make up the ACP, or G90. There are other groups as well. For example, a group of four African countries (Benin, Burkina Faso, Chad, and Mali—the G4) has asked the United States since 2003 to stop subsidizing its cotton producers, because the subsidies keep the price of cotton artificially low and hurt cotton exports from the G4. There have also been vehement protests against free trade from various other groups. As mentioned, the developing world remains ambivalent and divided on free trade. Furthermore, many labor, environment, and human rights groups argue that the competitiveness among nations, which forms the basis of trade, also dilutes labor and environmental standards as countries "race to the bottom" to reduce costs. The Doha Round was, in fact, supposed to be the Seattle Round starting in 1999. However, protests in Seattle from advocacy groups delayed the start of the round. Between 2000 and 2005,
world exports of merchandise grew by about 10 percent a year, far faster than the growth in national incomes. This leads economists to believe not only that trade will not diminish in the future but also that it will lead to economic growth. International trade rules such as those negotiated through GATT and the WTO can facilitate the cause of trade, but negotiating these rules takes political will that has been forthcoming much more slowly in recent years. Ironically, international trade has grown despite the political slowdown. Further Reading Destler, I.M. American Trade Politics. 4th ed. Washington, D.C.: Institute for International Economics, 2005; Friedman, Thomas L. The World Is Flat: A Brief History of the Twenty-First Century. New York: Farrar, Straus & Giroux, 2005; Irwin, Douglas A. Against the Tide: An Intellectual History of Free Trade. Princeton, N.J.: Princeton University Press, 1996; Singh, J. P. Negotiation and the Global Information Economy. Cambridge: Cambridge University Press, 2008; Wolf, Martin. Why Globalization Works. New Haven, Conn.: Yale University Press, 2004; World Trade Organization. World Trade Report 2007. Geneva: World Trade Organization, 2007. —J. P. Singh
liberal democracy Liberal democracy is the basis for representative government that allows civil society, the economy, and political culture to evolve while maintaining transparent regulatory and administrative control for the provision of political goods: public order, public health, public welfare, institutions of justice, a free press, and the education of citizens. Citizens in liberal democracies assert vertical accountability in periodic elections for those who govern. Ideally, the renewal of leadership flows in this way from the society at large. Liberal democracies also maintain horizontal accountability across centers of power to prevent institutions of governance from encroaching on one another and on the civil and political liberties of their citizens. Clear examples of horizontal accountability exist in the checks and balances among the three branches of government enshrined in the U.S. Constitution and reflected in the workings
of the federal government. The U.S. Congress's enormous power in authorizing and appropriating government funds is subdivided by specific functions reserved to the House of Representatives and the Senate. The president's power resides in the signing or vetoing of legislation and in the control over the executive agencies of government. The U.S. Supreme Court determines if the laws of Congress or the actions of the president conform to the Constitution. Liberal democracy in different states balances the power of government and civil society in ways reflecting their history and culture. The Republic of China, on Taiwan, maintains five branches of government (the executive, legislative, judicial, control, and examination yuan). Most parliamentary systems maintain a combined legislative-executive branch and a separate judiciary. All feature open processes for the redress of grievances administratively or before courts of law. In liberal democracies, self-replicating, self-sustaining civil society institutions aggregate political and economic power to advance and defend the interests of their constituents. Examples include political parties, labor unions, and a myriad of associations at the national, municipal, and local levels. These institutions exert vertical and horizontal accountability and are subject to it themselves through internal regulation and accountability to local and national law. Liberal democracies generally recognize as the basis for international and domestic legitimacy of governments a respect for the civil and political liberties of citizens, regular elections in which candidates freely compete for the right to hold power, the rule of law (including an independent judiciary, codification of law, representation, and access for citizens sufficient to allow redress of grievances), and formal consent of the governed to those who govern, which can be granted and revoked in cycles of change not involving a resort to force or coercion. Political scientist Robert Dahl lists the institutions required for liberal democracies to function: elected officials; free, fair, and frequent elections; freedom of expression; alternative sources of information; associational autonomy; and inclusive citizenship. Failure in any one of these institutions puts liberal democracy at risk, and the resolution of soci-
etal tensions and conflicts is thrown from the arena of regular and established procedures into contests of arms and other coercive means. Not all democracies are liberal. Some maintain the forms of democratic governance but restrict citizens' rights to assembly, association, free expression, full participation in the electoral process by opposition parties or candidates, or the independence of the judiciary. These are commonly known as illiberal or electoral democracies. The rulers constrain vertical accountability exercised by the citizens on the government. In contrast, autocracies (authoritarian, totalitarian, or mixed regimes including oligarchies and monarchies) do not use electoral processes to choose leaders and actively suppress freedoms of assembly, association, and speech. The following surveys and reports provide objective criteria for evaluating contemporary governments to determine whether a particular state qualifies as a liberal democracy: Country Reports on Human Rights Practices, submitted annually by the U.S. Department of State to Congress, examines internationally recognized individual, civil, political, and worker rights as prescribed in the United Nations Universal Declaration of Human Rights; Freedom in the World, issued annually by Freedom House, provides accurate summaries of the status of civil and political liberties on a country-by-country basis, with a scoring system for ready comparisons; Handbook on Democracy Assessment, produced by the International Institute for Democracy and Electoral Assistance, provides a methodology for assessing conditions of democracy and progress toward democratization; and Index of Economic Freedom, prepared and maintained online by the Heritage Foundation and the Wall Street Journal, provides objective criteria for measuring economic freedom in 161 countries. Liberal democracy emerged from the European Enlightenment of the 18th century propelled by three competing and often conflicting strands of political thought: democratic, republican, and liberal. These principles shaped habits of mind in the political culture of the British colonies in North America and eventually across Europe for checking the accretion of centralized power, maximizing individual liberty, protecting the rights of minorities, and opening opportunities for the accumulation of personal wealth.
The ideal of citizens having a voice in the affairs of state and an obligation to advance their views in public originated in the demos (Greek for "people") of the Greek city-state (621 b.c. to 100 b.c.). The ideal of duty expressed in self-sacrificing public service by elites under the rule of law emerged in the commonwealth of Rome (250 b.c. to 27 b.c.). The Roman insistence on martial valor, no individual or institution standing above the law, and disciplined, self-sacrificing service to the state as the heart of the republican ideal (res publica is Latin for "public things") proved difficult to sustain. However, the historical appeal of a pax Romana (Latin for "Roman peace") based on republican virtues remained a guiding political principle from the fall of Rome (a.d. 476) to the modern period. The sovereignty of an individual combined with protection of property rights under contracts enforceable by law emerged during the European Enlightenment of the 18th century as liberalism. The inspiration for the American and French Revolutions can be found in the political thought inherited from ancient Greece and Rome and the writings of John Locke, Baron de Montesquieu, Jean-Jacques Rousseau, and others. The American Declaration of Independence (1776) against the British Crown proclaims "Life, Liberty, and the Pursuit of Happiness" as inalienable rights. These sentiments echoed in the rallying cry "Liberté, Egalité, Fraternité," and in the Declaration of the Rights of Man and of the Citizen (1789) written during the French Revolution against the tyrannies of an unrestrained monarchy. The intellectual revolutions of the 17th and 18th centuries laid the social groundwork for enforceable contract law and the political revolutions to follow. Liberal legal and political innovations made the engines of commerce available to harness scientific inquiry and drive the industrial and technological revolutions of the 19th century. The harnessing of social, political, and economic energy to state enterprises led to the consolidation of the European nation-state and drove British imperial expansion and eventually European colonization of technologically less-advanced African, Asian, and Middle Eastern cultures in the 18th and 19th centuries. Ironically, this forced opening of non-Western cultures to the political, economic, and social influences of Europe led directly to the planetwide, revo-
lutionary mid–20th–century decolonizations. The intellectual currents underlying the explosive democratizations of the late 20th century also trace their origins to this marriage of liberal social, political, and economic thought in the European Enlightenment. In the late 20th century, exponential growth occurred in the number of governments formally and regularly accountable to their citizens. Samuel Huntington of Harvard University identifies three waves of democratization. The first rose from the American and French Revolutions and crested in the 1920s with some 30 countries, a number that declined to a dozen by 1942. The second wave rose and crested following World War II with 30 democracies or so, and then the number fell back. A global tipping point was apparently reached in the mid-1970s with the revolution in Portugal against the Salazar dictatorship's failed colonial policies and domestic oppression. The third wave of democratization was broad and deep, cresting in the mid-1990s with more than 120 democracies in the world out of 190 nation-states. The fall of the Berlin Wall in 1989 marked the collapse of Marxism as the "last" ideological contender for social and political organization on the planet. By 2005, 60 percent of the population of the planet lived under governments "produced by some form of open, fair, and competitive elections." Of the remaining 40 percent of humankind, nearly 80 percent live in the People's Republic of China and the rest in a swath of autocratic states across the Middle East and North Africa, Central Asia, and sub-Saharan Africa. Isolated Marxist regimes hold on in Cuba, the Democratic People's Republic of Korea (North Korea), and former republics of the Soviet Union. Some of these countries show signs of economic reform and perhaps early indicators of democratic development. Liberal democracies distinguish themselves from states organized along other principles by their performance in the delivery of political goods. Liberal democracies enjoy higher gross domestic product per capita, better infrastructure, greater social mobility, longer life spans, higher literacy rates and notably higher rates of female literacy, and correspondingly lower rates of infant mortality and persecution of minorities. Liberal democracies experience no famines or wars with other liberal democracies.
The empirical evidence of the comparative performance underscores the attentiveness given by elected officials to their voting constituencies compared to officials not subject to vertical accountability. Authorities who seize power by extraconstitutional means or by subverting periodic and inclusive elections become isolated from citizens' concerns and insulated from the requirements for delivery of political goods. Often, resources to develop the economy or address social welfare concerns of citizens are diverted to private accounts or squandered on ill-considered schemes of aggrandizement or expansion. The further authorities remove themselves from direct accountability to voters, the greater the likelihood that inappropriate rent seeking, cronyism, and other forms of corruption will arise. Treating the state as a personal preserve even led some autocrats to assert the prerogatives of monarchy and bequeath, or attempt to bequeath, political power to their children (e.g., in Syria Assad to Assad, in North Korea Kim to Kim, in Iraq Hussein to Hussein, and in Egypt Mubarak to Mubarak). Finally, extraconstitutional regime changes compound the relative performance failures of autocracies by despoiling infrastructure, lives, and the wealth of the state. Liberal democracy does not represent a panacea for the human condition but an administrative and regulatory mechanism that allows citizens a greater voice in government. Several autocratic governments in the mid and late 20th century using command economies (autarkies) proved capable of delivering high standards of public health, economic growth, and comparatively high rates of literacy. But the relative performance advantage of liberal democracy across all measures of human well-being settled for many the historical debate over organizing principles for human society. "The End of History" metaphor proposed by Francis Fukuyama in his summer 1989 article in The National Interest generated enormous confusion about the status of liberal democracy and future world order. Fukuyama stated that the Western concept of history as a dialectic between the forces of order and liberty had come to an end with the collapse of Marxism and the resounding triumph of the West in the cold war. He never discounted the possibility of new wars, new social or religious movements, or even the reassertion of the forces of order in the future.
Fukuyama noted that healthy social orders require change and even pointed to the emergence of violent movements against oppressive governments and even liberal democracies should they fail in the delivery of political goods. Jack Snyder and Edward Mansfield demonstrated the cycle of political maturation for new democracies in their seminal work Democratization and War. The primitive politics of transitional democracies can drive elites to lowest-common-denominator appeals to win elections. Xenophobic or exclusionary election campaigns quickly turn into aggressive domestic and foreign policies. This phenomenon continues until voters abandon politicians who exploit primitive sentiment for those who articulate more realistic and attainable policies. As the political culture matures, politicians who run for office on familial or cultural-linguistic ties begin to lose out to those who have the capacity to address broader segments of the population. Picking up garbage, organizing effective schools, delivering health care and other public services, and providing economic opportunity begin to gain traction in electoral campaigns and become more important than the passions of identity politics or manufactured threats of exploitation by "outsiders." Inclusive rather than exclusionary appeals win out. Institutional failures or security challenges can overwhelm the capacity of an emerging government and literally collapse the state. Iraq and Afghanistan show the magnitude of security challenges that arise in transitions. In addition, political elites enjoy steep learning curves if the new democracy survives the early elections. Liberal democracy manifested in representative government should be viewed as a self-sustaining, self-replicating system for the evolution of political culture. Regular elections on the African continent illustrate beneficent cycles of change that occur even in the partially articulated institutions of less-developed countries. Staffan Lindberg in Democracy and Elections in Africa shows that "Repeated elections—regardless of their relative freeness or fairness—appear to have a positive impact on human freedom and democratic values . . ." by linking elections and civil liberties in the minds of voters and those officials charged with the administration of electoral processes. It is reasonable to assume that
these phenomena occur in all states transitioning to democracy: Citizens become voters; democratic mechanisms begin to "lock in" as citizens believe they have a vested interest in the government; more citizens and political leaders accept and play by the democratic rules, thus becoming a self-fulfilling prophecy; civic organizations become stronger by providing protection of civil rights and civil liberties; law enforcement and judicial officials are given a formal role in the protection of political rights; and the media begin to play a new role as "transmitters" of prodemocratic messages. Liberal democracy generates representative governments that deliver political goods while engendering broader tolerance, understanding, compromise, and acceptance of the rule of law and the views of others. Extreme social and political movements, especially those that gain power through the persecution or exploitation of minorities, retreat under the pressures of competitive politics to the deoxygenated margins of political culture. Groups or movements that use exclusionary or intolerant appeals in liberal democracies tend to fade in prominence and eventually lack the financial resources to compete in elections. Further Reading Dahl, Robert. "What Political Institutions Does Large-Scale Democracy Require?" Political Science Quarterly 120, no. 2 (2005): 187–197; Fukuyama, Francis. "The End of History?" The National Interest (Summer 1989): 76–91; Fukuyama, Francis. The End of History and the Last Man. New York: Avon, 1992; Huntington, Samuel P. "After Twenty Years: The Future of the Third Wave." Journal of Democracy 8, no. 4 (1997): 4–12; ———. The Third Wave: Democratization in the Late Twentieth Century. Norman: University of Oklahoma Press, 1991; International Institute for Democracy and Electoral Assistance. Handbook on Democracy Assessment. Stockholm, Sweden, 2002; International Institute for Democracy and Electoral Assistance. The State of Democracy: An Assessment in Eight Nations Around the World. Stockholm, Sweden, 2003; Lindberg, Staffan I. Democracy and Elections in Africa. Baltimore: Johns Hopkins University Press, 2006; ———. "The Surprising Significance of African Elections." Journal of Democracy 17, no. 1 (January 2006): 139–151; Mansfield, Edward D., and Jack Snyder. "Demo-
cratization and War." Foreign Affairs 74, no. 3 (May/June 1995): 79–97; Mansfield, Edward D., and Jack Snyder. Electing to Fight: Why Emerging Democracies Go to War. Cambridge, Mass.: MIT Press, 2005; Navia, Patricio, and Thomas Zweifel. "Democracy, Dictatorships, and Infant Mortality Revisited." Journal of Democracy 11, no. 2 (April 2000): 99–114; O'Donnell, Guillermo. "Horizontal Accountability in Modern Democracies." Journal of Democracy 9, no. 3 (July 1998): 112–126; Przeworski, Adam. Democracy and Development: Political Institutions and Well-Being in the World, 1950–1990. Cambridge: Cambridge University Press, 2000; Sen, Amartya. Development as Freedom. New York: Knopf, 1999; Zakaria, Fareed. The Future of Freedom: Illiberal Democracy at Home and Abroad. New York: W.W. Norton, 2004. —Robert E. Henderson
liberalism During the 1950s, influential writers such as Louis Hartz described liberalism as the central defining characteristic of American public philosophy. Since the 1980s, however, liberal has been a term employed derisively by conservatives to smear those on the political left as hopelessly out-of-touch bleeding hearts. The tradition of liberalism is commonly understood to go back to the Whig opponents of the British monarch during the 17th century, yet the word liberal was never used politically until the early 19th century. As these two sets of conflicting pictures reveal, liberalism is an immensely complex concept, and its uses have varied greatly over time. Identifying one single commonality to liberalism can be tricky. Nevertheless, it is an essential keyword, and in the historical arguments over who or what is appropriately liberal, we can see in microcosm the historical development of American political thought since the end of the Civil War. Liberal is a term with ancient roots, a translation of the Greek eleutherios, which, rendered literally, means "free." This word has been alternatively translated broadly as "civilized" or more narrowly as "generous." Its original usage in ancient Greece was in reference to the status of a free man, as opposed to women and slaves, as well as the capacity of one who is free to save and distribute his wealth, to generously
give money and exercise restraint in accepting it from others. Likewise, the liberal arts or a liberal education described the proper development of a free man. Indeed, until the 18th century, acting with liberality was seen as a male capacity. By the 18th century, however, the term had gained additional salience as a description of an individual who held free and generous opinions and whose mind was unhampered by prejudice, virtues that could be exercised by both genders. It was not until the early 19th century that the term liberal was used to designate a progressive political opinion. Originally, it was an insult uttered by British Tories at antiwar dissenters. It was meant to connote not so much a broadness as a laxness in principle and a foreignness of opinion—a reminder of the Spanish Liberales Party. To be a liberal, in other words, was to appear somehow un-British. Nevertheless, like many terms of ridicule, liberal was quickly adopted by those against whom it was aimed. Inevitably, by the second quarter of the 19th century, the neologism liberalism gained currency as a concept used to identify the political and theological doctrines of a liberal, which has given the term a far greater sense of coherence and unity than it, in fact, has. In the United States, the term liberal was a latecomer to the political vocabulary and did not become a familiar keyword until the arrival of the short-lived Liberal Republican Party (1872–76). At the heart of liberalism stands the individual; it is securing the liberty and personal autonomy of the individual that distinguishes liberal political philosophy from the communitarian aspirations of traditional conservatives on the right and socialists on the left. Liberalism is primarily a judicial mode of thought, and the protection of the individual is often conceptualized through the language of natural and civil rights. Liberty is a status that one has under a government limited by the rule of law. One variant of this juridical philosophy is the tradition of social contract theory, exemplified in John Locke’s Second Treatise on Government (1689). Locke imagined government and all social obligations as a result of a voluntary contract between free individuals who are born free and unencumbered by duties. Individuals are conceived as having been endowed by birth with natural rights and in a state of freedom, which is also a state of insecurity. Government is an artifice
created by free individuals who grant it some share of their natural authority in exchange for protection of person and property. Locke’s theory, conceptualizing government as a contract for mutual interest, has had an extensive influence in the United States, although republican and religious influences were also critical during the Revolutionary Era of the 1760s and 1770s. This contractarian perspective leads to several important conclusions. First, the basis of all legitimate government derives from the voluntary consent of the governed. Popular sovereignty rather than tradition or divine right is the basis for liberal governments. Second, governments are not conceived as natural, sacred, or inviolable institutions, but rather as tools created by and for individual human beings, and they are to be judged by their capacity to protect individual freedoms and capacities. Traditionally, this has meant limited government, protections for individual religious conscience, and tolerance for diversity in religious faiths and practices. A third point derives from the first two. While individuals grant government some of their natural authority, they retain other rights, most importantly the right to unmake and remake government when it fails. Thus, a liberal polity may embrace everything from a spirit of experimentation in governing structures and policies to the right of revolution, famously exemplified by the Declaration of Independence. Within this theory of limited government and individual rights are found a number of philosophical tensions. On one hand, liberalism is founded on the universal principle that all human beings are equal, at least from a political standpoint, and that all should have equal rights under the law. These egalitarian assumptions bear a proclivity toward democracy; popular sovereignty requires democratic institutions of decision making. On the other hand, Locke saw the establishment of government caused by the desire of individuals to have protections for their property. If government is designed to permit the enjoyment of property, then liberal theory permits a great deal of social and economic inequality in the name of the freedom of individuals to acquire and enjoy their property. It is revealing to note, for example, that when James Madison speaks of protecting the rights of a minority in Federalist 10, he is speaking of creditors and others with wealth. Liberalism does not nec-
essarily come out in favor of one side, either equality or individual liberty. Rather, liberalism provides the conceptual framework through which generations of Americans have debated (and tentatively resolved) how to balance these competing values. In the last half of the 19th century, concern for the legal conditions of personal autonomy grew to encompass the social and economic contexts in which individual freedom and growth flourishes or withers. The impetus for these considerations was the increasing power and influence of corporations and banks in the decades after the Civil War. During this period that Mark Twain dubbed the Gilded Age, large-scale economic and financial institutions grew to dominate the market and influence government officials, while the opportunities for smaller entrepreneurs, farmers, and workers shrunk. As wealth and industrialization in the United States increased, so, too, did the gap between the rich and the poor. Throughout the large industrializing cities of the United States, slums and ghettos spread, inhabited by poor workers who were often unhealthy and illiterate. Under such conditions, liberal reformers argued, talk of abstract freedom rang hollow. Liberal proposals attempted to improve workers’ health and education, support immigrant communities, and establish minimum wage and maximum hour laws as well as standards for cleanliness and safety in factories and other workplaces. Many of these measures were initially seen as voluntary and philanthropic, but increasingly liberals called for employing governmental powers, starting at the local and state levels. These reformist policies were developed in conjunction with a philosophical rethinking of the social nature of the individual and the conditions in which individual choices are made. A number of late Victorian British intellectuals, such as Thomas Hill Green, J. A. Hobson, and L. T. Hobhouse, raised the philosophical importance of these issues. The movement was called new liberalism, and these liberal philosopher-activists found American counterparts in the pragmatism articulated by William James and John Dewey. Pragmatism imparted an experimental and empirical take on the understanding of personal identity. Of significance is that these programs were not socialist in spirit, but rather they were animated by respect for the individual.
This new liberalism challenged the traditional concepts of limited government, as reformers excoriated the state’s neglect of the poor and encouraged the use of police powers and economic regulations to redress these needs. In response, many business leaders and their advocates proposed a stricter form of laissez-faire economic individualism. They argued that governmental intervention was an assault on personal freedom. They also evidenced a suspicion toward reform. While some argued that reform was utopian and bound to fail, others went further, arguing that social progress required free competition and that by helping the losers in this struggle, government made society weaker. When social Darwinist William Graham Sumner considered What Social Classes Owe to Each Other (1883), his bottom-line answer was nothing. The U.S. Supreme Court, for a time, embraced large parts of this economic liberalism, for example, by invalidating minimum wage and maximum hour laws as violating the freedom of contract. The conflict between social reformers and economic libertarians over the proper role of government in redistributing the burdens and benefits of capitalism was, in many ways, a “family” debate between protagonists who were in agreement over some basic liberal propositions, for example, that the freedom of the individual was the most valuable social goal for the government to protect. In other words, both sides can be designated as liberal, marking the breadth of the liberal doctrine and distinguishing participants in this debate from communists and socialists on the left and from traditionalist opponents of technological progress on the right. Nevertheless, the differences between these two schools of thought were fundamental to an industrializing culture, and subsequently, the more common use of the term liberal would be to identify progressive reformers, while advocates of a market free of government regulation became known as conservatives or libertarians. From the 1930s until the 1960s, the reformist tendencies of liberalism flourished as the ideological backdrop of American public policy. With the New Deal in the 1930s, the U.S. federal government passed laws and designed programs that intervened in the economic life of the nation like never before. Americans, struck by the hardships of the Great Depression, embraced an ideal of “positive liberty,”
in which personal freedom was protected from the extreme crashes of a free market system by a set of economic safety nets. The liberal vision of fairness and equality further expanded in the 1960s under the Great Society of President Lyndon Johnson, who led a “War on Poverty” and directed the passage of a number of important pieces of antidiscrimination legislation such as the Civil Rights Act of 1964. At the same time, supporters of laissez-faire economics, who were by now calling themselves classical liberals, warned that the development of a welfare state was a slippery slope toward socialism. Classical liberals argued that welfare gave the government too much authority in the lives of individuals, stifled entrepreneurship, and made recipients dependent. In the words of Friedrich Hayek, the welfare state led down the “road to serfdom” and was therefore inherently antiliberal. While the Great Society inscribed liberalism into the social policies of the federal government, the philosophical career of liberalism also received a shot in the arm as academic debates surrounding liberalism as a political and a moral creed grew in number and quality. At the heart of this resurgence in liberal theory were two texts each written by Harvard philosophy professors, John Rawls’s A Theory of Justice (1971) and Robert Nozick’s Anarchy, State and Utopia (1974). Rawls puts forth an expansive vision of social democracy. He uses a social contract experiment, asking individuals what type of society they would construct assuming they knew nothing about their own identity, interests, or status. Rawls argues that such rational actors would choose a society based on equal rights and committed to a system of redistributive justice, in which any social or economic inequality would benefit the least. Nozick’s libertarian response attacks the idea of a liberal state committed to distributive justice and formulates a portrait of a minimal state designed to protect the property rights of individuals. The debate between these two alternative visions of liberal justice reflects divisions within liberalism that have been evident since the Gilded Age. Scholarly criticism of liberalism has come from a number of different positions, including critical race theorists, feminists, and communitarians on both the left and the right. One common theme is that contemporary liberalism has painted so abstract a picture
of the rational and autonomous individual that it is incapable of actually reflecting the “situated self,” the actual human being located in concrete communities. Self-described liberal theorists have responded in a number of ways. Will Kymlicka, reflecting the legalistic spirit of liberalism, has articulated a sophisticated theory of minority group rights designed to protect cultures against the potentially corrosive effects of mainstream culture. Other liberals, such as Rawls in his later writings, have embraced a theory of “political liberalism,” arguing that liberal philosophy is based on political values and does not presuppose any particular metaphysical doctrine of the individual or of the good life. These recent debates reveal that the traditions and concepts of liberalism remain a vital part of American political thought. Ironically, the vibrancy of recent liberal philosophy has occurred at the same time the term liberal has reached its nadir as a political label. Starting in the 1980s, American conservatives began to use the term liberal to evoke images of a group of out-of-touch elitists all too willing to spend other people’s money. This period also marked a sharp upswing in conservative philosophy, combining free market values with a social reaction against the excesses of the 1960s counterculture. By the 1990s, the invectives had grown so powerful that many Democratic politicians refused to call themselves liberal, preferring to use the term progressive. It remains to be seen whether this avoidance of the “L word” reflects another vicissitude in the up-and-down career of liberalism or a permanent change in the American political vocabulary. Further Reading Arblaster, Anthony. The Rise and Decline of Western Liberalism. New York: Basil Blackwell, 1984; Dewey, John. Liberalism and Social Action. New York: G.P. Putnam’s Sons, 1935; Hartz, Louis. The Liberal Tradition in America. New York: Harcourt, Brace, & World, 1955; Hayek, Friedrich A. The Constitution of Liberty. Chicago: University of Chicago Press, 1960; Kloppenberg, James T. The Virtues of Liberalism. New York: Oxford University Press, 1998; Kymlicka, Will. Multicultural Citizenship: A Liberal Theory of Minority Rights. New York: Oxford University Press, 1995; Rawls, John. A Theory of Justice. Cambridge, Mass.: Harvard University Press, 1971; Rawls, John.
Political Liberalism. New York: Columbia University Press, 1993; Sandel, Michael, ed. Liberalism and Its Critics. New York: New York University Press, 1984. —Douglas C. Dow
market economy A market economy is an economic system based on the unregulated exchanges between buyers and sellers as the determinant of decisions about what to produce and what prices should be charged. The central distinction between a market economy and all others, such as centrally directed, or command economies, and mercantilist systems, is that governments or other outside agencies do not control the major decisions of economic life. Adam Smith’s classic work, An Inquiry into the Nature and Causes of the Wealth of Nations, published in 1776, is the first theoretical study of market economies. Smith argues that market economies are more efficient than attempts by a government or a king to control economic life. Because markets are more efficient than other ways of organizing economic life, they produce greater good for more people and are therefore morally as well as economically superior. A central question in any economic system is how much something is worth, that is, what is its value. The value of something you do not have, such as a particular item of clothing or an hour of a person’s work, might be based on its usefulness to you. For example, on a cold day, a warm coat is worth more to you than a lightweight one. The value of something you do have might be based on how much time and energy it took you to make it. If you have spent hours and hours writing a song or creating a video, you might feel that it is worth quite a lot. In a market economy, the value of something is expressed as its price. How much something is worth is determined by the price at which someone is willing to sell it and someone else is willing to buy it. The market price is not controlled by people who want to sell something nor by people who want to buy something. The market price is the result of buyers and sellers bargaining with each other until they make a deal. A different way to think about the worth of something is to use a set of ethical values, morals, or beliefs
that might be used to determine the value or price of something. For example, some people might think that an hour spent teaching children to read contributes more to society than an hour spent playing golf or making music videos and is thus worth more. In economies that are not based on market principles, those who hold such beliefs might try to impose them by regulating the wages paid to people. The basic question of the worth of something then becomes a political issue to be resolved by control of the government or the distribution of power. In a market economy, the value of something is ultimately determined by what people are willing to trade it for, the price they are willing to pay to buy it, or the price at which they will sell it. This leads directly to the principle of supply and demand as the determinant of what gets produced at what price across the entire economy. Producers create a supply of things and try to sell them at a given price, while buyers demand some number of things at a particular price. When the simple interaction of somebody looking to sell something and somebody looking to buy something is repeated time after time, a market exists. Markets are balanced and in equilibrium when the supply of things at a given price equals the demand for those things at that same price. If the supply of goods being offered at a particular price and the number of buyers willing to pay that price are not balanced, then people will be motivated to change. In the short run, if there are more items for sale at a given price than people want to buy, sellers will have to lower their prices. A seller who cannot become more efficient and make a profit selling items at the lower price will go out of business and be replaced by a new entrepreneur who can. In the short run, if there are more people who want to buy than there are items to buy, prices will go up. In the longer run, this imbalance in the market will encourage producers to make more of the items people want. Since people are free to compete, if someone can figure out a less expensive way to make a product, then he or she can cut the price, sell more of the product, and still make a good profit. If someone figures out a way to produce something that does not currently exist but that people will want when they see it, then he or she will create a new product. People already selling products in the market are not going to sit idly by while newcomers take away their
business but will try to improve their own products or lower their prices. The resulting competition among sellers tends to create efficiency and variety and improve quality. The market rewards efficient and clever producers and weeds out the inefficient and those slow to change their products. In the most common alternatives to a market economy, politicians or bureaucrats decide what will be produced, in what varieties, and at what price. This can easily lead to inefficient businesses being allowed to survive or to an oversupply of things that people really do not want and an undersupply of things they do want. Markets determine how many people do what kinds of jobs by the same interaction of supply and demand. People who need to hire other people to do a particular job offer a given level of wages. If people who have the skills and qualifications to do that job are willing to work for that wage, they take the job. If the people who can do the job are not willing to work for the offered wage, then the employer will have to raise the offer until the wages are enough to get the number of people needed for the job. Following the basic insight of Adam Smith and many other economists, the interaction of employers and job seekers, the supply of jobs at a given wage, and the supply of willing workers will balance out in the long run. Over time, the market will produce the most efficient and fairest distribution of jobs and salaries. In a market economy, the primary roles of government in economic life are seen as maintaining a level playing field, guaranteeing that promises are kept and contracts are honored, and protecting property rights. If either buyers or sellers in the market collude and try to affect supply or demand or manipulate prices, then the market will not function efficiently nor produce the best outcome. If people cannot be held accountable and be required to keep their promises, then contracts are useless. Only a government has the authority to act as a referee and the power to enforce its decisions. Market economies in the real world do not always perform perfectly for a variety of reasons. Among the most prominent market failures are those that occur when the market takes too long to correct itself and those that result from what is known as the underproduction of public goods.
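The price adjustment described earlier in this entry can be sketched in a few lines of code. The following is only an illustration: the linear supply and demand schedules, the starting price, and the adjustment step are invented for the example and are not drawn from any particular market.

```python
# Minimal sketch of the bargaining process described above, using hypothetical
# linear schedules: buyers demand less as the price rises, sellers offer more.

def quantity_demanded(price):
    return max(0.0, 100.0 - 2.0 * price)

def quantity_supplied(price):
    return max(0.0, 10.0 * price - 20.0)

price = 2.0    # arbitrary starting price
step = 0.01    # how strongly the price reacts to any imbalance

for _ in range(10000):
    gap = quantity_demanded(price) - quantity_supplied(price)
    if abs(gap) < 0.01:      # supply and demand are (nearly) balanced
        break
    price += step * gap      # excess demand pushes the price up; excess supply pushes it down

print(f"approximate equilibrium price: {price:.2f}")
print(f"quantity traded at that price: {quantity_supplied(price):.1f}")
```

Run with these made-up schedules, the sketch settles at a price of about 10, where the quantity buyers want equals the quantity sellers offer, which is the equilibrium the entry describes.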
While Adam Smith argued and most American economists agree that in the long run the market produces the most efficient and best results, there can be serious consequences for people in the short run. For example, as supply and demand for goods and services moves toward the most efficient equilibrium, they can be seriously out of balance in the short run, producing alternating patterns of boom and bust and runaway inflation alternating with recession or even depression. Private goods are things that can be consumed exclusively by an individual who has bought them. Public goods are things whose use or consumption cannot be limited to specific individuals who have paid for it. For example, the car you drive is a private good. You own it and can control who uses it. But the road you drive on is a public good. Some people helped pay for that road with their taxes, others did not, but anyone can use it. National defense is a public good. For example, people who pay taxes pay for the troops and weapons that keep the nation safe from outside attack, but people who do not pay their taxes and do not help pay for the national defense are still protected. This can easily lead to the underproduction of public goods. If you ask individuals, they would much rather spend their money on things they can own and consume by themselves instead of things they have to share with everyone else. Left to their own devices, people would spend far less than is needed on things such as roads, national defense, public education, and so on. It is expected that government will intervene to correct market failures, to compensate for the worst effects of boom and bust, and to provide public goods by collecting taxes and spending them. The major debates about the role of the government in economic policy that divide conservatives, liberals, and libertarians revolve around the related questions of whether the market economy has, in fact, failed in a particular instance and, if it has, how the government can intervene in the most efficient and effective way to restore the market. Debates about when and how government should intervene in a market economy to correct failures are common issues in American politics and presuppose that the market is the best way to organize economic life. Outside mainstream American economics and outside the United States, the fundamental assump-
tion that markets create the most efficient and most moral results is directly challenged by Marxism and by many socialist theorists. Islamic philosophical and economic thought is often skeptical of unbridled markets, as are major thinkers in the Buddhist, Confucian, and Hindu traditions. See also capitalism. Further Reading Boyes, William, and Michael Melvin. Economics. 6th ed. Boston: Houghton Mifflin, 2005; Schiller, Bradley R. Essentials of Economics. 6th ed. Boston: McGraw-Hill, 2007; Smith, Adam. An Inquiry into the Nature and Causes of the Wealth of Nations. Chicago: University of Chicago Press, 1976. —Seth Thompson
nationalization Nationalization is the taking of private assets by a state. Usually, nationalization involves the state taking over a specific industry, such as the oil, railroad, or energy industry. Such state ownership stands in marked contrast to the private enterprise, or free market, approach so widely practiced in the contemporary context. Nationalization emerged as a practice in the 20th century and was typically undertaken in the name of social and economic equality; it was often seen as a principle of communism or socialism. Nationalization became an attractive option for many nations in the post–World War II era as fears of price gouging, cartels, monopolies, and exploitation of workers and resources became important political issues in Europe and elsewhere. Nationalization is often undertaken because the state believes that some greater public purpose is at stake and that state ownership will serve it. For example, following World War II, Eastern European states nationalized all industry and agriculture. Common practice in noncommunist countries often follows the principle of eminent domain when nationalization occurs, which means that companies are compensated, at least in part, for assets seized by the state. However, in communist regimes, in which private ownership is opposed in principle, compensation usually does not occur. Often, foreign properties have been nationalized in underdeveloped
nations where resentment of foreign control of major industries runs high. Usually, but not always, when the state takes control of an industry or business, the private enterprise is compensated for the state takeover. Some forms of nationalization, such as when a nation seizes the assets of a business owned by a foreign entity without compensation, may cause an international incident, as the home country of the nationalized business may be called upon to come to its aid. For example, in 1938, Mexico nationalized several foreign-owned businesses, including major oil properties. Then-U.S. secretary of state Cordell Hull demanded that compensation for the state takeover be "prompt, adequate and effective." But Mexico, along with a number of other developing countries, often took a different view, arguing that the exploitation of a country's resources by a foreign private entity did not merit compensation. In 1962, the United Nations General Assembly adopted a resolution stating that when the nationalization of an industry or business occurs, the private owner "shall be paid appropriate compensation in accordance with international law." While this resolution established the principle that compensation was due, it did not go so far as to establish a procedure to guarantee due or market-level compensation. Sometimes a state will take over an important or large business enterprise when that business is in financial trouble and on the verge of bankruptcy. If a business is essential to national security or employs a large number of workers who would otherwise become unemployed, the state may see an interest in taking over that failing business and putting it under state control. The British government's takeover of the British Leyland car maker in the 1970s was just such an effort. In fact, there was a wave of nationalization in Great Britain in the post–World War II era, as the government took over British Coal, British Gas, British Petroleum, British Rail, British Steel, and a host of other industries. Examples of nationalization in the United States included the Tennessee Valley Authority's 1939 takeover of the privately owned Tennessee Electric Power Company and, more recently, the creation of the Transportation Security Administration in 2001 following the
September 11, 2001, terrorist attacks, which nationalized the privately owned airport security industry. In some cases, if certain industries perform at a subpar standard, pressure can be put on the government to reprivatize many of the same industries. Privatization is the reverse of nationalization: It marks the private ownership and control of a business or industry. In the 1980s, a wave of reprivatization occurred in Great Britain and elsewhere. This occurred because many of the nationalized industries did not perform up to industry standards; many were also perceived as inefficient or as failing to serve the public interest. Also, as socialism and communism became discredited in the late 1980s and as market capitalism swept the globe, the momentum behind capitalism and private ownership became too powerful for most states to resist. However, many developing nations believe that their natural resources, workers, and national integrity are being exploited by large commercial interests from abroad that come in, take control, and limit the sovereignty of the nation. Can small, less-developed states resist these takeover ventures? Or are the power of the West, the force of the market, and the demands of consumer publics too powerful to resist? Most of the smaller and less-developed states feel unable to resist these forces even when they may want to. The dominant ideology, the rules of the game, and the reward and punishment mechanisms of the market are too powerful to fight. Thus, they either learn to make an awkward peace with the forces of market capitalism, or they pay a heavy price. As globalization proceeds, these forces may become even more powerful, leaving the sovereignty of small and less-developed states vulnerable to market forces and regime rules that may run counter to their interests or desires. In the past 20 years, nationalization has become a largely discredited notion. The wave of privatization so prevalent in the 1980s was only the beginning of a transformation that led to the rise of the ideology of market capitalism that swept the globe in the aftermath of the fall of communism. Adherence to the ideals of free market capitalism now dominates in many countries, and because of this, few states would risk the political or economic fallout that would result if they tried to nationalize industries.
Therefore, even if a state wished to nationalize a particular industry, such a nationalization would be unlikely to prove politically viable, and the international rules of the game might make its overall cost prohibitive. This puts many nations in a kind of prison, whereby the self-punishing mechanism of the market limits their flexibility and narrows their political as well as their economic options. Internationally, the current political and economic trend clearly favors market capitalism, free enterprise, and private ownership as the regime model for the modern economic system, although it remains to be seen how long this trend will last. Globalization has propelled states to embrace market capitalism or be left behind. However, one notable recent exception has been the move to nationalize various industries, such as oil and communications, in Venezuela under the rule of President Hugo Chávez. The possible economic impact on various American corporations in these two fields has made this a prominent political issue for the United States in its diplomatic relations with Venezuela and other countries in the Latin American region. Further Reading Reid, Graham L., and Kevin Allen. Nationalized Industries. Harmondsworth, U.K.: Penguin, 1970; Sclar, Elliott D. You Don't Always Get What You Pay For: The Economics of Privatization. Ithaca, N.Y.: Cornell University Press, 2000; Tivey, Leonard, ed. The Nationalized Industries since 1960: A Book of Readings. London: Allen & Unwin, 1973. —Michael A. Genovese
newly industrialized countries The term newly industrialized countries (NICs) refers to a loose category of countries that have experienced some sustained economic development over the past two decades. A country’s economy can be based on what economists refer to as the primary, agricultural, sector; the secondary, industrial, sector; or the tertiary, service, sector. The trajectory of economic development moves from reliance on agriculture to greater reliance on industry to greater reliance on the service sector. The primary sector of the economy is based on producing things: food and raw
materials. The industrial sector is focused on producing tangible goods and products. The service sector largely produces intangible things, such as knowledge, ideas, technology, managerial skills, and entertainment. In much of the world, agriculture means small-scale, often subsistence, farming, a form of economic life that relies heavily on human labor and does not produce a great deal beyond what is needed for people to survive. As an industrial sector develops, the emphasis shifts from growing crops to manufacturing products in factories and to a growing use of technology. Agrarian societies tend to have very small markets, and many people trade and barter for what they need rather than use money. As societies industrialize, markets expand, more people work for wages, and the population tends to shift from rural villages to urban centers. The next stage of development is a shift from a primary focus on the industrial sector to a growing emphasis on the service sector of an economy, which often includes very sophisticated technology. Some countries today have advanced capitalist economies. Countries such as the United States, France, Britain, and Germany are marked by growing reliance on the service sector as an economic base, widespread use of sophisticated technology, and the highest standards of living. These are the countries often referred to as the first world, the Global North, or the Global Rich. At the opposite end of the spectrum of economic development are the countries identified by the United Nations as having experienced the worst economic performance in the past two decades and as having the least prospects for future growth. These countries are categorized as the least developed, or the fourth world. The rest of the world, except for the former members of the Soviet bloc, is referred to as the third world or the Global South. This is the largest and least homogeneous group of economies in the world, including most of Asia except Japan, Latin America, Africa, and the Middle East (except for the few large oil-exporting countries in the latter region). Many of these countries have had some degree of success in beginning to develop their economies by shifting away from agriculture to a growing level of industrialization in the past two decades. Hence, they are newly industrialized countries.
Some examples will clarify the differences between advanced capitalist economies, newly industrialized economies, and fourth world states. The examples will also illustrate the breadth of the category “newly industrialized” and the differences among such countries. Comparing levels of employment in industry and agriculture will illustrate the differences in economic base. The gross domestic product per capita (GDP/cap) is a standard way of measuring and comparing standards of living in different countries. South Korea, Mexico, and Egypt are all newly industrialized countries, although there are important differences in their levels of economic activity and standards of living. The United States is a good example of an advanced capitalist economy, and Bangladesh will serve as an example of a fourth world country. In South Korea, which is now the 11th-largest economy in the world, only 6.4 percent of the labor force works in agriculture, while 26.4 percent works in industry. South Korea has been one of the most successful of all the newly industrialized economies and boasts GDP/cap of $20,000. Mexico is another newly industrialized country, although not as successful as South Korea. Some 18 percent of the Mexican workforce is employed in agriculture, while 24 percent is in the industrial sector. The Mexican GDP/cap is $10,000. Egypt is developed enough to be counted as a newly industrialized country, but there is a substantial gap between the South Korean, Mexican, and Egyptian levels of development. In Egypt, agriculture continues to employ 32 percent of the workforce, and industry only 17 percent. The Egyptian GDP/cap is $3,900. What these countries share is progress in shifting from agriculture as the most important part of the economy to an increasingly important industrial sector. Standards of living have risen, sometimes a great deal and sometimes only marginally. Despite good years and bad years, the general trend has been for improved economic performance. Bangladesh, one of the least-developed countries, and the United States, an advanced capitalist country, provide sharp contrasts to the three newly industrialized countries. In Bangladesh, 63 percent of the work force has jobs in the agricultural sector, and only 11 percent in the industrial sector. In keeping with the low productivity of agriculture in most of the world, the Bangladesh GDP/cap is only $2,100, and there
has been no sustained growth. In sharp contrast, a GDP/cap of $40,000 puts the United States near the top of the advanced capitalist economies in terms of standards of living. The fact that approximately 23 percent of all American workers are in the industrial sector is less important than the fact that only 0.7 percent of the American labor force is in agriculture. The standard of living, as measured by GDP/capita, has increased steadily. There are two basic strategies that countries have pursued to achieve newly industrialized status: the Asian model and what is referred to as a neoliberal strategy. The Asian model has produced the “Asian tigers” of South Korea, Taiwan, Singapore, and Hong Kong, as well as “lesser tigers” such as Malaysia and Indonesia. Mainland China has pursued a similar strategy. The Asian model focuses on export-led development, that is, relying on a literate and controlled labor force to produce consumer goods to be shipped to consumers in North America and Europe. The strategy relies on strong governmental direction of economic activity, including active recruiting of foreign investment, protection of the local economy from foreign competition, and restrictions on political activity. The strategies included in the neoliberal model are typically prescribed and supported by the advanced capitalist nations, international institutions such as the International Monetary Fund and the World Bank, and many professional economists. They include a strong emphasis on creating a market economy with minimal government intervention, privatizing industries and some government functions, removing restrictions on foreign investment, sharply reducing government spending and taxes, and emphasizing export-oriented industries, particularly those that can take advantage of relatively low labor costs. Whereas the Asian model emphasizes the role of the government and political actors, the neoliberal model emphasizes economic factors and market forces. This has been the predominant model for economic development for most of the world outside of Asia. The newly industrialized countries have been a major focus of U.S. foreign economic policy. As the largest consumer economy in the world, the United States has been a critical market for the exports from countries that are trying to industrialize and develop.
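The country comparison above can be restated compactly. The short sketch below simply re-encodes the approximate figures quoted in this entry (shares of the labor force and GDP per capita) and labels each economy by whichever sector employs more workers; the labels are illustrative shorthand, not an official classification.

```python
# Approximate figures quoted in this entry: percent of the labor force in
# agriculture and industry, and GDP per capita in U.S. dollars.
countries = {
    "South Korea":   {"agriculture": 6.4,  "industry": 26.4, "gdp_per_capita": 20000},
    "Mexico":        {"agriculture": 18.0, "industry": 24.0, "gdp_per_capita": 10000},
    "Egypt":         {"agriculture": 32.0, "industry": 17.0, "gdp_per_capita": 3900},
    "Bangladesh":    {"agriculture": 63.0, "industry": 11.0, "gdp_per_capita": 2100},
    "United States": {"agriculture": 0.7,  "industry": 23.0, "gdp_per_capita": 40000},
}

for name, figures in countries.items():
    # A crude marker of the shift the entry describes: industrialization
    # means industry employs more of the labor force than agriculture does.
    label = "industry-led" if figures["industry"] > figures["agriculture"] else "agriculture-led"
    print(f'{name:<14} GDP/cap ${figures["gdp_per_capita"]:>6,}  '
          f'agriculture {figures["agriculture"]:>4.1f}%  industry {figures["industry"]:>4.1f}%  {label}')
```

By this crude marker, South Korea, Mexico, and the United States employ more people in industry than in agriculture, while Egypt and Bangladesh do not, echoing the substantial gaps in development described above.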
American consumers’ purchases of a wide range of products, from toys to clothing to highly sophisticated electronics, have been a major force for economic success; lack of access to American and European markets has been a crippling obstacle. The fact that Americans have bought more from the newly industrialized countries than they have sold to them has provided stimulus and capital development. While economists disagree about the extent of the outsourcing of jobs and its long term consequences for the United States, when production or other jobs are moved out of the United States, they are relocated to newly industrialized countries. When America buys more from the newly industrialized countries than it sells to them, a trade deficit is created. This has been a continuing problem for the United States. On both a country-by-country basis and in international settings such as the World Trade Organization, the United States has strongly supported measures to open the domestic markets in the newly industrialized countries for American businesses. The promotion of democracy has long been an important goal of U.S. foreign policy. The relationship between economic development and democracy remains controversial among political scientists and economists who study global development. There are some cases, such as South Korea and Taiwan, in which sustained economic development played a significant role in creating the conditions that led to the emergence of stable democratic regimes. There are also cases in which development has not led to increased democratization. There are cases in which relatively democratic regimes were causes of economic development and cases in which democratic regimes did not do well in promoting economic growth. At a minimum, the relationship is complex. Further Reading Bergston, C. Fred, ed. The United States and the World Economy: Foreign Economic Policy for the Next Decade. Washington, D.C.: Institute for International Economics, 2005; Goddard, C. Roe, et al., eds. International Political Economy: State-Market Relations in a Changing Global Order. Boulder, Colo.: Lynne Rienner, 2003; Siebert, Horst. The World Economy. London: Routledge, 2002. —Seth Thompson
North American Free Trade Agreement (NAFTA) The North American Free Trade Agreement (NAFTA) is an agreement between the United States, Mexico, and Canada to expand economic activity by making it easier for a company located in one of the three countries to do business in the other two. This is to be accomplished by drastically reducing or eliminating tariffs (taxes) on goods shipped between countries and coordinating national and local regulations. Since its inception in 1994, NAFTA has been at the center of controversy. Free trade areas and their more elaborate relatives called common markets are intended to spur economic growth and job creation by allowing businesses to treat all the citizens of two or more countries as a single economic unit. In most of the global economy, national borders contain and constrain trade and commerce. Governments use a number of devices, from tariffs (taxes on foreign goods) to rules and quotas, to regulate international trade. Some of those strategies are aimed at achieving fairness and balance. For example, if wages in one country are markedly lower than in its neighbor, a government may intervene to make sure the country is not swamped by artificially low-priced imports. Alternatively, a government may institute tariffs or other barriers to trade with another country because the home country’s industry is very inefficient and its products cannot compete fairly against imports. The dividing line between intervening to make things fair and intervening to protect a weak and inefficient but politically potent industry is often extremely fuzzy. Regardless of motive, tariffs and other barriers to trade make it more expensive to do business between two countries, introducing what economists call transaction costs. This is widely believed to reduce the overall level of economic activity and raise prices for consumers. A free trade area calls for the elimination of almost all barriers to trade between two or more countries. Governments will agree to drop tariffs against industries in each other’s countries (and they may also agree to raise barriers to companies located outside the free trade area). Governments also agree to coordinate and standardize their labor laws and environmental regulations to facilitate trade and commerce across their borders. This was the logic that led the United States to pursue negotiations with Canada
and Mexico beginning in the 1980s to create a North American Free Trade Area. Even as negotiations were being completed, NAFTA became an issue in the 1992 presidential election. Third-party candidate Ross Perot made opposition to NAFTA a centerpiece of his campaign against incumbent president George H. W. Bush and the Democratic challenger, Bill Clinton. Perot charged that the primary effect of NAFTA would be the loss of jobs for Americans, and he often referred to “the giant sucking sound” of jobs moving south to Mexico. Perot was not the only critic of NAFTA, but in 1993 Congress approved the plan. Proponents of NAFTA cited overall growth in the economies of the United States, Canada, and Mexico and better products at lower prices for consumers. They also argued that there might be some short-term job losses in all three countries as inefficient companies went out of business, but that lost jobs would be quickly replaced by new jobs that were created as a result of expanding economies and growing demand. NAFTA opponents raised a number of objections. The argument that NAFTA would cost jobs was made most often in the United States by critics who pointed out that a number of American manufacturing jobs had already moved to Mexico to take advantage of significantly lower wages and that the removal of barriers to imports from Mexico would accelerate that process. At the same time, some critics in Mexico worried about the ability of small Mexican farmers to compete against American agricultural imports. American and Canadian labor unions made a related argument when they objected to the fact that the NAFTA treaty did not include explicit guarantees for minimum labor standards and rights to join unions. A third line of argument against NAFTA came from American and Canadian environmentalists who decried the lack of environmental standards in the treaty and the danger that national environmental standards would be lowered or eliminated when the three countries negotiated common rules and standards. A final objection was more explicitly political: the loss of sovereignty. One of the hallmarks of a state in the modern international system is sovereignty, the right of each state to govern itself as it sees fit without external interference. This argument was made by citizens in all three countries, but most strongly in Mexico, where the fear was expressed that the government and
citizens of Mexico would be forced by the United States and Canada to dramatically reform their economy and adopt a host of new laws. The Zapatista rebellion in the Mexican state of Chiapas was launched on January 1, 1994, the day that NAFTA came into force, and fears of economic domination and exploitation by transnational corporations have figured prominently in the movement's anti-NAFTA, antiglobalization, and antipoverty rhetoric. In response to some of the criticisms, the governments of Canada, Mexico, and the United States negotiated two comprehensive supplements to the original NAFTA treaty. The first was the North American Agreement on Environmental Cooperation. This agreement not only committed the three countries to maintaining environmental standards but also set up an institution for coordinating environmental issues and a source of financing for environmental projects, particularly in Mexico. The second side agreement was the North American Agreement on Labor Cooperation. This agreement was designed to encourage the three countries to cooperate on resolving disputes over labor standards and to work toward convergence in national labor laws. After more than 10 years of experience with NAFTA, the controversy has abated but not disappeared. Objective, scientific assessment of the impact of NAFTA on the Canadian, Mexican, and American economies is difficult because of the differences among the three countries and because of the difficulty of attributing specific effects to a single cause. The first challenge to assessment is the difference in the size of the three national economies. The value of Canada's economy is $1.4 trillion, Mexico has an economy of $1.07 trillion, and the U.S. economy, at $12.4 trillion, is the largest in the world. Whatever impact an agreement such as NAFTA has on the United States will be relatively smaller than its impact on Canada or Mexico. The three countries also differ in the extent to which foreign trade contributes to their overall economic status. Canada and Mexico are relatively similar: In 2004, foreign trade contributed 62 percent of the Canadian gross domestic product (GDP, the standard measure of the size of an economy) and 58.5 percent of the Mexican GDP. Given that much reliance on foreign trade and the fact that the United States is the most important trading partner for both Canada and Mexico, the effects
of NAFTA are likely to be more pronounced than in the United States. In 2004, foreign trade amounted to only 20 percent of the U.S. GDP. Canada and Mexico ranked first and second as sources of imports to the United States and as markets for U.S. exports. These differences make it very difficult to generalize about the impact of NAFTA on the overall economic health of each country. A second major problem in trying to weigh the effect of NAFTA on its members’ economies is the fact that national economies are very large and very complex systems. Economic performance is affected by a wide range of variables, from global developments to government policies to the decisions of companies both large and small and the decisions of millions of individual consumers. Even skilled economists armed with very sophisticated tools of statistical analysis do not agree on the impact of single events or variables. NAFTA has not inspired a host of followers, as some had hoped. The recently created Central American Free Trade Area is a much smaller economic unit than NAFTA and brings together even more dissimilar countries. While the United States has strongly advocated a free trade area of the Americas that would cover most of South America, it has met with a chilly reception from some key Latin American players. The United States has had more success with the more modest goal of negotiating free trade pacts with individual countries, such as Chile. While NAFTA remains a source of controversy for specific issues, the best assessment may be that it has been neither as beneficial as its most ardent proponents hoped nor as detrimental as its staunchest opponents feared. For most Americans, Canadians, and Mexicans, the most salient aspect of NAFTA may be the fact that products traded under NAFTA rules are labeled in English, French, and Spanish. Further Reading Acheson, Keith, and Christopher J. Maule. North American Trade Disputes. Ann Arbor: University of Michigan Press, 1999; Hakim, Peter, and Robert Litan, eds. The Future of North American Integration: Beyond NAFTA. Washington, D.C.: Brookings Institution Press, 2002; Hufbauer, Gary Clyde, et al. NAFTA Revisited: Achievements and Challenges. Washington, D.C.: Institute for International Eco-
nomics, 2005; Schott, Jeffrey. Free Trade Agreements: U.S. Strategies and Priorities. Washington, D.C.: Institute for International Economics, 2004. —Seth Thompson
North Atlantic Treaty Organization (NATO) The North Atlantic Treaty Organization (NATO) was a quintessential product of the cold war, and its original purpose was aptly summarized by a British official who claimed that NATO existed to keep the United States in, the Soviet Union out, and Germany down. Such a characterization may be an anachronism in a post-Soviet world no longer animated by customary East-West polarities, but NATO was indeed established as a bulwark against Soviet expansionism in Europe and a guarantee against German military resurgence. As such, its mission and direction have been thrown into question, and its role in the geopolitical evolution of 21st-century Europe is unclear, as is its ultimate relevance in a global diplomatic theater that has transcended and redefined traditional relationships and affinities. Since the last round of enlargements in 2004, NATO has comprised 26 states from Europe and North America. Its headquarters is located in Brussels, Belgium, and its membership has grown considerably over the past decade. Contrary to what had been the case for most of its history, NATO now includes members from eastern and central Europe and is, thus, no longer an exclusive club for the United States, Canada, and their western European allies. Principal political authority lies with the North Atlantic Council, a deliberative decision-making body that acts through consensus instead of voting, thereby ensuring unity of purpose and strategic coordination for NATO initiatives. NATO is led by a secretary-general, who, as head of the North Atlantic Council, represents NATO in dealings with states and other international organizations. The alliance's political structure is complemented by a unified military command structure, which is led by American military personnel but supported by a staff that represents all member countries. Despite NATO's obvious political role, it is, first and foremost, a military alliance. NATO exists to protect its members from attack by common enemies and to ensure the physical integrity of their territories.
The famous article 5 of the North Atlantic Treaty confirms that because an attack on one member will be considered an attack on all, the collective security of NATO countries depends on an appropriate response to military aggression by their enemies. Although article 5 does not specifically mandate a military response, the treaty clearly expresses the signatories’ intent that NATO serve as a defense structure that leverages the military capabilities of its members. Above all, the historical setting from which NATO emerged demanded a military-based organization through which the security and territorial integrity of its participants could be ensured. NATO was created in 1949 out of circumstances that may seem foreign to most Americans. In many ways, it was both a relic of a prewar European mentality that accepted the inevitability of conflict among great powers and a product of a postwar mindset that viewed conventional warfare as an increasingly ineffective, if not obsolete, means of achieving political objectives. From a geopolitical perspective, two goals seemed paramount to Western policy makers, particularly in the United States and the United Kingdom, immediately following World War II. First, the reconstruction of Europe, both physically and politically, had to be secured in a way that would promote the establishment and maintenance of long-term stability and prosperity in Western European states. Second, and just as significantly, European leaders needed to create an international, or at least regional, structure of some sort that would prevent the outbreak of future wars in Europe. The importance of both objectives was self-evident to contemporary politicians, inasmuch as stable democracies in Western Europe devoted to the implementation of free market principles and international cooperation appeared to be the keys to a minimization of conflict and the prevention of mutual aggression. On both counts, that is, domestic political stability and international cooperation, the United States faced numerous challenges throughout Europe during the mid- to late 1940s. Internal political problems in countries such as Greece, Turkey, and Italy disturbingly illustrated that support among Europeans for ideologies, policies, and goals opposed by the United States was comparatively high in certain regions and that geopolitical alignment with the United States and its long-term trajectories could be
problematic. In addition, because of the spread of communist aggression and the fear of rejuvenated German militarism, visions of international cooperation and a reduction of mutual hostilities among European states seemed unworkable. As it turned out, both of these dilemmas, namely the difficulty of establishing pro-American, democratic regimes devoted to free markets in devastated regions and the implausibility of securing international comity and collaboration in Europe, were linked to the broader dilemma of Soviet expansionism after the war. The United States and its Western European allies may not have shared unified aspirations regarding the political evolution and geopolitical alignment of postwar Europe, but they were unified in their apprehensions about perceived Soviet aggression and the growth of Soviet influence and power in Europe. Most Western European nations eventually accepted the reality that, along with the prospect of German rearmament, the threats posed by Soviet power were potentially the most destabilizing and destructive.
Within a few years following the end of World War II, America’s European allies accepted the necessity of creating a substantial defense mechanism through which the safety and security of Western Europe could be assured. By 1948, through the Treaty of Brussels, a core group of five Western European countries laid the foundations for future cooperation by formulating an effort to control a possible German resurgence. Although this treaty was confined largely to issues dealing with potential German rearmament, its signatories became convinced of the need to erect a similar yet more potent organ for the provision of anti-Soviet defense. The Europeans were operating on the premise that Europe had, by this time, become irrevocably divided between a pro-Soviet, communist East and pro-American, democratic West. In other words, NATO would be based on the assumption that such a geopolitical division was a fait accompli and that any resulting military alliance should be a response to this inherent division. (This is especially significant with respect to
NATO’s role in today’s world, since such a division is no longer a standard feature of European geopolitics.) From the standpoint of the European states, U.S. membership in any anti-Soviet defense organization was a prerequisite, inasmuch as American military might was absolutely indispensable to deter or repel potential Soviet initiatives in the West. Obviously, the United States was also eager to secure European participation in a defense structure of some kind, not least because of prevailing balance-of-power theories and the fear that any losses in Europe would produce a redistribution of power between democratic and communist forces that favored the Soviet Union. American and European objectives for the protection of Europe and North America from communist aggression, and, to a lesser extent, German resurgence, were expressed through the North Atlantic Treaty, which was signed by the 12 founding members in April 1949. The founding members included the five signatories to the Treaty of Brussels, which were Britain, France, and the Benelux countries (Belgium, the Netherlands, and Luxembourg); the United States and Canada as representatives of the western end of the Atlantic alliance; Denmark and Norway from the historically vulnerable Scandinavian territories; Portugal and Italy as strategically located outposts of antifascism and anticommunism in regions surrounded by antidemocratic regimes; and Iceland, which was permitted to join the alliance without a standing army. Spain was excluded because of the Franco dictatorship, the inherent uncertainties of Germany’s and Austria’s political situations disqualified them from inclusion, and others, such as Switzerland and Sweden, adhered to neutralist policies that precluded participation in military alliances. By the early 1950s, the political climate in Greece and Turkey had stabilized, so that in 1952 these geopolitically critical states became members of NATO. As a counterweight to Soviet hegemony in the Balkans, eastern and central Europe, and also parts of the Arab Middle East, Greece and Turkey provided NATO with strategic options that enabled the alliance to check Soviet expansion and influence beyond established limits. In addition, by the mid-1950s, a long-term political settlement had been consolidated in West Germany, and it joined NATO in May 1955.
The most immediate consequence of West German membership was the emergence of the Warsaw Pact later that same month, a phenomenon that formalized, through these duly sanctioned oppositional military pacts, the existence of the cold war. The only other country to join NATO prior to the post-Soviet enlargements starting in the 1990s was Spain in 1982. On the whole, NATO’s membership during its first 50 years reflected its origins as an anti-Soviet military alliance designed to prosecute the cold war and protect Europe from the expansionism and aggression of antidemocratic forces in the East. Although NATO’s primary geopolitical aims seemed unambiguous from the start, the means of realizing those aims were often disputable. The particular diplomatic objectives of its individual members frequently subverted the overall strategic and operational priorities of NATO as a whole, and the political gamesmanship among some states occasionally undermined the unity that was necessary for practical consensus and operational consistency. Perhaps the most significant division within NATO was one that paralleled diplomatic developments within the European Community (EC) and associated debates among EC members regarding the future of European foreign policy and defense strategy. EC foreign policy controversies centered on the question of whether the diplomatic and military viability of Europe would best be promoted through a continental strategy that revolved around a FrancoGerman geopolitical axis or an Atlantic one that optimized the leadership and capabilities of the United States and exploited the “special relationship” between the United States and Britain. Likewise, a rivalry of sorts appeared within NATO that ultimately pitted a francophilic, Gaullist continentalism against an Anglo-American Atlanticism. This evolved into a formal split when, in 1966, France formally withdrew from NATO’s unified military command structure, and it highlighted a tension that has waxed and waned over the decades but has never subsided. Some of this tension can be ascribed to French jealousy and President Charles de Gaulle’s arrogant obstinacy in the face of American demands about French military and diplomatic compliance, yet the bulk of it has profoundly deeper roots and can be linked to an intrinsic Franco-American incompatibility that manifested itself not only within NATO
but also in other areas that required cooperation between the two nations, such as Vietnam, the Middle East, and western Europe. Despite increased friction between the United States and France over the last few years as a result of controversial policies pursued by the George W. Bush administration, France has become increasingly cooperative and even compliant since the mid-1990s, but U.S. and French visions of a post-Soviet world will not be reconcilable any time soon, at least not from a long-term normative perspective. These differences and related ones notwithstanding, during its first 40 years, NATO displayed remarkable solidarity and unity of purpose. After all, the cold war provided the organization and its members with an enviably uniform and uniquely definable set of objectives based on an ostensibly predictable system of geopolitical relationships according to which friends and enemies could be readily identified. In the end, although the particular manifestations of NATO policies and strategies varied from member to member, as did the benefits derived from those policies, the overarching objective of defeating or at least containing Soviet communism and authoritarian aggression was relevant and meaningful to all members. So the definability and predictability of the cold war rendered NATO's purposes and operational tasks correspondingly definable and predictable, and most organizational dissent was inherently obviated or vitiated through the commonality of those purposes. With respect to purpose and organizational unity, the collapse of the Soviet Union and the dissolution of the Warsaw Pact confronted NATO with a crisis of conscience that it has been unable to address. Geopolitical logic appeared to dictate that the end of the cold war and the resulting irrelevance of cold war diplomatic paradigms demanded the abrogation of the North Atlantic Alliance and the dismantling of defense structures intended to fight the cold war. Without a doubt, an institution such as NATO, whose sole reason for being sprang from its almost single-minded ability to secure western Europe against Soviet-inspired aggression, seemed ill-equipped to survive in a geopolitical environment devoid of Soviet influence or a Soviet-based diplomatic or military foe. Nevertheless, NATO has somehow survived, though it has hardly thrived.
Since 1991, NATO has maintained its Atlantic focus by preserving the fundamental tie between North America and Europe, but the European part of the alliance has shifted its locus of activity eastward. By admitting former Warsaw Pact countries and erstwhile Soviet republics among its ranks, NATO has undermined some of the cultural and geographic solidarity that characterized the pre-1999 alliance. As a result, despite the support some of the eastern countries have demonstrated for U.S. policies, NATO policies have been undermined or weakened by the historical divisions that plagued the development of postwar Europe, especially the EC (and European Union). In addition, the involvement of former Soviet satellites with NATO has alienated Russia, Ukraine, and Belarus, producing new geopolitical polarities between pro-Russian and anti-Russian forces in a region that is far from stable. Furthermore, according to many NATO observers, the accession of former Soviet satellites contradicts NATO’s avowed support for democratic government, inasmuch as the commitment to (and the fate of) democratic governments in these one-time dictatorships is indeterminate at best. Regardless of these limitations and concerns, NATO has endeavored to transform itself and to redefine its purpose beyond the comparatively narrow limits imposed on it by the cold war. As such, its mission has expanded to include geopolitical theaters outside Europe that appear central to European diplomatic and military interests. For instance, NATO has been spearheading operations in Afghanistan. Plus, NATO has enhanced its political role by increasing its diplomatic presence in various arenas and slowly shifting its practical focus to include greater predeployment capabilities. In fact, some scholars believe that NATO’s continued viability lies in its readiness, willingness, and ability to morph from a military alliance into a diplomatic one. As the political and economic integration of Europe has grown, not least through the emergence of the European Union (EU), and the geopolitical independence of Europe as a formidable economic and political global competitor has been secured, the institutional structures and organizational mechanisms of the European Union have increasingly served as both a counterweight and alternative to NATO’s decreasingly relevant traditionalism. Viewed
from another perspective, the EU and its more continental foreign policy, along with related initiatives to build an EU-specific defense capability, have appeared to offer a more logical and natural response to European defense and foreign policy needs than would be possible under a comparatively obsolete structure such as NATO’s. Over time, the competition between NATO and the EU will inevitably increase, as will the amount of redundancy between them, and should the recent continentalist trend continue to prevail over the customary Atlanticist one, the case for NATO’s ongoing utility and relevance will be a difficult one to maintain. See also diplomatic policy. Further Reading Gaddis, John Lewis. The Cold War. New York: Penguin Books, 2006; Howorth, Jolyon, ed. Defending Europe. New York: Palgrave Macmillan, 2004; Kaplan, Lawrence S. NATO Divided, NATO United. Greenwood, Conn.: Praeger Paperbacks, 2004; LaFeber, Walter. America, Russia, and the Cold War, 1945–2002. Boston: McGraw-Hill, 2002; Sloan, Stanley R. NATO, the European Union, and the Atlantic Community. New York: Rowman & Littlefield, 2002; Udell, Gregory F. Principles of Money, Banking, and Financial Markets. New York: Addison Wesley Longman, 1999; Woods, Ngaire. The Globalizers. Ithaca, N.Y.: Cornell University Press, 2006. —Tomislav Han
Organization of Petroleum Exporting Countries (OPEC) The Organization of Petroleum Exporting Countries (OPEC) is an international organization whose members act as a cartel to try to manage the price of crude oil on world markets through coordinating their production. OPEC was created in 1960 by Iran, Iraq, Kuwait, Saudi Arabia, and Venezuela to try to counterbalance the dominance of global oil markets by the so-called Seven Sisters, the largest transnational oil companies. The founding members were ultimately joined by eight other countries, all of whom were less developed countries whose economies were dependent on revenues from oil sales in the global market.
In the 1960s, most of the world market for oil was controlled by an oligopoly of seven transnational corporations: Standard Oil of New Jersey (Esso), Royal Dutch Shell, British Petroleum, Standard Oil of New York (Socony), Texaco, Standard Oil of California (Socal), and Gulf Oil. The companies owned the right to drill and extract oil from countries, the shipping and transportation networks that brought the oil from wellheads to refineries, and the shipping and distribution networks that moved products from the refineries to the local gas station. The role of oil-rich third world countries was limited to passively collecting royalty payments from the companies at a rate largely determined by the companies themselves. Since the same corporation controlled the process from the time the oil left the ground until the time it was pumped into a motorist’s gas tank, costs and profit could be assigned to different stages of the process to reduce taxes or royalties owed to different governments. The creation of OPEC was intended to give governments more control over their own national resources. The price of a barrel of oil on the world market is affected by the quality of the oil and its location. Crude oil varies in the mix of chemicals and contaminants, especially sulfur, it contains and by the quality of the oil itself, both of which affect how easily and inexpensively it can be refined. The highest-quality crude oil with the fewest contaminants and least sulfur content is described as light and sweet. Oil from Saudi Arabia tends to be particularly light and sweet and is used as a benchmark. Oil from other countries is priced higher or lower than oil from Saudi Arabia depending on the quality of the oil, differences in costs of production, and the cost of transporting it to refineries and consumers. For most of its first decade of existence, OPEC was ineffectual. The members lacked the information and expertise to manage oil production inside their own countries. The royalties paid to individual countries tended to be held as secrets by the transnational corporations and the countries themselves, so that national governments did not necessarily know if they were getting a better or worse return than their neighbors or other producers on other continents. The world market for oil over the past century has been cyclical, with periods of abundant supply alternating with shorter stretches of scarcity. During OPEC’s first
decade, there was plenty of oil available, and OPEC’s members produced far less oil than did some of the major consuming nations, particularly the United States. OPEC members had no leverage, and the organization was inconsequential. The situation changed dramatically in the late 1960s and early 1970s. First, several countries, beginning with Libya, started negotiating contingent royalty payments from the transnational corporations, with the amount one government would receive linked to payments to other governments. This gave governments of producing countries a little leverage to try to negotiate their returns upward. The second change was the most critical. A number of Arab countries, most of them also members of OPEC, had formed the Organization of Arab Petroleum Exporting Countries (OAPEC) and during and immediately after the 1973 war between Israel and its neighbors, sharply reduced their production of crude oil and imposed a boycott on the United States and Western Europe because of their support for Israel. The resulting sudden drop in the world oil supply at a time of growing demand led to sharply increased prices for oil and oil-based products. The boycott induced a shortage of gasoline in the United States that led to several months of long lines at gas stations and emergency procedures such as permitting gas purchases only on alternate days. By the time the boycott was ended, the balance of power in global oil had shifted, and OPEC emerged as a dominant player. Since then, OPEC members have transformed the world oil business by dramatically increasing their level of expertise and involvement in the actual production process in their countries, taking more direct control of their oil fields and reducing the role of transnational oil companies. In the global oil industry, as in other industries based on natural resources, the greatest profits come from processing and refining raw materials. OPEC member governments have become far more involved in what are termed “downstream” activities such as transporting crude oil, refining it, and distributing gasoline and other petroleum-based products to end users. Today, there are 11 members of OPEC: Algeria, Indonesia, Iran, Iraq, Kuwait, Libya, Nigeria, Qatar, Saudi Arabia, the United Arab Emirates, and Venezuela. The OPEC international headquarters is located in Vienna, Austria. In 2005, average world oil produc-
tion was 84 million barrels per day. OPEC members produced 34 million barrels, about 40 percent of the world’s total; in comparison, the United States produced about 10 percent of the total. But OPEC members exported about 78 percent of all the oil they produced, while the United States consumed all it produced and had to import around 12 million barrels of oil each day. The members of OPEC rely on revenues from their oil exports to finance their national budgets. For some OPEC members, oil is a major source of national income for investing in development as well as meeting annual government expenses; for others, it is virtually the only source of national income. Each member thus has a powerful need and desire for higher oil prices. At the same time, OPEC members are well aware that the higher the price of oil, the more likely oil consumers are to reduce consumption by finding alternatives to oil or by increasing the efficiency with which they use oil. OPEC members understand that if the price goes too high, it will undermine economic growth in the rest of the world. The impact of high oil prices on the richer countries of western Europe and North America can be measured in slower economic growth; the impact on less-developed countries is typically far more severe and may even lead to economic declines. OPEC long ago recognized that it and its customers shared an interest in a stable and predictable oil market. The basic political division in OPEC has been between price hawks and doves. Price hawks are more interested in maximizing income through high prices and are those countries whose national budgets are stretched thin by the demands of a large and growing poor population and the need for long-term investment in economic development projects. The price doves are those countries, such as Saudi Arabia, Kuwait, and the United Arab Emirates, whose populations are much smaller compared to the amount of oil exported and whose national budgets are under far less stress. The doves tend to be more interested in the long-term stability of the market and the health of their customers’ economies. The oil or economics ministers of OPEC meet every six months to assess the oil market and set a target price. Each member is assigned a quota, which is a fixed number of barrels of oil that it can sell on the market. The target price and national quotas are
typically balanced between the hawks’ preference for immediate returns and the doves’ preference for a stable and sustainable market. OPEC, like all international organizations, is a creature of its members and cannot directly enforce national quotas or punish nations who sell more than they are entitled to. In the past, Saudi Arabia has played a unique role in OPEC as a “swing producer.” Saudi Arabia is unique within OPEC because of two factors. First, the kingdom’s income from oil is very high, and its population is relatively small. The national budget is not under a great deal of stress, and government expenditures can be quite flexible if necessary. Second, Saudi Arabia typically has excess capacity, that is, the ability to produce more oil per day than its quota. Those facts have allowed Saudi Arabia to increase production when the global market was getting too hot and prices were rising too high or to decrease production when the market weakened and prices threatened to fall too low. The Saudis have also used the threat of ramping up production and lowering market prices to try to deter cheating on quotas by price hawks. Saudi Arabia’s ability to play the role of swing producer has declined in the 21st century as world demand has grown sharply, OPEC’s share of the world oil market has declined, and Saudi Arabia itself has begun to experience greater demands on its national budget. The relationship between the United States and the members of OPEC is one of complex interdependence. The United States needs oil from OPEC, and the health of the American economy is affected by both the price of oil and the reliability of supplies. However, OPEC countries are not the largest source of oil imports to the United States. For example, in November 2006, non-OPEC countries provided 60 percent of all U.S. imports, with Canada and Mexico together accounting for almost 30 percent of U.S. imports. Of the 40 percent of oil imported from OPEC members, most of that came from Saudi Arabia (at nearly 12 percent), Venezuela (at nearly 10 percent), and Nigeria (at 7.5 percent). Oil imports are a major contributor to the American trade deficit with the rest of the world. At the same time, OPEC in general, and some of its largest producing members in particular, rely heavily on the United States as a market. For example, exports of oil to the United States account for almost 19 percent of Saudi Arabia’s income from interna-
tional trade, half of Venezuela's international exports, and a little less than half of Nigeria's. The health of their economies is very dependent on the economic well-being of the United States. Even OPEC members who do little or no business with the United States find their economic health dependent on the American economy because of the central role of the U.S. dollar in the global oil markets. The price of oil is set in world markets each day in terms of dollars, and OPEC members typically expect their customers to pay in dollars, even when the dollar is not the customer's own national currency. OPEC members, like most countries in the world, tend to hold dollars as their national reserve currency, which is, in effect, their national wealth. Thus, even the staunchest price hawks who sell no oil to the United States find that they have a vested interest in the state of the American economy. This complex interdependence has several effects on U.S. foreign policy. It helps explain the close working relationship on both economic and security issues between the United States and Saudi Arabia, two countries that on the surface might seem to have little in common. It also explains important dimensions of the U.S. relationship with countries such as Nigeria and Indonesia. The relationship between the United States and Venezuela during the presidency of Hugo Chavez has been marked by increasing strains and pointed accusations on both sides. But the fact that the United States relies on Venezuela for about 10 percent of its oil imports, which amount to more than half of Venezuela's trade earnings, and the fact that Venezuela owns CITGO, a major refiner and distributor of petroleum products in the eastern United States, dramatically complicates the political equation for both governments. OPEC's share of the global oil market has declined over the past two decades as producers such as Russia, Canada, and Mexico have played a larger role. But OPEC production is critical to world supplies, and the ability of some key members of OPEC to increase their oil production over the next few years will keep the organization at the center of petroleum economics and politics. OPEC will continue to affect the daily lives of Americans and will continue to be a focus of U.S. foreign policy. (General economic statistics are drawn from the CIA World Factbook at http://www.odci.gov/cia/publications/factbook/index.html. Data on oil production, imports and exports
come from the Energy Information Administration of the U.S. Department of Energy at http://www.eia.doe.gov.) See also energy policy. Further Reading Falola, Toyin, and A. Genova. The Politics of the Global Oil Industry: An Introduction. Westport, Conn.: Praeger, 2005; Parra, Francisco. Oil Politics: A Modern History of Petroleum. London: I.B. Tauris, 2004; Sampson, Anthony. The Seven Sisters: The Great Oil Companies and the World They Shaped. New York: Viking Press, 1975; Yetiv, Steve. Crude Awakenings: Global Oil Security and American Foreign Policy. Ithaca, N.Y.: Cornell University Press, 2004. —Seth Thompson
social democracy Social democracy is a term used to describe a political movement made up of union workers, farmers, and other nonelites that demands collective decision making in the social, economic, and educational institutions of a nation. It can also refer to the ideals and values of this movement and to the policies passed in its name. Social democrats seek to ameliorate the extreme effects of capitalism and social inequality on the poor, but without challenging the fundamental system of private property and entrepreneurship. Therefore, while staying within the framework of a market economy, social democracy challenges the classical defenses of a free market by expanding liberal ideals about political equality into areas traditionally designated as private and off limits to democratic accountability, such as the marketplace and the family. Whereas classical liberals see democratic participation as limited to voting for political representatives, social democrats expand the sphere of collective decision making to include social and economic institutions as well as the private familial sphere. Social democracy stands somewhere between the poles of laissez-faire capitalism and socialism. While establishing policies of economic regulation and worker protections, social democracy falls short of socialist demands for economic centralization or state ownership of major industries. The proper
ends of government are redescribed by social democrats to include the material welfare of its citizens, arguing that the conditions of individual autonomy and effective citizen participation presuppose a level of economic security and an equality of opportunity that transcends empty legal promises. To achieve true democracy, social democrats argue, requires addressing the highly unequal power gap that exists between workers and managers and owners and investors. Such an inequality that had broad effects on the personal autonomy of the poor could not be papered over by the illusion of legal equality, or any longer ameliorated by a further expansion of the West. It was the innovation of early European social democrats to break with Marxist communism and argue that a workers’ revolution could be achieved without overturning the state, but rather by transforming it from within by organizing political parties and using the advantages of universal suffrage. For more than a century, social democracy has been a strong and permanent presence in the developed parliamentary democracies of western Europe. Major parties such as the Social Democrats in Germany and Sweden and Labour in the United Kingdom have successfully represented the interests of workers in parliaments both in and out of governing majorities. Social democratic parties in the United States have been present since the Gilded Age of the late 19th century, yet their presence has been far more tepid than their western European cousins. More commonly, as a result of the U.S. two-party system, social democratic aspirations are adopted by the progressive wing of one of the two major parties. Since the 1920s, that party has usually been the Democrats, although some early 20th-century progressives, such as Robert La Follette, were Republicans. Significantly, while some American social democrats have reached out to the writings of Karl Marx and Ferdinand Lasalle, the most successful bids for social democracy found in progressivism, the New Deal and the Great Society, do not have deep genealogical ties to Marxism or European socialism. Nevertheless, the United States has always had numerous unique cultural aspects sympathetic to social democratic values. The traditional place of prominence given to the individual farmer and the
small proprietor in the American imagination were used as a wedge to attack the rise of large banks, railroads, and corporate monopolies during the Gilded Age. As well, religion has played a strong role in the American version of social democracy. The social gospel movement of the late 19th century involved many of the same middle-class Protestants who made up the backbone of the Progressive movement. Repulsed by the poverty caused by industrialization, these evangelicals sought a more egalitarian polity based on Christian values and rejected the competition and the worship of Mammon so prevalent in the Gilded Age. While among white Protestants this movement declined along with progressivism, its influence was still visible two generations later as African-American church leaders such as Martin Luther King, Jr., took leadership in the Civil Rights movement. Social democratic sympathies may also be seen at the center of American ideology, liberalism. For example, the radically egalitarian values expressed in such canonical texts as the Declaration of Independence have been used to attack racial discrimination and always left open the possibility of a democratic monitoring of social goods as requisite for a true pursuit of happiness. The rise of social democracy lies at the intersection of an expanding franchise and a changing capitalist system. In Europe, revisionist socialists such as Edward Bernstein saw the opportunity for workers to use their newly acquired right to vote as a tool for entering government, using the state as an instrument for reform rather than trying to overthrow the state by revolution. Similarly, in the United States, each expansion of the franchise—first to poor white men, then (fleetingly) to African-American men after the Civil War, to newly arriving European immigrants, and eventually to women—has led to new demands made in the name of democratic equality. Soon after the Civil War, small farmers founded the Greenback and Populist movements to strike back at banking interests, and labor organizations took on more prominence by organizing strikes and demanding better wages and work conditions. The American social democratic movement flowered during the first two decades of the 20th century, with a number of prominent national spokespersons. On the strength of his “Cross of Gold” speech, the democratic party put forward
William Jennings Bryan as its presidential candidate four times between 1896 and 1908. In 1901, Eugene V. Debs founded the Socialist Party, gathering together in one place many of the disparate parts of labor and trade unions as well as nonrevolutionary socialists. This era saw the spread of urban settlement houses, such as Jane Addams’s Hull House in Chicago, and the founding of journals of progressive thought, such as The New Republic under Herbert Croly. Perhaps the high point of the Progressive movement came during the presidential elections of 1912, in which three of the four candidates for the office (Woodrow Wilson, Theodore Roosevelt, and Eugene V. Debs) ran as progressive reformists of one kind or another. As diffuse as the Progressive movement was, there were several factors that gave early 20th-century social democrats a common identity. They broadly adopted pragmatic perspectives, embracing the use of social experiments to reduce waste, poverty, and ignorance and to keep competition alive. Unlike the New Deal reformers, who established national policies, progressive reformers tended to look to state or local governments for solutions, those “laboratories of democracy,” as Louis Brandeis called them. Progressive policies called for labor laws that instituted minimum wages and maximum hours and laws to do away with child labor. They sought out corruption in government and called for the eradication of poverty and the establishment of progressive taxation. Progressives also campaigned for greater public support for education and other means of developing the mental and physical capacities of citizens. Reform was supposed to provide not just material benefits but an ethical renewal of a democratic spirit. It also expanded the tasks of government toward managing the economy, price stabilization, and brokering conflicts between labor and capital. However, this brief success fell away rapidly. World War I, the Russian Revolution and the first domestic Red Scare, and a conservative-leaning U.S. Supreme Court all helped precipitate the decline of the Progressive movement in American politics, but not before the U.S. Constitution was twice amended to enact two progressive policy goals: women’s suffrage and Prohibition. Both were monuments to the progressives’ vision of rejuvenating the
ethical character of American democracy by bringing in virtuous woman and expelling demon rum. Social democratic movements would not return to the political center stage until the New Deal. However, once President Franklin D. Roosevelt and the Democratic Congress began tackling the problems of the Great Depression, the goals of social democracy had evolved, growing more centralized and more regulatory. No longer would it be adequate to promote fair competition and bust monopolies. The New Deal marked a new and highly experimental moment in American politics in response to overwhelming popular demands for the national government to intervene in defense of a sick economy. Roosevelt set about to use national planning and governmental spending in order to stabilize capitalism and make it fairer for the workers who had the least say. The federal government propped up the failing banking and lending organizations, insuring deposits with federal cash. It regulated agricultural and industrial production and growth and subsidized loans to farmers. It established minimal standards for industrial workers and instituted social security. Most controversially, the New Deal adopted Keynesian economic techniques, using deficit spending on large public works projects and seeking not just to assist the needy but to actually employ them. However, if New Deal era reforms were broader than those of the Progressive Era, they were also more conservative. The reforms of the 1930s were primarily administrative and did not possess the strong push toward social and personal transformation that so characterized the efforts of progressives. Rather than using the government as a catalyst for social change, the government adopted a new role of “broker state,” brokering labor-management disputes that had led to so much violence and conflict in the past. Despite attempts such as Roosevelt’s second bill of rights, the New Deal reforms did not result in a transformation of social relations. Reformers always had to fend off charges that their policies were a form of socialism or communism and thus un-American. Part of those challenges came not just from Republicans, but from the southern conservative wing of the Democratic Party. Nevertheless, as a result of the cautious approach of the New Deal Democrats, these new federal administrative powers were accepted by
President Dwight Eisenhower’s administration during the 1950s. The greatest use of governmental power in pursuit of social democratic goals was President Lyndon B. Johnson’s Great Society. The Great Society and Johnson’s War on Poverty were partially inspired by Michael Harrington’s The Other America (1962), which brought to light the existence of a seemingly permanent underclass in the United States who were not benefiting from the prosperity brought about by the postwar boom. Armed with a strong mandate in the 1964 presidential elections and a solid Democratic Congress, Johnson pushed through a flurry of legislation, signing into law programs meant to bring about greater equality of opportunity. Many programs, such as Job Corps and Head Start, focused on training and educational opportunities. Medicare and Medicaid brought affordable health insurance to millions, and a food stamp program was meant to do away with hunger in America. Johnson recognized that poverty and inequality in the United States was tied to race. As a result, Johnson’s War on Poverty was conjoined with his ambitious civil rights agenda, including legislation forbidding racial discrimination not only at the polls but in the workplace, in housing opportunities, and in public accommodations. The most important philosophical defense of the principles of redistributive social democracy in the postwar era was John Rawls’s A Theory of Justice (1971). Using a variation of the liberal contractual model for establishing governmental institutions, Rawls asked what kind of social rules for distributing social goods would individuals choose if they were unaware of what their position in society was. He went on to argue that under this “veil of ignorance,” people would select a system based on a broad distribution of equal rights and liberties and the equal opportunity to compete for positions of social advance, and that any system of inequality would have to benefit the least well off. It is somewhat ironic, however, that Rawls’s classical liberal defense of social democratic policies was published at the very time that movement was slowly being eclipsed by a resurgence of both progressive and conservative political values. The Johnson administration found itself challenged on both its left and right. The 1960s saw the rise of the New Left, which,
despite its name, took much of its ideas from the Progressive Era. Embodied by organizations such as Students for a Democratic Society (SDS), youth leaders sought to bring forth a commitment to a cultural renewal of egalitarian values. Building upon the Progressive Era, the 1960s also brought about the second wave of politically active feminism. The New Left criticized welfare programs for not going far enough in redistributing the benefits of American prosperity and for the degree of personal surveillance brought about by social workers administering welfare programs. Additionally, many civil rights leaders, such as King, broke with the liberal foundations of the Great Society during the late 1960s by questioning whether the great racial and economic divides in American society could be bridged merely through a focus on civil rights. Many white voters felt that the benefits of the Great Society were going disproportionately to racial minorities. Republican candidates such as Richard Nixon, who won the presidency in 1968, successfully played up this racial element, using what was called a “southern strategy” to further fracture the Democratic coalition by dividing white southern Democrats from the national party. Emphasizing moral individualism, conservatives argued that governmentsubsidized safety nets weakened the incentives for personal savings and hard work. Welfare made people lazy and helped the undeserving poor. Indeed, the very term welfare took on a negative connotation in public speech. Libertarian voices attacked the economic wisdom of governmental regulation. They argued that high taxes led entrepreneurs and other owners of capital to take flight and invest in countries where building and maintaining a factory were cheaper due to lower employment costs and less regulation. Many white blue collar workers abandoned the Democratic Party during the 1970s and 1980s at a time when the strength of labor unions and the centrality of manual labor itself were on the wane. Also important to the decline of social democracy has been the impact of globalization and international economic competition, which have created a labor market that is international rather than national. As a result, there has been both a decrease in the value of centralizing labor-management bargaining at the state level and a potential race to the bottom as
the third world competes with the first world for investment. Globalization makes companies less dependent on home nations. Indeed, globalization rips at one of the founding premises of social democracy, that an economy is primarily national and controllable by the central government and that it is possible for sovereign governments to effectively manage in the national interest their nation’s slice of the world economy. The election of President Ronald Reagan in 1980, who famously argued that government was the problem and not the solution, marked the decline of social democracy as both a political movement and as a set of governing ideologies and helped bring about a revival of libertarian free market ideology, antagonistic to government regulation. The 1980s also saw the rise of the evangelical movement as a political force. This important movement had broken from the populist social gospel tradition of the Progressive Era and instead of pursuing social justice, preached personal salvation and traditional social values. By the 1990s, Democratic president Bill Clinton pledged to end “welfare as we know it” and signed into law a plan that significantly shrank aid to the unemployed. Social democracy as an ideal has never disappeared, but in the early 21st century its political fortunes have dimmed significantly. Further Reading Dewey, John. The Public and Its Problems. New York: Henry Holt, 1927; Fraser, Steve, and Gary Gerstle, eds. The Rise and Fall of the New Deal Political Order, 1930–1980. Princeton, N.J.: Princeton University Press, 1989; Harrington, Michael. The Other America: Poverty in the United States. New York: MacMillian, 1962; Kloppenberg, James T. Uncertain Victory: Social Democracy and Progressivism in European and American Thought, 1870–1920. New York: Oxford University Press, 1986; Laslett, John H. M., and Seymour Martin Lipset, eds. Failure of a Dream? Essays in the History of American Socialism. Berkeley: University of California Press, 1984; Rawls, John. A Theory of Justice. Cambridge, Mass.: Harvard University Press, 1971; Wright, Anthony. “Social Democracy and Democratic Socialism.” In Contemporary Political Ideologies, edited by Roger Eatwell and Anthony Wright Boulder, Colo.: Westview Press, 1993. —Douglas C. Dow
socialism Socialism is a model of political economy centered on public control of significant means of production, stressing regulation of markets and reduction of material inequalities in order to expand possibilities for the free development of each individual. The modern movements for socialism have been driven primarily by the economic interests of workers and have been seen by socialists since the time of Karl Marx to be linked to class struggles between direct producers and appropriators of economic surpluses. While the movements for socialism are unquestionably modern in origin, their fundamental elements were clearly present in the politics of the ancient world. Class struggle was well known to the ancient Greeks, for whom the most basic fact of political life was the ongoing rivalry between the demos (the ordinary working people) and the aristos (the land-owning elite). Attempts at constructing political solutions to the problems of class inequality were also recorded by ancient scholars and historians. In his Politics, Aristotle briefly described an effort by Phaleas of Chalcedon to prevent social conflict through the equal distribution of property. In Rome during the second century b.c., spiraling levels of material inequality led to two ill-fated attempts at land redistribution by Tiberius and Gaius Gracchus. Both were assassinated by opponents of their land reform plans. Thomas More’s Utopia (1516) stands as the first early modern meditation on the correction of social ills through the elimination of private property. The members of More’s imaginary island community share all possessions equally, wear identical clothing, and trade houses on a regular basis to avoid either attachment or jealousy. But while More’s vision of a society without property anticipates certain elements of the modern socialist critique of material inequality, its primary focus is on the ownership of personal possessions. The modern socialist movement, by contrast, would increasingly turn its attention to the forms of economic power rooted in the control of productive property. An early indication of this shift can be seen at the close of the English civil war, as Gerrard Winstanley and the Diggers occupied and cultivated rural waste grounds, building four small communes between 1649 and 1651. Stiff resistance from local land owners quickly ended the Diggers’ experiments in communal living. In his published
works, The New Law of Righteousness (1649) and The Law of Freedom (1652), Winstanley continued to promote the argument that social equality and economic betterment for the working class could be achieved only through common ownership of land. Like the Diggers, Robert Owen and Charles Fourier held that socialism would take root in small, intentionally established rural communes. Owen began his career as the manager of a textile mill in Manchester, England, during the early years of the Industrial Revolution, then purchased his own factory at New Lanark, Scotland. There he implemented a series of reforms: housing for workers was improved, regular garbage collection was introduced, child labor was banned, and working hours were decreased. In 1825, Owen purchased land in Indiana for the founding of New Harmony, a commune based on his designs for “Villages of Cooperation.” Initially, 800 residents moved to New Harmony, but the experiment survived only three years. Fourier’s vision of a socialist community was first articulated in The Theory of the Four Movements (1808). Fourier maintained that communal villages (or “phalanxes”) should be established, on which residents would live, work, and equally divide the proceeds of their labor. During the 1840s, a Fourierist movement developed in the United States that established communes in Massachusetts, New Jersey, and Colorado. As with the Owenite experiment at New Harmony, none lasted more than a few years. At the beginning of the 19th century, a broader vision of socialist transformation was being developed by Claude Henri de Rouvroy Saint Simon. A member of the French aristocracy (who as a young man participated in the American Revolution), Saint Simon was an early exponent of the idea that industrialization would demand a sweeping reorganization of social and political life. Scientists, industrialists, and artists would plan and coordinate social order for the benefit of all. In The New Christianity (1825), Saint Simon argued for society to be restructured in such a way as to improve the lot of the poor. Yet, like Owen and Fourier, his vision of socialism was unconnected to the nascent political movements of the industrial working class. This element of socialist theory would come to the fore in the work of Karl Marx, whose understanding of history was premised on the notion of class
struggle as the primary driver of political change. Yet, while Marx was a key participant in the development of a European socialist movement, his work contains no blueprint for socialism. Unlike the earlier generation of utopian socialists, Marx and his coauthor Friederick Engels held that socialism could not be meticulously planned ahead of time but would emerge from the concrete development of its historical precursor, capitalism. Marx’s work does make clear, however, three aspects of socialism he believed to be of particular importance. First, socialism would be built not in small, rural communes but nationally and internationally through the achievement of working class political power. Second, though its technical procedures could not be rigorously specified in advance, socialism would involve public control of significant means of production. Third, the transition to socialism would begin with a lower stage of development in which material inequalities would continue to be patterned by differences in labor contributions before proceeding to an upper stage in which production and distribution would be based on the motto “From each according to ability, to each according to need.” Marx’s vision of socialism presupposed a high level of technological development and took as its central aim the reduction of time spent working to meet basic needs, so as to expand the time and resources available to pursue interests and desires. At the end of the 19th century, the connection between socialism and working class political power was well established, but the precise path to be taken toward the achievement of both ends remained a topic for debate. The Russian socialist leader Vladimir Lenin held that under the repressive conditions of czarist Russia, a socialist transformation would require a revolutionary seizure of the state. In western Europe, by contrast, mass socialist parties increasingly pursued a legal, parliamentary route to power, outlined by Eduard Bernstein in Evolutionary Socialism (1898). But while such parties now began to contest elections and win seats in their national parliaments, the looming threat of war between European states challenged their expressed commitment to international working class solidarity. The decision in 1914 by most European socialist parties to support their national war efforts resulted in the eventual formation of rival communist parties by antiwar internationalists.
Both factions were represented in the United States. Between 1901 and 1918, the Socialist Party of America competed successfully in state and local elections in New York, Wisconsin, and Oklahoma. In 1912, Eugene V. Debs captured 6 percent of the popular vote as the Socialist Party’s candidate for president. Convicted under the Sedition Act in 1918 for opposing U.S. entry into World War I, Debs continued his campaign for the presidency from prison. In 1919, mirroring the split in the European socialist movement, a communist faction broke from the Socialist Party to form the Communist Party of the United States of America. Despite being the target of government repression during the 1920s and 1950s, the Communist Party played vital roles in the organization of the Congress of Industrial Unions (CIO) and the movement for African-American civil rights. Nonetheless, the American socialist parties remained marginal in comparison to the size and influence of their European counterparts. Scholars offered a variety of hypotheses to explain the phenomenon of American exceptionalism. Frederick Jackson Turner proposed that the American West’s open frontier fed individualist dreams rather than class solidarity. Werner Sombart suggested that the high standard of living enjoyed by American workers gave them little reason to seek radical political or economic change. Louis Hartz maintained that having been founded on the liberal principles of legal equality and representative government, the United States lacked the deep-seated class divides that drove the European socialist movement. Despite some early predictions that they would vanish entirely after the collapse of the Soviet Union in 1991, socialist organizations remain a part of the American political landscape, albeit a minor one. Two debates regarding elements of the socialist agenda are particularly important to both broader questions of political economy and more focused issues in public policy. The first concerns the problem of incentives. Advocates of market economies argue that the ability to accumulate private property ensures social productivity by overcoming the disincentive to labor. Lacking such an incentive, socialist economies (or nonmarket economic mechanisms) will tend toward stagnation. Though no socialist theorist since More has suggested the creation of pure equality or the elimination of personal property,
socialist thought does generally advocate a reduction of material inequality. Yet, while unequal rewards can unquestionably act as incentives to labor, the relationship between rewards and incentives may be subject to declining marginal increases. Higher levels of statistical inequality in national economies, for example, do not necessarily correspond with higher levels of productivity. A second debate considers the capacity of nonmarket mechanisms to cope efficiently with complex economic decisions. Markets arrive at supply and pricing choices through the uncoordinated actions of a large number of decentralized decision makers. The limited capacity of planning agencies to process information and make appropriate economic choices produced severe inefficiencies and a limited range of consumer goods in the Soviet Union. Oscar Lange and Alec Nove suggested various ways in which this problem might be overcome through the blending of socialist institutions and market mechanisms. More recently, Paul Cockshott and Allin Cottrell have argued that contemporary computing technology might now make possible the type of complex data analysis required by socialist planners. At a more fundamental level, Frederick von Hayek and Robert Nozick charged that any attempt at public economic planning would ultimately result in the rise of a totalitarian state. Planners charged with allocation decisions would simply express their economic preferences to the exclusion of all others. This critique struck deepest at the Soviet model of socialism, which attempted to remove market mechanisms altogether. The mixed economies of Sweden, Denmark, and Norway have demonstrated that socialist institutions are in no way incompatible with democracy, while state-driven ventures in France and Japan have shown that government coordination of economic development need not produce crippling inefficiencies. See also communism; ideology. Further Reading Bernstein, Eduard. Evolutionary Socialism. New York: Schocken Books, 1961; Cockshott, W. Paul, and Allin Cottrell. Towards a New Socialism. Nottingham, U.K.: Spokesman Books, 1993; Engels, Friederick. Socialism: Utopian and Scientific. New York: International Publishers, 1998; Foner, Eric. “Why Is There
No Socialism in the United States?” History Workshop Journal 17 (1984): 57–80; Fourier, Charles. The Theory of the Four Movements. Cambridge: Cambridge University Press, 1996; Hayden, Delores. Seven American Utopias: The Architecture of Communitarian Socialism, 1790–1975. Cambridge, Mass.: MIT. Press, 1976; Hayek, F. A. The Road to Serfdom. Chicago: University of Chicago Press, 1994; Marx, Karl “Manifesto of the Communist Party” and “Critique of the Gotha Program.” In Later Political Writings, edited by Terrell Carver. Cambridge: Cambridge University Press, 1996; Mészáros, István. Socialism or Barbarism: From the “American Century” to the Crossroads. New York: Monthly Review Press, 2002; Nove, Alec. The Economics of Feasible Socialism Revisited. London: Routledge, 1992; Ottanelli, Fraser M. Communist Party of the United States: From the Depression to World War II. New Brunswick, N.J.: Rutgers University Press, 1991; Salvatore, Nick. Eugene V. Debs: Citizen and Socialist. Champaign: University of Illinois Press, 1984; Sassoon, Donald. One Hundred Years of Socialism. New York: New Press, 1996; Shulman, George M. Radicalism and Reverence: The Political Thought of Gerrard Winstanley. Berkeley: University of California Press, 1989. —Jason C. Myers
ugly American Many believed that the phrase ugly American would be consigned to the dustbin of history after the end of the cold war in 1989. In many ways, it seemed time- and context-bound, a relic of a different era, apropos of the conflict between the United States and the Soviet Union that so characterized international politics from the late 1940s until the late 1980s, but no longer descriptive of political reality or politics as practiced in a post–cold war world. But no sooner had the phrase lost its cachet than new life was breathed into it. In the midst of the international war against terrorism, the U.S. government was accused of engaging in actions—torture, extraordinary rendition, and other extreme measures—that revived criticism of the United States and brought the original meaning of the phrase ugly American back to political life.
Quite rightly, Americans are made uncomfortable when the phrase ugly American is used. It conjures up unattractive images of the most offensive variety and fosters feelings of guilt over past wrongs. That many Americans see at least a kernel of truth in the phrase likewise makes them feel uncomfortable, even guilty. The sobriquet ugly American is designed and intended to displease, as it is also intended to describe a particular way of being. And as a barometer of worldwide sentiment and opinion about the United States, it is a useful guide in helping determine how well or poorly Americans are thought of at any time around the globe. While literally derived from the 1958 novel and best seller of the same name written by Eugene Burdick and William Lederer, the term ugly American is more often incorrectly associated with the novelist Graham Greene, who in 1955 wrote the book The Quiet American. Over time, the term ugly American has taken on a life of its own as an insulting description of the behavior of individual Americans while traveling abroad and also of the activities (often covert operations) of the U.S. government abroad. The novel The Ugly American is a series of short stories connected around a common theme of how the United States was losing the international struggle against communism because of the ignorance and arrogance of Americans abroad. Centered in Southeast Asia, the novel portrays a failure on the part of the American interlopers to adapt to or even try to understand the culture, history, or religions of the indigenous people. The assumption was that the native population was supposed to conform to the wishes of the American superpower, and there was no need for the United States to understand them. The action takes place in Sarkhan, a fictitious Southeast Asian country in the midst of a communist insurgent movement. In the novel, one of the locals laments that “A mysterious change seems to come over Americans when they go to a foreign land. They isolate themselves socially. They live pretentiously. They’re loud and ostentatious.” This is the behavior that has been characterized as “ugly.” Graham Greene’s 1955 novel, The Quiet American, was centered in Indochina at a time when the French colonialists were leaving and the United States was taking over. Here, the American hero plays the locals as pawns in an international game of power
politics. The Americans manipulate people and situations, care little for the native population, but rather see them as expendable and disposable. Over time, the term ugly American has come to be used as an insult aimed at American tourists traveling abroad. In the post–World War II era, Americans, with their superior wealth and power, sometimes acted with an arrogance and conceit that struck many of the locals as rude and insulting. On top of that, some American travelers were quite loud and insistent on having things their way. Some demanded that the locals speak English, and when they did not, acted rudely. The actions of the few became the sobriquet applied to the many. The term ugly American caught on and was—and is—hard to break. In political terms, ugly American has come to be a euphemism for the misuse of American power abroad. As the world's only remaining superpower, the United States has heavy burdens and responsibilities and at times overplays its hand or behaves as an international bully, even to its friends and allies. Those occasions in which the United States behaved badly invited criticism, and the term ugly American was often used as a catchphrase that conjures up images of powerful bullies browbeating and taking advantage of the less powerful. In 2004 and 2005, the term was dusted off and reapplied to the policies of President George W. Bush. Bush often thumbed his nose at international agreements, pulled the United States out of international treaties, refused to sign on to many multilateral agreements, and, in the war against Iraq, trampled on the United Nations and ignored the wishes of traditional allies, even going so far as to demonstrate open disdain for several traditional and long-standing allies. The alleged arrogance and bullying techniques of Bush served to revive the ugly American image, and polls demonstrated that the United States became very unpopular internationally, with the citizens of many nations citing the threat of American power as the greatest danger facing the planet. Measured by polling data from across the globe, the United States had declined dramatically in the estimation of citizens in other countries. In many nations—even European nations—the United States was seen as a dangerous, arrogant superpower that used its power unilaterally and against the interests of global peace. Amazingly, Bush was seen in many of these polls as
more dangerous to world peace than was Osama bin Laden. While such attitudes reflect a multitude of factors, some are clearly directed at what is believed to be the arrogant and unilateral behavior of the United States internationally: the scandal at the Abu Ghraib prison, the rendition of suspects kidnapped from other countries and taken to undisclosed locations, the prison at Guantanamo Bay in Cuba, the denial of basic rights to many of those in detention, and torture at some locations. All these activities fueled the fire that was already smoldering and gave credibility to the view that, indeed, the United States considered itself above the law (both domestically and internationally) and that American power was out of control. Was this an American overreaction to the events of 9/11, or was this the ugly American writ large? Were these actions an accurate reflection of the real America, or an aberration? World opinion judged the United States harshly. Is the image of the ugly American true? Clearly, there are times when the United States merits criticism, and at times it does seem to play into the worst elements associated with that term. But as the world's only superpower, the responsibilities of the United States require it to lead and take actions that to some seem ill-advised. It is the occupational hazard of a superpower to be an inviting target of criticism, but in the end, the true test of a superpower is whether it acts with wisdom and justice. Did it promote more than its own selfish interest and serve and protect the international community and its traditional friends and allies? Did it stand up for the right values, and did it have the courage of conviction to do the right thing in a complex world? Further Reading Greene, Graham. The Quiet American. New York: Penguin, 2004; Lederer, William J., and Eugene Burdick. The Ugly American. New York: W.W. Norton, 1999. —Michael A. Genovese
United Nations During the last years of World War II, the victorious Allied powers, particularly the United States, France, the United Kingdom, and the Soviet Union, created the United Nations (UN) “to save succeeding gener-
ations from the scourge of war." Like the League of Nations that preceded it, the UN was premised on the concept of "collective security," one component of the idealist or Wilsonian approach to international affairs. Convinced that the breakdown of the "balance of power" approach had contributed to World War I, world leaders crafted a treaty that would bind members to respond collectively to acts of aggression by rogue states, whether members or not. The League of Nations foundered due to less-than-universal membership (the U.S. Congress declined to back President Woodrow Wilson's plan) and the remaining members' insufficient commitment to curb the Japanese invasion of Manchuria (1931) and the Italian invasion of Ethiopia (1935). The drafters of the UN charter sought to create a structure that was more flexible in its decision making and more reflective of global power relations. The most visible aspect of this was the designation of five permanent members of the Security Council that individually can block resolutions and actions by exercising a veto. At a series of meetings during the war (Moscow in 1943, Dumbarton Oaks in 1944, and Yalta in 1945), the United States, the United Kingdom, and the Soviet Union laid the groundwork for the organization. At San Francisco, from April to June 1945, the UN charter was completed and signed. The requisite ratifications were completed on October 24, 1945, and the UN came into being. October 24 is observed as UN Day around the world. The charter mandated six major organs: the General Assembly (GA) composed of all members, the Security Council (to preserve the peace), the International Court of Justice (to hear disputes among nations), the Secretariat as the administrative body, the Economic and Social Council, and the Trusteeship Council (to foster the independence of trust territories). The goals of the UN, as enshrined in the preamble of the charter, include preventing war, reaffirming human rights, fostering respect for international law, and promoting social progress. Among the principles described in article 2 of the charter are the sovereign equality of all member states, the peaceful settlement of disputes among nations, support for UN enforcement actions as mandated by the Security Council, and noninterference in the domestic affairs of member states.
World leaders converge at the 62nd United Nations General Assembly. (Getty Images)
The GA is the primary deliberative organ of the United Nations, consisting of all 191 member states, each exercising one vote. Decisions are by simple majority vote except for matters deemed important questions (membership, peace and security actions if the Security Council deadlocks, and some budgetary matters), which require a two-thirds majority. Regular sessions of the GA begin on the third Tuesday in September and usually conclude by mid-December. Much of the GA’s work is done through a series of committees: First (disarmament and security), Second (economic and financial), Third (social, humanitarian, and cultural), Fourth (decolonization), Fifth (administrative and budgetary), Sixth (legal), and the Special Political Committee (peace and security matters not handled by the First Committee). The Security Council (SC) has 15 members, five of which are permanent (China, France, Russia, the United Kingdom, and the United States). The other 10 members are elected for two-year terms, with five
elected in alternate years. In an effort to reflect global power at the time of its creation, the Security Council operates under the rule of "great power unanimity," generally referred to as the "veto power," under which a negative vote by any one of the permanent members blocks an action. Its functions include maintenance of peace and security, investigation and settlement of disputes or instructing the secretary-general (SG) to use good offices to this end, promotion of disarmament, application of economic or other nonmilitary sanctions, sending UN observer forces or armed forces in response to aggression and conflict, admission of new members, recommendation to the GA of the appointment of the SG, and, jointly with the GA, the election of judges of the International Court of Justice. The SC may convene at any time to deal with crises. The Economic and Social Council (ECOSOC) has 54 members who serve for three years, with 18 elected each year to staggered terms. It coordinates the economic and social work of the UN and serves as
the liaison or reporting organ for most of the specialized agencies and affiliated intergovernmental organizations. It conducts studies, reports to the GA, and makes recommendations on issues ranging from economic concerns (poverty, development, and trade) through social issues broadly defined (human rights, including the rights of women and children, culture, education, and health). Two formal sessions occur annually for about two months each, one in New York and the other in Geneva. The actual work continues all year through commissions, committees, and the specialized agencies. A wide variety of international nongovernmental organizations (NGOs) maintain consultative status with ECOSOC, while an even larger number of national NGOs are affiliated through the Department of Public Information of the Secretariat. The specialized agencies, programs, and funds include a wide range of bodies from long-standing ones such as the International Labor Organization, the World Health Organization, and the United Nations Children's Fund, through newer ones such as the World Intellectual Property Organization. Also reporting to ECOSOC are functional commissions such as the Commission on the Status of Women and the Commission on Human Rights. Finally, somewhat more autonomous are the Bretton Woods organizations, named after the site of the 1944 conference on the international financial system. These include the World Bank Group, the International Monetary Fund, and the World Trade Organization (conceived at Bretton Woods but only in recent years supplanting the General Agreement on Tariffs and Trade). These financial bodies have decision-making structures dominated by member states with the world's stronger economies. Over the last few decades, they have come under criticism for implementing economic programs that are unduly free-market oriented and "one-size-fits-all." Calls for greater flexibility and distribution of decision-making power have largely been ignored. In recent years, they have been criticized by some leaders in the developing world and have become the target of grassroots anti-globalization protests. The Trusteeship Council was created to supervise the administration of trust territories overseen by member states, which, in turn, made up the council. The Trusteeship Council suspended its work in 1994 after the last trust territory, Palau, became independent.
Under the ongoing reform and restructuring process, the Trusteeship Council almost certainly will be eliminated. The International Court of Justice (ICJ) is a legacy of the Permanent Court of International Justice established in 1922 and continues to meet in The Hague. Under the 1945 charter provisions, it handles cases of disputes among sovereign nations, not cases involving private parties. Its 15 judges are elected for nine-year terms by the GA and SC. The judgeships are generally distributed on a regional basis, with the five permanent members almost always represented. The greatest constraint on the ICJ is that national sovereignty inherently impedes enforcement. While some nations commit in advance to the court's compulsory jurisdiction, most do not. Generally, the two states must agree in advance to the court's jurisdiction, though one may initiate an action requesting the other's agreement. States usually comply with the decision. However, in two cases, the losing party refused to comply. One was the Corfu Channel case of 1947, when Albania refused to compensate the United Kingdom for damages in a shipping case, and the second was Nicaragua v. U.S.A., when the ICJ ruled that the United States violated international law by mining Nicaraguan harbors. While the U.S. refused to acknowledge jurisdiction and blocked appeal to the Security Council, it did quietly assist in the removal of said mines. The Secretariat is the administrative organ of the UN. It is charged with administering the daily affairs of the organization (including the drafting of reports, verbatim records, translation of debates, and communication with the press and public), overseeing the operations of peacekeeping forces, attending to problems of refugees and human rights violations, and preparing studies of issues of concern to the body. The Secretary-General, appointed to a (renewable) five-year term by the GA upon recommendation of the SC, oversees this administration, brings to the SC threats to international peace and security, and may exercise good offices in the settlement of disputes. Secretary-General Kofi Annan (1997–2006) was especially active in advocating for the organization and using the influence of the office to encourage greater member state commitment to eradicating disease, alleviating poverty, and intervening in cases of human rights violations.
Member states are assessed obligatory dues for the regular budget and separately for the peace missions and international criminal tribunals; in addition, there are voluntary contributions to the funds and programs. The funding of the UN regular budget is based on contributions by member states, which are assessed an amount based on their "capacity to pay." This capacity is derived from national income, per capita gross national product, and foreign exchange earnings. Because of the wide differences in these capacities, limitations have been set for a "floor" and a "ceiling." In 1946, the assessment for the United States would have been around 50 percent of the total, but at the insistence of U.S. leaders it was capped at a ceiling of 40 percent, further reduced to 25 percent in 1973, and ultimately cut to 22 percent of the regular UN budget, while in 1997 the lower limit was set at 0.001 percent. The other obligatory contributions are for the economic development programs and peacekeeping operations. The latter, in particular, have proven controversial when some countries were less than committed to UN actions, such as opposition by the Soviet Union to UN action during the Korean War. The United States has also contributed heavily to peacekeeping, though in recent years it has forced its assessed share down to 26.5 percent. Budget issues are a two-edged sword. While some would like to see the United States pay up to its full capacity, others are concerned that at present the United States, Japan, and Germany (with less than 10 percent of the world's population) account for about half the budget, and 10 members are responsible for 80 percent of it. A similar discrepancy exists regarding assessments versus voting numbers in the GA, where about 180 voting countries supply only about 25 percent of the budget. Despite the disproportion in voting strength in the largely exhortative GA, many of the poorer countries feel that wealthy ones, and the U.S. in particular, have wielded budget issues to force policies of reform that favor the economically and politically strong minority. Voting patterns at the UN have shifted over the years as the membership has grown. During its early years, the UN consisted primarily of European and Latin American-Caribbean countries with Australia, New Zealand, and a few from Asia, the Middle East, and Africa that had gained their freedom, such as India, Iran, Iraq, Syria, and Liberia, as well as white-
ruled South Africa. During the 1950s, additional states from the Middle East and Asia joined, and beginning in 1957 with Ghana, the decolonization of Africa and other parts of Asia led to a large influx of former colonies. This trend shifted the balance of votes in the GA in favor of developing, non-European nations. As a result, the majority that had voted overwhelmingly along the interests of the United States and western Europe was replaced by a larger bloc that strongly supported decolonization, development, and resistance to white rule in South Africa, often leaving the United States and some of its allies on the losing end. There was also a concomitant shift in the SC, despite the dominance of the permanent members. During the early years, votes tended to favor the so-called first world of U.S. and European market economies, leading the Soviet Union to exercise its veto frequently. As the agenda of the developing majority reached the SC, increasingly the United States invoked the veto to block resolutions calling for condemnation or action against apartheid in South Africa or Israeli policies toward Palestinians. Several key issues that the UN has dealt with over decades serve well to reveal the challenges it has faced and some of the reasons for both its successes and failures. One of the first major crises it confronted was that of the Middle East. As decolonization of the region increased in the wake of World War II, the United Kingdom sought to unburden itself of its League of Nations mandate in Palestine. Arab aspirations for national independence of the region had been strong since the end of World War I, while over the same years some European Jews were striving to create a Jewish homeland in Palestine. The European powers had made conflicting promises to each. In the wake of the Nazi Holocaust, thousands of Jewish refugees were streaming into Palestine with the hope of creating a Jewish state. Increased violence between the Arab and Jewish communities, compounded by Jewish attacks on the British authorities seeking to curb immigration, led Britain to transfer the problem to the fledgling UN. The GA, in November 1947, passed a partition plan to create two states, one Arab and one Jewish, in the area of the mandate, with the Jerusalem and Bethlehem areas to be corpus separatum, under international oversight. Each state was divided into three segments, contiguous only at small points. The Jewish state was slightly larger in
territory, though it included much of the Negev Desert. Since the population of the area remained two-thirds Arab, the Arab state had a small minority of Jews, while the Jewish state had a very sizeable minority of Arabs. Some regarded the partition plan as a formula for disaster; it was accepted by Jewish leaders but rejected overwhelmingly by the Arab states and the leaders of the Arab community in Palestine. In May 1948, with violence already having broken out between the two communities, Israel declared its independence. Several of its Arab neighbors responded with military attacks. When the war was over, Israel controlled about 80 percent of Palestine, having taken over large areas designated for the Arab state. A series of wars ensued in 1956 (over Suez), 1967, and 1973, followed by subsequent Israeli incursions into Lebanon. Throughout this period, the UN supplied peacekeeping forces to help separate the belligerents, and the UN Relief and Works Agency provided sustenance to the Palestinian refugee population, which numbered in the hundreds of thousands. Since then, the UN has affirmed the right of refugees to return to their homes and/or gain compensation. It has also generally supported a "two-state" solution based on the borders that existed between 1948 and 1967 (after which Israel occupied the West Bank, Gaza Strip, Golan Heights, and the Sinai Peninsula, the last later returned to Egypt as a result of bilateral peace efforts). While the UN peacekeepers certainly served as a deterrent to escalation on many occasions, the Arab-Israeli conflict has been seen by many as a UN failure. If this is the case, it is in part because the strong pressure for Israeli withdrawal from the occupied territories expressed by the GA majority has led Israel to eschew the UN as a mediator. Instead, Israel sought its strong ally—the United States—as an intermediary, creating a dynamic that led the Arab states to default to a dominant U.S., not UN, role in the region. Were the United States more in line with the European Union, the other permanent SC members, and the GA majority, perhaps the UN would have proven a more successful peacemaker. The contention over the representation of China at the world body also offers insights into constraints on the UN. In 1945, China was represented by the Nationalist (or Kuomintang) government. Less than four years later, Communist forces toppled the Nationalists, who fled to Taiwan. Standard UN prac-
tices in the case of regime change include an assessment of which party most effectively controls the territory and its infrastructure and a GA vote to recognize the credentials of the delegation representing the new government. Since the Communist victory represented an escalation of the cold war and the United States strongly resisted a second permanent seat being held by a communist government, it succeeded in getting the change in delegation deemed an “Important Question” requiring a two-thirds majority of the GA to shift the representation to the People’s Republic government in Beijing. For 20 years, the SC lived with the anomaly of the displaced government in Taiwan wielding the powers of a permanent member. By 1971, with pressure from the new GA majority increasing and the Nixon administration initiating a détente with the People’s Republic, U.S. opposition was relaxed, and the Beijing government gained its representation. Most assessments of the UN’s successes and failures focus almost exclusively on issues of peace and security, generally resulting in a mixed record. In some cases (Vietnam and Afghanistan) the cold war prevented the UN from exercising an effective role, while in others, the limited national interests of the great powers may have contributed to inaction or a belated response (Rwanda, Somalia, and the Sudan). Often overlooked are the successes of the UN in decolonization, democratic transitions, and nation building, such as the liberation of Southern Africa, the freedom of East Timor, and a somewhat less successful intervention in the Balkans. The UN is probably most underappreciated in the areas that we today refer to as “unconventional security,” such as human rights, global health and disease, environmental concerns, famine and disaster relief, and economic development. Through specialized agencies such as the World Health Organization, the Food and Agriculture Organization, and the UN Environment Program, the UN has eradicated diseases and fed starving populations, saving countless lives—something its critics frequently ignore. It has also served as the arena for crafting the Law of the Sea Treaty and the Kyoto Protocol on global warming. In response to human rights abuses, it has established the International Criminal Court (ICC) for the prosecution of those who commit genocides, crimes against humanity, and other criminal actions. While
United Nations service members boarding a plane (United Nations)
the ICC still lacks the support of the United States, it is proceeding to ensure that human rights violators know their actions may no longer go unpunished. As the world has changed rapidly in the six decades since the UN’s creation, the organization has struggled to reform itself to contend with new challenges and an ever-changing global balance of power. From the postwar years of dominance by the World War II Allies, through the bipolarity of the cold war, to the contemporary era with rising economic power exerted by the European Union, Japan, China, and India, the UN has been subjected to the shifting vicissitudes of its strongest members. As its 50th anniversary approached in 1995, the discussion over reform became more intense and urgent. The major areas of reform include streamlining the Secretariat and international civil service, demanding greater financial accountability by all units, exploring alternative modes of funding such as taxing arms sales or the exploitation of shared resources, developing a rapid response
and deployment capability for peacekeeping forces in international crises such as Rwanda, restructuring the Bretton Woods system to allow greater consultation with the poorer states, and considering expanded membership and changes in voting on the SC. Looming over the entire discussion of reform is the challenge posed to the principle of multilateralism and collective security by the ideological and policy alternative of U.S. unilateralism and its refusal to allow U.S. forces to serve under UN command after the Somalia debacle of 1993. The administration of George W. Bush eschewed the vision of the UN as an effective broker of peace and security and regarded it in an instrumentalist fashion, as yet another tool for the achievement of U.S. foreign policy goals. The August 2005 appointment of strident UN critic John Bolton to the U.S. ambassadorship through a recess appointment skirting Senate opposition epitomized this stance, particularly as Bolton’s objections nearly scuttled an omnibus reform
program long under development. (Bolton left the post in late 2006 amid speculation that the Senate would not approve his permanent appointment.) Nevertheless, the reform process continues as the UN evolves and adapts to a changing global environment. With the political and economic ascension of the European Union, Japan, China, India, and other member states, it is likely that U.S. unilateralism will be increasingly untenable and may recede in the face of such challenges as arms proliferation, terrorism, the HIV/AIDS epidemic, global warming, economic development, and other issues that cross borders and demand the type of cooperation that can only be achieved through the United Nations. See also diplomatic policy. Further Reading Baehr, Peter R., and Leon Gordenker. The United Nations: Reality and Ideal. 4th ed. New York: Palgrave/Macmillan, 2005; Childers, Erskine, with Brian Urquhart. Renewing the United Nations System. Uppsala: Dag Hammarskjöld Foundation, 1994; Fasulo, Linda. An Insider's Guide to the UN. New Haven, Conn.: Yale University Press, 2004; Gareis, Sven Bernhard, and Johannes Varwick. The United Nations: An Introduction. New York: Palgrave/Macmillan, 2005; Mingst, Karen A., and Margaret P. Karns. The United Nations in the Post-Cold War Era. 2nd ed. Boulder, Colo.: Westview Press, 2000; Moore, John Allphin, Jr., and Jerry Pubantz. The New United Nations: International Organization in the Twenty-First Century. Upper Saddle River, N.J.: Pearson/Prentice Hall, 2006; Weiss, Thomas G., David P. Forsythe, and Roger A. Coate. The United Nations and Changing World Politics. 4th ed. Boulder, Colo.: Westview Press, 2004; Yoder, Amos. The Evolution of the United Nations System. 3rd ed. Washington, D.C.: Taylor & Francis, 1997; Ziring, Lawrence, Robert E. Riggs, and Jack C. Plano. The United Nations: International Organization and World Politics. 4th ed. Belmont, Calif.: Thomson Wadsworth, 2005. —Donald Will
United States Agency for International Development (USAID) The United States Agency for International Development (USAID), created by an act of Congress in 1961,
is an agency of the U.S. government that provides financial and technical assistance to developing nations. Its purpose is to help alleviate world poverty by assisting third world nations to accelerate their development. USAID funds programs in most major development sectors, including disaster relief, basic institution and services development, and specific global issues, such as HIV/AIDS and poverty. Every U.S. president from Franklin Delano Roosevelt to George W. Bush has found that in order to serve American interests he has needed an agency able to work closely with the other nations of the world to promote their growth in the political, economic, and social sectors. USAID is the agency in the U.S. federal government that provides that capability. The logo of USAID, a handshake between an American and a citizen from the developing world, is known throughout the world and symbolizes the meeting of equals at the personal level to work together for a better world. The concept of foreign assistance to less-fortunate nations has generally enjoyed bipartisan support, although that support has often been weak. The historical antecedent of foreign assistance was the Good Neighbor policy established by Franklin Roosevelt. After World War II and the destruction of much of Europe and Japan, those nations clearly needed U.S. assistance to rebuild, both as a defense against communism and for general international stability. The United States quickly met the immediate need for disaster relief, most famously through the Cooperative for American Remittances to Europe (CARE). This relief work was a key factor in avoiding any major humanitarian crises in either Western Europe or Japan. CARE packages became famous. It is worth noting that every post–World War II German chancellor, from Konrad Adenauer through Gerhard Schroeder, remembered with exceptional gratitude receiving American-provided food packages in the aftermath of World War II. For several it was their first taste of chocolate. The Economic Cooperation Act, passed on April 2, 1948, initiated the Marshall Plan that started active American involvement in the reconstruction of Western Europe. Close cooperation with the recipient nations, the infusion of significant amounts of seed capital, and a sound institutional base resulted in a rapid reconstruction of Europe. Point Four followed
and expanded programs for economic growth to the developing world. Point Four takes its name from the fourth point of President Harry S. Truman's 1949 inaugural address, which focused on proposals to build a more prosperous, democratic, and stable world. President Truman's and Secretary of State George Marshall's vision of the nations of the world working together in peaceful collaboration, and their competence in creating the institutions to achieve that collaboration (including what was ultimately to become USAID), are widely acknowledged by historians today as a pivotal step in what became 50 years of progress. The Mutual Security Agency (MSA), created by the Mutual Security Act of June 30, 1951, administered the foreign assistance program in post–World War II Europe. In 1953, the Foreign Operations Administration (FOA) was established as an independent agency outside the Department of State, and it absorbed the responsibilities of the MSA. As Western Europe stabilized and returned to sound government and economic prosperity, USAID and predecessor agencies expanded into Latin America, Africa, and Asia. Ultimately, USAID grew to concentrate almost exclusively on the newly independent nations of Africa and Asia and the close American neighbors in Latin America. As one would anticipate, development in those nations was significantly slower than it had been in Western Europe. Western Europe had a solid history of democratically elected governments and strong industrial economies. Thus, Western European revitalization was primarily a case of recovering disrupted capacity rather than creating new capacity. In the developing world, there was little history of democratic or even transparent governance. Even more challenging, the newly independent African and Asian states had weak economies and little industrial base. In the late 1980s and early 1990s, with the collapse of the Soviet Union and its empire, USAID shifted geographic focus again, this time into the countries of the former Soviet bloc. As those nations broke away, asserted their political independence, and embraced a liberal free enterprise economic model, many requested and received USAID assistance. Historically, USAID has been exceptionally responsive to congressional direction; if Congress or a
particular member of Congress is interested in primary education, funds flow in that direction. When congressional interest changes, so do funding priorities, which accounts for the vast range of projects and programs USAID has funded over the years. USAID is occasionally accused of being unfocused in its programs, and much of this diffusion comes from members of Congress mandating that USAID undertake projects in specific areas. Since the late 1960s, USAID has largely functioned as a bank. That is, USAID funds programs, projects, and activities that others then implement. USAID mainly makes grants, not loans. While the concept of foreign assistance enjoys bipartisan support, that support often wanes during budget debates. USAID is a vulnerable target when there is a debate over domestic versus international priorities. Because of this vulnerability, USAID has declined significantly in size over the past 30 years. For example, in 1970, USAID had approximately 12,500 U.S. citizen employees working worldwide. By 2004, the entire agency consisted of 2,000 citizen employees, of whom roughly 900 worked overseas. USAID world headquarters is located in the Reagan Building in Washington, D.C. It operates under the policy direction of the secretary of state. USAID is organized into both geographical bureaus, including Africa, Latin America, newly emerging states (which includes former communist countries), and Asia, and technical bureaus, including economic growth, agriculture, and trade (EGAT), global health (GH), and democracy, conflict and humanitarian assistance (DCHA). USAID also occasionally creates bureaus or offices for special purposes. For example, the HIV/AIDS bureau was created in response to the pandemic. USAID also has specialized bureaus for planning, budgeting, participant training, congressional relations, and university relations. There are three basic types of employees at USAID: U.S. direct hires, third country nationals, and foreign service nationals. U.S. direct hires include foreign service officers (FSO), who are U.S. citizens and serve in both the United States and overseas. General schedule (GS) employees are also U.S. citizens and work mainly in Washington, D.C., although occasionally they spend short periods (one to six months) overseas. Third country nationals (TCN) are typically from the region in which they
work but from a different country than the one in which they are actually working. Foreign service nationals (FSN) are citizens of the country in which they are actually working. USAID implements most of its programs and projects through its bilateral missions. USAID has approximately 40 bilateral missions around the world. (Bilateral refers to working in a single country, for example, Nigeria, Kenya, or Indonesia.) The number is approximate because USAID periodically adds new countries and subtracts others as countries graduate from assistance for one reason or another. Reflecting the declining overall size of USAID, the bilateral missions have also shrunk considerably. USAID missions are now typically small, with 10 to 15 U.S. direct hire officers. FSNs or TCNs do more of the managerial and administrative work. With rare exceptions, contractors or grantees now implement the programs and projects. As an economy-of-force measure, USAID has established several regional centers around the world. For example, in Africa there are centers in East, West, and southern Africa. These centers are generally somewhat larger than the bilateral missions they serve and often have specialists in areas such as law, accounting, and project design that the bilateral missions do not have. Circuit riding to the various bilateral missions in their regions can keep these specialists on the road for 50 percent to 75 percent of their time. In addition to providing staff support to the bilateral missions, regional centers often implement programs that have a regional dimension, such as regional family planning programs or regional agriculture research programs. USAID now does exclusively grant financing; that is, funds go directly to an implementing agency for a specific purpose, and the implementing agency does not have to repay these funds so long as it uses them for their intended purposes. Historically, the grants have gone primarily to the host country governments, which then have implemented the agreed-upon activities. Over the past 20 years, grants have often gone to universities or nongovernmental organizations (NGO). Until the mid-1970s, USAID also made long-term concessional interest rate loans in Africa, Asia, and Latin America for capital projects, largely infrastructure—that is, roads, dams, and buildings. A
concessional rate of interest is one that is lower than the market rate, typically from 1 to 3 percent. The World Bank has assumed responsibility for most donor-funded capital projects in the developing world. USAID has worked hard to cultivate positive relationships with universities throughout the United States, especially with the land grant colleges, which USAID often uses to implement agriculture, education, and university-building programs in Africa and Asia. The presidents of three land grant universities have served as the administrator of USAID. USAID often uses participating agency staffing agreements (PASA) with agencies or departments such as Treasury, the USDA, or the Internal Revenue Service to implement technical assistance projects such as building an agriculture extension service, a tax service, or a customs office. Typically, USAID provides the administrative support and program direction for the PASA team. There are thousands of different types of NGOs working in the developing world, some indigenous, others international, and many American. Typically, USAID finances NGOs with a commitment to a specific purpose, be that family planning, cooperatives, or slowing global climate change. Neither USAID nor the federal government as a whole could function without the contractors (for-profit firms) who do much of the basic work. There are thousands of firms that serve the needs of the three branches of government and those doing business with those institutions. A small number of these specialize in serving the needs of USAID. Overall, these firms have proven skillful in mobilizing teams to implement programs and projects in the areas of the world where USAID works. Currently, the annual budget of USAID is approximately $11 billion, which is less than 1 percent of the federal budget. Actual funding varies from year to year. Geographically, sub-Saharan Africa is the largest recipient. Economic growth, agriculture, and trade (EGAT) funds programs in overall economic development, including education, agriculture, and trade promotion. Global health (GH) includes maternal-child health, family planning, and HIV/AIDS. Democracy, conflict and humanitarian assistance (DCHA) promotes democratic governance, conflict mitigation, and long-term humanitarian assistance. PL480 pro-
motes the sale and distribution of basic U.S. agricultural commodities overseas. Disaster assistance provides short-term emergency help to nations suffering from natural disasters and other emergencies. Throughout its history, USAID has financed programs in most major sectors, including infrastructure development, institutional development, agriculture, education, health, family planning, HIV/AIDS, natural resources management, human rights, trade, and promotion of democracy. USAID has gone through a number of phases in implementing its programs. Initially, the emphasis was on direct relief in Europe and then on promoting reconstruction. The postcolonial era, beginning in the 1950s when the nations of Africa and Asia received independence, emphasized construction of basic infrastructure (for example, roads, schools, and hospitals) and building the capacity of the new governments to provide basic services. The greatest emphasis has been placed on agriculture, education, and health. While there have been some notable successes in places such as Botswana, Thailand, and South Korea, overall progress has often been very slow. Because of this, in the late 1970s, USAID began to emphasize private sector development, especially the development of for-profit businesses. In addition, USAID has emphasized family planning because it became clear that high population growth rates were partly responsible for slow per capita economic growth. By the late 1990s, it was clear that without improvements in governance, only possible through transparent democracies, neither the private sector nor the developing nations as a whole would flourish, hence the emphasis on promotion of democracy. USAID and predecessor agencies have achieved notable successes in several areas. One such example would be the reconstruction of Western Europe after World War II. World War II devastated Europe, and American intervention provided a vital impetus for Western Europe to reconstruct. The United States initially invited the Soviet bloc nations to participate in this process, but those nations chose not to participate, most critically in the Marshall Plan, and partly as a result lagged badly in reconstruction. From 1950 until the late 1970s throughout Africa, Asia, and Latin America, USAID was instrumental in constructing the roads, schools, and hospitals that assisted nations such as Botswana, India, and South
Korea to sustain their development to the point that they have achieved genuine prosperity. Partly because of these programs, life expectancy in the developing world has increased by approximately 33 percent over the past 40 years. It has taken more than 20 years and approximately $12 billion, but gradually USAID has developed an effective methodology that has assisted developing countries in reducing their fertility while allowing their male and female citizens to stay within the bounds of culturally appropriate behavior. More than 50 million couples now use family planning as a direct result of USAID's population program. In the 28 countries with the largest USAID-sponsored family planning programs, the average number of children per family dropped from 6.1 in the mid-1960s to 4.2 by 2001. Almost from the date of the identification of the disease, USAID initiated HIV/AIDS prevention programs. Using an approach similar to that practiced for family planning, USAID has helped nations such as Uganda make major reductions in their infection rates. Worldwide, more than 850,000 people have received education in how to prevent the spread of HIV through USAID-sponsored programs. USAID has also focused attention on disaster relief in the developing world. Whether facing volcanoes, droughts, or tsunamis, USAID has gotten there quickly, organized things on the ground, and provided effective disaster relief. Hurricane Katrina provided an ironic testimony to the effectiveness of USAID disaster relief; the U.S. NGOs involved with Katrina lamented that FEMA lacked the professionalism, the clear procedures, and the sense of priorities and mission that they were accustomed to working with in USAID. USAID has also worked on bringing about a "green revolution." Worldwide, the international agriculture research stations that provide the technical basis for the green revolution receive approximately 25 percent of their funding from USAID. At the bilateral level, USAID has been the leading funder of national agriculture research and the extension of that research onto farmers' fields. USAID and other donor investments in better seed and agricultural technologies over the past three decades have helped feed an extra billion people in the world.
Further Reading United States Agency for International Development Web site. Available online. URL: www.usaid.gov. Accessed March 24, 2007; U.S. Overseas Loans and Grants (The Greenbook). Available online. URL: http://qesdb.usaid.gov/gbk; Bertotti, Timothy L. "History and Accomplishments of USAID." USAID/Columbia, 3 December 2001. —Norman L. Olsen
welfare state While often referred to today in a pejorative way, the welfare state began as an effort to soften the harsher edges of the system of capitalism. With industrialization came urbanization as workers flocked to the urban areas in hopes of finding gainful employment, but many untrained workers had a difficult time finding affordable housing, making a living wage, and caring for their families. Over time, pressure built up on the government to help alleviate the problems of poverty, health care needs, housing shortages, and a host of other economic and human problems. One way governments attempted to address these problems came to be known as “the welfare state.” In general, a welfare state is a system wherein the government strives to provide for the maximum of social and economic benefits for the citizen. Political scientist Andrew Hacker in the March 22, 1964, New York Times Magazine defined the welfare state as one “that guarantees a broad series of economic protections that any citizen can claim when he is no longer able to provide for himself. In a welfare state, the benefits an individual receives are political rights, not charity, and there should be no occasion for apology or embarrassment in applying for them. Moreover, the services made available by a welfare state will parallel in quality and coverage those open to individuals who are able to draw on private resources.” The welfare state thus marks a shift away from the concept of government having a minimal role (safety and security) to a more positive role (providing for social services). Today, welfare state means many different things to different people. It can be an ideal model of the provision of the tools and resources (welfare) necessary for decent living whereby the state is primarily responsible for the care of the citizen. It can also
mean the state providing some minimal services and resources to the needy. In its grandest form, it can mean the state as the primary agent for providing goods and services. In general, there are the minimalist model of the welfare state (for example, the United States) and the more robust model (such as can be found in northern Europe). At the minimalist level, the government is the provider of last resort, ensuring a basic safety net below which no civilized nation would let its citizens fall. At the more robust level, the state assumes greater responsibility for services, such as wages, jobs, and health care. Most of the modern welfare states developed slowly over time, adding a piece here and a piece there, until the rudimentary elements of what we today call a welfare state became visible. In Europe, beginning in the late 19th and continuing into the 20th centuries, states began to become more and more involved in guaranteeing rights and services to their citizens. The state began to assume responsibilities previously handled by charities, churches, or local communities. One of the key steps in this process was the development of social insurance, established by Bismarck in Germany. The United States is considered to have a low level of social welfare, especially in contrast to European states. This minimalist model assumes that the primary unit responsible for social welfare is the individual. The state is to play a minimal and not interventionist role. Thus, citizens of the United States do not have a “right” to health care (provided by the state), nor is the government responsible for ensuring jobs. Most of the social welfare function is in private hands or the responsibility of religious organizations. The state has a variety of minimal programs such as social security, Medicare, and unemployment insurance, but these are small by the standards of most industrial nations. The individual is to be responsible for him- or herself, and the private sector is the chief vehicle for jobs and income. The welfare state in the United States came about as a result of the Great Depression of 1929. This led to the election of President Franklin D. Roosevelt and the creation of the New Deal programs that marked the beginnings of social welfare in the United States. These programs were expanded during the 1960s during the Great Society era under President Lyndon B. Johnson. In the 1980s, efforts were made to put a cap
on social welfare spending, but the programs proved resilient and fairly popular and were hard to trim. During the so-called Republican Revolution of the mid-1990s, when the Republican Party won control of both houses of Congress in 1994 (the first time in 40 years), the welfare state became one of the key targets of the party's resurgence. And one of the most visible features of the Republican "Contract with America" was the promise to reform and scale back federal welfare programs. But a Democrat, Bill Clinton, was still in the White House, and although politically wounded, he still could use the presidential veto pen. In yet another instance of politics making strange bedfellows, Clinton actually worked closely with the Republican-controlled Congress, and together they passed a significant welfare reform bill. It cut back the welfare rolls and pushed power back to the states. It also had back-to-work features popular with voters. This shrinking of the welfare state demonstrated that federalism is alive and well in the United States and that the political system remains responsive to the will of the voters. In the United Kingdom, the state has charted a middle course between the robust welfare state (often derisively referred to as "the nanny state") and the minimalist state. In fact, the first time the term welfare state was used was during World War II by Archbishop William Temple. And it was just after World War II that the British began to more fully develop their welfare system. In 1948, the British historian Edward Hallett Carr enjoined his fellow citizens, "Let us substitute welfare for wealth as our governing purpose." The British system provides health care to all citizens as a right and pays for this service out of taxes. Asa Briggs identified three core elements of the welfare state: a guaranteed minimum income, social protection (a safety net) in the event of job loss or other insecurity, and the provision of government-supplied social services. This soon became known as the "institutional model of welfare." The sociologist T. H. Marshall identified the welfare state as a distinctive combination of democracy, welfare, and capitalism. In the United Kingdom, coverage of the welfare state is fairly significant, but services are provided at a fairly low level. In Sweden, a more robust form of the welfare state is in evidence. Some see this as the "ideal" form of the welfare state, whereby the government offers a
wide safety net to its citizens. Sweden has a comprehensive social welfare state with redistributive and egalitarian goals. Such systems can be expensive. Many nations are not willing to pay the price, either in dollars or in state control, for such systems. Critics point to Sweden and argue that this "cradle-to-grave" welfare state makes the citizen dependent on the state, but most of the citizens of Sweden are pleased overall with their version of the welfare state. Supporters of the more robust version of the welfare state argue that on humanitarian grounds, every citizen deserves a minimal standard of living below which no civilized country should let its citizens fall. They also argue that the robust welfare state is stable, secure, and not prone to rebellion or antisocial outbursts. They also argue that by investing in the social infrastructure, such as education, child care, health care, and so on, they are making their societies better, more economically advanced, more competitive, and better able to adapt to the demands of globalization. Further, many argue that a welfare state is the antidote to the predatory nature of capitalism and the crushing hand of socialism. It is thus seen as a "middle way" between two extremes. Finally, they argue that reliance on the private sector simply has not worked and that a more significant role for the state is necessary in the ups and downs of the business cycle. Critics of the robust welfare state argue that such systems make citizens more dependent on the state and less free. They also argue that such systems are too costly. They further say that such systems are a drain on, not a boon to, the state's economy. To the critics, the free market is a more just and more efficient way to distribute value. Often, critics see the rise of the welfare state as a prelude to socialism. In terms of comparative social spending on welfare-related programs as a percentage of the gross domestic product (GDP), Denmark ranks first, spending more than 29 percent of GDP on its welfare-related programs. Next is Sweden (nearly 29 percent), followed by France, Germany, Belgium, Switzerland, Austria, and Finland. The United Kingdom ranks in the middle of the pack in 13th place (nearly 22 percent of GDP). The United States ranks near the bottom, in 26th place (out of 29), spending less than 15 percent of its GDP on social welfare programs. The United States ranks just ahead of Ireland, Mexico, and South Korea.
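The comparison above rests on a single ratio: public social spending divided by gross domestic product. The short Python sketch below is purely illustrative; the country percentages are the approximate figures cited in this entry rather than values drawn from any official dataset, and the spending and GDP amounts used to demonstrate the arithmetic are invented round numbers.

```python
# Illustrative sketch: ranking countries by social spending as a share of GDP.
# The percentages are the approximate figures cited in this entry, not data
# taken from an official source.

social_spending_pct_gdp = {
    "Denmark": 29.2,         # "more than 29 percent" (approximate)
    "Sweden": 28.9,          # "nearly 29 percent" (approximate)
    "United Kingdom": 21.8,  # "nearly 22 percent" (approximate)
    "United States": 14.8,   # "less than 15 percent" (approximate)
}

def share_of_gdp(spending: float, gdp: float) -> float:
    """Return spending as a percentage of GDP."""
    return spending / gdp * 100

# The underlying arithmetic, with invented round numbers: a country spending
# 300 billion dollars on social programs out of a 2,000 billion dollar GDP
# devotes about 15 percent of GDP to them.
print(f"Example share: {share_of_gdp(300, 2000):.1f} percent of GDP")

# Rank countries from the largest share to the smallest, as the entry does.
ranking = sorted(social_spending_pct_gdp.items(), key=lambda kv: kv[1], reverse=True)
for rank, (country, pct) in enumerate(ranking, start=1):
    print(f"{rank}. {country}: {pct:.1f} percent of GDP")
```

The sketch simply divides spending by GDP and sorts the results in descending order, which is how cross-national league tables of this kind are typically presented.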
Further Reading Coll, Blanche D. Perspectives in Public Welfare: A History. Washington, D.C.: U.S. Social and Rehabilitation Service, Intramural Research Division, U.S. Government Printing Office, 1969; Katz, Michael B. In the Shadow of the Poorhouse: A Social History of Welfare in America. New York: Basic Books, 1996; ———. The Undeserving Poor: From the War on Poverty to the War on Welfare. New York: Pantheon Books, 1989. —Michael A. Genovese
World Bank First of all, to avoid confusion, what is commonly known as the World Bank must be distinguished from the larger organizational umbrella of which it is a part, the World Bank Group (WBG). WBG refers to a network of five institutions, two of which constitute the World Bank itself. The core of WBG is the World Bank, but over the past five decades the role of WBG has expanded beyond the customary focus on macroeconomic sustainability and poverty reduction through economic and financial assistance, education, and infrastructural development. What has become WBG originated with the creation of the International Bank for Reconstruction and Development (IBRD), one half of today’s World Bank, during the Bretton Woods Conference at the end of World War II and eventually grew to include the International Development Association (IDA), which is the other half of the World Bank, the International Finance Corporation (IFC), the Multilateral Investment Guarantee Agency (MIGA), and the International Center for the Settlement of Investment Disputes (ICSID). The ICSID provides dispute resolution for controversies between investors and member states through mediation, arbitration, and conciliation. It can play a role or even direct negotiations and resolutions of disputes concerning nonmember countries and issues that are not directly investment related but somehow invoke factors central to WBG initiatives and activities. MIGA can also offer some dispute resolution remedies, but its purpose is specifically tailored to help countries build a favorable environment for foreign direct investment (FDI). Since FDI is one of the keys to economic development in poor coun-
tries, MIGA provides guarantees and assistance to increase the probability and incidence of FDI in target countries. Its services supplement those of the IFC, which antedates MIGA by approximately 30 years and facilitates private investment and private sector development in countries that receive World Bank assistance of some sort. As indicated, the central component of WBG is the World Bank itself, and it attracts the bulk of the public’s attention. In fact, most people are utterly unfamiliar with the distinction between WBG and the World Bank, and, based on the information available through typical news sources, it would be difficult to avoid the conclusion that WBG does not exist. Much of this is due to the fact that the activities of ICSID, IFC, and MIGA seem obscure and are not readily understandable by laypersons. In addition, the services these three institutions provide do not seem as controversial as those of the World Bank itself, so they are not nearly as visible. Institutions such as the World Bank have experienced unprecedented visibility recently, and they have become known even to those with no interest in development economics, poverty, or sustainability. During the past decade or so, the common association of the World Bank with globalization initiatives has exposed the organization to unprecedented scrutiny from individuals and groups that had traditionally cared very little about it. Unfortunately, the politicization of globalization efforts and the associated resistance to those efforts from many circles has attracted controversy over the perceived role of the World Bank in the global economy. Much of that controversy is fueled by misconceptions and propaganda intended to undermine liberalization programs around the world, and it has contributed to a widely misleading picture of what the World Bank actually does and the power it has. As was the case with the International Monetary Fund (IMF), the World Bank, originally consisting of only the IBRD, arose from the agreements that emerged out of the Bretton Woods Conference during the last stages of World War II. This gathering of Allied leaders was a manifestation of efforts to establish geopolitical stability upon the conclusion of the war and to formulate plans for the rebuilding of Europe and Japan. Much of the impetus for the meeting resulted from the desire to confront problems
that were unrelated to the war per se but had been responsible, at least in part, for its outbreak. In that regard, the Bretton Woods Conference sought to address the macroeconomic deficiencies that produced and perpetuated the financial and economic crises of the interwar period and encouraged the spread of poverty and economic depression. More broadly, this meeting was a response to the structural and cyclical dislocations and transformations caused by mass industrialization and the consequent need to confront the inadequacies of pre-Keynesian solutions for the dilemmas of industrialization. Above all, the Bretton Woods agreements were based on the conviction that conflict is avoidable through the facilitation of international economic cooperation and increased prosperity throughout the globe and that, therefore, the implementation and maintenance of sustainable development policies and the erection of viable economic infrastructures were paramount to Europe's survival. On the whole, the participants endorsed capitalist economic principles, though they disagreed regarding the proper level of state intervention and control over economic and financial mechanisms. Nevertheless, a consensus did exist concerning the Keynesian realization that industrialized economies require at least some degree of macroeconomic management and that the structural vulnerabilities of industrialized and industrializing nations must be remedied in some manner. Since domestic economic and financial stability was, among other things, a function of international economic and financial stability, domestic structural weaknesses would be countered through international processes established to pursue the above goals. Despite the overarching economic objectives and normative criteria that animated the Bretton Woods negotiations, the most immediate reason for the creation of the World Bank was the postwar reconstruction of Europe and Japan. Indeed, the largest loan (in real terms) ever issued by the bank was awarded to France shortly after the war. In addition, the newly established IBRD was charged with encouraging economic growth in the developing world by supporting infrastructure projects. As the rebuilding of Europe progressed from prospect to reality, the IBRD increasingly devoted its attention to less affluent parts of the globe. The IBRD has continued to provide
low-interest loans to developing nations that have been categorized as middle income, and it eventually secured funding for projects in undeveloped and underdeveloped countries classified as the world's poorest. This was achieved through the creation of the IDA in 1960, which enabled the provision of no-interest loans and grants to the world's poorest and least creditworthy countries. Collectively, the World Bank's mission centers on the reduction of poverty, the formulation and implementation of sustainable economic development policies, the establishment or reform of economic infrastructures conducive to long-term growth, and the provision of humanitarian assistance following natural disasters or other emergencies. It has often sought to accomplish these goals through the simultaneous education and training of government officials in target countries and the restructuring of government programs or legal structures that pose obstacles to growth and the reduction of poverty. In most cases, its financial and economic assistance comes through either investment or development policy loans, with the former category focusing on the emergence of long-term infrastructural capabilities that support and allow continued growth and the latter allocated for the creation and implementation of procedural systems through which markets and economic viability can be secured.

The World Bank is a sizable organization with headquarters in Washington, D.C., and a current membership of 185 states, each of which has a seat on the board of governors. It is headed by the bank president, usually nominated by the United States, who serves a five-year renewable term and answers to a 24-member board of executive directors. Since 2007, the president of the World Bank has been Robert Zoellick of the United States, who succeeded the embattled Paul Wolfowitz following his abbreviated tenure. Because of the size of the U.S. financial contribution to the development fund and the country's comparative influence over others, tradition has ensured that the bank president will always be an American citizen. The extent and effectiveness of any resulting U.S. influence over other members, however, have frequently been overstated by the bank's critics, since the United States must carefully weigh and duly acknowledge the positions and priorities of key members of the board of directors.
As has been true of the IMF, the World Bank has underwritten projects throughout the globe, especially in regions that are underdeveloped or experiencing serious structural difficulties, so it has been labeled by its critics a tool of Western expansionism and a supporter of the exploitation of developing countries by the industrialized world. Although the relationship between the World Bank and the poor countries it assists in the developing world is intrinsically imbalanced, not least because of the economic and diplomatic leverage, to say nothing of military might, that Western countries possess, such an imbalance should not serve as an a priori indictment of the World Bank and its initiatives. It is true that the economic assistance and financial aid disbursed by the World Bank are predicated on certain prerequisites, and recipients are often compelled to implement pro-Western structural reforms as a condition of that assistance. It should not be surprising, however, that the World Bank makes its willingness to engage in specific projects contingent on the reciprocal willingness of target states to implement the structural reforms that will maximize those projects' probability of success. In the end, as is the case with the IMF, the World Bank has always been an unashamedly Western club, and it has succeeded, at least in part, because of its commitment to that reality. The normative question of whether Western sociocultural values and priorities should govern the practices, ideologies, and relationships of the World Bank cannot be settled in this essay, and the claims of antiglobalization and anti–free trade advocates should be explored elsewhere.

The substantive debates concerning the bank's mission and purpose aside, the World Bank has also weathered its share of controversy in recent years over charges of malfeasance and corruption. Recently departed bank president Paul Wolfowitz assiduously pursued corruption allegations against bank officials and member countries, though he was accused of selectively targeting people and states that did not endorse U.S. policy objectives. Wolfowitz's tenure was ultimately cut short by charges of unethical conduct involving a romantic partner formerly employed by the bank. Moreover, his position seemed precarious from the outset
because of his association with increasingly unpopular policies of the George W. Bush administration. Unfortunately, neither Wolfowitz’s extensive background in public service nor his formidable intellect proved sufficient to rescue his beleaguered administration. World Bank observers hope that Robert Zoellick, a man with laudable international credentials and solid professional credibility, will be able to mend the reputation of an organization that plays a crucial role in global economic development and the alleviation of poverty. Further Reading Gilbert, Christopher L. The World Bank. New York: Cambridge University Press, 2006; Gilpin, Robert. The Political Economy of International Relations. Princeton, N.J.: Princeton University Press, 1987; Guide to the World Bank. Washington, D.C.: World Bank Publications, 2007; Krugman, Paul. Pop Internationalism. Cambridge, Mass.: MIT Press, 1997; Udell, Gregory F. Principles of Money, Banking, and Financial Markets. New York: Addison Wesley Longman, 1999; Woods, Ngaire. The Globalizers. Ithaca, N.Y.: Cornell University Press, 2006. —Tomislav Han
World Trade Organization (WTO)
The World Trade Organization (WTO) is an international organization intended to contribute to the development of a global capitalist market system by removing barriers to free trade. In the aftermath of the Great Depression of the 1930s and the devastation of World War II, the United States led the way to the creation of a new international system that would rest on open markets and free trade. The International Monetary Fund (IMF) and World Bank were created in 1944 as part of the Bretton Woods system (named after the resort town in New Hampshire where the basic agreements were signed), which took shape alongside the founding of the United Nations. The initial attempt to create a global trade organization was unsuccessful, and a provisional arrangement, the General Agreement on Tariffs and Trade (GATT), was established to fill the gap. It was not until 1995 that the WTO replaced GATT, and a fully functioning international organization devoted to removing barriers to free trade was inaugurated.
There are two major types of obstacles to a world market in which everyone could compete solely on the basis of the price and quality of their products: tariffs and "nontariff barriers" (NTBs). Tariffs are taxes imposed by governments on products imported from abroad. In the distant past, tariffs were a major source of government revenues; today they are primarily designed to protect a country's own businesses from foreign competition by explicitly making foreign products more expensive. The GATT and then the WTO have sponsored a series of very successful international negotiations in which governments agreed to reduce and even eliminate many tariffs. While many tariffs remain, especially those levied by third world countries to protect their own industries, the drive to reduce and eliminate them continues to make progress.

As tariffs have dwindled in importance as barriers to free global trade, a new class of obstacles has emerged: nontariff barriers. Tariffs are part of a country's tax code and easy to identify. Nontariff barriers are far more pervasive in world markets today but far more difficult to identify because they are not formal taxes and are often defended by governments as reasonable health and safety regulations or as part of a national culture. Conflicts over NTBs and other unfair trading practices such as dumping are likely to be major items on the international agenda for decades.

Three examples illustrate the sometimes ambiguous nature of NTBs. Toys produced in China and shipped to the United States, for example, have to meet U.S. standards for safety. This makes them more expensive to manufacture than if they merely had to meet China's own standards. This fact is not regarded as a nontariff barrier but as a reasonable application of health and safety standards by a national government. In the late 1990s, the United States and the European Union charged that South Korea was using taxes on imported whiskey as an NTB. Korean law levied a relatively high tax on whiskey and other distilled liquor, nearly all of it imported from other countries, and a much lower tax on soju, a popular Korean alcoholic beverage. To the U.S. and European governments, the difference in tax rates was meant to give Korean soju producers an unfair advantage in the market. The Korean government argued that soju was part of the national culture, and even though its alcoholic content was similar
to whiskey, it was really more like beer or wine than hard liquor. In the end, the World Trade Organization concluded that the different tax rates were not culturally determined but were a deliberate nontariff trade barrier.

In contrast to the case of toy safety standards, which are widely accepted as not being an NTB, and the Korean soju case, in which it was determined that an NTB did exist, the difference between the way the United States and the European Union treat genetically modified foods is a good example of the difficulty of determining what is and is not an NTB. In the United States, genetically modified plants, such as new varieties of corn, are regulated by the Department of Agriculture with little distinction between varieties produced by manipulating genes in the laboratory and varieties produced by traditional cross-breeding. As long as there is no evidence of potential harm to consumers, they can be used in products and do not require special labeling. In the European Union, on the other hand, genetically modified corn is treated as a potential hazard and must be proven to be safe, or products containing it must carry prominent warning labels. European farmers do not grow genetically modified corn; American farmers do and would like to be able to export their crop to Europe. Large, mechanized American farms are more productive and efficient than European farms, and many crops can be grown in the United States, shipped to Europe, and still sell for less. But a European shopper, faced with two packages of cornflakes, one with all European corn and the other with genetically modified American corn and a big red warning label, is going to avoid the American product. The government of the United States has consistently argued that this is a blatant nontariff barrier, meant to discriminate against American crops by scaring consumers needlessly. The European Union has argued that it is better to be safe than sorry, that European consumers are not sure genetically modified corn is safe, and that it is only fair to warn them.

The WTO is structured to offer solutions to disputes over unfair trade if the countries that belong to it choose to use it. The WTO does two things that contribute to a free and open world market. First, the WTO provides a setting in which representatives of its more than 150 member countries try to negotiate mutually beneficial rules for conducting and regulating
world trade. Some of those discussions are quite technical and involve very specific issues; others are broad and have sweeping implications for world trade. For example, the 2001 meeting of leading national officials in Doha, Qatar, began an ambitious effort to negotiate the phased elimination of the most important remaining global tariffs, those in agriculture. Contentious and difficult bargaining has continued at subsequent WTO ministerial meetings in Cancun, Geneva, and Hong Kong. When the WTO organizes meetings and its staff provides position papers and technical expertise, it is performing a function very similar to that of many other international organizations involved in promoting cooperation among nations.

The WTO also offers a way of resolving specific disputes among countries that is a distinctive contribution to international relations. The WTO dispute resolution process begins when one country brings an accusation of unfair trading or manipulation of NTBs against a second country. This usually happens after the two sides have tried to negotiate a settlement between themselves through diplomatic and political channels, but it can also be used if one country simply refuses to negotiate with the other. The WTO establishes a panel of impartial experts on international trade and economics who review the facts presented by both sides and arrive at a judgment. A significant weakness in the WTO process is that enforcement of its judgments is left to the countries involved. The WTO itself cannot force a country to change its laws or policies. Instead, the winning party in a WTO case is free to take steps of its own, perhaps instituting retaliatory tariffs on products from the other country, to convince it to comply. More often than not, such measures do work, and countries do adjust their behavior.

The WTO has become a focus for controversy and protest, often as part of larger campaigns against the perceived ills of globalization. The WTO's friends and supporters point to its success in reducing tariffs, opening up markets to both rich and poor countries, giving poor countries a voice in the rules of the global trading game, and expanding global wealth. The WTO's critics charge that the rule-making meetings of the organization are inherently undemocratic,
with the rich first world countries dominating the process and the meetings cloaked in secrecy. The outcomes, the critics say, are not fair and balanced global rules. Delegates to WTO rule-making sessions tend to be drawn from national economic ministries and the ranks of professional economists, and the members of dispute resolution panels are picked for their expertise in economic analysis. As a result, the WTO's critics charge that critically important considerations, such as environmental issues, human rights, and standards for the treatment of workers, get overlooked or deliberately ignored in favor of short-term, narrowly defined economic values. In general, the charges leveled against the WTO and the defenses produced by its friends are embedded in the much larger set of issues and debates surrounding globalization. The WTO, the International Monetary Fund, and the World Bank are the major international organizations at the heart of global economic patterns and the controversies surrounding the present and future of the global economy. As globalization continues to unfold around the world and its consequences, both positive and negative, become more apparent, the WTO will continue to be a very busy international organization and a controversial one. Further Reading Bhagwati, Jagdish, et al., eds. The Uruguay Round and Beyond: Essays in Honor of Arthur Dunkel. Ann Arbor: University of Michigan Press, 1998; Colgan, Jeff. The Promise and Peril of International Trade. Peterborough, Ont.; Orchard Park, N.Y.: Broadview Press, 2005; Dine, Janet. Companies, International Trade, and Human Rights. Cambridge: Cambridge University Press, 2005; Dowlah, Caf. Backwaters of Global Prosperity: How Forces of Globalization and GATT/WTO Trade Regimes Contribute to the Marginalization of the World's Poorest Nations. Westport, Conn.: Praeger, 2004; Evenett, Simon, and Bernard Hoekman, eds. Economic Development and Multilateral Trade Cooperation. Houndmills, U.K.: Palgrave Macmillan, 2006; Jones, Kent. Who's Afraid of the WTO? New York: Oxford University Press, 2004. —Seth Thompson
SELECTED BIBLIOGRAPHY
FOUNDATIONS AND BACKGROUND OF U.S. GOVERNMENT Ackerman, Bruce A. We the People: Foundations. Cambridge, Mass.: Harvard University Press, 1991. Bailyn, Bernard. The Ideological Origins of the American Revolution. Cambridge, Mass.: Harvard University Press, 1967. Beard, Charles A. An Economic Interpretation of the Constitution of the United States. New York: Free Press, 1913.
Elkins, Stanley, and Eric McKitrick. The Age of Federalism: The Early American Republic 1788–1800. New York: Oxford University Press, 1993. Fehrenbacher, Don E. The Slaveholding Republic. Oxford: Oxford University Press, 2001. Ferling, John. A Leap in the Dark: The Struggle to Create the American Republic. New York: Oxford University Press, 2003. Friedman, Lawrence M. A History of American Law. New York: Simon & Schuster, 1985.
Berkin, Carol. A Brilliant Solution: Inventing the American Constitution. New York: Harcourt, 2002.
Frohnen, Bruce, ed. The American Republic. Indianapolis: Liberty Fund, 2002.
Bliss, Robert M. Revolution and Empire: English Politics and the American Colonies in the Seventeenth Century. New York: Manchester University Press, 1990.
Gillman, Howard. The Constitution Besieged: The Rise and Demise of Lochner Era Police Powers Jurisprudence. Durham, N.C.: Duke University Press, 1993.
Bodenhamer, David J., and James W. Ely, Jr. The Bill of Rights in Modern America after Two-Hundred Years. Bloomington: Indiana University Press, 1993. Dahl, Robert A. How Democratic Is the American Constitution? New Haven, Conn.: Yale University Press, 2002. Dworkin, Ronald. Law's Empire. Cambridge, Mass.: Harvard University Press, 1986.
Hall, Kermit L. The Magic Mirror: Law in American History. Oxford: Oxford University Press, 1989. Held, David. Models of Democracy. 3rd ed. Palo Alto, Calif.: Stanford University Press, 2006. Henderson, H. James. Party Politics in the Continental Congress. Lanham, Md.: University Press of America, 2002.
Horton, James Oliver, and Lois E. Horton. Slavery and the Making of America. Oxford: Oxford University Press, 2005.
Shearer, Benjamin F., ed. The Uniting States. The Story of Statehood for the Fifty United States. Volumes 1-3. Westport, Conn.: Greenwood Press, 2004.
Hoffert, Robert W. A Politics of Tensions: The Articles of Confederation and American Political Ideas. Niwot: University Press of Colorado, 1992.
Storing, Herbert J. What the Anti-Federalists Were For. Chicago: University of Chicago Press, 1981.
Jensen, Merrill. The Articles of Confederation: An Interpretation of the Social-Constitutional History of the American Revolution. Madison: University of Wisconsin Press, 1970. Kahn, Paul W. Legitimacy and History: Self-Government in American Constitutional Theory. New Haven, Conn.: Yale University Press, 1993. Kelly, J. M. A Short History of Western Legal Theory. Oxford: Oxford University Press, 1992. Ketcham, Ralph. The Anti-Federalist Papers. New York: New American Library, 1986. Levinson, Sanford. Constitutional Faith. Princeton, N.J.: Princeton University Press, 1988.
Whittington, Keith E. Constitutional Construction: Divided Powers and Constitutional Meaning. Cambridge, Mass.: Harvard University Press, 2001. Wills, Garry. Inventing America: Jefferson’s Declaration of Independence. Boston: Houghton Mifflin, 2002. Wood, Gordon S. The Creation of the American Republic. Chapel Hill: University of North Carolina Press, 1969. Wood, Gordon S. The Radicalism of the American Revolution. New York: Vintage Books, 1991. Zuckert, Michael P. Natural Rights and the New Republicanism. Princeton, N.J.: Princeton University Press, 1994.
Levy, Leonard. Origins of the Bill of Rights. New Haven, Conn.: Yale University Press, 1999.
CIVIL RIGHTS AND CIVIC RESPONSIBILITIES
Locke, John. Two Treatises of Government. Edited by Peter Laslett. Cambridge: Cambridge University Press, 1987.
Abraham, Henry J., and Barbara A. Perry. Freedom and the Court: Civil Rights and Liberties in the United States. 8th ed. Lawrence: University Press of Kansas, 2003.
Madison, James, Alexander Hamilton, and John Jay. The Federalist Papers. New York: New American Library, 1961.
Abramson, Jeffrey. We, the Jury: The Jury System and the Ideal of Democracy. New York: Basic Books, 2000.
McDonald, Forrest. States’ Rights and the Union. Lawrence: University Press of Kansas, 2000.
Alderman, Ellen, and Caroline Kennedy. The Right to Privacy. New York: Alfred A. Knopf, 1995.
Nagel, Robert F. The Implosion of American Federalism. New York: Oxford University Press, 2002.
Anderson, Terry H. The Pursuit of Fairness: A History of Affirmative Action. New York: Oxford University Press, 2004.
Reid, John Phillip. Constitutional History of the American Revolution: The Authority of Rights. Madison: University of Wisconsin Press, 1986.
Baer, Judith A. Our Lives before the Law: Constructing a Feminist Jurisprudence. Princeton, N.J.: Princeton University Press, 1999.
Rossiter, Clinton. 1787: The Grand Convention. New York: Macmillan, 1966.
Branch, Taylor. At Canaan's Edge: America in the King Years, 1965–1968. New York: Simon & Schuster, 2006.
Branch, Taylor. Parting the Waters: America in the King Years, 1954–1963. New York: Simon & Schuster, 1988. Branch, Taylor. Pillar of Fire: America in the King Years, 1963–1965. New York: Simon & Schuster, 1998. Butler, Judith. Gender Trouble: Feminism and the Subversion of Identity. New York: Routledge, 1990. Cahn, Steven M., ed. The Affirmative Action Debate. New York: Routledge, 2002. Chang, Gordon H., ed. Asian Americans and Politics. Stanford, Calif.: Stanford University Press, 2001. Coetzee, J. M. Giving Offense: Essays on Censorship. Chicago: University of Chicago Press, 1996. Cook, Timothy E., ed. Freeing the Presses: The First Amendment in Action. Baton Rouge: Louisiana State University Press, 2005. Daniels, Roger. Coming to America: A History of Immigration and Ethnicity in American Life. 2nd ed. Princeton, N.J.: Perennial, 2002. Gerstmann, Evan. Same-Sex Marriage and the Constitution. New York: Cambridge University Press, 2004. Gutmann, Amy, ed. Freedom of Association. Princeton, N.J.: Princeton University Press, 1998. Hammond, Phillip E. With Liberty for All: Freedom of Religion in the United States. Louisville, Ky.: Westminster John Knox Press, 1998.
Jonakait, Randolph N. The American Jury System. New Haven, Conn.: Yale University Press, 2003. Lee, Francis Graham. Equal Protection: Rights and Liberties under the Law. Santa Barbara, Calif.: ABC-CLIO, 2003. Lehman, Godfrey D. We the Jury: The Impact of Jurors on Our Basic Freedoms: Great Jury Trials of History. Amherst, N.Y.: Prometheus Books, 1997. Lewis, Anthony. Gideon's Trumpet. New York: Random House, 1964. Lewis, Anthony. Make No Law: The Sullivan Case and the First Amendment. New York: Vintage Books, 1991. Magee, James J. Freedom of Expression. Westport, Conn.: Greenwood Press, 2002. Marable, Manning. Race, Reform, and Rebellion: The Second Reconstruction in Black America, 1945–1990. Jackson: University Press of Mississippi, 1991. Mezey, Susan Gluck. Disabling Interpretations: The Americans with Disabilities Act in Federal Court. Pittsburgh: University of Pittsburgh Press, 2005. Mohr, Richard D. The Long Arc of Justice: Lesbian and Gay Marriage, Equality, and Rights. New York: Columbia University Press, 2005. Pember, Don R., and Clay Calvert. Mass Media Law 2007–2008. Boston: McGraw Hill, 2007.
Hoff, Joan. Law, Gender, and Injustice: A Legal History of U.S. Women. New York: New York University Press, 1991.
Perry, Michael J. We the People: The Fourteenth Amendment and the Supreme Court. New York: Oxford University Press, 1999.
Hubbart, Phillip A. Making Sense of Search and Seizure Law: A Fourth Amendment Handbook. Durham, N.C.: Carolina Academic Press, 2005.
Rosen, Ruth. The World Split Open: How the Modern Women’s Movement Changed America. New York: Penguin Books, 2000.
Israel, Jerold H., Yale Kamisar, Wayne R. LaFave, and Nancy J. King. Criminal Procedure and the Constitution. St. Paul, Minn.: Thomson West, 2006.
Segers, Mary C., and Ted G. Jelen. A Wall of Separation? Debating the Public Role of Religion. Lanham, Md.: Rowman & Littlefield, 1998.
Segura, Gary M., and Shaun Bowler, eds. Diversity in Democracy: Minority Representation in the United States. Charlottesville: University of Virginia Press, 2005.
Crigler, Ann N., Marion R. Just, and Edward McCaffery, eds. Rethinking the Vote. New York: Oxford University Press, 2004.
Shull, Steven A. American Civil Rights Policy from Truman to Clinton: The Role of Presidential Leadership. Armonk, N.Y.: M.E. Sharpe, 1999.
Corrado, Anthony, Thomas E. Mann, Daniel R. Ortiz, and Trevor Potter. The New Campaign Finance Sourcebook. Washington, D.C.: Brookings Institution Press, 2005.
Stuart, Gary L. Miranda: The Story of America's Right to Remain Silent. Tucson: University of Arizona Press, 2004.
Downs, Anthony. An Economic Theory of Democracy. New York: Harper, 1957.
Sunstein, Cass R. Democracy and the Problem of Free Speech. New York: The Free Press, 1995.
Dwyre, Diana, and Victoria A. Farrar-Myers. Legislative Labyrinth: Congress and Campaign Finance Reform. Washington, D.C.: CQ Press, 2001.
Wilkins, David E. American Indian Politics and the American Political System. Lanham, Md.: Rowman & Littlefield, 2002.
Edelman, Murray. Constructing the Political Spectacle. Chicago: University of Chicago Press, 1988.
Witte, John, Jr. Religion and the American Constitutional Experiment: Essential Rights and Liberties. Boulder, Colo.: Westview Press, 2000.
Emery, Michael, and Edwin Emery. The Press and America: An Interpretive History of the Mass Media. 8th ed. Boston: Allyn & Bacon, 1996.
POLITICAL PARTICIPATION
Fenno, Richard F., Jr. Home Style: House Members in Their Districts. Boston: Little, Brown, 1978.
Alexander, Herbert E. Financing Politics: Money, Elections, and Political Reform. 4th ed. Washington, D.C.: CQ Press, 1992. Ansolabehere, Stephen, and Shanto Iyengar. Going Negative: How Attack Ads Shrink and Polarize the Electorate. New York: The Free Press, 1995. Asher, Herbert. Polling and the Public: What Every Citizen Should Know. 6th ed. Washington, D.C.: Congressional Quarterly Press, 2004. Bagdikian, Ben H. The New Media Monopoly. Boston: Beacon Press, 2004.
Franklin, Mark N. Voter Turnout and the Dynamics of Electoral Competition in Established Democracies since 1945. Cambridge: Cambridge University Press, 2004. Genovese, Michael A., and Matthew J. Streb, eds. Polls and Politics: The Dilemmas of Democracy. Albany: State University of New York Press, 2004. Gierzynski, Anthony. Money Rules: Financing Elections in America. Boulder, Colo.: Westview Press, 2000.
Bennett, W. Lance. News: The Politics of Illusion. 6th ed. New York: Pearson Longman, 2005.
Graber, Doris. Mass Media and American Politics. 7th ed. Washington, D.C.: Congressional Quarterly Press, 2006.
Campbell, Angus, Philip E. Converse, Warren E. Miller, and Donald E. Stokes. The American Voter. Chicago: University of Chicago Press, 1960.
Hamilton, James T. All the News That’s Fit to Sell: How the Market Transforms Information Into News. Princeton, N.J.: Princeton University Press, 2004.
Herrnson, Paul S. Congressional Elections: Campaigning at Home and in Washington. 4th ed. Washington, D.C.: Congressional Quarterly Press, 2004. Herrnson, Paul S., Ronald G. Shaiko, and Clyde Wilcox, eds. The Interest Group Connection: Electioneering, Lobbying, and Policymaking in Washington. Chatham, N.J.: Chatham House, 1998. Hill, David B. American Voter Turnout: An Institutional Perspective. Boulder, Colo.: Westview Press, 2005. Jacobson, Gary C. The Politics of Congressional Elections. New York: Longman, 2003. Jamieson, Kathleen Hall, and Paul Waldman. The Press Effect: Politicians, Journalists, and the Stories That Shape the Political World. New York: Oxford University Press, 2003. Kaid, Lynda Lee, and Anne Johnston. Videostyle in Presidential Campaigns: Style and Content of Televised Political Advertising. Westport, Conn.: Praeger, 2000. Maisel, L. Sandy, and Kara Z. Buckley. Parties and Elections in America: The Electoral Process. 4th ed. New York: Rowman & Littlefield, 2005. Malbin, Michael J., ed. The Election after Reform: Money, Politics, and the Bipartisan Campaign Reform Act. Lanham, Md.: Rowman & Littlefield, 2006. Mayhew, David R. Congress: The Electoral Connection. New Haven, Conn.: Yale University Press, 1974. Meyer, David S. The Politics of Protest: Social Movements in America. New York: Oxford University Press, 2006. Page, Benjamin I., and Robert Y. Shapiro. The Rational Public: Fifty Years of Trends in Americans' Policy Preferences. Chicago: University of Chicago Press, 1992. Paletz, David L. The Media in American Politics: Contents and Consequences. 2nd ed. New York: Longman, 2002.
Patterson, Thomas E. Out of Order. New York: Vintage Books, 1994. Putnam, Robert. Bowling Alone: The Collapse and Revival of American Community. New York: Simon & Schuster, 2000. Savage, Sean J. JFK, LBJ, and the Democratic Party. Albany: State University of New York Press, 2004. Semiatin, Richard J. Campaigns in the 21st Century. New York: McGraw-Hill, 2005. Wayne, Stephen J. Is This Any Way to Run a Democratic Election? 2nd ed. Boston: Houghton Mifflin, 2003. Wayne, Stephen J. The Road to the White House: The Politics of Presidential Elections. 8th ed. Boston: Thomson Wadsworth, 2008. West, Darrell M. Air Wars: Television Advertising in Election Campaigns, 1952–2004. 4th ed. Washington, D.C.: Congressional Quarterly Press, 2005.
LEGISLATIVE BRANCH

Baker, Ross K. House and Senate. 3rd ed. New York: W.W. Norton, 2001. Berg, John C. Unequal Struggle: Class, Gender, Race, and Power in the U.S. Congress. Boulder, Colo.: Westview Press, 1994. Brown, Sherrod. Congress from the Inside: Observations from the Majority and the Minority. 3rd ed. Kent, Ohio: Kent State University Press, 2004. Davidson, Roger H., Susan Webb Hammond, and Raymond W. Smock. Masters of the House: Congressional Leadership over Two Centuries. Boulder, Colo.: Westview Press, 1998. Davidson, Roger H., and Walter J. Oleszek. Congress & Its Members. 10th ed. Washington, D.C.: Congressional Quarterly Press, 2006. Fenno, Richard F., Jr. Home Style: House Members in Their Districts. Boston: Little, Brown, 1978.
Fenno, Richard F. The Power of the Purse: Appropriations Politics in Congress. Boston: Little, Brown, 1966.
Sinclair, Barbara. Unorthodox Lawmaking: New Legislative Processes in the U.S. Congress. Washington, D.C.: Congressional Quarterly Press, 2000.
Fiorina, Morris P. Congress: Keystone of the Washington Establishment. New Haven, Conn.: Yale University Press, 1989.
Smith, Steven S., Jason M. Roberts, and Ryan J. Vander Wielen. The American Congress. 4th ed. Cambridge: Cambridge University Press, 2006.
Fisher, Louis. Constitutional Conflicts between Congress and the President. 4th ed. Lawrence: University Press of Kansas, 1997.
Arnold, Peri E. Making the Managerial Presidency. Lawrence: University Press of Kansas, 1998.
Frisch, Scott A. The Politics of Pork: A Study of Congressional Appropriation Earmarks. New York: Garland, 1998. Gertzog, Irwin N. Women and Power on Capitol Hill. Boulder, Colo.: Lynne Rienner Publishers, 2004. Hamilton, Lee H. How Congress Works and Why You Should Care. Bloomington: Indiana University Press, 2004. Lublin, David. The Paradox of Representation: Racial Gerrymandering and Minority Interests in Congress. Princeton, N.J.: Princeton University Press, 1997. Mayhew, David R. Congress: The Electoral Connection. New Haven, Conn.: Yale University Press, 1974. Mayhew, David R. Divided We Govern: Party Control, Lawmaking, and Investigations 1946–1990. New Haven, Conn.: Yale University Press, 1991. O’Connor, Karen, ed. Women and Congress: Running, Winning, and Ruling. New York: Haworth Press, 2001.
EXECUTIVE BRANCH
Baker, Nancy V. Conflicting Loyalties: Law and Politics in the Office of Attorney General, 1789–1990. Lawrence: University Press of Kansas, 1993. Burke, John P. The Institutional Presidency. Baltimore: Johns Hopkins University Press, 1992. Cronin, Thomas E., and Michael A. Genovese. The Paradoxes of the American Presidency. 2nd ed. New York: Oxford University Press, 2004. Edwards, George C. On Deaf Ears: The Limits of the Bully Pulpit. New Haven, Conn.: Yale University Press, 2003. Edwards, George C. III. The Public Presidency: The Pursuit of Popular Support. New York: St. Martin’s Press, 1983. Eshbaugh-Soha, Matthew. The President’s Speeches: Beyond “Going Public.” Boulder, Colo.: Lynne Rienner Publishers, 2006. Gelderman, Carol. All the Presidents’ Words: The Bully Pulpit and the Creation of the Virtual Presidency. New York: Walker & Company, 1997. Gergen, David. Eyewitness to Power: The Essence of Leadership. New York: Simon & Schuster, 2000.
Oleszek, Walter J. Congressional Procedures and the Policy Process. 6th ed. Washington, D.C.: Congressional Quarterly Press, 2004.
Greenstein, Fred I. The Presidential Difference: Leadership Style From FDR to George W. Bush. 2nd ed. Princeton, N.J.: Princeton University Press, 2004.
Rosenthal, Cindy Simon, ed. Women Transforming Congress. Norman: University of Oklahoma Press, 2002.
Han, Lori Cox. Governing from Center Stage: White House Communication Strategies during the Television Age of Politics. Cresskill, N.J.: Hampton Press, 2001.
Hart, Roderick P. The Sound of Leadership: Presidential Communication in the Modern Age. Chicago: University of Chicago Press, 1987.
Warshaw, Shirley Anne. Powersharing: White House–Cabinet Relations in the Modern Presidency. Albany: State University of New York Press, 1996.
Hess, Stephen, and James P. Pfiffner. Organizing the Presidency. 3rd ed. Washington, D.C.: Brookings Institution, 2002.
JUDICIAL BRANCH
Jones, Charles O. The Presidency in a Separated System. Washington, D.C.: Brookings Institution, 1994.
Abraham, Henry J. Justices, Presidents, and Senators: A History of the U.S. Supreme Court Appointments from Washington to Clinton. Lanham, Md.: Rowman & Littlefield, 1999.
Kernell, Samuel. Going Public: New Strategies of Presidential Leadership. 4th ed. Washington, D.C.: Congressional Quarterly Press, 2007.
Baum, Lawrence. The Supreme Court. 8th ed. Washington, D.C.: Congressional Quarterly Press, 2004.
Kessel, John H. Presidents, the Presidency, and the Political Environment. Washington, D.C.: Congressional Quarterly Press, 2001.
Bedau, Hugo Adam, and Paul G. Cassell. Debating the Death Penalty: Should America Have Capital Punishment? The Experts on Both Sides Make Their Best Case. New York: Oxford University Press, 2004.
Kumar, Martha Joynt, and Terry Sullivan, eds. The White House World: Transitions, Organization, and Office Operations. College Station: Texas A&M University Press, 2003. Lammers, William W., and Michael A. Genovese. The Presidency and Domestic Policy: Comparing Leadership Styles, FDR to Clinton. Washington, D.C.: Congressional Quarterly Press, 2000. Mayer, Kenneth R. With the Stroke of a Pen: Executive Orders and Presidential Power. Princeton, N.J.: Princeton University Press, 2001. Neustadt, Richard E. Presidential Power and the Modern Presidents. New York: Free Press, 1990. Pfiffner, James P. The Managerial Presidency. College Station: Texas A&M University Press, 1999. Skowronek, Stephen. The Politics Presidents Make: Leadership from John Adams to George Bush. Cambridge, Mass.: Belknap/Harvard Press, 1993. Tulis, Jeffrey K. The Rhetorical Presidency. Princeton, N.J.: Princeton University Press, 1987. Warshaw, Shirley Anne. The Keys to Power: Managing the Presidency. 2nd ed. New York: Longman, 2004.
Bickel, Alexander. The Least Dangerous Branch: The Supreme Court at the Bar of Politics. New Haven, Conn.: Yale University Press, 1962. Carp, Robert A., and Ronald Stidham. The Federal Courts. 4th ed. Washington, D.C.: Congressional Quarterly Press, 2001. Fisher, Louis. Military Tribunals and Presidential Power. Lawrence: University Press of Kansas, 2005. Hall, Kermit L., and Kevin T. McGuire, eds. The Judicial Branch. New York: Oxford University Press, 2005. Hart, H. L. A. The Concept of Law. Oxford: Oxford University Press, 1961. McCloskey, Robert G. The American Supreme Court. 2nd ed. Chicago: University of Chicago Press, 1994. McGuire, Kevin T. Understanding the U.S. Supreme Court: Cases and Controversies. New York: McGraw Hill, 2002.
O’Brien, David M. Storm Center: The Supreme Court in American Politics. 7th ed. New York: W.W. Norton, 2005.
Blanck, Peter, Eve Hill, Charles D. Siegal, and Michael Waterstone. Disability Civil Rights Law and Policy. St. Paul, Minn.: Thomson-West, 2004.
O’Connor, Sandra Day. The Majesty of the Law: Reflections of a Supreme Court Justice. New York: Random House, 2003.
Blank, Rebecca, and Ron Haskins, eds. The New World of Welfare. Washington, D.C.: Brookings Institution Press, 2001.
Rehnquist, William H. The Supreme Court: How It Was, How It Is. New York: William Morrow, 1987.
Bodenheimer, Thomas S., and Kevin Grumbach. Understanding Health Policy. 3rd ed. New York: McGraw Hill, 2001.
Rosenberg, Gerald N. The Hollow Hope: Can Courts Bring About Social Change? Chicago: University of Chicago Press, 1991. Segal, Jeffrey, et al. The Supreme Court Compendium: Data, Decisions, and Developments. 4th ed. Washington, D.C.: Congressional Quarterly Press, 2006. Silverstein, Mark. Judicious Choices: The New Politics of Supreme Court Confirmations. 2nd ed. New York: W.W. Norton, 2007. Ward, Artemus, and David L. Weiden. Sorcerers’ Apprentices: 100 Years of Law Clerks at the United States Supreme Court. New York: New York University Press, 2006. Warren, Kenneth F. Administrative Law in the Political System. 4th ed. Boulder, Colo.: Westview Press, 2004.
PUBLIC POLICY Altman, Stuart, and David Shactman, eds. Policies for an Aging Society. Baltimore: Johns Hopkins University Press, 2002. Balaker, Ted, and Sam Staley. The Road More Traveled: Why the Congestion Crisis Matters More Than You Think, and What We Can Do about It. Lanham, Md.: Rowman & Littlefield, 2006.
Cahn, Matthew A. Environmental Deceptions: The Tension between Liberalism and Environmental Policymaking in the United States. Albany: State University of New York Press, 1995. Daniels, Roger. Coming to America: A History of Immigration and Ethnicity in American Life. 2nd ed. Princeton, N.J.: Perennial, 2002. DiNitto, Diana M. Social Welfare: Politics and Public Policy. 6th ed. Boston: Pearson, 2007. Gaddis, John Lewis. Strategies of Containment: A Critical Appraisal of Postwar American National Security Policy. Rev. ed. New York: Oxford University Press, 2005. Hochschild, Jennifer, and Nathan Scovronick. The American Dream and the Public Schools. New York: Oxford University Press, 2003. Hoffman, Peter. Tomorrow’s Energy: Hydrogen, Fuel Cells, and the Prospects for a Cleaner Planet. Cambridge, Mass.: MIT Press, 2001. Jentleson, Bruce W. American Foreign Policy: The Dynamics of Choice in the Twenty-First Century. 2nd ed. New York: W.W. Norton, 2004.
Beckett, Katherine. Making Crime Pay: Law and Order in Contemporary American Politics. New York: Oxford University Press, 1997.
Kotlikoff, Laurence J., and Scott Burns. The Coming Generational Storm. Cambridge, Mass.: MIT Press, 2004.
Béland, Daniel. Social Security: History and Politics from the New Deal. Lawrence: University Press of Kansas, 2005.
LaFeber, Walter. The American Age: United States Foreign Policy at Home and Abroad. 2nd ed. New York: W.W. Norton, 1994.
Levi, Michael A., and Michael E. O’Hanlon. The Future of Arms Control. Washington, D.C.: Brookings Institution Press, 2005. McChesney, Robert. The Problem of the Media: U.S. Communication Politics in the Twenty-first Century. New York: Monthly Review Press, 2004. Moe, Terry M., ed. A Primer on America’s Schools. Stanford, Calif.: Hoover Institution Press, 2001. Nye, Joseph. The Paradox of American Power. New York: Oxford University Press, 2002. Rosen, Harvey. Public Finance. New York: McGraw Hill, 2004. Rosenbaum, Walter A. Environmental Politics and Policy. 6th ed. Washington, D.C.: Congressional Quarterly Press, 2006. Skidmore, Max J. Social Security and Its Enemies: The Case for America’s Most Efficient Insurance Program. Boulder, Colo.: Westview Press, 1999. Solinger, Rickie. Pregnancy and Power: A Short History of Reproductive Politics in America. New York: New York University Press, 2005. Spitzer, Robert J. The Politics of Gun Control. Washington, D.C.: Congressional Quarterly Press, 2004.
Benton, J. Edwin, and David R. Morgan. Intergovernmental Relations and Public Policy. Westport, Conn.: Greenwood Press, 1986. Burns, Nancy. The Formation of American Local Governments: Private Values in Public Institutions. Oxford: Oxford University Press, 1994. Christensen, Terry, and Tom Hogen-Esch. Local Politics: A Practical Guide to Governing at the Grassroots. Armonk, N.Y.: M.E. Sharpe, 2006. Coppa, Frank J. County Government: A Guide to Efficient and Accountable Government. Westport, Conn.: Praeger, 2000. Cronin, Thomas E. Direct Democracy: The Politics of Initiative, Referendum, and Recall. Cambridge, Mass.: Harvard University Press, 1989. Cullingworth, Barry, and Roger W. Caves. Planning in the USA. 2nd ed. New York: Routledge, 2003. Dye, Thomas R., and Susan A. MacManus. Politics in States and Communities. 12th ed. Upper Saddle River, N.J.: Pearson Prentice Hall, 2007. Ellis, Richard J. Democratic Delusions: The Initiative Process in America. Lawrence: University Press of Kansas, 2002.
Squires, Gregory D., and Sally O’Connor. Color and Money: Politics and Prospects for Community Reinvestment in Urban America. Albany: State University of New York Press, 2001.
Ferguson, Margaret R., ed. The Executive Branch of State Government. Santa Barbara, Calif.: ABC-CLIO, 2006.
Teske, Paul. Regulation in the States. Washington, D.C.: Brookings Institution Press, 2004.
Flanagan, Richard M. Mayors and the Challenge of Urban Leadership. Lanham, Md.: University Press of America, Inc., 2004.
Weaver, R. Kent. Ending Welfare as We Know It. Washington, D.C.: Brookings Institution Press, 2000.
STATE AND LOCAL GOVERNMENT

Axelrod, Donald. Shadow Government: The Hidden World of Public Authorities—and How They Control Over $1 Trillion. New York: John Wiley & Sons, 1992.
Frug, Gerald E. City Making: Building Communities without Walls. Princeton, N.J.: Princeton University Press, 1999. Gray, Virginia, Russell L. Hanson, and Herbert Jacob, eds. Politics in the American States: A Comparative
Analysis. 8th ed. Washington, D.C.: Congressional Quarterly Press, 2003. Gross, Donald A., and Robert K. Goidel. The States of Campaign Finance Reform. Columbus: Ohio State University Press, 2003. Hopkins, Lewis D. Urban Development: The Logic of Making Plans. Washington, D.C.: Island Press, 2001. Judd, Dennis R., and Todd Swanstrom. City Politics: Private Power and Public Policy. New York: Longman, 2002. Langer, Laura. Judicial Review in State Supreme Courts: A Comparative Study. Albany: State University of New York Press, 2002. Leland, Suzanne M., and Kurt Thurmaier, eds. Case Studies of City-County Consolidation: Reshaping the Local Government Landscapes. Armonk, N.Y.: M.E. Sharpe, 2004. Maddex, Robert L. State Constitutions of the United States. 2nd ed. Washington, D.C.: Congressional Quarterly Press, 2005. Matsusaka, John G. For the Many or the Few. Chicago: University of Chicago Press, 2004. Meador, Daniel J., and Frederick G. Kempin. American Courts. St. Paul, Minn.: West Publishing Company, 2000. Meyer, Jon’a, and Paul Jesilow. “Doing Justice” in the People’s Court: Sentencing by Municipal Court Judges. Albany: State University of New York Press, 1997. Mikesell, J. L. Fiscal Administration: Analysis and Applications for the Public Sector. Belmont, Calif.: Thompson/Wadsworth. 2007. Moncrief, Gary F., Peverill Squire, and Malcolm E. Jewell. Who Runs for the Legislature? Upper Saddle River, N.J.: Prentice Hall, 2001.
Morehouse, Sarah McCally, and Malcolm E. Jewell. State Politics, Parties, and Policy. Lanham, Md.: Rowman & Littlefield, 2003. O’Toole, Laurence J., Jr., ed. American Intergovernmental Relations. 4th ed. Washington, D.C.: Congressional Quarterly Press, 2007. Pelissero, John P., ed. Cities, Politics, and Policy: A Comparative Analysis. Washington, D.C.: Congressional Quarterly Press, 2003. Ross, Bernard H., and Myron A. Levine. Urban Politics: Power in Metropolitan America. Belmont, Calif.: Thompson Wadsworth, 2006. Rubin, Irene. The Politics of Public Budgeting. Washington, D.C.: Congressional Quarterly Press, 2005. Saltzstein, Alan. Governing America’s Urban Areas. Belmont, Calif.: Wadsworth-Thompson, 2003. Smith, Kevin B., Alan Greenblatt, and John Buntin. Governing States and Localities. Washington, D.C.: Congressional Quarterly Press, 2005. Syed, Anwar. The Political Theory of American Local Government. New York: Random House, 1966. Thompson, Joel A., and Gary F. Moncrief. Campaign Finance in State Legislative Elections. Washington, D.C.: Congressional Quarterly Press, 1998. Van Horn, Carl E., ed. The State of the States. Washington, D.C.: Congressional Quarterly Press, 2006. Wesalo Temel, J. The Fundamentals of Municipal Bonds. New York: John Wiley & Sons. 2001.
INTERNATIONAL POLITICS AND ECONOMICS Acheson, Keith, and Christopher J. Maule. North American Trade Disputes. Ann Arbor: University of Michigan Press, 1999. Arblaster, Anthony. The Rise and Decline of Western Liberalism. New York: Basil Blackwell, 1984.
Baehr, Peter R., and Leon Gordenker. The United Nations: Reality and Ideal. 4th ed. New York: Palgrave/Macmillan, 2005. Bartholomew, Amy, ed. Empire's Law: The American Imperial Project and the "War to Remake the World." London: Pluto Press, 2006. Bhagwati, Jagdish. In Defense of Globalization. Oxford: Oxford University Press, 2004. Calvert, Peter, and Susan Calvert. Politics and Society in the Third World. 2nd ed. New York: Longman, 2001.
Gaddis, John Lewis. The Cold War. New York: Penguin Books, 2006. Gerven, Walter. The European Union: A Polity of States and Peoples. Stanford, Calif.: Stanford University Press, 2005. Gilbert, Christopher L. The World Bank. New York: Cambridge University Press, 2006. Gregory, Paul R. Behind the Facade of Stalin’s Command Economy. Stanford, Calif.: Hoover Institution Press, 2001.
Cesarano, Filippo. Monetary Theory and Bretton Woods. New York: Cambridge University Press, 2006.
Hakim, Peter, and Robert Litan, eds. The Future of North American Integration: Beyond NAFTA. Washington, D.C.: Brookings Institution Press, 2002.
Colgan, Jeff. The Promise and Peril of International Trade. Peterborough, Ont., and Orchard Park, N.Y.: Broadview Press, 2005.
Howorth, Jolyon, ed. Defending Europe. New York: Palgrave Macmillan, 2004.
D’Amato, Anthony, and Jennifer Abbassi. International Law Today: A Handbook. Eagan, Minn.: Thomson-West, 2006. Destler, I. M. American Trade Politics. 4th ed. Washington, D.C.: Institute for International Economics, 2005. Dine, Janet. Companies, International Trade, and Human Rights. Cambridge: Cambridge University Press, 2005. Edwards, Lee. The Collapse of Communism. Stanford, Calif.: Hoover Institution Press, 2000. Falola, Toyin, and A. Genova. The Politics of the Global Oil Industry: An Introduction. Westport, Conn.: Praeger, 2005. Fasulo, Linda. An Insider’s Guide to the UN. New Haven, Conn.: Yale University Press, 2004. Friedman, Thomas L. The World Is Flat: A Brief History of the Twenty-First Century. New York: Farrar, Straus & Giroux, 2005.
Huntington, Samuel P. The Third Wave: Democratization in the Late Twentieth Century. Norman: University of Oklahoma Press, 1991. Jones, Kent. Who's Afraid of the WTO? Oxford and New York: Oxford University Press, 2004. Kaplan, Lawrence S. NATO Divided, NATO United. Greenwood, Conn.: Praeger Paperbacks, 2004. Kegley, Charles W. World Politics: Trend and Transformation. 11th ed. Belmont, Calif.: Thomson Wadsworth, 2007. Krugman, Paul. Pop Internationalism. Cambridge, Mass.: MIT Press, 1997. LaFeber, Walter. America, Russia, and the Cold War, 1945–2002. Boston: McGraw Hill, 2002. Mansfield, Edward D., and Richard Sisson, eds. Evolution of Political Knowledge: Democracy, Autonomy, and Conflict in Comparative and International Politics. Columbus: Ohio State University Press, 2004.
McGiffen, Steven. The European Union: A Critical Guide. Ann Arbor, Mich.: Pluto Press, 2005.
Siebert, Horst. The World Economy. London and New York: Routledge, 2002.
Parra, Francisco. Oil Politics: A Modern History of Petroleum. London and New York: I.B. Tauris, 2004.
Steger, Manfred B., ed. Rethinking Globalism. Lanham, Md.: Rowman & Littlefield, 2004.
Pei, Minxin. China’s Trapped Transition: The Limits of Developmental Autocracy. Cambridge, Mass.: Harvard University Press, 2006. Rawls, John. Political Liberalism. New York: Columbia University Press, 1993. Sen, Amartya. Development as Freedom. New York: Anchor Books, 2000.
Udell, Gregory F. Principles of Money, Banking, and Financial Markets. New York: Addison Wesley Longman, 1999. Woods, Ngaire. The Globalizers. Ithaca, N.Y.: Cornell University Press, 2006. Yetiv, Steve. Crude Awakenings: Global Oil Security and American Foreign Policy. Ithaca, N.Y.: Cornell University Press, 2004.
APPENDICES
DECLARATION OF INDEPENDENCE 1085
ARTICLES OF CONFEDERATION 1089
THE CONSTITUTION OF THE UNITED STATES OF AMERICA 1095
BILL OF RIGHTS 1103
OTHER AMENDMENTS TO THE CONSTITUTION 1105
DECLARATION OF INDEPENDENCE
Action of Second Continental Congress, July 4, 1776. The unanimous Declaration of the thirteen United States of America. We hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty, and the pursuit of Happiness. That to secure these rights, Governments are instituted among Men, deriving their just powers from the consent of the governed. That whenever any Form of Government becomes destructive of these ends, it is the Right of the People to alter or to abolish it, and to institute new Government, laying its foundation on such principles and organizing its powers in such form, as to them shall seem most likely to effect their Safety and Happiness. Prudence, indeed, will dictate that Governments long established should not be changed for light and transient causes; and accordingly all experience hath shown, that mankind are more disposed to suffer, while evils are sufferable, than to right themselves by abolishing the forms to which they are accustomed. But when a long train of abuses and usurpations, pursuing invariably the same Object, evinces a design to reduce them under absolute Despotism, it is their right, it is their duty, to throw off such Government, and to provide new
Guards for their future security. Such has been the patient sufferance of these Colonies; and such is now the necessity which constrains them to alter their former Systems of Government. The history of the present King of Great Britain is a history of repeated injuries and usurpations, all having in direct object the establishment of an absolute Tyranny over these States. To prove this, let Facts be submitted to a candid world. HE has refused his Assent to Laws, the most wholesome and necessary for the public good. HE has forbidden his Governors to pass Laws of immediate and pressing importance, unless suspended in their operation till his Assent should be obtained; and when so suspended, he has utterly neglected to attend to them. HE has refused to pass other Laws for the accommodation of large districts of people, unless those people would relinquish the right of Representation in the Legislature, a right inestimable to them and formidable to tyrants only. HE has called together legislative bodies at places unusual, uncomfortable, and distant from the depository
of their public Records, for the sole purpose of fatiguing them into compliance with his measures.
FOR quartering large bodies of armed troops among us:
HE has dissolved Representative Houses repeatedly, for opposing with manly firmness his invasions on the rights of the people.
FOR protecting them, by a mock Trial, from Punishment for any Murders which they should commit on the Inhabitants of these States:
HE has refused for a long time, after such dissolutions, to cause others to be elected; whereby the Legislative powers, incapable of Annihilation, have returned to the People at large for their exercise; the State remaining in the mean time exposed to all the dangers of invasion from without, and convulsion within.
FOR cutting off our Trade with all parts of the world:
HE has endeavoured to prevent the population of these States; for that purpose obstructing the Laws of Naturalization of Foreigners; refusing to pass others to encourage their migrations hither, and raising the conditions of new Appropriations of Lands. HE has obstructed the Administration of Justice, by refusing his Assent to Laws for establishing Judiciary powers. HE has made Judges dependent on his Will alone, for the tenure of their offices, and the amount and payment of their salaries. HE has erected a multitude of New Offices, and sent hither swarms of Officers to harass our People, and eat out their substance. HE has kept among us, in times of peace, Standing Armies without the Consent of our legislatures. HE has affected to render the Military independent of and superior to the Civil power. HE has combined with others to subject us to a jurisdiction foreign to our constitution, and unacknowledged by our laws; giving his Assent to their Acts of pretended Legislation:
FOR imposing Taxes on us without our Consent: FOR depriving us in many cases, of the benefits of Trial by Jury: FOR transporting us beyond Seas to be tried for pretended offences: FOR abolishing the free System of English Laws in a neighbouring Province, establishing therein an Arbitrary government, and enlarging its Boundaries so as to render it at once an example and fit instrument for introducing the same absolute rule into these Colonies: FOR taking away our Charters, abolishing our most valuable Laws, and altering fundamentally the Forms of our Governments: FOR suspending our own Legislatures, and declaring themselves invested with power to legislate for us in all cases whatsoever. HE has abdicated Government here, by declaring us out of his Protection and waging War against us. HE has plundered our seas, ravaged our Coasts, burnt our towns, and destroyed the Lives of our people. HE is at this time transporting large armies of foreign mercenaries to compleat the works of death, desolation and tyranny, already begun with circumstances of Cruelty & perfidy scarcely paralleled in the most barbarous ages, and totally unworthy the Head of a civilized nation.
HE has constrained our fellow Citizens taken Captive on the high Seas to bear Arms against their Country, to become the executioners of their friends and Brethren, or to fall themselves by their Hands. HE has excited domestic insurrections amongst us, and has endeavoured to bring on the inhabitants of our frontiers, the merciless Indian Savages, whose known rule of warfare, is an undistinguished destruction of all ages, sexes and conditions. IN every stage of these Oppressions We have Petitioned for Redress in the most humble terms: Our repeated Petitions have been answered only by repeated injury. A Prince, whose character is thus marked by every act which may define a Tyrant, is unfit to be the ruler of a free people. NOR have We been wanting in attention to our British brethren. We have warned them from time to time of attempts by their legislature to extend an unwarrantable jurisdiction over us. We have reminded them of the circumstances of our emigration and settlement here. We have appealed to their native justice and magnanimity, and we have conjured them by the ties of our common kindred to disavow these usurpations, which would inevitably interrupt our connections and correspondence. They too have been deaf to the voice of justice and of consanguinity. We must, therefore, acquiesce in the necessity, which denounces our Separation, and hold them, as we hold the rest of mankind, Enemies in War, in Peace Friends. WE, therefore, the Representatives of the UNITED STATES OF AMERICA, in GENERAL CONGRESS, Assembled, appealing to the Supreme Judge of the world for the rectitude of our intentions, do, in the Name, and by Authority of the good People of these Colonies, solemnly publish and declare, That these United Colonies are, and of Right ought to be FREE AND INDEPENDENT STATES; that they are Absolved from all Allegiance to the British Crown, and that all political connection between them and the State of Great Britain, is and ought to be totally dissolved; and that as FREE AND INDEPENDENT STATES, they have full Power to levy War, conclude Peace, contract Alliances, establish
Commerce, and to do all other Acts and Things which INDEPENDENT STATES may of right do. And for the support of this Declaration, with a firm reliance on the Protection of Divine Providence, we mutually pledge to each other our Lives, our Fortunes and our sacred Honor.
John Hancock.
Georgia: Button Gwinnett, Lyman Hall, Geo. Walton
North Carolina: William Hooper, Joseph Hewes, John Penn
South Carolina: Edward Rutledge, Thomas Heyward, Jr., Thomas Lynch, Jr., Arthur Middleton
Maryland: Samuel Chase, William Paca, Thomas Stone, Charles Carroll of Carrollton
Virginia: George Wythe, Richard Henry Lee, Thomas Jefferson, Benjamin Harrison, Thomas Nelson, Jr., Francis Lightfoot Lee, Carter Braxton
Pennsylvania: Robert Morris, Benjamin Rush, Benjamin Franklin, John Morton, George Clymer, James Smith, George Taylor, James Wilson, George Ross
Delaware: Caesar Rodney, George Read, Thomas M'Kean
New York: William Floyd, Philip Livingston, Francis Lewis, Lewis Morris
New Jersey: Richard Stockton, John Witherspoon, Francis Hopkinson, John Hart, Abraham Clark
New Hampshire: Josiah Bartlett, William Whipple, Matthew Thornton
Massachusetts-Bay: Samuel Adams, John Adams, Robert Treat Paine, Elbridge Gerry
Rhode Island: Stephen Hopkins, William Ellery
Connecticut: Roger Sherman, Samuel Huntington, William Williams, Oliver Wolcott
IN CONGRESS, JANUARY 18, 1777.
ARTICLES OF CONFEDERATION
Agreed to by Congress November 15, 1777; ratified and in force March 1, 1781. Preamble To all to whom these Presents shall come, we the undersigned Delegates of the States affixed to our Names send greeting. Articles of Confederation and perpetual Union between the states of New Hampshire, Massachusetts-bay, Rhode Island and Providence Plantations, Connecticut, New York, New Jersey, Pennsylvania, Delaware, Maryland, Virginia, North Carolina, South Carolina and Georgia.
ARTICLE I The Stile of this Confederacy shall be “The United States of America”.
ARTICLE II Each state retains its sovereignty, freedom, and independence, and every power, jurisdiction, and right, which is not by this Confederation expressly delegated to the United States, in Congress assembled.
ARTICLE III The said States hereby severally enter into a firm league of friendship with each other, for their common
defense, the security of their liberties, and their mutual and general welfare, binding themselves to assist each other, against all force offered to, or attacks made upon them, or any of them, on account of religion, sovereignty, trade, or any other pretense whatever.
ARTICLE IV The better to secure and perpetuate mutual friendship and intercourse among the people of the different States in this Union, the free inhabitants of each of these States, paupers, vagabonds, and fugitives from justice excepted, shall be entitled to all privileges and immunities of free citizens in the several States; and the people of each State shall have free ingress and regress to and from any other State, and shall enjoy therein all the privileges of trade and commerce, subject to the same duties, impositions, and restrictions as the inhabitants thereof respectively, provided that such restrictions shall not extend so far as to prevent the removal of property imported into any State, to any other State, of which the owner is an inhabitant; provided also that no imposition, duties or restriction shall be laid by any State, on the property of the United States, or either of them. If any person guilty of, or charged with, treason, felony, or other high misdemeanor in any State, shall flee from justice, and be found in any of the United States, he shall, upon demand of the Governor or executive power of the State from which he fled, be delivered
up and removed to the State having jurisdiction of his offense. Full faith and credit shall be given in each of these States to the records, acts, and judicial proceedings of the courts and magistrates of every other State.
ARTICLE V For the most convenient management of the general interests of the United States, delegates shall be annually appointed in such manner as the legislatures of each State shall direct, to meet in Congress on the first Monday in November, in every year, with a power reserved to each State to recall its delegates, or any of them, at any time within the year, and to send others in their stead for the remainder of the year. No State shall be represented in Congress by less than two, nor more than seven members; and no person shall be capable of being a delegate for more than three years in any term of six years; nor shall any person, being a delegate, be capable of holding any office under the United States, for which he, or another for his benefit, receives any salary, fees or emolument of any kind. Each State shall maintain its own delegates in a meeting of the States, and while they act as members of the committee of the States. In determining questions in the United States in Congress assembled, each State shall have one vote. Freedom of speech and debate in Congress shall not be impeached or questioned in any court or place out of Congress, and the members of Congress shall be protected in their persons from arrests or imprisonments, during the time of their going to and from, and attendance on Congress, except for treason, felony, or breach of the peace.
ARTICLE VI No State, without the consent of the United States in Congress assembled, shall send any embassy to, or receive any embassy from, or enter into any conference, agreement, alliance or treaty with any King, Prince or State; nor shall any person holding any office of profit or trust under the United States, or any of them, accept any present, emolument, office or title of any kind whatever from any King, Prince or foreign State; nor shall the United States in Congress assembled, or any of them, grant any title of nobility.
No two or more States shall enter into any treaty, confederation or alliance whatever between them, without the consent of the United States in Congress assembled, specifying accurately the purposes for which the same is to be entered into, and how long it shall continue. No State shall lay any imposts or duties, which may interfere with any stipulations in treaties, entered into by the United States in Congress assembled, with any King, Prince or State, in pursuance of any treaties already proposed by Congress, to the courts of France and Spain. No vessel of war shall be kept up in time of peace by any State, except such number only, as shall be deemed necessary by the United States in Congress assembled, for the defense of such State, or its trade; nor shall any body of forces be kept up by any State in time of peace, except such number only, as in the judgement of the United States in Congress assembled, shall be deemed requisite to garrison the forts necessary for the defense of such State; but every State shall always keep up a well-regulated and disciplined militia, sufficiently armed and accoutered, and shall provide and constantly have ready for use, in public stores, a due number of field pieces and tents, and a proper quantity of arms, ammunition and camp equipage. No State shall engage in any war without the consent of the United States in Congress assembled, unless such State be actually invaded by enemies, or shall have received certain advice of a resolution being formed by some nation of Indians to invade such State, and the danger is so imminent as not to admit of a delay till the United States in Congress assembled can be consulted; nor shall any State grant commissions to any ships or vessels of war, nor letters of marque or
reprisal, except it be after a declaration of war by the United States in Congress assembled, and then only against the Kingdom or State and the subjects thereof, against which war has been so declared, and under such regulations as shall be established by the United States in Congress assembled, unless such State be infested by pirates, in which case vessels of war may be fitted out for that occasion, and kept so long as the danger shall continue, or until the United States in Congress assembled shall determine otherwise.
ARTICLE VII When land forces are raised by any State for the common defense, all officers of or under the rank of colonel, shall be appointed by the legislature of each State respectively, by whom such forces shall be raised, or in such manner as such State shall direct, and all vacancies shall be filled up by the State which first made the appointment.
ARTICLE VIII All charges of war, and all other expenses that shall be incurred for the common defense or general welfare, and allowed by the United States in Congress assembled, shall be defrayed out of a common treasury, which shall be supplied by the several States in proportion to the value of all land within each State, granted or surveyed for any person, as such land and the buildings and improvements thereon shall be estimated according to such mode as the United States in Congress assembled, shall from time to time direct and appoint. The taxes for paying that proportion shall be laid and levied by the authority and direction of the legislatures of the several States within the time agreed upon by the United States in Congress assembled.
ARTICLE IX The United States in Congress assembled, shall have the sole and exclusive right and power of determining on peace and war, except in the cases mentioned in the sixth article – of sending and receiving ambassadors – entering into treaties and alliances, provided that no treaty of commerce shall be made whereby the legislative power of the respective States shall be restrained
from imposing such imposts and duties on foreigners, as their own people are subjected to, or from prohibiting the exportation or importation of any species of goods or commodities whatsoever – of establishing rules for deciding in all cases, what captures on land or water shall be legal, and in what manner prizes taken by land or naval forces in the service of the United States shall be divided or appropriated – of granting letters of marque and reprisal in times of peace – appointing courts for the trial of piracies and felonies committed on the high seas and establishing courts for receiving and determining finally appeals in all cases of captures, provided that no member of Congress shall be appointed a judge of any of the said courts. The United States in Congress assembled shall also be the last resort on appeal in all disputes and differences now subsisting or that hereafter may arise between two or more States concerning boundary, jurisdiction or any other causes whatever; which authority shall always be exercised in the manner following. Whenever the legislative or executive authority or lawful agent of any State in controversy with another shall present a petition to Congress stating the matter in question and praying for a hearing, notice thereof shall be given by order of Congress to the legislative or executive authority of the other State in controversy, and a day assigned for the appearance of the parties by their lawful agents, who shall then be directed to appoint by joint consent, commissioners or judges to constitute a court for hearing and determining the matter in question: but if they cannot agree, Congress shall name three persons out of each of the United States, and from the list of such persons each party shall alternately strike out one, the petitioners beginning, until the number shall be reduced to thirteen; and from that number not less than seven, nor more than nine names as Congress shall direct, shall in the presence of Congress be drawn out by lot, and the persons whose names shall be so drawn or any five of them, shall be commissioners or judges, to hear and finally determine the controversy, so always as a major part of the judges who shall hear the cause shall agree in the determination: and if either party shall neglect to attend at the day appointed, without showing reasons, which Congress shall judge sufficient, or
being present shall refuse to strike, the Congress shall proceed to nominate three persons out of each State, and the secretary of Congress shall strike in behalf of such party absent or refusing; and the judgement and sentence of the court to be appointed, in the manner before prescribed, shall be final and conclusive; and if any of the parties shall refuse to submit to the authority of such court, or to appear or defend their claim or cause, the court shall nevertheless proceed to pronounce sentence, or judgement, which shall in like manner be final and decisive, the judgement or sentence and other proceedings being in either case transmitted to Congress, and lodged among the acts of Congress for the security of the parties concerned: provided that every commissioner, before he sits in judgement, shall take an oath to be administered by one of the judges of the supreme or superior court of the State, where the cause shall be tried, ‘well and truly to hear and determine the matter in question, according to the best of his judgement, without favor, affection or hope of reward’: provided also, that no State shall be deprived of territory for the benefit of the United States. All controversies concerning the private right of soil claimed under different grants of two or more States, whose jurisdictions as they may respect such lands, and the States which passed such grants are adjusted, the said grants or either of them being at the same time claimed to have originated antecedent to such settlement of jurisdiction, shall on the petition of either party to the Congress of the United States, be finally determined as near as may be in the same manner as is before prescribed for deciding disputes respecting territorial jurisdiction between different States. The United States in Congress assembled shall also have the sole and exclusive right and power of regulating the alloy and value of coin struck by their own authority, or by that of the respective States – fixing the standards of weights and measures throughout the United States – regulating the trade and managing all affairs with the Indians, not members of any of the States, provided that the legislative right of any State within its own limits be not infringed or violated – establishing or regulating post offices from one State to another, throughout all the United States, and
exacting such postage on the papers passing through the same as may be requisite to defray the expenses of the said office – appointing all officers of the land forces, in the service of the United States, excepting regimental officers – appointing all the officers of the naval forces, and commissioning all officers whatever in the service of the United States – making rules for the government and regulation of the said land and naval forces, and directing their operations. The United States in Congress assembled shall have authority to appoint a committee, to sit in the recess of Congress, to be denominated ‘A Committee of the States’, and to consist of one delegate from each State; and to appoint such other committees and civil officers as may be necessary for managing the general affairs of the United States under their direction – to appoint one of their members to preside, provided that no person be allowed to serve in the office of president more than one year in any term of three years; to ascertain the necessary sums of money to be raised for the service of the United States, and to appropriate and apply the same for defraying the public expenses – to borrow money, or emit bills on the credit of the United States, transmitting every half-year to the respective States an account of the sums of money so borrowed or emitted – to build and equip a navy – to agree upon the number of land forces, and to make requisitions from each State for its quota, in proportion to the number of white inhabitants in such State; which requisition shall be binding, and thereupon the legislature of each State shall appoint the regimental officers, raise the men and cloath, arm and equip them in a soldier-like manner, at the expense of the United States; and the officers and men so cloathed, armed and equipped shall march to the place appointed, and within the time agreed on by the United States in Congress assembled. But if the United States in Congress assembled shall, on consideration of circumstances judge proper that any State should not raise men, or should raise a smaller number of men than the quota thereof, such extra number shall be raised, officered, cloathed, armed and equipped in the same manner as the quota of each State, unless the legislature of such State shall judge that such extra number cannot be safely spared out of the same, in which case they shall raise, officer, cloath,
arm and equip as many of such extra number as they judge can be safely spared. And the officers and men so cloathed, armed, and equipped, shall march to the place appointed, and within the time agreed on by the United States in Congress assembled. The United States in Congress assembled shall never engage in a war, nor grant letters of marque or reprisal in time of peace, nor enter into any treaties or alliances, nor coin money, nor regulate the value thereof, nor ascertain the sums and expenses necessary for the defense and welfare of the United States, or any of them, nor emit bills, nor borrow money on the credit of the United States, nor appropriate money, nor agree upon the number of vessels of war, to be built or purchased, or the number of land or sea forces to be raised, nor appoint a commander in chief of the army or navy, unless nine States assent to the same: nor shall a question on any other point, except for adjourning from day to day be determined, unless by the votes of the majority of the United States in Congress assembled. The Congress of the United States shall have power to adjourn to any time within the year, and to any place within the United States, so that no period of adjournment be for a longer duration than the space of six months, and shall publish the journal of their proceedings monthly, except such parts thereof relating to treaties, alliances or military operations, as in their judgement require secrecy; and the yeas and nays of the delegates of each State on any question shall be entered on the Journal, when it is desired by any delegates of a State, or any of them, at his or their request shall be furnished with a transcript of the said journal, except such parts as are above excepted, to lay before the legislatures of the several States.
ARTICLE X The Committee of the States, or any nine of them, shall be authorized to execute, in the recess of Congress, such of the powers of Congress as the United States in Congress assembled, by the consent of the nine States, shall from time to time think expedient to vest them with; provided that no power be delegated to the said Committee, for the exercise of which, by the Articles of
Confederation, the voice of nine States in the Congress of the United States assembled be requisite.
ARTICLE XI Canada acceding to this confederation, and adjoining in the measures of the United States, shall be admitted into, and entitled to all the advantages of this Union; but no other colony shall be admitted into the same, unless such admission be agreed to by nine States.
ARTICLE XII All bills of credit emitted, monies borrowed, and debts contracted by, or under the authority of Congress, before the assembling of the United States, in pursuance of the present confederation, shall be deemed and considered as a charge against the United States, for payment and satisfaction whereof the said United States, and the public faith are hereby solemnly pledged.
ARTICLE XIII Every State shall abide by the determination of the United States in Congress assembled, on all questions which by this confederation are submitted to them. And the Articles of this Confederation shall be inviolably observed by every State, and the Union shall be perpetual; nor shall any alteration at any time hereafter be made in any of them; unless such alteration be agreed to in a Congress of the United States, and be afterwards confirmed by the legislatures of every State.
CONCLUSION And Whereas it hath pleased the Great Governor of the World to incline the hearts of the legislatures we respectively represent in Congress, to approve of, and to authorize us to ratify the said Articles of Confederation and perpetual Union. Know Ye that we the undersigned delegates, by virtue of the power and authority to us given for that purpose, do by these presents, in the name and in behalf of our respective constituents, fully and entirely ratify and confirm each and every of the said Articles of Confederation and perpetual Union, and all and singular the matters and things therein contained: And we do further solemnly plight and engage the faith of our constituents, that they shall
abide by the determinations of the United States in Congress assembled, on all questions, which by the said Confederation are submitted to them. And that the Articles thereof shall be inviolably observed by the States we respectively represent, and that the Union shall be perpetual.
SIGNATORIES In Witness whereof we have hereunto set our hands in Congress. Done at Philadelphia in the State of Pennsylvania the ninth day of July in the Year of our Lord One Thousand Seven Hundred and Seventy-Eight, and in the Third Year of the independence of America.
On the part and behalf of the State of New Hampshire: Josiah Bartlett, John Wentworth Junior
On the part and behalf of the State of Massachusetts Bay: John Hancock, Francis Dana, Samuel Adams, James Lovell, Elbridge Gerry, Samuel Holten
On the part and behalf of the State of Rhode Island and Providence Plantations: William Ellery, John Collins, Henry Marchant
On the part and behalf of the State of Connecticut: Roger Sherman, Titus Hosmer, Samuel Huntington, Andrew Adams, Oliver Wolcott
On the part and behalf of the State of New York: James Duane, William Duer, Francis Lewis, Gouverneur Morris
On the part and in behalf of the State of New Jersey: John Witherspoon, Nathaniel Scudder
On the part and behalf of the State of Pennsylvania: Robert Morris, William Clingan, Daniel Roberdeau, Joseph Reed, John Bayard Smith
On the part and behalf of the State of Delaware: Thomas McKean, John Dickinson, Nicholas Van Dyke
On the part and behalf of the State of Maryland: John Hanson, Daniel Carroll
On the part and behalf of the State of Virginia: Richard Henry Lee, John Harvie, John Banister, Francis Lightfoot Lee, Thomas Adams
On the part and behalf of the State of No Carolina: John Penn, Corns Harnett, John Williams
On the part and behalf of the State of South Carolina: Henry Laurens, Richard Hutson, William Henry Drayton, Thomas Heyward Junior, John Matthews
On the part and behalf of the State of Georgia: John Walton, Edward Telfair, Edward Langworthy
THE CONSTITUTION OF THE UNITED STATES OF AMERICA
We the people of the United States, in order to form a more perfect union, establish justice, insure domestic tranquility, provide for the common defense, promote the general welfare, and secure the blessings of liberty to ourselves and our posterity, do ordain and establish this Constitution for the United States of America.
ARTICLE I Section 1. All legislative powers herein granted shall be vested in a Congress of the United States, which shall consist of a Senate and House of Representatives. Section 2. The House of Representatives shall be composed of members chosen every second year by the people of the several states, and the electors in each state shall have the qualifications requisite for electors of the most numerous branch of the state legislature. No person shall be a Representative who shall not have attained to the age of twenty five years, and been seven years a citizen of the United States, and who shall not, when elected, be an inhabitant of that state in which he shall be chosen. Representatives and direct taxes shall be apportioned among the several states which may be included within this union, according to their respective numbers, which shall be determined by adding to the whole number of free persons, including those bound to service for a term of years, and excluding
Indians not taxed, three fifths of all other Persons. The actual Enumeration shall be made within three years after the first meeting of the Congress of the United States, and within every subsequent term of ten years, in such manner as they shall by law direct. The number of Representatives shall not exceed one for every thirty thousand, but each state shall have at least one Representative; and until such enumeration shall be made, the state of New Hampshire shall be entitled to choose three, Massachusetts eight, Rhode Island and Providence Plantations one, Connecticut five, New York six, New Jersey four, Pennsylvania eight, Delaware one, Maryland six, Virginia ten, North Carolina five, South Carolina five, and Georgia three. When vacancies happen in the Representation from any state, the executive authority thereof shall issue writs of election to fill such vacancies. The House of Representatives shall choose their speaker and other officers; and shall have the sole power of impeachment. Section 3. The Senate of the United States shall be composed of two Senators from each state, chosen by the legislature thereof, for six years; and each Senator shall have one vote. Immediately after they shall be assembled in consequence of the first election, they shall be divided as equally as may be into three classes. The seats of the Senators of the first class shall be vacated at the expiration of the second year, of the
second class at the expiration of the fourth year, and the third class at the expiration of the sixth year, so that one third may be chosen every second year; and if vacancies happen by resignation, or otherwise, during the recess of the legislature of any state, the executive thereof may make temporary appointments until the next meeting of the legislature, which shall then fill such vacancies. No person shall be a Senator who shall not have attained to the age of thirty years, and been nine years a citizen of the United States and who shall not, when elected, be an inhabitant of that state for which he shall be chosen. The Vice President of the United States shall be President of the Senate, but shall have no vote, unless they be equally divided. The Senate shall choose their other officers, and also a President pro tempore, in the absence of the Vice President, or when he shall exercise the office of President of the United States. The Senate shall have the sole power to try all impeachments. When sitting for that purpose, they shall be on oath or affirmation. When the President of the United States is tried, the Chief Justice shall preside: And no person shall be convicted without the concurrence of two thirds of the members present. Judgment in cases of impeachment shall not extend further than to removal from office, and disqualification to hold and enjoy any office of honor, trust or profit under the United States: but the party convicted shall nevertheless be liable and subject to indictment, trial, judgment and punishment, according to law. Section 4. The times, places and manner of holding elections for Senators and Representatives, shall be prescribed in each state by the legislature thereof; but the Congress may at any time by law make or alter such regulations, except as to the places of choosing Senators. The Congress shall assemble at least once in every year, and such meeting shall be on the first Monday in December, unless they shall by law appoint a different day. Section 5. Each House shall be the judge of the elections, returns and qualifications of its own members,
and a majority of each shall constitute a quorum to do business; but a smaller number may adjourn from day to day, and may be authorized to compel the attendance of absent members, in such manner, and under such penalties as each House may provide. Each House may determine the rules of its proceedings, punish its members for disorderly behavior, and, with the concurrence of two thirds, expel a member. Each House shall keep a journal of its proceedings, and from time to time publish the same, excepting such parts as may in their judgment require secrecy; and the yeas and nays of the members of either House on any question shall, at the desire of one fifth of those present, be entered on the journal. Neither House, during the session of Congress, shall, without the consent of the other, adjourn for more than three days, nor to any other place than that in which the two Houses shall be sitting. Section 6. The Senators and Representatives shall receive a compensation for their services, to be ascertained by law, and paid out of the treasury of the United States. They shall in all cases, except treason, felony and breach of the peace, be privileged from arrest during their attendance at the session of their respective Houses, and in going to and returning from the same; and for any speech or debate in either House, they shall not be questioned in any other place. No Senator or Representative shall, during the time for which he was elected, be appointed to any civil office under the authority of the United States, which shall have been created, or the emoluments whereof shall have been increased during such time: and no person holding any office under the United States, shall be a member of either House during his continuance in office. Section 7. All bills for raising revenue shall originate in the House of Representatives; but the Senate may propose or concur with amendments as on other Bills. Every bill which shall have passed the House of Representatives and the Senate, shall, before it become a law, be presented to the President of the United States; if he approve he shall sign it, but if not he shall return it, with his objections to that House in which it
shall have originated, who shall enter the objections at large on their journal, and proceed to reconsider it. If after such reconsideration two thirds of that House shall agree to pass the bill, it shall be sent, together with the objections, to the other House, by which it shall likewise be reconsidered, and if approved by two thirds of that House, it shall become a law. But in all such cases the votes of both Houses shall be determined by yeas and nays, and the names of the persons voting for and against the bill shall be entered on the journal of each House respectively. If any bill shall not be returned by the President within ten days (Sundays excepted) after it shall have been presented to him, the same shall be a law, in like manner as if he had signed it, unless the Congress by their adjournment prevent its return, in which case it shall not be a law. Every order, resolution, or vote to which the concurrence of the Senate and House of Representatives may be necessary (except on a question of adjournment) shall be presented to the President of the United States; and before the same shall take effect, shall be approved by him, or being disapproved by him, shall be repassed by two thirds of the Senate and House of Representatives, according to the rules and limitations prescribed in the case of a bill. Section 8. The Congress shall have power to lay and collect taxes, duties, imposts and excises, to pay the debts and provide for the common defense and general welfare of the United States; but all duties, imposts and excises shall be uniform throughout the United States; To borrow money on the credit of the United States; To regulate commerce with foreign nations, and among the several states, and with the Indian tribes; To establish a uniform rule of naturalization, and uniform laws on the subject of bankruptcies throughout the United States; To coin money, regulate the value thereof, and of foreign coin, and fix the standard of weights and measures; To provide for the punishment of counterfeiting the securities and current coin of the United States; To establish post offices and post roads; To promote the progress of science and useful arts, by securing for limited times to authors and
inventors the exclusive right to their respective writings and discoveries; To constitute tribunals inferior to the Supreme Court; To define and punish piracies and felonies committed on the high seas, and offenses against the law of nations; To declare war, grant letters of marque and reprisal, and make rules concerning captures on land and water; To raise and support armies, but no appropriation of money to that use shall be for a longer term than two years; To provide and maintain a navy; To make rules for the government and regulation of the land and naval forces; To provide for calling forth the militia to execute the laws of the union, suppress insurrections and repel invasions; To provide for organizing, arming, and disciplining, the militia, and for governing such part of them as may be employed in the service of the United States, reserving to the states respectively, the appointment of the officers, and the authority of training the militia according to the discipline prescribed by Congress; To exercise exclusive legislation in all cases whatsoever, over such District (not exceeding ten miles square) as may, by cession of particular states, and the acceptance of Congress, become the seat of the government of the United States, and to exercise like authority over all places purchased by the consent of the legislature of the state in which the same shall be, for the erection of forts, magazines, arsenals, dockyards, and other needful buildings;—And To make all laws which shall be necessary and proper for carrying into execution the foregoing powers, and all other powers vested by this Constitution in the government of the United States, or in any department or officer thereof. Section 9. The migration or importation of such persons as any of the states now existing shall think proper to admit, shall not be prohibited by the Congress prior to the year one thousand eight hundred and eight, but a tax or duty may be imposed on such importation, not exceeding ten dollars for each person.
The privilege of the writ of habeas corpus shall not be suspended, unless when in cases of rebellion or invasion the public safety may require it. No bill of attainder or ex post facto Law shall be passed. No capitation, or other direct, tax shall be laid, unless in proportion to the census or enumeration herein before directed to be taken. No tax or duty shall be laid on articles exported from any state. No preference shall be given by any regulation of commerce or revenue to the ports of one state over those of another: nor shall vessels bound to, or from, one state, be obliged to enter, clear or pay duties in another. No money shall be drawn from the treasury, but in consequence of appropriations made by law; and a regular statement and account of receipts and expenditures of all public money shall be published from time to time. No title of nobility shall be granted by the United States: and no person holding any office of profit or trust under them, shall, without the consent of the Congress, accept of any present, emolument, office, or title, of any kind whatever, from any king, prince, or foreign state. Section 10. No state shall enter into any treaty, alliance, or confederation; grant letters of marque and reprisal; coin money; emit bills of credit; make anything but gold and silver coin a tender in payment of debts; pass any bill of attainder, ex post facto law, or law impairing the obligation of contracts, or grant any title of nobility. No state shall, without the consent of the Congress, lay any imposts or duties on imports or exports, except what may be absolutely necessary for executing its inspection laws: and the net produce of all duties and imposts, laid by any state on imports or exports, shall be for the use of the treasury of the United States; and all such laws shall be subject to the revision and control of the Congress. No state shall, without the consent of Congress, lay any duty of tonnage, keep troops, or ships of war in time of peace, enter into any agreement or compact with another state, or with a foreign power, or engage in war, unless actually invaded, or in such imminent danger as will not admit of delay.
ARTICLE II Section 1. The executive power shall be vested in a President of the United States of America. He shall hold his office during the term of four years, and, together with the Vice President, chosen for the same term, be elected, as follows: Each state shall appoint, in such manner as the Legislature thereof may direct, a number of electors, equal to the whole number of Senators and Representatives to which the State may be entitled in the Congress: but no Senator or Representative, or person holding an office of trust or profit under the United States, shall be appointed an elector. The electors shall meet in their respective states, and vote by ballot for two persons, of whom one at least shall not be an inhabitant of the same state with themselves. And they shall make a list of all the persons voted for, and of the number of votes for each; which list they shall sign and certify, and transmit sealed to the seat of the government of the United States, directed to the President of the Senate. The President of the Senate shall, in the presence of the Senate and House of Representatives, open all the certificates, and the votes shall then be counted. The person having the greatest number of votes shall be the President, if such number be a majority of the whole number of electors appointed; and if there be more than one who have such majority, and have an equal number of votes, then the House of Representatives shall immediately choose by ballot one of them for President; and if no person have a majority, then from the five highest on the list the said House shall in like manner choose the President. But in choosing the President, the votes shall be taken by States, the representation from each state having one vote; A quorum for this purpose shall consist of a member or members from two thirds of the states, and a majority of all the states shall be necessary to a choice. In every case, after the choice of the President, the person having the greatest number of votes of the electors shall be the Vice President. But if there should remain two or more who have equal votes, the Senate shall choose from them by ballot the Vice President. The Congress may determine the time of choosing the electors, and the day on which they shall give their votes; which day shall be the same throughout the United States.
No person except a natural born citizen, or a citizen of the United States, at the time of the adoption of this Constitution, shall be eligible to the office of President; neither shall any person be eligible to that office who shall not have attained to the age of thirty five years, and been fourteen Years a resident within the United States. In case of the removal of the President from office, or of his death, resignation, or inability to discharge the powers and duties of the said office, the same shall devolve on the Vice President, and the Congress may by law provide for the case of removal, death, resignation or inability, both of the President and Vice President, declaring what officer shall then act as President, and such officer shall act accordingly, until the disability be removed, or a President shall be elected. The President shall, at stated times, receive for his services, a compensation, which shall neither be increased nor diminished during the period for which he shall have been elected, and he shall not receive within that period any other emolument from the United States, or any of them. Before he enter on the execution of his office, he shall take the following oath or affirmation:—“I do solemnly swear (or affirm) that I will faithfully execute the office of President of the United States, and will to the best of my ability, preserve, protect and defend the Constitution of the United States.” Section 2. The President shall be commander in chief of the Army and Navy of the United States, and of the militia of the several states, when called into the actual service of the United States; he may require the opinion, in writing, of the principal officer in each of the executive departments, on any subject relating to the duties of their respective offices, and he shall have power to grant reprieves and pardons for offenses against the United States, except in cases of impeachment. He shall have power, by and with the advice and consent of the Senate, to make treaties, provided two thirds of the Senators present concur; and he shall nominate, and by and with the advice and consent of the Senate, shall appoint ambassadors, other public ministers and consuls, judges of the Supreme Court, and all other officers of the United States, whose appointments are not herein otherwise provided for,
and which shall be established by law: but the Congress may by law vest the appointment of such inferior officers, as they think proper, in the President alone, in the courts of law, or in the heads of departments. The President shall have power to fill up all vacancies that may happen during the recess of the Senate, by granting commissions which shall expire at the end of their next session. Section 3. He shall from time to time give to the Congress information of the state of the union, and recommend to their consideration such measures as he shall judge necessary and expedient; he may, on extraordinary occasions, convene both Houses, or either of them, and in case of disagreement between them, with respect to the time of adjournment, he may adjourn them to such time as he shall think proper; he shall receive ambassadors and other public ministers; he shall take care that the laws be faithfully executed, and shall commission all the officers of the United States. Section 4. The President, Vice President and all civil officers of the United States, shall be removed from office on impeachment for, and conviction of, treason, bribery, or other high crimes and misdemeanors.
ARTICLE III Section 1. The judicial power of the United States, shall be vested in one Supreme Court, and in such inferior courts as the Congress may from time to time ordain and establish. The judges, both of the supreme and inferior courts, shall hold their offices during good behavior, and shall, at stated times, receive for their services, a compensation, which shall not be diminished during their continuance in office. Section 2. The judicial power shall extend to all cases, in law and equity, arising under this Constitution, the laws of the United States, and treaties made, or which shall be made, under their authority;—to all cases affecting ambassadors, other public ministers and consuls;—to all cases of admiralty and maritime jurisdiction;—to controversies to which the United States shall be a party;—to controversies between two or more states;—between a state and citizens of another state;—between citizens of different states;—
between citizens of the same state claiming lands under grants of different states, and between a state, or the citizens thereof, and foreign states, citizens or subjects. In all cases affecting ambassadors, other public ministers and consuls, and those in which a state shall be party, the Supreme Court shall have original jurisdiction. In all the other cases before mentioned, the Supreme Court shall have appellate jurisdiction, both as to law and fact, with such exceptions, and under such regulations as the Congress shall make. The trial of all crimes, except in cases of impeachment, shall be by jury; and such trial shall be held in the state where the said crimes shall have been committed; but when not committed within any state, the trial shall be at such place or places as the Congress may by law have directed. Section 3. Treason against the United States, shall consist only in levying war against them, or in adhering to their enemies, giving them aid and comfort. No person shall be convicted of treason unless on the testimony of two witnesses to the same overt act, or on confession in open court. The Congress shall have power to declare the punishment of treason, but no attainder of treason shall work corruption of blood, or forfeiture except during the life of the person attainted.
ARTICLE IV Section 1. Full faith and credit shall be given in each state to the public acts, records, and judicial proceedings of every other state. And the Congress may by general laws prescribe the manner in which such acts, records, and proceedings shall be proved, and the effect thereof. Section 2. The citizens of each state shall be entitled to all privileges and immunities of citizens in the several states. A person charged in any state with treason, felony, or other crime, who shall flee from justice, and be found in another state, shall on demand of the executive authority of the state from which he fled, be delivered up, to be removed to the state having jurisdiction of the crime.
No person held to service or labor in one state, under the laws thereof, escaping into another, shall, in consequence of any law or regulation therein, be discharged from such service or labor, but shall be delivered up on claim of the party to whom such service or labor may be due. Section 3. New states may be admitted by the Congress into this union; but no new states shall be formed or erected within the jurisdiction of any other state; nor any state be formed by the junction of two or more states, or parts of states, without the consent of the legislatures of the states concerned as well as of the Congress. The Congress shall have power to dispose of and make all needful rules and regulations respecting the territory or other property belonging to the United States; and nothing in this Constitution shall be so construed as to prejudice any claims of the United States, or of any particular state. Section 4. The United States shall guarantee to every state in this union a republican form of government, and shall protect each of them against invasion; and on application of the legislature, or of the executive (when the legislature cannot be convened) against domestic violence.
ARTICLE V The Congress, whenever two thirds of both houses shall deem it necessary, shall propose amendments to this Constitution, or, on the application of the legislatures of two thirds of the several states, shall call a convention for proposing amendments, which, in either case, shall be valid to all intents and purposes, as part of this Constitution, when ratified by the legislatures of three fourths of the several states, or by conventions in three fourths thereof, as the one or the other mode of ratification may be proposed by the Congress; provided that no amendment which may be made prior to the year one thousand eight hundred and eight shall in any manner affect the first and fourth clauses in the ninth section of the first article; and that no state, without its consent, shall be deprived of its equal suffrage in the Senate.
ARTICLE VI All debts contracted and engagements entered into, before the adoption of this Constitution, shall be as valid against the United States under this Constitution, as under the Confederation. This Constitution, and the laws of the United States which shall be made in pursuance thereof; and all treaties made, or which shall be made, under the authority of the United States, shall be the supreme law of the land; and the judges in every state shall be bound thereby, anything in the Constitution or laws of any State to the contrary notwithstanding. The Senators and Representatives before mentioned, and the members of the several state legislatures, and all executive and judicial officers, both of the United States and of the several states, shall be bound by oath or affirmation, to support this Constitution; but no religious test shall ever be required as a qualification to any office or public trust under the United States.
ARTICLE VII The ratification of the conventions of nine states, shall be sufficient for the establishment of this Constitution between the states so ratifying the same.
Done in convention by the unanimous consent of the states present the seventeenth day of September in the year of our Lord one thousand seven hundred and eighty seven and of the independence of the United States of America the twelfth. In witness whereof We have hereunto subscribed our Names,
G. Washington: Presidt. and deputy from Virginia
New Hampshire: John Langdon, Nicholas Gilman
Massachusetts: Nathaniel Gorham, Rufus King
Connecticut: Wm: Saml. Johnson, Roger Sherman
New York: Alexander Hamilton
New Jersey: Wil Livingston, David Brearly, Wm. Paterson, Jona: Dayton
Pennsylvania: B. Franklin, Thomas Mifflin, Robt. Morris, Geo. Clymer, Thos. FitzSimons, Jared Ingersoll, James Wilson, Gouv Morris
Delaware: Geo: Read, Gunning Bedford jun, John Dickinson, Richard Bassett, Jaco: Broom
Maryland: James McHenry, Dan of St Thos. Jenifer, Danl Carroll
Virginia: John Blair, James Madison Jr.
North Carolina: Wm. Blount, Richd. Dobbs Spaight, Hu Williamson
South Carolina: J. Rutledge, Charles Cotesworth Pinckney, Charles Pinckney, Pierce Butler
Georgia: William Few, Abr Baldwin
BILL OF RIGHTS
The Conventions of a number of the States having, at the time of adopting the Constitution, expressed a desire, in order to prevent misconstruction or abuse of its powers, that further declaratory and restrictive clauses should be added, and as extending the ground of public confidence in the Government will best insure the beneficent ends of its institution; Resolved, by the Senate and House of Representatives of the United States of America, in Congress assembled, two-thirds of both Houses concurring, that the following articles be proposed to the Legislatures of the several States, as amendments to the Constitution of the United States; all or any of which articles, when ratified by three-fourths of the said Legislatures, to be valid to all intents and purposes as part of the said Constitution, namely:
AMENDMENT I Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press; or the right of the people peaceably to assemble, and to petition the government for a redress of grievances.
AMENDMENT II A well regulated militia, being necessary to the security of a free state, the right of the people to keep and bear arms, shall not be infringed.
AMENDMENT III No soldier shall, in time of peace be quartered in any house, without the consent of the owner, nor in time of war, but in a manner to be prescribed by law.
AMENDMENT IV The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and no warrants shall issue, but upon probable cause, supported by oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized.
AMENDMENT V No person shall be held to answer for a capital, or otherwise infamous crime, unless on a presentment or indictment of a grand jury, except in cases arising in the land or naval forces, or in the militia, when in actual service in time of war or public danger; nor shall any person be subject for the same offense to be twice put in jeopardy of life or limb; nor shall be compelled in any criminal case to be a witness against himself, nor be deprived of life, liberty, or property, without due process of law; nor shall private property be taken for public use, without just compensation.
AMENDMENT VI In all criminal prosecutions, the accused shall enjoy the right to a speedy and public trial, by an impartial
jury of the state and district wherein the crime shall have been committed, which district shall have been previously ascertained by law, and to be informed of the nature and cause of the accusation; to be confronted with the witnesses against him; to have compulsory process for obtaining witnesses in his favor, and to have the assistance of counsel for his defense.
AMENDMENT VII In suits at common law, where the value in controversy shall exceed twenty dollars, the right of trial by jury shall be preserved, and no fact tried by a jury, shall be otherwise reexamined in any court of the United States, than according to the rules of the common law.
AMENDMENT VIII Excessive bail shall not be required, nor excessive fines imposed, nor cruel and unusual punishments inflicted.
AMENDMENT IX The enumeration in the Constitution, of certain rights, shall not be construed to deny or disparage others retained by the people.
AMENDMENT X The powers not delegated to the United States by the Constitution, nor prohibited by it to the states, are reserved to the states respectively, or to the people.
OTHER AMENDMENTS TO THE CONSTITUTION
AMENDMENT XI (1798) The judicial power of the United States shall not be construed to extend to any suit in law or equity, commenced or prosecuted against one of the United States by citizens of another state, or by citizens or subjects of any foreign state.
AMENDMENT XII (1804) The electors shall meet in their respective states and vote by ballot for President and Vice-President, one of whom, at least, shall not be an inhabitant of the same state with themselves; they shall name in their ballots the person voted for as President, and in distinct ballots the person voted for as Vice-President, and they shall make distinct lists of all persons voted for as President, and of all persons voted for as Vice-President, and of the number of votes for each, which lists they shall sign and certify, and transmit sealed to the seat of the government of the United States, directed to the President of the Senate;—The President of the Senate shall, in the presence of the Senate and House of Representatives, open all the certificates and the votes shall then be counted;—the person having the greatest number of votes for President, shall be the President, if such number be a majority of the whole number of electors appointed; and if no person have such majority, then from the persons having the highest numbers not exceeding three on the list of those voted for as President, the House of Representatives shall choose immediately, by ballot, the President. But in choosing the President, the votes shall be taken by states, the representation from each state having one vote; a quorum for this purpose shall consist of a member or members from two-thirds of the states, and a majority of all the states shall be necessary to a choice. And if the House of Representatives shall not choose a President whenever the right of choice shall devolve upon them, before the fourth day of March next following, then the Vice-President shall act as President, as in the case of the death or other constitutional disability of the President. The person having the greatest number of votes as Vice-President, shall be the Vice-President, if such number be a majority of the whole number of electors appointed, and if no person have a majority, then from the two highest numbers on the list, the Senate shall choose the Vice-President; a quorum for the purpose shall consist of two-thirds of the whole number of Senators, and a majority of the whole number shall be necessary to a choice. But no person constitutionally ineligible to the office of President shall be eligible to that of Vice-President of the United States.
AMENDMENT XIII (1865) Section 1. Neither slavery nor involuntary servitude, except as a punishment for crime whereof the party shall
have been duly convicted, shall exist within the United States, or any place subject to their jurisdiction. Section 2. Congress shall have power to enforce this article by appropriate legislation.
AMENDMENT XIV (1868)

Section 1. All persons born or naturalized in the United States, and subject to the jurisdiction thereof, are citizens of the United States and of the state wherein they reside. No state shall make or enforce any law which shall abridge the privileges or immunities of citizens of the United States; nor shall any state deprive any person of life, liberty, or property, without due process of law; nor deny to any person within its jurisdiction the equal protection of the laws.

Section 2. Representatives shall be apportioned among the several states according to their respective numbers, counting the whole number of persons in each state, excluding Indians not taxed. But when the right to vote at any election for the choice of electors for President and Vice President of the United States, Representatives in Congress, the executive and judicial officers of a state, or the members of the legislature thereof, is denied to any of the male inhabitants of such state, being twenty-one years of age, and citizens of the United States, or in any way abridged, except for participation in rebellion, or other crime, the basis of representation therein shall be reduced in the proportion which the number of such male citizens shall bear to the whole number of male citizens twenty-one years of age in such state.

Section 3. No person shall be a Senator or Representative in Congress, or elector of President and Vice President, or hold any office, civil or military, under the United States, or under any state, who, having previously taken an oath, as a member of Congress, or as an officer of the United States, or as a member of any state legislature, or as an executive or judicial officer of any state, to support the Constitution of the United States, shall have engaged in insurrection or rebellion against the same, or given aid or comfort to the enemies thereof. But Congress may by a vote of two-thirds of each House, remove such disability.

Section 4. The validity of the public debt of the United States, authorized by law, including debts incurred for payment of pensions and bounties for services in suppressing insurrection or rebellion, shall not be questioned. But neither the United States nor any state shall assume or pay any debt or obligation incurred in aid of insurrection or rebellion against the United States, or any claim for the loss or emancipation of any slave; but all such debts, obligations and claims shall be held illegal and void.

Section 5. The Congress shall have power to enforce, by appropriate legislation, the provisions of this article.
AMENDMENT XV (1870)

Section 1. The right of citizens of the United States to vote shall not be denied or abridged by the United States or by any state on account of race, color, or previous condition of servitude.

Section 2. The Congress shall have power to enforce this article by appropriate legislation.
AMENDMENT XVI (1913)

The Congress shall have power to lay and collect taxes on incomes, from whatever source derived, without apportionment among the several states, and without regard to any census or enumeration.
AMENDMENT XVII (1913)

The Senate of the United States shall be composed of two Senators from each state, elected by the people thereof, for six years; and each Senator shall have one vote. The electors in each state shall have the qualifications requisite for electors of the most numerous branch of the state legislatures.

When vacancies happen in the representation of any state in the Senate, the executive authority of such state shall issue writs of election to fill such vacancies: Provided, that the legislature of any state may empower the executive thereof to make temporary appointments until the people fill the vacancies by election as the legislature may direct.

This amendment shall not be so construed as to affect the election or term of any Senator chosen before it becomes valid as part of the Constitution.
AMENDMENT XVIII (1919)

Section 1. After one year from the ratification of this article the manufacture, sale, or transportation of intoxicating liquors within, the importation thereof into, or the exportation thereof from the United States and all territory subject to the jurisdiction thereof for beverage purposes is hereby prohibited.

Section 2. The Congress and the several states shall have concurrent power to enforce this article by appropriate legislation.

Section 3. This article shall be inoperative unless it shall have been ratified as an amendment to the Constitution by the legislatures of the several states, as provided in the Constitution, within seven years from the date of the submission hereof to the states by the Congress.
AMENDMENT XIX (1920)

The right of citizens of the United States to vote shall not be denied or abridged by the United States or by any state on account of sex.

Congress shall have power to enforce this article by appropriate legislation.
AMENDMENT XX (1933)

Section 1. The terms of the President and Vice President shall end at noon on the 20th day of January, and the terms of Senators and Representatives at noon on the 3rd day of January, of the years in which such terms would have ended if this article had not been ratified; and the terms of their successors shall then begin.

Section 2. The Congress shall assemble at least once in every year, and such meeting shall begin at noon on the 3rd day of January, unless they shall by law appoint a different day.

Section 3. If, at the time fixed for the beginning of the term of the President, the President elect shall have died, the Vice President elect shall become President. If a President shall not have been chosen before the time fixed for the beginning of his term, or if the President elect shall have failed to qualify, then the Vice President elect shall act as President until a President shall have qualified; and the Congress may by law provide for the case wherein neither a President elect nor a Vice President elect shall have qualified, declaring who shall then act as President, or the manner in which one who is to act shall be selected, and such person shall act accordingly until a President or Vice President shall have qualified.

Section 4. The Congress may by law provide for the case of the death of any of the persons from whom the House of Representatives may choose a President whenever the right of choice shall have devolved upon them, and for the case of the death of any of the persons from whom the Senate may choose a Vice President whenever the right of choice shall have devolved upon them.

Section 5. Sections 1 and 2 shall take effect on the 15th day of October following the ratification of this article.

Section 6. This article shall be inoperative unless it shall have been ratified as an amendment to the Constitution by the legislatures of three-fourths of the several states within seven years from the date of its submission.
AMENDMENT XXI (1933)

Section 1. The eighteenth article of amendment to the Constitution of the United States is hereby repealed.

Section 2. The transportation or importation into any state, territory, or possession of the United States for delivery or use therein of intoxicating liquors, in violation of the laws thereof, is hereby prohibited.

Section 3. This article shall be inoperative unless it shall have been ratified as an amendment to the Constitution by conventions in the several states, as provided in the Constitution, within seven years from the date of the submission hereof to the states by the Congress.

AMENDMENT XXII (1951)

Section 1. No person shall be elected to the office of the President more than twice, and no person who has held the office of President, or acted as President, for more than two years of a term to which some other person was elected President shall be elected to the office of the President more than once. But this article shall not apply to any person holding the office of President when this article was proposed by the Congress, and shall not prevent any person who may be holding the office of President, or acting as President, during the term within which this article becomes operative from holding the office of President or acting as President during the remainder of such term.

Section 2. This article shall be inoperative unless it shall have been ratified as an amendment to the Constitution by the legislatures of three-fourths of the several states within seven years from the date of its submission to the states by the Congress.

AMENDMENT XXIII (1961)

Section 1. The District constituting the seat of government of the United States shall appoint in such manner as the Congress may direct: A number of electors of President and Vice President equal to the whole number of Senators and Representatives in Congress to which the District would be entitled if it were a state, but in no event more than the least populous state; they shall be in addition to those appointed by the states, but they shall be considered, for the purposes of the election of President and Vice President, to be electors appointed by a state; and they shall meet in the District and perform such duties as provided by the twelfth article of amendment.

Section 2. The Congress shall have power to enforce this article by appropriate legislation.
AMENDMENT XXIV (1964)

Section 1. The right of citizens of the United States to vote in any primary or other election for President or Vice President, for electors for President or Vice President, or for Senator or Representative in Congress, shall not be denied or abridged by the United States or any state by reason of failure to pay any poll tax or other tax.

Section 2. The Congress shall have power to enforce this article by appropriate legislation.
AMENDMENT XXV (1967)

Section 1. In case of the removal of the President from office or of his death or resignation, the Vice President shall become President.

Section 2. Whenever there is a vacancy in the office of the Vice President, the President shall nominate a Vice President who shall take office upon confirmation by a majority vote of both Houses of Congress.

Section 3. Whenever the President transmits to the President pro tempore of the Senate and the Speaker of the House of Representatives his written declaration that he is unable to discharge the powers and duties of his office, and until he transmits to them a written declaration to the contrary, such powers and duties shall be discharged by the Vice President as Acting President.

Section 4. Whenever the Vice President and a majority of either the principal officers of the executive departments or of such other body as Congress may by law provide, transmit to the President pro tempore of the Senate and the Speaker of the House of Representatives their written declaration that the President is unable to discharge the powers and duties of his office, the Vice President shall immediately assume the powers and duties of the office as Acting President. Thereafter, when the President transmits to the President pro tempore of the Senate and the Speaker of the House of Representatives his written declaration that no inability exists, he shall resume the powers and duties of his office unless the Vice President and a majority of either the principal officers of the executive department or of such other body as Congress may by law provide, transmit within four days to the President pro tempore of the Senate and the Speaker of the House of Representatives their written declaration that the President is unable to discharge the powers and duties of his office. Thereupon Congress shall decide the issue, assembling within forty-eight hours for that purpose if not in session. If the Congress, within twenty-one days after receipt of the latter written declaration, or, if Congress is not in session, within twenty-one days after Congress is required to assemble, determines by two-thirds vote of both Houses that the President is unable to discharge the powers and duties of his office, the Vice President shall continue to discharge the same as Acting President; otherwise, the President shall resume the powers and duties of his office.
AMENDMENT XXVI (1971)

Section 1. The right of citizens of the United States, who are eighteen years of age or older, to vote, shall not be denied or abridged by the United States or any state on account of age.

Section 2. The Congress shall have power to enforce this article by appropriate legislation.
AMENDMENT XXVII (1992)

No law varying the compensation for the services of the Senators and Representatives shall take effect until an election of Representatives shall have intervened.