Emergency and Backup Power Sources:
Preparing for Blackouts and Brownouts

by Michael F. Hordeski
Library of Congress Cataloging-in-Publication Data

Hordeski, Michael F.
  Emergency and backup power sources: preparing for blackouts and brownouts / by Michael F. Hordeski.
    p. cm.
  Includes bibliographical references and index.
  ISBN 0-88173-484-5 (print) -- ISBN 0-88173-485-3 (electronic)
  1. Emergency power supply. I. Title.
TK1020.H67 2005
658.2'6--dc22    2005040624

©2005 by The Fairmont Press, Inc. All rights reserved. No part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopy, recording, or any information storage and retrieval system, without permission in writing from the publisher.

Published by The Fairmont Press, Inc.
700 Indian Trail, Lilburn, GA 30047
tel: 770-925-9388; fax: 770-381-9865
http://www.fairmontpress.com

Distributed by Taylor & Francis Ltd.
6000 Broken Sound Parkway NW, Suite 300
Boca Raton, FL 33487, USA
E-mail: [email protected]

Distributed by Taylor & Francis Ltd.
23-25 Blades Court, Deodar Road
London SW15 2NU, UK
E-mail: [email protected]

Printed in the United States of America
10 9 8 7 6 5 4 3 2 1

0-88173-484-5 (The Fairmont Press, Inc.)
0-8493-3908-1 (Taylor & Francis Ltd.)

While every effort is made to provide dependable information, the publisher, authors, and editors cannot be held responsible for any errors or omissions.
Table of Contents

Chapter 1   Emergency Power ..................................................................... 1
Chapter 2   Power Stability and Quality ................................................. 31
Chapter 3   Standby Power Systems ......................................................... 61
Chapter 4   Emergency Generators ........................................................... 97
Chapter 5   Alternate Power Sources ..................................................... 139
Chapter 6   Distributed Generation, Clean Power and Renewable Energy ..... 171
Chapter 7   Fuel Cells ................................................................................ 209
Chapter 8   Protecting Computer Data .................................................. 247
Chapter 9   Data Recovery ........................................................................ 285
Index .............................................................................................................. 311
Preface

As the aging power distribution system fails more often to provide reliable power, emergency and backup power sources become critical. In the summer of 2003, 50 million users lost power in eight states and parts of Canada. A handful of commonplace summertime trips (brief transmission line shutdowns) due to falling voltage set off the biggest outage in U.S. history. Extra power flowing through a 345-kV line caused the line to sag into a tree and trip. Utilities in the U.S. and Canada saw wild power swings, several other lines tripped and stations started shutting down after losing power. The outage took down 21 power plants in 3 minutes.

Another major outage in the U.S. occurred seven years earlier, when high temperatures, sagging power lines and high demand caused a blackout that affected 4 million customers in nine Western states. The outage ranged from Oregon to San Diego, reached as far east as Texas and even affected parts of Mexico. Power lines in the Northwest became unbalanced and affected four main arteries that send power south. Major blackouts also occurred in 1997, 1998 and 1999 in the San Francisco and New York areas.

Reliable and cost-effective systems are needed that will take over during power interruptions and protect critical functions and data. These include standby power systems that employ batteries, kinetic energy storage, fuel cells, reciprocating engines and turbines.

This book provides a view of the state of emergency power. An explanation of power disturbances and interruptions is given. Topics include surge suppression, voltage regulation, backup power sources, microturbines, fuel cells, diesel engines, load management and power quality issues. Reliability and maintainability are critical issues, along with comparisons of operating costs and environmental concerns. Blackout planning is considered in detail, along with emergency procedures and general energy preparedness.

In the past, the power system has depended on large power plants with long development times. Disruptions in operations at these plants can have major effects on regional supplies of power. In the near future, this crisis can be minimized by investments in new technologies that reduce the need to depend on the power grid.
The tools and technologies available to an energy user to minimize the effects of power interruptions include fuel cells, which use the chemical reaction from a variety of fuels to create power and allow companies to generate clean, high-grade electricity on-site without air pollution problems. Some of these units may use fuel processors, while others would require hydrogen fuel in a hydrogen economy. Natural gas distributed generation uses gas turbines and gas engines. Modular power units are becoming smaller and more affordable; other modular generation sources include solar-photovoltaic, micro-hydro and micro-wind. One of the best ways to ensure electrical supply reliability and reduce long-term costs is to utilize these smaller, cleaner, more efficient energy-generating technologies.

Cogeneration systems are also available to small-scale users of electricity. These modular systems produce electricity and hot water from engine waste heat. Home-sized cogeneration packages are capable of providing most of the heating and electrical needs of a home. Cogeneration can produce a given amount of electric power and process heat with 30% less fuel than it takes to produce the electricity and process heat separately.

This book stresses the role of contingency planning in reducing the effects of power outages. Frequent power quality problems can be overcome by innovative and practical approaches. This type of power management includes developing alternative sources, monitoring quality and load management solutions. Advances in metering and power monitoring are examined, as well as safety and security issues. Cost-effective power generation solutions include small modular power generation units. Advanced technologies and products are also available for energy storage options and lighting and cooling integration.

Data backup techniques are described for systems and networks. Backup concepts continue to grow with technology, adopting new and innovative approaches to the process. Backup management software can automatically implement backup operations and data maintenance for increased efficiency and reliability.

Chapter 1 examines past problems on the power grid. It considers energy usage trends and improving the power future. Basic concepts of emergency power, reliability, energy conversion and efficiency are discussed. Backup techniques and philosophy are explained. Emergency power planning programs, blackout preparation and energy conservation are proposed as solutions.
Chapter 2 discusses power stability, grid operation and quality. Virtual utilities are introduced. Grid stability is considered along with the effects of distributed power. Transient conditions and fault removal are explained, along with power quality issues such as noise, grounding, ground currents, harmonics and power conditioning.

Chapter 3 considers standby power systems and requirements. Backup power systems using chemical batteries, kinetic energy storage and superconducting magnetic energy storage are discussed. Other topics include battery construction and operation, UPS types and operation, maintenance and testing.

Chapter 4 is concerned with backup generation. It considers AC generators and alternators, diesel backup and gas turbine generation. One of the newer power sources is the 20- to 60-kW regenerated gas turbine power package. This package, in combination with a battery pack, may also deliver low-emission power in automobiles. One product is the Capstone 24-kW turbogenerator, which weighs 165 pounds.

Chapter 5 describes alternate sources of power backup, including solar systems and wind generators. Renewable energy is an important element that improves our energy security and preserves the environment. Renewable energy technologies can provide an important fraction of the nation's electricity generation requirements and, along with other generation sources, provide more reliable power. Fuel cells, wind turbines and solar panels can provide power free from dependence on local grids. The search for alternative energy is not new, but the current focus is the goal of making clean and sustainable power a mainstream commodity.

Chapter 6 considers distributed power generation, including clean power, renewables and combined heat and power generation. Sustainable development is discussed along with environmental issues. Historically, combined heat and power (CHP) systems have been considered only for very large customers or specific facility types such as hospitals and municipal swimming pools. Solar PV systems and fuel cells were not considered at all. This attitude has changed as a new energy climate emerges. The terms used to describe these systems include customer-sited generation, self-generation, distributed generation, distributed resources, distributed energy, combined heat and power, cogeneration, renewable energy, clean power, and green power. Systems of about 50 kW to about 1 MW are large enough to be cost-effective and small enough to be appropriate for many end-users.
Fuel cells can be an important source of power in the future, as explained in Chapter 7. Topics include fuel cell technology and characteristics. The problems of different types of fuel cells for electric power production are discussed, and the various fuel cell types are compared.

Chapter 8 examines the protection of computer data from grid interruptions. In the past, even large enterprises did not always implement backup and recovery procedures on a consistent basis. But today, businesses need to ensure they can recover from a power disaster. Larger enterprises are revising their backup procedures and expanding their emergency infrastructure, and many smaller businesses are developing recovery plans. A company cannot afford not to protect its data. Hardware can be replaced, but data is difficult or impossible to recover.

Chapter 9 discusses the problem of data recovery and offers solutions for minimizing the efforts involved. Even with the use of specialized hardware and fault-tolerant solutions for clustering and replication, some data may be lost. Continued success for an organization that has suffered a significant system or data loss does not depend just on the ability to replace hardware and rebuild infrastructure. In most cases, continued success depends on the ability to quickly and successfully recover business-critical data. Topics include safeguards for data disaster protection, disaster recovery, reducing the risk of data loss, rapid database recovery, clustering and backup appliance storage.

Many thanks to Dee, who did much in getting both the text and the tables in their final form.
Chapter 1

Emergency Power

PROBLEMS ON THE POWER GRID

In 2003 when Times Square blacked out, some thought it was due to a terrorist attack. In Manhattan, subway trains came to a stop, stranding hundreds of thousands. Toronto went dark along with Rochester, Boston, New York, and other cities. In less than 15 minutes, the computer-controlled power grid of the 80,000-square-mile Canada/United States Eastern Interconnection area went down. Most cities in the Northeast were now without traffic signals, television, airport landing lights, elevators, and refrigeration.

As the lights went out, 50 million people in the U.S. and Canada were affected. General Motors shut down 17 of its 60 North American plants. Ford closed 23 plants out of 44. Lost business was estimated at $1 billion.

As power returned, investigators focused on overloaded transmission lines around the Lake Erie Loop. Industry and federal officials ruled out a terrorist attack, computer hackers, lightning or the effects of 90° heat as causes of the outage that left almost 50 million people without power. Automated protective devices quickly shut down generating plants and distribution networks across an area of more than 9,000 square miles.

About an hour before the main collapse, a section of the system in Ohio experienced problems and took itself off the grid. About 30 minutes later, a second section in Ohio also dropped off the main grid. Events inside the automated and computer-driven power system cascaded too quickly, and there was not enough time for operators to react. The event took place in nine seconds, according to Michehl Gent of the North American Electric Reliability Council, or NERC, a private, standards-setting organization that monitors the transmission system.

Even after electrical service had been restored to New York City
and most of the blacked-out areas of the East Coast, the upper Midwest and southern Canada continued to suffer. New York's subway system slowly resumed service, but airline schedules were disrupted and thousands of passengers were stranded. Officials in Detroit and Cleveland urged residents to boil drinking water because of possible contamination. Officials also warned that further rolling blackouts might occur before the system returned to normal in perhaps a week.

Reactions to the emergency ranged from pride to finger-pointing. Some officials praised the lack of panic and disorder, along with the effectiveness of emergency-response systems put in place after the September 11, 2001 terrorist attacks on the World Trade Center and the Pentagon. New York Mayor Michael Bloomberg praised the orderly behavior of New Yorkers and the efficiency of firefighters and police officers. Bloomberg said the city's water supply was safe and adequate, but he warned New Yorkers to stay away from the city's beaches, which had been contaminated with unprocessed sewage during the outage.

In Washington, officials from the departments of Homeland Security, Defense, Treasury and other agencies stated that the federal systems in place allowed them to quickly provide communications, National Guard troops and other resources if needed by local authorities. However, this assistance was mostly unnecessary. New York Governor George Pataki asked the president to declare the state a disaster area. That would make New Yorkers eligible for federal reimbursement for the extraordinary expenses that had been incurred.
CAUSE OF THE OUTAGE

Politicians in the United States and Canada rushed to blame one another for failing to deal beforehand with the weaknesses of the power system. Officials in the Canadian prime minister's office suggested that a fire at the Niagara Mohawk power plant in upstate New York might have been to blame. Ontario's provincial premier promoted the idea that the trouble started somewhere in the upper Midwest.

In Congress, Republicans and Democrats blamed each other for the failure to complete action on pending energy legislation that contained provisions for improving the power grid. President Bush described
the blackout as a wake-up call for the reform of an antiquated system. The White House announced the formation of a U.S./Canada task force to probe the cause of the outage.

The inquiry centered on the Lake Erie loop, a transmission path for power that runs along the southern shore of the lake from New York west to Detroit, then up into Canada and back east to the Niagara area. This loop has been known to be a problem for years. There have been plans to make it more reliable, but little has been done.

Much of the power moving east from the Detroit area to New York would usually move through Canada. Shortly before the power failure, 300 megawatts of power were moving east, but the flow suddenly reversed itself, with 500 megawatts going the other way. Such reversals in the flow of power around Lake Erie can cause transmission and generation problems in New York. The power system requires all of its parts to run at the same rate, and a Lake Erie incident can cause transmission shutdowns in New York, which in turn can cause generation problems. The events tend to feed on themselves.

Several transmission lines in Ohio went out of operation before the blackout. One system went down an hour before the main crash, and the other a half-hour before. These shutdowns may have triggered the problems around Lake Erie. Depending on the conditions, they could start a chain of events. The Ohio lines are owned by FirstEnergy, the nation's fourth-largest utility. The initial problems may have been the result of operator errors or a shortcoming in procedures, exacerbated by the failure of an alarm at FirstEnergy to signal the start of a fast-spreading event. The failed line in Ohio began a cascade that brought down 100 power plants, including 22 nuclear plants, in the U.S. and Canada as eight states and two Canadian provinces experienced failures.
ROLLING BLACKOUTS IN CALIFORNIA

In January of 2001, California shut off power due to massive consumption by consumers and winter storms. These conditions forced California into a Stage 3 power alert, in which rotating blackouts throughout the state are used to conserve power.
Stage 3 blackouts affect businesses and residential areas alike. Only vital services like hospitals, police, fire and air traffic control are exempt from the blackouts. Up to two million residents faced the rolling blackouts, which can last as long as four hours. The San Francisco/San Jose area was the hardest hit.

While some blamed the growth of technology companies and their 24-hour computing demands for the problem, power officials said the problem came from excessive residential use, not businesses. If people had used one-third less power, that would have dropped some 5,000 megawatts of demand off the grid and eliminated the blackouts, according to the California Independent System Operator. Cal-ISO manages the California power grid and controls about 75% of the power in the state.

The state had been facing power concerns for weeks, but the situation was made worse by a winter storm that brought rain and snow to the state. The Diablo Canyon nuclear power plant in San Luis Obispo County, CA, was also cut back to only 20% of its output.

Much of the problem is due to the large amount of power needed in and around San Francisco and the surrounding area. Cal-ISO cannot get enough power from the southern part of the state to the north. The main route for this power, Path 15, is congested. For about a 100-mile stretch, the grid shrinks, which is similar to going from a four-lane to a two-lane highway.

In July of 2002, the mercury rose into triple-digit temperatures in California, dropping the state's energy reserves to the lowest level in a year and sending the state into a one-day Stage 2 emergency. Despite rolling blackouts and rising wholesale electricity prices, air conditioners continued to hum during the first intense heat of the year. Peak demand reached 42,441 megawatts, the highest of the year, according to the California Independent System Operator.

It was about a year since the last rotating outage, and the public had come to view the energy crisis as over. The state has been trying to improve the power supply with new power generation and emergency energy conservation measures. New power plants and improved hydroelectric production have helped; over a period of 18 months, California had a net increase of approximately 4,500 megawatts. Consumers have installed thousands of new, more efficient appliances and millions of energy-efficient light bulbs.
Conservation measures have had some impact; California's larger power customers used about 500 fewer megawatts during the summer. This is the equivalent of the output from a medium-sized power plant.

The Real-Time Metering Program allows large utility customers such as retail stores, hospitals, office buildings and schools to monitor their hourly energy use on the Internet, in real time, to control their energy costs. Some municipal utilities have made voluntary real-time rates available, where electricity is priced based on the wholesale market. This allows customers to adjust their production schedules according to the current electricity pricing. There are also voluntary demand-response pay-back programs that pay customers to curtail prearranged electric loads when asked by the utility. The Energy Commission estimates that these meters will reduce peak electric demand by 600 megawatts per year. The cost of implementing real-time electricity meters is approximately $65 per kilowatt, while typical peaking power plants using combustion gas turbine technology cost several thousand dollars per kilowatt.

Building energy management systems (EMS) also play a role in the demand-response programs used by utilities to keep peak electric power usage low. A building energy management system allows utility customers to identify and program energy-consuming equipment and systems to shed loads as needed; a simple sketch of this kind of priority-based load shedding appears below.

Although the power plants and hydropower have helped increase the supply and the conservation programs have helped stiffen the grid, California's energy supply remains vulnerable. Typically, the state's energy load has climbed every year by a few percentage points. Although loads dropped below expectations a few summers, economic conditions and conservation efforts have influenced the demand for energy. Loads have increased, and regional heat waves tend to thin reserve margins in the West. California has been unable to import electricity from other states in the same amounts as before.

To prevent another power crisis, California is asking utility customers to conserve 3,000 megawatts of electricity in the summer. Programs like real-time metering are expected to provide over 1,200 megawatts; consumers and businesses must provide the remaining 1,800 megawatts. The California ISO has been asking consumers to reduce power use in peak periods, between 3 and 6 p.m. These periods will probably occur for a few hours on the hottest days of the year, when air conditioners are running in large areas of the state.
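To make the EMS load-shedding idea concrete, the following sketch drops the least critical loads until a building's demand falls under a utility-requested cap. It is a minimal illustration, not drawn from any particular EMS product; the load names, priorities and kilowatt figures are hypothetical.

```python
# Minimal sketch of priority-based load shedding in a building EMS.
# All loads, priorities and kW figures are hypothetical examples.

def shed_loads(loads, demand_cap_kw):
    """Shed lowest-priority loads until total demand <= demand_cap_kw.

    loads: list of (name, kw, priority), where a higher priority number
    means the load is less critical and is shed first.
    Returns the names of the loads that were shed.
    """
    total_kw = sum(kw for _, kw, _ in loads)
    shed = []
    # Consider the least critical loads first.
    for name, kw, _ in sorted(loads, key=lambda l: l[2], reverse=True):
        if total_kw <= demand_cap_kw:
            break
        total_kw -= kw
        shed.append(name)
    return shed

building_loads = [
    ("life safety and exits",  40, 0),   # most critical, shed last
    ("data center",           120, 1),
    ("air conditioning",      250, 2),
    ("decorative lighting",    60, 3),
]

# The utility asks the site to stay under 350 kW during a 3-6 p.m. peak event.
print(shed_loads(building_loads, demand_cap_kw=350))
# -> ['decorative lighting', 'air conditioning']
```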
Consumers must reduce the lighting load in their homes or offices, set the thermostat up several degrees and avoid using major appliances. They should also increase efficiency by adding insulation and upgrading windows.
NORTHEAST BLACKOUT OF 1965

The power outage of November 1965 hit New York especially hard as the Niagara power grid dropped out during the rush hour on November 9, 1965. At about 5:15 p.m., the power lines that connect Niagara Falls and New York City exceeded their maximum load, causing a transmission relay to fail. The failure of that single part started a chain of events that cut power to more than 25 million people in eight states and two provinces.

The power that had been heading for New York on that November evening took an alternate route to the power grid that feeds New England. The subsequent overload caused the entire grid to collapse, plunging Boston, Hartford, and most of the rest of the Northeast into darkness. In minutes, utilities diverted their power northward, causing a shutdown of the grids in Ontario and Quebec. The situation became critical in New York, where the airports had no power, traffic lights were out and people were trapped in high-rises and in the subway system.

As a result of what was called the Great Northeast Blackout, power utilities across the U.S. instituted fail-safe systems to black out small areas to save larger portions of the grid. Many thought the problem was solved, but they would learn differently 12 years later.
1977 NEW YORK BLACKOUT

The 1977 New York blackout took on a different hue compared to the 1965 outage. At 9:34 p.m. on July 13, 1977, in the middle of a midsummer heat wave, power was cut off to New York City, plunging nine million people into darkness. On Broadway, the show went on with candlelight and emergency power.

The atmosphere in 1965 was congenial, but in 1977 it was malicious. In a few hours, the city was ablaze, as people in the Bronx and Brooklyn rioted. Police would arrest 4,000 people in connection with the
looting and pillaging of more than 2,000 stores. Firefighters were called to more than 1,000 incidents, many involving fires that had been deliberately set.

In 1989, Quebec had its own crisis when sunspots caused Hydro-Quebec's power grid to switch off at 2:45 a.m. on March 13, 1989, cutting six million people off from electricity. Nine hours later, the power was restored.

To most Americans, a vast blackout seems hard to explain. What do power stations in Ottawa have to do with the lights in New York or Cleveland? Many did not realize that over the years of restructuring regional power, companies have merged their generation capacity to become part of the largest infrastructure on the continent. A power-sharing network stretches from Florida to Canada and acts much like a single electrical circuit. Electrical grids in Canada and the United States link up at 37 major points so the two countries can trade power. When one utility has a shortage, it buys power from a neighboring utility.

But this overworked network also has a very old transmission grid of underground and overhead power lines that were last upgraded in the 1950s and 1960s. This system of 50-year-old lines cannot handle the rapid transfers of the new power-trading economy. One official stated that 80% of the generators that had been off during the initial 2003 blackout were soon running again at capacity, but the transmission lines could handle only 20% of the output.

Little attention has been paid to the grid's vulnerabilities. Since the big blackout of 1965, the electricity-transmission system was supposed to have been redesigned with safeguards that would not allow such disruptive incidents. For years we have been warned of too little investment in a transmission grid that has been asked to do more and more.

One major factor was deregulation. In the 1990s, many utilities were broken up, separating the transmission businesses from the power generators that produce electricity. Today the system is dominated by independent operators in a market-driven system with no link between generation planning and transmission planning. Operators can see few benefits in building power lines for other regions. Citizens' groups have made local approval of new transmission lines difficult. In the past, power companies had to invest in transmission because it was part of their business model. Now, they may not own any part of transmission in the deregulated model.
The Federal Energy Regulatory Commission wants to give all electricity suppliers equal access to power lines. This would provide more power from generators in other regions. But power companies and politicians in the South and West have opposed the plan, stating that it would force prices up in their usually low-cost regions.

Homeland Security called the 2003 blackout a test of the system. State and local officials took the necessary steps to be prepared for massive emergencies. In New York, the night of 8/14/03 was a graphic contrast to previous outages. There were no riots like those that crippled the city in the blackout of 1977. Officials recorded 800 elevator rescues, 80,000 calls to 911 and 5,000 emergency medical service calls.

Based on initial information supplied by the power company, the event started in Canada. The office of the Canadian Prime Minister then stated that the blackout might have been triggered by a lightning strike on a major transmission line in upstate New York. Then industry officials became convinced that the problem that led to the blackout originated somewhere in northeastern Ohio. Officials were trying to determine why this situation was not brought under control after the first three transmission lines switched out of service, according to the North American Electric Reliability Council (NERC). This agency was formed after the Northeastern blackout of 1965 to prevent another major breakdown.

The Ohio lines were operated by FirstEnergy, a transmission company based in Akron, Ohio. FirstEnergy confirmed that its facilities in northern Ohio had suffered several mishaps during the afternoon prior to the blackout. These included a tree falling on one of the company's heavy-duty 345-kilovolt high-tension lines and a generator tripping off at a company plant in Eastlake, Ohio. Another 345-kV line may have been so overloaded that it sagged into a lower-voltage cable below it, shorting out the circuit. But FirstEnergy said it believed its equipment had coped with these failures, which were not that unusual on a warm summer day.
TIMELINE OF THE 2003 FAILURE

On Thursday, August 14, 2003, at 3:06 p.m., the first of three transmission lines that are believed to have triggered the blackout trips off. The outage puts pressure on another line and the effect spreads.
In Cleveland, Ohio, the voltage drops to zero. An hour later, utilities in Canada and the Northeast experience major power swings. The Bruce Nuclear Station in Ontario shuts down, and blackouts hit Toronto and southern Ontario, where most of the province's 10 million residents live.

A few minutes later the Campbell No. 3 coal-fired power plant near Grand Haven, MI, trips off, and then the Enrico Fermi nuclear plant near Detroit shuts down automatically after losing power. A number of transmission lines trip at this time, including a 345-kilovolt line known as the Hampton-Thetford in upstate New York and Vermont. Much of New England is spared when the region's power operator disconnects its system from New York's after realizing something is wrong.

A transmission line between Pennsylvania and Toledo, Ohio, trips, and in a period of 15 minutes five nuclear power plants in New York state shut down. Parts of nine states, including all of New York City, are now affected.
POWER RESTORATION

Restoring power to a massive area requires utilities to balance electricity coming from the restarted plants with load demands. An imbalance can trigger more blackouts. Supplementary power sources called black starters are used to re-engage the generators and get auxiliary systems on-line. Once the generators are up, they could flood the grid with too much power and shut it down again if there are not enough substations on-line to draw power.

At the substations, operators must control the power distribution and gradually send more power to areas that need it. As new power plants are connected to the grid, those that are already up and running must drop back their output to stabilize the system.

Essential facilities and services are the first to get their power back. These include hospitals, police and fire departments, and water and sewage-treatment plants. As areas are brought up, they are connected with other nearby regions. This merging can cause destabilizing fluctuations for a while.

By 10 p.m. on the day of the blackout, 50% of affected areas in New England had their power restored. By 5 a.m. the next day, 50% of Canadian areas were back online. New York was fully restored by 10:30 p.m.
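The balancing act described above can be pictured with a small simulation: load blocks are picked up only when online generation covers them with a reserve margin, mirroring the gradual, substation-by-substation process. This is an illustrative sketch only; the megawatt figures and the 10% margin are hypothetical assumptions.

```python
# Illustrative sketch of staged restoration: pick up load blocks only
# when online generation covers them with a reserve margin.
# All megawatt figures are hypothetical.

RESERVE_MARGIN = 1.10   # keep generation at least 10% above served load

def restore(gen_steps_mw, load_blocks_mw):
    """Bring generation up in steps; pick up load blocks when covered."""
    online_gen = 0.0
    served_load = 0.0
    pending = list(load_blocks_mw)
    for step in gen_steps_mw:
        online_gen += step
        # Pick up as many pending load blocks as the margin allows.
        while pending and (served_load + pending[0]) * RESERVE_MARGIN <= online_gen:
            served_load += pending.pop(0)
        print(f"gen={online_gen:7.1f} MW  load={served_load:7.1f} MW  "
              f"blocks left={len(pending)}")

# Generators come back in 200-MW steps; load returns in mixed blocks,
# essential facilities (hospitals, water treatment) first in the queue.
restore(gen_steps_mw=[200] * 10,
        load_blocks_mw=[50, 150, 300, 400, 500])
```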
FIXING THE GRID

Economic growth and the proliferation of computers and other digital devices have strained the power arteries, while utilities and state governments debate over who should repair the problem. Deregulation allows local utilities to sell electricity wherever they can find a buyer, but the grids are still administered on a state-by-state basis. This is because states do not want to give up control, which stops new transmission lines from being built.

State and federal commissions need to meet to consider upgrades of the grid across state lines. The Federal Energy Regulatory Commission (FERC) should order the construction of new lines, like the proposed Arrowhead-Weston line between Wisconsin and Minnesota. The highway system is planned on a federal level; the federal government should also direct expansion of the power grid.

As U.S. economic output doubled between 1975 and today, investment in the grid dropped from $5 billion to about $2 billion annually, according to the Edison Electric Institute. In a deregulated world, utilities use each other's networks, so no one wants to pay for an improvement that would also benefit competitors. Funding upgrades of the system with federal dollars is one solution. Financial incentives for power generators to build transmission lines for new plants are another. New plants could be paid for by rate increases over several years.

Like the infrastructure itself, the failure of support for long-range planning transcends national borders. As the global economy becomes increasingly dependent on the digital networks made possible by electricity, public funding worldwide for newer, cleaner power sources and improved infrastructure is decreasing. The U.S. spent one-third less on energy R&D in 1995 than it did in 1985. Germany, Italy, and the UK spent two-thirds less.
IMPROVING THE GRID

The modern power grid is stretched over a vulnerable technological infrastructure. The Electric Power Research Institute (EPRI) was founded after the failure of the grid in 1965. EPRI believes we still have not fully heard the message of that massive blackout. One lesson of the current crisis, many believe, is that we need smarter methods of electricity generation, transmission and delivery, not just more power and more lines.
EPRI is the utilities' think tank, an independent research organization funded by more than 1,000 power companies. EPRI was the first industry-wide R&D consortium in America. It is one of the largest consortiums in the world and represents utilities in 40 countries. EPRI's members range from older giants like Consolidated Edison of New York to newer upstarts like Mirant and Dynegy. These members generate 90% of the electricity used in the United States.

The Bush administration's declarations about improving the power networks are familiar at EPRI. The institute has been laying the groundwork for this technology for decades. EPRI's older members stand to gain more from an energy policy that favors the more traditional means of increasing power, such as more fossil-fuel plants, more oil and a revived nuclear power industry. Debate over the merits of solutions such as drilling in national wilderness areas is a distraction from our ability to implement a practical blueprint for a new conception of the energy grid.
BLACKOUT PREPARATION

The blackout that plunged one of every six Americans into darkness in 2003 triggered a national discussion of energy policy and preparedness. In Southern California, where earthquakes are an ongoing threat, concern over the flow of electricity and an awareness of emergency procedures are not new. Many are asking themselves about backup plans if the power goes out.

At the Metropolitan Transportation Authority in Los Angeles, officials believe that although the rail systems may not operate, the stations and all of MTA's buildings, including corporate headquarters and rail and bus facilities, will still have emergency power systems for mission-critical operations. Buses would not run normal routes, but they would continue to run to the extent possible. In the rail stations, emergency power systems would generate power for lighting to allow station evacuation.

Some food stores, such as Gelson's in Marina del Rey, California, plan to close retail operations and cover refrigerated cases. If the loss of
refrigeration lasts more than two hours, they must order dry ice to keep refrigerated items cold.

Some facilities, such as hotels, use a mixture of procedures. Hilton Hotels has emergency procedures in place for blackouts, fires or earthquakes. In case of a power outage, their hotels keep a supply of flashlights, glow sticks, bottled water and AM-FM radios to distribute to every room. There are back-up lights that come on in the stairwells, hallways, lobby areas and public areas. Key locks are battery operated so guests can get in and out of their rooms. The Westin Hotel in Long Beach has a 1,500-kilowatt generator that supplies the entire hotel in case of emergencies. They do not have to worry about seeking out any other power. During the power outage on the East Coast, they ran a scheduled test and everything worked perfectly.

Plastic manufacturers such as Plastic Services and Products (PMC Global Inc.) use plastic extrusion molds, which consume a lot of electricity. They run around the clock, so their electricity costs are large and they would need a large amount of backup capability. Without backup generators, if the grid goes down they would have to shut down.

Large office buildings of over one million square feet should have well thought-out emergency plans. But even here, if there were a massive power outage, the emergency power systems might power only 20 to 25% of each building. They could then run at least one or two elevators in each building and have minimal lighting, enough for people to get in and out.

At Universal Studios Hollywood, back-up generators would provide power to handle lighting and safe exit from all rides and shows. Emergency power systems would be able to handle audio communications for the guests.

In Silicon Valley, technology companies like Oracle are constructing their own local energy networks using substations, diesel generators, and power-conditioning systems. In many technology installations a supply of fluctuation-free electricity is critical. Chip fabrication plants and server farms must balance the expense of building independent electricity resources against the cost of equipment failures and network crashes caused by unreliable power. Hewlett-Packard has estimated that a 15-minute outage at a chip fabrication plant cost the company $30 million, about half the plant's power budget for a year.

Surveys of technology companies show that while nearly all are
trying to reduce electricity consumption, preparedness for potential blackouts is spotty. One survey by the Washington Software Association (WSA) showed that of 48 members responding, only four had developed backup power sources: three used generators and the other used solar power.

There is concern about rolling blackouts and brownouts; companies should conserve, protect themselves from power failures, and act politically to guarantee future energy supplies. In the present climate, companies are recovering from stagnant economic conditions, and any disruptions in their ability to generate products and revenue will impact their recovery.

Another survey, of members of a technology industry association, indicated that about 70% had increased their blackout readiness. Some of this can be attributed to preparations for Y2K. Many manufacturers installed their own generators as a part of Y2K preparation. In spite of a lessened sense of crisis, the power system has little extra capacity, so companies are still vulnerable. We have seen a growing crisis in California and the rest of the country. Companies need to be prepared, and awareness is increasing in the face of the massive eastern blackouts and the rolling blackouts in California.

Increased efforts are needed to reduce energy use, including educating employees on how they can save. Companies are looking at modifying workloads and shifting energy-intensive operations to off-peak times. Companies should institute long-term reductions in energy use to ease demands on the system.

Among the organizations benefiting from Y2K planning is the Fred Hutchinson Cancer Research Center in Washington state, which has several contingency plans to cope with an emergency power shortage. These strategies include a prioritized plan for load shedding, dual feeds from Seattle City Light and diesel generators for emergency power. The facility has also been reducing consumption. It has been a test site for a new energy-saving system using variable volume and air pressure in its air circulation system. This system, along with 23 other measures implemented in the last two years, has helped the facility cut energy use by 6% even though its activities have increased.

Immunex Corporation also planned for Y2K in 1999, and updated those plans to meet emergency power needs. Immunex put together a
task force to plan strategies to cope with rolling blackouts. It installed generators to keep most vital systems functioning, but a rolling blackout would still be a business interruption since not all departments are on generator power.

Birmingham Steel is a large mini-mill in the West Seattle area. It has been implementing new energy-saving technologies in its steel-making processes and has cut its energy use by almost 12%. The company has week-long shutdowns during the summer in order to reduce the load on Seattle City Light. It is also working with Seattle City Light in a joint venture to use the plant's waste heat, a by-product of producing steel, to operate steam-driven generators.

Some of the companies best prepared for power failures are Internet server farms, since they must provide stable, uninterrupted services. For example, Zama Networks has double power backup for its 30,000-square-foot server farm near Boeing Field. The entire facility runs on several hundred rechargeable batteries. The batteries act as a buffer that keeps the power cleaner during normal operation and can keep the plant running through short blackouts. Backup for the batteries is a two-megawatt diesel generator with fuel for three days. The company also has processes in place to respond to a long-duration power outage and expects to have a power event during the hotter months. Zama has been reducing its power consumption and now uses 5,600 kilowatt-hours a day, compared with the 7,200 kilowatt-hours a day it used earlier.
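As a rough check on the fuel logistics behind a setup like this, the arithmetic below estimates the tank needed to run a 2-MW generator for three days. The consumption rate is an assumed typical figure, not a number from the text, and real consumption is not exactly linear with load.

```python
# Rough arithmetic for the fuel behind "a two-megawatt diesel generator
# with fuel for three days." The consumption rate (~0.07 gallons of
# diesel per kWh generated) is a hypothetical assumption.

genset_rating_kw = 2000     # 2-MW generator, from the text
run_days = 3                # fuel for three days, from the text
gal_per_kwh = 0.07          # assumed diesel consumption rate

# Fuel needed to run the generator at full load for three days.
fuel_gal = genset_rating_kw * run_days * 24 * gal_per_kwh
print(f"Fuel required at full load: {fuel_gal:,.0f} gallons")   # ~10,000

# At half load the same tank would last roughly twice as long.
print(f"Days at 50% load: {run_days / 0.5:.0f}")
```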
BACKUP POWER

The massive power failure that darkened much of the eastern United States and Ontario in the summer of 2003 did not reach Indianapolis. If it had, only a small group of local companies and facilities would have been capable of operating through the blackout. In spite of that, most Indianapolis companies, industrial parks and other facilities have not rushed to install emergency power supply systems. Facilities managers indicate that a majority of businesses are holding back on purchasing backup systems that would allow them to function in the event of a major failure of the electrical grid.

In Indiana, the electrical infrastructure has a good record for reliability compared to other areas. The cost of backup power may seem
high when you consider the overall cost of the investment and the price of maintaining it. Although it may be hard to create an EVA (economic value added) case that would justify it, an extended downtime could be crippling to many facilities.

Large users may be reluctant to build a stand-alone power plant solely for emergency use, but almost all major office buildings have some emergency power backup systems. Normally, these provide enough power for emergency functions such as basic life safety. This kind of system is not very costly. Moving up to a system large enough to keep a major office park going is not inexpensive.

Many large companies become interested in a large-scale backup system because of concerns about the reliability of the power grid in their specific area. For example, a building site might be served by an electrical substation that has a history of reliability problems. It may be vulnerable to storm damage or have maintenance problems. In this case, a company should look at alternate sources of power. Although a company may not wish to build its own generating plant, it can arrange its electrical system so it can be connected to a different substation in case the regular supply goes off. In Indianapolis, industrial parks like Intech Park, a major business park on the northwest side, use this technique. The power feed serving the park is looped so that it can be switched to another substation if needed. This system is used because the high-tech companies at the park do not want a lengthy power outage that could hurt them.

Some major manufacturers have a critical need for backup power capacity. Pharmaceutical giant Lilly produces large quantities of medicines and vaccines that can start deteriorating after a few hours on a stopped conveyor line. Lilly has a high-capacity backup system because of the nature of its products. If a widespread power outage were to occur in Indianapolis, Lilly would still be able to maintain its critical operations. Roche Diagnostics Corp. in Indianapolis also has a backup system in place to allow critical functions to continue in the event of a major power failure.

There are other facilities, like hospitals, that must have backup power systems. These elaborate backup systems are in place and seemed to operate well enough during the blackout. In spite of the problems caused by the record 2003 blackout, none of the hospitals and other emergency facilities in the affected areas reported any trouble in operating.
MANDATED BACKUP SYSTEMS

Hospitals and other vital emergency centers are required by law to have backup systems, usually with double and, in some cases, triple levels of protection. St. Vincent in Indianapolis is typical of what hospitals use. The hospital complex has three separate emergency power systems, each with its own diesel generator. Each of the systems is connected to the hospital's electrical system. If the external power fails, one of the generators automatically goes online and provides power for the hospital's critical operations. The switchover must take place within 10 seconds. If the first generator does not function, the second will be started up. The third unit is available if something happens to the second.

The generators run on diesel fuel, and the hospital has about a 6-hour supply. If necessary, the hospital can run on its emergency generators for as long as it can obtain diesel fuel. The system is not designed to make up for all lost power in case of a failure. It supplies enough electricity to run operating rooms, intensive care units and neonatal units at full power. Nursing stations and staffing areas get full power, as do hospital entrances, stairs and elevators. Lighting is cut back, with only every third hall light getting power. The emphasis is on keeping the most critical elements of the hospital operating. A blackout should not affect the quality of patient care.

Hospitals do not use their backup systems to replace full power because it is too costly. The redundant systems in use are expensive enough. St. Vincent Hospital recently replaced one of its three generators, at a cost of $500,000 for the generator and its control system. St. Vincent tests the system every month and inspects it every week. The system has functioned well through several outages caused by ice storms and other inclement weather.

Other critical institutions that have backup systems include police and fire stations and many call centers and data centers.
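The cascaded start sequence described above can be sketched as a simple failover loop: on loss of utility power, try each generator in turn until one comes online, with the whole transfer required to complete within the 10-second window. This is an illustrative model only; real transfer switches are hardware controls, and the function and field names here are hypothetical.

```python
# Illustrative model of a cascaded generator start: on utility failure,
# try each unit in turn; the whole transfer must complete within 10 seconds.

def try_start(generator) -> bool:
    """Hypothetical stand-in for real engine-start controls."""
    generator["running"] = generator["healthy"]
    return generator["running"]

def on_utility_failure(generators):
    """Start the first generator; fall back to the second, then the third."""
    for gen in generators:
        if try_start(gen):
            print(f"{gen['name']} online; critical loads transferred")
            return gen
        print(f"{gen['name']} failed to start; trying the next unit")
    print("no generator available; transfer failed")
    return None

gensets = [
    {"name": "generator 1", "healthy": False, "running": False},
    {"name": "generator 2", "healthy": True,  "running": False},
    {"name": "generator 3", "healthy": True,  "running": False},
]
on_utility_failure(gensets)  # generator 1 fails, generator 2 takes over
```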
PLANNING FOR EMERGENCY POWER

Planning cannot prepare for every calamity, but it can provide emergency personnel with valuable information, such as which essential
facilities and services will need power, how much power they will need and how to provide it. Expecting the unpredictable is part of the emergency response plan. Instead of trying to plan for a specific event, such as a windstorm, fire or flood, concentrate on the common results of any disaster. Vital among these is the loss of electric power.

Electricity is scarce after a disaster. Lights are out, telephones may be disabled and many businesses will be shut down. Essential services are often affected. People may require food, water, heat and emergency medical attention. True recovery requires the power to be on, but it may be difficult to predict when utility service will be completely restored. A backup power system plays a critical role in recovery from all types of disasters.

Permanent backup systems are used by facilities that must maintain some continued level of public health, safety and welfare, even in extended blackouts. Mobile generators are available in all sizes for powering schools, stores, offices, factories and homes while work goes on and the grid is restored. The speed of recovery depends on how well the enterprise has planned for emergency power resources. The provision for electricity is critical in a disaster management plan, and procedures should be clear and well thought out. Global supplies of mobile generator sets have almost quadrupled in the last decade. This allows a facility that has planned ahead to readily secure almost any amount of short-term emergency power, from units for small offices or homes to 2-megawatt power modules that supply large buildings.

An extended power failure can have many causes; some will be natural and others not. Some may be more predictable than others. For example, it is difficult to imagine that many could have foreseen the April 1992 flood that shut down power for several weeks in the heart of Chicago. The flood was triggered when construction workers were installing support pillars in the Chicago River bottom. The pillars punctured the roof of a freight tunnel under the city. Water then flowed through the complete system of tunnels and into the basements of the city's buildings, which contained their main power distribution systems.

Another extended power outage occurred in central Auckland, New Zealand, in February 1998. The four main power feed cables for the city failed because of an overload. The blackout affected 50,000 city
workers and 6,000 residents. Butter, meat and other perishable exports in thousands of refrigerated containers were at risk. They were waiting to be shipped from the city's port at the height of the export season.

In September 1998, weather forecasters did predict the landfall of Hurricane Georges on certain Caribbean islands, but the force of the hurricane was underestimated. Puerto Rico and several nearby islands lost all power.

Mobile diesel-powered generator sets were important in the recovery from all these events. The demand for generators exceeded local supplies, and units were sent from significant distances. New Zealand and the Caribbean islands received units by airlift. The logistics of supplying such power equipment in a hurry are difficult, but effective planning makes the task go smoother and recovery faster.
PERMANENT BACKUP

It is vital in emergency power planning to provide essential facilities with permanent backup power. The backup equipment must be properly sized and in good repair. Essential post-disaster services include medical care, drinking water supplies, police and fire protection, refrigeration, communications, wastewater treatment, transportation (including highways, rail, airports and seaports), weather forecasting, temporary relief shelters and emergency response command and control.

Backup systems should be sized to carry critical loads, such as the power to deliver the facility's necessary public services. Some facilities, including wastewater treatment plants and hospitals, are important enough that backup systems must be sized to support at least some level of reduced operation. Backup systems should be included in a planned maintenance program that includes regular inspection and operational testing.
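A hedged sketch of the sizing arithmetic implied here: sum the critical loads, add headroom for motor starting, round up to a standard generator rating, and size fuel for a day of run time. The loads, margin and available ratings are hypothetical examples, not values from the book.

```python
# Sketch of backup generator sizing: sum critical loads, add a design
# margin, and round up to a standard rating. All figures are hypothetical.

critical_loads_kw = {
    "operating rooms":  150,
    "intensive care":   100,
    "water pumps":       80,
    "egress lighting":   30,
    "communications":    20,
}
DESIGN_MARGIN = 1.25                       # 25% headroom for motor starting
STANDARD_RATINGS_KW = [250, 350, 500, 750, 1000]

required_kw = sum(critical_loads_kw.values()) * DESIGN_MARGIN
rating_kw = next(r for r in STANDARD_RATINGS_KW if r >= required_kw)
print(f"Critical load with margin: {required_kw:.0f} kW -> {rating_kw} kW unit")

# Fuel for 24 hours of run time (the guideline given in the next section),
# assuming ~0.07 gallons of diesel per kWh generated.
fuel_gal = rating_kw * 24 * 0.07
print(f"On-site tank for 24 h at full load: about {fuel_gal:,.0f} gallons")
```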
MOBILE POWER

A major storm or flood may damage permanent backup power systems. In the 1992 flood in Chicago, many backup systems were installed in office building basements and sub-basements that filled with water.
The true test of planning is how well it functions in actual practice. We live in a world that increasingly depends on electricity, so a facility should give some priority to mobile power. The sooner power is available, the more efficiently materials and services can be delivered. Mobile power equipment should be sized in the same manner as permanent backup power.

A power outage can create major logistical problems as public agencies and private businesses rush to provide temporary power. An outage affecting a major city, such as Chicago, may require thousands of mobile generators. The challenges are even greater if the power outage is caused by a natural disaster. Then the delivery of power may be affected by transportation breakdowns and the distribution of fuel and other supplies.

Space must be available for parking the generators outside the buildings. A facility with a large power requirement may need one or more of the large power modules that are 8 feet wide and 40 feet long. Extension cables are needed to connect the generators to the building's electrical system. Transformers, distribution panels, feeder panels, and other accessories may also be needed.

During an emergency, diesel fuel supplies and delivery may be difficult to obtain. An on-site fuel tank should have the capacity for at least 24 hours of run time. If on-staff personnel are not experienced with power-generation equipment, it is necessary to arrange for professional assistance to install and operate the mobile units.

Many suppliers offer permanent backup systems for sale or lease, as well as mobile power units for rent. The supplier should be able to deliver the power generating sets with all required equipment, including cables and transformers. Suppliers should also be able to offer training for equipment operation or provide operators along with service and maintenance.

When renting power units for emergencies, it may be difficult to obtain a contract that guarantees equipment availability. However, many suppliers offer contracts with a right of first acceptance. Under this arrangement, you pay the supplier a reservation or retainer fee for an allocation of certain equipment. The supplier then agrees to reserve the equipment and not release it to another party without your consent.

In Puerto Rico after Hurricane Georges, relief efforts were stalled by trees and power lines blocking roads and preventing transportation. The storm also blew down one of the large cranes in the port at San Juan.
This created delays in off-loading emergency generators arriving by ship.

The mechanics of power delivery are important, especially when equipment is not available locally. Provisions may be needed for staging areas for generators at airports and seaports. Slowdowns in customs can delay the delivery of power. Special legislation can allow generators to be imported in emergencies, and provisions allowing temporary, duty-free imports of equipment can expedite delivery. Contacts established with freight companies during planning can increase the availability of ships or air transports when a disaster occurs.

Finances are another issue. As a part of the planning, there should be agreement on payment terms with mobile power suppliers. This may be a letter of credit from a financial institution or budgeting the necessary funds.

TESTING THE PLAN

An emergency plan is a living document and should be revisited and updated periodically. The plan should also be tested through simulation drills. In a typical drill, participants are presented with a specific scenario and asked to respond to it according to the procedures outlined in the plan.

It is useful to involve the local electric utility in drills. During an actual emergency, coordination between utility staff and emergency personnel may improve the utilization of mobile equipment. If emergency personnel know when utility power is about to be restored in a given sector, they can plan to release mobile power units to other areas where they are needed.

Disasters are by definition unpredictable, and even the best plan will not eliminate the need for good judgment and resourcefulness. However, a plan immediately moves disaster recovery several steps forward. It makes critical actions nearly automatic and provides a basis for sound decision making as the event unfolds.

NETWORK OF THE FUTURE

The development of backup systems of distributed energy resources is part of the network of the future that will be required to meet
the needs of the digital economy. This marriage of the network that started in the late 19th century with the more recent innovations of the late 20th will produce a future energy providing system that has been called an intelligent grid, energy net and Energy Web. Parts of this network are already appearing in many sectors of the power industry. They have their own momentum and are colliding with regulatory and market barriers. One project, called InfoWatt, would replace the steel core wiring used in power lines with fiber-optics. This would allow more power on existing lines. The new wiring would allow additional power plants to be brought online along with the distribution of this power. The University of Southern California's department of material science was involved in the mechanical testing of the design. InfoWatt also would provide additional computer bandwidth to transmit data between Internet backbones. Power lines are made of aluminum on the outside to carry the current, which tends to concentrate on the outer surface. Aluminum is not strong enough to support large cables, so a steel core is used to support the wires. However, steel has some drawbacks. It sags in the heat, which can trigger an outage. There is also some loss of transmission capability, because the steel core must be thick. This means less aluminum in the cable and less power capacity. If the lines are thicker, they become subject to environmental problems due to wind and ice. One solution is to change the core. Southern California Edison, the University of Southern California and several government agencies have been working on a fiber-optic core that is lighter than steel, thinner and can transmit data. In the past 20 years, the power industry has increased production by 30% but has only increased distribution capacity by 15%. There was excess capacity on the grid when power was generated and sold regionally. Deregulation and shipping power over long distances are stressing the capacity of the grid to carry the needed power. Power transfer has been a major problem in California. Northern California has been subject to blackouts. Southern California may have power to provide to the North, but a constricted section of the grid, called Path 15, does not have enough throughput. Building more power lines is expensive; an alternative is higher capacity cables on the existing towers. Replacing steel-core wires with fiber allows more aluminum to be
used in the same thickness of cable since the fiber core is thinner than steel. This results in 15% more capacity during normal loads and up to 200% higher capacity during peak loads. The smarter energy network of the future should utilize a diverse group of resources located closer to the consumer, providing low- or zero-emissions power in backyards, driveways, downscaled local power stations, and even in automobiles, while giving electricity users the option to become energy vendors. The front end of this new system will be managed by third-party virtual utilities, which will bundle electricity, gas, Internet access, broadband entertainment, and other energy services. This is similar to Edison's original vision of the industry, which was a network of technologies and services to provide lighting. Digital networks will be used to remake the grid. By embedding sensors, solid-state controllers and intelligence in this new supply chain, the grid should become more robust and adaptive, with more photovoltaic arrays and wind turbines. The new grid will have to extract the maximum value from limited resources.
DISTRIBUTED GENERATION
Distributed generation was Edison's first plan for universal electrification. Today, this energy mix includes photovoltaic arrays, gas turbines and variable-speed wind turbines that help to make the price of wind power competitive with fossil fuels. The development of renewable resources and increasing energy efficiency are essential to securing a sustainable future. A long-range prescription is found in EPRI's "Electricity Technology Roadmap," developed when 150 organizations brainstormed a set of goals for the next 50 years of energy R&D. It brought together representatives from the Department of Energy, the Natural Resources Defense Council, Rand, MIT, the New York Power Authority, General Electric, AT&T, Motorola, the Nature Conservancy, Exxon, the World Bank, Royal Dutch/Shell, Oracle, Microsoft and others. To meet the energy needs of the next century, the Roadmap suggests a substantial overhaul in how we think about electricity. The industry's most basic assumptions will have to be put on the table, including the hub-and-spoke hierarchy of the existing grid based on large central power stations with long distance transmission lines
radiating outward, which has been the backbone of the business since the 1920s. The current power infrastructure is incompatible with the future. The EPRI Roadmap indicates that an energy revolution is under way. In 1999, the Swiss engineering giant ABB announced that it was offloading the building of nuclear plants to concentrate on renewables and distributed generation. Smaller-scale methods for producing electricity closer to the consumer are not a new idea. This was Edison's first plan for universal electrification, where neighborhood steam plants would provide power and heat for 1-mile-square lighting districts.
MICROPOWER
Distributed generation is also called micropower. Renewables such as photovoltaic arrays and wind turbines are micropower resources along with reciprocating engines, fuel cells, Stirling engines and gas-fired microturbines. Micropower is surging on world markets, both in industrialized countries and in regions with no electricity. In the latter, distributed generation offers rural communities access to power without costly grid extensions by the major utilities. Wind power has been the fastest-growing energy source, growing at an average of almost 25% per year. Freestanding windmills and wind farms are going up all over, especially in Europe. Denmark gets almost 15% of its energy supply from renewable resources, and about half of the wind turbines in the world are made by Danish companies, such as Vestas Wind Systems and Bonus Energy. These units have been going to Germany, Spain, and the UK. One wind farm constructed in Texas, using Danish turbines, can provide enough electricity for almost 140,000 homes, while avoiding 20 million tons of carbon dioxide emissions from conventional power plants. EPRI has been involved in designing turbines that provide a steady flow of power under varying wind speeds. In 1989, the institute started a 5-year program to upgrade the technology. EPRI and the Department of Energy promoted the capability of wind power, while utilities and federal and state agencies mapped out promising high-wind areas for turbine sites. By 1995, variable-speed wind turbines designed by EPRI were generating 3 billion kilowatt-hours per year.
PHOTOVOLTAICS
These systems, which make power from sunlight, are growing internationally. In 2003, the largest solar energy project in the world was started in the Philippines. It involved the Spanish government, the Philippines Department of Agrarian Reform and BP Solar. The solar wing of British Petroleum provides more than 10% of the photovoltaic cells used in the world. The $48 million project on Mindanao Island will bring electricity to 400,000 residents of 150 villages on an island that is home to one-third of the nation's rural poor. The project will produce enough electricity for new irrigation and drinking water distribution systems, as well as lights and medical equipment for schools and health clinics. Seventy-nine power systems will be built. The Mindanao project illustrates the potential for micropower to raise the quality of life in developing countries without building large power plants or relying on expensive fossil fuels. Micropower is being demonstrated around the globe. Just as developing countries are jumping straight into mobile phone service without laying expensive land lines, micropower technologies are enabling those historically left in the dark to leapfrog hub-and-spoke grids.
POWER SENSORS
The Electric Power Research Institute envisions digital sensors along the grid and in homes, sending load information back to planning centers. Software could then analyze usage patterns and reroute power to anticipate outages. The blackouts indicate that the current grid technology lags behind our air-traffic-control network. When a storm shuts down air traffic in one part of the country, the airlines route flights around the affected area. A real-time monitoring system would inform customers of actual fluctuations in electricity prices. If prices are rising on a hot afternoon, facilities might choose to save money by turning off equipment. This would reduce the grid load and bring prices down, providing a self-correcting action that helps to satisfy the demand. Conservation is also important. During the California blackouts of 2001, parts of the state cut energy consumption by 20%. Consumers can reduce power during peak hours.
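The self-correcting behavior just described can be sketched in a few lines of code. The following Python fragment is a toy illustration of price-responsive load, not a model of any real market; the price curve, load figures and curtailment threshold are all invented.

    # Toy model of price-responsive demand: when the real-time price rises,
    # facilities curtail discretionary load, which in turn lowers the price.
    # All numbers are illustrative, not from any actual market.

    def clearing_price(load_mw, base_price=40.0, slope=0.5):
        """Very simple price curve: price rises with total load."""
        return base_price + slope * load_mw

    load_mw = 120.0          # total load on the feeder
    curtailable_mw = 30.0    # discretionary load that can be shut off
    price_cap = 80.0         # price at which facilities begin to curtail

    for hour in range(4):
        price = clearing_price(load_mw)
        print(f"hour {hour}: load {load_mw:5.1f} MW, price ${price:5.2f}/MWh")
        if price > price_cap and curtailable_mw > 0:
            shed = min(10.0, curtailable_mw)   # shed in 10 MW blocks
            load_mw -= shed
            curtailable_mw -= shed

Each hour of high prices sheds another block of load, and the printed prices fall with it, which is the self-correcting action described in the text.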
POWER RELIABILITY
Electricity is produced at most power plants by large spinning generators driven by moving water, reciprocating gas or diesel engines, or gas or steam turbines. The steam may be created by burning coal, oil, or natural gas or by a nuclear reactor. At a transmission substation, large transformers increase the voltage from thousands to hundreds of thousands of volts so the power can be transmitted long distances with minimal losses. The electricity travels along these high-voltage lines to a power substation, where the power can be redirected to other high-power lines or stepped down to a lower voltage that is sent to area power lines. In the power grids, there is a delicate balance between supply and demand where sudden fluctuations can cause portions to fail. If a transmission line breaks, the system is designed to isolate the problem and disconnect it from the grid. If the control mechanisms in place (the computers, circuit breakers and switches) fail to contain the problem quickly enough, there can be rapid fluctuations at substations elsewhere in the grid, triggering more shut-down mechanisms. The problem can spread back to generating plants that may be producing too much or too little electricity, causing more shutdowns. Eventually, the problem is contained or it spreads, causing a blackout. The 2003 blackout affected 50 million people in the U.S. and Canada located in eight states and two Canadian provinces. Three deaths were attributed to the power failures. The blackout shut down 22 U.S. and Canadian nuclear plants along with 10 major airports, and 700 flights were canceled nationwide. In Cleveland, 7,600 gallons (29,000 liters) of drinking water were distributed by the National Guard after the city's four main pumping stations went down. In New York, 350,000 people were stranded on the subway system when the power went out. Nineteen trains were stuck in underwater tunnels. The reliability of the system depends on procedures that were meant to isolate problems and keep them from spreading. Since the system works only when supply and demand are in balance, mechanisms on the grid monitor the flow of power. If there is a sudden failure, like a lightning strike on a transmission tower, circuit breakers open, and the sector releases itself from the grid. The process is called islanding and the goal is to contain the fault by sealing it off from the rest of the network. When this happens, it creates a hole and the network is
programmed to either pick up power from other sources or shed load, purposely shutting off power in one part of the grid to protect the rest of it. This is why suburbs are more likely than urban areas to suffer brownouts. The system is programmed to protect essential services like hospitals in the most densely populated areas. Getting a power system up and running after a blackout is called a black start. Power-generating units must be brought online slowly. Cities are divided into electrically isolated areas and brought back one by one in parts that the system can handle. As they build up the system, operators can start stringing the areas together, but if they move too fast, the whole system may go down again. Nuclear power plants usually take at least 24 hours to restart. Any load shift has to happen fast, and no one likes to drop load since it will have to be reported to the regulatory commission. It is also difficult to start up again. In the 2003 blackout, there may not have been time to make a decision. In past blackouts, there was a window of as much as 45 minutes to attempt corrective action. This time events moved across hundreds of miles in seconds. However, Vermont unplugged itself from power feeds from New York and the state was spared from the blackout. In Chattanooga, engineers at the Tennessee Valley Authority operations center saw the transmission flows spike and they got their generators to slow down and stabilize the flow of electricity in their area. A system of circuit breakers in Ohio also stopped the spread of the blackout south. Large amounts of money and time have been spent on the problem of spreading breakdowns since the blackout of November 1965, when an overloaded relay switch in Toronto left 30 million without power through New England and New York City. That event triggered the creation of NERC, an industry group that sets standards for the transmission system. NERC set up the system for isolating plants so that if one failed, it would not affect the others. Even with the new safeguards in place, there were still problems in July 1977, when lightning storms in northern Westchester County, N.Y., shut down two transmission lines. Within an hour New York City sealed itself off from the larger grid, but since the city does not generate enough power internally to sustain itself, nine million people lost power, some for more than a day. In July 1996, about two million customers from Nebraska to Washington State to Baja California Norte in Mexico lost power when a 345-
kilovolt line was shorted by a tree in Idaho. A mechanical problem shut down a parallel line, setting off a wide collapse. A combination of market forces, politics and a lack of welcome for high-voltage lines has hindered the development of a transmission system that can keep up with demand. There is little incentive for utilities to construct new lines, especially after new federal rules in the late 1980s effectively capped the return on such investment at around 11%. By 1999 transmission investment was less than half the $5 billion it had been 20 years earlier. It has been known for over a decade that the utility industry was not investing enough in the reliability of the grid. Congress has neglected the operation of the system.
BUSINESS LOSS
Small commercial property insurance policies typically exclude coverage for damage from power outages. This is not true with some large commercial policies because many large companies buy special endorsements to cover service interruptions. However, policies are triggered by different factors, including the duration of the outage and its cause. Most types of outages are not covered, but exceptions sometimes include damage caused by effects of an outage, such as a fire, or terrorism, if that type of coverage has been purchased. In the August 2003 blackout, small businesses suffered outage-related losses from $100 to $125,000 according to a Detroit Regional Chamber survey of 150 members in the retail and food-service industries. The Associated Food Dealers of Michigan estimates that grocers lost more than $50 million in perishable food in the outage. Oakland County in Michigan estimates the loss at more than $90 million, with $82 million in lost wages, $5 million in lost business and about $4 million in damages to utilities. The power outage caused numerous lawsuits for lost inventories, productivity and even information stored in computers. The Michigan Lawyers Weekly stated that not only power companies but others that allegedly failed to provide proper emergency backup systems could be targets. Companies that failed to prepare appropriately and could not provide proper backup systems for critical functions like computer data could be subjects for lawsuits. The New York City firm Cauley, Geller, Bowman & Rudman L.L.P.
filed a class-action lawsuit in the Court of Common Pleas in Cuyahoga County, Ohio, on behalf of all those who lost power. The complaint against FirstEnergy Corp. charges that the company recklessly caused the power outage, failed to have a functioning alarm that could have warned of problems, failed to cut back problem tree limbs and failed to maintain a fail-safe system that could have separated the local system from the rest of the power grid. The complaint seeks damages for injuries as well as punitive damages.
BLACKOUT LESSONS
Hospitals hit by the massive August 2003 blackout learned some important lessons that can help prepare for similar disasters. One was to acquire multiple power generators. Another was to test backup procedures. Like many other health care organizations, William Beaumont Hospitals in Royal Oak, Michigan, and Memorial Sloan-Kettering Cancer Center in New York have diesel generators for running information systems during a power outage. These generators automatically kicked in during the 2003 outage that hit parts of the United States and Canada. But, these hospitals encountered unexpected problems. At Beaumont, information technology personnel had been fighting the MSBlaster worm and had just started applying patches to workstations. Many of the workstations normally connected to both regular and emergency power outlets could not be used until they received the patch. Two generators dedicated to information systems at Beaumont Hospitals' Royal Oak and Troy facilities kept laboratory, pharmacy, clinical and registration systems running during the blackout. The workstations were cleansed of the MSBlaster worm, with registration and laboratory workstations cleaned first. Registration workstations came first so that patients could be entered into the information systems for tracking. The hospital was short of PCs because only a limited number are connected to emergency power and not all of these were yet virus-free. Moving to backup paper forms was not a problem. When systems are taken down for software upgrades, the staff must go to paper processes. The downtime during upgrades provided the drill-time needed. Simulated drills are hard to do, while the downtime from upgrades is real.
At Memorial Sloan-Kettering, one of three auxiliary units that protect against power surges burned out during a large power surge just before the blackout. Circuit breakers blew, as designed, disconnecting the generator's link to several servers running critical computer applications. Both problems took time to correct in an emergency environment. Hospitals do not operate on full power with generators during blackouts. Generators run only critical patient care devices and information services. Food preparation is minimal, serving patients and not always employees. Most hospital air conditioning units do not work during a power outage and the workplace gets hot. Beaumont had to truck in water because the blackout shut down the Detroit area's water distribution system for four days. After the circuit breakers blew at Memorial Sloan-Kettering, the hospital's e-mail system was down for 30 minutes and its primary clinical system was out for about 90 minutes. The human resources system went down and came back up about five hours later. The auxiliary unit that burned out was an uninterruptible power supply (UPS). This battery backup unit is used to handle power during the transition from normal electrical power to generator power. Having multiple UPS units gave Memorial Sloan-Kettering multiple power sources for information systems. Therefore, most information systems were not affected by the failure of one UPS unit. At Memorial Sloan-Kettering, which had undergone recent renovations, some floors did not yet have emergency outlets connected to a generator, so workstations that are supposed to be connected to normal and emergency power were down. Workstations were plugged into emergency outlets in telephone closets. The hospital also had some difficulty contacting employees at home because so many homes have cordless phones. These phones plug into normal outlets for electrical power to recharge batteries and transmit signals between the base unit and the cordless receiver. A hardwired phone, with a handset connected by cord to the unit, receives all of its power from the telephone line. Hospitals and other facilities considering the purchase of generators can buy more than one unit to split the load and have redundancy in emergency power. Generators should operate at a minimum level of 30% of capacity because a generator running below its minimum capacity level may not burn all the fuel in the cylinders, which washes oil off the cylinder walls and cuts engine life.
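The 30% loading rule lends itself to a simple check. The Python sketch below, using invented ratings, finds how many identical units can share a load while each stays at or above the 30% floor; it illustrates the rule only and is no substitute for a proper generator sizing study.

    # Check how many equally sized generators can share a load while each
    # unit stays above the 30% minimum-loading floor. Illustrative only.

    MIN_LOAD_FRACTION = 0.30

    def units_to_run(load_kw, unit_rating_kw, available_units):
        """Return the largest number of units that covers the load while
        keeping every unit at or above the minimum loading fraction."""
        for n in range(available_units, 0, -1):
            per_unit = load_kw / n
            if (per_unit <= unit_rating_kw
                    and per_unit >= MIN_LOAD_FRACTION * unit_rating_kw):
                return n
        return None  # load cannot be carried within the loading limits

    # Example: a 450 kW emergency load and three 500 kW units.
    print(units_to_run(450.0, 500.0, 3))   # -> 3 (150 kW each, at the floor)

Running the largest qualifying number of units gives the most redundancy while still keeping each engine loaded well enough to burn its fuel cleanly.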
Facilities with generators should run them for one to four hours under load quarterly as part of a regular maintenance program. A dummy load can be created by connecting a generator to a resistive load bank (an electrical load unit built from resistors). Allow the water and oil temperatures to stabilize, then monitor performance over the testing time.

References
Dietderich, Andrew, "Insurers Denying Claims Until Cause of Outage Found," Crain's Detroit Business, Vol. 19, August 25, 2003, p. 22.
Goedert, Joseph, "The Blackout as a Learning Experience," Health Data Management, Vol. 11 No. 10, October 2003, p. 14.
Hirsh, Michael and Daniel Klaidman, "What Went Wrong," Newsweek Special Report, August 25, 2003, pp. 37-41.
"Trouble All Down the Line," Time, Vol. 162 No. 8, August 25, 2003, pp. 35-38.
Chapter 2
Power Stability and Quality

GRID OPERATION
The electric power grid suffered a massive blackout on the afternoon of August 14, 2003 when lights went out from Ohio and Ontario to New York. A local system failure cascaded over a wide area, and many observers have long seen major problems in the grid and have searched for ways to minimize the effects of blackouts. The grid is always involved in a balancing act. The amount of electricity taken from the lines (the load) has to match the electricity being generated. If the power generation drops too much, system controllers have to shed load, causing brownouts or blackouts. The electricity flows through the grid as alternating current, so AC frequencies at each station must match. Partial deregulation during the early 1990s allowed some states to separate their generation and transmission industries. Generation systems boomed, but transmission lagged behind due to a patchwork of interstate regulations and jurisdictions. Nationwide policies covering transmission system operation, capacity and investment would force transmission owners to implement a stronger and more resilient grid. Currently protective relays shut down power lines if high currents threaten to make them overheat and sag, but those lines could be kept functioning with more heat-resistant conductors, which are already available. Generators switch off if the AC frequency or phase changes rapidly because the generators can damage themselves trying to respond to these changes. The use of braking resistors, which convert electricity to heat, could help generators make smoother transitions. Better communications among power stations would also aid in stabilizing the grid. Protective relays rely on local information and may disconnect a line unnecessarily. Dedicated fiber-optics would permit
comparisons of conditions at adjacent stations, reducing needless shutdowns. The Global Positioning System (GPS) could be used to put time stamps on station readings, allowing operators to make better decisions by using successive snapshots of grid conditions. The Bonneville Power Administration, based in Portland, Oregon, and Ameren Corporation, a St. Louis utility, use GPS time stamping. Once operators get a snapshot of grid conditions, they could transfer the information to faster, smarter switches. Flexible AC transmission devices could tune the power flow. Superconducting valves called fault current limiters would allow circuit breakers to disconnect lines cleanly. Installing more AC lines or more powerful lines would increase transmission capacity but could lead to bigger transients in the grid. If something goes wrong, there has to be a way to contain a disturbance, and the most common way to do that is to disconnect lines. A master computer with a total view could serve as traffic control for the grid. Studies indicate that such a global view would have prevented about 95% of customers from losing power during the 1996 blackouts in the western U.S. One technique to improve control would automatically quarantine trouble spots and divide the remaining grid into islands of balanced load and generation. EPRI has commissioned computer-modeling studies of the technique, called adaptive islanding. These studies concluded that it could preserve more load than conventional responses. Adaptive islanding would take about five years to implement, but blackouts would not disappear. The chance of a cascading failure is real in stressed or highly interconnected systems. With every incremental increase in grid reliability, the cost of the next increment goes up. Over the long term, some improvements in grid complexity could occur. Direct current lines, which have no frequency associated with them, tend to act as shock absorbers to disturbances in AC systems. DC lines separate the Texas power grid from the eastern and western grids. Adding more could help make the grid system more stable, although high-voltage DC is expensive.
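The GPS time stamping mentioned above puts readings from distant stations on a common time base so their phase angles can be compared directly. The short Python sketch below is a minimal illustration of that comparison; the station readings, the 5-degree alarm threshold and the snapshot format are all invented for the example.

    # Compare GPS time-stamped phase-angle readings from two stations.
    # A widening angle difference across a corridor can warn of stress
    # before relays act on purely local information. Numbers are invented.

    snapshots = [
        # (gps_time_s, angle_station_A_deg, angle_station_B_deg)
        (0.00, 12.0, 10.5),
        (0.25, 12.4, 10.1),
        (0.50, 13.1,  9.2),
        (0.75, 14.0,  7.8),
    ]

    ALARM_DEG = 5.0   # illustrative threshold on the angle separation

    for t, a, b in snapshots:
        separation = a - b
        flag = "ALARM" if separation > ALARM_DEG else "ok"
        print(f"t={t:4.2f}s  separation={separation:4.1f} deg  {flag}")

In this invented trace the separation widens with each snapshot and trips the alarm on the last reading, the kind of successive-snapshot view the text attributes to GPS time stamping.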
VIRTUAL UTILITIES
Deregulation of the power industry has changed the way the utility business operates, but changes in the future will be more visible.
Most noticeable will be a reduction in the number of high profile, monolithic power stations and their replacement by small, localized generators. While this may improve power availability, it will add to the number of transmission lines already in place. But, it will also make it more economically viable and practical to incorporate less common forms of power generation. While these developments may lead to the need for more local transmission lines, advances in reactive compensation will allow more lines to be run underground. These changes are currently driven by the needs of manufacturers and organizations that could benefit by generating their own electricity and selling any surplus. The benefits of localized generation will grow and this may reduce the need to transmit electricity over very long distances. This increased volume of privately generated electricity requires more control and management, and this means the development of virtual utilities. These are organizations that do not own generating capacity, transmission lines or distribution equipment. They control a power network by paying those who supply electricity to the system and collecting from those who use power. They maintain the infrastructure through subcontractors. Microturbines, wind power generators, solar energy and fuel cells do not naturally produce electricity at 50 or 60 Hz. More flexible AC transmission systems would help the efficient connection of these resources to grid systems. Grid stability will become an even more complex issue for control and protection. One technique is to use high voltage direct current (HVDC) systems as isolating links between grids as a way to reduce the need for large-scale synchronization.
DC TRANSMISSION
The use of direct current on power networks avoids the problems of instability which can occur on long AC transmission lines and cause surges and blackouts. When connecting isolated grids, HVDC back-to-back stations allow power interchange while blocking power line problems. DC transmission means lower line costs, with no need for frequency control equipment, and lower line losses. But, HVDC stations are more expensive than AC substations and may interact in an adverse manner under certain conditions.
ABB Power Systems' HVDC Division in Ludvika, Sweden, has been involved in several developments in HVDC. These include using insulated gate bipolar transistors (IGBTs) instead of thyristor valves for control. This improves the cost range for DC transmission, making it more feasible for local distribution. Another development is deep-hole ground electrode technology, which drops cabling costs by replacing one wire with an earth return. The electrode can be located close to the station, with reduced power loss and interference, and provides opportunities to use monopolar HVDC transmissions. Land electrodes usually require a large area, especially where the earth resistivity is high. But, lower resistivity can often be found between 100 and 200 m below the surface due to a higher salt content. This means lower electric potentials and potential gradients than at the surface.
DISTRIBUTED POWER
Distributed power generation means placing energy generation and storage as close to the point of consumption as possible with maximum conversion efficiency and minimal environmental impact. Typically, centralized power stations are over-designed to allow for future expansion and so they run for most of their life at a reduced efficiency. They also represent a higher financial risk to the owner because of the greater amount of investment in a single plant. Distributed generation involves dispersed generators, which are customer-sized and usually in the service transformer range of 5-500 kW. These are connected at low voltage to the network. Larger distributed generators are about the size of primary distribution equipment such as feeders or substation transformers, in the range of 2-10 MW, and are connected at medium voltages to the network. Thermal generators may be used where heat is the main energy requirement. In one virtual utility project, ABB has a partnership with Progress Energy of the USA, which supplies about 3 million customers in the Carolinas and Florida. This project allows the connection of combinations of energy sources, including microturbines, CHP plants, wind power and fuel cells. Internet-based links are used to connect the sources to a central control center. This allows Progress Energy to monitor about
10 MW of distributed generation from a single location. The control of virtual utilities requires monitoring and supervision software, with aggregation and reporting software to aid decision-making functions. This software interfaces with other packages that link users to trading and forecasting packages. The market rates are continually available and compared with generation capacity. Unit control and dispatching packages complete the software functions. An increase in virtual utilities should result in increased trading competition and lead to further technical developments. This will improve efficiency in system operation along with forecasting and scheduling. Weather forecasting will also be used as a crucial factor in predicting power usage. Improved simulation software should result in improvements in forecasting and scheduling decisions. The methods used for managing a network will also change. Currently, most power networks use a top-down structure, with centralized controls for energy sources. In the future, it should be possible to have a bottom-up integrated optimization of energy supply based on the availability of generators, with decentralized control. This would allow more resource optimization and reduce the need for high levels of standby capacity. In transmission and distribution, optimum routing will become more important to match energy usage to local sources. The key objective will be to route electricity by the shortest path and to optimize the energy imported from outside regions. In the future, environmental and resource conservation pressures will lead to an increased regional and intercontinental energy exchange. This will require enhanced management systems. By 2060, the World Bank estimates that developing nations will consume over twice the power used by developed countries.
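The comparison of market rates with generation capacity can be reduced to a very simple dispatch decision. The Python sketch below is a bare-bones illustration; the unit names, marginal costs and capacities are invented, and a real virtual utility would add forecasting, network constraints and contract terms.

    # Dispatch distributed generators whose marginal cost is below the
    # current market rate. All figures here are invented examples.

    units = [
        {"name": "microturbine-1", "cost_per_mwh": 55.0, "capacity_mw": 0.2},
        {"name": "chp-plant",      "cost_per_mwh": 38.0, "capacity_mw": 2.0},
        {"name": "fuel-cell",      "cost_per_mwh": 70.0, "capacity_mw": 0.5},
        {"name": "wind-farm",      "cost_per_mwh": 10.0, "capacity_mw": 4.0},
    ]

    def dispatch(units, market_rate_mwh):
        """Run every unit that is profitable at the current market rate."""
        running = [u for u in units if u["cost_per_mwh"] < market_rate_mwh]
        total = sum(u["capacity_mw"] for u in running)
        return [u["name"] for u in running], total

    names, mw = dispatch(units, market_rate_mwh=60.0)
    print(names, f"{mw:.1f} MW")   # the fuel cell stays off at $60/MWh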
GRID STABILITY
Stability is the property of a system that, when disturbed, returns to its original condition. In a power distribution system consisting of transmission lines connected together with two or more generators, the rotors of the generators normally rotate at a constant speed and are in step. When a fault occurs on one of the lines, the generator closest to the
fault will supply the largest portion of the fault current while the other generators supply smaller parts of the current depending on their distance from the fault. The sudden load on the generators will cause them to slow down, but not equally. The generator closest to the fault will slow down more than the others and the generators will no longer be in step. The governors connected to the generators will attempt to bring the generators back to normal speed. There will be an angular displacement or difference between the rotors of the generators. The rotor of the generator that has slowed the most will attempt to return to normal while the rotors of the other generators may have already returned to normal speed. The generators will tend to slow down as the load on them is reduced, with the slowest generator picking up load. This causes a rocking motion to take place on the rotors. As the fault is removed by taking out of service the line section on which it is located, the rocking motion will decrease and the generators will get back into step. If the fault persists, other generators will try to pick up the load and may fall out of step until a complete shutdown occurs. If the fault is removed quickly enough, the rocking motion will decrease as the generators return in step to their original condition. The system, except for the faulted section taken out of service, then returns to normal.
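The rocking motion described above is commonly modeled with the swing equation, in which rotor acceleration is driven by the difference between mechanical input and electrical output. The Python fragment below integrates a single-machine version with invented per-unit constants, purely to show the damped oscillation that follows a disturbance; it is a textbook toy, not a planning tool.

    import math

    # Toy swing-equation integration for one machine on an infinite bus:
    #   M * d2delta/dt2 = P_mech - P_max*sin(delta) - D * ddelta/dt
    # Per-unit constants are invented for illustration.

    M, D = 0.1, 0.05          # inertia and damping (per unit)
    P_MECH, P_MAX = 0.8, 2.0  # mechanical input and peak electrical transfer

    delta = math.asin(P_MECH / P_MAX) + 0.5   # angle kicked up by a fault (rad)
    omega = 0.0                                # rotor speed deviation (rad/s)
    dt = 0.001

    for step in range(5000):                   # simulate 5 seconds
        accel = (P_MECH - P_MAX * math.sin(delta) - D * omega) / M
        omega += accel * dt
        delta += omega * dt
        if step % 1000 == 0:
            print(f"t={step*dt:4.1f}s  delta={math.degrees(delta):6.2f} deg")

The printed angle swings back and forth around its steady-state value and decays, which is the rocking motion the text describes; removing the damping term keeps the rotors oscillating, as with a persistent fault.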
SYNCHRONOUS MACHINES
Synchronous machines for power generation are usually the round rotor type or the salient pole type. The large round rotors are generally found in steam turbine generator installations, while the salient pole units may be found in water-turbine generators, synchronous capacitors and synchronous motors. A synchronous capacitor or condenser is a machine which is allowed to float on the line and draw only leading current from the line. It is not intended to supply any mechanical load and is built with a lighter construction than a regular synchronous motor. They are rated in kVA on a zero-leading power factor basis and are capable of receiving capacitive kvars equal to their rating. Synchronous capacitors range in size from a few hundred to hundreds of thousands of kvars. The larger units are hydrogen cooled and installed outdoors.
The salient pole unit has a relatively large air-gap between the adjacent pole sides. Each pole winding is concentrated on the pole, and the magnetic field or flux distribution in the air-gap is adjusted to produce a sine wave output by saturation of the pole tips and by chamfering them. The round rotor type has a distributed field, and magnetic flux passes between adjacent poles. The flux decreases in steps going from coil to coil of the distributed winding.
OVERHEAD LINES
Overhead transmission lines employ large conductors that may be made of stranded copper conductor or aluminum conductor steel reinforced (ACSR). The conductors must have enough mechanical strength to support long spans under normal conditions and also under the conditions of ice and wind loading. Hard drawn copper is used to produce the highest strength copper conductors, as well as aluminum. In ACSR conductors, the steel core is considered as taking all the mechanical tension. The AC resistance varies with the amount of alternating current flowing, but it is often determined for a current density of 600 amperes per square inch. There is an increase in the AC resistance due to skin effect eddy currents, which increases as the diameter of the conductor increases. The skin effect for the ACSR conductor is generally greater. An approximate rule for the voltage of transmission lines is 500 to 1000 volts per mile of the line. In the early days of AC power in the United States, the operating voltage increased quickly. In 1890, the Willamette-Portland line in Oregon operated at 3,300 volts. By 1907, a line was operating at 100-kV and in 1913 the voltage rose to 150-kV. In 1926, the voltage was 244-kV. By 1953, lines operating at 345-kV were being constructed. The power that can be safely transmitted over a transmission line at a specified voltage varies inversely with the length of the transmission line. The cost of the transmission line increases directly with its length so that the cost per kW transmitted increases more rapidly than the first power of the length of the line. The series line impedance is mostly reactance and may be halved if the line frequency is changed from 60 cycles to 30 cycles; then the
power that may be transmitted over a given distance would be doubled. But, the costs and size of the equipment at the lower frequency are much greater, and the possibility of flicker in lighting loads makes these lower frequencies generally impractical. In a 60-cycle transmission system, if the line voltage is doubled without altering the line, the amount of power that can be transmitted is about four times greater.
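These relations can be put into a small worked example. In the Python sketch below, transmittable power is taken as proportional to the square of the voltage and inversely proportional to line length (since the series reactance grows with length), and the 500 to 1000 volts-per-mile rule of thumb is included; the reference line values are invented for the illustration.

    # Transmittable power scales roughly as V^2 / X, and series reactance
    # grows with line length, so power capability varies as V^2 / length.
    # Reference values below are invented for the illustration.

    def relative_power(voltage_kv, length_miles, ref_kv=100.0, ref_miles=100.0):
        """Power capability relative to a reference line."""
        return (voltage_kv / ref_kv) ** 2 * (ref_miles / length_miles)

    def rule_of_thumb_kv(length_miles):
        """500 to 1000 volts per mile of line, expressed in kV."""
        return 0.5 * length_miles, 1.0 * length_miles

    print(relative_power(200.0, 100.0))   # doubling voltage -> 4.0x power
    print(relative_power(100.0, 200.0))   # doubling length  -> 0.5x power
    print(rule_of_thumb_kv(150.0))        # -> (75.0, 150.0) kV for 150 miles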
SYNCHRONOUS CONDENSERS
In long distance alternating current transmission, one or more intermediate synchronous condensers are normally used. They allow the transmission of more power over a given transmission line or system at a specified voltage, resulting in a lower cost per kW transmitted. The reliability of the transmission system is also improved since an intermediate synchronous condenser also aids the stability of a transmission system. For a given size and spacing of conductor, the series impedance of a line increases directly with the length of the line. For a given line voltage and a specified operating angle between voltages, the line drop is constant in magnitude. The longer the line length, if intermediate synchronous condensers are not used, the smaller the line current will become. If the operating angle is increased, the current will increase but stability is reduced. This prevents increasing the operating angle above a certain value.
TRANSMISSION CIRCUITS
Transmission lines supply power in large blocks to load centers. This power may originate from several generating stations and other sources that may be part of a large power pool or grid. When a fault occurs on a transmission line that supplies power to a load that has other sources of power supply, that line will be removed from service and the load served from other sources. Each of the other sources may have to increase its output to take care of the loss of supply from the faulted line. In some cases, the other sources may not be capable of the sudden increase in demand, so a part of the load may have to be dropped to prevent the complete loss of all of the load.
To maintain a high degree of continuity of service, the system is usually designed so that, with the loss of the largest source of supply, the rest of the system is capable of picking up the load. Ideally, this should be done without interruption of service or loss of any portion of the load. It is desirable to have power transmitted from a generating station over more than one transmission line in order to provide the continuity of service desired. These transmission lines from a source are often operated in parallel.
OPERATING CONNECTIONS
The important operating connections employed in transmission lines include the line step-up and step-down transformers and the high voltage circuit breakers. It is not economical to generate voltages much higher than about 20,000 volts. The power is generated at medium voltages and stepped up by transformers to several hundreds of thousands of volts. At the end of the transmission line, transformers step the voltage down to a voltage which may be as high as 70,000 volts. The distribution transformers then step this voltage down to the local distribution voltage, which is usually 2,300 volts in suburban communities. In larger communities, the distribution may be done in underground cables at voltages of 6,600, 13,200 or 26,400 volts, although higher voltages are also used. Power may be sent 10 or 20 miles at these voltages, although they are not classed as transmission line voltages. For amounts of power greater than 25,000 kW, the voltage used increases directly with the distance involved. This voltage is 500 to 1000 volts per mile, so a line of 150 miles would be designed to use 100,000 to 150,000 volts. The transmission system includes the high voltage buses and structures along with the generators and their controls and also the receiver synchronous condensers. There may be two or more transmission lines from the same generating station. They are not paralleled except through the receiver end low voltage load center. The interrupting duty of the low voltage circuit breakers at the sending end of the system may be relatively low. For the continuity of service, the source should have enough capacity to immediately pick up the load being carried by the line which may be removed from service. When there is more than one source, the
sources remaining in operation may pick up parts of the load and the remaining capacity of each source and associated transmission line may become smaller. The sources and lines may operate at a fairly large phase angle difference under normal conditions provided that the loss of the largest source of power supply in the system would cause a relatively small percentage increase in line load and operating angle. But, if the increase in line load is relatively large, then the normal operating phase angle may have to be smaller. If there is more than one line, the unit system connection, lines and transformers, may be used with the lines bused at each end on the low voltage side of the transformers.
TRANSIENT OPERATION OF TRANSMISSION SYSTEMS
The alternating current electric system is essentially a constant speed system where the speed is controlled within very narrow limits. The governors for the generators operate to keep the machines rotating within these limits. If an increase in load occurs on the electrical system, the governor does not start to operate until a small but definite increase or decrease in speed has occurred. In general, the speed will increase or decrease still further before the governor's action on throttle mechanisms operates sufficiently to balance the increase or decrease in the electrical load. Up to the time that the balance is obtained, the electrical output or input may be greater or less than the input to the prime mover, and the difference between outputs and inputs (neglecting machine losses) is supplied by a decrease or increase in kinetic energy of the rotating parts of the generator and prime mover. Transient operation following a switching operation follows a set sequence. When a generating system is supplying a load through two paralleled transmission lines and a receiver end condenser is operating, the operating angle between sending end and receiver end low voltage buses may be about 10 degrees. If one transmission line with its transformers is suddenly removed from service, then the speed of the generators should increase enough to cause the throttle to operate and the prime mover input would decrease. The mechanical moment of inertia of the generator and prime mover rotating parts prevents an immediate change in the operating angle.
Part of the load is initially supplied by the stored kinetic energy in the rotor of the condenser which begins to drop back in phase angle. The generator rotor begins to advance in phase angle until the electrical power output of the generator is equal to that of the load. At this point, the rotor of the condenser is running at a slightly slower speed than the rotor of the generator, and the condenser rotor drops farther behind in phase angle and the line transmits more power than required by the load. The excess power transmitted becomes available to increase the speed of the condenser rotor back to that of the generator. When this is accomplished, the speed of the two machines is equal but the power output of the line is greater than the load requires and the condenser rotor is accelerated until it reaches a new normal steady state operating angle. But, it is moving faster than it should be and it will move to a lower operating angle where the power output of the line is less than that required by load.
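Until the governors act, any mismatch between electrical load and prime mover input is drawn from the stored kinetic energy of the rotating parts, E = ½Jω². The Python sketch below shows the resulting speed decay for a single machine with invented constants; it deliberately ignores governor action, since the point is what happens before the governor responds.

    import math

    # Before the governor responds, a load/input mismatch is supplied by
    # rotor kinetic energy E = 0.5*J*w^2, so the machine slows down.
    # Machine constants are invented for the illustration.

    J = 5000.0        # rotor moment of inertia (kg*m^2), invented
    omega = 377.0     # 3600 rpm two-pole machine at 60 Hz (rad/s)
    deficit_w = 2.0e6 # electrical load exceeds prime mover input by 2 MW
    dt = 0.01

    for step in range(1, 101):            # first second after the load step
        energy = 0.5 * J * omega**2
        energy -= deficit_w * dt          # mismatch drawn from stored energy
        omega = math.sqrt(2 * energy / J)
        if step % 25 == 0:
            hz = omega / (2 * math.pi)
            print(f"t={step*dt:4.2f}s  shaft speed={hz:6.3f} Hz")

With these numbers the machine loses only a fraction of a hertz in the first second, which is why the slower-acting governor can still catch the excursion in normal operation.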
SWITCHING OPERATIONS
A switching operation that removes one line from service produces a large increase in load on the line remaining in service and produces electrical-mechanical oscillations between synchronous machine rotors at each end of the system until these oscillations are damped out. The magnitude of the first quarter cycle of oscillation is the difference between the new steady state operating angle and the old steady state operating angle before the system was changed. Since the power output varies as the sine of the operating angle between machines, greater angular changes may be produced during the second quarter cycle of such an oscillation than during the first quarter cycle. This type of oscillation normally occurs at the rate of about one to two cycles per second. At the rate of two cycles per second, the percent change in angular velocity is about 0.5 percent. The small percent change in speed produced during the oscillation shows an electrical system under transient operation which is readjusting itself to a new steady state operating condition. There is an almost constant power input through each prime mover during the initial stages of the transient operation. The governors on the prime movers do not act until after the worst
portion of the transient operation is over. If a three-phase fault should occur, during the time the fault is on the transmission system, it will reduce the line voltage at the fault to zero and prevent all power from being transmitted between the generators and the load. The time of clearing of a fault may be less than ten cycles on a 60 cycle system or less than 0.2 seconds. If the line was operating at full load, this power is available during the time of fault to accelerate the rotating parts of the prime movers and generators above synchronous speed. At the receiver end the only source of supply is from the receiver synchronous condensers that hold up the load voltage. At the time of fault, the condensers act as generators and supply power to the load as well as to the fault. This causes a decrease in the stored kinetic energy in the rotor of the condensers causing them to slow down. The generators at the sending end of the system are accelerating while the synchronous condensers at the receiving end are decelerating. Both effects produce an increase in the operating angle between internal machine voltages. As the fault is cleared, the operating angle between machine rotors increases. If the operating angle should increase beyond 90°, the maximum power output of the transmission system is reached and the system will pull out of step. If the system does not pull out of step during the fault, then when the fault is cleared the remaining line in service may again transmit power, but the operating angle has increased and the rotors at each end are no longer operating at identical speeds. Even if the new steady state operating angle with one line out of service is reached at the time the fault was cleared, there will be an overswing before the rotors again reach identical speeds. If this overswing reaches beyond 90°, then the system would fall out of step before the machine rotors reach identical speeds. During the fault and overswing, after the switching operation to remove the fault is complete, the actual change in speed of the system may not be sufficient to cause the governor of the prime mover to operate and to change the input to the generators in the short period of time involved. The stored kinetic energy in the machine rotors, along with the power angle determines any acceleration of the rotors above or below synchronous speed. During the fault the system as a unit begins to increase in speed above synchronous speed and there is also a net acceleration between machine rotors. This net acceleration between rotors
determines if the machines stay in synchronism or pull out of step. The machines normally do not lose synchronism during the fault, if it is cleared in a reasonable time by automatic breakers and relays, but the difference in angular velocity acquired during the fault is often sufficient to cause the machines to pull out of step after the fault is cleared. This becomes a function of the amount of synchronizing power which may be transmitted after the fault is cleared, so that there is also a net acceleration after the fault is cleared which tries to change the rotor velocities so that they are again equal. The line section removed when the fault is cleared reduces the voltage at the load so that the load does not take as much power as formerly. The system as a whole continues to increase in speed until the governors act to reduce the input, but this occurs after the worst of the transient oscillations takes place. If the machines do not lose synchronism at the end of the first half cycle of the oscillation, then they will probably not fall out of synchronism, because during the next cycle there is more time for the voltage regulation to increase the transient internal voltage and the oscillations are being damped out by losses in the field circuit. These losses are produced by the oscillations. Damper windings are used by synchronous condensers, while other machines use an approximate equivalent in the solid rotor and metal wedges in their construction. These windings make the damping effect more rapid.
FAULT REMOVAL
The protection of transmission systems from faults must be accomplished by removing the fault as quickly as possible. The usual technique involves high speed circuit breakers actuated by relays. The circuit breakers usually operate in oil. Smaller sizes operate in air. In some air breakers, the arc that forms when the contacts separate is driven by a magnetic field into a series of small gaps. These gaps aid deionization and extinguish the arc quickly. In some types of circuit breakers, compressed air is used to extinguish the arc. In oil circuit breakers, the mechanism is immersed in an oil similar to transformer oil. When the circuit is opened and an arc forms, some oil is vaporized and a pressure is built up. The pressure of the oil helps to
extinguish the arc. Some circuit breakers are mounted in the same oil with the transformer. The relaying of faults depends on the types of relays used. These include overcurrent and directional overcurrent relays as well as pilot systems. These types of relays are not used for the protection of long single lines or loop systems. Here, distance relays are employed. They compare the voltages and currents during a fault which are a function of the circuit constants between the relay and the fault along with the distance to the fault.
OVERCURRENT RELAYS
During a short circuit the current flow in the affected conductors is greater than the load currents. This difference allows the relay to discriminate between loads and faults. The fault current will depend on the fault location, type of fault, connections and amount of generating capacity. The measurement of fault current alone does not determine the location of the fault. A system can be sectionalized with relays set to operate on a minimum fault current for a fault in that section. To prevent unnecessary tripping of breakers in other sections, time delays are used in the operation of the relays. The time delay increases as the source is approached so that the breaker nearest the fault clears before other relays operate. When fault currents can flow on either side of a bus, a directional element is used which allows the relay to trip only when current is flowing away from the bus. An inertia factor affects the time required to clear a fault. This is the ratio of stored kinetic energy in the machine rotors at the sending end to the stored kinetic energy in the prime movers at the sending end. In a water wheel generator, the allowable time for clearing a fault varies from about 0.04 to 0.14 seconds. This takes about 2.5 to 8.7 cycles. A large moment of inertia is present in water wheel generators. Steam turbo-generators run at a higher speed and have a higher amount of stored kinetic energy. A transmission system using steam turbo-generators will have an inertia factor of about 10-12 and an allowable fault-clearing time of more than twice that of water wheel machines, which have factors of about 3.
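The time grading described above can be shown in a few lines of code. In the Python sketch below, breakers along a radial feeder receive progressively longer delays toward the source, so the breaker nearest a fault clears first; the 0.4-second coordination interval and the feeder layout are invented for the example.

    # Time-graded overcurrent protection: delays increase toward the source
    # so the breaker nearest the fault clears first. The 0.4 s coordination
    # interval and the one-line layout are invented for this illustration.

    STEP_S = 0.4   # coordination interval between successive relays

    # Breakers listed from the source outward along a radial feeder.
    breakers = ["station", "mid-feeder", "lateral", "load-end"]

    def time_settings(breakers, base_s=0.2):
        """Delay grows toward the source: the farthest breaker is fastest."""
        n = len(breakers)
        return {name: base_s + STEP_S * (n - 1 - i)
                for i, name in enumerate(breakers)}

    settings = time_settings(breakers)
    for name in breakers:
        print(f"{name:12s} trips after {settings[name]:.1f} s")
    # For a fault past "load-end", that breaker (0.2 s) opens first and the
    # upstream relays reset before their longer delays expire.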
Circuit breakers and relays are designed to provide a high speed interruption which improves the stability. A total relay and circuit breaker time of less than three cycles is typical. This is improved with newer electronic relays. When several breakers are in series, the cascaded time settings require that the relays nearest the generators have longer time delays. When more than a single source of supply is connected to the system, especially in a ring bus system, the appropriate time settings must be calculated. The short circuit currents which exist under the known circuit conditions are calculated by the method of symmetrical components and the currents through each of the relays determined from these calculations. Differential, pilot and balanced protection all depend on the comparison of currents which have defined relations to each other under normal conditions, and much different relations during the fault conditions for which the breakers should trip. Differential protection is used for protecting generators, transformers and bus structures from internal faults. The vector sum of all of the currents in each phase, flowing out of the equipment being protected, is sensed by the relay. If the currents in the relays are not at the same voltage base, as is the case for transformers, they are brought to the same base by current or auxiliary transformers in the relay circuits. In some cases, the relay itself is used. For generators, the vector sum of the currents flowing out of each phase winding is sensed by the relay. For transformers, the vector sum of the currents in each winding on the core, after correction to a common voltage base in the instrument transformer circuits, is sensed by the relay. The vector sum of all the currents in each phase of each line connecting to a bus is sensed by the relay. Under normal conditions and for faults outside the equipment being protected, the vector sums should always be zero. When a fault occurs in the equipment, a current should flow which is not included in the vector sum. In transformers, a fault between turns of a winding will change the effective transformer ratio, making the sum no longer zero, and a current flows through the relay. The relays are given the most sensitive setting which will not cause operation for faults outside the apparatus. The main causes of faulty operation are saturation in current transformers from high values of current, and transformer energizing transients, which cause currents to flow momentarily in the winding which has just been energized.
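The vector-sum test at the heart of differential protection is easy to express with complex phasors. The Python sketch below is a minimal single-phase illustration with invented currents and an invented pickup threshold; practical relays add percentage restraint and harmonic blocking to deal with the current transformer saturation and energizing transients mentioned above.

    import cmath, math

    # Differential check on one phase: the phasor sum of all currents
    # flowing out of the protected zone should be near zero in normal
    # operation. Currents and pickup threshold are invented examples.

    def phasor(mag, angle_deg):
        return cmath.rect(mag, math.radians(angle_deg))

    PICKUP_A = 50.0   # operate threshold in amperes (illustrative)

    # Healthy condition: current in equals current out.
    healthy = [phasor(400, 0), phasor(400, 180)]

    # Internal fault: part of the current disappears into the fault.
    faulted = [phasor(400, 0), phasor(250, 180)]

    for label, currents in (("healthy", healthy), ("faulted", faulted)):
        operate = abs(sum(currents))
        action = "TRIP" if operate > PICKUP_A else "restrain"
        print(f"{label}: |sum| = {operate:6.1f} A -> {action}")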
When differential protection is used for polyphase transformer banks with delta connected windings, all the corresponding currents in the windings must be considered. The current transformers are inserted in the leads of the individual transformers before the delta is closed. Pilot protection is a similar type of protection used in transmission lines where the ends of the circuit are some distance apart. Auxiliary circuits using pilot wires are employed to obtain the vector sum. Pilot protection involves the simultaneous opening of circuit breakers at the terminals of transmission lines using the pilot wires as a communication link between the circuit breakers. These physically separate pilot wires are connected in various ways. In the circulating current pilot wire technique, current transformers are used for sensing the load currents and fault currents which circulate over the pilot wires. This technique is also used with current balancing relays. Another scheme uses pilot wires with percentage differential relays. The directional comparison pilot wire scheme uses direct current over a pair of wires with polyphase directional relays. The link may also use the transmission line conductors themselves, with a very high frequency current superimposed on the line. The carrier current signal operates the relays to keep the circuit breakers closed. A fault on the line interrupts the signal and opens the breakers, causing the line to fail safe. Pilot systems also use radio signals with microwave channels operating from relay stations located in a line-of-sight along the route of the transmission line. The relays operate to de-energize the line in the same fail safe manner. Balanced current protection involves the difference of corresponding currents in two similar parallel lines. As long as the lines are alike, the relay current is zero. A fault on one of the lines will cause a difference between the two lines and unbalance the currents, unless the fault is near the far end. In this case, relays at the distant end will operate to open the breaker in the faulted line. Then, a difference in current occurs at the near end, since the lines are no longer tied together at the far end.
POWER QUALITY MONITORING

In the past, most electrical equipment represented a linear load. This was usually a resistive or inductive load such as lights or motors.
This type of equipment can tolerate disturbances on the electrical transmission and distribution systems. As electronic controls and other circuitry were added to these linear loads, a consistent sinusoidal voltage became more important. These electronic loads can cause the sinusoidal voltage to be deformed. Some early electronic equipment was not tolerant of events on the power system, and disturbances would often cause the electronics to fail. Electronic equipment has improved significantly and most now has some type of built-in protection from transients. However, this protection does not always protect the internal devices.

Power quality monitors are available with a variety of features and methods of capturing data. Low-end power quality monitors usually capture voltage and current and detect voltage sags. Mid-range monitors capture power factor, energy and waveforms along with the features of low-end monitors. High-end units can also capture high-speed transients and harmonics. Some equipment has a range of these features.
RMS VOLTAGE

The root mean square (RMS) voltage is a basic measurement provided by a power quality meter, but not all meters capture RMS voltage the same way. They use a variety of methods of calculating the voltage and storing the data. The important factors are the sampling rate and the type of data that is captured.

Most low-cost hand-held meters do not actually calculate the RMS voltage. They measure the peak voltage and divide it by the square root of two, which assumes that the voltage is sinusoidal. Other meters, known as true RMS meters, calculate the RMS value by sampling the waveform multiple times per cycle, squaring the values, averaging the squares and then taking the square root. Most meters that are referred to as power quality meters actually measure the true RMS value, but low-grade meters use the peak detection method.

The sampling rate can refer to both the number of samples per cycle the meter uses to calculate the RMS voltage and how often the meter samples a cycle of data. The number of samples that a meter uses per cycle generally ranges from 16 to 512. Low-end meters sample the AC waveform 16 to 64 times per cycle, mid-range meters usually sample the AC waveform 128 times per cycle, and high-end meters use 256 or more samples per cycle. A higher sampling rate allows the RMS voltage to be calculated more accurately.

The frequency at which a meter samples a cycle of data is also important. Most power quality meters calculate the RMS voltage every cycle, but some meters only perform the calculation once a second, once a minute, or even once every 15 minutes. The meter should calculate the RMS voltage for every cycle, because variations that occur between cycles will affect the RMS data that the meter captures.

The type of data that a meter captures can be the average, minimum, or maximum voltage. The meter needs to capture the average voltage, but the minimum and maximum voltages are also important; they indicate how much the voltage varies from the average during an interval. The long-term minimum, maximum and average RMS voltages indicate the long-term voltage regulation. Monitoring the average voltage over a period of time can indicate whether the utility's voltage regulation equipment is operating properly. When the monitor calculates the RMS voltage every cycle, the minimum and maximum voltages will also indicate if a sag or swell occurred in a certain time period.

At one facility, a chiller would trip off at night with an overvoltage error code, but spot checks showed that the voltage was within tolerance. When a power quality meter was used, the recorded RMS voltage showed that the voltage would reach 512V on a 480V service during the night. While this was within the utility's allowable tolerance of 480V +/-10%, it was over the 460V chiller's 10% tolerance. The utility investigated and found that a setting on a voltage regulator was incorrect.
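The difference between the two calculation methods can be made concrete with a short sketch. The waveforms and the 128-samples-per-cycle figure below are illustrative; this is not any vendor's meter logic.

```python
import numpy as np

# Contrast of the two RMS methods described above. For a pure sine wave both
# agree; for a distorted (flat-topped) wave the peak/sqrt(2) shortcut is wrong.

t = np.linspace(0, 1, 128, endpoint=False)      # one cycle, 128 samples
sine = 170 * np.sin(2 * np.pi * t)              # ~120 V RMS sinusoid
flat = np.clip(sine, -150, 150)                 # distorted, flat-topped wave

def true_rms(v):
    return np.sqrt(np.mean(v ** 2))             # sample, square, average, root

def peak_method(v):
    return np.max(np.abs(v)) / np.sqrt(2)       # assumes the wave is sinusoidal

print(true_rms(sine), peak_method(sine))   # both ~120 V
print(true_rms(flat), peak_method(flat))   # true RMS ~115 V, peak method ~106 V
```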
RMS CURRENT

Most power quality meters are able to monitor current in addition to voltage. The basic measurement of AC current is the RMS current; as with voltage, the important factors are the sampling rate and the type of data collected. Meters that capture both RMS voltage and current usually use either the same sampling rate for the current waveform or one half the voltage sampling rate.

The RMS current indicates the load on electrical equipment such as transformers and breakers. Recording the RMS current over periods of time can be used to determine the peak and average load of electrical equipment.
POWER FACTOR

Power quality monitors that measure both voltage and current also provide the power factor, which is a measure of the phase angle difference between the current and voltage. It indicates the ratio of real power to the total power being delivered. The power factor is calculated by dividing the real power (watts) by the apparent power (volt-amperes). The apparent power is found by multiplying the RMS voltage by the RMS current. The real power is calculated by multiplying each voltage and current sample together for a cycle, summing the products and dividing by the number of samples per cycle. This method of calculating the power factor takes the harmonics into account and is called the total power factor. Another method uses only the fundamental 60-Hz contribution to real power and is called the displacement power factor (both calculations are sketched below). Most power quality monitors calculate both the total and displacement power factors, but some only calculate one and call it the power factor.

Some utilities charge a penalty if a facility has a low power factor, or they may charge for reactive power, which does no real work. A low power factor uses up capacity in transformers, breakers and conductors, since they are delivering reactive power. Even if a utility does not charge for kVARh or a low power factor, it will charge for extra facilities if a larger transformer is needed to deliver the kVAR.

If the power factor is not monitored, the spare capacity of a facility's electrical system may not be known, which affects decisions about new loads. A plant may need to expand, but its electrical system may not have enough capacity to add the load from the expansion. Monitoring the power factor and current may show that there is not enough capacity because the power factor is only 65%. The plant would then need a larger transformer and switchgear installed, or it could improve its power factor using capacitors. The low-cost solution is usually to install the capacitors and free up the capacity in the existing electrical system.
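Here is a minimal sketch of the two power-factor calculations just described, applied to one cycle of sampled data. The waveform amplitudes, the 30-degree lag and the 5th-harmonic distortion are illustrative assumptions.

```python
import numpy as np

n = 128                                  # samples per cycle
t = np.arange(n) / n
v = 170 * np.sin(2 * np.pi * t)          # voltage, ~120 V RMS
# current: fundamental lagging 30 degrees plus some 5th-harmonic distortion
i = 10 * np.sin(2 * np.pi * t - np.pi / 6) + 2 * np.sin(2 * np.pi * 5 * t)

real_power = np.mean(v * i)              # average of sample-by-sample products
apparent = np.sqrt(np.mean(v**2)) * np.sqrt(np.mean(i**2))
total_pf = real_power / apparent         # includes harmonic content

# Displacement PF uses only the fundamental: compare the 60-Hz FFT bins.
V1 = np.fft.rfft(v)[1]
I1 = np.fft.rfft(i)[1]
displacement_pf = np.cos(np.angle(V1) - np.angle(I1))

print(f"total PF {total_pf:.3f}, displacement PF {displacement_pf:.3f}")
# prints ~0.849 and ~0.866: the harmonic current lowers the total PF below
# the displacement PF, which reflects only the 30-degree phase lag.
```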
POWER AND ENERGY

Power quality meters that monitor both voltage and current usually have the ability to measure energy. Some of these meters are even certified as revenue accurate, meaning the meter can accurately measure billing parameters such as kWh, kVARh and demand. This information is useful for the daily operations of a facility and has the potential to provide substantial savings.

Electric utility charges may be based on peak demand, kWh and power factor. Peak demand is the maximum average power drawn over a 15- or 30-minute interval; the interval can be either a sliding window or a fixed window (a minimal demand calculation is sketched below). Knowing when the peak demand occurs can point to ways of lowering it. Peak demand can be lowered by staggering the starts of large loads such as chillers so that they fall outside the demand window. Detailed kWh information allows the facility to take advantage of off-peak hours, when the price per kWh is lower.
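The following is a sketch of a fixed-window peak-demand calculation under the 15-minute interval mentioned above. The minute-by-minute energy profile is an illustrative assumption.

```python
# Fixed-window peak demand: highest average kW over any 15-minute interval.

def peak_demand(kwh_per_minute, window=15):
    peaks = []
    for start in range(0, len(kwh_per_minute), window):
        interval = kwh_per_minute[start:start + window]
        peaks.append(sum(interval) * 60 / len(interval))  # kWh -> average kW
    return max(peaks)

# One hour of minute-by-minute energy; a chiller start inflates one interval.
profile = [5] * 30 + [9] * 15 + [5] * 15
print(peak_demand(profile))   # 540 kW, set by the interval with the chiller on
```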
VOLTAGE SAGS

Voltage sag detection is a basic feature of a power quality monitor. A power quality meter can monitor the cycle-by-cycle RMS voltage for periods when the voltage is below 90% of nominal. The monitor can record the lowest RMS voltage, the duration of time that the voltage is below 90% and a timestamp for the event. This type of measurement is called magnitude and duration, or Mag-Dur. The method provides an exact timestamp and duration for each event. The data captured with voltage sag detection can be used to analyze equipment misoperation caused by voltage sags. Mitigation can then be applied to the most sensitive critical loads first.
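A minimal sketch of the Mag-Dur detection just described follows; the 120 V nominal, the 90% threshold from the text, and the RMS trace are illustrative.

```python
# Mag-Dur sag detection: flag runs of cycles where the per-cycle RMS voltage
# falls below 90% of nominal, recording the depth and duration of each run.

NOMINAL = 120.0
THRESHOLD = 0.90 * NOMINAL

def detect_sags(rms_per_cycle, cycle_time=1/60):
    sags, in_sag, start, low = [], False, 0, None
    for n, v in enumerate(rms_per_cycle):
        if v < THRESHOLD and not in_sag:
            in_sag, start, low = True, n, v     # sag begins
        elif v < THRESHOLD:
            low = min(low, v)                   # track the lowest RMS value
        elif in_sag:
            sags.append({"start_cycle": start,
                         "duration_s": (n - start) * cycle_time,
                         "lowest_rms": low})
            in_sag = False                      # sag ends; record the event
    return sags

# Normal voltage, then a 12-cycle sag to 96 V (a 0.2-second event).
trace = [120.0] * 30 + [96.0] * 12 + [120.0] * 30
print(detect_sags(trace))
```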
WAVEFORM CAPTURE

The more expensive power quality meters are able to capture the waveform when a voltage sag is detected. This is a definite advantage over a meter that only captures magnitude and duration. Voltage sags with the same RMS voltage can affect equipment in different ways if the waveforms are different.
The waveform can also be used to analyze the effect of capacitance on loads that have enough stored energy to ride through the voltage sag. If the same event happens at a different time, equipment can be affected differently depending on loading conditions. Analysis of the waveform can be used to determine why certain equipment tripped off while other equipment did not.

Waveform capture is also useful in determining whether a voltage sag originated upstream or downstream of the meter, by comparing the current and voltage profiles. If the current increases at the same time as the voltage decreases, the sag is caused by an event downstream of the meter. When the current increases after the sag, or decreases during it, then the event was caused by something upstream of the meter. This can be used to determine where a problem occurred.
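The upstream/downstream rule above amounts to comparing the current before and during the sag; a trivial sketch makes the decision explicit. The current values are illustrative.

```python
# Classify a sag relative to the meter from the current behavior described
# above: rising current during the sag implies a downstream fault fed
# through the meter; falling current implies an upstream event.

def sag_direction(i_before, i_during):
    if i_during > i_before:
        return "downstream"   # fault current drawn through the meter
    return "upstream"         # load current fell along with the supply voltage

print(sag_direction(i_before=400, i_during=2500))  # downstream
print(sag_direction(i_before=400, i_during=310))   # upstream
```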
TRANSIENT CAPTURE

High-end power quality meters have the ability to capture transient events that may not affect the RMS voltage enough to trigger a voltage sag or swell. This type of event is captured as a snapshot of the deviation of the waveform from the expected or normal waveform.

Transients may be caused by equipment switching or lightning. Switched capacitor banks cause transients when they are switched on: they draw a high initial current, which reduces the voltage until the capacitor is fully charged. The changing current acting on the system inductance can then cause an overvoltage. This event is not captured by the voltage sag detector because the RMS voltage does not change enough. Other electrical equipment can also cause transients due to switch or relay contact noise.

At one facility, several motor drives were tripping off and the monitor showed an overvoltage error code. A check of the voltages indicated that the voltage levels were normal. Monitoring the incoming voltage showed that a capacitor switching transient was causing an overvoltage on the DC bus of the motor drives. A capacitor clamp was installed along with line reactors on the drives. The capacitor clamp and reactors reduced the transient and the drives operated properly.
HARMONICS

High-end power quality meters are also able to monitor harmonic frequencies. Most meters can monitor the total harmonic distortion (THD) and some can monitor individual harmonic frequencies. A meter usually monitors individual harmonics from the 2nd up to one less than half the number of samples per cycle, so a monitor taking 128 samples per cycle would cover harmonics 2 through 63. This is done by taking the Fourier transform of the voltage waveform to determine the magnitude of the individual harmonic components (see the sketch at the end of this section). The harmonics with the highest magnitudes are the 3rd, 5th and 7th. The 3rd harmonic is caused mainly by single-phase electronic switching loads, while the 5th and 7th harmonics are more likely to be caused by three-phase electronic loads such as motor drives.

Harmonics are important because electrical equipment such as transformers, breakers and conductors should be derated to allow for the heat caused by harmonic currents. Electrical equipment is designed and sized for 50- or 60-Hz AC current. As the harmonic currents flow through the resistance in the equipment, they cause heat. In transformers and motors, the higher frequencies cause higher core losses from eddy currents. Circuit breakers may trip early from the generated harmonics. One facility was experiencing tripping of a new 800-A breaker. After monitoring the harmonics at half load, it was found that the peak current was causing this peak-sensing breaker to trip. The harmonics added to the peak of the current while the RMS current was not high enough to trip the old thermal breaker. At half load, the peak current measured 720A, which the peak-sensing breaker interpreted as an RMS current of 509A, while the true RMS current was only 321A. At full load, the peak current was enough to cause the 800-A breaker to trip.

Solving problems with transients and harmonics requires power quality equipment that can record the types of events seen by the facility and affecting its equipment. Event notification is available in some power quality meters. This feature sends text messages to any device with an e-mail address. The events range from peak demands to voltage sags. Notification of peak demands can result in significant cost savings. A voltage sag can indicate that a brownout has occurred. A full power backup with some type of uninterruptible power supply to ride through voltage sags is expensive.
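The Fourier-transform approach described above can be sketched in a few lines. With 128 samples per cycle the transform resolves harmonics 2 through 63, and THD follows from the harmonic magnitudes; the waveform content below (3rd, 5th, 7th) is an illustrative assumption.

```python
import numpy as np

n = 128
t = np.arange(n) / n
# 120 V RMS fundamental plus typical 3rd/5th/7th content from electronic loads
v = (170 * np.sin(2*np.pi*t) + 12 * np.sin(2*np.pi*3*t)
     + 8 * np.sin(2*np.pi*5*t) + 5 * np.sin(2*np.pi*7*t))

spectrum = np.abs(np.fft.rfft(v)) * 2 / n    # per-harmonic peak magnitudes
fundamental = spectrum[1]
harmonics = spectrum[2:64]                   # harmonics 2 through 63

thd = np.sqrt(np.sum(harmonics**2)) / fundamental
print(f"THD = {100*thd:.1f}%")               # sqrt(12^2+8^2+5^2)/170, ~9.0%
```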
A power quality issue may be misdiagnosed based on incomplete monitoring. This can lead to incorrect solutions being applied or to a solution being incorrectly sized. The voltage sag's depth and duration need to be known, along with the equipment response.

POWER QUALITY ENVIRONMENT

In the past, sensitive data processing equipment was generally located in a controlled environment such as a data center. Such a facility was usually designed with special attention to the electrical and environmental support systems. Publications such as Federal Information Processing Standards Publication 94 were used to guide the design and installation of the data center's electrical system. Equipment used in the general office environment or on a factory floor generally did not contain sensitive electronics. The advent of microprocessors, personal computers and networked computer systems changed all that, and electronic equipment may now be installed almost anywhere. Personal computers and microprocessor-based equipment are used in offices, warehouses, factory floors and other locations. These facilities can have equipment problems due to poor wiring and other power quality related issues.

Uninterruptible power systems (UPS) are important tools in protecting computer systems from power failures. UPS equipment uses a battery system to provide energy during a power failure, allowing enough time to save data and properly shut down the system. Many users also have backup generators in case the power failures are extended. One benefit of using a UPS is that sensitive equipment gets conditioned power. A UPS takes AC power, converts it to DC for the batteries and then converts it back to AC. The circuitry has enough feedback to keep the power output of the UPS isolated from most input fluctuations. The UPS output voltage is solid and does not change with the input voltage. Most electrical noise at the UPS input is filtered out.
ELECTRICAL NOISE

Electrical noise refers to any undesirable electrical signal that can affect an electrical or electronic circuit. Noise can cause server reboots, data corruption and lock-ups, and can slow down network operations. Electrical noise may be classified as common mode noise, transverse mode noise or interference. Common mode noise refers to the electrical signals between a circuit conductor and the grounding conductor. In a balanced three-phase system, common mode noise should be equal in magnitude and phase on all conductors with respect to ground. The transverse mode is also called normal mode or differential mode; it refers to the signals that exist between a pair of circuit conductors.

Electrical noise can affect the data stream in a data communication signal traveling between two computers, or between a computer and a printer. The data at the receiving device can differ from the data the transmitting computer sent. Computers have the capability of detecting and correcting these data errors, but this uses some of the data transmission capability of the link and causes delays. If the errors occur too frequently, the error correction system may become swamped and inoperative.

Noise can arrive as radio frequency interference or electromagnetic interference. Radio frequency interference is typically higher in frequency and is capacitively coupled into the electrical system; the electrical wiring acts as an antenna for it. Sources of radio frequency interference include radio and television transmission. Electromagnetic interference is inductively coupled into the electrical system through the conductors in transformers, motors and other wiring.

Computer manufacturers expect neutral-to-ground voltages to stay below a specific level, in some cases 500 millivolts. Erratic operation can occur if the neutral-to-ground voltage exceeds this level. The neutral-to-ground voltage can be kept low by using dedicated circuits for critical equipment, isolating high current circuits, keeping circuit runs short and using larger conductors.

Wiring in a cable tray allows magnetic fields from adjacent current-carrying conductors to be introduced into connected circuits, resulting in noise in these circuits. Spacing adjacent circuits further apart minimizes the effect of each circuit's magnetic field on the other. The ground wire for each circuit should be located equidistantly from each circuit conductor. Then each conductor's magnetic field influences the ground wire equally, the magnetic fields on the ground wire cancel, and the result is a smaller voltage or current in the ground. If conduit is used, it acts as an electromagnetic shield and limits electrical noise from magnetic fields. The closer proximity of conductors in the conduit also minimizes the effects of magnetic fields on the conductors. Rigid metallic conduit can be used as a ground conductor in certain applications instead of an actual ground cable. The National Electrical Code should be followed to make sure the electrical installation is safe. Power quality measures are addressed in the Institute of Electrical and Electronics Engineers publication IEEE Standard 1100-1999, Recommended Practice for Powering and Grounding Electronic Equipment.
DISTRIBUTION PANELS

Power quality problems can also originate at the distribution panels. Using a single conduit for the distribution panel service and a single conduit for the load or branch circuits does not aid power quality. Running all the branch circuits in a single conduit allows each conductor to produce a magnetic field that gets induced into the other conductors; noise proliferates and there is no shielding. A neutral-ground bond should only be made in the distribution panel if it is not made anywhere else upstream. It is good practice to separate non-critical loads from critical loads.

Modular cubicles can be a source of power quality problems. Their electric wiring scheme is often not designed for sensitive equipment. The wiring may be undersized, and the connections may be degraded by temporary loads such as space heaters, fans and other loads that can overload the electrical system or cause noise problems.
POWER STRIPS

Power strips provide a convenient way to quickly increase the number of power outlets; one wall outlet can be expanded into six. High quality power strips can be effective against noise and surge problems, but many power strips are not well made. They can have undersized wiring or poor connections. In some units, wires have been connected to the wrong prong of the outlet (hot and neutral reversed). Many devices have spike or surge suppressors that may fail and create a short circuit between neutral and ground. Plugging one power strip into another for more outlets also causes potential problems. One facility that had five or six power strips daisy-chained on a single 15- or 20-amp circuit drew enough AC current to burn the first power strip in the chain.
NEUTRAL-GROUND VOLTAGES

Neutral-ground voltages can be kept low by keeping circuit runs short and loads low. The neutral-ground voltage can be measured with a digital voltmeter and a special adapter. The voltage measured is the circuit's neutral current times the neutral impedance, plus any ground current times the ground wire impedance. Besides being measured with a voltmeter, the neutral-ground voltage can be monitored with an oscilloscope, which will indicate whether high frequency noise is present. Some equipment manufacturers recommend that their equipment not be used when a certain neutral-ground voltage is exceeded. Isolation transformers have a neutral-ground bond, providing a zero-voltage neutral-ground reading. If the neutral-ground voltage measured at the distribution panel is very high, there could be a problem such as a missing neutral-ground bond at a service transformer.
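The relation just stated can be worked as a quick example. The impedance and current values below are illustrative assumptions, chosen to show how a long branch circuit can exceed the 500-millivolt figure mentioned earlier.

```python
# Neutral-to-ground voltage = I(neutral) * Z(neutral) + I(ground) * Z(ground),
# per the relation described above. All values are illustrative.

def ng_voltage(i_neutral, z_neutral, i_ground=0.0, z_ground=0.0):
    return i_neutral * z_neutral + i_ground * z_ground

# A 12 A load on a long branch circuit with 0.08 ohm of neutral impedance
# already exceeds a 500 mV equipment limit:
print(ng_voltage(12, 0.08))   # 0.96 V
# Halving the run (or upsizing the conductor) brings it back within limits:
print(ng_voltage(12, 0.04))   # 0.48 V
```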
GROUND CURRENTS

Ground current measurements at various points in an electrical system can be used to indicate the state of the system. There should be a small amount of ground current, since almost all electrical equipment has some leakage to ground. The measured ground current should be only 1 to 2% of the phase reading. A zero reading can indicate an open ground, which is a safety hazard if a wiring or equipment fault occurs. If large ground currents are measured, they need to be investigated.

Isolating the source of ground currents may be difficult. Check all ground wires in the system. An electrical one-line diagram and a physical drawing of the electrical system wiring will be helpful. Make measurements for each link on the drawings. Measure all bonding conductors, connections to the grounding system and even conduits.
Ground currents will return to the source through the lowest impedance path. These paths include ground wires, conduits, equipment chassis, grounding electrodes and building steel. The ground current can be measured with a clamp-on ammeter without removing wires. Clamp the meter around the phase conductors and neutral together. Ideally, the currents in the phase conductors and neutral should sum to zero; if the meter does not read zero, ground current is flowing. Another method of finding ground currents is to place the ammeter around the ground conductor in the distribution panel while turning off circuit breakers one at a time. If the current reading drops when a circuit breaker is turned off, that circuit is at fault.
POWER CONDITIONING

Power conditioning equipment includes uninterruptible power supplies (UPS), static transfer switches, shielded isolation transformers, power distribution units, magnetic synthesizers, transient voltage surge suppressors and filters for noise and harmonics. These must all be properly installed, grounded and wired. A power conditioner will not cure a poorly constructed electrical system. An electrical system contains a chain of transformers, generators, switches, circuit breakers, distribution panels, power conditioners, cables and other equipment, and the chain is only as strong as its weakest link.

UPSs, static transfer switches, generators and automatic transfer switches have become more electronic. Many devices are microprocessor controlled, with much sensitive circuitry. Like most electronic devices, they need proper wiring and grounding. Some equipment may use internal static switches that switch neutrals, so special attention may be needed for grounding requirements. Signal cables may be sensitive to noise.
GROUNDING SCHEMES

One facility decided it was best to totally isolate the data center grounding system from the rest of the building. A ground ring was placed outside, and all ground feeds coming into the data center were disconnected. The data center floor, power conditioning equipment and computer equipment were tied to the isolated ground ring. This is a violation of the National Electrical Code, and a fault in a piece of equipment could cause dangerous voltages to develop on the equipment chassis. Later, a lightning strike hit the ground close to the ground ring. Since the ground ring was connected to the data center and not to earth ground, the energy flowed into the data center and destroyed most of the data center's equipment.

In another instance, a high neutral-ground voltage was measured at a power receptacle. The equipment manufacturer recommended less than half a volt. Another neutral-to-ground bond was added in the upstream distribution panel. This made two neutral-to-ground paths in the same part of the electrical system, forming a ground loop and violating the National Electrical Code. Neutral currents were allowed to flow through both neutral and ground conductors. The excess ground current created enough noise voltage to cause problems in the entire computer network.
POWER AUDITS

Interruptions of service, network problems and other unexplained hardware failures may indicate that there is a power quality problem. A facility power audit is one way to determine the state of an electrical system, and it can be a valuable aid in deciding where to begin the troubleshooting process.

Power quality audits begin with a review of the facility's electrical one-line diagram. An inspection of the facility's main electrical service follows; it should be properly bonded and grounded. An inspection of the grounding electrode system should confirm that it has not been damaged or altered. Measure the current through the main service neutral-ground bonding conductor and the current flowing into the ground electrode system; these are important indicators of grounding problems. Ground impedance testing can also be conducted. A good ground should have an impedance of less than 0.1 ohm.

Start at the main electrical service entrance and inspect the switchgear, wiring routes, cables and distribution panels from source to load. Take sample readings throughout the facility, including voltage, current and harmonics. A power quality analyzer can be used if a problem is suspected.
Review the application and operation of any power conditioning equipment. Inspect the connections to load equipment; there may be better ways of connecting the loads. A power problem may have more than one solution.

At one facility, a computer network system was installed in an old warehouse. The branch circuits had no ground wire because the branch circuit conduit was being used as the grounding conductor. The computer network was experiencing problems. The electrical system met building code requirements, but it was thought to be inadequate for the network application. One recommendation was to rebuild the electrical system. Another suggestion was to power each network workstation with a small UPS. This inexpensive solution was tried, but it did not resolve the problem. Finally, it was determined that the network cable runs were too long and that this, combined with the poor electrical environment, resulted in poor operation of the network. The solution was to replace the network wiring with fiber-optic cables.
Chapter 3
Standby Power Systems

An important measure of an organization's strength is its ability to respond successfully to emergencies. The risk posed by emergencies can be reduced by careful planning, using guidelines that are adequate yet flexible enough to adapt to sudden changes and varying demands. A sound electrical system is needed, with emergency and standby power supplies to keep critical processes and equipment going during the emergency. Ideally, there should be a seamless transition between normal and standby power. In practice, however, this may be difficult and too costly, so compromises must be accepted.
STANDBY POWER REQUIREMENTS

The first step in determining standby power requirements is to determine the duration of power interruption that can be tolerated. This sets the criticality of the electrical load. If a load cannot be interrupted for more than half a cycle (1/120 of a second for a 60-Hz system), it is called a critical load. If a load cannot be interrupted for more than 10 seconds, it is classed as an essential load. If a load can be interrupted for the duration of a normal power failure, it is called a non-essential load (this classification is sketched below). The type of load determines the standby power requirement. The source of energy storage determines the type of standby and emergency system.
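The classification reduces to two cutoff times taken straight from the text: half a cycle at 60 Hz, and 10 seconds. The example loads in the comments are illustrative.

```python
# Load-criticality classification by tolerable interruption duration.

def classify_load(max_interruption_s):
    if max_interruption_s <= 0.5 / 60:     # half a cycle on a 60-Hz system
        return "critical"
    if max_interruption_s <= 10:
        return "essential"
    return "non-essential"

print(classify_load(0.004))   # critical  (e.g., data-processing hardware)
print(classify_load(5))       # essential
print(classify_load(3600))    # non-essential
```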
STORAGE DEVICES

Types of energy storage available include thermal, mechanical and chemical. In thermal storage, wind-generated energy is converted into heat using electric-resistance heating. This heat is stored as hot water, a warm bed of rock or gravel, or molten heat-storage salt. Mechanical
energy storage involves lifting an object, such as water, so that gravity can later return it. Energy storage can also use a flywheel, since a spinning disk stores energy. The more weight a wheel has, and the faster it spins, the more energy it can store. Advanced flywheels are not very heavy, but they spin at 30,000 rpm or more. The energy stored in a flywheel is directly proportional to its weight, but it increases with the square of the rpm; doubling the rpm quadruples the energy (see the sketch at the end of this section). A large amount of energy can be stored in a fast-spinning wheel. At such high speeds air friction is considerable, so some flywheels are installed inside a vacuum chamber. The bearings must be very precise, carefully designed and built.

Chemical energy storage can be done by using electrical power to split a compound, such as water, into its constituent parts, hydrogen and oxygen. These constituents are stored separately and later recombined in a fuel cell to produce electrical power as needed. Fuel cells are becoming more available and, while still expensive, offer a way of using the energy of chemical bonds for energy storage.

Another approach to chemical energy storage is the more traditional storage battery. The metal plates inside these batteries act as receptors for the metal atoms from the electrolyte (acid, in lead-acid batteries) as the batteries are charged. When the battery discharges, these metals return to solution in the electrolyte, releasing electrons and generating direct current at the battery poles. The various types of batteries are named for the type of plates or electrolyte used. The most common are the lead-acid and the nickel-cadmium batteries. Lead-acid batteries are used in cars, golf carts and other common applications. Nickel-cadmium, or nicad, batteries are used when higher cost is an acceptable penalty for lower weight and improved tolerance of overcharging; airlines generally use nicads. While you must be careful not to overcharge a lead-acid battery, or discharge it too quickly, nicad batteries can stand up to these abuses. But if a nicad is repeatedly discharged only half-way, eventually half-way will be as far as the discharge will go.

Batteries are effective in providing an emergency power source for fire alarms, emergency communication, exit signs, protective relays and emergency lighting. These loads generally have individual batteries with a battery charger connected to AC power. The National Electrical Code (NEC) states that the system should be capable of maintaining the load for 90 minutes without dropping below 87.5% of normal
voltage. These systems are simple, reliable and robust. Maintenance requirements are minimal, but since all batteries have a finite life, they must be tested and replaced periodically.
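The flywheel relation noted earlier in this section (energy proportional to weight and to the square of rpm) follows from the standard kinetic-energy formula, E = (1/2) I w^2. The disk dimensions and speeds below are illustrative assumptions.

```python
import math

def flywheel_energy_j(mass_kg, radius_m, rpm):
    inertia = 0.5 * mass_kg * radius_m**2     # solid disk: I = (1/2) m r^2
    omega = rpm * 2 * math.pi / 60            # rpm -> rad/s
    return 0.5 * inertia * omega**2           # E = (1/2) I w^2

e1 = flywheel_energy_j(100, 0.25, 15000)
e2 = flywheel_energy_j(100, 0.25, 30000)      # same wheel, double the speed
print(e2 / e1)                                # 4.0 -- quadruple the energy
print(e2 / 3.6e6, "kWh")                      # ~4.3 kWh from a fairly light wheel
```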
BATTERY RIDE-THROUGH

Lead-acid battery based ride-through systems are common. Storage batteries generate electricity by chemical action. Electric storage dates back to the Italian count Alessandro Volta, whose voltaic pile of 1800 used cells with round plates of copper and zinc as electrodes and cardboard soaked in salt water as the electrolyte. Current flowed, but this primary cell could not store power for any length of time. The lead-acid battery was invented over 100 years ago. It consists of a number of cells connected in series. A fully charged cell will measure close to 2.2 volts.

The cells are enclosed in individual compartments in a rubberoid or high-impact plastic case. The compartments are sealed from each other and, except for zero-maintenance types, are open to the atmosphere. The lower walls of the individual compartments extend below the plates to form a sediment trap. Filler plugs are located on the cover and can be combined with wells or other visual indicators to monitor the electrolyte level.

Each cell consists of a series of lead plates connected by internal straps. The plates are divided into positive and negative groups and separated by plastic or fiberglass sheeting. Some very large batteries, which are still built almost entirely by hand, continue to use fir or cedar separators. A few batteries intended for transport service have a woven fiberglass padding between the separators and positive plates. The padding helps support the lead filling and reduces damage caused by vibration and shock.

Both sets of plates are made of lead. The positive plates consist of a lead gridwork filled with lead oxide paste; the grid is stiffened with a trace of antimony. Negative plates are cast in sponge lead. The plates and separators are immersed in a solution of sulfuric acid and distilled water; the standard proportion is 32% acid by weight. The level of the electrolyte drops in use because of evaporation and hydrogen loss. Sealed batteries have vapor condensation traps molded into the roof of the cells; nonsealed batteries must be periodically replenished with distilled water. When a cell is fully charged, the negative
plates consist of pure sponge lead and the electrolyte consists of water and sulfuric acid. During discharge, both the sponge lead and the lead dioxide become lead sulfate. The percentage of water in the electrolyte increases as the SO4 radical splits off from the sulfuric acid to combine with the plates. During the charge cycle the reaction reverses: lead sulfate is transformed back into lead and acid. A small quantity of lead sulfate remains in crystalline form and resists breakdown. After many charge-discharge cycles, this residual sulfate reduces the battery's output capability; the battery is then said to be sulfated. Sulfation becomes more likely below a 70% charge. In addition to the possibility of damage to the plates, a low charge brings the freezing point of the electrolyte up to near 32°F.

Temperature changes have a major effect on the power density. Batteries perform best at room temperature; at 10°F a battery has only half of its rated power. Both the energy density (Whr/lb) and power density (W/lb) are low. The average life is about 360 complete charge-discharge cycles for non-deep-cycle batteries. It is not possible to discharge a lead-acid battery completely; even those that have been stored for years still retain some charge. During charging, about 3/4 of the input can be retrieved, giving an efficiency of 75%. Sodium-sulfur, lithium-halide, lithium-chlorine and zinc-air batteries can provide better performance, but are not as common due to high costs or complex charging procedures.

Most chemical batteries must be replaced every 6 to 7 years, and the replacement and disposal costs must be factored into the total energy cost. Battery systems are heavy relative to the amount of power they can deliver, and applications requiring significant power storage may weigh several tons.

Over 99% of all power disturbances are a few cycles to a few seconds long. Many times a UPS will be used for the few seconds it takes for a standby engine generator to start and become operational. Storage batteries are typically used to provide the necessary backup power to the load during this short time period. These battery systems need constant monitoring, maintenance and service. Batteries also require a large, environmentally controlled storage area and special disposal procedures, which increase the operational costs through regulatory compliance. Each discharge and charge cycle also decreases the useful cell life.
When lead-acid batteries are regularly deep cycled to 80% or more of full discharge, their life is usually limited to less than 600 charge/discharge cycles. When only shallow discharges are used, leaving 2/3 or more of the battery's full capacity, the number of charge/discharge cycles can exceed 10,000. For most UPS ride-through applications, the discharges are relatively infrequent (less than 100 per year) and of short duration, not requiring complete discharge. Chemical-battery-based ride-through systems typically use sealed, maintenance-free automotive-type batteries with thicker lead plates and higher electrolyte levels. Smaller systems can provide 250 kW for 10 seconds, while larger 10-MW, 10-second systems are available. A 1-MW system will use 240 lead-acid cells weighing 46,000 pounds; these are capable of delivering 1,800 amps for five minutes. Full output is achieved within two milliseconds of a utility power interruption.
BATTERY TYPES

Performance is limited by the lead-acid battery packs, which are generally the most affordable option. More exotic batteries such as nickel metal hydride (NiMH) packs have also appeared. The common 12-volt lead-acid battery has six cells, each containing positive and negative lead plates in an electrolyte solution of sulfuric acid and water. This proven technology is not expensive to manufacture and is relatively long-lasting. But the energy density of lead-acid batteries, the amount of energy they can deliver on a charge, is poor when compared to NiMH and other newer technologies.

The United States Advanced Battery Consortium (USABC) is a Department of Energy program launched in 1991. Since 1992, USABC has invested more than $90 million in nickel metal hydride batteries. These batteries are much cheaper to make than earlier nickel battery types, and have an energy density almost double that of lead-acid. NiMH batteries can accept three times as many charge cycles as lead-acid and work better in cold weather. NiMH batteries have proven effective in laptop computers, cellular phones and video cameras. NiMH batteries can power an electric vehicle for over 100 miles, but are still several times more expensive than lead-acid. NiMH batteries from Energy Conversion Devices were installed in GM's EV1 and S10 electric pickup truck, doubling the range of each. Chrysler has also used NiMH batteries, made by SAFT of France, in its Electric Powered Interurban Commuter (EPIC) vans, adding 30 miles to their range.

Other battery technologies include sodium-sulfur, which was used in early Ford EVs, and zinc-air. Zinc appeared in GM's failed Electrovette EV in the late 1970s. Zinc-air batteries have been promoted by a number of companies, including Israel's Electric Fuel, Ltd. Zinc is inexpensive, and these batteries have six times the energy density of lead-acid. A car with zinc-air batteries could deliver a 400-mile range, but the German postal service demonstrated that these batteries cannot be conventionally recharged.

Other battery types are more promising, including lithium-ion, which is used in a variety of consumer products. Lithium batteries offer high energy density, long cycle life and the ability to work at different temperatures. However, like the sodium-sulfur batteries in the Ford Ecostar, lithium-ion presents a fire hazard, since lithium itself is reactive. Plastic lithium batteries could prove to be very versatile. Bellcore has worked on a lithium battery that would be thin and bendable like a credit card for laptop computers and cell phones; each cell is only a millimeter thick. The plastic batteries are lightweight and have been tested for automotive applications. Canadian utility Hydro-Quebec has been working with 3M on a lithium-polymer unit, which could be the first dry electric vehicle battery. Like the Bellcore product, this dry battery uses a sheet of polymer plastic in place of a liquid electrolyte. A team at Johns Hopkins University is also working on this technology, another plastic battery that can be formed into thin, bendable sheets. These batteries contain no dangerous heavy metals and are easily recycled.
BATTERY CONSTRUCTION AND OPERATION

A battery is made up of one or more cells. Each cell contains alternating negative and positive plates, with insulating plate separators between them. All the negative plates are connected together, as are all the positive plates. Each plate has a grid-like frame carrying the plate's active material. The grid provides the physical structure for the plate; the active material is the substance that produces the electron flow. When a lead-acid battery is fully charged, the active material in the
negative plates is mostly sponge lead; in the positive plates, it is lead dioxide. The plates are in contact with a solution of sulfuric acid, which acts as the electrolyte. As the battery discharges, the acid from the electrolyte combines with the active material in the battery plates, forming lead sulphate and water; the water dilutes the acid solution. As the battery is charged, water is removed from the acid solution, increasing the strength of the electrolyte. A portion of the plate material is used to form the lead sulphate. The chemical reaction is as follows:

PbO2 + Pb + 2H2SO4 ⇌ 2PbSO4 + 2H2O

A charged lead-acid cell will produce a voltage of close to 2.2 volts. This is the voltage that results from lead dioxide and lead in sulfuric acid; cells made of other metals and electrolytes produce different voltages. The battery capacity depends on the size of the cell. The more lead dioxide and lead paste available, the greater the storage capacity. A larger, thicker plate produces the same voltage as a smaller, thinner plate, but it provides more electrons, resulting in a greater current flow. The plates are made porous so that the acid can diffuse through them.
CHARGING

When a battery is charged, the charging current first reconverts the lead sulphates most accessible to the electrolyte back into active material. This takes place relatively quickly. Then the rate of charge is limited as acid filters out of the active material and water filters in. During charging, and especially in its final stage, some of the water in the electrolyte breaks down into its component parts of hydrogen and oxygen. These bubble out of the electrolyte, lowering its level. This water must be replaced by filling the battery cells with distilled water through the vent caps. In many newer batteries this water loss is prevented by a catalyst that forces the hydrogen and oxygen to recombine into water; these newer batteries are sealed. Some batteries use a catalyst with vent caps.

The battery plates in lead wet cells have small quantities of antimony added to the lead. This strengthens the grid and helps to lock the active material in the grids to reduce shedding. But antimony also has the effect of promoting small internal galvanic currents in the battery.
This slowly discharges the battery, a process known as self-discharge. Antimony also increases gassing during charging. Some batteries are allowed to build up a certain amount of internal pressure; under this pressure, small amounts of hydrogen and oxygen produced during charging recombine into water. These batteries are sometimes called recombinant. Excessive charging will cause excessive amounts of hydrogen and oxygen to be produced, so recombinant batteries use pressure relief valves to vent excess gases.

GEL-CELLS

In a gel-cell, the electrolyte is in the form of a gel with the consistency of soft wax. During manufacture this gel is pasted onto the battery plates and separators. The active material in the battery reacts with the gel, but there is not the same fluid movement in the electrolyte as in a wet cell. The battery plates in a gel-cell are relatively thin to allow diffusion around them. The gel cannot be replaced during service, so gelled batteries are built as sealed, no-maintenance units.

It is important to prevent gassing during charging, since this causes the electrolyte to dry out and the battery to fail. Several methods are used to prevent gassing. The charge voltage is controlled to prevent overcharging. The antimony used to reinforce conventional battery plates is replaced with calcium; the resulting plate grid is not as strong, but it is not as prone to gassing or self-discharge. Gel batteries are sometimes called SVR (sealed valve-regulated) batteries.

Some maintenance-free batteries are wet batteries with excess electrolyte contained in partially sealed cases; the excess electrolyte is slowly used up. A true no-maintenance battery is an SVR or recombinant type. A starved or absorbed electrolyte battery is an SVR with even less electrolyte. These are sometimes called AGM (absorbed glass mat) batteries.
BATTERY FAILURE

One of the main causes of battery failure is shedding of the active plate material. When a battery is discharged and charged, the chemical
process in the plates, reconverting sponge lead and lead dioxide to lead sulphate, tends to weaken the bond between the active material and the plate grids. Each time a battery is discharged, some active material is loosened and shed from the grid. This is a normal process of aging and will eventually result in failure of the battery: either the shed material builds up in the base of the battery until it reaches the level of the plates and causes a short circuit, or the loss of active material leaves the plates with too little capacity. Thin-plate, low-density batteries are more susceptible to this damage than thick-plate, high-density units. Shedding is accelerated by deep discharges, by high rates of discharge and charge, and by gassing during an overcharge, which washes material out of the plates. Shedding can occur in gel-cells, but the material tends to be trapped in the gel rather than falling to the base of the battery.

All wet-cell batteries tend to gas as they approach full charge. Explosive gases are given off and corrosive vapors are vented from the filler caps. The battery uses water, requiring regular refills. Gel-cells minimize these problems, but they have drawbacks of their own. Newer batteries are built with envelope plate separators, sealed on the sides and bottom, so any shed material remains within the envelope. This reduces plate shorting, but not the shedding.
SULPHATION

The lead sulphate formed in the battery plates during discharge is initially soft, but if the battery is left in a discharged state, the sulphate hardens into crystals that are more difficult to reconvert into active material. This reduces battery capacity and is known as sulphation. Sulphation can occur from leaving a battery discharged or from chronic undercharging, which means a percentage of the battery's active material is always left uncharged. Failing to charge the inner areas of thick plates, or plates with dense active material, will result in sulphation. Idle batteries will slowly self-discharge and sulphate over time; gel-cells have a slower rate of sulphation.

Short-term high loads pull the charge off the battery plate surfaces. Long-term lower loads drain a battery steadily over a number of hours, allowing it time to stabilize internally, which drains the charge from less
accessible plate areas as well as the surface areas. When it is time to recharge, these less accessible inner plate areas must also be recharged, which requires enough time for the electrolyte to diffuse. If charging times are limited, the battery will not be fully charged. Some of the lead sulphates formed in the inner plate areas when the battery was discharged will remain. These can slowly crystallize, and charging will not reconvert them to active plate material; the battery then becomes sulphated. Increasing the plate thickness and density increases the odds of damage from sulphation.

A problem with wet-cell batteries is the antimony used to reinforce the plate grids. It causes small discharge currents that slowly deplete an unused battery. If the battery is not regularly recharged, the lead sulphates forming in the plates will harden, causing a loss of capacity. To minimize damage from sulphation, a wet-cell, deep-cycle battery should be periodically (at least monthly) returned to a full charge. If it has been heavily discharged during the month, a controlled overcharge called equalization or conditioning should be used to soften up hardened sulphates. This is done by charging the battery at 3% to 5% of its rated amp-hour capacity (3 to 5 amps for a 100-Ah battery). The battery voltage should be between 15.0 and 16.2 volts for a 12-volt battery. During this charge the battery must be isolated from all loads. Equalization generally requires several hours.
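The equalization figures just given (3% to 5% of the rated amp-hour capacity, at 15.0 to 16.2 volts for a 12-volt battery) can be turned into a quick calculation; the capacities below are illustrative.

```python
# Equalization charge current: 3-5% of the battery's rated Ah capacity,
# per the guidance above.

def equalization_current(capacity_ah, fraction):
    return capacity_ah * fraction

for cap in (100, 220):
    lo, hi = equalization_current(cap, 0.03), equalization_current(cap, 0.05)
    print(f"{cap}-Ah battery: equalize at {lo:.0f}-{hi:.0f} A, 15.0-16.2 V")
```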
OVERCHARGING

Overcharging can be damaging, since it leads to gassing. This results in water loss, and if the lost water is not replaced, the plates will dry out. Gel-cells have less electrolyte than wet-cells, and there is no way to replace the lost electrolyte. During overcharges, galvanic activity in the battery attacks the positive plate grids, causing them to deteriorate. The grids in thin-plate batteries fail sooner than those in thick-plate batteries. A battery has an internal resistance that rises as the battery approaches full charge; the more current driven through it, the warmer it gets. This can cause the battery to heat up internally until the plates distort, shorting out adjoining plates.

Automotive batteries are designed for engine starting. These bat-
teries have many thin plates with low-density material. This maximizes the plate surface area and minimizes the diffusion time of acid through the plates. These thin plates and the low-density active material cannot handle repeated deep discharges and recharges (cycling); in each cycle some of the active material falls out of the plate grids. This is not a big concern in automotive cranking service, since the batteries are normally discharged only a few percent and little shedding occurs. But if repeated deep discharges occur, automotive batteries can quickly fail. A poorly constructed automotive battery can fail in as few as a dozen complete discharge/recharge cycles; better-quality ones may survive 30 to 40 deep cycles.

Deep-cycle batteries have much thicker plates, stronger grids, denser active material and heavier plate separators. There is still some shedding of active material from the plates with every discharge cycle, but not as much as from thin-plate batteries. Some high-quality automotive batteries are built like deep-cycle batteries.
LIFE CYCLES

The life cycles of a battery are the number of times it can be pulled down to a certain level of discharge, and then recharged, before it fails. Some manufacturers estimate life cycles using a 50% discharge and recharge cycle; others use an 80% or even 100% discharge/recharge cycle. The greater the depth of discharge in each cycle, the fewer life cycles the battery has and the shorter its life expectancy. If one battery has the same number of life cycles at an 80% discharge as another has at a 50% discharge, the former is preferred; it will have more life cycles at a 50% discharge than the latter. Another factor is the manufacturer's definition of failure. Failure may be defined as the point where shedding of active plate material reduces the battery's capacity to 80% of its original value, or to as little as 60% of original capacity, which will increase the claimed number of life cycles.

If a battery is to be used for loads similar to engine cranking (motor loads) and immediately recharged, this directly parallels automotive use, and an automotive cranking-type battery is adequate. For other applications, the batteries will usually be cycled at some
time, especially batteries used for emergency power as opposed to engine-cranking service. These should be good-quality, deep-cycle batteries. Prevailer gel-cells are known to stand up to heavy cycling in marine diesel use. Prevailers are manufactured under license in the USA and their cost is comparable with top-quality wet-cells. Their success has spawned clones such as Dynasty and Lifeline batteries. Premium deep-cycle batteries include the Surrette (Tilton, NH) and Rolls (Salem, MA) wet-cells and the Prevailer gel-cells (Lyon Station, PA). In the UK, Lucas/Yuasa makes premium traction batteries, while Prevailers are sold by FWO Bauch under the Sportline DryFit label.

Wet-cell deep-cycle batteries acquire their long cycling life from heavy, antimony-reinforced plate grids, thick plates with high-density active material and multiple plate separators. If properly cared for, these batteries may be cycled thousands of times. They are several times more expensive than an automotive battery, and they also give up some performance in other respects. The thick plates and dense active material slow the rate of acid diffusion through the battery and thus the rate at which a charge can be withdrawn or replaced. When these batteries are under a high load, such as motor or cranking loads, the battery voltage tends to drop off. This voltage drop may be critical with DC-to-AC inverter use, depending on the inverter; some loads will suffer a serious loss of performance as battery voltage declines.
BATTERY CAPACITY RATINGS

A battery is in a constant state of chemical activity and is affected by temperature changes, aging, humidity and current demands. The traditional measure of a battery's ability to do work is its ampere-hour (Ah) capacity. The battery is discharged at a constant rate for 20 hours, by the end of which the potential of each cell has dropped to 1.75 V. A battery that will deliver 6 A over the 20-hour period is rated at 120 Ah (6 A x 20 hr). Note that a 120-Ah battery will not deliver 120 A for 1 hour. The amp-hour rating defines the capacity as the number of amp-hours available from a battery at 80°F (26.7°C) at a relatively slow rate of discharge. In the USA the discharge period is normally 20 hours, while in the UK it is 10 hours.
This means that a battery rated at 100 Ah can deliver 5 amps for 20 hours in the USA, called the C20 rate, or 10 amps for 10 hours in the UK, called the C10 rate. Shorter time periods are also used; 5 hours is used on deep-cycle (traction) batteries, called the C5 rate. Here, a 100-Ah battery would be able to deliver 20 amps for 5 hours. A battery has a lower Ah capacity at higher rates of discharge. If a USA-rated 100-Ah battery (C20 rate) is discharged at a rate of 10 amps (the C10 rate), it will deliver only about 85 Ah before its voltage drops below the threshold level, while a UK-rated 100-Ah battery (C10 rate) can deliver a full 100 Ah at this rate. When comparing battery ratings, the same rating period should be used (the rate effect is sketched at the end of this section).

Battery capacity ratings also include reserve capacity and cold cranking amps. Reserve capacity (or minutes) is an automotive industry rating. It indicates how long, in minutes, a battery at a temperature of 80°F (26.7°C) will support a specified load before its voltage drops below 1.75 volts per cell (10.5 volts for a 12-volt battery). The normal rating load is 25 amps, but other amperages are used; battery comparisons should use the same load.

Cold cranking amps (CCA) rate the engine cranking ability. As the temperature drops, it takes more energy to turn an engine, while at the same time the battery has less available power. The standard USA cold-cranking rating is at 0°F (-17.8°C) for a time of 30 seconds. It indicates the maximum discharge rate in amps the battery can deliver to the starter motor without dropping below 1.2 volts per cell (7.2 volts for a 12-volt battery). There is also a marine cold cranking rating, which uses a higher temperature of 32°F (0°C) and therefore gives a higher number than the CCA rating. In the UK, the British Standard rate uses a temperature of 0°F (-17.8°C) with a time period of 180 seconds and a final terminal voltage of 1.0 volt per cell. The International Electrotechnical Commission rating is at the same temperature with a time period of 60 seconds and a final terminal voltage of 1.4 volts per cell.

Zero cranking power is a hybrid measurement expressed in volts and minutes. The battery is chilled to 0°F and, depending on battery size, a 150- or 300-A load is applied. After 5 seconds the voltage is read for the first part of the rating. Discharge continues until the terminal voltage drops to 5 V; the time in minutes between full charge and effective exhaustion is the second part of the rating.
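The rate effect described above (a 100-Ah C20 battery delivering only about 85 Ah at the C10 rate) is commonly modeled with Peukert's law. That model is an assumption here, since the text gives only the measured figures; the exponent value is also an assumption, chosen so the sketch roughly reproduces the text's numbers.

```python
# Peukert-style estimate of delivered capacity versus discharge rate
# (assumed model; k ~1.1-1.3 is typical for lead-acid, 1.2 used here).

def delivered_ah(rated_ah, rated_hours, discharge_amps, k=1.2):
    rated_amps = rated_ah / rated_hours
    hours = rated_hours * (rated_amps / discharge_amps) ** k
    return discharge_amps * hours

print(delivered_ah(100, 20, 5))    # 100 Ah at the C20 rate (by definition)
print(delivered_ah(100, 20, 10))   # ~87 Ah at the C10 rate (text: ~85 Ah)
print(delivered_ah(100, 20, 20))   # ~76 Ah at a C5-like rate
```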
HYDROMETER TESTING

As the battery discharges, some of the sulfuric acid in the electrolyte decomposes into water. The strength of the electrolyte in the cells is thus an index of the state of charge. The measurement of specific gravity is done with a hydrometer, which consists of a rubber bulb, a barrel, and a float with a graduated scale. The graduations are in terms of specific gravity. Water is assigned a specific gravity of 1. Pure sulfuric acid is 1.83 times heavier than water and thus has a specific gravity of 1.83. The height of the float above the liquid level is a function of fluid density, or specific gravity. The battery is said to be fully charged when the specific gravity is between 1.250 and 1.280.

Water should be added several operating days before the test to ensure good mixing. Use a hydrometer reserved for battery testing; do not use one that has been used as an antifreeze tester, since trace quantities of ethylene glycol will shorten the battery's life.

American hydrometers are calibrated to be accurate at 80°F. For each 10°F above 80°F, add 4 points (0.004) to the reading, and for each 10°F below the standard, subtract 4 points. The standard temperature for European and Japanese hydrometers is 20°C, or 68°F. For each 10°C increase, add 7 points (0.007), and subtract a like amount for each 10°C decrease. Some hydrometers have a built-in thermometer and correction scale. All cells should read within 50 points (0.050) of each other; greater variation is a sign of abnormality.

The state of charge is only indirectly related to the actual output of the battery. Chemically the battery might have full potential, but unless this potential passes through the straps and terminals, it is of little use. A more reliable test is to load the battery with a resistive load while monitoring the terminal voltage. The battery should be brought up to full charge before the test. The current draw should be adjusted to three times the ampere-hour rating; a 120-Ah battery would be discharged at a rate of 360-A for 15 seconds. The terminal voltage should not drop below 9.5-V.
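The hydrometer temperature corrections described above reduce to a one-line adjustment. A minimal Python sketch (the function name is illustrative; it simply applies the 4-points-per-10°F rule linearly):

    def corrected_specific_gravity(reading, temp_f):
        # American hydrometers are calibrated at 80 F; add 0.004 for each
        # 10 F above that temperature and subtract 0.004 for each 10 F below.
        return reading + (temp_f - 80.0) / 10.0 * 0.004

    print(corrected_specific_gravity(1.250, 100))   # 1.258, within the
                                                    # fully charged range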
KINETIC RIDE-THROUGH

Kinetic energy storage can be used as a ride-through technology. It has been used for many years in motor-generator sets for load isolation and conditioning.
Some power protection systems combine slow-speed conventional flywheels with other energy storage and generation technologies. These include synchronous generators and diesel or natural gas engines with flywheels that rotate at less than 6,000 rpm.

Direct current (DC) flywheel energy storage technology can be used as a substitute for batteries in providing backup power to an uninterruptible power supply (UPS) system. The initial cost may be higher, but flywheels offer a much longer life, reduced maintenance, a smaller footprint, and better reliability compared to batteries. The combination of these characteristics generally results in a lower life-cycle cost for a flywheel compared to batteries.

Flywheels designed for UPS application typically provide power for about 15 seconds. Using a flywheel instead of batteries requires a generator that can come up to full power in about 10 seconds. Flywheels can also be used to extend battery life. A majority of power events last 5 seconds or less. A flywheel can be added to a battery backup system and controlled so that the flywheel provides power for short-duration events while the battery is used for longer outages. Flywheels are highly tolerant of frequent cycling while batteries are not, but batteries can provide power for a longer period. Batteries for UPS applications are typically sized for about 15 minutes of full load power. Many UPS systems are integrated with diesel-fired generators that can come up to full power within 10 seconds.

In kinetic energy storage, energy is stored by causing a disk or rotor to spin on its axis. The stored energy is proportional to the flywheel's mass moment of inertia and the square of its rotational speed. Motor-generator sets use flywheels to isolate electric loads from electricity supply disturbances; these flywheels provide less than 1 second of backup power and utilize only a few percent of the flywheel's stored energy. A more effective use of flywheel technology requires a means of decoupling the kinetic energy stored in the rotating mass from the energy demands of the load. The addition of a variable speed drive and inverter components provided a solution, but it was initially cumbersome and expensive. Advances in electronic power conversion and control technology, coupled with resourceful integration of the components, have been key to the technology's development.

The superior energy storage density of flywheels compared to batteries has always been recognized.
Much of the flywheel development in recent times has been for space vehicles and satellites, where mass and volume constraints are severe. Flywheel development in these applications has focused on increasing the rotor speed to maximize the energy density. These speeds require the use of magnetic bearings, which are now used in moderate-rpm flywheels as well.

Flywheel systems must have low frictional losses. Magnetic bearings are one solution, since they allow a flywheel's rotating shaft to float with no surface-to-surface contact. Hybrid magnetic bearings use permanent magnets and provide stability with much smaller and more energy-efficient electronic components. Alternatives to magnetic bearings include ceramic bearings that provide nearly frictionless rotation in spite of the surface-to-surface contact.

DC flywheel energy storage systems are an alternative or supplement to lead-acid batteries. Batteries have the advantage of providing backup power for a period measured in minutes rather than seconds, but this advantage has limited value if reliable backup generators are available. Batteries will usually have lower first costs, but their significantly shorter life and greater maintenance requirements compared to flywheels generally give flywheels lower life-cycle costs. Standby losses for flywheels range from about 0.1% to 1.0% of rated power. This includes power to overcome frictional losses and to run auxiliary equipment.

The life of a flywheel is typically about 20 years, while most batteries in UPS applications will only last 3 to 5 years. Batteries must also be kept within a narrow operating temperature range, while flywheels are tolerant of outdoor ambient temperature conditions. Frequent cycling has little impact on flywheel life, while it significantly reduces battery life. Flywheel maintenance is generally less frequent and less complicated than for a battery. Flywheel reliability is 5 to 10 times greater than a single battery string, or about equal to two battery strings operating in parallel.

Flywheels avoid the battery safety issues associated with chemical release. They are also more compact, using only about 10 to 20% of the space required to provide the same power output from a battery. With a much higher power density than batteries, typically by a factor of 5 to 10, flywheels are more attractive where floor space is expensive. Batteries usually have a lower first cost than flywheels, but suffer from a shorter equipment life and higher annual operation and maintenance expenses. Flywheels are attractive in operating environments that
are detrimental to battery life, such as frequent cycling due to main power supply problems.

Flywheels can be classed as low speed or high speed. The low range is measured in thousands of revolutions per minute (rpm), while the high range is in the tens of thousands of rpm. Doubling the rpm quadruples the stored energy, so increasing rpm considerably increases the energy density of a flywheel. Low-speed flywheels are usually made from steel, while high-speed flywheels are usually made from carbon or carbon-and-fiberglass composites that will withstand the higher stresses associated with higher rpm.

Higher rpm also creates greater problems with friction losses from bearings and air drag. High-speed flywheels typically use magnetic bearings and vacuum enclosures to reduce or eliminate these sources of friction. Magnetic bearings allow the flywheel to levitate, essentially eliminating the frictional losses associated with conventional bearings. Some low-speed flywheels use only conventional mechanical bearings, but most flywheels use a combination of the two bearing types. Vacuums are also used in some low-speed flywheels.
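The rpm-squared relationship can be verified with a short calculation. In this Python sketch, the disk mass and radius are illustrative values, not figures from the text; the energy of a uniform solid disk follows E = 1/2 I w^2 with I = 1/2 m r^2:

    import math

    def flywheel_energy_kwh(mass_kg, radius_m, rpm):
        inertia = 0.5 * mass_kg * radius_m ** 2        # uniform solid disk
        omega = rpm * 2.0 * math.pi / 60.0             # rad/s
        return 0.5 * inertia * omega ** 2 / 3.6e6      # joules -> kWh

    print(flywheel_energy_kwh(600, 0.5, 3000))   # about 1.0 kWh
    print(flywheel_energy_kwh(600, 0.5, 6000))   # about 4.1 kWh; doubling
                                                 # the rpm quadruples the energy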
INSTALLATION

Most flywheels are attached to a concrete slab and connected to the DC bus of the UPS system, with a DC disconnect switch to allow servicing. A 120-V AC service is required for most flywheel systems' auxiliary equipment, such as vacuum pumps. Some flywheels require a higher AC voltage for recharging. A DC flywheel system must have an output voltage that matches the UPS system's DC bus voltage.

DC flywheel energy storage can be applied anywhere a battery is currently used to provide backup power for a UPS system. The flywheel can be used as a substitute or a supplement for the battery. UPS batteries are sized to provide backup power for periods from about 5 minutes up to around 1 hour, but most commonly about 15 minutes. A period of 15 minutes is generally adequate to allow an orderly shutdown of equipment. The backup period for flywheels is commonly about 15 seconds. This is enough time to allow the flywheel to handle the majority of power disruptions, which last for 5 seconds or less, and still cover slightly longer outages until a backup generator can come up to
full power. Flywheels should not be used alone for backup power without a battery and/or a fuel-fired generator.
PRODUCTS

Several companies offer products where the flywheel is an integral part of the UPS system rather than being a direct substitute for a battery. Low-speed units may use unenclosed steel rotors with conventional bearings, while high-speed units have composite-material rotors operating in a vacuum with magnetic bearings. Intermediate products with elements of these two endpoints also exist. Further product development and enhancement is likely, driven by the ever-increasing need for more reliable, higher-quality power. The focus has been on stand-alone DC flywheel energy storage systems that can substitute for or supplement batteries in a UPS system. Some manufacturers offer flywheels as an integral part of a UPS system, while others have developed DC flywheel energy storage systems. See Table 3-1.

Material advances have benefited flywheel systems. In the last 30 years, the tensile strength of graphite composite materials has increased some five-fold, while the cost per pound has dropped over 90%. The development of Kevlar was a breakthrough in flywheel system development. Recently, T1000 graphite composites with tensile strengths of up to a million pounds per square inch have allowed flywheels to approach 100,000 rpm, significantly increasing their energy storage capacity.

Magnetic materials have also seen marked improvements in recent years with the development of rare earth magnets such as samarium-cobalt in the mid-1970s and neodymium-iron-boron (NdFeB) in 1983. The latter is used extensively by the automotive industry in alternators. Materials costs for NdFeB are quite low, since neodymium is the third most common rare earth element, and cobalt is used only in small amounts.
KINETIC SYSTEMS

A flywheel-coupled motor-generator is used in International Computer Power's Dynamic Energy Storage System (DESS). This is a kinetic storage system for system ride-through.
Table 3-1. Flywheel Companies
————————————————————————————————
Manufacturer                    Products
————————————————————————————————
Active Power                    UPS, DC energy storage
Acumentrics                     UPS, DC energy storage
AFS Trinity Power               DC energy storage
Beacon Power                    UPS, DC energy storage
Caterpillar                     UPS
Designed Power Solutions        UPS with generator, DC energy storage
Flywheel Energy Systems         DC energy storage
GE Digital Energy               UPS with generator
Holec                           UPS with generator
Indigo Energy                   UPS, DC energy storage
Optimal Energy Systems          UPS, DC energy storage
Pentadyne Power                 DC energy storage
Piller                          DC energy storage, UPS, UPS with generator
Powerware                       UPS, DC energy storage
Preciso Power                   UPS
Regenerative Power & Motion     DC energy storage
Reliable Power                  UPS
SatCon Power Systems            UPS with generator
Statordyne                      UPS with generator
Urenco Power Technologies       DC energy storage
————————————————————————————————
These units can be used to replace UPS battery backup systems in standby engine-generator applications. Kinetic storage can also be used with an existing battery storage system to prevent unnecessary battery system discharging during short-duration power anomalies.

The Statordyne system uses hydraulic storage with back-up generation. The system uses a synchronous motor/generator that provides power factor correction by running as a synchronous condenser while the utility is supplying power. It switches to operating as a generator when the utility power is interrupted. Energy is stored in a mechanical flywheel that keeps the generator's speed at about 1,777 rpm for the first 100 to 200 milliseconds. Then a hydraulic motor is engaged to keep the generator's shaft spinning at 1,800 rpm until a diesel or natural gas generator starts and takes over. These systems range from 100 to 800-kW and cost about $1,000/kW, depending on the size and features.
The Holec system also uses flywheel-inertia storage with back-up generation. This system integrates low-speed flywheel technology with a synchronous generator to protect facilities from power outages 2 to 3 seconds in duration, with a diesel or natural gas engine taking over for protection from longer power outages. The Holec system is installed in about 300 facilities worldwide.

The system uses a 3,600-rpm induction motor with a rotating stator that is mechanically attached to the rotor of an 1,800-rpm motor/generator. Under normal conditions, both motors operate at full speed and the rotor of the induction motor turns at 3,600 + 1,800 rpm, for an overall rotating speed of 5,400 rpm. It acts as an energy storage flywheel. When an electric power interruption occurs, the induction motor is reconfigured to operate as an eddy-current clutch, transferring its stored rotational energy to keep the generator's shaft at full speed. These systems are available in sizes ranging from 100 to 2,200 kVA and cost about $700 per kVA for a 1-MW system.
MAINTENANCE

DC flywheel maintenance depends on the flywheel design. Although valve-regulated lead-acid (VRLA) batteries do not require monitoring and maintenance of electrolyte fluid levels, manufacturers generally recommend quarterly inspections to check tightness of connections, remove corrosion, measure voltages, and check for cracks and swelling in the battery cases. Periodic replacement of individual batteries may be required, while replacement of the entire battery system can be expected about every 4 years.

Routine maintenance for flywheels involves changing cabinet air filters and checking the vacuum pump oil level every few months. The vacuum pump oil should be changed once a year. Magnetic bearings require no maintenance, while replacement of mechanical bearings is expected every 3 to 10 years, depending on the flywheel design. The vacuum pump may need replacing every 5 to 10 years. Bearing replacement for lower-rpm flywheels with mechanical bearings costs $5/kW to $15/kW depending on flywheel design. Replacing the vacuum pump costs about $5/kW. While most flywheels have a design life of about 20 years, they will probably last longer with regular maintenance.

Flywheel backup costs vary from $100/kW to $300/kW. The lower end of the range represents larger, lower-rpm units, while smaller,
higher-rpm models have higher per-kW costs. Installation is about $20/kW to $40/kW. Standby power consumption is about $5/kW per year for lower-rpm flywheels with mechanical bearings; it is only about 10% of this for higher-rpm flywheels using only magnetic bearings.

VRLA battery purchase costs, measured in $/kWm (dollars per kilowatt-minute), are about $17/kWm for 5 minutes of backup power, and drop to about $13/kWm, $10/kWm, and $8/kWm for 10, 20, and 30 minutes of backup power. Annual battery maintenance costs are about $3.50/kWm for 5 minutes of backup power, dropping to about $2.25/kWm, $1.50/kWm, and $1.25/kWm for 10, 20, and 30 minutes of backup power.

Standby power consumption, or float loss, is negligible for batteries compared to flywheels, but the differences in footprint and floor-space requirements are much greater. Flywheel footprints range from 0.04 ft2/kW to 0.12 ft2/kW depending on the size and type. Battery footprints per kW depend on the backup time. For 5 minutes of backup power, VRLA battery footprints start at about 0.15 ft2/kW; this roughly doubles for 20 minutes of backup power.

Electromechanical flywheels avoid the environmental and safety issues associated with battery requirements. These include the need for eye-wash stations, spill containment, hydrogen detection, and elevated room ventilation rates. Batteries must also be kept at normal indoor air temperature or suffer a degradation in expected life. This adds to cooling system operating costs.

INSTALLATIONS

Several hundred units have been installed, with about a dozen at Federal facilities. Most of these were at military installations, but the State Department and Veterans Affairs also have installations.

At Fort McPherson in Atlanta, Georgia, the U.S. Army has equipment that must be running on a 24/7 basis. Grid power was backed up by a UPS system with wet-cell batteries and diesel-fired generators. While the diesel generators would come online within 10 seconds, the existing battery system was becoming less reliable and more expensive to maintain. Annual battery maintenance was almost $30,000. The UPS system consisted of four 500-kVA units operating in parallel, each with its own wet-cell battery string. The UPS and battery system was about 15 years old, which is well beyond the expected battery life.
One of the battery strings had already failed and represented a hazardous condition for maintenance personnel. The batteries needed to be replaced, and the options were either VRLA or wet-cell batteries, or flywheels. Reductions in power demand had reduced the requirement to two 500-kVA units operating in parallel, with either unit able to meet the building's critical power demand. It was decided to install two flywheels, one for each 500-kVA UPS unit, instead of replacement batteries.

A wet-cell battery system would cost approximately the same as a flywheel, but would probably have a shorter life and is expected to have higher annual maintenance costs. A VRLA battery system would cost about half as much as a flywheel, but would have to be replaced several times over the life of the flywheel and would incur higher annual maintenance costs. Using flywheels would free the 2,400-square-foot room reserved for the battery strings. Room ventilation and cooling costs could also be reduced, and the environmental and safety issues associated with the batteries were eliminated. The diesel generators would come online within 10 seconds, which minimizes the value of the longer backup period provided by batteries.
LIFE-CYCLE COSTS

Since flywheels cost more than batteries but require less maintenance and last much longer, they will generally be less expensive on a life-cycle cost basis. At one facility, a 250-kW UPS system is backed up by a generator that can come up to power in 10 seconds. Backup power could be provided by either a battery or a flywheel. The comparison in Table 3-2 is based on a low-rpm flywheel with a life of 20 years and a VRLA battery with a life of 4 years. Battery life will vary depending on the operating conditions, but a typical lifetime of 4 years is used. The battery is assumed to provide power for 10 minutes. A discount rate set by the National Institute of Standards and Technology for federal energy projects is used. The resulting present value of life-cycle costs is $248,129 for the battery option and only $105,572 for the flywheel option. The savings is $142,557, about 60%. The short battery life compared to flywheels results in a much greater life-cycle cost for the battery option.
Table 3-2. Batteries Versus Flywheels
————————————————————————————————
Batteries
  Cost ($13/kWm × 250-kW × 10 minutes)                          $32,500
  Installation ($30/kW × 250-kW)                                 $7,500
  Total                                                         $40,000
  Replacement every 4 years                                     $40,000
  Annual maintenance ($2.25/kWm × 250-kW × 10 minutes)           $5,625
  Floor-space/year (0.22 ft2/kW × 250-kW × $10/ft2)                $550
  Standby power/year (250-kW × 8760 hrs × 0.01% × $0.063/kWh)       $14

Flywheels
  Cost ($200/kW × 250-kW)                                       $50,000
  Installation ($30/kW × 250-kW)                                 $7,500
  Total                                                         $57,500
  Bearing replacement every 5 years ($10/kW × 250-kW)            $2,500
  Vacuum pump replacement every 7 years ($5/kW × 250-kW)         $1,250
  Annual maintenance ($5/kW × 250-kW)                            $1,250
  Floor-space (0.08 ft2/kW × 250-kW × $10/ft2)                     $200
  Standby power (250-kW × 8760 hrs × 1% × $0.063/kWh)            $1,380
————————————————————————————————
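The present-value arithmetic behind Table 3-2 can be sketched in a few lines of Python. The discount rate below is an assumption (the actual NIST rate behind the chapter's $248,129-versus-$105,572 figures is not given), so the totals will differ somewhat, but the battery option comes out several times more expensive either way:

    def pv_annual(cost, rate, years):
        # Present value of a recurring annual cost over the study period.
        return sum(cost / (1 + rate) ** y for y in range(1, years + 1))

    def pv_periodic(cost, interval, rate, years):
        # Present value of replacements made every `interval` years.
        return sum(cost / (1 + rate) ** y
                   for y in range(interval, years + 1, interval))

    rate, years = 0.03, 20   # assumed discount rate; 20-year flywheel life
    battery = 40000 + pv_periodic(40000, 4, rate, years) \
                    + pv_annual(5625 + 550 + 14, rate, years)
    flywheel = 57500 + pv_periodic(2500, 5, rate, years) \
                     + pv_periodic(1250, 7, rate, years) \
                     + pv_annual(1250 + 200 + 1380, rate, years)
    print(round(battery), round(flywheel))   # battery life-cycle cost is far higher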
SUPERCONDUCTOR STORAGE

Electrical energy may be stored in a coil of superconducting wire submerged in liquid helium. The technique can be used for both ride-through and energy storage systems. Since the late 1960s, over 20 development projects have begun, with some outstanding results. Storing large amounts of energy this way is feasible, and the ability to deliver high bursts of power makes it attractive for facility ride-through systems.

This technology stores electrical energy without intermediate conversions to mechanical or chemical energy. A superconducting magnetic coil is immersed in liquid helium at 4.2 K (-452°F), causing its resistance to DC current to fall to zero. High electrical currents can be sent to the coil, and the current will circulate without any losses until it is diverted from the coil to the facility electrical system.
These very cold temperatures require continuous cooling. The energy used by the cooling systems is about 25-kW for a moderately sized unit that stores about 0.28-kWh.

Superconductivity, Inc., has been shipping commercial systems since 1992, with over a dozen ride-through systems currently installed. Systems are available for about $1,000 per kW. The company is now part of American Superconductor, Inc., located in Westborough, MA.
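The energy stored in a coil follows E = 1/2 L I^2. A short Python sketch (the 2-H inductance is an assumed value; the chapter gives only the 0.28-kWh stored energy) shows the scale of the circulating current involved:

    import math

    def smes_current_amps(energy_kwh, inductance_h):
        # E = 0.5 * L * I**2, solved for I, with E converted from kWh to joules.
        joules = energy_kwh * 3.6e6
        return math.sqrt(2.0 * joules / inductance_h)

    # The 0.28-kWh unit described above, with an assumed 2-H coil:
    print(round(smes_current_amps(0.28, 2.0)))   # roughly 1,000 A circulating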
UNINTERRUPTIBLE POWER SUPPLIES (UPS)

Any strategy for safeguarding against power interruptions or poor power quality should include an uninterruptible power supply (UPS). A UPS is one of the best power-quality defense tools available. It can monitor and regulate the utility power entering a facility. A UPS is used for critical loads when continuous power is required, and can protect the load from system power disturbances such as harmonics, transients, voltage surges and sags.

The quality of service and the frequency of utility power failures are important. These depend on the utility service company, the geographical location or isolation, and the time of the year. It is useful to obtain data for the average duration as well as the frequency of outages. The reliability of the utility service and the duration of the average power outage will determine the type and size of the standby power. If the quality of utility power is not always satisfactory, a UPS can improve this situation as well.

A useful step is to classify the loads into functional categories depending on whether the load is supporting staff, equipment or building functions. Loads associated with human safety, such as life-support systems in a hospital and air-traffic, police and fire control systems, require a UPS. Systems for data processing or critical industrial processes may need to operate under any conditions. A UPS can help ensure that critical data and telecommunications systems function properly and consistently.

Building support systems include lighting, HVAC, elevators, communication systems, security systems and fire alarms. The last two will require battery backup or a UPS, while the others will typically use batteries or emergency generators for backup. Batteries are often used for smaller loads such as emergency lighting, alarms, control circuits, telephones, fire protection systems and security alarms. For larger
loads such as HVAC, elevators and other essential devices, a standby generator is required.

In the case of power anomalies such as surges, a UPS may have transient voltage surge-suppression circuitry to prevent equipment damage. In the event of brownouts and blackouts, the UPS takes over to keep systems running for a period of time. If the UPS's battery nears exhaustion before utility power returns, the UPS's software can save open files and shut down critical systems.

UPS systems may be rotary or static. A rotary system uses a motor-generator set to isolate the critical load from normal power. During an interruption, the motor-generator set will continue to provide power to the load for 100 milliseconds or more from its kinetic energy; flywheels will extend this to many seconds. A bypass circuit allows the system to operate on normal power if the UPS malfunctions. Transfer to the bypass circuit may be manual or automatic.

To protect against data loss due to interruptions or fluctuations in commercial power, many critical-use areas place a static UPS on all computers and attached equipment. A UPS is a specially designed power supply, with batteries, that can power the computer for short periods. If the commercial power fails, the computer runs off the UPS batteries until it can be shut down properly. A static UPS uses a rectifier and inverter circuit along with batteries. The rectifier circuit converts AC power to DC, which charges the batteries and supplies the inverter. The inverter converts DC power back to AC. During an interruption, the batteries continue to supply the inverter until they are discharged or normal power is restored. Normally, the batteries are large enough to last a minimum of 20 minutes.

To enhance the protection offered by a UPS, many software versions include a built-in UPS monitoring feature. This feature provides an interface between the UPS unit and the operating system. Through this interface, the UPS can signal its status to the operating system and thereby coordinate an orderly shutdown.

The UPS can provide more complete power protection, but not all UPS systems are alike. Early UPS units were designed simply to provide backup power in the event of a blackout, leaving the equipment exposed to other types of power irregularities. Newer UPS systems are designed to protect against a full range of power problems. UPS systems can be grouped into offline, online, and hybrid types. The newer types act as an intelligent power supply. A standby or offline UPS
operates by switching from commercial power to battery power when the commercial power drops below a certain voltage level. The inverter is not powered, except during power outages. Since the load receives power directly from the utility line as long as it is available, the load remains exposed to power disturbances such as spikes, noise, surges, and brownouts. Because of this, these systems should be used with a line conditioner or voltage regulator. Another disadvantage is the time it takes for the unit to switch over after it senses that the voltage has dropped. Some sensitive equipment may not be able to ride through this transition period without being affected. A switching time of four milliseconds or less is needed for electronic loads such as computer or control hardware. The major advantage of an offline UPS is its low cost.
UPS OPERATION

An online UPS acts as an alternate source of power. It continuously converts commercial AC power to DC charging power to keep a battery charged. Its inverter uses the DC power from the battery to create new, clean AC power, and critical loads run off the power generated by the online UPS at all times. Since the online UPS uses the commercial line voltage to keep its batteries charged, but never to supply power directly to the load, there is complete isolation of the load from the commercial power line and any power fluctuations. Also, there is no switching of the load between commercial and battery power.

A hybrid UPS may use an offline UPS with electronic or ferroresonant conditioning/filtering to smooth the transition from utility to inverter power. The quality of output power from a hybrid UPS depends on its filtering and conditioning capabilities. Many of these systems are advertised as line-interactive, no-break, load-sharing and bidirectional. Sometimes hybrids are even labeled as online products, although they do not function the same as online UPS units. Hybrid and offline UPS units do not regenerate power continuously, so the load receives either raw or partially filtered utility power. Sensitive equipment may suffer damage from this raw or partially filtered power.

Many UPSs are specifically designed to protect computer systems. They add software capabilities that work with the computer or network
operating system. The interface to the operating system is more than a status indicator of power or battery charge. Built into the software are UPS monitoring functions that provide the network or workstation with an orderly, automatic, and unattended shutdown. A true regenerative, online, sinewave UPS provides the most complete protection while maintaining a solid output during any power interruption. These types of systems are the most expensive and may not be able to handle all loads.
SPECIFYING A UPS

UPS units come in a variety of sizes and price levels. Units can range from $100 to many thousands of dollars, depending on whether you need to protect one PC/workstation or an entire data center. The most basic UPS can protect a single PC or a smaller workstation. It provides 200-650 VA and has an average running time of 6 to 12 minutes. Larger systems capable of running an entire data center can operate for a half hour or more and can have ratings from 1,000 to 5,000 VA.

There are also different types of UPS technologies to consider. These include online, line-interactive and standby (offline) units. Each of the UPS technologies keeps its output between +10% and -20% for 120-volt outputs, and most have a UL 1449 listing for surge-suppression protection. The most common UPS types for computer equipment, including servers and networks, are standby, line-interactive, standby online hybrid, standby-ferro, double conversion online, and delta conversion online.

The standby UPS system is used for personal computers. In the standby UPS system, when the primary AC source fails, the transfer switch moves from the AC line to a battery/inverter system for power. Standby systems only start up during a power failure; the inverter is otherwise unpowered. These are battery/inverter power units characterized by high efficiency, small size and relatively low cost. Standby UPS units switch from utility to battery power when the utility voltage falls below a certain level. The technology has advanced in the past few years, and most standby UPSs switch over in 2 to 10 milliseconds. This is usually fast enough to prevent any damage.

Online UPSs use batteries as a power buffer to absorb spikes, surges and sags on a full-time basis. They convert the line AC to low-
voltage DC to charge the battery, and then convert it back to AC to operate critical systems. The online UPS operates constantly, so the power to critical systems is uninterrupted, even during a power failure. While this is very useful, it can be costly.
LINE INTERACTIVE

Line-interactive UPSs constantly monitor the quality of the utility power. If required, they make the necessary adjustments to raise or reduce the line voltage; a line-interactive UPS thus acts as a power conditioner for sensitive systems. The initial and daily operating costs are less than online systems, but more than standby units.

The line-interactive UPS is commonly used for small-business computer equipment, including Web and departmental servers. Its battery-to-AC power converter (inverter) is always connected to the output of the UPS. When the AC power is functioning normally, the battery is charging as the inverter operates in reverse. When the input power fails, the transfer switch functions and power flows from the battery to the UPS output. Unlike the standby UPS, the inverter in the line-interactive UPS is always connected to the output, so additional filtering is used to provide lower switching transients.

In a line-interactive UPS, the inverter also provides some line regulation to correct brownout conditions. Because of this regulation, a switch to battery operation is not needed when the UPS is used at sites subject to brownouts. The problem of a single-point failure in the UPS can be eliminated with this system. If the inverter fails, power will still flow from the AC input to the output, since this design provides two independent power paths. It is an efficient, reliable system that provides power protection. High efficiency, low cost, and high reliability, along with the ability to correct low or high line-voltage conditions, make the line-interactive UPS a prime choice in the 0.5-5 kVA power range.

STANDBY ONLINE HYBRID

The most common UPS system for loads under 10-kVA is the standby online hybrid. Although it is called an online system, it is not
a true online system. The power path from the battery to the output (the inverter half) is online, but the DC-DC converter half is operated in the standby mode. The standby converter from the battery is switched on when there is a power failure. The switch to standby power is almost instantaneous, so there is virtually no transfer time.
STANDBY FERRO-RESONANT

The standby ferro-resonant UPS, also called a standby-ferro UPS, operates in the 3-15 kVA load range. The transformer in this unit has three windings for power connections. The primary power path is from the AC input, through a transfer switch, through the transformer, and to the output. In the event of a power failure, the transfer switch opens and the inverter takes over the output load. The inverter, which is normally in the standby mode, becomes active when the input power fails and the transfer switch opens. The transformer provides some regulation and output waveform shaping.

Standby-ferro UPS systems are sometimes called online units; however, they are not true online units. They have a transfer switch, the inverter operates in the standby mode, and during a power failure they go into a transfer mode. Standby-ferro systems are reliable and provide good line filtering, but they can also be inefficient.
DOUBLE CONVERSION ONLINE

This is the most common UPS for loads above 10-kVA. It is the main protection for servers, networking equipment and data centers. The primary power path of the double conversion online UPS is the inverter, whereas in the standby UPS the main power path is the AC line. In a double conversion online UPS, incoming AC power is converted to DC power, and the DC power is converted back to AC power to feed the load. During normal operation, the batteries on the DC bus are kept charged.
When there is an AC power failure, the batteries run the inverter, which carries the load without interruption. Since both the battery charger and the inverter must handle the entire load power, more heat results, which reduces the efficiency of this type of UPS. The extra heat in the power components reduces reliability compared with other types, and the extra energy consumed lowers efficiency. This contributes to the increased life-cycle costs of this design. The input power drawn by the large battery charger is often non-linear and can cause power quality problems.
DELTA CONVERSION ONLINE

The delta conversion online UPS is available for loads of 5 kVA or more. In this type of UPS, the inverter supplies the load voltage, as in the double conversion UPS. But instead of the rectifier charging the batteries as in the double conversion UPS, the delta conversion design uses bi-directional converters connected to a battery. These convert AC power to DC power and back to AC with minimal energy loss. The converters are connected in series between the power source and the load. They also compensate for differences between the required output voltage and the utility voltage. Harmonic distortion is reduced and energy efficiency is increased. The delta conversion online UPS has the same output characteristics as the double conversion online UPS, but it reduces energy losses and costs. The input power quality of the delta conversion UPS is also better, especially in the larger kVA sizes.

Many UPS systems offer some protection against spikes, surges, and sags, but they disregard brownouts. Offline and other non-regenerative UPS units do not perform well during sustained low-voltage conditions. Switching times for these UPS units tend to increase as the utility voltage decreases; a unit with a 5-millisecond transfer time at 120 VAC may exceed twenty milliseconds at 100 VAC. A brief period of low voltage precedes most blackouts, and this places the load at greater risk. Offline and hybrid UPS units may also sense a brownout as a blackout and prematurely switch to battery. During a sustained brownout, an offline UPS can discharge its battery and lose power even though the utility power is still on.
OUTPUT WAVEFORMS

UPS units are available with either sinewave or squarewave outputs, often modified or approximated in some way. Sinewave is usually considered best, since it is the same waveform provided by the utility companies and is the waveform that most equipment is designed for. Sinewave is also better because some hardware may be affected by both linear (RMS) currents and nonlinear (peak) currents. A squarewave output only approximates a sinusoidal waveform, and it puts stress on RMS-sensitive system elements while starving peak-sensitive elements. This can cause excessive heating and hardware failures. The excess energy in squarewaves, in the form of harmonics, can affect electronic circuitry and cause data errors.

Computer-grade sinewaves use enough squarewave increments to closely approximate a sinewave. This usually provides the needed RMS voltage and limits the peak voltages to equipment design values. But these apparent sinewaves may not provide the precise timing needed by many computer monitors. The timing mechanism is referenced to the zero crossing of the utility sinewave and regulates the scanning of the monitor. Imprecise screen scans from the zero-crossing problem result in the screen appearing to swim or undulate.

The waveform from the UPS must also be a pure alternating current with no DC component. Even a small percentage of direct current in the output can saturate magnetic loads such as fans and transformers and make them inoperative.
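The RMS-versus-peak mismatch is easy to quantify. If a squarewave inverter is sized to deliver the correct 120-V RMS, its peak is also 120 V, while a true sinewave at the same RMS peaks near 170 V; peak-sensitive rectifier-capacitor inputs are therefore starved. A minimal Python sketch:

    import math

    v_rms = 120.0
    sine_peak = v_rms * math.sqrt(2)   # about 170 V; crest factor 1.414
    square_peak = v_rms                # for a squarewave, peak equals RMS
    print(round(sine_peak), round(square_peak))   # 170 vs. 120: the shortfall
                                                  # seen by peak-sensitive loads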
RATINGS

The UPS must be able to start up all connected loads at a critical time. When the loads are energized again after a power anomaly, the initial inrush current is much higher than the normal operating power requirement. If additional equipment connected to the UPS is powered on when the demand for UPS power is already high, an overload condition may occur. The additional devices may not receive power, or other equipment attached to the UPS may be shut down. Online UPS devices may require an inrush capacity as high as 1,000%. Online UPS units with inadequate inrush capacity cannot be used to power equipment up to the rated output of the UPS.
UPS units must have a power rating sufficient to support all of the loads that will require power from the unit. Power ratings are usually specified in terms of volt-amps (VA) or watts. Extended operation on the UPS batteries in the absence of commercial power will shorten the life of the batteries, and most UPS systems have a maximum run time before the unit switches back to commercial power.

The increasing need for reliable power quality in today's industrial and commercial facilities has resulted in the availability of a wide variety of power quality mitigation equipment, among them static and rotary UPS systems. The level of power protection provided by these systems depends on the maintenance of electrochemical storage batteries.
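A rough sizing pass can be scripted. The load names, inrush multiplier, and 25% headroom below are illustrative assumptions, not values from the text; the point is that the rating must cover the steady-state VA plus the worst single inrush:

    # Hypothetical load list in volt-amps.
    loads_va = {"server": 400, "monitor": 80, "network switch": 50}
    inrush_factor = 3.0   # assumed start-up multiplier for the largest load
    margin = 1.25         # assumed 25% headroom

    running_va = sum(loads_va.values())
    largest = max(loads_va.values())
    with_inrush = running_va - largest + largest * inrush_factor
    required_va = max(running_va * margin, with_inrush)
    print(required_va)   # select a UPS rated at or above this figure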
RELIABILITY

A standby power source must be reliable in order to function properly. All of these systems are made up of many parts, and the reliability of the individual parts determines the overall system reliability. A robust system is characterized by simplicity, so reducing the number of parts improves reliability. A failure in one of the larger electronic chips will have a major impact on system performance.

In a standby generator, the transfer switch has a critical role, since it must operate properly during both normal and emergency conditions. If the transfer switch fails, it could transfer to generator power in error, causing a short power interruption. If it fails to transfer during a power failure, there will be no backup power. If the anticipated utility outages are longer than the capacity of the UPS batteries, then an auxiliary generator must be used to charge the batteries.
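The parts-count argument can be made concrete with the standard series-reliability model, in which the system works only if every part works, so the part reliabilities multiply. The figures below are illustrative, not from the text:

    # Series reliability: R_system = product of the part reliabilities.
    parts = {"transfer switch": 0.995, "generator": 0.98, "controls": 0.99}

    r_system = 1.0
    for r in parts.values():
        r_system *= r
    print(round(r_system, 4))   # about 0.965, lower than any single part,
                                # which is why fewer parts means higher reliability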
Many power conditioning problems are interactions with load devices. A solid-state UPS may not synchronize with the load it is carrying and may pass the load off to the bypass line. A standby generator cannot be used to power even an unloaded UPS, because the current demands of the UPS rectifier are so distorted that the generator must be several times the size of the UPS to provide the correct energy. These problems can often be explained by harmonic distortion demands.

Most loads are designed to operate with 60-Hz electrical power, but under some conditions the incoming service, transformers, busways, panel boards, branch circuits, and load distribution devices may demand several frequencies due to harmonics.
A UPS may be installed in order to keep all loads running even when the power company has an anomaly in its service. The UPS supplies 60-Hz power, which is supported by batteries for the time needed. However, suppose the loads also need 180-Hz, 300-Hz, 420-Hz, and other frequencies. Newer loads, both single-phase and three-phase equipment, may make demands upon the power source for a mixture of frequencies in the current they take to operate. The frequencies above 60-Hz are part of the current spectrum that characterizes these non-linear devices.

These extra frequencies cause heating in the wiring and electrical distribution equipment, lowering the efficiency of the system in handling energy. In a three-phase load, these extra currents may reduce the power factor by 10 to 15%. This creates an unwanted peak energy demand for the system. If enough of these devices, such as variable speed drives (VSDs), are installed on a system, the resultant harmonic currents could be large enough to cause voltage distortion in excess of the amount that can be handled by the power supply.

In single-phase loads, such as those presented by personal computers or electronic lighting ballasts, the loss of capacity starts on the common neutral wire of three-phase, four-wire distribution systems. The overloading of the neutral occurs due to the high content of the 3rd harmonic, 180-Hz, which does not cancel when the phase contributions arrive on the common neutral. A 750-kVA distribution transformer, 480 volts to 208/120 volts, may only carry 100 amperes on each of the secondary phase wires when half loaded, but harmonic currents can increase the neutral current into the transformer to over 200 amperes. The current in the neutral should cancel, except for the unbalanced portion at 60-Hz.

New drive equipment often lowers the power factor, but adding power factor correction capacitors to improve the power factor and avoid penalty billing may cause the circuit breaker to trip from the inrush current when the capacitors are switched on. In one case, high-frequency currents from an elevator controller caused such voltage distortion that 120-volt control circuits in energy management devices were damaged. The manufacturer of the elevator controller needed to add filters for the distorted currents.
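The neutral-overload mechanism can be verified numerically: the 60-Hz fundamentals of three balanced phases cancel on the neutral, while their 3rd-harmonic (180-Hz) components arrive in phase and add. The current magnitudes in this Python sketch are illustrative, not the transformer figures quoted above:

    import math

    I1, I3 = 100.0, 60.0          # amps: fundamental and 3rd harmonic per phase
    w = 2 * math.pi * 60          # 60-Hz angular frequency

    def phase_current(t, shift):
        return I1 * math.sin(w * t - shift) + I3 * math.sin(3 * (w * t - shift))

    n = 10000   # samples over one 60-Hz cycle
    neutral = [sum(phase_current(i / (n * 60.0), k * 2 * math.pi / 3)
                   for k in range(3)) for i in range(n)]
    neutral_rms = math.sqrt(sum(s * s for s in neutral) / n)
    print(round(neutral_rms))   # about 127 A: the fundamentals cancel, but the
                                # three 60-A third harmonics add in phase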
MAINTENANCE AND TESTING

Proper maintenance and testing procedures are needed, along with good equipment records. An effective maintenance program will allow the system to operate when it is needed. A critical part of a preventive maintenance program is periodic testing of the total system. This requires that power be shut down, which means some scheduled downtime for the facility. In most cases the impact can be minimized by testing during off-hours or holidays.

Maintenance requirements depend on the equipment. Batteries may need to be checked for water level, clean connections and specific gravity. Generators will need fluid changes and checks. Periodic starting of the engine is needed, along with generator capacity, transfer switch and control circuit testing. The maintenance procedures for UPS units should be done in accordance with the manufacturer's recommendations.

The cost difference between the technologies is large. Online units can cost three times more than line-interactive units, and line-interactive units can cost three times more than standby systems. An important consideration is the cost of operation and maintenance. Fewer systems means less maintenance, but not necessarily lower operating cost. An online system becomes part of the power generation system since it is always operating; it consumes power and generates heat. Online and line-interactive types will minimize battery inventory and maintenance. It is often a wise choice to use a combination of online or line-interactive and standby units.

The working life of the unit's battery should be at least 3 to 5 years. It is good maintenance practice to discharge and recharge the battery every six months to prolong its working life. An important battery feature is recharge time. In the event of battery activation, recharge time is critical. Many units that are completely drained have recharge times of between 2 and 12 hours, depending on the battery size and UPS type.

Testing the UPS on a regular basis is one way to guarantee that when power is needed, it will reach critical systems. It is important that the UPS monitor the battery's condition with periodic tests and have a built-in mechanism to warn, both audibly and visually, when battery power is low. The UPS's firmware or software programming is also important. Select a unit that constantly monitors its own condition, particularly in
cases of overload. Units with software can be programmed to determine what action the UPS will take for different types of power anomaly events. The unit's software should include a feature to program test times and log the results. Since it is important to know when a power fluctuation or failure occurs, select a unit with interface connections that will alert you through network messaging, such as e-mail or telephone paging, when a power anomaly occurs.

A cost/benefit analysis should be used to determine the need for a standby power system. Protection of life, property and the future of the business may be involved. Building code and equipment requirements may also apply, including sections of ANSI/NFPA, ANSI/IEEE, ANSI/UL or ANSI/NEMA documents. Standby power decisions can be based on economic considerations, but factors such as the types of power interruption that can be tolerated, initial costs, and operating and maintenance costs will shape the optimum solution.
Chapter 4
Emergency Generators

Electric generators are a common source of emergency power. The engine may be powered by bottled or natural gas, gasoline or diesel. The load is coupled to normal power and the emergency generator through a transfer switch. Under normal conditions, the transfer switch connects the load to the utility power source. If utility power is lost, the generator is started and the transfer switch connects the load to the generator. This process takes about 10 seconds. When the utility power is restored, the load is transferred back to normal AC power after about 15 minutes. The generator can provide power to the load for as long as there is an adequate supply of fuel. Since there is a 10-second delay before the generator can power the load, the load must be able to survive this 10-second ride-through period.

Natural gas generators are efficient and easy to maintain. Pollution is low and the flue gas is not a serious problem. But natural gas generators are not considered true emergency systems, since if there is an interruption in the utility gas supply, the unit will not function. Agencies such as the Joint Commission on Accreditation of Hospitals do not consider a natural gas unit a true emergency generator. For critical loads, propane-, gasoline- or diesel-powered generators are available. Smaller units generally use propane or gasoline, while larger units use diesel for its lower operating costs, lower maintenance requirements and safety; diesel fuel has a high flash point and low volatility.

One method of providing standby power is serving a load with a double-ended or triple-ended power station. This technique is useful if both feeders are powered by different and independent sources. The arrangement provides redundancy for feeder cables, transformers and circuit breakers. The transfer to the alternate feeder may be manual or automatic.
There will be a power interruption of about 10 seconds during the transfer.

AC GENERATORS

AC power requires a stable frequency of 60-Hz (USA) or 50-Hz (UK). The frequency can be established by running the generator at a virtually constant speed, regardless of the load placed on it. Modern solid-state electronic circuitry can also produce a stable output frequency from a fluctuating generator frequency; this allows stable AC power to be produced from a variable-speed power source.

There are several approaches to powering generators. A separate engine, regulated and governed to a constant speed, can be coupled directly to a generator; this is the traditional stand-alone generator set, or genset. A variable-speed clutch driven off the engine can mechanically compensate for changes in engine speed, providing a constant speed of rotation to the generator. Solid-state variable-speed technology (VST) takes the fluctuating output of an engine-driven alternator and feeds it through an inverter to produce a stable source of AC power.

In an alternator, a direct current (DC) flows through a set of field windings on the rotor, creating a magnetic field that produces an output in the stator windings. On start-up, the residual magnetism in the rotor induces an AC voltage. Diodes built into the rotor rectify this to DC, which is used to power the field windings.

An armature type of AC generator has several coils wound around the rotor, or armature. Two or more electromagnets are mounted inside the generator case. The armature rotates inside these magnets, producing an alternating current in the armature coils. This AC output is transferred to slip rings on the end of the armature shaft, where it travels through spring-loaded brushes to the generator's output terminals. This type of generator may have two, three, or four slip rings and brushes, depending on the configuration and power output. The smaller AC generators have two slip rings and brushes; larger generators have four. The simplest generators have two field windings (a two-pole generator) to produce one magnetic field (north and south), but most have two sets of windings (a four-pole generator).
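The relationship between pole count, shaft speed and output frequency is fixed: f = poles × rpm / 120. A quick Python check, consistent with the 1,800- and 3,600-rpm machines mentioned earlier in this chapter:

    def output_frequency_hz(poles, rpm):
        # Synchronous machine: f = poles * rpm / 120.
        return poles * rpm / 120.0

    print(output_frequency_hz(2, 3600))   # 60.0 Hz (USA), two-pole generator
    print(output_frequency_hz(4, 1800))   # 60.0 Hz, four-pole generator
    print(output_frequency_hz(4, 1500))   # 50.0 Hz (UK), four-pole generator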
ALTERNATORS

Instead of using fixed (permanent) magnets in the rotor, an alternator rotor has a soft-iron core wrapped with wire to form the field winding. When direct current flows through the coil, the iron core is magnetized; the stronger the current, the greater the magnetism. The alternator output is controlled (regulated) by varying the field current in the field winding. Current flows to the field winding through two slip rings on the rotor shaft. Contact is made by spring-loaded carbon brushes in brush holders fixed to the alternator housing.

A compound rotor has multiple interlocking fingers and produces multiple north and south magnetic poles when the field winding is energized. As these north and south poles spin inside the coils of the stator, alternating current is generated in the coils. The number of coils varies among alternators; when there are three properly spaced coils, the resulting power output will be three-phase.

The current from each phase is alternating, and to be of use in charging a battery it must be rectified to direct current. This is done with silicon diodes. A diode acts as an electronic switch that allows current to flow in only one direction. A single-phase alternator can be rectified with four diodes. Each end of the stator winding is connected to a pair of diodes. One pair passes only positive waveforms and is connected to the battery's positive terminal. The second pair passes only negative waveforms and provides a return path from the battery's negative terminal to the winding. The complete circuit is known as a bridge rectifier. In spite of a constantly reversing current flow in the stator, current flows in only one direction to the battery. In a three-phase alternator, three positive diodes are tied together to form the alternator's positive terminal, which is connected to the positive terminal of the battery. Three more diodes form the alternator's negative terminal.
GENERATOR MAINTENANCE

Most generators have bearings that are sealed for life. Some have grease fittings, which need grease once or twice a year. The brushes on armature-type generators carry the full output current of the generator.
If they are allowed to wear down or stick in their brush holders and make an imperfect contact with their slip rings, arcing can occur, with damage to the slip rings. The brushes need to be checked regularly; some manufacturers recommend every 200 hours. They should be replaced once they are worn to half their original length. They must move freely in their brush holders and have enough spring tension. Brushes should always be replaced back in the same holders.
DIESEL BACKUP

Rudolf Diesel was involved in the then-new technology of refrigeration, with several patents for a method of producing clear ice. Diesel spent several years in Paris working on an ammonia engine, but was defeated by the corrosive nature of this gas at pressure and high temperatures. The theoretical basis of this work was a paper published by N.L.S. Carnot in 1824. Carnot addressed the problem of determining how much work could be accomplished by a heat engine with a repeatable cycle. Carnot's engine used a boiler or other heat exchanger. As the air in a chamber is heated, it expands, driving a piston. The temperature and pressure of the air are higher during expansion than during compression. Since the pressure is greater during expansion, the power produced by the expansion is greater than that consumed by the compression. This results in a net power output that can be used for driving other machinery.
DIESEL ENGINES
Diesel engines are simple in principle and require little routine maintenance, although this maintenance is essential to their longevity. In a diesel engine, a piston compresses air in a cylinder. The compression is measured as a compression ratio: the cylinder volume with the piston at the bottom of its stroke compared with the volume when the piston is at the top of its stroke. The more the air is compressed, the hotter it becomes. At compression ratios between 16:1 and 23:1, the air temperature rises to over 1,000°F (538°C), which is well above the diesel fuel's ignition temperature of about 750°F (400°C). When the piston is near the top of its stroke, diesel fuel is injected into the cylinder under pressure. As the mixture ignites, it raises the temperature and pressure even higher, which drives the piston down the cylinder. This is called the power stroke. Diesels are often called compression-ignition (CI) engines since they do not have an ignition system; the fuel is ignited by the high temperature produced by compressing the air. In a diesel engine, the air and fuel remain separated until the onset of combustion. The fuel is injected late in the compression stroke, when the cylinder pressure is high. Injection pressures range from about 1,600 psi to more than 25,000 psi for modern engines. The amount of fuel supplied depends on load and speed requirements. Air enters through the intake manifold, as in a spark-ignition engine, but the air supply is unthrottled, so the diesel engine draws the same volume of air per revolution regardless of its speed. At idle, the open manifold supplies a large surplus of air, which passes through the engine without taking part in combustion. Typically, idle-speed air consumption is about 100 pounds of air per pound of fuel consumed. At high speeds or heavy loads this ratio drops to about 20:1. With no throttle valve, a diesel breathes easily at low speeds while consuming little fuel; a spark-ignition engine, by contrast, requires a fuel-rich mixture at idle to generate enough power to overcome the throttle restriction. The absence of an air restriction and an ignition system can sometimes lead to a runaway condition in which lube oil enters the combustion chamber and is burned as fuel. Oil getting by worn piston rings can cause a runaway engine that accelerates itself until the air intake is blocked.
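The compression-heating figures above can be checked with the ideal-gas relation for adiabatic compression, T2 = T1 x r^(gamma - 1). This is a minimal sketch with assumed values (70°F intake air, gamma = 1.4 for air); a real engine runs somewhat cooler because of heat loss to the cylinder walls.

    # Ideal adiabatic compression temperature: T2 = T1 * r**(gamma - 1)
    # Assumed values: 70 F (about 294 K) intake air, gamma = 1.4 for air.
    GAMMA = 1.4

    def compression_temp_f(ratio, t_intake_f=70.0):
        t1_kelvin = (t_intake_f - 32) * 5 / 9 + 273.15
        t2_kelvin = t1_kelvin * ratio ** (GAMMA - 1)
        return (t2_kelvin - 273.15) * 9 / 5 + 32

    for r in (16, 18, 23):
        print(f"{r}:1 compression -> about {compression_temp_f(r):.0f} F")

    # Prints roughly 1,150-1,400 F. Heat loss in a real cylinder brings
    # this closer to the 1,000 F figure cited above, still well past the
    # ~750 F ignition temperature of diesel fuel.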
IGNITION AND COMBUSTION
Spark-ignition (SI) engines are fired by an electrical spark timed to occur just before the piston reaches the top of its stroke. Since the complete charge of fuel and air is already present, combustion proceeds quickly in a kind of controlled explosion. The cylinder pressure rises rapidly within a few crankshaft degrees of piston motion, so the volume of the cylinder above the piston changes little between the moment of ignition and the point of peak pressure. This makes SI engines nearly constant-volume engines. Compression ignition, by comparison, takes some time: the fuel spray must vaporize and reach ignition temperature. Fuel continues to be injected during this delay. Once ignition occurs, the accumulated fuel burns quickly, with corresponding increases in cylinder temperature and pressure. The injector continues to deliver fuel through this period of rapid combustion and into the controlled combustion that follows. Then injection stops and combustion enters what is known as the afterburn period. Diesel fuel quality is expressed by the cetane number: the more easily the fuel ignites, the higher the cetane number. The rough combustion associated with ignition lag is sometimes called diesel detonation, although diesels do not detonate the way SI engines do. Some vibration or clatter can occur during start-up, but if the condition persists, there may be injector or fuel-quality problems. In normal operation, with ignition delay under control, cylinder pressures and temperatures rise more slowly than in SI engines but reach higher average levels, and cylinder pressures remain more nearly constant through the expansion (or power) stroke. Because the pressure rise is relatively smooth, diesel engines are sometimes called constant-pressure devices, to distinguish them from constant-volume SI engines.
DIESEL CYCLES
With these exceptions, CI and SI engines operate on similar cycles, consisting of intake, compression, expansion, and exhaust. Four-cycle engines of either type require one up or down stroke of the piston for each of the four tasks. Two-cycle engines compress this into two strokes of the piston, or one crankshaft revolution. Among these are the Detroit Diesels, which use a mechanically driven supercharger mounted in the air inlet to compress the incoming air. When the piston nears the bottom of the power stroke, the exhaust valves open and the exhaust gases exit the cylinder. The descending piston uncovers a set of ports in the cylinder wall, and the pressurized air flows in, expelling the remaining exhaust gases and refilling the cylinder with fresh air. The piston reaches the bottom of its stroke and starts back up the cylinder. The exhaust valves close and the ascending piston blocks off the inlet ports in the cylinder wall. The cylinder is full of clean air and a new compression stroke begins.
In Detroit Diesel two-cycle engines, the blower purges the cylinder without raising its pressure much above atmospheric; the exhaust valve remains open until the ports are closed, to eliminate any supercharge effect. In a four-cycle diesel engine, there are four piston strokes in one complete cycle. Air enters through the open intake valve and fills the cylinder as the piston falls on the intake stroke. The intake valve closes as the piston rounds bottom dead center. Injection begins near top dead center on the compression stroke. The fuel ignites and drives the piston down on the expansion stroke. The exhaust valve then opens and the piston rises on the exhaust stroke, purging the cylinder of the spent gases. As the piston again reaches top dead center, the four strokes of the cycle are complete, taking two crankshaft revolutions. Diesel engines must be sturdier than SI engines. They are assembled from higher-grade components such as anodized pistons, forged crankshafts, and more elaborate oil systems. Diesel manufacturers rely heavily on modular construction; cylinder liners, pistons, rods, valves, and injectors are shared across engine families.
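The constant-volume and constant-pressure idealizations of the two cycles have standard textbook efficiency formulas, and a quick calculation shows why the diesel's higher compression ratio pays off. This is a sketch with assumed, representative ratios, not data from any particular engine.

    # Air-standard cycle efficiencies (ideal, frictionless, gamma = 1.4):
    #   Otto (constant volume):   eta = 1 - r**(1 - gamma)
    #   Diesel (constant pressure):
    #       eta = 1 - r**(1 - gamma) * (rc**gamma - 1) / (gamma * (rc - 1))
    # where r is the compression ratio and rc the cutoff ratio.
    GAMMA = 1.4

    def otto_efficiency(r):
        return 1 - r ** (1 - GAMMA)

    def diesel_efficiency(r, rc):
        return 1 - r ** (1 - GAMMA) * (rc ** GAMMA - 1) / (GAMMA * (rc - 1))

    # Assumed values: a 9:1 gasoline engine, an 18:1 diesel with cutoff 2.
    print(f"Otto,   r=9:         {otto_efficiency(9):.1%}")
    print(f"Diesel, r=18, rc=2:  {diesel_efficiency(18, 2):.1%}")

The ideal results (about 58% and 63%) are far above the real-world 30% and 40% figures quoted below, since they ignore friction and heat loss, but the gap between the two cycles reflects the diesel's advantage.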
FUEL EFFICIENCY
In theory, a diesel engine requires 10,000 volumes of air per volume of fuel oil, but in practice the air volume is increased by a quarter to reduce smoke. Thus the high-pressure injector delivers only 1/12,500 of the cylinder's capacity. The need to inject a fine fuel spray with enough force to penetrate the compressed cylinder air requires injection pressures of 25,000 psi in modern, emissions-rated systems; older engines may operate at 2,500 psi. Fuel-metering parts are manufactured with extreme precision and lapped to tolerances expressed in wavelengths of light. Pump plungers are non-interchangeable, and plungers that have been exposed to direct sunlight cannot be fitted to their barrels without first equalizing the temperatures.
High compression ratios, or large ratios of expansion, give diesel engines greater thermal efficiency. Under the best conditions, a well-designed SI engine utilizes about 30% of the heat from the fuel; the rest is lost through the exhaust, cooling system, lubricating oil, and surrounding air. Thermal efficiencies for diesel engines can reach 40% or more. Gas turbines have higher efficiencies, but only at constant speed. This thermal efficiency, plus good volumetric efficiency and the capability to recycle some exhaust heat by turbocharging, improves fuel economy. A diesel can provide a fuel consumption of 0.35 pound per brake-horsepower-hour when operating near its torque peak; a gasoline engine may burn 0.5 pound of fuel per horsepower-hour under the same conditions. The weight differential between diesel fuel (7.6 pounds/US gallon for No. 2-D) and gasoline (about 6.1 pounds/US gallon) gives the diesel an even greater advantage when consumption is measured in gallons per hour or per mile. Diesel passenger cars and light trucks can deliver 30% to 50% better mileage than the same vehicles with gasoline engines, although there is some trade-off in acceleration and top speed. The high-octane fuels used for SI engines have low cetane numbers; aviation gasoline is a poor diesel fuel, usable only if the compression ratio is raised to generate the necessary ignition temperature. However, ether and amyl nitrate, which would detonate in SI engines, are excellent starting fuels for diesels. Fuel sold in this country must have a cetane number of at least 40. Heavier crudes tend to have higher numbers; lighter, more aromatic crudes can be boosted to this level during refining. Viscosity, or the pourability of the fuel, also affects performance. Lower, more gasoline-like viscosity fuels tend to atomize better and have been shown to generate less exhaust smoke, but low-viscosity fuels also tend to leak past the pump plungers, so that less is available for injection. At room temperature, the viscosities of light diesel fuels sold in this country vary from about two centistokes to more than four. Fuel quality can also affect engine and component life. The abrasive ash left after combustion can collect on injector nozzles, where it disrupts spray patterns; this ash can also increase upper-cylinder and ring wear. U.S. fuels contain about 0.01% ash by weight. Water, when present in sufficient quantities, promotes bacterial growth.
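A quick calculation makes the consumption figures above concrete. The 100 hp load is an assumed value; the fuel weights and specific consumptions are the ones quoted in the text.

    # Fuel use at an assumed 100 hp load, using the figures above:
    # diesel 0.35 lb/bhp-hr at 7.6 lb/gal, gasoline 0.5 lb/bhp-hr at 6.1 lb/gal.
    HP = 100

    def gallons_per_hour(bsfc_lb_per_hp_hr, lb_per_gallon, hp=HP):
        return bsfc_lb_per_hp_hr * hp / lb_per_gallon

    diesel = gallons_per_hour(0.35, 7.6)      # about 4.6 gal/h
    gasoline = gallons_per_hour(0.50, 6.1)    # about 8.2 gal/h
    print(f"diesel:   {diesel:.1f} gal/h")
    print(f"gasoline: {gasoline:.1f} gal/h")
    print(f"diesel burns {1 - diesel / gasoline:.0%} fewer gallons per hour")

Measured in gallons rather than pounds, the diesel's advantage grows from 30% (by weight) to roughly 44%, because diesel fuel is denser.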
RED AND BLUE DIESEL
Since 1994, diesel fuel sold in this country has come in red, blue, and clear varieties. These are three versions of what was formerly known as No. 2 diesel. The 1990 amendments to the Clean Air Act called for a reformulated diesel fuel with significantly less sulfur than the 0.5% then in effect. Sulfur and its compounds are pollutants that cannot be effectively addressed by vehicle emissions systems. The new fuel, which contains 0.05% sulfur by weight, was phased in during 1993 for use in commercial trucks and diesel passenger cars operating on the public roads. It was decided to continue providing high-sulfur fuel for agricultural and construction equipment, railroads, commercial vessels, and diesel-fired heating systems; users of this fuel do not pay the federal highway tax. The Environmental Protection Agency (EPA) made the low-sulfur fuel colorless and had the high-sulfur fuel tinted blue. The blue dye (1,2-dialkylamino-anthraquinone) gives the sulfurous fuel a green tint. Vehicles operated on public highways by government agencies and the American Red Cross (but not other charitable organizations) are exempt from the highway tax, but the fuel for these vehicles falls under the low-sulfur rule. The IRS wanted this untaxed fuel stained red. Diesel fuel sold in California for over-the-road vehicles conforms to federal sulfur regulations but contains no more than 10% aromatics by volume; federal fuel can contain as much as 35%. Aromatics contribute to the particulate solids that blacken diesel exhaust.
STARTING
Most diesel engines are fitted with electric starter motors. Some engines used to power construction machinery may use a small gasoline engine rather than an electric motor as a starter. In cold weather, the heat generated by compression tends to dissipate through the cylinder and head metal, and cold clearances may allow some of the compressed air to escape past the piston rings. Other problems include the effect of cold on lube- and fuel-oil viscosity: the spray pattern coarsens, and friction from the heavy oil between the moving parts increases. Many diesels have a cold-start mode that provides extra fuel to the nozzles and makes combustion more likely. Lube-oil and water immersion heaters can also be mounted permanently on the engine. In cold-weather areas, additional batteries can be wired in parallel. If chilled beyond its cloud point, diesel fuel enters the gelling stage: flow through the system is restricted, filter efficiency suffers, and starting is affected. Racor is one manufacturer of fuel heaters; these combine electric resistance elements with a filter, or use a resistance wire in a flexible fuel line. Most indirect-injection engines use glow plugs as a starting aid. These engines would be extremely difficult to start without some method of heating the air in the prechamber. A low-resistance filament (0.25 to 1.5 ohms, cold) draws a heavy current to generate about 1,500°F at the plug tip. Early types used exposed filaments, which sometimes broke off; improved versions enclose the filament in a ceramic cover. Glow-plug systems are energized by a switch and may use a timer. The systems used in modern automobiles automatically initiate glow-plug operation during cranking and switch the plugs off when the engine starts, using a solid-state module with an internal clock. A pulsed system opens the glow-plug power circuit for successively longer intervals as the engine heats and the timer counts down. In any version, the glow-plug resistance varies with temperature, so the plugs also function as heat sensors. During cold starts, a relay may direct full battery voltage to the glow plugs; as the engine heats, the relay opens. Starting fluid can be used in the absence of intake-air heaters. Aerosol cans are available for injection directly into the air intake, and some engines are fitted with starting devices that take a capsule of fluid; a needle pierces the capsule, releasing the fluid.
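Ohm's law shows why glow plugs are such heavy battery loads. This is a rough sketch using the cold resistance range quoted above and an assumed 12-volt system; actual current falls as the filament heats and its resistance rises.

    # Rough glow-plug electrical load, assuming a 12 V system and the
    # 0.25-1.5 ohm cold resistances quoted above.
    V = 12.0

    for r_cold in (0.25, 0.5, 1.5):
        current = V / r_cold        # Ohm's law, I = V / R
        power = V * current         # P = V * I, dissipated at the plug tip
        print(f"{r_cold:4.2f} ohm plug: {current:5.1f} A, {power:6.0f} W")

A 0.5-ohm plug draws about 24 A (nearly 300 W), and the rising resistance with temperature usefully limits the current as the tip heats, which is also what lets the plugs double as heat sensors.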
COOLING
Most diesel engines are liquid-cooled, with a radiator, a circulation pump, and a thermostat for quicker warm-ups. Stationary and large vehicular engines may include oil and transmission coolers. With the thermostat closed, water flow is limited to a small port in the thermostat and the engine heats quickly. At a predetermined temperature the thermostat opens and circulation is unobstructed.

GOVERNORS
The majority of older diesels employ centrifugal flyball governors, which are part of the injector pump.
The flyball governor was invented by James Watt for his steam engines; instead of opening and closing a steam valve, the diesel governor rotates the injection-pump plungers to deliver more or less fuel per stroke. Pneumatic governors respond to the velocity of air entering the manifold, which is a function of piston speed. Coarse regulation is used in most installations and can result in speed-change peaks of 10% or so; fine regulation cuts this in half, for maximums of 2.5% over or under the desired rpm. Governors may require adjustment of their high- and low-speed limits, cleaning, and eventual replacement. Bearings and pivots can wear, coarsening the regulation, and diaphragms can fail from leaks or age hardening. The CAV governor is a typical mechanical governor, with a manual override and a lever pivoting on an axle. The assembly pivots on command from the throttle. When the engine is running at about half speed, a change in speed is reflected by the flyweights. If the engine accelerates, the weights press outward against their springs and move the lever, reducing fuel delivery; under load the engine slows, the weights move inward, and the lever moves to increase fuel delivery. Pneumatic, or flap-valve, governors operate on the venturi principle: when a moving fluid encounters a restriction, its velocity increases, and the vacuum the venturi draws is a function of the air velocity through it. Air velocity in a diesel engine depends on piston speed, so the venturi-induced vacuum can be used to monitor rpm. The vacuum is conveyed by a tube from the venturi to a diaphragmed chamber at the governor, and the diaphragm movement is transferred to the fuel valve by a link. The application of digital electronics has moved governors from simple speed-control devices to comprehensive engine-management systems.
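For generator duty, the regulation percentages above translate directly into frequency swing. This is a minimal sketch with an assumed 1,800 rpm, four-pole, 60 Hz set; it simply applies the coarse and fine figures quoted in the text.

    # Governor regulation translated into generator frequency swing.
    # Assumed set: 1,800 rpm, 4-pole, 60 Hz; f (Hz) = rpm * poles / 120,
    # which for a 4-pole machine reduces to rpm / 30.
    RATED_RPM = 1800

    for name, reg in (("coarse (10%)", 0.10), ("fine (2.5%)", 0.025)):
        low, high = RATED_RPM * (1 - reg), RATED_RPM * (1 + reg)
        print(f"{name:13s}: {low:.0f}-{high:.0f} rpm "
              f"-> {low / 30:.1f}-{high / 30:.1f} Hz")

Coarse regulation lets a nominal 60 Hz set swing roughly 54 to 66 Hz; fine regulation holds it near 58.5 to 61.5 Hz, which matters for frequency-sensitive loads.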
ELECTRONIC GOVERNORS
Electronic governors are replacing mechanical governors on power-generation, rail-traction, marine-propulsion, and other industrial engines. In large engines, governor functions include stepped speed control, reduced fuel delivery at high turbo-boost pressures, and automatic shutdown in the event of a loss of oil pressure. The first step away from purely mechanical governors was control logic built from ganged relays. Relay logic for on/off control has since been replaced with solid-state programmable logic controllers. The early electronic governors were analog devices that worked like mechanical governors: instead of a force generated by a flyweight against a spring, analog governors sense speed as a voltage signal that depends on engine rpm. This signal is compared to a reference voltage, and the difference controls the fuel flow. These analog governors were in turn replaced with digital computers. Rather than using an analog voltage, digital governors count pulses that pass a sensing head. The computer calculates engine speed, compares it to the programmed speed, and when the two differ generates a command signal to a fuel-control actuator. The computerized engine-control systems used on small and mid-sized engines are more like electronic governors with some self-diagnostic capabilities. A switch on the electronic control module (ECM) or unit (ECU) allows the operator to select the speed-control program; a min-max mode is used for power generation. The ECM may use sensors to measure manifold air pressure (MAP) or manifold air temperature (MAT), and it restrains fuel delivery until the MAP sensor reports that the engine has come up to speed.
Bearing noise means major and expensive repairs. By the time bearings reach this point, the journals have been pounded and bearing fragments have circulated through the engine. The technology exists to detect bearing failures early, while the damage is still minor. Crankpin-bearing clearance checks help, but the best protection is spectroscopic analysis of the lube oil.
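The pulse-counting digital governor described above reduces to a simple feedback loop: measure speed, compare with the set point, and trim the fuel command. The following is a minimal, hypothetical sketch; the names, gains, and 60-tooth pulse wheel are illustrative assumptions, not any manufacturer's design, and a real ECM adds limits, boost compensation, and diagnostics.

    # Minimal digital governor: proportional-integral speed control.
    # All names, gains, and sensor details here are illustrative.
    SET_RPM = 1800
    KP, KI = 0.0004, 0.0010     # assumed gains, fuel fraction per rpm of error
    DT = 0.01                   # control period, seconds

    def speed_from_pulses(pulses, teeth=60, dt=DT):
        # Pulses counted past the sensing head in one period -> rpm.
        return pulses / teeth / dt * 60.0

    class Governor:
        def __init__(self):
            self.integral = 0.0

        def step(self, pulses):
            error = SET_RPM - speed_from_pulses(pulses)
            self.integral += error * DT
            fuel = 0.5 + KP * error + KI * self.integral   # 0..1 command
            return min(1.0, max(0.0, fuel))                # actuator limits

    # At 1,800 rpm a 60-tooth wheel yields 18 pulses per 10 ms period;
    # fewer pulses (the engine slowing under load) raise the fuel command.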
TURBOCHARGERS
A turbocharger is an exhaust-powered supercharger. The exhaust stream impinges on the turbine wheel and provides the energy to turn the compressor. The boost is usually limited to 10 or 12 psi, but that is enough to increase engine output by 30% to 40%. Because the energy used would otherwise be wasted as exhaust heat and noise, turbocharging is the least expensive way to enhance performance. Turbochargers develop maximum boost at high engine speeds and loads.
AIR SYSTEM
Diesel engines run with open manifolds, restricted only by the pressure drop across the air filter. Air intake is about 200 cubic feet per pound of fuel for four-cycle engines and significantly more for two-cycle engines. Most of the air cycles through the engine without taking part in combustion, but it must be filtered to remove abrasive dust. The most lethal particles are between 10 and 20 microns in diameter; larger particles tend to pass out the exhaust without doing much damage, and smaller particulates are less aggressive, although they can still cause damage. The air cleaner cannot stop all particulates, and some of the smaller ones get through, which is why engines wear rapidly in dusty environments. Air filters may be wire mesh, fabric, foam, or fiber. The filter can be dry, like the pleated-paper types, or oil-wetted to improve particulate adhesion. The oil-bath air cleaner combines oil-wetted filtration with inertial separation in two stages. Air enters the top of the unit through the precleaner, or cyclone, where internal vanes cause the air stream to rotate, separating out the larger and heavier particulates. The air then passes through a central tube to the bottom of the filter, where it reverses direction and some of the remaining particulates drop into the oil. Polyurethane foam filters require re-oiling in normal operation and tend to go dry in storage as the oil migrates to low spots; unless oiled, these filters are ineffective. Pleated-paper filters work better when slightly dirty, but they cannot really be cleaned, since solvents cause the fibers to swell. Overfilling an oil-bath air cleaner can cause the engine to run away on oil drawn from the reservoir. Some air cleaners include a pressure sensor that alerts the operator when the filter needs service. A manometer can also be installed on the intake manifold, downstream of the filter; the pressure drop across a new filter should be about 2 inches of water at normal engine speeds.
MAINTENANCE
Most diesels will run trouble-free for thousands of hours with regular filter and oil changes. There may be several grease points that need periodic attention. Belts to auxiliary equipment must be checked and kept tight (less than 1/2 inch or 13 mm of deflection under moderate finger pressure at the center of the longest belt run). The cooling-system liquids need periodic replacement, and starter batteries should be checked for water level if they are not sealed. Most manufacturers have specific schedules for overhaul procedures. Problems such as difficult starting, changes in oil pressure or water temperature, smoky exhaust, vibration, or noise should be resolved quickly, since delays may lead to expensive repairs. The efficient operation of a diesel engine depends on maintaining compression. Small amounts of dust passing through a ruptured air filter or a leaking air-inlet manifold can lead to rapid piston-ring wear and scoring of the cylinders. Small particles can become embedded in the surfaces of pistons and bearings, which accelerates wear. If a filter is not changed and becomes plugged, it will restrict airflow to the engine. This limits the oxygen reaching the cylinders, and combustion, especially at high loads, suffers. The engine loses power and the exhaust shows black smoke from improperly burned fuel. Pistons, valves, turbochargers, and exhaust passages carbon up, reducing efficiency and leading to other problems; the engine may overheat and even seize. Air filters should be kept clean, with the change interval depending on operating conditions. Most small diesels have replaceable paper-element filters. Some use the oil-bath type, which forces the air to change direction over a reservoir of oil; particles of dirt are trapped in the oil. The air then flows through a fine mesh, which depends on the oil mist drawn from the reservoir to keep it lubricated and effective. As the reservoir fills with dirt, the oil becomes more viscous, less oil mist is drawn up, and the filter becomes less efficient. Periodically the oil in the reservoir must be refreshed and the pan cleaned with diesel fuel or kerosene. The screen should also be flushed with diesel fuel or kerosene and blown dry. Refill the reservoir with oil, but do not overfill it; excess oil can get into the engine.
FUEL SYSTEM
Conventional diesel fuel is a middle distillate, slightly heavier than kerosene or jet fuel. The composition varies with the source crude, refining
processes used, additive mix, and regulatory climate. Fuel sold in this country must conform to standards set by the Environmental Protection Agency, or to the more rigorous version of those standards adopted in California. High-cetane fuels provide good ignition: the fuel ignites at a low temperature and burns quickly, for minimum ignition lag. Engines start easily on high-cetane fuels and, once started, tend to run smoothly. Fuel injectors contain precise parts that can be damaged by dirt or traces of water, so it is important to keep the fuel clean. According to CAV, one of the largest manufacturers of fuel-injection equipment, 90% of diesel engine problems result from contaminated fuel. Minuscule particles of dirt can lead to the seizure of injection-pump plungers, the scoring of cylinders and plungers, and plugged or worn injector nozzles. Water in the fuel destroys the lubrication of injection equipment and can result in seizures; in the combustion chamber, the result is misfiring and lowered performance. Water droplets in an injector can flash into steam in the high temperatures of a cylinder under compression, creating an explosive force that can blow the tip off the injector. Water in the fuel system will also cause rust to form on many parts. Fuel-system malfunctions can produce multiple effects. An air leak in the system can keep the injection pump from priming and prevent the engine from starting. Less serious leaks introduce bubbles into the fuel stream and cause hard starting, refusal to idle, and loss of power; leaks can also cause engine speed to rise and fall, or hunt. A restricted fuel filter can affect idle quality, throttle engine output, and cause the engine speed to surge. Inappropriate or contaminated fuel can make the engine refuse to start, misfire, and lose power. A fuel sample can be taken upstream of the filter and allowed to settle for a few minutes in a glass container. Cloudiness is an indication of water; organic contamination appears as gel-like particles floating on the surface. Placing a few drops of fuel between two pieces of glass makes it easier to see impurities. Several other tests of fuel quality can be made. Fuel that burns cleanly in an oil lamp, emitting little smoke, will also burn cleanly in the engine. Water can be detected by wetting a strip of paper with fuel, setting it alight, and listening for the crackle as any water bursts into steam. Mixing a small quantity of fuel with sulfuric
acid will release the carbon and resins, which appear as black spots; the fewer of these spots, the better. Litmus paper can be used to show the presence of acids. If fuel quality is unknown, it is helpful to determine the cetane value. Bacteria can grow in apparently clean diesel fuel, creating a film that can plug filters, pumps, and injectors. The microbes live at the fuel/water interface, using both liquids to survive, and the dark, quiet, nonturbulent environment of a fuel tank provides favorable growth conditions. Two types of biocide are available to kill these bacteria: one is water-soluble, the other diesel-soluble, which is preferred. Some diesel-fuel treatments contain alcohol to absorb water, but alcohol attacks O-rings and other nonmetallic parts in some fuel systems. Biocides can help correct problems after they develop, but preventive measures should be used to avoid contaminated fuel. A length of clear plastic tubing, plugged off with a finger, can be used to bring up a sample of fuel from all levels of a barrel, allowing an examination for contamination. Also, filter all fuel, using a funnel with a fine mesh or one of the multistage filter funnels. Regular samples from the bottom of the fuel tank can be used to check for contamination; at the first sign, drain the tank or pump out the fuel until no trace of contamination remains. A dirty batch of fuel should be discarded. When refueling, fill the fuel tank to the top. This eliminates air space and cuts down on condensation in the tank.
FUEL FILTERS
Fuel filters are the defense against contaminated fuel; their function is to deal with the minor contamination that escapes preventive measures. A diesel engine may have both a primary and a secondary fuel filter. The primary filter is the main defense against water and larger particles in the fuel supply, but it does not guard against microscopic particles of dirt and water; these are caught by the secondary filter. A primary filter is generally a sedimenter designed to separate water from fuel. Sedimenter filters consist of a bowl and a deflector plate. The incoming fuel hits the deflector plate, then flows around and under it to the filter outlet, while water droplets and large particles of dirt settle in the bowl. Better-quality filters then pass the fuel through a relatively coarse filter element of 10 to 30 microns. (A micron is one millionth of a meter, or about 0.00004 inch.) Some filters have a sensing device that sounds an alarm if water reaches a certain level; others have a float that shuts off the flow of fuel to the engine at that point. Some engines have two or more primary filters in parallel, which allows either filter to be closed off and changed without shutting down the engine. A secondary filter is designed to remove very small particles of dirt and water droplets. It cannot handle major contamination, because its fine mesh would plug up. Secondary filters are usually of the spin-on type, with a specially impregnated paper element to trap dirt. Water droplets are too large to pass through the paper and adhere to it; as more water is trapped, the droplets settle to the bottom of the filter, from which they must be periodically drained. The filter mesh is generally in the range of 7 to 12 microns; in some filters it may be as fine as 2 microns.
LUBRICATION
Lubricating oil in a diesel engine works much harder than in a gasoline engine because of the higher temperatures and pressures involved, especially in modern lightweight, turbocharged diesels. Diesel engine oil must also contend with more acid and soot formation. Diesel fuels contain traces of sulphur. When a diesel engine is operated for long periods at light loads, it runs cool, which causes moisture to condense in the engine. This moisture combines with the sulphur to form sulphuric acid, which attacks engine surfaces. Low-load, cool running also generates more carbon (soot) than normal. This carbon can cause piston rings to stick and coats valve surfaces and stems, leading to a loss of compression and numerous other problems. Diesel engine oils are specially formulated to hold soot in suspension and handle the acids and other harmful by-products of combustion. Using the correct oil in a diesel engine is important; many oils are designed for gasoline engines and are not suitable for diesels. The American Petroleum Institute (API) uses the letter C (for compression ignition) to indicate oils for diesel engines and the letter S (for spark ignition) to indicate oils for gasoline engines. The C or S is followed by another letter that indicates the additives in the oil. An oil rated CC, CD, CE, or CF-4 is suitable for diesel engines; Detroit Diesels use CD-II. As the oil lubricates the engine, its additives and detergents are gradually used up, so the oil must be replaced at regular intervals, more frequently than in gasoline engines. High-sulphur fuel is used in many Third World countries and much of the Caribbean, and extended periods of low-load operation increase the soot content, so oil-change intervals there should be shortened to about 50 hours. Each time the oil is changed, a new filter should be installed to clear the engine of contaminants. Without regular oil changes, the acids formed start to attack engine surfaces and the carbon overpowers the detergents in the oil. This can result in sludge in the crankcase and oil cooler. The sludge begins to plug narrow oil passages and areas through which the oil moves slowly, causing a loss of oil pressure and lubrication. Major mechanical breakdowns can be avoided by regular changes of oil and filters; one major bearing manufacturer estimates that almost 60% of bearing failures are due to dirty oil or a lack of oil.
CENTRIFUGES AND BYPASS FILTERS
A typical full-flow engine oil filter has a mesh of about 30 microns. This is relatively coarse, since a finer mesh would plug up faster and need changing more often. Particles smaller than 30 microns pass through the filter and circulate with the oil. Studies have shown that the microscopic particles that pass through the filter and do the most damage are in the 10- to 20-micron range. Two methods are used to trap these particles: centrifuges and bypass filters. A centrifuge is a bowl mounted on bearings, with small nozzles at the base of the bowl. Oil fed to these nozzles under pressure from the engine oil pump causes the assembly to spin. The centrifugal force generated by the spinning bowl throws the particles of dirt out onto the centrifuge's outer housing, where they accumulate as a dense, rubbery mat; periodically the outer housing is removed and cleaned. Centrifuges can remove particles down to 1 or 2 microns in size and are usually found on engines of 100 to 200 hp or more. Bypass filters are used on engines of all sizes. They use a fine-mesh filter element which, depending on the element, can trap particles down to 1 micron in size. A restriction built into the filter holds the flow rate at a level that will not cause a drop in engine oil pressure. One type of bypass filter, by TF Purifiner, has a heating element to vaporize water or fuel in the oil. A centrifuge or bypass filter results in cleaner oil, with a significant reduction in engine wear; engine life in many applications is doubled. Many larger engines have one or the other as standard equipment, but few smaller engines have either.
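A rough calculation shows the separating force a centrifuge can apply. The bowl speed and radius below are illustrative assumptions, not a specific manufacturer's figures.

    # Rough centrifugal acceleration at the bowl wall of an oil centrifuge.
    # The speed and radius are assumed, representative values.
    import math

    rpm = 6000                      # assumed bowl speed
    radius_m = 0.05                 # assumed bowl radius, about 2 inches

    omega = 2 * math.pi * rpm / 60  # angular velocity, rad/s
    accel = omega ** 2 * radius_m   # a = omega^2 * r, m/s^2
    print(f"about {accel / 9.81:.0f} g at the bowl wall")

On these assumptions the bowl wall sees roughly 2,000 g, which is why a centrifuge can strip out 1- to 2-micron particles that a 30-micron full-flow element lets through.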
OIL ANALYSIS
Oil analysis has become routine for larger engines. The technology was developed for diesel locomotives and perfected by the Navy during the fifties. It can predict engine failure from the trace materials found in the oil. Spectroscopic analysis involves vaporizing an oil sample in an electric arc; each element gives off light with its own frequency signature. Normally some 16 elements are tracked, in concentrations as low as one part per million. This information, along with engine history and a user profile, gives a good representation of engine condition. The cost of the service is nominal, and the transaction can be handled by mail.
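The value of oil analysis is in the trend, not any single reading. The sketch below is purely illustrative of the kind of check a lab applies to wear metals; the element, readings, and limits are invented for the example.

    # Illustrative wear-metal trend check on successive oil samples.
    # Readings and thresholds here are assumptions, not lab standards.
    SAMPLES_PPM = [8, 9, 11, 12, 19]   # e.g., iron, ppm, oldest to newest
    ABS_LIMIT = 40                     # assumed absolute alarm level, ppm
    JUMP_FACTOR = 1.5                  # assumed flag on a 50% jump

    def flag(samples, abs_limit=ABS_LIMIT, jump=JUMP_FACTOR):
        latest, previous = samples[-1], samples[-2]
        if latest >= abs_limit:
            return "over absolute limit"
        if latest >= previous * jump:
            return "rapid upward trend"
        return "normal"

    print(flag(SAMPLES_PPM))   # "rapid upward trend": 12 -> 19 ppm

A sudden rise in a bearing metal, even at low absolute levels, is the early warning described above, well before bearing noise announces major damage.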
COLD STARTS
Many emergency diesel generators are installed outdoors in metal enclosures, often ventilated rain shelters that house the engine, generator, and controls. Since diesel engines rely on the compression of air in the cylinders, the compression must create enough heat to ignite the injected fuel. When the engine is cold, combustion may not occur, or it may occur unevenly across the cylinders. Failed or rough starting causes premature wear on the engine. After starting, the engine should be warmed up before the electric load is applied. In an outage, however, the engine may be loaded as soon as it reaches operating speed, and loading the engine before it reaches its normal temperature places stress on the bearings and other moving parts.
ENGINE HEATERS
Most diesel generators have electric heaters to ensure that the engines will start in cold temperatures. These are usually electric resistance heaters connected to the cooling-water jacket that surrounds the cylinders, and they are typically set to maintain a 130-170°F water temperature. Water returning from the engine is about 10°F cooler than the water leaving the heater. The heaters are generally sized at about 1 kW of heater capacity for each 100 kW of engine capacity. The heaters also reduce condensation and corrosion in the engine. Alternative methods involve the use of heat pumps for water heating, solar water heating, improved enclosure insulation, and solar electricity to supply the resistance heaters. Air-to-water heat pumps for jacket-water heating can reduce loads during warm days and nights, but they require electric resistance backup when temperatures drop below about 38°F. Solar electricity from photovoltaics can power the existing heaters, but may not be cost-effective at the power levels needed. Solar water heating is feasible, but the high supply and return temperatures needed reduce the efficiency of the solar water-heating system. Evacuated-tube solar collector systems can provide the high temperatures needed with high efficiency, but they are not generally capable of disconnecting from the load under full-sun conditions when the generator is running.
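The sizing rule above is easy to apply. This is a sketch of that rule only; the comparison with the measured demands reported later is a hedged observation, since installed heater capacity exceeds average thermostat-cycled demand.

    # Jacket-water heater sizing by the rule of thumb above:
    # about 1 kW of heater per 100 kW of engine rating.
    def heater_kw(engine_kw):
        return engine_kw / 100.0

    for rating in (250, 350, 750, 800):
        print(f"{rating:4d} kW generator -> ~{heater_kw(rating):.1f} kW heater")

A 750-kW set would carry roughly a 7.5-kW heater by this rule; the 1.4- to 2.4-kW figures measured in the field study below are lower, presumably because the heaters cycle on and off under thermostat control rather than running continuously.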
SOLAR AIR HEATING
One approach to reducing the heat loss from an electrically heated engine is to use solar air heating for the enclosure. The solar air-heating system provides a thermal blanket of air around the engine, which reduces conduction and radiation of heat from the engine. The air is solar-heated to as much as 110°F. The jacket water returning to the heater is then at a higher temperature and requires less energy for reheating by the electric heater. Engine manufacturers' warranties are not affected, since there is no direct connection between the engine and the solar air-heating system. The maximum air temperature within the enclosure must be kept below the upper ambient operating temperature of the engine.
Heating the enclosure also puts warm air in the area of the engine air intake. During an engine start, this warm air helps to ensure high compression-cycle temperatures, quick combustion, and a rapid start. The volume of warm, solar-heated air available in an enclosure during the start cycle is sufficient for about 15 to 30 seconds of operation. The mass of the generator equipment, which can be 5 tons for a 350-kW diesel generator alone, together with the structure, acts as a thermal mass that maintains elevated temperatures in the enclosure after solar heating stops. Heating the oil in the sump keeps the lubricating oil in a less viscous state, which speeds lubrication of the bearing surfaces during start-up. When there is insufficient lubrication on the bearing surfaces during the initial revolutions of the engine, wear is high; reducing the viscosity of the oil by heating accelerates the flow of oil to critical wearing surfaces.
APPLICATION AND INSTALLATION
Solar air heating has been applied to standby generators at the U.S. Geological Survey headquarters, where solar thermal tile air-heating systems were designed to reduce the electricity used by the jacket-water heaters on these engines. One Cummins natural-gas-fired generator sits in a well-insulated enclosure whose walls are weather-tight aluminum panels insulated with 1.5-inch-thick fiberglass behind perforated aluminum sound-dampening panels. During October, with no solar heating, the temperature in this enclosure varied between 56 and 71°F as the outside ambient temperature varied from 23 to 62°F; the enclosure temperature generally stayed within 7 degrees of the daily high. Other generators on the site are in fully ventilated enclosures with no insulation and open louvers, where the temperature is usually within a few degrees of the outside air. Measurements of the electric energy used by the engine heaters, taken between July and December, showed that the heaters for the 750-kW generator in the well-insulated enclosure drew a total of 1.4 kW in air temperatures near 70°F, rising to 1.9 kW at 45°F. Each of the 800-kW generators in the fully ventilated enclosures required 2.4 kW in 70°F ambient air. The average daily high and low temperatures through the year in Washington, D.C., range from 26 to 89°F. Based on the average annual temperature in Washington, D.C., of 57°F, it was determined that heater demand could potentially be cut from 2.9 kW to 1.45 kW. Annual electrical energy use for the 750-kW generator in the tight enclosure was 13,962 kWh (48 million Btu), while each of the 800-kW generators in the fully ventilated enclosures was expected to use 25,147 kWh (86 million Btu) per year. (For comparison, typical electric energy use for hot-water heating in a single-family home is about 3,017 kWh per year.) At an average cost of $0.046/kWh, electric resistance heating costs $642 per year for the 750-kW generator and $1,157 for each of the two 800-kW generators.
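The annual cost figures follow directly from the measured energy use and the quoted rate:

    # Checking the annual heater cost figures above.
    RATE = 0.046   # $/kWh, the average cost quoted in the text

    for name, kwh in (("750-kW set, insulated enclosure", 13962),
                      ("800-kW set, ventilated enclosure", 25147)):
        print(f"{name}: {kwh} kWh x ${RATE}/kWh = ${kwh * RATE:,.0f}/yr")

    # 13,962 kWh -> $642/yr and 25,147 kWh -> $1,157/yr, matching the text.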
The 750-kW generator in the insulated 11' × 23' × 10'-high enclosure is served by a 40' × 6' solar tile system located 160 feet from the enclosure. No dedicated thermal storage was used, but the mass of the generator and enclosure provides some storage. The system delivers solar-heated air when the sun is up and shuts off when collector temperatures drop below the enclosure temperature. Another system heats a 250-kW generator in a smaller, fully ventilated enclosure, 4.5' × 10' × 5' high; its solar tile field is 30' × 8' and sits 35 feet from the enclosure. In these systems, air is moved by a 240-watt, 10-inch tubeaxial fan in the solar tile support structure, through a 6-inch insulated duct, to the enclosure. The duct is buried underground in a plastic liner; for the 40' × 6' system, a 120-foot portion of the duct runs above grade but below a ground cover, a routing chosen to prevent trenching damage to tree roots. Both systems use switches to disconnect the solar fan when the generator is running, preventing excess heat around the operating engine; the engine's radiator fan and combustion air move almost 15 times more ambient air through the enclosure when the engine is operating than the solar fan delivers for standby heating. The delivered air temperature is held below 110°F, since some diesel fuels can degrade above this temperature, and automatic controls shut off the fan if temperatures in the space exceed 130°F.
The 40' × 6' and 30' × 8' solar heating systems use a diamond-slate solar thermal tile. The 12" × 12" glazing tiles are installed over corrugated
metal absorbers to form a weather-tight collector surface. The tiles are fastened like slate roofing and supported on a structure of self-framing walls and corrugated steel roof deck. The systems are tilted to 45° and face due south. As the air flows through the channels between the corrugated absorbers and the tiles, it is heated to about 70°F above ambient at an airflow rate of 1 cfm per square foot of tile surface; the temperature rise can be increased to about 100°F above ambient by reducing the airflow rate. A differential controller senses when the collector temperature drops below the enclosure temperature and shuts off the fan. There was a 10-12°F temperature drop through the 160-foot duct, with a 55-70°F difference between the solar-heated air at the fan and ambient mid-day air. In October of 2001, an airflow of about 200 cfm in the 40' × 6' system produced a temperature rise at the fan of 60-75°F above ambient, with ambient temperatures reaching 85°F. At 2 p.m. the collector fan temperature was 145°F, with the delivered air temperature into the enclosure at 122°F; the thermally activated outside-air damper that limits the delivered air temperature to 110°F had not yet been installed. On one November day with the outside air at 56°F, the delivered air temperature to the enclosure was 98°F.
GAS TURBINES
Small gas turbines, or micro gas turbines, are increasingly used as backup or auxiliary power sources. The concepts of the gas turbine and the steam turbine appeared at about the same time; a 1791 patent for the steam turbine described other fluids or gases as potential energy sources. John Barber's idea for a gas turbine was a unit in which gas produced from heated coal was mixed with air, compressed, and then burnt, providing a high-speed jet directed onto the radial blades of a turbine wheel. Earlier developments in this area included Branca's impulse steam turbine in 1629, Leonardo da Vinci's smoke mill in 1550, and even Hero of Alexandria's reaction steam turbine in 130 BC. These early versions of the gas turbine were really turboexpanders, since the source of compressed air or gas was a by-product of a separate process. The concepts were turned into practical working equipment in the late 19th century by Charles de Laval and others, whose units employed an impulse-type turbine wheel with expanding nozzles. The Industrial Revolution needed the power of steam turbines, and the technology expanded to gas turbines, gas-generator compressors, and power-extraction turbines. The axial-flow compressors in today's gas turbines resemble a reaction steam turbine with the flow direction reversed.
POWER TURBINES
In 1905, a gas turbine and compressor unit at the Marcus Hook refinery of the Sun Oil Company near Philadelphia, PA, provided 4,400 kilowatts of hot pressurized gas and 900 kilowatts of electricity. The first electricity-generating turbine for a power station appeared at Neuchatel, Switzerland, in 1939. This was a 4,000-kilowatt turbine with an axial-flow compressor that delivered air at 50 pounds per square inch to a single combustion chamber driving a multi-stage reaction turbine; excess air was used to cool the exterior of the combustor and to heat air for the turbine. An early utility gas turbine in the U.S. was installed at the Huey Station of the Oklahoma Gas & Electric Company in Oklahoma City. This 3,500-kilowatt unit, installed in 1949, was a simple-cycle gas turbine with a fifteen-stage axial compressor, six straight flow-through combustors placed circumferentially around the unit, and a two-stage turbine.
During World War I, the reciprocating gasoline engine was refined for the small, light aircraft of the time. Gas turbines were big and bulky, with too large a weight-to-horsepower ratio for aircraft power plants, but the turbocharger became an addition to the aircraft piston engine. The exhaust-driven turbocharger was developed in 1921, which led to the turbocharged piston-engine aircraft of World War II. In 1937, the British Thomson-Houston Company built and tested Frank Whittle's jet engine, which had a double-entry centrifugal compressor and a single-stage axial turbine. A turbojet engine with a compound axial-centrifugal compressor similar to Whittle's design and a radial turbine was built by the German aircraft manufacturer Heinkel; in 1939, an aircraft powered by this engine made the first flight of a jet-powered aircraft. During the war years, various changes were made in the design of these engines.
Radial and axial turbines were used, along with straight-through and reverse-flow combustion chambers and axial compressors. The compressor pressure ratio advanced from 2.5:1 in 1900 to 5:1 in 1940 and 15:1 in 1960, and is now approaching 40:1. Since World War II, improvements made in aircraft gas-turbine jet engines have been transferred to stationary gas turbines. After the Korean War, Pratt & Whitney Aircraft provided the crossover from the aircraft gas turbine to the stationary gas turbine. In 1959, Cooper Bessemer installed the world's first aircraft-derivative industrial gas turbine in a compressor drive; this unit provided 10,500 brake horsepower (BHP) for a pipeline compressor. Airborne units are referred to as jets, turbojets, turbofans, and turboprops; land- and sea-based units are referred to as mechanical-drive gas turbines. Jet engines function as gas generators in which the hot gases are expanded either through a turbine to generate shaft power or through a nozzle to create thrust. Some gas generators expand the hot gases only through a nozzle to produce thrust; these units are identified as jet engines or turbojets. The turbojet is the simplest form of gas turbine, since the hot gases generated in the combustion process simply escape through an exhaust nozzle to produce thrust. Jet propulsion is the most common use of the turbojet, but it has been adapted to drying applications, supersonic wind tunnels, and service as the energy source in a gas laser. Gas turbines that expand some of the hot gas through a nozzle to create thrust and the rest through a turbine to drive a fan are called turbofans. The turbofan combines the thrust provided by expanding the hot gases through a nozzle (as in the turbojet) with the thrust provided by the fan, which acts as a ducted propeller. In recent designs the turbofan approaches the turboprop, in that nearly all the gas energy is converted to shaft power to drive the ducted fan. When most of the hot gas passes through the turbine to drive the compressor and an attached propeller, with no thrust from the gas exiting the exhaust nozzle, the unit is called a turboprop. Turboprops have much in common with land- and sea-based gas turbines. The engines used in aircraft may be turbojets, turbofans, or turboprops, but they are commonly called jet engines. Since turboprops use the gas turbine to generate shaft power, they can be used wherever there is a need for large amounts of horsepower. At the 1967 Indianapolis 500, a Pratt & Whitney turboprop-powered
car led the race for 171 laps but suffered a gearbox failure on the 197th lap. The car had an air-inlet area of 21.9 square inches; race officials later restricted the air-inlet area to 12.99 square inches, which effectively eliminated gas turbines from racing. Some engines evolved from aircraft engines, but most land-based gas turbines were derived from the steam turbine. Like steam turbines, these gas turbines have large, heavy, horizontally split cases and operate at lower speeds and higher mass flows than aircraft units of equivalent horsepower. Gas turbines in the small and intermediate horsepower range incorporate features of both aircraft and heavy industrial gas turbines. By the mid-1960s, the U.S. Navy was installing gas turbines for ship propulsion. The first combat ship to use a gas turbine was the USS Asheville, a patrol gunboat commissioned in 1964. Larger ships like the Arleigh Burke-class destroyers use four aircraft-derived gas turbines, totaling 100,000 shaft horsepower, as the main propulsion units. At the end of the 1990s, the U.S. Navy had over 140 gas-turbine-propelled ships, and 27 navies of the world operated over 330 ships with about 800 gas turbines. By 1993, about 25,000 megawatts of electric power were generated by gas turbines, and many of these facilities use cogeneration to recover waste heat from the turbine exhaust. Gas turbines have also been used to power automobiles, trains, and tanks; the Abrams tank has a gas-turbine engine that moves the 63-ton vehicle at over 40 miles per hour on level ground.
TURBINE CONFIGURATIONS
Gas turbines come in different configurations, with single or dual shafts and hot- or cold-end drives. A gas turbine is made up of a gas generator and a power-extraction turbine. The gas generator consists of a compressor, a combustor, and a compressor-turbine to drive the compressor; the power-extraction turbine drives the external load. The compressor provides the high-pressure, high-volume air needed, which is heated in the combustor and expanded through the turbine section. Both axial and centrifugal compressors are used: centrifugal compressors in smaller units, axial compressors in medium- and high-horsepower applications. The energy developed in the combustor by burning fuel
under pressure is the gas horsepower (GHP). In turbojets, the gas horsepower that is not used to drive the compressor is converted to thrust. In turboprop, mechanical-drive, and generator-drive gas turbines, the gas horsepower is used by the power-extraction turbine to drive the external load. The gas horsepower may be expanded through the remaining turbine stages in a single-shaft machine, or through a free power turbine in a split-shaft machine. A single-spool, split-output-shaft gas turbine is also called a split-shaft mechanical-drive gas turbine. The front shaft drives the compressor and the rear shaft drives the output load; the rear shaft comes off a free power turbine. The compressor/turbine shaft is not physically connected to the power output shaft, but is coupled aerodynamically. This aerodynamic coupling, also known as a liquid coupling, provides easier, cooler starts for the turbine components and allows the gas turbine to reach self-sustaining operation before it drives the load; the gas turbine can idle at low speed without the driven equipment rotating. The output shaft may be an extension of the turbine output (hot-end drive) or an extension of the compressor shaft (cold-end drive). At the turbine end, exhaust gas temperatures may reach 1,000°F (538°C), and there are exhaust-gas turbulence and maintenance-accessibility problems. At the cold end, accessibility is improved and the temperature is ambient, but the inlet duct must be free of turbulence for proper turbine operation: the shaft and generator configuration must not impede a uniform, vortex-free flow through the inlet duct, since inlet turbulence can cause surges that can destroy the turbine blades. The dual-spool, split-shaft gas turbine uses high- and low-pressure compressors and turbines. In this type of unit there are three shafts, each operating at a different speed. Dual-spool units are used for compressor, pump, and generator drives in the higher horsepower ranges. Aerodynamic (liquid) coupling is used in compressor and pump drives as well as electric generator drives, and this arrangement also allows the power turbine to operate at the same speed as the driven equipment. In generator-drive applications the power turbine may operate at either 3,000 or 3,600 rpm to match 50-cycle or 60-cycle generators; centrifugal compressor and pump applications usually run in the 4,000 to 6,000 rpm range. Matching the speeds of the driver and driven equipment eliminates the need for a gearbox. Gas turbines are used in the Alyeska pipeline to pump about 2 million barrels of crude oil per day some 800 miles to Valdez, Alaska.
Aero-derivative single-spool, split-output-shaft gas turbines are used to drive the large centrifugal pumps. Aero-derivative gas turbines are also used in the 900-mile Saudi Arabian East-West pipeline.
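The 3,000 and 3,600 rpm power-turbine speeds above come straight from the synchronous-speed relation f = poles x rpm / 120. A short check, assuming the two-pole machines typical of direct-coupled generator drives:

    # Generator speed vs. electrical frequency: f = poles * rpm / 120.
    # The two-pole assumption matches the usual direct-drive case.
    def frequency_hz(rpm, poles=2):
        return poles * rpm / 120.0

    print(frequency_hz(3600))   # 60.0 -> 60-cycle power
    print(frequency_hz(3000))   # 50.0 -> 50-cycle power

Any other power-turbine speed would require a gearbox or frequency conversion, which is the point of matching driver and driven speeds.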
LUBRICATION Lubrication reduces friction between the rotating and stationary bearing surfaces and removes excessive heat from those surfaces. The bearings may be hydrodynamic or anti-friction types. The lubrication in a hydrodynamic bearing converts sliding friction into fluid friction. Anti-friction bearings work on rolling friction. The shaft load is supported by the rolling elements and bearing races in a metal-to-metal contact. Most heavy frame gas turbines use hydrodynamic bearings with mineral oil while the aircraft derivative gas turbines use anti-friction bearings with a synthetic oil. Mineral oils are distilled from petroleum crude oils and are less expensive than synthetic lubricants. Synthetic lubricants do not occur naturally but are made by reacting organic chemicals, such as alcohol or ethylene with other elements. Synthetic lubricants are used in high temperature applications of less than 350°F (175°C) or where fire-resistant qualities are required. The bearings in gas turbines are lubricated by a pressure circulating system. This consists of a reservoir, pump, regulator, filter, and cooler. Oil in the reservoir is pumped under pressure through a filter and oil cooler to the bearings and then returned to the reservoir for reuse. In cold climates a heater in the reservoir warms the oil prior to startup. The reservoir also serves as a deaerator. As the lubrication oil moves through the bearings, it can entrap air in the oil. This results in oil foaming. The foam must be removed before the oil is returned to the pump or the air bubbles may result in pump cavitation. To deaerate the oil the reservoir surface area must be large enough, so screens and baffles may also be built into the reservoir. Filters remove the wear particles from the oil. While 5 or 10 micron filters can be used for running conditions, 1 to 3 micron filters are used for break-in periods or after overhauls. Redundant oil filters are used along with a three-way-transfer valve. If the primary filter clogs, the transfer valve is switched over to the clean filter. Wear particles from the pump and gas turbine bearings
accumulate in the filter element along with temperature-related oil degradation and oil additives. This may create a sludge that accumulates in the filter. As the filter clogs, the differential pressure across the filter increases. An oil pressure differential gauge is used for local readout, and a differential pressure transducer for remote readout and alarm. The differential pressure alarm setting is typically 5 psid. Regulators provide a constant pressure level in the lube system. These regulators allow the operation of a secondary pump for preventive maintenance. Lube oil coolers remove heat from the oil before it is re-introduced into the gas turbine. The amount of cooling required depends on the friction heat generated in the bearings, heat transfer from the gas turbine to the oil by convection and conduction, and heat transfer from the hot gas path through seal leakage. The oil is cooled to 120°F-140°F (50°C-60°C). To maximize the heat transfer, fins are installed on the outside of tubes and turbulators are placed inside each cooling tube. The turbulators transfer heat from the hot oil to the inner wall of the cooling tubes and the fins help dissipate this heat. Cooling media may be either air or a water/glycol mix. Air/oil coolers are used in desert regions, while shell-and-tube coolers are found in Arctic and most coastal regions. Air/oil coolers use ambient air as the cooling media. Cooling fans are usually electric motor driven, often with two-speed motors. This allows high and low cooling flows. To closely match the cooling flow to the required heat load, changeable-pitch fan blades may be used. As the heat load changes, the blades can be adjusted to meet the new heat flow requirements. Air/oil coolers may also include top louvers to protect the cooling coils from hail. These louvers are not effective for temperature control. Gas turbines have always been tolerant of a wide range of fuels, from liquids to gases and from high to low Btu heating values, and are now functioning satisfactorily on gasified coal and wood.
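The filter-clogging logic above lends itself to a short illustration. The following Python sketch is illustrative only: the 5 psid alarm setting comes from the text, while the transfer setpoint and the sample readings are assumed values.

# Illustrative sketch of lube-oil filter differential-pressure monitoring.
# The 5 psid alarm setpoint is from the text; the 15 psid transfer point
# is an assumed value for switching to the clean standby filter.

ALARM_DP_PSID = 5.0      # typical alarm setting (from the text)
TRANSFER_DP_PSID = 15.0  # assumed point for operating the transfer valve

def check_filter(dp_psid):
    """Return an action for the measured filter differential pressure."""
    if dp_psid >= TRANSFER_DP_PSID:
        return "switch three-way transfer valve to standby filter"
    if dp_psid >= ALARM_DP_PSID:
        return "alarm: filter element clogging"
    return "normal"

print(check_filter(3.2))   # normal
print(check_filter(6.1))   # alarm
print(check_filter(16.0))  # transfer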
MICRO TURBINE POWER GENERATION
Micro gas turbine based generator systems are becoming more popular for providing electric power and heat in cogeneration applications. High-speed turbo-generator sets are very compact and competitive for cogeneration. The electric generator is directly coupled
to a high-speed turbine. One of the newer power sources is the 20 to 60 kilowatt, recuperated gas turbine power package. This package, in combination with a battery pack, can also deliver low-emission power in automobiles. Operating at high speed, micro and mini turbines run in the range of 30,000 RPM to 120,000 RPM. The higher power units use the lower speeds. A 40-kW micro turbine-generator may operate at 120,000 RPM while a 500-kW mini-turbine may operate at 30,000 RPM. Most micropower units use permanent magnet generators. In some designs, induction generators are used. Induction generators can provide lower costs, higher cycle efficiency and safety of operation.
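The raw electrical frequency of a directly coupled generator follows from shaft speed and pole count (f = rpm x poles / 120), which is why a high-speed turbo-generator's output must be conditioned to 50/60 Hz. A quick Python check, using the speeds quoted above and the 2-pole and 4-pole counts discussed later in this chapter:

def electrical_freq_hz(rpm, poles):
    """Electrical frequency of a synchronous machine: f = rpm * poles / 120."""
    return rpm * poles / 120.0

# A 2-pole PM generator at 120,000 rpm produces 2 kHz;
# at 60,000 rpm a 2-pole machine gives 1 kHz and a 4-pole machine 2 kHz.
for rpm, poles in [(120000, 2), (60000, 2), (60000, 4)]:
    print(rpm, "rpm,", poles, "poles ->", electrical_freq_hz(rpm, poles), "Hz")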
TURBO GENERATOR SYSTEMS
In most of these electric power systems the generator and the turbine are directly coupled. A constant speed of operation is used, but loads may cause an operating speed range between +0 to -5%. These integrated power systems are located close to the user, such as in a factory building, hospital, store or office complex. Vehicle-mounted applications are also used, but in either case the length of the feeders is short. Compatibility with utility power systems is usually required. The electrical power output is typically 3-phase AC with single or multiple voltage lines. DC output may be required for some industrial applications. In most AC power systems, 50/60-Hz frequency is required, but a 400-Hz frequency may be required for aircraft/military/aerospace applications. In situations with stand-alone capability, isolation from the utility is required. In most emergency power applications, power transfer from utility to the turbo-generator and back is necessary. The generator must also provide electric start capability during the initial start-up of the turbine. The generator set must have a cooling system that is compatible with the system environment. Typically, either air, lubricating oil, or a water/glycol mixture is used.
GENERATOR TECHNOLOGIES
Several generator technologies are suitable for high-speed operation: permanent magnet (PM), induction, switched reluctance (SR), synchronous reluctance and homopolar.
Permanent magnet technology is used by most micropower systems. The generator has two electromagnetic components. A rotating magnetic field is provided with permanent magnets. The stationary armature is made with electrical windings in a slotted iron core. The permanent magnets are made from high-energy rare earth materials such as Neodymium Iron Boron or Samarium Cobalt. A high-strength metallic or composite containment ring holds the magnets on the shaft. The stationary iron core is made of laminated electrical grade steel. The electrical windings are made from high-purity copper conductors insulated from one another and from the iron core. The armature assembly is impregnated with high temperature resin or epoxy. The voltage output from the generator is unregulated, multiphase AC. This voltage varies as a function of the speed and load. The output is connected to a solid state power conditioning circuit. This circuit uses buck/boost transformer switching to regulate the output. Induction generators are based on electric motor technology. Induction motors are the most common types of electric motors. Older induction generators were made using capacitors for excitation. These fixed capacitors could not be adjusted as the load or speed changed from nominal values. Induction generators were used in large power systems by utility companies. Excitation was provided from the infinite power bus as demanded by the load and speed conditions. Since the availability of high power switching devices, induction generators can be provided with an adjustable excitation and can be used in isolation. Induction generators use a rotating magnetic field constructed with high conductivity, high strength bars in a slotted iron core to form a squirrel cage rotor. A stationary armature is used similar to those used in PM generators. The voltage output from the generator is regulated, multiphase AC. Control of the voltage is accomplished with a closed loop operation where the excitation current is adjusted to generate a constant output voltage regardless of the variations of speed and load current. The excitation current supplied to the stationary armature winding is induced into the short-circuited squirrel cage, which provides a secondary winding in the rotor. A switched reluctance (SR) generator uses the concept of magnetically charged opposite pole attraction. An unequal number of salient
poles are used on the stator and the rotor. Both the stator and rotor are constructed using laminated electrical grade steel. If the number of poles on the stator is 6, then the number of poles on the rotor will be 4. Other pole combinations such as 8/6 or 10/8 are also used. There is no electrical winding on the rotor. The armature coils for the stator poles are concentric and are isolated from one another. When the coils on opposite poles are excited, the corresponding stator poles are magnetized. The rotor poles that are closest to the stator poles are magnetized to opposite polarity by induction and are attracted to the stator poles. If the prime mover drives the rotor in the opposite direction, voltage is generated in the stator coil to produce electric power. The output from the generator is DC with a high ripple content. The voltage output may be filtered and regulated by adjusting the duration of the excitation current. Commutation of the current through the stator coil is accomplished by the controller.
INDUCTION GENERATORS
Induction generators have several advantages in micro and mini turbine based power systems. The use of electromagnets instead of permanent magnets results in lower costs. The rare earth permanent magnets used in PM generators are much more expensive than the electrical steel used in electromagnets. They also must be contained using additional supporting rings. PM generators also require special machining operations, and special handling of the magnets is required. The PM generator produces an unregulated voltage. Depending upon the changes in load and speed, the voltage variation can be wide. This is especially true for generators exceeding about 75-kW. An induction generator produces AC voltage that is almost sinusoidal. When an internal failure occurs in a PM generator, the failed winding will continue to draw energy until the generator is stopped. In high-speed generators, this may take some time, during which further damage to electrical and mechanical components can occur. This can also be a safety hazard. An induction generator is shut down by de-excitation, which occurs within a few milliseconds. When an induction motor is operated at a speed higher than the synchronous speed, the shaft torque drops as the motor goes into the
generate mode. Electric power is generated from the mechanical input power from the prime mover. The generated power is a function of the slip, which is the speed in excess of the synchronous speed. In the generate mode, the slip is controlled according to the load requirements and the induction generator delivers the necessary power. The synchronous speed is a function of the electrical frequency applied to the generator terminals, but the operating shaft speed is determined by the prime mover. The electrical frequency must be controlled as changes in the load and the prime mover speed occur. The excitation current is provided to the generator stator windings for induction into the rotor. The magnitude of the excitation current determines the voltage at the bus. The excitation current is regulated to keep a constant bus voltage. The controller for the induction generator must adjust the electrical frequency to produce the slip corresponding to the load requirement and also adjust the excitation current to provide the required bus voltage. The electrical frequency must be changed from 100% at no load to about 95% at full load if the prime mover speed is stable at 100%. The controller for an induction generator consists of a power module, sensing circuits and a control module. The power transistors are IGBTs or MOSFETs in a conventional multiphase circuit configuration; the number of phases is the same as the number of phases in the generator winding. The sensing of currents and voltages is done at the load as well as in the power section of the controller. The speed of the shaft is measured, and a PID control algorithm is used to generate the switching commands for the power transistors. This provides the necessary frequency and amplitude of the excitation currents for the induction generator windings. The control also includes over-current, over-voltage and over-temperature protection. The control of induction generator slip requires precise measurement of speed, while an SR generator requires precise measurement of the rotor position. In the SR generator, the operating frequency can be 6 kHz at 60,000 RPM. In an induction generator, the operating frequency is 1 kHz to 2 kHz at 60,000 RPM, depending upon whether a 2-pole or 4-pole generator is used. In an SR generator, the higher rates of change of currents and voltages result in higher stress levels for the power electronic devices. The induction generator has a sinusoidal output that can be more easily
conditioned. The controller for an induction generator is smaller and generally costs less than a controller for PM or SR generators.
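The slip scheduling described above can be sketched numerically. This Python fragment assumes a simple linear schedule between the 100% no-load and 95% full-load frequency figures given in the text; a real controller would close the loop with a PID algorithm and measured shaft speed.

def commanded_frequency(shaft_freq_hz, load_fraction, full_load_slip=0.05):
    """Electrical frequency the controller applies so the machine runs with
    generating slip proportional to load. A linear schedule between the
    100% (no-load) and 95% (full-load) figures in the text is assumed."""
    slip = full_load_slip * min(max(load_fraction, 0.0), 1.0)
    return shaft_freq_hz * (1.0 - slip)

# 2-pole machine at 60,000 rpm -> 1,000 Hz shaft-equivalent frequency
shaft_hz = 60000 * 2 / 120
for load in (0.0, 0.5, 1.0):
    print("load", load, "->", round(commanded_frequency(shaft_hz, load), 1), "Hz")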
TURBINE ADVANCES
Gas turbine advances in recent years have been driven by metallurgical improvements that allow higher temperatures in the combustor and turbine components. Other factors include aerodynamic and thermodynamic breakthroughs and the use of computer technology in the design and simulation of turbine airfoils and combustor and turbine blades. There have been improvements in compressor, combustor and turbine design. The properties of creep and rupture strength improved from the late 1940s through the early 1970s. Around 1950, an increase of 390°F (200°C) in operating temperature was achieved. This resulted from age-hardening and precipitation strengthening, which utilized aluminum and titanium in the nickel matrix to increase strength. Since 1960, sophisticated cooling techniques have been used for turbine blades and nozzles. Since 1970, turbine inlet temperatures have increased by 500°F (260°C) and in some units run as high as 2,640°F (1,450°C). The increase in turbine inlet temperature was made possible with new air cooling techniques and the use of complex ceramic core bodies for hollow, cooled cast parts. Turbine blades and nozzles are formed by investment casting. A critical factor is the solidification of the liquid metal after it is poured into the mold. Nonuniform grain sizes, shapes, and transition areas can cause premature cracking of turbine parts. The equiaxed process provides more uniformity of the grain structure. Strength is improved if grain boundaries are aligned parallel to the direction of the applied force. This elongated or columnar grain formation in a preferred direction is called directional solidification and was introduced by Pratt & Whitney Aircraft in 1965.
TURBINE CONTROL
Newer turbines are under the control of a highly responsive unit using computer control technology. Computers start, stop, and govern the operation of gas turbines. They also provide diagnostics and predict future failures. Gas turbines are highly responsive, high-speed units. In
an aircraft, a gas turbine can accelerate from idle to maximum take-off power in less than 60 seconds. In industrial gas turbines, the acceleration rate is limited by the mass of the driven equipment. Without the proper control system, the compressor may go into surge in less than 50 milliseconds and the turbine can exceed safe temperatures in less than a quarter of a second. A power turbine can go into overspeed in less than two seconds. The gas generator turbine and power-extraction turbine control takes place by varying the gas generator speed, which is done by varying the fuel flow. The following may be monitored: fuel flow, compressor inlet and discharge pressures, shaft speed, compressor inlet temperatures, turbine inlet and exhaust temperatures. At a constant gas generator speed, as the ambient temperature decreases, the turbine inlet temperature will decrease slightly and the gas horsepower will increase considerably. The increase in gas horsepower results from the increase in compressor pressure ratio and aerodynamic loading. This means the control must protect the gas turbine on cold days from overloading the compressor airfoils and overpressurizing the compressor cases. For maximum power on hot days it is necessary to control the turbine inlet temperature to constant values and allow the gas generator speed to vary. The ambient inlet temperature, compressor discharge pressure and gas generator speed are the three main variables that affect the amount of power the engine will produce. Sensing the ambient inlet temperature also helps to ensure that the engine's internal pressures are not exceeded, and sensing the turbine inlet temperature ensures that maximum allowable turbine temperatures are not exceeded. Sensing the gas generator speed allows the control to accelerate through critical speed points. Gas turbines are typically flexible shaft machines and have a low critical speed. The control may be hydromechanical (pneumatic or hydraulic), electrical (wired relay logic), or computer-based (programmable logic controller or microprocessor). Typical hydromechanical controls include cams, servos, speed (fly-ball) governors, sleeve and pilot valves, metering valves and temperature sensing bellows. Electrical controls include electrical amplifiers, relays, switches, solenoids, timers, tachometers, converters, and thermocouples. Computer controls may simulate many functions such as amplifiers, relays, switching and timers. These functions are programmable. Modifying the program may be done by the
user or operator in the field. Analog signals such as temperature, pressure, vibration and speed are converted to digital signals before they are processed. The computer may output signals to components such as the fuel valve, variable geometry actuator, bleed valve and anti-icing valve. Until the late 1970s, turbine control systems operated only in real time with no ability to store or retrieve data. Hydromechanical controls had to be calibrated frequently, weekly in some cases, and were subject to contamination and deterioration due to wear. Multiple outputs such as fuel flow control and compressor bleed-air flow control required independent control loops. Coordinating the output of multiple loops, using cascade control, was a difficult task and often resulted in compromises between accuracy and response time. Many tasks had to be done manually. Station valves, prelube pumps and cooling water pumps were manually switched on before starting the gas generator. Protection devices were limited, and the margin between temperature control setpoints and safe operating turbine temperatures had to be made large since hydromechanical controls cannot react quickly enough to limit high turbine temperatures, or to shut down the gas generator, before damage occurs. By the early 1970s, electric controls consisted of a station control, a process control, and a turbine control. All control functions such as start, stop, load, unload, speed, and temperature were generated, biased, and computed electrically. Output amplifiers were used to drive servo valves, using high pressure hydraulics, to operate hydraulic actuators. These actuators may also contain position sensors to provide electronic feedback. The advent of programmable logic controllers and microprocessors in the late 1970s eliminated these independent control loops and allowed multi-function control. Control system functions include sequencing, routine operations and protection. Sequencing steps are used to start, load, unload, and stop the unit. The typical cycle used in a normal start is as follows:
• Starter on
• Fuel on
• Engine lights up - exhaust gas temperature rise
• Engine reaches self-accelerating speed
• Ignition off
• Starter cuts off
• Engine reaches idle RPM
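A sequencing controller steps through this start cycle in order, checking permissives at each step. A minimal Python sketch of the idea follows; the step list is from the text, while a real sequencer would gate each transition on measured speed, temperature and elapsed time rather than simply advancing.

# Minimal start-sequencer sketch. Step order follows the list above;
# the advance conditions are deliberately omitted for brevity.

START_SEQUENCE = [
    "starter on",
    "fuel on",
    "engine lights up (exhaust gas temperature rises)",
    "engine reaches self-accelerating speed",
    "ignition off",
    "starter cuts off",
    "engine reaches idle rpm",
]

def run_start_sequence():
    for step in START_SEQUENCE:
        print("step complete:", step)
    return "routine operation control"

print(run_start_sequence())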
When the start sequence is complete, the gas turbine has reached self-sustaining speed and the control mode goes into routine operation control. This mode is used until the process starts to load the unit. Before initiating this loading, the sequence controller will position the inlet and discharge valves and circuit breakers. In electric generator drives this is when the synchronizer is used to synchronize the unit to the electric grid. When these steps are complete, control goes back to the routine operation mode. At this time, the speed control governor, acceleration scheduler, temperature limit controller, and pressure limit controller become active. Changing the fuel flow results in higher or lower combustion temperatures. When the fuel flow is increased, combustor heat and pressure increase and heat energy to the turbine is increased. Part of this energy is used by the compressor-turbine to increase speed, which causes the compressor to increase airflow and pressure. The remaining heat energy is used by the power extraction turbine to produce more shaft horsepower. This cycle continues until the desired shaft horsepower or some parameter limit such as temperature or speed is reached. If the fuel flow is increased too quickly, excessive combustor heat is generated and the turbine inlet temperature limit may be exceeded, or the increase in speed may drive the compressor into surge. When the control reduces the fuel flow, combustion heat and pressure drop, the heat energy available to the compressor-turbine drops and the compressor-turbine slows down. The compressor speed, airflow and pressure continue to fall until the desired shaft horsepower is reached. If the fuel flow is decreased too rapidly, the compressor may not be able to reduce airflow and pressure fast enough. This can result in a flame-out or compressor surge, since speed decreases move the compressor operating point closer to the surge line. Flame-out creates thermal stresses that become critical with each shutdown and re-start. High turbine inlet temperature will shorten the life of the turbine blades and nozzles, and compressor surge can severely damage the compressor blades and stators and possibly the rest of the gas turbine. The control must also protect against surge during rapid power changes, start-up, and periods of operation when the compressor inlet temperature is low or drops rapidly. A gas turbine is more susceptible to surge at low compressor inlet temperatures. Normally, changes in ambient temperature are slow compared to
the response time of the gas turbine control system. The temperature range from 28°F (-2.0°C) to 42°F (6.0°C) with high humidity is a major concern. Operation in this range can result in ice formation in the plenum upstream of the compressor. Anti-icing schemes increase the sensible heat by introducing hot air into the inlet. Anti-icing is another control function that must address the effect temperature changes can have on compressor surge. An acceleration schedule is used to load the unit as quickly as possible. As the load goes from the idle-no-load position to the full-load position, the fuel valve is opened and as the load approaches the setpoint, the speed governor begins to override the acceleration schedule output until the fuel valve reaches its final running position. During this time the temperature limit controller and the pressure limit controller monitor temperatures and pressures so that the preset levels are not exceeded. The temperature limit control for turbine inlet temperature uses the average of several thermocouples taking temperature measurements in the same plane. When the temperature or pressure reaches its setpoint, the limit control will override the governor controller and maintain operation at a constant temperature or pressure. The control allows the operating point to move along a set of points that define the operating line for the load conditions. Protection control continuously checks the speed, temperature, and vibration for levels that may be harmful to the operation of the unit. Usually two levels are set for each parameter, an alarm level and a shutdown level. When the alarm level is reached, the system provides a warning that there is a problem. If the transition from alarm to shutdown condition takes place too rapidly and operational response is not possible, the unit is automatically shut down. Overspeed is one parameter that does not include an alarm signal. Turbine inlet temperature is the most frequently activated limiting factor. One level is set for base load operation and a higher level is set for peaking operation.
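The two-level protection scheme can be expressed compactly. In this Python sketch the alarm/shutdown structure and the overspeed exception follow the text, while the numeric setpoints are assumed for illustration only.

# Two-level protection sketch: alarm and shutdown setpoints per parameter.
# Setpoint values are hypothetical; overspeed has no alarm level, as noted
# in the text.

LIMITS = {
    # parameter: (alarm level, shutdown level)
    "turbine_inlet_temp_F": (1900, 1950),   # assumed values
    "vibration_mils": (2.0, 3.0),           # assumed values
    "speed_pct": (None, 105),               # overspeed: shutdown only
}

def protection_action(parameter, value):
    alarm, shutdown = LIMITS[parameter]
    if value >= shutdown:
        return "shutdown"
    if alarm is not None and value >= alarm:
        return "alarm"
    return "ok"

print(protection_action("speed_pct", 106))               # shutdown
print(protection_action("turbine_inlet_temp_F", 1920))   # alarm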
STARTING
Auxiliary turbine equipment includes the starting system, ignition system, lubrication system, air inlet cooling system, water or steam injection system for NOx control or power augmentation and the ammonia
injection system for NOx control. These systems may be direct driven and connected directly to the output shaft of the gas turbine. In most units, one of the lubrication pumps is direct connected. Indirect drives include electric, steam, or hydraulic motors. Indirect drives provide redundant systems, increasing the reliability. Electric systems are powered by a directly driven electric generator. Starting systems may drive the gas generator directly or through a gearbox. These may be electric, hydraulic, or pneumatic (air or gas). The starter must rotate the gas generator until it reaches its self-sustaining speed and drive the gas generator compressor to purge the gas generator and the exhaust duct of volatile gases before initiating the ignition cycle. The starting sequence engages the starter, purges the inlet and exhaust ducts, energizes the igniters and switches the fuel on. The starting system must accelerate the gas generator from rest to a speed just beyond the self-sustaining speed of the gas generator. The starter must develop enough torque to overcome the drag torque of the gas generator's compressor and turbine and the attached load, including accessories and bearing resistance. Another function of the starting system is to rotate the gas generator after shutdown, beginning the cooldown. Purge and cooldown functions have resulted in the use of two-speed starters. The lower speed is used for purge and cooling while the higher speed is used to start the unit. The starter may be directly connected to the compressor shaft or indirectly connected with an accessory gearbox, or impingement air may be directed into the compressor-turbine. Starters for gas generators include alternating current and direct current motors, pneumatic motors, hydraulic motors, diesel motors and small gas turbines. If alternating current (AC) power is available, three-phase induction motors are preferred. The induction motor is directly connected to the compressor shaft or the starter pad of an accessory gearbox. Once the gas generator reaches self-sustaining speed, the motor is de-energized and usually disengaged with a clutch mechanism. If AC power is not available, called a black start, direct current (DC) motors are used. The source of power is a battery bank. One technique is to convert the DC motor electrically into an electric generator to charge the battery system. This is useful where the battery packs are also used to provide power for other systems. Battery-powered DC motor
starters are mostly used in small, self-contained gas turbines under 500 brake horsepower (BHP). Electric motors require explosion-proof housings and connectors and must be rated for the area classification in which they are installed. Typically this is Class 1, Division 2, Group D. Pneumatic starter motors may be the impulse-turbine or vane pump type. These motors use air or gas as the driving force, and are coupled to the turbine accessory drive gear with an overriding clutch. The overriding clutch mechanism disengages when the drive torque reverses and the gas turbine self-accelerates faster than the starter. Then, the air supply is shut off. Air or gas must be available at approximately 100 psig and in sufficient quantity to sustain starter operation until the gas generator exceeds self-sustaining speed. If a continuous source of air or gas is not available, banks of high and low pressure receivers and a small positive displacement compressor can provide enough air for a limited number of start attempts. The starting system should be capable of three successive start attempts before the air supply system must be recharged. In gas pipeline applications, the pneumatic starter may use pipeline gas as the source of power. Hydraulic systems are often used with aircraft derivative gas turbines. Hydraulic pumps are used to drive hydraulic impulse-turbine (Pelton wheel) starters or hydraulic motors for starting. Small gas turbines may be used to provide the power to drive either pneumatic or hydraulic starters. In aircraft, a combustion starter, which is essentially a small gas turbine, is used to start the gas turbine at remote locations; these are not used in industrial applications. Large heavy frame gas turbines, 25,000 SHP and above, require high torque starting systems. Most of these units are single shaft machines and the starting torque must be great enough to overcome the mass of the gas turbine and the driven load. Diesel motors are preferred for these large gas turbines. Since diesel motors cannot operate at gas turbine speeds, a speed increaser gearbox is used to boost the diesel motor starter speed to gas turbine speeds. Diesel starters are usually connected to the compressor shaft. Besides the speed increaser gearbox, a clutch mechanism is needed to disengage the starter from the gas turbine. Diesel motors can run on the same fuel as the gas turbine, eliminating the need for separate fuel supplies. Impingement starting involves jets of compressed air piped to the compressor turbine to rotate the gas generator. The pneumatic power source for impingement starting is similar to that for air starters.
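The three-start requirement for pneumatic starting can be checked with a rough isothermal ideal-gas estimate. In the Python sketch below, the 100 psig minimum supply pressure and the three-start criterion are from the text; the receiver volume, initial pressure and per-start air consumption are assumed example values.

# Rough receiver-sizing check for the three-start requirement.
# All sizing numbers are assumed for illustration.

PSI_TO_PA = 6894.76
R_AIR = 287.0        # J/(kg*K), specific gas constant for air
T = 288.0            # K, assumed ambient temperature

def usable_air_kg(volume_m3, p_start_psig, p_min_psig):
    """Isothermal ideal-gas estimate of air available between two pressures."""
    dp_pa = (p_start_psig - p_min_psig) * PSI_TO_PA
    return dp_pa * volume_m3 / (R_AIR * T)

per_start_kg = 20.0   # assumed air consumed per start attempt
receiver_m3 = 10.0    # assumed receiver bank volume
available = usable_air_kg(receiver_m3, 250.0, 100.0)
print(available >= 3 * per_start_kg, round(available, 1), "kg available")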
IGNITION
The ignition system is not energized until the gas generator reaches cranking speed and remains at this speed long enough to purge volatile gases from the engine and exhaust duct. When the igniters are energized, fuel can then be admitted into the combustor. These two functions are often done simultaneously and called pressurization. Two igniters are usually used, one on each side of the engine. During the start cycle each igniter discharges about twice per second and provides an energy pulse of 4 to 30 joules. One joule is the unit of work or energy transferred in one second by an electric current of one ampere in a resistance of one ohm. One joule/second equals one watt. Once the gas generator starts, the igniter is no longer needed and any further exposure to the hot gases of combustion shortens its life. Some igniters are spring loaded and retract out of the gas path as the combustion pressure increases. Ignition systems include inductive and capacitive AC and DC, with high and low tension systems. The capacitive systems generate the hottest spark. Since the energy stored in the capacitor is proportional to the square of the voltage, it is more economical to use a high voltage to charge the capacitor. A radio frequency interference filter is used to prevent ignition energy from affecting local radio signals. The potential at the spark plug is about 25,000 volts. The ignition harness to each igniter plug is shielded and the ignition exciter is hermetically sealed. An AC transformer, or transistorized chopper circuit transformer, boosts the voltage to about 2,000 volts in the low tension system. A rectifier allows the flow of current into the storage capacitor but prevents most of the return flow. This voltage is boosted up to charge a smaller high tension capacitor. The low voltage charge in the storage capacitor is not enough to jump the gap across the spark plug electrodes. The initial path is provided by a higher voltage discharge from the high tension capacitor. It discharges first to bridge the gap across the electrodes of the spark plug and reduces the resistance for the low tension discharge. The low tension capacitor then discharges, providing a long, hot spark. Inductive ignition systems use the rapid variation in magnetic flux in an inductive coil to generate enough energy for the spark. This system produces a high voltage spark, but the energy is relatively low and it is only suitable for easily ignitable fuels.
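Since stored energy is E = 0.5 C V^2, charging at high voltage sharply reduces the capacitance needed for a given spark energy. A quick Python check using the 30-joule upper pulse energy and the voltage levels quoted above:

# Stored capacitor energy: E = 0.5 * C * V**2, so C = 2E / V**2.
# The 4-30 J pulse energies and the ~25,000 V and ~2,000 V levels
# are from the text.

def capacitance_for_energy(joules, volts):
    return 2.0 * joules / volts**2

for v in (25000.0, 2000.0):
    c = capacitance_for_energy(30.0, v)   # 30 J, upper end of pulse energy
    print(f"{v:>8.0f} V -> {c*1e6:.2f} microfarads")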
Igniter plugs use an annular-gap or a constrained-gap design. The annular-gap plug projects slightly into the combustor, while the constrained-gap plug is positioned in the plane of the combustor liner and operates in a cooler environment.
Chapter 5
Alternate Power Sources

BLACKOUTS AND POWER
When blackouts struck much of the northeastern U.S., Ontario and Rome in the summer of 2003, users on both continents were reminded of just how fragile electricity supplies can be. The massive disruptions stranded commuters, defrosted freezers, shut down businesses and refocused attention on power. Most of our power comes from oil- and gas-fired generators and nuclear plants. These sources are tied into creaking infrastructures. They also pollute the environment and many feel they pose unacceptable health risks. The 21st century brings numerous challenges and opportunities that will affect the nation's energy, economic, and environmental security, including global economic and population growth. There will be a greater demand for power quality and reliability. Environmental sensitivities will also be among the major driving forces for improving the nation's systems for generating and delivering energy. The prices of natural gas and oil continue to increase and have shown significant volatility. Electricity restructuring in California caused serious increases in electricity prices. Stress on the transmission and distribution system causes widespread power outages that affect millions of people and thousands of businesses. Renewable energy is an important element that improves our energy security, preserves the environment and supports our affluence. Renewable energy technologies can provide an important fraction of the nation's electricity generation requirements and, along with other generation sources, provide more reliable power. Alternatives include clean energy from renewable resources. Fuel cells, wind turbines and solar panels can provide power free from dependence on local grids. The search for alternative energy is not new, but the current focus is on the goal of making clean and sustainable
power a mainstream commodity. The preferred energy future is plentiful and reliable sources of clean energy at reasonable prices. Several factors impact these preferences. These include global economic and population growth trends, technology advances, power quality and reliability problems, environmental challenges, and utility restructuring.
PHOTOVOLTAICS The nation’s energy generation and delivery systems will be changing in the coming years. Photovoltaics (PV) is only one solution to these challenges, but this renewable-energy option can be an important contributor to reducing the energy needs of the United States and the world. Photovoltaics has several characteristics that make it an important component of improving our nation’s power needs. PV is a versatile technology that can be used for applications from very small loads to moderate loads. It is a modular technology that allows generating systems to be built incrementally to match growing demands. PV is easy to install, maintain, and use. It is a convenient technology that can be used anywhere there is sunshine and it can be mounted on almost any surface. PV can be integrated into building structures to maximize aesthetics and multifunctional value. These positive features allow PV to address the problem of reliable power, power quality and power backup. The cost of power interruptions is high and customers with essential web servers or critical hospital or industrial needs cannot tolerate power interruptions or poor-quality power. Each year, U.S. businesses spend about $2 billion for uninterruptible power supplies and consumers purchase almost 200,000 small generators of 3 kilowatts or less. The losses incurred by businesses due to power quality and reliability problems are estimated at more than $30 billion each year. Distributed generation sources, such as PV, can improve grid reliability by reducing loads on transmission and distribution systems. Photovoltaic technologies provide solid power reliability with on-site generation and short feed lines. The reliability of photovoltaics is demonstrated by projects like San Francisco’s plan to install photovoltaicpowered traffic stoplights that have backup battery power at 100 key
intersections. The City will use PV to prevent traffic problems during rolling blackouts in California. Battery backup ensures that power is available at night or when the sun is not shining. In a grid-connected application, the grid uses this locally generated solar electricity while the sun shines. This electricity is deducted from the power provided by the utility, which results in lower power costs and provides more output for the grid. By 2020 the domestic photovoltaic industry will provide about 15% (3,200 MW) of U.S. peak electricity generating capacity. The cumulative installed capacity in the United States for PV will be about 15-GW. Conversion efficiencies are expected to be 18% to 20% at a cost of less than 50 cents per watt. The PV industry will be at $15 billion annually. Production facilities will be highly automated, reducing the cost of production by a hundredfold from 2000 values. PV systems will have plug-and-play qualities that allow them to tie into the existing grid structure for electricity production. Photovoltaics is a semiconductor-based technology that converts sunlight into electricity. The photovoltaic effect produces direct-current (DC) electricity while using no moving parts, consuming no fuel, and creating no pollution. Solar-electric power is suited to be a major contributor to an emerging national energy mix. The U.S. electrical grid will increasingly rely on distributed energy resources in a competitive market to improve reliability and moderate distribution and transmission costs and on-peak price levels. A greater value is being placed on power reliability as well as on lower energy cost. Many regions are becoming limited by transmission capacity and local emission controls. Solar-electric power addresses these issues since it is easily sited at the point of use with no environmental impact. Since sunlight is widely available, the United States can build a solar-electric infrastructure that is geographically diverse and less vulnerable to international energy politics and volatile markets based on fossil fuels. The International Energy Agency (IEA) projects that 3,000-GW of new capacity will be required globally by 2020, valued at around $3 trillion. IEA also projects that the fastest-growing sources of energy will be renewables. Much of this new capacity will be installed in developing nations where solar-electric power is already competitive. The United States has been the world leader in photovoltaic research, technology, manufacturing, and sales. But other countries are
increasing their efforts to provide important technologies and gain global market share.
PV CHARACTERISTICS
A wide range of PV technologies and applications exist for solar electric power. Photovoltaic cells utilize solid-state technology to produce electric energy directly from the sun. The cells are connected together and laminated in a sealed package called a module, which can be mounted on roofs, packaged into roofing or other building products, or installed on the ground. The solar electric power is produced as direct current, or DC power. PV modules are usually measured in kW and come with a nameplate rating in kW STC (Standard Test Conditions). The alternating current (AC) power rating in kW PTC (PVUSA Test Conditions) provides an estimate of the AC power output. Utility grid systems operate on alternating current, so grid-connected applications of photovoltaic modules require an inverter to convert the DC output to AC power. The inverter also provides a means to disconnect the PV module during an outage. Solar electric systems can operate independently from the grid by using energy storage in the form of a battery. The different types of solar modules allow mounting in a variety of building and on-ground environments. The PV modules are one component of the system. The other components are the inverter, mounting system hardware and wiring. The inverter converts the direct current produced by the PV modules to alternating current. The inverter may be the weak point in the PV system, since it is the most frequent cause of failures and malfunctions. However, inverters are becoming more reliable. Most PV systems are ground or roof mounted. Ground mounted PV systems require significant amounts of space. There is the potential for vandalism or theft, and the system may pose a safety hazard if not protected by fencing. Ground mounted systems may be stationary, one-axis tracking or two-axis tracking systems. Tracking systems provide a greater power output, but have a higher initial cost and may require additional maintenance. Roof mounted systems take advantage of unused roof space, but there is a risk of reducing the roof's integrity by installing PV systems that penetrate the roof. This can invalidate the roof's warranty and cause
leaking. Structure and foundation costs can be minimized with roof-mounted PV systems. The roof orientation, shape and shading can affect the PV system output. The level of PV energy production depends on the weather at the site. Actual operating conditions will vary with time. There are models that will give estimates of the system's output. The models depend on plane-of-array irradiance, ambient temperature and wind speed.
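One common simplified output model, sketched below in Python, scales power with plane-of-array irradiance and derates it with cell temperature. This is a generic illustration, not the specific models referred to above: the NOCT and temperature-coefficient values are typical assumptions, and wind-speed effects are ignored.

# Simplified PV output estimate: irradiance scaling plus a linear
# temperature derate. NOCT (45 C) and the -0.4%/C coefficient are
# typical assumed values, not figures from the text.

def pv_power_kw(p_stc_kw, irradiance_w_m2, t_ambient_c,
                noct_c=45.0, temp_coeff_per_c=-0.004):
    t_cell = t_ambient_c + (noct_c - 20.0) / 800.0 * irradiance_w_m2
    derate = 1.0 + temp_coeff_per_c * (t_cell - 25.0)
    return p_stc_kw * (irradiance_w_m2 / 1000.0) * derate

# A 4-kW (STC) array on a warm, clear day:
print(round(pv_power_kw(4.0, 850.0, 30.0), 2), "kW")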
PV APPLICATION
PV systems are modular and can be adapted to almost any site. Residential PV systems are usually roof-mounted and less than 5kW. These systems generally have a higher cost per watt than larger systems. The smaller inverters used for residential systems are often the most unreliable part of the system. Without some form of power monitoring it is difficult to determine if they are operating correctly. Large commercial and utility-scale PV systems are usually more cost effective than residential systems. Systems over 70kW have dropped in cost since 1996. A recent trend is building integrated photovoltaics (BIPV). When a building is designed for PV systems, performance can be optimized. In a study of PV systems with generating capacities of 70kW to over 400kW, system costs ranged from $5.31/W to $11.82/W. Large PV systems can differ in several ways: single-axis tracking systems require tracker mechanisms and installation, while roof-mounted systems may require cranes and expensive roof modifications. The availability of state tax credits and incentives, site-specific installation and interconnection requirements, and labor costs all affect total project cost. Module costs have declined. The early residential systems were large, customized systems. But by 2000 the majority of the systems installed were 1.5kW or less and many were standardized packages. Many of these used AC modules with ratings of 500 watts or less. Standardized systems reduce costs, and installation becomes easier. In 1992 the Solar Electric Power Association (SEPA) was formed. Its programs include the six-year-long Technology Experience to Accelerate Markets in Utility Photovoltaics (TEAM-UP) and Solar Power Solutions (SPS).
BARRIERS
Despite the advances that the PV industry has made over the years, a number of barriers still exist. The primary obstacle to market penetration is the high initial cost of the systems. Various federal, state, and local programs help to bring down the cost of solar. The Solar Electric Power Association, with funding from the Department of Energy, has developed Solar Power Solutions (SPS). This program is designed to address the barriers to commercialization and market penetration. While the technology has made significant improvements in efficiency and reliability, local support for PV is necessary to advance commercialization. The interconnection of solar to the electricity grid should be easier. Net metering compensates the owner of a PV system for the excess electricity that the system produces and feeds back into the utility grid. A number of states have developed net-metering policies, but these are not uniform.
SOLAR DEVELOPMENTS
A number of important solar power developments are taking place. Nanosys of Palo Alto, CA, is developing tiny photovoltaic cells that can be incorporated into the fabric of roofing materials to provide power to homes and other buildings. Nanosys is combining the science of solar cells with the science of nanotechnology, which manipulates items as small as an atom to do tasks from switching electricity to storing data to sensing the movement of a bridge girder that is beginning to weaken. Nanosys has already embedded microscopic photovoltaic crystals into plastic sheeting. A prefabricated Nanosys roof could generate enough electricity to run a typical home. Electricity generated during the day can be stored in batteries for use at night. A single square meter of the solar cell plastic will cost about $100 and last about 20 years, so a complete roof would cost a few thousand dollars. The tiles should generate electricity at a cost of about 4 cents per kWh, well below the 20 cents to $1 for traditional solar panels. The company has government contracts from the Defense Advanced Research Projects Agency, the National Science Foundation and the National Institutes of Health.
By 2020 the domestic photovoltaic industry is projected to provide about 15% or 3,200 MW (3.2 GW) of U.S. peak electricity generating capacity. This will reduce peak-load demand, when energy is most constrained and expensive. Peak shaving will reduce the need to build new power plants and transmission lines, projects that typically meet with customer resistance. This moves the load off the grid and handles peak loads at the point of consumer use, providing distributed generation. The importance of PV technology in reducing loads and providing backup power is shown in two major growth areas: installed capacity and costs. The growth by 2010 is expected to be 100 times the level of installed systems in 2000. The total installed (annual) peak capacity is expected to be about 7-GW installed worldwide by our domestic PV industry during 2020, of which 3.2-GW will be used in domestic installations. It is estimated that the mix of applications will be:
• 50% AC distributed generation,
• 33% DC and AC value applications, and
• 17% AC grid (wholesale) generation.
This is based on business plans and market trend projections of the PV industry and published independent analysis. Installed volumes will continue to increase, exceeding 25-GW of domestic photovoltaics during 2030. By 2020, the cumulative installed capacity in the United States will be about 15-GW, or about 20% of the 70-GW expected cumulative capacity worldwide. The system price will be $3 to $4 per watt by 2010. Total manufacturing costs (costs for the system components) are expected to be 50 to 60% of the total installed costs.
HYBRID WIND AND SOLAR SYSTEMS
These two resources and technologies are complementary. They not only improve the reliability of a stand-alone power system, but are more cost-effective when combined. Many off-the-grid installations start with a few photovoltaic panels, since they are simple to install and the unit costs are not extreme. A small system may use two or more panels,
wired in series or parallel, plus batteries and an inverter. Until recently, adding a wind turbine was difficult because even small turbines were costly. The emergence of inexpensive micro turbines has lowered the cost of building a hybrid wind and solar system. The wind turbine is usually less than 50% of a hybrid system's total cost. Since wind has a higher power density than solar, even in low-wind areas, the addition of a small amount of wind capacity, such as that provided by a micro turbine, can considerably boost the total energy available. Components such as batteries and inverters are critical parts of hybrid systems. Batteries cost $50-$100 per kWh of stored capacity, and only about 50% of the energy stored in a battery can be withdrawn without sulfating the plates and reducing its effectiveness. Batteries also have a limited lifetime of several thousand cycles. After about 2,000 cycles, they have a reduced capacity. If a battery discharges 50% of its gross capacity through 2,000 cycles, each kWh of battery capacity will deliver about 1,000 kWh of net electrical energy over its operating lifetime. Thus, battery storage alone costs $0.05-$0.10 per kilowatt-hour of usable energy in an off-the-grid system. Inverters, such as the Trace SW (Sine Wave) series, can be programmed to start heavy loads when excess power is available, cut the load when battery voltage falls, or start and stop a backup generator as needed.
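The battery economics above reduce to a one-line calculation. A Python sketch using the figures from the text ($50-$100 per kWh of capacity, about 50% usable depth of discharge, roughly 2,000 cycles):

# Lifetime storage cost per usable kWh, from the figures in the text.

def storage_cost_per_kwh(price_per_kwh_capacity, usable_fraction=0.5,
                         cycles=2000):
    net_kwh_per_kwh_capacity = usable_fraction * cycles   # ~1,000 kWh
    return price_per_kwh_capacity / net_kwh_per_kwh_capacity

for price in (50.0, 100.0):
    print(f"${price:.0f}/kWh capacity -> "
          f"${storage_cost_per_kwh(price):.2f} per usable kWh")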
INTERCONNECTION TECHNOLOGY
A major difference between the early 1980s and today is the type of small wind turbines being used for utility interties. In the 1980s most small turbines, such as those built by the now-defunct Enertech, used induction generators. Today, nearly all small turbines destined for utility intertie use permanent-magnet alternators and inverters.
GENERATORS AND TRANSMISSIONS
Wind power is converted to electrical power by a generator or alternator. A generator can produce either AC or DC; an alternator produces AC. Automotive alternators contain AC-to-DC rectifiers that supply DC
current to the automotive electrical system. AC power can be generated directly at the wind generator, or AC can be made from DC using an inverter. Alternating current can be produced by either induction generators or synchronous generators. They run at an rpm that is governed by the load and the line frequency. Typically, they run at 1,750 to 1,800 rpm, with the higher rpm occurring when the motor is fully unloaded. Direct current is generated by either a DC generator or an AC alternator with rectifiers. The traction motors used in golf carts, fork lifts, and electric cars can be operated as generators. These motors have commutating brushes that carry the full output current of the generator. Alternators come in a variety of sizes and types, from small automotive types to large industrial alternators. Automotive alternators are less efficient (about 60%, compared to over 80% for industrial and DC traction types). They must also be driven at high rpm, but they are inexpensive. The alternator or generator is usually coupled to the rotor through some form of transmission that speeds up the relatively slow-turning powershaft to the higher rpm required by the generator. Older machines, such as the Jacobs units, used a slow-turning generator that could be coupled directly to the rotor. Newer alternators need to spin much faster, so a speed-up gear is needed. Speed-up mechanisms have included chains, belts and gearboxes. Chain drives are less expensive than gearboxes, but oiling and tensioning requirements make them less desirable. Some designs enclose the chains and sprockets in sealed housings with oil bath lubrication and a spring-loaded tensioner. Belt drives include toothed-belt and V-belt types. Belts can be temperamental. If their pulleys are not properly aligned, the belts will slide off. If the torque is high at low rpm, they can jump teeth and self-destruct. Cold weather can also affect them. Most small wind turbines use permanent-magnet alternators. Wound-field alternators are also used, as well as induction generators. Some permanent-magnet alternators used by small wind turbines attach the magnets to a case that rotates outside the stator, the stationary part of the generator. The blades are bolted directly to the case. Centrifugal force presses the magnets against the inside wall of the case. In a more conventional alternator, centrifugal force tends to throw the magnets away from the spinning shaft. Most small wind turbine alternators produce three-phase AC.
Some battery-charging models rectify the AC to DC at the generator. Others rectify at the controller.
INVERTERS
Electronic inverters convert direct current into alternating current at a voltage and frequency determined by the inverter's circuitry. Inverters that convert 12 volts DC into 110 volts AC are available in a variety of amperage ranges and voltage and frequency accuracies. A small 4-ampere inverter that delivers 110 volts at 4 amps, or 440 watts, will cost about $100. Voltage output can vary from 105 to 115 volts, and frequency from 55 to 65 cycles per second. The output may not be the smooth sine wave of more expensive inverters, but rather a square wave. This square-wave AC can be used for operating motors, lights, and other nonelectronic devices. The square waves may affect electronic power supplies. More expensive inverters have close voltage regulation, with a quartz crystal to control frequency and a sine wave output. For applications where the AC output will power electronic devices, this type of inverter is needed.
TYPES OF INVERTERS
A DC/AC inverter basically reverses the action of a battery charger, producing line voltage (120 or 240 volts AC) from a DC voltage. Two steps are usually required. The first is to convert DC to AC and then to step the AC up to the required output voltage. Inverters may produce a true sine-wave output similar to that produced by a generator, or a stepped square wave that approximates a sine wave (a modified sine wave). There are two ways to produce either waveform: line-frequency switching or circuit-generated (high-frequency) switching.
LINE FREQUENCY INVERTERS
A line frequency stepped-square-wave inverter uses a transformer. The primary is a low-voltage winding and the secondary a high-voltage
winding. The negative side of the DC input voltage is switched by transistors to each end of the primary winding. The positive side of the battery is connected to the center of the winding. The switches are turned on and off alternately at a rate determined by a crystal oscillator. As the switches are turned on and off, current flows through the coil in one direction and then in the opposite direction, producing alternating current with the frequency regulated by the oscillator. This alternating current is stepped up by the transformer to produce alternating current in the secondary winding at the required output voltage. The rate at which the transistors switch on and off determines the frequency of the AC output. This is set to produce 60 cycles a second in the USA and 50 cycles a second in Europe. The waveform produced by such a switching inverter is a stepped square wave instead of a true sine wave. The peak of the stepped square wave is lower than that of a true sine wave. The width of the pulses is controlled to maintain an RMS output of 120 volts. As the load on the inverter increases, the battery voltage will decrease. To compensate for this falling voltage, the pulse width is increased so the inverter maintains its RMS voltage. A line frequency sine wave inverter uses control circuits that produce a controlled peak output voltage with a true sine wave. The process is more complex and these inverters are more expensive. High-frequency inverters use a center-tapped transformer with transistors that switch on and off at a frequency of 16 kHz or more. The low voltage, high frequency AC is stepped up by a transformer to a voltage above 140 volts for a 120-volt system. This voltage is then rectified to DC and switched by transistors to produce AC at the correct frequency. The higher the frequency of the switching operation, the smaller and lighter the transformers and the inverter. A 2-kW line frequency, square wave or sine wave unit will weigh about 50 pounds and occupy almost a cubic foot of space. High frequency models can weigh as little as 8 pounds and occupy one third the space. The high frequency inverter will generate much more radio frequency interference, and it has more components, which means a higher failure rate. High frequency units are not reversible, while some line frequency inverters can be operated in reverse to charge batteries.
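The RMS of a stepped square wave is V_RMS = V_peak x sqrt(duty), where duty is the on-fraction of each half-cycle, so widening the pulse compensates for a sagging battery. A Python illustration with an assumed transformer ratio:

# Pulse-width compensation for a stepped-square-wave inverter.
# The 12 V -> 160 V peak transformer ratio is an assumed example value;
# the 120 V RMS target is from the text.

import math

TURNS_RATIO = 160.0 / 12.0   # assumed: 12 V battery -> 160 V peak output

def required_duty(battery_v, v_rms=120.0):
    v_peak = battery_v * TURNS_RATIO
    return (v_rms / v_peak) ** 2

for vb in (12.6, 12.0, 11.4):
    d = required_duty(vb)
    print(f"battery {vb:.1f} V -> duty {d:.2f}, "
          f"rms {vb * TURNS_RATIO * math.sqrt(d):.1f} V")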
often enhance the overall system efficiency by using small inverters at each important load. These local inverters should be selected to operate at their peak efficiency. Where voltage and frequency are not important, but sine wave output is, a motor-generator type of inverter may be used. Here, the DC powers an electric motor that in turn spins an AC generator. AC generators produce sine waves, but changing loads may change the motor speed and cause frequency and voltage dips. Most loads involving motors, draw a surge of current several times their rated amperage for a few seconds during starting. This current surge can damage an inverter that is not designed for it.
SYNCHRONOUS INVERTERS

Synchronous inverters also turn DC to AC, but they can be used to drive an existing AC line. The AC output from a synchronous inverter is fed directly into the line, so this inverter must synchronize its output with the AC line. To be fully synchronized, the waveforms of the sine waves must match. Most synchronous inverters sample the utility waveform for internal voltage and frequency regulation.

In the mid-1970s Windworks began using a synchronous inverter to connect 1930s-era Jacobs wind turbines to an electric utility. The technology has grown and is much more mature today. These inverters take DC, or rectified variable-voltage, variable-frequency AC from permanent-magnet alternators, invert it to AC, and synchronize it with the AC from the electric utility.

Synchronous inverters are line-synchronized, or line-commutated. The early models used SCR (silicon controlled rectifier) switches with analog controls. Since they are line-commutated, they need the utility's line to function. Bergey Windpower and Wind Turbine Industries still produce turbines with line-commutated inverters.

Most modern inverters are self-commutated. They produce utility-compatible electricity using internal circuitry with IGBTs (insulated-gate bipolar transistors) and digital controls. The newer self-commutated inverters provide better reliability and power quality compared to the older line-commutated versions. Self-commutated inverters use the same techniques as sine-wave
inverters for off-the-grid power systems. These inverters use the DC from a battery storage system and produce an AC sine wave similar to that of the utility. Modern sine wave inverters, such as Trace's SW series, have the ability to feed excess power to the utility system. Unlike the old line-commutated systems, the new utility-interactive systems require batteries to operate. They act as stand-alone power systems connected to the utility through the inverter.

In these utility-interactive systems, when electrical demand exceeds supply and the batteries are nearly spent, the inverter automatically draws power from the utility until the batteries are recharged. When there is a surplus of generation relative to the load and the batteries are fully charged, the inverter can feed the excess power back to the utility. If the utility power fails, the inverter and batteries act as an uninterruptible power supply: the inverter automatically makes the transfer from utility power to the stand-alone battery system.

Advances in inexpensive inverters for photovoltaic panels also provide benefits to micro turbines. Some solar panels include their own 100-watt inverter, allowing a plug-and-play capability. In the future micro turbines could be used like contemporary computer peripherals.
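The utility-interactive behavior described above reduces to a simple decision rule. A sketch of that logic (the state-of-charge thresholds are hypothetical; commercial inverters use their own setpoints):

def inverter_mode(battery_soc, load_kw, generation_kw, utility_ok,
                  low_soc=0.2, full_soc=0.95):
    # Decide where power flows in a utility-interactive system.
    # battery_soc is the battery state of charge, 0.0 to 1.0.
    if not utility_ok:
        return "stand-alone: carry the load from batteries (UPS mode)"
    if generation_kw > load_kw and battery_soc >= full_soc:
        return "sell: feed surplus power back to the utility"
    if load_kw > generation_kw and battery_soc <= low_soc:
        return "buy: draw utility power and recharge the batteries"
    return "float: serve the load from generation and batteries"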
INDUCTION GENERATORS

Wind turbines using induction generators are the simplest to connect to the grid. The induction, or asynchronous, generator uses current in the utility's lines to magnetize its field. The voltage and current the induction generator produces are thus synchronized with those of the utility. The induction generator is unable to operate without the utility; when the utility system is down, the induction generator is down too.

An induction generator can be used with a capacitor to charge the field, allowing induction generators to be used in stand-alone power systems. Vergnet wind turbines drive induction generators in this way in wind-diesel and battery-charging systems in France and its overseas territories. Induction wind turbines use control circuits to connect and disconnect from the grid; otherwise the wind turbine can motor in low winds, acting like a fan and consuming electricity instead of generating it.
In an AC power system, the AC voltage cycles continuously from positive to negative and then from negative to positive. The nominal system voltage is an average known as the root mean square, or RMS, voltage. The voltage at the peak of the sine waves, both positive and negative, is higher than the nominal (RMS) voltage of the system. Peak voltage is found by multiplying the RMS voltage by 1.414, or dividing by 0.707. A 120-volt (RMS) circuit has a peak voltage of 169.7 volts and a 240-volt (RMS) circuit has a peak voltage of about 339.4 volts. Alternating current of 120 volts surges first to 169.7 volts positive and then to 169.7 volts negative, for an RMS value of 120 volts.
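These peak values follow directly from the square root of two. A quick check:

import math

def peak_from_rms(v_rms):
    # Peak of a sine wave = RMS x sqrt(2), i.e. RMS x 1.414 or RMS / 0.707
    return v_rms * math.sqrt(2)

print(peak_from_rms(120))   # about 169.7 volts
print(peak_from_rms(240))   # about 339.4 volts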
VOLTAGE REGULATION

Regulating the generator may be done with speed control (governing) or voltage regulation. Different approaches are used to control the speed of wind generators. Some use a centrifugally activated friction or air brake. Another technique is furling, where a tilt-back mechanism progressively turns the unit away from the wind as the wind speed increases. Yet another technique is to design the blades to change their pitch so that they will not race at higher wind speeds. This is effective but can also be noisy.

A conventional alternator is regulated by varying the field current to the field coil. Some wind generators have permanent magnets, so this technique is not available for controlling output. Several methods are used to regulate a wind generator's output. A circuit can be used to sense the battery voltage and control the current from the generator. Unless the generator has a governor, reducing the load can allow the generator to speed up. Some wind generators use this method with a furling tail that keeps the maximum speed below damaging levels. Other small generators use an air brake to control the speed.

A shunt regulator can divert the generator's output to another load as the battery comes up to charge. In such a charge-diverting regulator, output is normally shunted to a fixed resistor, but could be switched to another load and put to useful work or sold back to the power company.

DC generators require a diode in the positive cable to prevent a reverse drain from the batteries when the generator is not running. This diode is a part of many regulator circuits. Alternators already have this diode in the rectification circuit.
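The charge-diverting regulator amounts to a switch with hysteresis. A minimal sketch, assuming hypothetical setpoints for a nominal 12-volt battery bank:

def shunt_regulator(battery_v, diverting, divert_v=14.4, resume_v=13.2):
    # When the battery reaches the divert setpoint, switch the generator
    # output to the shunt (dump) load; reconnect once the voltage falls
    # back to the resume setpoint. The gap between the two setpoints
    # keeps the switch from chattering.
    if not diverting and battery_v >= divert_v:
        return True    # divert output to the shunt load
    if diverting and battery_v <= resume_v:
        return False   # resume charging the battery
    return diverting   # no change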
WIND AND WATER GENERATORS

Small wind and water generators are similar units with different propellers (impellers, turbines, vanes). A wind generator uses a propeller, or turbine, to convert wind energy to a rotating force that is used to operate a generator. In this conversion process a doubling of the propeller diameter produces a theoretical fourfold increase in generator output, and a doubling of the wind speed produces a theoretical eightfold increase in output (see Table 5-1).

Table 5-1. Wind Power Conversion Factors
(wind power = K × V³ × A)
————————————————————————————————
                                  Value of K to get wind power in
                                  Horsepower     Watts    Kilowatts
————————————————————————————————
Windspeed in mph,
rotor area in feet²                  0.006        4.3       0.004
Windspeed in feet/second,
rotor area in feet²                  0.700        5.2       0.52
Windspeed in meters/second,
rotor area in meters²                0.002        1.4       0.001
————————————————————————————————

At wind speeds of less than 5 knots there is not sufficient energy to produce any output from a wind generator. At 5 knots the more efficient generators will begin to trickle charge a battery. Less efficient designs will begin to generate when the wind speed reaches 7 knots.

The propeller may be used to drive an alternator or a DC generator. Both produce alternating current (AC) in their output windings, but an alternator rectifies this to DC with diodes, while a DC generator rectifies the output using a commutator and brushes. Small alternator types are made by Ampair, LVM and Rutland, and DC generator types are made by Neptune, Fourwinds and RedWing. Some models can also be used as water generators. Alternator types need no brushes to generate electricity and are nearly maintenance-free. In a DC generator the commutator and brushes require periodic maintenance.
The higher-output wind generators can be quite noisy. In strong winds the centrifugal forces developed by larger propellers (from 50 to 60 inches in diameter on up) can cause some units to self-destruct. Improved blade design, materials, and manufacturing tolerances, combined with methods to regulate the top speed of these generators, have eliminated these problems in some, but not all, generators.

NET METERING

Many states permit net metering for wind turbines. Net metering allows the sale of excess kilowatt-hours back to the utility and makes interconnecting a small wind turbine with the utility more attractive. Most states limit net-metering interconnections to 10-kW, although some states have a higher limit: Minnesota, 40-kW; Massachusetts, 30-kW; New Mexico and North Dakota, 100-kW. There is no limit in Iowa. In most states, net-metering regulations affect only investor-owned or regulated utilities, thus excluding customers connected to rural electric cooperatives. Eleven states offer net metering on both investor-owned utilities and rural co-ops. To find out if your state currently permits net metering, contact the American Wind Energy Association.

POWER QUALITY AND THE UTILITY

Before interconnecting a wind turbine with its lines, the utility may have some concerns about the power factor, voltage flicker, and harmonics produced by a wind machine. The utility may also require payment for reasonable costs arising from the interconnection. There are now several decades of experience with interconnected wind turbines. Wind turbines have operated more than three billion hours on the lines of electric utilities in Europe, the Americas, and Asia without creating electrical problems.

MAINTENANCE

Shaft movement up or down or from side to side indicates the need for bearing replacement. Fasteners in wind generators are subject to
vibration and sometimes work loose. Adding a drop of Loctite thread-sealing compound can help. To cure excessive vibration, if the turbine blades can be detached individually, remove opposite pairs, weigh them, and correct any differences. Replace as matched pairs and check the alignment.

The fiber-reinforced plastic blades in some small micro-generators degrade under UV exposure in sunlight. If the surface becomes ragged and powdery, the blades can be sanded and painted with a two-part polyurethane paint. The leading edge of some blades will wear down just from the impact of dust and rain. The blades should be kept smooth for maximum efficiency and noise reduction and recoated with epoxy or two-part polyurethane paint.

The pivot points on an air brake, furling, or tilt-back mechanism need to be free and lubricated. Generator brushes and brush springs are common points of failure. Alternators have brushes only on the slip rings. Brushes and brush springs should be checked periodically for wear, corrosion, and loss of tension. If brushes or springs are defective, the commutator or slip rings should be checked for burning or pitting.
WIND POWER TRENDS

A surge in the U.S. small wind turbine industry occurred in the early 1980s, backed by federal energy tax credits, state incentives and high electricity prices. The surge peaked in 1983, then subsided as energy prices fell, federal energy tax credits expired, and state incentives dropped off. By 1986, the main market was stand-alone or off-grid applications for remote homes. While filling this small domestic market, U.S. manufacturers expanded into overseas markets.

Since 1999, electricity prices have been rising. There has been increased concern about the security of energy supplies and the centralized generating facilities that produce these sources of energy. Many want some independence from the electric utilities. There are also increasing concerns about global warming. Many state and local governments now provide significant incentive programs that include reduced costs for small wind turbine systems. These incentives reached a total of $3.5 billion in 2001 for programs that include small wind turbines. These factors have resulted in increased interest in small wind
turbines connected to the utility grid. In 2001, the annual sales of the U.S. small wind turbine industry were estimated at 13,000 turbines valued at about $20 million. This is about the same level as sales in the early 1980s, but it is only about 2% of the total sales of large wind turbines in the United States. The success of the large wind turbine industry is due to the support from government programs and policies both at home and abroad. Support such as federal and state tax credits was discontinued in the mid-1980s for small wind systems. This led to a significant contraction of the industry and less momentum in technology and market developments.

Combined efforts from government and industry are increasing the contribution of small wind turbines to the generation mix. There is the potential for real contributions to the energy supply. It is projected that small wind turbines could contribute 3% of U.S. electrical consumption by 2020. While foreign companies may dominate the market for other renewable energy technologies, the U.S. small wind turbine industry is the leader in markets at home and abroad. Small wind turbines produce no environmental emissions while generating energy. The market for small wind turbines has been growing 40% per year.

The U.S. small wind turbine industry offers a variety of products for different applications and environments. The following range of products is available:

• Small generators of 400 watts (W)
• Mid-size generators of 3-15 kilowatts (kW)
• Larger generators of up to 100-kW for commercial operations
Small wind turbines can be effective in most of the rural areas of the United States. About 60% of the United States has enough wind for small turbines to generate electricity.
TECHNOLOGY ADVANCES

The newer small turbines have high reliability with only two or three moving parts and require relatively little maintenance. Continuous advances in the industry, with support from the U.S. Department of
Energy (DOE), have produced small wind turbines with advanced airfoils, super-magnet generators, smart power electronics, very tall towers, and low-noise operation. These features help to reduce the cost of producing electricity and have aided in the acceptability of wind power.

Small wind technology has been improving since the 1970s. Improvements have taken place in operating reliability, noise levels, and manufacturing and installation costs. Much has been done to incorporate advanced technologies and to enhance manufacturing. It is estimated that high-volume manufacturing alone could reduce costs 15-30%. Modern small wind turbines are not like the generators of the 1920s and 1930s. Today's small turbines use aerospace technologies to provide sophisticated, yet simple, designs that allow them to operate reliably for a decade or more without maintenance. As small wind turbine technology has matured, the products have become simpler and more robust. The small wind turbine industry uses advanced component technologies and newer design tools such as three-dimensional solid modeling and computational fluid dynamics. Turbines use high-efficiency airfoils, neodymium-iron-boron super-magnet generators, pultruded FRP blades, graphite-filled injection-molded plastic blades, special-purpose power electronics and tilt-up tower designs to lower costs and increase efficiency.

Among the goals and trends in wind power is a reduction in the cost of energy resulting from turbines that operate in low-wind areas, along with new technologies for low-cost towers and low-wind rotors. Turbine costs are expected to drop through improvements in the performance and efficiency of small wind turbines. Technology developments are projected to reduce tower and installation costs by using lower-cost foundation or anchoring systems for towers and automated processes for tower fabrication. Turbine reliability should improve with better test methods, including extreme event testing and extensive multi-year data on turbine performance, reliability, operation, and maintenance.

Small wind turbines will see increased participation as an option in domestic government programs. The Federal Energy Management Program will develop more small wind projects at federal facilities and promote small wind turbines for security and military operations.
Reduced manufacturing costs from the increased volume of production and improved manufacturing techniques will include advances in equipment and processes for the mass production of small wind turbine systems. Additional standards will be developed to address reliability, durability, longevity, noise, and power performance. Stronger, certified distribution channels and support should include generic installation and maintenance training programs for small wind turbines along with certification standards. A more consumer-friendly performance rating system will include an updated American Wind Energy Association (AWEA) performance standard harmonized with IEC 61400-12 for small wind turbines.

U.S. manufacturers of small wind turbines export more than 50% of their production and have a leading share of the world market. The foreign market for grid-connected wind turbines is driven by electricity prices more than double those in the U.S. About 2 billion people in the world do not have access to electricity for domestic, agricultural, or commercial uses. The traditional method of providing electricity by extending the distribution grid is too expensive and not suitable to the low consumption levels in developing nations. The number of homes without electricity has been increasing because the birthrate has outpaced the electrification rate. Small-scale renewable energy systems (wind, micro-hydro, and solar) are often less expensive to install than power line extensions. Small wind turbines are less expensive to operate and produce much less carbon dioxide per kilowatt-hour than diesel generators. Small wind systems can be used to provide power for loads from 500-W to 50-kW. Wind electricity can be used for charging batteries for power backup applications.

By 2020 small wind turbines should add 3%, or 50,000-MW, to the U.S. electric supply. The small wind industry should grow to a billion-dollar industry, with more than 10,000 employed in manufacturing, sales, installation and support. Rooftop wind turbines can help power electric loads and send power back to the utility grid. Small wind turbines will become another appliance and may be purchased at local home improvement stores. Installed wind turbines should have a 50-year life based on reliability improvements. Small wind turbines have a generating capacity of up to 100 kilowatts (kW), which requires a 60-foot rotor diameter.
WIND ENERGY

Energy and power are derived from the wind by using the force it exerts on solid objects. Windmill blades move in response to this force, and wind machines can extract a real portion of the energy and power available. The wind energy available in a unit volume (one cubic foot or one cubic meter) of air depends on the air density (D) and the instantaneous windspeed (V). This kinetic energy of the air in motion is expressed by the following relationship:

kinetic energy / unit volume = 1/2 × D × V²

To find the kinetic energy in a particular volume of air, multiply by that volume. The volume of air that passes through a surface, such as the area swept by the blades of a horizontal-axis windmill oriented at right angles to the wind direction, is:

volume = A × V × t

where:
t is the elapsed time (in seconds)
A is the area (in square feet or square meters)

The wind energy that flows through the surface during time t is then:

wind energy = 1/2 × D × V³ × A × t

The wind power is the amount of energy which flows through the surface per unit time, and is found by dividing the wind energy by the elapsed time t. The wind power is then:

wind power = 1/2 × D × V³ × A

Note that both the energy and power are proportional to the windspeed cubed. If all the available wind power on the rotor could be utilized, this formula could be used directly to calculate the power. But the windspeed near the blades is affected by their motion. The maximum power that can be extracted from the wind is only about 60% of the power available.
In actual practice, a wind machine extracts even less power than this maximum. The rotor may capture only 70% of that maximum. Bearing friction will take a few percent, and gears and other losses may consume half of the remaining power. The final output is thus well below the power actually available in the wind; the overall factor ranges from 0.10 to 0.50.
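These relationships are easy to evaluate directly. A sketch in metric units (the 10 m² rotor, 8 m/s wind and the individual loss factors are illustrative values, with air density taken as about 1.22 kg/m³ at sea level):

def wind_power_w(v_mps, area_m2, density=1.22):
    # Power flowing through a swept area: P = 1/2 x D x V^3 x A, in watts
    return 0.5 * density * v_mps ** 3 * area_m2

available = wind_power_w(8.0, 10.0)
# Chain of losses described in the text: ~60% extractable maximum,
# 70% rotor capture, a few percent bearing friction, half to gearing.
output = available * 0.60 * 0.70 * 0.97 * 0.50
print(f"{available:.0f} W in the wind, about {output:.0f} W delivered")
print(f"overall factor {output / available:.2f}")  # lands in the 0.10-0.50 range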
WIND ROTORS

The power supplied by a wind machine (P) depends on the windspeed (V), the rotor area (A), the air density (D) and the system efficiency (E):

P = 1/2 × D × V³ × A × E

The rotor area can be expressed as:

A = P / (E × F × Ca × Ct)

where:
F is a factor that depends on windspeed (see Table 5-2)
Ca and Ct are correction factors for the air density at other altitudes and temperatures

This equation gives the area in square feet when the power P is expressed in watts. If P is in horsepower, multiply A by 0.737. If you are purchasing a factory-built machine and know its system efficiency, this formula can tell you if its frontal area is suited to your power needs.

To size a three-bladed propeller machine to produce 2000 watts in winds that peak at 15 mph, start with the system efficiency. Small propeller-type systems have an efficiency of 15 to 30% (see Table 5-3); use 25%, for an E of 0.25. F = 17.30 at 15 mph, and Ca = Ct = 1 at sea level, for standard temperature (60°F):

A = 2000 / (0.25 × 17.30 × 1 × 1)

This gives an area of 462 square feet and a rotor diameter of about 24 feet.

Table 5-2. Windspeed Factor F
————————————————————————————————
V (mph)          F
————————————————————————————————
6               1.1
8               2.6
10              5.1
12              8.9
14             14.1
16             21.0
18             30.0
20             41.0
22             54.6
24             71.0
26             90.1
28            112.6
30            138.4
————————————————————————————————

Table 5-3. Wind System Efficiency
————————————————————————————————
                                 Efficiency (%)
————————————————————————————————
Multibladed farm type                10 - 30
Savonius windcharger                 10 - 20
Small prop type (to 2-kW)            20 - 30
Medium prop type (2-10-kW)           20 - 30
Large prop type (over 10-kW)         30 - 45
Darrieus wind generator              15 - 35
————————————————————————————————
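The sizing example can be reproduced directly from the area formula. A sketch (F is read from Table 5-2, and the sea-level correction factors are taken as 1, as in the example):

import math

def rotor_area_ft2(p_watts, e, f, ca=1.0, ct=1.0):
    # A = P / (E x F x Ca x Ct); area in square feet for P in watts
    return p_watts / (e * f * ca * ct)

area = rotor_area_ft2(2000, 0.25, 17.30)    # the worked example above
diameter = 2 * math.sqrt(area / math.pi)    # diameter of the swept circle
print(f"area {area:.0f} ft^2, diameter {diameter:.1f} ft")
# -> area 462 ft^2, diameter about 24 ft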
WIND TURBINE SIZES

Wind turbines range in size from micro-turbines like the 20-watt Marlec 500, with a rotor only 0.5 meters (1.7 feet) in diameter, to large units like Vestas's 1,650-kW machine with a rotor spanning 63 meters (200 feet) (see Table 5-4).
Table 5-4. Characteristics of Small Turbines
————————————————————————————————
Manufacturer/      Rotor Diameter    Area     Rated Power   Rated Wind
Model              (m)/(feet)        (m²)     (kW)          Speed (m/s)
————————————————————————————————
Marlec/500         0.5/1.7           0.20      0.02         10.0
LVM/Aero4gen       0.86/2.8          0.58      0.6          10.3
Ampair/100         0.9/3.0           0.66      0.05         10.0
Aerocraft/120      1.2/3.9           1.13      0.12          9.0
Southwest 503      1.5/5.0           1.8       0.5          12.5
LVM/Aero8gen       1.6/5.1           1.9       0.22         10.3
Marlec/1800        1.8/5.9           2.5       0.25          9.4
World/H900         2.1/7.0           3.58      0.90         12.5
Bergey/1500        3.1/10.0          7.31      1.5          12.5
World/3000         4.5/14.8         16.0       3.0          11.0
Vergnet/6.5        6.0/20.0         28.3       5.0          10.0
Bergey/PD          7.0/23.0         38.5      10.0          12.1
Wind/29-20         8.8/29.0         61.4      20.0          11.6
————————————————————————————————

Size classifications depend upon both the diameter of the rotor and the capacity of the generator. Typically, small wind turbines are machines producing from a few watts to 10-20-kW. Wind turbines at the upper end of this range are driven by rotors 7-9 meters (23-29 feet) in diameter.

Small wind turbines can be divided further into micro wind turbines (the smallest of small turbines), mini wind turbines, and home-size wind turbines. Micro turbines are those from 0.5-1.25 meters (about 2-4 feet) in diameter. These machines include the 20-watt Marlec as well as the 300-watt Air 303 from Southwest Windpower. Mini wind turbines are slightly larger and span the range between micro turbines and home-size machines. They range in diameter from 1.25-2.75 meters (4-9 feet). Turbines in this group include World Power Technologies' H500 at 500 watts and Bergey Windpower's 850-watt unit. Home-size wind turbines are the largest of the small wind turbines. They range from turbines as small as World Power's Whisper 1000, with a rotor 2.7 meters (9 feet) in diameter, to the Bergey Excel, which uses a rotor 7 meters (23 feet) in diameter and weighs more than 1,000 pounds (463 kilograms).
WIND TURBINE RATINGS The turbine’s rated wind speed is the wind speed at which the wind turbine produces the power indicated on its nameplate. The peak power is usually higher than rated power. At a given speed, the wind turbine begins governing or limiting the power it produces. In the case of most small wind turbines, the rotor begins furling, or turning out of the wind which reduces the power produced. The Bergey 850 starts generating at 8 mph (3.6 m/s) and reaches its rated power at 28 mph (12.5 m/s). The rotor furls, or folds, toward the tail vane at 33 mph (14.7 m/s). At a wind speed of 18 mph (8 m/s), the Bergey 850 produces 327 watts. In a 12 mph wind influence with a Rayleigh distribution, an 18 mph (5.5 m/s) wind speed will occur 294 hours per year. At this speed, the turbine produces 0.33-kW × 294-h, about 100-kWh per year. Across the entire speed range, the Bergey 850 will generate about 1,400-kWh per year at this site. World Power Technologies estimates that its Whisper 500, which uses a 2.1 meter (7 foot) diameter rotor, will produce about 100-kWh per month at a site with a 12 mph average wind speed, or about 1,200-kWh per year. All this energy may not be put to use. Batteries that are fully charged may not be able to take more energy on a very windy day. Some of the energy that is stored in the batteries is lost due to the inefficiency of battery storage. Additional losses occur in an inverter to convert DC to AC. Only about 70% of the energy delivered to the batteries may be used for power.
ROTOR CONTROL

Wind turbines work in a demanding environment. The heavier small wind turbines have proven to be more rugged and dependable than lightweight machines. Most wind turbines have some means for controlling the rotor in high winds. The smaller wind turbines do not use the yaw motors and mechanical drives of the bigger upwind turbines. Most small wind turbines use tail vanes to point the rotor into the wind. Micro and mini wind turbines furl, or fold, so that the rotor swings toward the tail vane. Some furl the rotor vertically; others furl
the rotor horizontally toward the tail. Some home-size turbines pitch the blades, and others even pitch the blades and furl the rotor. To furl the rotor in high winds, the rotor axis is offset from the furling axis. In high winds, thrust on the rotor overcomes the force keeping the rotor into the wind and swings the rotor toward the tail vane. The wind speed at which furling occurs is a function of the hinge between the tail vane and the body of the turbine. In vertical furling, high winds tilt the rotor up, where it resembles a helicopter rotor. A spring or shock absorber is used to dampen the rate at which the rotor returns to its normal position.
MICRO TURBINES

Micro turbines can generate about 300-kWh per year at sites with average wind speeds of 5.5 m/s (12 mph), similar to those found on the Great Plains. When comparing wind turbines, compare rotor diameter first. For power ratings, always consider the wind speed at which the turbine is rated. There is almost three times more power in the wind at 12.5 m/s (28 mph) than at 9 m/s (20 mph). Only at the windiest locations do wind turbines operate for any length of time in wind speeds above 12 m/s (27 mph). Wind turbines installed in wind regimes where most people live spend most of their time operating in winds less than 12 m/s, generating less than rated power.

British manufacturer LVM builds a line of micro and mini wind turbines. They have been building micro wind turbines under the Aerogen trade name with high power-density rare-earth magnets since 1985. Most of LVM's multiblade turbines are used in the marine market; other applications include electric fencing. LVM's micro turbines are used to mark the channel to Plymouth harbor in southwest England.

Marlec Engineering's Rutland brand micro turbine is a multiblade machine. Land-based versions use a furling tail while marine versions use a fixed tail. Marlec's Rutland 913 uses a pancake generator design that seals the generator in plastic. This is an axial field generator, in contrast to the more common radial field designs used in automotive alternators or magnet-can alternators.

The Ampair 100 is a small multiblade turbine. The shaft-driven
alternator has two six-pole permanent-magnet rotors inside a cast aluminum body. The two six-pole stators are staggered 30 degrees to minimize cogging in low winds.

The Southwest Windpower Air 303 has high-performance airfoils that use blade flutter to limit the power in high winds. An industrial version uses cooling fins to keep the generator from overheating. The Air 303 uses three different windings. In low winds, a light winding with thin wire is used for power production. At higher wind speeds, windings with fewer turns of thicker wire are used to deliver more power as it becomes available.

Aerocraft is a German manufacturer that offers micro, mini, and home-size turbines from 120 watts to 5-kW. The smaller units use vertical furling to control rotor speed and the bigger models use pitch weights to regulate the rotor.
MINI TURBINES

Mini wind turbines include Southwest Windpower's Windseeker and Proven's WT600. These turbines have rotors of 1.5-2.6 m (5-8.4 feet) and can produce 1,000-2,000-kWh per year at 5.5 m/s (12 mph) sites.

World Power Technologies has a line of wind turbines from 500-W to 4.5-kW. Each product comes in two versions: a standard two-blade model and a three-blade high-wind version. The heavier three-blade rotor stays pointed into the wind longer before it begins to furl, allowing it to capture more energy in high winds than the two-bladed version. The H500, H600 and H900 models use injection-molded, polycarbonate blades with fiberglass reinforcement. The bigger World Power turbines use carbon fiber reinforcement in their composite blades. World Power uses an angle governor for control of the turbine in high winds.

In the Bergey Windpower units, the blades are part of the generator case, which turns around the stator. Bergey Windpower uses a rotor design where the blades are set at a high angle of attack for starting the turbine in low winds. As the rotor speed increases, a weight twists the flexible fiberglass blades. This changes blade pitch towards the optimum running position. As the rotor speed increases, centrifugal force stiffens the blades.
HOME-SIZE TURBINES

Home-size turbines are suitable for homes, farms, ranches, small businesses, and telecommunications. They range from 1-kW to 20-kW. These wind turbines can generate from 2,000-kWh to 20,000-kWh per year at 5.5 m/s (12 mph) sites, like those on the Great Plains.

The Proven machines use a direct-drive, permanent-magnet alternator. They use a flexible hinge between the blades and the hub that allows the blades to change pitch with rotor speed.

Wind Turbine Industries' turbines are large machines that can be used in utility intertie applications. Wind Turbine Industries uses designs and parts from the now defunct Jacobs Wind Energy Systems. The original Jacobs turbines go back to the 1930s; the new Jacobs units are radically different. The 23-foot (7-meter) model drives a 17.5-kW alternator mounted vertically in the tower with a hypoid gearbox. The 1100-rpm, four-pole alternator produces 40-hertz, three-phase AC, which is then rectified and inverted for delivery to the utility. The fiberglass blades are made using a resin transfer process.
WIND POWER PROJECTS

As part of an aid project in Indonesia, World Power and NRG Systems installed 50 Whisper 600s on NRG's 45-foot (14 m) towers. The off-the-grid package was designed to provide 100-kWh per month at sites with a 12 mph (5.5 m/s) average annual wind speed.

In North America, utilities in rural areas are connecting small wind turbines to the grid. Now that an extensive network of lines exists, utilities have found it expensive to maintain them. Utilities on America's Great Plains are finding that small wind turbines can offset some of the cost of upgrading or maintaining rural lines.

At the University of Massachusetts, a wind generator 33 feet in diameter is mounted on a 50-foot steel pole tower. The electrical output of this machine is used to warm water, which is combined with heated water from solar collectors mounted on a south-facing wall. Concrete tanks in the basement store this hot water. The fiberglass blades are capable of producing 50 horsepower (37 kilowatts) in a 26-mph wind. Shaft power from the blades drives a generator with an electronic load controller that senses the windspeed and rotor rpm and applies the required current to the field windings of the generator. This prevents overloading or underloading of the blades.
EUROPEAN WIND GENERATION

The small turbine industry in Denmark evolved much the same way as it did in North America. The first turbine using an induction generator was connected to the grid in the mid-1970s. The small turbines of Danish manufacturers have gradually grown larger, becoming the core of the world's commercial wind power industry. Hundreds of turbines from the 1970s and early 1980s are still operating in Denmark. More than 80% use induction generators and are interconnected with the utility. They represented almost 3 megawatts of capacity, about 5% of that installed in Denmark in 1998. About half of these early turbines are Kuriants, with a 12-meter (40-foot) three-blade rotor downwind of the tower. Most Danish turbines are upwind designs. Kuriants also use guyed towers, a feature rarely seen in Europe.

In Denmark a wind system with cogeneration capabilities has been built at the Tvind School on the Jutland peninsula. The Tvind machine is rated at 2 million watts (2 megawatts) in a windspeed of 33 mph. The fiberglass blades on this machine span a diameter of over 175 feet. Each blade of the three-bladed unit weighs about 5 tons. The Tvind machine was designed to supply the school with its electrical and heating needs and to supplement the grid lines using a synchronous inverter. Alternating current from the generator is rectified to direct current and then reconverted to grid-synchronized AC for the school's electrical system. The synchronous inverter is rated at 500-kW. The power from high winds is used to heat water; the electricity is fed to coils immersed in a hot-water storage tank. This type of wind furnace is an application of wind power to raise the temperature of water for agricultural and industrial applications. Most of these applications require more heat when it is windy than when it is calm.

Early Danish turbines were about the same size as those being installed in the United States: 7-kW, 10-kW, and 15-kW. In Denmark, and since 1991 in Germany, wind turbine owners have been paid a fair price for the electricity they produce. Germany's Electricity Feed Law, the Stromeinspeisungsgesetz, requires utilities to buy wind-generated electricity at 90% of the retail rate. Danish utilities are required to pay 85% of the retail rate. The Danish government also refunds a carbon dioxide tax collected on all electricity generated in the country. The result has been a steady growth of distributed wind turbines in Germany and
Denmark, with several thousand wind turbines installed.

In the future, small wind turbines will be accepted as common appliances, similar to the way that heating and air-conditioning systems are purchased and installed today. High market penetration rates require that small wind turbines be designed to work effectively in low-wind-resource areas. These turbines need relatively larger rotors to capture more wind energy. But they must also be robust, because even areas with low to average wind speeds can experience severe weather. These turbines must be extremely quiet, so that they are not heard above the local background noise. They should be able to operate for 10 to 15 years between inspections and/or preventive maintenance, and they should offer a life expectancy of 50 or 60 years.

Advances in small wind turbines include major improvements in small turbine manufacturing and more efficient installation techniques. The U.S. Department of Energy (DOE) and the National Renewable Energy Laboratory (NREL) have been accelerating the development and adoption of new small wind turbine technology and manufacturing techniques. Large wind turbines are in their seventh or eighth generation of technology development, while small wind turbines are only in their second or third.

The industry is striving to reduce the cost of electricity generated by small wind turbines. In 2002, typical 5- to 15-kW residential wind turbines cost about $3,000 per installed kilowatt. These turbines produce about 1,200-kWh per year of electricity per kilowatt of capacity in a DOE class 2 wind area. By 2020, the installed cost should be $1,200 to $1,800 per kilowatt. Smaller systems are relatively more expensive. The productivity should rise to 1,800-kWh per installed kilowatt. The 30-year life cycle cost of energy will then be in the range of $0.04 to $0.05/kWh. This is lower than almost all residential electric rates in the country today.

The attractiveness of small wind turbines to consumers depends on meaningful, cost-effective standards and certification programs. There have been some instances of exaggerated claims, and the standards and certification programs for large wind turbines are not appropriate for small wind turbines. Appropriate standards for small wind turbines are under development. Related standards, such as electrical grid interconnection standards, are also needed and should reduce the costs of owning a small
wind turbine.

The engineering of wind machines involves aerodynamics, structures, controls, electrical conversion, electronics and corrosion prevention. Government/industry collaboration should take place at national laboratories and universities to provide better wind machines. Applied research is also being conducted at the facilities of small wind turbine companies with support from the government. Other applied research projects involve companies, universities, and national laboratories. This research and development by industry, research institutes, state and local governments and DOE will increase the contribution of small wind turbines to the power generation mix.

In 2001, about 13,000 small wind turbines were manufactured in the United States. More than 50% of these were exported. It is expected that both the domestic and foreign markets for small wind turbines will continue to grow. Wind may generate 3% of total U.S. electrical demand by 2020 and 6-8% of residential electricity demand, which corresponds to a small wind turbine generating capacity of 50,000-MW. A study by A.D. Little sponsored by DOE projected almost 4 million small wind systems installed in grid-connected applications.

A large market for small wind turbines is in rural areas where wind-generated electricity can reduce utility bills. In 2000, American homes used over one trillion kWh, more than one third of total electricity sales. By 2020, there will be more than 15 million homes with 1/2 acre or more of land and sufficient wind to install a small wind turbine. If each of these homes installed a 7.5-kW machine, the total generating capacity would be more than 100,000-MW. About two million mid-sized commercial buildings could be outfitted with wind turbines of 10 to 100-kW. Many public facilities such as schools and government buildings could also use small wind turbines. Industrial and commercial customers who are connected to the utility grid can use wind power for backup generation. Where the utility grid is not available, stand-alone or hybrid systems may provide electricity for homes, communities, water pumping, and telecommunications services. The Energy Information Administration (EIA) estimates that there are about 200,000 off-grid homes in the U.S. Many communities that are remote and isolated produce their electricity
with diesel or gasoline generators. Alaska has over 90 villages powered by diesel generators, serving a population of over 40,000 people. Several hundred other remote facilities are powered by diesel generators that range from 2 to 250-kW. New wind-electric water pumping systems allow the turbine to be located where there is good exposure to the wind; it does not have to be located near the well and pump. Remote broadcast facilities tend to use hybrid systems that combine generation from solar, wind, and diesel systems. These markets could contribute up to 25,000-MW of generating capacity by 2020. The total installed capacity for small wind turbines in 2020 could be 140,000-MW.

According to the EIA, the total generating capacity in the U.S. in 1999 was almost 750,000-MW, and the projection for 2020 is over 1,000,000-MW of generating capacity and 4,800 billion kWh in demand. Fifty gigawatts (50,000-MW) of small wind turbines in 2020 would produce an estimated 132 billion kWh of clean electricity per year, or approximately 3% of projected total U.S. demand. At this level of capacity, small wind systems would be providing 6-8% of residential electrical demand. The EIA forecasts that residential demand will be over 1,700 billion kWh by 2020.

A growth in the domestic market from the current installed capacity of 15-18-MW to 50,000-MW in 2020 means a doubling of the market each year for several years and then a sustained sales growth of 50-55% per year. The domestic small wind turbine industry would then reach annual sales of $1 billion and employ approximately 10,000 people by 2020. Off-grid sites could contribute up to 25,000-MW of generating capacity by 2020.

References

Gipe, Paul, Wind Energy Basics, White River Junction, VT: Chelsea Green Publishing Company, 1999.

Park, Jack, The Wind Power Book, New York: Van Nostrand Reinhold, 1981.

Halper, Mark, "More Power to You: Alternative-energy Technologies Could Soon Give Your Phone, Your Car, and Your House their Own Microgenerators," Time, Vol. 162, Bonus Section, January 2004, i24, p. 10.

Winebrake, James J., Ph.D., Editor, Alternate Energy: Assessment and Implementation Reference Book, Lilburn, GA: The Fairmont Press, Inc., 2004.
Chapter 6
Distributed Generation, Clean Power and Renewable Energy

The California energy crisis of 2001 transformed the state's outlook on energy. Suddenly, terms and issues like stranded costs, renewable portfolio standards and exit fees appeared in the daily papers. The regulatory atmosphere also changed: electricity transmission and capacity constraints made state public utilities commissions and utilities start to promote self-generation. Incentive programs in several states have been started or expanded, and customers are more willing to consider technologies previously considered too expensive or complex.
CHP AND PV

Historically, combined heat and power (CHP) systems were considered only for very large customers or a few very specific facility types such as hospitals and municipal swimming pools. Solar PV systems and fuel cells were not considered at all. This attitude has changed as the new energy climate emerges.

The terms used to describe these systems include customer-sited generation, self-generation, distributed generation, distributed resources, distributed energy, combined heat and power, cogeneration, renewable energy, clean power, and green power. Self-generation (SG) can be used to describe any technology that is sited at a customer's facility and produces power primarily or exclusively for use on-site. This term is used in the California incentive program and is less ambiguous than some of the other terms.

The SG market ranges from 1-kW residential PV systems to 40-MW
gas turbine cogeneration plants serving universities. Systems of about 50-kW to about 1-MW are large enough to be cost-effective and small enough to be appropriate for many end-users. The primary technologies in this area are PV, reciprocating engines, fuel cells, and microturbines. Most of the CHP activity has historically been in facilities with large electrical and thermal loads. In these large systems, the primary technologies are gas turbines, large reciprocating engines and steam turbines.

The market for commercial-scale PV is growing rapidly. Powerlight is a large-scale PV systems integrator with grid-connected PV installations that have seen an annual growth rate of 55% for the last 5 years. The CHP market in the 50-kW to 1-MW range may have contracted slightly over the last few years. Customer concerns over high gas prices, combined with the economy, may be causing customers to postpone decisions.

SG activity in the California Self-Generation Incentive Program is summarized in Table 6-1. The total capacity of projects that were active as of January 2003 is 105-MW. New York provided incentives in 2001-2002 that supported almost 16-MW of CHP installations of 1-MW and below. Almost 1-MW of PV systems larger than 10-kW were installed under the New Jersey Clean Energy Program. Fuel cells have been a much smaller part of the SG market.

Table 6-1. California Self-generation Projects
————————————————————————————————
                Complete    In Work    Average Size (kW)
————————————————————————————————
PV                 21         168            170
Fuel Cells          1           3            400
IC Engines          7         119            540
Microturbines       5          50            200
————————————————————————————————

The California SG program is open to systems between 30-kW and 1-MW. ESCOs are interested in projects that can be installed profitably, and these projects tend to be large. The typical PV project size is increasing. A 300-kW PV installation, with a capital cost of about $2.5 million, is no longer an unusually large system.
The PV industry will continue to grow, with estimates that the grid-connected PV market will exceed $3 billion by 2010. The PV industry goal is 25% annual growth in the total capacity of panels produced domestically.

According to the Energy Information Administration, there is a market for 17-GW of building-sited CHP by 2010. About half of this is in existing facilities as opposed to new construction, with installations of less than 1-MW. Office buildings are nearly half of this estimate, with hospitals and colleges accounting for the rest of the building-sited applications. Much of the growth in office buildings involves absorption chillers. The Federal Energy Management Program (FEMP) estimates that 9% of federal facilities could install CHP systems with an average simple payback period of 7 years.

A CHP market study for New York showed the potential for almost 20,000 new CHP installations in commercial facilities with systems less than 1-MW. The total project capacity was about 3-GW in this size sector. The projections are for about 200-MW in sub-1-MW commercial installations by 2012. This compares with about 50-MW of existing sub-1-MW installations, about 1% of the installed base of CHP in the state.

Major barriers to self-generation include interconnection, air pollution permits, building permits and the installation of net meters.
INTERCONNECTION

Interconnection remains an important barrier in California in spite of statewide efforts to streamline the process. In states without interconnection standards, an inflexible local utility can make it nearly impossible to install any type of SG. Charges for standby power and exit fees can be a problem. In New York, negligible growth in systems of less than 500-kW is predicted, but significant growth is expected if regulatory barriers are lowered.

Net metering laws are more of an issue in residential systems. Unlike commercial buildings, in residences the time of peak solar output may correspond with low building loads, since no one may be home. Some states have net metering laws that apply to all systems, but many apply only to residential-sized systems.

In spite of the barriers, significant SG activity is occurring. The
reasons that customers install SG projects depend on the type of technology. In California, a survey to determine the factors that were influential showed that savings were the major reason. Table 6-2 shows average responses on a scale where 10 is most influential.

Table 6-2. Self-generation Factors
————————————————————————————————
                Savings    Environment    Image
————————————————————————————————
PV                 4            4           3
Fuel Cells         4            5           4
IC Engines         5            3           3
Microturbines      5            3           2
————————————————————————————————

Reciprocating engines and microturbines are not viewed as being as green as PV and fuel cells. Reciprocating engines and microturbines are installed primarily because they are a cost-effective way to reduce utility bills. Concern for the environment ranked higher than improving the image of the business for all but reciprocating engines.
INCENTIVE PROGRAMS

Incentive programs are important in the increased activity of commercial-scale PV projects. Several states have programs for residential systems and a smaller group have commercial programs. Commercial-sized programs are offered by California, Connecticut, Delaware, Illinois, Massachusetts, New Jersey, New York and Rhode Island. Program rules, incentive levels, and funding availability change frequently.

The California Self-Generation Incentive Program was authorized to spend $125M annually through 2004. Systems between 30-kW and 1-MW are eligible. The incentives include up to 50% of costs (approximately $4/watt), a 15% state tax credit for systems of less than 200-kW, and waived standby and exit fees. An additional $100M was available through a separate program for systems less than 30-kW.

The New Jersey program had a tiered rate structure where smaller systems received a higher incentive rate than larger projects. The incentive program covered up to 50% of project cost with the following tiered structure:
• $5.50/watt for the first 10-kW
• $4.00/watt for the next 90-kW
• $3.75/watt for the next 400-kW
• $0.30/watt for 500-1000-kW
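Applying a tiered schedule like this is a simple bracket calculation. A sketch of the New Jersey structure above, with the 50%-of-cost cap applied at the end (the 300-kW system and project cost are illustrative):

# ($/watt, tier width in watts) pairs from the schedule above
TIERS = [(5.50, 10_000), (4.00, 90_000), (3.75, 400_000), (0.30, 500_000)]

def nj_incentive(system_watts, project_cost):
    remaining, total = system_watts, 0.0
    for rate, width in TIERS:
        w = min(remaining, width)   # watts paid at this tier's rate
        total += rate * w
        remaining -= w
        if remaining <= 0:
            break
    return min(total, 0.5 * project_cost)   # capped at 50% of project cost

# A 300-kW system: 10 kW at $5.50 + 90 kW at $4.00 + 200 kW at $3.75
print(nj_incentive(300_000, 2_500_000))     # -> 1165000.0, under the cap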
Oregon has two incentives available. The Energy Trust of Oregon provides an incentive for commercial PV systems less than 25-kW at $1.75/watt, but it caps out at only $20,000. There is also a Business Energy Tax Credit (BETC) for 35% of the installed cost. The BETC program has a pass-through option where tax-exempt entities can pass the credit to a partner that has a tax liability.

New York has several programs for PV, with incentives of $4 to $5/watt for installations under 15-kW and $5/watt for installations over 15-kW. There is also a state tax credit for some installations and a new construction incentive for building-integrated PV. Tax credits supplement cash incentives from utilities and include a Federal Investment Tax Credit that allows customers to take 10% of the installed system cost against their federal tax burden. Several states also have tax credits, with Oregon's being the highest.

The federal government has been the main supporter of fuel cell projects. In energy efficiency and PV, most direct support has been at the state and utility level, while federal support was largely limited to research and education. The Climate Change Fuel Cell Program provides up to $1.00 per watt and 33% of project cost. States that have programs to support fuel cells include California, Connecticut, Illinois, Ohio, Massachusetts, Michigan, New Jersey, New York and Oregon. The California program offers an incentive of $2.50 per watt up to 40% of eligible costs. The New Jersey Clean Energy program originally offered incentives higher than California's for fuel cells powered by natural gas, but changed its program to cover only fuel cells operating on renewable fuels such as landfill gas.

Incentive programs for technologies other than PV and fuel cells are few. There is some support for reciprocating engine and microturbine CHP in California, New York, and Oregon. California provides incentives of $1.00 per watt that cover up to 30% of the cost of microturbines and reciprocating engines running on natural gas. Oregon has a Business Energy Tax Credit that allows a credit for 35% of the installed cost of systems that exceed 56% total efficiency. This is not the standard PURPA efficiency, which includes only half of the thermal output. New York provides incentives for CHP
demonstration projects of up to $1M per installation that can cover up to 50% of the project cost; however, these projects are selected in a competition. Reciprocating engine and microturbine projects require less support in order to be cost-effective. Even without incentives, payback periods can be short in CHP applications where a large percentage of the waste heat is effectively recovered. This is less true of fuel cells.

Table 6-3a shows the possible incentives for a 100-kW PV array installed on a flat office building, a popular PV application. The table compares the installation in California, New Jersey and New York. The customer is a for-profit enterprise with a tax liability. The installed cost is $8,000 per kW with a projected electricity price increase of 2% per year. See Table 6-3b.

Table 6-3a. 100-kW PV Incentives
————————————————————————————————
              State        State         Federal       Net
              Incentive    Tax Credit    Tax Credit    Cost
————————————————————————————————
California    $400,000     $54,000       $40,000       $306,000
New York      $500,000     $13,500       $30,000       $256,500
New Jersey    $415,000     None          $38,500       $346,500
————————————————————————————————

Table 6-3b. 100-kW PV Costs
————————————————————————————————
              Projected    First Year    Simple      Net Present
              kWh/year     Savings       Payback     Value
————————————————————————————————
California    162,000      $31,000       10 years    $81,000
New York      108,700      $12,300       20 years    $8,900
New Jersey    113,500      $9,200        38 years    $(88,000)
————————————————————————————————

While the simple payback period can be long, the Net Present Value of the investment was positive in both California and New York due to the life of the equipment and its depreciation. The NPV goes positive for the New Jersey case if the cost drops to $6,500 per kW and the electricity price escalation rises to 3.5% per year. New York shows less savings than California, in spite of New York's more generous incentive, due to less sunshine and lower electricity rates. All three projects
included large incentives, which shows that the economics are difficult in states without them. PV projects do not have short paybacks compared to 4-year-payback lighting projects. The benefits commonly used to sell PV are utility cost savings, reduced exposure to price increases, reliability, low maintenance, no emissions and an output coincident with the highest power prices. There is also some public relations value and the potential to fulfill organizational goals for self-sufficiency during power outages.

Many corporations and public agencies have established sustainability goals and policies. Johnson and Johnson has a goal of reducing its carbon dioxide emissions to a level 7% below its 1990 level. Cities like San Francisco and San Diego have made commitments to increase their use of renewable energy. Installations include PV systems at fire houses and a 1-MW system on the roof of the convention center in San Francisco. PV is viewed as more of a contributor to sustainability goals than energy efficiency measures or CHP. With their low emissions, fuel cells are viewed as almost as green as PV.

PV provides a way to reduce exposure to price volatility. Installing a PV system represents a way to prepay a part of future electricity use at a known price. The volatility in the total utility cost for the facility is thus reduced. The generating nature of PV encourages buyers to think in terms of the price of generated power. Dividing the installed cost of a PV system by the lifetime energy production yields costs of produced power at less than $0.10/kWh. This price compares favorably with the current cost of utility power and is a reasonable price to lock in for a portion of electricity needs.

SG projects can be analyzed using the following economic measures: simple payback, cost of produced energy, net cash flow forecast, life cycle cost and net present value. Most analyses rely on cash flow projections and estimates of the cost of produced power. The cost of produced power is especially helpful in marginally cost-effective projects.
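These measures are straightforward to compute. A sketch using the California case from Tables 6-3a and 6-3b (the 25-year life and 5% discount rate are illustrative assumptions, so the NPV result differs from the table's figure):

def simple_payback(net_cost, annual_savings):
    return net_cost / annual_savings

def cost_of_energy(net_cost, annual_kwh, life_years):
    # Net installed cost divided by lifetime energy production, $/kWh
    return net_cost / (annual_kwh * life_years)

def npv(net_cost, annual_savings, life_years, rate=0.05, escalation=0.02):
    # Present value of escalating savings, less the net installed cost
    pv = sum(annual_savings * (1 + escalation) ** y / (1 + rate) ** (y + 1)
             for y in range(life_years))
    return pv - net_cost

print(simple_payback(306_000, 31_000))        # about 10 years
print(cost_of_energy(306_000, 162_000, 25))   # under $0.08/kWh
print(npv(306_000, 31_000, 25))               # positive over the system life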
PV AND EFFICIENCY

Many projects combine energy efficiency and PV. There are several reasons for this approach. A long-payback item like PV is aided by
packaging the measure with shorter-payback measures like lighting to improve the economics of the project. Many of the costs in a performance contracting project are independent of the number of energy conservation measures (ECMs) and the cost of the package. Adding an SG project to an energy efficiency-based performance contract increases the total value of the project. Longer loan terms are used for PV, due to the longer expected service life of the equipment. With 25-year warranties on the PV panels, a 15-year loan term is not unusual. Increasing the loan term of a combined efficiency and PV project to 15 years offers real cash flow benefits.

Consider an energy efficiency performance contract project with a $1M installed cost, a 5-year simple payback and a 10-year loan. Now, add a 100-kW solar PV system and extend the loan term for the combined project to 15 years. Without considering the tax implications, the economics are summarized in Tables 6-4a and 6-4b.

Table 6-4a. Combined Project Payback
————————————————————————————————
              Annual       Initial       Simple
              Savings      Cost          Payback
————————————————————————————————
Efficiency    $200,000     $1,000,000    5
PV            $12,400      $256,500      21
Combined      $212,400     $1,256,500    6
————————————————————————————————

Table 6-4b. Combined Project Cash Flow
————————————————————————————————
              Annual Loan   Net Cash Flow
              Payment       (15-year loan)
————————————————————————————————
Efficiency    $147,400      $76,600
PV            $26,200       $(13,900)
Combined      $138,000      $106,900
————————————————————————————————

The net cash flow is higher in the combined project than in the efficiency project alone. This occurs in spite of the long simple payback
period of the PV project. The improved cash flow results from the longer loan term. PV prices are falling as production increases. The installed cost of a system is now only about 10% of what it was in 1975. There are estimates that the installed cost of PV systems will drop by 50% in 6 years and by 75% in 12 years. These decreasing costs would allow 10-year paybacks on PV projects in California without the current 50% incentive and could bring post-incentive paybacks down to 10 years in many other states. SG technologies are also helped by the ongoing efforts to reduce the time required for interconnection and permitting. The cash flow effect of the longer loan term can be checked with a simple amortization calculation, as sketched below.
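To illustrate the loan-term effect, here is a minimal amortization sketch in Python. The 7% interest rate is an assumption for illustration (the book does not state the rate behind Table 6-4b), although it roughly reproduces the $138,000 combined payment shown there.

    # Annual payment on a loan via the standard amortization formula:
    # payment = principal * i / (1 - (1 + i)**-n), assuming a 7% rate.

    def annual_payment(principal, years, rate=0.07):
        return principal * rate / (1 - (1 + rate) ** -years)

    def net_cash_flow(annual_savings, principal, years, rate=0.07):
        """Annual savings less the annual loan payment."""
        return annual_savings - annual_payment(principal, years, rate)

    # Combined project from Tables 6-4a/6-4b: $1,256,500 financed.
    print(round(annual_payment(1_256_500, 15)))          # ~137,960/year
    print(round(net_cash_flow(212_400, 1_256_500, 15)))  # positive cash flow

    # The same project on a 10-year term carries a much higher payment
    # and a thinner cash flow, which is why the term was extended.
    print(round(annual_payment(1_256_500, 10)))          # ~178,900/year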
MODULAR GENERATION

Modular systems are available and used in the United States and internationally. Advances should occur in modular systems and distributed small-scale generation of less than 1-MW. These systems should become cost competitive as advances reduce their capital costs. Systems should be developed that can consume small quantities of organic waste or dedicated resources for the distributed generation of power and heat locally for use on-farm, on-site and in small industrial systems. These alternatives could include the integration of modular biomass systems with fuel cells, microturbines and other distributed systems. Fuel sources include food/feed/grain processing plant residue, fats and oils, nutshells, corncobs, tomatoes, carrots, fruit, rice hulls, as well as uncontaminated urban wood residue and farm animal waste. Significant development in scaled-down, skid-mounted or mobile installations and fuel concentrators may take place to increase energy density. Significant opportunities for modular systems exist in low-value byproducts from grain, soy, wood and other processing systems, and in farm and forest residues where the high cost of transporting biomass to larger facilities may be avoided. Rural communities and farmers could benefit if modular systems can be deployed to offset power costs in grid-based systems. Industry standards for grid connection should be simplified and new standards developed so that modular biomass systems can be easily connected to the grid. The waste biomass from any biorefinery that has no other value will be able to be converted into electricity.
BIOCONVERSION

Economically viable and environmentally sound bioconversion processes and technologies allow the commercial application of a wide range of biobased fuels and products. Advances in biochemical conversion processes will increase the variety of biofuels and biobased products that are cost-competitive and produced from biomass resources. The conversion of multiple sugar streams and lignocellulosic materials could provide useful fuel and value-added products. The development of enzymatic pre-treatment methods would do much to increase the efficiency of biofuels production. Bioconversion research will take place in two general categories: processing and conversion. Improved methods and technologies for processing biomass feedstocks will improve the economics and capabilities of bioconversion systems. Improving the physical and chemical pretreatment of biomass feedstocks prior to fermentation may include new enzymes and new methods for enzyme pretreatment. Traditional agriculture and forest crops, urban waste, and crop residues are a major source of readily available complex proteins, oils and fatty acids as well as simple and complex sugars that can be used as raw materials. These materials are available at low cost across the United States. The development of low-cost chemical and biological processes will include new chemistry and thermochemical synthesis that can break down these molecules and separate the resulting components into purified chemical streams. Residual biomass resources exist in the form of plant, animal, and other residues. These residues can be used to develop value-added fuels, chemicals, materials, and other products. Additional research should result in cost-effective methods for processing solid and liquid residues into economically viable biomass resources. New cost-effective methods of chemical/enzymatic conversion should allow greater utilization of biomass resources.
BIODIESEL

Catalytic and chemical methods for converting vegetable oils and animal fats into biodiesel are currently in use. It will be necessary to
improve the efficiency of these processes, to develop new processes, and to make processes more cost-competitive with non-biobased products. Additional research is needed to overcome the barriers associated with inhibitory substances in sugar streams. Methods to enable removal of catalytic inhibitors should be developed and new catalysts developed. Research on engineering and biological principles should improve feedstock separation and product purification. Biomass fermentation and hydrolysis research should increase the fermentation and hydrolysis of fiber, oil, starch, and protein fractions of crop components and processing by-products. Along with more rapid conversion of cellulose to a fermentable substrate, there is a need to develop new fermentation technologies to enable production of base chemicals and chemical intermediates from the wide range of existing crop components.
SYNGAS

Syngas fermentation research should improve the catalytic synthesis of gases to chemicals as well as improve pyrolysis to produce chemicals. Processing systems should optimize both the mass transfer of oxygen and nutrients for bio-organisms and fermenter environments. Biorefinery integration advances will allow biorefineries to efficiently separate biomass raw materials into individual components, and convert these components into marketable products, including biofuels, biopower and conventional and new bioproducts. Biorefineries exist in some agricultural and forest products facilities, including corn wet milling and pulp mills. These systems can be improved through the better utilization of residues. New biorefineries may benefit from lessons learned at existing facilities. Biorefineries could become markets for locally produced biomass resources and provide local and secure sources of fuels, power, and products. Optimized biorefineries will use complex processing strategies to produce a diverse and flexible mix of conventional products, fuels, electricity, heat, chemicals, and material products from biomass. Further development and deployment of the biorefinery concept for local and regional markets will take place. Additional utilization of existing biomass processing and conversion facilities will result in the development of more biorefineries. The development of new cost-competitive biomass technology
platforms will result in additional biorefinery products, including the bioconversion of sugars to polyols and other products that can be used to produce chemicals, materials, or other biobased products. The development and commercialization of the conversion of vegetable oils will produce hydraulic fluids, lubricants, and monomers for use in plastics, coatings, fibers and foams. A biodiesel/bioproducts biorefinery provides alternatives to petroleum-based chemicals, polymers, plastics and synthetic fibers. Alternatives to petroleum-based additives in the polymer industry include dyes, stabilizers, and catalysts. Rural-based biorefineries should be modular and produce high-value products. Residual waste from the biorefinery would be converted into electricity and useful heat.
BIOMASS POWER

By 2020 electric utilities will use biomass to generate power at four times current levels, providing 5% of all energy use in the industrial and utility sectors. Most landfills will be tapped for methane, which will be used to generate heat and power for homes, schools, and industry. There will also be an integration of conversion systems with power generating equipment, along with increased capabilities to convert low-quality gas into electricity. New technologies will allow the more efficient production of biofuels, with less reliance on agricultural products and more on grasses and woody plants. Improvements in biomass gasification technology will allow the conversion of a wide range of feedstocks, including residue biomass. Ten percent of the transportation fuels used in the country may be derived from biomass. Vehicle fuels include biodiesel and ethanol developed from biomass like soybean oil, corn oil, switchgrass, and other woody plants. Biomass gasification technologies must improve their cost competitiveness with other technologies. Improved operating efficiencies are needed with a wider range of sources, such as forest and agricultural residues. Gasification should be integrated with generating turbines and biorefineries. About 20% of the chemical commodities produced in the U.S. will
be biobased products. Companies will not be as reliant on petroleum resources to produce some chemicals. Direct combustion of biomass is currently in use, but improvements could be made to increase operating efficiencies. Co-firing will be used in more industrial facilities, offering fossil fuel replacement and reduced environmental effects. Co-firing provides near-term demand for biomass feedstocks that will help develop the infrastructure needed for stand-alone biomass electric generation facilities and integrated biorefineries. Demonstration projects show co-firing to be a viable option, although its use is not widespread. Improved operating efficiencies will increase its use in industry. Improvements in technology and demonstrations from forest products will increase the spread of co-firing technology.
ANAEROBIC FERMENTATION

Power and fuels can be produced from anaerobically generated gases. These include landfill gases, gases from the anaerobic digestion of animal manure and food/feed/grain products and by-products, wastewater treatment gas, sludge and sewage gases and other sources. The methane from biomass waste is a powerful greenhouse gas, with a global warming potential 21 times that of carbon dioxide. Over 600 million tons of carbon-equivalent methane are produced annually in the United States. Greater application of anaerobic fermentation is needed. Reduced capital costs and improved operating efficiencies should increase the use of these systems. Low-intensity methane could be viewed as a resource instead of a waste product. Systems for the use of methane at 10-300 Btu/cubic foot are feasible and should be developed and demonstrated.
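As a small worked example of the greenhouse accounting above, this sketch converts a quantity of methane into its carbon dioxide equivalent using the 21:1 global warming potential just cited; the 100-ton input is a hypothetical figure for illustration only.

    # CO2-equivalent of captured or avoided methane, using the global
    # warming potential of 21 cited above.

    METHANE_GWP = 21  # tons CO2-equivalent per ton of methane

    def co2_equivalent(methane_tons):
        return methane_tons * METHANE_GWP

    # Hypothetical digester: capturing 100 tons of methane per year
    # avoids the equivalent of 2,100 tons of CO2.
    print(co2_equivalent(100))  # 2100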
COMMERCIAL BUILDING TECHNOLOGY

Today's commercial buildings use diverse technologies in their construction, operation, and maintenance. A whole-buildings approach is used where all of the building components and sub-systems are considered together with their potential interactions and impact.
The goal is to optimize the building's performance in terms of comfort, functionality, energy efficiency, resource efficiency, economic return, and life-cycle cost. The whole-buildings approach requires the integration of planning, siting, design, equipment and material selection, financing, construction, commissioning, and long-term operation and maintenance. The whole-buildings approach can enhance air quality, lighting, and other aspects of the building's indoor environment. The natural environment benefits through energy and waste reduction and more effective land use. About 4.5 million commercial buildings in the United States account for over one-sixth of total national energy consumption (16 quadrillion Btu) and one-third of total national electricity consumption. The consumption of electricity in commercial buildings has doubled in the last few decades and can be expected to increase by another 25% by 2030 if current growth rates continue. Commercial buildings represent a great opportunity to save money and reduce pollution across the country. Annual energy expenditures in commercial buildings exceed $100 billion. Benefits to the environment include reduced emissions of sulfur dioxide, nitrogen oxides, and carbon dioxide from fossil-fuel power generation. A 30% improvement in energy efficiency could be achieved in the coming decades by applying existing technologies. More dramatic improvements, ranging from 50 to even 80%, could be achieved with new technologies.
COMBINED HEATING, COOLING AND POWER SYSTEMS

These new technologies include advances in combined heating, cooling, and power systems, optimized building controls, and solar and other forms of renewable energy. Energy-efficient building shells and equipment could produce commercial buildings that are net electricity generators rather than consumers. Tomorrow's high performance commercial buildings are more likely to incorporate smarter, more responsive technologies. Commercial buildings will use smart materials and systems that sense internal and external environments, anticipate changes, and respond using the whole-buildings approach. Wireless sensors and controls will monitor
energy use and adjust operations accordingly. More individualized control of lighting, ventilation, and thermal conditioning will be used with stored user profiles that specify personal environmental preferences. The control will follow an individual through a building or group of buildings. More uniform protocols will allow control devices to talk to each other and communicate externally. Buildings will use performance information to self-diagnose and correct problems, and alert users to causes of marginal operation. Sounder environmental practices will allow commercial buildings to be resource-efficient and make more use of environmentally sustainable materials. The buildings will operate more efficiently, using 30-80% less energy than 20th century buildings. Some will be net electricity exporters, generating their own power using on-site technologies such as fuel cells and photovoltaics, and supplying excess power back to the grid. Sunlight will be used to produce electricity as well as for daylighting. Passive solar construction and natural ventilation will be incorporated into buildings designed for greater flexibility and adaptability to reuse, resulting in longer life. Components and materials will be designed for more complete recyclability at the end of their lifetimes. Commercial buildings will become more closely integrated with the surrounding environment. Building philosophy will shift from single, stand-alone buildings to communities. Resource management will be optimized across the entire community using strategies such as distributed power generation. More building space will double as both commercial and residential space. Fewer but better buildings will need to be constructed as a result. Communities will benefit from better land and resource use with lower investments in highways and transportation.

By 2020 commercial buildings will use dynamic envelopes that can respond to changing environmental conditions. Microscale thermal conditioning can be used with individually controlled user preferences. Dynamic, personalized ventilation systems will utilize plug-and-play components and systems. Solid-state lighting will be used with dynamic level changing and more daylighting. Distributed energy resources will include photovoltaics, fuel cells, and combined cooling, heating, and power generation. Digital wireless microsensors will personalize building controls. The focus of building finance will become long-term, taking into account life-cycle benefits.
BUILDING EFFICIENCY

By 2020 windows should become controlled to operate with the surrounding building environment. These windows will be more energy efficient, using gels and other new materials to improve insulation. They will be able to sense energy loads in the building exterior and interiors and adjust their insulation properties based on energy needs. Holographic techniques will allow windows to direct outside light to particular areas of the interior space, replacing artificial light with natural light. Windows will use embedded photovoltaic technology to keep the energy balanced in a building while generating power from solar energy. Windows will also provide imaging for displays. These displays could be used for signs indicating sales, weather conditions or news. Windows will be more durable and shock proof. They will be able to stand up to serious environmental changes. They will also be more adaptable and modular, so they can be substituted with newer advanced products more easily. Windows will be produced in a more environmentally responsible manner and designed for recyclability, modularity and upgrading.

Building spaces will be more adaptable by 2020. The external envelope will incorporate more adaptability and flexibility. Rooms will convert more easily from one use to another with modular components that allow for movable walls. By 2020, buildings will minimize heating, cooling, and lighting loads using integrated design and non-polluting energy sources which will return excess electricity to the grid. This should save money and reduce emissions, brownouts and blackouts. By 2020, buildings will have enhanced air quality and airflow with natural ventilation and lighting. Intelligent features will increase the adaptability and energy efficiency of the building. These features will allow the use of light and energy only when and where they are needed.

At the intersection of Broadway and 42nd Street stands the first Manhattan office tower built to green standards such as energy-efficient design, sustainable materials, and on-site power generation. This 1.6 million-square-foot, 48-story building generates some of its electricity with on-site fuel cells. These natural gas fuel cells run cleanly and quietly 24 hours a day. No combustion is required and the waste products are water vapor and CO2.
The building has integrated photovoltaic (PV) panels on some areas of the facade. The peak output is about 15 kW, enough power to operate five homes. DOE-2 software is employed for analyzing the building's energy use. It models and compares the energy savings of a variety of options. The ventilation system provides tenants with 50% more fresh air than required by code. The reduced energy costs are estimated at $500,000 annually, with a payback period of five years or less.
LIGHTING ADVANCES

Lighting in the future will allow more effective use of space for multitasking, so businesses can adapt workplaces currently designed for individualized, manual operations into an environment with shared resources and electronic processes. High-quality lighting systems improve employee productivity, employee retention and quality control. Advanced lighting solutions will improve health, safety, and security in the workplace and provide savings in energy consumption. Advances in lighting will give building owners and managers higher returns on capital investments. More efficient, intelligent lighting systems will be networked and allow managers greater control over building functions, minimizing operations, maintenance, and energy costs. Sensors and controls in future lighting systems will provide new levels of information about our environment and will allow us to shape that environment to improve our creativity and productivity. Advanced lighting systems will include more powerful and cost-effective sensors and controls, wireless connectivity and high-efficiency light sources. Lighting will be done in an integrated whole-buildings approach that optimizes daylight to provide more efficient, high-quality lighting, heating, cooling and ventilation. By 2010, luminaires will become smarter and more integrated, communicating with control systems, performing self-diagnostics, and allowing preventive maintenance. New materials will make reflectors configurable and more integrated with the light source. Microelectronics will be used in smaller, more flexible ballasts. Sensors will provide multiple inputs to define the lighting environment for users. Controls will work with the larger building management system to optimize daylighting, thermal load management, preventive maintenance and demand load shedding. Solid-state LEDs and organic light-emitting
polymers (LEPs) will be used. Fluorescent sources will reach efficiencies approaching 200 lumens per watt while maintaining a high color rendering index (CRI) through the use of new two-photon phosphor coatings. Low-cost ballasts will increase the flexibility of the systems and make compact fluorescent lamps more common. By 2020 the design of building systems will combine both natural and human-made lighting systems. Newer technologies will capture daylight for later transmission and distribution. Programmable flat-panel luminaires will create theatrical-type effects using advanced control systems. More efficient, reduced-mercury fluorescent sources will become available and incandescent lamps will use advanced materials that will raise their efficiency to 60 lumens per watt.
SUSTAINABLE DEVELOPMENT

Sustainable development is impacting the urban environment and is relevant to energy planners, engineers and architects. Many physical examples of sustainable development exist and the role of sustainable development is becoming a guide to planning at international, national and local government levels. Sustainable development can be a guiding vision and planning model to shift the practice of dominance by special interests to a more holistic and distributed energy world. Aspects of sustainability include urban development, population trends, environmental impacts and energy concerns. Incorporating alternative energy technologies is critical to the success of urban sustainability. Visionary ideas for guiding the development of towns, cities and regions in the 1980s and 1990s started to include two evolutionary approaches, the New Urbanism and sustainable development. Both approaches attempted to address concerns about equity issues and the conservation of environmental resources. The New Urbanism attempted to recapture the urban sense of locale and community by physically reorganizing neighborhoods, improving pedestrian access, revising transportation patterns and reintroducing mixed-use development. The focus was on providing alternatives to suburban sprawl. Sustainable development has a broader vision and addresses a wider range of concerns.
In 1987, the report of the World Commission on Environment and Development brought sustainable development into the forefront. The report was later referred to as the Brundtland Report. It defined sustainable development as meeting present needs without compromising the needs of future generations. At the United Nations 1992 Conference on Environment and Development (also called the Rio Summit or the Earth Summit), representatives from 167 nations, including the United States, produced the Rio Declaration on Environment and Development, which is referred to as Agenda 21. The Agenda 21 charter deals with policies involved in reducing unsustainable patterns of production and consumption and promoting environmental protection. While energy usage is a primary contributor to global pollution, the term energy is not specifically mentioned in Agenda 21; however, the term resource is used. In the European Community's (EC) Sustainable Cities Agenda, the principle of ecosystems thinking emphasizes the city as a complex system incorporating aspects such as energy, natural resources and waste production. The EU suggests that cities must be viewed as complex, interconnected and dynamic systems. Sustainable development has links to global issues and balance. Future generations need to reproduce and balance local social, economic, and ecological systems, and link local actions to global concerns. This includes the efficient use of natural resources. Non-sustainable development implies growth that is environmentally unsafe and consumes resources inefficiently. Implementing sustainable development at the local level is often counter to existing planning regulations. If conventional real estate development prohibits the construction of mixed-use neighborhoods, then it becomes very difficult to build these projects. Today, we have a separation of uses in most newer areas. Shopping is separate from housing and schools. Vast areas are used for employee and customer parking. New buildings, transportation systems and power distribution systems are required to meet growing demands. As new facilities are constructed to meet urban requirements, energy usage should be considered in the planning process. This type of infrastructure planning is usually performed by the utilities, often resulting in higher costs for ratepayers. Techniques such as using improved infrastructure technologies and more creative design approaches are
needed. Urban expansion affects world energy consumption since it requires more energy to provide critical urban services.
GROWTH TRENDS AND ENERGY

Rapid population growth on a global scale has placed an increasingly heavy burden on our resources. Over half of the world's population now lives in urban areas, which have gained over one billion in population in the last 30 years. In 1970 in the U.S., more people lived in the suburbs than in either urban or rural areas, and by 2000, 99% of the 153 million increase in U.S. population since 1930 had occurred in the nation's metropolitan areas. Las Vegas, Nevada, a city that may have the fastest growth rate in the U.S., grew from 273,000 in 1972 to 1,376,000 in 2000. More efficient use of energy can have a major impact on housing solutions, transportation systems and environmental problems. Resource conservation provides the opportunity to grow without constructing additional power generating facilities while reducing environmental impact. Power plants and vehicles account for a major portion of sulfur oxide, carbon and nitrogen emissions into the atmosphere. International efforts at mitigation of environmental impacts include the Kyoto Protocol. While the 15 members of the European Union (and a total of 87 countries worldwide) have ratified the Kyoto Protocol, the U.S. has resisted, saying that implementation could cost the country up to $400 billion and 5 million jobs. However, this analysis fails to consider the employment that would be created by efforts to comply. Energy usage is increasing worldwide. The U.S. is the world's largest energy consumer. From 1970 to 1996, total energy consumption in the United States grew from 68 quadrillion Btus to 94 quadrillion Btus. Energy from renewable sources grew from about 3 quadrillion Btus to 7 quadrillion Btus. In 1999, total U.S. energy use nearly reached 100 quadrillion Btus, with transportation using 26 quadrillion Btus. Energy usage is highly decentralized while energy generation and production tends to be relatively centralized. Rapid increases in population and increases in conditioned space are major causes of increasing energy use. The need for highly conditioned space has developed into new standards for human comfort, especially in the workplace. More efficient use of energy in the built environment can have a significant
impact on reducing energy needs. This includes the ability to provide for growth without constructing additional power generating facilities. While technologies are available to provide more efficient use of energy, economically viable technologies are often not implemented. Power production and power use have urban and regional impact. Issues include power plant construction, the lack of flexibility from multiple energy sources and improved technologies for designing and building facilities that are extremely efficient. The U.S. is experiencing an energy crisis. Many of our buildings waste energy and incorporate few conservation or reclamation features. In spite of improved design standards, many newer buildings use more energy than older ones. This is due to the design, location, and construction technologies employed and the equipment utilized. Changing standards for fresh air admission into occupied space increase ventilation air. More energy is expended in producing occupancy air (cooled or heated, humidified or dehumidified) from unconditioned air. A wide range of solutions needs to be employed to solve the complex problems associated with increased energy usage. These approaches must address both supply (production) and demand (consumer usage). Usable resources are available, but everyone shares the responsibility of not wasting or misusing them. Actions include education and training in the management of resources.
SUSTAINABLE DEVELOPMENT PROGRAMS

Sustainable development programs include the Florida Sustainable Communities Network, the DOE Efficiency and Renewable Energy Network, the Civano Sustainable Community Project, the Lake Tahoe Regional Plan, the Manchester, VT, Planning and Zoning Program, and the Santa Monica, CA, Sustainable City Program. Alternative communities include Seaside, Florida, and Laguna West, California, which use higher-density, mixed-use development. Arcosanti, Arizona, is an ecologically based experimental community dedicated to alternative energy, concentrated development and a reduced role for the automobile. Kishio Kurokawa's eco-media city is an advanced version of an eco-city proposed in a Futian, China, city center plan. It is an eco-media city park concept for the information age to
demonstrate urban sustainability. The Navarra region of Spain has a goal of providing 100% of its electrical energy from renewable energy sources by 2010. The Building Act in Finland establishes sustainable development as the foundation for land use planning. In the United Kingdom, Leicester, Leeds, Middlesborough and Peterborough have achieved the Environmental City designation as a result of their planning efforts. Ecolonia in the Netherlands is a demonstration town for ecological development. The 1997 Treaty of Amsterdam included sustainable development as an objective for European nations. Sustainability programs have provided incentives for wind farms in Austria and pushed German automotive and electronic manufacturers to develop new ways to recycle components and improve energy resource management. Local codes and ordinances often penalize the installation of sustainable technologies. One argument against implementing sustainable or alternative technologies is that investments may fail a simple payback test or some other economic standard. Investors often need a guaranteed rate of return in order to implement new technologies. Part of the action plan of the World Summit on Sustainable Development included increasing the use of renewable energy technologies to 15% of worldwide energy production by 2010.
DISTRIBUTED GENERATION AND ENERGY MANAGEMENT

Applying energy management systems to distributed generation provides additional cost reductions. Energy consumption reductions are usually derived from strategies such as temperature resets in air and water handling systems, chiller plant optimization, HVAC scheduling optimization, and automation of lighting system operation. These activities represent a significant potential for energy savings. In the era of deregulation, electric utility rates include real-time pricing (RTP) and interruptible service (IS). On-site power generation systems have been utilized by many facilities to take advantage of these rates to reduce electric costs. Energy management systems are also used to manage the operation of on-site power generation systems during power outages and to improve power quality. One example of a typical installation involves a 2,000-kW on-site power generation system and its integration with a facility energy
management system to exploit an interruptible electric service rate. The system was installed at a 600-acre corporate campus site in central Pennsylvania. There were 28 buildings on the site, enclosing about 1,000,000 square feet of conditioned space. Of the 28 buildings, only one, the corporate data center, does not use the centralized steam plant for heating and a 2,500-ton chilled water plant for cooling. Combining the capabilities of the EMS (Energy Management System) and the on-site power generation systems allows the facility to interrupt more than 70% of its on-peak summer electric load within 2 hours after a request, using a minimal staff and causing minimal disruption to those at the site. The system has reduced annual electricity costs by more than 25% since its implementation in 2001. The peak electrical demands occur during the summer months, usually during July or August. Peaks of 4,800-kW have been recorded on days when air conditioning loads peak. Electrical power is purchased from the local utility at 13,800 volts and distributed to 21 different substations. The substations vary in size from 500-kVA to 3,750-kVA and are both indoor and outdoor types. Secondary power distribution voltages of 4,160, 480, 277, and 208/120 volts are used. There are seven on-site emergency power generators, ranging in size from 7.5-kW to 750-kW, the largest being the data center generator. There is also a 2,000-kW curtailment generator. The seven machines were installed to provide back-up power to building critical loads during power outages. The 750-kW data center system provides backup power to a data center UPS (Uninterruptible Power Supply) and is also used to shed about 450-kW of load during electrical curtailments. In the early 1990s, interruptible rates were made available to large commercial and industrial customers in this area, primarily those being served at 12-kV or 69-kV. With the interruptible rates, customers pay reduced demand and energy charges in exchange for an agreement to curtail power use to a target level when requested by the utility during special periods of high demand or economic hardship. The new interruptible rate contract set firm capacity at about 35% (1,500-kW) of the previous summer's peak demand. Tables 6-5a and 6-5b show how the firm capacity was related to the estimated savings. The requirements of the rate included a two-hour curtailment notice, with up to 15 interruptions per year and a 10-hour maximum duration per interruption.
Table 6-5a. Interruptible Rate Savings
————————————————————————————————
Year               Usage (kWh)    Cost/kWh    Savings
————————————————————————————————
2001 (Jan.-June)   11,659,000     $.062
2001 (July-Dec.)    8,697,000     $.046       $139,200
2002               20,439,000     $.048       $286,200
2003 (Jan.-June)    9,336,000     $.049       $121,400
————————————————————————————————
Table 6-5b. Interruptible Rate Comparison
————————————————————————————————
Rate         Firm kW    Savings    $/kWh
————————————————————————————————
Firm         100%       0%         $.062
Interrupt    3,000                 $.060
             2,500      6.6%       $.058
             2,000      14.7%      $.053
             1,500      19.6%      $.049
             1,000      26.2%      $.045
             0          36.0%      $.044
————————————————————————————————

There would be five or fewer interruptions per month, with a buy-through allowed for economic interruptions and penalties for not meeting the firm level. Manual curtailment procedures were developed to provide equipment shutdown to comply with the agreement. These included turning off discretionary lighting, non-centralized air conditioners, pilot line machinery and some test equipment. Major electrical loads like chillers, air handlers, and pumps were also affected by the procedures, sometimes causing short-term discomfort in occupied buildings.
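As a rough sketch of how the firm-capacity level maps to annual savings, the following Python fragment applies the $/kWh rates from Table 6-5b to an assumed annual usage. The 20,000,000 kWh figure approximates the 2002 usage in Table 6-5a and is used only for illustration; the actual savings depended on the realized blended rate.

    # Estimated annual savings at each firm-capacity level in Table 6-5b,
    # relative to the fully firm rate of $.062/kWh.

    FIRM_RATE = 0.062  # $/kWh on the firm tariff

    # firm kW level -> interruptible energy rate ($/kWh), from Table 6-5b
    INTERRUPTIBLE_RATES = {
        3000: 0.060, 2500: 0.058, 2000: 0.053,
        1500: 0.049, 1000: 0.045, 0: 0.044,
    }

    def annual_savings(annual_kwh, firm_kw):
        """Savings versus the firm rate for a chosen firm-capacity level."""
        return annual_kwh * (FIRM_RATE - INTERRUPTIBLE_RATES[firm_kw])

    usage = 20_000_000  # kWh/year, assumed
    for level in sorted(INTERRUPTIBLE_RATES, reverse=True):
        print(f"firm {level:>5} kW: ${annual_savings(usage, level):>9,.0f}/year")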
POWER MONITORING

The initial real-time power measurement system was a six-point, personal computer-based system. It replaced watt-hour metering at key substations. The system was expanded to 17 measurement points,
including chiller plant and data center electrical power loads. The PowerLogic system allows real-time monitoring of critical loads. This information is needed for managing interruptions and ensuring the firm power target is met. With the interruptible contract in place, an on-site generator with enough power could continue the operation of key equipment, including the central chiller plant. Maintaining lower electricity costs was achievable with this generation. In 2001, a 2-MW generation system was purchased, complete with a step-up voltage substation and paralleling switchgear. The system operated successfully during the initial utility mandatory curtailment called in July, 2001. The generator has a 2,000-kW standby, 1,850-kW prime rating with a 3,000-HP Detroit Diesel 16-cylinder engine which is fuel injected and turbocharged. A Marathon 4-pole alternator provides 480-volt output and a Kohler PD-100 integrated controller provides the LCD operator interface, automatic synchronization and paralleling. A water-cooled, 150-gallon glycol loop is used with a 100-HP direct in-line fan. The fuel consumption is 133 gallons/hour at 100% load. The Johnson Controls system has grown to incorporate controls for approximately 8,000 hardware and software points, controlled by 700 individually programmed processes. These processes control all HVAC functions for fourteen buildings, the central chiller plant and the boiler operation. In the boiler room, the system monitors all critical mechanical systems, burner management systems and steam parameters. It is Ethernet based and accessible from eight operator workstations. Curtailment is a four-step process that can be started from any operator workstation. The process initially subcools air-conditioned spaces by turning off all reheat systems. Next, dewpoint setpoints are adjusted upward so that discharge air temperatures go up, allowing chilled water temperatures to rise. The next step overrides all chillers to an off state, except one 1,000-ton machine which is manually controlled. Building chilled water pumps are slowed (VFD operated) or shut down. This reduces the chiller plant load from about 1,600-kW to 700-kW before any electrical generation is placed on-line. After these steps, on-site generation is started about an hour before the power target must be reached. Depending upon the actual power load before the curtailment, only two generators are normally needed to reach the target power level. First, the data center generation takes place, contributing about 450-kW. About 30 minutes before the deadline, the 2-MW generator is placed on-line. The power measurement system data are used for any adjustments needed to meet the target. Additional building lighting, air handlers and smaller water pumps may be shut down. This curtailment sequence lends itself to simple supervisory logic, as sketched below.
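A minimal sketch of that supervisory sequence, assuming hypothetical step names, shed amounts and a toy load model rather than the site's actual Johnson Controls point database:

    # Illustrative four-step curtailment sequence, modeled on the site
    # procedure described above. Names and values are hypothetical; a real
    # EMS commands actual control points with interlocks and timing.

    def curtail(target_kw, read_load_kw, steps):
        """Run curtailment steps in order until load falls to target_kw."""
        for name, action in steps:
            print("step:", name)
            action()
            if read_load_kw() <= target_kw:
                break

    load = {"kw": 4800}              # simulated campus load (summer peak)

    def shed(kw):                    # each step removes some load
        def _do():
            load["kw"] -= kw
        return _do

    steps = [
        ("subcool spaces / reheat off",   shed(300)),
        ("raise dewpoint setpoints",      shed(200)),
        ("chillers off, slow CHW pumps",  shed(900)),   # plant ~1,600 -> ~700 kW
        ("start data center generator",   shed(450)),   # sheds ~450 kW
        ("parallel the 2-MW generator",   shed(1900)),
    ]

    curtail(target_kw=1500, read_load_kw=lambda: load["kw"], steps=steps)
    print("load:", load["kw"], "kW versus the 1,500-kW firm target")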
Prior to July, 2001, electricity costs averaged $.061/kWh. After the first curtailment with generation, the firm demand level was reset to 1,000-kW under the terms of the interruptible contract. Costs averaged $.046/kWh for the rest of the year. The estimated savings for the first 2 years after installation exceeded $500,000. The $700,000 investment yielded a 37% ROI and less than a 3-year simple payback. The system reduced overall costs per kWh by 25%, compared to non-interruptible rates. Another DG facility, in Mobile, Alabama, consists of 41-MW of electrical generation (10 Wartsila gensets). The facility operates in island mode as the sole source of electricity for a hydrocarbon processing plant. This facility is designed to start large compressors at the hydrocarbon processing plant. These compressors use 6,000-HP electric motors. The DG facility also provides heat to the plant using engine exhaust gas to heat incoming oil before it is processed.
REDUCING CO AND VOC

Another project, at the Sweetheart Cup factory in Owings Mills, Maryland, included oxidation catalysts for CO and VOC control. The plant equipment used Wartsila 5.76-MW gensets and 25,000 lb/hr fired steam recovery burners. NOx was controlled to 6 PPM and CO to 80 PPM. One problem at this facility was fitting the new DG plant onto a limited area. The limited space required a narrow, four-elevation building to fit the required equipment onto the available plant space. As a part of Northwest Airlines' expansion at Detroit International Airport, Metro Energy LLC installed a distributed generation facility. Equipment included Wartsila Model 18V34SG gensets rated 5.76-MW at full load. The systems used oxidation catalysts for CO and VOC control and closed loop heat rejection systems to provide engine jacket water and air cooling. The airport expansion included a new control tower that is located very close to the distributed generation facility. Water vapor from com-
bustion or cooling towers was a concern since there would be some potential for a visible plume. The low heat rate system is based on the use of closed loop cooling systems instead of cooling towers. The Chicago area has seen several distributed generation projects because of the rate structure of the local utility. The University of Illinois-Chicago provides part of its electrical power through distributed generation. One University of Illinois facility in Chicago not only produces electricity and some thermal product, but also uses a fired incinerator to reduce CO and VOCs from the genset exhaust. This allows the facility to meet environmental requirements without additional back-end cleanup systems. There are two 4.1-MW gensets provided by Wartsila. Factors to be considered for distributed generation include electric costs and rate structure, natural gas rates, maintenance requirements and costs, electric deregulation impact and potential changes in operation.
COOLING, HEAT AND POWER SYSTEMS

Distributed energy generation systems that combine cooling, heat and power (CHP), or cogeneration, can make significant contributions to mitigating power constraints. These systems can meet increased energy needs, reduce transmission congestion, cut emissions, increase power quality and reliability and increase the overall energy security of a facility. The U.S. Department of Energy's market assessment estimates that CHP could be successfully applied in almost 10% of large federal facilities (Table 6-6). This would annually conserve 50 trillion Btus of energy, reduce CO2 emissions by almost 3 million metric tons, and reduce utility bills by $170 million. A combined heat and power system generates electricity (or shaft power) and uses the heat from that process to produce steam, hot water or hot air. The most common application involves a prime mover (gas turbine or engine) with a generator to produce electricity and capture the waste heat for process steam and space heating. Boiler steam may be passed through a turbine to generate electricity in addition to serving other thermal applications. One of the simplest systems involves replacing steam pressure-relief valves with a low-cost backpressure steam
turbine and electric generator. CHP systems recover the heat from electricity generation for productive uses. Conventional power plants usually waste this heat. Since a CHP system generates electricity near the point of use, transmission losses are much less.

Table 6-6. CHP Potential at Federal Buildings
————————————————————————————————
                                         Hospital   Industrial   Office
————————————————————————————————
Total Mfeet2                                141         115         514
Mfeet2 buildings with CHP
  payback <10 years                         113          80         146
Total sites                                 331         181        2302
Sites with CHP payback <10 years            235          75         167
% of sites with CHP potential                71          42           7
Potential TWh of electricity from CHP       2.9         2.3         0.8
Potential CHP (MW)                          440         340         250
————————————————————————————————
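The efficiency argument for CHP, taken up in the next section, can be made concrete with a simple energy balance. The sketch below compares overall fuel use for a CHP unit against separate grid power and boiler heat; the efficiency figures (30% electric and 45% heat recovery for CHP, 33% delivered grid efficiency, 80% boiler) are illustrative assumptions, not values from the text.

    # Fuel utilization: CHP versus separate grid electricity plus boiler
    # heat. All efficiency figures below are illustrative assumptions.

    def chp_fuel(elec_kwh, heat_kwh, eta_e=0.30, eta_h=0.45):
        """Fuel burned by one CHP unit covering both demands; the unit
        must be fueled for whichever demand binds."""
        return max(elec_kwh / eta_e, heat_kwh / eta_h)

    def separate_fuel(elec_kwh, heat_kwh, eta_grid=0.33, eta_boiler=0.80):
        """Fuel for grid electricity (generation plus line losses)
        and an on-site boiler for the heat."""
        return elec_kwh / eta_grid + heat_kwh / eta_boiler

    elec, heat = 100.0, 150.0  # kWh of electricity and useful heat demanded
    f_chp = chp_fuel(elec, heat)
    f_sep = separate_fuel(elec, heat)
    print(f"CHP fuel: {f_chp:.0f} kWh, separate: {f_sep:.0f} kWh")
    print(f"primary energy saved: {100 * (1 - f_chp / f_sep):.0f}%")  # ~32%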
CHP EFFICIENCY

The combined generation of electricity and thermal energy on-site by a well-designed CHP system is more efficient than the combined efficiencies of the two separate alternatives. The key to an efficient CHP system is to maximize the use of the thermal energy (waste heat) from the generation process. Emissions or other site-specific factors may override electrical efficiency or operating and maintenance costs when determining which CHP system is best for a facility. Since CHP uses energy to generate electricity on site, and because it is slightly less efficient for heating than a regular boiler, energy use at the site will increase with a CHP system. However, the losses associated with generating and distributing the electricity will be avoided, and CHP results in a net savings of primary or source energy. The United States has more than 50 gigawatts (GW) of installed CHP capacity producing about 7% of the nation's electricity. Executive Order 13123 directs all federal facilities to use CHP when life-cycle cost
analysis indicates energy-reduction goals will be met. The DOE, the Environmental Protection Agency, and the private sector have a joint effort to double the amount of CHP capacity in the U.S. by 2010. The effort to expand CHP at federal sites also includes the U.S. Combined Heat and Power Association (USCHPA). More than 50 federal sites have benefited from CHP systems and another 50 sites are developing opportunities to install 100-MW of additional CHP capacity. The Oak Ridge National Laboratory (ORNL) has created a model that calculates the energy use and costs in different types of buildings. This model is used to estimate where CHP would be most likely to offer a cost-effective alternative to traditional (grid and boiler) systems. The model uses parameters involving CHP technology, energy prices, and energy use. It calculates the financial payback of CHP. CHP provides thermal energy for heating and cooling a building while at the same time generating a portion of its electricity. Other applications include process steam for industry, laboratories, laundry, hot water, dehumidification and systems for site-specific operations. Site-specific information is critical to verify CHP potential. The core payback screen in such a model can be sketched as follows.
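This is a minimal screening sketch in the spirit of the ORNL model described above, not the model itself; all input values (equipment cost, rates, run hours, heat rate) are hypothetical placeholders.

    # Simplified CHP payback screen: annual utility savings from on-site
    # generation plus recovered heat, against installed cost. All inputs
    # are hypothetical; a real screen models hourly loads and tariffs.

    def chp_simple_payback(size_kw, install_cost_per_kw, run_hours,
                           elec_rate, gas_rate_per_kwh, heat_rate=2.8,
                           heat_recovered_per_kwh=1.2, boiler_eta=0.80,
                           o_and_m_per_kwh=0.01):
        elec_kwh = size_kw * run_hours
        fuel_cost = elec_kwh * heat_rate * gas_rate_per_kwh
        avoided_elec = elec_kwh * elec_rate
        avoided_boiler_fuel = (elec_kwh * heat_recovered_per_kwh
                               / boiler_eta) * gas_rate_per_kwh
        annual_savings = (avoided_elec + avoided_boiler_fuel
                          - fuel_cost - elec_kwh * o_and_m_per_kwh)
        return size_kw * install_cost_per_kw / annual_savings

    # Hypothetical 1,000-kW reciprocating gas engine, 6,000 run hours/year,
    # $0.08/kWh electricity, $0.02/kWh gas, $1,200/kW installed.
    print(f"{chp_simple_payback(1000, 1200, 6000, 0.08, 0.02):.1f} years")
    # ~4.5 years, within the <10-year screen used in the assessment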
POTENTIAL CHP

The total CHP potential capacity for federal facilities nationwide is estimated to be 1600-MW. These CHP systems could produce 8 terawatt-hours (TWh) of electricity, which is about 13% of the 57 TWh of electricity the federal government purchased in 2000. This could provide electricity and thermal energy for almost 600 million square feet of building space at nearly 10% of all federal sites. The potential is greatest in large sites with central plants or mechanical rooms and high electricity rates. These estimates assume that reciprocating gas engines are used at their current costs and efficiencies, that CHP supplies 75% or 50% of estimated electric demand with load factors at 85% or 35%, depending on building type and size, and that all recoverable waste heat is utilized by the site. Only systems with a simple payback of less than 10 years were counted. Federal hospitals had the highest potential for CHP. More than two-thirds of large hospitals have CHP potential. Industrial buildings were next in potential capacity. R&D facilities, office buildings and service buildings provide similar amounts of capacity.
CHP potential was found in the military services, Veterans Affairs (VA) hospitals, the Department of Energy (DOE), the National Aeronautics and Space Administration (NASA), the General Services Administration (GSA), the Postal Service and the Department of Justice. The military services have significant potential CHP capacity in most types of buildings, while the VA's capacity is in hospitals. The DOE and NASA capacity is concentrated in R&D and industrial buildings, while GSA and the Postal Service have capacity in office buildings. Regions with the greatest CHP potential are the Southwest, the northeastern metropolitan areas and the Southeast. Where there is low-cost electricity, CHP can have difficulty competing. If on-site energy is required for power security, CHP can make the system more efficient and cost-effective. As energy prices increase and CHP system costs decrease, the amount of cost-effective CHP potential will rise. The 1.5-GW identified is sufficient to power more than a million homes and save the government $170M per year in energy costs. The average simple payback period for these projects is about 7 years, and many could be financed through existing credit mechanisms such as energy saving performance contracts (ESPCs), utility energy service contracts and enhanced-use lease agreements. The energy savings from this CHP investment are estimated to be 50 trillion Btus per year, with project carbon dioxide emissions reduced by 2.7 million metric tons per year compared to gas-fired alternatives. Although CHP technologies are proven and the potential savings and benefits are significant, project development over the past decade has been impeded by a number of factors including low electricity costs, high initial costs for CHP systems, budgets, custom engineering and design for sites, local regulations and policies, backup/standby fees and emissions. Packaged CHP could reduce design and technical costs for projects. Addressing policy and regulatory constraints such as permitting, grid interconnection requirements, exit fees and standby/backup charges could also reduce project costs. The ADD CHP initiative is a part of DOE's Federal Energy Management Program (FEMP) technical assistance for CHP at federal sites. ADD CHP offers support in site surveys and feasibility studies; federal, state and private financing; and other areas. ADD CHP has requests from 15 different states as well as Puerto Rico and the Virgin Islands and a broad range of federal agencies: VA, DOE, National Guard, Air Force,
Army, Navy, NASA, the Department of Justice (Federal Bureau of Prisons), GSA and the Postal Service. Most of these requests involve larger buildings and campus-style sites with more than 1-MW of electric demand.
CHP SITES

At the Army's Fort Bragg in North Carolina, the Honeywell Corporation is developing an order to install 5 to 12-MW of CHP capacity to reduce energy consumption at a central heating and cooling plant on the base. This would be an advanced turbine CHP project. The National Park Service, along with state, federal, and private partners, is involved in a CHP microturbine demonstration project at the Gateway National Recreation Area in Brooklyn, New York. Microturbines will generate about 175 kW and supply heating and cooling to a Park Service building at Floyd Bennett Field. The installation will be part of the Park Service's demonstration of sustainable urban development. Landsberg Engineering is implementing the project with Capstone turbines and Broad USA chillers. At the Twentynine Palms Marine Corps Base in California a CHP plant is being installed. This will be a 7-MW cogeneration project. About 50 other federal sites are in various stages of implementation. FEMP's New Technology Demonstration Program (NTDP) has several CHP publications, including "Energy Efficiency Improvements Through the Use of Combined Heat and Power (CHP) in Buildings," which examines how to provide electric power and thermal energy (heating, cooling and humidity control) to buildings and processes (see Tables 6-7a and 6-7b).

Table 6-7a. National CHP Objectives for 2000-2010
————————————————————————————————
46 GW of New Installed CHP Capacity
13 Trillion Btus/Year Lower Energy Use
$5 Billion Energy Cost Savings
0.4 Million Tons/Year Lower NOx Emissions
0.9 Million Tons/Year Lower SO2 Emissions
35 Million Metric Tons Less Carbon Emissions
————————————————————————————————
Table 6-7b. Business Strategies
————————————————————————————————
Community-based Model
  Public-private partnership between community groups, business,
  utilities and government.
————————————————————————————————
Green Pricing Programs
  Customers pay a premium on their electric bills to support green
  electricity (wind, biomass, and other sources).
————————————————————————————————
Schools and Public Buildings
  Provides visibility and raises community awareness.
————————————————————————————————
Dealership Networks
  Alliances with distributors, builders, developers and energy
  service providers.
————————————————————————————————
Utility Partnerships
  Promotion, marketing and financing.
————————————————————————————————
INTEGRATING TECHNOLOGIES

Much can be done to integrate proven technologies such as engines, gas turbines, boilers, absorption chillers, desiccant dehumidifiers and electric air conditioners to maximize the use of recoverable thermal energy. Among the emerging technologies are microturbines and fuel cells. Integrated systems bring together gas-fired and electrically driven equipment to provide heating, cooling, dehumidification, and electrical service to commercial and public buildings. The principal configurations of integrated systems include CHP systems for on-site electrical power generation with heating and cooling space conditioning and potable hot water production. Developments in the integration of on-site power generation, heating, cooling, and dehumidification are moving rapidly as a result of changing commercial building electric rates.
DEMAND RESPONSE AND DISTRIBUTED GENERATION

Enterprise energy management (EEM) and alternative generation technologies can deliver the power demanded by an expanding digital economy, environmental concerns and deregulation. A greater number of power producers and energy traders require more real-time information. Digital enterprises such as data centers, call centers, semiconductor fabrication plants and other computer-controlled, mission-critical businesses are a growing segment of power consumers. These facilities require nearly 100% uptime to provide their products or services, but the power grid can deliver only 99.9% uptime, or three nines of power. This is the equivalent of almost nine hours of outages per year. Today's expectations for reliability range from six nines (99.9999%) to nine nines, as the calculation below shows. Internet use continues to grow. The Internet consumes over 10% of the electricity in the U.S. and this share will rise as usage increases. Many enterprises now rely heavily on the Internet for data exchange and they often require access to information on a 24/7 basis. Telecom hotels and server farms host these data and communication systems. Their power densities can exceed 150-W per square foot, which is twice that of the most energy-intensive industrial facilities. Long-distance transmission lines are exposed to damage from animals, trees, lightning and other hazards. These types of disruptions can be tolerated by lighting systems, industrial motors and air conditioners, but not by the data and communication assets of today's economy. Microprocessor-based devices are especially sensitive to voltage sags, swells, spikes or outages that can cause lost or corrupted data, equipment damage and process shutdowns. Some suppression can be effective for certain waveforms of spikes. Deregulation has made generation and transmission companies reluctant to expand on large projects while their load growth is uncertain and competition continues to increase. Load curtailment is one way of protecting a facility from outages. Distributed generation is an efficient method of creating generation capacity, since large generation facilities take years to build and then sit idle much of the time. Load curtailment programs and power plants with long development timeframes may not be enough to meet the growing demand for power.
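A quick sketch of the "nines" arithmetic: annual downtime is simply (1 − availability) × 8,760 hours.

    # Annual downtime implied by each availability level ("nines").

    HOURS_PER_YEAR = 8760

    def downtime_hours(nines):
        # availability for n nines is 1 - 10**-n (e.g. 3 nines -> 0.999)
        return (10 ** -nines) * HOURS_PER_YEAR

    for n in (3, 6, 9):
        hours = downtime_hours(n)
        print(f"{n} nines: {hours:.6f} hours/year "
              f"({hours * 3600:,.2f} seconds)")
    # 3 nines -> ~8.76 hours/year; 6 nines -> ~32 seconds; 9 nines -> ~32 ms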
In the last decade, California's electricity needs have risen 30%, but practically no new major power plants have been built. Demand in New York state has grown by 2,700-MW in the last five years, but generating capacity has not kept up, increasing by only 1,060-MW. New York has not brought a new plant on-line since 1996. Even if additional centralized power plants are constructed, they tend to operate as isolated islands of generation due to limited transmission paths.

Investment in the transmission and distribution infrastructure has decreased over the last decade. Between 1988 and 1998, capital improvements to New York's transmission system fell from almost $310 million per year to $90 million per year.

Decentralized power plants, which connect to end-users directly or through shorter transmission lines, are sources of higher-nines power. Energy management systems can monitor and control the operation of these plants and communicate real-time generation and consumption data. This data can be used to verify power quality and trace the origins of power-related events. Smaller power plants located close to end users are more affordable, avoid transmission system bottlenecks and have lower transmission losses.

Industrial and commercial energy customers also tend to support the concept of microgrids. These are power networks made up of linked generation units, each monitoring, switching and communicating through advanced EEM technology. The microgrid combines the generating capacity of distributed units and forms a virtual utility.
ENVIRONMENTAL ISSUES

Generation and transmission providers find it increasingly expensive to obtain rights-of-way and government approvals, and communities continue to block the construction of conventional power plants. Federal and state governments have set allowable emission levels for pollution sources and place restrictions on generators. Even though some power plants could be retrofitted to meet these new standards, the costs of doing so are high.

The permitting process for small-scale generators is much easier, if a permit is required at all. Stationary pollution sources below a defined size do not need a permit to operate, and microgenerators fit into this group. Their emission levels are also low, with fuel cells and solar cells near zero.
A large number of polluting coal-fired plants will be decommissioned over the next few years, and the only way to replace that capacity quickly is through distributed generation. Microgenerators such as fuel cells, solar cells and microturbines have always been environmentally friendly, and their decreasing cost is now allowing wider deployment.
REAL-TIME MANAGEMENT OF POWER

Real-time monitoring, communications and control occur mainly at the independent system operator (ISO) and regional transmission organization (RTO) level in order to control centralized generating facilities. In a traditional load curtailment, a utility's operations center constantly monitors the availability and price of power, along with current and predicted demand. Depending on these factors, as well as the time of day, the system operator may decide that the trend is not favorable and the utility cannot meet impending demand.

A power grid with decentralized generation must also be supported with energy management tools. An enterprise energy management (EEM) system can continuously monitor voltage and current waveforms and capture momentary disturbances for analysis.

The supply/demand imbalance in electricity is being felt in many areas. New York and the Pacific Northwest have reported the lowest reserves in years, and California has implemented rotating blackouts in addition to industrial/commercial load curtailment. Prices on the spot markets have become extremely volatile, with huge hourly spikes. In the summers of 1998 and 1999, electricity in some regions of the U.S. sold at the margin for $1,000 to $6,000 per MWh, as much as 200 times higher than normal.

Competitive markets require strong interactions between supply and demand, but this has not always been the case for electricity. Consumers cannot easily vary their demand in response to real-time prices. This has resulted in unnecessary price increases, load growth and lower reliability. EEM technology can provide users with real-time tools to act upon spot market prices. These actions will tend to lower the spot price at peak periods, reduce overall demand and moderate price fluctuations.
DEMAND RESPONSE PROGRAMS

Demand response programs provide load curtailment credits based on real-time pricing. They are usually targeted at customers that can curtail loads of 100-kW to 5,000-kW or more. Smaller customers that want to participate must aggregate curtailment with other customers through an energy service provider or load aggregator.

Demand response programs also save money on the supply side. Federal Energy Regulatory Commission (FERC) rules mandate that each ISO set up reserves to keep the grid operating during times of peak demand. These reserves can be in the form of spinning or supplemental reserves (on-line or idle generators), capacity contracts with other suppliers, or demand response and load curtailment programs. Using a demand response program instead of a spinning reserve can save a regional operator tens of millions of dollars. ISO New England must maintain over 3,500-MW of generators running at low levels so they are available to produce additional power on short notice, at a cost of $30 million.
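As a rough illustration of the aggregation arithmetic described above, the sketch below sums small curtailable loads to reach a program's 100-kW floor and prices the resulting credit. All site loads, the credit rate and the event length are hypothetical, not drawn from any actual program:

    # Aggregate small curtailable loads to meet a demand response minimum.
    # All values below are hypothetical illustrations.
    sites_kw = [22, 35, 18, 40, 15]      # curtailable load per site, kW
    PROGRAM_MINIMUM_KW = 100             # typical program floor (see text)
    CREDIT_PER_KWH = 0.35                # assumed curtailment credit, $/kWh
    EVENT_HOURS = 4                      # assumed curtailment event length

    total_kw = sum(sites_kw)
    if total_kw >= PROGRAM_MINIMUM_KW:
        credit = total_kw * EVENT_HOURS * CREDIT_PER_KWH
        print(f"Aggregated {total_kw} kW; event credit = ${credit:,.2f}")
    else:
        print(f"Only {total_kw} kW; below the {PROGRAM_MINIMUM_KW}-kW floor")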
STABILIZING THE GRID

Distributed generation, in the form of standby generators, accounts for 10% of generation capacity in the U.S. and is expected to grow to over 30% of capacity in the next 10 years. Distributed generation affects the stability of the power supply. Historically, utilities and ISOs have been responsible for grid stability, but independent power producers will start affecting the stability of the grid.

For all of the supply and demand side equipment to function properly, electrical voltages must be maintained within a specified range, usually within ±5% of nominal. Voltage control becomes more complex as the energy industry is restructured and responsibilities are divided among different generation, transmission, marketing and regulatory entities. Real-time EEM systems should be able to control voltages by automatically managing reactive power.

Most power system components and end-user facilities, including overhead lines, underground cables, transformers, motors and fluorescent lights, represent inductive or capacitive loads. This means some current carried by the transmission system wastes energy (reactive power) while the rest transmits useful energy (real power).
Transmission lines and transformers can handle a limited amount of current, so any reactive current lowers the capacity of the grid. Power providers and users need to minimize reactive power since it reduces the ability of generators to supply real power. Since the current in a capacitor rises while the current in the same loop in an inductor drops, this effect can be used to cancel reactive power flows. Users can install capacitors to compensate for inductive loads, and capacitor banks installed in the transmission and distribution system can absorb reactive power. Induction generators such as windmills, along with solar cells, are isolated from the grid with solid-state electronic inverters.

Frequency is another grid stability issue. The frequency must be maintained within a narrow window around 60-Hz. Variations occur on a small scale when there is a small supply/demand imbalance and on a larger scale when a generating unit suddenly fails. Generators must react quickly or end-users may suffer equipment damage. An EEM system can stabilize the grid by helping to control voltage and frequency, coordinate protection and fault clearing, and activate reserve capacity.

EEM devices use direct connections to public communication networks such as the Internet, wireless and paging infrastructures. An EEM system can alert thousands of sites to spot-market prices, curtailment and generation activities to help operators maintain grid stability. Intelligent monitoring and control devices can be distributed at interties, generators and customer service entrances. The EEM software would provide system-wide or local situation analysis, load aggregation and reporting. It could provide an audit trail to prove the exact sequence of events for fault analysis. It must be able to continue operating and store data in the event of power disturbances or communication interruptions. It should allow easy data access through public Internet, wireless, satellite and paging networks.

These intelligent power meters should support load profiling, power quality analysis, control and multiple communication ports and protocols. They must also work with firewall restrictions that limit outside access to corporate networks. The meters may transmit e-mail messages or serve data directly to web pages, so web browsers or e-mail would be used to access the data.
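The capacitor compensation described earlier in this section reduces to simple trigonometry: reactive power is Q = P tan(phi), where phi is the angle whose cosine is the power factor. A sketch with illustrative load values:

    import math

    # Size a capacitor bank for power factor correction (illustrative values).
    P_KW = 500            # real power drawn by the facility, kW (assumed)
    PF_INITIAL = 0.80     # lagging power factor before correction (assumed)
    PF_TARGET = 0.95      # desired power factor after correction

    # Reactive power Q = P * tan(phi), where phi = arccos(power factor).
    q_initial = P_KW * math.tan(math.acos(PF_INITIAL))
    q_target = P_KW * math.tan(math.acos(PF_TARGET))
    q_capacitor = q_initial - q_target   # kvar the capacitor bank must supply

    print(f"Reactive power before: {q_initial:.0f} kvar")
    print(f"Reactive power after:  {q_target:.0f} kvar")
    print(f"Capacitor bank needed: {q_capacitor:.0f} kvar")

At these assumed figures the bank comes out to roughly 211 kvar.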
Distributed generators would each have an intelligent power meter for monitoring, control and communications. The meter would continually transmit conditions at the generator's interconnect point to the ISO, distribution company or utility. The meter would also receive control signals, so that when a fault occurs it can respond by quickly disconnecting the generator from the grid to prevent equipment damage.

In a demand response environment, an energy consumer's EEM system would monitor the real-time market price of electricity through XML links to Internet wholesale exchanges. The EEM system would correlate the data with building usage. If the price of electricity climbs above the customer's threshold, curtailment can take place based on the costs of production. The system would also correlate billing data and adjust customer bills with credits or penalties.
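A minimal sketch of the price-threshold logic just described, assuming a hypothetical get_spot_price() feed (the text mentions XML links to wholesale exchanges; the function names, threshold and price are all illustrative, not an actual EEM product interface):

    # Price-threshold curtailment logic (all names and values hypothetical).
    PRICE_THRESHOLD = 250.0   # customer's curtailment trigger, $/MWh

    def get_spot_price():
        # Placeholder for a real-time market feed (e.g., an XML exchange link).
        return 312.0          # stubbed value for illustration

    def shed_noncritical_loads():
        print("Curtailing: spot price above threshold")

    def restore_loads():
        print("Normal operation: spot price below threshold")

    def on_price_update():
        price = get_spot_price()
        if price > PRICE_THRESHOLD:
            shed_noncritical_loads()   # hypothetical EEM control action
        else:
            restore_loads()            # hypothetical EEM control action

    on_price_update()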
Chapter 7
Fuel Cells

INTRODUCTION

Fuel cells will have a major impact on electric power generation and transportation. The impact may be similar to the impact that the reciprocating engine had on all forms of modern life in the last century. Reciprocating engines quickly became an important prime mover and replaced steam engines. The early automobiles and trucks were powered by steam and electricity; spark-ignition and diesel engines now power almost every locomotive, automobile and truck manufactured in the world.

The fuel cell extracts electricity from the chemical reaction between oxygen and hydrogen. It has been around for about 150 years, although its commercial deployment did not begin until the 1960s as part of NASA spacecraft. Today, this technology is being used in Tokyo, where Japan's first hydrogen filling station opened in June 2003, and in European cities such as Stockholm, which operate hydrogen fuel cell buses. When hydrogen and oxygen molecules combine, the reaction produces heat and water along with a flow of electrons that generates electricity for powering an electric motor or other load.

One of the attractions of fuel cells is that they can be big enough to run a factory or small enough to fit in cell phones. The micro fuel cell is a power source small enough to slip into a shirt pocket. The Medis Power Pack is a portable, wire-free mobile-phone recharger. These power packs are about the size of a cigarette pack, cost between $25 and $40 and last a year or more. The power comes from a disposable fuel cartridge that costs about a dollar and provides up to nine hours of phone use. The New York-based market research firm Allied Business Intelligence predicts that by 2011 the market for micro-fuel cells will hit $2 to $3 billion. By 2013 the market for all fuel cells could reach $35 billion.
Hydrogenics, in Mississauga, Ontario, is working to bring the fuel cell to market. Hydrogenics and General Motors, which owns about a quarter of the Canadian firm, are developing an Army fuel-cell-diesel hybrid engine for a new generation of 30,000 light tactical vehicles. Fuel cells can help free a vehicle from dependence on vulnerable supply lines, cut fuel consumption by 20% and generate enough hydrogen to be self-sufficient in electrical power for up to five hours with the engine turned off. Fuel cells are also quieter and cooler than traditional portable generators, and they last longer than the batteries that currently support these operations. The units can even provide soldiers with water, since the water vapor from the fuel cells can be recycled for human consumption.

GM hopes to sell fuel-cell-powered cars by 2010. Deere & Company, the maker of farm and construction equipment, is working on a hydrogen-powered forklift. The Canadian government has given Hydrogenics a $6 million project to produce a fuel cell powered transit bus for Winnipeg, Manitoba by March 2005.
FUEL CELL POWER

A fuel cell is an electro-chemical device. It is a container in which hydrogen and oxygen react in a controlled way to produce water, heat, and a flow of electrons through an external circuit. The fuel cell container keeps the two reacting chemicals in separate sealed chambers. They can react only by means of the flow of electrical charges between the two chambers and through the external circuit.
FUEL CELL EVOLUTION

Fuel cells were first demonstrated in 1839 but were not commercially applied because the technology was expensive and other technologies were available. Fuel cells have exceptional environmental characteristics and are capable of outstanding efficiencies, which, along with advances in engineering and materials science, has recently caused renewed interest. Fuel cells can provide real alternatives for producing power in large and small quantities.
HYDROGEN CONVERSION

Hydrogen can be converted to useful energy both in engines and in fuel cells. Engines can use hydrogen in the same manner as gasoline or natural gas, while fuel cells use the chemical energy of hydrogen to produce electricity and thermal energy. Since electro-chemical reactions can be more efficient than combustion at generating energy, fuel cells can be more efficient than internal combustion engines.

The use of hydrogen in engines is a well-developed technology, and new combustion applications are under development. Vehicles with hydrogen internal combustion engines are now in operation and the combustion of hydrogen blends is being tested. Fuel cells are in various stages of development. Current fuel cell efficiencies range from 40-50% at full power to 60% at quarter-power, with up to 80% efficiency reported for combined heat and power applications. Fuel cells will become a cost-competitive technology in mass production.

Advanced, hydrogen-powered energy generation devices such as combustion turbines and reciprocating engines will experience widespread commercial use. The commercial production, delivery and storage of hydrogen will come with the commercial conversion of hydrogen into valuable energy services and products, such as electricity and thermal or mechanical energy. The technologies for end-use will be well established, and products using these technologies will provide safe, clean and affordable energy in all sectors of the global economy.

All of today's conversion products and prototypes have some deficiencies. They cannot yet provide, at affordable costs, the level and quality of energy services demanded by consumers. Fuel cell technologies have generated much excitement, but they are in various stages of maturity. They have appeared only in small quantities, and many performance issues including durability, reliability and cost need to be resolved. Combustion turbines and engines that use hydrogen or hydrogen/natural gas blends are already in use in both mobile and stationary applications.
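The combined heat and power figure quoted above follows from adding recovered heat to electrical output. A small sketch with illustrative numbers (the recovered-heat value is assumed):

    # Overall CHP efficiency = (electricity + recovered heat) / fuel input.
    fuel_input_kw = 100.0        # fuel energy into the cell (illustrative)
    electric_eff = 0.45          # within the 40-50% full-power range cited
    recovered_heat_kw = 35.0     # usable thermal output (assumed)

    electric_kw = fuel_input_kw * electric_eff
    chp_eff = (electric_kw + recovered_heat_kw) / fuel_input_kw
    print(f"Electrical: {electric_kw:.0f} kW, CHP efficiency: {chp_eff:.0%}")
    # -> 45 kW electric, 80% overall, matching the CHP figure in the text.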
HYDROGEN APPLICATIONS

Hydrogen can be used in conventional power generation technologies, such as automobile engines and power plant turbines, or in fuel cells, which are cleaner and more efficient than conventional technologies.
Fuel cells have broad applications in both transportation and electrical power generation, including on-site generation for homes and commercial buildings.

Transportation applications for hydrogen include buses, trucks, passenger vehicles and trains. Technologies are being developed to use hydrogen in both fuel cells and internal combustion engines. Almost every major automaker has a hydrogen-fueled vehicle program, with targets for demonstration projects. These early fuel cell demonstration programs consist of pilot builds of 10 to 150 vehicles. Hydrogen-fueled internal-combustion engine vehicles are a near-term, lower-cost option that can assist in the development of a hydrogen infrastructure and hydrogen storage. Hydrogen-fueled internal-combustion engine vehicles will be made in larger numbers when demand increases.

STATIONARY POWER GENERATION

Stationary power applications include backup power units, grid management, power for remote locations, stand-alone power plants for towns and cities, distributed generation for buildings, and cogeneration where the excess thermal energy from electricity generation is used for heat.

Commercial fuel cells are available, but the industry is still in its infancy. Most fuel cell systems are used in commercial applications and operate on reformate from natural gas. The widespread availability of hydrogen would allow the use of direct hydrogen units: simpler systems with lower cost and increased reliability. Most combustion-based processes, such as gas turbines and reciprocating engines, can be designed to use hydrogen either alone or mixed with natural gas. These technologies will have applications in the higher power ranges of stationary generation. Unlike an internal combustion engine with its noise, heat and moving parts, the fuel cell is an enclosed box with no moving parts and little noise.

FUEL CELL TECHNOLOGY

A fuel cell operates much like a battery. Fuel cells produce power from chemical reactions rather than combustion. As long as fuel is
supplied to the cell, it will continue to operate. A typical phosphoric acid (PA) fuel cell uses stacks of 256 cells built from graphite plates containing phosphoric acid. Natural gas is processed through an external reformer, which converts it to hydrogen and carbon dioxide. The chemical reaction between the hydrogen and the oxygen in the air allows each fuel cell unit to produce 200 kilowatts of electricity at 480 volts.

FUEL

Transportation applications will have a major effect on the growth of fuel cell power generation. Small, on-board reformers that could convert gasoline to hydrogen for automobile and truck applications would allow the existing vehicle refueling infrastructure to be used. Methanol, which is used in racing cars, is another path; methanol tanks and pumps would be added at service stations. This was done when people started buying and using small portable kerosene heaters in the late 1970s as a reaction to high energy prices and natural gas shortages. Much work is also underway on ways to use hydrogen directly, either as a gas stored in high-pressure tanks or as a hydride.

Distributed generating systems could be fueled with natural gas, propane, or fuel oil. Natural gas is readily available in most areas, and propane could be used in rural locations.

Major progress has been made during the past few years, and the rate of progress is accelerating. Major R&D activities are underway by large and small companies in partnerships (Table 7-1). This includes most of the major global automobile manufacturers. The market for fuel cell power plants could exceed $100 billion, and fuel cell power plants could largely replace other technologies for large central station plants by 2030. Economic feasibility has not yet been demonstrated for many commercial applications, but the infrastructure to support widespread fuel cell deployment already exists, which was not the case with other technologies.
Table 7-1. Fuel Cell Companies
————————————————————————————————
Alkaline
   Astris Energi
   ENECO
   Electro-Chem-Technic
   Energy Conversion Devices
   Fuel Cell Control
   International Fuel Cells
————————————————————————————————
Molten Carbonate
   UTC Fuel Cells (ONSI)
   Fuel Cell Energy (Energy Research Corp.)
————————————————————————————————
Phosphoric Acid
   Electrochem
   Toshiba International Fuel Cell Corp.
   UTC
————————————————————————————————
Proton Exchange Membrane
   Anuvu
   Alstom Technology
   Analytic Energy Systems
   Avista Labs
   Ballard Power Systems
   BCS Technology
   Dais-Analytic
   Element 1 Power
   Energy Partners
   Fuji Electric
   General Motors
   H Power
   H2-ECOnomy
   Hydrogenics
   IdaTech
   Intelligent Energy
   Johnson Matthey
   Lynntech
   Manhattan Scientifics
   Masterflex
   Matsushita Electric
   MTU Friedrichshafen
   Novars GmbH
   NU Element
   Nuvera
   Palcan
   Plug Power
   Proton Energy Systems
   Proton Motor
   Schatz
   Siemens
   Teledyne
   Toyota
   UTC Fuel Cells
   Voller
————————————————————————————————
Solid Oxide
   Acumentrics
   Ceramatec
   Advanced Ionic Technologies
   Ceramic Fuel Cell Technologies
   Global Thermoelectric
   Hydrovolt
   Siemens Westinghouse
   ZTEK
————————————————————————————————
PORTABLE POWER GENERATION

Portable applications for fuel cells include consumer electronics, business machinery and recreational devices. Many companies in the fuel cell industry are developing small-capacity units for a number of premium power applications, from 25-watt systems for portable electronics to 10-kilowatt systems for critical commercial and medical functions. Most of the portable applications will use methanol or hydrogen as fuel. In addition to consumer applications, portable fuel cells will be used as auxiliary power units in military applications.

Hydrogen will be available for every end-use energy need in the economy, including transportation, power generation, industrial process heaters and portable power systems. Hydrogen will become the dominant fuel for government and commercial vehicle fleets, and it will be used in a large mix of personal vehicles and light duty trucks. It will be combusted directly and mixed with natural gas in turbines and reciprocating engines to generate electricity and thermal energy for homes, offices and industry. Hydrogen will be used in fuel cells for both mobile and stationary applications, and in portable devices such as computers, mobile phones, Internet applications and other electronic equipment.

HYDROGEN STORAGE

Storage issues involve the production, transport, delivery and end-use application of hydrogen as an energy carrier. Mobile applications are pushing the development of safe, space-efficient and cost-effective hydrogen storage systems, but other applications will also benefit from advances in vehicle storage systems.

Hydrogen can be stored as a gas or liquid or in a chemical compound. The storage of compressed hydrogen gas in tanks is the most mature technology, although the low energy density of hydrogen means a pressure of 5,000 to 10,000 psi is required to achieve a useful vehicle range. Liquid hydrogen takes up less storage space, but requires cryogenic containers, and the liquefaction of hydrogen is energy-intensive. Recent developments in metal hydrides and carbon nanotubes show promise for hydrogen storage. As the hydrogen is needed, it can be released from these materials under certain temperature and pressure conditions. There are also chemical hydrides that bind hydrogen in a chemical compound and then release it through a catalyzed chemical process, although these methods tend to be costly. Table 7-2 lists several hydrogen storage techniques.
Table 7-2. Hydrogen Storage
————————————————————————————————
Liquid Hydrogen Storage        Cylindrical tanks
                               Elliptical tanks
                               Cryotanks
                               High-pressure liquid tanks
————————————————————————————————
Compressed Fuel Storage        Cylindrical tanks
                               Quasi-conformable tanks
————————————————————————————————
Chemical Hydrides              Alkaline liquids
————————————————————————————————
Solid State Conformable        Hydride materials
Storage                        Carbon adsorption
————————————————————————————————

No current technology meets all the desired storage requirements, which involve cost, weight, volume, safety, handling, long-term storage and efficiency. A selection of relatively lightweight, low-cost, low-volume hydrogen storage devices should be available for a variety of energy needs. Pocket-sized containers will provide hydrogen for portable telecommunications and computer equipment. Small and medium hydrogen containers will be available for vehicles and on-site power systems, and industrial-sized storage devices will be available for power parks and utility-scale systems. Solid-state storage media that use metal hydrides will become a mature, mass-produced technology, and storage devices based on carbon structures will be developed.

Government support for research and development should focus on advanced renewable and low-carbon-emitting methods along with carbon dioxide capture and sequestration technologies. Improved gas separation and purification processes will include lower-cost multi-fuel gasifiers and low-cost, high-efficiency methods for hydrogen purification. These will help lower the costs of hydrogen production, especially at decentralized sites. Small reformers that run on natural gas, propane, methanol or diesel can provide hydrogen to some of the first fleets and retail sales points, reducing overall costs. The costs of electrolyzers should also be reduced and their efficiency improved.
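To see why the 5,000 to 10,000 psi figure cited earlier is needed, an ideal-gas estimate of compressed hydrogen density is sketched below. Real hydrogen at these pressures is less dense than the ideal-gas law predicts (its compressibility factor is above 1), so treat the result as an upper bound:

    # Ideal-gas estimate of compressed hydrogen density (upper bound only;
    # real-gas compressibility reduces the actual density at high pressure).
    R = 8.314          # J/(mol*K)
    M_H2 = 2.016e-3    # kg/mol
    T = 293.0          # K (room temperature)

    for psi in (5000, 10000):
        pressure_pa = psi * 6894.76                 # psi -> Pa
        density = pressure_pa * M_H2 / (R * T)      # kg/m^3
        print(f"{psi:6d} psi: ~{density:.0f} kg/m^3 (ideal-gas estimate)")

Even at 10,000 psi the estimate is only a few tens of kilograms per cubic meter, which is why liquid and solid-state storage remain attractive.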
Electrolysis is more expensive than thermal production, but a better understanding of high-temperature and high-pressure electrolysis could bring costs down. Advanced renewable energy methods for hydrogen production could include semiconductors that lower the costs and improve the efficiencies of photolytic processes to split water and produce hydrogen. Biological systems should also be developed as a way to produce hydrogen.

Research is needed to identify and develop methods for economically producing hydrogen with nuclear energy, which would avoid carbon emissions. Thermochemical water splitting using high-temperature heat from advanced nuclear reactors could be a part of future nuclear plants. A cost-effective way to capture and isolate carbon dioxide would allow the production of vast quantities of hydrogen with low carbon emissions. Efforts should focus on existing commercial processes such as steam methane reforming, multi-fuel gasifiers and electrolyzers, and on the development of advanced techniques such as nuclear thermochemical water splitting, photo-electrochemical electrolysis and biological methods.
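As a rough sketch of the electrolysis economics mentioned above: splitting water takes at least about 39 kWh of electricity per kilogram of hydrogen (the higher heating value), and practical electrolyzers need more. The efficiency and electricity price below are assumptions for illustration:

    # Electricity needed to make 1 kg of hydrogen by electrolysis.
    HHV_KWH_PER_KG = 39.4      # theoretical minimum (higher heating value)
    electrolyzer_eff = 0.70    # assumed practical efficiency (HHV basis)
    price_per_kwh = 0.08       # assumed industrial electricity price, $/kWh

    kwh_per_kg = HHV_KWH_PER_KG / electrolyzer_eff
    cost_per_kg = kwh_per_kg * price_per_kwh
    print(f"{kwh_per_kg:.0f} kWh and ${cost_per_kg:.2f} of electricity per kg H2")
    # -> about 56 kWh/kg and $4.50/kg at these assumed figures.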
HYDROGEN DELIVERY

A key part of the hydrogen energy infrastructure is the delivery system that moves hydrogen from its point of production to the user. Delivery system requirements vary with production methods and user application. Hydrogen is currently transported from a limited number of production plants by pipeline or by road via cylinders, tube trailers and cryogenic tankers. A small amount is shipped by rail car or barge.

Pipelines are an efficient means of supplying customer needs, but they are currently limited to a few areas of the U.S. where large hydrogen refineries and chemical plants are concentrated, such as in Indiana, California, Texas and Louisiana. The pipelines are owned and operated by hydrogen producers. Hydrogen distribution also takes place via high-pressure cylinders and tube trailers with a range of 100-200 miles from the production or distribution facility. For long-distance distribution of up to 1,000 miles, hydrogen is usually transported as a liquid in super-insulated, cryogenic, over-the-road tankers, rail cars and barges. It is then vaporized for use at the customer site.
A national hydrogen supply network will evolve from the existing fossil fuel-based infrastructure to provide centralized and decentralized production facilities. Pipelines will be used to distribute hydrogen to high-demand areas, and trucks and rail will distribute hydrogen to rural and other lower-demand areas. On-site hydrogen production and distribution facilities will be located where demand is high enough.

A comprehensive delivery infrastructure for hydrogen faces scientific, engineering, environmental, institutional and market challenges. Fueling economics depend on volume, which impedes the installation of an effective infrastructure; a high level of investment is required to achieve low-cost fueling. Hydrogen delivery technologies cost more than conventional fuel delivery. The high cost of hydrogen delivery methods could lead to the use of conventional fuels and the associated delivery infrastructure up to the point of use, with small-scale conversion systems to make hydrogen on-site. However, cost-effective means do not currently exist to generate hydrogen in small-scale systems.

Customers expect the same degree of convenience, cost, performance and safety when dispensing hydrogen fuel as when dispensing conventional fuels. Current hydrogen fueling solutions and designs are not mature enough to provide this convenience, and there is currently a lack of codes and standards for hydrogen delivery.
HYDROGEN PRODUCTION

Hydrogen can be produced in centralized facilities or at decentralized locations where it will be used on-site. From centralized facilities, it can be distributed via pipeline, or stored and shipped via rail or truck. When produced on-site, hydrogen can be stored and/or fed directly into conversion devices for stationary, mobile and portable applications. Hydrogen can be produced from a number of sources, including fossil fuels, nuclear power and renewable sources such as wind, solar and biomass.

The U.S. hydrogen industry currently produces 9 million tons of hydrogen per year for use in chemical production, petroleum refining, metals treating and electrical applications. This is enough hydrogen to fuel 20-30 million hydrogen-fueled cars annually. However, hydrogen is now primarily used as a feedstock, intermediate chemical, or, on a smaller scale, a specialty chemical.
Only a small portion of the hydrogen produced today is used for energy. Although hydrogen is the most abundant element in the universe, it does not naturally exist in large quantities or high concentrations on earth. It must be produced from other compounds such as water, biomass, or fossil fuels.

Steam methane reforming accounts for about 95% of the hydrogen produced in the United States. This is a catalytic process that involves reacting natural gas or other light hydrocarbons with steam to produce a mixture of hydrogen and carbon dioxide. The mixture is then separated to produce high-purity hydrogen. This method is the most energy-efficient commercial technology currently available; the underlying chemistry is sketched after the list below.

Partial oxidation of fossil fuels in large gasifiers is another technique for thermal hydrogen production. This involves the reaction of a fuel with oxygen to produce a hydrogen mixture, which is then purified. Partial oxidation can be used with a wide range of hydrocarbon feedstocks, including natural gas, heavy oils, solid biomass and coal. Its primary by-product is carbon dioxide.

Hydrogen can also be produced by using electricity in electrolyzers to extract hydrogen from water. Currently this method is not as efficient or cost-effective as using fossil fuels in steam methane reforming and partial oxidation, but it would allow more distributed hydrogen generation and it allows for using electricity from renewable and nuclear resources.

Other methods promise to produce hydrogen without carbon dioxide emissions, but these are still in early development phases. They include:

• thermochemical water-splitting using nuclear or solar heat,
• photolytic (solar) processes using solid state techniques,
• fossil fuel hydrogen production with carbon sequestration, and
• biological techniques (algae and bacteria) that generate hydrogen from hydrogen-containing materials.
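The steam methane reforming chemistry mentioned above is well established: CH4 + H2O -> CO + 3 H2, followed by the water-gas shift CO + H2O -> CO2 + H2, for a net reaction of CH4 + 2 H2O -> CO2 + 4 H2. A quick stoichiometric mass balance:

    # Net steam methane reforming: CH4 + 2 H2O -> CO2 + 4 H2
    M_CH4 = 16.04   # g/mol
    M_H2 = 2.016    # g/mol
    M_CO2 = 44.01   # g/mol

    h2_per_kg_ch4 = 4 * M_H2 / M_CH4      # kg H2 per kg CH4
    co2_per_kg_h2 = M_CO2 / (4 * M_H2)    # kg CO2 per kg H2 (stoichiometric)
    print(f"{h2_per_kg_ch4:.2f} kg H2 per kg CH4")
    print(f"{co2_per_kg_h2:.1f} kg CO2 per kg H2 (before process losses)")

Roughly half a kilogram of hydrogen per kilogram of methane, at the cost of about 5.5 kilograms of carbon dioxide per kilogram of hydrogen before process losses.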
Hydrogen should become a major energy source, reducing U.S. dependence on imported petroleum while diversifying energy sources and reducing pollution and greenhouse gas emissions. It could be produced in large refineries in industrial areas, in power parks and fueling stations in communities, and in distributed facilities in rural areas, with processes using fossil fuels, biomass, or water as feedstocks that release little or no carbon dioxide into the atmosphere.
By 2020 hydrogen should be used in refrigerator-sized fuel cells to produce electricity and heat for the home. Vehicles that operate by burning hydrogen or by employing hydrogen fuel cells will be commercially viable and will emit essentially nothing but water vapor. Hydrogen refueling stations using natural gas to produce hydrogen should be available in urban areas to refuel hydrogen vehicles. Micro-fuel cells using small tanks of hydrogen will be operating mobile generators, electric bicycles and other portable items. Large 250-kW stationary fuel cells, alone or in tandem, will be used for backup power and as a source of distributed generation supplying electricity to the utility grid.

There are several benefits to be expected from a hydrogen economy. The expanded use of hydrogen as an energy source should help to address concerns over energy security, climate change and air quality. Hydrogen can be produced from a variety of domestic primary sources, including fossil fuels, renewables and nuclear power, allowing a reduction of the dependence on foreign sources of energy (Table 7-3). The by-products of hydrogen conversion are generally benign to human health and the environment.
FUEL CELL CONVERSION

Fuel cells convert the chemical energy of fuels directly into electricity. The principle of the fuel cell was demonstrated by Sir William Grove in 1839. He made several early improvements to storage batteries and showed that the combination of hydrogen and oxygen could be used to produce electricity. His work evolved from the idea that it was possible to reverse the electrolysis process and produce electricity, rather than using electricity to cause the chemical changes needed for metal plating. There was no practical application during the inventor's lifetime, and practical uses for this gas battery did not develop.

Fuel cells are like a car battery in that hydrogen and oxygen are combined to produce electricity. But batteries store their fuel and oxidizer internally, requiring a periodic recharge, while a fuel cell operates as long as it has fuel and oxygen. The hydrogen may be extracted from a fuel like methanol or gasoline. It is fed to the anode, one of two electrodes in each cell.
Table 7-3. Hydrogen Conversion
————————————————————————————————
Technology                     Application
————————————————————————————————
Combustion
Gas Turbines                   Distributed power
                               Combined heat and power
                               Central station power
————————————————————————————————
Reciprocating Engines          Vehicles
                               Distributed power
                               Combined heat and power
————————————————————————————————
Fuel Cells
Polymer Electrolyte            Vehicles
Membrane (PEM)                 Distributed power
                               Combined heat and power
                               Portable power
————————————————————————————————
Alkaline (AFC)                 Vehicles
                               Distributed power
————————————————————————————————
Phosphoric Acid (PAFC)         Distributed power
                               Combined heat and power
————————————————————————————————
Molten Carbonate (MCFC)        Distributed power
                               Combined heat and power
————————————————————————————————
Solid Oxide (SOFC)             Truck auxiliary power
                               Distributed power
                               Combined heat and power
————————————————————————————————

Aided by a catalyst, the hydrogen atoms lose their electrons, becoming positively charged hydrogen ions (protons). These flow through an electrolyte, which depends on the type of fuel cell; the electrolyte may be phosphoric acid, molten carbonate, or another substance. The positive ions are attracted to the other electrode, called the cathode. The negatively charged electrons cannot travel through the electrolyte.
They get to the cathode through a conductive track or cable; this movement provides the electric current. In molten carbonate and solid oxide fuel cells, the electrolyte instead transports negative ions from the cathode, with water and carbon dioxide exhausted from the anode. The amount of current is determined by the size of the electrodes.

At the cathode, electrons combine with oxygen to produce the fuel cell's major by-product, water. The other major by-product is waste heat, which can be reused in a cogeneration process. One cell produces a small voltage of less than one volt, so usable power levels require a number of cells stacked together. The assembly is called a fuel-cell stack.

Oxygen is readily available in the air, but there are several ways to obtain the hydrogen. One method is to use electricity to dissociate water into its constituent elements, but this is energy-intensive. Another technique is to reform the molecules of a hydrocarbon fuel. This includes natural gas, propane, fuel oil, gasoline and even methane generated in landfills. Producing hydrogen this way also produces some carbon dioxide, and the conversion efficiency depends on the fuel. Most fuel cells that operate on a hydrocarbon fuel use an external reformer to generate the hydrogen gas that flows into the fuel-cell stack. Some use an internal reformer, where the needed hydrogen is produced within the stack itself.

The fuel cell produces a direct current (DC) output. Alternating current (AC) power is obtained from an inverter, which converts DC voltage to AC. Inverters are also used in generating units that produce electricity directly from sunlight using solar photovoltaic panels and from the wind using wind-driven turbine generators.
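Since each cell delivers under a volt, stack sizing is simple series arithmetic. The sketch below uses illustrative per-cell figures, not values for any particular product:

    import math

    # Series stack sizing from per-cell figures (illustrative values).
    CELL_VOLTAGE = 0.7      # volts per cell under load (assumed)
    CELL_CURRENT = 300.0    # amps at rated load (assumed)
    TARGET_VOLTAGE = 48.0   # desired DC bus voltage before the inverter

    n_cells = math.ceil(TARGET_VOLTAGE / CELL_VOLTAGE)
    stack_power_kw = n_cells * CELL_VOLTAGE * CELL_CURRENT / 1000.0
    print(f"{n_cells} cells in series -> {stack_power_kw:.1f} kW DC")
    # -> 69 cells, about 14.5 kW DC at these assumed figures.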
FUEL CELLS FOR ELECTRIC POWER

The trend is to deregulate the production of electric power. Deregulation should promote combined heat and power (CHP), also known as cogeneration. CHP conserves fuel by using the thermal energy that is produced along with the electricity. Since thermal energy cannot be piped over long distances, CHP power plants generally tend to be much smaller than present units. Fuel cells are suitable for these smaller electric power plants.
Fuel cells are ideal for electric power production since electricity is both the initial and final form of the energy produced. They need to produce reasonable efficiencies in 30-kW sizes, run quietly, require infrequent maintenance and emit little pollution.

Electricity is used by many high-technology portable devices, and the current batteries used in many devices do not have a very long life. Fuel cells could provide continuous power for these devices; every week or month a new supply of liquid fuel could be injected into the fuel cell. Fuel cells are being developed as battery substitutes for laptop and hand-held computers, cell phones and other portable electronic devices. In these applications, the fuel cell will be powered by stored hydrogen and produce a DC voltage.

Fuel cell power systems can operate at high efficiency with low emissions of pollutants. They can also produce usable heat as well as electricity, making them suitable for cogeneration and CHP applications. Noise levels are very low, as are maintenance requirements, and the systems can be factory assembled and installed as modules.

Fuel cells have been known to produce electricity for more than a century, but the first practical application was in the space program in the 1960s. Units in the 200-kW range for distributed generation (DG) have been available for the past 10 years. There are 20 or more companies supplying power units in the 0.5-kW to 1,000-kW range, including units for power generation and transportation.

Distributed power-generation applications are linked to success in transportation power. Many of the efforts to develop cost-effective fuel cells are focused on vehicle applications, but the same stack design that produces electricity to power a car or bus can be used to produce electricity for homes, factories and other facilities. One factor is lower per-unit cost. The automotive market involves high production volumes. An automobile engine is highly complex and requires close-tolerance casting and machining, yet the cost of the product is approximately $50/kW (one horsepower equals 746-W). The cost of fuel cell power plants is close to $5,000/kW, but production volumes are only hundreds per year. As the volume goes up, the costs and prices should come down.

Fuel-cell power plants need to operate reliably for periods that are measured in years, while using air and one or more commonly available fuels as input. They also need to be competitive with electricity available from other sources.
A low-cost design may not have long life, and a design that exhibits high efficiency may be too expensive. Many test efforts are underway with design refinements and improvements being developed, but some designs are too new to have demonstrated records of reliability.

Mass markets require systems that are cost-competitive with electricity delivered by electric generators for emergency power; many of these use reciprocating engines. A small number of special applications exist where relatively high price levels are less of a barrier. One involves providing power to communications relay stations at remote sites, where the alternative is to deliver fuel to and perform maintenance on a continuously running engine-generator. Utility customers are finding it less costly to install a DG system to serve a remote load than to upgrade or extend a utility power line. Another application is providing power to reclosers and other switching equipment located at utility substations. Fuel cells are also expected to be a viable alternative to batteries in similar applications.
FUEL CELL CHARACTERISTICS

Fuel cells have much lower carbon dioxide emissions than fossil fuel based technologies for the same power output. They also produce negligible amounts of SOx and NOx, the main constituents of acid rain and photochemical smog.

Several types of fuel cells are being developed around the world, the chief differences among them being the electrolyte material and the operating temperature. Types of fuel cells include solid oxide, molten carbonate, phosphoric acid, polymer, direct alcohol and alkaline. The different electrolytes have very different properties, and the fuel cell types built around them are mostly named after the electrolyte.

When fuel cells transform the energy stored in a fuel into electricity and heat, the fuel is not burned in a flame but oxidized electrochemically. This means that fuel cells are not constrained by the law that governs heat engines, the Carnot limit, which specifies the maximum theoretical efficiency that a heat engine can reach. Their efficiency actually increases at partial load.

A fuel cell works much like a battery. In a battery there are two electrodes which are separated by an electrolyte.
At least one of the electrodes is generally made of a solid metal. This metal is converted to another chemical compound during the production of electricity in the battery. The energy that the battery can produce in one cycle is limited by the amount of this solid metal that can be converted; larger batteries have more metal exposed to the electrolyte. In the fuel cell, the solid metal is replaced by an electrode that is not consumed and a fuel that is continuously replenished. This fuel reacts with an oxidant, such as oxygen, from the other electrode. A fuel cell can produce electricity as long as fuel and oxidant are pumped through it.

There are now several types of fuel cells that appear to be promising, and some of these are in limited commercial production. Solid oxide fuel cells (SOFCs) are most likely to be used for large and small electric power plants above 1-kW. The direct alcohol fuel cell (DAFC) is more likely to be a battery replacement for portable applications such as cellular phones and laptop computers. The polymer electrolyte fuel cell (PEFC) may be the most practical in a developed hydrogen economy. The DAFC may be much simpler than the PEFC, making it better for vehicular applications, while the much higher efficiency of the SOFC and its ability to use almost any fuel make it a contender for vehicular applications as well. The slow start-up of the SOFC may be overcome by using supercapacitors for the first few minutes of operation.

Fuel cells in commercial production include the polymer electrolyte fuel cell (PEFC), which is in limited production, and the phosphoric acid fuel cell (PAFC), which is being produced for medium-sized electric power plants. The alkaline fuel cell (AFC) has been produced in limited volumes since the early space flights. The SOFC is considered to be superior to the PAFC and would likely replace it in time. The molten carbonate fuel cell was believed to be best for electric power plants due to the potential problems of the SOFC; since it appears these problems may be solved, development of the MCFC may decrease.
ALKALINE FUEL CELLS

The alkaline fuel cell (AFC) has been used since the early space applications, where hydrogen and oxygen are available.
Using carbon dioxide scrubbers allows these fuel cells to be operated on hydrogen and air. Alkaline fuel cells can reach thermal efficiencies of up to 70%. They were used on the first spacecraft, but their cost has made them too expensive for commercial use, and they were replaced by proton exchange membrane (PEM) cells for some space applications. A few manufacturers continue to develop alkaline fuel cells for commercial applications.

The alkaline fuel cell is one of the oldest and simplest types of fuel cell. Hydrogen and oxygen are normally used as the fuel and oxidant. The electrodes are made of porous carbon plates laced with a catalyst to accelerate the chemical reactions. The electrolyte is potassium hydroxide. At the anode, the hydrogen gas combines with hydroxide ions to produce water vapor. This reaction leaves electrons over, which are forced out of the anode and produce the electric current. At the cathode, oxygen and water plus returning electrons from the circuit form hydroxide ions, which are recycled back to the anode. The basic core of the fuel cell, consisting of the manifolds, anode, cathode and electrolyte, is called the stack.

Since alkaline fuel cells use a solution of potassium hydroxide in water as their electrolyte, they are not sensitive to CO as the solid polymer fuel cell (SPFC) is, but to CO2. The alkaline fuel cell cannot operate with carbon dioxide in either the fuel or oxidant; even the small amount of carbon dioxide in the air is harmful. Carbon dioxide scrubbers have been used to allow these fuel cells to operate on air, and the cost of the scrubber is considered a reasonable addition. Since oxygen has traditionally been used as the oxidant, there have been few uses outside aerospace, although some of the first experimental vehicles were powered by AFCs. They use comparatively cheap materials in their electrodes but are not as power dense as SPFCs, making them bulky in some situations.

The alkaline fuel cell was used with great success in past space missions, dating back to the Apollo and Gemini missions in the 1960s. It was still used in the Space Shuttle, where it provided not only the power but also the drinking water for the astronauts. The space-qualified hardware made by United Technologies Corporation is very expensive, and until recently it was not thought that it would be useful in other applications. However, in July 1998 the Zero Emission Vehicle Company (ZEVCO) launched its first prototype London taxi based on the technology. Using a 5-kW fuel cell and about 70-kW of batteries, the hybrid taxi is a zero emissions vehicle capable of operating in cities.
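The anode and cathode reactions described above are standard alkaline fuel cell electrochemistry, written out explicitly:

    \begin{align*}
    \text{Anode:}\quad & \mathrm{H_2} + 2\,\mathrm{OH^-} \rightarrow 2\,\mathrm{H_2O} + 2\,e^- \\
    \text{Cathode:}\quad & \tfrac{1}{2}\,\mathrm{O_2} + \mathrm{H_2O} + 2\,e^- \rightarrow 2\,\mathrm{OH^-} \\
    \text{Overall:}\quad & \mathrm{H_2} + \tfrac{1}{2}\,\mathrm{O_2} \rightarrow \mathrm{H_2O}
    \end{align*}

The hydroxide ions produced at the cathode are the ones recycled back to the anode, which is why the potassium hydroxide electrolyte is not consumed.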
This type of fuel cell operates at various temperatures; 250°C was used in space vehicles. The DC efficiency can reach 60% at rated power, and since system losses are low, the thermal efficiency can be even higher, at 70%.
DIRECT ALCOHOL FUEL CELLS

In 1999 there was a movement away from developing the polymer electrolyte fuel cell (PEFC) in favor of the direct alcohol fuel cell (DAFC). In the DAFC, methyl (DMFC) or ethyl (DEFC) alcohol is not reformed into hydrogen gas but is used directly in a simple type of fuel cell. This type of fuel cell was bypassed in the early 1990s because its efficiency was below 25%, and most companies pursued the PEFC because of its higher efficiency and power density. Efficiencies of the DMFC are now much higher, and projected efficiencies may reach 40% for DC automobile applications. Power densities are over 20 times as high as in the early 1990s. The DMFC could be more efficient than the PEFC for automobiles that use methanol as fuel.

Fuel passing from the anode to the cathode without producing electricity is one problem that has restricted this technology; Energy Ventures claimed in 1999 that it had solved this cross-over problem. Another problem is the chemical compounds formed during operation that poison the catalyst.

The direct alcohol fuel cell appears to be most promising as a battery replacement for portable applications such as cellular phones and laptop computers. There are working DMFC prototypes used by the military for powering electronic equipment in the field. Small units for use as battery replacements do away with the air blower and the separate methanol-water tank and pump. These fuel cells are not very different from batteries in construction.

JPL has been working on the DMFC since 1992, and many of the increases in efficiency and power density are a result of its efforts. The operating temperature is 50-100°C, which is ideal for small to mid-size applications. The electrolyte is a polymer or a liquid alkaline. One concern is that methanol (methyl alcohol) is poisonous, so methanol may be replaced by ethanol. Several companies are working on the DEFC. Its power density is only 50% of the DMFC's, but this may be improved.
DIRECT METHANOL FUEL CELLS

The direct methanol fuel cell (DMFC) is based on solid polymer technology but uses methanol directly as a fuel. This could be a plus in the automotive area, where the storage or generation of hydrogen is one of the big obstacles to the introduction of fuel cells. Prototypes exist, but development is at an early stage. There are major problems, including the lower electrochemical activity of methanol as compared to hydrogen, which produces lower cell voltages and efficiencies. Also, methanol is miscible in water, so some of it is able to cross through the water-saturated membrane and cause corrosion and exhaust gas problems on the cathode side. The direct methanol fuel cell is being worked on at Siemens in Germany, the University of Newcastle and Argonne National Laboratory.
MOLTEN CARBONATE

Molten carbonate fuel cells (MCFCs) use an electrolyte that is a molten alkali carbonate mixture, retained in a matrix. These cells operate at high temperatures of 1,200°F and are more feasible for larger commercial applications. The higher operating temperatures allow more choices in catalysts and more heat to recover for thermal needs or for fuel reforming in the stack.

The cell reactions for molten carbonate systems and solid oxide fuel cells are different from the PEM and PA reactions. In the molten carbonate electrolyte, negative carbonate ions flow from the cathode instead of positive protons from the anode. A small amount of water vapor and carbon dioxide is the exhaust from the anode. The cathode must be supplied with carbon dioxide, which reacts with the oxygen and electrons to form carbonate ions; these convey the ionic current through the electrolyte. At the anode these ions are used in the oxidation of hydrogen, which also forms water vapor and carbon dioxide to be conveyed back to the cathode. There are two ways to accomplish this: by burning the anode exhaust with excess air and removing the water vapor before mixing it with the cathode inlet gas, or by separating the CO2 from the exhaust gas using a product exchange device.

The fuel consumed in an MCFC is usually natural gas, although
this must be reformed in some way to create a hydrogen-rich gas to feed the stack. An MCFC produces heat and water vapor at the anode, which can be used for the steam reformation of methane. This means it is fundamentally more efficient than a cell requiring external fuel processing.

The MCFC may be used for large-scale power generation. One reason is the necessity for auxiliary equipment, which can make smaller installations uneconomical. Fuel Cell Energy is working on 300-kW, 1.5-MW and 3-MW MCFC power generation units. This technology cannot be scaled down below 300-kW because of its need for significant amounts of auxiliary equipment such as pumps. There is no requirement for the catalysts needed in low-temperature fuel cells, and the heat generated can be used for internal reformation of methane, a bottoming cycle, fuel processing and cogeneration. This increases the overall efficiency of the generating system.

The molten carbonate fuel cell has been under development for almost 20 years as an electric power plant. The operating temperature is lower than that of the solid oxide fuel cell (SOFC). Its electrical efficiency is greater than the phosphoric acid fuel cell (PAFC), and it has the advantage of reforming inside the stack. One disadvantage is the corrosiveness of the molten carbonate electrolyte. Large power plants using gas turbine bottoming cycles to extract the waste heat from the stack could be up to 70% efficient when operating on natural gas. If the problems with the SOFC are solved, work on the MCFC may fade.

It is unclear whether hydrogen fuel will be widely used, because solid oxide fuel cells may become popular and these can cleanly convert renewable hydrocarbon fuels. The solid oxide fuel cell may be the most promising technology for small electric power plants over 1-kW.
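The carbonate-ion transport described above corresponds to the standard MCFC electrode reactions:

    \begin{align*}
    \text{Cathode:}\quad & \tfrac{1}{2}\,\mathrm{O_2} + \mathrm{CO_2} + 2\,e^- \rightarrow \mathrm{CO_3^{2-}} \\
    \text{Anode:}\quad & \mathrm{H_2} + \mathrm{CO_3^{2-}} \rightarrow \mathrm{H_2O} + \mathrm{CO_2} + 2\,e^-
    \end{align*}

The carbon dioxide released at the anode is what must be conveyed back to the cathode by one of the two methods described above.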
PHOSPHORIC ACID

The phosphoric acid fuel cell (PAFC) is one of the oldest and most established of the fuel cell technologies. It has been used in several power generation projects. It has a phosphoric acid electrolyte and can reform methane to a hydrogen-rich gas for use as a fuel, using the waste heat from the fuel cell stack. This heat may also be used for space heating or hot water. It is possible to use alcohols such as methanol and ethanol as fuels, though care must be taken to avoid poisoning the anode with carbon monoxide and hydrogen sulfide, which may be present in the reformed fuel.
fuels. This results in a gradual reduction in performance and the eventual failure of the cell. Like the PEM fuel cell, the catalysts are affected by contaminants and performance may degrade slowly. This requires repeated stack replacements. Japan has done some advanced PAFC research and design and has power plants from a few kilowatts to a few megawatts in operation. Toshiba, Fuji and Mitsubishi and others are pursuing this technology. Japan’s lack of natural resources is forcing this technology on the market at a higher price than would be possible in other countries. The phosphoric acid fuel cell has been under development for almost two decades as an electric power plant. While it has a lower real efficiency than the molten carbonate fuel cell (MCFC) or solid oxide fuel cell (SOFC), its lower operating temperature is considered almost ideal for small and midsize power plants. Midsize 200-kW AC power plants are 40% efficient and large 10-MW units are 45% efficient when running on natural gas. These efficiencies are higher than the polymer electrolyte fuel cell (PEFC). Phosphoric acid cells are being used to generate electricity in hospitals, hotels, schools, and other buildings. They operate at about 390°F and the recovered heat can be used for building needs.
POLYMER ELECTROLYTE

The PEFC operates at 80°C, which makes it useful for small applications and allows less expensive materials to be used. A catalyst is required to promote the chemical reaction at these low temperatures. The platinum catalysts used in the stack make this type of fuel cell expensive, but new techniques for coating very thin layers of catalyst on the polymer electrolyte have reduced the cost of the catalyst to about $150 per unit. The PEFC can only use hydrogen for fuel, and hydrocarbon fuels must be reformed carefully, since small amounts of carbon monoxide can damage the catalyst. If a reformer is used, it requires a few minutes of warm-up time; stored hydrogen is used in the start-up phase. A liquid cooling system is required. PEFCs larger than 1-kW are usually pressurized to speed the chemical reaction at the low temperatures involved. Air compressed to about 3 atmospheres or higher is used to increase the power density of the fuel cell.
On small systems this pressurization results in a significant loss of efficiency, and the air compressors add complexity to the fuel cell. On automobiles and buses two air compressors are often used: a turbocharger and a supercharger. The polymer electrolyte fuel cell is considered the fuel cell of the hydrogen economy; automobiles would emit pure water from their tail pipes. PEM cells were used in the Gemini spacecraft in the 1960s, but their power output was too low and too expensive to transfer to commercial applications. In the late 1980s, Los Alamos National Laboratory made major advances in catalysts, reducing the amount of platinum required by 90%. Ballard Power Systems increased the stack's power density by keeping the membranes wet but not soaked and by perfecting the way that hydrogen, oxygen and water move through the stacks. Ballard, a British Columbia-based company, has almost 400 patents in PEM technology. A few years ago Ballard exceeded a power density of 1,000 watts per liter; newer stacks can put out as much as 1,350 watts per liter, enough to give an automobile competitive acceleration. PEFC systems would extract hydrogen from hydrocarbon fuels such as methanol or natural gas. The efficiency of the PEFC when running on hydrogen without air pressurization is high, but practical systems that use fuel reforming and air compression suffer in efficiency. Small 30-kW AC power plants are likely to be 35% fuel-to-electricity efficient, with 200-kW units at 40% and large units at 45%. An automobile power plant with an electric motor would have an efficiency of about 35%.
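A rough calculation shows what these power-density figures mean for packaging (a sketch in Python; the 100-kW rating is an assumed, typical automotive stack size, not a figure from the text):

    # Stack volume implied by volumetric power density.
    # 1,350 W/L is the figure quoted above for newer stacks;
    # the 100-kW stack rating is an assumption for illustration.
    stack_power_w = 100_000
    power_density_w_per_l = 1350
    volume_l = stack_power_w / power_density_w_per_l
    print(f"Stack volume: {volume_l:.0f} L")  # about 74 L, small enough for an engine bay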
PROTON-EXCHANGE MEMBRANE

The proton-exchange membrane fuel cell (PEMFC) is solid, compact and operates at a relatively cool 80°C. The PEM cell uses a rubbery plastic membrane coated with a platinum catalyst. The catalyst splits hydrogen gas into protons and electrons, and only the protons can pass through the membrane. The electrons travel around the membrane through the external circuit, generating the electric current, and then recombine with the protons and oxygen on the other side of the membrane to form water. A series of these membrane-catalyst assemblies makes up a cell, and the cells are connected in series to increase the voltage. The proton exchange membrane fuel cell has advantages because
of its low operating temperature, high power density, and advanced stage of technical development. However, the fuel used by the PEMFC is hydrogen, which is not easily transported or stored. In order to take advantage of the existing fuel infrastructure, the PEMFC can be integrated with a fuel processor that converts liquid hydrocarbons into hydrogen. The fuel cell can then use the hydrogen to produce electricity. There has been some progress in storing hydrogen in different materials such as hydrides or carbon. This would eliminate the need for a reformer but unless there is a hydrogen pipeline system, the hydrogen would have to be produced locally at service facilities. This is more likely to be done at larger metropolitan facilities.
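The proton traffic described above corresponds to the standard PEM half-reactions:

    \text{Anode: } \mathrm{H_2 \rightarrow 2H^+ + 2e^-}
    \text{Cathode: } \mathrm{\tfrac{1}{2}O_2 + 2H^+ + 2e^- \rightarrow H_2O}
    \text{Overall: } \mathrm{H_2 + \tfrac{1}{2}O_2 \rightarrow H_2O}

The membrane passes protons only; the electrons are forced through the external circuit, which is where the useful work is extracted.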
SOLID OXIDE

Solid oxide fuel cells operate at temperatures of 1,500-1,800°F. They are likely to be used for large stationary power plants. The high operating temperature eliminates the need for expensive catalysts and allows for the reforming of hydrocarbons within the stack. The high-temperature heat can be used for combined cycles, but there is high thermal stress on the components. The solid components used allow some flexibility in cell configuration. Advances in modern ceramic technology and solid-state devices are pushing the development of a range of efficient units. Many ceramics can be tailored to display electrical properties unattainable in their metallic or polymeric counterparts. These materials are called electroceramics. One group of electroceramics, the fast oxygen ion conductors, is used in devices such as oxygen sensors, oxygen pumps, exhaust catalysts and solid oxide fuel cells (SOFCs). A SOFC uses yttria-stabilized zirconia as its electrolyte, between the anode and the cathode, and runs at a temperature of about 1,000°C. The heat produced can be used in cogeneration applications or in a steam turbine to provide more electricity than that generated from the chemical reaction within the fuel cell; this is known as a bottoming cycle. Several different fuels can be used, including pure hydrogen, methane and carbon monoxide, and the nature of the emissions from the fuel cell will vary with the fuel mix. Many SOFCs use a separate pre-reformer rather than an integral reformer. SOFCs can also use carbon monoxide as a fuel. There are three basic designs of SOFC: tubular, planar and monolithic.
Westinghouse has been working on a tubular form of SOFC. The tubular units operate with the fuel on the outside surfaces of a bundle of tubes. The oxidant is on the inside, and the tube itself is composed of the electrolyte and electrode sandwich. The tubes have a high electrical resistance but are simple to seal. Other companies such as Global Thermoelectric are working on planar SOFCs made of thin ceramic sheets which operate at less than 800°C. The thin sheets have a low electrical resistance and offer high efficiencies. Less expensive materials can be used at the lower temperatures, helping the SOFC reach commercial markets. Planar SOFCs are being developed by several companies, including Siemens and Fuji Electric. In these units the cells are flat plates bonded together to form a stack. The advantage of this over the tubular system is its relative ease of manufacture, and the lower ohmic resistance of the electrolyte results in reduced energy losses. Siemens Westinghouse in Germany is working on tubular SOFCs operating at 1,000°C. In 1998, Siemens halted work on its own planar solid oxide fuel cells and bought out Westinghouse's gas turbine and tubular solid oxide fuel cell division. The Siemens planar design suffered from leaky seals in its window-frame design, which had 16 small SOFC cells in each layer. Siemens is also working on direct methanol fuel cells (DMFC) for automobiles and polymer electrolyte fuel cells (PEFC) for specialty applications. Sulzer in Switzerland is working on a 3-kW SOFC for CHP. Monolithic SOFCs that use a honeycomb structure are also in development. Tests indicate that this form of fuel cell may be one of the most efficient; they are capable of efficiencies between 50 and 60%. High-grade waste heat is produced for combined heat and power (CHP) applications, and internal reforming of hydrocarbon fuels is possible. Global Thermoelectric is working on planar SOFCs operating at 800°C. In July 1997, Global signed a fuel cell agreement with Forschungszentrum Jülich, one of the world's leading developers of solid oxide fuel cells. In early 1999 Global reported that it had achieved high levels of power output with a new type of seal and an inexpensive variety of ceramic plates for the stack. There are few problems with electrolyte management; liquid electrolytes are usually corrosive and difficult to handle. Solid oxide fuel cells provide some advantages when compared with other fuel cell types.
Solid oxide fuel cells are made from solid-state materials, using an ion-conducting oxide ceramic as the electrolyte, and operate in the temperature range of 900-1,000°C. A SOFC unit consists of two electrodes (anode and cathode) separated by an electrolyte. The fuel, usually H2 or CH4, is injected at the anode, where it is oxidized by oxygen ions from the electrolyte. This releases electrons (e-) to the external circuit. On the other end of the fuel cell, oxidant (O2 or air) is fed to the cathode, where it supplies the oxygen ions (O2-) for the electrolyte by accepting electrons from the external circuit. The electrolyte conducts these ions between the electrodes. The current technology employs several ceramic materials for the active SOFC components. The anode is typically constructed from an electronically conducting nickel/yttria-stabilized zirconia cermet (Ni/YSZ). The cathode is based on a mixed conducting perovskite, lanthanum manganate (LaMnO3). Yttria-stabilized zirconia (YSZ) is used for the oxygen ion-conducting electrolyte. To generate a suitable voltage, fuel cells are not operated as single units but as a series array of units, or stack, with a doped lanthanum chromite (La0.8Ca0.2CrO3) interconnect joining the anodes and cathodes of adjacent units. Several stack designs exist, but the most common is the planar or flat-plate configuration. For the YSZ electrolyte to provide sufficient oxygen ion conductivity, a high operating temperature (900-1,000°C) is required. This means that expensive high-temperature alloys must be used to house the fuel cell. The cost of the fuel cell could be reduced if the operating temperature were lowered to between 600 and 800°C, which would allow the use of materials such as stainless steel. A lower operating temperature can also reduce the thermal stresses in the active ceramic structures, resulting in a longer lifetime for the system. To lower the operating temperature, either the conductivity of YSZ must be improved, or alternative electrolytic materials must be developed to replace YSZ. Ceramics being investigated include Gd-doped CeO2, Ba2In2O5 and (Sr,Mg)-doped LaGaO3 (LSGM). These materials all face serious drawbacks compared with YSZ, and it is more likely that the first commercial SOFC units will use zirconia-based ceramics as the electrolyte. The solid oxide fuel cell is considered to be a desirable fuel cell for generating electricity from hydrocarbon fuels.
This is because it is simple, highly efficient, tolerant of impurities, and able to internally reform hydrocarbon fuels. One advantage of the SOFC over the molten carbonate fuel cell (MCFC) is that the electrolyte is a solid, so no pumps are required to circulate a hot electrolyte. Small planar SOFCs of 1-kW could be made with very thin sheets for a very compact package. Another advantage of the SOFC is that both hydrogen and carbon monoxide are used in the cell. In the PEFC carbon monoxide is a poison, while in the SOFC it is a fuel. This also means that the SOFC can use many common hydrocarbon fuels such as natural gas, diesel, gasoline, alcohol and coal gas. In the PEFC an external reformer is required to produce hydrogen gas, while the SOFC can reform these fuels into hydrogen and carbon monoxide inside the cell. This results in some of the high-temperature thermal energy that is normally wasted being recycled back into the fuel. Chemical reactions in the SOFC occur readily at the high operating temperatures, so air compression is not needed. This results in a simpler, quieter system with high efficiencies, and exotic catalysts are not needed. Some fuel cells such as the PEFC require a liquid cooling system, but the SOFC does not. Insulation is used to maintain the cell temperature on small systems, and the cell may be cooled internally by the reforming action of the fuel and by the cooler outside air that is drawn into the fuel cell. Since the SOFC does not produce any power below 650°C, a few minutes of warm-up are required. As an emergency power source, this period is similar to the warm-up of engine-driven generators. Because of their high temperatures, SOFCs may not be suitable for sizes much below 1,000 watts or for small to mid-size portable applications. Small SOFCs are almost 50% efficient from about 15% to 100% of rated power. To achieve greater efficiency, medium-sized and larger SOFCs are generally combined with gas turbines. The fuel cells are pressurized, and the gas turbine produces electricity from the extra waste thermal energy produced by the fuel cell. The resulting efficiency of this type of CHP SOFC generating system is about 70%. A SOFC generator using natural gas as its fuel would use a reforming chamber. On the anode side, natural gas is sent into the reforming chamber, where it draws waste thermal energy from the stack and is converted into hydrogen and carbon monoxide. It then flows into the anode manifold, where most of the hydrogen and carbon monoxide is oxidized into water and carbon dioxide.
Part of this gas stream is recycled to the reforming chamber, where the water is used in the reformer. On the cathode side, air is forced into the heat exchanger, where it almost reaches the operating temperature. The air is brought up to the operating temperature of the fuel cell by combustion of the remaining hydrogen and carbon monoxide gas from the anode. The oxygen in the cathode manifold is converted into oxygen ions, which flow back to the anode. There are efforts to develop a low-temperature solid oxide fuel cell that operates at 500°C. This would allow the direct use of methanol and the use of stainless steel components. Imperial College London is active in this area.
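The ion transport described in this section follows the standard SOFC half-reactions, with both hydrogen and carbon monoxide consumed at the anode:

    \text{Anode: } \mathrm{H_2 + O^{2-} \rightarrow H_2O + 2e^-} \qquad \mathrm{CO + O^{2-} \rightarrow CO_2 + 2e^-}
    \text{Cathode: } \mathrm{\tfrac{1}{2}O_2 + 2e^- \rightarrow O^{2-}}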
SOLID POLYMER

The solid polymer fuel cell (SPFC) is also known as the proton exchange membrane fuel cell (PEMFC). It is unusual in that the electrolyte consists of a layer of solid polymer which allows protons to be transmitted from one face to the other. It essentially requires hydrogen and oxygen as inputs, although the oxidant may also be ambient air, and these gases must be humidified. It operates at a relatively low temperature (80°C, about 175°F) with a moist polymer membrane as the electrolyte. Its high power potential makes it useful for light vehicles and for generating electricity for backup power and micropower generation. The cell requires hydrogen, and a reformer is needed if tanks of hydrogen are not used as fuel. A catalyst at the anode strips electrons from the hydrogen atoms. The resulting protons move through the membrane while the electrons flow through the external circuit. They reunite at the cathode, where air is flowing, and water vapor is formed when two of the hydrogen atoms combine with an oxygen atom. The recovered heat can be used for domestic water heating or space heating. It operates at a much lower temperature than most fuel cells. The SPFC can be contaminated by carbon monoxide, reducing performance by anywhere from a few percent to tens of percent. It requires cooling and management of the exhaust water in order to function properly. SPFCs are being developed by almost 30 companies.
The main focus is transport applications, since a solid electrolyte offers safety advantages. The heat produced is not adequate for cogeneration. Daimler-Benz is involved in developing cars powered by Ballard fuel cells. Toyota has shown a vehicle that uses a fuel cell of its own design. Other car manufacturers, including General Motors and Ford, are active in similar developments. The SPFC could be used in small-scale power generation, where the heat could be used for hot water or space heating. There is also the potential of a heater/chiller unit for cooling in areas where air conditioning is needed. This particular type of fuel cell could be used for both transport and power generation, with the advantages of economies of scale. This could ease the introduction of this technology compared to others. General Electric did much of the early work on proton exchange membrane (PEM) fuel cells in the 1960s. These PEM cells were used for the Gemini space program and cost hundreds of thousands of dollars for each kilowatt generated. GE saw no practical applications and let many of its patents run out. GE is now back into PEM fuel cells, and there are more advanced materials available than in the 1960s, when the last surge of development occurred. In a pure hydrogen fuel cell, emissions of the common pollutants are a few parts per million. Emissions are much greater in cells that reform or extract the hydrogen from a fossil fuel.
FUEL CELL UTILIZATION

Fuel cells are an old technology, but costs and other problems have plagued their utilization in the past. Hydrogen could be widely used in the future, but methanol, ethanol and gasoline are also being proposed as fuels. Fuel cells are becoming a reality in several applications. Ballard Power Systems is a major developer of fuel cells. In 1993, Ballard demonstrated a fuel-cell bus and surprised most of those attending an international energy conference. Buses were an ideal platform for a hydrogen-powered fuel cell, particularly for the bulky technology that existed in 1993. They offered a large roof that could be used for fuel storage, a flat floor for batteries and a large engine compartment that could house the cell.
Municipal buses usually run out of a central depot that could be used for hydrogen production and storage. Ballard's bus, like the International Fuel Cells and Daimler-Benz buses that appeared later, proved the concept with driveable prototypes. Three fuel-cell-powered buses have been in service in Chicago. These buses serve as rolling test beds for Ballard to gather operating data. There have been some stack problems, but of the 4,500 cells in the three buses, only ten have had problems. The main problems have been with the air conditioning and brake systems rather than the fuel cell drives. Ballard's buses run on pure hydrogen, without the need for a reformer. The fuel cells develop enough power on their own without needing to be supplemented by battery packs. The fuel cell needs cooling, control, and fuel processing to operate. The Chicago buses need fast acceleration under 25 miles per hour to merge into traffic. In 1995 Ballard developed a fuel cell stack producing the equivalent of 275 horsepower in a fraction of the space that the 125-horsepower 1993 cell required. Along with the stack for buses, Ballard is building 50- to 100-kilowatt systems for cars, and a small, under-two-kilowatt portable unit that could power a laptop computer or fit in a soldier's backpack. Ballard builds fuel cells for car manufacturers such as Ford, Volvo, and DaimlerChrysler. A separate Ballard subsidiary builds stationary 250-kilowatt power plants to run hospitals and factories, and a smaller 10-kilowatt model for homes. Ballard is working on polymer electrolyte fuel cells (PEFC) for transportation and electric power plants. Most of the PEFC technology is developed in-house, and Ballard owns over 200 patents. It is working with DaimlerChrysler and Ford. According to Merrill Lynch, the PEFC fuel cell cars powered by Ballard could be suitable for mass production. In 1999 Ballard announced the purchase of a license to direct methanol fuel cell (DMFC) intellectual property from the California Institute of Technology (Caltech) and the University of Southern California (USC). The license is based on technology developed at the Jet Propulsion Laboratory of Caltech and the Loker Hydrocarbon Research Institute at USC. Automakers are facing tighter regulation of tailpipe emissions, and many are investing heavily in fuel cells.
DaimlerChrysler, Ford and Ballard Power Systems have spent close to $1 billion on fuel cells and plan to spend at least a billion more to begin mass-producing vehicles. Japan's four largest automobile makers have invested more than $850 million in fuel cells over the past decade. The internal combustion engine is getting harder to improve, and even the most sophisticated designs may have difficulty with newer emissions standards imposed in California and several East Coast states. A fuel cell car needs to be competitive in price with internal combustion models. Fuel cells are being proposed to replace Otto or Diesel engines because they could be reliable, simple, quieter, less polluting and offer greater fuel economy. The internal combustion Otto or Diesel cycle engine has been used in automobiles for over 100 years. It has a life span of about 10,000 hours of operation in automobiles and over 25,000 hours in larger applications such as buses, trucks, ships and locomotives. Automobile manufacturers have been finding new ways to improve the Otto and Diesel engines. Toyota, for example, has demonstrated an Otto cycle automobile with emissions five times cleaner than present requirements. Volkswagen has a prototype compact four-seater Diesel cycle automobile that gets 100-mpg. Fuel cells using reformers do not produce much less pollution than very advanced Otto and Diesel cycle engines with complex catalytic converters. If vehicles use hydrogen as fuel, a hydrogen supply system would need to be installed. Fuel cells can be considerably quieter than Otto or Diesel cycle power plants; however, fuel cells produce electricity, which is not the final form of energy for transportation. The electricity must be converted into mechanical power using an electric motor, whereas the Otto or Diesel cycle produces the required mechanical power directly. Otto and Diesel cycle engines are inexpensive to produce and use readily available liquid fuels. The direct alcohol fuel cell (DAFC) would be simpler than the internal combustion engine, offer greater efficiency and be less polluting. The liquid fuel could be handled by slightly modifying the present distribution equipment. When the DAFC is perfected, it may compete with Otto and Diesel cycle automobiles. Ballard, DaimlerChrysler and Ford are testing their technologies in the California Fuel Cell Program. Fuel cells hold the promise of zero emissions, with the potential of a hydrogen-oxygen-water cycle that is sustainable forever.
The Energy Research Corporation is working on large molten carbonate fuel cells for power on ships. One unit uses a diesel reformer for an output of several megawatts in a 15-foot-tall package. Fuel cells are attractive since they free electric cars from battery power. Battery-powered cars are smooth and responsive, but these features have been overshadowed by the vehicles' limited range. Unlike batteries, which store a charge, the fuel cell generates electricity. Fuel cells utilize different fuels and materials, but one choice for automotive use is the proton exchange membrane (PEM) fuel cell. Another way to extend the range of the electric car is to carry fuel and generate electricity onboard. This is the approach used by hybrid gasoline/electric cars such as the Toyota Prius, which uses a small combustion engine plus a set of batteries to supplement the engine during acceleration. This approach combines electric and mechanical drive technologies. Fuel cell vehicle systems are still costly, and supplying hydrogen to the unit is a problem. Even hydrogen compressed to 5,000 pounds per square inch may take up too much space for a 70-mile-per-gallon, 350-mile-range vehicle. Storing the hydrogen in metal hydrides is being pursued, but this adds weight and high costs. Researchers at Northwestern University have developed a system based on hydrogen absorption in carbon nanofibers for high-density hydrogen storage. This could make direct-hydrogen cars practical, and researchers at the National University of Singapore have reported promising results.
FUEL PROCESSORS

A complete fuel cell/fuel processor generator system could weigh about one kilogram (2.2 pounds). Most of the weight results from fuel storage. It would be fueled by a liquid hydrocarbon such as butane, and could provide 5 watts of base-load electric power with 10 watts of peak power for one week. The system could use a compact lithium battery for load leveling and to meet peak electric power demands. A major component of the fuel processor, the vaporizer, has been demonstrated at the scale required for a 25-kW fuel cell, using methanol as the liquid hydrocarbon fuel. A device with dimensions of 7 × 10 × 2.5 cm vaporized methanol at a rate of 208 mL/minute.
Heat was provided by catalytic combustion of a dilute hydrogen stream that would be supplied as the exhaust from the fuel cell anode. The same miniaturization techniques could be used for additional system components such as steam reformers, partial oxidation reactors, water-gas shift reactors, and preferential oxidation reactors.
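A back-of-the-envelope check shows that the vaporizer figures above are consistent (a Python sketch; the methanol density and lower heating value are standard handbook values, and the 45% fuel-to-electric conversion is an assumption in line with the efficiencies quoted later in this chapter):

    # Does 208 mL/min of methanol plausibly feed a 25-kW fuel cell?
    METHANOL_DENSITY = 0.79   # kg/L, handbook value
    METHANOL_LHV = 19.9e6     # J/kg, lower heating value, handbook value

    flow_l_per_s = 0.208 / 60.0                 # 208 mL/min in L/s
    fuel_power_w = flow_l_per_s * METHANOL_DENSITY * METHANOL_LHV
    electric_w = fuel_power_w * 0.45            # assumed conversion efficiency
    print(f"Chemical power in: {fuel_power_w / 1000:.0f} kW")  # ~55 kW
    print(f"Electric output:   {electric_w / 1000:.0f} kW")    # ~25 kW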
METHANOL

Using methanol as a fuel means extracting hydrogen. Methanol is sulfur-free and yields hydrogen at 300°C. Reforming methanol is still a complex process involving many steps, each of which must take place at a specific temperature. Methanol is produced from natural gas or distilled from coal. In the 1980s it was used for internal-combustion engines, but methanol is highly toxic and corrosive. The onboard extraction of hydrogen from gasoline could make the transition to the fuel cell vehicle easier, but reforming gasoline on board is not easy. The reactions occur at about 800°C, making the devices slow to start, and the process is sensitive. The process is used in chemical manufacturing plants and oil refineries to make industrial volumes of hydrogen. General Motors and ExxonMobil are involved in the joint development of gasoline fuel processors. DaimlerChrysler is developing a system for fuel cells that run directly on methanol rather than hydrogen. One methanol processor being tested provides enough hydrogen to take a vehicle almost 200 kilometers between methanol fill-ups. The range is limited by the size of the fuel tank, which is small due to the bulk of the fuel processor. Another problem is that the fuel processor takes a half-hour to warm up; one processor uses steam to free the hydrogen, and it takes this long to get the steam ready. Another type of fuel processor uses a catalyst, instead of steam, to start the hydrogen production. This system is much smaller and weighs half as much as a steam unit. Methanol poses dangers as a fuel. It is fatal if ingested, and splashing it on the skin can cause blindness and liver and kidney failure. Since methanol dissolves in water, it can be a threat to underground drinking water supplies. The methanol-based fuel additive MTBE (methyl tertiary butyl ether) is being phased out of gasoline after the chemical was found in the drinking water of several areas.
GASOLINE REFORMING

Fuel cells that run on pure hydrogen must have a high-pressure tank of this highly flammable gas nearby, or they can use a reformer to extract hydrogen from a fuel such as methanol. The direct hydrogen approach is cleaner, but autos will probably retain their familiar liquid fuels, and the first fuel-cell cars will probably run on them. A 1997 joint project of Arthur D. Little, Plug Power and the Department of Energy demonstrated a gasoline reformer. This was considered a major feat, since gasoline is among the hardest fuels to reform. Gasoline contains some sulfur, which poisons fuel cells, but Epyx, a part of Arthur D. Little, trapped the sulfur before it reached the cell using a technique similar to catalytic conversion. Such a reformer could work with multiple fuels and be changed to use gasoline, ethanol, or methanol. Chrysler had been a proponent of gasoline reforming but switched to methanol as a result of its merger with Daimler-Benz. As clean as they are, fuel-cell cars with reformers still are not zero-emission vehicles as defined by California standards. One solution is to use hydrogen directly as the fuel. While hydrogen has more energy by weight than any other fuel, about three times more than gasoline, it is hard to get much of this energy into a small fuel tank. A commercially compressed gas tank of hydrogen will take a vehicle about 150 kilometers, which is no farther than the best car batteries provide. Hydrogen is also the smallest of molecules and slips through the smallest holes, which is troubling given its natural flammability. One test car has gone 450 kilometers using a liquid hydrogen tank, but cryogenic technology was used to store the fuel at -253°C, 20 degrees above absolute zero. There are a few hydrogen filling stations in the world, and companies such as Texaco Energy Systems, which specializes in advanced fuels, are investing in hydrogen fueling technology, including advanced storage tanks. Stronger tanks could compress the hydrogen to greater pressures, but another technique is to pack the tank full of materials that bind hydrogen, slowing down the molecules without liquefying the gas. Graphite fibers with elaborate nanostructures have been shown to absorb more than 20% hydrogen by weight, allowing more to be squeezed into a tank.
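The weight-versus-volume trade-off mentioned above is easy to quantify (a sketch; the heating values are standard handbook numbers, and the density of hydrogen gas at 5,000 psi is an approximate real-gas value):

    # Hydrogen wins by weight but loses badly by tank volume.
    H2_LHV = 120.0              # MJ/kg, lower heating value
    GASOLINE_LHV = 44.0         # MJ/kg
    h2_density_350bar = 23.0    # kg/m^3, approximate at ~350 bar (5,000 psi)
    gasoline_density = 740.0    # kg/m^3

    print(f"Energy by weight, H2/gasoline: {H2_LHV / GASOLINE_LHV:.1f}x")      # ~2.7x
    print(f"H2 at 350 bar: {H2_LHV * h2_density_350bar / 1000:.1f} MJ/L")       # ~2.8 MJ/L
    print(f"Gasoline:      {GASOLINE_LHV * gasoline_density / 1000:.1f} MJ/L")  # ~33 MJ/L

By volume, compressed hydrogen carries roughly a tenth of the energy of gasoline, which is why tank size, not weight, limits the range of direct-hydrogen vehicles.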
MICROSTRUCTURES

A compact fuel processor is possible with the enhanced heat and mass transfer exhibited when fluids flow in and around microstructures. These structures consist of machined microchannels up to 500 microns wide and other special structures engineered to enhance chemical reactions or separations. Using many microstructures in parallel, chemical systems can achieve major reductions in size and weight. The process operations take place in parallel sheets that are machined with many parallel micro-scale features. Combinations of reactor, heat exchange, and control sheets are stacked together to form an integrated system that performs operations such as steam reforming, partial oxidation, the water-gas shift reaction, carbon monoxide removal and heat exchange. Each parallel sheet may perform one or more chemical process operations. Fuel cells may be a few years away from large-scale commercialization, and which fuel cell technology will dominate depends on how fast some of the existing problems are solved. The solid oxide fuel cell (SOFC) and direct alcohol fuel cell (DAFC) have some problems, but if these can be solved quickly they may become the predominant fuel cells of the future. There has already been considerable progress made in this direction. Ballard has purchased DAFC technology from JPL, and Global has made progress towards commercializing SOFC technology. JPL predicts that direct-oxidation, liquid-feed methanol fuel cell efficiency will increase to 45% with the use of advanced materials. Solid oxide fuel cells may compete with phosphoric acid cells. SOFC problems include the fouling of the membranes by sulfur and other contaminants. The installed cost is about $3.5/W with an electrical efficiency of 45% and a CHP efficiency of 70%. Among the different types of fuel cells, the polymer electrolyte membrane or proton exchange membrane (PEM) cell is considered among the best for transportation applications. PEM cells were supplied for the Gemini space program by General Electric in the early 1960s. The PEM cell has some advantages in size, low operating temperature, adjustable power output and quick starting. The low-temperature heat can be used for water heating, but is too low for steam production. The installed cost is about $5.5/W with an electrical efficiency of 30% and a CHP efficiency of close to 70%.
A major breakthrough occurred in the 1980s with a reduction of up to 90% in the amount of the expensive catalyst needed to coat the cell's thin polymer membrane.
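The cost and efficiency figures quoted above for the two technologies can be compared side by side (a Python sketch; the 200-kW plant size is an assumed example, and the $/W and efficiency numbers are the ones given in the text):

    # Capital cost and fuel input for an assumed 200-kW plant.
    plants = {
        # type: (installed $/W, electrical efficiency)
        "SOFC": (3.5, 0.45),
        "PEM":  (5.5, 0.30),
    }
    size_w = 200_000  # assumed plant size
    for name, (cost_per_w, eta_e) in plants.items():
        capital = cost_per_w * size_w
        fuel_in_kw = (size_w / 1000) / eta_e  # fuel needed for 200 kW electric
        print(f"{name}: capital ${capital:,.0f}, fuel input {fuel_in_kw:.0f} kW")

On these figures a 200-kW SOFC plant would cost about $700,000 and draw about 444 kW of fuel, against $1,100,000 and about 667 kW for a PEM plant of the same size, before the roughly 70% CHP efficiency either can reach with heat recovery.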
POWER INSTALLATIONS

Fuel cells being sold as electric power plants may succeed before vehicular applications do. International Fuel Cells (IFC) has manufactured more than 250 PC25 power plant systems, installed in approximately 100 locations throughout the world. The PC25 fuel cell power plant systems have more than 3 million hours of operation. One of the world's largest fuel cell installations is at the State of Connecticut juvenile detention facilities. The facility gets its heating, cooling, and electricity from a central power plant located just outside the secured area, using buried pipes and cables to distribute the energy to each building on-site. This Energy Center provides heating, cooling and electrical power for 227,000 square feet of buildings on 35 acres, including residences, office buildings, and other campus facilities. The Energy Center delivers power from fuel cells interconnected with the regional electric power grid, with emergency generators for system backup. Extensive life-cycle cost analysis and environmental benefits pointed to fuel cells. State agencies along with the private sector joined in two primary agreements: one for engineering, procurement and construction (an EPC contract) and one for operations and maintenance (an O&M contract). The EPC contract addressed specifications, equipment capacities, schedules, terms, and other details related to the engineering and construction. The O&M contract documented the duties, responsibilities and requirements of the 30-year operating period. The EPC contract was a fixed-price contract to construct the Energy Center. Fuel cells were required to meet a 1.2-megawatt electrical capacity test, chillers had to produce 680 tons of cooling and the boilers had to produce 9 million Btuh of hot water. The O&M contract has a 20-year term with a 10-year renewal. Chilled water, hot water, and electricity must be provided 24 hours per day, 7 days per week. Failure to provide these services can result in significant liquidated damages. The O&M contract also includes fuel purchasing.
Department of Defense fuel cell grants were obtained in the amount of $200,000 per fuel cell. Under the terms of the lease, the state makes fixed semi-annual payments and owns the plant at the end of the lease. The contractor, Select Energy Services (SES), raised the money to build the project by selling tax-exempt certificates of participation (COPs) in the lease. Each certificate represents the right to a proportionate interest in the lease payments from the state. COPs can be traded and sold, and typically have lower interest rates than a private placement of a project loan. Standard & Poor's rated the COPs A+. The initial work focused on providing temporary electricity, chilled water, and hot water to the site. Since some school buildings had already been under construction for several months, utilities were needed. A 13,000-volt temporary power line was energized within 2 weeks. Chilled water was required to dehumidify the interior to allow painting and other interior work to be completed. Temporary boilers were brought on site and started up to provide heat. Power is generated by six fuel cells at the Energy Center. The fuel cells are located outside the main structure. To ensure reliability, the electric supply has three levels of redundancy. The fuel cells are the primary power source, with backup power provided by the local utility and gas-fired emergency generators. The six individual fuel cells reduce problems from equipment failure. If the fuel cells should malfunction, the local electric utility switches in through a 13,200-volt line fed from a single substation with automatic reclosers. In the event of a utility failure, two 1,500-kW emergency generators can supply the power needs of the facility. An absorption chiller was installed to use waste heat from the fuel cells to provide nearly free cooling. During the spring and fall, the absorption machine will provide most of the chilled water. During the summer months, two centrifugal chillers will also produce chilled water. These chillers are equipped with variable-speed drives. The fuel cells generate waste heat that can be used to heat the buildings during most periods. A fully redundant fire-tube boiler system was installed to ensure that heat is available under all circumstances. The building control system is designed for maximum waste heat use before drawing on the boilers. An automated system is used for monitoring and control. Separate, integrated control systems operate the fuel cells and the rest of the mechanical systems.
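The value of the three supply levels can be illustrated with a simple independence calculation (a sketch; the per-level failure probabilities are hypothetical, chosen only to show how redundancy compounds, and real outages are rarely fully independent):

    # Probability that all three supply levels fail at the same time.
    # All three probabilities below are hypothetical illustrations.
    p_fuel_cells = 0.01    # fuel cell plant unavailable
    p_utility = 0.002      # utility feed unavailable
    p_generators = 0.05    # emergency generators fail to start or carry load

    p_total_loss = p_fuel_cells * p_utility * p_generators
    print(f"Chance of total power loss: {p_total_loss:.0e}")  # 1e-06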
Fuel cells are commercially available and can be used in suitable applications. Green power is more expensive than commercially available alternatives, but the future is bright for the environment and for alternate power sources.

References
Turner, Dr. Wayne C., Editor-in-Chief, Energy Engineering, Vol. 101, No. 1, 2004.
Wells, Joyce, Editor, Solutions for Energy Security and Facility Management Challenges, Proceedings of the 25th World Energy Engineering Congress, Lilburn, GA: The Fairmont Press, Inc., 2003.
Chapter 8
Protecting Computer Data

Many small- and medium-sized businesses believed that data availability was not crucial to their operations before September 11, 2001. Many companies now realize that their data are the lifeblood of their business. Many thought that they could not afford the staff, services, and capital to properly protect that data off-site. Often, they backed up to tape and hoped that they never had to use those tapes to recover lost data. Even large enterprises did not implement backup and recovery procedures on a consistent basis. But today, businesses need to ensure they can recover from a disaster. Larger enterprises are revising their backup procedures and expanding their emergency infrastructure, and many smaller businesses are developing recovery plans. A company cannot afford not to protect its data. Hardware can be replaced, but data are difficult or impossible to recover. Many companies still do not have backup or mirroring processes in place. A survey by the TDM Group in 2003 showed that about 360,000 businesses had given no thought to how to get their systems up and running rapidly in case of a disaster. Few businesses started taking the issue seriously until September 11. Banks are among the best prepared, with 92% of system managers reporting that their networks would be up and running quickly after a disaster. The least prepared business sector was manufacturing. There are costs involved in data loss mitigation, but a large degree of risk reduction may be accomplished with relatively low expenditures. By categorizing the value of data and understanding the various risks that exist, it is possible to develop a strategy that protects vital databases. If a business depends on the constant availability of its stored data for revenue generation, any extended period of data downtime can have a critical financial impact. In high-transactional types of commerce, income
losses can be devastating if stored data are inaccessible. In the retail brokerage business, average financial losses can be over $6 million for each hour of system downtime. An airline reservation system can lose almost $100,000 per hour. Most businesses are susceptible to a potential data loss. Getting back on-line will require the recovery and restart of the critical database following the event and may take 2-10 hours or more. Backup processes are regularly tested during scheduled backups, but recovery time is often untested until a disaster hits. Conventional recovery processes require about 15 hours to recover a 1-terabyte database after a serious failure. In many institutions, the revenue loss or damage to the organization that would result from an extended outage of this magnitude cannot be tolerated.
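The 15-hour figure implies a sustained restore rate that is worth checking against the backup hardware in use (a sketch using the numbers quoted above):

    # Sustained throughput implied by a 15-hour, 1-terabyte restore.
    db_bytes = 1e12                   # 1 terabyte
    recovery_seconds = 15 * 3600
    throughput_mb_s = db_bytes / recovery_seconds / 1e6
    print(f"Implied restore rate: {throughput_mb_s:.0f} MB/s")  # ~19 MB/s

Any single stage that cannot sustain roughly this rate (tape drive, network link, or database load utility) stretches the recovery window further.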
DISASTER PLANNING

The nature of the facility is a major factor in disaster planning. An electrical system outage can have catastrophic effects in a hospital: life support systems would go off, as well as monitoring devices, and medical and surgical procedures would be interrupted. These are critical loads, and emergency power systems must be readily available and easily activated when necessary. An electrical outage would have a different impact on the business operations of an equipment rental company. Employees and customers would be inconvenienced, but reservations could be manually logged during the outage and keyed in when the electrical service is restored and the data system is reactivated. The contingency strategies depend on the acceptable level of downtime, which is defined as the acceptable time period for using alternate procedures to maintain essential functions. Contingency planning elements include the initial emergency response, recovery, resumption, implementation, testing and revision. As in any other emergency response, the initial critical actions are taken to minimize loss. In the recovery mode, the steps needed to continue critical functions are taken. Resumption involves the requirements needed for a return to normal business functions. A formal backup policy specifies the frequency of backup and the frequency of media reuse. The frequency of testing the system and data recovery should be determined. A complete test should be done at least once or twice a year.
Included in the formal backup policy would be the procedures for the retrieval and identification of the data from any off-site storage location, how restoration is to be accomplished and the procedures for the return to storage. Ideally, once a backup is made, it should be sent to the off-site location within a day. The cost of contingency plans and procedures is another factor that must be considered in planning. Emphasis should be placed on developing alternative solutions that are cost-effective, but these solutions should not compromise the recovery effort. Identifying and prioritizing the functions and related operations that are essential to business continuity is a major part of determining the recovery plan. The quick restoration of computer system resources is the goal of the disaster plan. The options for data and system recovery include off-site storage, backup policy, a testing program and the types of backup media to be used. Stored data should include information on data content, structure and any relevant licensing and vendor information. Documentation of items such as the operating system and version, patch levels, and the applications and versions running on each computer aids in system recovery; any changes or upgrades to hardware and software should be documented in the same way. System recovery time may depend on the care taken when a new system is first put into use. Once the planning has been completed, implementation involving preparation, documentation and training takes place. Testing and revision should be part of the contingency plans. All possible functions should be tested and revised if necessary. The National Institute of Standards and Technology (NIST) has developed contingency planning guidelines for information systems. These guidelines are designed to be integrated into each stage of the system development life cycle of Information Technology (IT) systems. These elements are summarized in Table 8-1. A system disruption generally has three phases, as listed in Table 8-2. Disasters, like emergencies, are unplanned events. Each event must be considered in the context of its impact. A nuisance in a large industrial complex could be considered a disaster at a smaller facility.
Table 8-1. Contingency Planning
————————————————————————————————
Formal policy                    Provides the control and direction needed
                                 to develop the contingency plan.
Business impact analysis (BIA)   Identifies and ranks critical computer
                                 systems and components.
Preventive controls              Measures to reduce the effects of system
                                 disruptions and increase system availability.
Recovery strategies              Allow the system recovery to be effective
                                 following an outage.
Contingency plans                Provide the detailed guidance and procedures
                                 needed for restoring the system.
Testing and training             Identifies omissions and prepares personnel
                                 for plan activation.
Plan maintenance                 The plan should be updated regularly.
————————————————————————————————
Table 8-2. System Disruption Phases
————————————————————————————————
Notification/activation   Notify recovery personnel and perform damage
                          assessments.
Recovery                  Restore operations using contingency capabilities.
Reconstitution            Returns the system to normal operating conditions.
————————————————————————————————
A total electrical system failure, in contrast, would merely inconvenience an equipment rental company: reservations could be logged manually for the duration of the shutdown and keyed in when electrical service is restored and the data system is reactivated.
WEB OPERATIONS

Many companies now conduct some aspect of their business using the Internet. The continuation and protection of consumer/customer transactions and data are critical. Doing business through the Internet is called e-commerce or e-business. Such business is conducted via websites. If a company's website is set up for selling goods/services to the public, it is called e-commerce. A website may also be used for business-to-business purposes; in this case, it is called e-business. Many companies seek to gain a competitive advantage from a website with low cost, 24/7 product availability and secure methods of payment as part of their e-commerce services. These websites present information to the public or authorized personnel via the World Wide Web or a private Intranet. An external website may also be an electronic commerce portal, where the organization provides services over the Internet. A website may be used internally within an organization to provide information. These internal sites are called Intranets, and such information could include human resources forms and internal documents, including phone directories and other staff information.
WEB CONTINGENCY PLANNING

Website contingency planning involves factors such as documentation, programming, security controls, load balancing, implementation coordination and response procedures. Website documentation involves recording the hardware and software configurations used to create and host the website. Before a website is put into operation, procedures should be implemented for site testing, configuration management and maintenance. All changes should be documented, and backup procedures should be put in place.
Websites are assigned an Internet Protocol (IP) address. A Domain Name Server (DNS) maps the domain name, or Uniform Resource Locator (URL), to the IP address, allowing users to access the Internet using text addresses instead of numeric addresses. If the DNS becomes unavailable, access to the web, including e-mail log-in and other Internet services, is interrupted. This means a total loss of electronic connectivity for the business. Reducing the risks associated with DNS failure involves redundancy. If an organization provides the same address for both primary and secondary name servers, the company is dependent on the functioning of a single server. Failure to list all DNS servers in domain registration records, or incorrect configuration entries, can also lead to problems, including non-authoritative DNS information and DNS failure. When all of an organization's name servers are located on the same physical network segment, a local outage or even a routing misconfiguration or security attack can make all of these servers simultaneously unavailable, interrupting DNS.

SECURITY CONTROL

A website is often the entry point for a hacker into an organization's network. Both the web server and its supporting infrastructure must be protected through security controls in the event of a power disruption and changeover to backup power and procedures. Contingency planning must be coordinated with these controls to ensure that security is not compromised during system recovery. This will ensure that the appropriate security controls and patches are implemented on the rebuilt websites. Integrity controls should be used to protect the operating system, applications and information in the system from alteration or destruction. Integrity controls also provide some assurance to the user that the information provided has not been altered. Integrity control elements include virus detection and elimination software, updating of virus signature files, automatic and/or manual virus scans, virus eradication and reporting, and the reconciliation routines used by the system, including checksums, totals and record counts. The latter are especially important during switch-overs to backup and recovery to normal operations.
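The reconciliation routines mentioned above can be as simple as comparing a stored digest and record count against the restored copy. A minimal sketch in Python (the file path, digest and counts are hypothetical placeholders):

    import hashlib

    def file_sha256(path: str) -> str:
        """Compute a SHA-256 digest of a file in streaming fashion."""
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def verify_restore(path: str, expected_digest: str,
                       expected_records: int, actual_records: int) -> bool:
        """Reconcile a restored file: checksum and record count must both agree."""
        return (file_sha256(path) == expected_digest
                and expected_records == actual_records)

    # Hypothetical usage during a switch-over to backup:
    # ok = verify_restore("/restore/customers.db", stored_digest, 150_000, counted)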
EMERGENCY MANAGEMENT

Emergency management is the process of preparing for, mitigating, responding to and recovering from a disaster or emergency. It involves planning, training, conducting drills, inspection, testing and coordinating facility activities. Major considerations include direction and control, communications, life safety, property protection, recovery and restoration. The system for managing resources, analyzing information and making decisions in a disaster/emergency is called direction and control. Larger industries may have their own fire team, emergency medical technicians, or hazardous materials team. Smaller facilities need to consolidate positions or combine responsibilities. Occupants and tenants of office buildings or industrial parks may be part of an emergency management program for the entire facility. The Emergency Director should be the facility manager. The emergency group members must determine the short- and long-term effects of an emergency, order evacuations or shutdowns and interface with outside organizations.
INCIDENT COMMAND

The incident command system that was developed for the fire services can be applied to all emergencies. It provides a coordinated response, a clear chain of command and safe operations. The incident commander is responsible for management of the incident, tactical planning and execution, outside assistance and internal resources, and must assess the situation and implement the emergency management plan. An emergency operations center serves as a centralized management center for disaster/emergency operations. Every facility should designate an area where decision makers can gather during a disaster or emergency. This should be a dedicated area equipped with communications equipment, reference materials, activity logs and other tools appropriate to a disaster/emergency. It should have a copy of the emergency management plan and procedures, blueprints, maps, lists of emergency operations personnel and their duties, technical information and data, and security system and data management information.
PLANNING

Planning considerations include defining the duties of personnel and the procedures for each position. Checklists for all procedures should be prepared. A determination of equipment and supply needs for the response must be made. Isolation of the incident is important when the disaster/emergency is discovered. This includes establishing temporary barriers and containment. Advanced security measures should be put into place, and access to the incident scene should be limited. Laws, codes, agreements or the nature of the emergency may require operations to be handed over to, or supported by, an outside response organization. Protocols between the facility and outside response organizations should be implemented. A communications failure can cut off vital business activities. Communications are needed to coordinate response actions and to keep in contact with customers and suppliers.

CONTINGENCY PLANNING

Planning for possible contingencies from temporary or short-term disruptions must be done. Procedures for restoring communications systems must be established. Provisions for backup communications should be made for each business function. Options may include messengers, telephones, modems, fax, portable microwave, amateur radio, point-to-point private lines, satellite and high-frequency radio. Employee and occupant procedures for reporting disasters/emergencies should be established. Emergency telephone numbers should be posted near each telephone. Telephone and pager numbers of key emergency response personnel should be maintained. Notification must be made to local government agencies when a disaster/emergency has the potential to affect public health and safety.
PLANNING CONSIDERATIONS

Planning considerations in protecting property include establishing procedures for fires, material spills, shutting down equipment and moving equipment to a safe location. The various systems needed to detect abnormal situations, provide warnings and protect property
should be determined. These systems include fire protection, lightning protection, water level monitoring, automatic shut-offs and emergency standby power generation systems. Facility shutdown is generally a last resort; an improper or disorganized shutdown may result in confusion, injury and property damage. Certain conditions could necessitate a shutdown, and even a partial shutdown would affect other facility operations. The length of time required for shutdown and restarting should be determined, along with the shutdown procedures.

RECORDS PRESERVATION

A company's vital records may include financial and insurance information, engineering plans and drawings, product lists, specifications, employee, customer and supplier databases, formulas, trade secrets and personnel files. Preserving vital records is essential to the quick restoration of operations. This involves classifying operations, determining essential functions and identifying the minimum information and equipment needed. Procedures for protecting and accessing vital records must be established. This may involve labeling, backing up, storing, off-site facilities, security and backup power. Bringing systems back on-line may include repairing or replacing equipment, relocating operations to an alternate location or contracting operations on a temporary basis. Many companies discover that they are underinsured after they have suffered a loss. The lack of appropriate insurance can be financially devastating. There should be coverage for interruption of power and lost income due to shutdown.
RESUMING OPERATIONS

After a disaster or emergency, assess any remaining hazards and maintain security. Keep detailed records: consider audio-recording discussions, and take photographs or videotape the damage to account for all damage-related costs. Restore equipment; for major repair work, review restoration plans with the insurance adjuster. Disaster and emergency recovery depends on the formation of a planning team. The size of the planning team depends on the facility's
operation, requirements and resources. Some persons will serve as active members while others will serve in advisory capacities. Input from all functional areas of the company should be obtained. A vulnerability analysis will determine the facility's capabilities for handling disasters and emergencies. This should include security procedures and risk management. Outside resources are a valuable component of the process. Applicable codes and regulations may include environmental regulations, fire codes, seismic safety codes, transportation regulations, zoning regulations and corporate policies. Identifying critical products, services and operations is important. The need for backup systems requires a review of products, services, facilities and equipment. Operations, equipment and personnel vital to the continued functioning of the facility must be identified. Backup systems may be needed to provide for payroll, communications, production, customer services, shipping, receiving, information systems and emergency power.

ASSESSING POTENTIAL IMPACT

The potential impact of the event on facilities and business operations must be assessed in terms of losses and damages and the potential loss of market share. Losses must be evaluated in terms of potential replacement costs, repair costs and temporary replacement costs. Business issues include business interruption, violation of contractual agreements, fines/penalties and legal costs. A regular review of the company's internal response capabilities and the available external response capabilities should be conducted. If these are inadequate, consideration should be given to developing additional disaster/emergency procedures, conducting additional training, acquiring additional equipment, establishing mutual aid agreements and making agreements with specialized contractors.
DEVELOPING THE STRATEGY
In developing site-specific strategies, several considerations must be addressed, including the scope of the event, the nature of site operations, time frames and cost. In planning for the
measures and actions to be taken in the event of a disaster, the planning team must determine the impact of a potential occurrence on the entity's essential business functions and operations. The specific measures that must be taken depend on the nature of business operations. A total electrical system failure could have catastrophic consequences for a hospital: life support systems would shut down, monitoring devices would be cut off and medical and surgical procedures would be interrupted. These are critical loads, and emergency power systems must be readily available and activated when necessary.
TIME FRAMES
Determining the appropriate contingency strategies involves determining the acceptable level of downtime, defined as the acceptable period for using alternate procedures to maintain essential functions. Once the scope of the event and its potential impact upon the essential business functions have been determined, an estimate of the maximum acceptable downtime must be made. If a facility is rendered untenable, the planning team must determine how long essential functions can be maintained at an alternate facility with less space and limited capabilities.
The cost of activating contingency plans and procedures is another factor that must be considered. Emphasis should be placed on developing alternative solutions that are cost-effective, but these solutions should not compromise the recovery effort. Identifying and prioritizing functions and their related operations is essential to business continuation, as is establishing maximum acceptable time frames for alternate procedures. One of the most critical areas in supporting essential business functions is vital records recovery and management.

VITAL RECORDS RECOVERY
The records of a company that are vital to business continuity include current and pending vendor/sales/customer and client contracts; tax and financial data, including accounts receivable/payable; personnel/employee records, including employment contracts and payroll/
benefit/medical records; building specifications, including floor plans; and regulatory/compliance documents. While backups with off-site storage may exist, many records and documents are maintained only at the primary worksite. These documents, reports and paperwork are work-in-progress and must be addressed in any records management contingency plan.

CONVENTIONAL DATA PROTECTION
Traditional methods of protecting computer data transfer a copy of the data from a hard disk to removable media. Until recently, the low cost of tape-based storage made magnetic tape the most common method for backing up data. Traditional tape backup strategies include full, incremental and differential techniques.
One problem with magnetic tape backup is the slow speed of backing up. Because tape uses a linear recording format, it takes more time to write backup data to it than to disk. Activating the verify-after-write option on the tape drive adds 30 to 50% to the time required to complete the backup. If tapes are removed from the drives to be sent off-site, there can be a significant delay in obtaining those tapes for recovery purposes. New generations of tape technology appear on a regular basis, and these different tape technologies make it difficult to recover data from previous backups. There is also an inability to audit backups; with distributed systems, it is difficult to verify that tapes are being properly written with each backup. When the backup tapes are in the tape drive, they are vulnerable to physical events. Most tape backup hardware and software does not encrypt backup data, leaving it exposed to unauthorized access.
Still, tape is useful for the long-term retention of data. It has excellent price/performance characteristics and should continue to have a price advantage over disk in the near future.
COMPUTER RECORDS
Business information, including personnel records, customer data, financial information and client and vendor information, is stored in today's computer systems. These vital records must be protected to
ensure the continuity and integrity of business operations. The current technology includes desktop/portable computers, servers, Local Area Networks (LANs), Wide Area Networks (WANs), distributed systems and mainframe systems.
Desktop and portable systems include laptops and hand-held devices with a central processing unit, memory, disk storage and various input and output devices. Desktop computers are sold as stationary units that fit on an office desk or table. Desktops may be networked to allow communications with other networked devices, including over the Internet. The files created by these systems need consistent backup and should be stored off-site in a secure facility that is also environmentally safe (free from dust and humidity). Users should back up data on a regular basis, and saving data in a specific folder aids in rebuilding it after an outage. If hardware, software and peripherals are standardized, system recovery is faster. System configurations should be documented along with vendor names and emergency contact information.
PCs are the most common platform for routine automated processes, and they become important elements in a contingency plan. PCs may be physically connected to an organization's LAN, they can dial into the organization's network from a remote location, or they can act as stand-alone systems.
STRATEGIES FOR PROTECTION
The following strategies should be considered in addressing potential losses: documentation of system and application configurations, interoperability among components, off-site backup and data storage, the use of alternate hard drives, redundancy in critical system components and Uninterruptible Power Supplies (UPS) for backup power. Redundant data storage, communications paths, power sources and system components reduce the likelihood of critical system failure. The costs of implementing redundant capabilities should be weighed against the risks of system outage. The uninterruptible power supply (UPS) is a powerful tool for data availability. See Table 8-3.
Table 8-3. Using UPS Units
————————————————————————————————
Equipment                     Run-time/rating/features
————————————————————————————————
Desktop PC                    6-12 minutes, 200-300 VA
                              Surge-protected outlets for peripherals
                              Software saves files and shuts down machine
Workstation/dual processors   10-15 minutes, 450-650 VA
                              Software saves files, shuts down machine
Single application server     8-15 minutes, 1,000 VA
                              User-replaceable, hot-swappable batteries
Multiserver application       8-20 minutes, 1,400-2,000 VA
                              Network management software alerts network
                              administrator via e-mail or pager
Data center/computer room     10-20 minutes, 3,000-5,000 VA
                              Several UPS units or one large UPS unit
                              Multiple units require more maintenance
                              A single larger unit with a problem could
                              shut down the entire operation
————————————————————————————————
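As a rough illustration of the sizing guidance in Table 8-3, the following sketch estimates a minimum UPS VA rating from a load's wattage. The power factor and headroom values are assumptions chosen for this example, not vendor specifications.

```python
# Illustrative UPS sizing helper. The power factor and headroom margin
# are typical rule-of-thumb values assumed for this sketch.

def required_ups_va(load_watts: float,
                    power_factor: float = 0.7,
                    headroom: float = 1.25) -> float:
    """Estimate the minimum UPS VA rating for a given load.

    VA = watts / power factor; the headroom multiplier leaves
    capacity for load growth and keeps the UPS out of overload.
    """
    return load_watts / power_factor * headroom

if __name__ == "__main__":
    # A desktop PC drawing ~150 W maps into the 200-300 VA class of
    # Table 8-3 once headroom is applied (about 270 VA here).
    for label, watts in [("Desktop PC", 150), ("Workstation", 300)]:
        print(f"{label}: ~{required_ups_va(watts):.0f} VA minimum")
```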
System information such as system name, location, contact information, operational status and purpose should be documented. Major application configurations, including names or identifying characteristics, functions, types of information processed and connections, should be documented to aid restoration and recovery.
Backup data should be stored at an off-site location. Factors in choosing the off-site location include proximity to the primary site, cost, physical security, retrieval time and an environment free from dust and humidity. The use of alternate hard drives provides some protection in the event of an outage; the data on the alternate hard drive should mirror the original. Redundancy in other critical system components can also be employed, and alternate sites should have matching equipment in the event of system failure at the original location.
Interoperability among components can be a problem. The various components of the system should be able to operate effectively in alternate systems or sites. Interoperability is important since providing
standard platforms and configurations assists in system recovery and reduces the expense of procuring replacement equipment.
SERVERS
A server is a computer that runs a network operating system. Its software provides access to the network's resources, including disk storage, printers and network applications. A server can be a PC or a larger computer with multiple disk drives and enough memory to run multiple processes. Servers support file sharing, e-mail, printing and other shared network services. Networked PCs allow users to log into the server in order to access its resources.
Server data can be preserved through the standard backup methods. A full backup requires a large amount of backup media and a lengthy amount of time. Recently created or changed files can be captured with incremental backups, while files created or modified since the last full backup may be stored on differential backups. Redundancy is another solution to data loss. Considerations include the frequency of backup, the retrieval speed required for backup data and the length of time for media retention.
LANs and WANs
A Local Area Network (LAN) may support hundreds of users and multiple servers. A peer-to-peer network provides each node with similar capabilities and responsibilities. In a client/server network, each node on the network is either a client or a server. A client can be a personal computer or a printer that relies on a server for direction and resources.
A LAN should be protected from power disruptions in much the same way as desktop and portable computers. LANs need up-to-date documentation of their physical and logical layouts; with this documentation, LAN recovery can be accomplished more quickly. The network connective devices that facilitate LAN communications (switches, bridges and hubs) are critical in recovery and also need
to be documented. The system documentation should include contact information for hardware and software vendors.
A Wide Area Network (WAN) uses data communications to allow one LAN to interact with other LANs that are geographically dispersed. Communications between the LANs is usually provided by a public carrier. Connections provided by WANs may also link another WAN and/or a LAN to the Internet. Communications links for WANs include the following techniques.
Dial-up modems provide data transfer over a nonpermanent connection; the speed of these connections depends on the modem. The Integrated Services Digital Network (ISDN) is a standard for sending voice, video and data over digital or standard telephone wires; transfer rates of 64 or 128 kbps are supported. T-1 and T-3 phone connections have channels that can be configured to carry voice or data signals. T-1 supports data rates of 1.544 Megabits per second (Mbps) and consists of 24 individual 64-kbps channels. The T-3 connection, also referred to as DS3, is a dedicated phone connection with payload data rates approaching 43 Mbps; a T-3 line has 672 individual channels, each supporting 64 kbps. Frame Relay is a packet-switching protocol for connecting devices on a WAN; data are routed over virtual circuits with data transfer rates at T-1 to T-3 speeds, carried over the provider's shared frame relay network. An ATM network transfers data using cells of fixed size and can support data transfer rates of 25-622 Mbps. Synchronous Optical Network (SONET) is a standard for synchronous data transmission on optical media and supports gigabit transmission rates. A wireless LAN bridge can be used to connect LANs to form a WAN; wireless supports distances of 20-30 miles with a direct line of sight. A Virtual Private Network (VPN) can be used as an encrypted channel between nodes on the Internet; the connections are point to point and several protocols are supported.
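To put these link rates in perspective for backup planning, the following sketch estimates how long a given volume of data takes to move over each connection. Protocol overhead is ignored and the dial-up rate is an assumed 56 kbps, so real transfers will run somewhat slower.

```python
# Back-of-the-envelope transfer times for the WAN links described above.
# Protocol overhead is ignored; real transfers run somewhat slower.

LINK_MBPS = {
    "dial-up 56k": 0.056,   # assumed modem speed for this example
    "ISDN 128k": 0.128,
    "T-1": 1.544,
    "T-3": 43.008,          # 672 channels x 64 kbps of payload
}

def transfer_hours(gigabytes: float, mbps: float) -> float:
    bits = gigabytes * 8 * 1000**3   # decimal GB to bits
    return bits / (mbps * 1e6) / 3600

for name, rate in LINK_MBPS.items():
    # Moving a 20-GB backup takes ~29 hours on a T-1, ~1 hour on a T-3.
    print(f"20 GB over {name}: {transfer_hours(20, rate):.1f} hours")
```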
DISTRIBUTED AND MAINFRAME SYSTEMS
A distributed system is an interconnected set of processing elements that can exchange and process data for computer applications. Distributed systems are used where clients and users are widely
dispersed and use LAN/WAN resources. Since distributed systems rely on local and wide area network connectivity, disaster recovery is similar to that for LANs and WANs. The solutions include system backups, RAID, redundancy, electronic vaulting and remote journaling, disk replication, virtualization, Network-Attached Storage (NAS), Storage Area Networks (SAN), remote access, LAN cabling redundancy and WAN communication link redundancy.
Mainframe multi-user computers are designed for large organizations. Mainframe systems store data in a central location and use a centralized architecture; redundancy is not built in, so data backup with off-site storage is critical. Data can be replicated with remote journaling and electronic vaulting. Storage solutions include virtualization, Network-Attached Storage (NAS) and Storage Area Networks (SAN). Power spikes and other utility failures can cause system interruptions and may damage the computer hardware. Uninterruptible power supplies (UPS) should be one of the recovery strategies, along with the standardizing of components and system documentation.
BACKUP TECHNIQUES
Backups of data are important for ensuring data availability on computer systems. The backup device must be compatible with the operating system and applications. It should also be easy to install onto different models or types of computers. The amount of data often determines the backup method. Other factors include the frequency of backup, required retention, recovery and transport requirements, restoration procedures, availability and costs.
The different types of system backups include full, incremental, differential, Redundant Array of Independent Disks (RAID) and disk mirroring or replication. A full backup stores all the files selected for backup; large storage capacity and lengthy recording time characterize a full backup. An incremental backup stores files created or modified since the last backup. In terms of recovery, restoration from incremental backups may require more time, since restoration depends upon the time of the last full backup. A differential backup stores files that were created or modified since the last full backup. Restoration requires only the full backup tape
and the last differential tape. This approach is less time-consuming to complete than a full backup and requires fewer tapes to restore than an incremental approach.
Backup media include diskettes, tape cartridges, Zip drives, compact disks and network storage devices such as networked disks or server backup. Internet or on-line backups are also an alternative, but the issue of confidentiality arises when using Internet backup options. Each type of media has its own storage life. Backup software ranges from the very simple to the automated backup packages being developed and expanded today. Simple backup methods are adequate for smaller data sets; advanced software packages are designed for larger data transfers.
Floppy diskette drives are available in most desktop computers. The floppy drive is the least expensive backup solution, but the drives have a very low storage capacity and the process is slow. Tape drives provide an automated, low cost, high capacity backup option and are widely used. Removable cartridges are fast and flexible, but the media cost is higher, and special drivers and applications are usually required if the device is portable or external. Iomega Zip and Jaz storage drives are popular removable cartridges. The Compact Disk Read-Only Memory (CD-ROM) drive is now a standard feature of most desktop and laptop computers. These disks offer low cost as well as high storage capacity compared to floppy disks, and rewritable drives (CD-RW) and software are available to write to a CD.
Networked PCs can be backed up using a server with data storage capacity, limited by the server's own storage capacity or the allocation limitations of the user. When files are saved to the networked disk, the backup is done through the network or server backup program. A networked storage device can also be used to back up local drives on a networked PC; the data are backed up from either the networked system or the PC. Replication can be used to back up data for portable and hand-held computers: the portable computer is connected to a desktop PC and the data are copied to the desktop unit. Internet backup can be used to back up data over the Internet to a remote location; the confidentiality of the data is not as good, so sensitive data should not be backed up this way.
The backup of data is an important part of the contingency plan, but as the data value diminishes, budgetary realities come into play. A
small company with hard copy backup data might consider a weekly backup, since data lost after midweek can be recreated by staff at less expense than it would cost to maintain redundant systems or more elaborate backup processes. The sales or payroll database of another small business may be critical and deserve more expense to keep it protected and available.
Incremental backups can help to reduce losses due to accidental deletion or corruption incidents, which may not be immediately detectable. Mis-edits or inadvertent changes to a value in a database or spreadsheet can require the point-in-time restore found in some backup and snapshot tools. Process errors such as a failure to back up data are a problem since, when combined with other factors, they may lead to permanent data loss. The best way to minimize the chance of process failure is automation: set up an automatic backup schedule, or use storage products with advanced backup features. To limit the loss of media, keep disks and backups in physically secured areas. It is also important to have accurate and reliable media tracking systems, such as logs in a safe. Some backup and restore products, such as BrightStor, provide tools for media management. To mitigate software problems during restores, keep off-line copies of complete disk data, including software and settings, in case a full restore from backup is needed.
Disk mirroring is usually the fastest and most complete way of providing recovery from a failure. Creating a mirror image of the disk provides an exact copy on a CD, DVD, network drive or tape, so that in the event of a hard disk failure, you can recreate the system at the point the backup was made, including all software systems, patches and other associated information. RAID technology is often used to reduce the likelihood of data loss resulting from a disk crash. Mirroring (RAID-1) provides data protection through redundancy and is often used for smaller amounts of information. RAID-5 is widely used, with data striped across three or more drives.
A key storage/recovery factor is providing enough redundancy and alternative routes within the network to combat loss and failure. Data can be backed up in multiple locations to provide availability, and redundant networks enhance this through alternative accessibility. Both are needed in disaster recovery. Redundant network connections can
also help to mitigate local disasters by providing working network connections through unaffected areas. If the power fails, battery power can keep data available for up to about 72 hours; beyond that, motor generators need to take over. Alarm systems are also useful to indicate power supply fluctuations or potential failure conditions.

CD BACKUP
CD-RW drives are found in most PCs and low-end servers. They provide a backup option between low-end removable magnetic media formats such as Zip and LS-120 and more expensive, higher-capacity DAT, 8mm, AIT and DLT tape formats. Rewritable disks cost more, but they allow the media to be re-used in a backup rotation cycle similar to tape cartridges. CD-R media cannot be rewritten, but they offer low-cost media at less than $1 a disk. Both media provide expected life spans in the 5-12 year range.

RELOCATION
Plans must also be made for relocation if a system cannot be recovered at the original site. In most cases the system must be relocated to an alternate site for temporary processing. The plan should consider various types of alternate sites and their capabilities, including cold sites, mobile sites, warm sites, hot sites and mirrored sites. Warm sites may require setup and lack some features, while cold sites are basically storage rooms. The costliest backup service is the hot site, where all computing capabilities can be delivered quickly. Mobile backup is essentially a data center on a truck, but the user must provide electrical power.
Coordination with security controls is important, since contingency planning cannot exist alone. Planning should be coordinated with security controls in order to reduce system risks.

PLAN TESTING
Plan testing is important and includes identifying and addressing potential problems. It should evaluate the ability of staff to quickly and
effectively implement the plan. Each element of the contingency plan should be tested; this confirms the accuracy of individual recovery procedures and the overall effectiveness of the plan. The main parts of contingency testing involve system recovery on an alternate platform from backup tapes, coordination among staff, internal and external connectivity, system performance using alternate equipment and the restoration of normal operations.
To protect the company's critical data, Charles Schwab & Company contracted with a disaster recovery services vendor to provide a backup data center, complete with mainframe computing, storage and communications. The backup center was used when the 1989 Loma Prieta earthquake cut off electrical power to Schwab and threatened to put the company temporarily out of business. Schwab had come to rely on users protecting their data by copying it onto tape periodically and transporting it to a second location for safekeeping. If a disaster occurs at the data center, the tapes are rushed to a backup data center, loaded and run. This approach can lose all data since the last backup, and physically transporting tapes and loading data into backup CPUs may take several hours or even days. The company trucked tapes to the airport, where FedEx then shipped them to a backup data center in New York City. When the earthquake hit, San Francisco International Airport shut down, and some of Schwab's tapes were left in a FedEx office in Oakland. Schwab now uses electronic vaulting.
In 1990 a Manhattan substation fire cut power to hundreds of businesses, costing millions in data center downtime and lost data. The Bank of New York and other financial institutions are moving to electronic vaulting and remote journaling as government regulators require banks to ensure minimum levels of data recovery and downtime in case of an emergency. One report issued by the New York Clearing House recommended that banks be able to recover data from the time of failure by midnight of the day of the failure. The Clearing House also recommended that banks be able to complete the day's processing activity by midnight of the same day. Many companies are hard pressed to meet these standards. Off-site vendors need 24 hours just to transfer data to a hot site, an off-site computer room. It can take seven or eight hours just to move the data, and the tapes must be stabilized to account for
temperature changes. Adding the time to transfer and load the data, recovery can take a day if everything goes well.
ELECTRONIC VAULTING AND REMOTE JOURNALING
Critical data can be protected from disasters by moving copies of important data to backup sites electronically. Vendors in the disaster recovery business offer a number of products and services for electronic vaulting and the remote journaling of data. Another service, called full database shadowing, has the potential to recover all data back to the point of failure in a few minutes.
Electronic vaulting uses the batch transmission of data via T-1 or T-3 data networks to a remote location. Remote journaling is a more sophisticated technique that remotely records data updates as they occur, providing a higher level of data protection. Software monitors the updates to VSAM files, and a remote, on-line log is kept of all updated records. If a failure occurs, data can be restored back to the point of failure, although it may take a few hours to reconstruct files, depending on their size and the number of updates. Both Sungard and Comdisco provide remote journaling. Database shadowing is the most advanced type of service; in this technology, copies of entire databases are maintained at remote sites.
Electronic vaulting has become more popular since the appearance of storage area networks (SANs) and network-attached storage (NAS) devices. Forms of electronic vaulting have been in use since the early days of computer networking. Wide-area network architectures, as well as the TCP/IP suite of protocols that forms the basis for the Internet, have always provided a method to move data electronically from a local to a remote system. In the late 1980s, Sungard Recovery Services of Wayne, PA, was the first to offer electronic vaulting services to IBM mainframe customers using 3420 or 3480 tape drives.
Electronic vaulting involves the movement of electronic data over private or public communication lines from a local (primary) computer storage device to a remote (secondary) storage device for the purpose of restoring the data in the event the primary copy is lost. Electronic vaulting is also called e-vaulting or on-line backup.
E-vaulting represents a major shift from traditional in-house data backup and recovery functions. The backup and recovery functions are
outsourced to an automated third-party service provider. E-vaulting replaces traditional tape backup and archiving software practices with off-site tape storage. Shifting from traditional tape vaulting services to e-vaulting means improvement in each function (see Table 8-4).

Table 8-4. E-Vaulting Versus Tape Backup
————————————————————————————————
Traditional Tape Backup
  30 tapes                                 $ 1,500
  Backup administration, 8 hours/week      $10,000
  Off-site storage, 20 tapes/year          $ 3,840
  Emergency retrievals/year                $   600
  Restore, 1 day/server/year               $ 2,500
  Replace tape drives/2 years              $ 1,500
  Backup software                          $ 1,000
  Daily backup for 20GB/year               $ 8,500
  Total                                    $21,040
————————————————————————————————
E-vaulting
  Restore, 1 day/server/year               $   240
  Daily backup for 20GB/year               $ 8,500
  Total                                    $ 8,740
————————————————————————————————

The growth of high-speed Internet connections to support e-commerce and Web hosting applications has fueled e-vaulting. Most of these Internet connections are under-utilized after normal office hours and are usually fully available to carry backup data between 6:00 pm and 6:00 am. E-vaulting can reduce backup times from hours to minutes. A company can realize important savings using e-vaulting while improving the level of service for data protection (see Table 8-5).
The transition to e-vaulting from traditional tape-based backup and off-site vaulting services is fueled by cheap, reliable, high-bandwidth network connections such as DSL. Another factor is affordable, reliable and massively scalable Redundant Array of Independent Disks (RAID) storage systems.
Table 8-5. E-Vaulting Features
————————————————————————————————
• Automated and unattended backups
• Centralized management of backup and restore from one or more locations
• Open file and open database backups
• Backups take less time
• Complete control of files and directories
• File filtering
• File restores around the clock
• End-user or central administrator control
• Restore data over the network
• Restore data from one platform to another
• User-managed data retention
• Several levels of data encryption
• Automatic restart and resume
• Exception and problem reports
• Detailed usage reports
• Archive data for long-term retention
————————————————————————————————

Newer software is designed to minimize the amount of data required to transmit large backups across a network. It also maintains the history of these backups with a very small storage footprint at the remote location. A typical backup schedule is shown in Table 8-6. Enough storage is required to present a comprehensive history of backup data; the total number of backup sets (generations) to support this schedule is 24, as computed in the sketch below.

Table 8-6. Retention Schedule for Backup Data
————————————————————————————————
Daily backups saved for one cycle (one week: 7 days)
Weekly backups saved for one cycle (one month: 5 weeks)
Monthly backups saved for one cycle (one year: 12 months)
————————————————————————————————

Backup administration involves off-site administration, tape handling and labeling, backup monitoring and troubleshooting, backup setup and scheduling and the installation of software upgrades.
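The retention arithmetic behind Table 8-6 can be reproduced in a few lines; the cycle lengths below come directly from the table.

```python
# Reproduces the retention arithmetic behind Table 8-6: the number of
# backup sets (generations) that must be kept on hand at any time.

RETENTION = {
    "daily": 7,     # one week of dailies
    "weekly": 5,    # one month of weeklies
    "monthly": 12,  # one year of monthlies
}

total = sum(RETENTION.values())
for level, count in RETENTION.items():
    print(f"{level:>8}: {count} generations")
print(f"   total: {total} backup sets")   # matches the 24 cited above
```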
Emergency services are required to bring a tape back from off-site storage to restore an entire disk drive or directory to the server; restore time is typically eight hours for this function.

DATA PROTECTION SCHEMES
Most backup software uses compression technology to reduce storage requirements. The technology can be software- or hardware-based. It reduces the amount of data to be transmitted or stored by an average ratio of 2:1. Compression ratios depend on the type of file being compressed; text files and most databases allow the highest compression ratios.
A full backup copies all files and directories that have been selected for backup. A user can select an entire disk volume or choose specific files and directories to back up. A full backup takes the longest time, and many files may be backed up unnecessarily because they have not changed since the last backup.
An incremental backup copies only those files that have been modified or changed since the last backup. Files that have not changed do not require backup, reducing the time required to do the backup. The restoration process requires a restore from the first full backup and then the incremental backups up to the required date of restore. Typically, an incremental backup method requires a weekly full backup to reduce the time to restore and to reduce restore errors.
A differential backup copies all those files that have been modified or changed since the last full backup. This differs from an incremental backup, which just copies files changed since the last incremental backup. It takes less time to back up than the full backup method but more time than the incremental, because it must back up all the files that have changed since the last full backup and not just since the last incremental. The restoration process is simpler and quicker than the incremental: one uses the last full backup and the latest differential.
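A minimal sketch of how the three methods select files is shown below, using file modification times as the selection rule. Real backup software also tracks catalogs, archive bits and open files; the paths and timestamps here are purely illustrative.

```python
# Sketch of the selection rule behind full, incremental and differential
# backups, reduced to file modification times.
import os, time

def select_files(root: str, method: str,
                 last_full: float, last_backup: float) -> list[str]:
    """Return the files a backup run would copy under the given method."""
    selected = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            mtime = os.path.getmtime(path)
            if method == "full":
                selected.append(path)                  # everything
            elif method == "incremental" and mtime > last_backup:
                selected.append(path)                  # since last backup of any kind
            elif method == "differential" and mtime > last_full:
                selected.append(path)                  # since last full only
    return selected

# Example: a differential run against a full backup taken a week ago.
week_ago = time.time() - 7 * 24 * 3600
changed = select_files("/data", "differential", week_ago, week_ago)
```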
DELTA BACKUP
Delta backup is used for e-vaulting, since it is not practical to transfer full, differential or incremental backups across a network. Network
bandwidth normally hinders traditional backup methods. Most e-vaulting systems are designed to minimize the amount of data transferred across the network using an enhancement of the incremental method called Delta processing. Delta processing makes it possible to transfer just the changes that occurred since the previous backup. This makes it practical to back up data to a remote locale over relatively low-speed communication lines.
Delta processing uses an algorithm to compare each data file with the image of the same file from the last backup. This comparison generates a list of blocks within the data file that have changed. These blocks are compressed, encrypted and then transmitted across the network. If a 100-MB data file has changes made in the course of a day totaling 1 MB, then only about 1 MB of data is transmitted to the remote site to complete the backup; the incremental or differential methods would require the entire 100-MB file to be backed up. The effective compression ratio of the Delta method can range from 50:1 to 1000:1. These large ratios make it possible to back up large amounts of data over common network connections.
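The following is a simplified sketch of the block-comparison idea behind Delta processing, using SHA-256 hashes over fixed-size blocks. The 64-KB block size is an assumption, and commercial products use more elaborate algorithms, but the principle is the same.

```python
# Simplified Delta processing: hash fixed-size blocks of a file and ship
# only the blocks whose hashes differ from the previous backup.
import hashlib

BLOCK = 64 * 1024  # 64-KB blocks (an assumed size for this sketch)

def block_hashes(path: str) -> list[str]:
    """Hash each fixed-size block of the file."""
    hashes = []
    with open(path, "rb") as f:
        while chunk := f.read(BLOCK):
            hashes.append(hashlib.sha256(chunk).hexdigest())
    return hashes

def changed_blocks(path: str, old_hashes: list[str]) -> list[int]:
    """Indexes of blocks to transmit: new, or changed since last backup."""
    new_hashes = block_hashes(path)
    return [i for i, h in enumerate(new_hashes)
            if i >= len(old_hashes) or h != old_hashes[i]]
```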
BACKUP TIMING
All backup methods require some type of time stamp to allow a proper recovery. This is important in databases where updates can occur 24 hours a day, 7 days a week. Some businesses will take the system down for a short time to perform a backup; this provides a clean copy of the database with no partial transactions. If the system cannot be taken down, then traditional backup methods do not provide a stable backup solution, since there is no guarantee which transactions made it into the backup and which did not. In these cases, special backup functions are written into the database system that allow backups to take place while updates are held in a queue. The backup is usually written to a temporary file, which is an image of the database. This image would normally be written to tape but could be sent electronically to a remote location. On-line databases may also create journal files listing the transactions continuously, which allows restoration using the last full backup plus the transactions up to the end of the journal file.
Other solutions to this problem function at the operating system
level. One technique is to intercept write updates to open files during a backup and store the previous data in a cache area. This allows the backup to read the data as it existed at the point the backup started. When the backup is complete, the cache is released. This allows critical applications to run 24 hours a day (see the sketch at the end of this section).
An important factor for backups involves how special ports on a firewall are opened to allow backup data to travel across the network. Other important factors are listed in Table 8-7.

Table 8-7. Backup Factors
————————————————————————————————
Recovery goals
Server hardware and operating system
Volume of data storage
Communication facilities
Firewall management
Retention schedule
Alternate processing facility
————————————————————————————————

E-vaulting is a solution that fits between mirroring and long-term archiving. Mirroring differs from e-vaulting, although some literature and Web sites promoting these technologies claim similar features and benefits.
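The open-file caching technique described above can be modeled in a few lines. This is a toy sketch of the copy-on-write idea, not any vendor's implementation; the class and method names are invented for illustration.

```python
# Toy model of the open-file technique: while a backup is running, each
# write first saves the block's prior contents to a cache, so the backup
# reads the file as it existed when the backup started.

class CopyOnWriteSnapshot:
    def __init__(self, blocks: list[bytes]):
        self.blocks = blocks                 # live data, modified by applications
        self._cache: dict[int, bytes] = {}   # block index -> pre-backup data
        self._in_backup = False

    def begin_backup(self) -> None:
        self._in_backup = True

    def write(self, index: int, data: bytes) -> None:
        # Intercept: preserve the pre-backup block before overwriting it.
        if self._in_backup and index not in self._cache:
            self._cache[index] = self.blocks[index]
        self.blocks[index] = data            # the application's write proceeds

    def backup_read(self, index: int) -> bytes:
        # The backup sees the block as it was when the backup started.
        return self._cache.get(index, self.blocks[index])

    def end_backup(self) -> None:
        self._in_backup = False
        self._cache.clear()                  # release the cache
```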
MIRRORING
Data mirroring, and particularly remote mirroring, is a tool for business continuity and disaster recovery, but it is not a substitute for conventional backups. Mirroring offers no protection against data loss from accidents or virus infection; anything that corrupts the primary data also affects the secondary or mirrored data.
Mirroring provides different features and benefits compared to e-vaulting, but mirroring may be used to complement e-vaulting. Local mirroring can be used to provide real-time data protection for mission-critical applications that cannot afford more than a few minutes of downtime. E-vaulting could then be used to back up the mirrored copy of the data to a remote location for disaster recovery purposes.
E-VAULTING SOFTWARE
The software for e-vaulting takes advantage of developments in communication techniques to handle large-volume data movements over public networks. The three main components of e-vaulting are agents, administration and the backend server. A software agent is installed on each system to be protected. The administration utility connects to the agents and provides scheduling, backup parameters, monitoring and restoration functions. The agents start up at the scheduled backup time and transmit the changes to the remote server. The remote server accepts the changes, starts a retention cycle and serves the data when a restore request is received.
With e-vaulting, the data are off-site as soon as the backup is completed. E-vaulting encrypts data at the agent, and the data remain encrypted through the cycle at the remote server. E-vaulting can back up all system states, security and other operating system attributes for the files.
Several methods may be used to initialize the e-vaulting process. A full backup may be done to a local tape device, which is then transported to the remote e-vaulting server for initialization. Alternatively, a full backup may take place across the local network to a portable e-vaulting server, which is then transported to the remote site for initialization. After the remote e-vaulting server is initialized, a full backup is normally never required again. Notifications are sent via e-mail for completions and exceptions, and the process is automated from this point on.
Recovery is done with the help of a wizard in the administration utility that guides the user through the retrieval, prompting for dates, names and other information. If the recovery involves a small volume, it is usually done immediately over the network connection. A complete recovery is typically done from tape or a portable e-vaulting server; the restore request is handled at the remote e-vaulting server and the restored data are delivered to the hot site. The portable server is much faster than tape for volumes of data greater than 40GB. Delta processing is used, so minimal data travels across the network to effectively recreate a full backup of the protected data.
The administration utility allows hundreds of remote systems to be managed from one or more locations. Log files can be reviewed and spot
checks can be made on any system to verify the backup process is operating correctly.
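A sketch of the agent-side pipeline, compressing and then encrypting data before transmission, might look like the following. The use of Python's zlib and the third-party cryptography package is an implementation choice for this example, not something named by any e-vaulting vendor.

```python
# Sketch of the agent-side pipeline: changed data are compressed, then
# encrypted before leaving the machine, and stay encrypted at the vault.
# The "cryptography" package is an assumption of this example.
import zlib
from cryptography.fernet import Fernet

key = Fernet.generate_key()     # in practice, managed by the agent/vault
cipher = Fernet(key)

def prepare_for_vault(raw: bytes) -> bytes:
    """Compress first (encrypted data does not compress), then encrypt."""
    return cipher.encrypt(zlib.compress(raw))

def restore_from_vault(blob: bytes) -> bytes:
    """Reverse the pipeline during a restore."""
    return zlib.decompress(cipher.decrypt(blob))
```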
DATABASE SHADOWING
The most comprehensive and expensive of the electronic data protection services is database shadowing. A copy of the entire database, not just updated records, is kept on-line at a remote site. Linked with a hot-site data center backup, database shadowing offers the recovery of data back to the point of failure and a return to operations in minutes. Comdisco and Sungard both provide database shadowing.
Most electronic data protection services tend to be costly because of high communications costs, although prices depend on the amount of data being protected. Functions such as electronic vaulting, remote journaling and database shadowing will be integrated into more systems. Tandem Computers, Digital Equipment and Computer Associates have been building remote database shadowing features into their products. Tandem has provided a remote duplicate database facility (RDF) with its operating system since 1986; this feature allows on-line backup of Tandem databases to remote sites over standard communication lines. Digital used VAX Volume Shadowing under control of its VMS operating system; initially, the system connected only via fiber channels.
DISASTER RECOVERY SECURITY
One concern over outside disaster recovery strategies is security, since critical data may be run on backup equipment that is shared with other users. Vendors have worked to overcome those concerns and have put their systems through government security evaluations; many systems meet government security standards.
DBMS
Distributed database management system (DBMS) technologies can also be used to provide on-line remote data protection. The U.S. Army's Depot Systems Command operation installed Computer
Associates' distributed DBMS on IBM and Amdahl mainframes at several sites. The system creates duplicate log files of critical data and stores them off-site. The system was built to recover data to the point of failure and to return to operation within two hours of a failure.
REMOTE COPYING
In 1992, IBM introduced storage-array-based applications such as Concurrent Copy, which allowed storage arrays to run applications. IBM's Dual Copy produced a copy of data within the same array that was running the application. This is useful in cases of data corruption, but it is not very helpful for disaster recovery, since both copies are on the primary storage array.
Products for copying storage to secondary locations include EMC's Symmetrix Remote Data Facility (SRDF) and Symmetrix Data Migration Service (SDMS), along with IBM's Peer-to-Peer Remote Copy (PPRC) engine. These products transmit data across fiber-optic links to physically removed secondary sites about a campus distance apart.
Remote copying has several challenges, including latency, data integrity and physical distance. An early approach that is still used today is synchronous operation: the backup application creates duplicate data at a secondary site, then mirrors all changes to the primary back to the secondary to provide identical copies. One problem is that an I/O write acknowledgment is not returned to the primary system until both writes are complete. Cache operations can help, but the approach is still limited to about 100 km. Another remote copying technique is asynchronous remote copy over IP networks, where the primary does not wait for an I/O write acknowledgment before continuing operations. This allows the user to restore data at a specific time from the secondary storage array.
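Reduced to their acknowledgment behavior, the two remote-copy modes can be contrasted as follows. The send_to_secondary stub stands in for the replication link and is hypothetical.

```python
# Contrast of the two remote-copy modes described above, reduced to
# their acknowledgment behavior.
import queue
import threading

replication_queue: queue.Queue = queue.Queue()

def send_to_secondary(block: bytes) -> None:
    ...  # placeholder for the actual link (fiber, IP network)

def synchronous_write(local_disk: list, block: bytes) -> None:
    local_disk.append(block)
    send_to_secondary(block)      # wait for the remote write to finish...
    # ...only now is the I/O acknowledged to the application, which is
    # why latency limits synchronous mirroring to roughly 100 km.

def asynchronous_write(local_disk: list, block: bytes) -> None:
    local_disk.append(block)
    replication_queue.put(block)  # acknowledged immediately; the
    # secondary may lag the primary by whatever is still queued.

def _drain() -> None:
    # Background thread ships queued writes to the secondary site.
    while True:
        send_to_secondary(replication_queue.get())

threading.Thread(target=_drain, daemon=True).start()
```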
SYNCHRONOUS MIRRORING
Mirroring provides protection against basic user and equipment errors, including disk, subsystem and path or link failures. A transparent switchover is usually desired, and real-time synchronous mirroring is one approach.
Mirroring can be used to replace server-intensive file transfers and backup/restore methods when moving data from one location to another. The on-line image is copied to the new location in the background; after the copy is made from the original image, it is used as the primary source. The advantage of mirroring is being able to restore whenever necessary.
The secondary image consists of the database data and its associated objects, including the operating system, registry entries, application data structure, scripts and instructions. All of these objects are usually stored off the primary data volume.
OUTAGE RECOVERY
For disaster recovery, mirroring protects data against both physical disaster and data corruption. For planned outages, disruptions to users can be minimized by using the secondary database as the primary while the primary is off-line. A local secondary database with the capability to roll back to a known good state can handle corruption, while disaster recovery requires secure remote copying.
Some high-end replication products require identical primary and secondary storage arrays. Protocols such as EMC's SRDF must run between two Symmetrix boxes, each loaded with a copy of SRDF; this allows switching data among up to sixteen Symmetrix subsystems. This is a secure solution, but it is expensive and can cost as much as a primary SAN. Lower-priced alternatives for disaster recovery include SAN management packages such as DataCore's SANsymphony.
True disaster recoverability means no loss of data and minimal processing delay after an outage. This requires a solution that generates and moves the data to a remote site, creating a mirror copy that is quickly transmitted and is identical at both sites. In the event of an outage, the remote site can become the primary site, and production processing continues with essentially no delay, loss or corruption of data. This type of disaster recovery procedure can be used for mission-critical data, including financial transactions and database applications. For mission-critical, file-based data on a NAS, disaster recovery products for file-based synchronous remote copy include EMC Celerra with a Symmetrix backend running SRDF.
BACKUP APPLIANCES
Storage management for backup and disaster recovery used to require that each component be purchased separately. STORServer was one of the first companies to introduce a new approach to storage called the backup appliance: a single box that includes disks, tape libraries and software in an all-in-one appliance that is easy to use. Backup appliances allow for automation and long-term management flexibility. A backup appliance can provide a return on investment in only months by reducing storage expansion costs, eliminating the need to duplicate data across users and networks and by storing inactive data in archive packets.
DATA VALUATION AND COSTS
Data protection strategies and technologies exist for almost every level of data preservation in an organization. If cost, time and performance were no object, one could simply replicate all information in many places all of the time, keep ongoing off-site, read-only, secure stored copies of everything ever written to disk, and run all of the same servers, applications and network elements in parallel at an off-site location to reduce the possibility of data loss. In practice, protecting against downtime and data loss must be done on a budget.
Cost reduction for data protection starts with evaluating the criticality and value of information, identifying the risks to the information and prioritizing those risks. Then, identify the risk-mitigating solutions and implement the most appropriate ones. The value of data needs to be calculated in terms of the cost of recreating the data and the cost of not having the data to conduct business. If data loss does not affect organizational health, little needs to be done to protect it. As the damage resulting from data loss increases, more thorough data protection methods and technologies must be used.
Disaster recovery cost analysis involves a number of terms. The recovery time objective (RTO) is the amount of time between when an outage occurs and when operations resume. Some measure the RTO from the outage until the restoration is complete, but the RTO should consider the impact on users and the loss of the process; it should include the time from process impact until the process resumes. When restoring a large
database, users must usually wait until some level of maintenance or validation is completed, so the RTO is longer than the tape restore; it is the maximum allowable recovery time that the business unit can afford.
The Recovery Point Objective (RPO) is a related term. The RPO refers to the maximum amount of data lost up to the point restoration is complete. This is dictated by the business unit(s) and is the maximum amount of data that a department can afford to lose. Both the RTO and RPO are measures of resilience for business units. In a manufacturing company, the administrative teams and technology staff, processes and tools are in place to enhance the productivity of workers; the RTO and RPO are an administrative assessment of how long workers can be unproductive and how much data they can lose.
Another term which involves the RTO and RPO is the service level agreement (SLA). It represents the commitment to satisfy the RTO/RPO for a specific group. A business unit might decide it cannot afford to be down for more than 6 hours or to lose more than one day's worth of data. This could equate to a commitment of an SLA of 2 hours, with no more than three outages per year. This expresses a performance goal; an SLA may also have a penalty clause if it is not achieved.
The business impact analysis (BIA) involves the financial impact of the outage. In installations where a business unit depends on a single server, when that server fails the business impact can be the sum of the hourly rates of the unproductive employees plus lost revenue, overhead, customer goodwill and other factors. The business unit determines the business impact of the crisis: it will need to determine how long it can be down and/or how much data can be lost, and this will be used to derive the RTO and RPO.
There is a maximum budget for any potential solution, and the price of the solution should not be more than the cost of the problem. This is expressed by the return on investment (ROI), which indicates how quickly the solution pays for itself. If an outage occurs three times per year with a business impact of $15,000 each time, the annual cost of the problem is $45,000. If the solution were to cost $30,000, it would pay for itself by the second outage (eight months). Since most technology is depreciated over three years, the real comparison is $30,000 for the solution versus $135,000 for three years of problems, a 450% ROI. There may be additional costs or impacts,
including labor and overhead costs such as WAN bandwidth. Another factor in a solution's cost is the total cost of ownership (TCO). If the solution requires 8 hours per week of a network administrator's time in a 40-hour work week and half of an existing T-1 connection, then the TCO should include 20% of one salary plus 50% of the telecommunications bill. These annual numbers are added to the price of the solution for a better evaluation of the ROI. In the case above, this would add about $10,000 to the cost of the solution, making the ROI still attractive at 337.5%.
These concepts can be applied to data protection technologies. The most basic protection is tape backup, with a typical program of weekly full backups and nightly incrementals. If an outage occurred late in the day, the data could be restored that evening. The RTO would be the amount of time for any repairs plus the restoration (assume eight hours in this case). The RPO would be a full day, since the restored server will appear as it was at the previous night's backup; all of that day's data would be lost. The SLA could be 10 hours for repairs with 2 days of data loss, to allow for the most current tape being unreadable, and no more than two outages per year.
The cost of the problem (BIA) for the department is the lost productivity of its employees, which could easily be $1,000 per hour. For the above SLA, the annual business impact would be $32,000, plus lost revenue. If the department were generating $6 million in annual sales, each outage would cost $56,000 and the 3-year business impact would be $336,000. The price of the solution might be $20,000 for tape hardware and software plus one hour per week from a network administrator. The solution is inexpensive, but the company still has a large loss over three years.
A higher level of data protection and availability can be achieved by using clustering with synchronous mirrored storage. This solution would use twin storage arrays with a fiber channel link and mirroring software at the switch/array level. This allows the storage to be redundant, and clustering covers a host server failure. This solution depends on separation to protect against outages. The RTO and RPO would be essentially zero, since all components are redundant and the recovery time should be a few seconds. With synchronous storage, the alternate data are exactly the same. An SLA of a few hours per year might be required for maintenance. The impact cost remains the
same, but the solution price is much higher. At $60,000 per storage array, plus fiber channel and software, the synchronous hardware could cost $170,000. Adding two clustered servers at $15,000 each plus time from the network administrator, the cost is almost two thirds of the business loss for three years. When part of the data protection plan is to get the data off-site, operation of the synchronous hardware gets costly because of the high bandwidth required to keep both storage arrays in synchronous operation. Over three years, the solution could reach $300,000. While the solution still costs less than the problem, it costs well over half of the expected loss. If the frequency of outages improves by 50%, the synchronous hardware solution is not justified. One survey found that less than 1% of critical applications could justify such expenses.
Replication tools such as NSI Software's Double-Take replicate file changes at the byte level. Regardless of the application (SQL, Exchange, Oracle, Notes or user files), only the actual byte string being written is transmitted; if an application changes PUT to GET, three bytes are transmitted. The RTO for software replication ranges from minutes to seconds. Some replication technologies include failover for a source server, allowing users to connect to the replication server and resume business operations within seconds. The RPO of software replication is usually seconds or less. The replication is done in real time, but the data are sent asynchronously at the best available network speed. This produces most of the benefits of synchronous hardware without the costs. There can still be an SLA of a few hours of downtime per year for maintenance.
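The ROI and TCO arithmetic from the examples above can be checked with a short calculation; the dollar figures are the ones used in the text.

```python
# Reworks the ROI arithmetic from the example above: three outages per
# year at $15,000 each versus a $30,000 solution, evaluated over the
# three-year life typical for technology purchases.

def roi_percent(problem_cost: float, solution_cost: float) -> float:
    return problem_cost / solution_cost * 100

outages_per_year, impact_per_outage, years = 3, 15_000, 3
problem = outages_per_year * impact_per_outage * years    # $135,000

solution = 30_000
print(f"ROI: {roi_percent(problem, solution):.1f}%")      # 450.0%

# Total cost of ownership: add the recurring labor and bandwidth costs
# (about $10,000 in the example) to the solution price.
tco = solution + 10_000
print(f"ROI on TCO basis: {roi_percent(problem, tco):.1f}%")  # 337.5%
```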
REMOTE SITES
A remote disaster recovery site can be set up as an exact duplicate of critical systems, ready to be accessed in the event of a catastrophic failure. The decision to build or move a data center off-site is usually driven by cost, security and concerns about the main facility.
The technology explosion of the last decade has led many property managers and owners to convert their office buildings, warehouses, strip
malls and factories into technology centers, telecom hotels and data warehouses. Most of those facilities were not designed for these applications. Even the old central office facilities built by the predecessors of the RBOCs (Regional Bell Operating Companies), like Qwest and BellSouth, fall short as modern data technology centers. Older converted facilities often charge fees to recover the costs of features that were not built into the facility from the beginning.
There should be an access control system on the door to the facility with provisions for emergency access. Security should include digital video cameras, strategically placed and coupled to recorders in a master monitoring station; this allows the operator to find a specific moment in time on a specific camera feed. Larger buildings will offer a manned checkpoint that checks the IDs of employees and vendors and provides another access card for further access to the building.
A safe place to locate a data center is one that eliminates service interruptions, supports technology expansion and avoids any problem of a bankrupt telco. A data center should have survivability for natural disasters. The facility should have a seismic zone 4 construction rating. Buildings defined as essential facilities or essential services have a high level of survivability, with the same strength and resistance to movement, and to potential shock from explosions, as hospitals and fire stations.
Most jurisdictions require sprinklers in every space, including the data center. The sprinkler is the first line of defense in a fire, so the data center should have a double-interlocking pre-action sprinkler system. This means that damaging a sprinkler head while moving equipment will only release air and sound an alarm, not trigger the sprinkler. To release water through a pre-action valve outside the critical area and charge the sprinkler heads, two separate smoke detectors must report smoke and there must be enough heat on a head to open its thermal element. Only when all three of those conditions are met will the sprinkler begin to spray water.
The more telecom providers in a facility, the better. Diversity and redundancy require dual entrances and routes to the center. Redundancy increases reliability; the extra resources held in reserve are measured in N+ levels. If N is the number of generators required, one additional generator available for backup provides N+1 redundancy. Two available spares give N+2, and 2N provides twice as
many as needed. Diversity relates to the location of a resource, or the routes involved in its delivery. If two fiber cables are needed to connect to the data center, four fiber cables will provide redundancy. But if all four fibers are in the same sheath or duct entering the building, there is no diversity, and a single failure can shut down the facility. Bringing the two redundant fiber cables in at different points adds diversity to redundancy.
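A rough way to see why N+1 and N+2 levels help is to compute the probability that more units fail than the spares can cover. The per-unit failure probability below is an assumed figure, and real generator failures are not fully independent, so this is only an idealization.

```python
# Illustration of N+ redundancy levels: the probability that more units
# fail than the spares can cover, assuming independent failures with an
# assumed per-unit failure probability (an idealization).
from math import comb

def p_insufficient(n_required: int, spares: int, p_fail: float) -> float:
    total = n_required + spares
    # Probability that more than `spares` of the `total` units are down.
    return sum(comb(total, k) * p_fail**k * (1 - p_fail)**(total - k)
               for k in range(spares + 1, total + 1))

# Two generators required; per-unit failure probability assumed at 5%.
for spares in (0, 1, 2):
    p = p_insufficient(n_required=2, spares=spares, p_fail=0.05)
    print(f"N+{spares}: P(load not carried) = {p:.4%}")
# N+0 ~ 9.75%, N+1 ~ 0.73%, N+2 ~ 0.05%: each spare cuts risk sharply.
```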
POWER LOSS

Power failures are the single greatest disrupter of data center service. The key to power loss protection is multiple resources: separate substations, a spot network with multiple substations, or multiple generators with UPS (uninterruptible power supply) power on site. The generators and fuel sources should have both redundancy and diversity.
FUEL SUPPLIES

A three-day supply of fuel is a minimum, but an isolated region might be cut off from re-supply for weeks. Double-walled tanks are needed in a secure, protected area that is not visible or accessible to the public. Provisions for foam fire suppression, spill containment and cleanup are expensive, but they are necessary.
LIGHTNING AND GROUNDING

There are few places in the world where buildings are not subject to lightning, so the facility should have a solution for lightning strikes. Grounding is another important consideration, since static electricity can cause many problems around storage devices. The foundation drawing should show how the steel frame is grounded or how the building system is tied to a grounding ring.

The cooling of large data center environments can be handled in different ways, many of which require water. When the water supply is disrupted, the cooling system can fail, causing equipment damage and
data to become corrupted or unrecoverable. A large water storage tank on the roof brings another set of problems. A facility with well-water backup is safer.
Chapter 9
Data Recovery

Although databases are not the only type of critical data, they are an important subset. One of the important issues with databases is that they can be more than the sum total of the data. Besides the database itself, there are external files that do not reside in the database. These need to be backed up and restored along with the database. Database availability options include parallel servers, replication and a standby database.

In a clustered environment, a database may run on more than one computer. This environment is used for failover: if one machine fails, the other machines continue processing with little or no interruption. Dual-ported RAID devices and other failover products can provide the same protection at a local site. If one machine fails, the second machine can take over and run a recovery script. However, the entire site can still go down and the database can become corrupted.
MINIMIZING DATA LOSS

An outage can mean cables unplugged, electronics failing, drives not spinning, batteries run down and viruses galore. Electronic records can be discarded and overwritten. Even with the use of specialized hardware and fault-tolerant solutions for clustering and replication, some data may be lost.

Continued success for an organization that has suffered a significant system or data loss does not depend just on the ability to replace hardware and rebuild infrastructure. In most cases, continued success depends on the ability to quickly and successfully recover business-critical data. Backup concepts continue to grow with technology and adopt new and innovative approaches to the process. Disaster recovery is often a component of a backup and recovery solution. It can be defined as the ability to quickly and gracefully recover from total data loss.
Software replication requires little administration, uses existing lines and does not require new servers, fiber channel or storage. Replication software costs are similar to backup software without agents or tape hardware, but with the benefits of mirrored storage. Some environments only need tape, while a few can justify synchronous storage. Many can use software replication.

Server clustering and RAID disk arrays are often part of the solution. They provide high availability of data in the face of hardware failures. These technologies were designed to increase performance, provide high availability of data and create an additional level of fault tolerance by reducing the possibility of data loss caused by hardware failure.

Business continuance solutions such as data replication, persistent image technology and volume snapshots offer quick point-in-time recovery of data lost to corruption or user error. These types of solutions promote fault tolerance, high availability and quick recovery. But, with a few exceptions, they are still susceptible to data loss due to hardware failure. With each step toward the goal of 100% data availability, the technology grows more costly to implement and manage. Regardless of the technology in place, once data has been lost, it comes down to the ability of backup and recovery software such as TapeWare by Yosemite Technologies.
MANUAL RECOVERY

Compared to the real costs of data loss, even manual recovery is better than no recovery. While at first it may not seem like a large task to manually recover a failed system, it can be cumbersome and time-consuming for anything more than a limited amount of data.

The first task is to isolate the problem and take steps to correct it. This can involve identifying and replacing a defective part. You must configure the partitions or special RAID sets that are needed. Next, you will need to reinstall the OS. You may need previous system information such as network addresses, directory structures, volume sizes or cluster information to complete the installation. The OS and any additional hardware configuration may require device drivers, patches and several megabytes of service packs before all of the peripherals come up. Once the base OS is up and running, you still need to locate, install
and configure the applications and backup software. This may take 2 to 4 hours of manual work before you can load a tape and start rebuilding the catalog in order to begin selecting files to restore. The actual rebuild and restore process could add another 1 to 4 hours or more to the total recovery time.
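Much of the previous system information listed above can be captured ahead of time and stored with the backup media. The sketch below is a minimal illustration of that idea, not any product's feature; the manifest file name and its fields are invented.

    # Hedged sketch: record the system facts needed for a bare-metal rebuild
    # (hostname, OS level, volume sizes) in a small manifest kept with the
    # backup tapes. The output path and field names are illustrative only.
    import json
    import platform
    import shutil
    import socket
    from datetime import datetime, timezone

    manifest = {
        "captured": datetime.now(timezone.utc).isoformat(),
        "hostname": socket.gethostname(),
        "os": platform.system() + " " + platform.release(),
        "volumes": {},
    }

    for mount in ("/", "/var", "/home"):       # adjust to the real layout
        try:
            usage = shutil.disk_usage(mount)
            manifest["volumes"][mount] = {"total": usage.total,
                                          "used": usage.used}
        except FileNotFoundError:
            pass                               # mount point absent on this host

    with open("recovery-manifest.json", "w") as fh:
        json.dump(manifest, fh, indent=2)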
AUTOMATED RECOVERY

Disaster recovery products can automatically recreate hard drive partitions and perform a full system recovery of the operating system, applications and data. This alone could trim 2 to 4 hours off a typical manual recovery process.

There are two parts to preparing most disaster recovery solutions. First, make a full backup of the system exactly as it will be restored in the event of a disaster. Next, create the appropriate boot media. The full backup, along with bootable disks, a bootable CD-ROM image or a bootable tape device, is used to perform a complete restoration. The disaster recovery solution is designed to be as automatic as possible during both the preparation and recovery phases. Once installed, the solution should complete its tasks without user intervention. Solutions such as the TapeWare Disaster Recovery Agent can function across multiple platforms and operating systems.

Disaster recovery is only as effective as the media rotation schedule that is put in place. If tapes are not being rotated regularly and stored in secure locations, then the data are still at risk and no solution will be effective. For protection, full backups should be performed, either as part of a regular scheduled backup plan or as a snapshot performed off-schedule. A full backup should also be performed each time there is a notable change in data on the system. A new bootable disk set or CD-ROM needs to be created any time there is a hardware change or a change in the operating system.

Disaster recovery should not be limited to servers. It must also protect desktop and work group environments. Critical data may be distributed across the hard drives of desktop and laptop computers used daily. Most server-based backup solutions can back up desktop clients remotely, but may not offer the combination of affordable disaster recovery, local tape device support, a common user interface (UI), intelligent wizards and other
helpful features in the desktop and workstation environment. Some solutions are specialized and, for performance or security reasons, may require each protected system to have a tape device attached to it. Additional licenses may also be required. Other solutions may allow network-based recovery of a remote system with backup data archived on disk instead of tape. Compared to tape, disk solutions may not offer the same levels of reliability, portability or scalability.

Since not all operating systems support plug-and-play, disaster recovery operations should always be performed on the same computer after replacing faulty hardware. Most disaster recovery solutions assume that major changes to the hardware have not occurred. With few exceptions, the hardware used to restore data must be nearly identical to the source system.

Some solutions do not fully restore the base OS. These products undertake a scripted reinstall of the OS and then restore only the critical data. There may be slower restore times or manual intervention required. They also have a tendency to lock up when advanced hardware needs additional drivers, service packs or products not originally supported by the OS.

Several cloning solutions are available for desktop computers. These products provide a snapshot image of the operating system that can be stored on a hard drive or network volume. While they allow quick recovery of a standard system, cloning is not feasible for daily data protection and does not work well for large application servers.
REPLICATION

Replication provides redundancy, but it can cause problems in a transaction environment. A company may sell products from a website built on a replicated database model. Orders come in to different purchasing locations; the product ships correctly, but the inventory may not be updated properly. This type of update conflict would be detected by a database server such as Oracle, which would invoke application code to fix it.

Replication has some advantages. It works over long distances and WANs and can tolerate short network outages. It provides redundant copies at remote locations and has a fixed overhead. Parallel servers and replication can also be used together.
A standby database can be used for failover and disaster recovery. It must be on the same network, which can present the same latency issues as replication, depending on the distance between the two database servers. The standby will also lag behind by a small amount according to the size of the redo files. In this configuration, the database server forwards its logs to another network computer, which applies the log changes to a database kept in constant recovery mode. It can be brought online after a failure. The standby database does not affect primary machine performance, but it should not be the database's only backup, since once it goes into service the standby becomes the master.

Critical databases can be spread across multiple pieces of hardware, hardware architectures and physical locations. Storage management applications such as Tivoli Storage Manager, along with the native capabilities of the database applications, can aid in the backup process. When disaster recovery is properly executed, a full restoration of the operating system, hard-drive partitions, applications and data can be achieved quickly and easily. The key to disaster recovery is in the preparation.
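The log-forwarding loop behind a standby database can be sketched as below. This is a generic illustration, not Oracle's actual managed recovery mechanism; the directory paths, file suffix and polling interval are all invented.

    # Hedged sketch of log shipping: copy each completed archive log to a
    # staging directory on the standby host, where a recovery process in
    # constant recovery mode applies it.
    import shutil
    import time
    from pathlib import Path

    LOG_DIR = Path("/primary/archivelogs")       # completed logs on the primary
    STANDBY_DIR = Path("/mnt/standby/incoming")  # staging area on the standby

    shipped = set()

    while True:
        for log in sorted(LOG_DIR.glob("*.arc")):
            if log.name not in shipped:
                shutil.copy2(log, STANDBY_DIR / log.name)  # keep timestamps
                shipped.add(log.name)
        time.sleep(30)  # the standby lags by roughly one log plus this interval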
PROGRESSIVE BACKUP

Traditional full, differential or incremental backups do not work well with the capacity limitations of CDs and create problems when trying to restore files. A better alternative for CDs is progressive backup. Progressive backup differs from incremental backup in that it backs up only new and changed files without relying on the archive attributes of the files.

Progressive backup software compares the files on the source volume with the files on the backup media. The comparison uses a database called a catalog file. The software searches the catalog file and compares its contents with the volume to be backed up. This allows the software to determine which files have been altered or newly created without reading over the media it has already written. By referencing the catalog, the software knows when each file was backed up and where those files are.

Dantz Retrospect backup software uses backup sets. These contain the data needed for a complete restore of any volumes that were backed up to that set. The software keeps writing to the CD until it is
completely filled. The software also creates a snapshot of the backup volume, providing a point-in-time image of the contents of the hard disk volume. Combining the snapshot and catalog capabilities provides the files needed to restore the disk volume to the exact state it was in at the time of the backup (when the snapshot was taken), along with the location of the required files. Progressive backup CDs created on CD-R and CD-RW drives can only be restored from writable drives.

Smaller data sets, improved performance and precise data recovery are also possible when progressive backup software is used for backups to AIT, DLT, LTO and other tape formats. The random file access of the CD format and the fast read performance of the current generation of drives, up to 6MB/second, make this a fast, accurate backup solution. But a quality drive mechanism is needed.
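The catalog comparison at the heart of progressive backup can be sketched as below. This is a simplified stand-in for a product catalog such as Retrospect's, under the assumption that the catalog records each file's size and modification time; no archive attribute is consulted.

    # Hedged sketch of a progressive-backup catalog: select files that are
    # new or changed by comparing the source volume against a catalog of
    # (size, mtime) pairs.
    from pathlib import Path

    def files_to_back_up(volume: Path, catalog: dict):
        """Return paths whose size or mtime differ from the catalog entry."""
        selected = []
        for path in volume.rglob("*"):
            if path.is_file():
                st = path.stat()
                key = str(path)
                if catalog.get(key) != (st.st_size, st.st_mtime):
                    selected.append(path)
                    catalog[key] = (st.st_size, st.st_mtime)  # update catalog
        return selected

    catalog = {}   # empty catalog: the first run backs up everything
    changed = files_to_back_up(Path("/data"), catalog)
    print(len(changed), "files need backing up")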
PACKET WRITING

Industry standards for rewritable CD technology have evolved slowly, and packet writing of data to CD media is not always supported. When backup operations span multiple pieces of CD-R or CD-RW media, some drives cannot manage the media swap and will flush the cache, causing a loss of up to 2MB of data and rendering the file that was being written unrecoverable. Other drives do not properly flush the cache when needed, forcing the software into a flush-cache operation that can slow down the drive. This can drop a 20x CD recorder to writing at 2x speed.

Since there is no standard for finalizing a packet-written CD, progressive backup software does not finalize the disks it creates. Conventional CD writing software wastes about 10MB per session on catalog and directory information about the CD itself. Using the backup set and catalog, Dantz Retrospect conserves that space for writing backup data.
ADVANCED RECOVERY

Recovery of a large database takes so long because most backup and recovery solutions focus on data backup, data protection and data management; recovery is often an afterthought. Newer rapid database
recovery products address these business continuity issues. Some products can fully recover and restart a database in 20 minutes or less, whether it is 1 terabyte or 1 petabyte in size. These products can be used with existing backup and recovery solutions to protect business continuance.

A rapid recovery product like RealTime is designed to integrate with most storage and backup systems, such as Tivoli, Legato and VERITAS, to restore lost or corrupted data quickly. The recovery process works in minutes, not hours, reducing the risk of business impact from extended interruptions. Instead of restoring lost data from transaction logs or archives, the process undoes the corruption or loss rather than rebuilding the entire database. A timeslide capability rolls the database back to the day, hour, minute and second prior to the loss; the database is then restarted at that point in time. The process is completely automated.

A non-intrusive continuous data capture with timeslide (undo) recovery capability is used for IBM, Sun and Oracle systems. The undo procedure eliminates the need to apply transaction logs for data reconstruction or to restore full volumes of data. The database recovery process exceeds the recoverability criteria set by Oracle for maintaining transactional consistency. Since the process rewinds rather than rebuilds a database following an outage, the database can be rolled back in time and restarted at any point before the loss or corruption occurred. Operations continue from that point in time.

This disk-to-disk approach to recovery creates a rolling movie of the data as opposed to a snapshot. This eliminates the risk of data loss that can occur between snapshot points, and it is also faster and less complex than snapshots. The rolling movie approach is block-level based, meaning every bit of data is backed up regardless of source or format. Time-stamp journaling is used for data consistency during restoration. RealTime captures data continuously and journals it to allow immediate, point-in-time recovery. Undoing its queued data writes in reverse order, RealTime restores data by backing out writes until it has moved back in data time past the system crash or data corruption point. Partial restores, full restores and disaster procedures are all supported.

A data recovery solution for a relational database requires that four major components of the database be backed up: data files, control files,
configuration files and archive logs. Turning on the archive log is an optional feature in Oracle databases. All of this makes recovering a relational database a time-consuming and complex task.

Backing up a relational database like Oracle, DB2 or SQL Server to a remote site while it is running presents added challenges. Mirroring of the complete database to the remote site is needed to guarantee that the data files, configuration files, control files and archive logs are present for transactional consistency. Every change to the database must be recorded in the proper write ordering before it is applied to the data file. If the remote site is more than 10km from the production database server, an Oracle database requires that an asynchronous transfer protocol be used.
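The timeslide (undo) idea described above can be sketched with a write journal that records each block's prior contents; rolling back is then a reverse replay of the journal. This is a toy illustration of the concept, not RealTime's implementation, and the volume size and data are invented.

    # Hedged sketch of timeslide recovery: journal (timestamp, offset,
    # previous bytes) for every write, then undo writes newest-first until
    # the volume is back at the requested point in time.
    import time

    volume = bytearray(b"\x00" * 64)   # stand-in for a disk volume
    journal = []                       # (timestamp, offset, prior contents)

    def journaled_write(offset: int, data: bytes) -> None:
        prior = bytes(volume[offset:offset + len(data)])
        journal.append((time.perf_counter(), offset, prior))
        volume[offset:offset + len(data)] = data

    def timeslide(target_time: float) -> None:
        """Back out writes, newest first, made after target_time."""
        while journal and journal[-1][0] > target_time:
            _, offset, prior = journal.pop()
            volume[offset:offset + len(prior)] = prior

    journaled_write(0, b"good data")
    checkpoint = time.perf_counter()
    journaled_write(0, b"corrupted")   # the loss event
    timeslide(checkpoint)              # rewind to just before the corruption
    print(bytes(volume[:9]))           # b'good data'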
ADVANCED CLUSTERS

The concept of using more than one computer for disaster recovery has evolved to include clustered implementations. One of the key features of a cluster is the interconnect between nodes. Exchanging state information and keeping the cache coherent are critical parts of clustered architectures. The number of messages passing between cluster nodes is usually large and the message sizes are often small, so the cluster's performance becomes sensitive to the speed at which messages can be exchanged.

Clusters are most effective for parallel workloads. Distributed processing clusters are well suited to parallel applications that can be distributed across many computers. They provide increased availability to a set of users. This high availability comes with load-balance/failover benefits and simplified administration, since cluster administration replaces that of individual machines.

Hot spare failover clusters provide high availability by having applications run only on the active node. In the event of a failure of the active node, a pre-configured backup node is ready to take over. Various degrees of automation are available for switching over to the backup node.

High availability load balancing clusters allow work to be done on all the nodes as they balance the load across available resources. In the event of a failure, work is no longer sent to the failing nodes. This type of cluster trades complexity for the ability to use all the available resources of the cluster. An important use of this type of cluster is
serving web pages. The nodes are usually connected via Internet Protocol (IP) networks, which do not require the nodes to be located in close proximity.

More special-purpose clusters requiring higher performance use commercially available interconnects to improve the bandwidth and, more importantly, the internode message latency. Giganet's (Emulex) cLAN cards and switches offer higher performance and lower latency than standard Ethernet. These cards support an IP compatibility mode, which allows some performance improvement without rewriting an IP-based clustering communications layer. The full performance of these cards requires their native Virtual Interface (VI) API. In VI mode, major CPU savings can be realized, particularly over Gigabit Ethernet, along with gigabit-per-second bandwidth and sub-10-microsecond latencies. InfiniBand is another solution for internode communication; like Giganet, it is based on VI.

Another commercial interconnect is Myricom's Myrinet. Both cards and switches are available for copper or fiber cable. The copper versions are limited to about 10 feet for maximum performance, while the fiber version supports more than six miles of distance at full speed. It provides bi-directional bandwidth of over a gigabit per second with sub-10-microsecond latencies.
HIGH AVAILABILITY CLUSTERS

Disaster recovery requires highly available, mission-critical computational resources. Brownouts and blackouts may not affect all machines the same way; some may recover during power-up while others may have errors or lock up. The open-source community has been active in promoting clustering, and Linux has been a prolific area for clustered solutions. Linux high availability cluster solutions include Legato, SteelEye, Polyserve, Red Hat, TurboLinux, HATs and Veritas. These often work well with a pair of nodes but may run into problems once the node count grows beyond two.

Open-source solutions include the Linux Virtual Server and the Linux-HA Project. The Linux Virtual Server uses kernel patches for load balancing incoming TCP/IP traffic. It examines incoming TCP packets and redirects them to a set of nodes acting as a cluster. The load balancing can be customized to the application. The nodes can be on the
same LANs or on WANs using IP tunneling. A master node has all initial traffic routed through it via a virtual IP address. Requests are then distributed to nodes in the cluster and replies are sent back to the requesting clients.

The Linux-HA Project is a failover system in which nodes in the high availability cluster can take over the IP addresses of failed nodes. When a node fails, it is replaced by another that acts as the failed node. A critical part of any failover system is preserving the state of applications. Memory caches and other client-specific data make client failover difficult.

REPLICATION AND CLUSTERS

There can also be some form of replication going on between clusters in different physical locations. Active-passive replication requires the disaster recovery site to have a second cluster; data from the primary cluster is replicated to the backup. The available replication software supports synchronous, near-real-time or scheduled data movement.

Bi-directional replication requires two active clusters to be operational and accessed at each location. This means that the data being accessed at each site is different: as cluster A data are replicated to cluster B, users on the two clusters are not operating on the same data. This solution is more cost effective than active-passive, since users can be doing production work on both clusters. Active replication is the most complicated form of replication. It allows users on either cluster to access and modify the same files, which then get replicated both ways.

Commercial replication packages include Repliweb's Replication Development Suite for Linux clusters. This is a scheduled content replication and synchronization system for mixed server environments (Windows, UNIX/Linux). It allows the scheduled replication and synchronization of file systems over networks and can be used for replicating data to clusters and between them.
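A scheduled one-way replication pass of the kind these packages perform can be sketched as below. This is a bare-bones illustration of active-passive replication (copy any file that is missing or newer at the source), not Repliweb's product, and the paths are invented.

    # Hedged sketch of a scheduled active-passive replication pass. A real
    # product also handles deletes, locking, bandwidth limits and retries.
    import shutil
    from pathlib import Path

    SOURCE = Path("/clusterA/data")
    TARGET = Path("/mnt/clusterB/data")   # remote cluster mounted locally

    def replicate_once() -> int:
        copied = 0
        for src in SOURCE.rglob("*"):
            if src.is_file():
                dst = TARGET / src.relative_to(SOURCE)
                if not dst.exists() or dst.stat().st_mtime < src.stat().st_mtime:
                    dst.parent.mkdir(parents=True, exist_ok=True)
                    shutil.copy2(src, dst)   # copy data and timestamps
                    copied += 1
        return copied

    print("replicated", replicate_once(), "files")  # run from a scheduler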
CLUSTERS AND STORAGE

Clustered computing has been extended to storage by creating a distributed file system that runs on Linux nodes, as in Tricord's Lunar
Flare appliances. These provide a low-cost, high availability cluster designed to serve up storage as a NAS device. The distributed file system allows users to connect to any of the nodes in the cluster and treat the data as a single large volume. When the cluster fills up, users can add another Linux node and grow the cluster, adding more storage, total computing power and network bandwidth. Files stored in the cluster are automatically mirrored or striped, RAID-5 style, across the nodes. The cluster provides high availability by allowing any node to fail while still supporting access to all the data in the cluster from any of the remaining nodes. Lunar Flares are built with PCs, so they can provide high availability and disaster recovery without major investments in hardware.

For disaster recovery, Linux packages can be used to replicate data between two Lunar Flare clusters. In one form of cluster-to-cluster replication, each node keeps track of files changed by attached clients. The parallel nodes then replicate the modified files to a remote cluster over an IP network.

Lunar Flare clusters can automatically load balance clients accessing the cluster with a variety of protocols. CIFS, NFS, HTTP, FTP, AppleTalk and resync protocols are all supported with load balancing. The stateless protocols HTTP and CIFS are supported for failover. Other protocols support failover but require automated or manual reconnection, depending on the application accessing the data. For disaster recovery, clients can be reconfigured to point to the replicated cluster, or the replicated cluster can be reconfigured to appear as the original cluster. This reduces the time required to reconfigure clients.
LONG DISTANCE REPLICATION

Planning for disaster recovery means more than relying on local backup to tape. Around-the-clock access to backups is often needed. The options for long-distance replication have been improving and costs have been dropping, so many companies can now afford to replicate data to remote sites. This data replication can be synchronous or asynchronous.

Synchronous replication is similar to a standard RAID-1 mirror implementation in a storage array. The difference is that the source and target of the mirror operation can be separated by 100 kilometers.
In synchronous operation, the application has to wait for an acknowledgment from the remote site before the write is completed. The duration of the pause depends on the round-trip transmission time to the remote site. Reducing this write penalty requires the use of expensive, high-bandwidth, low-latency site-to-site links. Also, 100km of separation is not enough for some disasters.

Asynchronous replication separates the replication process from the local write, so the application server does not pay a performance penalty. The sequencing of writes from the host is preserved with first-in/first-out (FIFO) queues. The remote copy is not a perfect copy of the source, since it lags behind at any given point in time. This lag depends on the bandwidth of the network and the resources at the remote end to write to disk.

Semi-synchronous mirroring has some of the features of synchronous mirroring, but it is still asynchronous in operation. An I/O buffer is used at the primary and secondary sites, and the mirroring takes place between the local storage and the router. The buffer is usually small, so a high-speed link is needed to keep it from overflowing. A comparison of mirroring approaches appears in Table 9-1.

Table 9-1. Mirroring
-------------------------------------------------------------------------
Feature                    Synchronous    Asynchronous    E-Vaulting
-------------------------------------------------------------------------
Protection for corruption  yes            yes             no
Protection for deletion    yes            yes             no
Cost                       High           Medium-High     Low
Distance limitations       yes            no              no
Link speed required        High           Medium          Low
Performance                High           Medium          Low
Failover                   Auto           Manual          Request restore
Multi-platform             yes            yes             no
-------------------------------------------------------------------------

Since it can operate over any distance with relatively cheap hardware, asynchronous replication is the most flexible and affordable option available today.
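The size of the synchronous write penalty can be estimated from the site separation alone. The sketch below assumes signals travel through fiber at roughly two-thirds the speed of light and ignores switch and protocol overhead, so real penalties are higher.

    # Hedged estimate of the synchronous write penalty: round-trip
    # propagation delay over fiber, ignoring equipment overhead.
    SPEED_IN_FIBER_KM_S = 200_000   # about 2/3 of c; an approximation

    def write_penalty_ms(distance_km: float) -> float:
        return 2 * distance_km / SPEED_IN_FIBER_KM_S * 1000

    for d in (10, 100, 1000):
        print(d, "km adds", round(write_penalty_ms(d), 1), "ms per write")
    # 10 km adds about 0.1 ms, 100 km about 1 ms, 1000 km about 10 ms,
    # before any protocol or equipment overhead.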
HOST-BASED REPLICATION

Replication involves issuing a second write to get the data to the remote site. This can be done with host-based replication, where the host's operating system or an add-on to the OS initiates the second write, or with array-based replication, where the second write is initiated by the storage array. Storage virtualization allows network-based replication, in which a device between the host and storage provides replication and other storage services.

In a host-based system, the replication occurs on the host and the remote destination is another application server. Since the replication relies on the host, it works with any storage that can be attached to the host. This approach is simple and relatively cheap, but the administration becomes extensive beyond several servers. It becomes a problem for more than a few dozen servers and is not feasible for large server farms such as those found at Internet service providers. The replication process also consumes extensive host resources, impacting application performance. These problems limit host-based solutions to relatively small networks and application servers where performance is not critical.
ARRAY-CENTRIC REMOTE REPLICATION

Array-centric solutions move replication to intelligent arrays. One array becomes the mirroring agent, transmitting the data to another identical array. This type of replication does not have any impact on the host. It supports any host attached to the array through a single user interface, regardless of the host operating system. However, this capability is only available with expensive disk controllers of proprietary design, and it requires costly external equipment to convert disk channel protocols for WANs and LANs.
NETWORK-BASED REMOTE REPLICATION

This is similar to the array-based approach, but instead of proprietary controllers in each array, storage virtualization engines (SVEs) are used in the network between host and storage. These can mirror data regardless of the storage devices. The network-based approach combines
the benefits of the host- and array-based approaches, as shown in Table 9-2. Asynchronous mirroring with SVEs brings affordable and manageable disaster recovery capabilities to almost any company with access to a network able to handle asynchronous replication.

Table 9-2. Host, Array and Network-based Solutions
-------------------------------------------------------------------------
Feature                  Host-based     Array-based    Network-based
-------------------------------------------------------------------------
OS independent           no             yes            yes
Use any storage          yes            no             yes
Standard IP networking   yes            no             yes
Central management       no             yes            yes
-------------------------------------------------------------------------
NEARLINE STORAGE

There is a gap, in both performance and cost, between highly available on-line disk-based storage and the low-cost backup functionality of tape. New hardware and software solutions known as nearline storage fill an important role between on-line disk-based storage and off-line tape. On-line storage is best suited to applications that require constant, instantaneous access to data, such as databases and frequently accessed user data. Off-line storage such as tape is used primarily where infrequent serial access is required, such as backup for long-term storage.

Between on-line and off-line storage is a range of nearline storage technologies such as Network Appliance's NearStore. This is a network-attached storage appliance that provides backup and recovery, on-line archival and remote disaster recovery. Nearline storage solutions are for applications that require quicker random access to data than off-line storage offers, but do not need the continuous, instantaneous access of on-line storage.

NearStore is designed for data protection and business continuance while allowing organizations to replicate more data at a more economical cost. It uses less expensive ATA disk drives instead of SCSI or fiber channel drives. These scalable appliances offer capacity at a lower price of 2 cents per megabyte. ATA-based disk drives receive data faster than tape
drives and can shorten the backup window. Nearline technologies are not intended as a tape replacement but as an intermediate step to accommodate increasingly complex storage demands. A nearline device such as NearStore supports the most popular tape backup software and acts as a repository of tape data for nearline recovery. Backing up to a nearline storage solution and then to tape enhances data protection and management and improves primary storage and tape library performance. It is also faster and consumes less application-server CPU than direct backup to tape.

Nearline systems complement primary storage solutions by streamlining backups to tape and allowing quicker data recovery. With the ability to scale up to 100TB, large-capacity nearline solutions also help consolidate the management of multiple recovery and backup applications onto fewer systems. Nearline storage is suitable for storing redundant copies of data because of its large capacity, network connectivity and interoperability with backup and replication software packages. NetApp NearStore is compatible with Veritas NetBackup, Legato NetWorker and RepliStor, CA BrightStor Enterprise Backup, and Connected TLM, as well as NetApp's SnapVault software for on-line backup. It can connect to multiple primary storage platforms, including NetApp and others, over an IP (Internet Protocol) network and serve as a central repository for redundant data.

Nearline storage can also serve as a foundation for what are known as resilient systems. This refers to a newer storage architecture that stresses redundancy, remoteness and recoverability. Remoteness extends the idea of redundancy by backing up data at locations away from the main data center. Nearline appliances can be located remotely and connected to primary storage over IP network connections.

For most storage systems, the more immediate the recovery, the more expensive it is to enable. A real-time mirror can provide almost instantaneous failover capabilities, but it requires expensive redundant systems. Tape backup is far less expensive, but may require hours or days to restore, especially if the backup process is manual. Nearline storage bridges this gap with less expense and greater functionality.

Once files are archived, they are normally deleted from the primary storage system and are not available on-line for fast retrieval. Disk-based nearline storage is well suited to on-line archives of large or infrequently
accessed information. Reference information, including medical images, check images and CAD/CAM data, is increasingly considered of such high value to an organization that it is retained on-line for active access. Typically, this information has been stored on expensive primary storage. Software-enhanced nearline technology can immediately return a file system to a previous point in time for near-instantaneous recovery. Doctors can call up medical information, including CAT scans or MRIs, for diagnosis.
RAID LEVELS

In RAID configurations, the drives are arranged into striped or mirrored array groups. RAID levels differ in how they break up data, how they handle redundancy and the number of drives required. Data availability is a function of how the RAID levels handle data redundancy.

RAID-1, standard mirroring, requires drives in pairs. The files reside on separate disks and two copies of the data are kept, one copy per disk. RAID-5 uses striped parity and requires three or more drives. Data and parity are rotated so they are distributed evenly across all of the disks in the array. RAID-1+0 uses mirrored striping and also requires three or more drives. The data are striped and mirrored on adjacent drives in the array.

When drive sizes were smaller, the incremental difference in capacity from one size to the next was relatively small. Today the difference is much greater, and it significantly affects the performance and cost of a RAID implementation. With larger drives, writes to parity arrays take longer, and reconstruction from a failed drive takes significantly longer, particularly when the stripe width exceeds four drives. A RAID-5 configuration might use 36GB 15K rpm drives with a maximum stripe width of 14 drives. A RAID-1+0 configuration might use 146GB 10K rpm drives with the mirror limited to a maximum of 14 drives. The comparison is then between a RAID-5 array using small-capacity drives and a RAID-1+0 array using large-capacity drives.

Raw capacity is usable capacity plus redundancy capacity, which takes the form of parity for RAID-5 and a mirror for RAID-1 and
RAID-1+0. At lower capacities, the difference between the number of drives required for the two RAID level/drive capacity configurations is not significant; starting at around 2TB, the difference becomes significant. The total usable capacity is the total raw capacity minus the capacity devoted to redundancy. Mirroring for RAID-1 and RAID-1+0 gives seven redundancy drives in a 14-drive array; parity for RAID-5 gives one redundancy drive in a 14-drive array. At lower capacities, the difference in usable capacity, which is the capacity used to store primary data, is not significant for these RAID configurations, but starting around 2TB the difference becomes significant and grows quickly.

In RAID-1, if one disk in the mirror fails, no data are lost. The simultaneous loss of both mirrored disks will result in data loss. A RAID-5 single disk failure does not result in data loss, but if a second disk fails before the failed drive is replaced and the stripe reconstructed, all the data in the stripe is lost. Losing one drive in a RAID-1+0 system does not cause data loss, nor does losing any two non-adjacent drives, but losing any two adjacent drives causes data loss.

Performance in RAID levels depends on the way they handle data movement, that is, the method employed to retrieve and store data. RAID levels perform latency-driven tasks well. In RAID-1, each file has to be written twice, once to each disk in the mirror. Caching RAID controllers help by requiring only one write from the host, but they must still perform two writes to the disks. Read performance is increased, since the RAID controller can read from either disk in the mirror; if one disk is busy, the data can be retrieved from its mirrored unit. Standard mirroring is generally used with randomly accessed small files under 8KB in size. It is also used for mirroring host operating systems or applications, since these are not frequently accessed or updated. Standard mirroring is especially well suited to highly random, write-heavy tasks because there is no parity generation and response time is kept low.

RAID-5 with striped parity incurs a write penalty for each storage request because new parity must be generated for each bit written. This requires two writes plus reading back all other pieces of the stripe. Also, since the controller must wait for the proper execution of both the data and parity I/O processes before it can confirm that a write operation has completed, additional overhead is incurred. The greater the number of drives, the greater the number of
parity calculations, so keeping the size of the array group small increases performance but requires more drives for parity. The larger the stripe size, the higher the number of parity calculations, because more data are sent per I/O operation; this increases the time needed to complete a write operation. The smaller the stripe size, the higher the frequency of parity calculations, since more writes occur. This is why large record writes are matched with large stripe sizes, to transfer more data per write operation. Read operations are not affected by parity, so data retrieval is as fast as with RAID-1 or RAID-1+0. When a large file is spread across drives rather than residing completely on one drive, reading it is faster than with RAID-1.

RAID-1+0 with mirrored striping is like RAID-1 in that all writes have to be written twice and data can be retrieved from either mirror. Like RAID-5, large files can be distributed across multiple drives, where many drives do the work of one. The combination of striping for large files and mirroring for small files, plus the lack of parity for frequent writes, gives RAID-1+0 higher throughput than RAID-1 and lower response times than RAID-5.
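The drive-count arithmetic behind these comparisons is easy to reproduce. The sketch below uses the 14-drive array groups and the 36GB/146GB drive sizes mentioned above.

    import math

    # Capacity arithmetic for the 14-drive array groups discussed above.
    def usable_capacity(drives: int, drive_gb: int, level: str) -> int:
        """Usable GB after redundancy: RAID-5 loses one parity drive per
        group; RAID-1/RAID-1+0 lose half the drives to mirror copies."""
        if level == "RAID-5":
            return (drives - 1) * drive_gb
        if level in ("RAID-1", "RAID-1+0"):
            return (drives // 2) * drive_gb
        raise ValueError(level)

    print(usable_capacity(14, 36, "RAID-5"))      # 468 GB usable per group
    print(usable_capacity(14, 146, "RAID-1+0"))   # 1022 GB usable per group

    # Drives needed for about 2TB of usable capacity:
    print(math.ceil(2000 / 468) * 14)    # 70 of the 36GB drives (RAID-5)
    print(math.ceil(2000 / 1022) * 14)   # 28 of the 146GB drives (RAID-1+0)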
RECOVERY OF A FAILED DRIVE

RAID levels differ in the way they handle failed drive replacement. This affects how fast they can fully recover after a failed drive has been replaced with a new drive. Resiliency is a measure of how fast an array can return to full operation after replacement of a failed drive. It indicates the degree to which performance is impacted during the data recovery period.

In RAID-1, during a single disk failure, the data are at risk of loss since there is no longer any redundancy. After a failed drive has been replaced, the data are copied from the surviving drive to the new drive. Recovering data to a mirror is called resilvering. It depends on individual drive capacity and on the formatted transfer rate, and generally takes less than an hour, even for the larger drives.

A RAID-5 single disk failure also puts the data at risk of loss because there is no longer any redundancy. After a failed drive has been replaced, the data are recalculated from parity and copied to the new drive in the stripe. This process of recovering data from parity is called
reconstruction. It depends on individual drive capacity, formatted transfer rate and stripe width (the number of drives in the stripe), and it can take many hours to perform.
RECONSTRUCTION

Reconstruction takes longer for larger stripe sizes or higher drive capacities, and during reconstruction the array exhibits a significant decrease in performance. Reconstruction time can be reduced by using small-capacity drives and small stripe widths of three to four drives. This allows more drives for data and/or parity. Mirroring two RAID-5 arrays together allows one mirror to handle all of the I/O activity while the other handles the parity rebuild.

Using a mirrored array of RAID-1 or RAID-1+0 eliminates these reconstruction problems. In RAID-1+0, during a single disk failure, the data are at risk of loss because there is no redundancy between the failed drive and either adjacent drive. After a failed drive has been replaced, data are copied onto the new drive from the two adjacent drives. This is also called resilvering. Other drives in the array are not affected. Like RAID-1, this operation depends on the individual drive capacity and on the formatted transfer rate. It will generally not take longer than an hour, even for larger drives.

When the costs of each configuration are compared, the 2TB raw capacity point is where the differences start to become significant. This is because, at 2TB, four times as many 36GB drives are required in the RAID-5 arrays to generate 2TB of raw capacity as 146GB drives in a single RAID-1+0 array group. The usable capacity is the amount of storage space actually used to store primary data; it is the portion that produces productive work. Except below 1TB of usable capacity, the cost per usable GB is almost equal for a RAID-1+0 array with 146GB drives and a RAID-5 array with 36GB drives. The traditional sentiment is that RAID-5 is less expensive than RAID-1 or RAID-1+0, but beyond 2TB, using 146GB drives in a RAID-1+0 implementation is more cost effective while providing enhanced data protection, read/write performance and resiliency.
SAN, DAS AND NAS

In today's high-speed, connected world, the management of storage differs from the traditional practice of simply adding more storage as needed. That approach worked with limited amounts of data and cheap storage. Today, a storage strategy is needed that simplifies storage and server management, allows storage to be shared among users and can adapt to changing needs. This means storage area networks (SAN), direct-attached storage (DAS) and network-attached storage (NAS).

As storage needs grew, network-attached storage devices called filers became a popular alternative to direct-attached and network storage. The high-end NAS filers from NetApp or EMC were scalable and provided needed management options. Work groups and smaller divisions bought economical and simple NAS devices to handle their file storage, but these NAS devices were not designed for centralized management.

NAS and SAN are similar in some ways. Both are storage devices that use multiple disks in arrays for availability and performance, and both usually employ a buffer cache to enhance performance. NAS works well for heterogeneous and simultaneous file sharing, but has a large network protocol overhead for error checking and integrity. This can affect NAS performance, but it is not an issue for fiber channel or SANs. Using SANs for larger transfers offers high performance and more predictable response.

NAS and SAN serve different purposes and have distinct management interfaces. They also have different and often incompatible backup procedures. You cannot share capacity or load-balance workloads between them, and migrating application storage between the environments can be difficult. SANs lack NAS file-sharing capabilities. NAS is better at doing many small things, while a SAN can do large tasks very well.

SANs use dedicated networks, most commonly fiber channel. Like DAS, I/O requests access devices directly. SANs are optimized to handle storage traffic at high speeds with single points of control and to off-load processing tasks from the LAN. SANs divide the storage into pieces called logical units, or LUNs (Logical Unit Numbers). A LUN may be one entire array, or a very large array may be divided into several LUNs. LUNs are similarly sized and can only be accessed in whole blocks, which speeds up data transfer. Command-level
interfaces, wizards or semi-automated procedures allow the mapping of LUNs to blocks on the disk subsystems.

NAS uses a specialized processor with its own disk storage. It attaches to the LAN or WAN using specialized file access and sharing protocols, and it uses this processor to service file requests. NAS systems differ from SANs in that they view storage as a set of files instead of LUNs. Files are more dynamic than LUN data blocks: they come in radically different sizes and are easily created, modified or deleted. This dynamic and hierarchical environment supports many thousands of files instead of tens or hundreds of LUNs.
NAS GATEWAYS

There are hybrid devices that combine NAS and SAN, but a more common solution is the NAS gateway. These consist of NAS appliances that connect to SAN storage through a fiber channel port, providing both file- and block-based data storage. A NAS gateway connects a NAS device through a fiber channel HBA (host bus adapter) to a fiber channel SAN. NAS retains its file-sharing capabilities, but its storage becomes more flexible and manageable on the SAN. Some gateways can only store data on arrays from the same vendor. Gateways enhance an existing SAN by allowing it to service NAS applications; this is a major boost to backup operations, since without a gateway most NAS devices must be backed up separately.

A gateway does not completely converge NAS and SAN, so file- and block-based data must still be stored on different devices. True convergence may involve transparent gateways that can share files on the back end, or hybrid devices and global file systems running in storage area networks. A NAS gateway can also suffer from latency and bottleneck issues.
SAN OPTIMIZATION

The typical SAN-attached server may operate with tens of terabytes of a database. These databases may experience hot spots with large amounts of random I/O. A large host system can easily overwhelm
a traditional disk subsystem with random I/Os, causing the host to idle while waiting for the disks to respond. The larger disk subsystems use multiple gigabytes of cache to speed up I/Os when the same data are read many times.
SOLID-STATE DISKS

Another way to eliminate a database I/O bottleneck is to use solid-state disk technology. This very high-speed memory unit can emulate several disk drives. A solid-state disk is used exactly the same way as a normal disk, but without the delays associated with normal disk drives. A solid-state disk unit can be connected to a SAN directly, or through a SCSI bridge, and logically divided into multiple LUNs.

A SAN environment with a large set of database servers, or a few high-end database servers sharing a smaller set of storage devices, is accessed essentially at random. Even a RAID controller that uses cache memory to increase performance and availability may not help the hard disk storage keep up with the servers' I/O requests. In this environment, cache emulators can work with solid-state disk to improve performance. The cache emulator can write data blocks sequentially to the persistent cache area and move them to the data disk (a random write) as a secondary operation once the writes have been acknowledged. This accelerates the performance of the slower disk, and reads also take advantage of the cache pool.

Intelligent disk-based staging mechanisms can re-map frequently used areas of any pre-defined zones to solid-state disks. This is based on statistics collected to determine the most frequently accessed zones. When a zone is no longer considered hot, the data from the high-performance disk are moved back to their original zone on the magnetic disk.
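The staging mechanism just described amounts to promoting the hottest zones by access count. Below is a toy sketch of that bookkeeping, with the slot count, sample accesses and function names all invented for illustration; the actual data movement is left as comments.

    # Hedged sketch of hot-zone staging: count accesses per zone, keep the
    # hottest zones on solid-state disk, demote zones that cool off.
    from collections import Counter

    HOT_SLOTS = 4        # invented: zones that fit on the solid-state disk
    access_counts = Counter()
    on_ssd = set()

    def record_access(zone: int) -> None:
        access_counts[zone] += 1

    def restage() -> None:
        """Promote the most-accessed zones to SSD; demote the rest."""
        hottest = {z for z, _ in access_counts.most_common(HOT_SLOTS)}
        for zone in on_ssd - hottest:
            pass   # copy this zone back to its place on the magnetic disk
        for zone in hottest - on_ssd:
            pass   # copy this zone onto the solid-state disk
        on_ssd.clear()
        on_ssd.update(hottest)

    for zone in [3, 3, 3, 7, 7, 1, 9, 9, 9, 9, 2]:
        record_access(zone)
    restage()
    print(sorted(on_ssd))   # the four hottest zones, e.g. [1, 3, 7, 9]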
RISKS IN SECONDARY STORAGE

Secondary storage, such as backup and replication, supports greater application availability, recovery and business continuity. It is also associated with greater data volume than primary storage. In practice, it requires managing large backup processes and tape libraries. The
tasks include cataloging, storing, distributing, vaulting and scratching tapes, and pooling and virtualizing backup resources. It may also involve transferring images and data outside the main facility to data centers or service providers, and some or all of backup, vaulting or recovery may be outsourced. The storage functions are handled by more people, stored data is transferred to more locations and sensitive data is placed on more dispersed media. While backup and replication inherently preserve data, the risk of unauthorized data access, theft or corruption grows in secondary storage.

Tape media is the most popular vehicle for data recovery. Backup tapes are small, portable and typically stored outside the data center for off-site disaster recovery. Unauthorized users have more time to read tape data, analyze confidential information and even rebuild entire systems. Tapes used for bulk data transport can be misdelivered or lost. With replication, system snapshots are duplicated and often stored outside the primary site. Access controls and infrastructure management may fall short of protecting access to the tape media and data repositories. Additional safeguards may be needed to further ensure data integrity and confidentiality, including stored data authentication and encryption.
SECURITY

Security is more likely to be adopted when it is transparent and unobtrusive. It should not impede the performance (read/write data rates) of the tape device. This is especially true for virtualized tape: disks that look like tape libraries. Protecting stored data can provide compliance with e-commerce, healthcare, FDA, EU and other privacy legislation.

Encryption converts clear data (plain text) into an unreadable form called cipher text, using a secret key or password, so that the data are unreadable without the particular decryption key. Authentication is a process that validates a transmission, message or originator by assuring the identity of a given user or system, typically with passwords or digital certificates. Authorization determines what an authenticated entity is granted permission to do or access. Integrity is a process that establishes that data have not been modified. Key management determines how keys are
created, protected, distributed, recovered, updated and terminated. Strong encryption, authentication, authorization, data integrity and centralized key management are the way to mitigate access exposures in tape media, virtualized tape systems and replicated images. For recovery, encrypted tapes may need to contain metadata that securely references the encryption system used to protect the tape. This can be implemented at the host, in the storage subsystem or in a tape media security appliance. Implementing data encryption using backup software at the host or backup server can produce performance bottlenecks; implementing data protection at the tape library can provide significant benefits.

Encrypting files prior to backup has its strengths and weaknesses. If the file structures are relatively static and simple, the overhead associated with file encryption may be acceptable. The approach can become complex in environments that have a large number of files, changing file attributes, many users and the associated crypto keys. This makes recovery difficult, since these files are backed up and restored at remote locations and on different media, which may not have the same access requirements or infrastructure. Avoiding this requires replicating the primary environment. A file-encryption approach may also not address other applications, such as e-mail and large databases, which may write directly to disk in raw partitions. If an application needs to recover a specific database table, the primary environment needs to be completely mirrored at the secondary site.

A tape media security appliance can provide performance, centralized management, protected and managed keys, flexible deployment and seamless integration. It offers centralized management of security, which improves policy enforcement and key protection. The keys are maintained by the appliance, which can be managed remotely and placed close to the storage library or virtualized tape. An appliance can support protection of media across different backup applications without affecting local system administration.
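As a small illustration of encrypting and authenticating backup data before it reaches tape, the sketch below uses the third-party Python cryptography package, whose Fernet recipe provides authenticated encryption. This stands in for, and is far simpler than, a real tape security appliance; the key handling here is deliberately naive and the sample data are invented.

    # Hedged sketch: authenticated encryption of a backup block before it
    # is written to tape. Requires the third-party "cryptography" package.
    # A real deployment keeps keys in managed key storage, not in memory.
    from cryptography.fernet import Fernet, InvalidToken

    key = Fernet.generate_key()        # in practice: fetched from key management
    cipher = Fernet(key)

    block = b"payroll records for Q3"  # invented sample data
    token = cipher.encrypt(block)      # ciphertext plus integrity tag
    # ... token is what gets written to the tape ...

    try:
        restored = cipher.decrypt(token)  # fails if the tape data was altered
        assert restored == block
    except InvalidToken:
        print("integrity check failed: tape data modified or wrong key")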
Index

A
adaptive islanding 32
ADD CHP 200
Agenda 21 189
American Wind Energy Association (AWEA) 154, 158
API 113
ASCR 37

B
backup administration 270
backup data 260
backup media 264
backup software 264, 271
backup systems 256
biomass fermentation 181
biomass gasification 182
biorefinery 181-182
BIPV 143
black start 135
business impact analysis 279

C
California Self-Generation Incentive Program 172, 174
Cal-ISO 4
capacitor clamp 51
carbon nanofilters 241
carbon nanotubes 215
Carnot limit 224
CCA 73
chemical hydrides 215
CI 101-102
Climate Change Fuel Cell Program 175
co-firing 183

D
disaster recoverability 277
disaster recovery 285, 287-289, 293, 295
disaster recovery cost analysis 278
disk mirroring 265
downtime 257

E
e-business 251
e-commerce 251
ECM 108
ECU 108
Electricity Feed Law 167
electroceramics 232
electrolytes 224, 232, 235
emergency director 253
emergency operations center 253
Energy Information Administration 169
Energy Trust of Oregon 175
EPRI 10-11, 22-23
EPRI's Roadmap 23

F
facility shutdown 255
FERC 10
First Energy 8
fuel-cell-diesel hybrid engine 210

G
gas turbine bottoming cycles 229
global warming 183
glow plugs 106
green standards 186
greenhouse gas 183
ground impedance 58

H
high-sulfur fuel 105, 114
hot site 266
Hurricane Georges 19
HVDC 33-34

I
ignitor plugs 138
impingement starting 136
InfoWatt 21
integrity control 252

K
Kyoto Protocol 190

L
Lake Erie loop 3
low-sulfur fuel 105

M
Mag-Dur 50
MAP 108
MAT 108
metal hydrides 215-216, 240
micro fuel cell 209, 220
microgrids 204
Mindanao project 24

N
nanostructures 242
National Electrical Code 55, 58, 62
National Renewable Energy Laboratory (NREL) 168
New Urbanism 188

O
oxidation catalysts 196

P
partial deregulation 31
partial oxidation 219
photovoltaic cells 142
photovoltaic effect 141
PIC 142

R
Rayleigh distribution 163
redundancy 282
RMS 152
root mean square 47

S
SEPA 143
SI 101-103
SPS 144
SR 127, 129
stack design 223
STC 142
steam methane reforming 219

T
THD 52
Treaty of Maastricht 192
turboexpanders 119

U
USABC 65

V
VRLA 80, 82
VST 98
vulnerability analysis 256

Y
Y2K 13

Z
Zero Emission Vehicle Company 226