IEEE
COMMUNICATIONS MAGAZINE
March 2009, Vol. 47, No. 3
www.comsoc.org
[Cover banner: Free Signal Processing Tutorial]
Optical Communications Supplement
Radio Communications
Modeling and Simulation: A Practical Guide for Network Designers and Developers
A Publication of the IEEE Communications Society
[Image: Cell phone antenna simulation in XFdtd … and subsequent propagation analysis in Wireless InSite.]
EM Simulation PRECISION. The FASTEST Calculations. Remcom Has the COMPLETE SOLUTION.
Turn to Remcom for electromagnetic simulation expertise with XFdtd® and Wireless InSite®. Our products are designed to work together to provide complete and accurate results when simulating real-world devices in real-world scenarios.
XFdtd: General-purpose, full-wave 3D EM analysis software
• Acceleration via computer clusters or GPU hardware
Wireless InSite: Radio propagation analysis tool for analyzing the impact of the physical environment on the performance of wireless communication systems
• Complex urban, indoor, and rural environments
• Wideband applications
Remcom focuses on building customer needs into every innovation and new feature. Our business philosophy is to satisfy our customers by delivering exceptional quality, unmatched reliability and state-of-the-art software and services, all backed by our expert customer support. Visit www.remcom.com or call us to learn how Remcom can add precision and speed to your simulation projects.
+1.888.7.REMCOM (US/CAN) | +1.814.861.1299 | www.remcom.com
Director of Magazines: Steve Gorshe, PMC-Sierra, Inc. (USA)
Editor-in-Chief: Nim K. Cheung, ASTRI (Hong Kong)
Associate Editor-in-Chief: Steve Gorshe, PMC-Sierra, Inc. (USA)

Senior Technical Editors: Nirwan Ansari, NJIT (USA); Tom Chen, Swansea University (UK); Roch H. Glitho, Ericsson Research (Canada); Andrzej Jajszczyk, AGH U. of Sci. & Tech. (Poland); Torleiv Maseng, Norwegian Def. Res. Est. (Norway)

Technical Editors: Koichi Asatani, Kogakuin University (Japan); Mohammed Atiquzzaman, U. of Oklahoma (USA); Tee-Hiang Cheng, Nanyang Tech. Univ. (Rep. of Singapore); Jacek Chrostowski, Scheelite Techn. LLC (USA); Sudhir S. Dixit, Nokia Siemens Networks (USA); Nelson Fonseca, State U. of Campinas (Brazil); Joan Garcia-Haro, Poly. U. of Cartagena (Spain); Abbas Jamalipour, U. of Sydney (Australia); Vimal Kumar Khanna (India); Janusz Konrad, Boston U. (USA); Nader Mir, San Jose State U. (USA); Amitabh Mishra, Johns Hopkins University (USA); Sean Moore, Avaya (USA); Sedat Ölçer, IBM (Switzerland); Algirdas Pakstas, London Met. U. (England); Michal Pioro, Warsaw U. of Tech. (Poland); Harry Rudin, IBM Zurich Res. Lab. (Switzerland); Hady Salloum, Stevens Inst. of Tech. (USA); Heinrich J. Stüttgen, NEC Europe Ltd. (Germany); Dan Keun Sung, Korea Adv. Inst. Sci. & Tech. (Korea); Naoaki Yamanaka, Keio Univ. (Japan)

Series Editors:
Ad Hoc and Sensor Networks: Edoardo Biagioni, U. of Hawaii, Manoa (USA); Silvia Giordano, Univ. of App. Sci. (Switzerland)
Applications & Practice: Osman Gebizlioglu, Telcordia Technologies (USA); John Spencer, Optelian (USA)
Design & Implementation: Sean Moore, Avaya (USA)
Integrated Circuits for Communications: Charles Chien (USA); Zhiwei Xu, SST Communication Inc. (USA); Stephen Molloy, Qualcomm (USA)
Network and Service Management: George Pavlou, U. of Surrey (UK); Aiko Pras, U. of Twente (The Netherlands)
Optical Communications: Hideo Kuwahara, Fujitsu Laboratories, Ltd. (Japan); Jim Theodoras, ADVA Optical Networking (USA)
Radio Communications: Joseph B. Evans, U. of Kansas (USA); Zoran Zvonar, MediaTek (USA)
Standards: Yoichi Maeda, NTT Adv. Tech. Corp. (Japan); Mostafa Hashem Sherif, AT&T (USA)

Columns:
Book Reviews: Andrzej Jajszczyk, AGH U. of Sci. & Tech. (Poland)
Communications and the Law: Steve Moore, Heller Ehrman (USA)
History of Communications: Mischa Schwartz, Columbia U. (USA)
Regulatory and Policy Issues: J. Scott Marcus, WIK (Germany); Jon M. Peha, Carnegie Mellon U. (USA)
Technology Leaders' Forum: Steve Weinstein (USA)
Very Large Projects: Ken Young, Telcordia Technologies (USA)
Your Internet Connection: Eddie Rabinovitch, ECI Technology (USA)

Publications Staff: Joseph Milizzo, Assistant Publisher; Eric Levine, Associate Publisher; Susan Lange, Digital Production Manager; Catherine Kemelmacher, Associate Editor; Jennifer Porcello, Publications Coordinator; Devika Mittra, Publications Assistant
IEEE COMMUNICATIONS MAGAZINE, March 2009, Vol. 47, No. 3
www.comsoc.org/~ci

OPTICAL COMMUNICATIONS SUPPLEMENT
SUPPLEMENT EDITORS: HIDEO KUWAHARA AND JIM THEODORAS
S4 GUEST EDITORIAL
S16 PHOTONIC INTEGRATION FOR HIGH-VOLUME, LOW-COST APPLICATIONS
To date, photonic integration has seen only limited use in a few optical interface applications. The recently adopted IEEE draft standards for 40 Gb/s and 100 Gb/s Ethernet single-mode fiber local area network applications will change this situation. CHRIS COLE, BERND HUEBNER, AND JOHN E. JOHNSON
S24 A TOTAL-COST-OF-OWNERSHIP ANALYSIS OF L2-ENABLED WDM-PONS
Next-generation access networks must provide bandwidths in the range of 50–100 Mb/s per residential customer. Today, most broadband services are provided through copper-based VDSL or fiber-based GPON/EPON solutions. Candidates for next-generation broadband access networks include several variants of WDM-PONs. KLAUS GROBE AND JÖRG-PETER ELBERS
S30 THE ROAD TO CARRIER-GRADE ETHERNET
Carrier-grade Ethernet is the latest step in the three-decade development of Ethernet. The authors describe the evolution of Ethernet technology from the LAN toward carrier-grade operation, together with an overview of recent enhancements. KERIM FOULI AND MARTIN MAIER
S40 A COMPARISON OF DYNAMIC BANDWIDTH ALLOCATION FOR EPON, GPON, AND NEXT GENERATION TDM PON
Dynamic bandwidth allocation (DBA) in passive optical networks (PON) presents a key issue for providing efficient and fair utilization of the PON upstream bandwidth while supporting the quality of service (QoS) requirements for different traffic classes. BJÖRN SKUBIC, JIAJIA CHEN, JAWWAD AHMED, LENA WOSINSKA, AND BISWANATH MUKHERJEE
TOPICS IN RADIO COMMUNICATIONS GUEST EDITORS: JOSEPH B. EVANS AND ZORAN ZVONAR
78 GUEST EDITORIAL
81 COGNITIVE RADIO AS A MECHANISM TO MANAGE FRONT-END LINEARITY AND DYNAMIC RANGE
Most of the consideration of the benefits and applicability of Dynamic Spectrum Access (DSA) has been focused on opportunities associated with spectrum availability. This article describes the use of DSA to resolve challenges in achieving wireless and cognitive radio operation in dense or energetic spectrum. PRESTON F. MARSHALL
88 PRIMARY USER BEHAVIOR IN CELLULAR NETWORKS AND IMPLICATIONS FOR DYNAMIC SPECTRUM ACCESS DSA approaches are increasingly being seen as a way to alleviate spectrum scarcity. However, before DSA approaches can be enabled, it is important that we understand the dynamics of spectrum usage in licensed bands. DANIEL WILLKOMM, SRIDHAR MACHIRAJU, JEAN BOLOT, AND ADAM WOLISZ
96 A TECHNICAL FRAMEWORK FOR LIGHT-HANDED REGULATION OF COGNITIVE RADIOS Light-handed regulation is discussed often in policy circles, but what it should mean technically has always been a bit vague. For cognitive radios to succeed in reducing the regulatory overhead, this has to change. ANANT SAHAI, KRISTEN ANN WOYACH, GEORGE ATIA, AND VENKATESH SALIGRAMA
103 PUBLIC-SAFETY RADIOS MUST POOL SPECTRUM
A critical first step toward a DSA-enabled future is to reform spectrum management to create spectrum pools that DSA-enabled devices, such as cognitive radios, can use, under the control of more dynamically flexible and adaptive prioritization policies than is possible with legacy technology. WILLIAM LEHR AND NANCY JESUALE
2009 Communications Society Officers
Doug Zuckerman, President
Andrzej Jajszczyk, VP–Technical Activities
Mark Karol, VP–Conferences
Byeong Gi Lee, VP–Member Relations
Sergio Benedetto, VP–Publications
Byeong Gi Lee, President-Elect
Stan Moyer, Treasurer
John M. Howell, Secretary

Board of Governors
The officers above plus Members-at-Large:
Class of 2009: Thomas LaPorta, Theodore Rappaport, Catherine Rosenberg, Gordon Stuber
Class of 2010: Fred Bauer, Victor Frost, Stefano Galli, Lajos Hanzo
Class of 2011: Robert Fish, Joseph Evans, Nelson Fonseca, Michele Zorzi

2009 IEEE Officers
John R. Vig, President
Pedro A. Ray, President-Elect
Barry L. Shoop, Secretary
Peter Staecker, Treasurer
Lewis M. Terman, Past-President
Curtis A. Siller, Jr., Director, Division III
Nim Cheung, Director-Elect, Division III

IEEE COMMUNICATIONS MAGAZINE (ISSN 0163-6804) is published monthly by The Institute of Electrical and Electronics Engineers, Inc. Headquarters address: IEEE, 3 Park Avenue, 17th Floor, New York, NY 10016-5997, USA; tel: +1-212-705-8900; http://www.comsoc.org/ci. Responsibility for the contents rests upon authors of signed articles and not the IEEE or its members. Unless otherwise specified, the IEEE neither endorses nor sanctions any positions or actions espoused in IEEE Communications Magazine.
ANNUAL SUBSCRIPTION: $27 per year. Non-member subscription: $400. Single copy price is $25.

EDITORIAL CORRESPONDENCE: Address to: Editor-in-Chief, Nim K. Cheung, Telcordia Tech., Inc., One Telcordia Drive, Room RRC-1B321, Piscataway, NJ 08854-4157; tel: +1-732-699-5252, e-mail: [email protected].

POSTMASTER: Send address changes to IEEE Communications Magazine, IEEE, 445 Hoes Lane, Piscataway, NJ 08855-1331. GST Registration No. 125634188. Printed in USA. Periodicals postage paid at New York, NY and at additional mailing offices. Canadian Post International Publications Mail (Canadian Distribution) Sales Agreement No. 40030962. Return undeliverable Canadian addresses to: Frontier, PO Box 1051, 1031 Helena Street, Fort Erie, ON L2A 6C7.
110 LICENSED OR UNLICENSED: THE ECONOMIC CONSIDERATIONS IN INCREMENTAL SPECTRUM ALLOCATIONS
Standard economic theory tells us that the value of an additional unit of spectrum is equal to the increase in socially beneficial services it produces. For licensed spectrum allowed to trade in markets, this value is relatively easy to calculate: it is the price firms pay for the licensed spectrum. The equation is more complex, however, when unlicensed spectrum is involved. COLEMAN BAZELON
MODELING AND SIMULATION: A PRACTICAL GUIDE FOR NETWORK DESIGNERS AND DEVELOPERS
SERIES EDITORS: JACK BURBANK
118 GUEST EDITORIAL
120 WIRELESS NETWORK MODELING AND SIMULATION TOOLS FOR DESIGNERS AND DEVELOPERS
The authors provide a discussion of M&S for wireless network designers and developers, with particular attention paid to the architectural issues. WILLIAM T. KASCH, JON R. WARD, AND JULIA ANDRUSENKO
128 SIMULATION METHODOLOGY FOR MULTILAYER FAULT RESTORATION
The authors describe the modeling methods and the simulation tools they have used for the analysis of a new integrated restoration scheme operating at multiple layers/networks. GEORGE TSIRAKAKIS AND TREVOR CLARKSON
135 DESIGN VALIDATION OF SERVICE DELIVERY PLATFORM USING MODELING AND SIMULATION
The authors discuss how modeling and simulation effectively help validate the design of various components constituting the service delivery platform. TONY INGHAM, KALYANI SASTRY, SHANKAR KUMAR, SANDEEP RAJHANS, AND DHIRAJ KUMAR SINHA
142 UNIFIED SIMULATION EVALUATION FOR MOBILE BROADBAND TECHNOLOGIES
The authors present a unified simulation methodology, including fading channel models, system configurations, and how to consider technology-dependent algorithms, such as scheduling, overhead modeling, interference margin definition, and resource allocation based on system loading. YUEHONG GAO, XIN ZHANG, DACHENG YANG, AND YUMING JIANG
150 MODULAR SYSTEM LEVEL SIMULATOR CONCEPT FOR OFDMA SYSTEMS
The authors describe how the cellular setup and traffic generation are performed for the proposed snapshot concept. Furthermore, a new methodology is proposed for a quality measure of resource units that is applicable to future wireless systems using an interleaved subcarrier allocation. ANDREAS FERNEKEß, ANJA KLEIN, BERNHARD WEGMANN, AND KARL DIETRICH
158 HIGH-FIDELITY AND TIME-DRIVEN SIMULATION OF LARGE WIRELESS NETWORKS WITH PARALLEL PROCESSING
The authors describe a parallel processing technique for time-driven simulations of large and complex wireless networks. The technique explicitly considers the physical-layer details of wireless network simulators such as shadowing and co-channel interference. HYUNOK LEE, VAHIDEH MANSHADI, AND DONALD C. COX
166 AGENT BASED TOOLS FOR MODELING AND SIMULATION OF SELF-ORGANIZATION IN PEER-TO-PEER, AD-HOC AND OTHER COMPLEX NETWORKS
The authors address this important area of research for the M&S community in the domain of computer networks by demonstrating the use of agent-based modeling tools for modeling self-organizing mobile nodes and peer-to-peer (P2P) networks. MUAZ NIAZI AND AMIR HUSSAIN
ADVERTISING: Advertising is accepted at the discretion of the publisher. Address correspondence to: Advertising Manager, IEEE Communications Magazine, 3 Park Avenue, 17th Floor, New York, NY 10016.

SUBMISSIONS: The magazine welcomes tutorial or survey articles that span the breadth of communications. Submissions will normally be approximately 4500 words, with few mathematical formulas, accompanied by up to six figures and/or tables, with up to 10 carefully selected references. Electronic submissions are preferred, and should be submitted through Manuscript Central (http://commag-ieee.manuscriptcentral.com/). Instructions can be found at: http://www.comsoc.org/pubs/commag/sub_guidelines.html. For further information contact Steve Gorshe, Associate Editor-in-Chief ([email protected]). All submissions will be peer reviewed.
SUBSCRIPTIONS, orders, address changes — IEEE Service Center, 445 Hoes Lane, Piscataway, NJ 08855-1331, USA; tel: +1-732-981-0060; e-mail: [email protected].
COPYRIGHT AND REPRINT PERMISSIONS: Abstracting is permitted with credit to the source. Libraries are permitted to photocopy beyond the limits of U.S. Copyright law for private use of patrons: those post-1977 articles that carry a code on the bottom of the first page provided the per copy fee indicated in the code is paid through the Copyright Clearance Center, 222 Rosewood Drive, Danvers, MA 01923. For other copying, reprint, or republication permission, write to Director, Publishing Services, at IEEE Headquarters. All rights reserved. Copyright © 2009 by The Institute of Electrical and Electronics Engineers, Inc.
174 ACCEPTED FROM OPEN CALL
IDENTITY MANAGEMENT AND WEB SERVICES AS SERVICE ECOSYSTEM DRIVERS IN CONVERGED NETWORKS
The authors describe a Web service-based framework supported by federated identity management technologies, which enables fixed and mobile operators to create a secure, dynamic, and trusted service ecosystem around them. JUAN C. YELMO, RUBÉN TRAPERO, AND JOSÉ M. DEL ALAMO
The President's Page 6
Book Reviews 8
Society News 10
Conference Report/CCNC '09 12
History of Communications 14
Very Large Projects 18
Conference Calendar 22
New Products 24
Global Communications Newsletter 181
Product Spotlights 186
Advertisers Index 188
THREE AIRCRAFT, A SINGLE MODEL, AND 80% COMMON CODE. THAT'S MODEL-BASED DESIGN.
To develop the unprecedented three-version F-35, engineers at Lockheed Martin created a common system model to simulate the avionics, propulsion, and other systems and to automatically generate final flight code. The result: reusable designs, rapid implementation, and global teamwork. To learn more, visit mathworks.com/mbd
Accelerating the pace of engineering and science™
©2008 The MathWorks, Inc.
THE PRESIDENT’S PAGE
WOMEN IN COMMUNICATIONS TECHNOLOGY
[Photos: Doug Zuckerman; Heather Yu]

Central to the ComSoc 2.0 framework, presented in the January 2008 President's column, is the concept of a global "ComSoc community." This community ideally includes all members regardless of where they are on the planet (or beyond). In line with this, one of the initiatives due for discussion is a brand new women in communications technology program at ComSoc to encourage and inspire women engineers in the communications and related disciplines worldwide. In 2008, I appointed Heather Yu ComSoc's liaison to IEEE Women in Engineering (WIE). Heather's role is to serve as a communications link between ComSoc and WIE. Attending WIE committee meetings and talking to our female colleagues who are working in the field have helped us understand more about the benefits such a group could potentially bring to our female members as well as the Society at large. This month's column is shared with Heather.

Heather Yu ([email protected]) is a senior manager and head of the Multimedia Content Networking research team at Huawei Technologies USA. She received her Ph.D. in electrical engineering from Princeton University. Currently she is serving as Associate Editor-in-Chief of the Journal of Peer-to-Peer Networking and Applications, Chair of the new ComSoc technical subcommittee on Human Centric Communications, a voting member of the GLOBECOM/ICC Technical Content Committee, and a member of ComSoc's Strategic Planning Committee. Her research interests include multimedia communications and multimedia content access and distribution. She has published two books and more than 60 technical papers, and holds 23 U.S. patents.

THE ENGINEERING GENDER DIVIDE

Oftentimes, we hear stories about high school guidance counselors steering girls away from engineering because they don't think the girls can do the math. The bias is certainly not an isolated case. Mather and Adams' study [1] shows that the college enrollment rate of young women has exceeded that of young men since 1991. And yet, the percentage of engineering bachelor's degrees awarded to women has continued to decline in recent years. According to the American Society of Engineering Education [2], only 18.1% of engineering bachelor's degrees went to women in 2006-2007, the lowest since 1996. Among the 20+ engineering disciplines surveyed, computer engineering and electrical engineering are among the least popular ones, with only 9.2% and 12.4% of degrees awarded to women, respectively. Does this mean girls really can't do math as well as the boys? The statistics differ. According to Hyde's study [3], based on math scores from 7 million students in 10 US states, although boys performed better than girls in math in high school 20 years ago, this is no longer the case. Looking at the average of the test scores, the performance of the most gifted children, and the ability to solve complex math problems, they found, in every category, that girls did as well as boys!

The statistics in professional societies further signify the gender divide in engineering. Opening the book on IEEE facts and statistics [4][5], we found that out of over 375,000 IEEE members, only 9% are female, and out of the 6288 Fellow grade IEEE members in 2008, less than 3% are female. Evidently, electrical engineering and computer engineering are still the most under-represented engineering fields for women worldwide. Could ComSoc, a society with a majority of its members holding an electrical engineering or computer science degree, possibly do something to help elevate awareness and overcome the barriers that are keeping women from advancing in an engineering career, especially a career in the communications and related disciplines? Being the second largest society within IEEE, ComSoc has been playing leadership roles in serving its members and leading the advancement of science, technology, and applications. As the Society leaps into a new phase — ComSoc 2.0 — let's seize the moment and work together to foster a new gapless community that takes advantage of the value and intelligence of women engineers as well as men engineers. By building a large community of educated and devoted men and women engineers working together to nurture a supporting environment and promote female professional development in communications technology, we can expect to change the way nations perceive women in engineering and help initiate the transformation necessary to bridge the engineering gender divide.

BRIDGING THE GENDER DIVIDE

Looking around, we can clearly see continuously growing efforts in advocating women in engineering. A Google search on the Internet will return a long list of Web sites of local and global organizations devoted to promoting women engineers and scientists. The Society of Women
Engineers (SWE), Association of Women in Computing (AWC), Women in Technology (WIT), and IEEE Women in Engineering (WIE) are some of the most popular global not-for-profit organizations dedicated to offering women at all levels of the technology industry and engineering fields a wide range of professional development and networking opportunities. Among them, IEEE WIE [6] is the largest international professional organization dedicated to promoting women engineers and scientists. Together these organizations are helping to initiate the changes necessary to bridge the gender gap and change how the world perceives women in engineering.
WOMEN ENGINEERS IN COMSOC

Last year was a year of learning for us to understand the need to invest in special efforts in promoting women communications scientists and engineers, encouraging women in leadership roles in engineering fields, and bringing more female members, volunteers, and leaders to the ComSoc family. As a fellow ComSoc member and an engineer/scientist, you may wonder about the advantages of such a dedicated effort that seemingly excludes male members. A little investigation brings us a pleasant surprise: out of the 12,000 IEEE WIE members, 2700 are male. As WIE Chair Karen A. Panetta pointed out [7], "This is not unusual considering that our mission is to foster a community of women and men that supports their mothers, daughters, sisters, and wives to pursue engineering and science careers that will inevitably lead to enriching lives within the global community and the environment."

As a long-standing member-driven professional society, ComSoc has a rich history of providing professional services and an environment through which members could work together to promote the advancement of science, technology, and applications in communications and related disciplines. Promoting women in engineering at ComSoc can bring ComSoc one step forward in bridging the engineering gender gap, creating fertile ground to attract, encourage, and advance women engineers, and benefiting from the additional female members who will bring unprecedented value to the Society and our members. Let's work together to build a community that:
• Attracts more and more female engineers to our community.
• Provides assistance for female career advancement in the communications technology profession.
• Offers information and guidance on balancing work and family.
• Manages mentoring programs to encourage female students to work toward an engineering degree and a career path in the field of communications technology.
• Advocates women in leadership roles in ComSoc as well as other related societies.
• Recognizes women's outstanding achievements in communications technology fields through various awards.
• Promotes IEEE member grade advancement for women.
• Facilitates the development of activities and programs that promote women in communications technology.
Together, we can put forward a mission to encourage the next generation of female communications technology engineers and scientists.
LET'S WORK TOGETHER

In May 2008 the John Fritz Medal, regarded as the highest award in the engineering profession and presented by the American Association of Engineering Societies (AAES), was awarded to a woman: Dr. Kristina M. Johnson, provost and senior vice president for Academic Affairs at Johns Hopkins University, "for her internationally acknowledged expertise in optics, optoelectronic switching, and display technology." The John Fritz Medal was established in 1902 as a memorial to the great engineer Dr. John Fritz. Alexander Graham Bell (1907), Thomas Edison (1908), and Alfred Nobel (1910) are also recipients of this award. Dr. Johnson is the first woman to receive this honor. Today, we have approximately 180 female role models who have been promoted to IEEE Fellow grade. Hopefully, with the continuously increasing effort in promoting women in engineering, we will see many more women engineers reach the pinnacle of their careers and be recognized for their achievements. Let's work together!
REFERENCES
[1] M. Mather and D. Adams, "The Crossover in Female-Male College Enrollment Rates," available at http://www.prb.org/Articles/2007/CrossoverinFemaleMaleCollegeEnrollmentRates.aspx
[2] M. Gibbons, "Engineering by the Numbers," available at http://www.asee.org/publications/profiles/upload/2007ProfileEng.pdf
[3] J. S. Hyde, S. M. Lindberg, M. C. Linn, A. B. Ellis, and C. C. Williams, "Diversity: Gender Similarities Characterize Math Performance," Science, 25 July 2008, pp. 494–495.
[4] IEEE Fellow Program, available at http://www.ieee.org/web/membership/fellows/fellow_presentation.html
[5] IEEE Quick Facts, available at http://www.ieee.org/web/aboutus/home/index.html
[6] About IEEE Women in Engineering, available at http://www.ieee.org/web/membership/women/about.html?WT.mc_id=WIE_nav2
[7] K. Panetta, "Working Together to Attract, Sustain, and Enrich Women Engineers," IEEE Women in Engineering Magazine, Winter 2007/2008, pp. 2–3, Dec. 2008.
BOOK REVIEWS
EDITED BY ANDRZEJ JAJSZCZYK

SECURITY AND COOPERATION IN WIRELESS NETWORKS: THWARTING MALICIOUS AND SELFISH BEHAVIOR IN THE AGE OF UBIQUITOUS COMPUTING
LEVENTE BUTTYÁN AND JEAN-PIERRE HUBAUX, CAMBRIDGE UNIVERSITY PRESS, 2008, ISBN 978-0-521-87371-0, HARDCOVER, 500 PAGES
REVIEWERS: SZYMON SZOTT AND MAREK NATKANIEC

Few books look at the future of wireless security as Security and Cooperation in Wireless Networks does. It was written by L. Buttyán and J.-P. Hubaux, leading experts in the field. This novel book anticipates new challenges in upcoming wireless networks. These challenges are related to human behavior, the root of all security issues. Two types of human conduct are considered: malice and selfishness. The former is defined as actions intended to harm other users, while the latter is the overuse of shared resources. In terms of wireless architecture, the focus of the book is on wireless ad hoc networks. The book is written in a mostly technology-independent manner, although 802.11 is referred to most often. Other examples of wireless communications, such as RFID and UMTS, appear throughout the book but are given less attention.

The book consists of three parts and two appendices. Part I is a very good introduction to security issues in wireless networks. Many applications of current and future networks are given, including sensor, mesh, and vehicular networks. The presented scenarios are used later throughout the book to illustrate important concepts and solutions. The authors also introduce the reader to the concept of trust and define the adversary model. This background is required for understanding the challenges of future networks.

Part II of the book deals with security in wireless networks, which is defined as preventing malicious behavior. The authors cover many aspects of wireless communications in a distributed environment that require security precautions: naming and addressing, authentication and key establishment, neighbor discovery, and routing. The aspect of protecting privacy is also discussed, for example, in the context of location secrecy in vehicular networks. Each chapter is a survey of the state of the art, and many interesting solutions are presented.
This part is supplemented by Appendix A, which serves as an introduction to cryptography.

Encouraging cooperation (and therefore hindering selfishness) is the subject of Part III. The authors clearly state that "it is still unclear how selfishness will materialize in practice in upcoming wireless networks." Therefore, theoretical models and possible practical solutions are described. The authors study selfishness in medium access and packet forwarding, spectrum sharing by multiple operators, and how to provide incentives for correct behavior. The chapters in this part rely on game theory; therefore, the authors have provided a brief tutorial on this branch of mathematics in Appendix B.

An important feature of this book is that it can serve as a textbook for a university course on security and cooperation in wireless networks. It is self-contained, and supplies the appendices required to understand the basics of cryptography and game theory. The chapters are well organized, first presenting a particular problem or requirement and then providing possible solutions based on the state of the art. Each chapter contains a "To probe further" part and ends with thought-provoking questions. Over 400 references guide the reader to more detailed information. The publisher's Web site complements the book with lecture slides and solutions to problems. An electronic version of the book is available as a free download.

Security and Cooperation in Wireless Networks achieves its goal of explaining how to prevent malicious and selfish behavior. It is a well written book that can serve as an inspiration for future research. Therefore, we can wholeheartedly recommend this book most of all to postgraduate students and lecturers, but also to researchers working in the field of wireless security.
RFID DESIGN PRINCIPLES
EDITED BY HARVEY LEHPAMER, ARTECH HOUSE MICROWAVE LIBRARY, 2008, 293 PAGES, HARDCOVER, ISBN 978-1-59693-194-7
REVIEWER: PIOTR JANKOWSKI-MIHULOWICZ

Radio frequency identification (RFID) systems are a fundamental factor in the development of a universal information system for different objects, often called the "Internet of Things." The typical applications of these systems are concentrated on different economic and public activities in industry,
commerce, science, medicine, and other areas. The book RFID Design Principles, edited by Harvey Lehpamer, is a comprehensive source of knowledge that explains the essence of RFID systems, their design, and their applications in supply chain management, intelligent buildings, transportation systems (in automatic vehicle identification processes), animal identification, military applications, and many others.

This book consists of seven chapters that cover the problems of selection, creation, and use of elements and whole RFID systems. The thematic range of this book relates to the short-range communications systems used nowadays, with stress on wireless (optical and radio) processes and systems of automatic identification. Special attention is paid to RFID standards development, and also to the selection and design of hardware elements, such as electronic tags, read/write devices (RWDs), and their antennas. The process of optimal RFID system design is presented with regard to issues concerning security, ethics, and the protection of consumer data.

Chapter 1 introduces basic definitions, and also presents RFID technology and its place in the area of contact and wireless methods of automatic identification. Advantages of RFID technology, and the business problems that should be solved when implementing system solutions in automated identification processes, are presented in this chapter.

Chapter 2 presents a broad comparison of short-range communications systems. The comparison is preceded by a brief discussion of basic questions concerning antennas and wave propagation. The comparison considers short-range wireless technologies in which people or devices are connected anywhere and anytime by different types of communication links. At present these include wireless local area networks (WLANs); wireless personal area networks (WPANs) with Bluetooth and ZigBee technologies; wireless body area networks (WBANs), where implanted medical devices and on-body sensors communicate with an external environment by inductive coupling between antennas (e.g., in LF and HF RFID systems); and radio communication systems working in ultrawideband (UWB) technology.

Chapter 3 presents the essence, base of construction, and principles of
operation of single and anticollision RFID systems with inductive (LF, HF) and propagation (UHF) coupling. Special attention is paid to the economic potential of the electronic product code (EPC), which should, for example, replace the EAN-UCC bar codes applied universally in the area of fast-moving consumer goods. The potential of RFID technology is discussed through many examples of automatic identification applications in the areas of industry, commerce, and services where items are handled and associated, and data are collected and processed.

Chapter 4 presents the most important aspects of many existing and constantly modified regional regulations and standards. It concerns first of all the frequency bands (from LF to microwave), power emissions, safety of RFID system use, and so on. Consistently, this chapter presents the regulations and standards concerning the electronic product code, which is compatible with the EPC Class 1 Gen 2 protocol, standardized
as ISO 18000-6c on the initiative of EPCglobal. This organization is the leader in the development of industry-driven standards for the EPC to support the use of RFID technology in fast-moving, information-rich trading networks.

Chapter 5 presents the characteristics of RFID system elements, such as passive, active, and semipassive tags and read/write devices. The fundamental relationships that describe the interrogation zone conditions of single and anticollision RFID systems with inductive and propagation coupling are also presented. This part of the problem of interrogation zone synthesis is the basis for practical use of RFID systems required for specific applications in different business activities.

Chapter 6 presents the essential factors that condition the process of optimal RFID system design in order to successfully implement every business project on the basis of an organization's individual requirements. Aspects of RFID system reliability, such as the efficiency coefficient and the object
identification probability in a specific interrogation zone, are also considered.

Chapter 7 presents a rational approach to the relations among business needs, security and protection of consumer data, and the ethical and moral dilemmas of RFID technology. Such a combined view helps in creating the most useful RFID system.

The book is an interesting publication and differs from many available in that it presents, in a very synthetic way, the basis of synthesis, implementation, and testing of RFID systems in order to raise business process efficiency. The "review questions and problems" sections included after each chapter of this book are exceptionally useful. They raise the educational value of the book, and help the reader approach the problems of RFID technology and the area of its use. I strongly recommend this book to integrators of RFID systems, and to researchers and electrical engineering students interested in the conditions of operating, designing, and implementing the most popular LF, HF, and UHF RFID systems.
SOCIETY NEWS

IN MEMORY OF HENRICH LANTSBERG
BY ALEX GELMAN, STEVE WEINSTEIN, CURTIS SILLER, AND DOUG ZUCKERMAN

It is with the deepest regret, but also with fond memories, that the leadership of the IEEE Communications Society pays tribute to Henrich Lantsberg, who, perhaps more than any other single individual, was responsible for the strong relationship between the Popov Society and the IEEE Communications Society, and for advancing our activities in Russia. A Russian friend of the IEEE and ComSoc, Henrich passed away on 20 January 2009 after a long illness. Born in Moscow in 1922, he was a WW-II veteran who participated in the epic meeting of Russian and American troops on the Elbe River. He founded, or assisted in the founding of, the 14 Chapters of the IEEE Russia Section, and was especially active in the Professional Communications, Broadcasting
Technology, and Communications Chapters. He was instrumental in organizing the first ComSoc/Popov Society joint workshop, Moscow Internet '99. Dr. Lantsberg was a true friend to the IEEE Communications Society for many, many years. Henrich was the bridge between IEEE Russia and the rest of the world. He handled relationships with IEEE headquarters, Region 8, and several IEEE societies. He was a dedicated and hard worker who was also a very kind and generous person, always optimistic and friendly. In spite of his long debilitating illness caused by a weak heart, and the tremendous difficulty in walking that he suffered in recent years, Henrich never retired. He is survived by his wife Valentina and daughter Svetlana.
Customer Innovation Drives Our Connectivity Solutions
[Brand logos: Trompeter, Stratos, Vitelec, Semflex, Viewsonics, Midwest Microwave, AIM-Cambridge, Johnson]
Let Emerson Network Power Connectivity Solutions be your answer for quality connections. Our extensive product offering features radio frequency, microwave, and fiber optic interconnect components and assemblies. Emerson's connectivity products are relied upon in data networks, wireless communications, telephony, broadcast, defense, security systems, health care, and industrial markets. For more information please visit us at www.emersonnetworkpower.com/Connectivity or contact us at [email protected].
Wireline, Wireless and Optical Connectivity Solutions.
Just another reason why Emerson Network Power is the global leader in enabling Business-Critical Continuity™.
Connectivity Solutions
CONFERENCE REPORT

RECORD NUMBER OF ATTENDEES EXPLORE LATEST CONSUMER NETWORKING TECHNOLOGIES AT CCNC 2009

Dedicated to the latest consumer communications and networking advances in devices, services, and applications, the annual IEEE Consumer Communications and Networking Conference (CCNC) continues to rise in both prominence and attendance. Held concurrently with the Consumer Electronics Show (CES) in Las Vegas from January 10th to the 13th, the 2009 conference extended its emergence as a leading industry event as nearly 450 attendees, a 10 percent increase from last year's total, participated in approximately 300 technical presentations, workshops, demonstrations, and keynote addresses. Another telling sign of the conference's growing importance was the overall response to the "Call for Papers" from leading researchers, academics, and business professionals worldwide. CCNC 2009 had more than 450 full paper submissions, a record number and a 33 percent increase from 2008. In addition, the event also received nearly 225 short paper submissions, an increase of 100 percent from the prior year.

Several consumer electronics industry giants such as Panasonic, Samsung, and Nokia were among this year's patrons. In addition, Motorola donated a cell phone and two GPS systems as gifts for the "Communications Gadget of the Year" raffle, while Samsung supplied two flat screen televisions to display conference programming through the course of CCNC 2009.

[Photo: Soon Ju Kang, Kyungpook National University, Korea, giving a demo of "U-FIPI: Ubiquitous Sensor Network Service Infra Supporting Bidirectional Location-Awareness between Mobile Nodes and Fixture Nodes."]

Highlighting the theme of "Empowering the Connected Consumer," Jim Battaglia, vice president of strategic business development for Panasonic Research & Development Center of America, opened the event on Saturday evening when he spoke on "Connected Entertainment Devices: Past, Present & Future." Discussing the "maturation of the consumer electronics marketplace," Battaglia detailed the industry's collaborative effort to make online and traditional desktop fare as ubiquitous as DVDs, with connectivity available from just about any location, including cars and in-flight cabins.

Fred Kitson, corporate vice president of the Applied Research & Technology Center (ARTC) at Motorola, Inc., continued this conversation the next morning with his address on "The Power of Communications + Content + Community." During his presentation, Kitson spoke on the consumer demand for one-stop shopping of Internet services and the growing synergy among manufacturers and other providers to make "any content on any device available anywhere you want." Later that evening, Patrick Barry, vice president of Connected TV, Connected Life at Yahoo! Inc., further elaborated on the next wave of consumer communications by describing the future availability of on-demand networking systems that will one day provide consumers with anywhere and anytime access to entertainment and information services, regardless of location.

At the conference banquet, which was held Monday evening, keynote speaker Hwan Woo Chung, senior vice president of Samsung
Telecommunications America, also addressed "The Connected Device" theme after the presentation of the conference's best student paper, best paper, and best demonstration awards. Included in the honors was student author Jinglong Zhou of the Delft University of Technology in the Netherlands, who received the "Best Student Paper Award" for "A Novel Link Quality Assessment Method for Mobile Multi-Rate Multi-Hop Wireless." Also cited for their contributions were Dr. Martin Jacobsson, Dr. Ertan Onur, and Prof. dr. ir. Ignas Niemegeers, his co-authors from the Delft University of Technology. "DT-Talkie: Interactive Voice Messaging for Heterogeneous Groups in Delay Tolerant Network," presented by Md. Tarikul Islam from the Helsinki University of Technology in Finland, was selected as CCNC 2009's "Best Demonstration." A committee of venture capitalists, including Jim Smith of Mohr Davidow Ventures, Yatin Mundkur of Artiman Ventures, and Marcin Matuszewski of FutureInvest, honored the contribution for the clear way it demonstrated the distribution of voice messaging without the use of a central server or an end-to-end communication path. Rounding out the evening's ceremonies was the "Best Paper" honor, which was given to Remi Bosman, Johan Lukkien, and Richard Verhoeven of the Technische Universiteit Eindhoven in the Netherlands for their presentation on "An Integral Approach to Programming Sensor Networks."

As for the conference's technical program, 117 presentations were accepted from nearly 350 paper submissions, for a 35 percent acceptance rate. Of particular note was the concentrated focus of these papers, which in many cases detailed the technology research and product development surrounding:
* The next generation of mobile television and IPTV services
* Recent trends in distributed systems and peer-to-peer technologies
* Advances in routing mechanisms and network protocols used in radio frequency identification (RFID) technologies
The event's demonstration track was also widely successful due to its recurring theme, which resonated with the latest healthcare and sensor networking advances, as well as demonstrations of applications that included traffic monitoring in Las Vegas and the worldwide collection of geological data used to better predict natural disasters such as tsunamis.

[Photo: Andrew Dempster, University of New South Wales, Australia, presenting the first paper in the Beyond GPS session.]

[Photo: John Buford, CCNC 2009 TPC Chair, presenting the Best Student Paper Award to Jinglong Zhou of Delft University.]

Other program highlights included three panel discussions that explored new trends in consumer communications and networking services. These panels, which were led by leading industry executives, discussed topics ranging from the development of standards and solutions involved in consumer device management to the proliferation of context-aware information services and the surrounding privacy issues.

With consumer and industry interest clearly on the rise for the latest consumer networking technologies, planning has already begun for the 7th annual CCNC conference, which will be held in Las Vegas from January 9-12, 2010. Specific industry tracks will cover Wireless Home Communications & Networking, Smart Spaces & Personal Area Networks for Consumer Electronics, Multimedia Communications & Services, Content Distribution & Peer-to-Peer Networks, Security and Content Protection for Consumer Electronics, and Pervasive and Ambient Applications. For more information, interested researchers, academics, and business professionals are urged to visit www.ieee-ccnc.org/2010 and review the CCNC 2010 "Call for Papers."
HISTORY OF COMMUNICATIONS
EDITED BY MISCHA SCHWARTZ
INTRODUCTION

The article for this month's History of Communications column, a summary by Hisashi Kobayashi of the history of partial-response signaling, is one in a continuing series in which we have pioneers in a significant communications discipline describe their work and that of associates in the field. You will note that Dr. Kobayashi, while focusing on the applications of partial-response maximum-likelihood technology to digital magnetic recording (he was working for IBM at the time), does not neglect to single out the early pioneering work of the late Adam Lender on duobinary transmission, the analogous technology applied to communication transmission. Other early contributors to this work are noted as well. Such is continuously the case with many of our significant systems and technologies: they have relevance and application in multiple fields. This is what makes the study of the history of communications so fascinating and so up-to-date. As one of my younger colleagues mentioned just the other day, we are continually in danger of re-inventing the wheel. This is why it is important to scan the history of our field and related areas, not just for the excitement of revisiting the early stages of an important invention or system concept, but to note that the original ideas and concepts of the pioneering workers in the area still have relevance and significance today. As Dr. Kobayashi cogently notes, these original ideas and developments of the 1960s and early 1970s in partial-response signaling have evolved into the vital and huge magnetic recording industry of today.
PARTIAL-RESPONSE CODING, MAXIMUM-LIKELIHOOD DECODING: CAPITALIZING ON THE ANALOGY BETWEEN COMMUNICATION AND RECORDING
HISASHI KOBAYASHI, PRINCETON UNIVERSITY

ABSTRACT

Signal processing and coding technology for digital magnetic recording is the core technology of the channel electronics module in a hard disk drive (HDD) that processes signals read from magnetic media. In this historical review I focus on what is now widely known as partial-response, maximum-likelihood (PRML) technology, which takes advantage of the inherent redundancy that exists in signals read out of magnetic media; its theoretical foundation goes back to 1970, and it capitalizes on the analogy between high-speed data transmission and high-density digital recording, and that between a convolutional code and a partial-response signal. The first PRML-based product was introduced by IBM in 1990, and PRML technology soon became the industry standard for all digital magnetic recording products, ranging from computers' HDDs and tape drives to micro hard discs used in PCs, mobile phones, and MP3 players; use of the PRML principle has recently been extended to optical recording products such as CDs and DVDs. Its improved version, called NPML (noise-predictive, maximum-likelihood), and variants have been adopted by the HDD industry since 2000. Today, a large number of communication and information theory researchers are investigating use of advanced techniques such as turbo coding/decoding to further improve the density and reliability of both magnetic and optical recording systems.
INTRODUCTION

The IBM RAMAC, the first HDD, introduced in 1956, had a storage capacity of a mere 4.4 Mbytes, and the price per megabyte was as high as $10,000; 50 years later, in 2005, a microdrive contained 9 Gbytes, and the price per megabyte was less than $0.03. In this 50-year period the areal density has grown from 2 × 10⁻³ Mb/in² to 3.4 × 10⁴ Mb/in², a phenomenal gain of 17 million times! Such dramatic growth in storage capacity and shrinking cost per bit is a result of the compounding effects of significant progress made in key components: track position control, head sensitivity, high-speed writing, media signal-to-noise ratio (SNR), head disk spacing, and signal processing. The signal processing and coding technology for HDDs is the essence of the channel electronics module in an HDD that processes signals read from the magnetic media [1].
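The quoted density gain is easy to sanity-check; the one-line computation below simply reuses the figures from the paragraph above and confirms the "17 million times" claim:

```python
# Areal density gain quoted above: from 2e-3 Mb/in^2 (1956) to 3.4e4 Mb/in^2 (2005).
print(f"{3.4e4 / 2e-3:,.0f}x")  # prints 17,000,000x, i.e., 17 million times
```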
PRE-1970 SIGNAL PROCESSING AND CODING FOR MAGNETIC RECORDING

The conventional method of magnetic recording used either the non-return-to-zero (NRZ) or NRZ-inverse (NRZI) method. In NRZ recording, one direction of magnetization corresponds to a 1, while the opposite direction corresponds to a 0 in the data; in NRZI, a 1 is recorded as a transition of magnetization and a 0 as no transition. If the read-head uses an inductive coil, the induced voltage at the read-head output will be proportional to the rate of change in the magnetic flux as the read-head passes over the medium. Thus, the relationship between the readback voltage r(t) and the magnetization m(t) should be written as [2]

r(t) = h(t) ⊗ dm(t)/dt,   (1)

where ⊗ denotes the convolution operation, and h(t) represents the magnetic-head field distribution, characterized by the response due to a unit step function in m(t).

The conventional detection method of NRZI recording interpreted the presence of a pulse in the readback signal as 1 and the absence of a pulse as 0. This was often realized by passing the output voltage signal through a rectifier and then through a threshold detector. Furthermore, the conventional signal processing method for the readback signal used the so-called peak detection (PD) method (see, e.g., [3]), in which peak levels in the output voltage signal were searched, and the sampled values were compared to a threshold for a binary decision. But as one attempted to store the information bits more densely on the medium, the PD method failed because:
• The height of a peak became not much larger than the background noise.
• Neighboring peaks came closer and collapsed into one peak.
• The position of peaks significantly shifted, sometimes beyond the neighboring bit boundaries.
These "pulse crowding" effects set the limit on recording density in the conventional technique. The run-length limited (RLL) codes pioneered by Donald Tang [3, 4, references therein] were the main techniques available to mitigate the adverse effects of pulse crowding.
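Equation 1 and the pulse-crowding failure modes above lend themselves to a small numerical illustration. The following sketch is not from the article: it assumes a Lorentzian step response h(t) (a common textbook model) and illustrative values for the bit period T and pulse width PW50.

```python
import numpy as np

# A minimal numeric sketch of Eq. 1, r(t) = h(t) convolved with dm(t)/dt.
# The Lorentzian step response h(t) = 1 / (1 + (2t/PW50)^2) is a standard
# textbook model and, like the T and PW50 values chosen here, is an
# assumption for illustration only; it is not specified in the article.

def readback(bits, T=1.0, pw50=1.5, oversample=50):
    """NRZ magnetization -> noiseless readback voltage samples."""
    dt = T / oversample
    # Saturated bipolar magnetization m(t): +/-1 held for one bit period.
    m = np.repeat([1.0 if b else -1.0 for b in bits], oversample)
    # dm(t)/dt is approximately a train of impulses at the transitions.
    dm_dt = np.diff(m, prepend=m[0]) / dt
    # An isolated transition reads back as a Lorentzian pulse.
    t = np.arange(-3 * pw50, 3 * pw50, dt)
    h = 1.0 / (1.0 + (2.0 * t / pw50) ** 2)
    return np.convolve(dm_dt, h, mode="same") * dt  # Eq. 1, up to scale

# A single recorded 1 produces two opposite-polarity transition pulses.
# At low density (large T) they read back as two distinct peaks; at high
# density they partially cancel and shift: the "pulse crowding" effect.
bits = [0, 0, 0, 0, 1, 0, 0, 0, 0, 0]
sparse = readback(bits, T=4.0)
dense = readback(bits, T=1.0)
print(f"peak amplitude: sparse {sparse.max():.2f}, dense {dense.max():.2f}")
```

Running the sketch shows the dense-packing peak visibly reduced relative to the sparse one, which is the amplitude-loss symptom listed in the first bullet above.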
ANALOGY BETWEEN MAGNETIC RECORDING CHANNEL AND PARTIAL-RESPONSE CHANNEL
The "pulse crowding" effect alluded to by the digital recording community prior to 1970 was equivalent to intersymbol interference (ISI) in digital data transmission. Unlike analog signal (e.g., audio) recording, digital recording uses saturation recording, in which the driving current in the recording head coil is switched from one saturated level to the opposite saturated level so that the readout signal has a large SNR. This magnetization process is inherently nonlinear.

I joined the Communication Theory group at the IBM Research Center at Yorktown Heights, New York in 1967, and my primary assignment was to investigate methods to mitigate the ISI problem in data transmission over voice-grade lines. As a sideline, I got attracted to the magnetic recording research on which my colleague Don Tang was working. Although I immediately noticed the similarity between pulse crowding and ISI, my attempt to treat the digital recording system as a linear channel was not readily accepted by magnetic recording experts at IBM. The use of saturation recording, its hysteresis characteristics, and signal-dependent noise all compounded to discourage them from treating a magnetic recording system as a linear system.

But once binary information is stored as a saturated bipolar signal m(t), the readout process is a linear operation, as given in Eq. 1. Thus, my argument was that if the nonlinear distortion introduced in the writing process was negligible or could be precompensated by proper shaping of the writing current, the magnetic recording system could be approximated by a linear model as far as the readback process is concerned. If ISI was introduced by an increase in recording density, it should be eliminated by an equalizer; so went my argument.

My 1970 paper with Don Tang [2] proposed that the recording channel should be treated just like a data transmission channel, and that the readout signal x(t) should be sampled at regular intervals, t = nT (n = 0, 1, 2, ...), instead of sampling x(t) at the instants of peak values as practiced in the conventional peak detection method. If the ISI is removed by an equalizer, the sampled output x_n = x(nT) is a three-level signal, represented by +1, 0, -1 after proper scaling. In NRZ recording the sampled sequence {x_n} is related to the binary data sequence {a_n} by

x_n = a_n - a_{n-1}, n = 0, 1, 2, ...,  (2)

which can be compactly written in polynomial form,

X(D) = (1 - D)A(D) = G(D)A(D),  (3)

where D is the delay operator. The transfer function G(D) = 1 - D is the "difference operator," a discrete-time counterpart of the "differential operator" involved in the readback process represented by Eq. 1.

In data transmission, the subject of my primary assignment, I learned that Adam Lender (1921-2003) of GTE Lenkurt discovered in 1963 that as he increased the transmission rate of binary signals close to the Nyquist rate of a bandlimited channel, the ISI became so pronounced that the output signal suddenly turned into a three-level signal: if two adjacent pulses are both positive and move close to each other, they merge into a large positive pulse; if two negative pulses push closer together, they end up as a large negative pulse; if the adjacent pulses are opposite in polarity, they cancel each other as they are pushed closer together. So the sampled channel output forms a three-level sequence. If we label these three levels 0, 1, and 2, the corresponding channel is represented by G(D) = 1 + D; Lender called this high-speed signaling scheme the duobinary technique [5]. Similarly, he termed a data transmission channel with G(D) = 1 - D^2 modified duobinary. A general class of signaling schemes that can be characterized by a finite polynomial G(D) with integer coefficients is referred to as correlative-level coding (see Adam Lender, IEEE Spectrum, February 1966). Ernest R. Kretzmer of Bell Telephone Laboratories coined the term partial-response channel for this class of binary data transmission channels, and referred to duobinary and modified duobinary as Class-1 and Class-4, respectively [6].

Note that G(D) = 1 + D in Lender's duobinary signaling is a result of intentionally pushing the transmission speed well beyond the conventionally tolerable rate, whereas the G(D) = 1 - D we defined for the magnetic recording channel is due to the inherent differential operation in the readout process. But mathematically they are quite similar.

Don Tang and I showed in [2] that a magnetic recording channel can be shaped into a partial-response channel with the transfer function G(D) = (1 - D)P(D), where P(D) is any polynomial in D. The simplest choice is P(D) = 1 + D, which gives G(D) = (1 - D)(1 + D) = 1 - D^2, which we termed Interleaved NRZI [7]. The overall transfer function of Interleaved NRZI is equivalent to Lender's modified duobinary and Kretzmer's partial-response Class-4 for data transmission. Thus, in the magnetic recording community, our interleaved scheme is often referred to as the "PR4" signal [3, 8] (Fig. 1). The next simple choice is P(D) = (1 + D)^2 = 1 + 2D + D^2, also proposed in our paper [2], which results in G(D) = (1 - D)(1 + D)^2 = 1 + D - D^2 - D^3. This partial-response channel is referred to as extended PR4 or EPR4 in the magnetic recording community [3].

[Figure 1. Partial-response class-4 (PR4) channel: G(D) = 1 - D^2. The sampling rate is 1/T. Block diagram: write current, shaping filter, PR channel, noiseless readback signal sampled to give x_n = a_n - a_{n-2}.]
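To make the three-level nature of the PR4 output concrete, here is a minimal simulation sketch. It is my own illustration, not from [2], and the function and variable names are hypothetical. It forms the noiseless sampled output x_n = a_n - a_{n-2} for a random bipolar NRZ sequence; the output takes only the values -2, 0, +2, which become -1, 0, +1 after scaling by one-half, as noted above.

```python
import numpy as np

def pr4_output(bits):
    """Noiseless sampled output of a PR4 channel, G(D) = 1 - D^2.

    bits : 0/1 data sequence, mapped to bipolar NRZ symbols a_n in {-1, +1}.
    Returns x_n = a_n - a_{n-2} (zero initial state assumed), a three-level
    sequence taking values in {-2, 0, +2}; scaling by 1/2 gives {-1, 0, +1}.
    """
    a = 2 * np.asarray(bits, dtype=int) - 1       # NRZ mapping: 0 -> -1, 1 -> +1
    a_delayed = np.concatenate(([0, 0], a[:-2]))  # a_{n-2}
    return a - a_delayed

rng = np.random.default_rng(0)
bits = rng.integers(0, 2, size=12)
x = pr4_output(bits)
print("data bits :", bits)
print("PR4 output:", x)
# The even- and odd-indexed samples form two independent dicode (1 - D)
# channels, which is why the scheme was named "Interleaved NRZI".
assert set(np.unique(x)).issubset({-2, 0, 2})
```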
MAXIMUM-LIKELIHOOD DECODING ALGORITHM AND EQUALIZATION OF THE PR SIGNAL

From September 1969 to April 1970 I took a sabbatical leave from IBM Research to teach signal detection theory and information theory in the System Science Department of the University of California at Los Angeles, where I had an opportunity to learn directly from Andrew Viterbi about his new nonsequential decoding algorithm for convolutional codes [9], that is, the Viterbi algorithm he published in 1967. Jim Omura, who joined the department as an assistant professor in 1969, had just shown the equivalence of the Viterbi algorithm to Bellman's dynamic programming (IEEE Transactions on Information Theory, January 1969). I soon recognized an analogy between a convolutional encoder and a partial-response channel: both can be represented as a linear finite state machine, the former defined over a binary Galois field and the latter over the real number field. It then became quite apparent that the Viterbi algorithm should be equally applicable to a partial-response (PR) channel. The analysis and simulation I performed soon after I returned to IBM Yorktown Heights confirmed that the maximum-likelihood (ML) decoding algorithm could gain as much as 3 dB in SNR compared with bit-by-bit detection. Its advantage over the "ambiguity zone" detection method [10], an algebraic decoding algorithm with an "erasure" option that I had been working on with Don Tang, was also demonstrated. I published these results in the IBM Journal of Research & Development [11] for the magnetic recording audience, and in the IEEE Transactions on Information Theory [12]. These papers [2, 11, 12] laid the theoretical foundations of what was later called PRML in the digital recording community [3, 8]. Around the same time Dave Forney was developing the idea of applying the Viterbi algorithm to a general class of ISI channels, as discussed in his seminal paper [13]. Digital communication products based on Forney's maximum likelihood sequence estimation (MLSE) scheme, referred to as the Viterbi equalizer in GSM-related literature, were finally introduced to the mass market around 1995.
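The two-state trellis of the dicode channel G(D) = 1 - D makes the ML (Viterbi) detector easy to sketch. The following illustration is mine, with hypothetical names, and is not the original IBM analysis; it detects bipolar data over one dicode interleave of a PR4 channel in additive noise by minimizing a squared-error path metric. A PR4 detector simply runs one such detector on the even samples and one on the odd samples; the roughly 3 dB SNR gain over bit-by-bit detection reported in the article comes from exactly this kind of sequence detection.

```python
import numpy as np

def viterbi_dicode(y, a_init=-1):
    """ML sequence detection for the dicode channel x_n = a_n - a_{n-1},
    with a_n in {-1, +1}, observed as y_n = x_n + noise. The trellis state
    is the previous symbol; the branch metric is the squared error."""
    INF = float("inf")
    states = (-1, +1)
    metric = {s: (0.0 if s == a_init else INF) for s in states}
    paths = {s: [] for s in states}
    for r in y:
        new_metric, new_paths = {}, {}
        for s in states:  # s = candidate current symbol a_n
            cands = [(metric[p] + (r - (s - p)) ** 2, p) for p in states]
            best, prev = min(cands)
            new_metric[s], new_paths[s] = best, paths[prev] + [s]
        metric, paths = new_metric, new_paths
    return np.array(paths[min(metric, key=metric.get)])

# Demo on one interleave: known initial symbol, moderate Gaussian noise.
rng = np.random.default_rng(1)
a = rng.choice([-1, 1], size=500)
x = a - np.concatenate(([-1], a[:-1]))     # dicode output, a_{-1} = -1
y = x + 0.5 * rng.standard_normal(a.size)  # noisy observations
a_hat = viterbi_dicode(y)
print("symbol errors:", int(np.sum(a_hat != a)), "out of", a.size)
```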
DEVELOPMENT OF PRML-BASED HDD PRODUCTS

Although the potential significance of the proposed scheme combining partial-response (PR) channel coding and maximum-likelihood (ML) decoding was recognized by some of IBM's magnetic recording experts, the scheme was considered too expensive to implement circa 1970, when microprocessor-based signal processing technology was in its infancy. Even analog-to-digital conversion was an expensive proposition. In 1971 the mission of communications research within IBM Research moved to the Zurich Laboratory, and I was appointed manager of a newly created System Measurement and Modeling group in the Computer Science Department; thus, I was no longer able to work further on PRML or push its technology transfer. Several industrial laboratories in the United States and Japan reportedly conducted experiments and built prototypes by 1980 (e.g., Robert Price of Sperry Research Center and the late Dr. K. Yokoyama of NHK Laboratory in Tokyo). In the 1980s a team of researchers led by François Dolivo in Gottfried Ungerboeck's group at the IBM Zurich Research Laboratory conducted extensive simulations, built a working prototype that incorporated novel timing recovery and equalization algorithms, and succeeded in transferring PRML technology to the IBM Storage System Division in Rochester, Minnesota. This series of technological developments is reported in [8] and references therein. In 1990 IBM Corporation introduced a new generation of 5.25-inch HDD incorporating a PRML channel. Magnetoresistive (MR) read heads, another major breakthrough technology, were incorporated in the following year, 1991.
Since then, practically all HDDs have adopted MR read heads and the PRML channel, and the rate of increase in HDD areal density has jumped from the traditional 25 percent compound growth rate (CGR) to 60 percent CGR or higher, as the external analog filter, digital finite impulse response (FIR) filter, and equalization technology associated with the PRML channel were further improved, together with great advances in MR read head and film disk technologies. PRML technology is now adopted not only in HDDs, but also in tape drives and the micro hard disk drives installed in laptop PCs, cell phones, and MP3 players; the PRML principle has recently been extended to optical recording products such as CDs and DVDs.
NOISE-PREDICTIVE MAXIMUM LIKELIHOOD

Evangelos Eleftheriou and his coworkers at the IBM Zurich Laboratory [14] more recently proposed enhancing the performance of the traditional PR equalizer by using noise prediction techniques. The resulting noise-predictive PR equalizer consists of a forward linear PR equalizer followed by a linear predictor that whitens the noise. Their scheme, which combines the noise-predictive PR equalizer and ML sequence estimation, is termed noise-predictive maximum likelihood (NPML) detection. The introduction of NPML into HDD products since 2000 has led to a 50-60 percent increase in recording density and has resulted, together with the introduction of the giant magnetoresistive (GMR) read sensor, in 100 percent CGR in areal recording density. Sophisticated signal processing techniques such as PR channel coding, maximum-likelihood sequence estimation, and noise-predictive equalization contribute to this significant increase in density. With use of a proper Reed-Solomon code and run-length-limited (RLL) code, a BER as low as 10^-15 can be achieved. Today a read channel architecture based on NPML detection and noise-predictive parity-based post-processing techniques has become the de facto industry standard for HDDs.
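The noise prediction idea can be illustrated with a short sketch of my own, under simplified assumptions (stationary colored noise, a predictor fitted off-line via the Yule-Walker equations; all names are hypothetical and this is not the NPML implementation of [14]). A linear predictor estimates each noise sample from its past, and subtracting the prediction whitens the noise, reducing the variance seen by the sequence detector.

```python
import numpy as np

def fit_predictor(noise, order):
    """Solve the Yule-Walker (normal) equations for a linear predictor
    n_hat[k] = sum_i p[i] * n[k-1-i], using sample autocorrelations."""
    n = noise - noise.mean()
    acf = np.correlate(n, n, mode="full")[n.size - 1:] / n.size
    R = np.array([[acf[abs(i - j)] for j in range(order)] for i in range(order)])
    return np.linalg.solve(R, acf[1:order + 1])

def prediction_error(x, p):
    """Prediction-error (whitening) filter: e[k] = x[k] - sum_i p[i] x[k-1-i]."""
    e = np.empty(x.size, dtype=float)
    for k in range(x.size):
        past = [x[k - 1 - i] if k - 1 - i >= 0 else 0.0 for i in range(p.size)]
        e[k] = x[k] - float(np.dot(p, past))
    return e

# Colored noise: first-order autoregressive, n[k] = 0.8 n[k-1] + w[k].
rng = np.random.default_rng(2)
w = rng.standard_normal(20000)
n = np.empty_like(w)
prev = 0.0
for k, wk in enumerate(w):
    prev = 0.8 * prev + wk
    n[k] = prev

p = fit_predictor(n, order=2)
e = prediction_error(n, p)
print("predictor taps      :", np.round(p, 3))     # close to [0.8, 0.0]
print("noise variance      :", round(n.var(), 3))  # about 1/(1 - 0.64) = 2.78
print("prediction-error var:", round(e.var(), 3))  # close to 1.0 (whitened)
```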
RECENT PROGRESS IN PRML SYSTEMS

Signal processing and coding for PRML-based digital recording, both magnetic and optical, is now a well-established area of research and development, actively pursued by researchers with communication and information theory backgrounds. Turbo decoding or iterative decoding of partial-response channel output sequences has been discussed by Kobayashi and Bajcsy [15], Souvignier et al. (IEEE Transactions on Communications, August 2000), and Bajcsy et al. (IEEE Journal on Selected Areas in Communications, May 2001). Kavcic et al. (IEEE Transactions on Information Theory, May 2005) discuss low-density parity-check (LDPC) codes for partial-response channels. Recent studies of hidden Markov models (HMMs) show that the Viterbi algorithm and the maximum a posteriori (MAP) algorithm used in turbo decoding are special cases of the forward-backward algorithm (FBA) for hidden Markov chains, and the FBA in turn is a special case of the expectation-maximization (EM) algorithm. Therefore, we anticipate further advances in algorithmic developments for signal processing of digital recording data.
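As a concrete illustration of the forward-backward algorithm mentioned above, the sketch below (my own minimal example with hypothetical names, not drawn from any of the cited papers) computes per-symbol posterior state probabilities for a two-state hidden Markov chain. Per-symbol MAP decisions are the row-wise argmax, which is exactly the kind of soft information a turbo (BCJR-style) detector exchanges.

```python
import numpy as np

def forward_backward(pi, A, B, obs):
    """Posterior state probabilities P(state_t | obs_0..T-1) for an HMM.

    pi  : (S,) initial state distribution
    A   : (S, S) transition probabilities, A[i, j] = P(state j | state i)
    B   : (S, K) emission probabilities,  B[i, k] = P(symbol k | state i)
    obs : sequence of observed symbol indices
    """
    T, S = len(obs), pi.size
    alpha = np.zeros((T, S))            # forward:  P(obs_0..t, state_t)
    alpha[0] = pi * B[:, obs[0]]
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
    beta = np.ones((T, S))              # backward: P(obs_t+1..T-1 | state_t)
    for t in range(T - 2, -1, -1):
        beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])
    gamma = alpha * beta                # combine and normalize per symbol
    return gamma / gamma.sum(axis=1, keepdims=True)

pi = np.array([0.5, 0.5])
A = np.array([[0.9, 0.1],
              [0.1, 0.9]])             # "sticky" two-state chain
B = np.array([[0.8, 0.2],
              [0.2, 0.8]])             # noisy symbol observations
post = forward_backward(pi, A, B, [0, 0, 1, 0, 0])
print(np.round(post, 3))               # MAP decision: argmax of each row
```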
ACKNOWLEDGMENTS

I would like to thank my former IBM colleague, Dr. Donald D. T. Tang, who introduced me to magnetic recording research, and Drs. François Dolivo, Evangelos Eleftheriou, and their team members at the IBM Zurich Laboratory and the IBM Storage System Division in Rochester, Minnesota, for their efforts in turning the theoretical concept into a working prototype and finally into real products.
I am also indebted to the late Dr. Adam Lender and to Drs. Andrew Viterbi, Jim Omura, and David Forney for sharing their knowledge and insights with me during my research on PRML. This article draws on my joint paper with François Dolivo and Evangelos Eleftheriou [1]. I thank Prof. Mischa Schwartz for inviting me to prepare this article, and Dr. Dolivo and the anonymous reviewers for their suggestions to improve this manuscript. Because the editorial policy limits the number of references to 15, I fear that I am doing an injustice to many authors by not including their worthy papers.
REFERENCES
[1] H. Kobayashi, F. Dolivo, and E. Eleftheriou, "35 Years of Progress in Digital Magnetic Recording," Proc. 11th Int'l. Symp. Problems of Redundancy in Info. and Control Sys., Saint-Petersburg, Russia, July 2-6, 2007, pp. 1-10.
[2] H. Kobayashi and D. T. Tang, "Application of Partial-Response Channel Coding to Magnetic Recording Systems," IBM J. R&D, vol. 14, no. 4, July 1970, pp. 368-75.
[3] P. H. Siegel and J. K. Wolf, "Modulation and Coding for Information Storage," IEEE Commun. Mag., vol. 29, no. 12, Dec. 1991, pp. 68-86.
[4] H. Kobayashi, "A Survey of Coding Schemes for Transmission or Recording of Digital Data," IEEE Trans. Commun. Tech., COM-19, no. 6, Dec. 1971, pp. 1087-100.
[5] A. Lender, "The Duobinary Technique for High-Speed Data Transmission," IEEE Trans. Commun. Elec., vol. 82, May 1963, pp. 214-18.
[6] E. R. Kretzmer, "Generalization of a Technique for Binary Data Transmission," IEEE Trans. Commun. Tech., COM-14, Feb. 1966, pp. 67-68.
[7] H. Kobayashi and D. T. Tang, "Magnetic Data Storage System with Interleaved NRZI Coding," U.S. Patent no. 3,648,265, Mar. 7, 1972.
[8] R. D. Cideciyan et al., "A PRML System for Digital Magnetic Recording," IEEE JSAC, vol. 10, no. 1, Jan. 1992, pp. 38-56.
[9] A. J. Viterbi, "Error Bounds for Convolutional Codes and an Asymptotically Optimum Decoding Algorithm," IEEE Trans. Info. Theory, IT-13, no. 2, Apr. 1967, pp. 260-69.
[10] H. Kobayashi and D. T. Tang, "On Decoding of Correlative Level Coding with Ambiguity Zone Detection," IEEE Trans. Commun. Tech., COM-19, no. 8, Aug. 1971, pp. 467-77.
[11] H. Kobayashi, "Application of Probabilistic Decoding to Digital Magnetic Recording Systems," IBM J. R&D, vol. 15, no. 1, Jan. 1971, pp. 69-74.
[12] H. Kobayashi, "Correlative Level Coding and Maximum Likelihood Decoding," IEEE Trans. Info. Theory, IT-17, no. 5, Sept. 1971, pp. 586-94.
[13] G. D. Forney, Jr., "Maximum Likelihood Sequence Estimation of Digital Sequences in the Presence of Intersymbol Interference," IEEE Trans. Info. Theory, IT-18, no. 3, May 1972, pp. 363-78.
[14] J. D. Coker et al., "Noise-Predictive Maximum Likelihood Detection," IEEE Trans. Magnetics, vol. 34, no. 1, Jan. 1998, pp. 110-17.
[15] H. Kobayashi and J. Bajcsy, "System and Method for Error Correcting a Received Data Stream in a Concatenated System," U.S. Patent no. 6,029,264, Feb. 22, 2000.
BIOGRAPHY

HISASHI KOBAYASHI [LF] is the Sherman Fairchild University Professor Emeritus of Princeton University, where he served as dean of the School of Engineering and Applied Science (1986-1991). Prior to joining the Princeton faculty he was with the IBM Research Division (1967-1986), where he held many managerial positions, including founding director of the IBM Tokyo Research Laboratory (1982-1986). Among his technical contributions is his 1970 invention of the high-density digital recording scheme called partial-response coding and maximum-likelihood decoding (PRML) discussed in this article. For this contribution he was awarded, together with Drs. François Dolivo and Evangelos Eleftheriou of the IBM Zurich Research Laboratory, the 2005 Eduard Rhein Technology Award. He has also contributed to data transmission theory and system performance evaluation methodology, especially diffusion process approximation, queuing and loss network models, and their computational algorithms. He authored Modeling and Analysis (Addison-Wesley, 1978) and coauthored with Brian L. Mark System Modeling and Analysis (Pearson/Prentice Hall, 2008). He received the Humboldt Prize (Senior U.S. Scientist Award) from the Alexander von Humboldt Foundation (1979) and IFIP's Silver Core Award (1980). He was elected to the Engineering Academy of Japan (Japan's national academy of engineering) in 1992. He has served as a scientific advisor for numerous organizations in the United States, Japan, Canada, and Singapore. Currently he resides in Manhattan and is authoring textbooks on probability, statistics, and random processes; network protocols, performance, and security; and digital communications and networks. He also serves as a technical advisor for the National Institute of Information and Communications Technology of Japan on their new-generation network architecture project called AKARI.
VERY LARGE PROJECTS/EDITED BY KEN YOUNG
TELECOMMUNICATION SOLUTIONS FOR EUROPEAN LEADERSHIP IN TELECOMMUNICATIONS

ANTONIO SANCHEZ, JOSÉ JIMENEZ, BELÉN CARRO, HEINZ BRÜGGEMANN, AND PETER HERRMANN

In past issues of this column there have been articles about European Commission (EC) funded programs and also national programs (e.g., in Spain), both focusing on services and applications. To provide a holistic view of European programs, there is an important missing piece: the so-called Eureka program, which is jointly funded by 41 countries. Within Eureka, the cluster devoted to telecommunications is called Celtic (Cooperation for a Sustained European Leadership in Telecommunications), with a budget in the range of €1 billion.
EUREKA

Eureka, the European Research Cooperation Agency, is a pan-European network for market-oriented industrial R&D. Created as an intergovernmental initiative in 1985, it counts mostly European members (plus Israel and Morocco as associated countries), and the European Union itself is also a member. Its intention is to extend its reach to all European countries and to include non-European countries such as Canada and South Korea. Eureka aims to enhance European competitiveness through its support of businesses, research centers, and universities that carry out pan-European projects to develop innovative products, processes, and services. As a measure of the magnitude of the program, as of June 2008 the number of running projects was 693, with a total budget for these projects of €1.4 billion and 2623 organizations involved (large companies, 478; small and medium enterprises [SMEs], 1157; research institutes, 503; universities, 432; governments/national administrations, 53). Projects are classified according to technological areas, one of them being Electronics, IT and Telecomms, which in turn has five subareas: Electronics, Microelectronics; Information Processing, Information Systems; IT and Telematics Technology; Multimedia; and Telecommunications. Outstanding projects are recognized yearly through two awards: the Lillehammer Environmental Award and the Lynx Business Award. Operating through a decentralized network, Eureka awards its internationally recognized label, which facilitates access to national public and private funding schemes. This label can be
awarded on an individual project basis by the public authorities of the countries that belong to the project consortium (e.g., CDTI, the Centre for the Industrial Technological Development of the Ministry of Industry, Tourism and Trade in Spain). Eureka Umbrellas are thematic networks that focus on a specific technology area or business sector, and have as their main goal facilitating the generation of projects. Examples of Umbrellas are in the fields of e-Content, Tourism, Laser, Robotics, and Transport. Alternatively, label awarding can be delegated to so-called Eureka Clusters. Clusters are long-term, strategically significant industrial initiatives. They usually have a large number of participants, and aim to develop generic technologies of key importance for European competitiveness, primarily in ICT (and, more recently, in energy and biotechnology). Currently the following clusters exist (in addition to Celtic):
• MEDEA+ (2001-2008)/CATRENE (2008-2012): Cluster for Application and Technology Research in Europe on NanoElectronics
• EURIPIDES (2006-2013): Eureka Initiative for Packaging and Integration of Microdevices and Smart Systems
• ITEA (1998-2009)/ITEA 2 (2006-2014): Information Technology for European Advancement: Software for Software-Intensive Systems and Services (SiS); probably the cluster most closely related to Celtic
• EUROFOREST (1999-2009): Medical and Biotechnology
• EUROGIA (2004-2008)/EUROGIA+ (2008-2013): Energy
In 2000 the EU decided to create the European Research Area (ERA), a unified research area across Europe, contributing among other things to the EU objective of devoting 3 percent of GDP to research by 2010. ERA has become a central pillar of the EU Lisbon Strategy for growth and jobs (2007). In this sense the Eureka initiative complements the EU's Framework Programme (FP) in working actively toward this common European objective. This complementarity first comes from the different sources of public cofunding (EC vs. national governments). Additional differences between the programs come from the features of Eureka projects:
• Typically closer to the market (strong industrial core and clear market prospects)
• More technologically mature (in terms of technology readiness level [TRL], the FP spans the TRL 1-6 range, whereas Eureka spans the TRL 5-7 range)
• A great deal of freedom in the choice of topics addressed (as opposed to FP work programmes with a preassigned budget per topic)
• Less strict deadlines for applying, thus allowing a better fit with market demand and technological maturity
• A two-stage evaluation process, first for obtaining the label, second for obtaining the funds at the national level
It is also worth mentioning Eureka's Eurostars program, specifically targeted at SMEs. There are different ways of financing and risk sharing in each Eureka country, and this lack of synchronization has always been a challenge. In this regard Eurostars represents a landmark, since the EU has approved European Community top-up of national financing for Eurostars projects through the FP. Between the EC FP and Eureka are the newly created Joint Technology Initiatives (JTIs), which are cofunded by both public bodies. Defined as long-term public-private partnerships, JTIs aim to achieve greater strategic focus by supporting common ambitious research agendas in areas that are crucial for competitiveness and growth, assembling and coordinating at the European level a critical mass of research. They couple research tightly to innovation. Two JTIs were launched in February 2008 in the ICT domain:
• ARTEMIS: embedded systems
• ENIAC: nano-electronics
In terms of scale, ARTEMIS was launched in the first year with public funding of €100 million (approximately one-third by the EC and two-thirds by member states, with different levels of contribution by each state) and with a global budget of €2.5 billion throughout the 10-year program.
CELTIC

The Celtic cluster started in 2003 and is currently defined until 2011. The goal of this initiative is to sustain Europe's leadership in telecommunications.
Celtic was officially approved as a Eureka cluster project on 23 October 2003. It is the first European R&D program fully dedicated to end-to-end telecommunications systems. It supports the European telco industry in its transition from an infrastructure- and connection-driven industry (utility) to a services- and applications-driven industry. Its creation was in response to the high strategic relevance of the telecommunications sector in industry and particularly in the innovation domain. Five years later this relevance is even stronger; in 2007 the ICT sector represented 6 percent of employment in the EU-27 and 8 percent of GDP, with 25 percent of its growth, 20 percent of R&D investment, and 40 percent of productivity gain. The founding members include top European telcos and telco vendors: Alcatel-Lucent, British Telecom, Deutsche Telekom, Ericsson, Eurescom, France Télécom, Italtel, Nokia Siemens Networks, RAD Data Communications, Telefónica, Telenor, and Thomson. Turkcell has also recently joined the core group that steers the initiative. The total budget defined for Celtic between 2004 and 2011 is €1 billion. The costs of Celtic projects are shared between governments, who contribute up to 50 percent of the project budget, and private investment. Typical Celtic project budgets are in the €2-20 million range (average: €7 million; the largest project launched hits the €60 million mark). The typical duration is 18 to 36 months (average: two years). To date 50 projects have been started, with 20 more expected to start very soon and another 20 invited to the Full Project Proposal (FPP) phase this year (an increase in labeled projects has been observed in the last two years). By the end of 2008, these 70 projects will have been launched, with a total budget of more than €500 million and over 5000 person-years of effort. Projects usually have from 3 to 15 participants from 3 to 6 countries (average: 8 participants from 4 countries). In terms of participating partners, 12 percent are telcos, with an additional 23 percent from industry, 23 percent SMEs, 24 percent universities, 17 percent research institutes, and 1 percent from government. Approximately 20 Eureka countries support Celtic by being involved in projects, the most active (budget-wise) being France, Spain, Germany, Italy, Finland, Belgium, Sweden, Israel, Norway, Ireland, and The Netherlands. Celtic has yearly calls for proposals, with the sixth call currently active. Submission and evaluation is a two-stage procedure, with a first round consisting
of a Project Outline (PO) followed by a second round of FPPs (only for selected proposals). Groups of experts take part in the evaluation process and assess the quality of the proposals, mainly from a technical perspective. Final decisions on the labeling of projects are taken together with the public authorities of the involved countries. Acceptance rates for Celtic are slightly higher than for EC counterparts, but below 50 percent (e.g., the success rate for a labeled proposal to become a running project is between 60 and 70 percent). Project execution is supervised through two main milestones, the Mid Term Review (MTR) and the Final Review (FR), both carried out by external experts. For the MTR, out of 34 projects, results show that 29 percent are rated excellent, 44 percent good, 15 percent improvements required, and 12 percent strong improvements needed. For the FR, out of 23 projects, 35 percent are excellent, 61 percent good, and only 4 percent acceptable. This shows the efficiency of the MTR in better focusing project work, improving the project's impact at its end (results transferred into products, employment generation, etc.). After the first two rounds of projects (covering basically calls 1 and 2) finished last year, the Celtic Excellence Award was created and awarded to six projects. Celtic holds an annual event in the early spring. The first, in 2006, took place in Dublin; in 2007 it was in Berlin, in 2008 in Helsinki, and in 2009 it is scheduled for 11 and 12 March in Paris, under the title Future Directions in Telecommunications and ICT. Accessible only by invitation, its main objective is to present the current status, available results, and developments of running projects. In parallel to the workshop and conference sessions, project teams demonstrate their achievements and prototypes, and discuss their results. An information day for the upcoming seventh call is also co-located the day before. Celtic publishes a quarterly newsletter, Celtic News, in which project success stories are reviewed along with general articles about the initiative. In planning for the future, the possibility of forming another JTI will be further investigated to ensure that Celtic, as a Eureka cluster, will be engaged in the activities, as well as to foster synergies with its own projects.
CELTIC TECHNICAL DOMAINS: SERVICES AND APPLICATIONS

The objectives and work areas of Celtic are laid down in detail in the Celtic Purple Book (whose current version
was updated in 2007-2008). The major technical domains that constitute the core and focus of the Celtic program are:
• Services and Applications
• Broadband Infrastructures
• Security
The first domain accounts for almost half of the program, as expected from its mission, although the second domain is also very relevant at 38 percent. The first domain faces the challenge of developing and realizing new services and applications, including design and methodologies as well as early testing and validation of the new services. The focus is particularly on broadband and mobile multimedia services. In the multimedia area many changes are coming from distributed networked media, broadcast of content over broadband networks, the advent of home networking and connectivity, and the possible access to content for nomadic users through wireless networks. Key topics coming from Celtic projects include new multimedia services, platforms for service delivery, solving the infrastructure dilemma, mobile integration, network management, security, looking to the customer, and going beyond conventional networks. Like Eureka, Celtic also has strong links with EC programs. The updated Purple Book reflects the creation of the so-called European Technology Platforms (ETPs), fostered by the EC, whose objective is to "provide a framework for stakeholders, led by industry, to define research and development priorities, timeframes and action plans on a number of strategically important issues." Particularly relevant are the four platforms closely related to Celtic objectives:
• The Mobile and Wireless Communications Technology Platform (eMobility)
• The European Initiative on Networked and Electronic Media (NEM)
• The Networked European Software and Services Initiative (NESSI)
• The Integral Satcom Initiative (ISI), the European Technology Platform for Satellite Communications
The unique value of Celtic lies in the development of comprehensive integrated communication system solutions, including both platforms and test vehicles. This concept is at the core of the Celtic Pan-European Laboratory (PanLab), whose aim is a European laboratory enabling the trial and evaluation of service concepts, technologies, system solutions,
and business models to the point where the risks associated with launching these as commercial products will be minimized. Based on the Celtic approach, the European Commission is currently running two projects on the implementation of a PanLab for future Internet test platforms.
CONCLUSIONS

The telecommunications sector constitutes a means to transform the economy to the next level, a factor that is even more important in the current economic situation. One part of the equation is the very high investment needed for new infrastructures in the broadband access domain (both super-broadband fixed infrastructure and third/fourth generation mobile) that will effectively remove any bandwidth barriers existing today. A second part is a single all-IP network for all telecommunications needs, replacing all legacy circuit-switched networks and offering more advanced service delivery platforms. These two ingredients are the basis for the new digital world that will revolutionize product and service offerings through innovation. They will sustain the growth of the economy in general and of the telcos themselves in particular, leading the product and services market against software, hardware, and Internet entrants.
The Celtic raison d'être is precisely this: leveraging European innovation in the telco domain and trying to replicate its large past successes, especially in the mobile field. On the political agenda, Europe's ambitious target of increasing R&D investment has to be coupled with ongoing efforts to truly become an integrated European Research Area, where funds are invested in a synergistic way among public agencies. The Celtic outlook on Joint Technology Initiatives is certainly a sensible step in this direction. Nonetheless, at the end of the day we live in a global economy, and therefore steps toward connection with other top R&D economies are being taken and are a must. Celtic has been running for five years and already has significant success stories, but will achieve even more in the second half of its lifetime.
ACKNOWLEDGMENTS

We wish to acknowledge the public authorities of the member states that cofund the Eureka and Celtic initiatives.
BIOGRAPHIES

ANTONIO SANCHEZ ESGUEVILLAS ([email protected]) [SM] is innovation program manager at the corporate level of Telefónica, having previously coordinated innovation activities in the Services line at Telefónica R&D, Spain. He holds a Ph.D. degree (honors) from the University of Valladolid, where he is also an adjunct professor. He has coordinated very large international R&D projects in the field of value-added services. His current research interests are in the areas of services and applications. He belongs to the Editorial Board of IEEE Communications Magazine, is currently a guest editor of IEEE Wireless Communications, and recently served on the TPCs of ICC, VTC, Healthcom, and PIMRC. He has more than 50 international publications, and several books and patent applications. He is also founding the Technology Management Council Chapter in Spain. Within Celtic and Eureka he has coordinated and participated in several projects, and has also been a reviewer and evaluator.

JOSÉ JIMÉNEZ ([email protected]) is Chairman of Celtic and director for Innovation, in charge of coordinating the innovation activities within Telefónica R&D. A telecommunications engineer, he joined Telefónica in 1983, where he worked on the development of satellite communication systems, the development of UMTS, and the development of planning and measurement tools for GSM. He has also collaborated on several technical books dealing with telecommunications and the information society. He is a member of ETNO R&D and the NEM executive committee.

BELÉN CARRO ([email protected]) has been an associate professor at the University of Valladolid since 1997. She is director of the Communication and Information Technologies (CIT) laboratory, where she has been principal investigator for around 15 competitive call-based international projects funded by the European Commission, Eureka (including four Celtic projects: Macs, Images, Quar2, Pablos), and the European Space Agency, and has also participated in standardization activities in ETSI. She is technical director of FP6 Opuce. She obtained her Ph.D. degree in 2001. From 1997 to 2002 she collaborated with Cedetel (Telecommunications Development Center Institute), where she was area director of Telecommunications Networks and Systems. Her research interests are in the areas of multimedia communications applied to the digital home, service engineering, IP broadband communications, NGN and voice over IP, and quality of service. She has more than 30 international publications and 10 patent applications. She has been a reviewer for IEEE Communications Magazine, IEEE Network, journals of Wiley-Interscience, and several conferences, including IEEE VTC.

HEINZ BRÜGGEMANN ([email protected]) is currently Celtic Director. Previously, he was a telecommunications engineer at Deutsche Telekom in service management and professional education. He spent several years as expert/senior expert for service management and professional education with IT on telecommunications development projects. He was also a program manager and technical manager at Eurescom, Heidelberg, Germany. He has been involved in about 50 R&D projects in telecommunications services, network management, and security.

PETER HERRMANN ([email protected]) is Celtic Program Coordinator. He is responsible for the selection process of R&D proposals in Celtic/Eureka calls for proposals, and for project support and control for successful Celtic projects. Previously he worked in different units of Alcatel. He has 25 patents and 40 publications in optical fiber cables, power systems, superconductivity, cryogenics, and material sciences.
CONFERENCE CALENDAR

2009

MARCH
● WSTS 2009 - Workshop on Synchronization in Telecommunications Systems, 10-12 March, Broomfield, CO. Info: http://tf.nist.gov/timefreq/seminars/WSTS/WSTS.html
■ OFC/NFOEC 2009 - 2009 Conference on Optical Fiber Communication, 22-26 March, San Diego, CA. Info: http://www.ofcnfoec.org
■ IEEE ISPLC 2009 - IEEE Int'l. Symposium on Power Line Communications and Its Applications, 29 March-1 April, Dresden, Germany. Info: http://www.comsoc.org/confs/index.html
● IEEE Sarnoff 2009 - IEEE Sarnoff Symposium, 30 March-1 April, Princeton, NJ. Info: http://ewh.ieee.org/r1/princeton-centraljersey/2009_Sarnoff_Symposium/index.html

APRIL
■ IEEE WCNC 2009 - IEEE Wireless Communications and Networking Conference, 5-8 April, Budapest, Hungary. Info: http://www.ieeewcnc.org/2009
■ IEEE INFOCOM 2009 - 28th Annual IEEE Conference on Computer Communications, 19-24 April, Rio de Janeiro, Brazil. Info: http://www.ieee-infocom.org/2009
● WTS 2009 - Wireless Telecommunications Symposium 2009, 22-24 April, Prague, Czech Republic. Info: http://www.csupomona.edu/wtsi
■ IEEE RFID 2009 - 2009 IEEE Int'l. Conference on RFID, 27-28 April, Orlando, FL. Info: http://www.ieee-rfid.org/2009
● WOCN 2009 - 6th Int'l. Conference on Wireless and Optical Communications Networks, 28-30 April, Cairo, Egypt. Info: http://www.wocn2009.org

MAY
● MC-SS 2009 - 7th Int'l. Workshop on Multi-Carrier Systems & Solutions, 5-6 May, Herrsching, Germany. Info: http://www.mcss2009.org
● PV 2009 - 17th Int'l. Packet Video Workshop, 11-12 May, Seattle, WA. Info: http://www.pv2009.com
● CNSR 2009 - Communication Networks and Services Research 2009, 11-13 May, Moncton, NB, Canada. Info: http://www.cnsr.info/events/csnr2009
■ IEEE CTW 2009 - IEEE Communication Theory Workshop, 11-14 May, St. Croix, U.S. Virgin Islands. Info: http://www.ieee-ctw.org/2008/index.html
■ IEEE CQR 2009 - 2009 IEEE Int'l. Workshop, Technical Committee on Communications Quality and Reliability, 12-14 May, Naples, FL. Info: http://www.ieee-cqr.org/

JUNE
■ IM 2009 - IFIP/IEEE Int'l. Symposium on Integrated Network Management, 1-5 June, Hempstead, NY. Info: http://www.ieee-im.org/2009
● ICUFN 2009 - 1st Int'l. Conference on Ubiquitous and Future Networks, 7-9 June, Hong Kong, China. Info: http://www.icufn.org
● ConTEL 2009 - 10th Int'l. Conference on Telecommunications, 8-10 June, Zagreb, Croatia. Info: http://www.contel.hr
● IWCLD 2009 - Int'l. Workshop on Cross Layer Design 2009, 11-12 June, Mallorca, Spain. Info: http://www.iwcld2009.org
■ IEEE ICC 2009 - IEEE Int'l. Conference on Communications, 14-18 June, Dresden, Germany. Info: http://www.comsoc.org/confs/icc/2009/index.html
■ SECON 2009 - IEEE Communications Society Conference on Sensor and Ad Hoc Communications and Networks, 22-26 June, Rome, Italy. Info: http://www.ieee-secon.com/2009/
● MEDHOCNET 2009 - IFIP Med-Hoc-Net 2009, 29 June-2 July, Haifa, Israel. Info: http://www.ee.technion.ac.il/med-hocnet2009/index.htm

JULY
● NGI 2009 - 5th EURO-NGI Conference on Next Generation Internet Networks, 1-3 July, Aveiro, Portugal. Info: http://www.ngi2009.eu
■ IEEE WiMAX 2009 - 2009 IEEE Mobile WiMAX Symposium, 9-11 July, Napa, CA. Info: [email protected]
■ IWQoS 2009 - Int'l. Workshop on Quality of Service 2009, 13-15 July, Charleston, SC. Info: http://iwqos09.cse.sc.edu
● NDT 2009 - 1st Int'l. Conference on Networked Digital Technologies, 28-31 July, Ostrava, Czech Republic. Info: http://arg.vsb.cz/NDT2009/

AUGUST
● ICCCN 2009 - 18th Int'l. Conference on Computer Communications and Networks, 2-6 Aug., San Francisco, CA. Info: http://www.icccn.org/icccn09/
● ITU K-IDI 2009 - ITU-T Kaleidoscope 2009 — Innovations for Digital Inclusion, 31 Aug.-1 Sept., Mar Del Plata, Argentina. Info: http://www.itu.int/ITU-T/kaleidoscope2009/
■ IEEE EDOC 2009 - 13th IEEE Int'l. Enterprise Computing Conference, 31 Aug.-4 Sept., Auckland, New Zealand. Info: https://www.se.auckland.ac.nz/conferences/edoc2009/

SEPTEMBER
● ISWCS 2009 - Int'l. Symposium on Wireless Communication Systems, 7-10 Sept., Siena, Tuscany, Italy. Info: http://www.iswcs.org/iswcs2009/
● ICUWB 2009 - 2009 IEEE Int'l. Conference on Ultra Wideband, 9-11 Sept., Vancouver, BC, Canada. Info: http://www.ICUWB2009.org
● IEEE LATINCOM 2009 - IEEE Latin America Communications Conference, Medellin, Antioquia, Colombia. Info: http://www.ieee.org.co/~comsoc/latincom
● WiCOM 2009 - 2009 Int'l. Conference on Wireless Communications, Networking and Mobile Computing, 24-26 Sept., Beijing, China. Info: http://www.wicom-meeting.org/
■ Communications Society sponsored or co-sponsored conferences are indicated with a square before the listing; ● Communications Society technically co-sponsored or cooperating conferences are indicated with a circle before the listing. Individuals with information about upcoming conferences, calls for papers, meeting announcements, and meeting reports should send this information to: IEEE Communications Society, 3 Park Avenue, 17th Floor, New York, NY 10016; e-mail: [email protected]; fax: +1-212-705-8999. Items submitted for publication will be included on a space-available basis.
NEW PRODUCTS

QUAD PHOTODIODE ARRAYS FOR 40G AND 100G OPTICAL COMMUNICATIONS

Discovery Semiconductors, Inc.

Discovery Semiconductors has introduced its 4-element quad InGaAs photodiode arrays for 40 Gb/s and 100 Gb/s optical communications. The Quad PD Arrays consist of four photodiodes monolithically integrated on a common InP substrate, and are fabricated using Discovery's high-reliability, low-FIT-rate InGaAs/InP semiconductor process. The Quad PD Array consists of 10 μm, 30 μm, 40 μm, or 50 μm diameter photodiodes. The individual RF bandwidth of the photodiodes is 10, 15, 20, or 40 GHz. The top-illuminated photodiodes work equally well for 1.3 μm and 1.5 μm based communication systems. The optical power handling of each photodiode exceeds +12 dBm, making the array well suited to coherent systems requiring high optical LO power. Target applications include 100 Gb/s long-haul PolMux (D)QPSK, 40 Gb/s long-haul (D)QPSK, and 40G/100G parallel networking. The Quad PD Array can be configured as a 4-channel receiver, or as a 4-channel balanced receiver using two arrays. The Quad PD Array is available as a module with single-mode fiber pigtails with GPPO or coplanar waveguide RF outputs, and is available with or without integrated transimpedance amplifiers. The addition of the Quad PD Array and Quad Balanced Receiver to Discovery's existing photodiode and balanced receiver product lines is a key enabler for next-generation optical systems such as long-haul DWDM 100G, where the receive-side function of the network is most complex and critical. The Quad Array was designed and developed by Mr. Abhay Joshi, CEO of Discovery Semiconductors, for a mm-wave defense project. Significant efforts were devoted to ensuring very high optical and RF isolation between the individual photodiodes, making it highly attractive for applications such as coherent systems.
www.chipsat.com
FULLY INTEGRATED DOHERTY AMPLIFIERS FOR TD-SCDMA AND WCDMA BASE STATIONS

NXP Semiconductors

NXP Semiconductors has introduced fully integrated Doherty amplifiers for TD-SCDMA and WCDMA base stations, expanding its extensive portfolio of RF power transistors. The advanced
BLD6G21-50 and BLD6G22-50 fully integrated amplifiers offer ease of design while delivering efficiency of >40% at an average power of 10 W. This enables 35% lower power dissipation under multi-carrier signal operation compared to class AB amplifiers. The new fully integrated Doherty amplifier is plug-and-play and can be applied in the same way as a standard class AB transistor, hence speeding time to market. The NXP BLD6G21-50 and BLD6G22-50 amplifiers bring savings in form factor and design effort, while eliminating the need for extra tuning during manufacturing, providing significant cost efficiencies during the development process of cellular base station power amplifiers. The BLD6G21-50 incorporates an integrated Doherty concept leveraging NXP's state-of-the-art GEN6 LDMOS technology, specifically designed for TD-SCDMA operation at frequencies from 2010 MHz to 2025 MHz, whereas its twin device operates at frequencies between 2110 MHz and 2170 MHz for WCDMA transmission. Both main and peak devices and delay lines, as well as the input splitter and output combiner, are integrated into a standard transistor package with single input and output leads, thus minimizing required board space. The package has two additional pins, one of which is used for external biasing purposes.
www.nxp.com/experience_rfpower/
WIMEDIA-BASED MB-OFDM ULTRA-WIDEBAND VALIDATION SOFTWARE AUTOMATES VERIFICATION/COMPLIANCE TESTING

Agilent Technologies Inc.

Agilent Technologies has announced highly automated WiMedia-based multiband orthogonal frequency-division multiplexing (MB-OFDM) ultra-wideband (UWB) validation software. This software provides engineers with an easy way to verify that their MB-OFDM designs perform to the parameters defined by the WiMedia UWB PHY 1.2 test specification. The Agilent U7239A WiMedia-based MB-OFDM UWB software, which runs on Infiniium 90000 Series oscilloscopes, provides PHY verification and compliance measurements on radios based on WiMedia's ISO-published radio standard. The application covers the physical-layer (PHY) transmitter testing required by the WiMedia 1.2 PHY test specification. Engineers can make measurements with the software and an Infiniium Series oscilloscope via a simple connection to the
SMA input, or by attaching the receiver antenna directly to the input of the oscilloscope for radiated testing. The U7239A UWB PHY test software performs a wide range of tests required to meet the WiMedia PHY specification. It is designed to test the requirements documented in the WiMedia PHY test specifications, versions 1.0 and 1.2. Products that incorporate technologies such as Wireless USB, wireless HDMI, and high-speed Bluetooth that use MB-OFDM need to successfully pass a variety of compliance tests typically based on the original WiMedia specification. The U7239A software allows engineers to simply select testing to the original WiMedia specifications or testing to the Wireless USB specifications defined by the USB-IF. The Agilent U7239A MB-OFDM PHY test software automatically configures the oscilloscope for each test and generates an informative HTML report at the end of the test. It compares the results with the specification test limit and indicates how closely the device passes or fails each test. The complex analysis runs seamlessly within the scope, which saves users time and effort compared to making and analyzing measurements manually. Engineers designing and developing MB-OFDM UWB radios can also use the Agilent 89601A vector signal analysis Option BHB software, which covers testing specified by the WiMedia PHY test specifications and includes advanced features that are important for silicon development. The 89601A Option BHB software can be used in conjunction with the Agilent U7239A software. Both software systems provide measurement correlation, with the U7239A providing the key measurements used for certifying WiMedia-based PHYs and Wireless USB products. The 89601A Option BHB software provides deep analysis capabilities. WiMedia-based MB-OFDM UWB technology operates in the spectrum between 3.2 GHz and 10.6 GHz, with minimum analysis bandwidths of 500 MHz. This UWB technology presents unique RF test challenges that require unique test solutions. Agilent's Infiniium 90000 Series oscilloscope, the Electra 08 Design and Test Product of the Year award winner, offers the industry's lowest noise floor, deepest memory, and flattest response. Using the 90000 Series oscilloscopes and the powerful U7239A MB-OFDM software, engineers can maximize their design margins and gain greater insights into their system performance.
www.agilent.com/find/wimedia
March 2009

OPTICAL COMMUNICATIONS
DESIGN, TECHNOLOGIES, AND APPLICATIONS
A Supplement to IEEE Communications Magazine

A Total-Cost-of-Ownership Analysis of L2-Enabled WDM-PONs
Photonic Integration for High-Volume, Low-Cost Applications
The Road to Carrier-Grade Ethernet
A Comparison of Dynamic Bandwidth Allocation for EPON, GPON, and Next Generation TDM PON

A Publication of the IEEE Communications Society
www.comsoc.org
OPTICAL COMMUNICATIONS: DESIGN, TECHNOLOGIES, AND APPLICATIONS
March 2009, Vol. 47, No. 3
www.comsoc.org
OFC/NFOEC SPECIAL ISSUE
SERIES EDITORS: HIDEO KUWAHARA AND JIM THEODORAS

S4 SERIES EDITORIAL: OPTICAL COMMUNICATIONS BECOMES AN INDUSTRY STALWART

S16 PHOTONIC INTEGRATION FOR HIGH-VOLUME, LOW-COST APPLICATIONS
To date, photonic integration has seen only limited use in a few optical interface applications. The recently adopted IEEE draft standards for 40 Gb/s and 100 Gb/s Ethernet single-mode fiber local area network applications will change this situation.
CHRIS COLE, BERND HUEBNER, AND JOHN E. JOHNSON
S24 A TOTAL-COST-OF-OWNERSHIP ANALYSIS OF L2-ENABLED WDM-PONS
Candidates for next-generation broadband access networks include several variants of WDM-PONs. The total cost of ownership of these solutions is determined mainly by operational expenditures, where the cost of energy is one of the major contributors. We show that a combination of WDM-PON with active L2 switching can minimize the total cost of ownership while at the same time offering the highest scalability for future bandwidth demands.
KLAUS GROBE AND JÖRG-PETER ELBERS
S30 THE ROAD TO CARRIER-GRADE ETHERNET
Carrier-grade Ethernet is the latest step in the three-decade development of Ethernet. The authors describe the evolution of Ethernet technology from the LAN toward carrier-grade operation through an overview of recent enhancements.
KERIM FOULI AND MARTIN MAIER
ADVERTISING SALES OFFICES
Closing date for space reservation: 1st of the month prior to issue date

NATIONAL SALES OFFICE
Eric L. Levine, Advertising Sales Manager, IEEE Communications Magazine, 3 Park Avenue, 17th Floor, New York, NY 10016. Tel: 212-705-8920, Fax: 212-705-8999, e-mail: [email protected]

PACIFIC NORTHWEST and COLORADO
Jim Olsen, 3155 N.E. 76th Avenue, Portland, OR 97213. Tel: 503-640-2011, Fax: 503-640-3130, [email protected]

SOUTHERN CALIFORNIA
Patrick Jagendorf, 7202 S. Marina Pacifica Drive, Long Beach, CA 90803. Tel: 562-795-9134, Fax: 562-598-8242, [email protected]

NORTHERN CALIFORNIA
George Roman, 4779 Luna Ridge Court, Las Vegas, NV 89129. Tel: 702-515-7247, Fax: 702-515-7248, Cell: 510-414-4730, [email protected]

SOUTHEAST
Scott Rickles, 560 Jacaranda Court, Alpharetta, GA 30022. Tel: 770-664-4567, Fax: 770-740-1399, [email protected]

EUROPE
Martin Holm, Huson International Media, Cambridge House, Gogmore Lane, Chertsey, Surrey, KT16 9AP, England. Tel: +44 1932 564999, Fax: +44 1932 564998, e-mail: [email protected]
S40 A COMPARISON OF DYNAMIC BANDWIDTH ALLOCATION FOR EPON, GPON, AND NEXT GENERATION TDM PON
The authors compare the typical characteristics of DBA, such as bandwidth utilization, delay, and jitter at different traffic loads, within the two major standards for PONs: Ethernet PON (EPON) and Gigabit PON (GPON).
BJÖRN SKUBIC, JIAJIA CHEN, JAWWAD AHMED, LENA WOSINSKA, AND BISWANATH MUKHERJEE

COMMENTARY
S8 THE NEXT GENERATION OF ETHERNET, BY JOHN D'AMBROSIA
GUEST EDITORIAL
OPTICAL COMMUNICATIONS BECOMES AN INDUSTRY STALWART
Hideo Kuwahara and Jim Theodoras

Welcome to the 2009 special OFC/NFOEC edition of the Optical Communication Series (OCS). Once again this year, we continue the annual tradition of publishing an extra fifth "quarterly" installment of OCS timed to coincide with OFC/NFOEC. For this special issue, we scout the world for columns and articles from industry experts on optical communication topics that will be hot at the show as well as for the rest of the year.

As we head toward OFC, global economic conditions have everyone on edge. Many entire industries have collapsed in toto. Undoubtedly, optical communications has been impacted as well, with guidance lowered across the board. Yet the hope is that optical communications, and the large, diverse supply chain that feeds it, will weather the storm better than most industries. The broadband revolution predicted before the great telecom bubble, albeit prematurely, has finally arrived. It would be difficult in retrospect to point to any single event or killer app that ushered in the era. Rather, a combination of many events has led the way, including but not limited to the PDA revolution, MP3 music, home networks, broadband rollout, peer-to-peer traffic, digital video recorders, and video on demand. Perhaps one can lump all of these into the term "digital convergence," perhaps not. What is clear, though, is that (as of this writing) consumers' insatiable appetite for bandwidth shows no signs of slowing.

Yesterday's IP network was architected and designed to route individual data packets of varying sizes in a statistical manner. No one at the time could foresee today's voice, video, and peer-to-peer traffic, which greatly stress legacy IP networks. Today's hottest end-user applications are driving infrastructure improvements that have, hopefully, transformed optical communications into an industry stalwart able to withstand the economic storm at hand.

So, what will be the hot topics at this year's OFC/NFOEC and for 2009 in general? Higher-speed Ethernet (HSE) continues to be at the forefront of optical efforts once again this year. What began as a group of industry leaders trying to figure out what comes after 10GE, and when it might be needed, has since grown into a full-fledged task force, IEEE 802.3ba. Since our first HSE column two years ago, these thought leaders have selected a dual 40/100GE data rate approach, chosen link definitions based on the needs of each application, and chosen both multifiber and multi-lambda approaches for standard implementation. HSE is somewhat unique in the history of Ethernet. Traditionally, the next generation of Ethernet was driven by core routing needs and gradually trickled down the food chain, eventually reaching short-range switch interconnect. With the latest HSE efforts, market applications are broad based from the get-go, ranging from core routing to short-range data center interconnect,
with distances from 1 meter to hundreds of kilometers. Leading this ragtag band of optical technology daredevils is John D'Ambrosia, chair of IEEE 802.3ba, who headlines this special OFC edition of OCS with an update on the task force's efforts.

As aforementioned, the next generation of Ethernet will adopt a multi-lambda approach for three draft-standard single-mode fiber interfaces, which brings us to the second hot topic for 2009. The recent multicolor decision has lit a fire in proponents of photonic integrated circuits (PICs), who are excited that perhaps this will be the impetus for PICs moving from low-volume niche applications to mainstream optical communications. Rather than hundreds of photonic functions on a PIC, the magic number might be small multiples of 4, as quad transmitters and receivers will be necessary to make 4 × 10G for 40GE and 4 × 25G for 100GE a high-volume, cost-effective reality. We lead off the features in this special edition of OCS with a ground-breaking article from Messrs. Cole, Huebner, and Johnson on how PICs are being developed to meet the new challenge.

A third hot topic for 2009 that has been somewhat of a surprise is wavelength-division multiplexing passive optical networks (WDM-PONs). Not too long ago, WDM-PONs were the stuff of dreams. The general thinking at the time was that WDM technology was simply too expensive to ever play in the access space. Fast forward two years, and the global commoditization that has been the nemesis of optical component companies has actually had the net positive effect of driving WDM from its core roots closer and closer to the access layer. Exponentially increasing network power consumption and the economic downturn are forcing bandwidth providers to focus more on overall network efficiency. It turns out, when examining the total cost of ownership (TCO) of a network, that initial capital expenditure (CapEx) is but a small piece of the pie. But how much operating expenditure (OpEx) does a new widget have to save to justify the buy-in? Our second article, from Messrs. Grobe and Elbers, examines and compares the TCO of several alternative PON access technologies, demonstrating in the process that the latest generation of WDM-PONs is looking very competitive with more traditional access architectures.

A fourth hot topic for 2009 is carrier-grade Ethernet (CGE). As bandwidth has exploded and transport networks have transitioned to IP, extensions have been both needed and added to Ethernet to help it handle its new role. Legacy non-IP networks have long held a major advantage over the up-and-coming Ethernet: their service could be wholesaled, and since conventional Ethernet is connectionless, it could not. The Metro Ethernet Forum (MEF), as well as other standards bodies such as the IEEE, the Internet Engineering Task Force (IETF), the International Telecommunication Union (ITU), and
the TeleManagement Forum (TMF), have all worked closely together to add sufficient extensions to the standard Ethernet protocol to resolve any and all shortcomings in Ethernet's newfound role. The end result has been a surge in CGE activity and a blossoming Ethernet wholesale market that led to many transport and Ethernet companies joining forces in 2008. To bring our optical readers up to speed on this new beast called CGE, Messrs. Fouli and Maier have authored our third article, a historical, if somewhat nostalgic, look at the transition of Ethernet over the last few decades into a protocol more appropriate for transport of carrier traffic.

And finally, traditional PON access architectures continue to be a hot topic for 2009. Not to be outdone by an influx of newfangled competitors, the traditional PON protocols are continuing to be upgraded to meet future bandwidth projections. However, there is more to the problem than simply scaling total bandwidth to meet the exponential rise in bandwidth consumption, as the nature of the consumption itself is changing. When traditional PON architectures were devised, bandwidth profiles were typically very asymmetrical, favoring file downloads, and the percentage of passed homes signing up for the newer broadband offerings was relatively low. This led to bandwidth-sharing architectures with time-division multiplexing (TDM) on the uplink, which minimize initial capital outlay when faced with a large geographical area to be covered while burdened with low uptake rates. However, fast forward to
today, and bandwidth consumption is rapidly becoming more symmetrical, while in some areas the majority of homes passed are served by a single broadband provider. Both of these trends tend to penalize a bandwidth-sharing architecture, highlighting the importance of an efficient dynamic bandwidth allocation (DBA) scheme. In our last article, Messrs. Skubic, Chen, Ahmed, Wosinska, and Mukherjee simulate and compare the DBA algorithms of both EPON and GPON, while discussing the implications of the migration to 10G on DBA effectiveness.

This concludes the OFC/NFOEC edition of the Optical Communication Series. We wish to once again thank the authors who took time out of their busy schedules to highlight their work, and our sponsoring advertisers, whose generous contributions not only made this additional issue possible but also made it possible to include a copy in all registrant bags at this year's show. And finally, we want to thank all of our readers, who continue to tune in each quarter to hear the latest in optical communication technologies.
BIOGRAPHIES

HIDEO KUWAHARA [F] ([email protected]) joined Fujitsu in 1974, and has been engaged for more than 30 years in R&D of optical communications technologies, including high-speed TDM systems, coherent optical transmission systems, EDFAs, terrestrial and submarine WDM systems, and related optical components. His current responsibility is to lead photonics technology as a Fellow of Fujitsu Laboratories Ltd. in Japan. He stayed in the United States from 2000 to 2003 as a senior vice president at Fujitsu Network Communications, Inc., and Fujitsu Laboratories of America, Richardson, Texas. He belongs to LEOS and ComSoc. He is a co-Editor of IEEE Communications Magazine's Optical Communications Series. He is currently a member of the International Advisory Committee of the European Conference on Optical Communications, and chairs the Steering Committee of CLEO Pacific Rim. He is a Fellow of the Institute of Electronics, Information and Communications Engineers (IEICE) of Japan. He has co-chaired several conferences, including the Optoelectronics and Communications Conference (OECC) 2007. He received an Achievement Award from IEICE of Japan in 1998 for the experimental realization of optical terabit transmission. He received the Sakurai Memorial Award from the Optoelectronic Industry and Technology Development Association (OITDA) of Japan in 1990 for research on coherent optical communication.

JIM THEODORAS ([email protected]) is currently director of technical marketing at ADVA Optical Networking, working on Optical + Ethernet transport products. He has over 20 years of industry experience in optical communication, spanning a wide range of diverse topics. Prior to ADVA, he was a senior hardware manager and technical leader at Cisco Systems, where he managed Ethernet switch development on the Catalyst series product. At Cisco, he also worked on optical multiservice, switching, and transport products and related technologies such as MEMS, electronic compensation, forward error correction, and alternative modulation formats, and was fortunate enough to participate in the "pluggable optics" revolution. Prior to acquisition by Cisco, he worked at Monterey Networks, responsible for optics and 10G hardware development. He also worked at Alcatel Networks during the buildup to the telecom bubble on DWDM long-haul transport systems. Prior to DWDM and EDFAs, he worked at Clarostat on sensors and controls, at IMRA America on a wide range of research topics from automotive LIDAR to femtosecond fiber lasers, and at Texas Instruments on a variety of military electro-optical programs. He earned an M.S.E.E. from the University of Texas at Dallas and a B.S.E.E. from the University of Dayton. He has 15 patents granted or pending.
COMMENTARY

40 GIGABIT ETHERNET AND 100 GIGABIT ETHERNET: THE DEVELOPMENT OF A FLEXIBLE ARCHITECTURE

JOHN D'AMBROSIA

Introduction
In December 2007 the IEEE Standards Association approved the formation of the IEEE P802.3ba Task Force, which was chartered with the development of 40 Gb Ethernet and 100 Gb Ethernet. The decision to do both rates of Ethernet was scrutinized by the industry at the time, but ultimately the Higher Speed Study Group provided a vital forum for the stakeholders in the next generation of Ethernet to debate this very issue. The fact that this debate actually occurred is in itself a testament to the success of Ethernet. Networking applications, whose bandwidth requirements are doubling approximately every 18 months, have greater bandwidth demands than computing applications, where the bandwidth capabilities for servers are doubling approximately every 24 months. The impact of this difference in bandwidth growth is illustrated in Fig. 1. It is clear from these trend lines that if Ethernet is to provide a solution for both the computing and network application space, it needs to evolve past its own tradition of 10x leaps in operation rates with each successive generation.

[Figure 1. Bandwidth growth forecasts, 1995-2020: core networking doubling approximately every 18 months; server I/O doubling approximately every 24 months.]

The decision to do two rates was not taken lightly by participants in the Higher Speed Study Group. In hindsight, this author, who was in the thick of this debate, feels that the decision to do both 40 Gb and 100 Gb Ethernet was the correct decision for Ethernet. Ultimately, it was the IEEE standards development process itself that proved to be the key to resolving this difficult decision.

Support of two differing data rates, as well as the different physical layer specifications selected for this project, presented the task force with a dilemma. The task force needed to develop an architecture that could support both rates simultaneously and the various physical layer specifications being developed today, as well as what might be developed in the future. This column will provide the reader with insight into the IEEE P802.3ba architecture, and highlight its inherent flexibility and scalability.
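As a rough illustration of how quickly the two doubling periods diverge, consider the following sketch (simple arithmetic on the growth rates quoted above; the year-2000 baseline and the function name are ours, purely for illustration):

```python
def growth(years, doubling_months):
    """Capacity multiplier after `years`, doubling every `doubling_months`."""
    return 2 ** (12 * years / doubling_months)

for year in (2005, 2010, 2015):
    t = year - 2000  # years past an arbitrary baseline
    print(f"{year}: networking x{growth(t, 18):6.0f}, server I/O x{growth(t, 24):5.0f}")
# By 2010 networking demand is ~100x the baseline while server I/O is ~32x,
# which is why a single next Ethernet rate no longer fits both spaces.
```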
The Physical Layer Specifications

Closely examining the different application spaces where 40 Gb and 100 Gb Ethernet will be used led to the identification of the physical layer (PHY) specifications being targeted by the task force. For computing applications, copper and optical physical layer solutions are being developed for distances up to 100 m for a full range of server form factors, including blade, rack, and pedestal configurations. For network aggregation applications, copper and optical solutions are being developed to support distances and media types appropriate for data center networking, as well as service provider intra-office and inter-office connection. Table 1 provides a summary of the different PHY specifications that were ultimately targeted by the task force, with their respective port type names. Below is a description of each of the different physical medium dependents (PMDs):

• 40GBASE-KR4: This PMD supports backplane transmission over four channels in each direction at 40 Gb/s. It leverages the 10GBASE-KR architecture, already developed channel requirements, and PMD.
• 40GBASE-CR4 and 100GBASE-CR10: The 40GBASE-CR4 PMD supports transmission at 40 Gb/s across four differential pairs in each direction over a twin-axial copper cable assembly. The 100GBASE-CR10 PMD supports transmission at 100 Gb/s across 10 differential pairs in each direction over a twin-axial copper cable assembly. Both PMDs leverage the 10GBASE-KR architecture, already developed channel requirements, and PMD.
• 40GBASE-SR4 and 100GBASE-SR10: This PMD is based on 850 nm technology and supports transmission over at least 100 m of OM3 parallel fiber. The effective data rate per lane is 10 Gb/s. Therefore, the 40GBASE-SR4 PMD supports transmission of 40 Gb Ethernet over a parallel fiber medium consisting of four parallel OM3 fibers in each direction, while the 100GBASE-SR10 PMD will support the transmission of 100 Gb Ethernet over a parallel fiber medium consisting of 10 parallel OM3 fibers in each direction.
• 40GBASE-LR4: This PMD is based on 1310 nm coarse wavelength-division multiplexing (CWDM) technology and supports transmission over at least 10 km of single-mode fiber (SMF). The grid is based on the ITU G.694.2 specification, and the wavelengths used are 1270, 1290, 1310, and 1330 nm. The effective data rate per lambda is 10 Gb/s, which will help maximize reuse of existing 10G PMD technology. Therefore, the 40GBASE-LR4 PMD supports transmission of 40 Gb Ethernet over four wavelengths on each SMF in each direction.
• 100GBASE-LR4: This PMD is based on 1310 nm dense WDM (DWDM) technology and supports transmission over at least 10 km of single-mode fiber. The grid is based on the ITU G.694.1 specification, and the wavelengths used are 1295, 1300, 1305, and 1310 nm. The effective data rate per lambda is 25 Gb/s. Therefore, the 100GBASE-LR4 PMD supports transmission of 100 Gb Ethernet over four wavelengths on each SMF in each direction.
• 100GBASE-ER4: This PMD is based on 1310 nm DWDM technology and supports transmission over at least 40 km of single-mode fiber. The grid is based on the ITU G.694.1 specification, and the wavelengths used are 1295, 1300, 1305, and 1310 nm. The effective data rate per lambda is 25 Gb/s. Therefore, the 100GBASE-ER4 PMD supports transmission of 100 Gb Ethernet over four wavelengths on each SMF in each direction. To achieve the 40 km reaches called for, it is anticipated that implementations may need to include semiconductor optical amplifier (SOA) technology.
Port type | Reach | 40 GbE | 100 GbE | Description
40GBASE-KR4 | At least 1 m backplane | √ | | 4 × 10 Gb/s
40GBASE-CR4 / 100GBASE-CR10 | At least 10 m cu cable | √ | √ | "n" × 10 Gb/s
40GBASE-SR4 / 100GBASE-SR10 | At least 100 m OM3 MMF | √ | √ | "n" × 10 Gb/s (use of parallel fiber)
40GBASE-LR4 | At least 10 km SMF | √ | | 4 × 10 Gb/s
100GBASE-LR4 | At least 10 km SMF | | √ | 4 × 25 Gb/s
100GBASE-ER4 | At least 40 km SMF | | √ | 4 × 25 Gb/s

Table 1. Summary of IEEE P802.3ba physical layer specifications.
The Architecture

During the proposal selection process for the different PHY specifications, it became evident that the task force would need to develop an architecture that would be both flexible and scalable in order to simultaneously support 40 Gb and 100 Gb Ethernet.

[Figure 2. IEEE P802.3ba architecture: MAC client, optional MAC control, MAC, and reconciliation sublayers over XLGMII (40GBASE-R) or CGMII (100GBASE-R) PHYs, each comprising PCS, FEC*, PMA, PMD, AN*, and MDI down to the medium (* conditional based on PHY type).]
These architectural aspects would be necessary in order to deal with the PHY specifications being developed by the IEEE P802.3ba Task Force, as well as those that may be developed by future task forces. Figure 2 illustrates the overall IEEE P802.3ba architecture that supports both 40 Gb and 100 Gb Ethernet. While all of the PHYs have a physical coding sublayer (PCS), physical medium attachment (PMA) sublayer, and physical medium dependent (PMD) sublayer, only the copper cable (-CR) and backplane (-KR) PHYs have an auto-negotiation (AN) sublayer and an optional forward error correction (FEC) sublayer.

For 40 Gb Ethernet, the respective PCS and PMA sublayers need to support PMDs being developed by the IEEE P802.3ba Task Force that operate electrically across four differential pairs in each direction, or optically across four optical fibers or four wavelengths in each direction. It was realized, however, that in the future the IEEE P802.3ba architecture might need to support other 40 Gb PMDs that could operate either across two lanes or a single serial lane. Likewise, for 100 Gb Ethernet, the respective PCS and PMA sublayers need to support PMDs being developed by the IEEE P802.3ba Task Force that operate electrically across 10 differential pairs in each direction, or optically across 10 optical fibers or four optical wavelengths in each direction. It was also realized that in the future the IEEE P802.3ba architecture might need to support other 100 Gb PMDs that might potentially operate across five lanes, two lanes, or a single serial lane.

The task force leveraged the relationship between the respective sublayers to develop the flexible and scalable architecture it needed for 40 Gb and 100 Gb Ethernet, as well as for future rates of Ethernet. The PCS sublayer couples the respective media independent interface (MII) to the PMA sublayer. For 40 Gb Ethernet, the MII is called XLGMII, and for 100 Gb Ethernet, the MII is called CGMII. The PMA sublayer interconnects the PCS to the PMD sublayer. Therefore, the functionality embedded in the PCS and PMA represents a two-stage process that couples the respective MII to the different PMDs that were envisioned for 40 Gb and 100 Gb Ethernet. Furthermore, this scheme can be scaled in the future to support the next higher rates of Ethernet.
[Figure 3. PCS lane distribution concept [2]: the aggregate stream of 64/66b words is distributed across n PCS lanes in a simple 66-bit word round robin, with unique lane markers M1 through Mn inserted periodically into each lane.]
[Figure 4. Example implementation of 100GBASE-LR4: MAC client, optional MAC control, MAC, reconciliation, CGMII, PCS, PMA (20:10), CAUI, PMA (10:4), PMD, and MDI down to the medium.]
As noted above, the PCS sublayer couples the respective MII to the PMA sublayer. The aggregate stream coming from the MII into the PCS sublayer undergoes the 64B/66B coding scheme that was used in 10 Gb Ethernet. Using a round-robin distribution scheme, 66-bit blocks are then distributed across multiple lanes, referred to as PCS lanes, each with a unique lane marker periodically inserted. This is illustrated in Fig. 3.

The PMA sublayer, which is the intermediary sublayer between the PCS and the PMD, provides the multiplexing function responsible for converting the number of PCS lanes to the appropriate number of lanes or channels needed by the PMD. There are four PCS lanes for 40 Gb Ethernet and 20 PCS lanes for 100 Gb Ethernet. The number of PCS lanes for each rate was determined by considering the number of lanes that might be employed by the various PMDs for a given rate and then calculating the least common multiple of those implementations. It is possible to have multiple instances of a PMA sublayer in a given configuration. This is particularly true for 100 Gb Ethernet. The input of the PMA sublayer essentially multiplexes/demultiplexes the input lanes back to the number of PCS lanes for the given rate, while the output stage then converts the PCS lanes to the appropriate number of lanes needed. Therefore, the four PCS lanes for 40 Gb Ethernet will support PMDs that employ one, two, or four channels or wavelengths in each direction. The 20 PCS lanes for 100 Gb Ethernet will support PMDs that employ 1, 2, 4, 5, 10, and 20 channels or wavelengths in each direction.

In this multiplexing scheme, regardless of how the PCS lanes get multiplexed together, all bits from the same PCS lane will follow the same physical path. Therefore, the PMA sublayer will demultiplex the lanes back to the original PCS lanes, at which point the PCS sublayer can then perform a deskewing operation to realign the PCS lanes, which is assisted by the unique lane markers periodically inserted into each PCS lane. The PCS lanes can then be put back into their original order, at which point the original aggregate stream can be reconstructed.

The PMA sublayer also plays a critical role in the flexibility of the architecture, as a PMA sublayer will exist on both sides of the respective attachment unit interface (AUI), which is an optional physical interface. For 40 Gb Ethernet, the AUI is called XLAUI (XL is the Roman numeral for 40). For 100 Gb Ethernet, the AUI is called CAUI (C is the Roman numeral for 100). These interfaces are used for partitioning the system design, and for chip-to-chip and chip-to-module applications. Each lane operates at an effective data rate of 10 Gb/s. For an XLAUI interface, there are four transmit pairs and four receive pairs. For a CAUI interface, there are 10 transmit pairs and 10 receive pairs.
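The distribution and lane-count logic just described can be captured in a short sketch (Python; a minimal model in which the marker period, marker encoding, and names are placeholders of ours, not the values defined in the draft standard):

```python
from math import lcm  # Python 3.9+

def pcs_distribute(blocks, n_lanes, marker_period=16):
    """Round-robin 66-bit blocks onto n_lanes PCS lanes, inserting each
    lane's unique marker every marker_period blocks so the receiver can
    deskew and reorder the lanes (placeholder period and encoding)."""
    lanes = [[] for _ in range(n_lanes)]
    counts = [0] * n_lanes
    for i, block in enumerate(blocks):
        lane = i % n_lanes                        # simple 66-bit word round robin
        if counts[lane] % marker_period == 0:
            lanes[lane].append(("MARKER", lane))  # unique per-lane alignment marker
        lanes[lane].append(block)
        counts[lane] += 1
    return lanes

# PCS lane counts are the least common multiple of the lane counts the
# PMDs for each rate might employ:
assert lcm(1, 2, 4) == 4               # 40 Gb Ethernet: 4 PCS lanes
assert lcm(1, 2, 4, 5, 10, 20) == 20   # 100 Gb Ethernet: 20 PCS lanes
```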
[Figure 5. IEEE P802.3ba timeline: task force formation (December 2007), proposal selection and task force reviews through 2008, working group ballots through 2009, then LMSC ballots leading to a completed standard in mid-2010.]
Consider the example implementation of 100GBASE-LR4 illustrated in Fig. 4. For 100 Gb Ethernet, 20 PCS lanes are created coming out of the PCS. The uppermost PMA sublayer multiplexes the 20 PCS lanes into the 10 physical lanes of the CAUI. The PMA below the CAUI then multiplexes the 10 lanes of the CAUI into four lanes that then drive the four wavelengths associated with the 100GBASE-LR4 PHY. Looking at Fig. 4, it is easy to envision an implementation where a CAUI interface leaves a host chip and goes to a 100GBASE-LR4 module that has a CAUI interface, which is then multiplexed into four lambdas, each with an effective data rate of 25 Gb/s, and carried across 10 km of SMF.
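The 20:10:4 lane conversion in this example can be checked with a small model (Python; a simplified sketch of the demultiplex-then-remultiplex behavior described above, not the standard's exact bit-multiplexing formalism; all names are ours):

```python
def interleave(lanes):
    """Bit-interleave equal-length lanes onto one physical lane."""
    return [bit for group in zip(*lanes) for bit in group]

def de_interleave(lane, n):
    """Inverse of interleave: recover the n constituent lanes."""
    return [lane[i::n] for i in range(n)]

def pma(in_lanes, n_pcs, n_out):
    """PMA model: demux the input lanes back to n_pcs PCS lanes, then
    remux onto n_out lanes; all bits of a PCS lane share one path."""
    per_in = n_pcs // len(in_lanes)
    pcs = [l for phys in in_lanes for l in de_interleave(phys, per_in)]
    per_out = n_pcs // n_out
    return [interleave(pcs[i * per_out:(i + 1) * per_out]) for i in range(n_out)]

# 100GBASE-LR4 chain: PCS (20 lanes) -> PMA 20:10 -> CAUI -> PMA 10:4 -> 4 lambdas
pcs_lanes = [[lane_id] * 8 for lane_id in range(20)]  # toy payload per PCS lane
caui = pma(pcs_lanes, n_pcs=20, n_out=10)             # 10 electrical CAUI lanes
lambdas = pma(caui, n_pcs=20, n_out=4)                # four 25 Gb/s wavelengths
assert len(lambdas) == 4 and all(len(l) == 40 for l in lambdas)
```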
Conclusion

The IEEE P802.3ba Task Force has successfully developed an architecture that will be able to simultaneously support both 40 Gb and 100 Gb Ethernet, as well as the multitude of physical layer specifications selected for this project. At the time of this writing, the task force is preparing a request to go to Working Group Ballot, the next stage in the development of 40 Gb and 100 Gb Ethernet. The adopted schedule for the project is shown in Fig. 5. Regardless of any early debate regarding the selection of two data rates, this project has progressed in a timely fashion and remains on track for standards approval in June 2010. Furthermore, the architecture this task force has adopted will allow Ethernet to scale to even greater speeds in the future, which should interest those parties already starting to call for Terabit Ethernet.
REFERENCES
[1] IEEE 802.3 Higher Speed Study Group Tutorial, Nov. 2007, http://www.ieee802.org/3/hssg/public/nov07/HSSG_Tutorial_1107.zip
[2] J. D'Ambrosia, D. Law, and M. Nowell, "40 Gigabit Ethernet and 100 Gigabit Ethernet Technology Overview," Ethernet Alliance White Paper, Nov. 2008, http://www.ethernetalliance.org/images/40G_100G_Tech_overview(2).pdf
TOPICS IN OPTICAL COMMUNICATIONS
Photonic Integration for High-Volume, Low-Cost Applications

Chris Cole and Bernd Huebner, Finisar Corp.
John E. Johnson, CyOptics, Inc.
ABSTRACT

To date, photonic integration has seen only limited use in a few optical interface applications. The recently adopted IEEE draft standards for 40 Gb/s and 100 Gb/s Ethernet single-mode fiber local area network applications will change this situation. Although first-generation implementations will use discrete components based on existing technologies, long-term requirements for significant reduction in the cost, size, and power of 40 Gb/s and 100 Gb/s transceivers will lead to a broad demand for photonic integration. Both the hybrid planar lightguide circuit and the monolithic photonic integrated circuit are feasible approaches that meet the requirements of the new IEEE standards.
INTRODUCTION

Photonic integration is not used in the manufacture of most optical interfaces. This is despite the development of many different integration technologies and many examples of photonic integration in journal articles [1] and conference papers [2]. The principal reason is that optical interface architectures, defined in widely used standards, offer no opportunities for integration. For example, almost all IEEE-specified architectures for 100 Mb/s, 1 Gb/s, and 10 Gb/s optical interfaces require only a single directly modulated laser (DML). In such architectures, referred to as serial, there is nothing to optically integrate. Some longer-reach IEEE and International Telecommunication Union (ITU) standards, although still serial, use an electro-absorption modulated laser (EML). The EML integrates a single laser and modulator on a chip, representing the first wide use of photonic integration. A 10 Gb/s standard that could have benefited from photonic integration because it requires four DMLs (10GBASE-LX4) was supplanted by a serial standard (10GBASE-LRM), eliminating it as a potential market driver for photonic integration technology.

The IEEE recently adopted three draft standards for 40 Gb/s and 100 Gb/s single-mode fiber (SMF) optical interfaces: 40GBASE-LR4 and 100GBASE-LR4 for reaches up to 10 km, and 100GBASE-ER4 for reaches up to 40 km. Formal adoption is projected in 2010 [3].
These standards all require four lasers and wavelength-division multiplexing (WDM), and represent significant long-term, high-volume commercial opportunities for integrated photonic circuits. The candidate functions for photonic integration are:
• 40GE 10 km quad 10 Gb/s CWDM 1310 nm DML transmitter
• 100GE 10 km quad 25 Gb/s LAN WDM DML 1310 nm transmitter
• 100GE 10 km/40 km quad 25 Gb/s LAN WDM EML 1310 nm transmitter

First-generation implementations of these functions will use discrete transmit components (four single un-cooled DMLs for 40GE and four single cooled EMLs for 100GE) with fiber connecting them to a discrete WDM multiplexer [3]. This is driven by time-to-market considerations. However, in the long term, demand for high-volume, low-cost, small-size transceivers will lead to broad industry use of photonic integration technology, because this is the only way to meet aggressive size and cost targets. The four discrete transmitters and discrete multiplexer will be replaced by a single integrated transmitter. Hybrid planar lightguide circuit (PLC) and monolithic InP photonic integrated circuit (PIC) technologies are feasible today for use in integrated transmitter development.
SUCCESSFUL COMMERCIAL PHOTONIC INTEGRATION EXAMPLES

VERTICAL CAVITY SURFACE EMITTING LASER ARRAYS

The IEEE also recently adopted draft standards for parallel multi-mode fiber (MMF), multi-fiber push-on (MPO) connector optical interfaces, 40GBASE-SR4 and 100GBASE-SR10, for reaches up to 100 m [3]. These exploit a mature optics integration technology: the vertical cavity surface emitting laser (VCSEL) array, which has been shipping in high volume at lower channel data rates, for example, in SNAP12 transceivers. To support these new MMF standards, four- or twelve-element 10 Gb/s VCSEL linear arrays are fabricated using the same process as for a single VCSEL used in serial 10 Gb/s MMF transceivers like 10GBASE-SR, with additional optimization
of top masks for alignment during assembly onto flex circuits. Because none of the VCSEL elements are optically connected in the arrays, they are not strictly photonic circuits. However, VCSEL arrays demonstrate one of the important characteristics of photonic circuits. The yield of VCSEL arrays is inversely logarithmic with the VCSEL channel number [3], leading to significantly lower cost for a parallel transceiver than the cost of the same number of channels implemented with discrete transceivers.
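The yield behavior can be illustrated with a simple independent-element model (the 98 percent per-element figure below is invented purely for illustration and is not from the article):

```python
def array_yield(element_yield, n_elements):
    """Array yield assuming each element yields independently."""
    return element_yield ** n_elements

for n in (1, 4, 12):
    print(f"{n:2d} elements: {array_yield(0.98, n):.1%}")
# 98.0%, 92.2%, 78.5%: array yield falls with channel count, yet one
# 12-element array still costs far less to package than 12 singles.
```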
EML

The EML is the only broadly available, commercially successful component that can be classified as a photonic circuit. An EML integrates and optically interconnects two components, a distributed feedback (DFB) laser and an electro-absorption (EA) modulator, on a monolithic InP chip [4]. EMLs are used in many optical interfaces, for example, IEEE 10GBASE-ER 10 Gb/s 40 km transceivers and ITU G.693 40 Gb/s 2 km transceivers. The modest level of integration of EMLs leads to high yields, enabling lower cost and size than alternatives using discrete components.
EML ARRAYS

An array of ten 10 Gb/s EMLs integrated with an arrayed waveguide grating (AWG) mux on a single InP chip has been reported [5]. The chip is used as a key enabling technology in high-end wide area network (WAN) systems and has been used successfully in the field for several years. The technology is proprietary, its cost structure has not been published, and the chips are not sold or bought commercially. Although demonstrating what is technically possible, this technology has not been a photonic integration driver for the optics industry, because no market has been created for these chips.
TUNABLE LASERS

Recently, tunable laser sources have had to use photonic integration technology to enable small form factor transceivers. Several types of tunable lasers, each consisting of multiple monolithic photonic sections along a waveguide, were integrated with Mach-Zehnder modulators to achieve high-speed modulation with a well-controlled chirp. Semiconductor optical amplifier (SOA) sections also were added to boost output power. An example of a multi-section tunable laser was reported in [6].
IEEE 40 GB/S AND 100 GB/S SMF DRAFT STANDARDS

40GBASE-LR4 STANDARD

Figure 1 shows a block diagram of the transceiver architecture for the 40GBASE-LR4 (10 km) standard. The transmitter has four 10 Gb/s signal paths, supporting either a retimed or un-retimed electrical interface. The output of the four DFBs is combined optically in the mux for transmission over one SMF. Other than the optical multiplexer and demultiplexer, the architecture replicates four single 10 Gb/s channels, like those in the 10GBASE-LR standard. This was done to permit quick time-to-market development, using existing discrete 10 Gb/s components.

[Figure 1. 40GBASE-LR4 transceiver architecture: four 10 Gb/s DML transmit paths into a 4:1 CWDM mux, and a 1:4 CWDM demux into four PIN/TIA receive paths. The XLAUI electrical interface requires CDRs; the PMD service interface does not.]

The coarse wavelength-division multiplexing (CWDM) optical wavelength assignments are shown in Table 1. The 20 nm grid permits use of un-cooled DFBs, as the approximately 7 nm laser wavelength drift over the operating temperature range fits within the CWDM pass-band.
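A quick sanity check of the un-cooled claim, assuming a typical DFB temperature coefficient of about 0.1 nm/°C over a roughly 70°C case-temperature span (both figures are assumptions of ours, not from the article):

```python
DRIFT_NM_PER_DEGC = 0.1  # assumed typical DFB wavelength drift
TEMP_SPAN_DEGC = 70      # assumed operating temperature span

drift_nm = DRIFT_NM_PER_DEGC * TEMP_SPAN_DEGC  # ~7 nm, as quoted in the text
lane_width_nm = 1277.5 - 1264.5                # any CWDM lane in Table 1
assert drift_nm < lane_width_nm                # un-cooled DFB stays in its lane
print(f"~{drift_nm:.0f} nm drift vs {lane_width_nm:.0f} nm lane width")
```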
100GBASE-LR4 AND 100GBASE-ER4 STANDARDS

Figure 2 shows a block diagram of the transceiver architecture for the 100GBASE-LR4 (10 km) and 100GBASE-ER4 (40 km) standards. The four-channel count was selected as leading to a reasonable component count in discrete and photonic integration implementations. The channel data rate of 25 Gb/s is relatively low risk because of the commercial availability of 40 Gb/s EMLs and ongoing research toward 25 Gb/s DMLs. The EML solution has the advantage of lower dispersion penalties, due to the low chirp of the EA modulator, and is the only feasible solution for ER4 (40 km reach). Future DML solutions may offer higher output power and lower electrical power dissipation, but will be usable only for LR4 (10 km reach). An electrical interface rate of 10 Gb/s per channel was selected as the best match to existing complementary metal-oxide semiconductor application-specific integrated circuit (CMOS ASIC) interface technology.

The local area network (LAN) WDM optical wavelength assignments are shown in Table 2. The 800 GHz (~5 nm) channel spacing is an optimization between relaxed wavelength accuracy requirements and limiting the total grid span to 14 nm to facilitate photonic integration and simplified processing, both leading to a high yield. The LAN WDM grid also results in the lowest interoperable link budget because it is placed in the region of minimum fiber loss (for the 1310 nm window) and dispersion.
Lane | Center wavelength | Wavelength range
L0 | 1271 nm | 1264.5–1277.5 nm
L1 | 1291 nm | 1284.5–1297.5 nm
L2 | 1311 nm | 1304.5–1317.5 nm
L3 | 1331 nm | 1324.5–1337.5 nm

Table 1. CWDM optical wavelength assignments.
Lane | Center frequency | Center wavelength | Wavelength range
L0 | 231.4 THz | 1295.56 nm | 1294.53–1296.59 nm
L1 | 230.6 THz | 1300.05 nm | 1299.02–1301.09 nm
L2 | 229.8 THz | 1304.58 nm | 1303.54–1305.63 nm
L3 | 229.0 THz | 1309.14 nm | 1308.09–1310.19 nm

Table 2. LAN WDM optical wavelength assignments.
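The center wavelengths in Table 2 follow directly from the frequency grid via λ = c/f, which is easy to verify:

```python
C = 299_792_458  # speed of light, m/s

for f_thz in (231.4, 230.6, 229.8, 229.0):  # Table 2 center frequencies
    print(f"{f_thz} THz -> {C / (f_thz * 1e12) * 1e9:.2f} nm")
# 1295.56, 1300.05, 1304.58, 1309.14 nm: ~4.5 nm spacing, ~14 nm total span
```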
HYBRID PLC DML PHOTONIC INTEGRATED CIRCUITS

DML/DML ARRAYS

DMLs are the optical source of choice for SMF applications up to a 10 km reach and for data rates up to 10 Gb/s. These devices are available from multiple suppliers and have mature performance and reliability. For 40GE PLC integration, four single 10 Gb/s DFB lasers can be used, similar to the single 10 Gb/s DFB laser used for 10GE. It is possible to use a quad 10 Gb/s laser array for 40GE PLC integration; however, the 60 nm 40GE CWDM grid span presents manufacturing challenges. A CWDM quad laser array requires several separate growth steps, resulting in lower yield than that of four single lasers. So, although monolithic CWDM 10 Gb/s DFB laser arrays are feasible, for example, as shown in Fig. 6, at present the use of single 10 Gb/s lasers leads to lower-cost PLCs.

For 100GBASE-LR4, 25 Gb/s DMLs are preferred over EMLs because of the modest dispersion requirements of the 10 km reach and the reduced chip complexity, size, and cost. Because no commercial devices are available today, device optimization and demonstration of reliable long-term operation are required. The 25 Gb/s DMLs also require the use of a thermoelectric cooler (TEC) to avoid a drop-off in efficiency and bandwidth at higher temperatures. PLC integration offers the benefit of lower power consumption through reduction of the overall passive heat load, due to the smaller surface area of the integrated assembly compared to the total surface area of four discrete assemblies.

The 14 nm LAN WDM grid span enables quad 25 Gb/s DFB arrays to be manufactured with a single growth step using selective area growth (SAG) techniques. Challenges in the manufacture of such DFB arrays arise from the requirement to emit different wavelengths while achieving uniformity of array performance characteristics: single-mode yield, output power, and burn-in yield. This uniformity can be achieved by using quarter-wave shifted DFBs manufactured using direct e-beam grating writing or phase-mask printing [7].
The optical power budgets of 40GE and 100GE are difficult to meet using a simple power combining for the multiplexer because of the inherent insertion loss of 6 dB for a four-channel device in addition to the laser-PLC coupling loss. Lower loss wavelength-dependent multiplexer must be used. Unlike in a power combiner, this requires the alignment of the multiplexer pass-bands to the laser wavelengths so the laser array and multiplexer must be manufactured with tight relative tolerances or must support tuning of their wavelengths relative to each other. Temperature tuning of either the laser or the multiplexer can be used to achieve this. The most common dispersive elements used for implementing PLC multiplexers are the AWG and planar Echelle grating [8]. The multiplexer performance is determined by polarization dependence, insertion loss, achievable pass-band width, and channel-to-channel isolation.
COUPLING DFB LASERS TO PLC

The basic challenge of PLC hybrid integration is achieving low-loss coupling between the laser and the PLC waveguide. To achieve low-loss coupling without lenses, the optical modes of the laser and PLC waveguides must be closely matched in size. For low-loss coupling with relaxed alignment tolerances, the modes should both be as large as practical, although if taken too far, angular alignment tolerances become the limiting factor. Laser waveguides inherently have a small optical mode size in order to have a large overlap with the optical gain of the active region. A typical laser mode size is in the range of 1 to 2 μm. Optical fiber and glass-on-silicon PLCs have much larger mode sizes, in the 5 to 10 μm range, due to the smaller available refractive index step in glass. Without the use of coupling optics, coupling losses between the laser and a glass PLC waveguide can be up to 10 dB, even when perfect mechanical alignment is assumed.
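The scale of this mismatch loss can be estimated with the standard Gaussian mode-overlap approximation for two perfectly aligned waveguides with mode-field radii w1 and w2 (a textbook estimate, not a formula from the article):

\[ \eta = \left( \frac{2\, w_1 w_2}{w_1^{2} + w_2^{2}} \right)^{2}, \qquad w_1 = 1.5\ \mu\mathrm{m},\; w_2 = 8\ \mu\mathrm{m} \;\Rightarrow\; \eta \approx 0.13 \;(\approx 8.8\ \mathrm{dB\ loss}), \]

consistent with the up-to-10 dB figure above.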
The mode size of silicon-on-insulator (SOI) PLC waveguides is closer to that of lasers, 3 to 5 μm, but the small alignment tolerance makes it difficult to use low-cost passive alignment of the laser.

To overcome the challenge of matching the laser-waveguide spot size to the PLC spot size, various options exist for adding waveguide structures on either the laser or the PLC side to better match the two mode field sizes. Most commonly used are waveguide taper structures that widen the laser spot size or reduce the PLC waveguide spot size [8]. Both can achieve better minimum coupling loss at the expense of more stringent mechanical alignment. In the case of a laser-side taper, lateral mechanical alignment is relaxed, whereas angular alignment becomes more stringent. In the case of a PLC-side taper, angular alignment is more forgiving, whereas lateral alignment is more critical.
[Figure 2. 100GBASE-LR4 and 100GBASE-ER4 transceiver architecture: a CAUI with 10:4 serializer and 4:10 deserializer, four cooled 25 Gb/s EML transmit paths into a 4:1 LAN WDM mux, and a 1:4 LAN WDM demux into four PIN/TIA receive paths. The ER4 optical interface requires an SOA; LR4 does not. LR4 can also use four cooled DMLs in place of the four cooled EMLs.]
In both cases, placement accuracies on a sub-micron scale must be achieved during laser chip attachment. The most commonly used attachment process is the flip-chip alignment of the laser to the PLC platform [9]. Another fairly common method is the butt coupling of a laser to the PLC facet. A combination of both flip-chip coupling for the laser diodes and butt coupling to the PLC facet is reported in [10]. Lateral alignment in this case is performed by actively monitoring the coupled power and fixing the two pieces in place when maximum power is achieved. An inherent drawback of this method, if used for direct laser attachment without the flip-chip step reported in [10], is that the vertical direction is usually the most stringent with respect to alignment tolerances. Tolerances much less than 1 micron are required. A process to routinely guarantee this accuracy at the end of the attachment process is difficult to achieve.

In the case of flip-chip coupling, the laser is soldered with the active side down on the PLC to bring the two waveguides in close proximity to each other. This must be done with very high accuracy in both lateral and angular positioning. As compared to butt coupling on the PLC facet, flip-chip coupling has the benefit of relatively easy control of the relative waveguide position in the vertical dimension, based on epitaxial growth of the laser layers and only shallow etching of the PLC mounting structures. Figure 3 shows a laser flip-chipped onto a PLC with the alignment axes identified.

With respect to lateral alignment, current attachment/placement technology can achieve the required accuracy with acceptable yields for single-laser attachment by pattern recognition and passive visual alignment.
This means that the laser is aligned to the PLC by matching features precisely manufactured on both chips, without a requirement for active alignment between the two waveguides. For comparison, active alignment (maximizing the fiber-coupled power while the laser is switched on) is the method of choice for most optical single-channel SMF discrete transmitter assemblies. Apart from alignment based on mechanical features (alignment marks on both the laser and the PLC), methods also are reported that use purely passive alignment, where these features act as stops for vertical and lateral movement. This requires a custom design of the topography on both components and therefore limits potential sourcing of the laser and PLC. Another method in the purely passive area is alignment by the solder bumps and the forces exerted through the reflow and surface tension of the solder itself. This is a very elegant method, but requires very tight process control of the soldering process and the mechanical stops to provide accurate alignment after reflow.

Independent of which technology is used, if more than one laser must be attached to a given PLC, the probability of one connection not meeting the required coupling efficiency reduces the overall yield of the product. Depending on the quality of the single attachment process, it can be beneficial to attach an array of lasers instead of single chips, reducing the number of overall attachment processes and, therefore, the probability of a failed attachment due to too much insertion loss. The challenges of array attachment are the larger size of the array and the requirement to have multiple pads on the laser array soldered at the same time with good consistency. The handling of long laser bars is challenging because the semiconductor materials can break easily, and the bars cannot be touched in the area of the laser facets.
A long bar also must be supported evenly during soldering, or it might bend and lose vertical alignment to the PLC laser waveguide. On the other hand, some of the challenges are offset by the easier detection of the rotation angle of the bar in the visual alignment system, and therefore much better angular alignment accuracy can be achieved than for single laser chips.
HYBRID PLC EXAMPLE

The benefits and drawbacks of the various approaches lead to different PLC types for 40GE and 100GE. For the 40GE PLC, the high yield of single lasers and lower yield of CWDM laser arrays leads to the use of single DFBs, as shown in Fig. 4. Because it is much easier to build laser arrays on the 100GE LAN WDM grid, the 100GE PLC uses quad DFB arrays in place of the four single DFBs shown in Fig. 4.

[Figure 3. Laser diode flip-chipped onto a PLC, showing the lateral and vertical alignment axes (DFB over Si waveguide PLC).]

[Figure 4. PLC with four discrete DFBs (shown) for 40 Gb/s 10 km 40GBASE-LR4 applications: laser diodes, electrical lines, silica layer, focusing slab waveguide, and output waveguide on a silicon substrate. A quad monolithic DFB array (not shown) replaces the four discrete DFBs for 100 Gb/s 10 km 100GBASE-LR4 applications.]
MONOLITHIC INP EML PHOTONIC INTEGRATED CIRCUITS

EML TECHNOLOGY

Unlike the DML solution, the 25 Gb/s data rate does not require additional development, because 40 Gb/s EMLs already are commercially available. As in the case of the DML array, the 100GE LAN WDM grid is wide enough to enable high DFB laser array wavelength yield, yet narrow enough to allow four EMLs to be integrated on a single monolithic InP chip within the multiple-quantum-well (MQW) band-gap shift that is realizable using SAG. The SAG technique is used today in many commercial EMLs; therefore, no additional processing steps are required to produce an array of EMLs. EA modulators are simple and robust and typically have high yield. The additional modulator processing and chip area increase the requirements on the DFB laser yield in order for the PIC to be cost effective.
MULTIPLEXER TECHNOLOGY
InP must be used for the quad 25 Gb/s EML array, but the choice of InP for the optical multiplexer function is not as obvious. Although the cost of InP is higher than that of silicon, the high index step makes it possible to design very compact AWGs. The high index step also increases the AWG insertion loss relative to glass or silicon, but this is offset for the most part by the absence of the 3 dB or more coupling loss typical for hybrid integration. Thus, the total multiplexer losses of the PLC and PIC approaches are comparable. In addition, monolithic InP AWGs offer an important benefit in that they tune with temperature at the same rate as a DFB laser; therefore, no change in alignment between multiplexer pass-bands and laser wavelengths occurs over temperature, as is the case with a silica AWG. Also, because the insertion loss of a 4:1 wavelength-independent multi-mode interference (MMI) combiner is only slightly larger than what can be achieved with InP AWGs (without incurring additional yield loss due to wavelength misalignment), this is another potential approach for monolithic integration with a 25 Gb/s EML array.

MANUFACTURING TECHNOLOGY

The integration technology used to fabricate a quad 25 Gb/s EML PIC must accomplish several key goals. First, active MQW epi material with band-gaps appropriate for each element of the source array must be provided. This is easily accomplished with SAG. In the SAG technique, the laser and modulator MQW regions are grown using organo-metallic vapor phase epitaxy (OMVPE) on an InP substrate, patterned with silicon dioxide (SiO2) stripes. The oxide results in additional diffusion of group-III elements into narrow gaps between the stripes, which increases the growth rate of the InGaAsP layers and shifts the band-gap to longer wavelength, but also makes the lattice mismatch more compressive. The band-gaps of the laser MQWs in an array must be spaced by the channel spacing, (N − 1)Δλ in total, and the modulator MQW band-gaps must be 30 to 50 nm less than the lasers' for low on-state absorption and high off-state extinction ratio.
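For the four-channel LAN WDM grid of Table 2 (about 4.5 nm channel spacing), this works out to a modest laser band-gap span:

\[ (N-1)\,\Delta\lambda = (4-1) \times 4.5\ \mathrm{nm} \approx 13.5\ \mathrm{nm}, \]

consistent with the ~14 nm grid span cited earlier.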
Because the total band-gap difference between the shortest-wavelength modulator and the longest-wavelength laser in the EML array is only 14 nm more than is used for single EMLs, this is not an issue for 100GE PICs.

Second, low-loss coupling to the passive waveguide material used for the AWG must be provided. Many different active-passive integration techniques exist, but the etch-and-regrow, or butt-joint, technique results in low loss with minimum transition length. In the butt-joint growth technique, the active layer stack is grown first on a planar substrate and then protected with a SiO2 mask where it is required in the PIC. The exposed active layers are then etched away, and the passive waveguide layer stack is grown using OMVPE. The key to successful butt-joint integration is in the optimization of the etching and OMVPE conditions in order to produce a joint with the right vertical alignment and morphology for low-loss mode matching between the two waveguides. Figure 5 shows a scanning electron microscope (SEM) cross-section of a high-quality active-passive butt joint.

Third, tight control of the effective index of the waveguides must be provided to minimize AWG wavelength registration losses. This is accomplished by careful calibration of the composition and thickness of the waveguide layer and by using dry etching techniques for the deep-ridge waveguide to control the ridge width. Other processing steps required for PICs, such as electrical isolation, dielectric deposition, and metallization, are the same as for single EMLs. The entire PIC process uses process equipment that is commercially available and commonly found in most InP fabrication facilities.
Figure 5. Scanning electron microscope photograph of the etched and regrown butt joint between the laser MQW on the right and the passive waveguide layer on the left. The oxide mask is still in place.
MONOLITHIC PIC EXAMPLE

An example of a quad 10 Gb/s DML PIC is shown in Fig. 6. This device consists of four directly modulated InAlGaAs MQW DFB lasers on the 24.5 nm 10GBASE-LX4 grid, integrated with an InGaAsP AWG multiplexer. As discussed, new products no longer use the LX4 grid, and a 74 nm grid span presents challenges for monolithic laser array manufacturing. However, this chip was a good vehicle for establishing and demonstrating the feasibility of the fabrication processes required for devices such as the quad 25 Gb/s EML PIC. All of the epitaxy is performed using OMVPE. The arrays of quarter-wave-shifted DFB gratings were defined using electron-beam lithography and etched into the InGaAsP grating layer using methane-hydrogen reactive ion etching (RIE). The MQW active layers for the laser array are grown with the SAG technique to shift the un-enhanced 1276 nm band-gap MQW by 24.5, 49, and 73.5 nm using successively wider pairs of SiO2 stripes. For the quad 25 Gb/s EML PIC, SAG also is used to shift the band-gaps of the modulator MQW. A second SiO2 mask is then used to protect the laser array while the exposed laser active layers are etched away and the bulk InGaAsP waveguide layer is re-grown in its place.
Figure 6. Photograph of a monolithic InP PIC comprising four O-band DFB lasers and an AWG with 24.5 nm channel spacing. The chip size is 1.1 × 2.4 mm.
After growing the p-InP cladding and p+ InGaAs cap layers, the laser ridges are fabricated using selective wet etching to produce shallow-ridge waveguide lasers. The passive waveguides are then etched through the waveguide layer using methane-hydrogen RIE to produce a high lateral index step and smooth sidewalls. The ridges are approximately 2.2 μm wide. The waveguides are then passivated with 0.5 μm thick SiO2, and conventional techniques are used to form the laser contacts and bonding pads. The resulting PIC is 1.1 mm wide by 2.4 mm long. The performance of the quad 10 Gb/s DML PIC was measured at room temperature. The quarter-wave-shifted DFB arrays had good yield and were within ±1.5 nm of the target grid. The AWG 1 dB down pass-band width was 10 nm, and the channels were within ±5 nm of the target grid.
The total insertion loss of the AWG was determined to be 6.5 dB, which is comparable to the total insertion plus coupling loss of a hybrid PLC. A major contributor is the waveguide bending loss between the lasers and the AWG, which was estimated to be 3 dB. This is easily addressed by the use of slightly larger bend radii, but at the expense of chip size. Typical output power coupled into single-mode fiber was –7 dBm per channel at a bias current of 50 mA. The additional fabrication steps required for a quad 25 Gb/s EML PIC are already standard for EMLs, such as electrical isolation between the laser and modulator, and low-k dielectrics for the modulator bonding pad. The modulators themselves are high-yield components relative to the DFB lasers, so they are not expected to cause additional yield issues. The loss budget for 100GE-LR4 requires –1 dBm average power per channel, so development to further minimize waveguide, bending, and AWG losses is important. Additional studies must be conducted to determine the lowest-cost multiplexer approach: AWG or MMI.
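To see why loss reduction matters, a rough per-channel power budget can be sketched from the figures above; the chip-to-fiber coupling loss below is an assumed placeholder, not a measured value from this work.

# Rough per-channel transmitter power budget (illustrative assumptions).
awg_loss_db  = 6.5    # measured AWG insertion loss, incl. ~3 dB bending loss
fiber_cpl_db = 2.0    # chip-to-fiber coupling loss (assumed)
target_dbm   = -1.0   # 100GE-LR4 average launch power requirement per channel

required_onchip_dbm = target_dbm + awg_loss_db + fiber_cpl_db
print(f"required on-chip (pre-multiplexer) power: {required_onchip_dbm:+.1f} dBm")

# Every dB trimmed from waveguide, bend, or AWG loss relaxes the laser and
# modulator on-state power requirement one-for-one:
for saved_db in (1, 2, 3):
    print(f"with {saved_db} dB lower loss: {required_onchip_dbm - saved_db:+.1f} dBm")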
CONCLUSIONS

Component and attachment technologies are available today to enable efficient, high-yield manufacturing of hybrid and monolithic optical integrated circuits. The recently adopted IEEE 802.3ba 40 Gb/s and 100 Gb/s SMF optical interface standards offer ideal applications for the development of integrated optical transmitter circuits because they require a moderate number of optical components and, in the future, will demand low cost, small size, and reduced power consumption. This combination of factors will lead to significant investment in commercial photonic integration technology and infrastructure by the optics industry.
ACKNOWLEDGMENTS

We would like to thank Dr. Julie Eng, V.P. of Transceiver Engineering, and Dr. Mark Donovan, V.P. of OSA Engineering, both at Finisar Corp.; and Dr. Len Ketelsen, V.P. of Engineering, and Dr. Uzi Koren, Chief Technical Officer, both at CyOptics, Inc., for their support. We would also like to thank Dr. Mehdi Asghari of Kotura, Inc. for the PLC photograph of Fig. 3.
REFERENCES

[1] T. L. Koch and U. Koren, "Semiconductor Photonic Integrated Circuits," J. Quantum Electronics, vol. 27, no. 3, 1991, pp. 641–53.
[2] L. A. Coldren, "InP-Based Photonic Integrated Circuits," Conf. Lasers and Electro-Optics, paper CTuBB1, 2008.
[3] IEEE 802.3ba 40 Gb/s and 100 Gb/s Ethernet Task Force Public Area; http://www.ieee802.org/3/ba/index.html
[4] M. Aoki et al., "InGaAs/InGaAsP MQW Electro-Absorption Modulator Integrated with a DFB Laser Fabricated by Band-Gap Energy Control Selective Area MOCVD," J. Quantum Electronics, vol. 29, no. 6, 1993, pp. 2088–96.
[5] D. F. Welch et al., "Large-Scale InP Photonic Integrated Circuits: Enabling Efficient Scaling of Optical Transport Networks," IEEE J. Selected Topics Quantum Electronics, Jan.–Feb. 2007, pp. 22–31.
[6] J.-O. Wesstrom et al., "State-of-the-Art Performance of Widely Tunable Modulated Grating Y-Branch Lasers," Optical Fiber Commun. Conf., paper TuE2, 2004.
[7] L. J. P. Ketelsen et al., "Multiwavelength DFB Laser Array with Integrated Spot Size Converters," J. Quantum Electronics, vol. 36, no. 6, 2000, pp. 641–48.
[8] A. Himeno et al., "Silica-Based Planar Lightwave Circuits," IEEE J. Selected Topics Quantum Electronics, Nov.–Dec. 1998, pp. 913–24.
[9] K. Kato and Y. Tohmori, "PLC Hybrid Integration Technology and Its Application to Photonic Components," IEEE J. Selected Topics Quantum Electronics, Jan.–Feb. 2000, pp. 4–13.
[10] Y.-T. Han et al., "Fabrication of a TFF-Attached WDM-Type Triplex Transceiver Module Using Silica PLC Hybrid Integration Technology," J. Lightwave Tech., vol. 24, no. 14, Dec. 2006, pp. 5031–38.
BIOGRAPHIES

CHRIS COLE [SM] ([email protected]) received a B.S. in aeronautics and astronautics and B.S. and M.S. degrees in electrical engineering from the Massachusetts Institute of Technology. He is a director at Finisar Corp., Sunnyvale, California. At Hughes Aircraft Co. (now Boeing SDC) and then M.I.T. Lincoln Laboratory, he contributed to multiple imaging and communication satellite programs. Later, he consulted on telecom ICs for the Texas Instruments DSP Group and Silicon Systems Inc. (now Teridian). At Acuson Corp. (now Siemens Ultrasound), he was one of the architects of the Sequoia coherent imaging ultrasound platform, where he was also director of hardware and software development groups. As a principal consultant with the Parallax Group, he carried out signal processing analysis and product definition for several imaging and communication systems. He now manages the development of 40 Gb/s and 100 Gb/s LAN and WAN optical transceivers at Finisar (which acquired his previous company, Big Bear Networks).

BERND HUEBNER ([email protected]) holds an M.S. in physics from Technische Universität Darmstadt, Germany, and a Ph.D. in physics from Julius-Maximilians-Universität, Würzburg, Germany. He is a manager of OSA engineering at Finisar. At Deutsche Telekom, he researched fabrication and characterization of low-dimensional compound semiconductor structures and DFB lasers with spot size conversion. Then at Aifotec AG, he was responsible for the development of an FGL module based on a hybrid photonic platform, which included designing lasers and manufacturing processes. He now manages the development of OSA technologies for FTTH, 10 Gb/s, 40 Gb/s, and 100 Gb/s transceivers at Finisar.

JOHN E. JOHNSON [SM] ([email protected]) received a B.S. and Ph.D. in electrical engineering from Cornell University. He is a manager of photonics design at CyOptics, Inc. He worked for three years at National Semiconductor in analog IC manufacturing. He then joined AT&T Bell Laboratories (later Lucent Technologies Bell Laboratories), then Agere Systems Research, and then TriQuint Semiconductor, where he led research into a broad range of InP photonic integrated circuits, including EMLs, EA-modulated and CW high-power tunable DBR lasers, spot-size converted lasers, and SOAs operating at bit rates up to 160 Gb/s. This included bringing to market award-winning EML and tunable transmitter products. Then, at T-Networks (later Apogee Photonics, now part of CyOptics after an acquisition), he became the manager of photonics development, responsible for the design and manufacturing ramp-up of several 10 Gb/s and 40 Gb/s EA-modulated devices. He now leads InP chip design and product development. He is the author or co-author of more than 50 peer-reviewed papers and 13 patents (issued and pending).
TOPICS IN OPTICAL COMMUNICATIONS
A Total-Cost-of-Ownership Analysis of L2-Enabled WDM-PONs Klaus Grobe and Jörg-Peter Elbers, ADVA AG Optical Networking
ABSTRACT

Next-generation access networks must provide bandwidths in the range of 50–100 Mb/s per residential customer. Today, most broadband services are provided through copper-based VDSL or fiber-based GPON/EPON solutions. Candidates for next-generation broadband access networks include several variants of WDM-PONs. The total cost of ownership of these solutions is determined mainly by operational expenditures, where the cost of energy is one of the major contributors. We show that a combination of WDM-PON with active L2 switching can minimize the total cost of ownership while at the same time offering the highest scalability for future bandwidth demands.
INTRODUCTION

Broadband services such as video-on-demand (increasingly available in high definition) and high-speed Internet applications call for residential access bit rates that will quickly exceed 50 Mb/s in the near future. Because of the competitive environment (i.e., because some providers offer these services at guaranteed bit rates), service providers must be able to offer such bit rates without the high levels of oversubscription common today. In addition, applications such as teleconferencing and video/photo uploads are making the access traffic of residential customers more symmetrical in nature. As a consequence, a new and future-proof access infrastructure is required. Delivering more bandwidth at a lower cost per bit per second is a significant challenge for service providers. Although capital expenditures (CapEx) sometimes receive all the attention, operational expenditures (OpEx) are the biggest contributor to total cost of ownership (TCO), and OpEx reduction remains a key objective for service providers. Main OpEx categories include service planning and provisioning; operations, administration, and maintenance (OAM); and the energy cost for active sites and systems. In total, these greatly exceed capital expenditures. One proposal to significantly reduce OpEx is to reduce complexity in metropolitan-area, backhaul, and access networks [1]; by lowering the number of active sites such as points of presence (PoPs) and local exchanges (LXs), primarily OAM cost is reduced. Concentrating higher-layer functionality
(e.g., layer-2 switching, layer-3 routing) in fewer sites and replacing part of this functionality with lower-layer transport limits the total energy consumption of the broadband network, which currently is an area of major concern [2, 3]. Therefore, the reduction of the number of active sites and the potential de-layering of the metro access networks are of primary interest to network operators. The amount of reduction that is targeted depends upon the type of site, with an overall goal of reducing active sites by 75 percent for core PoPs and by as much as 90 percent for local exchanges. Site reduction and network consolidation lead to larger distances for the access and backhaul technology, with the remaining sites being required to serve a significantly larger number of customers. Therefore, next-generation passive optical networks (NG-PONs) must support both high per-customer bit rates and large splitting ratios. Any new network solution must be able to support a variety of market applications to avoid multiple purpose-built solutions and the resultant poor system utilization (i.e., high OAM and energy cost). The three major applications are direct residential access, dedicated enterprise/business access, and wireline/wireless backhauling. Hence, any residential access technology also should support enterprise access and backhaul, with the respective requirements regarding dedicated bandwidths and security. In NG networks, residential fiber access leads to fiber-to-the-home (FTTH) solutions; enterprise access primarily leads to fiber-to-the-building (FTTB) or FTTH solutions; and backhauling leads to a mixture of fiber-to-the-cabinet, -building, or, in general, -node (FTTCab, FTTB, FTTN) configurations.
BROADBAND ACCESS SOLUTIONS

Today, regional Bell operating companies (RBOCs), incumbent local exchange carriers (ILECs), and competitive local exchange carriers (CLECs) mainly deploy two solutions for broadband residential access with up to 50 Mb/s per user: copper-based very-high-bit-rate digital subscriber line (VDSL/VDSL2) or gigabit-capable and Ethernet PONs (GPON/EPON). To increase user bit rates, in PONs the number of customers per PON (i.e., the splitting ratio) must be reduced compared to current deployments, and in VDSL deployments, the copper distance must be further decreased.
Figure 1. Schematic diagrams of broadband access solutions: a) GPON residential access with passive WDM backhaul; b) VDSL-based access together with active CWDM backhaul; c) an active WDM-PON with layer-2 switching.
High bit rates of 1 Gb/s and beyond (as currently required for enterprise access and backhauling) are almost impossible to achieve. This also holds true for upcoming enhancements of GPON/EPON with bit rates of 10 Gb/s and proposed enhancement wavelengths, because near-term backhaul and enterprise access will require multiple GbE or even 10GbE channels. In addition, the simultaneous requirement of very high splitting ratios and longer reach contradicts the use of splitter-based PONs. (A splitting ratio of 1:500, combined with reach requirements in the range of 100 km, can easily lead to an accumulated insertion loss of 60 dB in metro areas with their typically poor fiber quality.) Figure 1 shows schematic diagrams of GPON and VDSL broadband access. For GPON, we assumed downstream and upstream bit rates of 2.5 Gb/s for symmetry reasons (although we note that most commercial deployments are asymmetrical, using a 2.5-Gb/s downlink with only a 1.2-Gb/s uplink). The backhauling is realized with passive wavelength-division multiplexing (WDM), that is, colored network interfaces of the GPON optical line terminal (OLT) that feed directly into passive WDM filters. This approach is known from International Telecommunication Union-Telecommunication (ITU-T) G.695 as the black link and leads to a simple and
cost-effective backhauling architecture. For resiliency, two colored OLT interfaces are used that feed into two independent links. Also shown as part of the access network is a generic layer-2 (L2) aggregation switch (AGS, where AGS1 and AGS2 refer to two levels of aggregation, as typically found in large metropolitan area networks). In the VDSL scenario, digital subscriber line access multiplexers (DSLAMs) are connected with two GbE uplink (backhaul) interfaces. The DSLAM backhauling is based on a coarse wavelength division multiplexing (CWDM) ring system that uses add/drop multiplexers for up to four individual GbE services at an aggregated line rate of 4.3 Gb/s. This approach provides a very cost-effective solution for protected services and offers higher total system capacity when compared to a black-link scenario with 1-Gb/s interfaces. Because both scenarios (GPON and VDSL) do not support 1-Gb/s and 10-Gb/s backhauling and enterprise traffic, they are complemented by point-to-point Ethernet links. For simplicity, these additions are not shown in Fig. 1. WDM-PON is commonly accepted as a future-proof solution for NG access and backhaul networks [4]. Advantages are scalable bandwidth, long reach (low insertion-loss filters, optional amplification), and the possibility to individually adapt the per-wavelength bit rate
and splitting ratio. These characteristics make a WDM-PON a good choice for a unified network solution that simultaneously supports residential access, business access, and backhauling applications. WDM-PONs have been discussed frequently in the literature in recent years [5, 6]. At dedicated user data rates below 1 Gb/s, WDM-PONs currently are more expensive than GPON and EPON for residential access. In addition, the maximum wavelength count limits the obtainable splitting ratio. The solution to both of these problems is to share individual wavelengths, thus sharing the cost among several users. High total splitting ratios of up to 1:500 or more can be achieved, derived as the sum of the individual per-wavelength splitting ratios. (Examples are 40-wavelength dense wavelength-division multiplexing (DWDM) PONs with per-wavelength splits between 1:8 and 1:16.) The per-wavelength splitting ratio can be selected individually and lower than in GPON/EPON to achieve higher per-customer bandwidths.
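As a quick sanity check of these numbers, the total split is simply the wavelength count times the per-wavelength split; a minimal sketch:

# Total WDM-PON splitting ratio as wavelengths x per-wavelength split.
wavelengths = 40
for per_lambda in (8, 12, 16):
    print(f"{wavelengths} wavelengths x 1:{per_lambda} -> total split 1:{wavelengths * per_lambda}")
# 40 x 1:12 reproduces the 1:480 total split assumed for WDM-PON/L2 in Table 1.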
Figure 1c shows a DWDM-PON with an active remote node. In this node, an integrated WDM/layer-2 (WDM/L2) access switch aggregates/segregates sub-wavelength subscriber traffic. Advantages are WDM/layer-2 interworking with minimum optical/electronic/optical (O/E/O) conversion and full end-to-end management. It provides an efficient combination of WDM and layer-2 functionality (scalability, reach, packet aggregation, OAM functions, and resilience). In particular, it offers very high splitting ratios in conjunction with high per-customer bit rates and very high link-loss budgets. Active remote nodes (or active PONs, though this is a contradiction in itself) were proposed previously [7]. They also can be used to deploy amplifiers for extending the maximum system reach. Such amplifiers (or low-cost regenerators) can be located in street cabinets, basements of buildings (as in FTTB), or in the remains of former local exchanges (which network operators may want to keep because they also contain passive patch frames). In general, a certain number of active
Solution-specific parameters | GPON | WDM-PON/L2 | VDSL2
ONU/modem power consumption (typ.) | 10 W | 6 W | 9 W
OLT/DSLAM power consumption (typ.) | 2 W, incl. black-link I/Fs | 6 W | 13 W, incl. active CWDM I/Fs
Remote node power consumption (typ.) | N/A | 4 W | N/A
Splitting ratio | 1:32 (GPON) | 1:480 (total) | 1:24 (DSLAM)
Maximum distance LX/RN/Cab-CP | 20 km | 80 km | 1 km
Maximum backhauling distance | 80 km | 120 km | 80 km
OAM cabinets | 100% | 20% | 0%
OAM LX | 70% | 100% | 50%
OAM PoP | 95% | 100% | 90%
Planning + provisioning | 100% | 60% | 100%

Common parameters
# of residential customers | 1,000,000
# of enterprise customers | 10,000
# of active system generations | 2.5 generations in 25 years
Cost per MWh for large industrial customers (2008 value) | €80/MWh
Digging cost per fiber-pair meter | €50/m (urban area)
Mean residential (FTTB/H) digging distance / mean sharing factor | 200 m (shared between 5 customers)
Mean business (FTTB) digging distance | 500 m (not shared)
OpEx ratio cabinet/LX | 1:100
OpEx ratio cabinet/PoP | 1:200
Table 1. Parameters of TCO analysis.
components, which are integrated into an NG-PON, can help to reduce the overall complexity of active sites and networks while simultaneously supporting high aggregated bandwidths, high splitting ratios, and maximum reach.
TCO ANALYSIS

For the TCO analysis, all major cost categories are considered. All solution scenarios bring fiber closer, or directly, to the end customer. Because a new, passive fiber infrastructure is required, a lifetime of 20–25 years is typically considered. Generally, costs are split into CapEx and OpEx, where OpEx is divided into the following main contributors:
• Planning and provisioning
• OAM
• Solution-independent overhead
• Energy cost
Table 1 gives an overview of the most relevant parameters and assumptions of our TCO analysis. In general, it is difficult to precisely predict all the OpEx aspects because often even service providers lack detailed numbers. Where numbers are lacking, we use reasonable assumptions derived from base data from large network operators (European ILECs). Uncertainties in the data are accounted for by error bars. In Table 1, the energy consumption figures relate to single clients (optical network units [ONUs] and modems, where applicable). The OAM numbers refer to the capability of the solutions to reduce the number of the respective active sites, as compared to the site-reduction goals stated earlier. This capability does not depend upon the absolute site numbers. The relative planning and provisioning cost for WDM-PON/L2 is given relative to VDSL and GPON, which both include WDM backhaul and additional point-to-point Ethernet solutions for dedicated enterprise access. The two OpEx ratios at the bottom of Table 1 express how manned sites (PoPs and LXs) are penalized relative to active cabinets that are run fully by remote control. These numbers are relevant for weighting the OAM contribution of the different types of sites. They can vary from network operator to network operator. OAM and energy cost are both major OpEx contributors. They both heavily and directly depend on the access solution, as well as on the number of remaining active sites (in particular manned PoPs and LXs), which in turn also depends on the access technology. Obviously, a solution that enables very high splitting ratios and maximum reach best supports site-number reductions. If the solution contains active components (as the WDM-PON/L2 solution does), there is a direct trade-off between site-number reduction and the usage of active sites, for example, cabinets. Integrated active components like WDM/L2 access switch blades can still help reduce the complexity and energy consumption of large active sites. Hence, deriving a globally optimized network and solution concept can be an iterative task and is left for further study. As already pointed out, energy consumption is a major contributor to TCO if the relevant lifetime is considered. In many areas of a network (core routing, for example), lifetime energy consumption cost
can exceed CapEx. A problem that occurs when calculating TCO over the lifetime is the unpredictable nature of energy cost over a 20- to 25-year period. In recent years, a severe energy cost increase has been observed, reaching more than 30 percent for the period between 2004 and 2007. This is shown in Fig. 2 for both the United States and the main 15 European Union (EU) countries. Hence, the energy cost increase is a sensitive parameter for TCO studies. Here, we used three different values for the annual energy cost increase (AECI): 2 percent per year for a very small increase (which we consider to be overly optimistic), 5 percent per year for a medium increase, and 10 percent per year for a moderately high cost increase. We assumed an infrastructure lifetime of 25 years for our calculations.

We assumed a large number of broadband customers in order to calculate the impact of any backhaul solution correctly. One million broadband customers with a guaranteed (non-oversubscribed) bandwidth of ~80 Mb/s each were modeled. In addition, we also included a lower number (10,000) of business customers with higher bandwidth requirements of 1 Gb/s (90 percent) and 10 Gb/s (10 percent), respectively. Total CapEx and energy consumption were calculated as the sums of the respective contributions from the customer equipment (CE) (i.e., ONUs or modems), the remote nodes (RNs, including protection mechanisms between OLT/LX and remote node), the OLT/LX equipment, and the backhaul equipment (i.e., integrated, active, or passive WDM). We used publicly available specifications of commercial equipment, together with estimates of the cost of components. We acknowledge that the particular equipment and vendors we selected may not be the best or cheapest in class, but to the best of our knowledge, we collected reasonably aggressive cost and energy consumption figures.

Figure 2. Energy cost increase for large business customers in the United States (top, in US$/MWh) and the European Union (bottom, in €/MWh) since 2000; the increase between 2004 and 2007 was roughly 33 percent (US) and 32 percent (EU).
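The compounding effect of the AECI can be reproduced in a few lines of Python. The sketch below uses Table 1 figures but counts only the ONU power of the WDM-PON/L2 solution, so it is a deliberate lower bound rather than the full model of this article.

# Lifetime energy cost under a compounding annual energy cost increase (AECI).
# ONU-only lower bound using Table 1 values; the full model also counts
# remote nodes, OLT/LX, and backhaul equipment.

base_price_eur_mwh = 80.0      # 2008 energy price (Table 1)
years = 25                     # infrastructure lifetime
customers = 1_000_000
onu_power_w = 6.0              # WDM-PON/L2 ONU (Table 1)

energy_mwh_per_year = customers * onu_power_w * 8760 / 1e6  # W -> MWh/year

for aeci in (0.02, 0.05, 0.10):
    cost_eur = sum(energy_mwh_per_year * base_price_eur_mwh * (1 + aeci) ** y
                   for y in range(years))
    print(f"AECI {aeci:.0%}: ONU-only lifetime energy cost ~ {cost_eur / 1e6:,.0f} M EUR")

At 10 percent AECI, the 25-year cost factor is roughly three times that at 2 percent, which is the lever behind the ranking changes discussed below.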
Figure 3. TCO (in M€) of a broadband access rollout for 1,000,000 residential and 10,000 business customers; a), b), and c) show different values of the expected annual energy cost increase (AECI): 2, 5, and 10 percent, respectively.
Figure 4. Energy cost (in M€) comparison for different values of assumed AECI.
The TCO results for the three different AECI parameters and access solutions are illustrated in Figs. 3a, 3b, and 3c. Obviously, the absolute TCO range, and also the ranking of the three different solutions, depends on energy cost. For all AECI parameters, WDM-PON/L2 is at the lower end of the TCO range. We attribute this result to its preferable combination of splitting ratio, reach, and close WDM/L2 integration. The WDM-PON/L2 configuration also yields relatively low energy cost in conjunction with low planning, provisioning, and OAM cost. Whereas L2-enabled WDM-PON remains the most beneficial for all AECI parameters, the ranking of GPON versus VDSL2, both with WDM backhauling, changes with increasing energy cost. We verified that the result of the TCO analysis is stable against variations of any of the parameters, as indicated by the error bars. Only if all the parameters of WDM-PON/L2 are simultaneously adjusted in one direction, and the respective parameters of the GPON or VDSL rollouts are changed in the opposite direction, can the ranking of the broadband access solutions shown in Fig. 3 be altered.

As a further check, we also compared our TCO numbers to public reference data. As an example, both the OpEx/CapEx split and the TCO range agree reasonably well with data available from other sources [8, 9]. The common statement that OpEx can be as high as 90 percent of TCO also is confirmed by our modeling of the low-CapEx solutions. As the last verification step, we calculated the monthly cost to the service provider of a single-user connection. Not included here is any cost for the provisioning of content (e.g., for video-on-demand [VoD] services). The cost is obtained as the ratio of the TCO to the number of customers and months of service, and it amounts to 23–29 Euro for medium AECI (5 percent). The result compares well to current 16-Mb/s service prices for asymmetric digital subscriber line (ADSL2+).

Our TCO analysis contains a number of potential sources of systematic errors. First, comprehensive and precise specifications, or at least estimates, are available neither for all component costs and energy consumption figures nor for the impact of closing down active sites. In particular, our cabinet/LX/PoP OpEx ratio numbers are based upon reasonable assumptions. Second, each large network operator is faced with its own individual infrastructure (in terms of fiber and site coverage and distribution), which has an impact on our mean distance numbers. Lastly, the cost-per-MWh number we used is valid for large industrial customers only, whereas a significant portion of the energy is consumed at the customer premises, where energy costs are much higher. Thus, our energy cost figures must be interpreted as a lower bound for total energy cost and an upper bound for the energy cost borne by the service providers.

The assumption regarding the AECI has a massive impact on the TCO. This is detailed in Fig. 4 for all AECI values and access solutions considered herein. From Fig. 4, the highest energy cost is obtained for an extensive VDSL rollout. The respective energy consumption is related mainly to the short maximum reach of VDSL, which translates into a network
structure with more densely spaced active sites, with the related impact on energy, OAM, and, in general, staff cost. On the other hand, from Fig. 3 it can be seen that VDSL is more expensive than GPON (or EPON) only in cases of high AECI. Generally, both solutions are in the same TCO range, especially given the (operator-specific) uncertainties indicated by the error bars. Within these uncertainties, the combined WDM-PON/L2 configuration is cheaper by 16–22 percent. The main reasons are the integration of all applications (access, backhaul) into one solution with end-to-end service provisioning and management, and the combination of high splitting ratio, high per-client bandwidth, and high maximum reach.

The contribution of energy cost to TCO for the minimum and maximum AECI (2 percent and 10 percent) is shown in Fig. 5. The figure also states the relative portions of equipment CapEx, duct (digging) cost, and the other main OpEx contributors. For low AECI (2 percent), the energy cost for a lifetime of 25 years is in the same range as CapEx. For high AECI (10 percent), the energy cost clearly exceeds CapEx, by a factor of 2–5.5 depending on the solution that is chosen. This is the main reason why a massive VDSL rollout degrades with increasing AECI compared to the other solutions. WDM-PON/L2 again stays stable due to its low to moderate energy consumption.

Figure 5. Relative cost distribution for broadband access solutions (top: AECI = 2 percent; bottom: AECI = 10 percent), broken down into energy, duct cost, CapEx, site/OAM cost, and planning and provisioning plus overhead.
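The 23–29 Euro per-user monthly figure quoted above can be cross-checked directly; the TCO value below is read off Fig. 3b and is therefore only an approximation.

# Per-user monthly cost: TCO / (customers x months of service).
tco_eur = 7.0e9           # assumed mid-range TCO for AECI = 5% (approx., from Fig. 3b)
customers = 1_010_000     # 1,000,000 residential + 10,000 business
months = 25 * 12          # 25-year lifetime

print(f"~{tco_eur / (customers * months):.0f} EUR per user per month")  # text: 23-29 EUR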
CONCLUSION

Three solutions for combined 80 Mb/s broadband residential access and 1 Gb/s (10 Gb/s) business access/backhauling are investigated under mass-rollout conditions:
• A GPON configuration with passive WDM uplink
• A WDM-PON with active layer-2 switching in the remote node
• A VDSL2 configuration with a CWDM/4 Gb/s add-drop multiplexer (ADM) ring
When including aspects of energy consumption and the reduction of active (and manned) operator sites, we conclude that the WDM-PON/L2 solution leads to the lowest TCO of the three. The result is attributed to the very high splitting ratio, long reach, simplified OAM, and integrated end-to-end management and service provisioning of the WDM-PON/L2 solution. The WDM technology provides high data rates for business customers and backhauling applications on a wavelength level. The L2 switch in the remote node allows the sharing of wavelengths between multiple residential users. This approach helps to accommodate near-term bandwidth demands around 100 Mb/s for FTTH users, along with the possibility to migrate residential customers to higher bandwidths on an individual basis.
REFERENCES

[1] R. Davey et al., "Long-Reach Access and Future Broadband Network Economics," 33rd Euro. Conf. Optical Commun., Berlin, Germany, Sept. 2007.
[2] J. Baliga et al., "Photonic Switching and the Energy Bottleneck," IEEE/LEOS Conf. Photonics in Switching, San Francisco, CA, Aug. 2007.
[3] R. Tucker, "Optical Packet-Switched WDM Networks: A Cost and Energy Perspective," OFC/NFOEC, San Diego, CA, Feb. 2008.
[4] K. Grobe and J.-P. Elbers, "PON in Adolescence: From TDMA to WDM-PON," IEEE Commun. Mag., vol. 46, no. 1, Jan. 2008, pp. 26–34.
[5] A. Banerjee et al., "Wavelength-Division Multiplexed Passive Optical Network (WDM-PON) Technologies for Broadband Access: A Review," J. Optical Net., vol. 4, no. 11, Nov. 2005, pp. 737–58.
[6] J. Prat et al., "Next Generation Architectures for Optical Access," 32nd Euro. Conf. Optical Commun., Cannes, France, Sept. 2006.
[7] G. Talli and P. D. Townsend, "Feasibility Demonstration of 100 km Reach DWDM SuperPON with Upstream Bit Rates of 2.5 Gb/s and 10 Gb/s," OFC '05, Anaheim, CA, Mar. 2005.
[8] A. Heckwolf, "Employing Passive WDM to Cost Optimise Transport Network Operations," IIR WDM & Next Generation Optical Net. Conf., Cannes, France, June 2008.
[9] Analysis Mason, "The Costs of Deploying Fibre-Based Next-Generation Broadband Infrastructure," final report, Broadband Stakeholder Group, Sept. 2008, ref. no. 12726-371; http://www.broadbanduk.org/component/option,com_docman/task,doc_view/gid,1036/Itemid,63/
BIOGRAPHIES

KLAUS GROBE [M'94] ([email protected]) received his Dipl.-Ing. and Dr.-Ing. degrees in electrical engineering from Leibniz University, Hannover, Germany, in 1990 and 1998, respectively. Since 2000 he has been with ADVA AG Optical Networking, Germany. He has authored or coauthored three book chapters on WDM and PON technologies, and more than 50 scientific papers. He is a member of the German VDE ITG and ITG Fachgruppe 5.3.3 on Photonic Networks.

JÖRG-PETER ELBERS received his diploma and Dr.-Ing. degrees in electrical engineering from Dortmund University, Germany, in 1996 and 2000, respectively. Since September 2007 he has been with ADVA AG Optical Networking, where he is currently vice president, advanced technology in the CTO office. From 1999 to 2001 he was with Siemens AG, Optical Networks, most recently as director of network architecture in the Advanced Technology Department. In 2001 he joined Marconi Communications (now Ericsson) as director of technology in the Optical Product Unit. He has authored and co-authored more than 60 scientific publications and holds 14 patents.
TOPICS IN OPTICAL COMMUNICATIONS
The Road to Carrier-Grade Ethernet Kerim Fouli and Martin Maier, Optical Zeitgeist Laboratory, INRS
ABSTRACT

Carrier-grade Ethernet is the latest step in the three-decade development of Ethernet. This work describes the evolution of Ethernet technology from the LAN toward carrier-grade operation through an overview of recent enhancements. After reviewing native Ethernet and its transport shortcomings, we introduce the major carrier-grade upgrades. We first discuss the evolution of layer-2 architectures. Then, we detail the service specifications and their QoS and traffic engineering requirements. Finally, we describe the new OAM and resilience mechanisms.
INTRODUCTION

Ethernet has enjoyed great success as the major enabling technology for local area networks (LANs). By 1998, Ethernet accounted for 80 percent of the LAN installed base, and Ethernet port shipments exceeded 95 percent of the market share [1]. Originally set to 10 Mb/s in the 1980s, Ethernet transmission rates have evolved to higher speeds ever since, reaching 10 Gb/s upon the approval of the IEEE 802.3ae standard in 2002. Ten-gigabit Ethernet (10GbE) was the first Ethernet standard to include interoperability with carrier-grade transmission systems such as synchronous optical network/synchronous digital hierarchy (SONET/SDH). In addition to its high-speed LAN operation, 10GbE was shown to integrate seamlessly with metropolitan and wide area networks [2]. The drive toward higher transmission rates continues as the IEEE 802.3 Higher Speed Study Group (HSSG) works on the standardization of 100GbE by 2010. Ethernet passive optical network (EPON), the extension of Ethernet LANs to access environments, is poised to undergo its first tenfold bit-rate leap, from 1 Gb/s (802.3ah, 2004) to 10 Gb/s (802.3av) by 2009.

Despite increased speed and interoperability with carrier-grade technology, Ethernet has remained exclusively a LAN and access network technology. Indeed, traditional Ethernet lacks essential transport features such as wide-area scalability; resilience and fast recovery from network failures; advanced traffic engineering; and operation, administration, and maintenance (OAM) capabilities. Consequently, it falls short of delivering the quality of service (QoS) and security-guarantee levels required by typical transport-network service level agreements (SLAs).
Carrier-grade Ethernet (CGE) is an umbrella term for a number of industrial and academic initiatives that aim to equip Ethernet with the transport features it is missing. In doing so, CGE efforts aspire to extend the all-Ethernet domain beyond the first mile and well into the metropolitan and long-haul backbone networks. The thrust toward CGE is driven by the promise of a reduced protocol stack; the ensuing reductions in cost and complexity are expected to be considerable. Currently installed metro and wide area networks are dominated largely by SONET, by multiprotocol label switching (MPLS), and, to a lesser extent, by asynchronous transfer mode (ATM) technology, even though Internet Protocol (IP) routers lead new installations. (It is worth mentioning that, having originally been conceived as a hardware switch, MPLS was subsequently revised to become a software feature.) Notwithstanding, most data traffic originates from and terminates at Ethernet LANs. In addition, most applications and services, such as video and business services, are migrating toward Ethernet platforms [3]. The current growth of voice over Internet Protocol (VoIP) and the expected emergence of Internet Protocol television (IPTV) imply an acceleration of that trend, leading to prohibitive costs associated with network layering and interfacing [4]. In [5], the authors show that implementing CGE in backbone networks could result in a 40 percent port-count reduction and 20–80 percent capital expenditure (CAPEX) savings compared to various non-Ethernet backbone technology alternatives.

After reviewing the evolution of currently deployed Ethernet technology in the next section, this work introduces the proposed major carrier-grade enhancements. We then present the new IEEE 802.1 hierarchical forwarding architecture; the following two sections offer an overview of the emerging service, traffic engineering, resilience, and OAM standards. We conclude in the final section.
THE EVOLUTION TOWARD CARRIER GRADE

NATIVE ETHERNET

Ethernet is a family of standardized networking technologies originally designed for LANs. The first experimental Ethernet was a carrier-sense multiple-access with collision detection (CSMA/CD) coaxial bus network that was designed in 1972 at Xerox and operated at a speed of 2.94 Mb/s.
Figure 1. Ethernet standardization milestones.
Eleven years elapsed before a 10-Mb/s version was proposed and endorsed as the first IEEE 802.3 standard in 1983. Figure 1 highlights some of the major standardization milestones since 1983. Early on in the standardization process, the IEEE categorized standards by separating the Ethernet physical (PHY) layer and medium access control (MAC) sublayer standards (802.3, upper timeline in Fig. 1) from the data-link layer bridging and management standards (802.1, lower timeline in Fig. 1). The gradual evolution of Ethernet toward higher speeds is clear from the typical tenfold bit-rate increase characterizing the standardization of Fast Ethernet (802.3u) in 1995, Gigabit Ethernet (GbE, 802.3z) in 1998, 10GbE (802.3ae) in 2002, and the projected endorsement of 100GbE (802.3ba) by 2010. Note that the rates shown in Fig. 1 are data rates. Physical-layer clock rates are typically higher due to line-encoding schemes such as 8B/10B in GbE and 64B/66B in 10GbE. The bit-rate leaps of Ethernet were accompanied by major qualitative technology transformations that are apparent in the physical/MAC (802.3) and the bridging and management (802.1) efforts. On the physical and MAC layers, CSMA/CD on thick coaxial cable (802.3-1983) was gradually abandoned in favor of a hub-segmented (802.3c, 1986) and then a switched (802.1D-1990) network capable of operating in full-duplex mode (802.3x, 1997) over various media such as twisted pair and fiber.
In addition to implementing operations such as flow control and link aggregation (802.3ad, 2000) at the MAC level, Ethernet acquired a high degree of management functionality through the enabling of traffic classes (802.1p, 1998), virtual LANs (VLANs, 802.1Q, 1998), and provider bridges (802.1ad, 2005). Furthermore, the virtual topology process of Ethernet shifted from the Spanning Tree Protocol (STP, 802.1D-1990) to the more elaborate Rapid Spanning Tree Protocol (RSTP, 802.1w, 2001) and Multiple Spanning Tree Protocol (MSTP, 802.1s, 2002). Several of the aforementioned network-level amendments are briefly described in the following points.

Switching — Switches and bridges started out as layer-2 devices connecting different LANs. Typically hardware-based, switches have a MAC layer at each port. They build and maintain a source address table (SAT) associating the source addresses of incoming frames with their corresponding ports — a process called learning. If the destination address of an outgoing frame is not in its SAT, a switch acts like a layer-1 repeater by sending a copy of the frame to all output ports. This is called flooding.

Full Duplex — Full-duplex operation refers to the creation of dedicated paths, rather than the use of a shared medium, between nodes. This requires switching and enables nodes to transmit and receive at the same time, using the full capacity of two counter-directional links.
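The learning-and-flooding behavior just described can be captured in a few lines; the following minimal sketch (class name, ports, and addresses are illustrative) is not a full 802.1D bridge (no STP, aging, or VLAN awareness):

# Minimal sketch of layer-2 learning and flooding.
class LearningSwitch:
    def __init__(self, ports):
        self.ports = set(ports)
        self.sat = {}                         # source address table: MAC -> port

    def handle(self, in_port, src_mac, dst_mac):
        self.sat[src_mac] = in_port           # learning: remember where src lives
        out_port = self.sat.get(dst_mac)
        if out_port is None:
            return self.ports - {in_port}     # unknown destination: flood
        if out_port == in_port:
            return set()                      # destination on arrival port: filter
        return {out_port}                     # known destination: forward

sw = LearningSwitch(ports=[1, 2, 3, 4])
print(sw.handle(1, "aa:aa", "bb:bb"))   # bb:bb unknown -> flood to {2, 3, 4}
print(sw.handle(2, "bb:bb", "aa:aa"))   # aa:aa was learned on port 1 -> {1}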
Field | IEEE | ITU | IETF | MEF
Architecture and interfaces | 802.1Q, 802.1ah, 802.1ad, 802.1Qay | G.8010/Y.1306, G.8012/Y.1308 | - | MEF-4, MEF-11, MEF-12
Survivability | 802.1ag, 802.1Qay, 802.1aq | G.8031, G.8032 | - | MEF-2
TE, QoS, and service specifications | 802.1Qay | G.8011/Y.1307 | GELS (GMPLS control of PBT) | MEF-3, MEF-6, MEF-10.1
OAM and network configuration | 802.1ah, 802.1ag, 802.1AB, 802.1ar, 802.1Qau | Y.1730, Y.1731 | - | MEF-7, MEF-15, MEF-16, MEF-17
Security | 802.1AE/af | - | - | -
Table 1. Recent standardization initiatives sorted by standards body and field.
Flow Control — Although switching removed collisions, nodes could still face overflow problems, particularly when faster transmission rates are used at source nodes or intermediate switches. Flow control enables the receiver to regulate incoming traffic by sending PAUSE frames that halt transmission temporarily.

STP — Parallel paths between nodes can create forwarding loops leading to excess traffic or large peaks in broadcast traffic (broadcast storms). By defining a logical tree topology, STP specifies a unique path between any pair of nodes and disables parallel paths, thus eliminating loops. Disabled links are used for backup. Switches/bridges use special frames called bridge protocol data units (BPDUs) to exchange STP information.

Link Aggregation — In disabling parallel links between adjacent nodes, STP blocks valuable bandwidth increases. Link aggregation overcomes this STP limitation and enables nodes to exploit parallel links.

VLAN — A VLAN is essentially a logical partition of the network. VLANs were introduced to split the LAN broadcast domain, thus increasing performance and facilitating management. VLANs were initially communicated implicitly through layer-3 information such as a protocol type or an IP subnet number. The 802.1Q standard introduced the Q-tag, a new frame field that includes an explicit 12-bit VLAN identifier (VLAN ID); a byte-level sketch of the Q-tag follows the last point below.

Traffic Classes — Besides the VLAN ID, the Q-tag also includes a three-bit field used to specify frame priority. An 802.1Q/p switch uses a queuing system capable of recognizing and processing the eight possible priority levels, as detailed in the 802.1p amendment.
RSTP and MSTP — RSTP is an improved version of STP that achieves faster convergence by introducing measures such as more efficient BPDU exchanges. Rather than disabling parallel links as in STP and RSTP, MSTP exploits them by defining different spanning trees for different VLANs.

Ethernet is sometimes dubbed "the cheapest technology that is good enough." Due to its simplicity, low cost, and high level of standardization, it exhibits excellent compatibility and interoperability with complementary technologies. However, these same trademark attributes are the source of a number of fundamental shortcomings as a transport platform.
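Returning to the Q-tag introduced in the VLAN point above, its layout is compact enough to pack directly; a minimal sketch (the helper name is illustrative):

import struct

# Pack an 802.1Q Q-tag: 16-bit TPID (0x8100) + 16-bit tag control field
# (3-bit 802.1p priority | 1-bit DEI/CFI | 12-bit VLAN ID).
def q_tag(vid, priority=0, dei=0):
    assert 0 < vid < 4095, "VIDs 0 and 4095 are reserved"
    tci = (priority << 13) | (dei << 12) | vid
    return struct.pack("!HH", 0x8100, tci)

print(q_tag(vid=100, priority=5).hex())   # -> '8100a064'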
SHORTCOMINGS OF NATIVE ETHERNET

When it comes to delivering transport services, native Ethernet suffers from the following major shortcomings.

Architectural Scalability — In spite of its universal MAC addressing scheme, traditional Ethernet is not scalable to a wide-area environment because of the lack of separation between client and service domains and because of its address-learning method, based on flooding.

Resilience — The Ethernet network response to failure is based on the reconfiguration of malfunctioning spanning trees. Although STP was superseded by RSTP and MSTP, these developments still fall short of expected wide-area network (WAN) protection speeds.

Traffic Engineering (TE) — Native Ethernet requires no traffic engineering and management; up to now, it has relied on other layers to perform basic traffic engineering operations. The enabling of a number of traffic classes through 802.1p does not allow larger networks to provision bandwidth and fine-tune traffic across the whole network. Those TE capabilities are among the fundamental aspects of carrier technologies because they enable QoS guarantees.

OAM — By enabling such operations as network configuration, equipment maintenance, and performance monitoring, OAM resources exploit TE capabilities to enable the delivery of SLA guarantees. Designed for LAN operation, traditional Ethernet lacks such capabilities.

Security — The concern for security grows with the number of network subscribers, making the operations of client authentication and authorization vital. For proper WAN functionality, Ethernet requires integrated security enhancements to address issues such as flooding.

Another important issue for Ethernet is synchronization. Many services and applications require synchronization of time and frequency, namely the distribution of an accurate and reliable time-of-day clock signal and/or frequency reference. The two major applications are mobile backhaul and time-division multiplexing (TDM) circuit emulation, where a frequency reference is required to derive transmission frequencies at a mobile station, or a time reference is required to recover transmitted bits at the edge-TDM emulation points.
Other important applications include audio/video access applications. TDM legacy technologies, such as SONET/SDH, naturally disseminate synchronization signaling. In contrast, reception in 802.3 Ethernet is inherently asynchronous: preamble bits are used to achieve synchronization on a per-frame basis. Therefore, the migration from the TDM infrastructure to Ethernet means the loss of useful disseminated synchronization signaling. Standardization bodies are proposing solutions for Ethernet synchronization at different network layers. Layer-2 and layer-3 synchronization is based on the multicasting of synchronization frames or packets containing timestamps. Receivers subsequently adjust their local time according to the received timestamp, taking into account an evaluation of the transmitter-receiver delay. The standards in this category include IEEE 1588 and its more recent carrier-grade follow-ups developed by the IEEE 802.1 Audio/Video Bridging Task Group (802.1AS, 802.1Qat, and 802.1Qav), as well as the Internet Engineering Task Force (IETF) Network Time Protocol (NTP). Higher-layer synchronization may be affected by packet-delay variation and traffic load. Hence, the true equivalent of TDM synchronization must be resident in the physical layer. A group of International Telecommunication Union-Telecommunication (ITU-T) standards (G.8261, G.8262, and G.8264) provides the ability for physical-layer dissemination of frequency synchronization information similar to SONET/SDH and may form the basis for the reliable implementation of a highly accurate physical-layer time-of-day. These physical-layer ITU-T standardization efforts form the basis of what is called Synchronous Ethernet (SyncE) [6].
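The timestamp exchange underlying protocols in the IEEE 1588/NTP family reduces, in essence, to a four-timestamp calculation under the assumption of a symmetric path; a minimal sketch:

# Two-way time transfer: estimate slave clock offset and path delay from
# four timestamps, assuming symmetric propagation delay.
def offset_and_delay(t1, t2, t3, t4):
    # t1: master send, t2: slave receive (slave clock),
    # t3: slave send (slave clock), t4: master receive
    offset = ((t2 - t1) - (t4 - t3)) / 2   # slave clock minus master clock
    delay = ((t2 - t1) + (t4 - t3)) / 2    # one-way delay estimate
    return offset, delay

# Example: slave clock 1.5 ms ahead, 2 ms one-way delay (times in seconds):
print(offset_and_delay(t1=0.000, t2=0.0035, t3=0.010, t4=0.0105))
# -> (0.0015, 0.002)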
THE ETHERNET CARRIER-GRADE ROADMAP

Carrier-grade Ethernet denotes an all-Ethernet backbone infrastructure enabling an end-to-end frame transport that incorporates typical carrier-grade service and quality guarantees. The following five main objectives are addressed by current standardization efforts:
• Wide-area scalability
• Network resilience and fault recovery
• Advanced traffic engineering and end-to-end QoS
• Advanced OAM
• Security
The current 802.1 standardization efforts are focused on leveraging the existing Ethernet protocols and switch architectures to perform most of those functions on the data-link layer, thus enabling CGE while maintaining backward compatibility with legacy Ethernet equipment. In Fig. 1, the shaded area represents some of the future standard amendments designed to realize CGE. The CGE efforts currently involve major telecommunications standardization bodies such as the IEEE, ITU, IETF, and the Metro Ethernet Forum (MEF). Table 1 shows a classification of some existing and in-progress standards according to their carrier-grade objectives. The latest standardization developments in the IEEE 802.1 working group aim at meeting
the mentioned carrier-grade objectives. Some of the recent and future standards involved are shown within the shaded area in Fig. 1. These include the architectural modifications of provider bridges (PB, 802.1ad, 2005) and provider backbone bridges (PBB, 802.1ah), the resilience instruments of fault management (802.1ag, 2007) and congestion notification (802.1Qau), and the specifications of PBB traffic engineering (PBB-TE, 802.1Qay). Those developments are discussed in greater detail in the remainder of this work.

Figure 2. Ethernet frame evolution [7]: a) the original 802.1 frame (DA, SA, payload); b) 802.1Q virtual LANs (adds the VID); c) 802.1ad provider bridges (C-VID and S-VID); d) 802.1ah provider backbone bridges (B-DA, B-SA, B-VID, and I-SID encapsulating the customer frame). SA: source MAC address; DA: destination MAC address; VID: VLAN ID; C-VID: customer VID; S-VID: service VID; I-SID: service ID; B-VID: backbone VID; B-DA: backbone DA; B-SA: backbone SA.
SCALABILITY THROUGH HIERARCHY

To fulfill carrier-grade objectives, the backbone nodes must deliver traffic that is protected, engineered, and guaranteed rather than best-effort traffic. From an architectural point of view, that implies two qualitative evolutions:
• Moving from a connectionless model supported by spanning trees toward enabling multiple connection-oriented tunnels
• Moving from distributed address learning to centralized path configuration
The step-by-step evolution toward such an architecture implied the introduction of hierarchical layer-2 sublayers, starting with VLANs (802.1Q), then PBs (802.1ad), and finally PBBs (802.1ah). Through tagging and encapsulation, the original 802.3 Ethernet frame shown in Fig. 2a underwent a gradual evolution aimed at preserving its structure for backward compatibility. In this section, we detail the evolution of the network-layer hierarchy and the associated forwarding modes designed to enable carrier-grade operation.
VLAN SWITCHING

The first carrier Ethernet attributes came with the emergence of VLANs. Although VLANs began as mere partitions of the customer enterprise network, they were seen by service
providers as the natural way to differentiate customer networks while maintaining a cheap end-to-end Ethernet infrastructure. In this setting, service providers assign a unique 12-bit VLAN ID (VID) field within the Q-tag to each customer network. VLAN switches add the Q-tag at the ingress node and remove it at the egress node (Fig. 2b). Like MAC address learning, VLAN learning enables VLAN switches to associate new MAC addresses and VIDs dynamically with port information. To do so, VLAN switches maintain one or more filtering databases, depending on the learning process. Two VLAN learning schemes were specified in 802.1Q: independent VLAN learning (IVL) and shared VLAN learning (SVL). For a defined subset of VLANs, SVL uses one common filtering database, hence enabling the sharing of port information among VLANs. IVL, on the other hand, uses one filtering database per VLAN, thus restricting MAC learning to the VLAN space. A notable consequence of IVL is that forwarding is effectively specified by the full 60-bit combination of the destination MAC address and the VID. IVL and SVL became instrumental in enabling service providers to separate the information of customers supported by the same provider VLAN switch. Nevertheless, this use of VLANs ran into scalability issues. The 12-bit VID limited the number of supported customers to a maximum of 4094 (excluding the reserved VIDs 0 and 4095). In addition, the customers required the same VID field to partition and manage their own networks, leading to a further reduction of the range of VIDs available to the service providers.
PROVIDER BRIDGES (Q-IN-Q)

In an effort to mitigate the provider scalability problems, the VLAN plane was further split by introducing an additional Q-tag, represented in Fig. 2c by its VID field. This resulted in two separate VID fields, destined to be used by customers (C-VID) and service providers (S-VID), respectively. The VLAN stacking of two Q-tags was introduced in the 802.1ad standard and is often referred to as Q-in-Q. In spite of service provider switches (PBs) controlling their own S-VID, scalability issues remained unresolved by 802.1ad. First, PBs still were required to learn all attached customer destination addresses (DAs), resulting in potential source-address-table (SAT) overflows and broadcast storms. Second, control frames such as BPDUs were not confined to the provider or customer domains [8]. In addition, because it was designed for enterprise LAN applications, the 12-bit S-VID still was insufficient to perform two key functions simultaneously: the identification of customer service instances and forwarding within the provider network [9].
PROVIDER BACKBONE BRIDGES (MAC-IN-MAC)

The 802.1ah standard draft introduces yet another hierarchical sublayer, this time by means of encapsulation of the customer frame within a provider frame. Backbone edge switches (PBBs)
append their own source address (B-SA) and destination address (B-DA), as well as a backbone VID (B-VID). A new 24-bit field called the service ID (I-SID) also is introduced to identify a customer-specific service instance (Fig. 2d). PBBs complete the separation between customer and provider domains by duplicating the MAC layer, hence the term MAC-in-MAC. In addition, PBBs allow up to 16 million service instances to be defined without affecting the forwarding fields (B-VID, B-SA, and B-DA).
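To visualize the MAC-in-MAC encapsulation of Fig. 2d, the following sketch lays out the backbone header as plain structure members. The layout is ours and deliberately simplified: tag protocol identifiers, priority bits, and the I-TAG flags of 802.1ah are omitted.

```cpp
#include <array>
#include <cstdint>

// Illustrative (not bit-exact) field layout of a provider backbone frame.
struct PbbHeader {
    std::array<uint8_t, 6> b_da;  // backbone destination MAC, 48 bits
    std::array<uint8_t, 6> b_sa;  // backbone source MAC, 48 bits
    uint16_t b_vid;               // backbone VLAN ID, 12 bits used
    uint32_t i_sid;               // service instance ID, 24 bits used
    // ... the untouched customer frame (DA/SA/tags/payload) follows
};

// 24 I-SID bits identify ~16 million service instances without touching
// the forwarding fields (B-VID, B-SA, B-DA).
static_assert((1u << 24) == 16777216u, "2^24 service instances");
```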
PROVIDER BACKBONE TRANSPORT

Although PBBs create a provider infrastructure that is transparent to the customer networks, they still may use automatic best-effort techniques inherited from the LAN, such as xSTP and MAC learning. Such techniques do not meet the configuration requirements of carrier-grade operation. Due to the modularity of the Ethernet specifications, they can be turned off to pave the way for the fine-tuning of management-based processes. Moreover, the creation of a service-provider MAC layer requires a redefinition of the VLAN space to fulfill carrier-grade requirements. In 802.1Qay, provider backbone transport (PBT) is defined by a backbone architecture implemented together with a set of measures to enable traffic engineering. PBT relies on PBBs at the edge of the provider network and PBs to perform forwarding within its core. Rather than multicast trees, VLANs represent connection-oriented, point-to-point (PtP) or multipoint-to-point (MPtP) tunnels (Ethernet switched paths [ESPs]) traversing the core from one PBB to another. A range of B-VIDs is reserved to identify the ESPs. Rather than having global significance, these B-VIDs are tied to destination PBB addresses (B-DA) and can be reused. At each provider switch, the egress port of a frame is determined by the 60-bit combination of B-VID and B-DA. For instance, reserving 16 out of the 4094 B-VIDs implies a theoretical maximum of 16 × 2^48 available ESPs. This eliminates the scalability limits of VLAN stacking and is considered sufficient for transport purposes [7]. To enable PBT, the following measures are required at the provider switches and within the range of B-VIDs allocated for ESPs:
• Disable automatic MAC learning and flooding to enable the configuration of forwarding tables at the management layer.
• Filter (remove) unknown-destination, multicast, and broadcast frames.
• Disable xSTP to allow for loops and alternate path-oriented resilience mechanisms.
• For any given destination PBB, assign a unique B-VID to each ESP.
• Activate IVL.
The latter measure disables MAC address sharing between VLANs. This prevents the egress port associated with one ESP from being altered by the configuration of an alternate ESP toward the same destination PBB. Consequently, at the provider switches, the egress port is determined locally by the full 60-bit B-VID/B-DA sequence.
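A hypothetical sketch of the resulting forwarding operation at a PBT core switch follows; the key packing and names are ours, but the logic mirrors the measures above: the table is populated by the management layer rather than learned, and a lookup miss filters the frame instead of flooding it.

```cpp
#include <cstdint>
#include <optional>
#include <unordered_map>

// Pack the 12-bit B-VID and the 48-bit B-DA into one 60-bit lookup key.
inline uint64_t espKey(uint16_t bvid, uint64_t bda /* lower 48 bits */) {
    return (static_cast<uint64_t>(bvid & 0x0FFF) << 48) |
           (bda & 0xFFFFFFFFFFFFULL);
}

struct PbtForwarder {
    // Configured by the NMS for each provisioned ESP; never learned.
    std::unordered_map<uint64_t, int> egressByKey;

    std::optional<int> egressPort(uint16_t bvid, uint64_t bda) const {
        auto it = egressByKey.find(espKey(bvid, bda));
        if (it == egressByKey.end())
            return std::nullopt;  // unknown destination: filter, don't flood
        return it->second;
    }
};
```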
Note that provider switches can revert to normal VLAN switching outside the prescribed set of B-VIDs. Moreover, ESPs must be created in both directions to establish connection symmetry, a feature that is required for the proper operation of Ethernet customer networks [7]. The PBT architecture is illustrated in the example of Fig. 3. At the management plane, two paths are configured from PBB-X to PBB-Y to interconnect two customer networks. The forwarding tables of all the traversed PBB and PB switches are updated accordingly. A distinct B-VID is assigned to each path. At PBB-X, customer frames are encapsulated. PBB-X, PBB-Y, and either VLAN-1 or VLAN-2 are entered in the B-SA, B-DA, and B-VID fields, respectively. The customer frame is recovered and forwarded to the customer layer at the destination edge node (PBB-Y). In the provider network core, enabling IVL ensures that PBs maintain distinct routes for different B-VIDs although the same destination address appears, as is the case when VLAN-1 and VLAN-2 cross in Fig. 3. (Note that in Fig. 3, the switches within the provider network may be connected either directly or by intermediate optical cross-connects.)

Figure 3. Provider backbone transport (PBT) example [7].

The PBT architecture allows for total path configuration from source to destination. Multiple ESPs can be created between PBBs for traffic engineering, load balancing, protecting connections, and separating service/customer instances.
TOWARD TRAFFIC ENGINEERING AND QOS-ENABLED SERVICES

The CGE architectures and switching technology described in the previous section serve to deliver Ethernet virtual connections (EVCs) between user-network interfaces (UNIs). The MEF defines a UNI as an interface between the equipment of the subscriber and the equipment of the service provider. The UNI runs service-level data-, control-, and management-plane functions on both client and network sides. An EVC is described as a set of frame streams sharing a
common forwarding treatment and connecting two or more UNIs [10]. In its Ethernet services definitions, the MEF advances three EVC connection types: PtP, multipoint-to-multipoint (MPtMP), and point-to-multipoint (PtMP) [10]. These correspond to the three service types shown in Fig. 4 and described below. An E-LINE is a PtP service connecting two UNIs. Two implementations are proposed by the MEF: Ethernet private line (EPL) and Ethernet virtual private line (EVPL). An EPL replaces a TDM private line and uses dedicated UNIs for PtP connections, whereas an EVPL uses UNIs with EVC-multiplexing capabilities to replace services such as frame relay (FR) and ATM. An E-LAN is an MPtMP service offering full transparency to customer control protocols and VLANs (transparent LAN service [TLS]). As for E-LINEs, the two E-LAN categories are Ethernet private LAN (EPLAN) and Ethernet virtual private LAN (EVPLAN). Similar to EPON, the E-TREE service offers PtMP connectivity from a root UNI to the leaf UNIs and MPtP connectivity from the leaves to the root. The MEF specifications further associate several service attributes with UNIs and EVCs [10]:
• The Ethernet physical interface determines the PHY/MAC sublayer features such as speed and physical interface.
• The bandwidth profile is a set of five traffic parameters that characterize the connection, namely committed information rate (CIR), committed burst size (CBS), excess information rate (EIR), excess burst size (EBS), and color mode (CM). The CM is a binary parameter indicating whether a UNI employs the MEF color-marking system discussed below.
• MEF-10.1 includes definitions of frame delay, jitter, loss, and service availability for PtP and multipoint EVCs. Together, these quantities form the performance parameter attributes of EVCs.
• The class of service (CoS) attribute is a frame prioritization scheme based on the physical port, 802.1p priority bits, or higher-layer service-differentiation methods.
• The service frame delivery attribute determines whether client data and service frames transmitted over an EVC are unicast, multicast, or broadcast. In addition, this attribute specifies whether client layer-2 control frames are processed, forwarded, or discarded.
• VLAN tag support determines whether the UNI supports the various Q-tag fields.
• The multiplexing capability of a UNI is indicated by the service multiplexing attribute.
• Bundling establishes a mapping between customer VLAN IDs (C-VIDs) and EVCs whereby a single EVC can carry more than one VLAN. All-to-one bundling is further defined as a binary parameter mapping all customer VLANs to one EVC.
• Security filters represent the frame filtering attributes of a UNI. For instance, a UNI may restrict access to source MAC addresses within an access control database [10]. A description of the current security vulnerabilities of STP-based Ethernet networks can be found in [8].

Figure 4. Metro Ethernet Forum (MEF) service definitions: E-LINE (point-to-point EVC between two UNIs), E-LAN (multipoint-to-multipoint EVC), and E-TREE (point-to-multipoint EVC).

The QoS requirements for CGE are detailed within the bandwidth profile attribute specification in MEF-10.1. The set of traffic parameters defining the bandwidth profile (CIR, CBS, EIR, EBS, and CM) is controlled and enforced by a two-rate, three-color marker (trTCM) algorithm that is run at the UNI. Input frames are marked green, yellow, or red using a token bucket model (a sketch of such a marker appears at the end of this section). Whereas green frames are delivered and
red frames are discarded, yellow frames are delivered only if excess bandwidth is available [10, 11]. Further possible QoS and fairness control techniques based on layer-2 frame marking are discussed in [11]. In conjunction with the security filters and service frame delivery attributes, the bandwidth profile algorithm enables traffic engineering through traffic policing. Other traffic-engineering mechanisms, such as traffic shaping and load balancing, are enabled by the described attributes. By introducing connection-oriented networking models, the IEEE 802.1Qay (PBT) standard provides a concrete embodiment of the MEF traffic-engineering requirements. Nevertheless, the standardization of the CGE traffic-engineering infrastructure still is ongoing. According to [12], the current standardization initiatives still do not fully address the stringent traffic management requirements of future applications such as IPTV.
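To make the bandwidth profile enforcement concrete, here is a minimal, color-blind sketch of such a two-rate, three-color marker, with a committed bucket (CIR/CBS) and an excess bucket (EIR/EBS). It is our own simplification: MEF-10.1 details such as the coupling flag and color-aware operation are left out.

```cpp
#include <algorithm>
#include <cstdint>

enum class Color { Green, Yellow, Red };

class TrTcm {
    double cir_, eir_;   // committed/excess information rates, bytes per second
    double cbs_, ebs_;   // committed/excess burst sizes (bucket depths), bytes
    double tc_, te_;     // current token counts
    double last_ = 0;    // time of last update, seconds
public:
    TrTcm(double cir, double cbs, double eir, double ebs)
        : cir_(cir), eir_(eir), cbs_(cbs), ebs_(ebs), tc_(cbs), te_(ebs) {}

    Color mark(double now, uint32_t frameBytes) {
        tc_ = std::min(cbs_, tc_ + cir_ * (now - last_));  // refill committed bucket
        te_ = std::min(ebs_, te_ + eir_ * (now - last_));  // refill excess bucket
        last_ = now;
        if (frameBytes <= tc_) { tc_ -= frameBytes; return Color::Green; }   // deliver
        if (frameBytes <= te_) { te_ -= frameBytes; return Color::Yellow; }  // deliver if spare capacity
        return Color::Red;                                                   // discard
    }
};
```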
RESILIENCE AND OAM
Resilience is defined as the ability of a network to detect faults and recover from them [8]. The carrier-grade resilience reference is set by SONET/SDH technology, with sub-50-ms recovery times. Such performance levels require a strong OAM framework comprising fault and performance management. Fault management includes failure detection, localization, notification, and recovery mechanisms, whereas performance management aims at monitoring and reporting network performance metrics such as throughput, frame loss, and bit error rates. Through coordinated initiatives, IEEE, ITU-T, and MEF developed a number of standards addressing carrier-grade resilience and OAM. Whereas IEEE 802.3ah (Ethernet in the first mile [EFM]) specifies link-level OAM processes such as automatic discovery, IEEE 802.1ag (connectivity fault management [CFM]) defines end-to-end VLAN-level OAM functions (described below). At the ITU-T, Study Group 13 introduces a broader set of OAM functions within Y.1731 (OAM functions and mechanisms for Ethernet-based networks). In addition, the completed G.8031 (Ethernet protection switching) of Study Group 15 and the ongoing G.8032 (Ethernet ring protection switching) initiatives focus on VLAN-level protection mechanisms. Focusing on the service level, the MEF specifications (MEF-15, 16, and 17) highlight OAM requirements related to SLA performance and edge-node management functions. CFM and Y.1731 specify layer-2 OAM functions designed for connection-oriented settings (e.g., PBT) and are compatible with existing link-level EFM processes. The remainder of this section describes those functions, as well as the proposed Ethernet restoration mechanisms.
OAM ARCHITECTURE AND MECHANISMS

CFM and Y.1731 introduce a hierarchical architecture where eight management levels enable customers, service providers, and network operators to run OAM processes in their own maintenance domains (MDs). The edge nodes of the various nested MDs are called maintenance end points (MEPs) and initiate OAM mechanisms,
whereas intermediate nodes (maintenance intermediate points [MIPs]) respond to them. The example of Fig. 5 shows a PtP EVC service delivered by a provider over two adjacent operators. Although the customer and provider MDs support the PtP connection from end to end, each operator establishes a distinct MD over its segment of the EVC below the provider MD level. The nested MDs run their OAM functions independently.

Figure 5. Illustration of OAM MDs in CFM.

The OAM frame format defined by CFM and Y.1731 is an Ethernet frame with a data field partitioned into OAM-specific fields. The latter include an MD-level field and an operation code (OpCode) associating an OAM frame with one OAM function. Unless specified, an OAM frame does not pass its domain boundaries. CFM specifies three fault management functions:
• Continuity check (CC): MEPs within an MD multicast periodic CC frames to each other. CC messages can be used to detect loss of connectivity or network misconfiguration and to measure frame loss.
• Link trace (LT): An LT request frame sent by a MEP toward a target node triggers LT reply frames from all intermediate nodes back to the source MEP. This procedure enables fault localization and the monitoring of network configuration.
• Loopback (LB): An LB message (or MAC ping) sent by a MEP to any node triggers an LB reply. This process is used to check the responsiveness of intermediate nodes and verify bidirectional connectivity.
The following are some of the functions introduced within Y.1731:
• Alarm indication signal (AIS) messages are used to notify nodes that a fault was reported to the network management system (NMS). AIS suppresses further alarms within the MD and at higher MD levels.
• Due to their configurable test data, test frames can be used to measure throughput, frame loss, and bit error rates.
• A locked (LCK) signal indicates to higher MD levels that maintenance operations are taking place at a MEP, thus suppressing false alarms.
• The maintenance communication channel (MCC) function sets up a channel for vendor-specific OAM applications such as remote maintenance.
• The experimental/vendor-specific OAM (EXP/VSP) frame types are unspecified and reserved for temporary or vendor-specific OAM extensions.
In addition, Y.1731 defines loss, delay, and delay-variation (jitter) measurements using appropriate OAM functions.
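As an illustration, the two-way delay measurement behind these functions can be written down directly from the four timestamps of a DMM/DMR exchange. The helper names are ours; real Y.1731 implementations add frame formats, MEG levels, and hardware timestamping.

```cpp
#include <cmath>
#include <cstdint>
#include <vector>

// t1: DMM sent, t2: DMM received, t3: DMR sent, t4: DMR received.
// Subtracting (t3 - t2) removes the peer's processing time, so the two
// clocks need not be synchronized for a two-way measurement.
inline int64_t twoWayFrameDelay(int64_t t1, int64_t t2, int64_t t3, int64_t t4) {
    return (t4 - t1) - (t3 - t2);
}

// Delay variation over a series of measurements, here as the standard
// deviation of the delay (one common definition of jitter).
inline double delayVariation(const std::vector<int64_t>& delays) {
    if (delays.empty()) return 0.0;
    double mean = 0.0, var = 0.0;
    for (int64_t d : delays) mean += static_cast<double>(d);
    mean /= delays.size();
    for (int64_t d : delays) var += (d - mean) * (d - mean);
    return std::sqrt(var / delays.size());
}
```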
ETHERNET PROTECTION AND RESTORATION

In Ethernet LANs, the family of STP protocols performs protection and restoration functions through topology reconfiguration. The associated recovery times vary from 1 s to 60 s [8] and fall short of carrier-grade requirements. Besides, the loop prevention mechanisms of xSTP are no longer required in the CGE connection-oriented settings. G.8031 specifies SONET/SDH-style 1+1 unidirectional and 1+1 or 1:1 bidirectional protection switching for PtP paths or segments. To coordinate bidirectional protection switching, G.8031 includes an Automatic Protection Switching (APS) protocol. APS uses OAM frames identified by a specific OpCode. In PBT, end-to-end protection switching between edge PBB nodes is accomplished by provisioning a protection path for each working path. Loss of connectivity along one path automatically triggers the source PBB to replace the working-path B-VID with the protection-path B-VID in outgoing frames [7].
The pre-configuration of adequate protection paths is left to the NMS. A number of alternative mechanisms, both proprietary and academic, are mentioned in [8, 11], usually based on redundancy mechanisms or STP enhancements. The co-existence of various protection and restoration mechanisms requires careful definition and prioritization. Network failures usually are resolved faster and more efficiently within the layer where they occur [5]. Accordingly, in CFM, cross-level fault notifications from lower MD levels have higher priority.
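A hypothetical sketch of that edge behavior: the source PBB monitors CC arrivals on the working path and, once continuity is lost, stamps outgoing frames with the protection-path B-VID. The field names and the loss criterion (roughly 3.5 missed CC intervals) are illustrative, not normative.

```cpp
#include <cstdint>

struct ProtectedEsp {
    uint16_t workingBvid;
    uint16_t protectionBvid;   // pre-configured by the NMS
    uint64_t ccIntervalUs;     // continuity check transmission period
    uint64_t lastCcRxUs = 0;   // arrival time of the last CC frame
    bool onProtection = false;

    void ccReceived(uint64_t nowUs) { lastCcRxUs = nowUs; }

    // B-VID to stamp into the next outgoing frame toward this destination.
    uint16_t txBvid(uint64_t nowUs) {
        if (!onProtection && nowUs - lastCcRxUs > ccIntervalUs * 7 / 2)
            onProtection = true;  // loss of connectivity: switch the path
        return onProtection ? protectionBvid : workingBvid;
    }
};
```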
CONCLUSION

The targets of the current CGE evolution are the connection-oriented architectures and control tools established through SONET/SDH, MPLS, and IP. Although the complexity of such amendments departs from the trademark simplicity of Ethernet, powerful drivers back the transition. Indeed, as long as Ethernet dominates LAN technology, native transport enhancements will translate into cost and complexity reductions compared to hybrid solutions. Whether that economic advantage outweighs leveraging deployed MPLS equipment is a legitimate issue. Nonetheless, the effort to advance CGE solutions is merely starting. The recent IETF generalized-MPLS Ethernet label switching framework draft, for example, aims at giving Ethernet the advanced control features of GMPLS, such as fast reroute (FRR). In contrast to the protection-based resilience procedures described above, FRR represents a powerful restoration mechanism designed to achieve carrier-grade restoration times (tens of milliseconds) in a more bandwidth-efficient fashion through the activation of pre-computed alternate routes.
REFERENCES
[1] R. Breyer and S. Riley, Switched, Fast, and Gigabit Ethernet, 3rd ed., New Riders, 1998.
[2] J. Hurwitz and W. Feng, "End-to-End Performance of 10-Gigabit Ethernet on Commodity Systems," IEEE Micro, vol. 24, no. 1, Jan./Feb. 2004, pp. 10–22.
[3] R. Ramaswami, "Optical Networking Technologies: What Worked and What Didn't," IEEE Commun. Mag., vol. 44, no. 9, Sept. 2006, pp. 132–39.
[4] P. A. Bonenfant and S. M. Leopold, "Trends in the U.S. Communications Equipment Market: A Wall Street Perspective," IEEE Commun. Mag., vol. 44, no. 2, Feb. 2006, pp. 141–47.
[5] A. Kirstädter et al., "Carrier-Grade Ethernet for Packet Core Networks," Proc. Asia Pacific Optical Commun. Conf. (SPIE Conf. Series), South Korea, Oct. 2006, vol. 6354.
[6] J.-L. Ferrant et al., "Synchronous Ethernet: A Method to Transport Synchronization," IEEE Commun. Mag., vol. 46, no. 9, Sept. 2008, pp. 126–34.
[7] D. Allan et al., "Ethernet as Carrier Transport Infrastructure," IEEE Commun. Mag., vol. 44, no. 2, Feb. 2006, pp. 134–40.
[8] M. Huynh and P. Mohapatra, "Metropolitan Ethernet Network: A Move from LAN to MAN," Comp. Net., vol. 51, no. 17, Dec. 2007, pp. 4867–94.
[9] G. Parsons, "Ethernet Bridging Architecture," IEEE Commun. Mag., vol. 45, no. 12, Dec. 2007, pp. 112–19.
[10] A. Kasim, "Carrier Ethernet," Ch. 2, Delivering Carrier Ethernet: Extending Ethernet beyond the LAN, McGraw-Hill, 2007, pp. 45–104.
[11] A. Iwata, "Carrier-Grade Ethernet Technologies for Next Generation Wide Area Ethernet," IEICE Trans. Commun., vol. E89-B, no. 3, Mar. 2006, pp. 651–60.
[12] S. Vedantham, S.-H. Kim, and D. Kataria, "Carrier-Grade Ethernet Challenges for IPTV Deployment," IEEE Commun. Mag., vol. 44, no. 7, July 2006, pp. 24–31.
BIOGRAPHIES

KERIM FOULI ([email protected]) is a Ph.D. student at INRS. He received his B.Sc. degree in electrical engineering at Bilkent University in 1998 and his M.Sc. degree in optical communications at Laval University in 2003. He was a research engineer with AccessPhotonic Networks, Quebec City, Canada, from 2001 to 2005. His research interests are in the area of optical access and metropolitan network architectures with a focus on enabling technologies. He is the recipient of a two-year doctoral NSERC Alexander Graham Bell Canada Graduate Scholarship for his work on the architectures and performance of optical coding in access and metropolitan networks.

MARTIN MAIER ([email protected]) was educated at the Technical University of Berlin, Germany, and received M.Sc. and Ph.D. degrees, both with distinction (summa cum laude), in 1998 and 2003, respectively. In the summer of 2003 he was a postdoctoral fellow at the Massachusetts Institute of Technology, Cambridge. Since May 2005 he has been an associate professor at the Institut National de la Recherche Scientifique (INRS), Montreal, Canada. He was a visiting professor at Stanford University, California, from October 2006 through March 2007. He is the founder and creative director of the Optical Zeitgeist Laboratory (http://www.zeitgeistlab.ca). His research aims at providing insights into technologies, protocols, and algorithms that are shaping the future of optical networks and their seamless integration with broadband wireless access networks. He is the author of the books Optical Switching Networks (Cambridge University Press, 2008) and Metropolitan Area WDM Networks: An AWG-Based Approach (Kluwer Academic, 2003).
OPTICAL COMMUNICATIONS: DESIGN, TECHNOLOGIES, and APPLICATIONS
A QUARTERLY SERIES IN IEEE COMMUNICATIONS MAGAZINE
CALL FOR PAPERS

The Optical Communications series invites manuscript submissions in the areas of:
• Optical communications networks, including optical-IP
• Fault management of optical networks and systems
• Optical DWDM engineering and system design
• Optical DWDM components and their applicability to optical networks
• Emerging technologies in optical communications
• Emerging standards in optical communications
The Optical Communications series is published quarterly in IEEE Communications Magazine. The series has a particular focus on the areas listed above and provides better visibility for the intended audience for papers in the exciting field of optical communication networks. Only quality papers are considered for publication. Submitted papers should be written for a wide international audience in optical communications, in language and at a level suitable for the practicing communications engineer. Published papers should not exceed six magazine pages (approximately 4500 words), should not contain more than six to eight graphics/tables/photographs, and should not include more than 20 references. Manuscripts must be submitted through the magazine's submissions Web site at
http://commag-ieee.manuscriptcentral.com/ On the Manuscript Details page please click on the drop-down menu to select
Optical Communications Supplement
MISSION

The purpose of the Optical Communications series is to bring together and better serve the growing community working in the field of optical communications. We accomplish this mission by addressing the needs of a large number of engineers and engineering managers for the dissemination of state-of-the-art information useful for their practice via in-depth and yet easy-to-understand presentations. Currently, such needs are not satisfied by existing commercial publications. The mission of the Optical Communications series is to publish quality papers in the area of optical communication networks, systems, subsystems, and components at a level suitable for the practicing optical communications engineer. All papers are peer reviewed. The Optical Communications series accepts advertising from companies with a special interest in optical communications, particularly in the areas of components, testing, and software design.
TOPICS IN OPTICAL COMMUNICATIONS
A Comparison of Dynamic Bandwidth Allocation for EPON, GPON, and Next-Generation TDM PON

Björn Skubic, Ericsson Research
Jiajia Chen, Zhejiang University and Royal Institute of Technology (KTH/ICT)
Jawwad Ahmed and Lena Wosinska, Royal Institute of Technology (KTH/ICT)
Biswanath Mukherjee, University of California, Davis
ABSTRACT

Dynamic bandwidth allocation in passive optical networks presents a key issue in providing efficient and fair utilization of the PON upstream bandwidth while supporting the QoS requirements of different traffic classes. In this article we compare the typical characteristics of DBA, such as bandwidth utilization, delay, and jitter at different traffic loads, within the two major standards for PONs, Ethernet PON and gigabit PON. A particular PON standard sets the framework for the operation of DBA and the limitations it faces. We illustrate these differences between EPON and GPON by means of simulations for the two standards. Moreover, we consider the evolution of both standards to their next-generation counterparts with a bit rate of 10 Gb/s and the implications for DBA. A new simple GPON DBA algorithm is used to illustrate GPON performance. It is shown that the length of the polling cycle plays a crucial but different role in the operation of DBA within the two standards. Moreover, only minor differences regarding DBA for current and next-generation PONs were found.
INTRODUCTION

Passive optical networks (PONs) provide a powerful point-to-multipoint solution to satisfy the increasing capacity demand in the access part of the communication infrastructure, between service provider central offices (COs) and customer sites. A PON consists of an optical line terminal (OLT) located at the provider CO and a number of optical network units (ONUs) at the customer premises. In a time-division multiplexing (TDM) PON, downstream traffic is handled by broadcasts from the OLT to all connected ONUs, while in the upstream direction an arbitration mechanism is required so that only a single ONU is allowed to transmit data at a given point in time because
of the shared upstream channel. The start time and length of each transmission time slot for each ONU are scheduled using a bandwidth allocation scheme. In order to achieve flexible sharing of bandwidth among users and high bandwidth utilization, a dynamic bandwidth allocation (DBA) scheme that can adapt to the current traffic demand is required. Two major standards for PONs have emerged, Ethernet PON (EPON) [1] and gigabit PON (GPON) [2]. Due to significant differences between the EPON and GPON standards (different control message formats, guard times, etc.), there are many implications for the DBA approaches and how an efficient bandwidth allocation scheme should be designed for these two standards. To the best of our knowledge, not much research has addressed a qualitative and quantitative comparison of DBA within EPON and GPON. Therefore, the objective of this article is to provide insight into the working mechanisms and typical performance characteristics of the DBA schemes under a variety of network conditions in these two competing standards. Furthermore, our study is extended to the next-generation TDM PONs (i.e., 10G EPON and 10G GPON). The remainder of this article is organized as follows. In the next section we outline the key differences between the EPON and GPON standards in the context of DBA algorithms. We then discuss next-generation TDM PONs. We describe the DBA algorithms for EPON and GPON used in the article. We then define the performance parameters and methods used in this article. Results are presented in the following section, and conclusions are stated in the final section.
EPON AND GPON STANDARDS

In this section we compare the two standards, EPON and GPON, which set the framework for the operation of DBA. The two standards
embrace different philosophies, with EPON based on a simple standard with looser hardware requirements, and GPON based on a relatively complex standard with tighter hardware requirements and a larger focus on quality of service (QoS) assurance. On a detailed level, the two philosophies boil down to differences in guard times, overheads, and other parameters influencing bandwidth utilization within the two systems. These underlying differences govern how DBA should be designed in order to cope with imposed traffic requirements and fairness policies while still maintaining efficient utilization of the PON's shared upstream channel. Most research to date regarding DBA has addressed EPON [3–7]. However, GPON faces a series of distinct challenges, and new DBA algorithms tailored specifically to the GPON standard need to be developed. In Table 1 the differences related to bandwidth allocation in both standards are listed. The following subsections describe the differences between the EPON and GPON standards in more detail.

Table 1. Some differences related to bandwidth allocation in the EPON [1] and GPON [2] standards.

Category | EPON | GPON
Line rate | Downstream/upstream: 1.25 Gb/s; bit rate after 8B/10B line coding: 1 Gb/s | Downstream: 1.24416/2.48832 Gb/s; upstream: 1.24416 Gb/s; bit rate after scrambling line coding: 1.24416 Gb/s
Guard time | Laser on-off: 512 ns; automatic gain control (AGC): 96, 192, 288, or 400 ns; clock and data recovery (CDR): 96, 192, 288, or 400 ns | Laser on-off: ≈25.7 ns; preamble and delimiter: 70.7 ns
Frame size | Ethernet frame: 64–1518 bytes | General encapsulation method (GEM) header: 5 bytes; frame fragment: ≤1518 bytes
Overhead for bandwidth allocation | GATE/REPORT: 64 bytes (smallest size of Ethernet frame) | Status report message: 2 bytes
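To put rough numbers on this asymmetry (a back-of-the-envelope estimate of ours that ignores frame preambles and physical-layer overhead fields), consider an EPON polling cycle of 1 ms at 1 Gb/s (125,000 bytes) shared by N = 16 ONUs, the configuration assumed in the simulations below:

\[
\frac{N\,(B_{\mathrm{guard}} + B_{\mathrm{REPORT}})}{C\,T_{\mathrm{cycle}}}
= \frac{16 \times (125 + 64)\ \text{bytes}}{125{,}000\ \text{bytes}} \approx 2.4\ \text{percent},
\]

doubling to roughly 4.8 percent if the cycle shrinks to 0.5 ms. GPON, in contrast, spends about 16 × (15 + 2) = 272 bytes of every 19,440-byte 125 μs frame (roughly 1.4 percent) on guard times and status reports, independently of how often the DBA updates the bandwidth map.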
EPON

In EPON both downstream and upstream line rates are 1.25 Gb/s, but due to the 8B/10B line encoding, the bit rate for data transmission is 1 Gb/s. Guard times between two neighboring time slots, composed of the laser on-off time, automatic gain control (AGC), and clock and data recovery (CDR), are used to separate the transmissions from different ONUs in a given cycle. IEEE 802.3ah has specified values (classes) for AGC and CDR. In EPON, the Multipoint Control Protocol (MPCP) is implemented at the medium access control (MAC) layer to perform bandwidth allocation, the auto-discovery process, and ranging. As illustrated in Table 1, two control messages,
REPORT and GATE, used for bandwidth allocation, are defined in [1]. A GATE message carries the granted bandwidth information from the OLT to the ONU in the downstream direction, while the REPORT message is used by an ONU to report its bandwidth request to the OLT in the upstream direction. Their exchange allows the time slots to be assigned according to the traffic demand of the individual ONUs and the bandwidth available. The sizes of REPORT and GATE are defined as the smallest Ethernet frame size (64 bytes).
GPON

The GPON standard is defined in the International Telecommunication Union — Telecommunication Standardization Sector (ITU-T) G.984.x series of Recommendations sponsored by the full service access network (FSAN) group. Several upstream and downstream rates up to 2.48832 Gb/s are specified in the standard. Here we consider the 1.24416 Gb/s upstream rate to make it comparable with EPON. The GPON protocol is based on the standard 125 μs (~19,440 bytes at 1.24416 Gb/s) periodicity used in the telecommunications industry. This periodicity provides certain efficiency advantages over EPON, as messages (control, buffer report, and grant messages) can efficiently be integrated into the header of each 125 μs frame. In order to efficiently pack Ethernet frames into the 125 μs frame, Ethernet frame fragmentation has been introduced. Within GPON each Ethernet frame or frame fragment is encapsulated in a general encapsulation method (GEM) frame including a 5-byte GEM header. In addition, upstream QoS awareness has been integrated in the GPON standard with the introduction of the concept of transport containers (T-CONTs), where a T-CONT type represents a class of service.
GPON thus provides a simple and efficient means of setting up a system for multiple service classes. Several status reporting modes can be set within GPON. For our comparison with EPON we consider mode 0, the simplest status reporting mode. Hence, our comparison of EPON and GPON is based on a comparable type of communication mode between the OLT and ONUs, where the ONUs send REPORT messages or status reports containing buffer sizes, while the OLT sends the ONUs GATE messages or grants containing the granted time slots.

Figure 1. Diagram of the bandwidth requesting algorithm used in the simulations for GPON: within each 125 μs frame, the ONUs send status reports to the OLT and receive updated grants from it; the DBA processing spans one polling cycle.
NEXT-GENERATION TDM PONS

Both the current GPON and EPON standards are on the verge of evolving to their respective next-generation standards supporting 10 Gb/s downstream bandwidth along with higher upstream bandwidth. There is an implication for the DBA problem depending on how the different forms of bandwidth overhead scale in the upgraded versions of the two standards. 1G EPON-based solutions have experienced great market penetration and been widely deployed, particularly in the Asian market. In order to cater to the ever-increasing bandwidth requirements of end customers, the 10G EPON Task Force, known as IEEE 802.3av [8], was formed in 2006 with an initiative to standardize requirements for next-generation 10G EPON. The IEEE 802.3av draft focuses on a new physical layer standard while still keeping changes to the logical layer at a minimum, such as maintaining all the MPCP and operations, administration, and maintenance (OAM) specifications from the IEEE 802.3ah standard. 10G EPON will use 64B/66B line coding with a line rate of 10.3125 Gb/s instead of the 8B/10B line coding with a line rate of 1.25 Gb/s used in 1G EPON. For EPON we assume that the guard time is the same in time units while control messages (i.e., REPORT/GATE) are the same in byte units. The most likely next-generation 10G GPON candidate will have a 2.48832 Gb/s upstream line rate. This upstream line rate has already been defined in ITU-T Recommendations. For larger upstream line rates, approaching 10 Gb/s, our assessment is based on estimates of the overheads for a possible future Recommendation. For GPON we assume a line rate of 9.95328 Gb/s and also that the sizes of the guard time, preamble, and delimiter remain the same in units of time, whereas the physical layer overhead (PLO) fields, GEM headers, and status report messages remain the same in units of bytes.
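The effective data rates quoted above follow directly from the line rates and the efficiency of the line codes:

\[
1.25\ \text{Gb/s} \times \tfrac{8}{10} = 1\ \text{Gb/s}, \qquad
10.3125\ \text{Gb/s} \times \tfrac{64}{66} = 10\ \text{Gb/s},
\]

so the move from 8B/10B to 64B/66B cuts the line-coding overhead from 20 percent to about 3 percent.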
DBA SCHEMES

Many DBA algorithms [3–7] have been developed, especially for EPONs, to cope with the challenges of high bandwidth utilization and QoS provisioning. However, it is difficult to pick a single best algorithm due to the multidimensional performance requirements expected of a DBA algorithm. In addition, some algorithms introduce increased complexity when supporting higher traffic demand, QoS, fairness, and so on. In order to make the comparison between GPON and EPON more general, we consider algorithms for EPON and GPON where each allocated byte corresponds to a byte residing in the buffer, a scheme we chose to refer to as bandwidth requesting. In contrast to traffic monitoring and predictive algorithms, bandwidth requesting algorithms have the advantage of high bandwidth utilization. For EPON, Interleaved Polling with Adaptive Cycle Time (IPACT) [3] is considered one of the most efficient DBA algorithms in terms of bandwidth utilization. In IPACT, when the ith ONU is transmitting Ethernet frames in the upstream, the OLT informs the (i + 1)st ONU of the grant information, including the starting time and the size of the granted bandwidth. The (i + 1)st ONU may be polled before the transmission from the ith ONU is completed. Transmission slots for different ONUs are scheduled in a given cycle such that the first bit from the (i + 1)st ONU arrives at the OLT only after the guard time has passed (i.e., after the OLT receives the last bit from the ith ONU). In addition, two basic requirements need to be fulfilled:
• The GATE message carrying grant information can arrive at the (i + 1)st ONU in time.
• The bandwidth granted for the ith ONU is equal to the bandwidth requested by the ith ONU.
If these two requirements can be satisfied, the bandwidth in the upstream direction can be fully utilized. If the grant from the OLT is always equal to the bandwidth an ONU reported/requested, IPACT may lead to the situation that an ONU with a heavy traffic load monopolizes the upstream channel so that frames from other ONUs are delayed. To solve this problem, a limited service discipline has been proposed [3] where a maximum guaranteed bandwidth $B_i^{\max}$ is predefined for each ONU. If the bandwidth requested by the ith ONU is less than $B_i^{\max}$, the granted bandwidth from the OLT is the same as the requested bandwidth. Otherwise, the grant for the ith ONU is equal to $B_i^{\max}$. $B_i^{\max}$ sets an upper bound on the maximum bandwidth allocated to each ONU in a given cycle.
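Grant sizing under the limited service discipline reduces to a per-ONU cap, as in this minimal sketch (a simplification of ours; variable names are hypothetical):

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <vector>

// grants[i] = min(requests[i], bmax[i]): each ONU receives what it
// requested, capped by its maximum guaranteed bandwidth per cycle.
std::vector<uint32_t> limitedServiceGrants(const std::vector<uint32_t>& requests,
                                           const std::vector<uint32_t>& bmax) {
    std::vector<uint32_t> grants(requests.size());
    for (std::size_t i = 0; i < requests.size(); ++i)
        grants[i] = std::min(requests[i], bmax[i]);  // cap prevents monopolization
    return grants;
}
```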
Within GPON, upstream transmission is based on an upstream bandwidth map being broadcast to the ONUs every 125 μs. The bandwidth map is updated at regular time intervals by the DBA algorithm. Here, we propose a simple bandwidth requesting algorithm (Fig. 1) that works as follows. Within a given 125 μs upstream frame, all ONUs are scheduled to transmit buffer reports. The OLT takes the buffer reports and subtracts the previously allocated but not yet utilized grants (grants issued for the current polling cycle) to form an ONU request. This request is then used for the subsequent bandwidth allocation. The updated grants are thereafter transmitted to the ONUs together with requests for new buffer reports. The main difference between this algorithm and IPACT is that this algorithm uses a fixed polling cycle, and all the ONUs are polled essentially simultaneously (within a 125 μs frame).
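A sketch of the per-ONU request computation just described (the structure and names are illustrative; how the fixed cycle is then divided among the resulting requests is a separate policy choice, which is not shown):

```cpp
#include <cstdint>

struct OnuState {
    uint64_t reportedBytes;     // buffer occupancy from the latest status report
    uint64_t outstandingBytes;  // granted for the current polling cycle, not yet used
};

// Subtracting outstanding grants ensures no byte is granted twice, even
// though reports arrive every 125 us while the cycle may be longer.
inline uint64_t effectiveRequest(const OnuState& onu) {
    return onu.reportedBytes > onu.outstandingBytes
               ? onu.reportedBytes - onu.outstandingBytes
               : 0;
}
```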
PERFORMANCE PARAMETERS AND METHOD

DEFINITION OF PERFORMANCE PARAMETERS

An efficient DBA algorithm strives to achieve as high bandwidth utilization as possible while still satisfying typical traffic requirement constraints such as packet delay, jitter, and throughput. In this article we define bandwidth utilization as the ratio between throughput and the system bit rate after line coding (Table 1; note the difference in bit rate after line coding between EPON and GPON). For the packet delay we refer to the waiting time for a packet in the ONU buffer (i.e., excluding the propagation delay for transmission to the OLT). The average delay as well as the corresponding 90 percent confidence interval is measured. Jitter is defined as the standard deviation of the delay. Furthermore, in this article we also introduce upstream efficiency, defined as
\[
\frac{\sum_{i,j} B^{\mathrm{sent}}_{i,j}}{\sum_{i,j} \left( B^{\mathrm{grant}}_{i,j} + B_{\mathrm{guard}} + B_{\mathrm{Control}} \right)},
\]

where $B^{\mathrm{grant}}_{i,j}$ denotes the size of the bandwidth granted by the OLT to the jth ONU in the ith polling cycle, while $B^{\mathrm{sent}}_{i,j}$ denotes the size of the bandwidth the jth ONU really used for sending Ethernet frames based on the grant issued by the OLT in the ith polling cycle. In EPON there is no frame fragmentation, and unused slot remainders (USRs) can be caused by the difference between $B^{\mathrm{grant}}_{i,j}$ and $B^{\mathrm{sent}}_{i,j}$. In GPON this type of USR is avoided by the use of frame fragmentation. The upstream efficiency as defined here provides more insight into the operation of the DBA algorithm and the real reasons for throughput loss in EPON and GPON.
SIMULATION METHODS

Our performance comparison of PON systems is based on simulation studies. For EPON we have used a C++ based discrete event driven simulator developed for the work presented in [7, 9], while for GPON we have used an event driven C++ based GPON simulator developed at Ericsson Research. Furthermore, both simulators have been modified and enhanced to simulate next-generation TDM PONs. Table 2 shows the primary simulation parameters for EPON and GPON.

Table 2. Simulation parameters for EPON and GPON.

Symbol | Description | EPON (1G / 10G) | GPON (1G / 10G)
C | Bit rate after line coding | 1 Gb/s / 10 Gb/s | 1.24416 Gb/s / 9.95328 Gb/s
N | Number of ONUs | 16 | 16
D | Propagation delay between each ONU and the OLT | 100 μs (corresponds to a distance of 20 km) | 100 μs (20 km)
Q | Maximum buffer size for each ONU | 1 Mb / 10 Mb | 1.24416 Mb / 9.95328 Mb
Bguard | Guard bandwidth between two neighboring slots | 125 bytes (~1 μs) / 1250 bytes (~1 μs) | 15 bytes (~96 ns) / 120 bytes (~96 ns)
BControl | Length of control message | BREPORT (BGATE) = 64 bytes | 2 bytes

In these simulations we have used the traffic generator provided by Kramer [3] to model realistic self-similar traffic conditions. For traffic generation we used 256 Pareto substreams with a Hurst parameter of 0.8 and a packet size distribution taken from traffic measurements by Broadcom [10]. This traffic generator was used to generate 500,000 Ethernet frames per ONU for each value of offered load. Here the offered load is defined for the entire system and includes only the payload without any overhead. The simulation was ended after the first ONU sent the last bit of its last packet. Hence, the total simulation time of the system was determined by the ONU with the highest traffic load. For a fair comparison we set an ONU buffer size scaled according to the PON bit rate. For example, for 1G EPON the buffer size was set to 1 Mb, while for 10G EPON it was set to 10 Mb (Table 2). It should be mentioned that the buffer size used in our simulations of the 1G system is on the same order of magnitude as in commercial GPON products.
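For readers who want to reproduce the traffic model, a single Pareto on/off substream can be sketched as follows; aggregating 256 such substreams approximates self-similar traffic. The class and scale parameters are our own placeholders (Kramer's generator [3] should be consulted for the exact construction), using the standard relation alpha = 3 - 2H, i.e., alpha = 1.4 for H = 0.8.

```cpp
#include <cmath>
#include <cstdint>
#include <random>

class ParetoOnOff {
    std::mt19937_64 rng_;
    std::uniform_real_distribution<double> u_{0.0, 1.0};
    double alpha_, xmOn_, xmOff_;  // shape and minimum on/off durations
public:
    ParetoOnOff(double hurst, double xmOn, double xmOff, uint64_t seed)
        : rng_(seed), alpha_(3.0 - 2.0 * hurst), xmOn_(xmOn), xmOff_(xmOff) {}

    // Inverse-CDF sampling: X = x_m * (1 - U)^(-1/alpha), heavy-tailed.
    double pareto(double xm) {
        return xm / std::pow(1.0 - u_(rng_), 1.0 / alpha_);
    }
    double nextOnPeriod()  { return pareto(xmOn_);  }  // burst duration
    double nextOffPeriod() { return pareto(xmOff_); }  // idle gap
};
```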
PERFORMANCE EVALUATION
The starting point for our comparative study is to look at how the length of the polling cycle affects the performance of the DBA algorithms. The polling cycle length is a crucial design parameter. It influences almost all performance parameters, such as bandwidth utilization, delay, and jitter. It also has implications for hardware requirements such as the processing power for the DBA, buffer sizes, and the complexity of the algorithm. Deciding the length of the polling cycle is a matter of finding an optimal balance between different performance requirements. This balance will now be sought for EPON and GPON.

Figure 2. Simulation results for the 1G system (a–d): a) bandwidth utilization; b) delay with 90 percent confidence interval; c) jitter; d) upstream efficiency for EPON under different offered traffic loads.
EPON AND GPON

In Fig. 2 we present the results for bandwidth utilization, upstream efficiency, delay, and jitter for both EPON and GPON. Note that for EPON the results are given as a function of an imposed maximum polling cycle, while for GPON the results are given as a function of a fixed polling cycle. The considered EPON algorithm has an adaptive polling cycle where the average polling cycle is always smaller than the maximum polling cycle. Let us first summarize the main conclusions that can be drawn from Fig. 2. For the EPON algorithm, performance is more strongly dependent on the polling cycle than for GPON. Furthermore,
as shown in Figs. 2a and 2b, in EPON both bandwidth utilization and delay are seen to improve as the maximum polling cycle is increased. This trend continues until a saturation point above 2 ms is reached. For GPON, as seen in Figs. 2e and 2f, there is instead a degradation in performance with increasing polling cycle. Three key characteristics have been identified that influence the bandwidth utilization and delay performance of the DBA algorithms for EPON and GPON in the figures:
• The protocol overhead related to the polling cycle
• The propagation delay making the OLT and ONUs wait for the reception of DBA messages
• The algorithms' ability to avoid buffer overflows for single queues
The first two parameters affect the performance more severely for the EPON algorithm under consideration, whereas the third problem is more severe for the considered GPON algorithm.

Figure 2. Simulation results for the 1G system (e–h): e) bandwidth utilization; f) delay with 90 percent confidence interval; g) jitter; h) upstream efficiency for GPON under different offered traffic loads.

Regarding the first characteristic, there are larger overhead-related bandwidth losses (overhead, guard time, and control messages for DBA) in EPON than in GPON. This makes EPON more sensitive to changes involving the occurrence of overhead-related bandwidth loss. In EPON the total size of the overhead-related bandwidth loss is constant per polling cycle. Hence, a smaller polling cycle leads to a larger amount of overhead-related bandwidth loss. For GPON the overhead-related bandwidth loss is relatively small and constant during the fixed 125 μs frame. Hence, the bandwidth overhead does not depend on the polling cycle. This characteristic provides a partial explanation of the increasing bandwidth utilization with increasing polling cycle for EPON in Fig. 2a and the rather stable bandwidth utilization for GPON in Fig. 2e. The propagation delay has a strong influence on EPON performance. Because of the adaptive polling cycle and the bursty nature of Ethernet traffic, the polling cycle will sometimes be smaller than the fiber propagation delay. The smaller the given maximum polling cycle, the higher the probability that the grant information from the
OLT will not reach an ONU in time, and consequently more bandwidth will be lost. This is the main explanation of the poor bandwidth utilization and large delay for EPON seen in Figs. 2a and 2b for small polling cycles. In GPON the polling cycle is typically fixed. If the fixed polling cycle is chosen sufficiently large (i.e., larger than 0.5 ms), the corresponding loss of bandwidth can be completely avoided. Finally, the ability of the algorithm to avoid single-queue buffer overflows is related to its ability to temporarily prioritize bandwidth for buffers that are almost full. An algorithm that strongly prioritizes full buffers will achieve high throughput, possibly at the expense of delay and fairness between queues. In EPON, by increasing the maximum polling cycle, the maximum guaranteed bandwidth $B_i^{\max}$ for each ONU is automatically increased, so that larger bandwidth is allocated to queues that request more bandwidth. Therefore, in EPON higher throughput can be obtained by increasing the maximum polling cycle and giving higher priority to queues with heavy traffic load. This explains the very high bandwidth utilization and small delay for EPON seen in Figs. 2a and 2b for large polling cycles. Because of the fixed polling cycle in GPON, the bandwidth allocation loses some of its dynamics. This shortcoming could be overcome by introducing load-dependent priorities to the queues. The slight drop in bandwidth utilization for GPON in Fig. 2e for larger polling cycles is due to such buffer overflows and could be avoided by changing the prioritization scheme. For the considered GPON algorithm, delay is a result of the fixed polling cycle. For low load, the delay increases proportionally with the polling cycle. For high load the buffers are filled up, and the delay approaches a more constant dependence with respect to the polling cycle.

Figure 3. Simulation results for the next-generation TDM PONs (a–d): a) bandwidth utilization; b) delay with 90 percent confidence interval; c) jitter; d) upstream efficiency for EPON under different offered traffic loads.

Next, we consider jitter. For EPON, in Fig. 2c one can observe a load-dependent peak in the jitter. There are two mechanisms that explain this behavior. In general, as the nature
of Ethernet traffic is bursty, the delay variation increases with decreasing average polling cycle. However, when the number of dropped frames increases, the range of the delay variation is reduced. This reduces jitter for the smaller polling cycles where bandwidth utilization is poor, producing the observed peak. For GPON, Fig. 2g shows that at low load, jitter increases slightly with the length of the polling cycle. This behavior is consistent with jitter growing with the waiting time. At higher loads, when the buffers fill up, jitter decreases with increasing length of the polling cycle. Figures 2d and 2h show the results for upstream efficiency in EPON and GPON, respectively. By definition, upstream efficiency indicates the extent to which protocol-related overhead affects EPON and GPON. In EPON it can be observed from the simulation results that the upstream efficiency first
gradually increases and then saturates for the different traffic loads. It should be noted that a valley appears around a maximum polling cycle of 2 ms for offered loads of 0.5 and 0.9. According to the previous analysis, if the maximum polling cycle is larger than 2 ms, the bandwidth utilization reaches its maximum; hence, the number of dropped packets decreases to its lowest level (i.e., fewer packets wait in the buffer). This means that the bandwidth request of an ONU in each polling cycle becomes smaller, so the real polling cycle may also be smaller. Therefore, the curve of upstream efficiency has a valley at a maximum polling cycle of 2 ms. In GPON the curves of upstream efficiency for the different offered traffic loads are nearly constant. Comparing EPON and GPON, we find that their optimal upstream efficiency performance is similar, although frame fragmentation is not supported by EPON. This means that the efficiency of the EPON and GPON standards supporting the different DBA schemes is similar.

Figure 3. Simulation results for the next generation TDM PONs (e–h): e) bandwidth utilization; f) delay with 90 percent confidence range; g) jitter; h) upstream efficiency for GPON under different offered traffic loads (all plotted against the polling cycle, in ms, for several offered loads).
NEXT-GENERATION TDM PONS
Figures 3a–3h show the bandwidth utilization, delay, jitter, and upstream efficiency for next-generation TDM PONs. We find that in 10G EPON the changes caused by different maximum polling cycles in all the performance parameters (bandwidth utilization, delay, jitter, and upstream efficiency) are similar to those in EPON. However, one can observe that the overall performance of both EPON and GPON has improved. The main reason for this improvement is the higher bit rate: the polling cycle, which has a similar duration to that in 1G TDM PON, corresponds to more bytes, while the fixed bytes used for the control messages and the similar USRs caused by the absence of frame fragmentation keep the same length in bytes in 10G TDM PON.
CONCLUSION
We have identified performance-limiting parameters for DBA within EPON and GPON.
For EPON, the crucial performance parameters that must be managed are the large overheads and the effects of propagation delay. Starting from a small value and increasing the maximum polling cycle, one observes an improvement in all the performance parameters up to a saturation level. The optimal value of the maximum polling cycle is determined by configuration parameters (e.g., the buffer size at each ONU, the propagation delay from the OLT to the ONU, and the DBA message processing time). GPON performance depends crucially on the ability of the DBA to respond quickly to the momentary traffic load on the PON in order to avoid single-buffer overflows. From the simulation results it is evident that in GPON it is preferable to set as small a polling cycle as possible; this increases throughput and reduces delay. A lower limit on the polling cycle in GPON is in practice enforced by hardware parameters such as propagation delay and DBA message processing time. Compared to the very dynamic IPACT scheme for EPON, GPON algorithms are somewhat more static in the sense that the polling cycle is fixed. The fixed polling cycle is advantageous for QoS assurance, which, while not considered in this article, is integrated in the GPON protocol. For QoS in EPON the algorithm must be modified in a way that might imply a more static polling cycle. For next-generation TDM PONs, if the overheads of control messages remain similar in bytes and the guard time does not change significantly, the performance trends as a function of maximum polling cycle are maintained while the optimal performance improves.
ACKNOWLEDGMENT
Björn Skubic wishes to thank Stefan Dahlfort for useful discussions.
REFERENCES
[1] IEEE 802.3ah Task Force; http://www.ieee802.org/3/efm
[2] ITU-T G.984.x Series of Recommendations; http://www.itu.int/rec/T-REC-G/e
[3] G. Kramer, "Interleaved Polling with Adaptive Cycle Time (IPACT): A Dynamic Bandwidth Distribution Scheme in an Optical Access Network," Photonic Net. Commun., vol. 4, no. 1, Jan. 2002, pp. 89–107.
[4] G. Kramer and G. Pesavento, "Ethernet Passive Optical Network (EPON): Building a Next-Generation Optical Access Network," IEEE Commun. Mag., vol. 40, no. 2, Feb. 2002, pp. 66–73.
[5] M. P. McGarry, M. Maier, and M. Reisslein, "Ethernet PONs: A Survey of Dynamic Bandwidth Allocation (DBA) Algorithms," IEEE Commun. Mag., vol. 42, no. 8, 2004, pp. S8–S15.
[6] C. M. Assi et al., "Dynamic Bandwidth Allocation for Quality-of-Service over Ethernet PONs," IEEE JSAC, vol. 21, no. 9, Nov. 2003, pp. 1467–77.
[7] B. Chen, J. Chen, and S. He, "Efficient and Fine Scheduling Algorithm for Bandwidth Allocation in Ethernet Passive Optical Networks," IEEE J. Sel. Topics Quantum Elect., vol. 12, no. 4, July–Aug. 2006, pp. 653–60.
[8] IEEE 802.3, "Call For Interest: 10 Gb/s PHY for EPON," 2006; http://www.ieee802.org/3/cfi/0306_1/cfi_0306_1.pdf
[9] J. Chen and L. Wosinska, "Analysis of Protection Schemes in PON Compatible with Smooth Migration from TDM-PON to Hybrid WDM/TDM-PON," J. Optical Net., vol. 6, no. 5, May 2007, pp. 514–26.
[10] D. Sala and A. Gummalla, "PON Functional Requirements: Services and Performance"; http://grouper.ieee.org/groups/802/3/efm/public/jul01/presentations/sala_1_0701.pdf
BIOGRAPHIES
BJÖRN SKUBIC ([email protected]) holds a Ph.D. in physics, condensed matter theory, from Uppsala University and an M.Sc. in engineering physics from the Royal Institute of Technology (KTH), Stockholm, Sweden. He was previously active in the area of magnetism and spintronics. Since 2008 he has been with Broadband Technologies at Ericsson Research.

JIAJIA CHEN ([email protected]) is pursuing a joint Ph.D. degree from KTH and Zhejiang University, China. She received a B.S. degree in information engineering from Zhejiang University in 2004. Her research interests include fiber access networks and switched optical networks.

JAWWAD AHMED ([email protected]) holds a Master's degree with a major in network technologies from the National University of Science and Technology (NUST), Pakistan. He is currently working toward his Ph.D. in photonics at KTH with a specialization in optical networks. His research interests include access networks, GMPLS- and PCE-based optical network design, interdomain routing, and discrete event simulation of communication networks.

LENA WOSINSKA [M] ([email protected]) received her Ph.D. degree in photonics and Docent degree in optical networking from KTH. She joined KTH in 1986, where she is currently an associate professor in the School of Information and Communication Technology (ICT), heading a research group in optical networking and coordinating a number of national and international scientific projects. Her research interests include optical network management, reliability and survivability of optical networks, photonics in switching, and fiber access networks. She has been involved in a number of professional activities, including guest editorship of the following special issues of the OSA Journal of Optical Networking: High Availability in Optical Networks; Photonics in Switching; Reliability Issues in Optical Networks; and Optical Networks for the Future Internet. Since 2007 she has been an Associate Editor of the OSA Journal of Optical Networking. She is General Chair of the Workshop on Reliability Issues in Next Generation Optical Networks (RONEXT), which is part of the IEEE International Conference on Transparent Optical Networks (ICTON). She serves on the Technical Program Committees of many international conferences.

BISWANATH MUKHERJEE [F] ([email protected]) holds the Child Family Endowed Chair Professorship at the University of California, Davis, where he has been since 1987, and served as chairman of the Department of Computer Science from 1997 to 2000. He is Technical Program Co-Chair of the Optical Fiber Communications (OFC) Conference 2009. He served as Technical Program Chair of IEEE INFOCOM '96. He is Editor of Springer's book series on optical networks. He serves or has served on the editorial boards of seven journals, most notably IEEE/ACM Transactions on Networking and IEEE Network. He is Steering Committee Chair and General Co-Chair of the IEEE Advanced Networks and Telecom Systems (ANTS) Conference. He was co-winner of the Optical Networking Symposium Best Paper Awards at IEEE GLOBECOM '07 and '08. He is author of the textbook Optical WDM Networks (Springer, 2006). He served a five-year term as a founding member of the Board of Directors of IPLocks, Inc., a Silicon Valley startup company. He has served on the Technical Advisory Boards of a number of startup companies in networking, most recently Teknovus, Intelligent Fiber Optic Systems, and Look Ahead Decisions Inc. (LDI).
SERIES EDITORIAL
RADIO COMMUNICATIONS: COMPONENTS, SYSTEMS, AND NETWORKS
Joseph B. Evans and Zoran Zvonar
Dear readers: Welcome to the latest issue of the Radio Communications Series! In this edition we are continuing our effort to bring new developments in radio technology to the Series. This issue focuses on a rapidly evolving area in radio communications, that of dynamic spectrum access. This field has seen amazing progress in the past few years, and promises to revolutionize the way in which radio systems are designed, deployed, and operated. Dynamic spectrum access is based on the observation, supported by many measurement studies, that the radio frequency spectrum is underutilized. Although regulators have allocated most spectrum bands of interest, the utilization of many bands appears to be poor. This provides an opportunity for radio systems to utilize the "white spaces" in the spectrum, those regions that are not heavily used, on a dynamic basis. Several technological advances are necessary to create dynamic spectrum access systems. Radios need to be able to determine whether spectrum is available using techniques such as sensing, geolocation, brokering services, or some combination. Given that spectrum is available, radios need to be able to adapt their transceivers to operate within the designated band. In order to perform useful communication, radios need to rendezvous with other radios, and at the same time avoid interfering with other dynamic and static radio systems. Protocols need to be developed to best utilize the opportunities offered by this more dynamic and flexible model of radio resource allocation. The dynamic spectrum access concept also requires a dramatic reappraisal of spectrum policy, which has been based on static allocations by regulators. Regulators need to be reassured that the technology is feasible, and then engaged in an ongoing process to determine the rules and etiquettes dynamic systems require. The progress in both technology and policy in the past few years has been amazing. In terms of policy, the transition of the television bands from analog to digital in the United States has provided an opportunity to rethink how the bands freed by the transition might be used. This has led to promising FCC decisions on the use of the TV white spaces by radios using dynamic spectrum access technology. On the technology side, a substantial number of prototypes have been developed by both the research community and industry in response to the TV white spaces proceedings. These radios have demonstrated the ability to sense incumbent transmissions, and dynamically adapt frequency and power to utilize
the TV white spaces without significant interference. Although both the technology and policy are nascent, the prospects for the future are exciting. We hope that this issue will provide a sampling of some of the progress in dynamic spectrum access technology and policy. In closing, we would like to offer special thanks to Prof. Douglas Sicker for his efforts in helping to compile the high-quality group of papers in this issue. Happy reading! Please do not hesitate to send us your feedback on the series content as well as suggestions for new issues.

Sincerely,
Joseph Evans and Zoran Zvonar
BIOGRAPHIES
JOSEPH B. EVANS [SM] ([email protected]) is the Deane E. Ackers Distinguished Professor of Electrical Engineering and Computer Science and director of the Information and Telecommunication Technology Center at the University of Kansas. He served as a program director in the Division of Computer and Network Systems in the Directorate for Computer and Information Science and Engineering at the National Science Foundation from 2003 to 2005. His research interests include mobile and wireless networking, adaptive and cognitive networks, high-speed networks, and pervasive computing systems. He has been involved in major national high-performance networking testbeds and broadband wireless mobile networking efforts, and has published over 120 journal and conference works. He has been a researcher at the Olivetti and Oracle Research Laboratory, Cambridge University Computer Laboratory, USAF Rome Laboratories, and AT&T Bell Laboratories. He has been involved in several startups, and was cofounder and member of the board of directors of a network gaming company acquired by Microsoft in 2000. He received his Ph.D. degree from Princeton University in 1989, and is a member of the ACM.

ZORAN ZVONAR ([email protected]) is the director of the Systems Engineering Group of RF and Wireless Systems, MediaTek, focusing on the design of algorithms and architectures for wireless communications, with emphasis on integrated solutions and real-time software. He received his Dipl.Ing. and M.S. from the Department of Electrical Engineering, University of Belgrade, Yugoslavia, and his Ph.D. degree in electrical engineering from Northeastern University, Boston, Massachusetts. He has been with the Department of Electrical Engineering, University of Belgrade, and the Woods Hole Oceanographic Institution, Woods Hole, Massachusetts. Since 1994 he has pursued an industrial career within Analog Devices and has been the recipient of the company's highest technical honor of ADI Fellow. Since January 2008 he has been with MediaTek. He was Associate Editor of IEEE Communications Letters and a Guest Editor of IEEE Transactions on Vehicular Technology, International Journal of Wireless Information Networks, and ACM/Baltzer Wireless Networks, and co-editor of the books GSM: Evolution Towards Third Generation Systems and Wireless Multimedia Networks Technologies (Kluwer Academic), and Software Radio Technologies: Selected Reading (IEEE Press).
GUEST EDITORIAL
DYNAMIC SPECTRUM ACCESS
Douglas C. Sicker
The transformation of dynamic spectrum access (DSA) from theory into practice has begun: regulatory decisions such as the recent U.S. FCC "TV White Space Order" embrace DSA, standards bodies have ratified DSA-enabling standards, commercial DSA devices are now available, and numerous DSA prototypes, such as DARPA's XG, have been demonstrated. As networks and devices increasingly gain intelligence and "cognitive" capabilities, and regulators around the world seek to enhance spectrum utilization, dynamic access is becoming one of the most important but most complex topics in wireless communications development. Such spectrum access methods present numerous challenges that broadly span technology, economics, and public policy. In this special focused issue of the IEEE Communications Magazine Radio Communications Series, we seek to present the breadth of activity in dynamic spectrum access research and practice, and have therefore included articles that cover both the technology challenges and the policy and economics challenges facing DSA.

The first article, by Marshall, examines the use of DSA to manage the challenges facing the operation of wireless devices in densely utilized spectrum environments. It also proposes methods whereby cognitive radio technology could be used to manage front-end linearity and dynamic range in radio devices. The second article, by Willkomm, Machiraju, Bolot, and Wolisz, examines the application of DSA to cellular spectrum bands. This article offers a measurement-based analysis of actual cellular band usage across a large number of base stations and a long time frame. Based on this analysis, models of primary usage are described, and the implications of DSA for these bands are then considered. The third article, by Atia, Sahai, and Saligrama, considers the issues of enforcement and liability
in cognitive radio systems. The fourth article, by Lehr and Jesuale, examines the economic and market challenges of pooling public safety spectrum. This work describes the policy and spectrum management reform needed to enable spectrum pooling and portability; through this reform public safety users could have expanded access to spectrum, which could enable improved interoperability and access to emerging broadband services. The final article, by Bazelon, considers the economic implications of incremental spectrum allocation. This article defines a broad set of economic criteria that can be used to develop an analytic framework for assessing the value of both licensed and unlicensed spectrum allocations.

In closing, I wish to thank all those who contributed to the quality of the articles contained in this issue. For more information on dynamic spectrum access networks, I recommend the IEEE Dynamic Spectrum Access Networks (DySPAN) conference. DySPAN provides a global forum for discussing all aspects of devices and networks that dynamically utilize spectrum on either a consensual or nonconsensual basis. More information on the conference is available at http://www.ieee-dyspan.com.
BIOGRAPHY
DOUGLAS C. SICKER [SM] is an associate professor in Computer Science and director of the Interdisciplinary Telecommunications Laboratories at the University of Colorado at Boulder. Prior to this he was director of Global Architecture at Level 3 Communications, Inc. Prior to that, he was chief of the Network Technology Division at the Federal Communications Commission (FCC). His research interests include cognitive radios, network security, and public policy. He is currently an investigator on a number of NSF- and DARPA-funded projects; these projects explore adaptation and spectrum agility in wireless networks. After leaving the FCC, he served on the FCC Technical Advisory Council and as Chair of the FCC Network Reliability and Interoperability Council steering committee. He is a member of the ACM and the Internet Society.
TOPICS IN RADIO COMMUNICATIONS
Cognitive Radio as a Mechanism to Manage Front-End Linearity and Dynamic Range
Preston F. Marshall, Defense Advanced Research Projects Agency
ABSTRACT
Most of the consideration of the benefits and applicability of dynamic spectrum access has focused on opportunities associated with spectrum availability. This article describes the use of DSA to resolve the challenges of achieving wireless and cognitive radio operation in dense or energetic spectrum. It also demonstrates that the use of DSA can significantly reduce the requirements for linearity and dynamic range in the radio front-end, and can reduce the intermodulation-induced noise floor through integration of DSA with the selection of front-end filter settings. This approach could enable DSA and cognitive radios to be more affordable than conventional radios, and addresses spectrum issues that are not practically manageable through manual approaches.
INTRODUCTION
The author is also a graduate student at the Centre for Telecommunications Value-Chain Research (CTVR).
The effect of receiver front-end linearity and dynamic range is currently a significant factor in the performance of conventional radio devices that inhabit environments densely populated by radio frequency (RF) devices, with either discrete high-power emitters or aggregations of emitters collectively producing high energy levels. Manual planning has generally introduced an implicit control over the environment to which wireless devices may be subjected. Even with high-quality RF equipment, there have been significant instances of interference caused by nonlinearity within RF receivers. The problem arises when the total energy provided to the amplifying stages (passive stages can intermodulate, but their effects are generally not the dominant ones) is significant enough to introduce mixing of the signals. This mixing creates a number of intermodulation products, which appear at the sum and difference frequencies. Second order product frequencies are the sums and differences of the original input signals; for adjacent input signals, these are removed from the original input signals by an octave. The
third order products are the mixing of the original input signals and the second order products, which results in signal artifacts that lie in the frequency range of the original signals. It is thus not possible to filter these products out or to eliminate the inputs that would cause them. The primary mitigation is to minimize the total energy provided to the front-end by using high-quality preselector filters and to provide sufficient circuit immunity, measured as the third order input intercept point (IIP3). This behavior is important to the dynamic spectrum access (DSA) and cognitive radio community for three reasons:
• One of the primary benefits of DSA is operation in bands that are currently dedicated to other uses. DSA devices thus will be subject to a greater range of environments than under conventional spectrum band practices.
• The future deployment of DSA and cognitive radios will only make this situation more stressful, as spectrum density is increased by technologies such as DSA. For example, if DSA increased spectrum access by 10 times and total energy correspondingly, the third order products would increase by 30 dB (see the sketch after this list). Intermodulation products that were well below the noise would become constraints on performance.
• Provision of high front-end performance is a fundamental cost and energy driver for wireless devices, and is a significant operational impediment. Adaptations that minimize the effects of intermodulation enable significant reductions in required IIP3 performance, and increase reliability beyond what is possible through high-power, high-cost circuit approaches.
As an example of the range of intermodulation immunity, a typical low-cost consumer device might have an IIP3 as low as –12 dBm, while a high-quality military or public safety radio might need an IIP3 as high as 20 dBm or more to be reliable in the environments in which it must operate. Both cost and energy consumption are strongly related to these performance measures.
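As a back-of-the-envelope check on the 30 dB figure in the list above, the following sketch applies the standard two-tone relation P_IM3 = 3·P_in – 2·IIP3 (all in dBm); the input levels are arbitrary illustrative values, and the two IIP3 values are the consumer and high-end extremes quoted above.

# Hypothetical numbers in the standard two-tone third order model:
# the input-referred IM3 product power is P_IM3 = 3*P_in - 2*IIP3 (dBm),
# so a 10 dB (10x) rise in input energy raises the products by 30 dB.
def im3_power_dbm(p_in_dbm: float, iip3_dbm: float) -> float:
    """Input-referred third order product power for a two-tone test (dBm)."""
    return 3.0 * p_in_dbm - 2.0 * iip3_dbm

for iip3_dbm in (-12.0, 20.0):        # consumer-grade vs. high-end front-end
    for p_in_dbm in (-40.0, -30.0):   # a 10 dB (10x) step in input energy
        print(f"IIP3 {iip3_dbm:+.0f} dBm, Pin {p_in_dbm:.0f} dBm -> "
              f"IM3 {im3_power_dbm(p_in_dbm, iip3_dbm):+.1f} dBm")

The 10 dB step in input power moves the products by 30 dB regardless of IIP3, while the 32 dB higher IIP3 lowers them by 64 dB, which is why front-end immunity is such a strong cost and energy driver.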
Figure 1. Effect of low noise amplifier distortion on spectral distribution: a) input signal spectral distribution; b) LNA output signal distribution.
It has been suggested by the author [1] that a cognitive radio can adapt its use of spectrum to avoid situations in which the front-end will be overloaded. If the device is operating with static frequency assignments, there is little effective mitigation of this front-end overload condition other than high levels of receiver front-end performance. To mitigate these effects, this article investigates the benefits of applying the principle that "frequency should be selected such that the preselector tuning can ensure that the total energy passing through the preselector is constrained to no more than a certain ratio of the overload input energy." DSA gives the radio and network the flexibility to select the band and frequency of operation, and to rendezvous nodes onto that frequency automatically and dynamically. A conventional radio has little or no opportunity to perform this adaptation, since frequencies are typically statically assigned to an application, network, service, or device, and the preselector tuning is dictated by these assignments. The implementation of cognitive radio techniques in this area is specific to the hardware capability and organization, but the following analysis is generally applicable to typical configurations and filter technology. It is not the intention of this article to provide an in-depth analysis of filter technology; instead, it develops the fundamental relationships between the capability of the filters, the distortion of the low noise amplifier (LNA), the cognitive radio's algorithms, and the performance impact on the device in a range of environments.
THE PROBLEM WITH FRONT-END INTERMODULATION DISTORTION
In assessing this problem, the algorithm will consider the input preselector bandwidth (typically a significant fraction of the carrier frequency), which is the sole constraint on energy entering the front-end. The signaling bandwidth is typically a small fraction of the preselector bandwidth and is the range over which additional noise energy will enter the demodulation process. The cognitive radio must be able to determine the relationship between the signal environment it can sense and the noise products generated internally through intermodulation of signals prior to the intermediate frequency (IF) or digitization stages of the receiver. The standard engineering measurement of intermodulation inserts two pure tones and measures the energy of the intermodulation products, which are therefore also tonal. This situation is of reduced concern to a cognitive radio, since the resulting tonal intermodulation products can be avoided through DSA algorithms by the same mechanisms that avoid any other occupied frequency. However, in complex environments it is likely that the intermodulation products will be present in large numbers and will have significant bandwidth due to the signals' modulation bandwidth. When these factors are present, the effect is much less correlated and approaches an additive white Gaussian noise (AWGN) source composed of many individual intermodulation distortion (IMD) products, with energy falling throughout the band of interest.

The imperative to address these concerns is not specific to DSA systems. In the United States a lengthy regulatory proceeding was initiated to resolve interference between the towers of a mobile service provider, NEXTEL, and numerous local public safety systems. In the end the U.S. regulator (the Federal Communications Commission) elected to relocate systems through a mix of spectrum offerings and cellular provider contributions to public safety frequency relocation [2]. These systems did not overlap in spectrum, so this was not a frequency management issue as typically defined. But the placement of high-power cellular base stations did have a very significant impact on the performance of the public safety radio systems due to the very high energy level in adjacent frequency bands. Similar anecdotal experience is often referred to as "co-site interference," as it is often the result of receiver placement in close proximity to a strong emitter in an adjacent frequency or band. It is the contention of this article that front-end overload due to adjacent channel energy is a common and generally under-recognized experience, and that with denser spectrum assignments, heterogeneous usage, and technologies such as DSA becoming more prevalent, this phenomenon will become a fundamental constraint on wireless networking.

An example of this effect is shown in Fig. 1. In Fig. 1a the input spectral distribution of eight (unequally spaced) signals is shown at the input to an LNA. There is considerable "white space" between them. The total energy is just below the
IIP3 of the amplifier. Figure 1b illustrates the spectral distribution of the front-end LNA output. A denser and non-tonal signal mix would create even denser and more complex sets of intermodulation products. Intermodulation also broadens the bandwidth of the intermodulation products compared to the original signals. For example, if a given segment of spectrum were occupied by 10 signals that each occupied 0.6 percent of the bandwidth, intermodulation would create 100 signals, each with 1.8 percent bandwidth! Collectively, they would appear as noise to the receiver. The broadening effect can be seen by considering two nonoverlapping signal inputs, one extending from f1l to f1h and the other from f2l to f2h. One of the second order intermodulation products is f1 + f2, which has a lower frequency of f1l + f2l and an upper frequency of f1h + f2h. The frequency extent of this product is thus (f1h + f2h) – (f1l + f2l) = (f1h – f1l) + (f2h – f2l): the second order product bandwidth is the sum of the intermodulating component bandwidths. A similar argument holds for the higher order intermodulation products. Intermodulation products can fall in a wide range of frequencies, but the ones of most concern to a high performance radio are the third order products that fall within the spectral range of the input signals. Total IMD is maximal when only two signals are present, and minimal when the energy is most evenly distributed. Real-world operating points lie between these extremes as a function of the characteristics of the individual environments. For a cognitive radio to assess the impact of IMD, it must be able to assess the noise impact of an environment in a computationally straightforward manner. Adaptation mechanisms depend on the radio's ability to predict the effect of energy on front-end performance, and thus the need for and effectiveness of the cognitive radio adaptations.

A MODEL OF SPECTRUM ENERGY DISTRIBUTION

The author has shown that the cumulative distribution of energy in typical spectrum environments can be approximated as a beta distribution between Emin and Emax (the front-end minimum and maximum energy, respectively) [3]. This model is shown in Fig. 2. The energy and beta distribution parameters (α and β) are a function of the bandwidth of the filters. Spectrum measurements collected by McHenry [4] were used to identify appropriate energy extremes and α and β parameters for a number of spectrum collections. For the purposes of this article, the samples collected at the Illinois Institute of Technology are used for the closed-form expressions of the spectrum distribution.
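As a rough illustration of this model, the sketch below draws per-band filter-output energies from a beta distribution stretched between the energy extremes. The values of Emin, Emax, α, and β here are placeholders, not the parameters fitted to the measured collections.

# Minimal sketch of the beta model of filter-output energy (assumed,
# illustrative parameters): E = Emin + (Emax - Emin) * Beta(alpha, beta).
import numpy as np

rng = np.random.default_rng(1)

def sample_band_energy_dbm(n, e_min=-90.0, e_max=-10.0, alpha=2.0, beta=5.0):
    """Sample n preselector-band energies (dBm) from the beta model."""
    return e_min + (e_max - e_min) * rng.beta(alpha, beta, size=n)

print(sample_band_energy_dbm(5))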
Figure 2. Typical filter output of preselector filters for a range of filter bandwidths, shown as cumulative probability distributions over increasing energy, where Emin is the lowest value of energy for a given distribution (spectrum sample) and bandwidth, and Emax is the largest such value.

EFFECTS OF NONLINEAR RESPONSE ON SIGNALING CHANNELS

A predictable mapping from the measured total environmental energy at the filter output to the estimated IMD noise density in the signaling channel is critical to utilizing post-filter energy as the criterion for preselector selection. Figure 3 illustrates the relationship between the input energy and IMD energy for the Chicago spectrum measurements and a specific value of IIP3 (–5 dBm). The red line indicates the pure two-tone distortion for the aggregate energy, and the predicted distortion energy for a subset of spectrum samples is shown by blue dots. These results scale directly with IIP3 [5]. Applying a least squares fit to these collections yields a first order polynomial that estimates the IMD3 noise from the input energy. The coefficients are consistent with the analytic expression and indicate an average of approximately 12 dB less intermodulation energy than would be expected if all of the input energy were concentrated into the two tones. The slightly increased slope reflects a slight increase in correlation at higher energies, which has an increased impact on IMD energy. In the Chicago collection, the standard deviation of the IMD estimate is 5.8 dB. This corresponds to an input energy standard deviation of 1.9 dB. The energy measured on a single frequency has a standard deviation of 1.5 dB, so the error in estimating the intermodulation energy is only slightly more than the inherent uncertainty in knowing the exact energy in the channel. This approximation is reliable up to the point where the input power approaches the IIP3 point, where the overloaded LNA is so disruptive that operation is severely compromised. The estimator thus provides a technique that can closely estimate intermodulation noise over the usable operating range, from below the front-end noise temperature to the point where the amplifier begins clipping. With a few signals present in the filter passband, the anticipated distribution of energy would be quite uneven. However, as the density of the spectrum increases, the "law of large numbers" becomes dominant, the distribution becomes more even, and in the extreme case it is AWGN-like.

Figure 3. Third order intermodulation power as a function of LNA input energy.

In summary, the effect of front-end overload is significant for many of the environments in which a cognitive (or non-cognitive) radio will operate. It is possible to establish a straightforward and readily computable relationship between a low-resolution measurement of total energy in each of the front-end filter passbands and a sufficiently high confidence estimate of the total energy that will be distributed across the signal bandwidth of interest.
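A minimal sketch of such a computable relationship, under the fit described above: start from the two-tone worst case P_IM3 = 3·P_in – 2·IIP3 (in dBm) and back it off by the roughly 12 dB average reduction observed for distributed input energy. The unit slope and fixed offset here are stand-ins for the least squares coefficients, which are collection-specific.

# Hedged sketch of the IMD noise estimator (fit constants are placeholders):
def estimate_imd_noise_dbm(filter_output_dbm: float, iip3_dbm: float,
                           backoff_db: float = 12.0) -> float:
    """Estimate in-band IMD noise from total post-filter energy (dBm)."""
    two_tone_dbm = 3.0 * filter_output_dbm - 2.0 * iip3_dbm
    return two_tone_dbm - backoff_db

print(estimate_imd_noise_dbm(-25.0, iip3_dbm=-5.0))  # -> -77.0 dBm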
FRONT-END LINEARITY EVALUATION METHODOLOGY
Assessment of algorithm performance will be in terms of the environmentally induced probability of front-end overload and, if the front-end is not overloaded, the probability of a given level of intermodulation-induced noise. The first function quantifies the likelihood that a given algorithm and design will experience overload in any specific environment, and the second determines the likely noise floor induced by intermodulation, even when the input power is below the IIP3 value.

The first metric is the overload probability, which reflects the spectral energy distribution, the effect of the filtering process on this distribution, and the overload characteristics of the front-end. In this model only the third order distortion is considered; however, if the bandwidth of the signals provided to the LNA were sufficiently wide (more than one octave), the same function would be replicated for the IIP2 performance. In any case, an octave filter width in a cognitive radio would be a poor performer in almost any environment investigated. Although these results appear severe, they match anecdotal experience with wideband receivers and sensors in the presence of strong broadcast signals, such as TV or FM radio. In the non-cognitive radio baseline we consider the case in which the radio is assigned to any frequency within an operating range, and the filter is adjusted accordingly (or alternatively is fixed tuned and discretely selected). The percentage of time the power in the filter distribution exceeds the LNA's linear limits (without desensitizing it by AGC, which would reduce adjacent channel sensitivity) is the probability of overload. Since overload conditions are generally long compared to the symbol time, this measure should be considered in the context of the
packet error rate (PER) of the link or the probability of acquiring and closing the link. It is clear that even high levels of LNA performance (specifically IIP3) are not adequate to ensure high-reliability, high-performance communications in bands that do not have homogeneous usage, such as cellular up/downlinks, satellite links, or other spectrum that has been segregated for "likes with likes" [6]. Later we compare cognitive radio algorithms against these performance benchmarks and determine the required IIP3 to support identical probability-of-overload values. Even if the front-end is not driven into a distorting region, intermodulation noise can be a significant contributor to the noise floor encountered by the receiver. The probability of a given amount of noise being generated through IMD is the probability that more than the energy level required to generate that IMD noise (as in Fig. 2) is present in the spectrum within the stated filter bandwidth. This is obtained by solving the regularized incomplete beta function.
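For instance, the exceedance probability under the beta model of Fig. 2 can be written directly with SciPy's regularized incomplete beta function; the shape parameters and energy extremes below are illustrative assumptions, not fitted values.

# P(E > threshold) for E ~ Emin + (Emax - Emin) * Beta(a, b); betainc is
# SciPy's regularized incomplete beta function (assumed example parameters).
from scipy.special import betainc

def prob_energy_exceeds(e_thresh_dbm, e_min_dbm, e_max_dbm, a, b):
    """Probability that filter-output energy exceeds a threshold (dBm)."""
    x = (e_thresh_dbm - e_min_dbm) / (e_max_dbm - e_min_dbm)
    x = min(max(x, 0.0), 1.0)      # clamp threshold into the supported range
    return 1.0 - betainc(a, b, x)

print(prob_energy_exceeds(-30.0, -90.0, -10.0, a=2.0, b=5.0))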
FRONT-END LINEARITY MANAGEMENT ALGORITHMS
The algorithms for front-end linearity are, in a way, an extension of those for DSA. While the DSA algorithms search for available signal bandwidth within a given preselector bandwidth, the front-end linearity management function searches for preselector bands with acceptable energy within the tuning range of the radio. This analysis is based on the assumption that the linearity of the receiver is driven by the wideband RF stages prior to the first mixer stage or the analog-to-digital (A/D) conversion stage. In the non-cognitive mode, the radio is assumed to be assigned randomly over the operating range of the device. The cognitive radio is assumed to be able to select the apparently optimal operating point; its choices are therefore driven by the filter resolution (essentially its bandwidth) and the tuning range of the device. For example, if the filter bandwidth were 10 percent and the radio tuned over two octaves, there would be approximately 15 discrete and statistically independent choices.1 Our baseline spectrum density mitigation algorithm consists of applying all possible filter selection points to the incoming (unfiltered) spectrum and measuring the total power (a minimal sketch appears at the end of this subsection). We imagine that the cognitive radio has a set of filter tuning parameters, each with a discrete center frequency and effective Q factor. This is certainly a reasonable assumption for typical devices. Sole reliance on fixed band-pass or high- and low-pass filters is not considered, as their performance is generally unacceptable in either a conventional or cognitive wideband radio. The algorithm simply locates the lowest energy filter option and thus the minimum front-end linearity challenge. This structure supports both fixed and variable Q filter structures, and cascaded combinations of tunable filters in series. In practical applications the number of effective preselector settings that are likely to be available
are typically less than the number of settings, due to overlapping coverage (and thus not independent trials), limitations on the permitted use of spectrum, and possible correlation of usage (some preselector bands may be less likely to be usable if their neighbor is unusable).

1 The independence of this selection is somewhat constrained by the statistical characteristics of the spectrum assignment process. For example, if one preselector band is tuned to a TV broadcast signal, it certainly raises the probability that the adjacent one is also a TV band.
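Once a per-setting power measurement is available, the baseline algorithm reduces to a one-line selection, as in the sketch below; measure_power_dbm is a hypothetical callback standing in for whatever the hardware provides (e.g., an AGC reading taken at each preselector setting).

# Minimal sketch of the lowest-energy preselector selection described above.
from typing import Callable, Sequence, Tuple

Setting = Tuple[float, float]  # (center frequency in Hz, bandwidth in Hz)

def pick_preselector(settings: Sequence[Setting],
                     measure_power_dbm: Callable[[Setting], float]) -> Setting:
    """Return the preselector setting admitting the least total power."""
    return min(settings, key=measure_power_dbm)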
PROBABILITY OF OVERLOAD DETERMINATION
For a cognitive radio to fail due to overload, all of the available filter settings must contain a signal that is above the energy threshold. The probability of overload is therefore that of a random channel (the same as for the non-cognitive radio) raised to the power of the number of independent filter settings. The benefits of narrow filter bandwidths are compounded, since narrow filters both reduce the energy admitted to the front-end and provide more "trials" in which to select a band without excess energy. Figure 4 illustrates the probability of overload for both cognitive and non-cognitive radios. Note that several of the bandwidths do not appear within the range of the graph because the values of P_overload for many reasonable IIP3 levels are below the lowest axis in the cognitive radio case! Inspection of these results leads directly to the conclusion that the most important resource for avoiding front-end overload in a cognitive radio is the filter; increases in IIP3 are much less significant in reducing the probability that a given node will be overloaded. There is no meaningful comparison between a cognitive and a non-cognitive radio: the exponentially better performance of the cognitive radio cannot be achieved by any reasonable non-cognitive alternative. For example, for an IIP3 of –5 dBm and a filter bandwidth of 20 percent, the non-cognitive radio has a 4 percent probability of overload, while the equivalent cognitive radio would have one of 10^–4. The effect of front-end overload is so significant that it is highly likely that an appropriately sized communications link will either fail to acquire, and/or fail to achieve operation within, the provided link margin and error-correcting regime for which it was designed. For that reason, we should consider the overload case a link reliability issue rather than an incremental contributor to bit error rate (BER) or an adjustment to link throughput. The benefits of the adaptation mechanism are directly related to the performance of the nonadaptive radio. If the chance of overload in a given environment were 5 percent, with 10 independent preselector options the chance of overload in this environment would be (0.05)^10, or approximately 10^–13. An equally important consideration is to exploit these benefits to reduce the required IIP3 while not degrading performance. This approach enables affordability and lower energy consumption through cognitive adaptation. A cognitive radio with only a 30 percent filter and an IIP3 of –27 dBm has the equivalent probability of overload as a non-cognitive one at –5 dBm IIP3! The performance of a cognitive radio with a 20 percent filter shows over 30 dB of potential IIP3
Figure 4. Probability of overload for ranges of IIP3 and filter bandwidths for conventional and cognitive radios operating over one octave (probability of front-end overload vs. IIP3 in dBm, for filter bandwidths of 20–50 percent).
reduction. In practice, this decision should be based on both the probability of absolute overload and the intermodulation noise floor, as discussed next.
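The overload relation used above is compact enough to restate directly: if a randomly assigned (non-cognitive) radio overloads with probability p in a given environment, a cognitive radio able to try n independent preselector settings fails only when all n are overloaded. The sketch below reproduces the (0.05)^10 example from the text.

# Overload probability with n independent preselector choices: p**n.
def p_overload_cognitive(p_single: float, n_settings: int) -> float:
    """Probability that every one of n independent settings is overloaded."""
    return p_single ** n_settings

print(p_overload_cognitive(0.05, 10))  # ~9.8e-14, i.e. approximately 1e-13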
INTERMODULATION CHANNEL NOISE IMPACT
A similar probability distribution can be developed for the occurrence of different levels of intermodulation-induced noise floor elevation. In this case the probability is that, for a given level of IMD noise, none of the filter selections will have lower energy. This is inherently a binomial probability of determining that none of the settings has less than a given threshold of noise. The same algorithm that reduces overload probability also reduces total energy, and thus intermodulation energy. To reflect the fact that reliability of communications is so important, the noise value used as a metric is not the mean case but is driven by consideration of the worst case. How such a worst case would be determined is an application-specific consideration, driven by the resources available to mitigate rare events of disruptive IMD noise. The noise distributions for cognitive and non-cognitive radios are shown in Fig. 5 for the 90 percent case (90 percent of the cases are better, 10 percent worse), but the relationship of the performance is similar for even rarer cases. Clearly, probability of overload is not a sufficient indicator of front-end performance. Even when the front end is not overloaded (above 3 dBm), at least 10 percent of the time there is still very significant link degradation due to intermodulation noise for a non-cognitive radio. To ensure that the performance of the device is not impacted by intermodulation, tens of dB of additional front-end IIP3 performance would be required, as is seen in military and mission-critical applications. The cognitive radio is able, with high probability, to locate spectrum that has minimal intermodulation noise, given that it has both reasonable filter selectivity and sufficient frequency tuning range to provide selection candidates.
Figure 5. 90 percent high energy equivalently performing combinations of IIP3 and PSS for cognitive and non-cognitive radios over one octave of tuning range (intermodulation noise in a 25 kHz channel vs. IIP3 in dBm, for filter bandwidths of 20–50 percent, with the typical noise floor of –170 dBm/Hz marked).
A cognitive radio with only minimal filter performance can achieve equivalent noise levels to a non-cognitive one and still provide the opportunity to reduce front-end linearity by over 30 dB. Alternatively, an average-performance front-end and filter complement can achieve noise levels significantly lower than the worst case experienced by a non-cognitive device. These results should not be surprising. High-reliability communications link design has always been driven by the need to compensate for the effects of rare events, such as high rain rate, antenna pointing errors, fading, and even solar transit. In the case of adjacent channel effects, cognitive radio adaptation provides the opportunity to essentially ignore the worst case and "redraw" a new situation that is much less likely to have the detrimental conditions present. Radio designers have generally been reluctant to include high selectivity filters in front of the LNA stages in order to avoid noise figure
degradation from the filter insertion. It is clear that in dense and stressing environments the use of high selectivity filters has significant benefits for both reliability of operation and noise floor reduction, despite the signal attenuation this design choice might imply. It is more predictable to allow for a low level of fixed insertion loss than to provide adequate margin for the range of potential intermodulation noise floor elevation. We need to think in terms of link operation not in a shielded room environment but in the worst case dense and energetic spectrum environments our systems will see in the coming decades. The cost of the processing to identify energy per preselector band is quite nominal. In the simplest implementation an existing AGC signal would be sufficient to determine the relative amount of energy occurring within the response band. Since exploiting dynamic front-end energy depends on also implementing dynamic frequency selection, it is reasonable to assume that most cognitive radio candidates already require sufficient capability to determine energy density at the resolution of the signaling bandwidth as a minimum. In this case, sensing energy after the preselector filter adds minimal, if any, additional hardware requirements. Since the entire preselector is essentially treated as just one frequency band of interest, it requires much less fast Fourier transform (FFT) resolution than other DSA and receiver functions.
CONCLUSIONS
This article describes a quantitative process that can yield reasonable estimates of the overload probability and noise distribution of cognitive and non-cognitive radios within spectrum environments where spectrum activity is independent and not significantly correlated (outside the range of the tuning filters). Analysis of collected spectrum environments demonstrates that dynamic spectrum access, combined with LNA
performance-aware band selection, provides significant advantages in both reliability of operation and the likely intermodulation-induced noise floor. These results can be extended to any spectrum environment, simulated or real, through use of the spectrum distribution variables provided in [5]. There are arguments for DSA that go beyond increased access to spectrum. The application of DSA provides an effective solution to co-site and adjacent channel interference that cannot be accomplished through practical levels of linearity in receiver front-ends, and should be able to provide this mitigation at decreased equipment acquisition and energy costs. The same regulatory processes that are conservative in embracing DSA because of its disruptive effect on spectrum regulation may find it attractive for resolving the adjacent channel energy management issues that are exponentially more complex than the in-channel spectrum management process currently practiced. The economic costs of adjacent channel effects are hinted at by the Nextel/public safety issues in the United States [2]. This analysis also points to the need to encompass more understanding of the radio environment and its effects on links and networks, and to develop systemic approaches to mitigate those effects. It is clear that the future of wireless may be not the homogeneous "walled gardens" of current practice, but something much more heterogeneous as TV white space is opened up, secondary spectrum markets emerge, and wireless devices appear on almost every device in use. As mentioned at the start of this article, this approach to front-end linearity management is one of the fundamental principles of the WNaN
cognitive radio program [1], whose top-level operating concept is shown in Fig. 6. The figure demonstrates the central role proposed for DSA techniques in mitigating effects from the radio's internal and external environment: other users, as well as internal artifacts such as receiver spurs, that are otherwise difficult to suppress in the design and fabrication of low-cost small-form-factor devices.
Figure 6. Wireless network after next operational concept. Each technology can throw "tough" situations to other, more suitable technologies without impact on user QoS.
REFERENCES
[1] P. Marshall, "Wireless Network after Next Edge Network Communications," MILCOM '08, San Diego, CA, 2008.
[2] L. Luna, "Nextel Interference Debate Rages On," Mobile Radio Tech., Aug. 1, 2003.
[3] P. Marshall, "Closed-Form Analysis of Spectrum Characteristics for Cognitive Radio Performance Analysis," 3rd IEEE Int'l. Symp. New Frontiers in Dynamic Spectrum Access Networks (DySPAN), Chicago, IL, Nov. 2008.
[4] M. McHenry et al., "Chicago Spectrum Occupancy Measurements & Analysis and a Long-Term Studies Proposal," Proc. 1st Int'l. Wksp. Tech. Policy Accessing Spectrum, Boston, MA, 2006.
[5] P. Marshall, "Dynamic Spectrum Management of Front-End Linearity and Dynamic Range," 3rd IEEE Int'l. Symp. New Frontiers in Dynamic Spectrum Access Networks (DySPAN), Chicago, IL, Nov. 2008.
[6] FCC, Spectrum Policy Task Force Report, ET Docket No. 02-135, Nov. 2002, p. 22.
BIOGRAPHY
PRESTON MARSHALL ([email protected]) is a program manager at the U.S. Defense Advanced Research Projects Agency. He is program manager for many of the DARPA wireless communications initiatives, including dynamic spectrum access, cognitive radio, sensor networks, and disruption- and delay-tolerant networking. He has a B.S.E.E. and M.S. from Lehigh University, and is currently a graduate student in the Ph.D. program at the Center for Telecommunications Value-Chain Research (CTVR) at Trinity College, Dublin, Ireland.
TOPICS IN RADIO COMMUNICATIONS
Primary User Behavior in Cellular Networks and Implications for Dynamic Spectrum Access
Daniel Willkomm, Technische Universität Berlin
Sridhar Machiraju and Jean Bolot, Sprint Applied Research
Adam Wolisz, Technische Universität Berlin and University of California, Berkeley
ABSTRACT
Dynamic spectrum access approaches, which propose to opportunistically use underutilized portions of licensed wireless spectrum such as cellular bands, are increasingly being seen as a way to alleviate spectrum scarcity. However, before DSA approaches can be enabled, it is important that we understand the dynamics of spectrum usage in licensed bands. Our focus in this article is the cellular band. Using a unique dataset collected inside a cellular network operator, we analyze the usage in cellular bands and discuss the implications of our results on enabling DSA in these bands. One of the key aspects of our dataset is its scale — it consists of data collected over three weeks at hundreds of base stations. We dissect this data along different dimensions to characterize if and when spectrum is available, develop models of primary usage, and understand the implications of these results on DSA techniques such as sensing.
INTRODUCTION
The prevailing approach to wireless spectrum allocation is based on statically allocating long-term licenses on portions of the spectrum to providers and their users. It is, however, well known that any static allocation leads unavoidably to underutilization — at least from time to time. Therefore, the option of reusing assigned spectrum when it is temporarily (and locally) available — frequently referred to as dynamic spectrum access (DSA) — promises to increase the efficiency of spectrum usage. A multitude of DSA-based approaches have been proposed for secondary spectrum usage in which secondary users (SUs) use parts of the spectrum that are not being used by the licensed primary users (PUs). PUs can enable such secondary usage, for instance, by using short-term auctions of underutilized spectrum [1]. Alternatively, SUs can sense and autonomously use parts of the spectrum that are currently not being used by (licensed) PUs.
A key technical component of such approaches is the cognitive radio (CR), which enables spectrum sensing. Apart from detecting idle spectrum, the sensing done by CRs is also needed by SUs to vacate the spectrum again when PUs resume their usage. Hence, understanding the way PUs use spectrum is very important in implementing DSA. We present the results of a large-scale measurement-driven study of PUs [2] in cellular bands and the implications of these results on enabling DSA in these bands. Our focus on cellular spectrum is important for several reasons. Apart from TV bands, cellular bands are viable candidates for DSA — because they are widely used throughout the world, and also because engineering devices and data applications for these bands is well understood. In fact, cellular femtocells, which have recently become popular, can be viewed as implementing a type of secondary usage that uses a naive mechanism, reduced power, to avoid interference. We believe that future femtocells will likely incorporate more sophisticated mechanisms based on sensing that minimize such interference. Our study is based on the analysis of a unique dataset consisting of call records collected inside a cellular network. Thus, we are able to provide insights on a call level that prior sensing-based studies [3–5] were unable to offer. Another advantage of our study is its scale: we are able to study usage at hundreds of base stations simultaneously. In contrast, sensing-based studies are usually based on only a few spectrum analyzers and typically have limited spatial resolution. Moreover, we are able to study the entire spectrum band used by a cellular operator. Sensing-based studies take time to "sweep" such a band and hence have to trade off the sampling frequency with the width of a band. The temporal diversity of our data is also large: we use measurements of tens of millions of calls over a period of three weeks. Finally, by looking at call records, we measure the "ground truth" as seen by the network, and hence are able to model call arrival processes as well as system capacity.
METHODOLOGY
The data set we use in this article was collected from hundreds of cell sectors (we do not give the specific number for proprietary reasons) of a U.S. code-division multiple access (CDMA)-based cellular operator. The data captured voice call information at those sectors, which were all located in densely populated urban areas of northern California, over a period of three weeks. In particular, our data set captured the start time, the duration, and the initial and final sectors of each call. Note that the call duration reflects the radio frequency (RF) emission time of the data transmission for the call (i.e., the duration of time for which a data channel was assigned). This is precisely what is relevant for DSA questions. The start time of the call was measured with a resolution of several milliseconds; the duration was measured with a resolution of 1 ms. Overall, our data consists of tens of millions of calls and billions of minutes of talk time. To our knowledge, such a large-scale network viewpoint of spectrum usage has not been analyzed in prior work. As with any measurement-based study, our data set has certain limitations. We state these up front since it is important to understand what our results capture and what they do not. The first limitation of our data set is its lack of full information on mobility. We were able to record only the initial and final sector of each call. Thus, we are unable to account for spectrum usage in the other sectors users may have visited during calls. To address the resulting incompleteness of information, we use two types of approximations. In the first approximation we assign the entire call as having taken place in the initial sector. We use this approximation by default. In the second approximation we assign the first (last) half of the call to the initial (final) sector. We refer to this as the mobile approximation. Throughout the article, we provide results using both approximations and find that our conclusions do not change. This indicates that the results are not sensitive to our approximations and would likely not change with full mobility information.
We provide insights into three different aspects relevant for enabling cellular DSA. First, we show that cellular DSA is viable and attractive, especially during nights and weekends. Hence, we recommend an emphasis on developing scenarios for secondary usage that operate during such non-peak hours. Second, we describe two models of primary usage. The first models the call arrival process but needs to account for the skewed distribution of call durations. The second model tracks the total number of calls and does not require any knowledge of call durations. However, it is less successful than the call-based model and is more applicable during peak hours when the number of calls is high. We also find that rare but significant spikes in usage exist and must be guarded against. Third, since the success of cognitive radios depends crucially on how readily spectrum bands can be sensed, we provide guidelines for sensing in cellular bands. This is much more challenging than sensing in TV bands, for example, because cellular voice usage exhibits frequent variations in time and space. Hence, SUs of cellular voice bands will likely need to employ more agile DSA techniques than SUs of TV bands.
Figure 1. Normalized load of three different cell sectors over three weeks. We plot the moving average of each cell over 1 s. The cells show high load (top), varying load (middle), and low load (bottom).
The second limitation relates to the cellular system from which we collected our data set, a CDMA-based network. Without additional knowledge from the base stations, the precise CDMA system capacity cannot easily be calculated. Hence, we implicitly assume that each voice call uses the same portion of a cell's capacity. This assumption, which is correct for time-division multiple access (TDMA)-based systems like the Global System for Mobile Communications (GSM), is obviously not precise for CDMA. Due to the critically important power control loop, individual CDMA calls may require different portions of the cell capacity, which cannot easily be expressed only in the number of calls. Nevertheless, since user calling behavior is unlikely to depend on the underlying technology, except under rare overload conditions, many aspects of our analysis are likely to apply to other cellular voice networks. Using either of the aforementioned approximations, we compute the total number of ongoing calls in each cell sector during the entire time period of our study. To do so, we split the call records based on the sector. We create two records for each call, corresponding to the beginning and end of each call. Then we sort these records in order of their time. We maintain a running count that is increased by 1 when a call begins and decreased by 1 when a call terminates.
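A minimal sketch of this counting procedure (the call-record tuples below are hypothetical; the real data set is proprietary):

```python
def load_timeline(calls):
    """Given (start_time, duration) pairs for one sector, return the
    running count of ongoing calls as (time, load) points."""
    events = []
    for start, duration in calls:
        events.append((start, +1))             # call initiation
        events.append((start + duration, -1))  # call termination
    events.sort()
    load, timeline = 0, []
    for t, step in events:
        load += step
        timeline.append((t, load))
    return timeline

# Three hypothetical overlapping calls.
print(load_timeline([(0.0, 60.0), (10.0, 5.0), (30.0, 120.0)]))
```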
DYNAMICS OF SPECTRUM AVAILABILITY
We plot the obtained "load" of three representative cells in Fig. 1.
For proprietary reasons, we normalize the values of load by a constant value such that only the relative change is seen. The top cell has low load only at night, whereas the middle cell has low load during the weekends too (note that the second Monday in the observation period was a public holiday). The bottom cell always has low load (i.e., during both day and night). Our plots in Fig. 1 show that spectrum usage varies widely over time and space — an illustration of the challenges that are likely to be faced with cellular DSA. The day/night dependence is exhibited system-wide, as seen in Fig. 2. Here, we ignore information about the individual cells to which calls are assigned and consider all calls as arriving at a single entity.
Figure 2. Distribution of system-wide average call arrival rates during four different days. The arrival rates are averaged over 5-min slots.
Figure 3. Distribution of average call duration over 5-min periods during four different days. The large spikes during the mornings are due to small gaps in collection.
For such a hypothetical system, we plot the normalized average call arrival rates during four different days. Figure 2 illustrates three key effects regarding the dynamics observed in the system. First, there are two distinct periods that roughly correspond to day and night, and have high and low arrival rates, respectively. Moreover, the steepest change in arrival rates occurs in the morning and late in the evening, which corresponds to the transition between the day and night periods. Second, the system characteristics are unlikely to remain stationary at timescales beyond an hour. Except for the transition hours, the mean arrival rates do not vary significantly during an hour. Third, weekdays and weekends appear to show distinct trends. This is not wholly unexpected since many cell phone pricing plans provide unlimited calling on the weekend. Figure 3, which plots average call durations as a function of time, shows trends similar to those in Fig. 2. However, we find that the range of variability in mean call duration is much smaller than that of arrival rates. Note that there are a few large spikes in Fig. 3. These are caused by a brief interruption in the data collection, which caused some short calls to not be recorded, thereby artificially inflating the mean duration of calls. Secondary usage requires the availability of free spectrum. Assuming secondary users are immobile, the best scenario is one in which free spectrum is available for as long as possible in any given cell. In other words, variability in per-cell spectrum availability is not desirable. We quantify this variability by computing the variation in load of each cell during each hour. We calculate the "average case" variation using the standard deviation, and the "worst case" variation as the difference between the maximum and minimum 1-min load in a cell during each hour. We average these over all cells and plot them on an hour-of-day basis in Fig. 4. As before, we normalize the metrics by a constant factor for proprietary reasons. Notice that both metrics show the same trends. Not surprisingly, the variation is larger during the day, when the load is higher.
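A sketch of the two variation metrics, under the assumption of sixty 1-min load samples per cell-hour (the synthetic samples are illustrative only):

```python
import numpy as np

def hourly_variation(load_1min):
    """Return the 'average case' (standard deviation) and 'worst case'
    (maximum minus minimum) variation of 1-min load over one hour."""
    x = np.asarray(load_1min, dtype=float)
    return x.std(), x.max() - x.min()

rng = np.random.default_rng(1)
avg_case, worst_case = hourly_variation(rng.poisson(lam=20, size=60))
print(f"std = {avg_case:.2f}, max - min = {worst_case:.0f}")
```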
IMPLICATIONS
Knowing the spectrum occupancy of a PU, more precisely the dynamic change of the occupancy over time, is crucial to determining the degree to which secondary usage can be allowed, for example, as discussed in [6, 7]. First of all, the instantaneous occupancy sets an upper limit on the resources available for SUs. Thus, our results in Fig. 2 indicate that significant secondary usage is possible during the night until almost 7 a.m., regardless of the location. Additionally, in some locations, spectrum can become available during the weekends and weekdays. Knowing the future trends of occupancy further helps spectrum owners optimize their auction process without impairing PUs. For instance, if the primary spectrum occupancy tends to vary significantly (as can be observed in Fig. 4 for the afternoon hours), secondary usage has to be allowed more conservatively, such that enough resources are available for new PUs. On the other hand, if the PU occupancy tends to decrease, spectrum can be rented more aggressively.
Figure 4 highlights a significant challenge for cellular DSA: when there is less spectrum available, the availability is more variable, too. Hence, more spectrum should be left unused when more spectrum is being used.
MODELING PRIMARY USAGE
Since SUs opportunistically use spectrum not utilized by PUs, models of primary usage in individual cells play an important role in designing and deploying cellular DSA approaches. There are two simple models that fit the behavior of primary users well (see [2] for details). One such model is the call-based model. This model uses two random variables, T and D, to describe the interarrival time between two calls and the duration of calls. An obvious and popular choice is to model call arrivals as a Poisson process (independent and identically distributed exponential interarrival times) and call durations as being exponentially distributed. It turns out that the distribution of call interarrival times is well described by an exponential distribution in more than 90 percent of the hours for most cells. We use the Anderson-Darling test with 95 percent confidence level as a goodness-of-fit test for the exponential distribution. Note that since we use a 95 percent confidence level for the Anderson-Darling test, we expect only 95 percent of our tests to succeed. The technical details of these tests can be found in [2]. We also calculate the auto-correlation coefficient for each per-cell per-hour sequence of call interarrival times. We find that only 20 percent of these sequences have auto-correlation coefficients (at non-zero lags) higher than 0.16. Although not conclusive, such low auto-correlation is consistent with independence. Hence, we believe that call interarrivals are well modeled as an exponentially distributed i.i.d. sequence. In other words, call arrivals can be viewed as Poisson processes. Although Poisson processes have been used to model fixed telephone calls for a long time, our study is one of the first to show that this is largely true for individual cells in mobile systems.
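As an illustration, here is a sketch of both checks on one cell-hour of interarrival times; the 5 percent significance level corresponds to a 95 percent confidence level, but the helper names and synthetic data are our own assumptions (the article's exact procedure is in [2]).

```python
import numpy as np
from scipy import stats

def looks_exponential(gaps, sig_pct=5.0):
    """Anderson-Darling goodness-of-fit test against an exponential."""
    res = stats.anderson(np.asarray(gaps), dist="expon")
    crit = res.critical_values[list(res.significance_level).index(sig_pct)]
    return res.statistic < crit

def max_abs_autocorr(x, max_lag=4):
    """Largest absolute auto-correlation coefficient at non-zero lags."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    denom = np.dot(x, x)
    return max(abs(np.dot(x[:-k], x[k:]) / denom) for k in range(1, max_lag + 1))

rng = np.random.default_rng(2)
gaps = rng.exponential(scale=12.0, size=200)  # one synthetic cell-hour
print(looks_exponential(gaps), round(max_abs_autocorr(gaps), 3))
```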
Figure 4. Average per-cell variation of load on an hour-of-day basis. We calculate the variation using the standard deviation and the difference between maximum and minimum.
Figure 5. Histogram of call durations. We plot the histogram using different bin sizes.
CALL DURATIONS
Although the interarrival times of calls are well modeled as a Poisson process, the call-based model has a significant disadvantage: the distribution of call durations. In Fig. 5 we plot the empirically observed histogram of call durations. The histogram is quite unlike that of an exponential distribution. In fact, the histogram is not even monotonic. We see about 10 percent of calls having a duration of about 27 s. These correspond to calls during which the called mobile users did not answer and the calls were redirected to voice mail. However, RF voice channels were allocated during these calls. This illustrates that call durations can be significantly skewed toward smaller durations due to nontechnical failures (e.g., failure to answer). Also, note that the variance of the call durations is more than three times the mean, which is significantly higher than that of exponential distributions. Further analysis shows that there are likely two different distributions of call durations, one during the day and the other during the night (11 p.m.–5 a.m.). Furthermore, the transition hours between day and night likely see a mixture of both these distributions. In Fig. 6 (left), we compare the overall and nighttime distributions of call durations. Note that we use the log-log scale. We find that the nighttime distribution has more short calls as well as a heavier tail than the overall distribution. Both distributions have a "semi-heavy" tail and are not well modeled by classic short-tailed distributions such as Erlang (results not shown). However, the shape of the above distributions is reminiscent of the lognormal distribution, which is parabolic in log-log scale.
Figure 6. Left: duration distributions and lognormal fits; right: illustration of anomalous distributions during two hours.
Recall that D is lognormally distributed with parameters μ and σ² if log(D) is normally distributed with the same parameters. In Fig. 6 (left), we also plot the best lognormal fits for the distributions of call durations. The head of the empirical distribution shows significant deviation from the best lognormal fit. Although the tails of the empirical distribution and the best fit agree better, they too diverge at large values. Not only is the distribution of call durations hard to model, there can also be significant deviations during certain hours. We plot two such "outlier hours" in Fig. 6 (right). The two outlier plots correspond to the weekday hours plotted in Fig. 2; the spikes in the arrival rate correspond to the spikes of Fig. 6 (right). Both hours see a sudden spurt in short calls. We verified that at least one of these is caused by a large number of calls to a popular television show, whose telephone lines are often busy. Figure 6 (right) thus demonstrates that social behavior and external events, which may not be easily predicted, can and do have significant short-term impact on spectrum usage.
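A sketch of such a lognormal fit; the synthetic mixture below (a spike near 27 s plus a heavy tail) only loosely mimics the voice-mail effect and is our own assumption.

```python
import numpy as np
from scipy import stats

def lognormal_fit(durations):
    """Fit (mu, sigma) by matching the moments of log(D), following the
    definition: D is lognormal iff log(D) is normal."""
    logs = np.log(np.asarray(durations, dtype=float))
    return logs.mean(), logs.std()

rng = np.random.default_rng(3)
d = np.concatenate([rng.normal(27.0, 1.0, 100),     # voice-mail spike
                    rng.lognormal(4.5, 1.2, 900)])  # semi-heavy tail
mu, sigma = lognormal_fit(d)
ks = stats.kstest(np.log(d), "norm", args=(mu, sigma)).statistic
print(f"mu = {mu:.2f}, sigma = {sigma:.2f}, KS distance = {ks:.3f}")
```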
EVENT-BASED MODEL
The skewed distribution of call durations is the primary disadvantage of the call-based model and can be eliminated by using an alternative event-based model. This model ignores details about individual calls and instead models only the load X(·) (i.e., the total number of ongoing calls). Under this model, the load is considered to be a one-dimensional continuous-time random walk where steps are either +1 or –1, corresponding to the initiation and termination events of a call:

X(t + E) = X(t) + (–1)^Φ.   (1)

Here E is a random variable representing the time between consecutive steps/events, and Φ is a Bernoulli random variable that takes the value 1 with probability p and 0 otherwise. Since there is a +1 for every –1, p should be 1/2. A Poisson process is the obvious choice to model the inter-event times.
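A sketch of this random walk (the floor at zero is our practical addition; the idealized model lets X take any integer value):

```python
import numpy as np

def simulate_load(event_rate, p=0.5, n_events=1000, x0=10, seed=4):
    """Simulate Eq. 1: exponential inter-event times E and Bernoulli(p)
    step directions Phi, so each step is (-1)**Phi."""
    rng = np.random.default_rng(seed)
    t, x, path = 0.0, x0, [(0.0, x0)]
    for _ in range(n_events):
        t += rng.exponential(1.0 / event_rate)   # draw E
        x += -1 if rng.random() < p else +1      # (-1)**Phi
        x = max(x, 0)                            # keep load non-negative
        path.append((t, x))
    return path

print(simulate_load(event_rate=0.5)[:5])
```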
It turns out that inter-event times are well modeled as exponential distributions for only about 50 percent of the hours in most cells. As shown in Fig. 7 (left), exponential modeling fails almost twice as often during the night (when the load is low) as during the day. The skewed distribution of call durations is also responsible for the failures of the event-based model. This is because the +1 and –1 events correspond to the initiation and termination of a call, and are separated by the duration of that call, which is not exponentially distributed. If there are no additional events during the duration of that call, the duration itself will be an inter-event time. In general, call durations or portions thereof will be part of the inter-event times. Thus, during the hours of the night when the system load is low, the non-exponential distribution of call durations has a significant impact on the distribution of inter-event times. During the day, this impact is reduced. A second component of the event-based model is that the ±1 events form a Bernoulli process. A necessary (but not sufficient) condition for this to be true is that the sequence of ±1 steps should have close to zero auto-correlation at nonzero lags. To understand whether this is true, we plot the mean auto-correlation at nonzero lag values on an hour-of-day basis in Fig. 7 (right). We see a similar effect as above. During the night, when the load is lower, the +1 of a call is more likely to be followed by the –1 of that call. This causes negative correlation at odd lags. Accordingly, we can also see the positive correlation at even lags. During the day this effect is reduced. The above discussion shows that the event-based model is more applicable when the load is high, although the Bernoulli assumption is not strictly valid. However, when the load is low, the call-based model with a skewed distribution of call durations is the superior model.
Figure 7. Left: the percentage of successful fits (across all cells) averaged on a per-day basis; right: the per-hour auto-correlation of the step sizes (Φ) in our event-based models averaged across all cells.
IMPLICATIONS
Characterization and modeling of PU spectrum usage provides several insights that are crucial to enable secondary usage of spectrum.
For example, the owners of spectrum need models of their PUs to determine how much secondary usage is feasible and how it can be priced. Models for call arrival and call duration are essential for optimal pricing strategies of auctioned spectrum. In [1, 8] the authors develop optimal pricing strategies for secondary usage of cellular CDMA networks. The strategies depend only on the call arrival and call duration distributions, which are both assumed to be exponential. Our results show significant deviations of call durations from exponential distributions. Hence, these strategies may have to be revised. The precise implications are a subject for further study.
SHORT-TERM VARIABILITY
One of the primary requirements of DSA-based approaches is that SUs should not affect PUs. Hence, it is critical that SUs in cellular networks frequently sense the spectrum and vacate it if new PUs are detected. Also, since the available spectrum could change between two consecutive sensing periods, SUs must be aware of the extent of such short-term variations, and choose the time Ts between consecutive sensing periods accordingly. Figure 8 (left) provides insight into this by plotting the maximum increase in load, averaged over all cells, for different values of Ts. We plot the variation during a representative day of our dataset. The low variations at night are seen again. We see the peak variations in the late afternoon and a steep reduction thereafter. Notice also that the variation at Ts = 30 s is often close to the variation for Ts = 5 s and never more than twice it. This indicates that 20–30 s may provide a better trade-off between sensing overhead and the spectrum SUs need to leave unoccupied for a sudden arrival of PUs.
Figure 8. Left: maximum change in load, averaged across all cell sectors, plotted on an hourly basis. We use different time windows Ts over which the maximum change is calculated. Right: maximum change in load, averaged across all sectors, plotted as a function of Ts for four different hours.
We take a detailed look at the variation with Ts for four representative hours in Fig. 8 (right). We see less variation during the weekend, possibly due to the reduced average load. We also see that during the morning hours, a small Ts (1–2 s) does not pay off, since the maximum change in load only increases slightly. We found this to be true for all morning hours (before 10 a.m.). In the afternoon hours, however, there might be benefits to using a small Ts.
IMPLICATIONS FOR SPECTRUM SENSING
From a CR perspective, there are two fundamental questions to be answered for the development of sensing techniques for SUs:
• How often must sensing be performed?
• What is the required observation time of a single channel to reliably detect potential PUs?
Answers to these questions determine how much time and resources are needed for detecting PUs. The first question is usually answered by the PU, which specifies the so-called maximum interference time, the maximum time an SU is allowed to interfere with PU communication. Clearly, the maximum interference time sets an upper limit on the periodic time interval after which a channel used by an SU network has to be sensed (Ts). Knowing the probability distribution of the arrival process of the primary communication (in our study the call arrivals), and given a target probability pi that the SU interferes with the PU, Ts can be simply calculated using the cumulative distribution function (CDF) (pi = P(X ≤ Ts)). Equation 2 shows the calculation of Ts assuming an exponential call arrival process:
pi = 1 – e^(–λTs) ⇔ Ts = –ln(1 – pi)/λ.   (2)

The knowledge of the arrival process thus enables us to adjust the time (Ts) after which a channel needs to be sensed. For our investigation, the mean call interarrival time (over 1 h) per cell varies from the subsecond range to tens of minutes. Assuming a maximum of 30 calls/cell and a probability of interference of pi = 0.001, this would result in a required intersensing time between Ts = 0.03 s and Ts = 18 s. This huge gap clearly indicates the gains achievable by choosing Ts based on the call interarrival time, which can itself be gleaned by sensing. Results such as those in Fig. 8 also provide insights into good trade-offs for sensing strategies. An answer to the second question (i.e., determining the time needed for sensing a single channel) is much more complex and depends on various factors such as the sensitivity requirements of the PU, the specific sensing technique used, and distributed/cooperative sensing aspects. However, regardless of the time the sensing process takes for a specific system, it is desirable not to waste this time on sensing an occupied channel. Here, a model of the duration of a PU communication can help determine the time after which a channel sensed to be occupied by a PU should be sensed again. In particular, our analysis of call durations shows that there are many short calls, and the remaining ones are spread over a semi-heavy tail. Hence, a conditional sensing process is well motivated: the SU initially uses a rapid sensing frequency in case the new call is short. After a few tens of seconds, rapid sensing is likely to yield little benefit. Hence, slower sensing is justified.
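A one-line evaluation of Eq. 2; the 30 s mean interarrival time below is a hypothetical operating point chosen to reproduce the lower end of the range quoted above.

```python
import math

def sensing_interval(arrival_rate_hz, p_interfere):
    """Eq. 2: the longest inter-sensing time Ts keeping the probability
    of a new PU arrival within Ts below p_interfere."""
    return -math.log(1.0 - p_interfere) / arrival_rate_hz

# Mean interarrival time of 30 s and p_i = 0.001:
print(f"Ts = {sensing_interval(1 / 30.0, 0.001):.3f} s")  # ~0.030 s
```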
RELATED WORK
In recent years many measurement studies have been carried out to show the underutilization of licensed spectrum. Some examples of wideband measurement campaigns include the Chicago spectrum measurements [9], covering the spectrum ranges from 30 MHz to 3 GHz and 960 MHz to 2500 MHz, respectively, and the New Zealand measurements [10] in the spectrum range from 806 to 2750 MHz. Although these studies show the abundance of temporally unused spectrum, they give little insight into the dynamic behavior of the licensed users legally operating in those bands. A measurement campaign focusing on the cellular voice bands was carried out during the soccer World Cup 2006 in Germany [3, 4]. The authors show the differences in spectrum occupancy in the GSM and Universal Mobile Telecommunications System (UMTS) bands before, during, and after a match. However, similar to the wideband measurements mentioned above, little insight into call dynamics such as call arrivals or call durations is gained. The authors of [5] analyze the spectrum utilization in the New York cellular bands (CDMA as well as GSM). The CDMA signals are demodulated to determine the number of active Walsh codes (i.e., the number of ongoing calls). To determine the number of calls in the GSM bands, image processing of the spectrogram snapshots is used. Although this analysis provides more detailed results for the utilization of the cellular bands, call arrivals and durations are also not examined.
CONCLUSIONS AND FUTURE WORK
We presented a large-scale characterization of primary users in the cellular spectrum and discussed the implications on enabling cellular DSA. We used a data set that allowed us to compute the load of hundreds of base stations over three weeks. We derived several results, some of which are summarized below:
• Often, the durations of wireless calls (and the time for which voice channels are allocated) are assumed to be exponentially distributed. We find that the durations are not exponential in nature and possess significant deviations that make them hard to model.
• An exponential call arrival model (coupled with a non-exponential distribution of call durations) is often adequate to model the primary usage process.
• A simpler random walk can be used to describe primary usage under high load conditions.
• Spectrum usage can exhibit significant variability. We found that the load of individual sectors varies significantly even within a few seconds in the worst case. We also find high variability even across sectors of the same cell.
We believe that our work provides a first-step proof point to guide both policy and technical developments related to DSA. In this article we made no use of sensing data and relied wholly on network data. In future work we intend to perform simultaneous sensing and in-network data collection. This would allow us to investigate how accurate a sensing-based approach is and also validate the results in this article.
REFERENCES
[1] A. Al Daoud, M. Alanyali, and D. Starobinski, "Secondary Pricing of Spectrum in Cellular CDMA Networks," Proc. IEEE DySPAN, Apr. 2007, pp. 535–42.
[2] D. Willkomm et al., "Primary Users in Cellular Networks: A Large-Scale Measurement Study," Proc. IEEE DySPAN, Oct. 2008.
[3] T. Renk et al., "Spectrum Measurements Supporting Reconfiguration in Heterogeneous Networks," Proc. 16th IST Mobile Wireless Commun. Summit, July 2007, pp. 1–5.
[4] O. Holland et al., "Spectrum Power Measurements in 2G and 3G Cellular Phone Bands During the 2006 Football World Cup in Germany," Proc. IEEE DySPAN, Apr. 2007, pp. 575–78.
[5] T. Kamakaris, M. M. Buddhikot, and R. Iyer, "A Case for Coordinated Dynamic Spectrum Access in Cellular Networks," Proc. IEEE DySPAN, Nov. 2005, pp. 289–98.
[6] A. P. Subramanian et al., "Fast Spectrum Allocation in Coordinated Dynamic Spectrum Access Based Cellular Networks," Proc. IEEE DySPAN, Apr. 2007, pp. 320–30.
[7] O. Ileri, D. Samardzija, and N. Mandayam, "Demand Responsive Pricing and Competitive Spectrum Allocation via a Spectrum Server," Proc. IEEE DySPAN, Nov. 2005, pp. 194–202.
[8] H. Mutlu, M. Alanyali, and D. Starobinski, "Spot Pricing of Secondary Spectrum Usage in Wireless Cellular Networks," Proc. IEEE INFOCOM '08, Apr. 2008, pp. 682–90.
[9] M. A. McHenry et al., "Chicago Spectrum Occupancy Measurements & Analysis and a Long-Term Studies Proposal," Proc. 1st Int'l. Wksp. Tech. Policy Accessing Spectrum, Aug. 2006.
[10] R. Chiang, G. Rowe, and K. Sowerby, "A Quantitative Analysis of Spectral Occupancy Measurements for Cognitive Radio," Proc. 65th IEEE VTC, Apr. 2007, pp. 3016–20.
BIOGRAPHIES
DANIEL WILLKOMM ([email protected]) received his Diploma degree (with distinction) in electrical engineering from the Technische Universität Berlin (TUB), Germany, in 2004. He is currently a Ph.D. student at the Telecommunication Networks Group (TKN), TUB, working in the area of cognitive radio networks. From 2005 to 2007 he received a scholarship from the DFG graduate program (Graduiertenkolleg) Stochastic Modeling and Quantitative Analysis of Complex Systems in Engineering, an interdisciplinary research group of three major universities in Berlin. Currently he is visiting Sprintlabs in California for a joint project between Sprint and TKN.
SRIDHAR MACHIRAJU ([email protected]) is a research scientist in the Applied Research Group at the Sprint Advanced Technology Laboratories, Burlingame, California. He has been with Sprint since summer 2005. He received his M.S. and Ph.D. degrees from the University of California at Berkeley in 2003 and 2006, respectively. Prior to that, he obtained his Bachelor's degree from the Indian Institute of Technology, Madras, Chennai. He is broadly interested in problems related to performance analysis and algorithm design in the area of networked computer systems, especially mobile systems. His current research is focused on wireless resource allocation, wireless security, and active measurements.
JEAN BOLOT ([email protected]) runs the research laboratory of Sprint located in the San Francisco Bay area. His research interests center around the measurement, analysis, and economics of the Internet, and in particular the mobile Internet. Prior to joining Sprint, he was a founding team member of Ensim, a Silicon Valley company in the area of data center automation. Earlier, he did research at INRIA in France on Internet measurement and voice over the Internet. He received his M.S. and Ph.D. in computer science from the University of Maryland at College Park in 1988 and 1991, respectively.
ADAM WOLISZ [SM] ([email protected]) received his degrees (Diploma 1972, Ph.D. 1976, Habil. 1983) from Silesian University of Technology, Gliwice, Poland. He joined TUB in 1993, where he is a chaired professor in telecommunication networks and executive director of the Institute for Telecommunication Systems. He is also an adjunct professor at the Department of Electrical Engineering and Computer Science, University of California, Berkeley. His research interests are in architectures and protocols of communication networks. Recently he has been focusing mainly on wireless/mobile networking and sensor networks. He was with the Polish Academy of Sciences until 1990 and GMD-Fokus, Berlin from 1990 to 1993.
TOPICS IN RADIO COMMUNICATIONS
A Technical Framework for Light-Handed Regulation of Cognitive Radios
Anant Sahai and Kristen Ann Woyach, University of California, Berkeley
George Atia and Venkatesh Saligrama, Boston University
ABSTRACT
Light-handed regulation is discussed often in policy circles, but what it should mean technically has always been a bit vague. For cognitive radios to succeed in reducing the regulatory overhead, this has to change. For us, light-handed regulation means minimizing the mandates to be met at radio certification and relying instead on incentives to deter bad behavior. We put forth a specific technical framework in which the certification mandates are minimal — radios must modulate their transmitted waveform to embed an identity fingerprint, and radios must obey certain go-to-jail commands directed toward their identities. More specifically, the identity is represented by a temporal profile of taboo time slots in which transmission is impossible. The fraction of taboo slots represents the overhead of this approach and determines how reliably harmful interference can be attributed to the culprit(s) responsible. Meanwhile, the fraction of time that innocent radios spend in jail is the overhead for the punishment system. The analysis is carried out in the context of a real-time spectrum market, but is also applicable to opportunistic use.
INTRODUCTION
Some of this material has appeared previously in [1–3].
Governments around the world have to decide what regulation is going to look like for the next generation of wireless devices. The current regulatory model — often called "command-and-control" — in which spectrum is parceled and allocated to specific uses and companies, was designed for one-to-many broadcast systems such as TV and AM/FM radio. This centralized solution is easy to enforce, but has difficulty managing allocations on the heterogeneous usage scales of interest. It leaves "holes" in both time and space where valuable spectrum is being wasted [3]. In common language, both the wasted spectrum and the need to get lengthy government approvals are often viewed as problems of regulatory overhead. Legal scholars and economists have debated how to solve this problem. While all agree that decentralized and more "light-handed" regulation is desirable, the form of this regulation is contested. Spectrum privatization advocates rely on market forces to determine who will be allowed to transmit.
In this model, government regulation certifies devices, monitors market transactions, and resolves disputes as civil offenses through the courts. Spectrum commons advocates, on the other hand, note that with current technological advances a simpler approach is possible that puts the burden of regulation entirely on equipment: any certified device may transmit.

Regardless of the policy approach, the looming introduction of frequency-agile and software-defined radios poses a major challenge. Cognitive radios are autonomous and possibly adaptive, allowing them to adjust their transmission patterns according to local observations [4]. This forces us to confront the wireless version of an age-old philosophical question: for autonomous beings, is the freedom to do good distinguishable a priori from the freedom to do evil? From this perspective, frequency agility runs the risk of being the wireless equivalent of Plato's Ring of Gyges, which conferred invisibility and hence unaccountability on its wearer. Faulhaber raises this specter through his discussion of "hit and run radios" that are virtually uncatchable because they turn on, use the spectrum for a period of time, and turn off without a trace [5].

The knee-jerk response to this prospect is to just ban frequency agility altogether. But in the age of an ever increasing number of wireless interfaces on portable devices, the potential monetary and power savings enabled by radio unification through frequency agility are hard to ignore. Furthermore, usage holes exist at time and space scales that are smaller than the device lifetimes and the lifetime mobility of devices. So regardless of whether we move to privatization or commons, precluding frequency agility would eliminate the long-term prospects for dynamic spectrum access to reduce the regulatory overhead of wasted spectrum. So the core question the wireless community faces is how to exploit frequency-agile devices to reduce regulatory overhead while still allowing enforceability. It is tempting to wish for an unambiguous way to certify the safety of wireless protocols involving frequency agility and then lock these down at device certification time.
Besides the obvious problem Gödel and Turing have brought to our attention — that automatically proving correctness of general programs is impossible, and engineering bug-free software is hard even in deterministic settings — Hatfield has pointed out that the unpredictable interactions of the wireless environment make a priori certification even more difficult [6]. The detailed code-level certification this situation would demand is likely to be costly, and thus represents a barrier to entry that effectively reduces the freedom to innovate at the wireless transport level. The real world of politics dictates that such complex barriers will provide many opportunities for manipulation by parties interested in blocking competitors [7].

If it is hard to certify against bad behavior, why not just require behavior that is known to be good? Why do wireless devices need freedom over how they access spectrum? If all desirable future wireless services with all device lifetimes can be served using a few stable interfaces, freedom to innovate at the spectrum access level is not necessarily very valuable. This is reminiscent of the apocryphal quote from the pre-digital-revolution days, "I think there is a world market for maybe five computers," or the pre-Internet-revolution world view that the "information superhighway" would just consist of audio/video on demand, home shopping, multiplayer gaming, digital libraries, and maybe some distance learning. Meanwhile, multiuser information theory is still revealing innovative ways of doing wireless communication; the question of potential synergies between content/application and transport layers is still open [8]. For now, it seems reasonable to come down on the side that freedom is important.

If entirely a priori enforcement is difficult, it seems natural to follow the example of crime in human society and have a role for a posteriori spectrum rule enforcement that uses incentives to deter bad behavior rather than precluding all bad behavior by design. The role of a priori certification is then limited to maintaining the incentives. Existing game-theoretic literature says that while a pair of equal users can self-enforce [9] to a range of stable and fair equilibria, this breaks down when users are unequal. Consider a case where the first user can cause very little interference to the second while the second can cause a great deal of interference to the first. The first has neither defense nor ammunition. Without a possibly external force to which the second is vulnerable, the first cannot reasonably believe that the second will follow sharing rules. Indeed, vulnerability is the mother of trust; certification will be required to produce the necessary vulnerability. Furthermore, robust identity is needed to avoid the "Ring of Gyges" problem when there are more than two users, since without identity there is no threat of being held accountable.

In this article we consider how to give radios an identity in a way that is easy to certify, easy to implement, and does not presume much about the kinds of waveforms the radio system can implement. Perhaps more important, this approach to radio identity allows harmful interference to be causally attributed with great confidence to the guilty radio(s) without imposing a significant physical layer (PHY) burden on the victims. This is done by giving each radio its own spectral fingerprint of time-frequency slots that it is forbidden to use. The proportion of taboo slots quantifies the spectrum overhead of such an identity system.
To understand how to set the parameters, we then sketch out a simple system of punishment for misbehaving radios that involves sending them to "spectrum jail" for finite amounts of time. This system is explained in the context of a toy real-time spectrum market where the overhead imposed is the proportion of time that innocent systems spend in jail. Somewhat surprisingly, this gives us a spectral analog of human criminal law's Blackstone's ratio ("Better that ten guilty persons escape than that one innocent suffer") [10]. Overall, we see that while light-handed regulation is possible, some significant spectral overhead seems unavoidable.
IDENTITY
There are many potential approaches to "identity." In the most straightforward approach, identity is explicitly transmitted by the physical layer as a separate signal in a mandated format. If a victim experiences harmful interference, it merely has to decode these signals to learn the identities of all the potential interferers. However, while this approach is conceptually simple, it has three major shortcomings:
• It forces us to mandate a standard PHY waveform for transmission of this identity information. This adds additional complexity to systems that need different waveforms for their own signals.
• It imposes an additional decoder PHY burden on the part of either specially deployed enforcement radios or the potential victims of interference.
• A broadcast identity does not distinguish between the guilty and innocent bystanders. Thus, it reduces the incentive to deploy innovative approaches (e.g., beamforming) to reduce interference.
This last issue is particularly significant where we wish to punish only users who are actually causing harmful interference, as opposed to punishing any user who is transmitting without authorization. The "no harm no foul" principle is attractive in the context of light-handed regulation, but the explicit identity beacon approach does not distinguish between harmful interference and unfortunate fading or bad luck. A second approach to identity can be developed [11] where idiosyncrasies of the radio front-ends are used to identify devices. While this "accidental identity" approach deals with the first objection above, the others remain. Furthermore, such accidental identities provide no way of having multiple coordinates associated with a single transmission. For example, an offending transmission might originate from a particular device that is in a particular network and being used by a particular human user. An explicit beacon could just concatenate bit fields to transmit all three identities, but there is no way to do this with accidental identities. For example, contrast "tall female, short blond hair, slim build, wearing a purple bodysuit" as a description with a more engineered identity such as "Seven of Nine, Tertiary Adjunct to Unimatrix Zero-One." Stepping back, the use of accidental identities is very much like the use of cyclostationary signal features to detect and distinguish legacy primary users.
It turns out that much better performance can be obtained if we design these signal features ourselves [12]. We were inspired by how geographic profiling of criminals exploits the fact that serial killers tend to maintain a taboo buffer zone around their homes wherein they do not kill anyone [13]. Figure 1 shows the wireless equivalent: an engineered identity-specific code that specifies which time slots are taboo for this temporal profile. The length of the time slots should be significantly longer than the delay spread of the relevant channels as well as longer than the length of packets; something like 1–10 ms seems reasonable. This temporal taboo can easily be certified since it only requires a circuit that disables transmission. Different identities can also be stacked as in Fig. 1 by giving each code a veto over using time slots that are taboo to it.
Figure 1. Taboo-based identities, demonstrated here as the composition of three levels: network, user, and device. The taboo times can be different in different bands to enable intelligent frequency hopping to maintain steady low-latency links.
This avoids all the problems above: no separate PHY is needed, there is no additional decoder burden on victims since they just need to record the pattern of harm, and there is the hope that only the guilty parties will be convicted. A technical analysis of why these codes work is given in [1], but the basic idea can be understood by considering a randomized code wherein each user's temporal profile is chosen randomly by tossing an independent biased coin. If it comes up heads, the slot is taboo. This is like a randomized trial in clinical medicine: the taboo periods act as "experimental controls" for the efficient testing of the hypothesis that this user is causing interference. As a hypothesis testing problem for each user, the usual trade-offs apply and are depicted in Fig. 2. It turns out that there are three critical parameters:
• Tc (time till conviction): the number of slots that must be observed before a decision can be made at the requisite probabilities of false alarm and missed detection.
• Δ (disruption due to interference): the additional probability of bad performance induced by the presence of the harmful interferer. This represents the degree to which the guilty user is harming the victim. It plays a role analogous to the channel gain in wireless communication. The top panel in Fig. 2 shows that more disruptive interferers are easier to convict.
• γ (overhead): the fraction of slots that are taboo for a user. This represents the time-frequency overhead of this approach to identity, since even when users are honest they are unable to use these opportunities.
This parameter plays a role analogous to the signal strength in traditional wireless communication. The top right panel in Fig. 2 shows that higher overhead makes it easier to convict a guilty party. For a single cognitive user, the key trade-off equation obtained using the central limit theorem is [1]
Tc ≈ (1/γ) [ ( √(θ(1 – θ)) zf + √( θ1(1 – θ1) + (1 – γ)θ0(1 – θ0)/γ ) zm ) / ( (1 – γ)Δ ) ]²,   (1)
where θ0 is the true background loss, or the background probability of harm without the added interference, θ1 = θ0 + Δ is the net level of harm with the interference, and θ = (1 – γ)θ1 + γθ0 is the overall level of observed harm. zf = Φ^(–1)(1 – pfa), and similarly for zm using instead the target probability of missed detection, with Φ^(–1)(·) denoting the inverse cumulative distribution function (CDF) of a standard normal Gaussian distribution. Notice that this formulation is agnostic to the underlying mechanism of the harm — it could be raising the noise floor, intermodulation, receiver desensitization, or even disrupting the medium access control (MAC) layer. It does not matter what it is, as long as the victim can note when it is experiencing low performance.

However, the top two panels in Fig. 2 reveal that the nature of the victim matters. If a victim is like the princess of Hans Christian Andersen's folk tale and has a low tolerance for background loss θ0, it is easier to catch those introducing small amounts (a figurative "pea") of additional disruption. The worst case for conviction is victims for whom the acceptable background levels of loss are much higher.

This particular approach to identity does not demand strict synchronization. Suppose the criminal and victim clocks were offset by up to 10 time slots in either direction. Then each user effectively has 20 identity codes, all shifts of each other. If any one of them is convicted, the user will be punished. So the effect of imperfect synchronization is just a proportional change in the required probability of false alarm.

The more subtle issue is how to deal with multiple users. It might be that more than one user is guilty. It is important to deal with this to avoid the cognitive radio equivalent of looting, where the presence of one criminal induces others to join in. As long as each criminal is imposing sufficient additional harm on its own, such additional criminals will be detected under their own hypothesis tests. The harm caused by the other guilty users will just effectively raise the level of background losses, and hence make it take longer to catch the weaker criminals. We can also search for pairs or triples of users together to catch small conspiracies. However, the bottom left panel of Fig. 2 shows that this comes at a cost, since the effective overhead for the group is greatly reduced — the group can transmit if anyone within the group can transmit.
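To get a feel for these trade-offs, the following sketch evaluates Eq. 1 at one illustrative operating point; the probabilities match Fig. 2 (90 percent detection, 0.5 percent false alarm), but the chosen γ, Δ, and θ0 are our own assumptions, not values from the article.

```python
import math
from statistics import NormalDist

def time_to_conviction(gamma, delta, theta0, p_fa=0.005, p_md=0.10):
    """Evaluate Eq. 1: slots to observe before convicting a single user."""
    z_f = NormalDist().inv_cdf(1.0 - p_fa)
    z_m = NormalDist().inv_cdf(1.0 - p_md)
    theta1 = theta0 + delta
    theta = (1.0 - gamma) * theta1 + gamma * theta0
    num = (math.sqrt(theta * (1.0 - theta)) * z_f
           + math.sqrt(theta1 * (1.0 - theta1)
                       + (1.0 - gamma) * theta0 * (1.0 - theta0) / gamma) * z_m)
    return (num / ((1.0 - gamma) * delta)) ** 2 / gamma

# 10 percent taboo overhead, 5 percent disruption, 1 percent background loss.
print(f"Tc ~ {time_to_conviction(0.10, 0.05, 0.01):,.0f} slots")
```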
[Figure 2 plots: wait till conviction (Tc) vs. disruption due to interference (Δ) for several background loss rates; Tc vs. identity overhead (γ); effective overhead vs. number of users; and utility/interference imposed vs. number of codes implicated.]
Figure 2. The trade-offs involved in the taboo approach to identity for the specific case of 90 percent probability of detection and 0.5 percent probability of false alarm. The top two panels consider a single isolated user, while the bottom two panels consider trade-offs relevant to multiple users.

To be able to catch even small conspiracies in a timely manner requires an identity overhead that is substantial — more than 25 percent. However, this same plot says that from a societal perspective, this full overhead is only experienced when there is only a single legitimate user of the channel. If a channel can legitimately be oversold to many different users, then the effective societal overhead is much less.

A final cost of additional overhead is shown in the bottom right panel of Fig. 2. Increased overhead makes it harder for groups of radios to find times to coordinate transmissions for adaptive beamforming or other such purposes. They have to find times that are not taboo for any of them, and this may reduce their utility. The critical open question here is how many radios will need to simultaneously transmit in the realistic systems of the future. The bottom right panel of Fig. 2 can also be
interpreted in a different context: a single criminal might decide to maliciously “frame” other innocent users by voluntarily deciding not to transmit in the taboo slots that correspond to another identity. However, it does so at a cost to itself (its own utility is reduced) and a benefit to the potential victims, who suffer less harm. Even with only 5 percent overhead in the codes, trying to frame a hundred different identities reduces the harm to less than 0.6 percent of slots. How to avoid being framed is discussed at the end of the next section.
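The arithmetic behind the 0.6 percent figure is a one-line calculation (assuming, as I do here, that the taboo profiles are independent): a criminal respecting the taboo slots of N other identities, each with overhead γ, can still transmit in only a fraction

$$(1-\gamma)^{N} = (1-0.05)^{100} \approx 0.006$$

of the slots, so the harm it can inflict while framing all of them is bounded by less than 0.6 percent of slots.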
DETERRENCE AND THE NEED FOR SOMETHING TO LOSE

Even in a real-time spectrum market, the actual waveform design within a band might be done in software (consider an orthogonal frequency-division multiplexing [OFDM] based system), and it is therefore hard to certify that a radio will only use those channels for which it has paid.
[Figure 3 panels: the multiband setup with primary and cognitive users across bands 1 through B, the cheat/no-cheat transmission model with false alarms, utility of the cognitive user vs. expansion for several Pcatch and Ppen settings, and fraction of time in jail vs. expansion.]
Figure 3. A jail-based deterrence system for punishing cognitive radios. The plots show the constraints on expansion and the size of purchased home bands when the utilities of the players are considered.

So, to enable light-handed regulation, the key idea is that in addition to mandating the identity code, each radio is certified to obey a go-to-jail command from the spectrum regulator. (A monetary fine could serve a similar purpose, but it is much harder to certify that a device will pay a fine than it is to certify that it will go to jail.) Figures 3 and 4 explore the jail-based deterrence system to understand the important parameters for encouraging cognitive users to follow spectrum sharing rules. For more information, see [2]. Although the treatment there is in the context of opportunistic spectrum use by cognitive radios, we see here that most of the same arguments apply to cognitive radios participating in a dynamic spectrum market.

The first panel of Fig. 3 shows the setup. There are B channels available for sale. Some of these may be occupied for a time by priority users willing to pay more than you. The cognitive radio is supposed to pay before using a channel, but it is technically capable of transmitting at will when it is not in jail. If the cognitive user is caught introducing interference, we model it as receiving a go-to-jail command with probability Pcatch. At that point, it is sent to jail, where it is not allowed to use any of the channels, including any channels for which it might have actually paid or any unlicensed bands. The length of the jail sentence is determined by Ppen. Since all systems of wireless identity will have some level of false accusations, a radio can also be wrongfully sent to jail with probability Pwrong.

The market operator is concerned about the case illustrated in the second panel of Fig. 3 for B = 1. If the priority user is very active and the jail sentence is not harsh enough, a rational cognitive user that wants to maximize its access to channels will cheat because jail is not a big enough threat. The problem is that radio certification (including checking Ppen compliance) occurs in advance, while the attractiveness of cheating varies based on local market and propagation conditions. So the regulator must make jail painful enough to deter cheating even in the most attractive case — when there are simply no more channels available for sale or noninterfering use.
The somewhat surprising consequence is that for deterrence to be effective, a cognitive radio always has to have something to lose. One way is to have exclusive access to at least one paid priority channel. (The prospect of temporarily losing access to the unlicensed bands might also provide such a deterrent, but we do not explore this here.) Let β be the number of channels on which the cognitive radio already has highest priority. The important quantity is the "expansion factor" (B − β)/β, representing the ratio of the number of additional channels that could potentially be illegally or legally accessed to the number of channels on which the radio already has priority access. As the expansion factor increases, the jail sentences must lengthen to balance the added temptation to cheat. This is illustrated in the top half of the third panel of Fig. 3.

It is at this point that the prospect of wrongful conviction must be considered. As the jail sentences lengthen to deter cheating, honest radios also suffer from being occasionally sent to jail for longer and longer periods of time. The green curve in the bottom half of the third panel of Fig. 3 depicts this. The fraction of time spent in jail by an innocent radio can be viewed as the overhead imposed by the enforcement scheme, since usable spectrum is being wasted. Meanwhile, the additional benefit to a cognitive radio from participating in a real-time market depends on the fraction of channels that are available for use/purchase (not being used by a higher-priority user). The blue curve shows that wrongful convictions really take their toll on the extra utility — with 15 extra channels, of which 6.75 on average ((1 − 0.55) × 15) are available, the actual net extra benefit is less than 3 because of utility lost to being in jail!

The full benefits of dynamic spectrum access are only obtained when the expansion factor is high — because that is when statistical multiplexing best allows radios to exploit local spectrum. After all, an expansion factor of 0 just means that all the channels you can access are already preassigned to you: static spectrum access.
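The innocent-radio overhead can be sketched with a simple renewal argument. The modeling choices below are mine, not necessarily those of [2]: a transmitting innocent radio is wrongfully convicted with probability Pwrong per slot, and a jail term is geometric with mean 1/(1 − Ppen) slots.

```python
def innocent_jail_fraction(p_wrong: float, p_pen: float, p_tx: float = 1.0) -> float:
    """Long-run fraction of time an innocent radio spends in jail.

    A free period lasts 1/(p_tx * p_wrong) slots on average (conviction is
    only possible while transmitting); a jail term averages 1/(1 - p_pen).
    """
    mean_free = 1.0 / (p_tx * p_wrong)
    mean_jail = 1.0 / (1.0 - p_pen)
    return mean_jail / (mean_free + mean_jail)

# Harsher sentences (Ppen -> 1) deter cheating but raise the innocent overhead:
for p_pen in (0.9, 0.99, 0.999):
    print(p_pen, round(innocent_jail_fraction(p_wrong=0.01, p_pen=p_pen), 3))
```

Even this toy model reproduces the qualitative tension in Fig. 3: the sentence length needed to deter high expansion factors drives up the time innocent radios waste in jail.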
The first panel in Fig. 4 shows the maximal expansion factor a cognitive radio will tolerate (when the blue curve in the last panel of Fig. 3 starts to dip) as a function of Pwrong and Pcatch. The cutout shows the interesting dependence on PTX — participating in a spectrum market becomes less and less appealing the more you believe that other users will have higher priority (willingness to pay) than you. You risk jail for less benefit.

A more insightful picture is obtained in the last two panels of Fig. 4. These concentrate on the overhead: the time innocent users spend in jail. Notice that Pwrong is the critical parameter. Although the last panel shows that Pcatch has an effect on the expansion and overhead, the requirements on Pwrong are much stricter. In order to get good expansion with low overhead, Pwrong must be very small.

What does all this tell us about the identity system? Perhaps the most important consequence is that the identity code system must support many distinct identities — probably at least in the tens of thousands. Suppose that there were 1000 identities shared among individual radios. Then, if one radio commits a violation, any other individual radio has a 1/1000 chance of sharing an identity code (and hence a jail sentence) with that guilty radio. It takes 10,000 distinct identities in the system to bring the probability of wrongful conviction down to 1 percent if we also fear that a miscreant is capable of framing 100 others. Too few identity codewords would result in too high a chance of wrongful conviction. This pretty much rules out relying on accidental identity as a viable way forward.

In addition, a radio should have its own identity code assignment randomly change as time goes on so that it is not always subject to collective punishment with the same miscreant. Notice that this changing identity does not have to be registered with the regulator and can remain private, thus preserving user anonymity. All that is required for the incentives to work is that the radio knows its own identity and is certified to obey go-to-jail messages directed to it.
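The 10,000-identity figure is a back-of-envelope calculation (my reconstruction): if K identity codes are assigned uniformly at random and a miscreant can implicate its own code plus F framed codes, an innocent radio shares a punished code with probability roughly

$$P_{\mathrm{wrong}} \approx \frac{F+1}{K} = \frac{101}{10{,}000} \approx 1\%.$$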
[Figure 4 plots: maximal bandwidth expansion vs. Pwrong for several values of PTX and Pcatch, and expansion vs. overhead for several values of Pwrong and Pcatch.]
Figure 4. When all parameters are optimized, we can see the maximal expansion possible and the overhead necessary to achieve it. Notice that while Pcatch affects the expansion and overhead, the more critical parameter is in fact Pwrong, the probability of being wrongfully punished.
CONCLUSIONS

Light-handed regulation is desirable to allow cognitive radios to be deployed in a way that encourages innovation. Doing this requires imposing only minimal certification requirements that do not restrict technological innovation while also not imposing a large regulatory overhead. The requirements need to be such that users have some faith that others will follow the rules. Surprisingly, in this perspective, whether cognitive radios are permitted to use spectrum opportunistically or whether they must engage in "Coasian bargains" using markets turns out not to matter much in terms of the broad outline of what is required. Either way, there has to be a system of identity for radios so that violators can be caught.

Identity systems must be able to reliably assign blame for harmful interference even when the victim is much older than the culprit. The spirit of light-handed regulation suggests that the identity of the culprit should be reliably inferable from the pattern of interference itself, and this leads naturally to the fingerprint metaphor for identity. The challenge in fingerprint design is to manage the overhead of the fingerprints. One overhead is easy to understand: the degrees of freedom and spectral resources left unavailable for productive use because they are dedicated instead to the fingerprints.
The other overhead is more nebulous: the extent to which the fingerprint discriminates in favor of or against certain approaches to spectrum use, as well as how hard it is to certify the fingerprints. The fingerprints discussed here are easy to certify and are largely agnostic toward how spectrum is used. An initial analysis was done using random codes, and it reveals that the "overhead" of the code (the proportion of taboo time slots) plays a role analogous to the transmit signal power in traditional wireless communication: it has to be high enough or it will not be possible to meet target enforcement QoS parameters. Going forward, it will be important to see how well new or existing error correcting code families can be adapted to this task. List decoding, rateless fountain codes, and jamming-oriented adversarial models are all likely to be important here. More subtly, this problem is closer to Ahlswede's appropriately named information-theoretic problem of identification [14] than it is to Shannon's classical sense of communication.

Identity on its own is insufficient; the incentives must be there to encourage good behavior by cognitive radios. This article has explored a model of deterrence in which radios that have been convicted of misbehaving are sentenced to finite jail terms. Two surprising things have been found. First, in order for a lightly certified cognitive radio to be trustworthy, it must have something to lose. This suggests that cognitive radios are subject to their own "Matthew effect" [15] — it will be easiest for systems that already have licensed spectrum to deploy cognitive radios. Second, overall system performance is much more dependent on the probability of wrongful conviction than on the probability of successfully punishing an actual wrongdoer. Going forward, incentives must be understood in contexts beyond the simple delay-insensitive, infinitely bandwidth-hungry cognitive users implicitly considered here. For example, power-sensitive and delay-sensitive cognitive users might respond differently.

This article has sketched out a new paradigm for light-handed spectrum regulation, but a great deal of technical work remains to be done before the viability of this approach can be established. Intuitively, the two overheads (identity and wrongful convictions) must be balanced appropriately to find the sweet spot of maximal regulatory efficiency. However, it might be that qualitatively different applications having very different wireless requirements will require different balances — suggesting that some form of centralized spectrum zoning will remain with us. The advantage of this new paradigm is that such questions might be answerable by theorems rather than mere rhetoric.
REFERENCES

[1] G. Atia, A. Sahai, and V. Saligrama, "Spectrum Enforcement and Liability Assignment in Cognitive Radio Systems," Proc. 3rd IEEE Int'l. Symp. New Frontiers at DySPAN, Chicago, IL, Oct. 2008.
[2] K. A. Woyach et al., "Crime and Punishment for Cognitive Radios," Proc. 46th Allerton Conf. Commun., Control, and Comp., Monticello, IL, Sept. 2008.
[3] A. Sahai et al., "DSP Applications: Cognitive Radios for Spectrum Sharing," IEEE Sig. Processing, Jan. 2009.
[4] J. Mitola, Cognitive Radio: An Integrated Agent Architecture for Software Defined Radio, Ph.D. thesis, Royal Inst. of Tech., Stockholm, Sweden, 2000.
[5] G. R. Faulhaber, "The Future of Wireless Telecommunications: Spectrum as a Critical Resource," Info. Economics Policy, vol. 18, Sept. 2006, pp. 256–71.
[6] D. Hatfield and P. Weiser, "Toward Property Rights in Spectrum: The Difficult Policy Choices Ahead," CATO Inst., Aug. 2006.
[7] N. Isaacs, "Barrier Activities and the Courts: A Study in Anti-Competitive Law," Law and Contemporary Problems, vol. 8, no. 2, 1941, pp. 382–90.
[8] J. Andrews et al., "Rethinking Information Theory for Mobile Ad Hoc Networks," IEEE Commun. Mag., Dec. 2008.
[9] R. Etkin, A. Parekh, and D. Tse, "Spectrum Sharing for Unlicensed Bands," Proc. 1st IEEE Int'l. Symp. New Frontiers at DySPAN, Baltimore, MD, Nov. 2005.
[10] A. Volokh, "N Guilty Men," Univ. PA Law Rev., vol. 146, no. 1, 1997, pp. 173–216.
[11] V. Brik et al., "PARADIS: Physical 802.11 Device Identification with Radiometric Signatures," Proc. ACM MobiCom, Burlingame, CA, Sept. 2008.
[12] R. Tandra and A. Sahai, "Overcoming SNR Walls Through Macroscale Features," Proc. 46th Allerton Conf. Commun., Control, and Comp., Monticello, IL, Sept. 2008.
[13] D. K. Rossmo, Geographic Profiling: Target Patterns of Serial Murderers, Ph.D. thesis, Simon Fraser Univ., 1995.
[14] R. Ahlswede and G. Dueck, "Identification via Channels," IEEE Trans. Info. Theory, vol. 35, no. 1, 1989, pp. 15–29.
[15] R. K. Merton, "The Matthew Effect in Science," Science, vol. 159, Jan. 1968, pp. 56–63.
BIOGRAPHIES

ANANT SAHAI ([email protected]) joined the Department of Electrical Engineering and Computer Sciences at the University of California at Berkeley in 2002. Prior to that, he spent a year at the wireless startup Enuvis developing software radio algorithms for GPS at very low signal-to-noise ratios. He is currently a member of the Berkeley Wireless Research Center and the Wireless Foundations Center. His research interests are in wireless communication, signal processing, information theory, and distributed control. He is particularly interested in all aspects of spectrum sharing.

KRISTEN ANN WOYACH ([email protected]) is currently a graduate student with an NSF fellowship at Berkeley. Her research interests are in spectrum sharing at the intersection of technology and policy. Prior to this, she was an undergraduate student researcher at the University of Notre Dame, where she worked on sensor networks.

GEORGE ATIA ([email protected]) received his Ph.D. degree in electrical and computer engineering from Boston University, Massachusetts, in 2009. He received B.Sc. and M.Sc. degrees, both in electrical engineering, from Alexandria University, Egypt, in 2000 and 2003, respectively. He is the recipient of the outstanding graduate teaching fellow of the year award in 2003–2004 from the Electrical and Computer Engineering Department at Boston University. In 2006 he received the College of Engineering Dean's Award at the BU Science and Engineering Research Symposium. He is also the recipient of the best paper award at the International Conference on Distributed Computing in Sensor Systems (DCOSS) in 2008. His main research interests are in the fields of wireless communications, network information theory, and distributed signal processing.

VENKATESH SALIGRAMA ([email protected]) received his Ph.D. degree from MIT in 1997. He was a research scientist at the United Technologies Research Center, Hartford, Connecticut, from 1997 to 2001 and a visiting scientist at MIT in 2000–2001. He is currently an associate professor in the Department of Electrical and Computer Engineering at Boston University. He has received numerous awards, including the Presidential Early Career Award, the Office of Naval Research Young Investigator Award, the NSF CAREER Award, and the Outstanding Achievement Award from United Technologies. His team was a finalist in the 2005 Crossbow Sensor Network Challenge Competition. He is currently serving as an Associate Editor for IEEE Transactions on Signal Processing. His research interests are in information and control theory and networked signal processing with applications to sensor networks. His recent research interests are in high dimensional analysis, sparse reconstruction, and statistical learning.
TOPICS IN RADIO COMMUNICATIONS
Public Safety Radios Must Pool Spectrum

William Lehr, Massachusetts Institute of Technology
Nancy Jesuale, NetCity
ABSTRACT

The dynamic-spectrum-access research and development community is maturing technologies that will enable radios to share RF spectrum much more intensively. The adoption of DSA technologies by the public-safety community can better align systems with the future of wireless services, in general, and can contribute to making next-generation public-safety radio systems more robust, capable, and flexible. A critical first step toward a DSA-enabled future is to reform spectrum management to create spectrum pools that DSA-enabled devices, such as cognitive radios, can use — under the control of more dynamically flexible and adaptive prioritization policies than is possible with legacy technology. Appropriate reform will enable spectrum portability, facilitating the decoupling of spectrum rights from the provision of infrastructure. This article examines the economic, policy, and market challenges of enabling spectrum pooling and portability for public-safety radios.
INTRODUCTION

Dynamic spectrum access (DSA) technologies, including cognitive radio (CR) technologies, are in development for the next generation of commercial, military, industrial, and public-safety networks. These technologies hold the promise of delivering more flexible and adaptive radio architectures, capable of sharing the RF spectrum much more intensively than is feasible with currently deployed technologies.

The current landscape of wireless networking reflects the legacy of a world premised on static network architectures and spectrum allocations. In this world, public-safety networks traditionally were designed to meet capacity and reliability "standards" based on user requirements at the worst-case level — that is, the capacity and reliability required during an emergency or a catastrophe. It is not assumed that the network will always require these levels of capacity and reliability during day-to-day operations. However, it is assumed that the network must always have these levels of capacity and reliability
available when required. Worst-case planning implies that significant spectrum and equipment resources must be stockpiled and remain unused most of the time. This creates significant artificial spectrum scarcity, especially in the public-safety bands, which are small allocations fragmented across multiple bands and many system owners.

The wireless world is changing. The need for wireless systems of all types, and for public-safety systems in particular, has expanded greatly. This increases the costs and collective infeasibility of continuing worst-case planning and the wasteful allocation of resources that it implies. The future of radio, of necessity, will require shifting to more DSA-friendly modes of spectrum usage.

Besides being inevitable, the transition to DSA offers many significant benefits for the public-safety community and for wireless users in general. These benefits include better mission responsiveness, expanded capabilities, and ultimately, lower costs. However, reaching this future also entails overcoming important challenges. A number of complementary innovations are required, including further technical developments, public-policy reform, and changing industry and end-user attitudes. Although further technical research and product development is certainly required, our focus here is on the policy and business-practice challenges of developing DSA technologies for use by public-safety systems. See [1] for a more detailed discussion.
THE CHANGING ENVIRONMENT FOR PUBLIC-SAFETY RADIOS

Although the precise shape of the future of radio may be difficult to discern, some key aspects appear certain. The future radio environment will include more wireless entities of all kinds, greater demand for mobility and portability, and more heterogeneous wireless networks. These future developments have concrete implications for the design of radio networks, including a requirement for more broadband capacity, which would enable more dynamic and flexible services, and a requirement for spectrum sharing.
Table 1. Past, present, and future of public safety radios.

Key characteristics of public safety radios —
Past: proprietary; single user, single channel, single locale.
Present: multichannel, trunked, narrowband (voice only); regional; proprietary.
Future: multichannel, multimedia (voice, data, integrated); national; open, interoperable, broadband (data), mesh/ad hoc.

Shared infrastructure? —
Past: No. All dedicated to a single user/department.
Present: Yes. Shared access infrastructure and base stations via trunking. Channels shared within a trunk group but not otherwise.
Future: Yes. Shared access infrastructure and radios. Pooling of spectrum for sharing among multiple trunked groups.

Shared spectrum? —
Past: No.
Present: Channel sharing within the trunk calling group only.
Future: Yes. Sharing of spectrum across bands. Pooled spectrum.

Infrastructure/spectrum tied? —
Past: Yes. Closely coupled, closed systems. Limited interoperability via gateways, tying up additional spectrum.
Present: Yes. Spectrum still tied to infrastructure. Gateways used to link systems.
Future: No. DSA facilitates unbundling of infrastructure and spectrum. Infrastructure shared across multiple bands.

CPE —
Past: single-channel radios.
Present: multichannel radios.
Future: multiband radios and flexible CPE.
POLICY-BASED RADIO
1 See http://www.psst.org/publicsafetynetwork.jsp, last accessed July 14, 2008.
CR captures the flavor of these advances; a CR is capable of sensing its local radio environment and negotiating modifications to its waveform (modulation scheme, power level, or frequency/channel access behavior) in real time with other CRs, subject to policy constraints (e.g., limitations on the range of waveforms allowed). The policy constraints are enforced by the radio policy engine. Policies can include authorization to transmit in specific locations and frequencies at specific times, or access protocol constraints (e.g., listen-before-talk). These policies can be static and hard-coded into the radio, downloaded from a database, or dynamic and subject to updating in real time in communication with a network operator or other CRs. DSA/CR devices typically require location awareness capability to support the policy engine, and because interference is a local phenomenon occurring at the location of a receiver. Finally, CRs are inherently multiband radios, enabling the radio to transmit or receive in a wider range of frequencies than might be used in a specific communication environment. This enables CRs to utilize unused spectrum opportunistically and facilitates their interoperability with legacy radio systems.

Although significant technical work still must be performed in academic research and commercial product development laboratories to field a commercially viable CR, prototypes already exist, and many aspects of the technology already are embedded and working at scale in commercial systems. In this article, we do not focus on the technical developments that still must be made, but rather on the policy innovations that are required to make commercialization viable.
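As a concrete, hypothetical illustration of a machine-readable policy of the kind described above — the field names and checks are my own, not from any standard:

```python
from dataclasses import dataclass

@dataclass
class SpectrumPolicy:
    """One static (hard-coded or downloaded) rule consulted by the policy engine."""
    band_mhz: tuple           # (low, high) allowed frequency range
    region: str               # where the authorization applies
    hours: tuple              # (start, end) allowed local hours
    max_power_dbm: float      # transmit power cap
    listen_before_talk: bool  # access-protocol constraint

def may_transmit(p: SpectrumPolicy, freq_mhz: float, region: str,
                 hour: int, power_dbm: float, channel_idle: bool) -> bool:
    """Policy-engine check run before the CR commits to a waveform change."""
    return (p.band_mhz[0] <= freq_mhz <= p.band_mhz[1]
            and region == p.region
            and p.hours[0] <= hour < p.hours[1]
            and power_dbm <= p.max_power_dbm
            and (channel_idle or not p.listen_before_talk))
```

A dynamic policy would simply be a fresh SpectrumPolicy (or a revocation) pushed to the radio in real time by a network operator or negotiated with peer CRs.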
NEXT-GENERATION PUBLIC-SAFETY RADIOS MUST EMBRACE DSA

The same forces that are shaping the future for commercial wireless apply even more strongly to public-safety wireless systems. First, public-safety first responders are more likely than most other users of information and communications technology (ICT) to require mobile, wireless access. In many first-responder scenarios, the only option is wireless. Second, first responders, who are dealing with life-and-death situations, generally are perceived as deserving higher priority in the event of competition for resources. Third, first responders are more likely to deal with adverse environments. This increases their need for flexible, adaptive systems (e.g., systems capable of supporting ad hoc or mesh networking in the absence of other supporting infrastructure). First responders are also likely to suffer from localized congestion; disasters typically happen in specific places and at specific times. The demand for all wireless services by all first responders is likely to be concentrated in time and place, increasing the peak-provisioning problem. Finally, public-safety system capabilities still are woefully inadequate, even compared to the services available to commercial users (e.g., 3G mobile telephony vs. legacy land mobile radio [LMR] systems). The public-safety community shares this conclusion.1

Public safety cannot rely on the improvement of LMR designs. There is a requirement and an opportunity to replace outmoded legacy infrastructure with leapfrogging technology to enable the wireless future required by public safety. Rather than continue the development of static, private, and expensive narrowband digital LMR network infrastructures, public safety requires a network architecture where privacy, reliability,
capability, adaptability, and flexibility are built in, no matter whose infrastructure the radios traverse, or even when infrastructure is damaged or non-existent.

The future of public-safety radio must be much more adaptive and responsive to its environment (spatially, temporally, and situationally) to account for the greater demands placed on first responders. A public-safety responder must be able to take a radio, authentication and security, spectrum rights, and priority status with him or her to any incident in the country, power up the radio, be recognized, and be admitted to whatever incident command network he or she is authorized to support. Table 1 summarizes our vision of the past, present, and future for public-safety radio.
FACILITATING DSA IN PUBLIC SAFETY

Currently, public-safety radio systems are fragmented, overly expensive, under-capacitated, and limited. In part, this is due to the legacy regime of dedicated, narrowband, and overly restrictive spectrum policy. However, regulatory reforms such as the consolidation of licensing eligibility, approval of the certification of software radios, and allowing secondary trading for some licensed spectrum demonstrate that progress is being made. In contrast to the case of commercial wireless services, which depend more directly on market-based processes, reform of public-safety spectrum management depends on nonmarket institutions to coordinate cooperative evolution. Over time, a number of policy reforms have helped to make spectrum pooling and DSA more feasible in public-safety applications.
COOPERATIVE ROLE- AND POLICY-BASED INSTITUTIONS ARE DEVELOPING

The national system of frequency coordinators, the regional planning committees (RPCs), and the introduction of the National Incident Management System (NIMS) within the National Response Framework (NRF) provide the institutional foundation required to enable the transition to DSA and spectrum pooling.2 These relatively new institutions are positioned to enable public-safety managers to define global and local priorities and static and dynamic rules and policies that can assist in self-regulation of spectrum use. The development of appropriate user-based prioritization and policies that reflect accepted practices in emergency management and incident response is essential to support developing CR and DSA technologies.
REGIONAL PLANNING FOR PUBLIC-SAFETY BAND MANAGEMENT

Since its creation, the Federal Communications Commission (FCC) has licensed public-safety spectrum by segregating uses/users into eligible and non-eligible categories to control radio interference. Eligible users compete for very small slivers of available spectrum. "The results are: (a) a set of narrow slots spread throughout the spectrum that users of different eligible
classes cannot traverse; (b) a body of super-expensive technologies designed to serve specific channel assignments; and (c) a patchwork of non-interconnected transmission facilities serving single-use licensees. Each user/licensee is compelled to build its own infrastructure, and jealously guard its spectrum allocation and existing licenses [2]." This fragmentation of the public-safety spectrum results in artificial spectrum scarcity. As we discuss below, the spectrum pooling concept can help correct this problem.

In 1982, Congress provided the FCC with the statutory authority to use frequency coordinators to assist in developing and managing the LMR spectrum. Frequency coordinators are private organizations that have been certified by the Commission to recommend the most appropriate frequencies for applicants in the designated Part 90 radio services. In general, applications for new frequency assignments, changes to existing facilities, or operation at temporary locations must include a showing of frequency coordination. Although the FCC issues the actual license, frequency coordinators perform essentially all of the spectrum acquisition activities on behalf of licensees. Each community of users in the LMR bands has at least one frequency coordinator entity that is owned and operated by its trade association or, in the case of the Federal Government, by the Department of Defense (DOD).

In the newer 700- and 800-MHz bands designated for public safety, the FCC has required that RPCs be formed to create policy and to prioritize uses for the band on a regional basis. The RPCs must submit detailed regional plans to the FCC that are developed by consensus in each region and that serve to pre-coordinate access to the band for all eligible public-safety entities in a region.

The essential role of both the frequency coordinators and the RPCs is to organize access to spectrum so that interference is avoided and communications requirements (both present and future) are planned for and accommodated. Frequency coordinators and RPCs also perform the valuable function of communicating with existing licensees about plans for new facility construction, and they provide a valuable consensus and peer-review function. Additionally, RPCs can establish prioritization for the band through a consensus-based process.

The RPCs and frequency coordinators are federally sanctioned and empowered, trusted, local, user-owned and controlled agents who implement group (pool) policies to manage spectrum and avoid interference. If the RPCs were authorized and empowered to implement more extensive and flexible policies that could be enforced by better technologies, public-safety spectrum management could move out of a spectrum-scarcity paradigm and into a world where communication is always available and portable across both geography and spectral bands.
THE DYNAMIC COOPERATIVE-POLICY FRAMEWORK

The recent adoption of the NIMS and the Incident Command System (ICS) within the NRF provides an excellent working basis for the new
2 The NRF describes the national framework for responding to all hazardous events, including describing who is responsible for what. The NIMS is the system/framework under the NRF for managing the reporting and tracking of domestic hazardous incidents across all federal, state, and local agencies. See National Response Framework (NRF), U.S. Department of Homeland Security, January 2008 (available at http://www.fema.gov/emergency/nrf/) and National Incident Management System (NIMS), U.S. Department of Homeland Security, March 1, 2004 (available at http://www.nimsonline.com/docs/NIMS-90-web.pdf). The incident command system (ICS) is a management tool, originally conceptualized in the 1970s, intended to assist in emergency response. It identifies best practices and is an important element of NIMS (see http://www.training.fema.gov/EMIWeb/IS/ICSResource/index.htm or Incident Command System Review Materials, 2005; http://www.training.fema.gov/EMIWeb/IS/ICSResource/assets/reviewMaterials.pdf).
paradigm for dynamic policy-based spectrum management. The NIMS is a set of generic protocols for incident preparedness, management, response, and recovery that all U.S. first responders must conform to. The NIMS includes the ICS, which defines the specific way incidents will be managed, from very small and local to major nationwide disasters. The ICS and NIMS include planning, response, and recovery protocols for day-to-day, tactical, and emergency activities.

With national frequency coordinators managing the knowledge of license rights granted in all bands across the nation, with RPCs empowered to create static regional prioritization rules and access protocols, and with the NIMS and ICS to ensure hierarchical consistency and to guide local-layer dynamic prioritization and localized, tactical network formation on the ground, the federal, state, and local public-safety communities have a significant part of an institutional framework in place to enable public-safety spectrum pooling.
TRANSFERRING INCIDENT MANAGEMENT VALUES TO SPECTRUM MANAGEMENT

DSA/CR and associated radio technologies will provide the technical solutions to enable spectrum rights and authentication to be transferred dynamically and to enable radios to follow the policies associated with more complex spectrum transfers and authorizations. Facilitating the commercialization of these advanced radio technologies, however, requires the creation of spectrum pools. This is a classic chicken-and-egg problem: without spectrum to share dynamically, the value of deploying DSA/CR technology is reduced; without commercially available DSA/CR equipment, incentives to invest in the business relationships and policies required to share spectrum are reduced. Pooling of public-safety spectrum can address this conundrum.
SPECTRUM POOLING EXTENDS RADIO RESOURCES FOR INCIDENT RESPONSE
3 See [3] for a discussion of how time-limited leases can be used to implement this functionality.
In the most general sense, spectrum pooling is a situation wherein multiple users share access rights to a common pool of spectrum. We envision a context in which holders of exclusive-use licenses for public-safety spectrum would voluntarily agree to contribute their spectrum to a common pool. Access to the pool would be closed, in contrast to an unlicensed regime of open access to all/any complying devices. In essence, the license rights would transfer from the individual to the pool, and the use of the spectrum would be in compliance with pool policies. Enforceable restrictions on who is permitted to use pooled spectrum and strong limits on what constitutes acceptable secondary use likely will be important.

At the radio-system level, technologies and access policies/protocols must ensure that the radio will learn and confirm that spectrum is accessible, that access is allowed
(including the terms that govern such access), and that its use is appropriate (i.e., a better alternative is not available). Additionally, the radio systems must include the capability to signal and learn when conditions change (e.g., when the primary user must preempt or reclaim pool spectrum) and allow the radio to release the spectrum when it is no longer required or the radio is no longer allowed to use the spectrum.3

This makes it feasible to allow the intended use to dictate the best choice of spectrum usage, based on factors including the radio environment and location (e.g., "I am underground."), the application (e.g., "I must stream video."), the incident (e.g., fire, hurricane, interstate pile-up, chemical spill), the role (e.g., "I am a paramedic."), and the permissions (i.e., "I have authority."). Pooling can enable DSA/CR radios to combine narrowband channels opportunistically to support broadband access. Pooling not only provides a way to access spectrum without individual licenses; it creates the mechanism for spectrum policies to be authored, adopted, and transmitted to DSA/CR radios. Pooling is the first step in dynamic spectrum management.

To fully realize the benefits of sharing on a large, national scale, standardized approaches toward sharing must be developed to simplify negotiating multilateral sharing agreements and to facilitate the design and production of equipment that can take advantage of pooled bands. Standardized approaches are also important to enable users to roam more widely, even nationally.
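A minimal lifecycle sketch for a pooled-channel lease, with state and event names of my own choosing (compare the time-limited leases of [3]):

```python
from enum import Enum, auto

class Lease(Enum):
    IDLE = auto()        # no pooled channel held
    CONFIRMING = auto()  # learning the terms; verifying access is allowed/appropriate
    ACTIVE = auto()      # transmitting under pool policy
    RELEASING = auto()   # returning the channel to the pool

TRANSITIONS = {
    (Lease.IDLE, "request"): Lease.CONFIRMING,
    (Lease.CONFIRMING, "granted"): Lease.ACTIVE,
    (Lease.CONFIRMING, "denied"): Lease.IDLE,
    (Lease.ACTIVE, "preempt"): Lease.RELEASING,  # primary user reclaims the spectrum
    (Lease.ACTIVE, "done"): Lease.RELEASING,     # use is no longer required
    (Lease.RELEASING, "released"): Lease.IDLE,
}

def step(state: Lease, event: str) -> Lease:
    """Advance the lease on a pool signaling event; unknown events are ignored."""
    return TRANSITIONS.get((state, event), state)
```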
STANDARDIZED ELEMENTS FOR POOLING

A number of core systems/elements are required to appropriately manage spectrum pool access and usage policies.

Structured Pooling Policies — Spectrum-access policies are required both for placing frequencies into a pool and for accessing them from a pool. Some policies may be static, some may be universal, and some may be dynamic or regional. Some policies may be invoked only in certain circumstances and at certain locations. Some static policies can be hard-coded into the CRs when they are manufactured, whereas others can be downloaded periodically from a database. We envision a hierarchy of spectrum pool policies that guide the radio to the best choice for channel selection, based on its ability to resolve available options within a structure of rules. Figure 1 represents a possible policy hierarchy for pooling and accessing spectrum. After the radio learns the static policies that apply in any location, it can resolve dynamic user requests for spectrum based on more situational policies, depending on such factors as the application, the user's role in the incident, or the developing ICS as an incident grows and wanes.

Policy Servers — Policy servers are the primary infrastructure element of a DSA/CR radio network. Replacing the radio system controllers that control channel trunking and channel assignments in an LMR network today, policy servers will sit at multiple locations in a network, including the incident area, to enable local incident command to issue specific policies to responder radios (e.g., to set up a tactical network).
As the radio powers up and authenticates, it asks the server for its policy update, role, and tactical assignment information, as shown in Fig. 2.
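The power-up exchange of Fig. 2 can be sketched as a filtering pass over an in-memory channel inventory; the data layout and scoring function here are illustrative assumptions, not a defined protocol:

```python
def resolve_channel(pool, role, idle, fitness):
    """Fig. 2 steps: what is available? -> to me? -> now? -> for this?

    pool:    dict mapping channel -> set of roles the pool policy permits
    idle:    set of channels with no current assignment
    fitness: scores a channel for this application/situation/location
    """
    eligible = [ch for ch, roles in pool.items() if role in roles]  # "To me?"
    assignable = [ch for ch in eligible if ch in idle]              # "Now?"
    return max(assignable, key=fitness, default=None)               # "For this?"

# Example: a paramedic radio asking for a video-capable channel.
pool = {"ch1": {"fire", "ems"}, "ch2": {"ems"}, "ch3": {"police"}}
score = {"ch1": 1, "ch2": 3, "ch3": 2}  # e.g., bandwidth available at this location
print(resolve_channel(pool, "ems", idle={"ch2", "ch3"}, fitness=score.get))
```

On completion the session would be recorded and billed and the channel returned to the pool, per the final step of Fig. 2.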
Embedded CR Technology — CRs must include appropriate technology to enable them to know and obey DSA policies. For some policies, especially the most dynamic and location/context-dependent, the CRs must know their location and specific characteristics of the spectral environment in that location. Other policies can be hard-coded.

Policy Authoring Tools — Standardized policy authoring tools are required that enable flexible policies to be designed and communicated to the radio infrastructure and managers. CR policies must be rendered into appropriate machine-readable formats and distributed to the radios and the band managers. Moreover, conflicts among policies must be detected and resolved.

Policy Enforcement — To ensure that policies are followed, and that all policies co-exist without conflict or interference, a policy enforcement system is required.

Spectrum Portability — A user must have the ability to roam with his radio across applications, locations, and networks. The ability to serve the radio the best available channel based on the user (role and authentication), the use (i.e., the application — broadband, video, or sensor data), and the location ("I am providing mutual aid to a community that is not my home base.") is what we call spectrum portability.

Our concept has important differences from current trunking practices. Today, radio systems that trunk channels serve the next best available channel to a user requesting a talk channel. However, that only works in the user's home radio system, where the radio is hard-coded with access to a limited number of talk groups, and the base stations are hard-coded to specific frequencies. Because a DSA/CR radio will not rely on hard-coded base stations, but instead will sense "white spaces" in a broad range of frequencies, it will, in theory, have the capability to transmit on any unused channel at any given time. Its decision about which channel to use will be determined not by hard-coded information (having the "system key" installed, in current trunked system architectures) but by knowing and following the policy rules of the pools for each band.

A public-safety DSA/CR radio could be "told" to access only the public-safety spectrum pools. But the policy servers and policy enforcers must recognize and authenticate this radio as a public-safety radio before it receives its policy download. This recognition and authentication should be portable across the nation, much like recognition and authentication of cellular phones currently is portable across national networks. Such portability involves the development of roaming agreements between infrastructure owners, allowing access to infrastructure resources such as policy servers, backbone networks, switches, and frequencies.
Figure 1. Hierarchy of pooling policies (from top):
• Federal laws and regulations (static pool protocols)
• State laws and regulations (static pool protocols)
• Regional rules (static pool protocols)
• Local jurisdiction, mutual aid, and other local sharing rules (home or "host" policies; dynamic, locally authored)
• Shift or incident, time- and capacity-based policies (agency, user, role-based policies; dynamic, locally authored)
• Performance-based policies (static environment-based policies)
Figure 2. How policies resolve: What is available? (inventory provided by frequency coordinators to the policy manager/policy server) → To me? (given my role and priority established during authentication) → Now? (possible assignments) → For this? (the radio chooses the best frequency for the application/situation/location) → Thank you (done; session recorded and billed; channel returned to pool).

The pool managers must be vested with the ability to represent pool members and commit pooled resources to binding mutual agreements between pool members and suppliers of network resources (such as infrastructure, additional secondary rights to other pooled frequencies, and application services). This is required to economize on transaction costs. It is impractical to expect individual licensees to negotiate individual agreements with each other. We believe that frequency coordinators are well positioned to manage this top level of DSA pool relationships and transactions.
OVERCOMING CHALLENGES TO SPECTRUM POOLING

Spectrum pooling and DSA represent elements of a cooperative spectrum management regime. This paradigm is very different from the current prevailing command-and-control approach that underpins spectrum allocations and rights. Because it is so different, the public-safety community and wireless stakeholders generally cannot be expected to embrace the concept until it is tested and proves effective. Table 2 summarizes what we see as both real and perceptual challenges to spectrum pooling in public safety.
Real Challenges

Technology will not work as expected:
• Legacy services will work less well than with traditional technology
• Prioritization will not work; secondary uses not preemptible
• Shared spectrum will have more congestion, less assured peak access than traditional model
• Systems will fail to perform as predicted/promised

Government regulations will not permit:
• Necessary changes in regulatory framework will not occur
• Political failure; resistance of status quo vested interests

Early-adopter challenge:
• Pioneers face higher costs, lower benefits (network externalities)
• Getting the adoption bandwagon started

Cost of NextGen public-safety wireless systems:
• Learning, scale, and scope economies accumulate over time, lowering costs
• Managing cost recovery of shared systems
• Incremental deployment and managing overlays

Perceptual Challenges

Risk of losing spectrum assets:
• Spectrum shared will not be reclaimable
• Loss of ability to obtain additional spectrum allocations
• Loss of control over radio networks

Systems will not be adequately reliable:
• Systems cannot be made robust (or as robust as legacy systems)
• Cost of making systems adequately robust prohibitive for public-safety radios
• Systems will fail to meet the necessary standard of "worst case" planning

Expanding pooling to wider communities:
• Sharing beyond the narrow first-responder/public-safety community infeasible, too risky
Table 2. Challenges for spectrum pooling in public safety.
CONCLUSIONS AND FUTURE RESEARCH DIRECTIONS

The radio frequency spectrum must be shared much more intensively than has been possible with legacy technologies, business models, and regulatory policies. A paradigm shift is required to enable a wireless future of greatly expanded wireless usage and the advanced capabilities required by our information-based economy and society.

The need for this paradigm shift is especially acute in the public-safety community. The legacy regime severely limits interoperability among first responders and with those with whom they must communicate. The fragmentation of infrastructure into incompatible silo-based networks increases costs, reduces available capabilities and capacity, and ultimately harms the ability of public-safety professionals to perform their jobs. The traditional approach of over-provisioning static network infrastructure to meet worst-case scenario requirements is neither feasible nor desirable. Luckily, it also is no longer necessary. DSA technologies like software/CR are making it feasible to share spectrum much more intensively.

Transitioning to a future of DSA/CR for radio will enable radio systems to be much more flexible and adaptable to local conditions. This
will increase system capacity and capabilities, enhance interoperability and reliability, and reduce costs.

Although the wireless future is bright, reaching it will not be easy. Coordinating the design, investment, and deployment of new technologies without disrupting existing operations will be challenging. Even if all of the requisite technology existed and were commercially available at scale — which is far from the current reality — we would still be required to reform business models and spectrum-management policies to enable use of the technologies. One important and mandatory first step toward building the wireless future is to transition to spectrum management based on spectrum pooling. With pooling, public-safety users would expand their effective access rights and facilitate the adoption of DSA/CR wireless technologies.

Significant progress has already been made toward establishing the institutional and policy framework needed to successfully implement the spectrum pooling concept. The NRF, the NIMS, the ICS, frequency coordinators, and the RPCs provide some of the glue and apparatus required to coordinate and manage pooled spectrum. Essential components (e.g., agreement on prioritization policies to manage shared access) still must be developed and challenges overcome
(e.g., mobilizing coordinated adoption of DSA/CR technologies) to progress along the path to next-generation public-safety communication systems.

To maximize the likelihood of a successful transition, it is important to move incrementally. If public-safety professionals are to be convinced that spectrum pooling is indeed a concept whose time has come, they will require assurance that they will not experience any degradation in current capabilities or loss of resources. Future progress will build on early experience and learning. Over time, however, we expect the spectrum sharing concept to be accepted. All future wireless systems should be more dynamic and capable of interacting with expanded notions of priority in spectrum access rights. Public safety provides an important first test case for commercialization of these sharing ideas, and success here will deliver positive externality benefits for the wider adoption of DSA/CR more generally.
REFERENCES

[1] W. Lehr and N. Jesuale, "Spectrum Pooling for Next Generation Public Safety Radio Systems," Proc. 3rd IEEE Int'l. Symp. New Frontiers at DySPAN, Oct. 2008.
[2] N. Jesuale and B. Eydt, "A Policy Proposal to Enable Cognitive Radio for Public Safety and Industry in the Land Mobile Bands," Proc. 2nd IEEE Int'l. Symp. New Frontiers at DySPAN, Apr. 2007.
[3] J. Chapin and W. Lehr, "Time-Limited Leases for Innovative Radios," IEEE Commun. Mag., June 2007.
BIOGRAPHIES

WILLIAM LEHR ([email protected]) holds a Ph.D. in economics from Stanford, an M.B.A. in finance from the Wharton School, and M.S.E., B.A., and B.S. degrees from the University of Pennsylvania. He is an economist and research associate in the Computer Science and Artificial Intelligence Laboratory (CSAIL) at the Massachusetts Institute of Technology, where he helps direct the Communications Futures Program. In addition to his academic work, he provides business strategy and litigation consulting services to public and private sector clients.

NANCY JESUALE ([email protected]) holds a Master's degree in telecommunications management from the Annenberg School at the University of Southern California. She is the president and CEO of NetCity Inc. in Portland, Oregon, a telecommunications strategic-planning consulting practice advising governments and industry on technology strategies. She is a past chair of the Public Technology Inc. Task Force on Information Technology and Telecommunications. She was the director of strategic planning for telecommunications for the City of Los Angeles and director of communications and networking for the City of Portland.
TOPICS IN RADIO COMMUNICATIONS
Licensed or Unlicensed: The Economic Considerations in Incremental Spectrum Allocations
Coleman Bazelon, The Brattle Group
ABSTRACT
At present, no existing market mechanism allows for the trading of radio spectrum between licensed and unlicensed uses. Whenever spectrum is made available for reallocation, the FCC faces a dilemma in determining which access regime to use. For example, the television white spaces are largely unused and available for reallocation. Since both licensed and unlicensed allocations are valuable, allocation decisions (for the TV white spaces or any new band of radio spectrum) must be based on a clear understanding of the trade-offs between the two choices. This article defines economic criteria that can be used in making these important decisions. Economic criteria can go beyond the simple measures of profit and consumer surplus from market transactions. Although some measures of benefit, such as the value of innovation, may be difficult to quantify, the analytic economic framework presented here can easily incorporate them. This analysis does not address any noneconomic considerations in choosing between licensed and unlicensed uses. As one example, the issue of potential societal benefits from promoting minority ownership of spectrum through restricted licenses — something only possible in a licensed regime — is not addressed in this economic analysis. The analysis herein provides the economic information needed for policy analysis; it need not be the sum total of that policy analysis.
Standard economic theory tells us that the value of an additional unit of spectrum is equal to the increase in socially beneficial services it produces. For licensed spectrum allowed to trade in markets, this value is relatively easy to calculate: it is the price firms pay for the licensed spectrum. The equation is more complex, however, when unlicensed spectrum is involved. The current value of unlicensed spectrum bands is equal to the sum of the value of the spectrum in all uses in those bands. The incremental value of additional spectrum allocated to unlicensed uses, however, is based on the relief to congestion the additional spectrum will provide. Unlicensed spectrum also contains a value associated with the possibility of future innovation made available by the lower transaction costs of gaining access to unlicensed spectrum. This option value increases with additional allocations of unlicensed spectrum, leading to the benefit of incremental option value from additional unlicensed spectrum. The formula for the benefits from additional unlicensed spectrum allocations can be summarized as "congestion alleviation plus incremental option value."
I apply the analysis developed in this article to the case of TV white spaces. I use information from the recent auction of the lower 700 MHz band E block to calculate the incremental value of licensing the white spaces. I also calibrate an estimate of the incremental value of the white spaces under an unlicensed allocation. Initial calibration of the economic criteria that determine the trade-off between incremental licensed and unlicensed spectrum allocations indicates that licensing incremental allocations is currently the favored policy. If policy makers choose to allocate incremental spectrum as unlicensed, they should recognize the economic costs of that choice.
Acknowledgment: This article is based on prior work presented at the Telecommunications Policy Research Conference in 2007 and at DySPAN in 2008. I thank Charles Jackson for helpful comments and Abhinab Basnyat, Melissa Goldstein, and Ben Reddy for assistance in preparing this article. Errors remain mine.
¹ For just one example of potential gains, see [2]. Some radio services have the flexibility to permit various degrees of repurposing, but most bands of spectrum do not have property rights, such as the ability to change technologies or services, found in the commercial mobile bands. For an example of the latter, PCS licensees did not need FCC approval to add broadband Internet access in addition to digital voice services on their licensed spectrum.
INTRODUCTION
Radio spectrum is an unnecessarily scarce resource. The current management regime of "command and control" [1] largely segregates federal from non-federal users, licensed from unlicensed, private wireless from commercial wireless, and communications from broadcast. It is effective at protecting authorized users of radio spectrum from unwanted interference from other users. This protection, however, comes at a great cost: generally speaking, there is no mechanism for radio spectrum to migrate from lower valued uses to higher valued uses, even if gains from trade would benefit both the original and new users of the radio spectrum.¹ A further limitation of command and control is that it limits the ability of spectrum users to share spectrum, even when such sharing increases the valuable uses of a band. This inability to allow higher valued uses creates artificial scarcity in radio spectrum.
From time to time, under the current regulatory structure, opportunities to reallocate radio
spectrum arise. (It is beyond the scope of the current article to examine how these opportunities come about.) A current example of this is found in the portions of the band allocated for television, but not directly used for broadcasting — the so-called white space of the TV band. Other recent opportunities include reallocations in the 3 GHz and 5 GHz bands. A threshold question associated with these opportunities for reallocation is whether they should be reallocated to licensed or unlicensed uses. This article offers an economic perspective on how to answer that question. It is perfectly appropriate that politicians and regulators consider more than just economic factors when making decisions; they should, however, have an accurate understanding of those economic factors so as to make an informed decision. This article also accepts the broad definition of the current spectrum management regime in the United States, with the FCC as the relevant decider for non-federal spectrum allocations. Earlier papers have argued for a fuller property-rights-based system of spectrum management [3, 4]. Many of the benefits of unlicensed access would still be available in a propertied spectrum regime [4].
I characterize the FCC as having three choices for allocating radio spectrum. The first choice is to liberally license a band of spectrum. This creates de facto property rights in the spectrum² and allows markets to choose the highest valued uses of the band. Currently, the allocations for commercial mobile voice and data services fall under this category. The second choice is to allocate to unlicensed uses, where any device that meets certain technical requirements may have access to the band. The third choice is to use the more traditional allocation approach with specific service and technology restrictions. Broadcasting falls under this third category. Throughout the remainder of this article I assume that the FCC is choosing between the first and second categories.
The focus of this article is on incremental allocations, not on how to efficiently reorganize all bands of spectrum. Allocations of unlicensed bands are largely irreversible. Once unlicensed devices populate a band, it becomes very difficult to clear them to make the band available for a licensed use. Recent reallocations to licensed uses in the form of fairly liberal property rights associated with the radio spectrum licenses have vested interests — often in the form of auction winners with significant investments in acquiring licenses — that make reversing these allocations very unlikely. Consequently, both the existing unlicensed and liberally licensed allocations are taken as given. Future reallocations, therefore, are most likely to come from the majority of radio spectrum that is either used by the federal government or licensed by the FCC with inflexible use restrictions.
Any reallocation from lower value to higher value uses will reduce the overall scarcity of radio spectrum. Nearly any licensed or unlicensed use is preferable to leaving spectrum unused.³ Therefore, if the choice were between
licensed allocations or doing nothing, the unambiguous answer would be to allocate to licensed uses. Similarly, if the choice were between unlicensed allocations or doing nothing, the unambiguous answer would be to allocate to unlicensed uses. Fortunately, we are not so constrained. For candidate reallocations, we have the superior choice of deciding between licensed and unlicensed allocations. The question is: which one will more efficiently utilize incremental radio spectrum and consequently do more to reduce spectrum scarcity? The next section briefly reviews existing allocations. The following section explains the basic analytical framework for the economic choice between licensed and unlicensed spectrum allocations. I then review some empirical evidence used to calibrate the equilibrium condition derived in the third section. The final section performs that calibration and concludes.
EXISTING ALLOCATIONS
Liberally licensed spectrum allocations have been growing and soon will account for about one-sixth of the radio spectrum below 3 GHz (Table 1). For the most part, these bands are used for mobile voice and data communications. Several bands of spectrum, including some under 3 GHz, are allocated to unlicensed uses (Table 2). These uses include, among others, cordless phones, wireless LANs, and baby monitors. Table 2 does not include numerous smaller unlicensed bands such as Citizens Band radio and the Family Radio Service.
ANALYTIC FRAMEWORK
This analysis takes existing allocations of unlicensed and liberally licensed radio spectrum as given.⁴ Because the analysis focuses on the allocation of the next available portion of radio spectrum, the marginal analysis of economics is particularly well suited. Marginal analysis allows the characterization of the conditions under which a resource is used as efficiently⁵ as possible, and therefore economic welfare is maximized. The key result when an input can be put to two different uses is that, in equilibrium, an additional increment of the input would be used equally efficiently in either use. Maximum efficiency in the allocation of radio spectrum between licensed and unlicensed uses is therefore characterized by the condition that an incremental allocation to either use would be equally beneficial. The corollary to this result is that if the resource can be used more efficiently in one use — that is, create more economic value in that use than in the alternative use — incremental deployments should be in that use. The law of diminishing returns implies that as more of a resource is devoted to one use, the incremental benefit of the resource in that use diminishes. Equilibrium is achieved when an additional allocation to one use brings the incremental returns to both uses in line with each other. In the case of radio spectrum allocations, incremental allocations should go to the use that, on the margin, can put the incremental radio spectrum to higher valued uses.
² The radio spectrum remains legally owned by the public.
³ One caveat to this claim is if a given use of spectrum in the near term prevents a sufficiently more efficient use in the future.
⁴ It also takes the existing base of illiberally licensed spectrum as given, except on the margin. The economic analysis of moving spectrum from illiberally licensed services is beyond the scope of this article.
⁵ Efficiency of use of radio spectrum is an economic concept, not an engineer's metric.
Band name | Location | Bandwidth | Availability
PCS^A | 1.9 GHz | 120 MHz | Now
Cellular^B | 800 MHz | 50 MHz | Now
SMR^C | 800 MHz/900 MHz | 14 MHz + 5 MHz | Now/soon
G Block^D | 1.9 GHz | 10 MHz | Now/very soon
BRS/EBS^E | 2.5 GHz | 174 MHz | Now/soon
AWS^F | 1.7 GHz/2.1 GHz | 90 MHz | Now/soon
700 MHz^1,G | 700 MHz | 78 MHz | Soon
Total | | 541 MHz |

Sources and notes:
1. Does not include guard bands but does include the upper 700 MHz D block.
A. U.S. Congressional Budget Office, "Where Do We Go From Here? The FCC Auctions and the Future of Radio Spectrum Management," Apr. 1997.
B. FCC Wireless Telecommunications Bureau, "Cellular Services"; http://wireless.fcc.gov/services/cellular/
C. FCC Wireless Telecommunications Bureau, "900 MHz SMR"; http://wireless.fcc.gov/smrs/900.html
D. "Improving Public Safety Communications in the 800 MHz Band," Report and Order, 5th Report and Order, 4th Memorandum Opinion and Order, and Order, 19 FCC Rcd. 14969, 2004.
E. FCC Wireless Telecommunications Bureau, "BRS & EBS Radio Services"; http://wireless.fcc.gov/services/brsebs/
F. "Amendment of Part 2 of the Commission's Rules to Allocate Spectrum Below 3 GHz for Mobile and Fixed Services to Support the Introduction of New Advanced Wireless Services, Including Third Generation Wireless Systems," 2nd Report and Order, 17 FCC Rcd. 23193, 2002.
G. "Revised 700 MHz Band Plan for Commercial Services," 2007; http://wireless.fcc.gov/auctions/default.htm?job=auction_summary&id=33

Table 1. Base of liberally licensed radio spectrum.
⁶ For simplicity of exposition, the geographic dimension of spectrum is not characterized. It seems unlikely that it would be optimal to allocate a band of spectrum as unlicensed in one geographic area and as licensed in another, although the FCC did propose this in the 3.650 GHz band [4, p. 16].
To apply this analysis to spectrum allocations, the spectrum planner’s optimization problem is specified in Eq. 1. The spectrum planner should seek to maximize total social welfare from spectrum use by allocating spectrum (or bandwidth) to private liberally licensed uses (“private uses” for short) and unlicensed uses, subject to the condition that total spectrum allocated to private uses and unlicensed uses is less than or equal to the total spectrum the spectrum planner can allocate to both uses. The amount of spectrum available to the two uses considered here does not include allocations to the government (federal, state, or local) or other incumbent illiberally licensed private uses such as broadcasting or amateur radio. If the total amount of spectrum available for licensed or unlicensed uses were to increase, it would come from these other uses.
⁷ A fuller accounting of the value of spectrum by band would illustrate a central point raised in [6]. That paper emphasized the greater value of the UHF band for longer-range communications and the concurrent disadvantages for WLAN applications due to the FCC channelization plan. The relative advantages/disadvantages are reversed in the 2.4 GHz and 5 GHz unlicensed bands.
Social welfare from spectrum use = Social welfare from licensed uses + Social welfare from unlicensed uses,
subject to (1)
Spectrum in licensed uses + Spectrum in unlicensed uses ≤ Total spectrum.

In this characterization, the available spectrum is measured independent of how it is allocated.⁶ However, licensed and unlicensed uses may require different amounts of guard band, as is the case with the TV white spaces [5]. For simplicity of exposition, this issue is ignored. A fuller analysis would account for the differential amounts of usable spectrum under licensed vs. unlicensed regimes. The effect of one allocation having larger guard bands would be to scale down the incremental value of additional spectrum in that use. For example, one recent analysis finds that the amount of white space available can vary by a factor of 2.5 under different interference protection rules [6]. Consequently, the calibration discussed later in this article likely overvalues unlicensed uses because it treats an incremental MHz of spectrum as being the same in both licensed and unlicensed uses, whereas larger guard bands may be needed for many unlicensed uses.
One obvious simplification of the current analysis is that, beyond how it is allocated, all spectrum is treated as equivalent or substitutable. This is clearly not the case. Even limiting the discussion to spectrum below about 3 GHz, there is significant variation in the propagation characteristics of different bands.⁷
For the current analysis, the total benefit to society from use of spectrum consists of the benefits from private uses and the benefits from unlicensed uses.⁸ For spectrum in private uses, total benefit is equal to the benefits accrued to firms, producer surplus, and the benefits accrued to consumers, consumer surplus.⁹
For unlicensed spectrum, total benefit again equals producer surplus and consumer surplus, but also includes the benefits from reducing the externalities associated with unlicensed spectrum use. This latter category includes economic benefits that are not captured by or accounted for in market-based transactions.
The benefits from spectrum use discussed here, and explained in more detail below, focus on the impact of changes in the amount of spectrum available for licensed or unlicensed uses on the benefits in question. It is worth emphasizing, however, that how much benefit society ultimately receives from an allocation depends on many things. For example, a new technology can greatly influence the efficiency of spectrum use in licensed or unlicensed uses. The current analysis does not directly address many of these other factors affecting the benefits derived from radio spectrum.
If the social planner is maximizing the total social welfare from spectrum use expressed in Eq. 1, it can be shown that an incremental allocation to either licensed or unlicensed uses should have the same impact on total social welfare. If this were not the case — say an incremental allocation to unlicensed uses had a larger impact on total social welfare than an incremental allocation to licensed uses — total welfare could be increased by taking a little spectrum from licensed allocations and allocating it to unlicensed uses. The condition the social planner is trying to achieve can be expressed as

Changes in social welfare from licensed uses = Changes in social welfare from unlicensed uses. (2)
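To make this condition concrete, the following toy sketch (entirely my own construction, with arbitrary concave welfare functions and weights, not a model from this article) finds the split of a fixed band that maximizes total welfare; at the optimum the marginal values of the two uses are equal:

```python
# Toy illustration of Eq. 2: allocate a fixed band between two uses with
# diminishing returns; welfare is maximized where marginal values equalize.
# The log welfare functions and the weights 3.0 and 1.0 are assumptions.
import numpy as np

total = 100.0                                    # MHz to allocate
s_lic = np.linspace(0.1, total - 0.1, 9999)      # candidate licensed shares
welfare = 3.0 * np.log(s_lic) + np.log(total - s_lic)
best = s_lic[np.argmax(welfare)]
print(best)   # ~75 MHz: marginal values 3/s and 1/(100-s) are equal there
```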
Once allocated, it is very difficult to take spectrum away from its current use. As discussed above, it is beyond the scope of this article to explore how spectrum becomes available for allocation or, similarly, how to take spectrum away from a current user. Consequently, this article only focuses on how to apply incremental allocations to maximize social welfare, not how to reallocate the existing spectrum bands to maximize social welfare. As such, if Eq. 2 does not hold, the analysis in this article indicates that incremental allocations should go to licensed or unlicensed uses so as to move Eq. 2 closer to balance. To do so, we need to characterize the value of incremental allocations of radio spectrum to licensed and unlicensed uses. In the remainder of this section the value of incremental allocations of radio spectrum for licensed and unlicensed uses is characterized. The empirical question of whether one of the two uses creates more value is addressed in subsequent sections.
CHARACTERIZATION OF THE INCREMENTAL VALUE OF RADIO SPECTRUM IN LICENSED USES: PRODUCER SURPLUS AND CONSUMER SURPLUS
Increasing the amount of spectrum that is licensed creates benefits in the market for spectrum-based services.
Band name | Allocation | Location | Bandwidth
900 MHz | Pre-1990 | 900–928 MHz^a | 28 MHz
Unlicensed PCS | 1993 | 1920–1930 MHz^a | 10 MHz
Unlicensed PCS | 1993 | 2390–2400 MHz^a | 10 MHz
2.4 GHz | Pre-1990 | 2400–2483.5 MHz^a | 83.5 MHz
3650 MHz | 2005 | 3650–3700 MHz^c | 50 MHz
U-NII | 1997 | 5.15–5.35 GHz^a | 200 MHz
U-NII | 2003 | 5.47–5.725 GHz^d | 255 MHz
U-NII | Pre-1990 | 5.725–5.850 GHz^a,b | 125 MHz
Millimeter wave | 1995 | 57–64 GHz^a | 7 GHz
— | 2001 | 24.0–24.25 GHz^e | 250 MHz
— | 2003 | 92–95 GHz^f | 3 GHz

Sources and notes:
a. K. R. Carter, A. Lahjouji, and N. McNeil, "Unlicensed and Unshackled: A Joint OSP-OET White Paper on Unlicensed Devices and Their Regulatory Issues," OSP Working Paper Series no. 39, May 2003.
b. 5.725–5.850 GHz was authorized for unlicensed uses in the 1980s. C. Jackson, "The Allocation of the Radio Spectrum," Scientific American, vol. 242, no. 2, Feb. 1980. The U-NII band only includes the portion of that band up to 5.825 GHz.
c. FCC, "Report and Order and Memorandum Opinion and Order. In the Matter of Wireless Operations in the 3650–3700 MHz Band," ET Docket no. 04-151, Rel. Mar. 16, 2005.
d. FCC, "Report and Order. In the Matter of Revision of Parts 2 and 15 of the Commission's Rules to Permit Unlicensed National Information Infrastructure (U-NII) Devices in the 5 GHz Band," Rel. Nov. 18, 2003.
e. FCC, "Report and Order. In the Matter of Amendment of Part 15 of the Commission's Rules to Allow Certification of Equipment in the 24.05–24.25 GHz Band at Field Strengths up to 2500 mV/m," ET Docket no. 98-156, Rel. Dec. 14, 2001.
f. FCC, "FCC Opens 70, 80, and 90 GHz Spectrum Bands for Deployment of Broadband Millimeter Wave Technologies," WT Docket no. 02-146, Oct. 16, 2003.
General source: C. L. Jackson, D. Robyn, and C. Bazelon, "Unlicensed Use of the TV White Space: Wasteful and Harmful," FCC Ex Parte Comments, ET Docket no. 04-186 and ET Docket no. 02-380, Aug. 20, 2008.

Table 2. Base of unlicensed radio spectrum.
In essence, the increased spectrum under licensed use lowers the costs of competing and tends to make the market for spectrum-based services more competitive. This increased competition has the effect of transferring some producer profits to consumers in the form of lower prices.
Liberally licensed radio spectrum is largely free to trade in markets and consequently has a market price. For a private firm that is maximizing its profits, it will use additional spectrum up to the point where the cost of the additional spectrum is equal to the extra profits the firm can make with that additional spectrum. (Otherwise, it could increase its total profits by using either a little more or a little less spectrum.) Through the workings of the market, the price of spectrum adjusts up and down until all purchasers of licensed spectrum are indifferent to further transactions.
⁸ Social welfare could be defined with greater weight on either producer or consumer surplus [7].
⁹ This assumes that the boundaries of rights in private use are well defined and property rights are enforced. Consequently, there are no externalities in the normal economic sense from private use of spectrum.
One great effect of using a market to set a price and allocate a resource such as radio spectrum licenses is that all buyers and sellers, by facing a common price, will place the same incremental value on the resource. If they did not, some market participant would see a profitable opportunity to either buy or sell licensed spectrum, putting upward or downward pressure on the price of licensed spectrum — the process continuing until the market clears. Having all firms that use radio spectrum place the same incremental value on that resource means that spectrum is efficiently used, because value cannot be created by rearranging how much each market participant uses. This is analogous to the condition that the incremental benefit between licensed and unlicensed uses should be the same; only here it is the incremental benefit between different licensed uses that should be the same.
Adding an incremental amount of spectrum to the licensed pool changes the value of spectrum to all users. The increased supply of spectrum will, holding all other things equal, slightly reduce the market price of spectrum and the price of spectrum-based services to consumers. Although the change in prices will likely be negligible for a small increase in spectrum under licensed uses, these price reductions are applied to all liberally licensed spectrum and services already in use, and so add up to a significant change in aggregate. The consumer benefit from the marginal increase in privately licensed spectrum is approximately equal to the loss in producer profits from the increased supply of the fixed spectrum resource.¹⁰ In other words, the producers' loss from lower spectrum values and a more competitive market is the consumers' gain. Taking producer and consumer surpluses together, the loss in profits to producers from increased supply of private-use spectrum is offset by gains to consumers. The net effect from a marginal increase in private-use spectrum is therefore simply the market value of the new spectrum as measured by its price.
CHARACTERIZATION OF THE INCREMENTAL VALUE OF RADIO SPECTRUM IN UNLICENSED USES
¹⁰ The difference between the gain in consumer surplus and the loss in producer surplus is equal to the consumer surplus associated with the incremental quantity of spectrum-based services (analogous to the Harberger triangle). Consequently, the producer losses slightly underestimate the consumer gains.
¹¹ This sensitivity to interference suggests that such services should be provided on licensed spectrum.
Producer Surplus and Consumer Surplus — By its nature, unlicensed spectrum is unpriced or, equivalently, priced at zero. The same logic discussed above that leads firms to use additional spectrum up to the point where the incremental value equals the incremental cost applies here, too. The key difference in the unlicensed case is that because unlicensed spectrum is priced by regulators at zero (and the price does not increase as firms increase their demand for the resource), firms use additional spectrum up to the point where the incremental benefits of additional spectrum are also zero. The phenomenon of using unpriced inputs to the point where more of the input adds no additional value to production is well studied in economics. For example, a factory that faces no costs for emitting air pollution (a zero price of clean air) will not invest in pollution-abatement technologies. Similarly, a firm that faces no cost of spectrum use will not bear costs related to using spectrum more efficiently.
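A small numerical sketch (my own construction, not from the article) illustrates the zero-price result: with a concave benefit B(s) = a·ln(1+s), demand is s = a/p − 1, which grows without bound as the price p falls to zero.

```python
# Toy model of spectrum demand under a posted price p: the firm maximizes
# net benefit a*ln(1+s) - p*s, so demand is s = a/p - 1 (unbounded as p->0).
# The benefit function and the constant a are illustrative assumptions.
import numpy as np

a = 10.0
s = np.linspace(0.0, 500.0, 100_001)             # search grid for spectrum use
for price in (1.0, 0.1, 0.0):
    net = a * np.log1p(s) - price * s
    print(price, s[np.argmax(net)])
# p=1.0 -> ~9 MHz; p=0.1 -> ~99 MHz; p=0 -> the grid edge (500), i.e.,
# demand is limited only by the search bound, mirroring unpriced overuse.
```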
This effect can be seen in current uses of unlicensed spectrum. One prominent example is WiFi. WiFi systems (especially those deployed in homes) often have channels with a capacity in excess of 50 Mb/s but typically carry Internet connections that use only a fraction of that capacity. This overprovisioning is likely supported by the fact that WiFi systems use unlicensed, and therefore unpriced, radio spectrum.
Consumption of unlicensed spectrum or unlicensed-spectrum-based goods is also not constrained by the availability of unlicensed spectrum. In contrast to a cell phone, where the price of spectrum affects the cost of mobile phone service, a consumer who chooses to use a cordless phone at home is not considering, or affected by, the availability of the unlicensed spectrum used by the cordless phone. Consequently, increased allocations of unlicensed spectrum do not lead directly to increases in consumer surplus, because additional spectrum does not directly change the benefit a consumer receives from the use of, say, a cordless phone.
Externalities — Although incremental allocations of unlicensed spectrum have no direct benefit to producers or consumers, they have an indirect effect by creating social value through their effects on externalities. The first obvious externality associated with an increase in unlicensed spectrum is congestion relief. Users of unlicensed spectrum do not internalize the cost to other users of consuming unlicensed spectrum. Given the low power restrictions on unlicensed bands, this cost is typically zero, but can be positive at times and places where numerous users are trying to access the same spectrum or one user with numerous devices is trying to access the same spectrum. Furthermore, some types of uses, such as wireless Internet service providers (WISPs), are more sensitive to interference and therefore less tolerant of other users in a band.¹¹ When congestion exists, additional allocations of unlicensed spectrum may have the effect of mitigating the congestion.
The second externality associated with unlicensed spectrum is the potential innovation effect from having unlicensed spectrum available [8]. For example, a manufacturer could release a new device without negotiating with spectrum owners for access to licensed spectrum and thus require a lower economic return. This lower economic return will allow ideas to come to market that might not otherwise appear in a licensed regime.
EMPIRICAL ANALYSIS
This section provides rough preliminary empirical estimates of the components of the incremental value of spectrum in licensed and unlicensed uses. In what follows, all value measures are for a one-time payment on a dollar-per-MHz basis for national spectrum.
PRODUCER SURPLUS PLUS CONSUMER SURPLUS FROM LICENSED SPECTRUM
As discussed above, the sum of producer and consumer surplus is equal to the market price of private-use spectrum. The recently concluded 700 MHz auction raised approximately $19 billion for 52 MHz of spectrum available nationwide [9], or approximately $365 million/MHz.
The current analysis will use the 700 MHz E block as comparable because of its similarities to the TV white spaces [6]. The E block sold for $1.3 billion, or $211 million/MHz. Any increased allocation to licensed use, such as licensing the TV white spaces, will increase the total supply of licensed spectrum and, as a consequence, reduce the price of licensed spectrum. Although the TV white spaces provide for considerably more spectrum in lower valued rural areas than in higher valued urban areas,¹² the current analysis assumes 48 MHz are available nationally.¹³ Drawing on other analysis that uses an elasticity of demand of –1.2 for licensed spectrum [10] and a base of liberally licensed spectrum reported in Table 1 of 541 MHz, the E block price should be reduced by 7.4 percent to account for the increased supply from the white spaces. The adjusted E block price is therefore $195 million/MHz.
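The elasticity adjustment just described can be reproduced in a few lines (a sketch of the article's arithmetic; the variable names and the Python rendering are mine):

```python
# Reproduces the white-space supply adjustment to the E block price.
elasticity = -1.2        # demand elasticity for licensed spectrum [10]
base_mhz = 541.0         # liberally licensed base from Table 1
added_mhz = 48.0         # white-space MHz assumed available nationally
e_block = 211.0          # E block auction price, $M/MHz

supply_increase = added_mhz / base_mhz             # ~8.9 percent
price_drop = supply_increase / abs(elasticity)     # ~7.4 percent
print(e_block * (1.0 - price_drop))                # ~195 ($M/MHz)
```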
PRODUCER SURPLUS PLUS CONSUMER SURPLUS FROM UNLICENSED SPECTRUM
As discussed above, the marginal direct value of producer surplus and consumer surplus from unlicensed spectrum is zero.
CONGESTION EXTERNALITY VALUE FROM UNLICENSED SPECTRUM
Three types of potential congestion in the unlicensed bands are identified. The first is congestion in the home (or office, car, or other private space). This would occur if someone tried to simultaneously use more unlicensed devices in the home than the available unlicensed spectrum could support. This problem could be exacerbated by spill-over of unlicensed devices from close-proximity neighbors. The second type of congestion identified would occur in public spaces, such as public WiFi hotspots. The third type of congestion identified is that experienced by WISPs using unlicensed bands to provide commercial broadband services, typically in rural areas.
Marginal Value of Alleviating Congestion in the Home — Extensive research found no studies on the marginal willingness to pay to alleviate congestion of unlicensed devices in the home. A search of news articles describing interference in unlicensed bands found no articles describing congestion in the home.¹⁴ Nevertheless, many people have heard the occasional anecdote about congestion in the home. The anecdotes typically involve a cordless phone and WiFi or a microwave, and could presumably be solved by purchasing a 5.8 GHz cordless phone. An initial cut at estimating one potential upper bound on the willingness to pay for incremental unlicensed spectrum in the home is based on purely speculative guesses at values and is included simply to illustrate the point. Assume that interference in the home is only likely to happen in homes that have baby monitors as well as WiFi units and cordless phones.
According to the CEA, in 2002 on average 81 percent of homes had 1.5 cordless phones, and 10.5 percent of homes had 1.38 baby monitors [11, 12]. By one account, at the end of 2005, 13.2 million homes had WiFi units [13]. It is unknown how many homes use all three devices or experience interference in the home. Given the lack of serious complaints, this analysis arbitrarily assumes that 1 million households experience interference in the home. Further assume each household would be willing to pay $20 (the price of an inexpensive 5.8 GHz cordless phone) to alleviate that interference.¹⁵ Also assume that it would take an additional 10 MHz of unlicensed spectrum to alleviate the congestion.¹⁶ With these assumptions, a rough estimate of the marginal value of alleviating congestion in the home would be $2 million/MHz. This estimate is extremely rough and only meant as a placeholder pending further research. Revising estimates of the number of homes affected, their willingness to pay, or the amount of additional unlicensed spectrum needed to alleviate the congestion will proportionally affect the final estimate.
Marginal Value of Alleviating Congestion in Public Areas — A search of news articles and the academic literature did not find any documented congestion of public WiFi hotspots outside of convention centers [6]. In fact, to study the technical effects of congestion on WiFi network operations, one group of researchers had to set up their testing equipment in a convention hall hosting over 1000 engineers [14]. This result is not surprising given that the more recently allocated 5 GHz unlicensed bands are also being used for WiFi. Without demonstrated congestion in public areas, no cost is identified or positive value assigned to alleviating this source of congestion with incremental allocations of unlicensed spectrum.
Marginal Value of Alleviating Congestion for WISPs — WISPs that use unlicensed spectrum to provide commercial broadband access services have chosen not to offer those services over licensed frequencies. WISPs using unlicensed spectrum typically operate in rural areas. This seems to be because wireless broadband access in urban areas is more appropriately provided on licensed frequencies, presumably to be able to provide the quality of service needed for a commercial service. In the AWS auction, RSAs sold for an average price of 28.4 percent of the band average, and in the more recent 700 MHz auction, the B block RSAs sold for 26.6 percent of the band average. Using the latter number, the implied value of RSAs in the E block is $0.20/MHz-pop. Applying this value to all RSA pops, a rural MHz is valued at $12.9 million.¹⁷ The value of congestion alleviation for WISPs cannot be more than this amount.
The Sum of the Marginal Congestion Values — Taken together, the value of alleviating all congestion externalities is estimated to be $14.9 million/MHz. This estimate is very rough and only intended to characterize the order of magnitude of the marginal value of the congestion externality.
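The congestion arithmetic above can be summarized in a short sketch (inputs are the article's stated assumptions; the small gap between the computed WISP bound and the reported $12.9 million presumably reflects an unrounded $/MHz-pop figure):

```python
# Home congestion: willingness to pay spread over the spectrum assumed
# necessary to relieve it (all inputs are the article's assumptions).
homes = 1_000_000                 # households assumed to see interference
wtp = 20.0                        # $ per household (cheap 5.8 GHz handset)
relief_mhz = 10.0                 # unlicensed MHz assumed to clear it
home_m_per_mhz = homes * wtp / relief_mhz / 1e6    # -> $2M/MHz

# WISP congestion: upper bound from rural (RSA) licensed-spectrum value.
rsa_value = 0.20                  # implied E block RSA value, $/MHz-pop
rsa_pops = 65_792_705             # FCC population of all RSAs (note 17)
wisp_m_per_mhz = rsa_value * rsa_pops / 1e6        # ~$13M/MHz upper bound

print(home_m_per_mhz + wisp_m_per_mhz)  # article total: 2 + 12.9 = $14.9M/MHz
```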
¹² See [6] for a detailed estimate of the available white space.
¹³ The analysis here assumes that on a value-adjusted basis there are 8 white space channels available nationwide, or 48 MHz of spectrum.
¹⁴ A thorough search of a wide range of news sources, including 265 telecom-specific publications, returned only articles that promoted newly developed devices to alleviate congestion in the unlicensed bands. No study actually reported any specific or verifiable examples of congestion in homes [6].
¹⁵ This is not intended to imply that this is the only interference a household could experience, rather simply to put some economic bounds on the value of alleviating that interference.
¹⁶ See previous note.
¹⁷ According to the FCC, the population of all RSAs is 65,792,705.
INNOVATION EXTERNALITY VALUE FROM UNLICENSED SPECTRUM
Several analysts argue that one of the benefits of unlicensed spectrum is that it allows for innovation [8]. Yet no characterization of the value from innovation is available, nor can any characterization of the incremental value from innovation be found; it is, however, likely to be non-negative. Given the significant amount of unlicensed spectrum identified in Table 2, it seems likely that we are already experiencing benefits from unlicensed-induced innovation. The issue here, however, is how much additional innovation is likely from incremental unlicensed allocations.
CONCLUSION
The incremental value of a nationwide megahertz of licensed spectrum in the United States is estimated here to be $195 million. The identifiable incremental values of 1 MHz of unlicensed spectrum are estimated here to total $14.9 million, with the incremental innovation value unknown. If the allocation of licensed and unlicensed spectrum is in equilibrium, the incremental value of 1 MHz in either use must be the same. For this to be the case, the incremental innovation value alone of spectrum in unlicensed uses would have to be approximately $180.1 million/MHz. For an allocation of 50 MHz of spectrum (a rough average value of the TV white spaces), the total additional innovation value from the allocation would have to equal more than $9 billion. This is a very large amount of additional value from potential innovations — additional to the innovation value associated with existing unlicensed bands. If the incremental value of innovation from the TV white spaces — or the equivalent amount of spectrum — is less than about $9 billion, the allocation between licensed and unlicensed spectrum is not in equilibrium, with the marginal value of licensed spectrum allocations exceeding the marginal value of unlicensed spectrum
allocations. The policy implication of this imbalance is that incremental spectrum allocations should go to licensed uses.
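The closing figures follow directly from the earlier estimates (a sketch of the article's calculation; the variable names are mine):

```python
# How much incremental innovation value would equilibrium (Eq. 2) require?
licensed = 195.0            # $M/MHz, adjusted E block price
congestion = 14.9           # $M/MHz, identified unlicensed externalities
innovation_needed = licensed - congestion          # -> $180.1M/MHz
white_space = 50.0          # MHz, rough average TV white space allocation
print(innovation_needed * white_space / 1000.0)    # ~ $9 billion
```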
REFERENCES
[1] FCC Spectrum Policy Task Force, "Report," ET Docket no. 02-135, Nov. 2002.
[2] E. Kwerel and J. Williams, "Changing Channels: Voluntary Reallocation of UHF Television Spectrum," FCC Office of Plans and Policy, Working Paper no. 27, Nov. 1992.
[3] United States Congress, Congressional Budget Office, "Where Do We Go From Here? The FCC Auctions and the Future of Radio Spectrum Management," Apr. 1997.
[4] T. W. Hazlett and C. Bazelon, "Market Allocation for Radio Spectrum," ITU Wksp. Market Mechanisms Spectrum Mgmt., Geneva, Switzerland, Jan. 2007.
[5] "Reply Comments of Charles L. Jackson and Dorothy Robyn," FCC Docket 04-186, Mar. 2, 2007.
[6] "Comments of Charles L. Jackson, Dorothy Robyn and Coleman Bazelon," FCC Docket 06-150, June 20, 2008.
[7] C. Bazelon, "The Importance of Political Markets in Formulating Economic Policy Recommendations," AAEA annual meeting, Manhattan, KS, Aug. 1991.
[8] W. Lehr, "Dedicated Lower-Frequency Unlicensed Spectrum: The Economic Case for Dedicated Unlicensed Spectrum Below 3 GHz," New America Foundation, Spectrum Policy Program, Spectrum Series Working Paper no. 9, July 2004.
[9] FCC; http://wireless.fcc.gov/auctions/default.htm?job=auction_summary&id=73
[10] C. Bazelon, "Analysis of an Accelerated Digital Television Transition," May 31, 2005.
[11] "Consumer Electronics Association Comments," FCC Docket 02-135, Sept. 30, 2002.
[12] FCC Spectrum Policy Task Force, "Report of the Unlicensed Devices and Experimental Licenses Working Group," Working Group 6, Nov. 15, 2002.
[13] E. Chavez, "Wireless Not Free of Risks," Sacramento Bee, July 4, 2005.
[14] A. Jardosh et al., "Understanding Congestion in IEEE 802.11b Wireless Networks," USENIX Internet Measurement Conf., 2005.
BIOGRAPHY
COLEMAN BAZELON ([email protected]) specializes in regulation and strategy in the wireless, wireline, and video sectors. He frequently advises regulatory and legislative bodies on many issues, including auction design and performance, and regularly serves as an advisor for bidders in spectrum license auctions. He received his Ph.D. and M.S. in agricultural and resource economics from the University of California at Berkeley. He also holds a Diploma in economics from the London School of Economics and a B.A. from Wesleyan University.
CALL FOR PAPERS
RADIO COMMUNICATIONS: COMPONENTS, SYSTEMS, AND NETWORKS
A SERIES IN IEEE COMMUNICATIONS MAGAZINE
The Radio Communications series will cover components, systems, and networks related to RF technologies. Articles are in-depth, cutting-edge tutorials emphasizing state-of-the-art technologies involving physical and higher-layer issues in radio communications, including RF, microwave, and radio-related wireless communications topics. The Radio Communications series emphasizes practical solutions and emerging research topics for innovative research, design engineers, and engineering managers in industry, government, and academic pursuits. Articles are written in clear, concise language and at a level accessible to those engaged in the design, development, and application of products, systems, and networks. The peer review process ensures that only the highest quality articles are accepted and published. The rigorous, triple peer review process evaluates papers for technical accuracy and ensures that no self-serving marketing or product endorsement messages are published in the papers or editorials.
RADIO SYSTEMS AND ARCHITECTURE
• Systems such as WLAN, Bluetooth, WiFi, WiMax, cellular 3G and 4G systems, Automatic Link Establishment (ALE), microwave and millimeter wave trunking and backhaul, RF identification (RFID), intelligent vehicle highway radio systems, radionavigation (GPS, Glonass, and hybrid GPS-INS systems), location finding (E911, search and rescue, general methods), handhelds
• Radio architectures (direct conversion radios, low-IF radios), open architecture standards such as the SDR Forum and Object Management Group radio standards, system security, and novel and emerging approaches to radio/wireless systems architectures
• Air-interface architectural aspects such as framing, burst generation, duplexing, air interface security, multi-air-protocol switching, channel modulation, etc.
• Radio-enabled services such as proximity badges
RADIO COMPONENTS
• Processors (e.g., CMOS, GaAs, BiCMOS, SiGe, and emerging system-on-chip (SoC) technologies) and related software technologies (downloads, security, compact operating systems, real-time CORBA, development environments, and other radio-enabling technologies)
• Specific components (e.g., antennas, power amplifiers, synthesizers, superconducting components, highly programmable analog parameters, etc.)
• Radio techniques (e.g., pre-distortion for non-linear amplifiers, polar transmitter architectures, direct digital synthesis, and advanced approaches)
• Receiver techniques (DC offset compensation, I/Q gain/phase imbalance, etc.)
• Smart antennas, including sectorized and emerging massively parallel apertures, MEMS signal processing, shared apertures, Space-Time Adaptive Processing (STAP), and related multi-user smart antenna technologies
• Multiple-Input Multiple-Output (MIMO) and technologies that exploit multipath and spatial diversity for increased communications capacity or quality of service (QoS)
• Baseband platform (e.g., chip with dual-core DSP/MCU, single chip with digital and converters, SoC hybrids of FPGAs, DSPs, and ASICs; ADCs, sampling and resampling techniques, timing and frequency control subsystems, etc.)
• Algorithms residing in baseband (filter algorithms, equalizers, error control coding and link layer protocols; protocol interactions; MIMO algorithms)
RADIO NETWORKING
• Signal processing and coding techniques related to wireless applications (e.g., speech coding, multi-codec conversion, video coding, multimedia integration)
• Radio resource management, especially agile use of RF spectrum and radio etiquettes
• Internetworking of radio networks to fixed wireline systems
• Impact of radio on network topology and mobility
EMERGING TOPICS IN RADIO COMMUNICATIONS, TECHNOLOGY, AND SERVICES
• Location-aware radio
• Cognitive radios and cognitive/adaptive radio networks
• User-aware radio
• Non-radio sensors sharing RF apertures on radio devices (temperature, accelerometers, binaural microphones, directional sound projection) and related emerging integrated applications such as language processing, spatially discriminating microphones, machine learning, and biometrics
Manuscripts must be submitted through the magazine's submissions Web site at http://commag-ieee.manuscriptcentral.com/. On the Manuscript Details page, please click on the drop-down menu to select
Radio Communications Series
GUEST EDITORIAL
MODELING AND SIMULATION: A PRACTICAL GUIDE FOR NETWORK DESIGNERS AND DEVELOPERS
Jack Burbank
Modeling and simulation (M&S) is a critical element in the design, development, and test and evaluation (T&E) of any network product or solution. It is almost always preferable to have insight into how a particular system, individual device, or algorithm will behave and perform in the real world prior to its actual development and deployment. Network designers and developers want assurances that their designs will achieve the desired goal prior to actual production and deployment. Network operators want assurances that the introduction of a new device type into their network is not going to have unintended consequences. There are many methods available to provide these types of assurances, including analysis, prototype development and empirical testing, low-scale trial deployments, and M&S. While not sufficient unto itself, M&S is a particularly valuable method because in many cases it is the only viable way to gain insight into the performance of the eventual product or solution in a large-scale environment prior to actual deployment, and it allows for more informed design trade studies and deployment decisions.
The role of M&S in the design and development process is expected to further increase in the future due to the rapidly increasing complexity of communications networks. Because of the increasingly large and distributed nature of networked systems, and the resulting interdependencies of individual subsystems operating as a whole, it will often be the case that individual subsystems cannot be tested in isolation. Rather, multiple systems must sometimes be considered in concert to verify system-level performance and behaviors. This increases the scale of network T&E and adds significant complexity, as several different types of measurements will often be required in several different locations simultaneously. This will further increase the support required for physical test events in platforms, personnel, and measurement equipment, further limiting the realistic amount of physical testing. It will place a premium on M&S to perform requirements verification to augment physical testing. Not only is M&S an important element of the overall T&E strategy, but high-fidelity M&S will become increasingly important in predicting system performance. Large-scale, high-fidelity M&S is notoriously challenging to accomplish, and is still an active research area.
There are numerous highly capable M&S tools and techniques available today that span virtually all aspects of communications and network design. However, it is often difficult for a network designer, developer, or tester to know what are the most appropriate tools and techniques for a particular task. In an ever growing field where technology evolves rapidly, it can be challenging to understand the subtleties of M&S tool capabilities and what they are best suited for.
When choosing the proper M&S tool for a particular application, how does one differentiate a useful feature from the equivalent of "bling"? The challenge of proper M&S tool selection is exacerbated by the sometimes poor understanding of the proper role and application of M&S, and of how to best apply these tools. Additionally, while many distinguished technical contributions have been made in the M&S community, there are still many key technical limitations that must be addressed to enable large-scale, high-fidelity M&S of complex network systems. Until then, it is important to understand the capabilities and limitations of particular M&S tools, as well as of M&S approaches in general.
The goal of this feature topic is to present the state of the art in M&S, with a particular focus on practical information such as current best practices, tools, and techniques. Following a rigorous technical review process, seven articles were chosen for publication. The articles presented in this issue cover important topics, including:
• An overview of state-of-the-art M&S tools and environments available commercially and from open source
• Examples of how M&S can be used for important functions such as fault restoration and design validation
• How M&S can be used to make meaningful comparative studies of different technologies (i.e., how to make an "apples-to-apples" comparison)
• Improving M&S performance through computational complexity reduction and parallel processing techniques
I hope that you enjoy these seven articles and find that they are useful in helping to understand the complex landscape of this important and exciting field.
BIOGRAPHY
JACK BURBANK [M] ([email protected]) received his B.S. and M.S. degrees in electrical engineering from North Carolina State University in 1994 and 1998, respectively. As part of the Communications and Network Technologies Group of the Johns Hopkins University Applied Physics Laboratory (JHU/APL), he currently leads a team of engineers focused on assessing and improving the performance of wireless networking technologies through test, evaluation, and technical innovation. His primary expertise is in the areas of wireless networking and modeling and simulation, focusing on the application and evaluation of wireless networking technologies in the military context. His recent work has focused on the areas of wireless network electronic attack, wireless network security, and cognitive radio networking. He has published numerous technical papers and book chapters on topics of wireless networking, regularly acts as a technical reviewer for journals and magazines, and is a co-author of an upcoming book on the subject of modeling and simulation. He teaches courses on the topics of networking and wireless networking in the Johns Hopkins University Part Time Engineering Program, and is a member of the ASEE.
MODELING AND SIMULATION: A PRACTICAL GUIDE FOR NETWORK DESIGNERS AND DEVELOPERS
Wireless Network Modeling and Simulation Tools for Designers and Developers
William T. Kasch, Jon R. Ward, and Julia Andrusenko, The Johns Hopkins University Applied Physics Laboratory
ABSTRACT
Modeling and simulation methods are employed by scientists and engineers to gain insight into system behavior that can lead to faster product time-to-market and more robust designs. These advantages come at the cost of model development time and the potential for questionable results because the model represents limited attributes of the actual system. M&S techniques are used to characterize complex interactions and performance at various layers of the protocol stack. This article provides a discussion of M&S for wireless network designers and developers, with particular attention paid to the architectural issues, as well as a discussion of the various communication M&S tools available today. The topics presented span the protocol stack, including radio-frequency-propagation M&S tools, physical-layer and waveform M&S, network-layer M&S, and distributed simulation, as well as waveform generation for test and evaluation.
INTRODUCTION
Modeling and simulation (M&S) tools can be quite powerful in gaining insight into the behavior of a complex system. Generally, a network system can be represented by a model implemented in hardware, software, or a combination of both. A simulation represents the execution of that model, consisting of a typical set of inputs, algorithms, and routines that model the system behavior and a set of outputs that provide insight into the system performance. These systems can be as simple or as complex as the user desires. Figure 1 shows a high-level representation of a software-based communication system M&S tool. Here, representative user-specified inputs are processed by routines that model the digital communication system to produce outputs such as bit-error rate (BER), packet-error rate (PER), or throughput metrics that are useful for characterizing the performance of the system.
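As a toy illustration of this input-model-output flow (my own minimal sketch, not a tool described in this article), a Monte Carlo simulation of BPSK over an AWGN channel maps an Eb/N0 input to a BER output:

```python
# Minimal Monte Carlo link simulation: BPSK over AWGN, BER vs. Eb/N0.
# This is an illustrative sketch, not code from any tool surveyed here.
import numpy as np

def bpsk_ber(ebn0_db: float, n_bits: int = 1_000_000, seed: int = 1) -> float:
    rng = np.random.default_rng(seed)
    bits = rng.integers(0, 2, n_bits)            # input: random data bits
    symbols = 1.0 - 2.0 * bits                   # BPSK mapping: 0 -> +1, 1 -> -1
    ebn0 = 10.0 ** (ebn0_db / 10.0)
    noise = rng.normal(0.0, np.sqrt(1.0 / (2.0 * ebn0)), n_bits)
    decisions = (symbols + noise) < 0.0          # hard-decision threshold detector
    return np.mean(decisions != bits.astype(bool))  # output: bit-error rate

print(bpsk_ber(6.0))   # ~2.4e-3, near the theoretical Q(sqrt(2*Eb/N0))
```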
The advantages of M&S include the ability to exercise scenarios not achievable easily through empirical methods (i.e., scalability testing of networks) and the ability to modify models to test system sensitivity and tune performance [1]. The most substantial M&S weakness that remains is that such a system is not real — and as such, the value of the results is largely a function of the modeled system aspects and the degree of modeling detail. However, shared-code models, based on software from actual hardware platforms (e.g., software-defined radio), are beginning to mitigate this M&S weakness because they offer the ability to produce high-fidelity results for operational scenarios. It should be noted that M&S is not a replacement for empirical testing; analytical models, prototyping, laboratory testing, and field testing are all critical components in a successful design.
Generally, three types of simulations are used for network M&S today: software, hardware, and hybrid approaches. Software approaches provide insight into the conceptual system performance and can help engineers find system issues before hardware development begins. Furthermore, software-based simulations often are upgraded more easily to model new system features or experiment with variations in system parameters. However, high-fidelity, software-based simulations usually are slow unless distributed techniques can be used to improve execution time. Hardware approaches generally are substantially faster in execution time compared to software approaches of comparable fidelity; however, a hardware-based simulation can be more expensive to develop and maintain because of hardware-associated costs (e.g., circuit-board design) compared to software (e.g., purchasing a C++ compiler and associated development labor). Modern hardware development procedures usually include a behavioral definition using a language such as the VHSIC hardware description language (VHDL) or Verilog, plus the added manufacturing cost. Furthermore, changes to a hardware-based simulation may not be simple and can incur substantial costs for redesign.
Figure 1. Wireless network system simulation example. (The figure shows user inputs such as signal level, noise level, waveform type, coding method, retransmission scheme, number of nodes, and contention method feeding a simulation system of algorithms realized in hardware, software, or a hybrid, which produces outputs such as bit error rate, packet error rate, and message success rate; the modeled stack spans RF propagation and waveform M&S at the bottom through PHY, MAC, logical link, network, transport, and application layer M&S.)
Hybrid simulation approaches combine software and hardware simulation methods. In particular, hardware-in-the-loop (HITL) simulations can be quite powerful for testing existing hardware platforms and gaining insight into their performance characteristics while modeling other aspects of the system in software.

In developing a simulation, it is important to consider the dimensions of performance: scalability, execution speed, fidelity, and cost. Generally, these are conflicting goals, and as such, trade-offs must be determined and weighed against the overall goal of the simulation. For network simulations, scalability refers to the impact of an increasing number of nodes or increased traffic loading on the simulation performance metrics. Simulations that scale well generally can handle an increased node count or traffic level with minimal penalty in execution time compared to simulations that do not scale. Execution speed refers specifically to the time the simulation takes to complete, given a set of inputs (such as a network scenario with topology and traffic characteristics). Generally, faster execution time is more desirable; however, it is largely a function of both scalability and fidelity. Fidelity refers to the degree of modeling detail developed within a simulation to approximate the system; higher fidelity can increase execution time or cost substantially.

As a general approach when selecting M&S methods, engineers can choose a publicly available M&S product or, alternatively, can develop their own M&S tool. In-house development has certain advantages over commercially available M&S tools: a designer might know more about the specific features, capabilities, assumptions, and limitations of the in-house tool than of a commercially available one. However, developing in-house M&S tools can require a substantial investment in labor and development time. Furthermore, with the increasing number of network technologies being developed and fielded, an in-house approach can be prohibitive for simulations whose modeling scope encompasses a large number of these technologies. Many network M&S tools are available publicly, either as commercial offerings or as public-domain, open-source projects; thus, knowing which tool to choose also can be challenging. Publicly available M&S tools vary in many ways, including their feature sets, user experience, performance, and cost; all have advantages and disadvantages, assumptions, and limitations.

When using M&S tools, it is important to validate results to gain confidence that the tool is indeed modeling the actual system behavior as expected. In particular, it is important to collect a wide variety of empirical data in which the appropriate input parameters are varied to extremes to test the M&S tool's limitations and verify its assumptions. When employed properly, these methods help the engineer build confidence in the results. Reference [2] is a pertinent example of the potential pitfalls of simulation. In particular, the authors of [2] performed a literature survey focusing on mobile ad hoc network (MANET) simulations.
The results from [2] show that 85 percent of the simulation results were not repeatable. Furthermore, [2] reported the following statistics, which suggest that more validation, verification, and documentation of assumptions are required to gain confidence in results determined through M&S methods:
• 85 percent did not specify the simulator platform version.
• 85 percent did not use confidence intervals.
• 75 percent did not specify the traffic type.
• 65 percent did not state the number of simulation runs.
• 62 percent did not use mobility.
• 45 percent did not specify the transmission distance.
• 30 percent did not specify the simulator platform.
Even simple simulation components such as a random number generator (RNG) can introduce errors into a simulation, for example by allowing a pseudo-random sequence to repeat prematurely if the seed value is not carefully chosen.
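As a minimal illustration of the seeding pitfall (ours, not drawn from any specific simulator), the sketch below shows how two nominally independent runs seeded identically reproduce the same pseudo-random sequence, and one common remedy using a seed sequence to derive separate streams:

    #include <cstdint>
    #include <cstdio>
    #include <random>
    #include <vector>

    // Two "independent" runs that accidentally share one seed reproduce the
    // identical pseudo-random sequence; a seed sequence is one remedy.
    int main() {
        std::mt19937 runA(42), runB(42);            // accidental seed reuse
        std::printf("same seed:      %u %u\n",
                    static_cast<unsigned>(runA()), static_cast<unsigned>(runB()));

        std::seed_seq seq{2009, 3, 1};              // arbitrary base entropy
        std::vector<std::uint32_t> seeds(2);
        seq.generate(seeds.begin(), seeds.end());   // well-separated seeds
        std::mt19937 repl0(seeds[0]), repl1(seeds[1]);
        std::printf("separate seeds: %u %u\n",
                    static_cast<unsigned>(repl0()), static_cast<unsigned>(repl1()));
        return 0;
    }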
Finally, it is important to consider cost when choosing an M&S tool. For commercially available products, costs often include substantial upgrade and maintenance fees. Furthermore, computer platform costs can be a substantial factor if high processing power is required. Other costs that should be considered include technical support, productivity, and staffing; some M&S tools may require a large investment in developing staff competence.

The remaining sections of this article describe M&S tools that apply to the various layers of the network stack. The sections are organized according to Fig. 1 from the bottom layer to the top, including a section that describes the use of distributed simulation (DS) approaches to improve simulation performance and the challenges of waveform generation for test and evaluation (T&E) of a completed product. The objective of this article is not to present the features of each M&S tool exhaustively or to make recommendations about the tools a designer should use. Instead, the remaining sections present a survey of M&S tools with an overview of capabilities and direct the reader to locations where more information about a given platform can be obtained.
PHY simulation tool                        URL
LabVIEW*                                   www.ni.com
Simulink/Matlab*                           www.mathworks.com/products/simulink/
Agilent Advanced Design System (ADS)*      http://eesof.tm.agilent.com/products/ads_main.html
CoWare Signal Processing Designer (SPD)*   www.coware.com/products/signalprocessing.php

(*) indicates the platform is a commercial product

Table 1. Some PHY simulation tools.
PHYSICAL-LAYER AND WAVEFORM M&S

Digital communications system designers rely on metrics such as BER, PER, and error-vector magnitude (EVM) to characterize a given transmission medium and, correspondingly, the predicted performance of a communications system. This section presents a survey of commercially available products that can potentially simplify and improve accuracy in physical-layer (PHY) and waveform M&S. This includes product features and capabilities, as well as areas to which the designer should pay careful attention in choosing the product that best meets project requirements. In this article, a waveform is considered to be the time-varying voltage signal transmitted across the medium to convey information.

The designer typically begins with a high-level, functional block diagram that captures each of the basic functions of the system PHY in single blocks, including the inputs and outputs from one block to another. Three of the commercial software packages listed in Table 1 (LabVIEW, Simulink, and Agilent Advanced Design System [ADS]) use graphical user interfaces (GUIs) that enable the designer to develop simulations in functional block-diagram form. Each of these software packages includes generic blocks applicable to most PHY simulations, such as pseudo-random raw data-bit generation, data randomizers, forward error correction (FEC), interleaver algorithms, modulators, and channel impairments. Other functional blocks perform analysis on the modeled system output, including BER, PER, and achievable data rate. The three aforementioned products are generally aimed at modeling specific attributes and algorithms of a larger system implementation. For example, one could model the effects of a given FEC or interleaver algorithm on system performance instead of modeling a complete, large-scale system implementation. Each platform offers add-on packages that also enable custom waveform definition and analysis, such as the LabVIEW modulation toolkit and the Matlab communications toolbox. These add-on packages enable the designer to create software definitions of digitally modulated waveforms.

LabVIEW, Simulink, and ADS each offer ready-made implementations of standard PHYs and waveforms. These include PHY descriptions for technologies such as IEEE 802.11, 802.16, or cellular standards. The availability of these implementations is platform dependent, and the designer should contact the appropriate company sales representative directly with specific inquiries. Alternatively, some platforms may not offer standard model implementations directly, but implementations may be available through Internet message boards or third-party vendors. In all cases, the standard PHY and waveform implementation is built from the basic building blocks of a communications system discussed previously. For example, an IEEE 802.11a or 802.16 PHY model would require a data randomizer and FEC blocks defined by the generator polynomial specified in the standard, plus the standard's interleaver algorithm block.
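As a flavor of what such a building block contains, the sketch below implements the data scrambler of IEEE 802.11a, whose generator polynomial S(x) = x^7 + x^4 + 1 is specified in the standard; the surrounding function signature and bit representation are our own illustrative choices.

    #include <cstdint>
    #include <vector>

    // Sketch of the IEEE 802.11a data scrambler, S(x) = x^7 + x^4 + 1: the
    // kind of "data randomizer" block a PHY model chains ahead of the FEC
    // and interleaver blocks. The seed is a non-zero 7-bit register state.
    std::vector<std::uint8_t> scramble(const std::vector<std::uint8_t>& bits,
                                       std::uint8_t state) {
        std::vector<std::uint8_t> out;
        out.reserve(bits.size());
        for (std::uint8_t b : bits) {
            std::uint8_t fb = ((state >> 6) ^ (state >> 3)) & 1;  // taps x^7, x^4
            state = static_cast<std::uint8_t>(((state << 1) | fb) & 0x7F);
            out.push_back(b ^ fb);                                // scrambled bit
        }
        return out;
    }

Because the scrambler is additive, descrambling is the same operation applied with the same initial state.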
For larger-scale system simulations, CoWare Signal Processing Designer (SPD) is another PHY and waveform M&S tool; it offers complete system libraries of standard technologies, including IEEE 802.11, 802.16, and cellular implementations. CoWare SPD uses a functional block-diagram GUI and includes a model library of prevalent integrated circuits (ICs) that the designer might be required to interface into a given design. These models capture the complete behavior of a given IC, which may not be accurately modeled in a generic implementation of the technology standard. CoWare SPD is hardware-oriented and offers a fixed-point mode to facilitate design implementation on a field-programmable gate array (FPGA). LabVIEW FPGA provides similar capabilities as an add-on package, and Simulink and Matlab models can be converted to VHDL or Verilog with packages from Xilinx.

The wireless network designer also might wish to characterize the air interface between transmitter and receiver. In this case, radio-frequency (RF) propagation M&S tools are used to estimate electromagnetic propagation effects between a transmitter and a receiver. There is a wide variety of commercial tools, some of which are listed in Table 2. Many of these products differ in their user interfaces and visualization tools rather than in their choice of underlying propagation model (e.g., the European Cooperation in the field of Scientific and Technical Research [COST] models or the Hata model).
Tool name            Company                                    URL
Athena*              Wave Concepts                              http://www.waveconceptsintl.com/athena.htm
CellOpt*             Actix                                      http://www.actix.com/main.html
Comstudy*            RadioSoft                                  http://www.radiosoft.com
EDX SignalPro*       EDX Wireless                               http://www.edx.com/products/signalpro.html
ENTERPRISE Suite*    AIRCOM International                       http://www.aircominternational.com/Software.html
LANPlanner*          Motorola, Inc.                             http://www.motorola.com
Mentum Planet*       Mentum S.A.                                http://www.mentum.com
NP WorkPlace*        Multiple Access Communications Ltd         http://www.macltd.com/np.php
Pathloss*            Contract Telecommunication Engineering     http://pathloss.com/
PlotPath*            V-Soft Communications LLC                  http://www.v-soft.com/web/products.html
Probe*               V-Soft Communications LLC                  http://www.v-soft.com/web/products.html
Profiler-eQ*         Equilateral Technologies                   http://www.equilateral.com/products.html
RFCAD*               Sitesafe                                   http://www.rfcad.com/
RPS*                 Radioplan GmbH                             http://www.radioplan.com/products/rps/index.html
Volcano*             SIRADEL                                    http://www.siradel.com
Wavesight*           Wavecall                                   http://www.wavecall.com
WinProp*             AWE Communications                         http://www.awe-communications.com/
Wireless InSite*     Remcom, Inc.                               http://www.remcom.com/wireless-insite/overview/wireless-insite-overview.html

(*) indicates the platform is a commercial product
Table 2. Some RF propagation M&S tools.

The majority of these products are designed to assist the RF engineer in cellular communication prediction and therefore characterize environments in the common cellular frequency bands.
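As an indication of what such tools compute underneath their interfaces, the sketch below implements the widely published Okumura-Hata median path-loss formula for urban areas (small/medium city correction), which is valid roughly for carriers of 150-1500 MHz and distances of 1-20 km; the example parameters are arbitrary.

    #include <cmath>
    #include <cstdio>

    // Okumura-Hata median path loss, urban, small/medium city (dB).
    // f: carrier in MHz, hb: base antenna height in m, hm: mobile antenna
    // height in m, d: distance in km.
    double hataUrbanDb(double f, double hb, double hm, double d) {
        const double ahm = (1.1 * std::log10(f) - 0.7) * hm
                         - (1.56 * std::log10(f) - 0.8);   // mobile correction
        return 69.55 + 26.16 * std::log10(f) - 13.82 * std::log10(hb) - ahm
             + (44.9 - 6.55 * std::log10(hb)) * std::log10(d);
    }

    int main() {
        std::printf("900 MHz, hb=50 m, hm=1.5 m, d=5 km: %.1f dB\n",
                    hataUrbanDb(900.0, 50.0, 1.5, 5.0));
        return 0;
    }

What the commercial products add around a core model like this is terrain and clutter data, calibrated variants for other bands, and visualization.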
NETWORK LAYER M&S

Network-layer M&S equips the designer with a method to design and debug network protocols and to verify distributed functions. Typical figures of merit include system throughput, retransmission rate, and average packet size. This section presents a survey of open-source and commercial network simulation platforms; a list of network simulators is included in Table 3.

The majority of network simulators began as university projects focused on simulating a specific attribute of network behavior under explicit conditions. Consequently, the designer must select from numerous simulators that vary in cost and ease of use, from open-source, command-line interface (CLI)-driven simulators with limited documentation to proprietary, commercial, GUI-driven simulators with comprehensive documentation and support options.
To illustrate this point, specific features of GloMoSim, QualNet, NS-2, and OPNET Modeler are discussed. GloMoSim is a discrete-event simulator for large-scale (e.g., thousands of nodes) wireless protocol simulations; it has limited documentation and support because it is an open-source project from the University of California, Los Angeles (UCLA). QualNet is the commercial version of GloMoSim from Scalable Network Technologies, and it includes wired and wireless protocol support, customizable model source code, and documentation along with technical support. The basic functionality of these two simulators is similar, but the user experience differs depending on which one is chosen. QualNet offers many simulation features not found in GloMoSim, including models for common technologies such as IEEE 802.11, 802.16, the Global System for Mobile Communications (GSM), and military data links (e.g., Link 16 and the Single-Channel Ground and Airborne Radio System [SINCGARS]).
Network simulation tool         URL
BRITE                           http://www.cs.bu.edu/brite
Cnet                            http://www.csse.uwa.edu.au/cnet/
GloMoSim                        http://pcl.cs.ucla.edu/projects/glomosim/
J-Sim                           http://www.j-sim.org/
NS-2                            http://www.isi.edu/nsnam/ns/
OMNeT++                         http://www.omnetpp.org/
OPNET*                          http://www.opnet.com/
PacketStorm Network Emulator*   http://www.packetstorm.com/4xg.php
QualNet*                        http://www.scalable-networks.com/
SSFNet                          http://www.ssfnet.org/homePage.html
x-sim                           http://www.cs.arizona.edu/projects/xkernel/

(*) indicates the platform is a commercial product

Table 3. Some network simulation tools.
QualNet also includes built-in RF propagation models, support for HITL simulations, and parallel processing capabilities. Custom protocol models also can be created using QualNet. NS-2 is a popular open-source network simulator used primarily for developing and investigating new protocols in wired and wireless scenarios. It is used extensively in universities and offers large message-board communities where demonstrations and example source code can be obtained. The basic Internet Protocol (IP) routing algorithms and upper-layer protocols such as the Transmission Control Protocol (TCP) are included, but features such as support for HITL or parallel computing are possible only through custom user modifications or shared add-on modules. This is contrasted with OPNET Modeler, currently the leading commercial simulator. OPNET offers customer support, documentation, and customizable simulator code for small- and large-scale simulations. OPNET contains specific model descriptions for popular networking equipment, including operational parameters, and enables data capture and/or playback for actual networking equipment. HITL simulations and DS across a computing grid also are possible with OPNET. Academic researchers have tended to use NS-2 for detailed protocol design, but this may change as the OPNET university presence continues to grow in support of its university program.

The rich feature sets available in the commercial simulators, QualNet and OPNET, offer some compelling reasons to choose them over an open-source or even an in-house simulator; however, the designer must consider the specific project, as well as user familiarity and comfort.
All network simulators have steep learning curves; commercial simulators have the advantage of documentation and technical support to make the journey a little less daunting. Commercial simulators also offer a more scalable solution, in terms of implementation effort and run time, that includes debuggers and DS. However, commercial simulators typically do not offer the customization options of an open-source simulator and may not be best for protocol development; the designer can potentially achieve more insight into the underlying algorithms that compose open-source modules than into those of large commercial packages.

The designer must understand the network components and the corresponding models in sufficient detail to interpret the output correctly. As an example, [3] reports results from a comparison scenario between OPNET Modeler, NS-2, and a hardware testbed. The results demonstrate the importance of fine-tuning parameters and understanding underlying assumptions if the designer seeks a high-fidelity result; detailed parameter adjustments might not be required if the designer seeks only a coarse result. An example of a high-fidelity result would be the absolute maximum throughput achieved in a multinode network using a modified version of the TCP back-off algorithm, versus a general throughput trend for the same scenario. The designer must anticipate that, depending on the fidelity expected, M&S results may differ between individual simulators and between simulators and hardware testbeds. If high-fidelity simulations are required, the designer should be prepared to characterize individual model components and validate their behavior against real-world equipment before testing the complete system, so that anomalies can be isolated and explained.
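All of the simulators surveyed above are, at heart, discrete-event engines. The skeleton below (ours, not taken from any of the listed products) shows the central mechanism: a future event list ordered by timestamp, from which events are dequeued while the simulated clock jumps from one event time to the next.

    #include <cstdio>
    #include <functional>
    #include <queue>
    #include <vector>

    // Skeleton of a discrete-event simulation kernel: events are
    // (time, action) pairs kept in a priority queue ordered by timestamp.
    struct Event {
        double time;
        std::function<void()> action;
        bool operator>(const Event& o) const { return time > o.time; }
    };

    int main() {
        std::priority_queue<Event, std::vector<Event>, std::greater<Event>> fel;
        double now = 0.0;

        // Schedule two illustrative events.
        fel.push({1.5, [] { std::printf("t=1.5: packet arrival\n"); }});
        fel.push({0.7, [] { std::printf("t=0.7: link failure detected\n"); }});

        while (!fel.empty()) {          // main event loop
            Event e = fel.top();
            fel.pop();
            now = e.time;               // simulated clock jumps to next event
            e.action();
        }
        std::printf("simulation ended at t=%.1f\n", now);
        return 0;
    }

Execution speed in such an engine is dominated by the number of events generated, which is why scalability, fidelity, and run time trade off against each other as discussed earlier.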
Figure 2. DS topology examples. (The figure contrasts an enterprise-level, centralized topology, in which racked processing nodes PN1 through PNm exchange data with a PN master under DS control over a network link, with a WAN-distributed topology, in which processing nodes scattered across the Internet interact with a central DS control server and data repository.)
DISTRIBUTED SIMULATION

Generally, networks exhibit behavior that is nondeterministic and complex in nature, and large-scale simulations often are required to model large-scale networks sufficiently. As the Internet grows and becomes increasingly wireless, current serial network models may not achieve the fidelity required to capture complex network behavior accurately. This is especially true in the military domain, where network systems are highly ad hoc and are expected to support thousands or tens of thousands of nodes at one time. Distributed or parallel simulation approaches can be useful for boosting simulation performance by scheduling tasks and distributing their execution to independent computing platforms that operate in parallel. These computing platforms can be connected to a master control platform and can exchange information with it as required during the DS execution. Alternatively, each platform could be entirely distributed, with only minimal interaction with a centralized server that holds a repository of data to be processed (e.g., the Search for Extra-Terrestrial Intelligence [SETI] project). Generally, the computing platforms can be located centrally or distributed widely; as long as the network infrastructure is in place to support the exchange of information between the computing platforms and a master control platform or data repository, location can be inconsequential. Figure 2 illustrates possible DS computing topologies. DS also enables model abstraction by separating simulation tasks into different simulations that share interfaces for information exchange. For network M&S, the DS approach can be appropriate because of the nature of independent network processes, which reduces the need for strict timing requirements and significant control-message overhead. Table 4 lists some distributed simulation packages.
Federated simulations are a way to interconnect separate simulations to serve as a "simulation-of-simulations." An example could be an end-to-end wireless-radio simulation that uses an application simulation to generate a bit stream; a PHY simulation to convert the application bit stream to symbols conforming to a particular modulation scheme, such as binary phase shift keying (BPSK) or quadrature phase shift keying (QPSK); and a channel simulation to generate path losses and channel-fading characteristics. By linking these simulations together through standard interfaces that enable information exchange between them, they form a federated simulation in which the component simulations work together as a complete simulation system. One such standard for defining the interfaces is the High-Level Architecture (HLA) [4]. HLA establishes the abstractions, processes, and interfaces required for separate simulations to work together effectively; however, because the standard is comprehensive, it is very overhead intensive. Full implementation of HLA in a federated simulation can increase execution time because of the additional processes required for HLA compliance.

However, there are risks associated with the DS approach. First, the complexity required to develop a DS is generally greater than that of a serial simulation because multiple independent processing platforms must interact and coordinate the simulation functions.
Network simulation tool                                                URL
OPNET*                                                                 http://www.opnet.com/
QualNet*                                                               http://www.scalable-networks.com/
Georgia Tech Parallel/Distributed Network Simulator (PDNS)             http://www.cc.gatech.edu/computing/compass/pdns/
Virginia Tech SystemX                                                  http://www.arc.vt.edu/arc/SystemX/index.php
Department of Defense (DoD) High Performance Computing
Modernization Program (HPCMP)                                          http://www.hpcmo.hpc.mil/

(*) indicates the platform is a commercial product
Table 4. Some distributed simulation tools.

Second, there may be a requirement for the simulation processes on independent computing platforms to be synchronized, and as such, master timing algorithms might be required. Furthermore, these algorithms must be designed within the constraints of the control network used to transport synchronization messages. Elements of the control network that must be considered include the number of computing platforms, their location (i.e., centrally located or distributed across the Internet), and the traffic-loading characteristics, which may affect the latency of control messages. Third, control must be maintained over the DS for starting or stopping execution, initializing computing platforms with the correct inputs, and receiving outputs from the computing platforms during or after simulation execution. This can be performed through periodic messages sent to all computing platforms, by requiring computing platforms to send status updates, or through query-response mechanisms. Both the timing and the control requirements can add computational complexity or increase traffic loading on the control network. Furthermore, it is not necessarily true that increasing the number of processing nodes increases the performance of the DS: at some point, if too many processing nodes flood the control network infrastructure with overhead messages, the DS may experience poor execution performance. Optimizing the control network for the particular DS design is essential.
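The simplest form of this speedup, independent replications distributed to workers with no synchronization traffic at all, can be sketched as follows; the workload function and seeds are placeholders, and a real DS adds the timing and control machinery discussed above.

    #include <cstdio>
    #include <future>
    #include <random>
    #include <vector>

    // Embarrassingly parallel DS: each worker runs an independent replication
    // with its own RNG stream; the master only gathers results at the end.
    double replication(unsigned seed) {
        std::mt19937 rng(seed);
        std::exponential_distribution<double> svc(1.0);  // stand-in workload
        double metric = 0.0;
        for (int i = 0; i < 100000; ++i) metric += svc(rng);
        return metric / 100000.0;                        // e.g., a mean delay
    }

    int main() {
        const unsigned nWorkers = 4;
        std::vector<std::future<double>> jobs;
        for (unsigned w = 0; w < nWorkers; ++w)
            jobs.push_back(std::async(std::launch::async, replication, 1000 + w));

        double sum = 0.0;
        for (auto& j : jobs) sum += j.get();             // master collects outputs
        std::printf("mean over %u replications: %f\n", nWorkers, sum / nWorkers);
        return 0;
    }

When the distributed pieces are not independent replications but interacting federates, this clean separation disappears, and the synchronization and control costs described above begin to dominate.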
WAVEFORM GENERATION FOR T&E

T&E engineers generally verify the performance of a complete system by supplying an input stimulus and measuring the corresponding output. In the case of a wireless networking system, the input and output are standard analog waveforms. This section discusses possible challenges in the conversion of a digital waveform implementation to an analog vector-signal generator (arbitrary waveform generator) output. The designer typically begins with a discrete, time-sampled version of a waveform in one of the aforementioned commercial software packages, such as Matlab or LabVIEW. Generally, the time-domain samples are stored as in-phase (I) and quadrature (Q) components, which completely capture the phase information of a given waveform.
I and Q are the preferred representation for an arbitrary waveform, especially at microwave frequencies (300 MHz to 300 GHz), because direct phase manipulation of high-frequency carriers is non-trivial to implement. Some of the potential challenges in arbitrary waveform generation include [5]:
• Waveform phase discontinuity
• Clipping
• Sampling and filtering
The waveform designer can mitigate these sources of error by carefully defining the digital waveform. First, waveform phase discontinuity occurs when the simulated periodic waveform does not have an integer number of cycles stored in I/Q memory: if a non-integer number of cycles is stored, the playback phase changes abruptly when the end of the memory depth is reached and replay returns to the beginning of memory. Phase discontinuity causes spectral regrowth and distortion at the vector-signal generator output. Waveform clipping is a result of overdriving the digital-to-analog converter (DAC); it can be avoided by scaling I/Q values to less than full scale after normalization, thereby avoiding intermodulation distortion (IMD). The waveform designer also must sample the waveform sufficiently in excess of the Nyquist rate that sampling images can be removed by the baseband low-pass filter (LPF); for vector-signal generators with wideband generation capabilities, the large passband of the LPF might otherwise allow sampling images to distort the generated signal if oversampling is not applied [5].

Unfortunately, there are other sources of error in waveform generation that generally cannot be mitigated easily. Many of these are specific to the vector-signal generator being used, and furthermore are temperature, frequency, and power dependent. They include I/Q path delay and skew, peak-to-average power ratio (PAPR) effects, automatic level-control (ALC) burst overshoot and droop, and group delay. The effects of these error sources generally are mitigated through equipment-specific pre-distortion techniques. Software packages such as Agilent Signal Studio enable the designer to create standards-compliant waveforms to which pre-distortion is applied for a specific vector-signal generator, such that a calibrated test stimulus is achieved at the RF output. This point should not be overlooked by the designer.
Although Matlab, LabVIEW, and similar products allow flexibility in arbitrary waveform creation, the analog-equivalent waveform is not guaranteed to be calibrated.
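The phase-continuity rule above can be enforced directly when the I/Q buffer is computed: choose the tone frequency so that the memory depth holds an integer number of cycles, and back the amplitude off from full scale. The sketch below (our illustration; the sample rate, memory depth, and backoff are arbitrary) shows one way to do this.

    #include <cmath>
    #include <cstdio>
    #include <vector>

    // Build an I/Q buffer for a complex tone such that replay wraps from the
    // end of memory back to the start with no phase discontinuity: the buffer
    // holds an exact integer number of carrier cycles.
    int main() {
        const double pi = 3.14159265358979323846;
        const double fs = 100e6;          // sample rate, well above Nyquist
        const int nSamples = 4096;        // waveform memory depth
        const double fWanted = 1.234e6;   // requested tone offset

        // Round the frequency so that cycles = f * N / fs is an integer.
        const int cycles = static_cast<int>(std::round(fWanted * nSamples / fs));
        const double f = cycles * fs / nSamples;  // actual phase-continuous tone

        const double backoff = 0.8;       // stay below DAC full scale (clipping)
        std::vector<double> I(nSamples), Q(nSamples);
        for (int n = 0; n < nSamples; ++n) {
            const double ph = 2.0 * pi * f * n / fs;
            I[n] = backoff * std::cos(ph);
            Q[n] = backoff * std::sin(ph);
        }
        std::printf("requested %.0f Hz, generated %.0f Hz (%d cycles in memory)\n",
                    fWanted, f, cycles);
        return 0;
    }

The equipment-specific impairments listed above (I/Q skew, ALC overshoot, group delay) cannot be fixed this way and still require the pre-distortion tools mentioned in the text.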
SUMMARY AND CONCLUSIONS

This article presents a general overview of network M&S with a discussion of various architectural issues related to choosing or developing M&S tools. The article also includes a list of publicly available commercial and open-source tools for PHY and waveform development, RF propagation, network-layer M&S, and DS. In addition, the article discusses the performance advantages of DS and the challenges associated with waveform generation for T&E. There is no universal solution to any aspect of M&S; each tool we describe has associated limitations that should be well understood by the designer. The designer must be equipped with enough knowledge about a given project to select an appropriate M&S tool with which he or she is comfortable and whose limitations have minimal impact on the result. This article seeks to identify the most popular M&S tools throughout the protocol stack and to help the reader begin considering the capabilities required to achieve a given M&S project objective.
ACKNOWLEDGMENTS

The authors would like to acknowledge Jack L. Burbank and Brian K. Haberman of the Johns Hopkins University Applied Physics Laboratory for their contributions to this article. The authors also would like to thank the industry representatives who participated in the 2006 and 2007 Globecom Design and Developer's Forum on M&S tools: Heath Noxon of National Instruments, Mike Griffin and Randy Becker of Agilent Technologies, John Irza and Dick Benson of The MathWorks, Bo Wu and Johannes Stahl of CoWare, Jigar Shah of Scalable Network Technologies, and Yevgeny Gurevich and Shawn Khazzam of OPNET Technologies.
REFERENCES

[1] A. Maria, "Introduction to Modeling and Simulation," Proc. 1997 Winter Simulation Conf., 1997.
[2] T. Andel and A. Yasinsac, "On the Credibility of MANET Simulations," Computer, July 2006.
[3] G. F. Lucio et al., "OPNET Modeler and NS-2: Comparing the Accuracy of Network Simulators for Packet-Level Analysis Using a Network Testbed," WSEAS Trans. Comp., vol. 2, no. 3, July 2003, pp. 700–07.
[4] IEEE Std. 1516-2000, "IEEE Standard for Modeling and Simulation High Level Architecture — Framework and Rules," Sept. 21, 2000.
[5] M. Griffin and J. Hansen, "Conditioning and Correction of Arbitrary Waveforms — Part 1: Distortion," High Frequency Electronics, Aug. 2005; "Conditioning and Correction of Arbitrary Waveforms — Part 2: Other Impairments," High Frequency Electronics, Sept. 2005.
ADDITIONAL READING

[1] M. C. Jeruchim, P. Balaban, and K. S. Shanmugan, Simulation of Communication Systems: Modeling, Methodology, and Techniques, Kluwer Academic/Plenum, 2000.
[2] G. A. Di Caro, "Analysis of Simulation Environments for Mobile Ad Hoc Networks," tech. rep. no. IDSIA-24-03, Dec. 2003.
[3] D. Cavin et al., "On the Accuracy of MANET Simulators," Proc. Wksp. Principles of Mobile Comp., 2002.
[4] A. Brown and M. Kolberg, "Tools for Peer-to-Peer Network Simulation," draft-irtf-p2prg-core-simulators-00.txt.
[5] T. Watteyne, "Using Existing Network Simulators for Power-Aware Self-Organizing Wireless Sensor Network Protocols," INRIA, no. 6020, Sept. 2006.
BIOGRAPHIES

WILLIAM T. KASCH ([email protected]) received a B.S. in electrical engineering from the Florida Institute of Technology in 2000 and an M.S. in electrical and computer engineering from Johns Hopkins University in 2003. His interests include various aspects of wireless networking, including MANETs, IEEE 802 technology, and cellular. He participates actively in both the IEEE 802 standards organization and the Internet Engineering Task Force.

JON R. WARD ([email protected]) graduated from North Carolina State University (NCSU) in 2005 with an M.S. degree in electrical engineering. He works at the Johns Hopkins University Applied Physics Laboratory (JHU/APL) on projects focusing on wireless network design and interference testing of standards-based wireless technologies such as IEEE 802.11, IEEE 802.15.4, and IEEE 802.16. He has experience in wireless network M&S and T&E of commercial wireless equipment.

JULIA ANDRUSENKO ([email protected]) received B.S. and M.S. degrees in electrical engineering in 2002 from Drexel University, Philadelphia, Pennsylvania. She currently works as a communications engineer at the Johns Hopkins University Applied Physics Laboratory, Laurel, Maryland. Her recent work focuses on ground-to-ground RF propagation prediction, military satellite communications, wireless networking, communications vulnerability, and MIMO technology. Her background is in communications theory, wireless networking, computer simulation of communications systems, evolutionary computation, genetic algorithms, and programming.
MODELING AND SIMULATION: A PRACTICAL GUIDE FOR NETWORK DESIGNERS AND DEVELOPERS
Simulation Tools for Multilayer Fault Restoration

George Tsirakakis, King's College London
Trevor Clarkson, Department for Business
ABSTRACT

The complexity of networks today makes it difficult to handle fault restoration by means of human intervention. Future network architectures are expected to be self-protecting and, more generally, self-organizing. In this article we describe the modeling methods and simulation tools we have used for the analysis of a new integrated restoration scheme operating at multiple layers/networks. The networks over which we have performed the analysis are ATM and SDH, but the methodology could be applied to other multilayer architectures too. A network model has been created that consists of both ATM and SDH layers. Using this network model, we have seen how the new scheme explores the free bandwidth over both layers. The multilayer bandwidth exploration was modeled using Microsoft Visual C++. Using OPNET Modeler, a node model was created to analyze the restoration message processing within a node and its interaction with other nodes, for the purpose of simulating the restoration delay. The article emphasizes the simulation methodology rather than the results of the simulation. Among other things, we describe the additional functionalities a simulation tool must have in order to simulate multilayer networks. A comparison of OPNET and Microsoft Visual C++ is made, describing their advantages and the reasons we found it necessary to use both.
INTRODUCTION

The analysis of a bandwidth restoration scheme has two main elements. One is the study of the scheme's delay in restoring failed traffic; the other is the study of the scheme's ability to restore all failed traffic, called the restoration rate (RR). RR is defined as the percentage of failed traffic the scheme can restore. Every restoration scheme operates by exchanging control and information packets across the network elements. To find the restoration delay of the scheme by simulation, we need to simulate the process of packet exchange and the actions that set up the reroute.
This, in turn, requires the restoration logic to be defined (what is going to happen under certain conditions) and the delay corresponding to the physical execution of events (e.g., crossconnects in nodes or packet processing) to be captured. These are achieved by adding user program code to the simulation of these components. OPNET Modeler was chosen as the simulation tool for modeling the restoration delay. It has built-in tools for modeling delays (see "OPNET Modeler Simulator" later) and for adding user processing function blocks (e.g., for restoration message processing) within existing or new OPNET node models. The user can program the restoration logic via these added functions in the nodes and also set delays at certain stages of the restoration execution. OPNET can model classic single-layer restoration but, as explained later, it has some drawbacks in simulating many layers of a multilayer network simultaneously. Due to this limitation, we have chosen to simulate the delay of a special case of multilayer network in which all asynchronous transfer mode (ATM) nodes coincide with synchronous digital hierarchy (SDH) nodes, and ATM links are one hop in SDH. This assumption is not very critical for this part of the analysis, as delay is affected mostly by the inherent actions in nodes rather than by the topology.

RR is directly related to the restoration bandwidth found on the reroutes. To find the RR, one should simulate the "route find" function, which reroutes failed traffic around the failure (e.g., by implementing a Dijkstra or linear programming algorithm). Topologically different reroute functions will explore free bandwidth differently; for example, line (local) restoration explores available bandwidth less efficiently than path (end-to-end) restoration. The main task for the modeler in simulating the RR is to implement, through additional program code, the function that finds the reroutes of the failed link traffic, and to apply it over different failure cases in the network. This simulation may be done independently of the delay simulation (if the reroutes are found at the beginning, before the exchange of packets).
For the RR simulation, where topology information is important but delay is not, we have developed our own simulation tool using Visual C++. As explained later, Visual C++ can model and display various logical links on top of physical ones, and it also allows extended user interaction with the on-screen topology, something very useful for this part of the work.

Integrated restoration is a new scheme that explores scattered and isolated segments of free bandwidth on each layer, forming continuous connections by joining the segments through hybrid network elements. To study an integrated restoration scheme we need a simulation tool capable of simulating many layers at the same time. The user needs to interact with both layers, something easily done with Visual C++ using its built-in Windows programming blocks to create a customized user interface for exploring the reroute topology. Node models belonging to different layers need to be able to interact with each other so that their restoration delay can be studied. This is achieved by building interacting function blocks using OPNET Modeler's node process modeling tools.

Restoration schemes most often apply to the high-capacity core part of the network, and information on this part is usually unknown. Thus, we need to build a representative model of the network topology and bandwidth allocation (working and spare) across the network, on which the scheme is going to perform and be tested. For this, we have followed a simplified method, without going through full design procedures, by making certain assumptions (e.g., a simplified multiplexing hierarchy and bandwidth granularity) that are adequate for this analysis. The created network was used for the RR analysis with Visual C++. In the following sections we start with an outline of the integrated restoration scheme and continue with a description of the simulation methods. There, we describe how we have created the network, OPNET, and Visual C++ models, giving at the same time information about the tools' capabilities.
A SHORT DESCRIPTION OF THE INTEGRATED RESTORATION SCHEME

Each link, logical or physical, in a multilayer network has working and free bandwidth. A restoration reroute will run along a set of links with free bandwidth. The free bandwidth on each layer is isolated and separate from the bandwidth on the other layer. A restoration process may be initiated on a layer and may fail if insufficient free bandwidth is found on that layer, even though there may be free bandwidth on the other layer. The integrated restoration scheme is new in that it uses bandwidth on links belonging to multiple layers at the same time. This is possible with the use of new hybrid ATM/SDH network elements [1-3], which have functionalities of multiple layers. Hybrid network elements have been used in the past to provide efficient migration to new technologies by allowing better filling/grooming of traffic channels [1], but their functionalities have not been explored for restoration.
Figure 1. Modules within a hybrid node model.

An integrated reroute will consist of SDH segments and ATM segments. A segment is a path (series of links) on the same layer. Hybrid nodes join the SDH and ATM segments at the point where they meet, via an ATM-to-SDH interface port, to form a continuous connection for rerouting. In this way, segmented free bandwidth on each layer is joined together to form a complete connection. Restoration messages are created and exchanged by modules in the nodes (Fig. 1), which process each message, make the necessary crossconnections, and forward the message to the next node on the reroute.

As stated, the principle of this integration could be applied to other technologies. As long as each layer has isolated/channelized paths of bandwidth, and there is a hybrid node combining these layers' technologies, the principle of joining these isolated segments of bandwidth can be applied. ATM was chosen because it is a representative packet-switching scheme that creates reserved bandwidth channels/pipes, the virtual paths (VPs). Multiprotocol label switching (MPLS) is a similar technology in this context. The patent in [3] describes the implementation of a hybrid SDH circuit and packet switch, where it is noted that the packet part could be of any technology, such as MPLS. It is clear that the integrated restoration scheme can be applied to MPLS over SDH/synchronous optical network (SONET).
CREATING THE NETWORK MODEL

Since we do not have enough network information (logical link topology, link sizes, etc.), we would ideally have built a new network using linear optimization techniques, in the same way a network is designed [7].
Figure 2. Restoration of multiple paths.
However, here we have used a much simpler and quicker procedure, as mentioned above and as other researchers have done. The created model was used by Visual C++.
GENERATING PDH LINKS AND PDH LOAD

The modeling of plesiochronous digital hierarchy (PDH) links is necessary in order to implement a realistic SDH and ATM restoration phase. This is because, during SDH-layer path restoration, interrupted SDH virtual container (VC) paths carrying PDH compete for the available free bandwidth on fibers with SDH VC paths carrying ATM traffic; we need to include the effect PDH paths have on ATM. For the design of PDH links, methods similar to those described in [4, 5] are followed. According to these, the demand/load between a pair of SDH crossconnects is inversely proportional to their distance. PDH links were created using the procedure shown below. Under this method, a fiber may support multiple PDH links, all of which fail if that fiber fails (Fig. 2).

    For each pair of SDH crossconnects {
        Find their minimum hop distance (MD)
        If (MD <= 3) {
            Select, among the MD routes, the route with the least load
                (this route will be a new PDH path)
            Set the demand on this route = (1 / MD) * Constant
        }
    }
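A compact C++ rendering of this procedure, on a hypothetical adjacency-list graph of SDH crossconnects, might look as follows; the selection of the least-loaded route among the minimum-hop candidates is elided, and only the hop-distance test and the demand rule are shown.

    #include <cstdio>
    #include <queue>
    #include <vector>

    // Hop distance between two SDH crossconnects by plain BFS.
    int hopDistance(const std::vector<std::vector<int>>& adj, int src, int dst) {
        std::vector<int> dist(adj.size(), -1);
        std::queue<int> q;
        dist[src] = 0;
        q.push(src);
        while (!q.empty()) {
            int u = q.front(); q.pop();
            if (u == dst) return dist[u];
            for (int v : adj[u])
                if (dist[v] < 0) { dist[v] = dist[u] + 1; q.push(v); }
        }
        return -1;
    }

    int main() {
        // Toy 5-node SDH topology (undirected adjacency lists).
        std::vector<std::vector<int>> adj = {{1,2},{0,3},{0,3},{1,2,4},{3}};
        const double C = 12.0;                 // the "Constant" in the rule
        for (int a = 0; a < static_cast<int>(adj.size()); ++a)
            for (int b = a + 1; b < static_cast<int>(adj.size()); ++b) {
                int md = hopDistance(adj, a, b);
                if (md > 0 && md <= 3)         // create a PDH path, set its load
                    std::printf("PDH link %d-%d: MD=%d demand=%.1f\n",
                                a, b, md, C / md);
            }
        return 0;
    }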
GENERATING ATM LINKS AND ATM LOAD

Here we are interested in the topology of SDH VC paths that carry ATM traffic. According to [6, 7], if a demand between a pair of ATM crossconnects approaches the bandwidth granularity of their server (SDH) network (e.g., the VC3 bandwidth), the optimal topology (routing of this demand) is the one having a direct SDH pipe
(a VC) between the pair, rather than ATM VPs going through another ATM crossconnect to be combined for consolidation (to obtain better filling of the SDH VCs). The ATM topologies used here represent such cases (ATM demand close to the SDH VC bandwidth). The ATM VP (path) topology does not need to be modeled, since VP path restoration is not implemented; only VP line restoration is implemented. Line restoration happens between the nodes adjacent to the failed link, while path restoration happens between the end nodes of the path; thus, in line restoration we do not need to know the end nodes. ATM links are created manually using the Visual C++ Windows user interface. Between a selected set of ATM node pairs, SDH VC paths are created that connect each pair, defining the fibers over which these paths pass. An ATM-layer link between a pair of ATM nodes can be set to be supported by more than one SDH path. (If more than one SDH VC connects a pair of ATM nodes, their routing in the SDH network may be different, as shown in Fig. 3.)

The assignment of demand (ATM load) quantities within the ATM VC structure is as follows. According to [8], network analysis has been performed using random demand patterns, such as those conforming to uniform, normal, or scaled demand distributions. Here, we adopt a normal distribution for the applied demand pattern, with an average value over the whole network set by the user depending on the level of ATM adoption. The working bandwidth on each fiber is calculated from the outcome of the PDH and ATM link creation stages, and is the sum of the loads of the fiber's PDH and ATM clients.
VISUAL C++ SIMULATOR

Visual C++ was chosen because its object-oriented functionality and containment relationships make it suitable for straightforward modeling of multilayer networks of any type (links within other links). It has close bonds with the Windows operating system and has built-in functionality to create windows, buttons, textboxes, and so on for input and output. With Visual C++, users can design their own input/output interface to analyze the network as they would like. For example, double-clicking on a link could bring up data, such as load, free bandwidth, and restored bandwidth, for all layers on that link, or the physical path of a logical link could be displayed. We can customize the plots to display information as required, and the information on the plots can be manipulated; for example, if a plot of average values is presented, all the individual values behind a specific average can be displayed in one click. We can make individual links fail by clicking on them and present restoration information in a new window, or display it on the individual segments of the reroute. We can also create single-layer or multilayer networks, as shown in Fig. 4, adding links by clicking and dragging, and we can display any layer we wish, or all layers superimposed. This flexibility is a great advantage in analyzing the operation of a scheme.
Figure 4 shows a multilayer network, with logical ATM (red) and SDH (blue) links, displaying the working and free bandwidth on them. Figure 2 displays the result of a PDH path restoration: certain PDH paths (black lines) carried by a failed fiber are down and are restored where possible (green lines). Visual C++ has the additional advantage of extensive built-in debugging tools, which help in controlling and testing the execution of the modeled scheme.
MODELING A MULTILAYER NETWORK WITH C++

We have followed an object-oriented approach so as to be generic and able to apply the same approach to other types of network technologies. Every link in any network layer is modeled as a generic object of class CLink, having attributes and methods. The CLink object has a LAYER attribute identifying the network to which it belongs. A network layer is modeled as a collection of CLinks, and an ordered array of CLink objects forms a path. Each CLink object holds pointers to the CLinks above and below it, as follows. Many paths of a client layer may pass through a single server CLink at the layer below them: in Fig. 3, ATM links L34 and L12 (belonging to some paths in ATM) both pass over (are supported by) SDH VC link S1. Also, a client CLink may be supported by more than one server path (i.e., a CLink could be a set of paths at its server layer): in Fig. 3, L12 is supported by SDH paths S2-S3-S4-S5-S6 and S1-S7. Thus, a CLink object has an array of pointers (CLIENTLINKS) to the CLink links it supports (located at its client network layer) and an array of arrays of pointers (SERVPATHS) to the CLink paths it is supported by (located at the server network layer below). This modeling allows unlimited nesting of any number of layers on top of each other. The CLink object has attributes, such as free/working bandwidth, and methods, such as LineRestore(), PathRestore(), and ClientRestore().

Normal restoration uses CLinks in the same layer as the failed links. Integrated restoration happens after all other restorations have been tried (sequentially), and uses CLinks located at both layers in the manner described previously. To find a reroute, ideally we would use a linear optimization routine, as this would give the best reroute (or reroutes) in terms of accommodating all failed bandwidth while taking the least network resources. For an integrated restoration reroute, a multilayer optimization scheme would be needed, similar to that described in [7]; we can observe from [7] that this is a very complicated and time-consuming step. We have created and used a simplified C++ routine, which we have compared with linear programming optimization routines available in the NAG libraries, finding a few, but not very large, differences in the reroutes found. However, the integrated reroute part has plenty of scope for improvement in finding possible reroutes.
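The description above suggests a class along the following lines. This is our reconstruction from the text, with method bodies elided; the authors' actual source is not reproduced here.

    #include <vector>

    // Reconstruction (ours) of the CLink object described in the text: a
    // generic link usable at any layer, pointing up to the client links it
    // supports and down to the server-layer paths that carry it.
    class CLink {
    public:
        int    LAYER = 0;             // which network layer this link is in
        double workingBandwidth = 0;
        double freeBandwidth = 0;

        // Client links at the layer above that this link supports.
        std::vector<CLink*> CLIENTLINKS;

        // One or more server-layer paths carrying this link; each path is an
        // ordered array of CLinks at the layer below (e.g., S2-S3-S4-S5-S6).
        std::vector<std::vector<CLink*>> SERVPATHS;

        void LineRestore()   {}  // reroute between the nodes adjacent to a failure
        void PathRestore()   {}  // reroute between the end nodes of the path
        void ClientRestore() {}  // restore the client links this link supports
    };

    // A layer is a collection of CLinks and a path an ordered array of them,
    // so any number of layers can be nested on top of each other.
    using Layer = std::vector<CLink*>;
    using Path  = std::vector<CLink*>;

The containment relationship the authors mention maps naturally onto these nested vectors of pointers, which is what makes the nesting of layers unlimited.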
Figure 3. Connection client-server relationship. S1 supports both L34 and L12; L12 is supported by two SDH VC3s, one going through S2-S3-S4-S5-S6 and one going through S1-S7.
OPNET MODELER SIMULATOR

Process models in OPNET Modeler representing restoration logic are built as finite state machines (FSMs). FSM models are common in telecommunication networks. An FSM has a defined set of states, a set of conditions for transferring from one state to another, and a set of functions that are executed when a transition from a state occurs. When in a state, the module processor in the node is restricted from doing anything else until it finishes the relevant operation (a restoration action implemented by user program code). It waits in this state for a time set by the user; in this way we model the delay elements in the node. The only condition under which a state is left before the wait time expires is if a new packet is received, which is then added to a queue for later processing (the X/PX transition). The implemented FSM is shown in Fig. 5. It consists of seven states: initialization (INI), find route (FIND_R), packet process (PKT_PRO), reroute (REROUT), send packet (SEND), port add (PORT), and free (FREE). As an example, if a fault occurs in the network and the node is the master node (controller), the FLT&MSTR/SF transition from FREE to FIND_R occurs, and then the "packet process" (create packet), "reroute" (crossconnect), and "packet send" (forward packet) states are occupied in turn. The "add port" state is occupied only if integrated restoration (the new proposed scheme) is applied.

In OPNET, each FSM is a module, as shown in Fig. 1. Our hybrid node model contains an ATM module, an SDH module, and a manager module (MAN) that represents the functions required to manage each element. These modules can communicate internally. OPNET has built-in packet objects and link objects that can be modified in their content and structure.
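The state and transition structure just described can be captured in a few lines; the sketch below (ours, with all guard conditions of Fig. 5 reduced to two booleans) walks a master node through the happy path: fault detected, route found, packet created, crossconnect made, optional port added, packet sent.

    #include <cstdio>

    // Skeletal FSM mirroring the states described above (Fig. 5). Guard
    // conditions and per-state delays of the OPNET model are reduced to a
    // simple linear walk for illustration.
    enum class State { FREE, FIND_R, PKT_PRO, REROUT, SEND, PORT, INI };

    State next(State s, bool faultAndMaster, bool integrated) {
        switch (s) {
            case State::INI:     return State::FREE;
            case State::FREE:    return faultAndMaster ? State::FIND_R : State::FREE;
            case State::FIND_R:  return State::PKT_PRO;  // route found: create packet
            case State::PKT_PRO: return State::REROUT;   // process: crossconnect next
            case State::REROUT:  return integrated ? State::PORT : State::SEND;
            case State::PORT:    return State::SEND;     // add ATM/SDH port first
            case State::SEND:    return State::FREE;     // forward packet, go idle
        }
        return State::FREE;
    }

    int main() {
        State s = State::INI;
        const char* names[] = {"FREE","FIND_R","PKT_PRO","REROUT","SEND","PORT","INI"};
        for (int step = 0; step < 7; ++step) {
            s = next(s, /*faultAndMaster=*/step == 1, /*integrated=*/true);
            std::printf("step %d -> %s\n", step, names[static_cast<int>(s)]);
        }
        return 0;
    }

In the actual OPNET model, each state additionally holds the processor for a user-set wait time, which is how the per-state delays listed below enter the simulated restoration delay.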
Figure 4. A two-layer network.

Using OPNET's packet modeling tools, we create a packet model having a structure with space to hold the route found, the bandwidth to be taken, the layer of restoration, and so on. Links are set to contain two communication streams for modeling the transport of restoration packets: the ATM operation, administration, and maintenance (OAM) transport stream, with a data rate of 1 Mb/s, and the SDH overhead transport stream, with a data rate of 64 kb/s. The SDH and ATM nodes receive restoration packets from the SDH and ATM streams, respectively; a hybrid node can receive packets from either stream. In a hybrid node, if the ATM (or SDH) module receives a packet of the integrated restoration type and finds (from the bandwidth-type array in the packet) that the bandwidth type on the next link changes (e.g., from ATM to SDH), it forwards the packet to the MAN module. The MAN module creates a new port and forwards the packet to the next node via the ATM or SDH module, according to the next bandwidth type (a code sketch of this packet structure and forwarding rule is given after the list below).

In the way described previously, we can model a node and its inherent delay elements during a reroute execution. We have modeled the following delay elements:
• Packet queuing
• Packet processing (10 ms/packet)
• Packet send (10 ms/packet)
• Crossconnect (1 ms/VP for ATM, 10 ms/VC for SDH)
• Local map update (1 ms)
• Q3 management protocol stack (30 ms)
• Add SDH/ATM port (20 ms/port)
• Transmission delays inherent in the link models
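The packet structure and the hybrid forwarding rule just described might be sketched as follows; all field names are our own, since the OPNET packet format itself is not reproduced in the article.

    #include <cstdio>
    #include <vector>

    // Sketch (field names ours) of the restoration packet: the found route,
    // the bandwidth to take, and the bandwidth type of each hop, which a
    // hybrid node inspects to decide whether the MAN module must create an
    // ATM/SDH port joining two segments.
    enum class BwType { ATM, SDH };

    struct RestorationPacket {
        std::vector<int>    route;      // node ids along the reroute
        std::vector<BwType> hopType;    // bandwidth type per hop of the route
        double              bandwidth;  // bandwidth to be taken
        int                 layer;      // layer of restoration (or integrated)
    };

    // At hop i: if the bandwidth type changes on the next link, the MAN
    // module must add a port before the packet is forwarded.
    bool needsPortAdd(const RestorationPacket& p, std::size_t i) {
        return i + 1 < p.hopType.size() && p.hopType[i] != p.hopType[i + 1];
    }

    int main() {
        RestorationPacket p{{3, 7, 9, 12},
                            {BwType::ATM, BwType::SDH, BwType::SDH}, 16.0, 0};
        for (std::size_t i = 0; i + 1 < p.hopType.size(); ++i)
            std::printf("hop %zu: %s\n", i,
                        needsPortAdd(p, i) ? "add port" : "forward");
        return 0;
    }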
OPNET Modeler at present has no ready mechanism to relate an ATM VP or MPLS path to an SDH VC4 time slot or a fiber wavelength, and to display this relationship to the user in an easily explorable way. For a limited multilayer simulation, OPNET allows the combined use of two of its products. OPNET WDM Guru or SP Guru Transport Planner has extra functionality for simulating restoration at the physical layer, while OPNET SP Guru Network Planner has network layer restoration simulation tools. All of them have tailored user menus for exploring and supplying network information relating to restoration. Even though they can be used together, there is no facility to add a process in an SP Guru Network Planner node that would control an SP Guru Transport Planner node, which is what is needed during integrated restoration (and which could be done in OPNET Modeler). Ideally, we would like to have the process modeling mechanisms of OPNET Modeler within the extended user interface of WDM Guru, SP Guru Transport Planner, or SP Guru Network Planner. As a simulation package, OPNET has additional tools, such as animation, with which we can see the flow of packets, and a native debugger, with which we may enter stop points and examine the state of the model. Combining the native debugger and the animation, we can follow the creation, processing, and transport of a particular packet to its destination, or visualize the FSM state changes. OPNET can also link with an external debugger (e.g., the Microsoft C++ debugger) in such a way that the external debugger controls the execution of OPNET with breakpoints and displays the values of the C variables in the OPNET model. OPNET interfaces well with popular networking tools, such as HP OpenView or Sniffer, for importing topology automatically, and also for importing information for building a so-called "application characterization environment," in which OPNET can simulate the delay components in the execution of networked applications or the operation of protocols, and identify bottlenecks (e.g., server processing delays vs. transmission delays).
Figure 5. Finite state machine of the SDH module in OPNET.
We did not find OPNET ideal for exploring topology changes and behavior, which was part of the modeling of the RR. This was mainly because of the need for a customized user interface for this function. For example, we prefer to see the results of a single link failure by double-clicking on it, seeing the rerouting topology of each restoration scheme occurring sequentially on the failure, and seeing the numerical results on screen at the same time. With Visual C++ we could create control menus and windows that allow such customized interaction with the network.
RESULTS

OPNET restoration delay results are shown in Table 1; each row contains the results for a different failed link. Restored VPs of integrated restoration represent unrestored VPs of ATM restoration. From our simulations, we have found the delay of integrated restoration to be less than 2 s, the time depending on the amount of failed bandwidth. The Visual C++ RR simulation has shown that with integrated restoration we gain up to 20 percent on the required ATM backup bandwidth for the same ATM RR result. The gain of introducing integrated restoration depends on the SDH and ATM hop limits, the density of ATM nodes, and the variance of spare bandwidth over the network. Analysis of the results will be given in another paper to be published.
Table 1. Restoration time for restored traffic (ATM hop limit = 4; SDH hop limit = 3).

              SDH restoration          ATM restoration          Integrated restoration
              Restored VCs  Time (s)   Restored VPs  Time (s)   Restored VPs  Time (s)
              7             0.286      40            0.220      84            0.764
              7             0.388      42            0.260      110           0.844
              15            0.440      44            0.198      0             —
              11            0.409      50            0.240      38            0.605
              10            0.358      83            0.280      0             —
              18            0.500      72            0.290      176           1.04

CONCLUSIONS

Simulation of multilayer networks requires functionality beyond that of normal simulations. We need to be able to relate, within the model, the logical links of the upper layer to the physical (or logical) links of the lower layer (i.e., to know which paths and channels they traverse). If we analyze issues of integration and layer interaction, simulation functions on one layer need to interact dynamically with simulation functions on the other layer, in a similar fashion to how nodes on one layer interact with nodes on the other. This is not the case with present simulation tools. The graphical interface needs to be more advanced and powerful in the information it can provide, and be able to isolate, hide, or increase the level of information as needed. The task of creating a model network topology on which the simulation is going to run is also challenging, but if certain assumptions are accepted it is much simplified, as we have shown. We explain how Visual C++ object-oriented capabilities help in modeling multilayer
networks by allowing the definition of the relationship between logical and physical links. We also show how its customized graphical Windows user interface can provide specialized support to visualize and analyze the topology changes that occur during restoration. This was very helpful in analyzing the behavior of the scheme with regard to rerouting. The C++ program code we had to write for this simulation was about 200 kbytes. OPNET Modeler is invaluable in modeling the delay of the restoration scheme, as it has built-in means to incorporate delay elements within the restoration process. It also has ready tools for creating models for packet and link objects. However, OPNET Modeler, or any other OPNET package alone, has limitations in modeling multilayer architectures for this purpose, as it does not have mechanisms to easily model extended (multilayer) bandwidth containment and multiplexing structures, or ready tools to display them, while at the same time allowing modeling of interacting node processes. Finally, integrated routing is a profitable research area, where both layers are viewed dynamically and concurrently. If linear programming tools are to be used in the simulation, new packages must be developed because of the complexity of this tool for the optimization of multilayer reroutes.
BIOGRAPHIES

GEORGE TSIRAKAKIS ([email protected]) worked on Management of Integrated ATM & SDH (MISA), an E.E.C.-funded project, while a research assistant at University College London, United Kingdom. His Ph.D. thesis was on the integration of fault restoration in ATM over SDH networks and was completed at King's College London, supported by a Greek government scholarship. His Ph.D. resulted in an application for a U.K. patent. He has been a senior lecturer at the University of Portsmouth, United Kingdom.

TREVOR CLARKSON [SM] ([email protected]) received his B.Sc. and Ph.D. degrees in electronic engineering from King's College London. He was head of research at the United Kingdom's communications regulator from 2002 to 2004, and is now head of environmental regulation in the U.K. Department for Business. He was previously a full professor (1995) and head of department (2000–2002) in the Department of Electronic Engineering at King's College London. His research interests are in radio, spectrum management, intelligent communications, and networks. He has published over 120 papers in these fields. He is a Fellow of the IET and a Chartered Engineer.
MODELING AND SIMULATION: A PRACTICAL GUIDE FOR NETWORK DESIGNERS AND DEVELOPERS
Design Validation of Service Delivery Platform Using Modeling and Simulation

Tony Ingham, BT
Sandeep Rajhans, Dhiraj Kumar Sinha, Kalyani Sastry, and Shankar Kumar, Tech Mahindra Ltd.
ABSTRACT

The value of models as tools to design and plan networks, aid the decision-making process, and conceptualize abstract ideas is well known. Models play a key role in evaluating varied design options and assist in discovering design issues likely to impact performance. In the service delivery platform, spreadsheet models can be applied to study strategic traffic volumes and estimate end-to-end latencies and box capacities, whereas more complex simulation models can help clarify the dynamic aspects and trade-offs that influence the end-user experience. This article discusses how modeling and simulation effectively help validate the design of the various components constituting the service delivery platform.
INTRODUCTION

Design validation of a system refers to the act of justifying the rationale for a particular choice of design for that system. In complex systems, where the number of entities interacting with one another is high and the number of variables influencing system behavior is higher still, the impact of various design options on the end-user experience is difficult to predict. Large telecommunication applications running in real-time environments must have high, predictable, and scalable performance. The service delivery platform (SDP) is an architecture used in the telecom domain for rapid development and deployment of new converged media services, from the plain old telephone system (POTS) to the latest Internet Protocol television (IPTV) or triple-play services. A dissatisfying user experience discovered post-implementation will adversely impact the service provider's business. Moreover, after the design is implemented, performance problems can seldom be fixed by adding or modifying functionality (although caches are a counterexample); generally, the solution lies in redesign, thereby delaying the service launch.
Therefore, design validation is particularly important to ensure that performance problems are detected early in the design stage of an SDP. The focus of the performance work is the establishment of suitable performance metrics and the building of confidence that the system design meets the targets expressed in terms of these metrics. In the context of an SDP, this means justifying, very early in the design cycle, that:
• The expectations for end-user response times, throughputs, concurrency, and availability are defined and realistic.
• The optimal design choice for the restructured and transformed framework is made.
• The design will result in acceptable latencies, transaction accept-reject ratios, queue lengths, wait times, server utilizations, and so on.
• The design is scalable, available, and reliable.
SDP: DEFINITION AND DESIGN ISSUES

The SDP solves the age-old problem of delivering telecom services in silos. It enables rapid development and delivery of new services, from basic phone services to complex audio/video conferencing for multiplayer games, through a reusable, common capabilities-based architecture. An SDP typically consists of a service-control environment, a service-creation environment, a service-orchestration and execution environment, and abstractions for media control, presence/location, integration, and other low-level communications capabilities. Some example services offered by an SDP are as follows:
• Wireline or wireless users can view incoming phone calls and online instant-messaging buddies, or locate their friends from global positioning system (GPS)-enabled devices, on their television screens.
• Users can order video-on-demand services from their mobile phones, or watch streaming video at home or on mobile phones.
• Airline customers can receive a text message from an automated system regarding a flight cancellation and then opt to use a voice self-service interface to reschedule.
The key SDP design issues are outlined below.
END-TO-END RESPONSE TIME
The examples described above illustrate how the responsiveness of the SDP architecture is important for the value of the service to be realized. A text message regarding a flight cancellation must be delivered in real time and cannot be delayed beyond a stipulated time. Similarly, video cannot have long intermittent delays. Considering that SDP components work together to provide a variety of distinct services, the impact of random synchronous and asynchronous component interactions on end-to-end response time must be ascertained and evaluated at the design phase. Setting realistic response-time requirements is as important as achieving them.
ARCHITECTURE CAPACITY AND SCALABILITY

The system design must ensure it meets the forecast business transaction volume. The design must also be scalable, to cope gracefully with the load arising from a sudden increase in concurrent user requests. The ability of SDP components to carry the required load and scale to transient load conditions must be understood during platform design.
END-TO-END PERFORMANCE ASSURANCE

The SDP uses a common architecture to deliver different services but must ensure that each of these services meets its respective service level agreements (SLAs). For example, an IPTV service and a voice service have distinct SLAs. Engineering the shared components of the SDP to meet a variety of disparate SLAs is a challenge.
INTEROPERABILITY BETWEEN DIFFERENT COMPONENTS

The SDP typically consists of disparate components from different third-party vendors. The solution architect must be concerned with the interoperability of different interfaces and technologies, as well as with the performance issues.
DEPLOYMENT DESIGN CONFORMANCE

A deployment design describes the hardware platform and the interconnecting underlying network. A deployment designer must address questions about the number of servers and CPUs; the types of CPUs; memory requirements; server architecture, for example, for clustering or virtualization; load balancing; and resilience before a service goes live. Because the components are shared and reused across dissimilar services, sizing the SDP platform optimally is a challenge. All these issues indicate the need for sound design-validation techniques that aid informed design decision making by highlighting, up front, the effect of the proposed design on end-user experience during the design phase
itself, prior to system implementation, thereby increasing the degree of confidence in the proposed design. The next section discusses different approaches to design validation and then elaborates on the modeling and simulation techniques.
DESIGN VALIDATION TECHNIQUES

There are a number of design validation techniques. Some of them are discussed below.
PROTOTYPING

When there is uncertainty as to whether a new design actually does what it is designed to do, a prototype is built to test the performance of the new design before starting industrial-strength production. Sometimes a variant called rapid prototyping is used for initial prototypes that implement part, but not all, of the complete design. This enables rapid and inexpensive testing of the parts of the new design that are most likely to have problems, solving those problems, and then building the full design. Prototyping results in a quick working model of the end system, provides an early view of the performance of the final system, and points to areas of improvement in the design.
SENSITIVITY ANALYSIS

Sensitivity analysis is used to determine how sensitive a model is to changes in the values of its parameters and to changes in its structure. It helps build confidence in the model by studying the uncertainties associated with model parameters (e.g., server processing time, queue lengths, wait time, etc.). Many parameters in sensitivity-analysis models represent quantities that are very difficult, or even impossible, to measure with great accuracy in the real world, and some parameter values change in the real world. Therefore, when building a sensitivity-analysis model, a designer is usually uncertain about the parameter values selected and must use best estimates. Sensitivity analysis enables the designer to determine the level of accuracy required for a parameter value to make the model appropriately useful and valid. If the tests reveal that the model is insensitive to a parameter, it may be possible to use an estimate rather than a value with greater precision. Sensitivity analysis can also indicate which parameter values are reasonable to use in the model. If the model behaves as expected from real-world observations, this gives some indication that the parameter values reflect, at least in part, the real world.
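As a hedged sketch of the idea, the C++ fragment below performs a one-at-a-time sensitivity sweep around a baseline; the M/M/1 mean response time stands in for the real (proprietary) SDP model, and the parameter values are assumptions chosen purely for illustration:

#include <cstdio>
#include <initializer_list>

// Stand-in model: M/M/1 mean response time (valid only for utilization < 1).
double responseTime(double serviceTimeS, double arrivalRatePerS) {
    double rho = arrivalRatePerS * serviceTimeS;   // server utilization
    return serviceTimeS / (1.0 - rho);
}

int main() {
    const double s0 = 0.010, lambda0 = 80.0;       // 10 ms service, 80 req/s (assumed)
    const double base = responseTime(s0, lambda0);
    for (double d : {-0.10, -0.05, 0.05, 0.10}) {  // perturb service time by +/-5%, +/-10%
        double out = responseTime(s0 * (1.0 + d), lambda0);
        // Elasticity: percent change in output per percent change in the parameter
        std::printf("service time %+5.1f%% -> response time %+6.1f%% (elasticity %.2f)\n",
                    d * 100.0, (out / base - 1.0) * 100.0, (out / base - 1.0) / d);
    }
    return 0;
}

A high elasticity flags a parameter whose value must be estimated with care; a low one suggests a rough estimate suffices.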
FAILURE MODE EVALUATION AND CRITICALITY ANALYSIS

Failure mode and effects analysis (FMEA) and failure modes, effects, and criticality analysis (FMECA) are techniques used to identify potential failure modes for a component or process, assess the risk associated with those failure modes, rank the issues in terms of importance, and identify and perform corrective actions to address the most serious risks.
FMECA is a technique used to identify, prioritize, and eliminate potential failures from the system design before they reach the customer. It helps in selecting design alternatives with high reliability and availability during the early design phase. It also helps develop early criteria for test planning and requirements for test equipment. FMECA is a structured and reliable method for evaluating hardware and systems, and the approach makes evaluating complex systems easy. However, it is a tedious and time-consuming process, is unsuitable for multiple failures, and is prone to human error.
Figure 1. Service delivery platform reference architecture: OSS/BSS (order, provisioning, customer care, billing); service creation, control, and orchestration/execution environments; and network service platforms for mobile (IMS), data, voice (IMS, NGN), and video (IPTV) services.
SIMULATION MODELING

A model is a simplified representation of a system at some particular point in time or space, intended to promote understanding of the real system. A simulation model operates on time or space to compress it, enabling one to perceive interactions that would not otherwise be apparent because of their separation in time or space. Simulation modeling is a process for developing an understanding of the interaction between the parts of a system and of the behavior of the system as a whole. Dynamic interaction between reactive system components can be validated through simulation models. The objective of models built for design validation is to explore hidden facets of how the design under study impacts performance. For results from a well-validated and calibrated model to be useful, they must be timely and relevant to the design phase of the system. Only a true representation of the system can be relied upon to provide indications with respect to the real system and to help discover bottlenecks within the design; options to correct these bottlenecks can then be evaluated again through the model.
MODELS FOR SDP DESIGN VALIDATION

The formal approach to design validation involves both specification and verification of the system design. In an SDP, static models such as latency models, volumetric models, capacity models, and availability models are used to validate SDP design requirements, as described below.

Latency models help validate end-to-end response-time expectations for various transactions across platform interfaces. Sequence diagrams and network deployment designs serve as input into latency models. The impact of component latencies on end-to-end latencies for different transaction flows and co-location options can be understood using these models. They help designers and system developers obtain a view of the upper and lower bounds for the latency budget of each component in order to meet the end-to-end response-time requirements.

Volumetric models use product forecasts from the business and convert them into transactions per unit time that the design must support at each interface and component, even before design commences. These models are prototypes that provide a quick view of capacity and scalability
requirements in terms of throughput for each component in the SDP. This helps deployment designers, for example, to use high-speed servers for highly loaded components and to incorporate resilience into their designs appropriately. These models also provide direction to performance test design.

Capacity models document resource utilization for various activities at different servers, based on industry benchmarks and vendor performance test results, and aid in identifying how many servers of a particular type are required for a mix of transactions as specified by the volumetric model. The capacity model can help answer which component gets overwhelmed first, and at what load. This is useful insight for deployment designers.

Availability models provide a view of the hardware availability of a design option, validating resilient design, fall-back policies, and failover and recovery SLAs.

Dynamic simulation models are applied to discover, quantify, and explain the impact of random transaction arrivals, queuing disciplines, deployment options, and overload-control mechanisms on the end-user experience. The queue length versus response-time trade-off, selective co-location versus wide area network (WAN) latency impacts, transaction rejects versus transaction leak rates, and leaky-bucket depths and server thread-pool limits can be evaluated and quantified to aid scientific decision making in the design phase. Network deployment designs, end-to-end transaction-sequence diagrams, and forecast transaction volumes feed as input into the simulation models.

Most models built early in the design phase use industry benchmarks, design assumptions, and/or educated guesses for service times to provide an early view of the performance bottlenecks. When these models are calibrated with inputs from performance testing and/or in-life system monitoring, the accuracy of the results improves,
and the model can be used to validate the full deployment scalability of the design and can aid in-life capacity planning.

Figure 2. Quantifying differences in end-to-end response time under five different deployment scenarios to aid deployment option decision making.

Deployment option    Requests meeting 1-s SLA    Requests meeting 2-s SLA
Scenario A1          0%                          68.38%
Scenario A2          0%                          0%
Scenario B           0%                          99.21%
Scenario E           0%                          95.61%
Scenario E2          0%                          0%
REAL-LIFE EXAMPLE: RESULTS AND ANALYSIS

In this section we discuss a real-life case study of the design validation of the SDP of one of the biggest telecom service providers in Europe. The service provider is in the process of implementing an SDP to deliver its twenty-first-century next-generation network and application services. All the services are provisioned, delivered, and assured through a common capability stack, built using commercial off-the-shelf products to guarantee a uniform customer experience and faster, simpler service delivery. The SDP simulation model started at a high level with two capabilities and one service and grew to evaluate the scalability of more than six capabilities and seven services. The full-scale, enhanced model consists of more than 200 servers and network elements. The model simulates application interfaces such as remote method invocation (RMI)/remote procedure calls (RPCs) to understand concurrent invocations,
priority-queuing mechanisms, and overload-control mechanisms at application interfaces, namely the leaky bucket and the token bank. The model is based on use cases and business processes reflecting provisioning and service-execution flows on the SDP. Transaction rates are derived from normal and busy-hour forecasts for various business transactions. The model has been used to evaluate scalability up to 200 transactions per second (tps).
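As a hedged sketch (the platform's actual settings and the token-bank variant are not public), a leaky-bucket overload control of the kind modeled at these interfaces can be expressed as follows; the depth and leak rate are illustrative assumptions:

#include <algorithm>

struct LeakyBucket {
    double depth;         // bucket capacity, in requests
    double leakRate;      // requests drained per second
    double level = 0.0;   // current bucket fill
    double lastT = 0.0;   // time of the last arrival, in seconds

    // Returns true if the request is admitted, false if rejected.
    bool admit(double nowSeconds) {
        // Drain the bucket for the time elapsed since the last arrival.
        level = std::max(0.0, level - (nowSeconds - lastT) * leakRate);
        lastT = nowSeconds;
        if (level + 1.0 > depth) return false;  // overload: reject the request
        level += 1.0;
        return true;
    }
};

In a simulation, the transaction reject percentage then falls out of counting admit() failures under the forecast arrival process.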
VALIDATING DEPLOYMENT DESIGN
Deployment designers faced the requirement to host a set of common components across a set of geographically separated data centers. They were keen to understand which components should be co-located at the same site to optimize response time from the end-user perspective. The designers had five different component co-location options. The network design for all five options was fed into the simulation model, and the transaction flows were run through the model at forecast transaction volumes for year one after product launch. Component latencies did not need to be accurate for this model; they could be set to anything, so long as the component latency per incoming request remained the same in all scenarios. In this model run, however, best possible values for component latencies were configured so that SLA quantifications could be made. The number of network hops to and from data centers that the end-to-end transaction journeys require for each design option influences the end-user response time in each scenario. The results of these five scenarios are summarized in Fig. 2. We can see that scenario A2 is the worst option and scenario B the best. None of the scenarios meets the SLA of 1 s (with component latencies as configured in this model run), and in scenarios A2 and E2 all transactions fail to achieve the SLA of 2 s as well. Further, in all scenarios at least a small percentage of requests falls outside the 2-s SLA.
VALIDATING PLATFORM CAPACITY

The capacity planner was keen to understand how long the current platform can provide an acceptable end-user experience and what proactive steps should be taken to prevent SLA breaches. Volumetric forecasts from the volumetric model feed into the simulation model, and all major transaction flows are modeled. The network design remained constant across all scenarios in this case; each scenario simulated the forecast load from a different year. Here, reasonably accurate component service times must be configured in the model to ascertain a realistic breakpoint. Therefore, information in the high-level capacity models also feeds into the simulation model. It is interesting to note the synergy of the three models working hand in hand here. The results of the simulation model run for the year 10/11 forecast load are shown in Fig. 3. All transaction response times are seen increasing linearly at constant load as the simulation progresses (first graph in Fig. 3).
Figure 3. Identifying that latency will begin to increase in the future due to high traffic resulting in overutilized servers, aiding capacity planning.
Drilling down revealed that all subtransactions interacting with Access Servers 1, 2, and 3 were facing queuing delays at the server. Further analysis revealed that these three access servers were congested. Thus, an estimate is gained of the transaction rates at which capacity breakpoints are reached; further, the simulation model pinpoints the exact system components that are congested. A few additional simulation runs helped determine that adding a few servers solves the latency problem, at least temporarily, by removing the CPU bottleneck. Based on the level of information provided to a system model as input, the models provided insights about the resource utilization of that system, the amount of data received and sent (throughput), and protocol-specific statistics. Note, however, that the underlying application itself must be scalable for these model results to hold. The next example shows how an application-layer interface could become a bottleneck.

Figure 4. Evaluating the impact of two different queuing disciplines on the percentage of rejects, aiding informed decision making for queue design implementation.
VALIDATING INTERFACE DESIGN

Finite state machines were used in the SDP model to evaluate the impact of varied queuing disciplines, to aid design decisions about optimal
settings for session inactivity timeouts, and to justify the synchronized effectiveness of the overload-control schemes of various systems when those systems are subjected to overload. In one particular case, solution designers were modeling two distinct design options at a particular interface of the solution. Their aim was to understand the impact of queue lengths on downstream applications, transaction reject percentages, and end-to-end response times in both scenarios. The simulation model assisted in evaluating two different queuing disciplines and their impact on the percentage of requests rejected due to a full queue. The result of the simulation run is shown in Fig. 4. The design option on the right showed a remarkable improvement: fewer than 0.01 percent of the requests rejected in option 1 were rejected in option 2, and end-to-end response times were also reduced. Thus, interface design validation using simulation helps evaluate the best option for a given traffic pattern. An incorrect design option at the application interfaces, if selected without validation, could result in a poorly scalable implementation and increase hardware and maintenance costs for the business in the medium to long term.
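The two design options themselves are not described in the article, so the following C++ sketch only illustrates the kind of mechanism being compared: a bounded queue that rejects on overflow versus one that gives priority traffic preferential treatment. All names and the displacement policy are assumptions:

#include <cstddef>
#include <deque>

struct BoundedPriorityQueue {
    std::deque<int> hi, lo;   // queued request ids, by priority class
    std::size_t     cap;      // total queue-length bound
    long            rejected = 0;

    explicit BoundedPriorityQueue(std::size_t capacity) : cap(capacity) {}

    bool enqueue(int id, bool highPriority) {
        if (hi.size() + lo.size() < cap) {   // room available
            (highPriority ? hi : lo).push_back(id);
            return true;
        }
        if (highPriority && !lo.empty()) {   // full: displace a low-priority request
            lo.pop_back();
            ++rejected;                      // the displaced request is lost
            hi.push_back(id);
            return true;
        }
        ++rejected;                          // full queue: reject the arrival
        return false;
    }
};

Running forecast traffic through two such disciplines and comparing the rejected counters reproduces, in miniature, the kind of comparison shown in Fig. 4.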
ACHIEVEMENTS AND BUSINESS BENEFITS

ACHIEVEMENTS

The achievements relate to the real-life SDP case study discussed in the Results and Analysis section above. Results foreseen through the model have been used to direct and focus performance testing on the critical components. Despite using design assumptions, the model pointed out issues that were later also identified in performance testing; as a result, modeling has been used to validate designs before commencing development for future releases. This model has over 400 systems in it. Modeling is considered the only way that so many instances of all the systems can be studied and various what-if scenarios analyzed prior to testing and deployment of the system. Simulation has become part of business practice and is relied upon to provide pointers to impending performance bottlenecks and design changes.
BUSINESS BENEFITS

The long-term business benefits realized by the service provider deploying the SDP are as follows:
• Reduced risk of deploying applications that fail to meet the performance requirements.
• Post-deployment rework effort reduced by 15 percent after using the model to proactively identify issues.
CONCLUSION

The modeling and simulation technique helps validate proposed system design changes even before the development and deployment of services, thus providing long-term cost-saving benefits. Rework costs can largely be avoided by performing design validation. The models help concentrate attention on the areas of concern or potential risk by setting the level of system abstraction and/or depth. The what-if scenarios help provide parameter tuning for real systems. To conclude, using the simulation and modeling technique for design validation can proactively identify and minimize the non-functional risk of the platform or solution.
ACKNOWLEDGMENT

We would like to thank Jas Gill, 21CN CC Service Champion, Communications, BT Wholesale, and Ranjit Surae, CC Service Management Consultant, BT Wholesale, for providing us with the funding and the opportunity to apply design validation using modeling and simulation on the service delivery platform. Our thanks to Dr. Gowri Krishna, VP, NDE, Tech Mahindra Ltd., for encouraging us to come together, write this article, and share our experience with the wider industry. We would be remiss not to acknowledge Dr. Mandaar Pande, Principal Performance Consultant, Tech Mahindra Ltd.
We thank him for the coordination and support we received throughout the writing of this article.
BIOGRAPHIES

TONY INGHAM graduated from Cambridge University, United Kingdom, with an M.A. in mathematics in 1987, with statistics, probability, and stochastic processes as major fields of study. He joined the BT Performance Engineering Department in 1987 and has worked on a number of major projects involving network routing and dimensioning, signaling, call control, intelligent networks, and service execution, with a focus on performance design. He is currently head of Shared Services for the Service Execution Platform in BT Design. He has co-authored a number of papers, including "Network Intelligence — Performance by Design" (IEICE Transactions on Communications, Feb. 1997) and "Engineering Performance for the Future of Intelligence" (BT Technology Journal, Jan. 2005).
SANDEEP RAJHANS acquired a Bachelor's degree in electronic engineering from the University of Pune, India, in 1994. His major fields of study included communications and microprocessors. He is currently based in Hoffman Estates, Illinois, working on a project with a U.S.-based telecom service provider as principal consultant. His current job involves research and consultation in application performance engineering and performance modeling using simulation modeling techniques. He started his career as a network engineer in 1994 and handled various techno-commercial responsibilities. He joined Tech Mahindra Ltd., Pune, in 2004.

KALYANI SASTRY graduated from the University of Pune with an M.Sc. in computer science in 2002 and joined Tech Mahindra Ltd. in 2003. A Microsoft Certified Professional in VC++, she started her career in software development using C and C++. At Tech Mahindra she has contributed to and led simulation modeling studies using OPNET Modeler, including enhancing the GPRS model, followed by numerous modeling studies for the BT Service Execution Platform. She is currently a performance design consultant to BT. She co-authored the paper "GPRS Model Enhancement" published at OPNETWORK 2004.

SHANKAR KUMAR earned a Bachelor's degree in telecommunication engineering from B.I.T., Bangalore, India, in 2004. He is currently working on the BT Service Creation Framework as a performance consultant. He started his career as a network engineer in 2005 with Tech Mahindra Ltd., Pune. He has experience in network and application performance engineering and management, capacity planning, and performance modeling using simulation modeling techniques.

DHIRAJ KUMAR SINHA earned a Bachelor's degree in computer science and engineering and a Master's degree in information technology with specialization in computer networks from Symbiosis (SCIT), Pune, in 2004. He is currently based in San Diego, California, working as a network design consultant with a U.S.-based telecom service provider. He started his career as a network engineer in 2003 with Tech Mahindra Ltd., Pune. He has experience in network planning, design, and engineering, and network and application performance management.
MODELING AND SIMULATION: A PRACTICAL GUIDE FOR NETWORK DESIGNERS AND DEVELOPERS
Unified Simulation Evaluation for Mobile Broadband Technologies

Yuehong Gao, Xin Zhang, and Dacheng Yang, Beijing University of Posts and Telecommunications
Yuming Jiang, Norwegian University of Science and Technology
ABSTRACT

System-level simulation has been widely used to evaluate system performance, and simulation methodologies have been discussed comprehensively in different organizations and institutions. Most of the proposed simulation methodologies, however, focus mainly on one area of specific technologies. How to evaluate the performance of multiple air interface systems (e.g., cdma2000, WCDMA, WiMAX, and their evolutions) in a fair and comprehensive manner has not been addressed extensively. This article presents a unified simulation methodology, covering fading channel models, system configurations, and how to treat technology-dependent algorithms such as scheduling, overhead modeling, interference margin definition, and resource allocation based on system loading. The article uses this framework to compare three major existing radio technologies: cdma2000 1x EV-DO Rev. A, WCDMA HSPA, and mobile WiMAX based on 802.16e. Key simulation results based on our suggested system models and settings are presented and analyzed. It is shown that under our unified framework, the two CDMA systems exhibit higher spectrum efficiency than mobile WiMAX, especially on the downlink, while mobile WiMAX provides a higher peak rate.
INTRODUCTION
This work was supported in part by QUALCOMM, Inc.
Wireless communication has continued to evolve rapidly since the 1990s. Third-generation (3G) technologies such as cdma2000 evolution-data optimized (EV-DO) and wideband code-division multiple access (WCDMA) high-speed packet access (HSPA) are undergoing commercialization today, while new technologies such as long-term evolution (LTE) are being adopted as beyond-3G (B3G) systems. Most commercial mobile systems today have evolved from International Mobile Telecommunications-2000 (IMT-2000), defined by a set of International Telecommunication Union (ITU) Recommendations and rooted in the Third Generation Partnership Project (3GPP), 3GPP2, and IEEE.
There are two 3G CDMA variants widely commercialized or in the process of being commercialized today: cdma2000 and WCDMA. They are frequency-division duplex (FDD) solutions based on 1.25 MHz and 5 MHz bandwidth, respectively. Another broadband technology, mobile WiMAX, which is based on orthogonal frequency-division multiple access (OFDMA) technology with a relatively wide band, typically 10 MHz, and time-division duplex (TDD), is also deployed in some parts of the world. In order to improve capacity, 3GPP, 3GPP2, the WiMAX Forum, and other organizations have developed corresponding evolutionary systems, such as the air interface evolution (AIE) for cdma2000, LTE for WCDMA, and 802.16m in IEEE.

To evaluate the expected performance of these mobile systems, multiple simulation methodologies have been proposed. Each organization has published the simulation methodology for the system it standardizes. For example, 3GPP2 published the methodology used for cdma2000 1x EV-DO/evolution-data and voice (EV-DV) evaluations in [1], and 3GPP issued [2, 3] for WCDMA. These methodologies also evolve with the standards' progression. Each of them, however, focuses only on one specific system. How to evaluate the performance of these systems in a fair manner under unified settings has not been addressed extensively.

The WiMAX Forum has proposed simulation methodologies that compare the three systems and provided some results, such as [4], but there are still issues regarding fairness; some configurations and methodology need to be clarified (e.g., advanced receivers for CDMA systems have not been considered), and the overhead assumptions have been simplistic. Other questions are also open, such as what fading channel model should be used, how to evaluate the key technologies involved and network performance, and how to make use of simulation in the standardization process. It is our major task in this article to evaluate the performances of the systems mentioned above in a unified manner and to clarify the issues between the different simulation methodologies
IEEE Communications Magazine • March 2009
Previous Page | Contents | Zoom in | Zoom out | Front Cover | Search Issue | Next Page
A
BEMaGS F
Communications IEEE
Previous Page | Contents | Zoom in | Zoom out | Front Cover | Search Issue | Next Page
developed by various standardization bodies, so as to obtain results that are as reasonable as possible. We give an evaluation of each system's performance based on comprehensive consideration of system overhead, receiver settings, fairness in resource allocation, and unified channel models. Our results are obtained via computer simulation under an appropriately simplified deployment scenario designed for theoretical analysis. Comparison between these technologies under the unified framework provides insight into the related key air interface techniques and leads to a better understanding of the potential of commercial wireless communication networks. In addition to establishing a comprehensive simulation methodology for comparing the performances of cdma2000 1x EV-DO Rev. A, WCDMA high-speed downlink/uplink packet access (HSDPA/HSUPA), and mobile WiMAX, this article also highlights the features of the air interfaces and points out the key design advantages of the different systems while comparing them. The rest of this article is organized as follows. First, existing simulation methodologies and conditions are briefly reviewed. A unified methodology is then proposed, including the evaluation workflow, an introduction to the simulation platform, the channel model, key technologies, overall system parameters, and platform calibration methods. Further discussion is then presented, with conclusions in the final section, followed by acknowledgments.
SIMULATION METHODOLOGIES: A REVIEW

System-level simulation has been widely used to evaluate system performance, with link-level simulation curves incorporated. Figure 1 shows the typical simulation methodology model. System-level simulation concentrates on evaluating performance when multiple base stations (BSs) and mobile stations (MSs) operate together, while link-level simulation generally focuses on single transmitter-to-receiver link performance. The results from the link-level simulation are usually adopted as inputs to the system-level simulation. Simulation methodologies have also been discussed comprehensively in different organizations and institutions.

International organizations and companies have put much effort into developing the simulation methodology of cdma2000 systems and their evolutions. The simulation methodology used for cdma2000 is explicitly presented in 3GPP2 C.R1002 [1]. Basic system-level simulation models for both the forward link (FL, or downlink, DL) and reverse link (RL, or uplink, UL) are defined. Some simulation results are given in [1] as well, which has been updated to version 7, adding ITU channel models, the spatial channel model (SCM), carrier-to-interference ratio (C/I) modeling for simulation, the simulation flow model used for the wrap-around method, and universal service models such as voice over IP (VoIP).
Figure 1. Simulation methodology model: a traffic model feeds the system-level simulation (SLS), which covers radio resource management (scheduling algorithms, power control, etc.); an interface connects the SLS to the link-level simulation (LLS), which covers the physical layer and the wireless channel.
3GPP2 published the technical report C.R1008-0 v1.0 in 2007, which provides the evaluation methodology for multimedia services and presents some of the results. 3GPP2 also developed C.R1009-0 v1.0 and C.R1010-0 v1.0; the former is a software tool to implement multimedia simulation, while the latter implements the H.263 video codec in simulation. IEEE publications embody a great number of articles on cdma2000, the majority of which involve simulation methodology and results, including radio resource assignment and scheduling models, handoff algorithms, channel estimation, and other newly proposed methods.

3GPP has also put much effort into simulation methods for WCDMA and its evolutions, and many contributions have been submitted and discussed. These documents cover the cell layout model, antenna pattern, channel model, typical simulation parameters, and other key factors in evaluating WCDMA and its enhanced systems such as HSPA and LTE. In addition, some link-level curves and system-level simulation results have been included. In [5] link-level curves and system performance are presented; [2], however, mainly focuses on evaluating HSUPA, with different system configurations and typical simulation scenarios included. Suggestions on improved methodology after introducing orthogonal frequency-division multiplexing (OFDM) technology are discussed in [6], where several paramount aspects are mentioned, including link-level assumptions, mapping of signal-to-interference-plus-noise ratio (SINR) when using a rake receiver, calculation of downlink C/I, the feedback mode of the uplink channel quality indicator (CQI), different scheduling algorithms, implementation of hybrid automatic repeat request (HARQ) technology, and so on.
Figure 2. Unified evaluation workflow: each platform (EV-DO Rev. A, HSxPA, WiMAX) is first calibrated against settings used by different organizations (3GPP, 3GPP2, ITU, etc.); unified settings (channel model, key algorithms, overall configurations) are then applied, followed by calibration between platforms (fairness and stability calibration) to produce the results.
At the system level, simulation methods for different service models, such as full queue and burst traffic, were also presented. In short, 3GPP and its members have tightly tracked the improvements of system simulation methodology along its evolutionary road, and its methodology has been commonly adopted for evaluating system performance.

WiMAX has gained a lot of attention since it entered the evaluation process as a 3G candidate system in the ITU Radiocommunication Sector (ITU-R). Some organizations have proposed simulation methodologies to evaluate WiMAX system performance; for example, the WiMAX Forum has published a series of white papers on the technical overview as well as performance evaluations and comparisons [4, 7]. Evaluation methodologies proposed by the WiMAX Forum cover every aspect of simulation, and the outcome includes link- and system-level results. System modeling contains the cell layout, interference calculation model, mobility model, channel model, traffic model, implementation of key technologies (e.g., power control, HARQ, and handover), the interface between system and link levels, various output metrics, and so on. Typical WiMAX system configurations and parameters, as well as corresponding results with single-input multiple-output (SIMO) and multiple-input multiple-output (MIMO) antennas, are listed in [7]. Reference [4] mainly focuses on comparisons between WiMAX and other 3G/B3G systems. WiMAX capacity for VoIP traffic, and the performance of handover and data traffic under specific simulation parameters, are presented in [8]. In short, many organizations, with the WiMAX Forum as the leader, are involved in developing
simulation methodologies for WiMAX, and have established an architecture including descriptions of every aspect.
UNIFIED METHODOLOGY: CHANNEL MODEL, KEY TECHNOLOGIES, SYSTEM CONFIGURATIONS, CALIBRATION, AND OUTPUT METRICS

Our review of the existing simulation configurations and methods published by various organizations and researchers shows that each proposed methodology is aimed at one specific system and is not quite applicable to other systems. If different simulation methods or configurations are used to evaluate the systems' performance, the fairness baseline of the comparison cannot be guaranteed because of inconsistent criteria. Hence, after carefully analyzing each paramount factor, such as the channel model, scheduling algorithms, fairness rule, and primary system parameters, we propose unified simulation configurations in this article. In this section the unified approach is introduced first, and then a unified set of channel models is proposed. The simulation methods of key technologies, such as scheduling, are described, and the primary system parameters are summarized. In addition, we study the calibration of fairness and stability for the various systems. Finally, we give the output metrics of the various systems under our simulation.
FLOW CHART OF UNIFIED EVALUATION

Figure 2 shows the flow chart of our work. First we develop the simulation platform for each system involved, and calibrate each by adopting commonly used simulation configurations and comparing the outcomes with published results. Then we apply the unified settings, which are proposed based on existing documents of different organizations, to each platform. Calibration between platforms is implemented in order to establish the comparison baseline.
SIMULATION PLATFORMS

The platforms used were developed by the Wireless Theories and Technologies Laboratory at Beijing University of Posts and Telecommunications. The simulation methodology model is plotted in Fig. 1. A typical simulation operates in a top-down first, bottom-up later manner, and can be divided into four layers as Fig. 1 shows. The link-level simulation is abstracted into result curves, and the statistical data are mapped through the interface to the system level to complete one simulation cycle. As long as the number of simulation runs is large enough, the simulated system will have reached a converged state when the simulation outcomes are collected. We use a full buffer traffic model, which is not realistic for deployed systems but is helpful for comparing the upper bound of throughput. In the radio resource management layer, scheduling algorithms are optimized for each
system for the sake of comparing the throughput bound. The physical layer parameters used in deployment or operating scenarios are adopted in this article. These parameters are generally hardware limited, so some of them are not the same for different systems, and this forms an essential part of the unified evaluation. As for the wireless channel, the multipath resolutions differ because of the various bandwidths; therefore, a common set of modified models is used for evaluation.
CHANNEL MODEL

The channel models in wide use today are modified from the multipath channel model in ITU-R Recommendation M.1225, because different systems have distinct bandwidths, which leads to different multipath resolutions and prevents using the M.1225 model directly. M.1225 defines several scenarios: indoor A/B, pedestrian A/B, and vehicular A/B, each with six subpaths except pedestrian A. The relative time delay and average power of each subpath are specified. 3GPP, 3GPP2, and ITU have recommended channel models for WCDMA, 1x EV-DO, and WiMAX, respectively, with a SIMO (typically 1 Tx and 2 Rx) antenna configuration. When MIMO technology is adopted, the SCM and SCM-enhancement (SCM-e) should be used to generate multipath fading. The channel model proposed by 3GPP2 is summarized in Tables 2.2.2-1 and 2.2.2-2 of [1], 3GPP's channel model is in Tables A-3 through A-6 of [2], and ITU's model for WiMAX is in Tables 16 and 17 of [9]. These modified models were originally designed for single-system evaluation, not for comparisons between several systems; therefore, the models only partially match one another. We carefully analyzed and compared these models, and propose the mixed model shown in Table 1. Although it is one specific configuration, it simulates a unified mobile environment and is equally applicable to many systems. Three cases are included: pedestrian B at a speed of 3 km/h, and vehicular A at 30 km/h and at 120 km/h. Different multipath parameters are given for the corresponding systems because their distinct bandwidths lead to different multipath resolutions.
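To make the mixed model concrete, the sketch below draws one of the three cases of Table 1 per user according to the assignment probabilities. This is a minimal sketch: the tap lists follow the 3GPP column of Table 1 (the 3GPP2 or WiMAX Forum columns would be substituted for those systems), and all function and variable names are ours, not the platform's actual code.

```python
import random

# Mixed channel model of Table 1, 3GPP column: (probability, speed km/h,
# taps as (delay ns, average power dB)). Models II and III share the
# vehicular A taps and differ only in speed.
MIXED_MODEL = [
    (0.6, 3,   [(0, 0.0), (200, -0.9), (800, -4.9),
                (1200, -8.0), (2300, -7.8), (3700, -23.9)]),    # Model I: Ped. B
    (0.3, 30,  [(0, 0.0), (310, -1.0), (710, -9.0),
                (1090, -10.0), (1730, -15.0), (2510, -20.0)]),  # Model II: Veh. A
    (0.1, 120, [(0, 0.0), (310, -1.0), (710, -9.0),
                (1090, -10.0), (1730, -15.0), (2510, -20.0)]),  # Model III: Veh. A
]

def draw_channel_case(rng: random.Random):
    """Assign one mixed-model case to a user according to its probability."""
    u = rng.random()
    acc = 0.0
    for prob, speed, taps in MIXED_MODEL:
        acc += prob
        if u < acc:
            return speed, taps
    return MIXED_MODEL[-1][1], MIXED_MODEL[-1][2]  # guard for rounding
```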
KEY ALGORITHMS

Some technologies are widely used by many systems, for example, power control on the reverse link (or uplink); scheduling on both the forward link (or downlink) and the reverse link; HARQ; and adaptive modulation and coding (AMC). Since each system has its own characteristics, the algorithms must be adjusted accordingly. Several methods have been standardized or are widely used [1–3], while some are left for further study. Compared to EV-DO Rev. A and WCDMA HSPA, the standardization of mobile WiMAX is more open. For the sake of simplicity, we do not describe the well-accepted algorithms in detail here; instead, we illustrate our schemes for the scheduling algorithm and the dynamic overhead model in mobile WiMAX, because they are of great importance to the final outcomes.
Scheduling Algorithm in Mobile WiMAX — The scheduling process can be divided into several steps: determination of the packet transmission order over the air interface, resource allocation, and creation of the DL/UL maps. In our simulation, the basic resource unit in the frequency domain is one subchannel, and on each subchannel a proportional fair (PF) scheduling algorithm is used.
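As an illustration of the per-subchannel PF rule just described, here is a minimal sketch; the Shannon-bound rate stand-in, the filter constant tc, and all names are our assumptions rather than the platform's actual code.

```python
import math

def pf_schedule(subchannel_sinr, avg_rate, users, tc=1000.0):
    """Proportional fair selection on one subchannel: serve the user with the
    largest ratio of instantaneous achievable rate to smoothed average rate,
    then update all averages with an exponential filter (time constant tc)."""
    def inst_rate(u):
        # Shannon bound as a stand-in for the AMC rate lookup from link level
        return math.log2(1.0 + subchannel_sinr[u])
    best = max(users, key=lambda u: inst_rate(u) / max(avg_rate[u], 1e-9))
    for u in users:
        served = inst_rate(u) if u == best else 0.0
        avg_rate[u] += (served - avg_rate[u]) / tc
    return best
```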
Dynamic Overhead Model in Mobile WiMAX — The physical overhead is a critical factor that may affect overall performance significantly. Models used to calculate the overhead in EV-DO Rev. A and HSPA are well established: the power of the signaling channels is removed from the total available power. But the impact of overhead on system performance in WiMAX has not received enough attention, and the overhead is usually set to a fixed value in the major published documents [4, 8]. Much research has considered the medium access control (MAC) overhead. The specific overhead consumed on the physical layer when using 802.16e WiMAX as backhaul is calculated and analyzed in [10]. However, [10] assumed a fixed overhead size, meaning that the overhead is predefined before the simulation and does not change with the number of users, the scheduling results, or other factors. In addition, in [10] the downlink overhead includes the frame control header (FCH), DL_MAP, UL_MAP, and DL_ACK (acknowledgment), while the uplink overhead consists of three parts: fast feedback (FF), ranging, and UL_ACK. We deem this modeling of the DL/UL overhead incomplete for evaluating system performance. We propose a more accurate model, following [9], that dynamically calculates the overhead size during the simulation so as to guarantee the validity of the outcomes:
OH_DL = P + FCH + DL_MAP + UL_MAP + DCD + UCD + ACK_DL + PC_DL (1)

OH_UL = ACK_UL + Ranging + CQI_UL (2)
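A sketch of evaluating Eq. 1 per frame follows. Since the article does not give field sizes, the per-allocation MAP costs and every name below are illustrative placeholders; the point is only that the MAP terms are recomputed from the scheduler's output each frame rather than fixed in advance.

```python
def downlink_overhead(preamble, fch, n_dl_alloc, n_ul_alloc,
                      dcd, ucd, ack_dl, pc_dl):
    """Dynamic DL overhead in the spirit of Eq. 1: the DL_MAP and UL_MAP
    contributions are recomputed every frame from the number of DL/UL
    allocations the scheduler actually produced."""
    dl_map = 10 + 4 * n_dl_alloc  # hypothetical fixed part plus per-IE cost
    ul_map = 8 + 3 * n_ul_alloc
    return preamble + fch + dl_map + ul_map + dcd + ucd + ack_dl + pc_dl
```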
Link-Level to System-Level Mapping — In order to reduce the simulation complexity, the effects of link-level technologies are abstracted into several curves, which are then used at the system level. This method is widely used in CDMA systems. For OFDMA-based systems such as mobile WiMAX, the effective SINR in one subchannel is needed; one subchannel has several subcarriers, which must be considered when calculating the effective SINR. The exponential effective SINR mapping (EESM) model is a typical solution. But when the bandwidth is large, such as 10 MHz with a fast Fourier transform (FFT) size of 1024, the amount of calculation is considerable and greatly reduces the simulation efficiency. Through simulation verification and validation, we found that if the effective SINR is calculated using the value on every fourth subcarrier instead of every subcarrier, the simulation results under the channel models mentioned above are not affected, while the simulation complexity is greatly reduced.
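For reference, a minimal EESM sketch with the fourth-subcarrier decimation described above; beta is the usual MCS-dependent calibration parameter fitted from link-level curves, and the function and argument names are ours.

```python
import numpy as np

def eesm_effective_sinr(sinr_linear, beta, step=4):
    """Exponential effective SINR mapping over one subchannel, sampling every
    `step`-th subcarrier (step=4 gives the complexity reduction reported
    above): sinr_eff = -beta * ln(mean(exp(-sinr_n / beta)))."""
    s = np.asarray(sinr_linear, dtype=float)[::step]
    return -beta * np.log(np.mean(np.exp(-s / beta)))
```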
Model I (Pedestrian B multipath model, speed 3 km/h, assignment probability 60%):

| Organization | 3GPP | 3GPP | 3GPP2 | 3GPP2 | WiMAX Forum | WiMAX Forum |
| Parameters | Power (dB) | Delay (ns) | Power (dB) | Delay (chips) | Power (dB) | Delay (ns) |
| Path 1 | 0 | 0 | -1.64 | 0 | -3.92 | 0 |
| Path 2 | -0.9 | 200 | -7.8 | 1.23 | -4.82 | 200 |
| Path 3 | -4.9 | 800 | -11.7 | 2.83 | -8.82 | 800 |
| Path 4 | -8.0 | 1200 | | | -11.92 | 1200 |
| Path 5 | -7.8 | 2300 | | | -11.72 | 2300 |
| Path 6 | -23.9 | 3700 | | | -27.82 | 3700 |

Models II and III (Vehicular A multipath model; speed 30 km/h with assignment probability 30% for Model II, and 120 km/h with assignment probability 10% for Model III):

| Organization | 3GPP | 3GPP | 3GPP2 | 3GPP2 | WiMAX Forum | WiMAX Forum |
| Parameters | Power (dB) | Delay (ns) | Power (dB) | Delay (chips) | Power (dB) | Delay (ns) |
| Path 1 | 0 | 0 | -0.9 | 0 | -3.14 | 0 |
| Path 2 | -1.0 | 310 | -10.3 | 1.23 | -4.14 | 310 |
| Path 3 | -9.0 | 710 | | | -12.14 | 710 |
| Path 4 | -10.0 | 1090 | | | -13.14 | 1090 |
| Path 5 | -15.0 | 1730 | | | -18.14 | 1730 |
| Path 6 | -20.0 | 2510 | | | -23.14 | 2510 |

(Blank cells: the corresponding model defines fewer paths.)

Table 1. Mixed channel model.
OVERALL SYSTEM CONFIGURATIONS

As mentioned, different systems operate with various parameters; some are configurable while others are hardware limited. Each organization recommends a set of simulation parameters, and these sets do not completely match one another. Therefore, in order to run simulations on a unified baseline, we compared the related documents [1, 2, 8] and summarize a unified configuration in Table 2. The results shown later are all based on these configurations.
FAIRNESS CALIBRATION

In order to provide a fair comparison between results, all of the systems are adjusted and
optimized so that the following two criteria are fulfilled:
• Their fairness curves (defined in [1, Sec. 2.1.6]) are similar, which ensures consistency in the comparison.
• Their fairness curves are as close to the fairness criterion as possible, which indicates that throughput is maximized.
The fairness curves for the EV-DO, HSUPA, and WiMAX systems are shown in Fig. 3. In addition, the percentage of users in soft/softer handoff must also be adjusted. As Fig. 3 shows, the percentage for EV-DO is about 30 percent (the step on the curve is due to soft/softer handoff), which equals that of HSUPA. Note that hard handover is used in mobile WiMAX.
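For illustration, the fairness curves of Fig. 3 can be computed from per-user throughputs as sketched below, assuming the normalization in [1] is by the mean user throughput; the function name is ours.

```python
import numpy as np

def fairness_curve(user_throughputs):
    """Return (normalized throughput, CDF) points for one system, to be
    compared against the other systems' curves and the fairness criterion."""
    t = np.sort(np.asarray(user_throughputs, dtype=float))
    normalized = t / t.mean()
    cdf = np.arange(1, t.size + 1) / t.size
    return normalized, cdf
```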
System configurations:

| Simulation parameter | Unit | Unified value |
| Operating frequency | GHz | 2 |
| Duplex | | FDD for EV-DO and HSxPA; TDD for WiMAX |
| Bandwidth | MHz | 1.25 for EV-DO on FL or RL; 4.5 for HSDPA or HSUPA; 10 for WiMAX DL and UL |
| Log-normal standard deviation | dB | 8.9 |
| Thermal noise density | dBm/Hz | -174 |
| Frequency reuse factor | | 1 |
| BS-to-BS distance | km | 2 |
| Minimum distance between BS and MS | m | 35 |
| Maximum RL total path loss (no fading) | dB | 140 |
| Antenna configuration | | FL: 1x1; RL: 1x2 |

Propagation model:

| Propagation model | dB | 128.1 + 37.6 log10(d), d in km |
| Antenna height of BS | m | 30 |
| Antenna height of MS | m | 1.5 |

BTS configurations:

| BTS Tx power | dBm | 43 per carrier in EV-DO; 43 per carrier in HSDPA; 43 per antenna in WiMAX |
| BTS Tx/Rx antenna gain with cable loss | dBi | 15 |
| Other loss | dB | 10 |
| BTS noise figure | dB | 5 |
| Antenna horizontal pattern | | 70° (-3 dB) with 20 dB front-to-back ratio |
| Correlation between BSs | | 0.5 |

MS configurations:

| MS Tx power | dBm | 23 per carrier in EV-DO; 21 per carrier in HSUPA; 23 in WiMAX |
| MS Tx/Rx antenna gain | dBi | -1 |
| MS noise figure | dB | 9 |
| Other losses | dB | 10 |

Table 2. Overall system configurations.
SYSTEM STABILITY CALIBRATION

On the RL/UL, system stability must be guaranteed. The percentage of time when the RoT (rise over thermal; interference over thermal, IoT, in WiMAX) is above a given bound is used to
measure the stability of the RL system. The total system resource is allocated only while the reverse link load does not exceed a predefined maximum threshold, which keeps the system stable. At the same time, the RoT (or IoT in WiMAX) outage criterion must be met; that is, the percentage of time when the RoT (or IoT) is above the upper bound shall not
exceed a limit. To keep the system sufficiently stable, the overall throughput can be traded off in some cases. Since this criterion greatly affects performance, it must be calibrated across the different systems in order to guarantee fairness. Figure 4 shows an example. When the RoT outage criterion is met in EV-DO, the average RoT is about 6.2 dB (the RoT at which the value on the y-axis is 0.5), but in the WiMAX system, when the IoT outage criterion is met, the average IoT is about 5 dB. If the average IoT were raised to 6.2 dB, the outage criterion would be violated. This indicates that the interference management mechanism in WiMAX can be further improved.
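The stability test applied during this calibration can be written compactly as below; the 7 dB bound and 1 percent outage limit are read from the curves in Fig. 4, and the function name is ours.

```python
import numpy as np

def rot_outage_ok(rot_db_samples, bound_db=7.0, max_outage=0.01):
    """Check the RoT (IoT in WiMAX) outage criterion: the fraction of time
    samples exceeding the bound must stay below the allowed outage."""
    outage = np.mean(np.asarray(rot_db_samples) > bound_db)
    return outage < max_outage, outage
```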
Figure 3. Fairness curves on RL/UL: CDF of normalized throughput for WiMAX, EV-DO, and HSUPA, compared against the fairness criterion.
OUTPUT METRICS

Different traffic types have different quality of service (QoS) requirements; for example, the latency requirement of VoIP is much stricter than that of FTP service. Hence, the simulation output metrics depend on the characteristics of the traffic. For packet data traffic, the sector/cell throughput is widely used to evaluate system performance. But throughput is correlated with system bandwidth and typically increases as more bandwidth is occupied; it therefore reflects only absolute outcomes and conceals efficiency. Spectrum efficiency is defined as the throughput divided by the effective bandwidth, with units of bits per second per hertz; higher efficiency means that more data can be transmitted over one time-frequency unit. For example, dividing the EV-DO FL throughput in Table 3 by its 1.25 MHz bandwidth yields roughly 0.85 b/s/Hz, the listed spectrum efficiency. Table 3 shows the throughputs and spectrum efficiencies of the different systems with 10 full-buffer FTP users per sector. Taking the FL/DL as an example, WiMAX occupies the widest bandwidth, yet its spectrum efficiency is the lowest of the three systems. More results can be found in [11].
FURTHER DISCUSSION
The models proposed in this article provide a unified method to evaluate the performance of various systems. The method eliminates many inconsistencies found in the previous literature while retaining the most essential parts. The results can also be used in other areas, such as network planning, and the methodology can be extended to compare the network costs of different systems; operators could lower their capital and operating expenditures by deploying the system with the higher spectrum efficiency. Note that system performance depends heavily on deployment settings, so the comparison results may differ from the outcomes listed above under a different configuration set.
Figure 4. CCDF curves of RoT/IoT with 10 users/sector: WiMAX UL with average IoT = 6.2 dB, WiMAX UL meeting P(IoT > 7 dB) < 1%, EV-DO RL meeting P(RoT > 7 dB) < 1%, and the 7 dB threshold.

CONCLUSIONS
In this article we propose a unified simulation methodology that can highlight the features of the air interfaces and the advantages of the different systems. Most importantly, the methodology compares the various systems' performance in a comprehensive manner. We give brief descriptions of the key algorithms, recommended parameter settings, platform calibration criteria, and output metrics. Based on these configurations, results are obtained which show that, under the considered settings, the EV-DO Rev. A and WCDMA HSDPA/HSUPA systems have higher spectrum efficiency than the OFDMA-based WiMAX system on the downlink, while on the uplink the WiMAX and EV-DO systems have similar spectrum efficiencies, both higher than that of the HSDPA/HSUPA system. The simulation methodology may also be extended for use in network planning.
ACKNOWLEDGMENT
The authors would like to thank the experts at Qualcomm for their insightful comments and suggestions. The authors would also like to thank Ruiming Zheng, Wei Bai, Qun Pan, Yang Hai, Li Chen, and Wenwen Chen for their discussions, as well as their efforts on the simulations involved. We are also very thankful to the anonymous reviewers, whose valuable comments helped us further improve the quality of this article.
Throughputs (Mb/s):

| System | WiMAX | EV-DO | WCDMA HSPA |
| FL/DL | 2.626 | 1.067 | 3.752 |
| RL/UL | 1.362 | 0.600 | 0.971 |

Spectrum efficiencies (b/s/Hz):

| System | WiMAX | EV-DO | WCDMA HSPA |
| FL/DL | 0.3848 | 0.8539 | 0.7504 |
| RL/UL | 0.4261 | 0.4798 | 0.1943 |

Table 3. Throughputs and spectrum efficiencies.
REFERENCES

[1] 3GPP2 C.R1002-0 v1.0, "cdma2000 Evaluation Methodology," Dec. 2004.
[2] 3GPP TR 25.896 v6.0.0, "Feasibility Study for Enhanced Uplink for UTRA FDD," Mar. 2004.
[3] 3GPP TS 25.214 v8.0.0, "Physical Layer Procedures (FDD)," Dec. 2007.
[4] WiMAX Forum, "Mobile WiMAX — Part II: A Comparative Analysis," May 2006.
[5] 3GPP TR 25.892 v6.0.0, "Feasibility Study for Orthogonal Frequency Division Multiplexing (OFDM) for UTRAN Enhancement," Aug. 2004.
[6] 3GPP R1-030518, "Nortel Networks' Reference Simulation Methodology for the Performance Evaluation of OFDM/WCDMA in UTRAN," May 2003.
[7] WiMAX Forum, "Mobile WiMAX — Part I: A Technical Overview and Performance Evaluation," Aug. 2006.
[8] ITU Document 8F/1079-E, "Additional Technical Details Supporting IP-OFDMA as an IMT-2000 Terrestrial Radio Interface," Dec. 2006.
[9] Y. Gao et al., "Performance Evaluation of Mobile WiMAX with Dynamic Overhead," Proc. IEEE 68th VTC, Sept. 2008.
[10] D. T. Chen, "On the Analysis of Using 802.16e WiMAX for Point-to-Point Wireless Backhaul," IEEE Radio Wireless Symp., Jan. 2007, pp. 507–10.
[11] Y. Gao et al., "System Spectral Efficiency and Stability of 3G Networks: A Comparative Study," IEEE ICC '09, June 2009, to be published.

BIOGRAPHIES
YUEHONG GAO [S] ([email protected]) received her B.Eng. degree in communication engineering from Beijing University of Posts and Telecommunications (BUPT) in 2004. She is now pursuing her Ph.D. degree in communications and information systems at BUPT. In August 2008 she joined the Center for Quantifiable Quality of Service in Communication Systems (Q2S) at the Norwegian University of Science and Technology (NTNU), Norway, as a visiting Ph.D. student. Her research interests focus on radio resource management and performance evaluation of wireless communications.

XIN ZHANG [M] ([email protected]) received his B.Eng. (1997) degree in communications engineering, M.Eng. (2000) degree in signal and information processing, and Ph.D. (2003) degree in communications and information systems from BUPT. He joined BUPT in 2003, working in the Wireless Theories and Technologies Laboratory, and focuses his research mainly on key technologies and performance analysis of air interfaces for wireless networks.

DACHENG YANG ([email protected]) received his M.S. (1982) and Ph.D. (1988) degrees in circuits and systems from BUPT. From 1992 through 1993 he worked at the University of Bristol, United Kingdom, as a senior visiting scholar, where he was engaged in Project Link-CDMA of the RACE program. In 1993 he returned to BUPT as an associate professor. Currently he is a professor at BUPT, working in the Wireless Theories and Technologies Laboratory, and his research interests are focused on wireless communications.

YUMING JIANG [M] ([email protected]) received his B.Sc. from Peking University, M.Eng. from Beijing Institute of Technology, and Ph.D. from the National University of Singapore. He worked with Motorola from 1996 to 1997. From 2001 to 2003 he was a research scientist with the Institute for Infocomm Research, Singapore. From 2003 to 2004 he was an adjunct assistant professor of electrical and computer engineering at the National University of Singapore. In 2004 he joined the Center for Quantifiable Quality of Service in Communication Systems (Q2S), Norwegian University of Science and Technology (NTNU), as an ERCIM Fellow. Since 2005 he has been a professor with the Department of Telematics at NTNU, and since 2008 he has been co-appointed as a professor at Q2S, NTNU.
MODELING AND SIMULATION: A PRACTICAL GUIDE FOR NETWORK DESIGNERS AND DEVELOPERS
Modular System-Level Simulator Concept for OFDMA Systems
Andreas Fernekeß and Anja Klein, Technische Universität Darmstadt
Bernhard Wegmann and Karl Dietrich, Nokia Siemens Networks GmbH & Co. KG
ABSTRACT

Modeling and simulation are essential for the performance evaluation of wireless systems and of specific algorithms used, for instance, during resource allocation. For current and future wireless systems, several properties have to be considered, such as data traffic, frequency-selective fading channels, and radio resource management. In this article a new system-level simulator concept is presented for packet-switched systems using OFDMA, called a snapshot-based SLS, which represents a compromise between state-of-the-art static and dynamic SLSs. The proposed methodology considers only short time intervals within the busy hour, represented by their statistics, so that RRM and data traffic can be considered, as in a dynamic SLS, but with lower computational complexity. This article describes how the cellular setup and traffic generation are performed for the proposed snapshot concept. Furthermore, a new methodology is proposed for a quality measure of resource units that is applicable to future wireless systems using an interleaved subcarrier allocation. Exemplary simulation results show that the developed concept is able to evaluate the performance of OFDMA systems considering the impact of, for example, data traffic and resource allocation strategies.
INTRODUCTION

It is essential to have accurate performance predictions available when wireless systems must be modeled or the performance of radio resource management (RRM) algorithms, such as resource allocation (RA), must be evaluated. For state-of-the-art circuit-switched systems like the Global System for Mobile Communications (GSM), the Erlang B equation describes the relationship between the amount of available resources, the traffic load (i.e., the number of active users), and the blocking probability [1]. For current and future packet-switched systems, more parameters have to be considered. Resources are no longer allocated exclusively to one user; instead, different scheduling and RA strategies can be applied [2]. The amount of transmitted
data per resource depends on the channel conditions, since link adaptation is performed. Multimedia traffic such as download or Web browsing traffic leads to varying traffic load conditions. Blocking is no longer critical if data traffic with little or no delay sensitivity has to be served (e.g., download or Web browsing traffic); nevertheless, users want to achieve a certain guaranteed data rate to be satisfied [3]. Analytical expressions for the system performance are hard to formulate; therefore, accurate system-level simulations (SLSs) have to be performed to obtain system performance results. In general, SLSs become very complex and time consuming if all parameters are considered. Depending on the requirements of the evaluation, several SLS concepts are possible; the two common concepts are known as static SLS and dynamic SLS [4, and references therein]. Static SLSs are well suited to obtaining results about system capacity, but they lack time dependencies, so RRM algorithms and small-scale fading can only be considered by average values. To develop and evaluate RRM algorithms, dynamic SLSs have to be used. The time resolution of a dynamic SLS has to be set to the time period of the property under investigation; for instance, to investigate RA in a system using time frames for transmission, the time resolution has to be set to the frame duration. While in a static SLS only stationary users with fixed bit rates are assumed, and small-scale fading and RRM algorithms are only considered by their average values, in a dynamic SLS small-scale fading and RRM algorithms are modeled in detail, as are call and packet arrival processes and user mobility. Therefore, dynamic SLSs are usually very complex and require great computational effort due to the long time window. Only systems using orthogonal frequency-division multiple access (OFDMA) are considered in the following, because OFDMA is a promising candidate for future cellular systems due to its favorable properties [5]; for instance, WiMAX and Long Term Evolution (LTE) use OFDMA. These systems are packet-switched to cope with multimedia traffic; therefore, sophisticated RA and link adaptation strategies gain importance. Furthermore, it can be observed that these sys-
tems have to be flexible (e.g., regarding the available system bandwidth [1]). In WiMAX this can be achieved by using different numbers of subcarriers with constant subcarrier spacing. Simulation and modeling of these systems requires that properties like scalable bandwidth, RRM algorithms, and data traffic be considered. In this article we present a new snapshot-based SLS concept called the OFDMA-based network performance simulator (ONe-PS). The proposed concept is designed especially for OFDMA networks and reduces the computational complexity compared to a dynamic simulator, while evaluation of RRM algorithms with data traffic models and adaptation to different system requirements remain possible. Random intervals are cut out of the busy hour and considered in ONe-PS. The proposed concept modifies the cellular setup and traffic generation compared to dynamic SLSs, so that independent snapshots can be generated that fulfill the statistical properties of the busy hour instead of modeling user movement and data traffic in a time-continuous fashion. Additionally, an information theoretic quality measure is proposed for the resource units (RUs) that can be allocated to users; it is independent of a specific system technology and can be applied to different technologies such as WiMAX or LTE. The quality measure describes average performance assuming that only one modulation scheme and one code rate are used on each subcarrier of the RU, and it is also used to perform RRM. The remainder of the article is organized as follows. In the next section the proposed snapshot-based concept of ONe-PS is presented and compared with state-of-the-art static and dynamic SLSs. The cellular setup for the new concept is described in the following section. We then give an overview of the generation of data traffic when using the proposed snapshot approach. The definition of the RU used during RRM is presented next. The methodology for the quality measure for RUs and the link to system-level interface used in ONe-PS are then presented. Some performance results are given; finally, conclusions are drawn.
SYSTEM-LEVEL SIMULATOR CONCEPT

In this section the proposed concept of ONe-PS is described. ONe-PS embodies a compromise between static and dynamic simulation, which we call snapshot-based SLS. The snapshot approach leads to lower computational complexity than a dynamic simulation because it considers independent short intervals of the busy hour, yet it provides more functionality than a static simulation because it can consider such things as data traffic models. A snapshot is a time interval selected randomly from the investigated busy hour. If the snapshot duration is short, some properties can be simplified to reduce the computational complexity; for instance, user movement has almost no influence, so the path loss and shadowing of a user can be assumed to remain constant. On the other hand, the snapshot includes enough frames that RRM algorithms can be evaluated and data traffic models considered. Each snapshot
contains statistical properties describing the busy hour, and a certain number of snapshots have to be considered to achieve reliable results. Within a snapshot the granularity is given by the frame or symbol duration, depending on the system configuration to be investigated. ONe-PS also has a modular structure to fulfill the requirements concerning simulation and modeling of future wireless systems; the modules can be adapted depending on the considered system. The five modules and their interaction are shown in Fig. 1. One module performs the cellular setup of the simulation, including propagation and channel models; there, for instance, the signal-to-interference-plus-noise ratio (SINR) is calculated and provided to the RRM module. The second module is responsible for the generation of data traffic and provides the data traffic volume to the RRM module. Between the cellular setup and traffic generation modules, the number of users per cell is exchanged, so that for each user one SINR value and one traffic pattern are derived. Within the RRM module, transmit buffers for the users are maintained and the resources used for transmission are provided (e.g., to perform RA); a frame is built and transferred to the link to system-level interface module. In the link to system-level interface module, the transmit conditions given by the cellular setup and the actual channel conditions are considered to determine the data rate of the transmitted data blocks. The results of the link to system-level interface module are collected and processed in the evaluation module.

Figure 1. Modular structure of ONe-PS: parameter inputs feed the cellular setup, data traffic generation, radio resource management, link to system-level interface, and evaluation modules.
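To make the module interaction concrete, here is a self-contained toy version of one snapshot; every stub below is an illustrative stand-in for the corresponding ONe-PS module, not its actual implementation, and all names and numbers are ours.

```python
import numpy as np

rng = np.random.default_rng(0)

def cellular_setup(n_users):            # stand-in: per-user SINR draw (dB)
    return rng.uniform(-5.0, 25.0, n_users)

def generate_traffic(n_users):          # stand-in: bytes queued per user
    return rng.exponential(2e6, n_users)

def rrm_schedule(sinr_db, buffers):     # stand-in: serve best backlogged user
    active = np.flatnonzero(buffers > 0)
    return int(active[np.argmax(sinr_db[active])]) if active.size else None

def run_snapshot(n_users=20, frames=1000, bytes_per_frame=1250.0):
    sinr = cellular_setup(n_users)
    buffers = generate_traffic(n_users)
    served = 0.0
    for _ in range(frames):             # frame granularity within the snapshot
        u = rrm_schedule(sinr, buffers)
        if u is None:
            break
        tx = min(buffers[u], bytes_per_frame)   # link-to-system stand-in
        buffers[u] -= tx
        served += tx
    return served                       # bytes carried; fed to evaluation
```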
CELLULAR SETUP

In this section the impact of the snapshot concept on the cellular setup is described. Usually in dynamic SLSs, path loss and shadowing are
calculated on demand and have to be updated as the subscriber station (SS) moves through the investigation area. With the proposed snapshot concept, path loss and shadowing can be assumed constant during each snapshot; only the fast fading conditions vary. Therefore, the path loss including shadowing can be calculated in advance of the snapshot, based on the distance between the base station (BS) and the SS [6, 7]. To avoid border effects due to a limited investigation area, a wraparound technique is used in which the investigation area is mapped onto a torus [6]. In ONe-PS a pool of possible path loss and shadowing values between SSs and BSs is generated in advance and stored in a file; for each snapshot, the values for the active SSs are taken randomly from the pool. Due to the high number of possible combinations, the pool does not need to be very large to generate independent snapshots that achieve statistical reliability, which reduces the computational complexity. Figure 2 shows the SINR due to path loss and shadowing in the downlink for different positions, as the outcome of the cellular setup module, for a specific investigation area with omnidirectional antennas, equidistantly spaced BSs, and all BSs transmitting with equal transmit power. The path loss is calculated with a path loss coefficient of 3.5, and shadowing is modeled by a log-normally distributed random variable with a standard deviation of 8 dB and a correlation distance of 150 m.

Figure 2. SINR (dB) in the investigation area due to path loss and shadowing in a regular cell structure.
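The pool mechanism might be sketched as follows, using the path loss exponent of 3.5 and the 8 dB shadowing standard deviation quoted above; the distance range, pool size, zero reference loss, and the neglect of shadowing correlation are all simplifying assumptions of this sketch, and the names are ours.

```python
import numpy as np

def build_loss_pool(n_entries, d_min=35.0, d_max=3000.0,
                    exponent=3.5, shadow_std_db=8.0, seed=0):
    """Precompute path loss plus shadowing (dB) for random SS-BS distances;
    each snapshot later draws its active links at random from this pool."""
    rng = np.random.default_rng(seed)
    d = rng.uniform(d_min, d_max, n_entries)           # SS-BS distances (m)
    loss = 10.0 * exponent * np.log10(d)               # distance-dependent loss
    loss += rng.normal(0.0, shadow_std_db, n_entries)  # log-normal shadowing
    return loss

pool = build_loss_pool(100_000)
snapshot_links = np.random.default_rng(1).choice(pool, size=250)
```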
TRAFFIC GENERATION

In this section the data traffic generation model for the proposed snapshot-based SLS concept is described. Data traffic is an important factor that has to be considered in the modeling and performance evaluation of wireless networks. One simplification that is often used is the full buffer traffic model [8]. With full buffer traffic, it is
assumed that each SS in the system always has enough data available that all allocated resources can be used for transmission; a cell is then fully loaded as long as one SS is active, and all SSs in the scenario are active for the same duration. With realistic data traffic models, it can be observed that SSs with low SINR remain in the system longer than SSs with high SINR, and that the amount of data available for transmission varies over time. Therefore, results obtained with the full buffer traffic model are more optimistic (e.g., in terms of cell throughput) than those obtained with realistic traffic models [8, 9]. Multimedia data traffic models are thus very important for the performance evaluation of wireless networks. Many appropriate and tested models for different applications, such as download, Web browsing, and speech traffic, can be found in the evaluation methodologies of standardization bodies such as the Third Generation Partnership Project 2 (3GPP2) and the European Telecommunications Standards Institute (ETSI). Static SLSs do not usually consider traffic modeling, due to its complexity. In conventional dynamic SLSs, data traffic is modeled in a time-continuous fashion: using a birth process (e.g., a Poisson process), new sessions are generated, and sessions vanish from the system after transmitting all their data. A different traffic generation model is necessary for the snapshot-based SLS concept proposed in this article, since only a snapshot of the busy hour is considered; statistics are therefore needed that describe each snapshot. The main problems are to define the buffer occupancy, the number of SSs active at the beginning of the snapshot, and the termination of SSs during the snapshot. The buffer occupancy at the beginning of the snapshot is a random variable given by the data size of the traffic model. The number of active SSs is obtained from the session duration and the arrival process considered. An SS terminates its session depending on how long it was already active before the snapshot, described by the session duration of the traffic model, and on the buffer occupancy statistics, which indicate how much data is still left for transmission. The number of SSs becoming active during the snapshot depends on the arrival process [3]. For the results of a specific SS (e.g., outage), the activity duration during the snapshot is considered, and the results are weighted by the duration for which the SS is active during the snapshot; SSs that are active only a very short time thus have less impact on the results than SSs that are active for the whole snapshot duration. The arrival of new packets at the BS is modeled as in a conventional dynamic SLS, and the frequency of packet arrival is called the service data rate (SDR). It is assumed that a predefined fraction of the SDR has to be achieved as active session throughput to keep the user satisfied and the quality of service (QoS) requirement fulfilled.
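A minimal sketch of drawing the SSs that are already active at the start of a snapshot is given below; Poisson arrivals, exponential file sizes, and a uniform residual fraction are illustrative stand-ins for the statistics of the actual traffic model, and all names are ours.

```python
import numpy as np

def init_snapshot_users(arrival_rate_hz, mean_session_s, mean_file_bytes, rng):
    """Number of initially active SSs follows from the arrival process and
    session duration; each gets a random residual buffer occupancy."""
    n_active = rng.poisson(arrival_rate_hz * mean_session_s)
    fresh = rng.exponential(mean_file_bytes, n_active)   # full file sizes
    residual = rng.uniform(0.0, 1.0, n_active) * fresh   # data still queued
    return residual

buffers = init_snapshot_users(0.5, 30.0, 2e6, np.random.default_rng(0))
```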
RADIO RESOURCE MANAGEMENT

RRM is an important issue for future wireless systems, and there is a lot of ongoing research in the area of RA algorithms [2]. It is not the focus
of this article to derive optimal RA algorithms; rather, a few state-of-the-art algorithms are considered. The focus is on the implementation of RRM algorithms, so that the impact of RRM on system performance can be investigated. To this end, an RU is defined in ONe-PS representing the smallest unit that can be allocated to an SS; it is defined by frequency bandwidth, time duration, and space. The RU represents a slot in WiMAX as well as a resource block in LTE. Both adaptive transmit modes with blockwise subcarrier allocation (e.g., adaptive modulation and coding in WiMAX) and diversity transmit modes with distributed subcarrier allocation (e.g., partial usage of subcarriers in WiMAX) can be modeled. For each RU, only one modulation scheme is used, and encoding is performed over the whole RU; therefore, one quality measure is needed for each RU. The methodology used to derive the proposed quality measure is described in the next section. If the sender is aware of the quality measure, RA and link adaptation can be performed for each RU based on it. Furthermore, with the RU concept used in ONe-PS, it is very easy to adapt the SLS design to the requirements of future wireless systems. For instance, the WiMAX standard, IEEE 802.16, allows different choices for the OFDMA physical layer regarding the system bandwidth, number of subcarriers, and frame duration; based on the assumptions made for these, only the number of RUs in ONe-PS has to be adjusted. Even multiple-antenna systems can be investigated with this concept, by modeling the spatial component as another layer of RUs. This makes ONe-PS a scalable SLS tool not limited to one specific system design with, say, a fixed system bandwidth or frame design.
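The RU abstraction can be captured by a small record type such as the following; the field names are ours, chosen to cover the frequency, time, and space dimensions described above.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class ResourceUnit:
    """Smallest allocatable unit: maps onto a WiMAX slot or an LTE resource
    block; subcarriers may be adjacent (adaptive modes) or distributed over
    the band (diversity modes)."""
    subcarriers: Tuple[int, ...]  # physical subcarrier indices
    symbol_start: int             # first OFDM symbol of the RU
    n_symbols: int                # duration in OFDM symbols
    layer: int = 0                # spatial layer for multi-antenna setups
```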
LINK TO SYSTEM-LEVEL INTERFACE

As mentioned in the previous section, it is necessary to determine a quality measure for each RU. The information theoretic methodology used to derive the quality measure in ONe-PS is described in this section, including the interface that maps link-level results to the system-level evaluation. Each RU may contain several subcarriers that may be physically neighboring or distributed over the whole system bandwidth. If the subcarriers are physically neighboring, it can be assumed that the fading channel is constant over the whole RU, and the performance of the RU can be described, say, by one pilot subcarrier located in the center of the RU. If the subcarriers are distributed over the whole system bandwidth, as in the default transmit mode of the WiMAX system, frequency diversity is achieved; in this case the performance of one RU containing several subcarriers can no longer be described by one pilot subcarrier. An effective signal-to-noise ratio (SNR) is instead calculated, indicating the performance that can be achieved when using the RU. In ONe-PS, the SINR is calculated for each subcarrier and
OFDM symbol, depending on the fading channel coefficients. Instead of calculating the expectation value of the SINR conditions on the subcarriers, the mutual information is calculated for each subcarrier considering the modulation used for the RU. This takes into account that very bad channel conditions have a stronger impact on performance than very good channel conditions. Afterward, the average mutual information for each RU is derived, and an effective SNR can be given that would lead to the same mutual information for the RU. The relationship between average mutual information and effective SNR can be seen in Fig. 3, showing results obtained from Monte Carlo simulations for different modulation types.

Figure 3. Average mutual information (b/s/Hz) as a function of the effective SNR (dB) for QPSK, 16-QAM, 64-QAM, and 256-QAM.

The effective SNR is used as a performance measure during RA and link adaptation in the RRM module. It is also used to determine the block error probability (BLEP) of the RU after decoding at the receiver, which is known as effective SNR mapping and is a link to system-level interface widely used when evaluating OFDMA systems [10]. BLEPs are obtained from link-level simulations implementing all necessary transmitter and receiver structures of the system.
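This mapping can be sketched compactly: per-subcarrier SNRs are pushed through the modulation-specific MI curve of Fig. 3, averaged over the RU, and the curve is inverted to recover the effective SNR. The SNR grid and names below are illustrative assumptions; the MI curve itself would come from the Monte Carlo tables.

```python
import numpy as np

SNR_GRID_DB = np.linspace(-5.0, 30.0, 71)  # grid on which MI curves are tabulated

def effective_snr_db(subcarrier_snr_db, mi_curve):
    """mi_curve: tabulated average mutual information (b/s/Hz) on SNR_GRID_DB
    for the RU's modulation. Returns the SNR giving the same average MI."""
    mi = np.interp(subcarrier_snr_db, SNR_GRID_DB, mi_curve)
    return float(np.interp(mi.mean(), mi_curve, SNR_GRID_DB))  # inverse lookup
```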
EVALUATION AND PERFORMANCE RESULTS

In this section some performance results are given. The investigated resource allocation strategies described in the following are well known, so only example results are presented in this article. However, it is shown that the proposed SLS concept has no limitations regarding the results that can be achieved compared to dynamic SLSs, which have higher computational complexity. ONe-PS is able to provide results for the SINR experienced by the SS, the active session and cell throughput, as well as the delay, packet
error probability, and SS outage. Some example results obtained for a specific scenario, showing the impact of data traffic and RA strategies, are discussed in the following; the main system and simulation parameters can be found in Table 1. A cellular wireless system in the downlink is considered. The system does not follow any specific standard, but general conclusions for wireless OFDMA systems can be drawn. Without loss of generality, omnidirectional antennas at the BSs and SSs are considered, and the BSs are placed in the centers of hexagonal cells. SSs are distributed uniformly in the simulation area and assigned to the BS with the lowest path loss. The number of users placed in
each cell depends on the traffic model. The number of active users is chosen such that, on average, an equal amount of data is offered to each cell for transmission. The system contains transmit blocks (TBs) of four RUs that are used for the transmission of data to the SSs. A TB can last one, two, or four RU durations; after each RU duration, two TBs become free for RA. Four different RA strategies are considered in this investigation: a TB can be allocated randomly (RR), so that the minimum active session throughput in a cell is maximized (MMT), so that the allocated number of RUs is proportionally fair (PF), or so that RUs are allocated to the SS with the best SINR
| Parameter | Value |
| Cellular structure | 25 hexagonal cells, frequency reuse of 1, omnidirectional antennas in the cell center, wraparound |
| Site-to-site distance | 1000 m |
| Path loss exponent | 3.5 |
| Standard deviation of shadow fading | 8 dB |
| User velocity | 8.3 m/s |
| Channel model | ITU vehicular A channel type |
| Center frequency | 450 MHz |
| System bandwidth | 1.25 MHz |
| FFT size | 128 |
| Subcarrier spacing | 11 kHz |
| Symbol duration including cyclic prefix | 100 μs |
| RU bandwidth | 264 kHz |
| RU duration | 1.4 ms |
| Noise floor | -174 dBm/Hz |
| Transmit power for traffic channels | 41.7 dBm |
| Modulation | QPSK, 16-QAM, 64-QAM, 256-QAM |
| Coding | Low-density parity check codes; code rates 1/6, 1/3, 1/2, 2/3, 3/4, 5/6 |
| RA aims | Random allocation (RR), proportional fairness in resources (PF), maximizing the minimum SS throughput (MMT), maximizing the sum throughput in the cell (MST) |
| Service data rate / data traffic combinations | Download and Web browsing with 100 kb/s; download with 1 Mb/s |
| Average traffic load per snapshot and cell | 10 Mbytes |
| User outage | If active session throughput is less than 10% of the service data rate |
| Snapshot duration | 10 s |
| Number of snapshots | 200 |

Table 1. Main system and simulation parameters. FFT: fast Fourier transform.
conditions (MST). Other RA strategies are also possible but are not considered in this article. Two different data traffic types, download and Web browsing traffic, are considered in this investigation, with different SDRs. For download traffic one file has to be downloaded, while Web browsing traffic consists of packet calls of small size; the packet calls are interrupted by a reading time period during which the SS goes idle. A user is considered unsatisfied if the active session throughput is below 10 percent of the SDR. To make the different scenarios comparable, the total amount of data provided within one snapshot remains constant; the number of users per cell depends on the amount of data each session contributes in one snapshot. In the following, results for the distribution of the cell throughput and of the active session throughput normalized to the SDR are presented. Curves of the same color represent the same RA strategy; curves of the same line style represent the same data traffic type and SDR. Figure 4 shows the cumulative distribution function (cdf) of the throughput measured per cell of the simulation area, averaged over one snapshot. The cell throughput in the assumed scenario for download traffic with an SDR of 100 kb/s and RR allocation is between 440 and 950 kb/s in 90 percent of the investigated cells. It can be seen that the RA strategy influences the cell throughput distribution. If MST or PF is used, higher cell throughput is achieved than with RR; both strategies perform equally because they allocate RUs only to SSs with the best actual SINR conditions, or with good actual SINR conditions compared to their average SINR conditions. Cell throughput of more than 1 Mb/s can be achieved in approximately 20 percent of the cells. MMT leads to worse cell throughput results: less than 730 kb/s in 90 percent of the cells. Looking at the results for download traffic with an SDR of 1 Mb/s, it can be seen that 35 percent of the cells contain no active SSs, and no cell throughput is achieved during the snapshot. On average only one SS is active simultaneously per cell, while in the scenario with an SDR of 100 kb/s, on average 10.6 SSs are active in parallel per cell to provide the same total amount of data per snapshot. Due to the small number of SSs, all RA strategies perform equally well for download traffic with an SDR of 1 Mb/s, and only the result for RR is given in Fig. 4. For Web browsing traffic with an SDR of 100 kb/s, a high number of SSs are also active simultaneously, on average 40 per snapshot and cell, but not all of them have data available for transmission due to the reading time period; thus, RUs cannot be fully utilized and the cell throughput is reduced. Therefore, all RA strategies perform worse than in the scenario with download traffic, because fewer SSs are available for selection. The reduction in the median cell throughput is 150–200 kb/s for all RA strategies, and the cdfs for RR and PF are given in Fig. 4. Figure 5 shows the cdf of the active session throughput normalized to the SDR. A normalized active session throughput of 1 means that the active session throughput is equal to the SDR and all data is transmitted immediately. SSs are in outage if the normalized active session throughput is below 0.1.
Figure 4. Distribution of cell throughput (cdf versus cell throughput in kb/s) for the MMT, RR, PF, and MST strategies. Solid lines: download with SDR 100 kb/s; dashed line: download with SDR 1 Mb/s; dash-dotted lines: Web browsing with SDR 100 kb/s.
Figure 5. Distribution of normalized active session throughput (cdf) for the MMT, RR, PF, and MST strategies. Solid lines: download with SDR 100 kb/s; dashed line: download with SDR 1 Mb/s; dash-dotted lines: Web browsing with SDR 100 kb/s.
The cdf of the normalized active session throughput is also a fairness measure: the steeper the cdf, the fairer the system. For download traffic with an SDR of 100 kb/s, only 0.4 percent of the SSs are in outage, and nearly 50 percent of the SSs achieve the SDR as their active session throughput. PF and MMT lead to similar results regarding SS outage, but PF achieves a normalized active session throughput of 1 for over 60 percent of the SSs, while MMT provides the maximum normalized active session throughput to only 15 percent of the SSs. MST is more unfair than the other algorithms by privileging
SSs with good SINR conditions, so that nearly 4 percent of the SSs are in outage. The best trade-off between fairness and performance is given by PF. If download traffic with an SDR of 1 Mb/s is considered, a high number of SSs are again in outage (3.6 percent) due to the high QoS requirements. The different RA strategies again perform equally due to the low number of simultaneously active SSs (on average one SS per cell), and only the results for RR are presented in Fig. 5. For Web browsing traffic with an SDR of 100 kb/s, outage results similar to those for download traffic are achieved. A higher number of SSs are active simultaneously per cell, but only a fraction of them have data for transmission and are considered during RA. For RR allocation, the lower number of SSs available during RA for Web browsing than for download traffic leads to a higher number of RUs allocated to each single user, so on average a higher normalized active session throughput than for download traffic is achieved, as depicted in Fig. 5. Adaptive RA, like MMT, MST, or PF, performs worse when fewer SSs are available for selection, due to lower multiuser diversity; this effect compensates for the higher number of RUs that can be allocated to each single user, so the normalized throughput is smaller for MMT, MST, and PF for Web browsing than for download traffic. The results for RR and PF are shown in Fig. 5.
CONCLUSION

A snapshot-based modular SLS concept has been presented in this article. The proposed SLS concept is used for the modeling and evaluation of OFDMA-based systems like WiMAX and LTE. The snapshot-based concept has lower computational complexity than a conventional dynamic SLS yet still considers the influence of data traffic and RRM. Path loss and shadowing remain constant during each snapshot, so a pool of values can be calculated in advance of the simulation run, and combinations are taken randomly from the pool to form independent snapshots with random SS positions. A data traffic model is presented for the proposed concept that cuts a random interval out of the busy hour, with SSs that are already active at the beginning of the snapshot and SSs that enter or leave the system during the snapshot. Additionally, a definition of an RU is presented that enables ONe-PS to model different wireless systems with grouped subcarriers that may be physically neighboring or distributed over the whole system bandwidth. Furthermore, the RU concept introduces flexibility in terms of system bandwidth and frame duration, so the concept is not limited to one specific system design but can easily be adapted to different demands; this is very important because future wireless systems are expected to have this flexibility as a requirement. An information theoretic quality measure is proposed for the RU, based on the mutual information of each subcarrier, that is also used during RA and link adaptation. The performance results presented show the ability of the concept to model and evaluate the influence of RRM and data traffic on user and system performance.
REFERENCES

[1] A. R. Mishra, Fundamentals of Cellular Network Planning and Optimisation, Wiley, 2004.
[2] K. B. Letaief and Y. J. Zhang, "Dynamic Multiuser Resource Allocation and Adaptation for Wireless Systems," IEEE Wireless Commun., vol. 13, no. 4, Aug. 2006, pp. 38–47.
[3] A. Fernekeß et al., "Influence of Traffic Models and Scheduling on the System Capacity of Packet-Switched Mobile Radio Networks," Proc. IST Mobile & Commun. Summit, Myconos, Greece, June 2006.
[4] D. Soldani, A. Wacker, and K. Sipilä, "An Enhanced Virtual Time Simulator for Studying QoS Provisioning of Multimedia Services in UTRAN," MMNS, LNCS 3271, Springer, Dec. 2004, pp. 241–54.
[5] R. van Nee and R. Prasad, OFDM for Wireless Multimedia Communications, Artech House, 2000.
[6] J. Zander and S.-L. Kim, Radio Resource Management for Wireless Networks, Artech House, 2001.
[7] T. S. Rappaport, Wireless Communications: Principles and Practice, Prentice Hall, 2002.
[8] C. F. Ball et al., "Performance Evaluation of IEEE 802.16 WiMAX with Fixed and Mobile Subscribers in Tight Reuse," Euro. Trans. Telecommun., vol. 17, Mar. 2006, pp. 203–18.
[9] A. Fernekeß et al., "Influence of High Priority Users on the System Capacity of Mobile Networks," Proc. Wireless Commun. Net. Conf., Hong Kong, China, Mar. 2007.
[10] E. Tuomaala and H. Wang, "Effective SINR Approach of Link to System Mapping in OFDM/Multi-Carrier Mobile Network," Proc. Int'l. Conf. Mobile Tech., Apps., and Sys., Nov. 2005.
BIOGRAPHIES

ANDREAS FERNEKEß ([email protected]) received his diploma in electrical engineering and information technology with a focus on communications engineering from Technische Universität Darmstadt, Germany, in 2004. Since 2005 he has been pursuing his Ph.D. at the Institute of Telecommunications of Technische Universität Darmstadt as a member of the research group of the Communications Engineering Laboratory. His research interests are in the planning and dimensioning of packet-switched wireless systems, and the impact of data traffic and radio resource management on system performance.

ANJA KLEIN received her diploma and Dr.-Ing. (Ph.D.) degrees in electrical engineering from the University of Kaiserslautern, Germany, in 1991 and 1996, respectively. From 1991 to 1996 she was a member of the staff of the Research Group for RF Communications at the University of Kaiserslautern. In 1996 she joined Siemens AG, Mobile Networks Division, Munich and Berlin, where she was active in the standardization of third-generation mobile radio in ETSI and 3GPP, and was a vice president, heading a development department and a systems engineering department. In May 2004 she joined Technische Universität Darmstadt as a full professor, heading the Communications Engineering Laboratory. Her main research interests are, on the one hand, mobile radio, including multiple access and transmission techniques such as single- and multicarrier schemes and multi-antenna systems, and, on the other hand, network aspects such as resource management, network planning and dimensioning, cross-layer design, and relaying and multihop. She has published over 140 refereed papers and has contributed to five books. She is an inventor and co-inventor of more than 45 patents in the field of mobile radio. In 1999 she was named Inventor of the Year at Siemens AG. She is a member of Verband Deutscher Elektrotechniker-Informationstechnische Gesellschaft (VDE-ITG).

BERNHARD WEGMANN received his Dipl.-Ing. degree in 1987 and Dr.-Ing. (Ph.D.) degree in 1993, both in communication engineering, from the University of Technology Munich. He joined the Communications Group of Siemens AG in 1995, where he dealt with several areas such as R&D, standardization, strategic product management, and network engineering. Along with the merger of the Communications Group of Siemens AG and the Network Group of Nokia in April 2007, he moved to Nokia Siemens Networks, where he is currently with a research group on future mobile radio systems. His professional interests are radio transmission techniques, radio resource management, and radio network deployment.
IEEE COMMUNICATIONS MAGAZINE CALL FOR PAPERS/FEATURE TOPIC

FEMTO CELLS

Femto cells are small base stations (BTSs) operating in the regulated cellular bands. They are so small and inexpensive that they can be placed in individual homes, connected to the operator's network via DSL, cable, or even WiMAX broadband access. The primary drivers for femto cell deployment are improved indoor coverage, delivery of high bandwidth to cellular consumers, and capacity off-load of the macro cellular networks. Femto cells are also expected to allow mobile operators that are looking for new ways to differentiate themselves in the market to offer new "femtozone" types of services. Femto cells have the potential to significantly transform the way mobile operators grow their coverage and network capacity, and to become a major component of operators' business models. For these reasons, telecom operators, network infrastructure suppliers, innovative startups, and system integrators are vigorously pushing this emerging technology.

In addition, both 3GPP and 3GPP2 have been making rapid progress in standardizing femto cell architectures and requirements, and in studying femto cell related issues such as handover scenarios and interference. These work items were completed in 2008. The first Home NodeB (the 3GPP-adopted term for a femto cell) specifications are part of 3GPP R8, out in December 2008. Standardization will continue, and further information will be part of the R9 release, due in December 2009. In parallel to the 3GPP activities, the Femto Forum was created in 2007 and now counts more than 80 members, including mobile operators, telecom hardware and software vendors, content providers, and innovative start-up companies. Its mission is to advance the development and adoption of femto cell products and services as the optimum technology for the provision of high-quality 3G coverage and premium services within the residential and SME markets. Mobile operators worldwide started femto cell field trials in 2008, and the first commercial deployments of femto cell solutions are expected in the second half of 2009.

Suggested topics include (but are not limited to) the following subject categories:
• Overview of relevant standards such as 3GPP and Femto Forum
• Comparisons of different end-to-end architecture alternatives
• Femto radio issues: propagation models, frequency allocation, and interference
• Architectures and techniques for low-cost femto radios
• Operator femto trial initiatives, results, and lessons learned
• Femto cell business models and deployment scenarios
• Algorithms for QoS and traffic management
• Access control, provisioning, and security issues
• Femto cell scalability considerations
• Femto cell based services and applications
• Regulatory aspects
• Comparison with UMA/GAN, WLAN
• Femto cells and IMS
• Network planning of femto cell deployments
SUBMISSION

Articles should be written in a style comprehensible to readers outside the specialty of the article. Mathematical equations should not be used (in justified cases up to three simple equations may be allowed, provided you have the consent of the Guest Editor; the inclusion of more than three equations requires permission from the Editor-in-Chief). Articles should not exceed 4500 words, and figures and tables should be limited to a combined total of six. Complete guidelines for prospective authors can be found at http://www.comsoc.org/pubs/commag/sub_guidelines.html. Please send PDF (preferred) or MS Word formatted papers by April 1, 2009 to Manuscript Central (http://commag-ieee.manuscriptcentral.com), register or log in, and go to the Author Center. Follow the instructions there and select the topic "September 2009/Femto Cells."
SCHEDULE

Submission deadline: April 1, 2009
Publication date: September 1, 2009
GUEST EDITORS

Fabio Chiussi, Airvana: [email protected]
Deepak Kataria, HCL America: [email protected]
Dimitris Logothetis, Ericsson Hellas: [email protected]
Indra Widjaja, Bell Laboratories: [email protected]
MODELING AND SIMULATION: A PRACTICAL GUIDE FOR NETWORK DESIGNERS AND DEVELOPERS
High-Fidelity and Time-Driven Simulation of Large Wireless Networks with Parallel Processing Hyunok Lee, Vahideh Manshadi, and Donald C. Cox, Stanford University
ABSTRACT

Parallel processing is a promising technique for reducing the execution time of large and complex wireless network simulations. In this article, a parallel processing technique is presented for time-driven simulations of large and complex wireless networks. The technique explicitly considers the physical-layer details of wireless network simulators, such as shadowing and co-channel interference, and uses multiple processors interconnected by high-speed links. Principles and issues are elaborated, including workload partitioning, synchronization, and database design. Numerical results from two different wireless network simulators employing the technique are presented to demonstrate its performance. The results indicate that the technique can significantly improve the run-time performance of time-driven simulations of large and complicated wireless networks of different types.
INTRODUCTION
This work was supported in part by the U.S. Army Research Office DURIP Instrumentation Grant Award no. W911NF-05-10124, the National Science Foundation under Award no. CNS-0548767 and 0627085, the Center for Integrated Systems (CIS) at Stanford University, and Ericsson Inc.
Wireless networks are growing rapidly in both size and complexity, and it is becoming increasingly difficult to investigate such large and complex networks analytically. Therefore, powerful computer simulation tools are becoming indispensable in studying such networks. However, accurate and realistic simulation of large wireless networks, incorporating all relevant models, requires significant computing power such that run times on the order of tens of days are required for a single network configuration and parameter set. To reduce the execution time of a large wireless network simulation, several studies have investigated the parallelization of simulation using discrete event simulation (DES) techniques [1–10]. WiPPET [1, 2] is a parallel simulation testbed for wireless mobile networks. It partitions the network into zones and distributes the task of received-power calculation among them. As a result, it obtained a speed-up gain of almost six for eight processors in some scenarios.
However, when the network was partitioned geographically, essentially no speed-up gain was achieved due to the substantial communication overhead among zones. SWiMNet [3] is another parallel simulation framework for wireless mobile cellular networks. It divides the network based on cells and assigns cells to logical processes (LPs). LPs then execute the simulation in parallel utilizing pre-computed events. For a simple channel assignment scheme, the simulator achieved a speed-up gain of 12 using 16 processors. However, speed-up gains remain unclear for channel assignment schemes that entail interactions among LPs. Moreover, the simulator does not include radio-signal propagation and thus does not consider the communication overhead among LPs required for calculating received signal and interference levels. The parallel version [4] of GloMoSim [5] is yet another example of a parallel wireless network simulator. Reference [4] proposes optimization techniques to reduce the communication and synchronization overhead for parallel simulation based on DES techniques and demonstrates performance improvements.

For time-driven wireless network simulation, however, few studies on parallelization techniques exist. Time-driven simulators are particularly useful when details of the physical-layer characteristics and behavior are to be incorporated into the simulator, because the large number and high frequency of "events" generated by the detailed models would lead to an implementation overhead that is too heavy for event-driven simulators, as demonstrated in [2, 4]. Because parallelization techniques based on event-driven simulation do not directly apply to time-driven simulators, we present a new parallelization technique for time-driven simulators of large and complex wireless networks [11]. Similar to the works previously mentioned, the technique partitions the network into subareas and assigns one processor to each subarea. Yet it focuses on time-driven simulation with substantial physical-layer details such as radio-signal propagation and interference.
We identify issues related to the time-driven nature of the simulation in such key areas as database synchronization, database design, and inter-processor communication, and we propose schemes in those areas for effective and efficient parallelization. The enormous computational complexity and communication overhead of incorporating such details are addressed by a unique computing platform that comprises multiple processors with large memory, interconnected by high-speed links. Such a computing platform was made possible by recent advances in supercomputing blade technologies and parallel-processing techniques.

The remainder of this article is organized as follows. In the next section, we describe the simulation platform and illustrate the principles and issues of the parallelization technique. In the following section, the technique is applied to two different types of wireless networks (i.e., circuit-oriented mobile cellular networks with comprehensive resource management schemes, and packet-oriented large wireless mesh networks) and is shown to reduce run times significantly. The last section concludes the article with final remarks.

[Figure 1. Primary communication pattern among processors in one simulation time step: a) geography-based; b) channel-based workload partitioning. Pi denotes a processor.]
PRINCIPLES OF THE TECHNIQUE

There are three major issues to consider in developing a parallel simulator: workload partitioning, database synchronization among multiple processors, and the data structure design of the simulator. In this section, we discuss each of these issues in detail. First we describe our simulation platform.
SIMULATION PLATFORM
We consider a modular supercomputing platform comprising 32 AMD Opteron 64-bit processors running at a 2.0-GHz clock speed with 2 GB of memory. Groups of four processors are interconnected by 2-GB/s InfiniBand links, and the processors within a group are connected by 8-GB/s links. On the platform, we use a version [12] of the Message Passing Interface (MPI), a widely used standard library for parallel processing.
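As a concrete starting point, the sketch below shows the minimal MPI boilerplate by which each processor learns its identity, which later determines its assigned subarea. It is written in Python with the mpi4py binding for brevity (the simulator itself would presumably use the C MPI bindings), and the file name in the comment is illustrative.

# Minimal MPI start-up sketch (Python/mpi4py).
# Run with, e.g.:  mpiexec -n 32 python sim.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()       # this processor's id: 0 .. n_procs-1
n_procs = comm.Get_size()    # total number of processors (up to 32 here)

print(f"processor {rank} of {n_procs} ready")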
WORKLOAD PARTITIONING
Overview — To gain from parallel processing, the workload must be distributed efficiently
among multiple processors. For simulating a large wireless network, two general approaches can be considered [2]: geography-based partitioning and channel-based partitioning. In geography-based partitioning, we divide the network into subareas and assign one processor to each subarea. In this case, because co-channel interferers are handled by multiple processors, processors must exchange co-channel interference information among themselves. In channel-based partitioning, on the other hand, processors are assigned a subset of channels, and co-channel interferers on each channel are handled by a single processor. Operations across different subsets of channels assigned to different processors are handled by a more powerful processor called a master processor.

Geography-Based Partitioning — Figure 1 illustrates the primary communication pattern among processors in one simulation time step under both types of workload partitioning. In Fig. 1a, under geography-based partitioning, processors first process their assigned set of network entities independently, without communicating with one another. After all of the processors finish updating the network entities, they collectively communicate with one another to exchange the updates. These exchanged updates relate mainly to updated interference information (for example, changes in the locations of mobile users and in transmit power levels), all of which must be known to the other processors before they proceed to the next time step. The size of each subarea is calculated proportionally to the workload in the subarea, primarily the traffic load, so that the workload is distributed uniformly among processors. For example, for a uniform traffic load, subareas are equally sized. In the current version of the technique, the workload is distributed at the beginning of a simulation according to the input traffic load and remains the same throughout; that is, workload partitioning is not adaptive to varying network conditions over the course of the simulation. The interference information exchange among processors in every time step incurs considerable communication overhead,
and it may overshadow the gain from distributing the workload among processors, as reported in [2, 4]. Thus, fast communication links among processors are critical to gaining from geography-based workload partitioning. Owing to the high-speed InfiniBand links, the technique proposed in this article, which employs geography-based workload partitioning, was indeed able to achieve significant speed-up gains, as demonstrated in the next section.

Channel-Based Partitioning — In channel-based partitioning, illustrated in Fig. 1b, processors first update the network entities on their subset of channels independently, similar to the case of geography-based partitioning. After they finish updating, all the processors transfer the updates to the master processor. The master processor then performs operations that span different subsets of channels, such as handoff and channel selection for new users, based on the updates from the processors, and broadcasts the results back to the processors. We note that because co-channel interferers on a channel are handled by a single processor, operations performed by different non-master processors are orthogonal; thus, the processors are not required to exchange interference information among themselves directly. They communicate only with the master processor for inter-processor operations. As a result, this approach incurs less communication overhead than the geography-based approach. Because this partitioning approach places significantly more workload on the master processor, it is more appropriate when one processor is more powerful than the others. In addition, this approach might not be applicable to a wireless network in which orthogonal channels are not clearly defined, such as one running the 802.11 medium access control (MAC) on a single-frequency channel. Moreover, the parallelization gain may be limited when the number of orthogonal channels available in the simulated network is less than the number of available processors.

Adopted Scheme — Considering our computing platform with 32 equal processors and a very fast communication architecture among the processors, we choose a geography-based partitioning approach: the network is divided into subareas, and each subarea is handled by one processor. Network entities are then assigned to one of these processors, primarily on the basis of their geographical locations: an entity is assigned to the processor that handles the subarea where the entity is located. Users, however, are typically assigned to the processor that handles the network entity with which they are primarily associated, for example, a base station in a cellular network. If users were instead assigned to processors according to their own locations, every data exchange between a user and an associated network entity located in a different subarea would have to be transferred between the two corresponding processors, resulting in a large inter-processor communication overhead. Later in this section, we provide more details on designing the database according to the selected geography-based partitioning.
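The following is a minimal sketch of this assignment rule in Python. It assumes a one-dimensional strip partition and simple dictionary records; the field names and the load-balancing heuristic are ours, not taken from the paper.

def subarea_edges(traffic_per_column, n_procs, width):
    # Split [0, width) into n_procs vertical strips whose total offered
    # traffic is roughly equal, so each processor gets a similar workload.
    total = sum(traffic_per_column)
    target = total / n_procs
    col_w = width / len(traffic_per_column)
    edges, acc = [0.0], 0.0
    for i, load in enumerate(traffic_per_column):
        acc += load
        if acc >= target * len(edges) and len(edges) < n_procs:
            edges.append((i + 1) * col_w)
    edges.append(width)
    return edges            # processor p owns [edges[p], edges[p+1])

def owner(x, edges):
    for p in range(len(edges) - 1):
        if edges[p] <= x < edges[p + 1]:
            return p
    return len(edges) - 2   # right boundary belongs to the last strip

def assign(base_stations, users, edges):
    # Base stations go to the processor owning their location; users follow
    # their serving base station, not their own position, so the frequent
    # user-to-base-station exchanges never cross processor boundaries.
    for bs in base_stations:
        bs["proc"] = owner(bs["x"], edges)
    for u in users:
        u["proc"] = u["serving_bs"]["proc"]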
SYNCHRONIZATION
Because the database of a parallel simulator is updated by multiple processors simultaneously, it is critical to properly synchronize the database updates among processors. In a single-processor sequential program, an update is immediately written to the database, and the processor instantly refers to the updated database for subsequent operations; in this way, synchronization is handled trivially. In a parallel program, on the other hand, when a processor handles its assigned set of network entities during a simulation time step, it does not have access to the updates that other processors make during the same time step until the processors communicate with one another. This limited access leads to two major considerations: maintaining a duplicate of part of the database for intra-processor updates, and transferring updates for inter-processor updates.

Intra-Processor Synchronization — To manage intra-processor synchronization, we maintain a duplicate of the part of the database that is constantly referred to, as well as updated, by other processors, for example, the part that contains interference-related information. With the duplicate, the processors refer to the original database when computing updates and write the updates onto the duplicate. Consequently, the original database remains intact over one simulation time step, and it is ensured that all processors refer to an identical database. Keeping the duplicate is required because if we allowed processors to write updates to the database immediately, as in the sequential program, the replicas of the database in different processors would become dissimilar and even inconsistent over one simulation time step. As a result, processors could operate on incompatible replicas of the database. In maintaining the duplicate, it is critical to ensure that a particular resource (e.g., a frequency channel) that is found to be available at the beginning of a simulation time step is not allocated to multiple network entities over the same time step. To resolve such conflicts that are internal to a single processor, the processor checks the resource availability in the duplicate database as well as the original one when computing updates. If a conflict occurs between different processors, the involved processors collectively resolve it, as explained later. Finally, we note that any part of the database that is accessed by only one processor does not require a duplicate. More details on database design in consideration of the duplicate maintenance are given later in this section.

Inter-Processor Synchronization — Inter-processor synchronization handles situations in which a processor must update a part of the database assigned to other processors. Typically, these updates arise at the boundaries of subareas, where adjacent network entities interacting with each other may be located in different subareas handled by different processors.
For example, in a mobile cellular network, a base station might be required to hand off a user located close to a subarea boundary to another base station outside the subarea to which the sending base station belongs. Or, in a multihop wireless network, a router near a boundary may be required to transmit a packet to an adjacent router in another subarea. These updates are stored in an additional data structure; more details on database design for inter-processor updates are provided later. After processors finish updating their assigned network entities, they transfer the data structure of inter-processor updates to the corresponding destined processors. The destined processors then incorporate the transferred updates into the updated database. When incorporating updates transferred by different processors during a simulation time step, conflicts can occur in which the same resource is claimed by multiple network entities. This type of conflict is inherent in any parallel program in which more than one processor can modify the same part of the database, and it must be resolved depending on the context of the simulated network. For example, consider a mobile cellular network and a user in one subarea handed off to a base station in another subarea. In that case, a simple scheme can be devised as follows. When a destined base station processes the users transferred from other subareas in other processors, it checks, for each of the users, whether the initially assigned channel was taken by other users during the same simulation time step. If not, it simply confirms the assignment and updates the database accordingly. Otherwise, it attempts another assignment process to find a new channel. If a channel is found, the new channel is assigned to the user; otherwise, the user is blocked or dropped.

[Figure 2. Database structure and operations executed in one simulation time step for an example of two processors.]
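To make the channel bookkeeping concrete, here is a minimal sketch (ours, in Python; the record layout and function names are illustrative, not from the paper) of the double check against the original and duplicate databases and of the handoff conflict resolution just described.

def channel_free(bs_id, ch, db, db_dup):
    # Usable only if free in the snapshot read at the start of the time
    # step (db) AND not already claimed earlier in this step (db_dup).
    return db[bs_id][ch] is None and db_dup[bs_id][ch] is None

def admit_transferred_user(user, bs_id, db, db_dup, channels):
    # Handle a handoff user arriving via DB_request_rcv from another
    # processor; resolve the conflict if its channel was taken meanwhile.
    ch = user["assigned_channel"]
    if channel_free(bs_id, ch, db, db_dup):
        db_dup[bs_id][ch] = user["id"]       # confirm initial assignment
        return "confirmed"
    for ch in channels:                      # try to find another channel
        if channel_free(bs_id, ch, db, db_dup):
            db_dup[bs_id][ch] = user["id"]
            user["assigned_channel"] = ch
            return "reassigned"
    return "blocked"                         # or dropped, per the model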
DATABASE DESIGN
The entire database of the simulator can be divided into four groups, and Fig. 2 illustrates the different groups for an example of two processors.

DB_unchanged — This group includes those data structures that are determined during the initialization phase of the simulation and do not change afterward, for example, shadowing maps that store shadowing values for a set of geographical locations across the simulated universe. All processors have the same copy of this part and thus do not exchange it among themselves. With N processors involved, the parallel simulator has N copies of this part across the processors; hence, the memory overhead is linear in the number of processors.

DB_single — This group contains those data structures that are updated as the simulation evolves, but by only one processor. Any data structure that other processors are not required to access for their operations belongs to this part of the database. For example, in a wireless mesh network, the routing table of a wireless router may not be required by, or be accessible to, other routers. In that case, a processor does not exchange the routing tables of the routers assigned to itself with other processors. Note that this part of the database is distributed among processors according to the workload partitioning scheme discussed previously. For equal-sized subareas, each processor holds 1/N of the total for N processors involved. Regardless of the partitioning strategy, the aggregate size across the processors remains constant for different numbers of processors.
DB_multiple and DB_multiple_dup — This group includes segments that are referred to, as well as updated, by multiple processors in each simulation time step. The data relate mainly to calculating co-channel interference. Every processor has the same copy of this group; thus, for N processors, the parallel simulator has a total of N copies across processors, again resulting in a memory overhead that is linear in the number of processors. The shaded region within DB_multiple in Fig. 2 denotes the subarea that the corresponding processor handles: processor p1 handles the first half, and processor p2 the second half. Moreover, as explained in the synchronization subsection above, processors keep a duplicate of this group, denoted DB_multiple_dup in Fig. 2. When a processor must update the part of DB_multiple assigned to itself, it writes the update onto the corresponding location in DB_multiple_dup. As a result, the same copy of DB_multiple is referred to by all processors during each simulation time step. It is important to design this part of the database to be amenable to exchange among processors. We choose to store each type of network entity in an array organized by the subarea to which the entities are assigned, because arrays are exchanged very efficiently among processors using MPI [12]. For example, consider a two-processor parallel simulator. For simulating a cellular network, we create an array with two entries, each containing the base stations located in one subarea. Similarly, for simulating a wireless mesh network, wireless mesh routers are stored in an array according to their locations in the network. Although each processor keeps all the entries of such an array in its memory, it updates only the entry corresponding to the subarea assigned to itself.

DB_request_send and DB_request_rcv — The last group contains the data structures for storing and transferring the inter-processor updates discussed earlier. Each type of update is stored in an array, similar to the data structures in DB_multiple, with each entry corresponding to one destined processor. We note that the entry size of such an array for each type of update must be determined from the frequency of such updates between two processors over one simulation time step. For example, for storing data packets in a wireless mesh network to be transferred to one destined processor, each entry must be large enough to account for the number of such data packet transmissions between two adjacent routers and the number of routers near the subarea boundary between the two corresponding processors. Finally, we mention that the combined size of such an array across the processors tends to increase with the number of processors, because each subarea shrinks and more network entities lie near subarea boundaries. Consequently, the communication overhead of transferring these data structures tends to increase with more processors for a given simulated network.
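Summarizing the four groups, the sketch below lays them out as plain Python containers. All field names are ours; the paper does not describe the records at this level of detail.

# Sketch of the four database groups (field names are illustrative).
from dataclasses import dataclass, field

@dataclass
class SimDatabase:
    # DB_unchanged: fixed after initialization; replicated on every
    # processor (e.g., a shadowing map indexed by location).
    shadowing_map: dict = field(default_factory=dict)

    # DB_single: updated by exactly one processor and never exchanged
    # (e.g., routing tables of the routers assigned to this processor).
    routing_tables: dict = field(default_factory=dict)

    # DB_multiple: interference-related state of ALL entities, stored as
    # one array entry per subarea so it can be exchanged efficiently with
    # MPI; this processor writes only its own entry -- into the duplicate.
    tx_state: list = field(default_factory=list)       # DB_multiple
    tx_state_dup: list = field(default_factory=list)   # DB_multiple_dup

    # DB_request_send/rcv: one entry per destination processor, holding
    # inter-processor updates (e.g., packets crossing a subarea boundary).
    request_send: list = field(default_factory=list)
    request_rcv: list = field(default_factory=list)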
EXAMPLES
Interference Calculation — When a processor calculates the received co-channel interference level at a receiving node in its subarea, it refers to DB_multiple for transmission status, transmit power level, geographical location, and so on, to compute the path loss from each of the co-channel interferers in the simulated universe. Recall that transmission-related information for all network entities is kept in the DB_multiple of each processor. The shadowing value, on the other hand, is determined from the shadowing maps in DB_unchanged, which are also held in each processor.

Inter-Processor Updates — Consider a wireless router transmitting a data packet to another router in a subarea handled by another processor. The router stores the data packet in the entry of DB_request_send corresponding to the destined processor. There can be multiple data packets to be transferred to other processors by the time all the routers in a subarea finish transmitting. Processors wait until all routers in all processors finish updating; they then transfer DB_request_send to DB_request_rcv in the destined processors. Upon receiving updates from other processors, the destined processors process the updates and incorporate them into their databases. For example, a router receiving a data packet from a router in a different processor would update its queue size and other related information to account for the newly arrived packet.
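A hedged sketch of the interference calculation follows, assuming a simple power-law path-loss model and a shadowing map keyed by location; the propagation constants and record fields are ours, not the simulator's actual models.

import math

def received_power_dbm(tx, rx, shadowing_map, alpha=3.5):
    # Distance-based path loss with exponent alpha, plus a log-normal
    # shadowing value looked up from the precomputed map (DB_unchanged).
    d = math.hypot(tx["x"] - rx["x"], tx["y"] - rx["y"])
    path_loss_db = 10.0 * alpha * math.log10(max(d, 1.0))
    shadow_db = shadowing_map[(round(rx["x"]), round(rx["y"]))]
    return tx["power_dbm"] - path_loss_db + shadow_db

def sinr_db(rx, ch, all_transmitters, shadowing_map, noise_dbm=-110.0):
    # all_transmitters comes from DB_multiple, which every processor holds
    # for the entire simulated universe, so no communication is needed here.
    lin = lambda dbm: 10.0 ** (dbm / 10.0)
    signal, interference = 0.0, lin(noise_dbm)
    for tx in all_transmitters:
        if not tx["active"] or tx["channel"] != ch:
            continue
        p = lin(received_power_dbm(tx, rx, shadowing_map))
        if tx["id"] == rx["serving_id"]:
            signal += p              # desired link
        else:
            interference += p        # co-channel interferer
    return 10.0 * math.log10(signal / interference) if signal > 0 else float("-inf")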
SYSTEM ROUTINE
Given the workload partitioning, synchronization schemes, and database structure discussed so far, the operations executed in each simulation time step can be grouped into two phases: a computation phase, in which each processor independently processes its assigned set of network entities, and a communication phase, in which all the processors collectively communicate with one another to exchange the updates made during the computation phase. Figure 2 illustrates the flow of operations performed in one simulation time step for an example of two processors. Arrows in the figure between the different parts of the database and the processors indicate the flow of read/write operations: processors refer to DB_unchanged, DB_single, and DB_multiple when computing updates for their assigned subareas; they write updates into the corresponding locations within DB_single and DB_multiple_dup for intra-processor updates, and into DB_request_send for inter-processor updates. In the subsequent communication phase, processors transfer DB_request_send to the DB_request_rcv of other processors and incorporate the inter-processor updates into their databases. Finally, they exchange DB_multiple_dup with one another to propagate the interference-related updates made over the current simulation time step. In the next time step, the entire routine is repeated with the updated database.
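This two-phase routine maps naturally onto MPI collectives. Below is a minimal sketch in Python with the mpi4py binding (the authors' simulator presumably uses the C MPI library); update_entity and incorporate are hypothetical stand-ins for the model-specific code, and db is the SimDatabase sketched earlier.

from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, n_procs = comm.Get_rank(), comm.Get_size()

def simulation_step(db, my_entities, update_entity, incorporate):
    # --- Computation phase: read db.tx_state (plus DB_unchanged and
    # DB_single); write only into the duplicate and the request arrays.
    db.request_send = [[] for _ in range(n_procs)]
    for ent in my_entities:
        update_entity(ent, db)   # fills db.tx_state_dup / db.request_send

    # --- Communication phase, part 1: inter-processor updates.
    # alltoall delivers entry p of request_send to processor p.
    db.request_rcv = comm.alltoall(db.request_send)
    for batch in db.request_rcv:
        for update in batch:
            incorporate(update, db)   # e.g., handoff conflict resolution

    # --- Communication phase, part 2: exchange the duplicate DB so every
    # processor sees identical interference state for the next step.
    db.tx_state = comm.allgather(db.tx_state_dup[rank])
    db.tx_state_dup = [list(entry) for entry in db.tx_state]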
[Figure 3. Performance of the technique for a mobile cellular network simulator and a WMN simulator: a) run time vs. number of processors; b) speed-up gain vs. number of processors. Curves cover the cellular system at 15-40 Erlangs (ARP alone or ARP+PC+OB) and the WMN.]
NUMERICAL RESULTS

In this section, we present and discuss numerical results from two different simulators that employ the technique illustrated in this article. One is a mobile cellular network simulator developed in [13, 14] as a sequential program and modified in [11] for parallel processing; the other is a large wireless mesh network simulator developed in [15] as a parallel program from the beginning.
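In the following, we read speed-up gain in its conventional sense (an assumption on our part; the article does not state the formula explicitly):

S(N) = T(1) / T(N),

where T(N) is the run time of the same simulation on N processors, so that S(N) = N is the ideal linear gain.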
MOBILE CELLULAR NETWORK SIMULATOR
The simulator [13, 14] was developed to examine the effects of various dynamic schemes for channel allocation, power control, and adaptive antennas on system performance. The simulated network consists of a 16 × 16 grid of base stations with 850-m grid spacing in a toroidal universe, in which a radio signal propagating out of the universe reappears at the opposite edge and continues to propagate in the same direction. Each base station can access up to 128 channels. Signal propagation is modeled by path loss and correlated shadowing, and interference from all co-channel interferers is included in calculating the received signal-to-interference-plus-noise ratio (SINR). The two lower sets of curves in Fig. 3a show the run times of the parallel program versus the number of processors for two systems with different computational complexity; each run time is obtained from one long simulation run. One system employs a channel assignment scheme called autonomous reuse partitioning (ARP); the other incorporates not only the channel assignment scheme but also schemes for power control (PC) and beamforming with multiple antennas, designated in the figure as PC and optimal beamforming (OB), respectively. Each system was run under various network traffic load levels, measured in Erlangs (Erl) per base station. The run times with a single processor were obtained from the existing sequential simulator. We first note the huge difference in run times between the two systems.
For the more complex system, one simulation run on a single processor took longer than seven days, whereas most of the runs for the less complex system took less than one day. This difference in run times is due mainly to the different computational complexity and simulated traffic load levels; the more complex system was simulated under much heavier traffic loads, and it also requires more time to stabilize the statistical variation in the results.

The run-time behavior of the parallel simulator is seen more clearly in Fig. 3b, where the speed-up gain versus the number of processors is presented. For the more complicated system, the speed-up gain is almost linear up to eight processors and is noticeably less than linear, but still more than 11, with 16 processors. The almost linear gain implies that the inter-processor communication overhead of the parallel processing is almost negligible compared to the computation load. On the other hand, the speed-up gain for the less complicated system is far less than linear. This rather low gain suggests that the communication phase in the system routine is comparable to the computation phase, so that the gain from distributing the computational workload among processors is offset by the communication overhead. We also note the increased run time (i.e., a speed-up of less than one) with parallel processing in the case of 15 Erlangs. In this scenario the network is so lightly loaded that the computation workload is minimal, and the inter-processor communication overhead tends to dominate the overall run time, resulting in a longer run time with multiple processors. This observation is, in fact, consistent with the results of the parallel version of GloMoSim given in [4]. Thus, a more complex program benefits more from parallel processing.

Figure 4 plots grade-of-service (GOS) values for each simulation run. GOS is a combination of the blocking and dropping rates and is approximately the sum of the two. Although GOS is observed to be consistently higher for the parallel program, the increase due to the database synchronization discussed earlier is negligible. Overall, the GOS results of the parallel program are consistent with those of the sequential program in all simulated scenarios.

[Figure 4. Grade of service (GOS) vs. number of processors for the mobile cellular network.]
LARGE WIRELESS MESH NETWORK SIMULATOR
The large wireless mesh network (WMN) simulator [15] simulates a large time-division multiple access (TDMA)- and time-division duplex (TDD)-based WMN comprising a 120 × 120 grid of wireless mesh routers with 100-m grid spacing in an urban area. Detailed propagation models for path loss and shadowing are incorporated, and interference from all co-channel interferers in the simulated toroidal universe is considered. The simulator implements a fully distributed-control time-slot assignment protocol through which every wireless mesh router in the network acquires a broadcast time slot that supports a minimum average SINR. The uppermost curve in Fig. 3a shows the run times of the WMN simulator versus the number of processors for a single parameter set and network configuration; each run time is obtained from one long simulation run. A single run took longer than four days using eight processors, and one day and five hours with 32 processors. These long run times are due to the substantial computational complexity of the simulator, resulting from the huge network size and the large number of mesh routers. The corresponding speed-up of the parallel WMN simulator is shown in the uppermost curve in Fig. 3b. The speed-up remains approximately linear in the number of processors up to 16: the gain with 16 processors is almost twice that with eight. We then obtain a speed-up gain of almost 27 using 32 processors.
sor communication overhead is negligible compared to the computation workload. We note that the speed-up performance of this simulator is better compared to the cellular network simulator discussed previously. This is because the WMN simulator design was based on the parallel-processing technique from the beginning so that it has been more optimized to the technique than the cellular network simulator that was modified from an existing sequential simulator.
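The time-slot assignment protocol itself is specified in [15]; purely as an illustration of the distributed-control idea (our reading, not the exact algorithm), a router's slot acquisition could look like the following, where measure_sinr_db is a hypothetical measurement hook.

def acquire_broadcast_slot(router, n_slots, measure_sinr_db, min_sinr_db=10.0):
    # Each router autonomously probes every broadcast slot (averaging its
    # SINR measurement over several frames) and claims the best one, but
    # only if it clears the minimum average-SINR requirement.
    best_slot, best_sinr = None, float("-inf")
    for slot in range(n_slots):
        s = measure_sinr_db(router, slot)
        if s > best_sinr:
            best_slot, best_sinr = slot, s
    return best_slot if best_sinr >= min_sinr_db else None  # None: retry later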
CONCLUSION

We presented a parallel processing technique for time-driven simulation of large and complex wireless networks with substantial physical-layer details such as radio-signal propagation and interference. We identified and demonstrated issues of the technique related to the time-driven nature of the simulation with regard to workload partitioning, synchronization, and database design, and proposed schemes in those areas for effective and efficient parallelization. The enormous computational complexity and inter-processor communication overhead of the parallel simulator are addressed by a supercomputing platform that comprises multiple processors with large memory, interconnected by high-speed links. We applied the technique to two different wireless network simulators, a mobile cellular network simulator and a large WMN simulator, and demonstrated its performance. The technique was shown to achieve significant run-time speed-up gains for both simulators; for one set of simulation parameters and network configuration, a speed-up gain of more than 11 was obtained using 16 processors with the mobile cellular network simulator, and of almost 27 using 32 processors with the large WMN simulator. The substantial gains from both simulators indicate that the technique is general enough to be applied to different types of wireless networks.
REFERENCES
[1] J. Panchal et al., "Parallel Simulations of Wireless Networks with TED: Radio Propagation, Mobility and Protocols," Performance Evaluation Rev., vol. 25, no. 4, Mar. 1998, pp. 30-39.
[2] J. Panchal et al., "WiPPET, a Virtual Testbed for Parallel Simulations of Wireless Networks," Proc. Parallel Distributed Simulation, May 1998.
[3] A. Boukerche, S. K. Das, and A. Fabbri, "SWiMNet: A Scalable Parallel Simulation Testbed for Wireless and Mobile Networks," Wireless Net., vol. 7, no. 5, 2001, pp. 467-86.
[4] Z. Ji et al., "Optimizing Parallel Execution of Detailed Wireless Network Simulation," Proc. Parallel Distributed Simulation, May 2004.
[5] X. Zeng, R. Bagrodia, and M. Gerla, "GloMoSim: A Library for Parallel Simulation of Large-Scale Wireless Networks," Proc. Parallel Distributed Simulation, May 1998.
[6] G. F. Riley, R. M. Fujimoto, and M. H. Ammar, "A Generic Framework for Parallelization of Network Simulations," Proc. Symp. Modeling, Analysis, Simulation Comp. Telecommunication Sys., Oct. 1999.
[7] B. K. Szymanski et al., "Genesis: A System for Large-Scale Parallel Network Simulation," Proc. Parallel Distributed Simulation, May 2002.
[8] L. Bononi et al., "Performance Analysis of a Parallel and Distributed Simulation Framework for Large Scale Wireless Systems," Proc. Conf. Modeling, Analysis, Simulation Wireless Mobile Sys., Oct. 2004.
[9] A. Ferscha, "Parallel and Distributed Simulation of Discrete Event Systems," Handbook of Parallel and Distributed Computing, McGraw-Hill, 1995.
[10] R. M. Fujimoto, Parallel and Distributed Simulation Systems, John Wiley & Sons, 2000.
[11] H. Lee et al., "High Fidelity Simulation of Mobile Cellular Systems with Integrated Resource Allocation and Adaptive Antennas," Proc. Wireless Commun. Net. Conf., Mar. 2007.
[12] MVAPICH (MPI over InfiniBand) Project, Network-Based Computing Lab., Dept. Comp. Sci. Eng., Ohio State Univ.; http://nowlab.cse.ohio-state.edu/projects/mpi-iba/
[13] A. Lozano, "Integrated Dynamic Channel Assignment and Power Control in Mobile Wireless Communications Systems," Ph.D. diss., Stanford University, Nov. 1998.
[14] S. S. Cheng, "Integrated Resource Allocation and Adaptive Antennas in Mobile Cellular Systems," Ph.D. diss., Stanford University, Feb. 2002.
[15] H. Lee and D. Cox, "A Fully-Distributed Control Time Slot Assignment Protocol for Large Wireless Mesh Networks," Proc. MILCOM, Nov. 2008.
BIOGRAPHIES

HYUNOK LEE ([email protected]) received a B.S. degree in electronics and electrical engineering from Pohang University of Science and Technology (POSTECH), Republic of Korea, in 2001, and an M.S. degree in electrical engineering from Stanford University in 2004. She is working toward her Ph.D. degree in the Wireless Communications Research Group, Stanford University. Her current research interests include performance and scalability of large wireless mesh networks (WMNs) and high-fidelity simulation of large wireless networks utilizing parallel processing techniques.

VAHIDEH H. MANSHADI ([email protected]) received a B.S. degree in electrical engineering from Sharif University in 2003 and an M.S. degree in electrical engineering from Stanford University in 2005. She is currently working toward a Ph.D. degree at the Information Systems Laboratory (ISL), Stanford University. Her current research interests are theoretical studies of social networks and distributed algorithms for communication networks.

DONALD C. COX [F] ([email protected]) received his B.S. and M.S. degrees from the University of Nebraska in 1959 and 1960, and his Ph.D. degree from Stanford University in 1968, all in electrical engineering. In 1993 he became a professor of electrical engineering and director of the Center for Telecommunications at Stanford University. He was appointed Harald Trap Friis Professor of Engineering in 1994. He is a member of the National Academy of Engineering.
MODELING AND SIMULATION: A PRACTICAL GUIDE FOR NETWORK DESIGNERS AND DEVELOPERS
Agent-Based Tools for Modeling and Simulation of Self-Organization in Peer-to-Peer, Ad Hoc, and Other Complex Networks Muaz Niazi and Amir Hussain, University of Stirling
ABSTRACT

Agent-based modeling and simulation tools provide a mature platform for the development of complex simulations. However, they have not been applied much in the domain of mainstream modeling and simulation of computer networks. In this article, we evaluate whether, and how, these tools can offer value in the modeling and simulation of complex networks such as pervasive computing systems, large-scale peer-to-peer systems, and networks involving considerable environment and human/animal/habitat interaction. Specifically, we demonstrate the effectiveness of NetLogo, a tool that has been widely used in the area of agent-based social simulation.
INTRODUCTION

The last decade has seen unprecedented growth in the scale and complexity of computer networks; networks today are much larger and far more complex than originally anticipated. In turn, the larger and more complex the network, the harder it becomes to test and evaluate its design and eventual implementation. Modeling and simulation (M&S) plays a vital role in the design and development of distributed interacting systems because of their peculiar stochastic nature, especially if they involve decentralized self-organizing capabilities [1, 2]. Although networks have grown in both size and complexity, the tools to model and simulate them have not scaled at the same rate. This has not gone entirely unnoticed: papers such as [3] have pointed out problems with current simulation methodologies for telecommunication networks, such as the use of pseudo-random number generators. However, another problem with the existing toolset, one that has received much less attention, lies in what initially was its strongest point: its focused orientation on computer networks. Since newer networks have grown in size and complexity in
directions such as pervasive computing (including but not limited to body area networks, vehicular area networks, and other pervasive networks), the existing M&S tools are not quite designed to deal with these emerging paradigms. With so much unanticipated change in the air, there is a need for alternative flexible tools equipped with effective techniques for modeling modern networks as they evolve into complex systems. This article seeks to address this important area of research for the M&S community in the domain of computer networks by demonstrating, for the first time, the use of agent-based modeling tools for modeling self-organizing mobile nodes and peer-to-peer (P2P) networks. We focus on NetLogo, a tool that has proven its worth in the M&S domain of complex systems. We conclude that not only can agent-based tools model P2P and ad hoc networks, they can also be used to effectively model complex systems where the network is only a part of the system, and where humans, animals, habitat, and other environment-related interactions need to be modeled and simulated as well.
BACKGROUND

In the domain of complex networks, pervasive computing applications, being the new kids on the block, do not have many dedicated tools at their disposal, unlike the P2P and wireless network M&S communities, whose tools range from general-purpose simulators such as OPNET, OMNeT++, and NS-2 to customized tools for particular domains, such as the TinyOS Simulator (TOSSIM) for wireless sensor networks. Designed from the ground up for modeling computer networks, these specialized tools satisfy the basic requirements of M&S. However, their main focus is to model and simulate computer networks only. On one hand, we have complex problems in which humans and mobile systems/users are involved; on the other, we have pervasive computing and large-scale
global networks on an Internet scale. Pervasive computing, being a set of immersive technologies, always has a need for tools with stronger modeling capabilities. The modeler needs the ability to work at a higher level of abstraction instead of just being able to tweak network parameters. Tools are needed that are flexible enough to conveniently model and simulate complex interactions and protocols with ease. Such tools need to have inherent strengths, and should have demonstrated abilities to model self-organization techniques, engineered emergence, and other socio- and bio-inspired techniques, including domains such as ambient intelligence [4]. Another area of application of such tools and techniques is large networks such as unstructured P2P systems, which tend to grow in unpredicted dimensions. A recent example is Gnutella, where strong emergence and unexpected/unexplored graphs have been discovered quite recently.1 Although in these domains tools primarily designed for the modeling of computer networks can occasionally be used, the norm is to require various levels of add-ons; this, however, is not exactly an easy marriage. The designers, being human, feel over-constrained by the limitations of having to keep tweaking and tracking low-level network parameters when their primary focus is on modeling more abstract concepts. It is important to note that the problem arises because these concepts are natural for the designer but alien to the tool; modern complex network M&S specialists not only require the ability to change sensor ranges and other basic parameters, they also need the ability to model and simulate a myriad of entities and techniques: humans, mobile robots, animals, and habitat interaction, as well as self-* techniques (self-organization, self-healing, self-adaptation, etc.). This could eventually be extended to any conceivable concept that could be expected to form a part of the system. So in this domain, even if existing network-oriented tools can somehow be used to model these concepts, the concept is typically an add-on and an afterthought to the design. Tools that are not specifically developed to cater for complex simulations and situations make the M&S specialist work longer to fulfill these requirements. In contrast, in another part of the M&S landscape, there is a set of tools focusing on agent-based M&S techniques. These are mature tools whose utility has been demonstrated in various domains, primarily for the M&S of complex systems, with simulation models ranging from social networks and groups to self-organizing systems and bio-inspired models. The important question we address in this article is the following: Is there any value addition in the use of generic and highly flexible agent-based modeling tools (which are already known to be very effective in modeling self-organization and complex systems) for the modeling and simulation of communication networks such as P2P, wireless, and pervasive networks?

The rest of the article is organized as follows. The next section briefly reviews the background
and distinction of agent-based M&S tools. We then introduce NetLogo, and evaluate its usefulness using a number of M&S experiments. Finally, some concluding remarks are given.
REVIEW OF AGENT-BASED TOOLS

ARE THERE ANY EXISTING TOOLS?
Before we start our detailed analysis on the use of NetLogo in this particular domain, it is pertinent to mention some of the previous tools in use by network designers. The literature mentions a large set of tools, ranging anywhere from customized scripts such as GnutellaSim,2 which runs on general-purpose tools (e.g., NS-2), to packet-level discrete event simulators for simulation of similar networks, as in Structella.
AGENT-BASED TOOLS
In the agent-based arena, a number of popular tools are available, ranging from Java-based tools such as MASON to Logo-based tools such as StarLogo and NetLogo [5]. Each of these tools has different strengths and weaknesses. In the rest of the article we focus on just one of them, NetLogo, as a representative of this set, and attempt to evaluate critically how it fares compared to the standard tools already in use for M&S of communication networks. Building on the experience of previous tools based on the Logo language, NetLogo has been developed from the ground up for complex systems research. Historically, NetLogo has been used for the modeling and simulation of complex systems, including multi-agent systems, social simulations, and biological systems, on account of its ability to model problems based on human abstractions rather than purely technical aspects. However, it has not been widely used to model computer networks, especially by mainstream practitioners in the computer network M&S domain.

Agent-Based Thinking — In a distributed environment with large-scale complex interacting entities, it sometimes makes more sense to build a simulation around the local parameters of each network node, assuming intelligence at the local level. For the network modeling specialist to model a system in terms of agents, the nodes, humans, robots, or any other entities that might be part of the model need to be directly addressable. The designer of the system should be free to address one or more types, or breeds, of objects/entities/nodes of the system. This kind of addressability is the norm in agent-based modeling and simulation: it is fairly easy to address one or more types of entities (called agents, or turtles in Logo-speak) just by their name. In other words, programs in NetLogo3 are written at a higher level of abstraction. Instead of the regular paradigms of looping through objects and executing functions, the designer can ask entities/nodes to move, change colors, or create links with other entities without worrying about the low-level details of how the animation or actual interaction is going to be executed. This paradigm produces small yet functional programs.
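As an illustration of this direct addressability, here is a small Python analogue of the ask paradigm (in NetLogo itself this is roughly "ask nodes [ set color red ]"); the Breed class and the node attributes are ours, invented for the example.

class Breed(list):
    # A breed is simply an addressable collection of agents.
    def ask(self, action):
        for agent in self:
            action(agent)

nodes = Breed(dict(id=i, color="grey", links=[]) for i in range(100))

# Address every node of the breed directly, with no explicit plumbing:
nodes.ask(lambda n: n.update(color="red"))

# Or address only a subset (the selection criterion is illustrative):
edge_nodes = Breed(n for n in nodes if n["id"] % 10 == 0)
edge_nodes.ask(lambda n: n.update(color="yellow"))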
1 http://personalpages.manchester.ac.uk/staff/m.dodge/cybergeography/atlas/topology.html
2 http://www.cc.gatech.edu/computing/compass/gnutella/usage.html
3 NetLogo is based on the Logo language.
4 As is clear from their prevalence in the social simulation literature.
Distinction of Agent-Based M&S Tools — In this section we distinguish the power of agentbased techniques as compared to previous tools in general and NetLogo in particular. There are certain metrics by which we can evaluate agentbased tools and compare them with their counterparts in the domain of computer network M&S. These metrics, however, are of a qualitative nature. The existing tools for network-based M&S are not specifically tailored to developing self-organizing or complex system simulations. The M&S tools from the agent-based M&S domain have several strong features such as direct addressability of nodes, ease of implementation and evaluation of self-organization, and emergence and bio-inspired algorithms as well as the capability of being understandable from the human perspective (having their background rooted in social simulation), all of which make them extremely useful for application in the domain of ad hoc, P2P, and pervasive systems. One way of looking at this is that originally, the object oriented programming features of the C++ programming language were developed entirely in C in the form of a front-end that would compile to C code (as an academic exercise); however, it was too cumbersome and counterproductive for programmers to follow the same practice in their regular development. It was infeasible for companies to develop programs with object-oriented features using pure C in general. However, with the advent of modern languages with built-in features of object orientation, it has become much easier and “natural” for software engineers to start with a template based on an object-oriented paradigm. Using the same analogy, self-organization and emergence techniques are even more abstract than object orientation, but they do entice and fit well with human nature. 4 So the significance of agentbased tools is to attain a level of comfort for the M&S specialist in designing complex paradigms such as self-organization.
Complex Interaction Protocols Modeling — Another point to note is that in NetLogo, modeling complex protocols does not have to be limited to the simulation of networks alone; the tool can readily be used to model human users, intelligent agents, mobile robots interacting with the system, or virtually any concept the M&S designer feels worthwhile having in the model. NetLogo in particular has the advantage of LISP-like list processing features (LISP being one of the most commonly used languages in the AI literature). Thus, modeled entities can be shown interacting with the computer network, all at the same time and with the same ease as modeling the network itself. Alternatively, the simulationist can create agents at runtime and let them interact with the network, to experiment with complex protocols that are not otherwise straightforward to conceive as programs. As an example, suppose we were to model a number of human network managers (e.g., from 10 to 100) attempting to manage a network of 10,000 nodes, each working on a workgroup of n nodes (e.g., ranging from 5 to 100) at a time over 8-hour shifts, with network attacks arriving according to a Poisson distribution; all of this could probably be modeled in a few hours with only a little experience in NetLogo. The simulation could then be used to evaluate shift policies that minimize the impact of network attacks. Another example could be the modeling and simulation of link backup policies for communication link failures in a complex network of 10,000 nodes, together with network management policies comparing part-time network managers carrying mobile phones for notification and management vs. full-time network managers working in shifts, all in one simulation model. To make things really interesting, we could try these out in reality by connecting the NetLogo model to an actual J2ME-based application on phones using a Java extension: the J2ME device sends updates over General Packet Radio Service (GPRS) to a Web server that is polled by the NetLogo program, and the simulation is updated in the user interface provided by NetLogo. Although the same could be done by a team of developers with a man-year or so of effort using different technologies, NetLogo provides for coding these almost right out of the box, and the learning curve is not steep either. This expressive power extends to modeling even non-network concepts, such as pervasive computing scenarios where human mobile users (e.g., ad hoc networks formed to locate injured people) or body area networks come into play along with the network. It is important to note that such a simulation would be incomplete without effective modeling of all related concepts; depending on the application, these could vary from ambulances, doctors, and nurses to laptops and injured humans, in addition to the readily available connectivity to GIS data provided by NetLogo extensions.
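To give a feel for how compactly such a scenario can be expressed, here is a minimal sketch of the manager/attack example above; the breed names, the color coding, and the num-managers, workgroup-size, and attack-rate sliders are illustrative assumptions of ours, not code from the article:

breed [ nodes node ]
breed [ managers manager ]

to setup
  ca
  create-nodes 10000 [ setxy random-xcor random-ycor set color green ]  ; green = healthy
  create-managers num-managers                     ; num-managers: slider, e.g., 10 to 100
  reset-ticks
end

to go
  ; attacks arrive as a Poisson process; attack-rate is a slider (mean attacks per tick)
  let healthy nodes with [ color = green ]
  ask n-of min (list (random-poisson attack-rate) (count healthy)) healthy [ set color red ]
  ask managers [
    ; each manager repairs one workgroup of up to workgroup-size nodes per tick
    let broken nodes with [ color = red ]
    ask n-of min (list workgroup-size (count broken)) broken [ set color green ]
  ]
  tick
end

Wiring setup and go to buttons, as described in the tutorial below, is all that remains before experimenting with shift and workgroup policies interactively.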
Range of Input Values — Being a general-purpose tool, NetLogo operates at a much higher level of abstraction; concepts such as nodes, antenna patterns, and propagation models are all left to the user to define. On one hand, this may look burdensome to a user accustomed to having these built in, since it appears he or she will have to do a little extra work to code them in NetLogo.
On the other hand, NetLogo allows for the creation of completely new paradigms of network modeling, wherein the M&S specialist can focus purely on, for example, self-organization aspects; developing antenna patterns and propagation models directly in NetLogo is a relatively trivial task in itself.

Range of Statistics — NetLogo is quite flexible in terms of statistics and measurements. Any variable of interest can be added as a global variable, and statistics can be generated over a single run or multiple runs. Plots can be generated automatically for these variables as well.

Handling Complex Metrics — Measurement of complex quantities in NetLogo programs is limited only by the imagination of the M&S specialist; almost any concept deemed important can easily be added to the model. As an example, complex statistics such as network assembly time can be gathered with global counters. Similarly, statistics such as network throughput, network configuration time, and throughput delay can be modeled by means of similar counters (which need not be integral). By default, NetLogo provides for real-time analysis: variables or reporters (functions that return values) can be used to measure real-time parameters, and the analyst can have an interactive session with the modeled system through the command window, without modifying the code.
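The counter/reporter pattern just described takes only a few lines; in this sketch the counter and reporter names are our own illustrative choices:

globals [ messages-sent ]        ; global counter for a complex statistic

to send-message                  ; hypothetical helper invoked by the protocol code
  set messages-sent messages-sent + 1
  ; ... actual message handling would go here ...
end

to-report mean-degree            ; reporter: average number of link neighbors per node
  report mean [ count link-neighbors ] of turtles
end

Typing show mean-degree or show messages-sent in the command center prints the current value mid-run, which is exactly the kind of interactive session described above.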
NETLOGO TUTORIAL

In this section we introduce NetLogo and demonstrate its usefulness using a number of modeling and simulation experiments.
WHAT NETLOGO IS AND HOW TO GET IT

NetLogo is a popular tool based on the Logo language, with a strong user base and an active, publicly supported mailing list. It provides visual simulation, is freely available for download [5], and has been used considerably in the multiagent systems literature [6] as well as in social simulation and complex adaptive networks [7]. One thing that distinguishes NetLogo from other tools is its very strong user support community: you can usually get a reply from someone in the community in less than a day. The current version of NetLogo is 4.0.4, the version history reflecting its stability and active development. NetLogo also ships with an enormous number of code samples and examples; it is rare to find a problem for which no similar sample is freely available, either within NetLogo's model library or elsewhere in the NetLogo M&S community.
BASIC STRUCTURE OF A NETLOGO PROGRAM

The NetLogo World — Based on the Logo language, the NetLogo world consists of an interface that is made up of “patches.” The inhabitants of this world range from turtles to links. In addition, one can have user interface elements such as buttons, sliders, monitors, and plots.
The NetLogo Interface — NetLogo is a visual tool and is extremely suitable for interactive simulations. When one first opens NetLogo, an interface with a black screen is visible. There are three main tabs and a box called the command center. Briefly, the interface tab is used to build the user interface manually, and the information tab is used to write the documentation for the model. Finally, the procedures tab is where the code is actually written. The command center is an interactive tool for working directly with the simulation; it can be used for debugging as well as for trying out commands, interpreter-style, which, if successful, can then be incorporated into one's program. The commands of a NetLogo procedure can be executed in the following main contexts.
Turtle — The key inhabitants of the Logo world are the turtles, which, from our perspective of designing networks, can be used to model network nodes easily. The concept of agents/turtles provides a layer of abstraction similar to that which object-oriented programming adds to structured programming paradigms. In short, the simulation can address much more complex paradigms, including pervasive models, environment or terrain models, or indeed any model the M&S specialist can conceive of, without requiring many additional add-on modules. The tool is nevertheless extensible and can be connected directly to Java-based modules. By writing modules in Java, the tool can potentially be used as the front end of a real-time monitoring or interacting simulation. For example, we could have a Java-based distributed file synchronization system that reports results to the NetLogo interface and vice versa; the NetLogo interface could be used to set up the simulation at the back end (e.g., how many machines, how many files to synchronize), and subsequently, with the help of the simulation, the user could simply monitor the results. Although the same can be done with a lot of other tools and technologies, the difference is that NetLogo offers these facilities almost out of the box and requires minimal coding, besides being noncommercial, free, and easy to install.
Patch — A single place where a turtle can exist is a patch.

Observer — This is a context that can be used in general without relating to either a patch or a turtle. The NetLogo user manual, which comes prepackaged with NetLogo, says: “Only the observer can ask all turtles or all patches. This prevents you from inadvertently having all turtles ask all turtles or all patches ask all patches, which is a common mistake to make if you’re not careful about which agents will run the code you are writing.”

Explanation of NetLogo Nomenclature — Inside the NetLogo world we have the concept of agents. Coming from the domain of complex systems, all agents inside the world can be addressed in any conceivable way the user can think of.
(a)

to setup
  ca        ; clear everything, so calling setup again won’t make a mess
  crt 100   ; create 100 turtles
  [
    setxy random-pxcor random-pycor  ; space the turtles out at random patch x and y coordinates
  ]
  let mycolor random 140             ; randomly select a color value from 0 to 139
  ask patches
  [
    set pcolor mycolor               ; ask all patches to set their color to this random color
  ]
end

(b)
to go
  ask turtles
  [
    fd 0.001  ; ask each turtle to move a small step forward
  ]
end
■ Figure 1. a) Code for setup; b) code for go.
For example, if we want to change the color of all nodes with communication power less than 0.5 W, a user can simply say: ask nodes with [power < 0.5] [set color green]; or if a user wants to check nodes with exactly two link neighbors, this can also be done easily, and so on.
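For that one-liner to run, the nodes breed and its power attribute must first be declared; the following is a minimal sketch, with the declarations inferred from the inline example rather than taken from the article:

breed [ nodes node ]    ; network nodes, as in the example above
nodes-own [ power ]     ; transmission power in watts (assumed attribute)

to recolor-low-power-nodes          ; runs in the observer context
  ask nodes with [ power < 0.5 ] [ set color green ]
end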
Figure 2. Mobile nodes.
The context of each command is one of these three. The observer is the context in effect when a command relates to neither a turtle nor a patch; it is so called because it can be used in an interactive simulation, where the simulation user can interact in this or another context using the command window.

The Setup and Go Buttons — Although there are no hard rules for structuring a NetLogo program, one could design a program as a set of procedures called directly from the command center. In most cases, however, it suffices to have user interface buttons call the procedures, and that is the standard technique we use in this article. To start with, our program has two key buttons: setup and go. The idea is that the setup button is called only once, and the go button is called multiple times (automatically). These can be inserted easily by right-clicking anywhere on the interface screen and selecting buttons; the user inserts these two buttons in his or her model, remembering to write the procedure names in the buttons' commands. We make go a forever button, that is, a button that repeatedly calls the code behind it. Initially the buttons show up in red text, which is NetLogo’s way of telling us that the commands do not yet have any code associated with them. So let us create two procedures named setup and go. The procedures are written in the procedures tab, as shown in Fig. 1a, and the comments (which follow a semicolon on any line in NetLogo) explain what the commands do. The code in Fig. 1a creates 100 turtles (nodes in our case) on the screen; the default shape is a peculiar triangle, and colors are assigned at random. Notice that we have also written code to color the patches randomly. To create the procedure for go, write the code shown in Fig. 1b.
Figure 3. a) Towers (flags) and nodes; b) nodes after flocking.

Now, if we press setup and then go, we see turtles walking slowly in a forward direction on the screen; a snapshot is shown in Fig. 2.
MODELING COMPLEX PARADIGMS IN P2P SYSTEMS
In the next set of experiments we demonstrate the use of NetLogo for modeling and simulation of complex paradigms in P2P systems.

MODELING SELF-ORGANIZATION
Experiment I: Modeling Flocking of Mobile Nodes — This model simulates flocking of mobile wireless nodes within a certain radius of a transmission tower. There are two types of nodes. One type is spatially fixed and corresponds to the transmission towers. The second type performs a random walk until it comes within a certain (pre-specified) radius of one of the towers, whereupon it stops moving and changes its color to the transmission tower's color. We use this behavior to model flocking of wireless nodes within a certain radius of a tower; the result is illustrated in Fig. 3.
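The core of this behavior fits in one procedure. The sketch below uses our own naming assumptions (a towers breed, a walkers breed that is gray while roaming, and a flock-radius slider); it is an illustration of the described behavior, not the authors' code:

breed [ towers tower ]
breed [ walkers walker ]       ; created gray in setup, i.e., still roaming

to go
  ask walkers with [ color = gray ] [
    rt random 360
    fd 1                                           ; one step of the random walk
    let nearby one-of towers in-radius flock-radius
    if nearby != nobody [
      set color [ color ] of nearby                ; adopt the nearest tower's color
    ]
  ]
  tick
end

Because go only asks gray walkers to move, a node stops automatically once it adopts a tower's color.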
Experiment II: Clustering of Nodes in a P2P System — Clustering is an important technique used extensively in the literature [8]. In this example we demonstrate the use of NetLogo for clustering in P2P systems. To generate this self-organizing behavior, nodes initiate messaging among themselves: each node first discovers the other nodes within its sensing radius, and the nodes then perform an election algorithm looking for the node with the lowest ID (as is the norm in election algorithms). Figures 4a and 4b illustrate the initial unclustered nodes and the clustered nodes after applying self-organization, respectively.
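A lowest-ID election of this kind is similarly short; in this sketch the sensing-radius slider and the leader attribute are assumed names of ours:

breed [ nodes node ]
nodes-own [ leader ]     ; the cluster head this node has elected

to elect-cluster-heads
  ask nodes [
    ; candidates: this node plus all nodes within its sensing radius
    let candidates (turtle-set self (other nodes in-radius sensing-radius))
    set leader min-one-of candidates [ who ]       ; the lowest built-in ID wins
  ]
end

Coloring each node according to its leader then renders clusters like those in Fig. 4b.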
Figure 4. a) Initial unclustered nodes; b) clustered nodes after applying self-organization.
Figure 5. a) Expanding ring algorithm in a lattice; b) k-random walk with check on a randomly connected graph.
Experiment III: Modeling Unstructured Overlay Networks — Here we show how NetLogo can be used to model real-world examples of unstructured overlay-forming algorithms, such as Gnutella-style search and random walks in P2P networks, as well as pervasive environments, as shown in [9]. Figure 5a shows the expanding ring algorithm applied to a lattice. The expanding ring algorithm is an advanced form of flooding in which flooding is first performed with a time-to-live (TTL) value of 1, then 3, 5, and so on, until one of the queries reaches the destination. Figure 5b shows the application of an advanced form of random walk, k-random walk with check, in which there are k random walkers; every few hops, these walkers (queries) send a message back to the source node to verify whether the query needs to continue its quest.
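The expanding ring idea can be prototyped directly on NetLogo links. The sketch below is our own minimal version, consisting of a breadth-first reachability helper plus the growing-TTL loop; the cutoff value is arbitrary:

to-report reachable-within [ source ttl ]
  ; report the set of turtles within at most ttl hops of source (simple BFS)
  let seen (turtle-set source)
  let frontier (turtle-set source)
  repeat ttl [
    set frontier (turtle-set [ link-neighbors ] of frontier) with [ not member? self seen ]
    set seen (turtle-set seen frontier)
  ]
  report seen
end

to-report expanding-ring-ttl [ source target ]
  let ttl 1
  while [ ttl <= 9 ] [                             ; flood with TTL 1, then 3, 5, ...
    if member? target (reachable-within source ttl) [ report ttl ]
    set ttl ttl + 2
  ]
  report -1                                        ; not found within the cutoff
end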
Experiment IV: Distributed Averaging Using Gradients — In this final experiment we first show, in Fig. 6a, how NetLogo can be used to evaluate distributed consensus formation using gradients. Next, we illustrate gradient formation around wireless sensors (Fig. 6b). Fast self-healing gradients are discussed in [10].
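In its simplest form, the gradient formation shown in Fig. 6b reduces to each patch computing its distance to the nearest source. A sketch, assuming a sensors breed and a gradient patch variable of our own naming:

breed [ sensors sensor ]
patches-own [ gradient ]

to form-gradients
  ask patches [
    set gradient distance min-one-of sensors [ distance myself ]  ; distance to nearest sensor
  ]
  let max-g max [ gradient ] of patches
  ask patches [ set pcolor scale-color blue gradient 0 max-g ]    ; shade by gradient value
end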
Figure 6. a) Mobility-based consensus formation with n = 1000 nodes; b) gradients formed around wireless sensors.
CONCLUSIONS

In this article we have demonstrated the use of NetLogo, an increasingly popular tool for agent-based modeling in the M&S communities, from the perspective of modeling and simulation of self-organizing mobile and P2P systems. We have illustrated the strengths and distinctive aspects of agent-based modeling tools through a number of simple experiments demonstrating the utility of NetLogo in the important domain of modeling and simulation of P2P and ad hoc networks. In summary, we conclude that NetLogo is not only extremely flexible, with a very short learning curve, but that its powerful generic capabilities can readily be exploited to model self-organization in any complex system.
ACKNOWLEDGMENTS

We would like to thank Prof. Jose Vidal for his suggestion to try out NetLogo and Prof. Michael Huhns for helping in the selection of agent-based toolkits, both from the University of South Carolina.
REFERENCES

[1] L. F. Perrone, Y. Yuan, and D. M. Nicol, “Simulation of Large Scale Networks II: Modeling and Simulation Best Practices for Wireless Ad Hoc Networks,” Proc. 35th Winter Simulation Conf., 2003, pp. 685–93.
[2] V. Naoumov and T. Gross, “Simulation of Large Ad Hoc Networks,” Proc. 6th ACM Int’l. Wksp. Modeling, Analysis and Simulation of Wireless and Mobile Sys., 2003, pp. 50–57.
[3] K. Pawlikowski, J. Jeong, and R. Lee, “On Credibility of Simulation Studies of Telecommunications Networks,” IEEE Commun. Mag., 2000.
[4] M. Niazi and A. Baig, “Phased Approach to Simulation of Security Algorithms for Ambient Intelligent (AmI) Environments,” Proc. 40th Winter Simulation Conf., Ph.D. Student Colloquium, Dec. 7–11, 2007.
[5] U. Wilensky, NetLogo, Center for Connected Learning and Computer-Based Modeling, Northwestern Univ., Evanston, IL, 1999; http://ccl.northwestern.edu/netlogo
[6] J. M. Vidal, P. Buhler, and H. Goradia, “The Past and Future of Multiagent Systems,” AAMAS Wksp. Teaching Multi-Agent Sys., 2004.
[7] M. Niazi et al., “Simulation of the Research Process,” Proc. 2008 Winter Simulation Conf., 2008, pp. 1326–34.
[8] O. Younis and S. Fahmy, “HEED: A Hybrid, Energy-Efficient, Distributed Clustering Approach for Ad Hoc Sensor Networks,” IEEE Trans. Mobile Comp., vol. 3, no. 4, pp. 366–79.
[9] M. Niazi, “Self-Organized Customized Content Delivery Architecture for Ambient Assisted Environments,” UPGRADE-CN ’08, Boston, MA, June 23, 2008.
[10] J. Beal et al., “Fast Self-Healing Gradients,” ACM SAC ’08, Fortaleza, Ceará, Brazil, Mar. 16–20, 2008.
BIOGRAPHIES

MUAZ NIAZI [M] ([email protected]) obtained his B.S. in electrical engineering from NWFP UET, Pakistan, and a Master’s degree in computer science from Boston University in 1996 and 2004, respectively. After working in industry for several years, he is currently an assistant professor of software engineering at Foundation University, Pakistan, as well as a doctoral student at the University of Stirling, Scotland. He is a member of the IEEE Communications and Computational Intelligence Societies, the IEEE CIS Task Force on Intelligent Agents, and the IEEE CIS Task Force on Organic Computing. He has served on the program committees of a number of conferences and workshops, such as Bio-Inspired Algorithms for Distributed Systems and the IEEE Wireless Communications and Networking Conference 2009, as well as the IEEE CIS Symposium on Intelligent Agents, where he is also co-chair of a special session on self-adaptive agents. He is also a reviewer for international journals, including the Elsevier Journal of Network and Computer Applications. His areas of research interest are modeling and simulation of self-organizing and self-adaptive systems in the P2P and ad hoc network domain, and novel applications of socially inspired techniques.
AMIR HUSSAIN [SM] ([email protected]) obtained his B.Eng. (with first class honors) and Ph.D., both in electronic and electrical engineering, from the University of Strathclyde in Glasgow, Scotland, in 1992 and 1996, respectively. He is currently a reader in computing science at the University of Stirling in Scotland. His research interests are mainly interdisciplinary, and include machine learning and cognitive computing for modeling and control of complex systems. He holds one international patent in neural computing, is editor of five books, and has published over 100 papers in journals and refereed international conference proceedings. He is Editor-in-Chief of Springer’s Cognitive Computation, Associate Editor for IEEE Transactions on Neural Networks, and serves on the editorial boards of a number of other journals. He is IEEE Chapter Chair of the U.K. & RI IEEE Industry Applications Society and a Fellow of the U.K. Higher Education Academy.
ACCEPTED FROM OPEN CALL
Identity Management and Web Services as Service Ecosystem Drivers in Converged Networks Juan C. Yelmo, Rubén Trapero, and José M. del Alamo, Universidad Politécnica de Madrid
ABSTRACT

This article describes a Web-service-based framework, supported by federated identity management technologies, that enables fixed and mobile operators to create a secure, dynamic, and trusted service ecosystem around them. With this framework, new service providers can be incorporated automatically, dynamically negotiating and accepting service level agreements that control their activities. Furthermore, the framework is based on identity management, not only to keep user identities private and protected, but also to enable the creation of value-added services using network resources and user profiles. This service ecosystem is not limited to third-party providers; it can also be extended through collaboration agreements between operators.
INTRODUCTION

A wide-ranging transformation of the telecom industry has revolutionized its structure during the last 25 years. It initially focused on the liberalization of the telecom sector, followed by the integration of different business units. For decades, each country had its own national telecom operator, which offered voice services together with a few data services for that country, resulting in a near monopoly at best. Governments slowly began to regulate this situation by privatizing most operators and introducing competition. One of the clearest examples is the transformation of the American Telephone & Telegraph Company: first it was split into seven independent companies, nicknamed the Baby Bells, which then amalgamated again into three large operators through mergers and acquisitions. Meanwhile, in Europe, the main British operator was split into two companies: British Telecom to provide services over the fixed network and Vodafone to offer services over the mobile one. In other countries, the incumbent operator could be the owner of both the mobile and fixed networks, but with the operator separated into several different branches. At the same time, the Internet began to grow in popularity and importance, and its cheap and
easy-to-manage protocol (the Internet Protocol, IP) was quickly extended to other networks. Currently, IP is quickly becoming the protocol of choice for telecom networks, too. It enables operators to integrate their different access networks into a so-called all-IP core network, thus reducing integration, exploitation, and management expenses. This is the current state of convergence, both technological (around IP, with different access networks) and in business (branches integrating to create a converged operator). It enables telecom operators to offer new value-added voice, video, and data services using all their networks together. For example, a cellular device connected to a mobile network can access Web sites located anywhere on the Internet, just like a computer connected to a LAN. However, users are increasingly looking for value-added mobile IP services that go beyond mere browsing on the mobile, including messaging and presence- or location-based features. These services will help telecom operators improve their currently stalled average revenue per user (ARPU). However, some risks threaten the expectations of the telcos. The most serious risk is that of becoming mere bit pipes [1]. If the operator allows its customers to access the Internet with no restrictions, customers could end up paying for third-party services rather than the operator's services. It is a paradox for a mobile operator to carry unprofitable bits for third-party services that offer voice over IP (VoIP) to end users, for example Skype. This scenario would not yield a higher ARPU and might even decrease it. A possible solution is to open up telecom networks, creating attractive service platforms that link the different stakeholders in the provision process: service providers, consumers, and the operator itself. By playing such a role, the operator becomes the central entity in the service provision process and not just a bit pipe, generating a service ecosystem around its core business — the network. In this context, we use the term service ecosystem in the sense of an open and dynamic
service marketplace where a community of third-party service providers works together to provide a wealth of services, which are exposed and consumed using Web services technologies in a particular business service delivery environment [2]. Around this model, it is possible to provide and benefit from useful ancillary services that telcos are used to managing, such as integrated billing and accounting or customer-relationship management, thus increasing the value of the bits carried on behalf of third parties. Furthermore, in a converged environment, the integrated operator is also well positioned to offer other profitable services related to customer information, such as presence, location, and profiles.

This article describes a Web-service-based framework, supported by federated identity management technologies, that enables mobile operators to create a secure, dynamic, and trusted service ecosystem. It delivers service level agreements (SLAs) for customers while protecting the core assets of the telcos from unintentional but unregulated exploitation by service providers, as well as from intentional exploitation by untrustworthy and malicious entities. The next section describes several previous approaches to opening up telecom networks to third parties, as well as the issues they leave unresolved. Then we introduce our solution for enlarging an operator's service ecosystem in two ways: through the dynamic incorporation of third-party providers into the service ecosystem, and through collaboration between the service ecosystems of different operators.
THE PROBLEM: ENABLING DYNAMIC TELCO SERVICE ECOSYSTEMS

Various approaches have been proposed over the years to open up telecom networks to third parties and to collaborate with them. They have evolved from International Telecommunication Union-Telecommunication (ITU-T)/European Telecommunications Standards Institute (ETSI) intelligent network standards toward service-oriented architecture (SOA) approaches such as Parlay X or Open Mobile Alliance (OMA) enablers, converging on the use of Web services as the middleware that enables third parties to access and control network resources. These solutions provide telcos with the means to generate service ecosystems. However, some difficulties have not been completely ironed out yet. As stated previously, most of the valuable services an operator can offer third parties are related to its subscribers' identities and profiles. For example, sending a short message service (SMS) message, getting a customer's location, and billing on behalf of a third party all revolve around subscribers' identities and related information. This data can be the catalyst that spawns a service ecosystem around telecom networks, because the knowledge that an operator has about its subscribers is highly valuable. In most countries, however, there are laws that require operators to ensure security and privacy when revealing personal information about a customer, such as the 2002/58/EC Directive [3] of the European Union.
A telecom operator trusts only a small, well-known set of third parties. To deploy their services over the platform and use those of the operator, the third parties must sign off-line contracts. This results in a longer time-to-market for new telecom services than on the Internet, even though both use Web services as a means of facilitating rapid collaborative services. The Web services used in telecom networks therefore face a number of constraints, mainly related to trust when collaborating with third parties and other operators, and to protecting access to the network and to customer-identity resources. Typical service provider license agreements often fail to fulfill some of these conditions; additional mechanisms are required to guarantee user privacy-related clauses.

Identity management is the discipline that can provide solutions to these problems, facilitating a secure, reliable, extendable, and profitable Web service ecosystem around telecom networks, not just with commitments, but with effective mechanisms to support them. Identity management commonly refers to the processes involved in the management and selective disclosure of user-related identity information, either within an institution or between several of them, while preserving and enforcing privacy and security requirements. If the different companies establish trust relationships between themselves, the process does not require a common root authority and is called federated identity management. Each company maintains its own customer list, together with the customers' identity attributes, and identifies each client properly when required. At a technical level, it is related to network security, service provisioning, customer management, single sign-on (SSO), single logout (SLO), and the means to share identity information.

A federated approach to identity management in a Web services environment can be supported by several standards and frameworks, among which the most important are the Security Assertion Markup Language (SAML) [4], Liberty Alliance [5], Shibboleth [6], and WS-Federation [7]. SAML is an Extensible Markup Language (XML) standard for exchanging authentication and authorization data between security domains. There are two main architectures based on SAML: Shibboleth and Liberty. Shibboleth focuses on educational environments, whereas Liberty is an industrial-strength set of specifications that constitute the de facto federated identity management standard for most telcos. The Liberty approach associates service providers with trusted domains called circles of trust, which are supported by Liberty technology and operative agreements in which trust relationships are defined between providers (Fig. 1, left). Circles of trust support users in transacting business with the associated providers in a secure and apparently seamless environment. It is worth noting that this concept of trust-based relationships between organizations and their individual or joint customers has existed in the off-line business world for years; two common examples are travel alliances and affiliate-business partnerships.
Figure 1. Circle of trust (left) and trust domain around a mobile network (right).
Within a circle of trust, subscribers can federate (link) the isolated accounts they own across different providers using a pseudonym. Some entities are specially prepared to manage these federations, as well as to provide other ancillary services:
• Identity providers know all users and service providers within the circle of trust, and their federations. They also know how to authenticate users; thus, they can certify a user's identity to any provider. Whenever users want to access a service provider, they are redirected to the identity provider, against which they must be authenticated. After a user has successfully logged on, the identity provider sends a statement back to the service provider containing information related to the identity of the user (a pseudonym) at the service provider, where other entities within the circle of trust and the credentials required to access them can be found.
• Discovery services know where the users' identity resources are stored within the circle of trust. When a service provider wishes to find user information, it sends a query to the discovery service, which returns the endpoint reference for that resource and a valid one-time credential to access it.
A mobile operator is well positioned to take on both of the aforementioned roles: it knows all of its customers, as well as some of their identity attributes, and has a relationship of trust with them. Furthermore, it already has business agreements with different third parties situated either within its private network or on the Internet, defining a trust domain (Fig. 1, right). There is a clear similarity between a service ecosystem around a telecom network and a trust domain supported by federated identity management. We have used it to create a framework
based on Web services that enables mobile operators to create secure, dynamic, and trusted service ecosystems. This framework enhances existing Liberty models and architecture by providing automatic incorporation of third-party providers into a mobile service ecosystem. Off-line contracts and hard integration efforts are no longer required, thus reducing the time to market for new services. The framework also allows third-party providers to accept pre-defined SLAs online, as well as different off-line negotiated ones. The fulfillment of the SLA terms is monitored within the framework, and exclusion mechanisms are triggered when the outcomes do not comply with what was agreed upon.

Finally, we consider that in a globalized and interconnected world, subscribers must be able to access mobile IP services wherever they are connected. This means that users away from their home domain must still be able to use services provided by the visited network in the same way as services provided in the home network. Moreover, they must also be able to use the services they subscribe to within their local service ecosystem. Our framework enables this service-level roaming, facilitating dynamic collaboration between different operators to extend the service ecosystem to roaming users.
OPENING THE SERVICE ECOSYSTEM TO THIRD-PARTY PROVIDERS

Mobile operators have been looking for novel killer applications (or killer services) for years without success. It is time to let others find these killer applications; of course, operators should host them in their networks so that they also can benefit from them. As we have seen, a service platform is what enables mobile
Figure 2. Dynamic incorporation of third-party providers into an operator's service ecosystem.
operators to open their network infrastructure to third parties, thus making a profit from them. A rich and dynamic community of service providers is desirable: if the operator has a good business model, it should receive compensation from each service provider and from user activities related to them, no matter who the service provider is. Furthermore, a dynamic service ecosystem is required to enable quick subscriptions to new providers and to terminate those that are no longer profitable. An ample provider ecosystem does pose some threats, however. Because many of these service providers are small and have no record of trust (i.e., are untrusted), operators must establish whether they are working properly. In principle, the use of an identity management infrastructure such as the Liberty Alliance ensures that the user's identity remains protected, but what about the operator's network and its interests? The operator must, of course, control the use of its network resources. Our framework enables not only the automatic incorporation of new service providers into the operator's trust domain, but also the agreement, acceptance, and control of an SLA, which is automatically submitted to the service providers, accepted (or rejected) by them, and loaded into a monitoring infrastructure available to the operator to control the fulfillment of the SLA (Fig. 2). The SLA is an XML-based document that includes all that is required to describe the activities of a service provider within the operator's domain of trust, covering both the technical and business agreements to be fulfilled. In this research we have followed the Web service level agreement (WSLA) schema proposed by IBM [8]. It includes the parameters to be measured and the metrics used to measure them. It also includes the service level objectives that indicate
the limits that must be respected by the service providers, as well as action guarantees, that is, what to do when any of the objectives included in an SLA are violated. Because the entire framework is based on Web services, the use of XML to describe the SLA is a convenient way of exchanging it between parties, both at incorporation time and when processing the content of the SLA used to monitor service providers.

Imposing a single uniform SLA on every service provider can be counterproductive for the operator: important service providers of particular interest may merit different, preferential treatment. To solve this problem, the framework allows for customization of the SLA, which can be negotiated off-line before incorporation, with customized conditions for each participant in the agreement. This customized SLA is then the one offered to the service provider during the incorporation procedure. Meanwhile, the remaining untrusted service providers are required to accept a default, and likely more restrictive, SLA. This feature can be viewed as a protection measure for the operator against unprofitable or malicious service providers.

The incorporation of service providers and the fulfillment control of agreements are both implemented using Web services. The use of a SOA approach and standard interfaces eases the adaptation of a service provider's system to the operator's service ecosystem. All of the metadata that must be known by both parties is defined in [9] and is automatically exchanged using Simple Object Access Protocol (SOAP) messages. This information consists of service endpoints, as well as available profiles and credentials. Following this approach, compatibility between the systems of both parties is ensured.
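To make the structure of such a document concrete, the following is a heavily simplified sketch in the spirit of the WSLA schema [8]. The service, parameter, and action names are our own illustrative assumptions, and the real schema carries considerably more detail (measurement directives, validity periods, supporting parties, and so on):

<SLA>
  <Parties>
    <ServiceProvider name="ThirdPartySP"/>      <!-- hypothetical third party -->
    <ServiceConsumer name="MobileOperator"/>    <!-- the operator's trust domain -->
  </Parties>
  <ServiceDefinition name="SendSMS">
    <SLAParameter name="RequestRate" type="float" unit="requests/s">
      <Metric>AverageRequestRateOverOneMinute</Metric>   <!-- how the parameter is measured -->
    </SLAParameter>
  </ServiceDefinition>
  <Obligations>
    <ServiceLevelObjective name="RateLimit">
      <Expression>RequestRate &lt; 50</Expression>       <!-- the limit to be respected -->
    </ServiceLevelObjective>
    <ActionGuarantee name="OnViolation">
      <Action>NotifyOperatorAndSuspendProvider</Action>  <!-- triggered on violation -->
    </ActionGuarantee>
  </Obligations>
</SLA>

An accepted SLA of this shape is what the monitoring system of Fig. 2 evaluates at runtime.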
Figure 3. Collaboration between trust domains.
Additionally, a standard method of making subscription invocations to the operator domain offers new, cost-effective business opportunities for both service providers and operators, promoting the development and deployment of new services.
EXTENDING THE ECOSYSTEM WITH COLLABORATION AGREEMENTS

Another aspect of our contribution is the use of identity management technologies to enable collaboration between operators when providing services to other operators' subscribers. The intention is to extend the service ecosystem around the operator not only to third-party providers, which can be incorporated using the aforementioned mechanisms, but also to other operators (Fig. 3). Among other capabilities, this enables users to:
• Use their home-subscribed services when they are roaming in a visited network
• Access services in the visited network with an anonymous (but authenticated) identity, and even personalize them according to the user profile stored in their home domain
One of the issues to consider here is how user identity is certified. For example, if two operators from different countries — or convergent operators in the same country — offering unlicensed mobile access (UMA) services are collaborating, users probably will have their identity registered in just one of them but not in the other. When roaming, a user who would like to access services in the visited domain (1 in Fig. 4) is redirected to the identity provider of that domain for authentication (2–3). As the real user's identity is unknown in the visited domain, the request is redirected to the user's home identity provider (4–5) to be
authenticated. Once authenticated, the identity providers from both domains agree to link the certified user identity, as stated by the home identity provider, to a new temporary and anonymous (but now authenticated) identity in the visited identity provider (6–7). After this process is performed, the identity provider in the visited domain sends the new identifier back to the requesting service provider (8–9), which then allows the user access (10). At the end of this process, the identity provider of the visited domain also sends a message to its discovery service stating that a new user has temporarily joined the domain and that identity information may from now on be available in the user's home discovery service, along with the location of and credentials required to access it. The identity provider in the visited domain can find out which domain the user comes from thanks to a pre-defined code in the international mobile subscriber identity (IMSI). Note that a service provider always relies on the entities of its own domain (e.g., its identity provider).

Now, imagine that the roaming user has successfully accessed the service in the visited network. This service might require access to a user profile to retrieve some identity-related information, such as the user's bank details (e.g., to pay for cinema tickets). That information is stored in the home operator's network. As previously stated, the discovery service of a domain is the only entity that knows where the users' identity resources are stored within that domain and how to access them. So, the service provider asks the discovery service of its own domain about the user's identity resources (1 in Fig. 5).
Figure 4. Sequence to access a service provider of a visited domain.

The visited discovery service in turn forwards the request to the discovery service of the home domain, using the information received in the previously described process (2). The home discovery service performs as usual, sending the location of and credentials to access the requested resources (3) back to the requesting entity (the visited discovery service), which in turn forwards them to the requesting service provider (4). With this information, the service provider in the visited domain can gain direct access to the requested identity resource (5–6). Again, note that the service provider is still relying solely on the entities of its own domain, in this case the discovery service. On the other hand, the identity resources are never replicated outside the home domain, merely queried by the requesting service provider; thus, no complex synchronization tasks must be performed.
SECURITY AND PRIVACY ISSUES

From a security point of view, Liberty already provides integration with different types of security mechanisms. The use of SAML assertions, combined with digital signatures, provides strong trust relationships between identity providers, service providers, discovery service providers, and customers. Furthermore, the public key infrastructure recommended by Liberty for securing communications guarantees that sensitive information is neither modified nor intercepted by unauthorized parties, and also supports mutual authentication between the parties involved in a conversation.

From an operator's point of view, this approach protects the operator's discovery service. The idea of a non-affiliated, unknown, and untrusted third-party service provider accessing these resources does not sound very attractive from an operator's
point of view. The discovery service acts as the key that can be used to discover both the location of user resources and how to obtain them. If non-affiliated service providers coming from an external domain could access the discovery service, the location of user resources might be exposed to unauthorized entities, which might then obtain references and credentials to access them. For these reasons alone, we consider our model more suitable for a real production environment with strong security restrictions: the only entities that can query a discovery service provider about user resource references are service providers from its own circle of trust and the discovery service of a collaborating operator or company.

From a user's point of view, our model guarantees that user identities are protected and that profiles are accessible only to approved providers. Without our approach, users could not use their identities to access services in domains belonging to other operators or companies, and if they did, their identities probably would be exposed to third parties. For instance, under service-level roaming with our approach, the only thing the visited identity provider knows about each user's identity is a pseudonym that identifies the user federation and maps the real and anonymous identities in the home and visited circles of trust, respectively. Furthermore, the scope of our approach is not limited solely to service access: the privacy of user profiles is also guaranteed by combining the Liberty approach with our model for circle-of-trust collaboration. Security and privacy can be increased further with an interaction service that asks users for permission to access sensitive identity attributes, such as those in banking profiles.
Figure 5. Sequence to obtain identity information stored in a home domain from a visited domain.
SUMMARY AND CONCLUSIONS

In this article we present a solution that could help boost the business of operators in the provision of services. It is built on a Web-services-based framework that can be used to extend the service ecosystem of an operator both with third-party providers and with collaboration agreements between operators. This framework leverages identity management (following the Liberty Alliance specifications) to support users in transacting business in a secure and apparently seamless environment, and to ensure user privacy.

All parties that participate in our approach benefit from it. Telcos benefit from the agreements established with other companies or operators: they receive more services and customers, and thus increased revenue. Service providers benefit from the sharing of user resources, which enables better service customization and user experience. As for subscribers, such an ecosystem facilitates more and better-trusted services; furthermore, they can access their home services even when they are on a visited network, and they can use services provided within the visited service ecosystem that were not available to them before.

The future points toward opening the service ecosystem not only to third-party providers, but also to the customers themselves. This is one of the objectives of the Open Platform for User-centric service Creation and Execution (OPUCE) [10], a work-in-progress European research project within the Sixth Framework Program. OPUCE, in which this research group is involved, aims to deliver a next-generation telecommunication service delivery platform enabling customers to create and deploy their own services easily in converged environments, thus helping carriers expand their range of services.
REFERENCES

[1] A. Cuevas et al., “The IMS Service Platform: A Solution for Next-Generation Network Operators to Be More than Bit Pipes,” IEEE Commun. Mag., vol. 44, no. 8, Aug. 2006, pp. 75–81.
[2] A. Barros and M. Dumas, “The Rise of Web Service Ecosystems,” IT Professional, vol. 8, no. 5, Oct. 2006, pp. 31–37.
[3] “Directive 2002/58/EC of the European Parliament and the Council of 12 July 2002, Concerning the Processing of Personal Data and the Protection of Privacy in the Electronic Communications Sector,” Official J. EU, vol. L 201, July 2002, pp. 37–47.
[4] S. Cantor et al., “Assertions and Protocols for the OASIS Security Assertion Markup Language (SAML),” OASIS v. 2.0, Mar. 2005.
[5] Liberty Alliance Project, 2008; http://www.projectliberty.org
[6] Shibboleth Project, 2008; http://shibboleth.internet2.edu
[7] S. Bajaj et al., “Web Services Federation Language (WS-Federation),” 2003; ftp://www6.software.ibm.com/software/developer/library/ws-fed.pdf
[8] H. Ludwig et al., “Web Service Level Agreement (WSLA) Language Specification,” v. 1.0, IBM Corp., Jan. 2003.
[9] P. Davis et al., “Liberty Metadata Description and Discovery Specification,” Liberty Alliance Project v. 1.1, Dec. 2004.
[10] The OPUCE Project, 2008; http://www.opuce.eu
BIOGRAPHIES

JUAN C. YELMO ([email protected]) received a Ph.D. in 1996 from the Universidad Politécnica de Madrid (DIT-UPM), Spain. He joined DIT-UPM in 1991, where he is an associate professor in the Department of Telematics Engineering. He has worked as a technical researcher and research manager on several Europe-wide projects in the IST and Eureka CELTIC programs in the areas of services engineering, digital identity, and distributed applications and middleware. Currently, he is the DIT-UPM representative at several industrial standardization bodies, such as the Liberty Alliance and the OMG. He is also director of doctorate and postgraduate studies at the Higher Technical School of Telecommunications Engineering of UPM. From 1988 to 1991 he worked at Telefónica I+D (the R&D subsidiary of Telefónica).

RUBÉN TRAPERO ([email protected]) received his M.Eng. degree in telecommunications engineering from UPM in 2004. He joined this research group in 2003, where he developed his Master's thesis in the area of identity management and peer-to-peer mobile services. Since then he has been involved in several European and national projects as a technical researcher while conducting his Ph.D. work. His research interests include identity management and services engineering over convergent networks.

JOSÉ M. DEL ALAMO ([email protected]) is a Ph.D. candidate at UPM and is involved in several European projects as a technical researcher. He received an M.Eng. degree in telecommunications engineering from UPM in 2003. His research interests are in identity management and services engineering over next-generation networks.
Global Communications Newsletter • March 2009
Highlights from the 4th IEEE Broadband Wireless Access Workshop, Collocated with GLOBECOM 2008, New Orleans, Louisiana

By T. M. Bohnert, SAP Research, Switzerland; D. Moltchanov, Tampere Univ. of Tech., Finland; G. Fodor, Ericsson R., Sweden; D. Staehle, Univ. Würzburg, Germany; E. W. Knightly, Rice Univ., USA

From 1 to 4 December, the IEEE Global Communications Conference 2008 (IEEE GLOBECOM) was held in New Orleans, Louisiana, United States. New Orleans was hosting GLOBECOM for the second time, the first having been in 1985, and to account for the enormous changes in the communications industry since then, ComSoc's flagship conference was themed “Building A Better World Through Communications.” While contemporary change appears omnipresent and carries the utmost positive connotation, especially in a place like New Orleans, little to no change has happened with respect to the popularity of IEEE GLOBECOM; in some cases, then, no change augurs well too. This recognition was reflected in the number of workshop proposals submitted to the workshop chair, Dr. Devetsikiotis: after a careful review process, only 9 out of 60 (!) workshops were included in the conference program. The 4th IEEE Broadband Wireless Access Workshop (BWA) (http://www.bwaws.org) was selected as one of these full-day workshops. After earlier editions collocated with NGMAST '07, IEEE CCNC '08, and IEEE ICC '08, this achievement unequivocally confirms the Workshop's overall objective, which is to remain the prime workshop in the area of broadband wireless access research. This claim is underlined by the response to the call for papers: a record 96 papers were submitted from 28 different countries (70 from 26 for the previous edition). To accommodate this number, the Technical Program Committee (TPC) selected 27 papers for publication and inclusion on IEEE Xplore. After acceptance ratios of 32, 28, and 22 percent in previous editions, this edition's 28 percent matched the Workshop's excellent profile. On top of this, the best papers, ranked on reviewers' comments and workshop presentations, will be forwarded to the EURASIP Journal of Wireless Communications and Networks, which has accepted a special issue in association with and dedicated to the workshop. The papers were organized into four technical sessions: Optimization and Performance Evaluation, Resource Allocation and Scheduling, Channel and Physical Layer Modeling, and Relay Networks. Topics of these technical sessions included theoretical as well as practical deployment aspects of broadband wireless access technologies; notably, roughly 50 percent of the papers were authored by key industrial players. The Workshop highlight was the keynote speech of Prof.
Dr. E. W. Knightly from Rice University, Texas, one of the leading researchers in BWA research. In his career Dr. Knightly has published numerous seminal papers in several areas, but the Workshop’s context refers foremost to his current research in wireless mesh networks. In recognition of his work, he was recently appointed IEEE Fellow. He also remains committed to the community (e.g., as General Chair of ACM MobiHoc 2009 and Editor of IEEE Transactions on Mobile Computing and IEEE/ACM Transactions on Networking). In his talk Edward reported valuable practical and theoretical insights revealed by a project aimed at operating a wireless access network in an under-resourced community in Houston, Texas. The network serves thousands of users over several square kilometers via a three-tier multihop architecture employing 802.11. He described important lessons from the field spanning measurement studies that point to protocol design challenges to anthropological studies that characterize community response to new technology. The slides of his talk are available on the Workshop’s Website. We hope that this edition of the BWA Workshop series has fulfilled the expectations of the authors and workshop attendees, as well as the conference host. From our point of view, scientifically and organizationally, it was an outstanding event in the area of wireless research, and we would like to express our sincere gratitude to all people involved, in particular to the outstanding performance of the TPC and all reviewers. Not to forget, special thanks goes to the IEEE GLBOECOM 2008 Workshop Chairs for their reliable support and constructive cooperation. As announced at the closing of the 2008 Workshop, we plan to propose the 5th IEEE Broadband Wireless Access Workshop to IEEE GLOBECOM 2009. However, this is some time ahead, and we therefore point the interested reader to our Website, http://www.bwaws.org, where we announce the latest information. •Thomas Michael Bohnert, SAP Research, Switzerland, and Dmitri Moltchanov, Tampere University of Technology, Finland — General Chairs •Edmundo Monteiro, University of Coimbra, Portugal, and Yevgeni Koucheryavy, Tampere University of Technology, Finland — General Co-Chairs •Gabor Fodor, ERICSSON Research, Sweden, and Dirk Staehle, University of Wuerzburg, Germany — TPC Chairs •Andreas Kassler, Karlstad Universitet, Sweden, Sean Murphy, University College Dublin, Ireland, and Christian Schwingenschloegl, SIEMENS CT, Germany
Age vs. Time Speed at Sweden ComSoc
By Juan Hernandez, Jan Nilsson, Tommy Svensson, Bogdan Timus, and Leif R. Wilhelmsson, Sweden ComSoc Chapter

"It must be a long way from being an IEEE Student Member to being able to become an IEEE Senior Member!" Did you think like this when you were about to obtain your engineering degree? Have you had the feeling that this road is too long and tough to be worth even contemplating, that you need to be a genius to make it to the end? Certainly many new graduates without insight into how IEEE works must think and feel like this. But you don't have to discover a short and very important formula like Einstein's! An IEEE Senior Membership is simply a recognition of those engineers who have fulfilled their obligations to the Society: to enhance the quality of life of their fellow citizens through their contributions and developments. This simple statement reveals that many IEEE Members deserve this recognition. The Swedish ComSoc Chapter is actively working on making the trip from Student to Senior Member easier. Let's look at the different steps in this process.

We start with young members, typically Student Members. At this stage progress seems very slow, often accompanied by the anxiety of not having a clear idea of what happens after one's studies, not seeing a clear application for them, and so on. Our initiative Willing to Integrate (Will2Int) aims to create a bridge over experience and age boundaries, between young members willing to integrate into the larger IEEE community and experienced members who are already recognized integrators, called RecInt members. Events of general interest such as Distinguished Lecturer Talks can be used as fora to establish contact between young and senior members. Will2Int members are given the opportunity to introduce themselves, their work, and their interests. The contacts established during the informal discussions following Distinguished Lecturer Talks have the potential to evolve into mentor-apprentice relationships. These interactions are useful as complementary guidance during studies, but also for integrating into the local IEEE community.

Later, when some achievements have made their way into your CV, we offer the opportunity to disseminate them at our sponsored workshops. Presenting your work and discussing it with your colleagues, sometimes in your own mother tongue, is a good complement to international conferences. These events, organized at the highest technical level with the participation of world leaders in the field, are easy to attend given the modest local transportation time and costs. Attending high-quality national workshops helps build a good local research network, which is valuable for your continued work and further career development.

The industrial scenario is often different. Conference papers and journal publications might no longer be what fill the CV. Instead, work might target the next generation of some system or some implementation. A career might go a variety of ways: toward a management position, project leadership, or work as a technical expert. Irrespective of the path, our ComSoc Chapter can always serve as a good interface with the younger generation.

Then time starts going faster; years have gone by, and your CV is thicker. Your activities are taking time from your life, perhaps more than you would like, but you are happy. The satisfaction makes the Senior title not seem so far away. However, finding Senior Members who can act as references for you can be an obstacle.
Therefore, Sweden ComSoc periodically organizes meetings with Seniors who can provide the three references needed. The meetings are held at different locations around the country, and the aim is that within a three-year period each member should have at least one chance to participate in a meeting close to home.

ComSoc gives you the opportunity to disseminate your expertise. Give others the opportunity to integrate into these circles with which you feel so identified. ComSoc gives you the possibility to be a Distinguished Lecturer: breaking your territorial limits, your expertise reaches many places distant from your home. You can also be a little more expansive and become an international leader in this process, a member of the group of world leaders in the organization.

And when time is going by very fast, compensated by the fact that you no longer need to do anything out of obligation, Sweden ComSoc offers the right scenario for enhancing the life of your neighbors. You are still recognized, and even if your time unit is now measured in years, the satisfaction remains. You have the opportunity to attend the Will2Int sessions organized in parallel with the Distinguished Lecturer Talks, now as a member of the RecInt group in our Chapter. The cycle is over, but ComSoc has been with you all the time, letting you receive the benefits of this organization of volunteers with hundreds of thousands of members all over the world and one main objective: to enhance the life of its members and society.

As we are an organization of engineers, perhaps one of our readers is a social mathematician willing to find numerical expressions for what we have written here. To you we give the following hint: if A stands for age, TS for time speed, and TUF for time unit feeling, then what we say here is that TUF = TS/A is a constant. The good news is that ComSoc, like all IEEE Societies, gives its members constant benefit over our whole engineering lives.
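For a reader who wants to take that hint literally, here is a minimal worked reading (our own illustration, not the authors'; it assumes the perceived speed of time grows in proportion to age, TS = k·A for some personal constant k):

TUF = TS / A = (k·A) / A = k

Under that assumption the time unit feeling k is indeed independent of age: the benefit per perceived time unit stays constant from Student Member to Senior Member, even as the clock appears to tick faster.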
ComSoc Portuguese Chapter: Activities in 2008
By Luis M. Correia and Joel Rodrigues, Portugal

In 2008 the ComSoc Portuguese Chapter continued its activities of organizing seminars and providing, through its Website, a permanent list of conferences in the area, improving the information flow within the Portuguese scientific community working in communications.

By the end of May a seminar on mobile communications was organized, aimed essentially at students about to graduate but open to industry as well. It was organized jointly with IST - Technical University of Lisbon, taking advantage of this university's initiative. Speakers came from industry (Alcatel-Lucent, Vodafone, and Optimus), as well as the Portuguese telecommunications regulator (Anacom), giving a perspective on the foreseen evolution of technology and services for mobile and wireless communications. The event was well attended, by more than 50 people, the majority being students. This joint organization will be pursued in coming years.

Following the past organization of joint activities with the Portuguese Engineers Association (Ordem dos Engenheiros), another event took place at the end of September, this time dedicated to "Perspectives on Electromagnetic Exposure." It brought together speakers from various backgrounds, given the transdisciplinarity of the topic: Luis M. Correia, from IST - Technical University of Lisbon, addressed the telecommunications engineering aspects; Antonio Tavares, from ENSA - New University of Lisbon, gave the medical and health viewpoint; and Catarina Freitas, from the Municipality of the City of Almada, presented the perspective of environmental engineering on this theme. At the end, using a wideband probe and a spectrum analyzer from IST-TUL, participants were able to measure the electromagnetic fields from their own mobile phones. The event attracted more than 130 people, (Continued on Newsletter page 4)
The 60 Years of HTE: The Hungarian IEEE Sister Society
By Gyula Sallai, President of HTE; Csaba A. Szabó, Editor-in-Chief, Infocommunications Journal; and Rolland Vida, Chair of the International Affairs Committee of HTE, Hungary

2009 is a special year not only for the IEEE, which celebrates its 125th anniversary, but also for HTE, the Scientific Association for Infocommunications, Hungary, which was founded 60 years ago, on January 29, 1949. The organization, formerly known as the Scientific Society for Telecommunications, is a voluntary and autonomous professional society of engineers and economists, researchers and businessmen, managers and educational, regulatory, and other professionals active in the fields of telecommunications, broadcasting, electronics, information, and media technologies in Hungary. With a membership of over 1300 individuals and 100 corporate members (large companies, small and medium enterprises, research laboratories, and educational institutions), HTE has become one of the most important scientific associations in the country, with continuously broadening international relations and activities as well. Being an IEEE Sister Society has certainly enabled us to maintain and even further enhance our reputation as an important player in the field of infocommunications, for which we are of course grateful to the IEEE.

One of the important activities of HTE is the organization of high-quality international scientific conferences. Having organized the World Telecommunications Congress (WTC) in 2006 and the 16th IST Mobile and Wireless Communications Summit in 2007, last year we brought to Budapest the 13th International Telecommunications Network Strategy and Planning Symposium, Networks 2008. We are happy to say that the five-day conference was a great success, as it attracted more than 260 specialists from 30 countries, in addition to the 75 participants who registered only for the tutorial sessions held on the first two days. The program of the symposium was divided into 21 technical sessions, which included 90 presentations altogether. In addition, there were plenary talks and panel sessions that involved prestigious speakers from regulatory bodies, industry, and academia. For more information, please visit the conference Website at http://www.networks2008.org.

And the 60th anniversary of the founding of HTE cannot truly be celebrated, of course, without organizing a similarly prestigious conference. That is why we are happy to help the Communications Society in organizing the IEEE Wireless Communications and Networking Conference (WCNC 2009), which will be held in Budapest in April 2009, after being hosted by cities like Hong Kong, New Orleans, and Las Vegas in the last few years. WCNC is the premier conference purely dedicated to wireless communications, and with a technical program including 10 tutorials and more than 500 research papers, it is certainly one of the major scientific events not to be missed by researchers and industry professionals. Besides this, the entire year of 2009 will be celebrated by HTE as an anniversary year, with many dedicated symposia, workshops, and publications presenting the history of the association and the evolution of its activities.

We should also mention that in 2009 we significantly restructured the Infocommunications Journal, the scientific journal published monthly by HTE.
Until recently, our journal was a predominantly Hungarian-language journal (in fact, the only one in our field), fulfilling the important mission of disseminating information on the state of the art in different areas of telecommunications and information technology among Hungarian professionals, while also trying to preserve and develop the professional language in this field. We also published English issues twice a year, compiled mostly from the best research papers published in Hungarian during the preceding half-year.

As of January 2009, we are increasing the number of English issues to four, with the objective of becoming a quarterly international journal. We intend to publish original research papers in the aforementioned areas after a rigorous peer review process. Our newly appointed International Advisory Committee, consisting of high-standing representatives of the international professional community, supports the Hungarian editorial team in maintaining the quality of published papers. The Infocommunications Journal is intended to become a recognized international publication forum for researchers not only from Hungary, but also from neighboring countries and, in principle, from all over the world. We will be particularly happy to publish theoretical and experimental research results achieved within the framework of European ICT projects.

In conclusion, we hope that the 60th anniversary of our association will be celebrated through attractive high-quality events, and that in 2069 we will be able to inform you of another 60 successful years in the life of HTE.
Highlights from BIHTEL 2008: Implementation and Exploitation of New Communication Technologies, Applications and Services
By Kerim Kalamujic, University of Sarajevo, Bosnia and Herzegovina

Sarajevo, the capital of Bosnia and Herzegovina, hosted the 7th International Symposium on Telecommunications, BIHTEL 2008. It was held on 3–5 November 2008 at the state-of-the-art congress hall of the UNITIC Business Center. BIHTEL is a traditional biennial international symposium which, over almost 15 years, has grown into one of the most respected international conferences on telecommunications in the region. BIHTEL 2008 was organized by the Faculty of Electrical Engineering of the University of Sarajevo in collaboration with the IEEE Bosnia and Herzegovina Section and IEEE Communications Chapter. This year's main theme was "Implementation and Exploitation of New Communication Technologies, Applications and Services," and it proved to be quite a tempting and interesting theme for all those who wanted a glimpse of forthcoming technologies.

The opening ceremony consisted of a welcome speech by the organizer, followed by two keynote speakers, both well-known leaders in their respective fields. The first keynote speaker was Prof. Branka Zovko-Cihlar from the Faculty of Electrical Engineering and Computing, University of Zagreb, Croatia. Her speech was entitled "Broadcast Networks Planning for Digital Terrestrial Transmission." The second keynote was delivered by Prof. Gregor Rozinaj from the Slovak University of Technology, Bratislava, Slovakia. His speech was entitled "Natural Communications with Computers."

BIHTEL 2008's three-day program consisted of five sessions, where 44 accepted papers were presented in a very relaxed atmosphere. Papers were submitted not only from Bosnia and Herzegovina, but also from other countries in Southeastern Europe, further evidence of the regional importance of this conference. It is important to mention that four student papers were presented this year, a practice that will be continued in coming years to inspire young students and entry-level engineers to do research, and afterward write and submit their papers. (Continued on Newsletter page 4)
What We Learned at the GLOBECOM 2008 Communication History Sessions
By Jacob Baal-Schem, Communications History Committee

"History is a many splendored thing." Sometimes it teaches us unexpected lessons, such as the relation between the discovery of radio and Irish whiskey. This was a lesson we learned at the recent History Panel at GLOBECOM 2008 on "Who Invented Radio?" It appears that whiskey was one of the reasons for the success of Marchese Guglielmo Marconi (25 April 1874 - 20 July 1937), best known for his development of a radiotelegraph system, who shared the 1909 Nobel Prize in Physics with Karl Ferdinand Braun "in recognition of their contributions to the development of wireless telegraphy." Marconi's Irish wife, Annie Jameson, was the granddaughter of the founder of the Jameson Whiskey distillery, and the Jameson family assisted in funding the experiments of the young inventor. Another myth brought up at this panel was about a fur coat given to Marconi by the Russian inventor Alexander Popov. It might be true that Marconi received a coat from a person named Popov (a quite common name, meaning "the son of a priest"), but it seems quite unreasonable that this was A. S. Popov.

These and many other (and more scientific and historic) events were discussed at the "historic" History Sessions held at GLOBECOM 2008 in New Orleans. These were the first ever sessions on history at a major ComSoc conference, as noted by ComSoc President Doug Zuckerman. The first session opened with an invited lecture by Steve Weinstein on the origins of OFDM, followed by a presentation by former ComSoc President Paul Green and additional papers. The second session included a panel discussion, chaired by another former ComSoc President, Mischa Schwartz, with participants discussing the contributions of Tesla, Branly, Fessenden, Popov, Marconi, and others to the development of radio communications.

The History Sessions at GLOBECOM '08 drew up to 80 attendees, and many participants showed interest in the activities of the newly formed Communication History Committee, which intends to form a ComSoc History network and publish a History Column in IEEE Communications Magazine. History sessions will continue, and the sessions for GLOBECOM 2009 are already planned, with Jerry Hayes and Jacob Baal-Schem as co-chairs and a scheduled opening lecture on the development of the ALOHA communication protocol.
IEEE Global Communications Newsletter
www.comsoc.org/pubs/gcn

STEFANO BREGNI, Editor
Politecnico di Milano - Dept. of Electronics and Information
Piazza Leonardo da Vinci 32, 20133 MILANO MI, Italy
Ph.: +39-02-2399.3503 - Fax: +39-02-2399.3413
Email: [email protected], [email protected]

IEEE COMMUNICATIONS SOCIETY
MARK KAROL, VICE-PRESIDENT CONFERENCES
BYEONG GI LEE, VICE-PRESIDENT MEMBER RELATIONS
NELSON FONSECA, DIRECTOR OF LA REGION
GABE JAKOBSON, DIRECTOR OF NA REGION
TARIQ DURRANI, DIRECTOR OF EAME REGION
ZHISHENG NIU, DIRECTOR OF AP REGION
ROBERTO SARACCO, DIRECTOR OF SISTER AND RELATED SOCIETIES
BIHTEL 2008/continued from page 3

The Symposium was sponsored by BH Telecom, Iskratel, Recro-net, Ericsson, IBM, Energoinvest, HT-Eronet, and Hermes SoftLab. Four of these sponsors (IBM, BH Telecom, Iskratel, and Recro-net) held corporate presentations, where attendees could get to know their business activities better. During the Symposium, Iskratel donated a new laboratory for testing triple play services, worth €40,000, to students of the Faculty of Electrical Engineering. The laboratory will be used by high school and university students from Bosnia and Herzegovina for the development and testing of their own ideas and solutions in the domain of telecommunications and business information systems.

About 200 attendees and visitors enjoyed the symposium as well as summer-like days, quite extraordinary for this time of year. But nobody complained about the weather; on the contrary, everybody did their best to get the most out of their free time and enjoy the sunny days and all the beauties of the host city. Sarajevo is famous for its traditional architectural, cultural, and religious diversity, coexisting for centuries, and for its cuisine: delicious traditional Bosnian dishes that leave no one indifferent. The next BIHTEL will be held in 2010, and we welcome you all to join us then.
REGIONAL CORRESPONDENTS WHO CONTRIBUTED TO THIS ISSUE
THOMAS MICHAEL BOHNERT, SWITZERLAND ([email protected])
JOEL RODRIGUES, PORTUGAL ([email protected])
KERIM KALAMUJIC, BOSNIA HERZEGOVINA ([email protected])
MARKO JAGODIC, SLOVENIA ([email protected])
JACOB BAAL-SCHEM, REG. 8 HISTORY ACTIVITIES COORDINATOR, ISRAEL ([email protected])
PORTUGUESE CHAPTER/continued from page 2

filling the room at Ordem dos Engenheiros. In 2009 the ComSoc Portuguese Chapter will continue pursuing the same objectives, namely participation in Distinguished Lecturer Tours and the organization of monthly seminars with speakers from industry and academia (planning for which is already at an advanced stage).
IEEE COMMUNICATIONS MAGAZINE CALL FOR PAPERS/FEATURE TOPIC

MILITARY COMMUNICATIONS

The design and fielding of military communications systems capable of enabling Network Centric Operations remains one of the greatest challenges facing military institutions today. The challenges are particularly acute in the area of tactical wireless communications, which are typically characterized by user mobility and unpredictable transmission channels. Together with demanding security requirements and the need for interoperability among disparate systems, including legacy systems and those of allies, these characteristics make it difficult to leverage commercial wireless technology to meet the needs and expectations of military users.

Some of these difficulties can be traced back to a lack of fundamental theoretical underpinnings for communications networks, a problem that is widely recognized today and beginning to receive significant attention in academia and industry alike. While network theoretic frameworks for wireless systems remain elusive, there is a great deal of work aimed at improving the design of wireless systems, some of which includes the adoption of new design paradigms. Cross-layer design remains a popular approach to minimizing the inefficiencies associated with strictly layered design paradigms in highly dynamic wireless environments. Researchers are also focusing on exploiting the inherent broadcast properties of wireless networks through the development of cooperative diversity protocols, though practical applicability in highly mobile environments remains a question. Network coding is another promising, and increasingly popular, approach aimed at improving the throughput of multihop wireless networks. Other areas of research, including network security, mobile routing, dynamic resource allocation, management, and media access control, continue to play an important role in the struggle to improve the feasibility of truly network centric operations.

This Feature Topic will bring together the most recent advances in communications and networking technologies applicable to the area of network centric military communications. Papers are solicited that present research, development, experimentation, test and measurement, and practical deployment activities. Suggested areas include (but are not restricted to) the following subject categories:
•Network science for military communications
•Cognitive radio and cognitive networking
•Peer-to-peer networking
•Cross-layer design and optimization
•Network management and control in tactical networks
•Network characterization and modeling
•Protocol efficiency on bandwidth-constrained links
•Mobility management
•Signaling and quality of service provisioning
•Mobile routing
•Network security and information assurance
•Cooperative diversity
•Free-space optical communications
•Network coding
•Dynamic spectrum allocation and management
•Airborne networks
•Space-based networks
•Content-oriented networking
SUBMISSION

Articles should be tutorial in nature and of direct interest to those engineering military communication systems. They should be written in a style comprehensible to readers outside the specialty of the article. Mathematical equations should not be used (in justified cases up to three simple equations may be allowed, provided you have the consent of the Guest Editor; the inclusion of more than three equations requires permission from the Editor-in-Chief). Articles should not exceed 4500 words. Figures and tables should be limited to a combined total of six. Complete guidelines for prospective authors can be found at http://www.comsoc.org/pubs/commag/sub_guidelines.html.

Please send PDF (preferred) or MS Word formatted papers by April 1, 2009 to Manuscript Central (http://commag-ieee.manuscriptcentral.com), register or log in, and go to the Author Center. Follow the instructions there, and select the topic "October 2009/Military Communications."
SCHEDULE
Submission Deadline: April 1, 2009
Notification of Acceptance: July 1, 2009
Final Manuscript Due: August 1, 2009
Publication Date: October 2009
GUEST EDITORS

Torleiv Maseng
Forsvarets Forskningsinstitutt
Email: [email protected]

Randall J. Landry
The MITRE Corporation
Email: [email protected]

Ken Young
Telcordia Technologies
Email: [email protected]
PRODUCT SPOTLIGHTS

Emerson Network Power
Emerson Network Power Connectivity Solutions' Trompeter line announces newly patented miniaturized BNC technology, which delivers true 75 Ω impedance throughout the DS3 telco frequency range with outstanding return loss performance. www.trompeter.com
AMCC
AMCC is launching the new Pemaquid/S19258, a XAUI to XFI 10G Framer/Mapper/PHY device for 10GbE, 10G FC, WIS, and OTU2 networks. Via its rich suite of 10GbE over WAN/OTN mapping modes, integrated 10G PHY XFI interface, and provisional G.709 GFEC/EFEC features, it provides a seamless interface to XFP and SFP+ optical modules. www.amcc.com

RSoft Design Group
RSoft announced the upcoming release of OptSim 5.1, the award-winning optical communication system design tool. Among the new features are advanced models and design examples of interferometric fiber optic gyro (I-FOG) devices, reflective semiconductor optical amplifier (R-SOA) systems, polarization multiplexed QPSK (PM-QPSK) systems, and electrical circuit simulation capabilities. In addition, sophisticated BER estimation techniques make OptSim 5.1 the ideal framework to design next-generation PON and 100 Gb/s Ethernet networks. www.rsoftdesign.com
Digital Lightwave
The NIC Platform, with battery power option, is the industry's leading solution for telecom/datacom testing. The NIC 40G is the smallest/lightest 40/43 Gb/s test set; the NIC Plus can be equipped with multiple modules for an integrated solution including all SONET/SDH/OTN rates, Ethernet, Fibre Channel, PDH/T-Carrier, and Optical Spectrum Analyzer (OSA). www.lightwave.com
Accumold
Accumold® is a high-tech manufacturer of precision micro, small, and lead frame injection molded plastic components. Utilizing processes developed from our Micro-Mold® technology, we design, build, and produce unique molds and parts efficiently for markets that include Micro Electronics, Automotive, Micro Optics, Medical, and Military Applications. www.accu-mold.com
Discovery Semiconductor
With the ability to incorporate any of our 30+ devices, Discovery's Lab Buddy is the most versatile O/E converter on the market! The Lab Buddy provides device specific voltages, factory set current limits, and appropriately sequenced biasing for Discovery's family of optical receivers in a convenient bench top package. www.chipsat.com
Anritsu
Anritsu's MS4640A VectorStar™ VNA offers a new performance benchmark for S-parameter measurements of RF, Microwave, and Millimeter-wave devices. The MS4640A delivers best-in-class frequency coverage of 70 kHz to 70 GHz, dynamic range of 100 dB @ 70 GHz, and measurement speed of 20 μs/point. www.anritsu.us/IEEE
GL Communications
GL provides a range of compact, handheld T1, E1, T3, and E3 testers, which can test a wide variety of communications facilities and equipment, including asynchronous and synchronous links, modems, multiplexers, CSU/DSUs, and more. The products are LinkTest Single, LinkTest Single+, LinkTest Dual, and LinkSim. The supported data rates range from 50 b/s to 50 Mb/s, with single or dual interfaces for data interfaces such as RS-232, RS-422, V.35, and X.21, as well as T1 and E1, and T3 and E3. With GL's LinkSim™, users can simulate a variety of communications links, inserting delays and errors to determine what effects these impairments will have on the network. http://www.gl.com/HandPortableUnits
Vicor
The VME450 single-slot VME power supply (filtered 28 Vdc, four outputs: 3.3, 5, ±12 V; 550 W) is a military COTS solution compliant with the vibration requirements of MIL-STD-810F and EMI per MIL-STD-461E. In contrast with VME power supplies using conventional technology, the one-slot VME450 provides users with higher efficiency (85%), lower weight (2.4 pounds), and higher power (up to 550 W). www.vicorpower.com
Telogy
Telogy is a leading renter of test and measurement equipment. The company has superior pricing and availability to meet customers' rental needs. Renting is an ideal solution for companies that need additional equipment on short notice or for short periods of time. www.TELOGYLLC.com
u2t Photonics
u2t's latest ultra-compact receiver family is optimized for client-side applications such as small form factor transponders. With an AC-coupled differential interface, low power consumption, and performance optimized for high optical input power, its MSA-compatible package enables subsystem vendors to use multiple sources. http://www.u2t.de
GigOptix
GigOptix is a leading manufacturer of electronic engines for the optically connected digital world, offering the widest selection of high-speed optical PMD ICs, with a portfolio including modulator drivers, laser drivers, and TIAs for telecom, datacom, InfiniBand, and consumer optical systems, from 3.125G to 100G, covering all laser technologies, serial and parallel. www.gigoptix.com
ADVERTISERS' INDEX

Company .......................................................... Page
Accumold .......................................................... 39
Adva Optical Networking ................................. OCS Cover 4
Agilent Technologies ........................................ 29
AMCC ............................................................... 31
Anritsu Company .............................................. Insert
Cambridge University Press ............................. 17
Cisco Systems ................................................... Cover 2
Conec Corporation ............................................ 23
Crystek Corporation ......................................... 12
Digital Lightwave ............................................. 37
Digital Voice Systems ....................................... 20
Discovery Semiconductor ................................. 27
Elsevier ............................................................. Insert
Emerson Network Power .................................. 11
GigOptix ........................................................... 32
GL Communications ......................................... 13
John Wiley & Sons, Ltd. ................................... Cover 3
M/A-Com Technology Solutions ...................... 79
Narda Microwave East ..................................... 41
OFC/NFOEC 2009 ............................................ OCS Cover 3
Optiwave Corporation ...................................... 33
Pico Electronics ................................................ 22
Remcom ............................................................ 1
RSOFT Design Group ....................................... OCS Cover 2
Samsung ............................................................ Cover 4
Seikoh Giken ..................................................... 40
Telogy ................................................................ 3
The Math Works ............................................... 5
u2t Photonics ..................................................... 35

ADVERTISING SALES OFFICES

Closing date for space reservation: 1st of the month prior to issue date

NATIONAL SALES OFFICE
Eric L. Levine, Advertising Sales Manager
IEEE Communications Magazine
3 Park Avenue, 17th Floor, New York, NY 10016
Tel: 212-705-8920, Fax: 212-705-8999
e-mail: [email protected]

PACIFIC NORTHWEST and COLORADO
Jim Olsen
20660 NE Crestview Dr., Fairview, OR 97024
Tel: 503-257-3117, Fax: 503-257-3172
[email protected]

SOUTHERN CALIFORNIA
Patrick Jagendorf
7202 S. Marina Pacifica Drive, Long Beach, CA 90803
Tel: 562-795-9134, Fax: 562-598-8242
[email protected]

NORTHERN CALIFORNIA
George Roman
4779 Luna Ridge Court, Las Vegas, NV 89129
Tel: 702-515-7247, Fax: 702-515-7248, Cell: 702-280-1158
[email protected]

SOUTHEAST
Scott Rickles
560 Jacaranda Court, Alpharetta, GA 30022
Tel: 770-664-4567, Fax: 770-740-1399
[email protected]

EUROPE
Rachel DiSanto, Huson International Media
Cambridge House, Gogmore Lane, Chertsey, Surrey, KT16 9AP, England
Tel: +44 1428 608150, Fax: +44 1932 564998
Email: [email protected]
Why did Samsung develop WiMAX?
We like speed.
Samsung has taken a leading role in WiMAX since the beginning. We were on the board of the WiMAX Forum and helped develop the IEEE 802.16e standard. We've been involved with everything from infrastructure to chip design and devices. We launched the first commercial Mobile WiMAX network in the world, then in the United States. Today, we're working hard to bring WiMAX technology to everyone. We're working fast, too. For the latest information on Samsung and WiMAX, go to www.samsung.com/wss.
©2009 Samsung Telecommunications America, LLC (“Samsung”). All rights reserved. All product and brand names are trademarks or registered trademarks of their respective companies.