CARRIER ETHERNET Providing the Need for Speed
AU6039.indb 1
2/13/08 9:19:35 AM
OTHER TELECOMMUNICATIONS BOOKS FROM AUERBACH
Active and Programmable Networks for Adaptive Architectures and Services Syed Asad Hussain ISBN: 0-8493-8214-9
Ad Hoc Mobile Wireless Networks: Principles, Protocols and Applications Subir Kumar Sarkar, T.G. Basavaraju, and C. Puttamadappa ISBN: 1-4200-6221-2
Introduction to Mobile Communications: Technology, Services, Markets Tony Wakefield, Dave McNally, David Bowler, and Alan Mayne ISBN: 1-4200-4653-5
Millimeter Wave Technology in Wireless PAN, LAN, and MAN Shao-Qiu Xiao, Ming-Tuo Zhou, and Yan Zhang ISBN: 0-8493-8227-0
Comprehensive Glossary of Telecom Abbreviations and Acronyms Ali Akbar Arabi ISBN: 1-4200-5866-5
Mobile WiMAX: Toward Broadband Wireless Metropolitan Area Networks Yan Zhang and Hsiao-Hwa Chen ISBN: 0-8493-2624-9
Contemporary Coding Techniques and Applications for Mobile Communications Onur Osman and Osman Nuri Ucan ISBN: 1-4200-5461-9
Optical Wireless Communications: IR for Wireless Connectivity Roberto Ramirez-Iniguez, Sevia M. Idrus, and Ziran Sun ISBN: 0-8493-7209-7
Context-Aware Pervasive Systems: Architectures for a New Breed of Applications Seng Loke ISBN: 0-8493-7255-0
Performance Optimization of Digital Communications Systems Vladimir Mitlin ISBN: 0-8493-6896-0
Data-driven Block Ciphers for Fast Telecommunication Systems Nikolai Moldovyan and Alexander A. Moldovyan ISBN: 1-4200-5411-2
Physical Principles of Wireless Communications Victor L. Granatstein ISBN: 0-8493-3259-1
Distributed Antenna Systems: Open Architecture for Future Wireless Communications Honglin Hu, Yan Zhang, and Jijun Luo ISBN: 1-4200-4288-2
Principles of Mobile Computing and Communications Mazliza Othman ISBN: 1-4200-6158-5
Encyclopedia of Wireless and Mobile Communications Borko Furht ISBN: 1-4200-4326-9
Resource, Mobility, and Security Management in Wireless Networks and Mobile Communications Yan Zhang, Honglin Hu, and Masayuki Fujise ISBN: 0-8493-8036-7
Handbook of Mobile Broadcasting: DVB-H, DMB, ISDB-T, and MediaFLO Borko Furht and Syed A. Ahson ISBN: 1-4200-5386-8
Security in Wireless Mesh Networks Yan Zhang, Jun Zheng, and Honglin Hu ISBN: 0-8493-8250-5
The Handbook of Mobile Middleware Paolo Bellavista and Antonio Corradi ISBN: 0-8493-3833-6
Wireless Ad Hoc Networking: Personal-Area, Local-Area, and the Sensory-Area Networks Shih-Lin Wu and Yu-Chee Tseng ISBN: 0-8493-9254-3
The Internet of Things: From RFID to the Next-Generation Pervasive Networked Systems Lu Yan, Yan Zhang, Laurence T. Yang, and Huansheng Ning ISBN: 1-4200-5281-0
Wireless Mesh Networking: Architectures, Protocols and Standards Yan Zhang, Jijun Luo, and Honglin Hu ISBN: 0-8493-7399-9
AUERBACH PUBLICATIONS
www.auerbach-publications.com
To Order Call: 1-800-272-7737 • Fax: 1-800-374-3401
E-mail: [email protected]
CARRIER ETHERNET Providing the Need for Speed
GILBERT HELD
Boca Raton London New York
CRC Press is an imprint of the Taylor & Francis Group, an informa business
AN AUERBACH BOOK
Auerbach Publications
Taylor & Francis Group
6000 Broken Sound Parkway NW, Suite 300
Boca Raton, FL 33487-2742

© 2008 by Taylor & Francis Group, LLC
Auerbach is an imprint of Taylor & Francis Group, an Informa business

No claim to original U.S. Government works
Printed in the United States of America on acid-free paper
10 9 8 7 6 5 4 3 2 1

International Standard Book Number-13: 978-1-4200-6039-3 (Hardcover)

This book contains information obtained from authentic and highly regarded sources. Reasonable efforts have been made to publish reliable data and information, but the author and publisher cannot assume responsibility for the validity of all materials or the consequences of their use. The authors and publishers have attempted to trace the copyright holders of all material reproduced in this publication and apologize to copyright holders if permission to publish in this form has not been obtained. If any copyright material has not been acknowledged, please write and let us know so we may rectify in any future reprint.

Except as permitted under U.S. Copyright Law, no part of this book may be reprinted, reproduced, transmitted, or utilized in any form by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying, microfilming, and recording, or in any information storage or retrieval system, without written permission from the publishers.

For permission to photocopy or use material electronically from this work, please access www.copyright.com (http://www.copyright.com/) or contact the Copyright Clearance Center, Inc. (CCC), 222 Rosewood Drive, Danvers, MA 01923, 978-750-8400. CCC is a not-for-profit organization that provides licenses and registration for a variety of users. For organizations that have been granted a photocopy license by the CCC, a separate system of payment has been arranged.
Trademark Notice: Product or corporate names may be trademarks or registered trademarks, and are used only for identification and explanation without intent to infringe.

Library of Congress Cataloging-in-Publication Data

Held, Gilbert, 1943-
Carrier Ethernet : providing the need for speed / Gilbert Held.
p. cm.
ISBN 978-1-4200-6039-3 (hardback : alk. paper)
1. Ethernet (Local area network system) 2. Metropolitan area networks (Computer networks) I. Title.
TK5105.8.E83H448 2008
004.6'8--dc22
2007049111
Visit the Taylor & Francis Web site at http://www.taylorandfrancis.com and the Auerbach Web site at http://www.auerbach‑publications.com
Dedication One of the advantages associated with living in a small town for almost 30 years is the commute to work. After having lived in New York City and the suburbs of Washington, D.C., moving to Macon, Georgia, provided me with over ten hours per week of additional time that I could devote to writing manuscripts and preparing presentations. Over the past 30 years that I have lived in Macon, I was fortunate to be able to teach over a thousand graduate students locally and perhaps ten thousand or more students who came to various seminars I taught throughout the United States, Europe, Israel, and South America. Many of those students were highly inquisitive and their questions resulted in a mental exercise for this old professor as well as second, third, and even fourth editions of some of the books I authored. In recognition of the students who made teaching truly enjoyable, this book is dedicated.
Contents

Preface............................................................................................ xv
Acknowledgments........................................................................xvii
About the Author..........................................................................xix
1
Introduction to Carrier Ethernet............................................................1 Defining Carrier Ethernet.............................................................................1 Overview..................................................................................................2 Rationale..................................................................................................2 Expanded Use of Ethernet LANs.........................................................2 Frame Compatibility...........................................................................3 Low Cost.............................................................................................3 High Network Access Speeds..............................................................4 Mass Market for Technology...............................................................4 Business Continuity.............................................................................4 Enabling Technologies..................................................................................5 Copper and Fiber Infrastructure..............................................................5 ADSL..................................................................................................6 ADSL2 and ADSL2+..........................................................................6 SHDSL................................................................................................8 VDSL..................................................................................................8 VPNs.......................................................................................................9 Types of VPNs.....................................................................................9 Protocols..............................................................................................9 Service Provider Provisioned VPNs...................................................10 
VLANs..................................................................................11 Broadcast Domain Reduction............................................12 Facilitate Subnet Creation..................................................12 Reduce Hardware Requirements.......................................13 Traffic Control...................................................................13 Types of VLANs................................................................13
MPLS.....................................................................................................13 Overview...........................................................................................14 Architecture.......................................................................................14 Operation..........................................................................................14 Applications................................................................................................17 Interconnecting Distributed Offices.......................................................17 Providing Real-Time Backup.................................................................17 Voice, Video, and Data Support.............................................................17 Challenges to Carrier Ethernet...................................................................18 Total Cost of Operation.........................................................................18 Packet Overhead....................................................................................18 Management and Troubleshooting.........................................................19 Reliability...............................................................................................19 Security..................................................................................................20 QoS........................................................................................................20
2
Data Networking Concepts...................................................................21 Transport Technologies...............................................................................21 LANs.....................................................................................................22 WANs....................................................................................................22 Characteristics...................................................................................23 Wireless.............................................................................................23 Data Protocols............................................................................................25 Ethernet.................................................................................................25 Evolution...........................................................................................25 IEEE Involvement.............................................................................26 Network Interfaces.....................................................................................29 Network Equipment...................................................................................29 Network Interface Cards........................................................................30 Hubs......................................................................................................30 Operation..........................................................................................31 Passive versus Intelligent Hubs...........................................................31 Switches.................................................................................................32 Operation..........................................................................................32 Advantages........................................................................................32 
Evolution...........................................................................................33 Routers...................................................................................................33 Operation......................................................................................... 34 Advantages....................................................................................... 34 Capabilities....................................................................................... 34 Firewall..................................................................................................35 Placement..........................................................................................35 Operation..........................................................................................36
VPN Appliances.....................................................................36 Operation..........................................................................36 Advantages........................................................................37 Combining Functions........................................................37 Network Facilities.......................................................................37 T1..........................................................................................37 The DS0 Time Slot...........................................................38 T-Carrier Hierarchy...........................................................38 Channelized versus Non-Channelized...............................38 SONET.................................................................................39 Optical Carrier Levels........................................................39 Framing.............................................................................41 Utilization.........................................................................43
3
The Flavors of Ethernet.........................................................................45 Metcalfe’s Original Design.........................................................................45 Bus-Based Network Structure............................................................... 46 The DIX Standard................................................................................. 46 DIX Version 2.0............................................................................... 46 IEEE 802.3 Standardization.......................................................................48 Division of Effort...................................................................................48 Physical Layer Effort..........................................................................48 Network Layer Effort.........................................................................51 Data Link Layer.................................................................................51 IEEE Changes from DIX.......................................................................53 802.3 Frame Format..........................................................................53 Sub-Network Access Protocol................................................................54 The CSMA/CD Protocol.......................................................................54 Frame Size..............................................................................................54 Early Ethernet.............................................................................................55 The 10 Mbps Ethernet Family................................................................55 10BASE-5..........................................................................................55 10BASE-2..........................................................................................56 10BROAD-36...................................................................................56 
10BASE-T.........................................................................................56 Network Characteristics....................................................................59 5-4-3 Rule.........................................................................................59 FOIRL and 10BASE-F......................................................................59 Fast Ethernet............................................................................................. 60 100BASE-T........................................................................................... 60 Layer Subdivision............................................................................. 60 100BASE-TX.........................................................................................62 Network Configuration.....................................................................63
Coding..............................................................................63 Repeaters...........................................................................63 100BASE-T4......................................................................... 64 100BASE-T4 Repeater Hub...............................................65 100BASE-T2......................................................................... 66 Auto-Negotiation.................................................................. 66 NLP Pulses....................................................................... 66 FLP Pulses.........................................................................67 Parallel Detection Function...............................................67 The Base Page....................................................................67 The Next Page Function....................................................69 Extended Next Page Function...........................................71 Priorities............................................................................73 Option Considerations......................................................73 Fiber.......................................................................................74 100BASE-FX.....................................................................74 100BASE-SX.....................................................................75 100BASE-BX.....................................................................75 Gigabit Ethernet.........................................................................75 Fiber-Based Gigabit Ethernet.................................................76
1000BASE-SX...................................................................76 1000BASE-LX...................................................................76 Fiber Auto-Negotiation.....................................................76 1000BASE-ZX and LH.....................................................78 Copper-Based Gigabit Ethernet..............................................78 1000BASE-CX..................................................................78 1000BASE-T.....................................................................78 Summary...............................................................................79 10 Gigabit Ethernet................................................................... 80 GbE versus 10 GbE............................................................... 80 Layers and Interfaces..............................................................81 XGMII..............................................................................81 XAUI.................................................................................82 XGXS................................................................................82 MAC.................................................................................83 PCS...................................................................................83 PMA..................................................................................83 PMD.................................................................................83 WAN Physical Layer..........................................................83 10 GbE over Copper..............................................................84 10GBASE-CX4.................................................................85 10GBASE-T......................................................................85
Ethernet in the First Mile...........................................................................87 Architectures..........................................................................................87 Physical Layer Interfaces....................................................................88 Applications...........................................................................................89 Advantages........................................................................................89 Use of Dual Fibers............................................................................ 90 Use of Single Fibers...........................................................................91 EPON...............................................................................................91 MPCP...............................................................................................94
4
Frame Formats.......................................................................99 Basic Ethernet...........................................................................100 The Ethernet II/DIX Frame.................................................100 Preamble Field.................................................................100 Destination Address Field................................................100 Source Address Field........................................................101 Type Field........................................................................101 Data Field........................................................................102 Frame Check Sequence Field...........................................102 The 802.3 Frame..................................................................103 Length Field....................................................................103 Preamble Field Modification............................................103 Type/Length Field Values................................................103 The 802.2 Header................................................................104 Subnetwork Access Protocol............................................104 LLC Header Operation...................................................105 The SNAP Frame.............................................................105 IPX over Ethernet............................................................106 Full Duplex and the Pause Frame..........................................107 Advantages...........................................................................108 Flow Control........................................................................108 PAUSE Frame..................................................................108 Overview.........................................................................109 Frame Fields....................................................................109 VLAN Tagging........................................................................109 The 802.1Q Standard........................................................... 110 Advantages........................................................................... 110 Frame Format....................................................................... 110 SNAP Frames....................................................................... 111 Frame Determination........................................................... 111 Fast Ethernet............................................................................ 111 4B5B Coding.......................................................................112
Delimiters............................................................................................112 Interframe Gap.................................................................................... 114 Gigabit Ethernet....................................................................................... 114 Carrier Extension................................................................................. 114 Half-Duplex Use.............................................................................. 115 Frame Bursting.................................................................................... 115 Jumbo Frames...................................................................................... 116 Operation........................................................................................ 116 Length Rationale............................................................................. 116 Advantages...................................................................................... 117 Problems and Solutions.................................................................... 117 Performance.............................................................................................. 118 Basic Ethernet...................................................................................... 118 SNAP Frames....................................................................................... 119 Gigabit Ethernet................................................................................... 119 Frame Rates.........................................................................................120 Mathematical Oddities....................................................................121 Frame Rate Computations...............................................................122 Gigabit Constraints.........................................................................124
5
LAN Switches......................................................................127 Bridge Operations.....................................................................127 Transparent and Translating Bridges....................................127 Plug-and-Play Operation.................................................128 Bridge Operation.............................................................128 Intelligent Switching Hubs...................................................131 Basic Components...........................................................131 Buffer Memory................................................................131 Delay Times....................................................................131 Parallel Switching.................................................................133 Switch Operations....................................................................133 Switching Techniques...........................................................134 Cross-Point Switching.....................................................134 Operation........................................................................134 Latency............................................................................135 Store-and-Forward...............................................................135 Filtering Capability..........................................................135 Operation........................................................................135 Delay Time......................................................................136 Hybrid.................................................................................137 Switch Port Address Support................................................137 Port-Based Switching.......................................................137
Segment-Based Switching................................................................138 Applications.....................................................................................139 Considering Port Capability.................................................................140 Basic Switching................................................................................141 Multi-Tier Networking....................................................................141 Interconnecting Dispersed Offices...................................................142 Virtual LANs............................................................................................143 Characteristics......................................................................................143 Construction Basics.............................................................................143 Implicit versus Explicit Tagging...........................................................144 Using Implicit Tagging........................................................................144 Explicit Tagging................................................................................... 145 The IEEE 802.1Q Standard............................................................. 145 Vendor Implementation................................................................... 151
6   Carrier Ethernet Services............................................157
    Overview.............................................................157
        The Metro Ethernet Forum.........................................157
        Requirements for Use.............................................158
            VLAN Tagging.................................................158
            The 802.1P (Priority) Standard...............................160
            Latency Considerations.......................................160
            Fiber Connectivity...........................................163
    Transporting Ethernet in a Service Provider Network..................164
        Operating over Other Transports..................................164
        Comparison to Other Layer 2 Protocols............................165
        Ethernet Topologies..............................................165
        Carrier Ethernet Service Types...................................165
            E-LINE.......................................................166
            E-LAN........................................................166
            E-TREE.......................................................167
        Encapsulation Techniques.........................................167
            VLAN Stacking................................................168
7   Service Level Agreements and Quality of Service......................175
    The Service Level Agreement..........................................176
        Metrics..........................................................176
            Availability.................................................176
            Latency......................................................181
            Jitter.......................................................181
            MTTR.........................................................182
            Installation Time............................................182
            Bandwidth Provisioning.......................................183
            Packet Loss..................................................183
            Guaranteed Bandwidth.........................................183
    SLA Problems.........................................................183
    OAM Overview.........................................................184
        OAM and Ethernet.................................................184
        Ethernet OAMs....................................................184
            Functions....................................................185
            Testing......................................................185
            Link-Layer OAM...............................................185
            Service OAM..................................................187
    Quality of Service Overview..........................................187
        Soft versus Hard QoS.............................................188
            Soft QoS.....................................................188
            Hard QoS.....................................................189
        QoS Actions......................................................191
            Classification...............................................191
            Policing.....................................................191
            Queuing......................................................192
            Scheduling...................................................192
            Cisco ML-Series Card.........................................194
Index....................................................................197
Numbers Index............................................................203
Preface

Similar to a fine watch, the technology behind the original Ethernet specification continues to move forward. From a 10-Mbps transmission technology, Ethernet has been enhanced several times over the past three decades. From the original 10-Mbps coaxial-cable, bus-based technology, Ethernet evolved first into a 10-Mbps twisted-wire, hub-based technology, followed shortly thereafter by Fast Ethernet, which extended the data rate to 100 Mbps. By the late 1990s Gigabit Ethernet made its appearance, followed in turn by 10 Gigabit Ethernet. Today, work is progressing on extending the data rate of Ethernet further up the gigabit range.

Although a significant portion of the preceding Ethernet technologies were oriented toward moving data over local area networks, both Gigabit and 10 Gigabit Ethernet include the ability to transmit data over optical fiber at long distances. This capability makes it easy for customers to interconnect buildings in a campus environment, and communications carriers gradually began deploying the technology into their metropolitan area networks as a low-cost overlay network to provide customers with inter-site connectivity. Originally referred to as Metropolitan Area Ethernet (MAE) and today primarily referred to as Carrier Ethernet, this technology represents the focus of this book; it enables communications carriers to provide a transmission service that can significantly enhance the data rate between customer sites.

Because new technology is rarely an island, we will discuss the major components of the technology behind Carrier Ethernet prior to focusing our attention upon a detailed investigation of Carrier Ethernet services. Thus, in this book we will examine data networking concepts, the differences between the so-called "flavors" of Ethernet, the Ethernet frame, and the manner in which switches operate. In addition, we will refresh our knowledge of virtual LANs (VLANs), virtual private networks (VPNs), Multi-Protocol Label Switching (MPLS), and other technologies used to provide a Carrier Ethernet service tailored to the requirements of subscribers. Using this information as a base will provide readers with a firm background in Ethernet and its related technologies, allowing them to obtain maximum benefit from the portion of this book that covers Carrier Ethernet technology in detail.
Once we complete our discussion of the technology associated with Carrier Ethernet services, we will conclude this book by turning our attention to another important topic: Service Level Agreements. Because Carrier Ethernet represents a service, we need to understand the structure of a Service Level Agreement, as it represents a contract that will enhance our organization's use of this evolving service. Because Carrier Ethernet technology deals with such important issues as obtaining a quality of service for the movement of voice and real-time video and the creation of VLANs to facilitate the movement of data, we will also discuss each of these important topics in this book.

Thus, the reader of this book will be exposed to both the different versions of Ethernet and the technologies that have led many organizations to rapidly adopt Carrier Ethernet as a mechanism to interconnect separate locations in a manner that allows high data-transfer rates at a reasonable cost.

As a professional author who has spent approximately 30 years working with different flavors of Ethernet technology, I welcome reader feedback. Please feel free to write to me in care of my publisher, whose address is on the jacket of this book, or send an e-mail to [email protected]. Because I periodically travel overseas, it may be a week or more before I can respond to specific items in the book. Please also feel free to provide your comments concerning both the material in this book and topics you may want to see in a new edition. Although I try my best to literally place my feet in the shoes of the reader to determine what may be of interest, I am human and make mistakes. Thus, let me know if I omitted a topic you feel should be included in this book or if I placed too much emphasis on another topic. Your comments will be greatly appreciated.
Acknowledgments

As the author of many books, I realized long ago that the publishing effort depends upon the work of a considerable number of people. First, an author's idea concerning a topic must appeal to a publisher who is typically inundated with proposals. Once again, I am indebted to Rich O'Hanley at Auerbach Publications for backing my proposal to author a book focused upon a new type of Ethernet communications.

As an old-fashioned author who periodically travels, I like to use the original word processor, pen and paper, when preparing a draft manuscript. Doing so ensures that I will not run out of battery power or face the difficulty of attempting to plug a laptop computer into some of the really weird electric sockets I have encountered while traveling the globe. Unfortunately, a publisher expects a typed manuscript, and Auerbach Publications is no exception. Thus, I would be remiss if I did not acknowledge the fine efforts of my wife, Beverly J. Held, in turning my longhand draft manuscript into the polished and professionally typed final manuscript that resulted in the book you are now reading.

Once again, I would like to acknowledge the efforts of the Taylor & Francis/Auerbach Publications employees in Boca Raton, Florida. From designing the cover through the editing and author queries, they double-checked this author's submission and ensured that this book was ready for typesetting, printing, and binding. To all of you involved in this process, a sincere thanks.
About the Author

Gilbert Held is an internationally recognized author and lecturer who specializes in the applications of computer and communications technology. He is a frequent lecturer and conducts seminars on topics such as LAN/WAN internetworking, data compression, and PC hardware and software. Held is the author of more than 40 books on computers and communications technology and has won several awards for his technical excellence in writing.
Chapter 1
Introduction to Carrier Ethernet

Similar to other books written by this author, the purpose of an introductory chapter is to provide readers with general information about the topic of the book. This chapter is no exception: we will begin our examination of Carrier Ethernet with a definition. Once this is accomplished, we will discuss the rationale for this relatively new technology, briefly tour several key aspects of the technology, and then discuss some of the applications that can benefit from the use of Carrier Ethernet. Because a coin has two sides, this author would be remiss if he did not point out some of the challenges facing this evolving technology. Thus, in concluding this chapter we will turn our attention to some of the challenges faced by communications carriers offering a Carrier Ethernet service as well as by end users looking to utilize this service.
Defining Carrier Ethernet

Carrier Ethernet can be simply defined as "a high-speed Ethernet transport mechanism for metropolitan area networking." Because of this, the terms "Carrier Ethernet" and "Metropolitan Area Ethernet" are often used synonymously; however, in this book we will refer to the technology as Carrier Ethernet because it is primarily a communications carrier service offering, although it is possible for an end user to install a Carrier Ethernet infrastructure in a campus environment.
Overview

Carrier Ethernet defines the use of Ethernet frames as a transport facility, enabling such frames to carry IP packets or even ATM (Asynchronous Transfer Mode) cells. Because Ethernet is scalable, with 10 Gigabit Ethernet now many years old and higher data rates on the standards horizon, the technology can be viewed as presenting a challenge to the traditional Synchronous Optical Network (SONET) telephony infrastructure. However, because SONET rings are designed to provide near-immediate recovery in the event of a cable cut or another type of communications failure, it is this author's opinion that Carrier Ethernet will complement SONET and in many cases be carried via a SONET connection between communications carrier offices.
Rationale

The advent and expansion of the use of Carrier Ethernet results from a series of inter-related issues. Those issues, which are listed in Table 1.1, will be briefly discussed in this section.
Expanded Use of Ethernet LANs

The so-called "LAN wars" of the latter part of the 1980s through the mid-1990s are now history. During that period Ethernet battled IBM's Token-Ring for LAN supremacy. Similar to the VHS-versus-Beta videotape recorder battle a decade earlier, one technology survived while the other was relegated to history. Ethernet won the LAN wars many years ago. Although there are still some universities, research laboratories, government agencies, and commercial organizations that operate Token-Ring networks, their days are numbered. Due to the increase in Internet access and the use of graphics in e-mail, the relatively low data rate of a 16-Mbps Token-Ring network is not sufficient for most modern communications networks.

Table 1.1 Rationale for Carrier Ethernet
    Expanded use of Ethernet LANs
    Frame compatibility
    Low cost and high access speeds
    Mass market for technology
    Simplifies business continuity

Thus, operators of Token-Ring
networks have been replacing their infrastructure with Fast Ethernet and Gigabit Ethernet LANs. Within a few years it is more than likely that the only Token-Ring networks in use will operate in museums. Today over 90 percent of all LANs are based upon Ethernet technology.
Frame Compatibility

A logical evolution of the use of end-to-end Ethernet technology is to enable data to flow between locations connected via a Metropolitan Area Network (MAN) as Ethernet frames. Doing so eliminates the need to convert Ethernet frames into ATM cells or another transport format and then re-convert them back into their original format. Due to the growth in the transport of real-time data conveying voice and video, the elimination of frame-to-cell-to-frame and similar conversions can have a beneficial effect on the reconstruction of voice or video at the destination location. Simply put, avoiding conversion lowers delay time, which is a key metric in determining whether a digitized voice stream can be transported and converted back to an analog format without experiencing distortion.
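The overhead avoided by skipping the frame-to-cell conversion can be quantified. The sketch below estimates the so-called "cell tax" when an Ethernet payload is segmented into 53-byte ATM cells; it is a simplification that assumes AAL5 encapsulation (8-byte trailer, payload padded to a multiple of 48 bytes), which is not spelled out in the text above:

```python
import math

def atm_cell_overhead(payload_bytes: int) -> float:
    """Estimate the expansion factor when a frame payload is carried
    over ATM using AAL5 (8-byte trailer, 48 payload bytes per cell)."""
    aal5_pdu = payload_bytes + 8          # payload plus AAL5 trailer
    cells = math.ceil(aal5_pdu / 48)      # pad up to whole 48-byte cells
    wire_bytes = cells * 53               # each cell adds a 5-byte header
    return wire_bytes / payload_bytes

# A 1500-byte Ethernet payload becomes 32 cells (1696 bytes) on the wire,
# roughly 13 percent of extra bytes that a frame-native MAN avoids.
print(round(atm_cell_overhead(1500), 3))
```

Beyond the byte overhead, the segmentation and reassembly steps themselves add the processing delay that the text identifies as harmful to voice reconstruction.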
Low Cost

Most organizations go through a budgetary process where they allocate various funds for different projects into the future. One of the projects typically budgeted in an IT environment is network upgrades. In the original LAN wars mentioned earlier in this chapter, Ethernet won over Token-Ring for a variety of reasons, with one of the primary benefits of Ethernet being its low cost; a second key benefit was its ability to scale upward. Concerning the latter, an organization operating a legacy 10-Mbps Ethernet LAN could either upgrade the network to a 100-Mbps Fast Ethernet network or selectively use switches to connect the existing network to a backbone network operating at a much higher data rate. Similarly, a Fast Ethernet network operating at 100 Mbps could be upgraded to a Gigabit Ethernet network, or the end user could selectively use Gigabit LAN switches with some Fast Ethernet ports that could be employed to connect the existing network to a faster high-speed Gigabit Ethernet backbone.

These network scenarios enable data to flow end-to-end as Ethernet frames. This significantly reduces the cost associated with training network personnel as well as the cost of diagnostic equipment. In addition, because the use of LAN switches enables portions of a network to be selectively upgraded, the cost associated with a network upgrade can be spread over more than one budgetary period.

When we discuss the use of Carrier Ethernet to interconnect two or more locations within a metropolitan area, similar cost savings are obtainable due to the ease of connecting existing Ethernet LANs via a Carrier Ethernet service. Thus, the
low cost associated with connecting LANs to a Carrier Ethernet service represents another reason for considering the use of this service.
High Network Access Speeds

The ability to connect locations via Carrier Ethernet implies the transport of data at high speeds. Thus, the use of Carrier Ethernet enables locations within a metropolitan area to be connected to one another via access lines that operate at high data rates. When transporting delay-sensitive data such as real-time voice and video, minimizing network ingress and egress times can be quite beneficial.

A second area that deserves mention is the use of Carrier Ethernet as a replacement for lower-speed T1 and T3 transmission systems. The T1 line was originally developed to transport 24 digitized voice conversations and by the early 1990s was primarily used as a 1.544-Mbps data pipe to connect stations on a LAN to the Internet. Similarly, the T3 transmission system was originally developed to transport 28 T1 lines, each carrying 24 digitized calls. Today, a majority of local-loop T3 lines are used to provide large organizations with Internet access at a data rate approaching 45 Mbps. Through the use of Carrier Ethernet it becomes possible to obtain an access line operating at a gigabit data rate.
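The T1 rate cited above follows directly from its voice-channel structure; a quick check using standard DS1 framing figures (24 channels of 64 Kbps plus 8 Kbps of framing, which the text does not spell out):

```python
# A T1 multiplexes 24 voice channels of 64 Kbps each, plus 8 Kbps of framing,
# yielding the 1.544-Mbps rate cited above.
t1_bps = 24 * 64_000 + 8_000
print(t1_bps)            # 1544000

# A T3 carries 28 T1s; with its own multiplexing overhead the line rate
# reaches 44.736 Mbps, the "approaching 45 Mbps" figure in the text.
t3_payload_bps = 28 * t1_bps
print(t3_payload_bps)    # 43232000 of T1 payload before T3 framing
```

Against these figures, a gigabit Carrier Ethernet access line offers more than twenty times the capacity of a T3.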
Mass Market for Technology

A fifth driving factor behind the acceleration in the use of Carrier Ethernet is the mass market for Ethernet technology. Having won the LAN wars many years ago, Ethernet in a variety of flavors represents the dominant technology for moving data over local area networks. This gives Ethernet economies of scale in the development of such products as LAN switches, router ports, and network adapters. Because Carrier Ethernet is based on Ethernet, the mechanism required to connect Ethernet LANs to a Carrier Ethernet service does not represent a quantum leap in technology. Instead, the connection can occur using off-the-shelf products, which makes a mass market of equipment usable. This in turn drives down the cost of interconnecting Ethernet LANs via a Carrier Ethernet service, making the use of the service more appealing.
Business Continuity

Until 9/11, many small- and medium-sized corporations discussed the need for continuity of operations but did not invest the necessary funds to achieve a high level of backup. The world changed after 9/11, and today business continuity is a major operational goal of business.

Through the use of Carrier Ethernet it becomes relatively easy for one office to back up its data onto the data storage residing at another office. Thus, one of the benefits obtained from the high speed provided by Carrier Ethernet is the ability for off-site updates to occur in a timely fashion. In addition, organizations can use Carrier Ethernet to transmit backup data to off-site storage repositories, providing another option for business recovery that can be tailored to changing data patterns and either supplement or complement conventional backup strategies in which tapes or disks are transported to an off-site storage facility.

Table 1.2 Technologies Enabling Carrier Ethernet
    Copper and fiber infrastructure
    VPNs
    VLANs
    MPLS

Now that we have an appreciation for a few of the driving forces contributing to the growth in the use of Carrier Ethernet, we will turn our attention to some of the technology issues that enable the relatively high data rate of this new version of Ethernet to be used effectively.
Enabling Technologies

In this section we will examine a core series of relatively new technologies that enable organizations to effectively use Carrier Ethernet. Table 1.2 lists four key technologies that enable Carrier Ethernet to become a viable transport technology for interconnecting locations at a high data rate within a metropolitan area.
Copper and Fiber Infrastructure

Over the past decade, significant improvements occurred in the data transmission rates obtainable via copper wires, while many communications carriers strung fiber into buildings or to the curb, using copper to deliver high-speed data over the relatively short remaining distance into the home. Concerning the use of copper, although conventional modems are only capable of reaching a data rate of approximately 56 Kbps, such modems use only approximately 4 KHz of the bandwidth of copper-based wiring. In actuality, the available bandwidth of twisted-pair copper wiring is over 1 MHz. However, because the telephone network was originally developed to transport voice, low- and high-pass filters are used to form a passband of approximately 4 KHz, limiting the ability of modems to transmit data at high speed.
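The gap between the 4-KHz voice passband and the full copper bandwidth can be illustrated with the Shannon capacity formula C = B log2(1 + SNR). The sketch below uses an illustrative 40-dB SNR, an assumption chosen only to show the scale of the difference, not a measured line value:

```python
import math

def shannon_capacity_bps(bandwidth_hz: float, snr_db: float) -> float:
    """Shannon channel capacity C = B * log2(1 + SNR)."""
    snr_linear = 10 ** (snr_db / 10)
    return bandwidth_hz * math.log2(1 + snr_linear)

# A 4-KHz voice passband at ~40 dB SNR tops out near modem speeds...
print(round(shannon_capacity_bps(4_000, 40)))      # ~53 Kbps
# ...while the 1-MHz bandwidth of the raw copper pair supports far more.
print(round(shannon_capacity_bps(1_000_000, 40)))  # ~13 Mbps
```

The unused spectrum above the voice passband is exactly what the DSL family described next exploits.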
Figure 1.1 ADSL frequency use on copper wire (voice occupies 0 to 4 KHz; the upstream band runs from 25.875 KHz to 138 KHz; the downstream band runs from 138 KHz to beyond 1 MHz)
ADSL

Recognizing the availability of a significant amount of unused bandwidth on copper wiring, telephone companies altered their last-mile connections to take advantage of frequencies from approximately 40 KHz up to and beyond 1 MHz. In doing so, they enabled their existing copper-based local loop, which runs from a telephone exchange to the customer premises, to transport both voice and data. To do so, the telephone company initially installed Asymmetric Digital Subscriber Line (ADSL) modems at the customer premises and a rack-mounted Digital Subscriber Line Access Multiplexer (DSLAM) at the central office, with the latter serving multiple subscribers. Through the use of Frequency Division Multiplexing (FDM), the ADSL modem created two frequency bands above the voice band, enabling voice calls and data transmission to occur simultaneously over a common copper-wire connection.

Figure 1.1 illustrates the general frequency division that occurs when ADSL is implemented on a telephone copper wire. Note that the lower 4 KHz is used for voice. Of the bandwidth devoted to data transmission, the larger band supports downstream (central office to subscriber) transmission while the smaller band supports upstream (subscriber to central office) communications. This partition of the upper frequencies into two different-sized bands results in an asymmetric data rate and is designed to support typical Internet access, where short upstream transmissions in the form of URLs are followed by lengthy downstream transmissions in the form of Web pages.
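The frequency plan of Figure 1.1 can be expressed as a simple lookup. The band edges below follow the figure (4-KHz voice cutoff, 25.875-KHz upstream start, 138-KHz upstream/downstream split); treating everything between 4 and 25.875 KHz as a guard region is this sketch's own simplification:

```python
def adsl_band(freq_hz: float) -> str:
    """Classify a frequency against the ADSL FDM plan of Figure 1.1."""
    if freq_hz < 4_000:
        return "voice"
    if freq_hz < 25_875:
        return "guard"       # separation between voice and data bands
    if freq_hz < 138_000:
        return "upstream"    # subscriber -> central office
    return "downstream"      # central office -> subscriber

print(adsl_band(1_000), adsl_band(60_000), adsl_band(500_000))
```

Shifting the 138-KHz split upward, as the Annex J and M variants discussed next do, simply moves the upstream/downstream boundary in this plan.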
ADSL2 and ADSL2+

Since the adoption of ADSL standards in 1998, there have been several enhancements to the technology, most notably ADSL2 and ADSL2+. Table 1.3 provides a comparison of the original ADSL, ADSL2, and ADSL2+. Note that the International Telecommunications Union (ITU) standard G.992.5 Annexes J and M shift the upstream/downstream frequency split from 138 to 276 KHz as a mechanism to boost upstream data rates. In addition, the "all-digital-loop" variation of
ADSL2 and ADSL2+ defined in Annexes I and J, which define operation both without and with overlapped spectrum, supports an additional 256 Kbps of upstream data when the 4-KHz bandwidth allocated for voice is reallocated for ADSL.

Table 1.3 Comparing Maximum Operating Rates

Technology    Standard                  Downstream Rate (Mbps)   Upstream Rate (Mbps)
ADSL          ANSI T1.413               8                        1.0
ADSL2         ITU G.992.3/4             12                       1.0
ADSL2         ITU G.992.3/4 Annex J     12                       3.5
RE-ADSL2      ITU G.992.3/4 Annex L     5                        0.8
ADSL2+        ITU G.992.5               24                       1.0
RE-ADSL2+     ITU G.992.5 Annex L       24                       1.0
ADSL2+        ITU G.992.5 Annex M       28                       3.5

The downstream and upstream data rates listed in Table 1.3 represent theoretical maximum data rates. As the distance between the central office DSLAM and the subscriber's premises increases, the maximum obtainable data rate decreases. Table 1.4 provides a general relationship between transmission distance and data rate for the downstream channel of ADSL2+.

Table 1.4 Distance versus Downstream Data Rate

Distance (feet)    Maximum Data Rate (Mbps)
<1000              28
1000-2000          24
2001-3000          23
3001-4000          22
4001-5000          21
5001-6000          19
14001-15000        1.5

Note that different versions of ADSL are primarily used for Internet access and for providing a Fiber-to-the-Curb (FTTC) or Fiber-to-the-Neighborhood (FTTN) connection. For both FTTC and FTTN the communications carrier installs fiber to a central location in a residential area and then uses existing copper wire to provide a high-speed connection via a version of ADSL into a residence. Because this eliminates digging and installing fiber directly into a residence, the communications carrier can significantly reduce the cost of service. However, new services such as IPTV require a data rate at or above 20 Mbps to support both standard- and high-definition television, limiting the distance over which ADSL can be used to avoid routing fiber directly into a residence.

Although the different versions of ADSL are normally sufficient for residential and some business users, other business users who need to interconnect locations require
a balance between upstream and downstream data rates. That balance is commonly obtained through a variation of ADSL referred to as Symmetric High-speed Digital Subscriber Line (SHDSL).
SHDSL

SHDSL was standardized by the ITU as Recommendation G.991.2 in February 2001. This standard defines symmetrical data rates from 192 to 2304 Kbps, in increments of 64 Kbps, for transmission over a single copper-wire pair. When two copper pairs are used, the data rate ranges from 384 to 4608 Kbps in 128-Kbps increments. SHDSL provides businesses with a flexible transport capability because it supports a variety of payloads: unstructured clear channel; full-rate or fractional T1 or E1; ATM; n × ISDN Basic Rate Access, where n defines a number of 64-Kbps channels; and a dual-bearer mode in which a mixture of two data streams, such as T1 and packet-based data, can share the payload bandwidth of the SHDSL loop.
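The G.991.2 rate schedule described above is easy to enumerate; a sketch using only the figures given in the text:

```python
def shdsl_rates_kbps(pairs: int) -> list[int]:
    """Symmetric SHDSL rates per the G.991.2 figures cited above:
    one pair: 192-2304 Kbps in 64-Kbps steps;
    two pairs: 384-4608 Kbps in 128-Kbps steps."""
    if pairs == 1:
        return list(range(192, 2304 + 1, 64))
    if pairs == 2:
        return list(range(384, 4608 + 1, 128))
    raise ValueError("G.991.2 defines one- and two-pair operation")

rates = shdsl_rates_kbps(1)
print(rates[0], rates[-1], len(rates))   # 192 2304 34
```

Note that the top single-pair rate (2304 Kbps) comfortably exceeds a full T1 payload, which is why fractional and full-rate T1 fit within the SHDSL payload options.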
VDSL

Very-high-bit-rate Digital Subscriber Line (VDSL) represents a technology that can supplement fiber to the curb, enabling very high data rates into homes and offices. VDSL operates over existing copper wires in a manner similar to ADSL. The key differences between VDSL and the various versions of ADSL are the supported data rate and the transmission range. VDSL can achieve data rates up to 52 Mbps in the downstream channel and up to 16 Mbps in the upstream channel, considerably faster than the data rates obtainable via any version of ADSL. However, this additional data rate is only obtainable over relatively short distances of approximately 4000 feet or 1200 meters.

Because many communications carriers are replacing copper-based main feeds routed from neighborhoods to central offices with optical fiber, they can install either an FTTC or an FTTN infrastructure. In either situation, VDSL can be used to provide high-speed access into the subscriber's premises, enabling businesses to access the fiber infrastructure without requiring the actual routing of fiber into the business. Thus, VDSL represents a mechanism for businesses to access a nearby fiber-optic transmission facility at a relatively high speed without having to wait for a communications carrier to extend fiber into their facility.
Versions

Currently there are two versions of VDSL. One version, supported by a partnership between Alcatel, Texas Instruments, and other vendors, uses a carrier system
known as Discrete Multi Tone (DMT). DMT divides the signal into 247 separate channels, each 4 KHz wide, and modulates data on each channel. The second version of VDSL, supported by the VDSL Coalition, whose members include Lucent and Broadcom, uses a pair of technologies called Quadrature Amplitude Modulation (QAM) and Carrierless Amplitude Phase (CAP). Today it appears that DMT has won the "VDSL war," as most equipment is based upon the use of DMT technology.
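The DMT structure just described translates into a simple spectrum calculation; a sketch using only the figures in the text (247 channels of 4 KHz each):

```python
# DMT divides the copper spectrum into 247 channels, each 4 KHz wide,
# and modulates data independently on each channel.
CHANNELS = 247
CHANNEL_WIDTH_HZ = 4_000

total_spectrum_hz = CHANNELS * CHANNEL_WIDTH_HZ
print(total_spectrum_hz)   # 988000, i.e., roughly the 1 MHz usable on copper
```

Per-channel modulation is what lets DMT adapt to line conditions: a noisy channel can carry fewer bits, or none, without disturbing its neighbors.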
VPNs

A second technology that represents a driving force for the use of Carrier Ethernet is the growth in the use of Virtual Private Networks (VPNs). VPNs enable the creation of a private network across a shared public network infrastructure such as the Internet, or even across a metropolitan area network formed by the use of communications carrier facilities to interconnect two or more locations within a city or general metropolitan area.
Types of VPNs

There are two general types of VPNs: site-to-site and remote access. A site-to-site VPN allows secure connectivity between fixed locations, such as a number of branch offices and a regional office. In comparison, a remote-access VPN enables mobile or home users to access an organization's internal network in a secure manner. From the internal network the remote user may be able to access various computational facilities, depending upon the availability of access to the different computers connected to the internal network.

For both types of VPNs, secure tunnels are created between sites by encapsulating user traffic within other packets. Encapsulation results in an additional header or headers, tags, or labels corresponding to the tunneling protocol being prefixed to the tunneled packets. Although tunneled data does not have to be encrypted to be transported via a VPN, in practice encryption is almost always used to hide the contents of the tunneled data from persons who could monitor network traffic as it flows over a public packet network. In addition to encryption, it is also important to verify the originator of a data transmission session. Thus, most tunneled data transported via a VPN is both authenticated and encrypted.
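At its core, the encapsulation described above is the prefixing of a tunnel header to the original packet. The sketch below is purely illustrative: the header sizes are placeholders and no real cryptography or protocol formatting is performed:

```python
def encapsulate(inner_packet: bytes, tunnel_header: bytes) -> bytes:
    """Prefix a tunneling-protocol header to the tunneled packet,
    as described above for site-to-site and remote-access VPNs."""
    return tunnel_header + inner_packet

# Illustrative only: a 20-byte outer IP header plus an 8-byte tunnel
# header (placeholder sizes) wrapped around the original user traffic.
inner = b"original user traffic"
outer = encapsulate(inner, tunnel_header=b"\x00" * 28)
print(len(outer) - len(inner))   # 28 bytes of added per-packet overhead
```

In a real deployment the inner packet would also be encrypted and authenticated before the outer headers are applied, per the preceding paragraph.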
Protocols

There are several popular secure VPN protocols in use, including IPSec; Layer 2 Tunneling Protocol (L2TP), which is secured using IPSec; and Secure Sockets Layer (SSL). Both hardware- and software-based VPNs are available. Some VPNs are provisioned by the service provider; others can be provisioned by the customer.
Service Provider Provisioned VPNs
There are several types of service-provider provisioned site-to-site VPNs that can transport either Layer 2 or Layer 3 protocols. Commonly available technologies include L2TP version 3, IEEE 802.1Q, IPSec, Generic Routing Encapsulation (GRE), Multi-Protocol Label Switching (MPLS), and Any Transport over MPLS (AToM). Although AToM can be used to tunnel a variety of protocols, including Ethernet, the IEEE 802.1Q protocol is applicable only to tunneling Ethernet.
Layer 2 Operations
A Layer 2 site-to-site VPN enables Ethernet to flow between sites without first being converted to a higher-layer protocol such as IP. Layer 2 VPNs that provide site-to-site connectivity can be provisioned between computers, switches, or routers. Communications are based upon Layer 2 addressing, such as Ethernet's 48-bit MAC addresses. Several site-to-site protocols operate at Layer 2; for example, AToM and L2TP version 3 can be used to emulate circuits, providing a Layer 2 site-to-site VPN capability. When AToM is used, Layer 2 frames are transported across an MPLS network, with the Label Distribution Protocol (LDP) used to signal the capabilities and attributes of the emulated circuit.
Layer 3 Operations
In a Layer 3 site-to-site VPN, communication occurs based on network layer addressing. In an Ethernet environment, this results in dual conversions: Ethernet frames must be converted into IP packets to flow at the network layer and then reconverted, using the destination MAC address, for delivery as Ethernet frames. In actuality, a variable delay will commonly occur as the host and router at one end of the connection use the Address Resolution Protocol (ARP) to equate an IP address with a MAC address. If the addresses were previously resolved, the router can immediately determine the correct MAC address without broadcasting an ARP message and waiting for a response. Otherwise, the router will broadcast an ARP request and wait for a response.
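The resolve-from-cache-or-broadcast behavior described above can be sketched as follows. The cache dictionary and the broadcast stand-in are illustrative assumptions, not the actual ARP wire protocol, and the addresses are invented for the example.

```python
# Simulated hosts reachable on the local segment
# (hypothetical addresses, purely for illustration).
SEGMENT_HOSTS = {
    "192.168.1.10": "00:1a:2b:3c:4d:5e",
    "192.168.1.20": "00:1a:2b:3c:4d:5f",
}

arp_cache = {}  # IP address -> previously learned MAC address

def broadcast_arp_request(ip):
    """Stand-in for broadcasting an ARP request and awaiting a reply."""
    return SEGMENT_HOSTS.get(ip)

def resolve(ip):
    # Previously resolved: answer immediately, with no broadcast delay.
    if ip in arp_cache:
        return arp_cache[ip]
    # Otherwise broadcast and wait for the owner of the IP address to reply.
    mac = broadcast_arp_request(ip)
    if mac is None:
        raise LookupError("no ARP reply for " + ip)
    arp_cache[ip] = mac  # cache so the next lookup is immediate
    return mac
```

The first call for a given address pays the broadcast-and-wait cost; subsequent calls are answered from the cache, which is the variable delay the text describes.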
Summary
The use of a variety of well-proven site-to-site VPNs enables Carrier Ethernet to interconnect geographically dispersed locations within a metropolitan area in a secure manner. Table 1.5 summarizes site-to-site VPNs that can be used to link sites separated by a few miles as well as sites on separate continents.
Introduction to Carrier Ethernet n 11
Table 1.5 Common Site-to-Site VPNs

  VPN                     Attributes                                           Provisioner
  AToM                    Operates at Layer 2, supporting point-to-point       Service provider
                          transport of Layer 2 traffic over an MPLS backbone
  IEEE 802.1Q tunneling   Operates at Layer 2, segregating Ethernet traffic    Service provider
                          by adding an extra 802.1Q tag to the Ethernet
                          VLAN header
  IPSec                   Operates at Layer 3, provides encryption and         Service provider or customer
                          authentication of IP traffic
  L2TPv2                  Operates at Layer 2, can encapsulate and tunnel      Service provider or customer
                          Point-to-Point Protocol (PPP) over an IP backbone
  L2TPv3                  Operates at Layer 2, encapsulates Layer 2            Service provider or customer
                          protocols over a point-to-point connection
  SSL                     Operates at Layer 4 through Layer 7, enabling        Service provider or customer
                          dynamic deployment
VLANs
Virtual LANs (VLANs) represent a method for creating one or more independent logical networks within a physical network. The simplest type of VLAN is port based, which is shown in Figure 1.2 for illustrative purposes. In this example the 4 × 4 port-based Ethernet switch is configured with two VLANs, one for engineering personnel and a second for accounting personnel. In examining Figure 1.2, note that VLAN1 represents the accounting VLAN, which consists of ports 0, 1, 3, 12, 13, 14, and 15, while ports 2, 3, 4, 5, 6, 7, 8, 9, 10, and 11 are assigned to the engineering VLAN, VLAN2. This means that the engineers and the accountants have two separate broadcast domains, with the exception of port 3, which allows both engineers and accountants to access a router that is in turn connected to the Internet. Also note that the engineers have one port in their VLAN (port 2) connected to a server, while the accountants have two ports in their VLAN (ports 0 and 1) connected to servers. Based upon the switch-based Ethernet VLAN shown in Figure 1.2, we can note a variety of advantages associated with the use of VLANs. Table 1.6 lists four key advantages usually associated with the use of VLANs.
Figure 1.2 A 4 × 4 port-based VLAN. (The figure shows a 16-port Ethernet switch: accounting stations (A) on ports 12 through 15 with servers (S) on ports 0 and 1, engineering stations (E) on ports 4 through 11 with a server on port 2, and a router on port 3 shared by both VLANs.)
Table 1.6 Advantages of VLANs
  Reduces the size of broadcast domains
  Reduces the effort associated with creating subnetworks
  Reduces hardware requirements
  Enhances control over traffic
In addition, because VLANs can operate at Layer 2 in the protocol stack, another advantage is that their data output can be conveyed across a Carrier Ethernet network without requiring conversion to a higher-layer protocol.
Broadcast Domain Reduction
Through the use of VLANs the sizes of broadcast domains can be reduced, in effect reducing the overhead resulting from the transmission of ARP and other broadcast messages. At the same time, users can configure VLANs to structure broadcast domains to suit their particular working environment.
Facilitate Subnet Creation
Another advantage associated with the use of VLANs concerns subnet creation. Instead of having to move cables physically, different VLANs can be configured electronically to represent the required subnets. Thus, the use of a VLAN reduces the effort required to create subnets. Because moves, adds, and changes can be performed electronically, the cost of re-cabling is eliminated in many instances. According to several trade journals, that cost can run over $400 per change when the time of a technician is included for re-cabling. Thus, the use of VLANs can both simplify the work of technicians and minimize the time required to make a change, resulting in a more efficient use of personnel.
Reduce Hardware Requirements
As indicated in our discussion of subnet creation, the use of a VLAN eliminates the need to configure subnets through cabling. Thus, the use of VLANs can reduce hardware requirements.
Traffic Control
In the example shown in Figure 1.2, two VLANs were created by assigning ports to each VLAN. However, one port, connected to a router providing access to the Internet, was assigned to both VLANs so that engineers and accountants alike could reach the Internet. Thus, the use of VLANs provides a mechanism for traffic control.
Types of VLANs
Beyond port-based VLANs, there are several other common types of switch-based VLANs, including MAC-based, protocol-based, and software-defined VLANs. A MAC-based VLAN uses the 48-bit MAC address for VLAN creation. In comparison, a protocol-based VLAN can use Layer 3 data within frames to create VLANs based upon protocol (IP, NetWare) or upon IP address if all frames are IP frames. Although all VLANs are software defined to a certain degree, when we speak of a software-defined VLAN we are normally referencing VLANs formed by a switch looking into the frame to observe Layer 4 through Layer 7 conditions that associate the frame with a VLAN. Thus, VLANs could be formed at Layer 7 based upon application or at a lower layer based upon information contained in the Layer 2 through Layer 6 headers.
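The three classification styles can be contrasted with a small sketch. The lookup tables, field names, and VLAN numbers below are invented for illustration; a real switch builds such tables from its configuration.

```python
# Invented example tables mapping frame attributes to VLAN IDs.
PORT_VLANS = {0: 1, 1: 1, 2: 2, 3: 1}        # port-based: ingress port decides
MAC_VLANS = {"00:aa:bb:cc:dd:01": 2}          # MAC-based: 48-bit source address decides
PROTOCOL_VLANS = {0x0800: 10, 0x8137: 20}     # protocol-based: EtherType (IP, NetWare IPX)

def classify(frame, method):
    """Return the VLAN a frame belongs to under the chosen scheme."""
    if method == "port":
        return PORT_VLANS[frame["ingress_port"]]
    if method == "mac":
        return MAC_VLANS[frame["src_mac"]]
    if method == "protocol":
        return PROTOCOL_VLANS[frame["ethertype"]]
    raise ValueError("unknown VLAN scheme: " + method)
```

A software-defined VLAN would extend the same idea by keying on fields deeper in the frame, up through Layer 7.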
MPLS
Multi-Protocol Label Switching (MPLS) is another technology providing a driving force for the use of Carrier Ethernet. MPLS represents a standards-based technology used to speed up network traffic by prefixing a label to each packet that provides routing information concerning the path the packet should traverse through a network. Under MPLS, Layer 2 information about such metrics as bandwidth, latency, and utilization is integrated into an autonomous system, such as an ISP's network, both to simplify and to improve the flow of packets through the network. Because the labels prefixed to packets specify a specific path through the network, the time required to route each packet is minimized, expediting the transit of packets through the network. In this section we will obtain an overview of MPLS and how it can be used in a Carrier Ethernet environment.
Overview
As its name implies, MPLS can support multiple protocols, ranging from IP to ATM and Frame Relay (FR). As packets are routed through an MPLS network, for the most part they can be forwarded at the Layer 2 (switching) level instead of at the Layer 3 (routing) level, making traffic move faster. In addition, MPLS enables users to manage different data streams based on priority and the service plan they subscribe to. Thus, MPLS makes it easier to manage a network's Quality of Service (QoS). Because the best way to explain the operation of MPLS is by example, we will do that next.
Architecture
In an MPLS network, packets need to be labeled so they can flow via predefined paths through the network. This labeling occurs at the edge of the network at Label Edge Routers (LERs). The prefixed label provides an identifier that can include information based on the routing table entry, including destination, bandwidth, delay, and other metrics, as well as data that references the source IP address, the Layer 4 port number, the differentiated services value, and similar data. Once this classification is completed and mapped, packets are assigned to Label Switch Paths (LSPs), which provide the routing through the network as labels are swapped at each router the packets traverse. Figure 1.3 illustrates a network consisting of two label edge routers and five Label Switch Routers (LSRs) that use special software to examine the label of each packet and forward it along a label switch path towards its destination.
Operation
As data enters the MPLS network, LERs examine ingress traffic. Using a database, each LER matches the destination of a packet to an entry in the database to determine whether the packet should be labeled. If so, an MPLS shim header is inserted into the packet. Figure 1.4 illustrates the composition of the MPLS shim header.
Figure 1.3 Data packets flowing through an MPLS network. (The figure shows station A attached to an ingress LER; Label Switch Routers R2 through R5 interconnected via serial ports s0, s1, and s2; and an egress LER delivering traffic to station C on port e0. LER = Label Edge Router; LSR = Label Switch Router.)
Note that the shim header is placed between the Layer 2 and Layer 3 headers of a packet. As such, the shim header is not a part of either layer, but is used as a mechanism to provide Layer 2 and Layer 3 information to the MPLS network. The shim header consists of 32 bits divided into four fields: 20 bits define the label, the next 3 bits are reserved for experimental functions, and these are followed by a 1-bit stack flag and 8 bits for a Time To Live (TTL) value. Through the insertion of a shim header both Layer 2 and Layer 3 protocols are accommodated. The resulting shim header is then used by LSRs to forward packets through the network. The actual placement of the label varies based upon the type of network. For example, in an ATM network the label is placed in the VPI/VCI fields of each ATM cell header. In comparison, in a LAN environment the header, in the form of the shim illustrated in Figure 1.4, is placed between the Layer 2 and Layer 3 headers.
Figure 1.4 The formation of an MPLS shim header. (Placement: the shim header is inserted between the Layer 2 header and the Layer 3 through Layer 7 headers of a packet. Composition: Label (20 bits), Experimental (3 bits), Stack (1 bit), TTL (8 bits).)
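The four-field, 32-bit layout lends itself to bit packing. The sketch below follows the field widths given above (Label 20 bits, Experimental 3 bits, Stack 1 bit, TTL 8 bits), with the label occupying the most significant bits.

```python
def pack_shim(label, exp, stack, ttl):
    """Assemble a 32-bit MPLS shim header from its four fields."""
    assert 0 <= label < 1 << 20 and 0 <= exp < 8
    assert stack in (0, 1) and 0 <= ttl < 256
    return (label << 12) | (exp << 9) | (stack << 8) | ttl

def unpack_shim(word):
    """Split a 32-bit shim header back into (label, exp, stack, ttl)."""
    return word >> 12, (word >> 9) & 0x7, (word >> 8) & 0x1, word & 0xFF
```

Packing and unpacking are exact inverses, so an LSR can extract the label with a single shift rather than parsing a variable-length header.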
Returning our attention to Figure 1.3, assume the workstation labeled A generates a standard Ethernet frame destined for workstation C, which resides on a different network. The Ethernet frame generated by station A is converted into an IP packet by its default gateway, which also serves as the LER at the ingress to the MPLS network. The router searches its database, technically referred to as a Label Forwarding Information Base (LFIB), and inserts a label between the Layer 2 and Layer 3 headers that defines the path the packet should take through the MPLS network towards station C. The LER then forwards the packet out of its serial 1 (s1) port towards the first LSR to which it is directly connected. At that location router R2 examines the label in the packet and forwards the packet on port s2 towards router R4. At router R4 the label is again examined and the packet is forwarded onto port s1 towards router R5. At router R5 another examination of the label results in the packet being forwarded out port s1 to router R6, where the label informs that router to deliver the packet onto port e0, where the destination station resides. Because data flows across the MPLS network in the form of Layer 3 IP packets, router R6 will use the IP address in the Layer 3 header to determine the MAC address required at Layer 2 for delivery of the frame on the Ethernet LAN. To do so, the router will first check its cache memory to determine whether a MAC address was previously learned for the IP address. If so, it will use that MAC address. If not, the router will transmit an Address Resolution Protocol (ARP) request to determine the MAC address associated with the destination IP address. Note that once a label is inserted into a packet at the edge of an MPLS network, routing through the network is expedited.
This is because routers in the network use the labels inserted into packets, together with label forwarding information maintained by the LSRs, simply to forward packets out onto an appropriate interface. To do so, an LSR uses the label as an index into its label information base. Each entry in the label information base consists of an incoming label and one or more subentries. Those subentries include the outgoing label, the outbound interface, and outbound link-level data. When the router matches the inbound label to an entry in its label information base, the inbound label is replaced, or swapped, with the outbound label; the entry's link-level data is used to replace other link-level data in the packet, such as the MAC address; and the packet is then forwarded via the outbound interface. Thus, MPLS forwarding is based upon label swapping. Because a simple label-swapping process is employed, the time between a packet arriving at an MPLS router and being forwarded onto an outbound interface is minimized. In addition, because MPLS provides an easy method of marking packets as belonging to a particular class after they are initially classified, MPLS can be used to define a level of QoS through the network.
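A minimal model of the label-swapping step might look like this. The table contents, labels, and interface names are invented for illustration; a real LFIB is populated by a label distribution protocol.

```python
# Invented LFIB: inbound label -> (outbound label, outbound interface).
LFIB = {
    17: (29, "s2"),
    29: (43, "s1"),
    43: (None, "e0"),  # None: label removed, packet delivered at the network edge
}

def forward(packet):
    """Swap the inbound label per the LFIB and select the outbound interface."""
    out_label, out_iface = LFIB[packet["label"]]
    swapped = dict(packet)
    swapped["label"] = out_label
    swapped["iface"] = out_iface
    return swapped
```

Each hop is a single dictionary lookup and a label swap, which is why MPLS forwarding is faster than a full Layer 3 routing decision at every router.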
Table 1.7 Popular Carrier Ethernet Applications
  Interconnecting distributed offices
  Providing real-time backup
  Supporting voice, video, and data services
Applications
There is an almost unlimited number of applications that can be enhanced through the use of Carrier Ethernet as a transport facility linking offices within a metropolitan area. Such applications can include the transport of almost any type of digital data, ranging in scope from digitized voice to videoconferencing. Three of the more popular applications that can be enhanced through the use of Carrier Ethernet are listed in Table 1.7.
Interconnecting Distributed Offices
Because Carrier Ethernet represents a metropolitan area transport facility, it is ideally suited for interconnecting offices located within a metropolitan area. Because it is possible to use bridges or switches to connect Ethernet LANs residing in different locations, offices can be interconnected directly at Layer 2. In addition, if offices need to transport data that requires a recognized QoS capability, most Carrier Ethernet services provide this capability.
Providing Real-Time Backup
Since 9/11, the backup of corporate and governmental data has increased in importance, in recognition that terrorism is as viable a threat as fire, flood, and other acts of God. Due to the high data rate provided by Carrier Ethernet, it becomes possible to use this transport facility to back up data in real time or to stagger backups across predefined periods during the day. In either situation, the high data rate provided by Carrier Ethernet eliminates the need to transfer data physically via tape or disk to off-site storage, along with the transportation and personnel costs associated with the physical movement of data.
Voice, Video, and Data Support
In addition to being a high-speed transport facility, when MPLS is integrated into Carrier Ethernet, QoS can be supported. This means that different data streams with different QoS requirements can be supported, enabling voice, data, and video, including teleconference data, to be transported. Now that we have a general appreciation for the applications that can be supported via Carrier Ethernet, we conclude this introductory chapter by turning our attention to some of the challenges facing this transport service.

Table 1.8 Challenges Facing Carrier Ethernet Users
  Total cost of operation
  Packet overhead
  Management and troubleshooting
  Reliability
  Security
  QoS
Challenges to Carrier Ethernet
Earlier in this chapter it was mentioned that the author would be remiss if he did not mention a series of challenges facing end users who wish to consider the use of Carrier Ethernet. In this concluding section of this introductory chapter, we will turn our attention to a series of technical and economic issues that can inhibit the use of Carrier Ethernet. In addition, as we discuss each issue we will also discuss, when relevant, how such issues can be mitigated to enable an organization to make better use of a Carrier Ethernet service. Table 1.8 lists six of the key challenges facing users of a Carrier Ethernet service.
Total Cost of Operation
It is important to recognize that Carrier Ethernet represents a packet transmission service. As such, users may have either to purchase or to lease network access devices at the edge of the network as well as pay a fee for service. The fee can vary from a per-packet charge to a fixed monthly rate, depending upon the manner in which the communications carrier bills for the use of its service. Thus, end users must carefully estimate the total cost of using a Carrier Ethernet service.
Packet Overhead
A second item that warrants consideration is the potential overhead associated with transporting packets through a Carrier Ethernet network. For example, if Ethernet traffic is first converted to IP packets, an IP header followed by a TCP or UDP header is commonly added as a Layer 3 header. Then, if MPLS is used, a shim header will also be added to the packet. The top portion of Figure 1.5 illustrates the most common Ethernet frame, referred to as Ethernet Type II or the DIX frame, named after DEC, Intel, and Xerox, the three developers of the original Ethernet specification. Note that the overhead is 26 bytes. In the lower portion of the figure, the Ethernet frame is shown encapsulated within an IP packet consisting of a 20-byte IPv4 header and either a 20-byte TCP header or an 8-byte UDP header, with a 4-byte shim header inserted between the Layer 2 and Layer 3 headers. Thus, routing an Ethernet frame as an IP packet via an MPLS Carrier Ethernet network adds either 32 bytes when UDP is the transport layer or 44 bytes when TCP is the transport layer. When the Ethernet payload is relatively small, this results in a high overhead. Conversely, as the Ethernet payload approaches or reaches its maximum length of 1500 bytes, the relative overhead decreases.

Figure 1.5 Encapsulating Ethernet to flow over an MPLS IP network. (The upper portion shows an unencapsulated Ethernet frame at Layer 2; the lower portion shows the same frame encapsulated behind an MPLS shim header and a Layer 3 IP header plus a TCP or UDP header.)
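The byte counts above translate directly into relative overhead. A quick calculation, using the 26-byte DIX frame overhead and the 32- or 44-byte additions cited in the text, shows how sharply overhead falls as the payload grows.

```python
ETHERNET_OVERHEAD = 26          # DIX frame overhead, in bytes
ADDED = {"udp": 32, "tcp": 44}  # IP + transport headers + 4-byte MPLS shim

def overhead_percent(payload_bytes, transport):
    """Percentage of the transmitted bytes that is header overhead."""
    total_overhead = ETHERNET_OVERHEAD + ADDED[transport]
    return round(100 * total_overhead / (total_overhead + payload_bytes), 1)
```

For a minimum 46-byte payload the UDP case is roughly 55.8 percent overhead, while at the 1500-byte maximum it falls to about 3.7 percent (4.5 percent with TCP).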
Management and Troubleshooting
Because Carrier Ethernet represents a third-party service, management and troubleshooting may depend upon the responsiveness of the communications carrier receiving an organization's call. One way to expedite response is to define management changes and troubleshooting responsiveness within a Service Level Agreement (SLA). Another method is the use of a management console and software connected to the communications carrier's network, assuming this type of access is permitted.
Reliability
Because Carrier Ethernet represents a service, the reliability of network operations is beyond the control of the end user. Although end users normally cannot control the reliability of Carrier Ethernet, they can and should check the mesh structure of the network for redundancy, as well as ensure that SLAs provide a penalty for an unacceptable level of reliability. In addition, the reliability of the last-mile connection to the end user can be enhanced through the use of a SONET ring or the use of fiber to two different carrier offices.
Security
As data flows through a Carrier Ethernet network, it can potentially be viewed through the use of diagnostic test equipment. In addition, it is possible (although of low probability) that data can be mirrored to another site, inadvertently viewed by carrier personnel, or misrouted to another organization. Due to the preceding as well as other security issues, it is important to recognize that Carrier Ethernet represents a public transport system shared by many users. If authentication or encryption is required, users should consider establishing a secure site-to-site VPN over the Carrier Ethernet network.
QoS
Another challenge that warrants discussion is the service provider's ability to maintain a desired QoS level. As the use of Carrier Ethernet expands, the number of users needing to transport real-time data between sites can also be expected to increase. At some point this could result in the inability of the network to provide a desired QoS level. Although end users have no direct control over a Carrier Ethernet network, they do have an indirect control mechanism in the form of a Service Level Agreement, which defines network parameters, including a desired QoS level, and penalties if the carrier does not provide that level of service.
Chapter 2
Data Networking Concepts

Writing a book for a diverse audience represents a trade-off. If the author assumes each reader is well versed in the fundamentals of the subject, some readers new to the field may be left at the starting gate. If the author assumes that readers are not well versed in the fundamentals when in fact they are, a large number of readers may be bored by a review of what they consider trivial material. To strike a balance, this author decided to include a chapter covering networking concepts. Advanced readers can skim this chapter or focus upon the sections that interest them; for readers new to the field of communications technology, it provides a foundation for better understanding the concepts presented in subsequent chapters. In this chapter we will discuss the transport technologies that provide the mechanism for moving data through networks. In doing so we will also discuss popular Layer 2 and Layer 3 data protocols, network interfaces, network equipment, and network facilities. Thus, as the title implies, we will focus our attention upon acquiring a solid foundation in networking concepts.
Transport Technologies
There are three basic transport technologies associated with different versions of Ethernet: Local Area Network (LAN), Wide Area Network (WAN), and wireless. In actuality, as we discuss transport technologies we will note that the use of optical fiber Gigabit and 10 Gigabit Ethernet LANs permits such networks to span metropolitan areas, resulting in what many refer to as Metropolitan Area Networks (MANs). However, because individual organizations cannot install fiber within a metropolitan area, the term "MAN," when used to reference a high-speed Ethernet network offered by a communications carrier as a transport service, actually represents a Carrier Ethernet transport service. Thus, between our discussions focused on LANs and WANs we will briefly discuss MANs in the form of a Carrier Ethernet service.
LANs
A Local Area Network, as its name implies, represents a network that transports data over relatively short distances. Prior to the turn of the century we could define a short distance in terms of hundreds of meters for a single LAN, or several thousand meters when LANs were bridged to extend their transmission distance. Since the turn of the century, Ethernet LANs in the form of Gigabit and 10 Gigabit Ethernet that primarily use fiber-optic cable provide transmission distances of up to 70 km. Thus, with the use of Gigabit and 10 Gigabit Ethernet, the "local" in "LAN" may be obsolete, as such networks are capable of supporting transmission across a large metropolitan area; "local" no longer distinguishes a LAN from a WAN as well as it did seven or eight years ago.
WANs
A Wide Area Network, as its name implies, can span a large geographic area. In actuality, a WAN can span the globe, and several global networks in operation consist of leased lines, routers, and multiplexers that allow data to be routed literally around the world. Although the transmission distance of a WAN can be significantly longer than that of a LAN, the trade-off is typically the data rate obtainable on each type of network. Modern Ethernet LANs typically provide a data rate many times that obtainable on WANs. This results from the fact that most Gigabit and 10 Gigabit Ethernet LANs, as well as an eventual 100 Gigabit Ethernet LAN, use fiber-optic cable that can support extremely high data rates. In comparison, although the long-haul portion of WANs has been based upon the use of fiber since the late 1970s, the access line to the communications carrier's central office is more often than not a copper line. Thus, unless a fiber cable can be run directly into a building, the highest data rate obtainable is limited to approximately 50 Mbps when VDSL (Very-high-data-rate DSL) is used on the copper access line. In comparison, when a fiber connection is available for the access line, it is possible to use a switch or router port to maintain the LAN operating rate on the WAN.
Characteristics
In addition to differences in operating rate and transmission distance, several additional characteristics can be used to differentiate LANs and WANs, including cabling and cable ownership as well as any required testing and troubleshooting.
Cabling and Testing
In a LAN environment the end user is responsible for installing required equipment, including cabling. In addition, the end user also becomes responsible for any testing and troubleshooting that may become necessary. In a WAN environment the communications carrier is responsible for cabling up to the point of demarcation at the customer's premises, and for the network from that point of demarcation onward, including the access line routed into the communications carrier's network. This means that while the communications carrier is responsible for testing and troubleshooting from the access line through the network, when a problem materializes the cause may not be obvious, and finger-pointing between the communications carrier and the end user can occur.
Wireless
A third transport technology that warrants attention is wireless. Wireless LANs were first developed approximately a decade ago, but have only recently gained acceptance by government agencies and corporations due to their enhanced security and higher operational speed.
Types of Wireless LANs
There are two types of wireless LANs: peer-to-peer and centralized. In a peer-to-peer wireless LAN, each station can communicate directly with every other station within its range. In comparison, in a centralized wireless LAN all communications flow through an access point.
Access Point
An Access Point (AP) can be considered a two-port bridge. One port provides a connection to the wireless interface in the form of an antenna, and the second port provides a connection to a wired LAN infrastructure. Because all communications flow through the access point, it can be thought of as a relay. Thus, in a wireless infrastructure network two stations can be farther apart than in a peer-to-peer network that does not use an access point.
Basic Service Set
The Basic Service Set (BSS) represents the building block of a wireless LAN. The BSS consists of an access point that functions as a master controlling the stations serviced by the BSS. Two or more BSSs are connected together through the use of an Extended Service Set (ESS), whose role we will briefly discuss next.
Extended Service Set
An Extended Service Set (ESS) represents two or more connected BSSs. Although the IEEE has not defined how the BSSs are to be connected, the most popular method by far is the use of a wired LAN infrastructure. Figure 2.1 illustrates the relationship between a pair of connected BSSs and an ESS. The connection between the BSSs is referred to as a Distribution System (DS). The set of interconnected BSSs must have a common name, referred to as a Service Set Identifier (SSID) or network name. While a connection to an AP-based network requires entering an appropriate SSID, you can also enter the word "Any" either to connect directly to an AP or to display a list of APs when more than one is within range of the station that desires a connection. Through the use of an ESS, roaming becomes possible. In addition, because most APs are connected to a wired LAN, the frames that flow over the air to the AP in a modified Ethernet format are converted to Ethernet LAN frames. Thus, in allowing roaming as well as in extending access of wireless stations onto a wired infrastructure, the AP performs frame conversion. Now that we have an appreciation for the basic networking transport technologies, we will turn our attention to several data protocols.
Figure 2.1 Forming an extended service set. (The figure shows two access points, each anchoring a BSS, joined by a distribution system to form an Extended Service Set (ESS).)
Data Protocols
Over the past 30 years numerous data protocols have evolved, some the work of standardization committees and others the work of specific vendors. In this section we will focus our attention upon Ethernet and wireless Ethernet, which are data link (Layer 2) protocols, and TCP/IP, which represents a Layer 3 through Layer 5 protocol suite. Although we will go into considerably more detail when we discuss the Ethernet frame in Chapter 4, the purpose of this section is simply to become acquainted with each protocol.
Ethernet Ethernet represents a now near-ubiquitous LAN protocol that operates at the data link layer. Although the frame composition of Ethernet is near-uniform from its operation at 10 Mbps through 10 Gigabit, its physical layer varies, defining different types and numbers of copper wire pairs and their signaling as well as different types of fiber-optic cable and the signaling mechanisms used to transport data over fiber. Thus, in actuality Ethernet can be considered to represent a family of similar frame-based networking technologies for local and metropolitan area networks.
Evolution Ethernet represents one of several breakthrough technologies developed at the Xerox Palo Alto Research Center (PARC). In addition to Ethernet, the graphical user interface, the computer mouse, and the laptop can all trace their initial developmental concepts to work performed at PARC. Returning our attention to Ethernet, its technology was invented at PARC during the period from 1973 through 1975 by Robert Metcalfe and David Boggs. Their initial paper titled "Draft Ethernet Overview" described a local area network that operated at 3 Mbps and employed 8-bit destination and source address fields. Later, the source and destination address fields were expanded to 48 bits to enable global addressing. In 1975, Xerox filed a patent application, listing Metcalfe and Boggs as well as Chuck Thacker and Butler Lampson as inventors. In 1977 the United States Patent Office granted Xerox, Metcalfe, and his team patent number 4,063,220 for their Ethernet network technology.
The DIX Standard In 1979 Metcalfe left Xerox and formed 3Com Corporation. He convinced Digital Equipment Corporation (DEC), Intel, and Xerox to develop an Ethernet standard that would be in the public domain. Through his efforts the first formal Ethernet
standard, referred to as the DIX standard in recognition of the three companies, was published in 1980. The DIX standard resulted in a 10-Mbps operating rate on a coaxial bus and defined the use of 48-bit destination and source addresses.
Access Method The access method developed for Ethernet can be traced to the Aloha packet network operating in the Hawaiian Islands during the 1960s and 1970s. That network required several stations to share a common frequency and to "back off" from transmission when a collision was detected. In Metcalfe's Ethernet a shared coaxial cable acted as the broadcast transmission medium, with a scheme referred to as Carrier Sense Multiple Access with Collision Detection (CSMA/CD) defining the manner in which computers shared access to the medium. Under the CSMA/CD access method, a computer first "listens" to the medium to determine if it is idle. If so, it can begin transmission; if not, the computer waits until the medium becomes idle, plus an interframe gap period of 9.6 μs in a 10-Mbps Ethernet environment, to commence transmission. If circuitry detects the occurrence of a collision, the computer continues to transmit a jam signal until a minimum packet time is reached; the jam signal ensures that all receivers detect the collision. Next, the computer updates its transmission-attempt counter and compares it to the maximum number of transmission attempts allowed. If that number is reached, the computer aborts further transmission. If that number has not been reached, a random backoff period is computed and, when it expires, the computer begins anew by checking the medium. Of course, if no collision occurs the computer has a successful transmission and the frame reaches its destination.
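The backoff computation just described can be illustrated with a short sketch. The following Python fragment implements the truncated binary exponential backoff used by CSMA/CD; the slot time and attempt limits shown are the values used by 10-Mbps Ethernet, and the function names are illustrative rather than part of any standard.

```python
import random

SLOT_TIME_US = 51.2   # 10-Mbps Ethernet slot time (512 bit times)
MAX_ATTEMPTS = 16     # transmission is aborted after 16 attempts
BACKOFF_CAP = 10      # the backoff exponent is capped at 10

def backoff_slots(attempt: int) -> int:
    """After the nth collision, wait a random number of slot times
    drawn from the range 0 .. 2**min(n, 10) - 1."""
    k = min(attempt, BACKOFF_CAP)
    return random.randint(0, 2 ** k - 1)

def backoff_delay_us(attempt: int) -> float:
    """Convert the chosen slot count into microseconds."""
    return backoff_slots(attempt) * SLOT_TIME_US
```

Thus, after a third collision a station waits between 0 and 7 slot times before again sensing the medium; as collisions accumulate, the range of possible waits grows, spreading stations apart in time.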
IEEE Involvement The Institute of Electrical and Electronic Engineers (IEEE) was tasked by the American National Standards Institute (ANSI) to develop LAN standards. Perhaps recognizing a good standard as well as not wishing to literally reinvent the wheel, the IEEE used the DIX Ethernet standard as the basis for an evolving family of Ethernet standards defined by its 802.3 Committee.
Initial Standards In 1985 the IEEE published the first of a series of 802.3 standards that differed slightly from the original DIX Ethernet standard. This series of standards initially followed the format:

X BASE|BROAD L
where X represents the data rate in Mbps, BASE references baseband signaling, BROAD references broadband signaling, and L initially defined the maximum cabling distance in hundreds of meters. The first series of standards included 10BASE-5, which used low-loss coaxial cable and was also referred to as thick Ethernet; 10BASE-2, referred to as "thin" Ethernet (1989); and 10BASE-T, a low-cost, twisted-pair copper cable-based Ethernet (1990). Also during 1990 the IEEE published its 802.1D standard, which defined media access control layer bridging operations for Ethernet. During 1993 the IEEE published its 802.3j specification, which defines the transmission of Ethernet over fiber, commonly referred to as 10BASE-F.
Fast Ethernet Until 1995 IEEE Ethernet LANs operated at 10 Mbps. In that year the IEEE defined three versions of Fast Ethernet that operate at 100 Mbps, including 100BASE-TX, which operates over two pairs of wires, using one pair to transmit while the second pair is used to receive data; 100BASE-T4, which defines operations over four pairs of wires; and 100BASE-FX, which represents a version of Fast Ethernet that operates over optical fiber. Although the frame format remained unchanged, the signaling used by each version of Fast Ethernet differed.
Gigabit Ethernet In 1998 the IEEE released its 802.3z standard that defines Gigabit Ethernet over fiber and short-haul copper. This standard defines transmission over multi-mode fiber (1000BASE-SX), transmission over single-mode fiber (1000BASE-LX), and transmission over balanced copper cabling (1000BASE-CX). 1000BASE-SX represents a short-wavelength laser version of Ethernet that operates over multi-mode fiber using an 850-nm light wave. 1000BASE-LX represents the use of a long-wavelength laser that uses a 1300- or 1310-nm wavelength, while 1000BASE-CX represents the standard for Gigabit Ethernet flowing over balanced shielded twisted-pair cable. 1000BASE-SX has a transmission distance of 220 m when 62.5/125-μm fiber (where the first value represents the core diameter and the second value represents the cladding diameter, in microns) is used, while the use of 50/125-μm fiber permits the transmission distance to be extended to 500 m. In comparison, 1000BASE-LX is designed to work at distances up to 2 km, while 1000BASE-CX is limited to a transmission distance of 25 m. Shortly after the release of the 802.3z standard, in 1999 the IEEE released its 802.3ab standard, which defines the transmission of Gigabit Ethernet over four cable pairs. Referred to as 1000BASE-T, this specification permits simultaneous transmission and reception of data through the use of echo cancellation and a five-level Pulse Amplitude Modulation (PAM-5) technology.
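The 1000-Mbps figure for 1000BASE-T follows directly from its signaling arithmetic, which we can verify with a few lines of Python: each of the four pairs carries 125 million PAM-5 symbols per second, and each symbol conveys two data bits, with the remaining level capacity devoted to coding.

```python
pairs = 4                  # 1000BASE-T transmits on all four pairs at once
symbol_rate_mbaud = 125    # millions of symbols per second, per pair
data_bits_per_symbol = 2   # PAM-5 carries 2 data bits per symbol; the
                           # extra signaling capacity supports coding

throughput_mbps = pairs * symbol_rate_mbaud * data_bits_per_symbol
print(throughput_mbps)     # 1000
```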
Table 2.1 10 GbE LAN Physical Layers

  Physical Layer   Description
  10GBASE-SR       Short distance of 26 to 82 m over multi-mode fiber.
  10GBASE-LR       Uses 1300-nm single-mode fiber to reach distances of up to 10 km.
  10GBASE-ER       Extended range supports distances up to 40 km when transmitting at 1550 nm over single-mode fiber.
  10GBASE-ZR       Although not in the IEEE 802.3ae specification, several manufacturers introduced 80-km range interfaces under the 10GBASE-ZR label.
  10GBASE-LX4      Supports distances from 240 to 300 m over multi-mode fiber through the use of four separate laser sources operating at 3.125 Gbps in the range of 1300 nm on unique wavelengths. This standard also supports single-mode fiber at a distance of up to 10 km.
Wireless Ethernet In 1997 the IEEE published its initial 802.11 standard for wireless Ethernet. That standard was built upon over the following years via amendments that lifted the maximum operating speed of wireless transmission from 2 Mbps to 54 Mbps with the 802.11g standard, and to several multiples of 54 Mbps under the emerging 802.11n standard.
10 Gigabit Ethernet 10 GbE, defined by the IEEE 802.3ae specification in 2003, represents the fastest Ethernet standard as of 2007. The 10 GbE standard defines several physical layer standards for transmission over fiber. Table 2.1 provides a summary of the physical layers of the 10 GbE specification. Due to the relatively young age of this standard, it is difficult to predict which of these physical layer standards will eventually have the most usage.
Ethernet in the First Mile Perhaps the latest Ethernet standard other than a version of 10 GbE over copper, Ethernet in the First Mile (EFM) represents a collection of protocols, also known as 802.3ah, that was approved in 2004 and incorporated into the base IEEE 802.3 standard during 2005. EFM defines the use of Ethernet in an access network to a
carrier infrastructure, defining how Ethernet can be transmitted over voice-grade copper, long-wavelength single-strand and dual-strand fiber, and Point-to-Multi-Point (P2MP) fiber, which is also commonly referred to as Ethernet over Passive Optical Networks (EPON).
100 GbE Although probably a few years away from standardization, the IEEE in late 2006 formed a study group to target 100 GbE as the next version of Ethernet technology. The IEEE 802.3 study group, more formally referred to as the 802.3 Higher Speed Study Group, adopted a series of objectives that will serve to direct its effort. Those objectives include supporting a 100-Gbps operating rate over at least 100 m of multi-mode fiber and at least 10 km of single-mode fiber for full-duplex operation while preserving the Ethernet frame format and frame size standards. Thus, in the near future versions of Ethernet will operate from 10 Mbps to 100 Gbps using the same frame format and frame size standards.
Network Interfaces No discussion of Carrier Ethernet would be complete without a discussion of the two types of network interfaces:

1. The User-to-Network Interface (UNI) defines the connection between an end user and a network operator.
2. The Network-to-Network Interface (NNI) defines the connection between two network operators.

In a Carrier Ethernet environment where transmission is focused upon a metropolitan area, there will more than likely be a single network operator. Thus, the primary area of concern for most end users will be the UNI. In addition, even if a Carrier Ethernet service provides a gateway to a long-distance carrier, the NNI issues arise between the network operators themselves, enabling the end user to focus attention upon becoming compatible, from an access perspective, with the Carrier Ethernet operator.
Network Equipment The ability to transport data depends upon both the use of equipment and a transport medium. In this section we will focus our attention upon a range of communications equipment that represents the foundation of modern networking.
Network Interface Cards Network Interface Cards (NICs) were originally fabricated as adapter cards that were inserted into a system expansion slot in a PC. During the 1990s most PC manufacturers began to include the NIC as a chipset built into the motherboard. This chipset performs the same functions as a stand-alone adapter-based NIC. That is, when receiving serial data bit by bit, the chipset stores the frame in a buffer and examines its destination address. If the destination address matches the address of the chipset, further processing occurs; otherwise the frame is discarded. Concerning further processing, the chipset computes the Cyclic Redundancy Check (CRC) on the received frame and compares it to the frame's CRC field. If the two match, the frame is considered to have been received correctly; otherwise, an error of one or more bits is considered to have occurred and the frame is discarded, with any retransmission left to higher-layer protocols. Assuming the frame was correctly received, the chipset will pass the data field as a series of parallel bytes to the computer. Similarly, when data is to be transmitted onto the LAN, the chipset receives parallel data from the computer, computes and adds a CRC to form a frame, and, when the medium is free, transmits the frame as a series of serial bits. Currently, just about all PCs, servers, switches, and routers that require 10/100-Mbps operations include built-in chipsets to perform Ethernet operations. Concerning GbE, Intel and other manufacturers now offer a variety of Gigabit Ethernet controller chips that are being incorporated onto computer, switch, and router motherboards. Such chips are actually 10/100/1000 devices as they provide backward compatibility with 10- and 100-Mbps Ethernet. Although not yet standardized, PMC-Sierra and other vendors have announced 10 GbE chipsets.
Thus, in a few years we can expect a majority of Ethernet NICs will actually be fabricated as chips installed on the motherboard of different communications devices.
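The CRC comparison a NIC performs on receive can be sketched in a few lines of Python. The sketch below uses the CRC-32 polynomial that Ethernet employs, via the standard zlib module; it illustrates the append-and-verify principle only and does not reproduce the exact bit ordering of the frame check sequence on the wire.

```python
import zlib

def build_frame(payload: bytes) -> bytes:
    """On transmit: compute a 32-bit CRC over the payload and append it."""
    crc = zlib.crc32(payload)
    return payload + crc.to_bytes(4, "little")

def frame_ok(frame: bytes) -> bool:
    """On receive: recompute the CRC and compare it to the trailer."""
    payload, trailer = frame[:-4], frame[-4:]
    return zlib.crc32(payload) == int.from_bytes(trailer, "little")

frame = build_frame(b"example data field")
print(frame_ok(frame))             # True
print(frame_ok(b"X" + frame[1:]))  # False: a corrupted byte is caught
```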
Hubs The initial version of Ethernet used a coaxial cable bus-based structure that defined the minimum and maximum distances at which stations could be attached to the media. The IEEE standardized this version of Ethernet as 10BASE-5. Later another version of Ethernet that used a thin version of coaxial cable was standardized as 10BASE-2. Because coaxial cable is both more expensive and less flexible than twisted pair, developers looked for a method whereby they could use twisted-pair wire. In doing so they needed a mechanism by which stations on the network could become aware that another station was transmitting, so as to minimize collisions. The result was the development of a hub-based version of Ethernet that was
Figure 2.2 By repeating data flowing on one port onto all other ports, the hub functions as a bus
standardized by the IEEE as 10BASE-T, where "T" designates the use of twisted-pair copper wire.
Operation A hub contains multiple ports that collectively function similar to a bus. That is, when data from one station connected to the hub is transmitted to another station, the data is repeated to all stations connected to the hub. This concept (shown in Figure 2.2) enables each station connected to the hub to listen and determine if the shared medium is in use. Thus, by repeating transmission input on one port onto all other ports, the hub functions as a bus. In addition to introducing less expensive and more flexible wiring, the 10BASE-T cabling scheme used two wire pairs, so transmission could occur on one pair while the other pair simultaneously received data. A shared hub itself operates half-duplex, but this wiring made full-duplex transmission possible on dedicated connections, where the throughput gain was most noticeable for servers.
Passive versus Intelligent Hubs The previously described hub is commonly referred to as a passive hub as it simply repeats data entering one port onto all other ports. The next evolution of Ethernet resulted in the development of the intelligent hub, which included the ability of an administrator to monitor traffic flowing through the device as well as to configure each port in the hub. Because an intelligent hub enabled an administrator to manage certain hub features, this type of hub was also referred to as a managed hub.
Switches Both passive and intelligent hubs have a key limitation in that they only allow one data flow through the device at any point in time. Manufacturers recognized that this limitation could be overcome by incorporating buffer memory and a microprocessor into a hub, resulting in a new type of communications device known as a switch or switching hub. A switch originally was a Layer 2 device, examining the destination address of each Ethernet frame. Later, switches capable of looking “deeper” into each frame to determine, for example, the IP address carried within the IP header transported by an Ethernet frame were developed. This type of switch was capable of operating at Layer 2 or Layer 3. Today some switches are capable of operating through the application layer (Layer 7).
Operation The Layer 2 switching hub operates based upon the three Fs: flooding, forwarding, and filtering, using a backward learning process. That is, as a frame enters the switch, the device examines its source MAC address and the port on which it entered the switch. If this is the first occurrence of the source address, the address is entered into the switching table along with the port number on which the frame entered the switch. The switch also examines the destination MAC address as the decision criterion for forwarding the frame. If the destination address is not in the address table, the frame is flooded, meaning that the frame is output to all ports other than the port on which it entered the switch. If the destination address in the frame is matched in the address table and the port associated with the address is on a different segment, the frame is forwarded through the switch to the destination port. Otherwise, if the destination address is on the same segment connected to the switch port, the frame is filtered and does not flow through the switch.
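The three Fs can be captured in a compact sketch. The following Python class is a conceptual illustration, not any vendor's implementation: it maintains the switching table via backward learning and returns the list of ports out of which a frame should be sent.

```python
class LearningSwitch:
    """Sketch of the three Fs: flood, forward, filter."""

    def __init__(self):
        self.table = {}                # MAC address -> port number

    def handle(self, src, dst, in_port, num_ports):
        self.table[src] = in_port      # backward learning of the source
        out = self.table.get(dst)
        if out is None:
            # Flood: send out every port except the one it arrived on.
            return [p for p in range(num_ports) if p != in_port]
        if out == in_port:
            return []                  # Filter: destination on same segment
        return [out]                   # Forward to the learned port
```

Note that the first frame to an unknown destination is flooded, while the reply teaches the switch where that destination lives, so subsequent frames are forwarded out a single port.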
Advantages There are a variety of advantages associated with using a LAN switch. First, because the switch normally forwards frames only to the required port, it is more efficient than a conventional hub. A second key advantage is that the switch enables multiple data flows to occur through the device simultaneously. An example of this is shown in Figure 2.3 for an eight-port switch connected to two servers and six stations. In this configuration it is possible for two stations to simultaneously access two different servers. In comparison, a conventional hub would only allow one frame at a time to be transferred through the device.
Figure 2.3 A switch permits multiple data flows to occur simultaneously
Evolution Originally switching hubs were introduced to support 10BASE-T. Later, switching hubs evolved to support 100-Mbps Fast Ethernet and 1 GbE connections. Initially the cost of higher-speed ports precluded their use on every switch port; as a result, the higher-speed ports were primarily used to connect servers to a switch, with the term "fat pipe" used to reference the connection of several high-speed ports to a server. Because some ports operate at one speed and others operate at a higher speed, this type of switch is significantly more complex. In addition to containing buffer memory, the switch must support flow control to ensure that data transmitted into the switch over a high-speed connection arrives at, and is accepted by, a station connected to the switch at a lower operating rate. Other features that have been incorporated into switches include load balancing, typically implemented on a fat pipe consisting of two or more high-speed ports connected to a server, and backward speed compatibility from 1 GbE to 100 Mbps to 10 Mbps, allowing each port to operate at one of the three common Ethernet data rates.
Routers The router represents a device that forwards packets between networks. Routers have two or more interfaces, usually a LAN interface and a WAN interface. The LAN interface in an Ethernet environment may support the connection to a hub or switch at either a defined operating rate or at an automatically negotiated data rate.
Figure 2.4 A two-port router is commonly used to connect a LAN to the Internet
Operation Routers are initially configured so that the device knows the address assigned to each interface, the type of each interface, and its operating rate. This action enables a router to make forwarding decisions when a packet is received. In comparison, a switch is a plug-and-play device that learns addresses via a backward learning process and then floods, filters, or forwards frames. Figure 2.4 illustrates the connection of a router to a switch port on the router's Ethernet (e0) interface. This router is shown providing a connection to the Internet via its serial (s0) port.
Advantages In Figure 2.4, note that the key advantage associated with using the router to connect LAN stations to the Internet is that traffic from all stations flows through the router. This eliminates the need for multiple connections and additional equipment. At the Internet the router’s WAN connection is typically terminated into an ISP router port, which examines the destination address of each packet against information in its forwarding table to determine where to forward packets.
Capabilities Similar to most communications equipment, routers are manufactured with different capabilities. Some devices can be considerably expanded; others have minimal, if any, expansion capability. Most routers support the use of access lists that can be
used to filter inbound and outbound traffic as well as provide a measure of security to users located on a LAN behind the router. Routers typically support several protocols, such as Ethernet at Layer 2 and IP at Layer 3. In addition, routers support the Internet Control Message Protocol (ICMP), which is defined in Request For Comments (RFC) 792. ICMP enables routers to exchange packets that can contain error, control, and informational messages. Two of the more commonly used ICMP messages are Type 8 and Type 0, better known as echo request and echo reply, which form the basis for the ping utility program used to determine if a specific IP address is reachable. Although routers have been enhanced so that their access lists can provide a high level of packet filtering, they are no substitute for a sophisticated firewall.
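To make the ICMP message structure concrete, the following Python sketch builds a Type 8 echo request as defined in RFC 792: a type byte, a code byte, a 16-bit one's-complement checksum, and identifier and sequence fields, followed by arbitrary payload data. Actually transmitting the packet would require a raw socket and administrative privileges, so only message construction is shown.

```python
import struct

def icmp_checksum(data: bytes) -> int:
    """One's-complement sum of 16-bit words, per RFC 792 / RFC 1071."""
    if len(data) % 2:
        data += b"\x00"                       # pad odd-length data
    total = sum(struct.unpack(f"!{len(data) // 2}H", data))
    total = (total & 0xFFFF) + (total >> 16)  # fold carries back in
    total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def echo_request(ident: int, seq: int, payload: bytes = b"ping") -> bytes:
    """Build a Type 8 (echo request) ICMP message."""
    header = struct.pack("!BBHHH", 8, 0, 0, ident, seq)  # checksum = 0
    csum = icmp_checksum(header + payload)
    return struct.pack("!BBHHH", 8, 0, csum, ident, seq) + payload
```

A useful property of the one's-complement checksum is that recomputing it over the finished message, checksum field included, yields zero, which is exactly the test a receiver applies.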
Firewall A firewall is a hardware- or software-based system designed to secure a network. In doing so, the firewall operates via a series of customized rules established during the configuration of the device.
Placement At a minimum, a firewall-based hardware system has two ports. One port is connected to an internal network that is to be protected; the second port is connected to the unprotected side of the network, such as a router to the Internet. Figure 2.5 illustrates an example of the use of a firewall to protect a network from data received via the Internet.
Figure 2.5 Using a firewall to protect an internal network
Operation Firewalls perform several types of operations. Some, such as packet filtering, are similar to operations that can be performed by routers; others, such as functioning as a proxy server, are not usually performed by a router. In addition, the firewall typically has the ability to look deep into packets to examine data at higher layers of the OSI Reference Model than a router does. Thus, the firewall can be considered to provide a higher level of network security than a router.
Techniques Firewalls can be configured to look into higher-layer headers within packets. As the firewall examines packet headers it can compare data from one header with prior headers, looking for repetitive actions, such as a sequence of attempted log-ons. Thus, the firewall can perform an inspection of both data and headers that is usually beyond the capability of a router. Other functions performed by a firewall include operating as an application gateway, as a circuit-level gateway, and as a proxy server. When operating as an application gateway, the firewall applies predefined security mechanisms to specific applications, such as Telnet. When functioning as a circuit-level gateway, the firewall applies predefined security mechanisms when a connection is established; once the connection is established, the firewall allows packets to flow between source and destination without further checking. The third major function most firewalls can provide is operating as a proxy server. In doing so, the firewall intercepts all packets entering and leaving the protected network, performing an address translation to hide the actual addresses of stations on the private network. This eliminates the ability of persons on the public Internet to directly attack stations on the private network.
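The address-hiding behavior of a proxy can be sketched as a simple translation table. The class below is a conceptual illustration (the addresses and the sequential port-allocation policy are invented for the example): outbound packets have their private source address replaced by the firewall's public address, and replies are matched against the stored mapping, so unsolicited inbound traffic finds no entry and is dropped.

```python
class ProxyNAT:
    """Sketch of the outbound address translation a proxy performs."""

    def __init__(self, public_ip: str):
        self.public_ip = public_ip
        self.next_port = 10000
        self.map = {}                       # public port -> (private ip, port)

    def outbound(self, private_ip: str, private_port: int):
        """Rewrite an outbound packet's source; remember the mapping."""
        public_port = self.next_port
        self.next_port += 1
        self.map[public_port] = (private_ip, private_port)
        return self.public_ip, public_port  # what the Internet sees

    def inbound(self, public_port: int):
        """Translate a reply back; None means unsolicited, so drop it."""
        return self.map.get(public_port)
```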
VPN Appliances The last category of communications equipment that deserves mention is the VPN (Virtual Private Network) appliance. In this section we will briefly examine the operation and use of this network device.
Operation The VPN appliance is a hardware device that is normally located at a central site facility and is used to terminate remote access VPN tunnels. Some VPN appliances are designed to work on a site-to-site basis; others are designed to terminate up to a predefined number of remote access users, such as 100, 250, or several thousand.
Advantages Several advantages associated with the use of a VPN appliance explain its popularity. First, most are plug-and-play devices that can be connected at the user's central site without requiring skilled personnel to configure the product. Second, most appliances support Secure Sockets Layer (SSL) transport, which is built into all modern Web browsers. Thus, many VPN appliances can be used via remote access without requiring users to install and become familiar with a new software product, saving both training time and software cost. In addition to serving as the VPN endpoint, many VPN appliances include a series of additional features that can facilitate management control of computers. For example, some VPN appliances can be set to check a PC's network and device settings, scan the device to detect the presence of malware such as key-logging software, and verify the presence and operation of applications, including any special endpoint security program that may be required.
Combining Functions In addition to terminating VPN tunnels, some VPN appliances are marketed as a firewall/VPN appliance, combining both functions into a single hardware product. Other VPN appliances support both the Point-to-Point Tunneling Protocol (PPTP) and IPSec. PPTP, developed by Microsoft and a few other vendors, uses Generic Routing Encapsulation (GRE), which results in IP packets being wrapped in GRE packets prior to their transmission through a tunnel. IPSec represents a suite of IP-layer protocols and standards for both authenticating and encrypting IP packets.
Network Facilities In concluding this chapter we will turn our attention to obtaining a basic understanding of the primary types of WAN long-distance transmission facilities used to move voice, data, and video. Doing so will provide us with the ability to note how Carrier Ethernet can provide an alternative to the use of some common WAN transmission facilities.
T1 As the use of the Internet evolved it increased the demand for what eventually became one of the most commonly employed digital transmission facilities: the T1 line. The T1 line uses two pairs of normal twisted wires that are the same as those used in most homes and offices for telephone service. The key difference is the fact
that the T1 line is designed to transport 24 voice conversations, each one digitized using Pulse Code Modulation (PCM) at 64 Kbps.

Table 2.2 Common Digital T-Carriers

  Carrier   Voice Channels (DS0s)   Data Rate (Mbps)
  T1        24                      1.544
  T2        96                      6.312
  T3        672                     44.736
  T4        4032                    274.176
The DS0 Time Slot Each digitized voice conversation occupies one time slot, referred to as a DS0. The equipment used to place 24 digitized voice conversations onto the line takes 8 bits from each of the 24 conversations and adds a framing bit, resulting in a total of 193 bits per frame. Because sampling occurs 8000 times per second, the T1 line operates at 8000 * 193 bits, or 1.544 Mbps, of which the digital data requires 1.536 Mbps while framing data requires 8 Kbps.
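The T1 arithmetic above is easily verified. The following Python fragment reproduces the frame and rate calculations:

```python
bits_per_slot = 8          # 8 bits extracted from each DS0 per sample
channels = 24              # 24 DS0 time slots per frame
framing_bits = 1           # one framing bit per frame
samples_per_second = 8000  # PCM sampling rate

frame_bits = channels * bits_per_slot + framing_bits   # 193 bits
line_rate_bps = frame_bits * samples_per_second        # 1,544,000 bps
payload_bps = channels * bits_per_slot * samples_per_second
framing_bps = framing_bits * samples_per_second

print(line_rate_bps, payload_bps, framing_bps)  # 1544000 1536000 8000
```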
T-Carrier Hierarchy Over the years a hierarchy of T-carrier circuits was developed by communications carriers, of which the T1 and T3 carriers were made available for use by customers, while T2 and T4 carriers were developed for internal use. Table 2.2 provides a comparison of the four T-carrier digital circuits, including the maximum number of voice conversations each can transport and its data rate, with the latter including any framing bits needed by the T-carrier.
Channelized versus Non-Channelized Although the T1 line was developed as a mechanism to reduce the wiring requirements of organizations by multiplexing 24 digitized voice conversations onto a wire pair, with the growth in the use of the Internet it soon became a popular mechanism to connect LANs to the Internet. Instead of providing each workstation with a modem and dial line, organizations used routers and T1 lines to connect LANs to the Internet. However, because the T1 line is now used as a data pipe, it is ordered as a non-channelized T1: the line is not subdivided into 24 DS0s that are sampled by extracting 8 bits per sample and adding a framing bit
8000 times per second. Instead, the T1 line is used to transport 1.536 Mbps of data and 8000 bps of framing data.
SONET SONET (Synchronous Optical Network) represents a standard for connecting fiber-optic transmission systems. Originally proposed by Bellcore, a consortium established by the Regional Bell Operating Companies at the time of their divestiture from AT&T and now known as Telcordia Technologies, SONET became an ANSI standard. Today SONET defines the interface standards and a hierarchy of interface rates that enable data streams to be multiplexed and transported over fiber-optic transmission systems. SONET defines Optical Carrier (OC) levels from 51.84 Mbps (OC-1) to 9.95 Gbps (OC-192) and beyond. The international equivalent of SONET was standardized by the International Telecommunication Union (ITU) as the Synchronous Digital Hierarchy (SDH).
Optical Carrier Levels Optical Carrier levels define a range of digital signals that can be transported on a SONET fiber-optic network. In general, the line rate of an OC-n is computed as n * 51.84 Mbps.
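This relationship is simple enough to compute directly. The short Python fragment below derives the line rates of the common OC levels from the 51.84-Mbps base rate:

```python
OC1_RATE_MBPS = 51.84      # SONET base (STS-1) line rate

def oc_line_rate(n: int) -> float:
    """Line rate of OC-n in Mbps: n multiples of the 51.84-Mbps base."""
    return n * OC1_RATE_MBPS

for n in (1, 3, 12, 24, 48, 96, 192, 768):
    print(f"OC-{n}: {oc_line_rate(n):,.2f} Mbps")
```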
OC-1 OC-1 represents the SONET base rate, transporting a payload of 50.112 Mbps plus an overhead of 1.728 Mbps, resulting in a total line rate of 51.84 Mbps.
OC-3 OC-3 transports a payload of 148.608 Mbps with an overhead of 6.912 Mbps, resulting in a transmission speed of 155.52 Mbps. The OC-3 line rate is the minimum rate defined in SDH, where it is referred to as STM-1. When an OC-3 transports data from a single source rather than multiplexing lower-rate tributaries, the letter c (for concatenated) is appended, and the signal is referred to as OC-3c.
OC-3c An OC-3c signal concatenates three OC-1 frames. Such frames carry only a limited amount of overhead, resulting in a payload rate of 149.76 Mbps and an overhead of 5.76 Mbps.
OC-12 An OC-12 represents an optical transmission system that has a payload of 601.344 Mbps, and an overhead of 20.736 Mbps. Thus, the line operating rate is 622.08 Mbps.
OC-24 An OC-24 represents an optical transmission system that transports a payload of 1202.688 Mbps and has an overhead of 41.472 Mbps. Thus, the transmission speed of an OC-24 line is 1244.16 Mbps.
OC-48

The OC-48 represents a network line with a transmission rate of 2488.32 Mbps. This line rate includes a payload of 2405.376 Mbps and an overhead of 82.944 Mbps. Currently OC-12 and OC-48 lines are used by many regional ISPs as backbones and for connections at peering points where data is exchanged between networks.
OC-96

The OC-96 represents a network line that transports a payload of 4810.752 Mbps and has an overhead of 165.888 Mbps. Thus, the transmission speed is 4976.64 Mbps. The commercial use of OC-96 is rare, with organizations that require a higher speed typically selecting an OC-192 line.
OC-192

The OC-192 network line represents the fastest connection commonly used on the Internet. This line has a transmission speed of 9953.28 Mbps, which includes a payload of 9621.504 Mbps and an overhead of 331.776 Mbps. Because OC-192 operates near 10 Gbps, a variant of 10 GbE, called WAN-PHY, was developed to interoperate with OC-192. However, it is important to note that the more common version of 10 GbE, which is referred to as LAN-PHY, is not compatible with OC-192. In Chapter 3, we will examine both versions of 10 GbE.
OC-768

An OC-768 network line has a transmission speed of 39,813.12 Mbps, comprising a payload of 38,486.016 Mbps and an overhead of 1,327.104 Mbps. The primary use of OC-768, as well as of two additional higher-speed optical carriers known as OC-1536 and OC-3072, is for current or proposed research, as their data rates exceed almost all current requirements. Table 2.3 summarizes the SONET OC levels, including their payload and line operating rates. In addition, this table indicates the SONET frame format and equivalent SDH level and frame format for each OC level.

Data Networking Concepts n 41

Table 2.3 SONET/SDH Data Rates

SONET OC Level   SONET Frame Format   SDH Frame Format   Payload (Kbps)   Line Rate (Kbps)
OC-1             STS-1                STM-0              48,960           51,840
OC-3             STS-3                STM-1              150,336          155,520
OC-12            STS-12               STM-4              601,344          622,080
OC-24            STS-24               STM-8              1,202,688        1,244,160
OC-48            STS-48               STM-16             2,405,376        2,488,320
OC-96            STS-96               STM-32             4,810,752        4,976,640
OC-192           STS-192              STM-64             9,621,504        9,953,280
OC-768           STS-768              STM-256            38,486,016       39,813,120
OC-1536          STS-1536             STM-512            76,972,032       79,626,240
OC-3072          STS-3072             STM-1024           153,944,064      159,252,480
Framing

When we examine most transport protocols, such as IP and Ethernet, we note that the packet or frame consists of one or more headers and a payload, possibly followed by a trailer, such as a CRC. In a SONET environment the header is referred to as the overhead. However, unlike most transport protocols, the overhead is interleaved with the payload. Figure 2.6 illustrates the basic STS-1 SONET frame format. Note that the frame consists of 810 bytes, of which 27 bytes represent overhead and 783 bytes represent the payload. The frame is transmitted as 3 bytes of overhead followed by 87 bytes of payload, nine times over, until 810 bytes have been transmitted. At a rate of 8000 frames per second the line rate becomes
9 rows × 90 bytes/row × 8 bits/byte × 8000 frames/second = 51.84 Mbps
Figure 2.6 STS-1 frame format (9 rows of 90 bytes each: 3 bytes of transport overhead followed by an 87-byte synchronous payload envelope per row)
The 51.84 Mbps line rate is known as the STS-1 signal rate, which is the electrical rate. The optical equivalent of STS-1 is OC-1, which is used for transmission across the fiber. The transmission of 8000 frames per second represents an interval of 125 μs between frames. This results in the same byte position in each frame occurring every 125 μs. If 1 byte is extracted from the bit stream every 125 μs, this results in a data rate of 8 bits per 125 μs, or 64 Kbps, which represents the basic PCM voice digitization rate carried in a DS0. This relationship enables SONET to extract low-rate channels or data streams from high-rate streams by simply extracting bytes at regular time intervals, enabling the use of multiplexers at points along a SONET path to add or drop data streams. When you compare the overhead of SONET to that of a T-carrier, you will note that SONET’s overhead is considerably higher. The reason for this is that SONET provides several layers of overhead information as DS1 (T-carrier) signals are mapped and carried on a path. That path consists of sections and lines, as illustrated in Figure 2.7. A section consists of a span between equipment such as an add/drop multiplexer and a regenerator, with section overhead used for communications between adjacent network elements. In comparison, a line can consist of multiple sections and represents the STS-N signal between multiplexers, with line overhead conveying information about Operations, Administration, Maintenance, and Provisioning (OAM&P) as well as pointers to the payload, a parity code, synchronization status, and other information. Moving up to the path level, path overhead flows on an end-to-end basis to provide support for performance monitoring, path status, path tracing, and other information.
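The framing arithmetic can be verified with a few lines (illustrative only):

```python
# An STS-1 frame is 9 rows of 90 bytes: 3 bytes of transport overhead
# plus 87 bytes of synchronous payload envelope per row.
FRAME_BYTES = 9 * 90          # 810 bytes per frame
OVERHEAD_BYTES = 9 * 3        # 27 bytes of transport overhead per frame
FRAMES_PER_SECOND = 8000      # one frame every 125 microseconds

line_rate_bps = FRAME_BYTES * 8 * FRAMES_PER_SECOND
payload_rate_bps = (FRAME_BYTES - OVERHEAD_BYTES) * 8 * FRAMES_PER_SECOND

# Extracting the same single byte position from every frame yields a DS0:
ds0_bps = 1 * 8 * FRAMES_PER_SECOND

print(line_rate_bps)     # 51840000 -> 51.84 Mbps
print(payload_rate_bps)  # 50112000 -> 50.112 Mbps
print(ds0_bps)           # 64000 -> 64 Kbps
```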
Figure 2.7 SONET overhead (REG = regenerator; ADM = add/drop multiplexer; DCS = digital cross-connect system; PTE = path terminating equipment)
Utilization

Although originally designed for transporting public telephone network calls, SONET has evolved into a transport mechanism for voice, video, and data, with spans ranging from a few kilometers to hundreds of kilometers. One of the more interesting characteristics of SONET is its support of a ring topology, which enables a self-healing network that can be extremely important when communications reliability and availability are concerns.
SONET Rings

A key ability of SONET is its support of a ring topology. By placing multiple fibers along the ring, together with fault sensors, it becomes possible for any impairment to the operating ring to be compensated for by using the alternate fiber. Figure 2.8 illustrates an example of a redundant SONET ring. In this example, the inner ring represents the working ring while the outer ring represents the standby, or protection, ring. If the working ring fails, sensors detect the absence of light and switch traffic onto the standby ring, which then becomes the working ring.

Figure 2.8 A SONET ring provides a self-healing capability to compensate for an outage on the working ring
Chapter 3
The Flavors of Ethernet

From the title of this chapter a person not well versed in technology might believe that our focus is upon a type of ice cream, soda, or another product that we eat or drink. In actuality, Ethernet resembles certain products found in food stores in that there are a number of “flavors” of Ethernet technology that were developed over the past 40 years. In this chapter we will focus our attention upon the major “flavors” of Ethernet technology, commencing with a brief review of Dr. Metcalfe’s design and the DIX standard. After focusing our attention upon the IEEE family of 10 and 100 Mbps standards we will discuss the role of auto-negotiation, which evolved due to the development of several operating speeds beyond Ethernet’s initial 10 Mbps data rate. Once this is accomplished we will turn our attention to Gigabit Ethernet, 10 Gigabit Ethernet, and the evolving 100 Gigabit Ethernet family of products. In addition, we will discuss the Ethernet in the First Mile standard and how it relates to the use of Carrier Ethernet. We will conclude this chapter by providing readers with an overview of the technology associated with each “flavor” of Ethernet.
Metcalfe’s Original Design

In 1972, while working at the Xerox Palo Alto Research Center (PARC), Dr. Robert Metcalfe developed the first experimental Ethernet system. Because Dr. Metcalfe was familiar with the Aloha radio network in Hawaii, which used a technique for sharing a broadcast frequency similar to the manner in which Ethernet stations listen to the medium prior to transmitting, the initial network was known as the Alto Aloha Network. However, due to some confusion the name was changed to Ethernet in 1973 to make it clear that this network had evolved beyond the Hawaiian Aloha system and could support any computer connected to the network. In that same year Xerox applied for a patent in the name of Dr. Metcalfe and his team of researchers, which was granted in 1977.

Figure 3.1 The initial Ethernet diagram (station taps, interface cable, interface controllers, terminator, and “the Ether”)
Bus-Based Network Structure Figure 3.1 illustrates the diagram Dr. Metcalfe drew to describe the bus-based Ethernet system that originally operated at 2.94 Mbps via a coaxial cable bus. Note that while some of the terminology eventually changed, the structure of the network based upon the use of Carrier Sense Multiple Access/Collision Detection (CSMA/CD) remained the same in the original Ethernet standard published in 1980 by the DEC–Intel–Xerox (DIX) vendor consortium as well as when adopted by the IEEE as the first 802.3 standard, known as 10BASE-5 in 1985.
The DIX Standard

As a mechanism to hasten the adoption of Ethernet, the DIX consortium developed what is known as the DIX Ethernet standard. This standard was published in 1980 with the title “The Ethernet, A Local Area Network: Data Link Layer and Physical Layer Specification.”
DIX Version 2.0 The original DIX standard used 2-byte addressing, which proved insufficient for universal addressing, resulting in a revision of Ethernet that allowed 2- and 6-byte addressing as well as operations at 10 Mbps. Referred to as DIX version 2.0, this new standard was also known as Ethernet II.
Figure 3.2 The DIX II Ethernet frame (Preamble, Destination Address, Source Address, Type, Data, Frame Check Sequence)
Type Field

Under DIX version 2.0 the Ethernet frame has a Type field that indicates the type of data transported in the frame. Figure 3.2 illustrates the DIX II Ethernet frame format. Although we will discuss the fields in different Ethernet frames in considerable detail in Chapter 4, in this chapter we will note how the frames evolved. Returning to the DIX II standard, the Type field was more formally referred to as the Ethertype field. The purpose of this field is to indicate the type of data contained in the frame. By convention a value of 1536 decimal (0600 hex) or greater was used to indicate the use of the DIX frame format, which has an Ethertype field. Because a version of the original Xerox specification used a 16-bit length field, values between 0 and 1500 were used to indicate the use of the original Ethernet frame format, which was carried over to the initial IEEE 802.3 frame. Table 3.1 provides examples of the Ethertype values for some common protocols.

Table 3.1 Examples of Ethertype Protocol Values

Ethertype (Hex)   Protocol
0800              IPv4
0806              ARP
8035              Reverse ARP
8100              IEEE 802.1Q-tagged frame
86DD              IPv6
8847              MPLS unicast
8848              MPLS multicast
8863              PPPoE discovery stage
8864              PPPoE session stage
888E              EAP over LAN (IEEE 802.1X)
88A2              ATA over Ethernet
88E5              MAC security (IEEE 802.1AE)
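How the 2-byte field is read in practice can be sketched as follows (a toy decoder with a made-up frame; the helper name and the sample bytes are illustrative, not from the text):

```python
import struct

ETHERTYPES = {0x0800: "IPv4", 0x0806: "ARP",
              0x8100: "IEEE 802.1Q-tagged frame", 0x86DD: "IPv6"}

def ethertype_of(frame: bytes) -> str:
    """Interpret the 2-byte field following the two 6-byte MAC addresses."""
    (value,) = struct.unpack("!H", frame[12:14])   # big-endian 16-bit field
    if value >= 0x0600:                            # 1536 decimal or greater
        return ETHERTYPES.get(value, f"unknown Ethertype 0x{value:04X}")
    return f"length field ({value} bytes)"         # original 802.3 framing

# A made-up header: broadcast destination, arbitrary source, then 0x0800 (IPv4).
frame = bytes.fromhex("ffffffffffff" + "0200deadbeef" + "0800")
print(ethertype_of(frame))   # IPv4
```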
IEEE 802.3 Standardization

One of the problems associated with the DIX standard was the fact that it was controlled by a vendor consortium. This made it difficult, if not impossible, for the DIX standard to become an international standard. Because ANSI had assigned the IEEE the task of developing formal standards for all LAN technologies, the vendors behind DIX allowed the IEEE to take over their effort. The IEEE formed its 802 Committee to work on different LAN technologies, with the 802.3 Committee focused on Ethernet.
Division of Effort

Beginning in the mid-1980s, several standards bodies became involved in developing the standards that make the different versions of Ethernet a reality. Although the scope of the IEEE effort is the data link layer, which it subdivided into Logical Link Control (LLC) and Media Access Control (MAC) sublayers, other organizations are also involved in setting standards.
Physical Layer Effort

At the physical layer the Telecommunications Industry Association (TIA) and the Electronic Industries Association (EIA) joined together to develop standards for different media, such as the well-known series of CAT (category) copper wiring.
Cable Characteristics

Twisted-pair cable used for communications consists of four wire pairs bound within an insulating jacket. As a signal is applied to one pair, an electromagnetic field is produced that results in crosstalk occurring on the other three wire pairs. Because the signal power is greatest at its origin, near-end crosstalk (NEXT) is greater than far-end crosstalk (FEXT). In addition to NEXT and FEXT, other impairments occur. First, as a signal propagates down a cable it loses power, which is referred to as attenuation or insertion loss. Next, when you inject a signal on a pair in the center cable of a bundle of cables, you obtain two impairments outside that cable, referred to as alien NEXT (ANEXT) and alien FEXT (AFEXT). ANEXT represents the impairment that appears at the near end of the other pairs, and AFEXT represents the impairment that occurs at the far end. The performance parameters for UTP cable consider the resistance of the cable to the flow of electrons, the characteristic impedance, attenuation, crosstalk, and propagation delay. Over the years a series of cable categories were standardized that govern the number of twists in twisted-pair cable as well as the thickness of the copper conductor and other cable parameters. Such standards are commonly referred to as the following cable categories, or CAT cables:

Table 3.2 EIA/TIA-568B UTP Copper Cable

Type    Description                                                        Use
CAT1    Supports signaling up to 1 MHz                                     Telephone communications
CAT2    Supports data rates up to 4 Mbps                                   Token-Ring
CAT3    Supports signaling rates up to 16 MHz, data rates up to 10 Mbps    10BASE-T
CAT4    Supports signaling rates up to 20 MHz, data rates up to 16 Mbps    10BASE-T, 10BASE-TX
CAT5    Supports signaling rates up to 100 MHz, data rates up to 155 Mbps  10BASE-T, 100BASE-T
CAT5e   Replaced CAT5 cable, with enhanced far-end crosstalk performance   10BASE-T, 100BASE-T, 1000BASE-T
CAT6    Supports signaling rates up to 250 MHz                             10BASE-T, 100BASE-T, 1000BASE-T
CAT7    Supports signaling rates up to 600 MHz                             10BASE-T, 100BASE-T, 1000BASE-T, 10GBASE-T

n Copper cable: The TIA and EIA joined to define a pair of rating systems for unshielded twisted-pair (UTP) cable and connecting hardware. Referred to as the TIA/EIA-568 standard, this standard is actually two, labeled TIA/EIA-568A and TIA/EIA-568B. Both reference the characteristics of the four twisted-pair wirings; the only difference between the two is that pairs 2 and 3 are interchanged going from A to B. Table 3.2 lists the types of EIA/TIA-568B UTP copper cable. In examining the different categories of UTP cable listed in Table 3.2, note that because 1000BASE-T requires cable rated for operation at 125 MHz, the original CAT5 cable is not suitable for Gigabit Ethernet over UTP; however, CAT5e, CAT6, and the unofficial CAT7 can be used.

n Fiber-optic cable: Because this section is focused on cable categories, this author would be remiss if he did not also discuss the three basic types of fiber used by versions of Ethernet. Those types are single-mode, multi-mode graded-index, and multi-mode step-index. Each of the three fibers has a similar structure, which is illustrated in Figure 3.3. An optical fiber is a cylindrical waveguide that enables light to flow along its axis. The fiber consists of a core that is normally glass (sometimes plastic) and is surrounded by a cladding layer. Then, one or more optical fibers are encapsulated in a buffer that may be in the form of a polymer coating. The refractive index of the core is greater than that of the cladding, so that the signal, in the form of pulses of light, remains in the core.

Figure 3.3 Structure of optical fiber (core, cladding, and buffer)

− Single-mode fiber: A single-mode fiber (SMF) has a relatively narrow core diameter of between 7 and 9 μm, through which only one mode of transmission will propagate. A wavelength of 1310 or 1550 nm is typically used with SMF. Due to the narrow core, the light source used is a laser. Figure 3.4 illustrates how light is passed through a single-mode fiber.

Figure 3.4 Single-mode fiber light transmission

− Multi-mode fiber: A multi-mode fiber has a larger core than a single-mode fiber. This core allows light to flow on multiple paths along the core, allowing a narrow input light pulse to be smeared into a wide output pulse. This dispersion degrades the rate at which light pulses can be transmitted.

n Step-index: A step-index multi-mode fiber has a large core that can be up to 100 μm in diameter. This results in some light rays zigzagging as they bounce off the cladding, while other light rays traverse the core directly. The alternative paths taken by light rays, known as modes, arrive separately at the destination, resulting in a transmitted pulse losing its shape, as shown in Figure 3.5. Due to the high degree of modal dispersion, step-index fiber is rarely used in LANs.

Figure 3.5 Step-index multi-mode fiber

n Graded-index: A graded-index multi-mode fiber contains a core in which the refractive index slowly diminishes from the center towards the cladding. The higher refractive index at the center results in light rays moving down the axis advancing more slowly than those near the cladding. Thus, the light rays tend to reach the destination at nearly the same time and with a minimum of smearing, as shown in Figure 3.6. Today almost all multi-mode LAN fiber is graded-index, with a fiber core of 50 or 62.5 μm.

Figure 3.6 Graded-index multi-mode fiber
Network Layer Effort

At and above the network layer various organizations are responsible for defining standards. For example, in a TCP/IP environment the Internet Engineering Task Force (IETF) is responsible for defining Layer 3 and above, as well as how IP and MAC addresses are resolved via the Address Resolution Protocol (ARP). Figure 3.7 illustrates where Ethernet and related standards fit within the Open Systems Interconnection (OSI) model developed by the International Organization for Standardization (ISO). Note that the dashed line into the physical layer represents the fact that the IEEE 802 standards also define many aspects of operation at the physical layer.
Data Link Layer

The primary task of the data link layer is the transfer of data from the network layer of one device to the network layer of another device. In addition, the data link layer converts the raw bit stream of the physical layer into groups of bits that form bytes, which in turn form the fields in a frame. In a LAN environment the IEEE subdivided the data link layer into logical link control and media access control sublayers.

Figure 3.7 Relationship of Ethernet to OSI (in the ISO OSI reference model, the IETF defines the transport and network layers, e.g., TCP/UDP and IP/ICMP/ARP; the IEEE defines the logical link control and media access control sublayers of the data link layer; and the TIA/EIA define the physical media)
Figure 3.8 The IEEE 802 family of standards (the 802.2 LLC sublayer is shared, while 802.3, 802.4, 802.5, 802.11, and other working groups define MAC and physical layers under the 802.1 umbrella)
Logical Link Control The LLC represents the higher portion of the data link layer. This sublayer provides an interface between the Ethernet MAC and upper layers in the protocol stack of workstations on a LAN. By making LLC independent of the MAC, a uniform logical link control became available for use by Ethernet, Token-Ring, wireless Ethernet, and other types of local area networks. The LLC sublayer is defined by IEEE 802.2 standards, as illustrated in Figure 3.8.
Media Access Control The transmission and reception of data is controlled via the MAC layer. Originally, coaxial-based Ethernet was limited to half-duplex operations; however, most modern Ethernet networks can operate in either half or full duplex dependent upon support from the physical layer. The MAC layer receives data from the upper layers in the protocol stack, encapsulating data within the Ethernet frame format. In the opposite direction, frames received are de-capsulated and data is passed to the upper layers. Because the MAC layer is independent of the physical layer it only needs to know the speed of that layer and does not care about the type of physical layer in use.
MAC Protocols There are two MAC protocols defined for most versions of Ethernet: half duplex and full duplex. Half duplex refers to data transmission in one direction at a time and uses the CSMA/CD protocol. Half-duplex operations were the only mode of transmission supported by bus-based Ethernet and networks using repeaters that needed to detect collisions. With the use of unshielded twisted pair and, later, multiple-strand fiber devices it became possible to transmit and receive simultaneously, a technique referred to as full-duplex transmission. Thus, full-duplex mode allows two stations simultaneously to exchange data over a point-to-point connection that provides independent transmit and receive paths. In effect, the aggregate throughput of the connection is effectively doubled; however, because most workstations
rarely transmit and receive simultaneously, only server connections to a LAN can truly benefit from a full-duplex connection.
IEEE Changes from DIX The IEEE 802.3 Committee used the DIX II standard as the basis for developing its first Ethernet standard. That standard, which was published in 1985 with the title “IEEE 802.3 Carrier Sense Multiple Access with Collision Detection (CSMA/ CD) Access Method and Physical Layer Specifications,” represents the first official and internationally recognized Ethernet standard. The IEEE version of DIX incorporated both functional and terminology changes. From a functional perspective the IEEE made two changes to the Ethernet frame format. First, the 64-bit (8-byte) preamble field that is used for synchronization was changed from a repeating pattern of 0s and 1s so that it ended with the binary sequence 11, for a 1-bit change. A more significant change was the replacement of the Ethertype field with a length field. The DIX standard did not need a length field because the vendor protocols it carried, such as IP and IPX, had their own length fields. However, the 802.3 Committee wanted a standard that did not depend upon the proper operation of other programs. Thus, the 2-byte type field was replaced with a 2-byte length field.
802.3 Frame Format

Figure 3.9 illustrates the IEEE 802.3 frame format. Note that the last eight bits of the DIX preamble field, whose final bit was modified by the IEEE, now form a separate field. That field is referred to as the Start of Frame Delimiter (SFD).
Length Field

The IEEE 802.3 Length field takes on one of two meanings, depending upon its numeric value. When the value of this field is greater than or equal to 1536 (600 hex), the field is an Ethertype field that defines the protocol transported; that definition is obtained from the IEEE Ethertype field registrar. When the value of this field is between 0 and 1500, it denotes the length of the data field in the Ethernet frame. Thus, there is no conflict between the DIX and 802.3 standards.

Figure 3.9 The IEEE 802.3 frame (Preamble, 7 bytes; SFD, 1 byte; Destination Address, 6 bytes; Source Address, 6 bytes; Length, 2 bytes; Data, 46–1500 bytes; Frame Check Sequence, 4 bytes)
Sub-Network Access Protocol Because the use of a length field precludes multiplexing different protocols on an Ethernet LAN, the IEEE developed the Sub-Network Access Protocol (SNAP). In Chapter 4, when we examine Ethernet frames in more detail we will also look at the Ethernet SNAP frame format and how it provides the ability to multiplex different protocols on an Ethernet network.
The CSMA/CD Protocol

Ethernet uses CSMA/CD to allow stations to access the medium. The carrier sense portion of the protocol references the fact that, prior to transmitting, a station checks the medium to determine if another station is using it. If not, the station with data to transmit can begin to send data. The multiple access portion of the protocol means that every station is connected to the network bus, which forms a single data path that each station can access. To understand the rationale for the collision detection portion of the CSMA/CD protocol, assume two stations are connected 300 ft from one another on the LAN. Because the original IEEE 802.3 Ethernet operated at 10 Mbps, each bit has a duration of 100 ns. Because electricity travels approximately 1 ft/ns, if each station listened to the wire, heard nothing, and began transmitting at the same time, each would transmit 3 bits prior to hearing the signal from the other station. Thus, a collision detection mechanism is needed to detect when signals from stations collide. Once a collision is detected, each station stops transmitting and uses a random exponential backoff algorithm to determine the amount of time to wait prior to starting over.
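The backoff step can be sketched as follows (a minimal model of the truncated binary exponential backoff used by 802.3; the retry constants are the standard limits):

```python
import random

MAX_BACKOFF_EXPONENT = 10   # the wait range stops doubling after 10 collisions
MAX_ATTEMPTS = 16           # the frame is discarded after 16 failed attempts

def backoff_slots(collision_count: int) -> int:
    """Pick a random wait, in slot times, after the nth collision on a frame."""
    if collision_count >= MAX_ATTEMPTS:
        raise RuntimeError("excessive collisions: frame dropped")
    k = min(collision_count, MAX_BACKOFF_EXPONENT)
    return random.randint(0, 2 ** k - 1)   # uniform over [0, 2^k - 1]

# After the first collision a station waits 0 or 1 slot times; each further
# collision doubles the range, spreading the stations' retransmissions apart.
print(backoff_slots(1))   # 0 or 1
print(backoff_slots(3))   # somewhere in 0..7
```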
Frame Size

Readers familiar with many Ethernet concepts probably know that the data field within the frame must be between 46 and 1500 bytes. The rationale for the minimum data field results from the fact that the time required for the worst-case round trip through the network is limited to 51.2 μs. This represents a worst-case situation where a station on one end of the network begins transmission and a station connected at the opposite end of the network begins transmission just as the first bit from the other station arrives. Although the second station almost immediately realizes a collision has occurred, the first station will not realize a collision has taken place until the signal flows the length of the LAN back to its starting point.
During the design of Ethernet the maximum round-trip delay was limited to 51.2 μs. At 10 Mbps this is the time required to transmit 512 bits, or 64 bytes, and represents the reason the data field must be a minimum of 46 bytes. That is, to make sure a collision is heard, Ethernet requires that a station continue transmitting until the 51.2 μs slot time has ended. Thus, if a station has less than 46 bytes of data it pads the data field with zeros until 46 bytes is reached. Because the destination address, source address, length/type, and FCS fields total 18 bytes, the sum is then 64 bytes. Thus, the use of a length field enables data to be distinguished from padding.
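The padding rule can be sketched as follows (illustrative; the 18 bytes of header and trailer are the two 6-byte addresses, the 2-byte length/type field, and the 4-byte FCS):

```python
MIN_DATA = 46        # minimum data-field size in bytes
HEADER = 6 + 6 + 2   # destination address + source address + length/type
FCS = 4              # frame check sequence trailer

def pad_data(data: bytes) -> bytes:
    """Zero-pad the data field to the 46-byte minimum, as 802.3 requires."""
    if len(data) < MIN_DATA:
        data += b"\x00" * (MIN_DATA - len(data))
    return data

payload = pad_data(b"hello")
print(len(payload))                  # 46
print(HEADER + len(payload) + FCS)   # 64 -> the minimum frame size
```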
Early Ethernet When we talk about collisions we are referencing a shared medium. In a modern Ethernet environment that uses switches and fiber-optic cable, it is possible to have full-duplex (FDX) Ethernet that avoids the possibility of a collision occurring. As we cover new material in this book we will note how FDX Ethernet can be enhanced in the Gigabit range through the use of jumbo frames.
The 10 Mbps Ethernet Family The IEEE 802.3 Committee standardized four versions of Ethernet designed to operate at 10 Mbps. In developing a nomenclature to distinguish one version from another the IEEE initially used the following structure:
S {BASE/BROAD}SL
where S represents the transmission speed in Mbps, BASE represents baseband signaling, BROAD represents broadband signaling, and SL represents the maximum segment length in hundreds of meters. Later, the SL position was used to identify the medium, such as T for twisted pair, or, in the case of fiber optics, whether transmission was long wavelength (LX) or short wavelength (SX) over fiber.
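The naming structure can be sketched with a toy parser (the helper and its output format are hypothetical, not part of the IEEE nomenclature):

```python
import re

# <speed in Mbps><BASE or BROAD><segment length in hundreds of meters,
# or, in later standards, a media code such as T, LX, or SX>
PATTERN = re.compile(r"^(\d+)(BASE|BROAD)-?(\w+)$")

def parse_designation(name: str) -> dict:
    match = PATTERN.match(name)
    if match is None:
        raise ValueError(f"not an IEEE-style designation: {name}")
    speed, signaling, suffix = match.groups()
    meaning = (f"{int(suffix) * 100} m max segment" if suffix.isdigit()
               else f"media code {suffix}")
    return {"speed_mbps": int(speed),
            "signaling": signaling.lower() + "band",
            "suffix": meaning}

print(parse_designation("10BASE-5"))    # 500 m max segment
print(parse_designation("10BROAD-36"))  # 3600 m max segment
print(parse_designation("10BASE-T"))    # media code T
```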
10BASE-5

10BASE-5, also known as “thick-net,” represents the IEEE’s first Ethernet standard and used media similar to thick RG-8 coaxial cable. A 10BASE-5 segment is limited to 500 m. Although 100 nodes can be connected to a segment, they must be connected to the cable via vampire taps located at least 2.5 m from one another. In comparison to Dr. Metcalfe’s drawing of Ethernet shown in Figure 3.1, 10BASE-5 included some terminology changes. The “interface” was renamed the Attachment Unit Interface (AUI). The AUI includes a 15-pin connector that connects to a 15-pin socket built into an Ethernet card or Network Interface Card
(NIC; formerly called a controller) inserted into a computer. Although 10BASE-5 is obsolete, it paved the way for more modern versions of Ethernet.
10BASE-2

Shortly after the 10BASE-5 standard was released the IEEE standardized a version of Ethernet that operates over thin RG-58 coaxial cable, resulting in the names “thin-net” and “cheap-net” being used to reference this network. Known as 10BASE-2, this version of Ethernet uses BNC T-connectors to connect to the medium, which can be up to 185 m in length. The transceiver used by each computer is typically built into the network adapter and cabled to the BNC T-connector. Similar to 10BASE-5, 10BASE-2 represents an obsolete technology.
10BROAD-36 During the period when the IEEE was developing the initial series of Ethernet standards many persons believed that the emerging LAN would be used to transport video and other types of data. At that time it was felt that a broadband connection that enabled multiple channels to flow over a common coaxial cable would be useful, resulting in the 10BROAD-36 standard. The 10BROAD-36 standard specifies the use of a 10-MHz signal on multiple channels within a 75-Ω coaxial broadband cable. Each channel uses three pairs of wires in the broadband cable, which supports a transmission distance up to 3600 m. Although 10BASE-5 and 10BASE-2 had numerous installations, 10BROAD-36 never attracted a significant base of users, and like its earlier cousins is now obsolete.
10BASE-T

10BASE-T represents a revolution in the development of Ethernet standards for two key reasons: it specified the use of Ethernet over inexpensive, unshielded twisted-pair cable, and it specified the use of hubs, which changed Ethernet’s topology from a bus-based structure to a star-based structure centered on the hub, which eventually evolved into the modern-day LAN switch.
Hub-Based Architecture

To enable workstations connected to a hub to maintain the use of the CSMA/CD protocol, a method was required that allowed a star-based topology to function as a bus, where each workstation could listen to the medium to determine if it was in use. The method developed resulted in the hub functioning as a repeater. That is, data transmitted into the hub on one port was repeated to stations connected to all other hub ports, as illustrated in Figure 3.10. Because 10BASE-T uses two pairs in an eight-wire jack, it becomes possible for point-to-point connections to operate either as half or full duplex.

Figure 3.10 A hub functions as a repeater, enabling the star topology to resemble a logical bus with respect to data flow
Overcoming Bus Problems The development of 10BASE-T resulted in the topology of Ethernet changing from a bus-broadcast network structure, shown in the left portion of Figure 3.11, to a star based point-to-point network structure, shown in the right portion of the figure. This topology change resulted in several significant improvements to Ethernet. In a bus topology a break in the coaxial cable can sever service to multiple nodes while a fault due to incorrect termination of the cable ends could disrupt all transmission. In comparison, a star-based topology where workstations connect to hub ports results in a cable break only affecting one node.
Use of Modular Connectors
Included in the 10BASE-T standard was the use of an eight-position modular connector similar to an RJ-45 (Registered Jack) jack, which is usually wired with four-pair

Figure 3.11 Bus versus star topologies (a cable break in a bus topology can affect multiple nodes, while a cable break in a star topology affects only one node)
Category 3 (later Category 5) or above twisted-pair cable. Figure 3.12 illustrates the eight-wire jack and its 10BASE-T data connections.

Figure 3.12 A 10BASE-T eight-wire jack and its data connections (the transmit pair on pins 1 and 2, the receive pair on pins 3 and 6)
MDI and MDI-X
As illustrated in Figure 3.12, 10BASE-T requires two pairs, one of which must be on pins 1 and 2, while the other must be on pins 3 and 6. A 10BASE-T node transmits on pins 1 and 2 and receives data on pins 3 and 6. This wiring pattern is referred to as a Medium Dependent Interface (MDI). When a PC is directly connected to a hub or switch port, the hub or switch port receives data on pins 1 and 2 and transmits on pins 3 and 6. Thus, the transmit pins at one end of the link must be connected to the receive pins at the other end. When two like devices are joined, this cross-connection occurs within the cable: pins 1 and 2 at one end are wired to pins 3 and 6 at the other end, producing what is referred to as an MDI-X cable. Figure 3.13 illustrates the wiring associated with a 10BASE-T crossover (MDI-X) cable; note that pins 4, 5, 7, and 8 remain straight through. Because it is relatively easy to fabricate hub and switch ports with an internal crossover, many manufacturers do so, labeling their ports 1X, 2X, and so on, to denote that each port has an internal crossover, which allows a PC to be cabled to the port via a straight-through (MDI) cable. When two PCs, or two hubs or switches, are directly connected to one another, they normally require the use of an MDI-X cable. However, some ports on a hub or switch may be labeled "uplink," meaning the port follows the MDI wiring pattern shown in Figure 3.12, which allows a straight-through cable to be used.

Figure 3.13 An MDI-X crossover cable
Auto MDI/MDI-X
One final type of connection worthy of a few words is Auto MDI-X, which describes the capability of some network adapters and hub and switch ports to
enable and disable their crossover automatically. Thus, an Auto MDI-X port can be connected to either an MDI or MDI-X cable.
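The pin roles and crossover mapping described above can be sketched in a few lines. This is an illustrative model (the constant names are my own, not from any standard), using the pin roles shown in Figure 3.12:

```python
# 10BASE-T pin roles at an MDI port: transmit on pins 1-2, receive on pins 3 and 6.
MDI_PINS = {1: "TX+", 2: "TX-", 3: "RX+", 6: "RX-"}

# An MDI-X (crossover) cable joins each transmit pin to the matching receive
# pin at the far end; pins 4, 5, 7, and 8 pass straight through.
CROSSOVER = {1: 3, 2: 6, 3: 1, 6: 2, 4: 4, 5: 5, 7: 7, 8: 8}

def far_end_signal(local_pin: int) -> str:
    """Signal role seen at the far end of an MDI-X cable for a given local pin."""
    return MDI_PINS.get(CROSSOVER[local_pin], "unused")

# A node transmitting on pins 1-2 is heard on the far end's receive pins 3 and 6.
assert far_end_signal(1) == "RX+" and far_end_signal(2) == "RX-"
```

An Auto MDI-X port, in effect, chooses at link-up whether or not to apply the `CROSSOVER` mapping internally, which is why either cable type works.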
Network Characteristics
Unlike prior standards, the 10BASE-T standard does not specify the exact type of wiring to be used, nor a maximum length. Instead, the standard specifies several characteristics the cable must meet, such as attenuation, noise, and propagation delay limits. These characteristics result in the use of UTP at distances up to 100 m, although the use of higher-quality cable has been known to extend the transmission distance to 150 m or more.
5-4-3 Rule
Until the development of switches, the number of nodes on an Ethernet network was governed by the 5-4-3 rule, which states that between any two nodes on the network there can be a maximum of five segments, connected via four repeaters, and only three of the five segments may contain user connections (i.e., be populated). This rule considers each hub to be a repeater and ensures that a signal transmitted by one node reaches every part of the network within a specified period of time. Because switches can buffer frames and all nodes can simultaneously access a switched Ethernet LAN, the 5-4-3 rule does not apply to a switched LAN environment.
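The rule is simple enough to express as a path check. A minimal sketch (the function name and argument layout are my own):

```python
def satisfies_5_4_3(segments: int, repeaters: int, populated: int) -> bool:
    """Check a path between two nodes against the 5-4-3 rule:
    at most 5 segments, joined by at most 4 repeaters, with no more
    than 3 of the segments populated (carrying user connections)."""
    return segments <= 5 and repeaters <= 4 and populated <= 3

# Five segments joined by four repeaters, three of them populated: allowed.
assert satisfies_5_4_3(5, 4, 3)
# A path through five repeaters (six segments) violates the rule.
assert not satisfies_5_4_3(6, 5, 3)
```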
FOIRL and 10BASE-F
The Fiber-Optic Inter-Repeater Link (FOIRL) and 10BASE-F represent two 10 Mbps standards for Ethernet transmitted over fiber. FOIRL, the original standard for Ethernet over fiber, enabled a transmission distance of up to 1000 m between two electrical-to-optical repeaters, in effect extending Ethernet's transmission distance. As the cost of repeaters dropped, vendors integrated their electronics into a fiber-optic port on 10BASE-T repeater hubs, a configuration not specifically described in the FOIRL standard. This, as well as the need for a more comprehensive fiber-optic Ethernet, resulted in a new set of fiber-optic media standards, referred to as 10BASE-F.
Segment Types
The 10BASE-F standard defines three types of fiber-optic-based Ethernet segments:
1. 10BASE-FL: The 10BASE-FL standard represents a replacement of the FOIRL specification for extending an Ethernet link. This new standard extends full-duplex transmission up to 2000 m when equipment at both ends is 10BASE-FL
compliant. Otherwise, when mixed with FOIRL equipment the maximum segment length is 1000 m. 10BASE-FL equipment can be used to connect two computers, two repeaters, or a computer and a repeater port.
2. 10BASE-FB: The 10BASE-FB specification defines a fiber backbone, including synchronous signaling that enables the limit on the number of repeaters that can be used in an Ethernet system to be exceeded. Unlike the 10BASE-FL specification, which allows connections between computers, repeaters, or ports, 10BASE-FB links are limited for use by special 10BASE-FB repeater hubs connected together to form a large repeated backbone that can extend up to 2000 m between hubs.
3. 10BASE-FP: The third type of fiber link specified under 10BASE-F is a fiber-passive system referred to as 10BASE-FP. The 10BASE-FP standard defines a set of specifications for a fiber-optic mixing segment, which enables multiple computers on a fiber-optic system to be connected without the use of repeaters. A 10BASE-FP segment can be up to 500 m in length, and a 10BASE-FP passive star coupler can normally connect up to 33 computers.
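The segment lengths above can be collected into a quick-reference table; the values below are taken from the text (a sketch, not an exhaustive summary of the standards):

```python
# Maximum segment lengths (meters) for the 10 Mbps fiber variants in the text.
MAX_SEGMENT_M = {
    "FOIRL": 1000,
    "10BASE-FL": 2000,   # drops to 1000 m when mixed with FOIRL equipment
    "10BASE-FB": 2000,
    "10BASE-FP": 500,
}

# 10BASE-FL doubled the FOIRL reach when both ends are 10BASE-FL compliant.
assert MAX_SEGMENT_M["10BASE-FL"] == 2 * MAX_SEGMENT_M["FOIRL"]
```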
Fast Ethernet
The term Fast Ethernet references a series of specifications developed to support the CSMA/CD protocol at 100 Mbps, including transmission over both copper and fiber. In this section we first turn our attention to the three 100 Mbps copper-based Ethernet standards prior to examining the operation of Ethernet at 100 Mbps over fiber.
100BASE-T
The term 100BASE-T references any of three Fast Ethernet standards designed for operation over twisted-pair copper wire: 100BASE-TX, 100BASE-T4, and 100BASE-T2.
Layer Subdivision
To develop three Fast Ethernet standards for operation over twisted-pair copper wire, the IEEE subdivided the data link layer, as shown in Figure 3.14.

Figure 3.14 Fast Ethernet subdivides Layer 2 operations (a common Fast Ethernet Media Access Control sublayer sits above the 100BASE-TX, 100BASE-T4, and 100BASE-T2 media sublayers)

Although
Figure 3.15 Subdivision of the physical layer (the DTE port attaches through the Medium Independent Interface (MII) to the physical layer device, which connects via the Medium Dependent Interface (MDI) to the physical medium)
each version of Ethernet uses the same media access control, each connects to the medium differently. In addition to subdividing the data link layer, the IEEE also subdivided the physical layer. That subdivision, shown in Figure 3.15, includes a medium independent interface, a physical layer device, and the MDI.
Medium Independent Interface
In examining the components shown in Figure 3.15, note that the Data Terminal Equipment (DTE) is the physical port on a device to be connected to the medium. The DTE represents the connection to an Ethernet network adapter or chipset, typically located in the system unit of a PC. Next, the Medium Independent Interface (MII) represents an optional set of electronics that can be used to link Ethernet's MAC functions in the PC with the physical layer device (PHY) that transmits signals to the physical medium. Because the MII converts network signals into a digital format used by Ethernet chipsets, it can be used to connect a DTE to any standardized Fast Ethernet medium. The MII uses a 40-pin connector with a cable that can be up to 0.5 m in length.
Physical Layer Device
The physical layer device (PHY) performs a role similar to that of a 10 Mbps Ethernet transceiver. From a physical standpoint, it can be integrated into the Ethernet electronics of a network device or fabricated as a small box connected to an MII cable.
Medium Dependent Interface
The lower portion of Figure 3.15 shows the physical medium used to transport data. The connection to the medium is via an MDI, which for Fast Ethernet operating over copper is an eight-pin twisted-pair connector.
100BASE-TX
100BASE-TX represents the dominant method of obtaining Fast Ethernet over copper wiring. This version of Ethernet operates over two pairs of CAT5 cable. 100BASE-TX is similar to other versions of Fast Ethernet in that data is transferred 4 bits at a time, clocked at 25 MHz, to obtain a 100-Mbps data transfer rate. A standardized wiring scheme is followed by all versions of 100BASE-T. Figure 3.16 illustrates the RJ-45 wiring, which is based upon the joint EIA/TIA-568 standard, together with the eight-wire jack wiring used by 100BASE-TX.

Pin  Pair  Wire  Color
1    2     1     White/orange
2    2     2     Orange
3    3     1     White/green
4    1     2     Blue
5    1     1     White/blue
6    3     2     Green
7    4     1     White/brown
8    4     2     Brown

In a 100BASE-TX crossover cable, TD+ (pin 1) connects to RD+ (pin 3) and TD- (pin 2) connects to RD- (pin 6) at the opposite end.

Figure 3.16 RJ-45 wiring according to the TIA/EIA-568B standard
As shown in Figure 3.16, 100BASE-TX runs over two pairs of wires. Those wires must be CAT5 or above, and the pairs use pins 1, 2, 3, and 6. Each network segment can have a maximum transmission distance of 100 m (330 ft), with each pair of twisted wiring providing 100 Mbps of throughput in each direction, providing a full-duplex transmission capability.
Network Configuration
The use of 100BASE-TX is similar to the use of 10BASE-T, enabling computers to be connected to a hub or switch, resulting in a star network. Also similar to 10BASE-T, some devices have a built-in cross-connection (crossover) capability, while other devices require the use of a crossover cable. When networking 100BASE-TX devices, segments are limited to a maximum of 100 m to ensure that the round-trip timing specifications are met. Under the EIA/TIA cabling standard a maximum segment length of 90 m is specified from the wiring closet to a wall plate in an office, allowing up to 10 m of cable from the wall plate to the network card. Although the most popular wiring used with 100BASE-TX is unshielded twisted-pair cable, the standard also allows the use of shielded twisted pair with a characteristic impedance of 150 Ω.
Coding
100BASE-TX hardware first applies a 4B5B encoding scheme, which maps each 4-bit data nibble to a 5-bit code group, producing a bit stream clocked at 125 MHz. The 4B5B code groups are chosen to guarantee sufficient signal transitions for clock recovery and to limit the DC component of the signal. The resulting stream is then passed to a medium-dependent sublayer, which places it on the wire using MLT-3 (multilevel threshold-3) encoding, a three-level variant of Non-Return to Zero Inverted (NRZI) signaling.
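The 4B5B step can be illustrated directly. The table below is the standard 100BASE-X/FDDI 4B5B data-symbol table; the nibble transmission order in the sketch is illustrative, not taken from the specification:

```python
# Standard 4B5B code group table for data nibbles 0x0-0xF: each 4-bit nibble
# maps to a 5-bit symbol with enough 0->1 transitions for clock recovery,
# raising the line rate from 100 Mbps of data to 125 MHz of line symbols.
FOUR_B_FIVE_B = [
    "11110", "01001", "10100", "10101", "01010", "01011", "01110", "01111",
    "10010", "10011", "10110", "10111", "11010", "11011", "11100", "11101",
]

def encode_4b5b(data: bytes) -> str:
    """Encode bytes as a 4B5B bit string, low nibble first (illustrative order)."""
    out = []
    for byte in data:
        out.append(FOUR_B_FIVE_B[byte & 0x0F])         # low nibble
        out.append(FOUR_B_FIVE_B[(byte >> 4) & 0x0F])  # high nibble
    return "".join(out)

bits = encode_4b5b(b"\x00")
# One byte becomes ten line bits: the 25% overhead behind the 125 MHz clock.
assert len(bits) == 10 and bits == "1111011110"
```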
Repeaters
Fast Ethernet defines two types of repeaters, referred to as Class I and Class II. A Class I repeater can have relatively large timing delays, which enables the device to repeat signals between segments that use different signaling techniques, such as 100BASE-TX and 100BASE-T4. In comparison, a Class II repeater immediately repeats an incoming signal onto all other ports, resulting in a very small timing delay. Because of this, Class II repeaters can only be used to connect segments that use the same signaling method. A maximum of one Class I repeater can be used in a given collision domain, while a maximum of two Class II repeaters can be supported. Two 100-m 100BASE-TX
Figure 3.17 Collision domains resulting from the use of a single repeater and of two repeaters (in the figure, each DTE contributes 50 bit times, 100 m of UTP contributes 111 bit times, a Class I repeater 140 bit times, and a Class II repeater 92 bit times; the single Class I repeater path totals 462 bit times, while the two Class II repeater path, joined by a 5-m UTP inter-repeater link, totals 512 bit times)

Maximum DTE-to-DTE distance (m) by transmission model:

Transmission Model       Copper Media  Fiber Media
DTE to DTE               100           412
One Class I Repeater     200           272
One Class II Repeater    200           320
Two Class II Repeaters   205           228
segments can be connected together via a Class I or Class II repeater, resulting in a system with a total diameter of 200 m between DTEs. The top portion of Figure 3.17 illustrates the collision domains formed by the use of a single repeater and by the maximum of two repeaters that can be used to connect two DTEs. The lower portion of Figure 3.17 includes a table that defines the maximum transmission distance between two DTEs for copper and fiber, as well as the transmission distances when Class I and Class II repeaters are used. Note that at 100 Mbps, 512 bit times governs the collision domain diameter. On both copper and fiber, the maximum collision domain diameter depends upon the presence or absence of repeaters and, if they are used, the type and number of repeaters. A Class I repeater can require up to 140 bit times of delay, while a Class II repeater can require up to 92 bit times. Because a bit time at 100 Mbps is 10 ns, the 512-bit-time limit results in a round-trip collision constraint of 5120 ns. On copper, the round-trip cable delay for CAT5 cable consumes approximately 2337 ns of this budget, leaving 2783 ns, which corresponds to approximately 100 m for connecting two DTEs; this ensures that any collision occurs within the first 512 bit times, so a station that does not hear a jam signal in that interval knows its transmission was successful. On fiber the maximum collision domain diameter is extended to 412 m as a result of the round-trip delay characteristics of that medium.
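The bit-time budget can be checked arithmetically. The per-component bit times below are taken from Figure 3.17 as described above; the function itself is an illustrative sketch:

```python
# Round-trip bit-time budget for a 100 Mbps collision domain.
BIT_TIME_NS = 10          # one bit time at 100 Mbps
BUDGET_BIT_TIMES = 512    # a collision must be detected within this window

def path_bit_times(dtes=2, class1=0, class2=0, utp_100m_segments=1):
    """Sum the round-trip bit-time contributions along a DTE-to-DTE path,
    using the per-component values quoted in Figure 3.17."""
    return dtes * 50 + class1 * 140 + class2 * 92 + utp_100m_segments * 111

# One Class I repeater joining two 100 m UTP segments:
# 50 + 50 + 140 + 111 + 111 = 462 bit times, the figure's quoted total.
assert path_bit_times(class1=1, utp_100m_segments=2) == 462

# Two Class II repeaters with two 100 m segments plus a 5-bit-time
# inter-repeater link land just inside the 512-bit-time (5120 ns) budget.
double = path_bit_times(class2=2, utp_100m_segments=2) + 5
assert double <= BUDGET_BIT_TIMES and double * BIT_TIME_NS <= 5120
```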
100BASE-T4
100BASE-T4 represents an early implementation of copper-based Fast Ethernet. This standard used four twisted copper pairs: one for transmit, one for receive, and two bidirectional pairs that switched direction as negotiated.
Figure 3.18 Physical connection of a 100BASE-T4 PC to a hub port (the computer's 100BASE-T4 NIC connects through an eight-pin plug and jack (MDI) to a Class II four-port 100BASE-T4 repeater hub over a maximum of 100 m of four-pair CAT3, 4, or 5 UTP)
Figure 3.18 illustrates the physical connection of a PC with a 100BASE-T4 Ethernet adapter to a 100BASE-T4 hub port. Note that because 100BASE-T4 operates over four twisted pairs of wires, it is possible to use CAT3 UTP cable. Similar to 100BASE-TX, 100BASE-T4 segments are limited to 100 m to ensure round-trip timing specifications are met.
100BASE-T4 Repeater Hub
The top portion of Figure 3.19 illustrates the signals on the 100BASE-T4 eight-pin connector. Note that one pair is for transmit data (TX), one pair is for receive data (RX), and two pairs are bidirectional (BI). Each pair of wires uses one wire for a
Pin  Signal
1    TX_D1+
2    TX_D1-
3    RX_D2+
4    BI_D3+
5    BI_D3-
6    RX_D2-
7    BI_D4+
8    BI_D4-

In the crossover cable, the transmit pair (pins 1 and 2) connects to the receive pair (pins 3 and 6) at the far end, and the BI_D3 pair (pins 4 and 5) connects to the BI_D4 pair (pins 7 and 8).
Figure 3.19 100BASE-T4 pin assignments and crossover wiring
positive (+) signal while the other wire carries the negative (-) signal. The lower portion of Figure 3.19 illustrates a 100BASE-T4 crossover wiring diagram. Under 100BASE-T4 an 8B6T code converts each group of eight data bits into six base-3 (ternary) symbols. This conversion enables signal shaping, as there are nearly three times as many six-digit base-3 numbers as eight-digit base-2 numbers. The ternary symbols are then transmitted in parallel over three twisted pairs using three-level Pulse Amplitude Modulation (PAM-3). Today 100BASE-T4 is considered obsolete.
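The code-space comparison behind 8B6T is easy to verify, and it shows where the spare code words for signal shaping come from:

```python
# 8B6T maps each 8-bit byte to six ternary (base-3) symbols. The surplus of
# ternary code words over binary data words is what permits the encoder to
# choose DC-balanced (shaped) code words and reserve words for control.
binary_words = 2 ** 8    # 256 possible data bytes
ternary_words = 3 ** 6   # 729 possible six-symbol ternary code words

assert binary_words == 256
assert ternary_words == 729
assert ternary_words / binary_words > 2.8   # roughly a 3:1 surplus
```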
100BASE-T2
The third 100BASE-T standard is 100BASE-T2, which defines the transmission of data over two copper pairs using 4 bits per symbol. Under 100BASE-T2 each 4-bit symbol is expanded into two 3-bit symbols through the use of linear feedback shift registers. Once this is accomplished, the resulting symbols are mapped to PAM-5 line modulation levels. Similar to 100BASE-T4, 100BASE-T2 is now considered obsolete.
Auto-Negotiation
Because several different modes of operation existed for Ethernet over twisted pair, such as 10BASE-T half and full duplex and 100BASE-TX half and full duplex, a method was desired to allow two network interfaces to negotiate their mode of operation. Called auto-negotiation, and once upon a time known as NWay, this standard was initially released in 1995; when employed, it allows devices to perform speed matching for multi-speed devices automatically, as well as to select an applicable duplex mode of operation. Auto-negotiation has evolved over the years and is now defined in Clause 28 of the IEEE 802.3 standard. The technologies supported range from 10BASE-T half and full duplex to 10GBASE-T. Thus, auto-negotiation provides users with the ability to upgrade a network incrementally instead of having to upgrade every workstation at the same time, the latter usually being quite disruptive to ongoing network operations.
LIT Pulses
The key to the operation of auto-negotiation is the manner in which 10BASE-T devices detect the presence of a connection to another device. Similar to other communications methods, Ethernet uses keep-alive signals. Under 10BASE-T the keep-alive signals are unipolar, positive-only pulses with a duration of 100 ns, generated at intervals of 16 ms ± 8 ms. These pulses are called Link Integrity Test (LIT) pulses under 10BASE-T and are referred to as Normal Link Pulses (NLPs) in the auto-negotiation specification.
The absence of both frames and pulses for a period of between 50 and 150 ms indicates that a failure has occurred, either in the remote device or in the connection. Such a failure typically illuminates an LED or generates a message to a console, prompting further troubleshooting.
FLP Pulses
Auto-negotiation uses similar unipolar, positive-only pulses; however, each single 10BASE-T link pulse is replaced by a sequence of up to 33 pulses referred to as a Fast Link Pulse (FLP) burst. Within a burst, pulses are separated by 62.5 μs and are transmitted in pairs, with a clock pulse preceding each data pulse position. The value of a data bit is indicated by the presence (binary 1) or absence (binary 0) of a pulse. Because the odd-numbered pulse positions, including the 33rd, are clock pulses, this scheme allows 16 bits of data to be transmitted during a burst of approximately 2 ms, conveying information about the capabilities of a device. The auto-negotiation protocol also defines rules for the configuration of the device based upon the information in the FLP signals.
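The clock/data interleave can be sketched as a decoder. This is a hypothetical simplification (real receivers work from pulse timing, not position sets), but it shows how 33 pulse positions yield a 16-bit link code word:

```python
# An FLP burst has up to 33 pulse positions, 62.5 microseconds apart: the 17
# odd-numbered positions are clock pulses and the 16 even-numbered positions
# are data pulses (pulse present = 1, pulse absent = 0).
def decode_flp(pulse_positions):
    """pulse_positions: set of 1-based positions (1..33) where a pulse was seen.
    Returns the 16-bit link code word as a list, bit 0 first."""
    bits = []
    for data_pos in range(2, 34, 2):   # data positions 2, 4, ..., 32
        bits.append(1 if data_pos in pulse_positions else 0)
    return bits

clocks = set(range(1, 34, 2))          # all 17 clock pulses present
# A burst whose only data pulse sits at position 2 encodes bit 0 = 1.
word = decode_flp(clocks | {2})
assert word[0] == 1 and sum(word) == 1 and len(word) == 16
```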
Parallel Detection Function
Because an FLP burst is not recognized as a valid NLP, a 10BASE-T device that detects FLPs during auto-negotiation will detect a link failure. However, a link with an auto-negotiation device at the other end can still be established via the parallel detection function. Under the parallel detection capability of auto-negotiation, the 10BASE-T device simply continues sending NLPs or frames, which causes the auto-negotiation device to switch to a half-duplex 10BASE-T mode of operation. However, if the 10BASE-T device is operating in full-duplex mode, a duplex mismatch will occur. Thus, the parallel detection function represents a method for an auto-negotiation device to establish a link with a non-negotiating, fixed-speed device. Note that a device can never parallel detect 1000BASE-T, 100BASE-T2, or 10GBASE-T, as these technologies require a master–slave relationship that parallel detection cannot establish.
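The decision logic can be summarized in a small sketch. This is an assumed simplification, not the Clause 28 state machine; in particular, the "100BASE-TX idle" branch reflects parallel detection of 100BASE-TX link signaling, which the text does not discuss, while the NLP branch matches the fallback described above:

```python
# Simplified parallel detection outcome for an auto-negotiating port,
# based on what it hears from the link partner. Fallback links are always
# half duplex, which is why a full-duplex fixed-speed partner causes a
# duplex mismatch.
def parallel_detect(partner_signal: str) -> str:
    if partner_signal == "FLP":
        return "auto-negotiate"
    if partner_signal == "NLP":
        return "10BASE-T half duplex"
    if partner_signal == "100BASE-TX idle":
        return "100BASE-TX half duplex"
    return "link fail"

assert parallel_detect("NLP") == "10BASE-T half duplex"
```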
The Base Page
As previously discussed, each FLP burst transmits a sequence of 33 pulse positions, of which 16 represent data. The first sequence of 16 data bits is referred to as the base page and indicates the capabilities of the device performing auto-negotiation. Table 3.3 lists the composition and use of the base page fields. In examining the entries in Table 3.3, note that the selector field defines the technology being used. This 5-bit field permits 32 possible combinations, of which
Table 3.3 Auto-Negotiation Base Page Fields

Bits   Field                Utilization
0–4    Selector             Indicates standard used
5–11   Technology ability   Defines capabilities supported
12     Extended next page   Used to define 1000BASE-T or 10GBASE-T
13     Remote fault bit     Set to 1 when link failure detected
14     Acknowledgment bit   Set to 1 to indicate correct receipt of the base page
15     Next page bit        Used to indicate intention to send other code words after the base page
only two are allowed. Figure 3.20 indicates the allowable selector field values.

Figure 3.20 Allowable selector field values (10000 indicates IEEE 802.3; 01000 indicates IEEE 802.9)

For those not familiar with the IEEE 802.9 standard, it represents 10 Mbps Ethernet combined with 96 64-Kbps ISDN "B" channels, originally developed to provide voice and data over a common LAN infrastructure. Referred to as isoEthernet, it went the way of the Dodo bird due to the rapid acceptance of Fast Ethernet.

The 7-bit technology ability field defines the capabilities supported by the NIC, such as its operating rates and duplex modes. Table 3.4 indicates the meaning of each bit position in this 7-bit field.

Table 3.4 Technology Ability Field Bit Meanings

Bit  Technology
0    10BASE-T
1    10BASE-T full duplex
2    100BASE-TX
3    100BASE-TX full duplex
4    100BASE-T4
5    PAUSE operation for full-duplex links
6    Asymmetric PAUSE for full-duplex links
The 12th bit in the base page is the extended next page bit. When this bit is set along with the next page bit by both devices, it indicates that the devices will use extended message pages and extended unformatted pages to transmit additional information. Bit 13, known as the remote fault bit, is set to a binary 1 when the device detects a link failure. Bit 14, the acknowledgment bit, is set to a binary 1 to indicate the correct reception of the base page from the distant party. The last bit in the base page, bit 15, the next page bit, is set to a binary 1 to indicate the intention of the device to transmit other code words after the base page.
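The base page layout in Table 3.3 can be expressed as a decoder. A sketch under the field positions listed above (bit 0 is transmitted first, so the book's selector value "10000" is numerically 1):

```python
# Base page field layout from Table 3.3: bits 0-4 selector, 5-11 technology
# ability, 12 extended next page, 13 remote fault, 14 acknowledgment,
# 15 next page.
def parse_base_page(word: int) -> dict:
    return {
        "selector": word & 0x1F,
        "technology_ability": (word >> 5) & 0x7F,
        "extended_next_page": (word >> 12) & 1,
        "remote_fault": (word >> 13) & 1,
        "acknowledge": (word >> 14) & 1,
        "next_page": (word >> 15) & 1,
    }

# IEEE 802.3 selector (bit 0 set, i.e. "10000" in transmit order) while
# advertising 10BASE-T (technology ability bit 0, i.e. page bit 5).
page = parse_base_page(0b0000000000100001)
assert page["selector"] == 1 and page["technology_ability"] == 1
```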
The Next Page Function
When bit 15 is set it allows devices to transmit additional information beyond their base page. Two common uses of next page exchanges are to advertise the abilities of 1000BASE-T and 100BASE-T2. A next page exchange occurs immediately after the base page exchange. If both devices have bit 15 set in the base page, then each is required to transmit at least one next page.
Types
There are four types of next pages: message pages, unformatted pages, extended message pages, and extended unformatted pages. Figure 3.21 illustrates the format of the message page and the unformatted page.

Figure 3.21 Format of message page and unformatted page (an 11-bit message code field, M0–M10, or unformatted code field, U0–U10, followed by the T, ACK2, MP, ACK, and NP flag bits)

In examining Figure 3.21, note that the first 11 bits (M0 to M10 and U0 to U10) in each page are capable of encoding up to 2048 potential message codes or unformatted codes. However, because many bits are reserved for future use, only a portion of the potential messages are actually defined. Table 3.5 indicates the nine currently defined
AU6039.indb 69
2/13/08 9:22:27 AM
AU6039.indb 70
1
0
1
0
1
0
1
0
1
2047
1
2
3
4
5
6
7
8
9
10
1
0
0
1
1
0
0
1
1
0
0
1
0
0
1
1
1
1
0
0
0
0
1
1
1
0
0
0
0
0
0
0
0
1
0
0
0
0
0
0
0
0
0
0
1
0
0
0
0
0
0
0
0
0
0
1
0
0
0
0
0
0
0
0
0
0
1
0
0
0
0
0
0
0
0
0
0
1
0
0
0
0
0
0
0
0
0
0
1
0
0
0
0
0
0
0
0
0
0
1
0
0
0
0
0
0
0
0
0
0
M1 M2 M3 M4 M5 M6 M7 M8 M9 M10
Reserved
10GBASE-T/1000BASE-T technology message code
100BASE-T technology message code
100BASE-T2 technology message code
PHY identifier tag code
Organizationally unique identifier tagged message
One up; binary coded remote fault follows
Two ups; technology ability field follows
One up; technology ability field follows
Null message
Reserved for future use
Description
Note: Message codes 7 through 9 not only identify the technology, but in addition indicate that one or two ability pages will follow using a predefined format. For message code 7 one ability page will follow using an unformatted next page, while a message code 8 indicates that two 1000BASE-T ability pages follow using unformatted next pages. For a message code 9 an extended next page will follow. Both the message page and unformatted page encode bit positions 11 through 15 in the same manner. Thus, they will be described for both.
0
M0
0
Message Code Number
Table 3.5 Message Code Field Values
70 n Carrier Ethernet: Providing the Need for Speed
2/13/08 9:22:27 AM
The Flavors of Ethernet n 71
message codes and their bit composition as well as the fact that codes 10 to 2048 are presently undefined.
Toggle (T-Bit)
This bit toggles between binary 1 and 0 in consecutive next pages. The purpose of this bit is to provide a station with assurance that it is receiving next pages in their proper order and that no page was lost.

Acknowledge 2 (Ack2)
The purpose of this bit is to inform the station's link partner whether it can comply with a message. The bit is set to a binary 1 if the station can comply with the message and to a binary 0 if it cannot.

Message Page (MP)
This bit indicates the type of next page: if set to a binary 1 it indicates a message page, while a binary 0 indicates an unformatted page.

Next Page (NP)
This bit is used to indicate whether there are additional next pages. When set to a binary 1 it indicates that one or more next pages follow, while a binary 0 indicates there are no additional next pages.
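The flag bits occupy positions 11 through 15 of the 16-bit next page word, following the code field shown in Figure 3.21. A decoding sketch under that assumed bit ordering:

```python
# Next page word layout per Figure 3.21: code field in bits 0-10 (M0-M10 or
# U0-U10), then T (11), Ack2 (12), MP (13), ACK (14), NP (15).
def parse_next_page(word: int) -> dict:
    return {
        "code": word & 0x7FF,              # message or unformatted code
        "toggle": (word >> 11) & 1,        # T
        "acknowledge2": (word >> 12) & 1,  # Ack2
        "message_page": (word >> 13) & 1,  # MP
        "acknowledge": (word >> 14) & 1,   # ACK
        "next_page": (word >> 15) & 1,     # NP
    }

# A message page (MP = 1) carrying message code 8 (1000BASE-T).
page = parse_next_page((1 << 13) | 8)
assert page["message_page"] == 1 and page["code"] == 8
```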
Next Page Exchange
Figure 3.22 illustrates the next page exchange in a 1000BASE-T environment. The base page has a selector field value of 10000, which indicates IEEE 802.3, and the next page bit is set, indicating that another page follows. The device at the other end of the link responds with a message code 8, indicating it supports 1000BASE-T. In addition, it sets its toggle, message page, and next page bits, as it will transmit two 1000BASE-T ability pages using unformatted next pages.
Extended Next Page Function
Extended next pages represent an extension of the next page function. That is, when both devices on a link set their next page and extended next page bits, they will then use extended message pages and extended unformatted pages to exchange additional information. Although similar to regular next pages, extended
Figure 3.22 1000BASE-T next page exchange advertising full- and half-duplex capabilities (the base page carries the IEEE 802.3 selector value 10000 with the next page bit set; message code 8 follows, then unformatted page 1 advertising 1000BASE-T full- and half-duplex ability, and unformatted page 2 carrying the master/slave seed value)
next pages are 48 bits in length instead of 16 bits, and an extended unformatted page is either 32 or 48 bits in length. Figure 3.23 illustrates the format of the extended message page and the extended unformatted page, indicating the commonality of the flags field, whose meanings are the same as those previously described for the next page function.

Figure 3.23 Extended message page and extended unformatted page formats (an 11-bit message or unformatted code field in D0–D10, the T, ACK2, MP, ACK, and NP flags in D11–D15, and an extended code field in D16–D47)
Table 3.6 Auto-Negotiation Priorities

1. 10GBASE-T full duplex
2. 1000BASE-T full duplex
3. 1000BASE-T
4. 100BASE-T2 full duplex
5. 100BASE-TX full duplex
6. 100BASE-T2
7. 100BASE-T4
8. 100BASE-TX
9. 10BASE-T full duplex
10. 10BASE-T
The extended unformatted code field is 32 bits wide in an extended message page and 43 bits wide in an extended unformatted page. The extended next page exchange concludes once both stations have transmitted all the information they need to exchange, after which they begin to transmit null message codes. A typical use of the extended message page is to negotiate the use of 10GBASE-T.
Priorities
When two devices with multiple capabilities are connected, they must decide upon the type of connection to establish. To do so, the devices select the same Highest Common Denominator (HCD) technology by implementing a priority resolution function that ranks technologies and requires each device to select the highest-ranked technology both support. Table 3.6 lists the auto-negotiation priorities.
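The HCD resolution amounts to scanning the Table 3.6 ranking from the top and stopping at the first technology both sides advertise. A minimal sketch:

```python
# Highest Common Denominator selection using the Table 3.6 ranking
# (index 0 = highest priority).
PRIORITY = [
    "10GBASE-T full duplex", "1000BASE-T full duplex", "1000BASE-T",
    "100BASE-T2 full duplex", "100BASE-TX full duplex", "100BASE-T2",
    "100BASE-T4", "100BASE-TX", "10BASE-T full duplex", "10BASE-T",
]

def highest_common_denominator(local: set, partner: set):
    """Return the highest-priority technology advertised by both devices."""
    for tech in PRIORITY:            # scan from highest priority down
        if tech in local and tech in partner:
            return tech
    return None                      # no common technology: no link

local = {"100BASE-TX full duplex", "100BASE-TX", "10BASE-T"}
partner = {"100BASE-TX", "10BASE-T full duplex", "10BASE-T"}
# Both sides support plain 100BASE-TX, and nothing higher is common.
assert highest_common_denominator(local, partner) == "100BASE-TX"
```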
Option Considerations
Because auto-negotiation is an optional feature, it may not be included in the capabilities of one of the devices on a link. If the feature is operational at only one end of a link, auto-negotiation will detect its absence at the other end. Thus, if a computer with a 10/100BASE-T Ethernet NIC is cabled to a port on a 10BASE-T hub, it will generate FLPs but only receive NLPs from the hub. This informs the NIC that it should operate as a half-duplex 10BASE-T device, the parallel detection process mentioned earlier in this section. To operate correctly, it is important for the correct cable to be used to join endpoints. For example, assume a computer NIC and hub port are each capable of
10BASE-T/100BASE-TX operations. Because auto-negotiation pulses are bursts of the link pulses used in 10BASE-T, such pulses can travel over CAT3 wire. However, because 100BASE-TX requires the use of CAT5 cable, a negotiated 100 Mbps link over CAT3 will either suffer a high rate of errors or may not operate at all.
Fiber
As mentioned earlier in this section, there are three versions of Fast Ethernet over optical fiber: 100BASE-FX, 100BASE-SX, and 100BASE-BX.
100BASE-FX
100BASE-FX uses two strands of multi-mode optical fiber, one for transmission and one for reception. The maximum segment length is 400 m (1310 ft) for half-duplex connections, to ensure collisions are heard, or 2 km (6600 ft) for full-duplex connections. 100BASE-FX uses a graded-index multi-mode fiber with a 62.5-μm core and 125-μm outer cladding (62.5/125). The wavelength of light used on the fiber link is near-infrared, at 1300 nm. Three connectors can be used with 100BASE-FX: a duplex SC connector that pushes into place, a Media Interface Connector (MIC) as used in FDDI LAN systems, and an ST spring-loaded bayonet-type connector. Of the three, the SC is the preferred connector. Figure 3.24 illustrates the use of a 100BASE-FX Ethernet NIC in a PC to connect to a 100BASE-FX hub port. Note that the primary purpose of 100BASE-FX is to extend the transmission distance between DTEs.
Repeaters
100BASE-FX can be used with the same two types of repeaters as 100BASE-TX. A Class I repeater can translate signals due to its longer timing delays, and a Class II
Figure 3.24 Extending the transmission distance via a 100BASE-FX Ethernet NIC (the computer's 100BASE-FX NIC connects its Tx/Rx fiber pair to a Class II fiber-optic 100BASE-FX repeater hub)
repeater, which has smaller timing delays, cannot perform signal translation. Thus, a Class I repeater can be used to connect 100BASE-TX and 100BASE-FX segments, while a Class II repeater is used to extend a 100BASE-FX segment. Although a 100BASE-FX segment can be up to 412 m in length, when repeaters are used the maximum transmission distance is reduced. For example, when one Class II repeater is used to connect fiber segments, the maximum transmission distance between any two DTEs on the all-fiber segments is reduced to 320 m. If a single Class I repeater is used instead, the distance is further reduced to 272 m, and the use of two Class II repeaters reduces the maximum transmission distance between two DTEs on all-fiber segments to 228 m.
100BASE-SX 100BASE-SX represents the second of three optical standards for Fast Ethernet. This standard uses low-cost, short-wavelength optics at 850 nm, which results in a transmission distance up to 300 m (980 ft). Under 100BASE-SX two strands of multi-mode fiber are used, one for transmission and one for reception of data. Because 100BASE-SX uses the same wavelength as 10BASE-FL, 100BASE-SX is backward compatible with 10BASE-FL. To facilitate this compatibility an optional auto-negotiation capability is available. Until 100BASE-SX became available, users with an installed base of fiber could not implement auto-negotiation, because the 10 Mbps standard (10BASE-FL) operates at 850 nm while the first optical 100 Mbps standard (100BASE-FX) operates at 1300 nm. Today 100BASE-SX is backward compatible with 10BASE-FL as well as forward compatible with the 1 Gbps short-wavelength standard 1000BASE-SX.
100BASE-BX The third optical version of Fast Ethernet, referred to as 100BASE-BX, operates over a single strand of optical fiber. Through the use of wavelength division multiplexing the signal is split into separate transmit and receive wavelengths. Transmission occurs at 1310 nm, and reception occurs at 1490 nm, with a transmission distance up to 10 km possible. Because the transmission, although split by wavelength, is bidirectional on the single fiber, the result is the "B" in 100BASE-BX.
Gigabit Ethernet Gigabit Ethernet, which was standardized by the IEEE as the 802.3z standard, represents a merging of 802.3 Ethernet and ANSI X3T11 Fibre Channel technology. Under the 802.3z umbrella the IEEE defined three standards for the operation of
Gigabit Ethernet in June 1998, referred to as 1000BASE-SX, 1000BASE-LX, and 1000BASE-CX. A year later the IEEE’s 802.3ab standard defined the operation of Gigabit Ethernet over unshielded twisted-pair CAT5, CAT5e, or CAT6 cabling, which became known as 1000BASE-T.
Fiber-Based Gigabit Ethernet There are two laser standards for Gigabit Ethernet over fiber: 1000BASE-SX (short-wavelength laser) and 1000BASE-LX (long-wavelength laser).
1000BASE-SX 1000BASE-SX operates over multi-mode fiber using an 850-nm laser, which enables a transmission distance between endpoints of 220 m over 62.5/125-μm fiber. However, the use of 50/125-μm fiber can reliably extend the transmission distance to 500 m, enabling its use as a building backbone.
1000BASE-LX 1000BASE-LX represents the long-wavelength laser version of Gigabit Ethernet over fiber. This version of Gigabit Ethernet can operate over single-mode or multi-mode fiber. Single-mode fiber using a 9-μm core and a 1300-nm laser is specified to work over a distance of up to 2 km; however, most vendors support a transmission distance of 10 to 20 km if their equipment is used at both ends of a connection. When used over multi-mode fiber, a maximum segment length of 550 m is possible.
Fiber Auto-Negotiation Similar to copper, auto-negotiation is an optional capability available for 1000BASE-SX and 1000BASE-LX devices. Fiber auto-negotiation uses 16-bit code words, but instead of the FLPs used on copper media, ordered sets of data are transmitted as light pulses. The ordered sets indicate full- and half-duplex settings, a PAUSE setting, a remote fault, and next page. Figure 3.25 illustrates the bit composition of a fiber auto-negotiation base page.

Bit(s)    Field
0-4       Reserved
5         Full Duplex
6         Half Duplex
7         Pause 1
8         Pause 2
9-11      Reserved
12        Remote Fault 1
13        Remote Fault 2
14        ACK
15        Next Page

Figure 3.25 Auto-negotiation base page for fiber

Although auto-negotiation is similar between copper and fiber, its location in the protocol stack results in one major difference between the two. For copper, auto-negotiation is located below the Physical Medium Attachment (PMA) sublayer. In comparison, for fiber, auto-negotiation is located in the Physical Coding Sublayer (PCS). Figure 3.26 illustrates the subdivided physical layers for fiber and copper Gigabit Ethernet, indicating where auto-negotiation occurs for each. This difference allows copper auto-negotiation to be used across multiple data rates for devices that provide this support; fiber auto-negotiation is limited to negotiating whether a 1000BASE-SX or 1000BASE-LX device operates in full or half duplex. When auto-negotiation for 1 Gbps Ethernet occurs, the priority used favors full duplex for either 1000BASE-SX or 1000BASE-LX over half duplex for either standard.

Figure 3.26 Gigabit Ethernet physical layer differences (for fiber 1000BASE-SX/LX the stack is Reconciliation, GMII, PCS containing auto-negotiation, PMA, PMD, MDI, Medium; for copper 1000BASE-T the stack is Reconciliation, GMII, PCS, PMA with auto-negotiation below the PMA, MDI, Medium. Legend: GMII = Gigabit Media Independent Interface; PCS = Physical Coding Sublayer; PMA = Physical Medium Attachment; PMD = Physical Medium Dependent)

In examining Figure 3.26 on a top-down basis, note that the Gigabit Media Independent Interface (GMII) represents the interface between the MAC layer and the physical layer, which as shown is divided into three sublayers: PCS, PMA, and PMD. The PCS is the sublayer responsible for the interface to the reconciliation layer; it uses 8B/10B encoding. Next, the PMA sublayer is responsible for providing a medium-independent interface that allows the PCS to support serial bit-oriented physical media. To do this, the PMA serializes code groups for transmission and de-serializes bits received from the medium into code groups. For fiber operations the PMD sublayer becomes responsible for mapping the physical medium to the PCS. Note that the MDI represents the physical layer interface and is part of the PMD.
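The base page layout in Figure 3.25 can be modeled as simple bit manipulation (an illustrative sketch; the field names and helper functions are this example's own, not 802.3 identifiers):

```python
# Bit positions of the fiber auto-negotiation base page (per Figure 3.25).
FIELDS = {
    "full_duplex": 5, "half_duplex": 6,
    "pause1": 7, "pause2": 8,
    "remote_fault1": 12, "remote_fault2": 13,
    "ack": 14, "next_page": 15,
}

def pack_base_page(**flags: bool) -> int:
    """Build a 16-bit base page word from named flags."""
    word = 0
    for name, value in flags.items():
        if value:
            word |= 1 << FIELDS[name]
    return word

def unpack_base_page(word: int) -> dict:
    """Decode a 16-bit base page word back into named flags."""
    return {name: bool((word >> bit) & 1) for name, bit in FIELDS.items()}

page = pack_base_page(full_duplex=True, ack=True)
decoded = unpack_base_page(page)
print(decoded["full_duplex"], decoded["half_duplex"])  # True False
```

In an actual link, each station would transmit such a word as ordered sets of light pulses and resolve the highest common capability, with full duplex taking priority over half duplex as described above.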
Because 8B/10B coding converts each 8 bits of data into a 10-bit code group, the line rate is 1.25 Gbaud. Thus, 1.25 Gbaud in 8B/10B format yields a data rate of 1 Gbps.
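The rate arithmetic is straightforward: 8B/10B expands every 8 data bits into a 10-bit code group, so the line rate is the data rate multiplied by 10/8:

```python
# 8B/10B overhead: 10 line bits carry 8 data bits.
data_rate_gbps = 1.0
line_rate_gbaud = data_rate_gbps * 10 / 8
print(line_rate_gbaud)  # 1.25
```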
1000BASE-ZX and LH Although 1000BASE-ZX and 1000BASE-LH do not represent IEEE standards, they are industry-accepted terms used to reference extended Gigabit Ethernet transmission technologies. 1000BASE-ZX references the use of single-mode fiber and a long-wavelength laser (1550 nm) to obtain a link span of 70 km (43.4 mi), with the use of a premium single-mode fiber providing a link span up to 100 km (62 mi). Under 1000BASE-ZX a signaling rate of 1.25 Gbaud is used to transmit and receive data using 8B/10B encoding. 1000BASE-LH, where "LH" stands for "long haul," represents a second nonstandard but industry-accepted version of Gigabit Ethernet. 1000BASE-LH uses a long-wavelength laser (1310 nm) to support a maximum transmission distance of 550 m when multi-mode fiber is used and 10 km when single-mode fiber is used.
Copper-Based Gigabit Ethernet There are two IEEE standards that govern Gigabit Ethernet over copper: 1000BASE-CX for short cable runs up to 25 m and 1000BASE-T for transmission up to 100 m.
1000BASE-CX 1000BASE-CX represents the initial IEEE standard for Gigabit Ethernet over copper cabling, using 150-Ω balanced shielded twisted-pair wire. The ends of the shielded cable use a DB-9 connector, and both transmitter and receiver share a common ground. Although 1000BASE-T is the preferred method of achieving Gigabit Ethernet over copper, 1000BASE-CX is used for short-haul data interconnections, including inter- and intra-rack connections such as connecting blade servers and switch ports.
1000BASE-T The IEEE standardized Gigabit Ethernet over twisted-pair wiring in the 802.3ab specification during 1999. Referred to as 1000BASE-T, this standard defines the transmission of Gigabit Ethernet over four pairs of cable, similar to 100BASE-T4. At a minimum, the cable must be CAT5e (enhanced); CAT6 and CAT7 can also be used to provide a maximum transmission distance of 100 m.
Table 3.7 Ethernet 1000BASE-T CAT5 Pinout

Pin Number   Signal Name   Function
1            BI_DA+        Bidirectional pair + A
2            BI_DA-        Bidirectional pair - A
3            BI_DB+        Bidirectional pair + B
4            BI_DC+        Bidirectional pair + C
5            BI_DC-        Bidirectional pair - C
6            BI_DB-        Bidirectional pair - B
7            BI_DD+        Bidirectional pair + D
8            BI_DD-        Bidirectional pair - D
Using a 125-MHz signaling rate on all four pairs of cable results in a 500-Mbaud composite signaling rate. Data is transmitted over the four copper pairs 8 bits at a time: each group of 8 bits is mapped to four symbols, one per pair, with each symbol drawn from one of five voltage levels. This five-level PAM (PAM-5) signaling is also used in 100BASE-T2. Because each PAM-5 symbol carries 2 data bits (the fifth level supports coding), the 500-Mbaud composite signaling rate results in a data transfer rate of 1 Gbps that is simultaneously bidirectional, or full duplex. Each pair uses differential transmission to minimize noise: the same signal is transmitted on both wires of the pair, with the second inverted in polarity, so that at the receiver any signal common to both wires represents noise, which is simple for the receiver to discard. Table 3.7 lists the Ethernet 1000BASE-T RJ-45 connector pin assignments. Note that in the CAT5 pinout, pairs are 1/2, 3/6, 4/5, and 7/8. In another version of the pinout, referred to as the Standard pinout, pairs are 1/2, 3/4, 5/6, and 7/8. The Standard pinout also uses an RJ-45 connector; however, it is obviously not compatible with the CAT5 pinout.
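The 1000BASE-T throughput arithmetic can be checked directly: four pairs, each signaling at 125 Mbaud with 2 data bits per PAM-5 symbol:

```python
pairs = 4
baud_per_pair_mbaud = 125
data_bits_per_symbol = 2   # PAM-5's fifth level supports coding overhead
data_rate_mbps = pairs * baud_per_pair_mbaud * data_bits_per_symbol
print(data_rate_mbps)  # 1000, i.e. 1 Gbps in each direction simultaneously
```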
Summary The series of IEEE Gigabit standards operates over fiber, CAT5 UTP, and balanced copper. In addition, the use of single-mode and different types of multi-mode fiber results in a wide variety of 1 Gbps transmission ranges, from 25 m when using 1000BASE-CX to 20 km when using 1000BASE-LX over single-mode fiber. Thus, Gigabit Ethernet represents a standard that
can transport data at a billion bits per second between equipment racks and to workstations in offices, as well as create building and campus backbones.
10 Gigabit Ethernet 10 Gigabit Ethernet was standardized by the IEEE as the 802.3ae standard during 2002. This standard preserves the 802.3 Ethernet frame format, including the minimum and maximum frame lengths of the current Ethernet standard. Currently 10 Gbps Ethernet defines two families of physical layers: one for operation on a LAN, and one that includes a simplified SONET/SDH framer, which makes it suitable for use into and through a carrier WAN. Both families are limited to supporting only full-duplex operations. Because 10 GbE provides 10 times the performance of 1 GbE at approximately three to four times its cost, this newer technology assures the continued deployment of Ethernet into metropolitan and wide area network environments as well as in the LAN environment where heavy traffic occurs, such as at network peering points.
GbE versus 10 GbE When comparing GbE to 10 GbE there are several improvements in the new technology beyond its data rate. Those improvements include the ability to directly connect to SONET/SDH equipment and an extended transmission range that can be up to eight times the range of GbE. Table 3.8 provides a general comparison of Gigabit Ethernet and 10 Gigabit Ethernet.

Table 3.8 Comparing GbE and 10 GbE

Feature                 Gigabit Ethernet           10 Gigabit Ethernet
IEEE standard           802.3z                     802.3ae
Media support           Copper and optical fiber   Optical fiber
Mode(s) of operation    Half and full duplex       Full duplex
Coding scheme           8B/10B                     64B/66B
PMD layer               From Fibre Channel         New
Transmission range      5 km                       40 km
SONET/SDH attachment    No                         Yes
Figure 3.27 10 GbE supports LAN and WAN operations using different types of optical transceivers. The figure shows, beneath the full-duplex Media Access Control layer and the 10 Gigabit Media Independent Interface (XGMII) or 10 Gigabit Attachment Unit Interface (XAUI), a WWDM LAN PHY (8B/10B) with a 1310-nm WWDM PMD (-LX4); a serial LAN PHY (64B/66B) with 850-nm (-SR), 1310-nm (-LR), and 1550-nm (-ER) serial PMDs; and a serial WAN PHY (64B/66B plus WIS) with 850-nm (-SW), 1310-nm (-LW), and 1550-nm (-EW) serial PMDs. Legend: WIS = WAN Interface Sublayer; PMD = Physical Media Dependent sublayer; PHY = physical layer.
Layers and Interfaces As previously mentioned, 10 GbE is designed to operate on LANs and WANs. To support this capability 10 GbE has two physical layer standards: one that supports LANs with a line rate of 10.3125 Gbps, and one that supports compatibility with OC-192c payload rates, operating at 9.95328 Gbps, for WAN use. Figure 3.27 illustrates the general subdivision of the initial 10 GbE standards, including their support of different operating media. Figure 3.28 illustrates the relationship of the 10 GbE layers and interfaces. In examining Figure 3.27, note that there were seven distinct versions of 10 GbE initially defined: four supported for use on LANs and three supported for use on WANs.
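The two rates follow from the framing overhead: the LAN PHY's 64B/66B coding adds 2 line bits per 64 data bits, while the WAN PHY instead runs at the OC-192c/STM-64 line rate:

```python
mac_rate_gbps = 10.0
lan_line_rate = mac_rate_gbps * 66 / 64   # 64B/66B adds 2 bits per 64
wan_line_rate = 9.95328                    # OC-192c/STM-64 line rate
print(lan_line_rate)  # 10.3125
```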
Figure 3.28 10 GbE layers and interfaces (beneath the MAC and reconciliation sublayers, the XGMII connects either directly to the PHY, consisting of the PCS, PMA, and PMD sublayers above the medium, or through an optional XGMII extender formed by two XGXS sublayers joined by the XAUI)

XGMII The 10 Gigabit Media Independent Interface (XGMII) transfers data 32 bits at a time, equivalent to four "lanes" of 8 bits plus 4 control bits (one per lane), with transfers clocked on both the rising and falling edges of a 156.25-MHz clock. The XGMII interface requires the use of a 74-pin connector and is limited to a transmission distance of up to 3 ft.
XAUI A second interface standard defined for attaching 10 GbE ports is the XAUI standard, where the Roman numeral X denotes 10 and AUI is borrowed from the Ethernet Attachment Unit Interface. The XAUI standard extends the reach of the XGMII interface from 3 ft to a maximum of 20 ft while reducing the 74-pin XGMII connector to a 16-pin connector that supports transmit and receive for four lanes using differential signaling. Each of the four lanes operates at 3.125 Gbaud and is encoded through the use of 8B/10B encoding.
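The XAUI lane arithmetic works out as follows: four lanes at 3.125 Gbaud with 8B/10B coding (8 data bits per 10 line bits) recover the full 10 Gbps:

```python
lanes = 4
lane_rate_gbaud = 3.125
coding_efficiency = 8 / 10     # 8B/10B: 8 data bits per 10 line bits
aggregate_gbps = lanes * lane_rate_gbaud * coding_efficiency
print(aggregate_gbps)  # 10.0
```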
XGXS The XGMII Extender Sublayer (XGXS) is designed to remove the 3-ft transmission limitation of the XGMII interface, allowing a physical chip-to-chip interconnection of up to 20 ft.
The XGMII Extender Sublayer uses the four-lane 3.125-Gbaud XAUI interface with 8B/10B encoding and 10B/8B decoding, and provides lane-by-lane synchronization.
MAC The MAC layer operates at a data rate of 10 Gbps. To achieve this data rate, data is transmitted either serially at 10.3125 Gbaud using a 64B/66B code or in parallel in four lanes at 3.125 Gbaud using an 8B/10B code. While maintaining the 802.3 frame format and size restrictions, the MAC layer transmits a longer inter-frame gap when the WAN physical layer is used. The number of bytes added to each inter-frame gap is proportional to the length of the previous packet and is removed during 64B/66B encoding as a mechanism to reduce the data rate.
PCS The PCS encodes data using a 64B/66B code while received data is decoded using a 66B/64B code. In comparison to GbE, the 10 GbE PCS layer improves upon transition density and framing.
PMA The PMA sublayer is responsible for serializing transmitted code groups and de-serializing the received bit stream at the 10 Gbps line rate. Clock synthesis, as well as clock and data recovery, occurs at the PMA sublayer.
PMD The PMD sublayer is responsible for electrical-to-optical (E/O) and optical-to-electrical (O/E) conversion. In the initial 10 GbE standard there were three serial PMDs supporting the LAN and WAN physical layers at 850-, 1310-, and 1550-nm wavelengths. The optical transceivers supported by the PMD sublayer operate over Wide Wavelength Division Multiplexing (WWDM), Multi-Mode Fiber (MMF), and Single-Mode Fiber (SMF), providing support for transmission distances from 65 m to 40 km. Table 3.9 lists the optical transceivers supported by the PMD interfaces and their ranges.
Table 3.9 10 GbE Optical Transceivers

Standard          Optical Transceiver   Maximum Range
10GBASE-SR/-SW    850 nm, MMF           300 meters
10GBASE-LX4       1310 nm, MMF          300 meters
10GBASE-LR/-LW    1310 nm, SMF          10 km
10GBASE-ER/-EW    1550 nm, SMF          40 km

WAN Physical Layer As previously mentioned, the 10 GbE WAN physical layer provides compatibility with SONET OC-192c and SDH STM-64. To accomplish this task an additional SONET/SDH sublayer was added as an option that maps the encoded data stream of the LAN physical layer into the payload envelope of a SONET frame. Although the 10 GbE WAN physical layer is SONET "friendly" and enables the use of the SONET infrastructure, it is not actually SONET compliant. This means that a 10 GbE device can be connected to SONET access devices, but not directly into a SONET infrastructure. Figure 3.29 illustrates how the 10 GbE WAN physical layer allows Ethernet frames to be encapsulated into SONET frames.

Figure 3.29 Encapsulating 10 GbE frames in SONET (Ethernet frames are carried in the SONET Payload Envelope (SPE) of an STS-192c frame, behind the section, line, and path overhead)
10 GbE over Copper Similar to GbE, there are two standards for 10 GbE over copper: 10GBASE-CX4 and 10GBASE-T.
10GBASE-CX4 10GBASE-CX4 represents the standard for 10 GbE via twin-axial cable. Under this standard, which is technically referred to as 802.3ak, four pairs of twin-axial copper wiring are used. The 10GBASE-CX4 standard uses the XAUI interface specified in the IEEE 802.3ae standard. Instead of transmitting 10 Gbps over a single link, the 802.3ak specification uses four transmitters and four receivers over a bundle of thin twin-axial cables, with each path operating at 3.125 Gbaud and using 8B/10B coding to transmit data at 2.5 Gbps. Although the standard defines a maximum cable distance of 15 m, the actual cable distance depends upon the diameter of the conductors. For example, Cisco Systems offers four 10GBASE-CX4 cables with American Wire Gauges (AWG) of 28 and 26: the 28 AWG cable is 15 m in length, and the 26 AWG cables are offered in 1-, 5-, and 10-m lengths.
10GBASE-T 10 GbE over unshielded twisted-pair cabling was standardized by the IEEE as the 802.3an standard during June 2006. While transmitting 10 GbE over UTP was at one time considered impossible, the standards committee members relied upon the use of analog-to-digital conversion, echo cancellation, a transform-based parallel processing technique, and equalization, as well as a new cabling standard, to enable 10 Gbps operation. The new cabling standard is Category 6a, specified by TIA/EIA-568A addendum 10, which enables transmission up to 100 m.
CAT6a Cable Although CAT6 cabling could support 10 Gbps transmission, it could only do so up to approximately 55 m. Thus, there are now three types of CAT6 cable:

1. CAT6: Specified for up to 250 MHz
2. CAT6e: Extended CAT6, specified for up to 500 MHz
3. CAT6a: New CAT6 cable for 10GBASE-T, defined for up to a 625-MHz signaling rate

CAT6a cable increased the twist of the cable pairs and varied the twist rates among the four pairs as a mechanism to control coupling. In addition, the diameter of the cable was increased to 0.31 in. from 0.22 in., and a separator was installed to control the pair positions within the cable.
Functions To achieve a 10 Gbps data rate over UTP the 10GBASE-T standard supports a number of specialized functions, including a self-synchronizing scrambler, a 128-Double-Square (DSQ) coset-partitioned constellation, the use of Low Density Parity Check (LDPC) block codes, and the use of Tomlinson–Harashima Precoding (THP). Figure 3.30 illustrates the 10GBASE-T transmitter functions: scramble, LDPC encode, 128-DSQ map, Tomlinson–Harashima precode, and four-lane PAM-16 modulation. It should be noted that although the standard defines exactly how the transmitter must function, the receiver implementation is left to the individual chip manufacturer to decide.

Figure 3.30 10GBASE-T transmission functions
LDPC The LDPC represents a block code that performs error correction and can be used to approach the Shannon capacity of a data channel. Under the 10GBASE-T standard a block size of 2048 bits with 325 check bits is used, referred to as a (2048, 1723) code.
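The (2048, 1723) parameters imply the following overhead and code rate:

```python
n, k = 2048, 1723        # total block bits, information bits
check_bits = n - k       # 325 parity-check bits
code_rate = k / n        # fraction of each block carrying data
print(check_bits, round(code_rate, 3))  # 325 0.841
```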
128-DSQ A second key function defined by the 10GBASE-T standard is the use of a two-dimensional 128-DSQ constellation pattern, which conveys 7 bits per baud. 128-DSQ can be visualized as a 16 × 16 checkerboard without the white squares. The resulting 128-position constellation pattern is partitioned into eight regions of 16 points each, called cosets. Each coset conveys 3 uncoded bits as well as 4 coded bits that are protected using the LDPC block code. The resulting DSQ symbol is then precoded using THP, which places the equalizer for the channel in the transmitter, and transmitted using 16-level PAM (PAM-16) signals. As a result of the previously described process, which is spread over all four copper pairs, a raw bit rate of 11.2 Gbps is obtained.
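The checkerboard picture can be verified by enumeration: keeping only one color of a 16 × 16 board leaves 128 points, which is exactly 7 bits per two-dimensional symbol:

```python
import math

# Keep only the "dark" squares of a 16 x 16 checkerboard.
points = [(x, y) for x in range(16) for y in range(16) if (x + y) % 2 == 0]
print(len(points))             # 128 constellation points
print(math.log2(len(points)))  # 7.0 bits per DSQ symbol
```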
Applications Although a data rate of 10 billion bits per second is well beyond the need of many small- and medium-sized organizations, it provides the ability to remove data blockage that can adversely affect some networks used by such organizations. Thus, we can reasonably expect 10 GbE to move into data centers as well as wiring closets and vertical risers to connect closets on different floors within a building. In addition, 10 GbE can be expected to link data centers on a campus or distributed in different offices within a metropolitan area. Because there are three basic versions of 10 GbE from a transmission distance perspective, readers need to examine carefully the transmission distance requirements of each application
prior to selecting an appropriate 10 GbE technology. Table 3.10 provides a summary of the potential use of 10 GbE twin-axial, copper-pair, and fiber media against four common applications.

Table 3.10 Comparing 10 GbE Media and Applications

Application                                   10GBASE-CX4    10GBASE-T       10GBASE (Fiber)
Data center server–hub–router connections     Yes (≤ 15 m)   Yes (≤ 100 m)   Yes
Wiring closet hubs to desktop                 No             Yes             Yes
Vertical risers connecting wiring closets     No             Yes (≤ 100 m)   Yes
Metropolitan area campus/distributed offices  No             No              Yes
100 GbE Until late 2006 a debate occurred as to whether the IEEE should focus its efforts upon 40 or 100 GbE. The debate ended in December 2006, when the IEEE decided to retain the tradition of increasing Ethernet's speed in increments of ten and announced that its 802.3 Higher Speed Study Group (HSSG) had passed a vote to standardize development of the next generation of Ethernet, which would operate at 100 Gbps. The IEEE will first concentrate on 100 Gbps over fiber and, at the time this book was researched, had not decided whether that speed would be possible over copper. If the past is any guide to the future, readers can expect a preliminary 100 GbE standard during 2009.
Ethernet in the First Mile Ethernet in the First Mile (EFM) represents a collection of protocols, standardized as IEEE 802.3ah, that defines the use of Ethernet for network access and egress. That is, EFM defines how Ethernet can be used in the "first" and "last" mile to access and egress a carrier infrastructure.
Architectures Under the IEEE 802.3ah standard, two different architectures are defined for Ethernet access: active Ethernet and Ethernet over a Passive Optical Network. In actuality, EFM defines eight physical layer (PHY) interfaces that include active and
passive methods to extend Ethernet technology via the access line connecting end users and communications subscribers. Through the use of EFM, subscribers are able to use a familiar Ethernet interface connection, which lowers costs as well as provides the ability for end users to connect to a communications carrier at a higher data rate than typically available through the use of other connection methods. This in turn can provide end users with the ability to have a higher-speed connection to the Internet.
Physical Layer Interfaces Under the IEEE 802.3ah specification eight new physical layer interfaces were defined. Table 3.11 lists those interfaces with a short description of the data rate and distance each interface can provide. As indicated in Table 3.11, there are two different types of interfaces defined in the EFM standard, copper and fiber, with the latter consisting of active and passive interfaces.

Table 3.11 Ethernet First-Mile Interfaces

Interface        Description
2BASE-TL         Provides point-to-point full-duplex 2 Mbps over voice-grade copper wiring at distances up to 2700 m (9000 ft) using SHDSL technology
10PASS-TS        Provides point-to-point full-duplex Ethernet at or above 10 Mbps over voice-grade copper wiring at distances up to 750 m (2460 ft) using VDSL technology
100BASE-LX10     Provides point-to-point 100 Mbps Ethernet over a pair of single-mode fibers at distances up to 10 km
100BASE-BX10     Provides point-to-point 100 Mbps Ethernet over an individual single-mode fiber at distances up to 10 km
1000BASE-LX10    Provides point-to-point GbE over a pair of single-mode fibers at distances up to 10 km
1000BASE-BX10    Provides point-to-point GbE over an individual single-mode fiber at distances up to 10 km
1000BASE-PX10    Provides point-to-multipoint GbE over a passive optical network at distances up to 10 km
1000BASE-PX20    Provides point-to-multipoint GbE over a passive optical network at distances up to 20 km
Copper Interfaces The EFM copper interfaces enable existing unshielded twisted-pair wiring to be used via SHDSL or VDSL. At a minimum, Ethernet can flow directly into the carrier network at 2 Mbps at a distance of up to 2.7 km when SHDSL provides the transmission facility, while a data rate at or above 10 Mbps at a distance up to 750 m becomes possible when VDSL technology is used.
Fiber Interfaces A second method defined for delivering Ethernet over the access line is through the use of a direct fiber connection. As previously indicated in Table 3.11, there are six defined standards: four active and two passive optical-fiber methods.
Wavelength Utilization When a dual-fiber topology is used, such as 100BASE-LX10 and 1000BASE-LX10, the wavelength used on each fiber is 1310 nm. When a single fiber is used, the wavelength depends upon whether the fiber is active or passive. For active fiber, such as 100BASE-BX10, a 1310-nm wavelength is used for upstream and a 1550-nm wavelength is used for downstream. For passive fiber, which includes 1000BASE-PX10 and 1000BASE-PX20, a wavelength of 1310 nm is used for upstream while a wavelength of 1490 nm is used for downstream.
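The wavelength plans described above can be collected into a small lookup (an illustrative sketch; the dictionary and function are this example's own organization of the values given in the text):

```python
# EFM wavelength plans as described above, in nanometres.
WAVELENGTHS_NM = {
    "100BASE-LX10":  {"upstream": 1310, "downstream": 1310},  # dual fiber
    "1000BASE-LX10": {"upstream": 1310, "downstream": 1310},  # dual fiber
    "100BASE-BX10":  {"upstream": 1310, "downstream": 1550},  # single fiber, active
    "1000BASE-PX10": {"upstream": 1310, "downstream": 1490},  # single fiber, passive
    "1000BASE-PX20": {"upstream": 1310, "downstream": 1490},  # single fiber, passive
}

def downstream_nm(phy: str) -> int:
    """Return the downstream wavelength for an EFM PHY."""
    return WAVELENGTHS_NM[phy]["downstream"]

print(downstream_nm("1000BASE-PX20"))  # 1490
```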
Applications EFM provides a series of interfaces that enable business and residential users to use the ubiquitous Ethernet interface to gain access to a carrier’s infrastructure. The two copper standards are oriented towards residential and small business usage, and the series of fiber-based standards are oriented towards providing access to a number of tenants in apartment buildings and businesses residing in both vertical and horizontal structures.
Advantages There are several advantages associated with the use of Ethernet to provide first-mile access into a communications carrier network. First, a subscriber will be able to use a familiar Ethernet interface connector to attach to a LAN or an individual computer; however, the interface is a small portion of the overall picture. If one examined the manner by which traffic is routed from a customer’s premises through the first mile to a central office and then usually aggregated for transmission over a WAN, one
would note the use of a number of protocols and different types of equipment. Protocols used as transports can include ATM, the Point-to-Point Protocol (PPP), and even SONET or SDH. On an access-to-egress or end-to-end basis networking equipment can include DSL and cable modems, Digital Subscriber Line Access Multiplexers (DSLAMs), routers, switches, and various types of T-carrier multiplexers. As the need for protocol translations increases, so does the complexity and cost of network equipment. Thus, EFM enables data to flow into and possibly through a carrier’s network using the same frame format. This significantly simplifies the role of networking equipment and minimizes latency due to the absence of protocol translation.
Use of Dual Fibers In this section we will turn our attention to the use of fiber specified by the IEEE 802.3ah standard. In doing so, we will first examine the use of dual-fiber strands and then orient our focus to single-fiber strands.
100BASE-LX10 The 100BASE-LX10 specification defines a single-mode, dual-fiber connection at a data rate of 100 Mbps for distances up to 10 km. The signaling rate is 125 Mbaud; however, the use of 4B/5B NRZI encoding results in a data rate of 100 Mbps. The output of the laser defined for use under 100BASE-LX10 is −15 dBm, with a receiver sensitivity of −25 dBm. The use of a 1310-nm wavelength on each fiber makes it possible to use existing OC-3 optical transmitters and receivers, and the use of a 4B/5B encoding scheme makes it possible to reuse existing 100 Mbps fiber chipsets.
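The transmitter output and receiver sensitivity together define the optical power budget available for fiber loss, splices, and connectors (a simple subtraction, shown here with the 100BASE-LX10 figures from the text):

```python
tx_power_dbm = -15.0        # 100BASE-LX10 laser output
rx_sensitivity_dbm = -25.0  # receiver sensitivity
power_budget_db = tx_power_dbm - rx_sensitivity_dbm
print(power_budget_db)  # 10.0 dB available for fiber loss and connectors
```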
1000BASE-LX10 The 1000BASE-LX10 single-mode, dual-fiber connection uses a signaling rate of 1250 Mbaud and 8B/10B encoding to support a data rate of 1 Gbps at distances up to 10 km. Similar to 100BASE-LX10, this gigabit option uses a wavelength of 1310 nm on each fiber. The optical power of the laser is −9.5 dBm, with a receiver sensitivity of −20 dBm. One of the more interesting aspects of 1000BASE-LX10 is that GbE can also be supported on less expensive multi-mode dual fibers, in which case the transmission distance is reduced to 550 m. Figure 3.31 illustrates the use of either 100BASE-LX10 or 1000BASE-LX10 over dual fibers.

Figure 3.31 EFM dual fiber utilization (a 100/1000BASE-LX10 transceiver at each end of two fiber strands)
Figure 3.32 EFM PMD relationship when a single fiber is used (a 100BASE-BX10-U transceiver linked over one fiber strand to a 100BASE-BX10-D transceiver)
Use of Single Fibers The use of single fiber requires Wavelength Division Multiplexing (WDM) in which the wavelengths differ for transmission and reception. The two single-fiber EFM standards are 100BASE-BX10 and 1000BASE-BX10.
100BASE-BX10 100BASE-BX10 supports the use of single-mode, single-fiber cabling at a signaling rate of 125 Mbaud over a distance of 10 km. Using 4B/5B encoding results in a data rate of 100 Mbps. The optical output of the laser is −14 dBm, with a receiver sensitivity of −29.2 dBm. 100BASE-BX10 uses a wavelength of 1310 nm upstream and 1550 nm downstream. The upstream PMD is referred to as 100BASE-BX10-U and is located at the subscriber premises; the downstream PMD is called 100BASE-BX10-D and is located at the carrier central office. Figure 3.32 illustrates the PMD relationship for 100 Mbps single-fiber EFM.
1000BASE-BX10 The 1000BASE-BX10 specification provides a GbE version of the use of single-mode, single fiber for a span of 10 km. Similar to the 100 Mbps single-fiber specification, 1000BASE-BX10 divides the PMD into 1000BASE-BX10-U (uplink) and 1000BASE-BX10-D (downlink). Uplink uses a wavelength of 1310 nm, and downlink uses a wavelength of 1550 nm. The PMD for single-fiber 1000BASE-BX10 is the same as previously shown in Figure 3.32 once an extra zero is added to convert 100BASE to 1000BASE on each side of the illustration.
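The wavelength split described above can be captured in a small table. The PMD names and wavelengths below come from the text; the dictionary structure and helper function are just an illustrative sketch.

```python
# Wavelength plan for the single-fiber EFM PMDs. "tx_nm" is the wavelength
# each end transmits on; WDM keeps the two directions separate on one fiber.

BX10_WAVELENGTH_PLAN = {
    "100BASE-BX10-U":  {"location": "subscriber premises", "tx_nm": 1310},
    "100BASE-BX10-D":  {"location": "carrier central office", "tx_nm": 1550},
    "1000BASE-BX10-U": {"location": "subscriber premises", "tx_nm": 1310},
    "1000BASE-BX10-D": {"location": "carrier central office", "tx_nm": 1550},
}

def peer_pmd(pmd: str) -> str:
    """Return the PMD name expected at the far end of the single fiber."""
    return pmd[:-1] + ("D" if pmd.endswith("U") else "U")

print(peer_pmd("100BASE-BX10-U"))  # 100BASE-BX10-D
```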
EPON In concluding our overview of EFM, we will turn our attention to Ethernet over Passive Optical Networks (EPONs). First, we will describe the role of a passive optical network. Once this is done we will focus our attention on the operation of EPON.
Evolution Passive optical networks were developed to address the last mile of communications, commonly known as the local loop or access network. Originally developed
for use with ATM (APON), the growth in the use of Ethernet resulted in the development of EPON.
Operation A Passive Optical Network (PON) is a single, shared optical fiber that uses simple, low-cost splitters to divide a single fiber into separate strands routed to individual subscribers. Because it is a one-to-many network technology, it is also referred to as a multi-point network architecture. Similar to a cable splitter, optical splitters need no power. Thus, PONs are referred to as “passive” even though there are active elements at the subscriber premises and carrier office. However, because electrical equipment is eliminated in the first mile, the cost of a PON only depends upon the cost of the fiber, fiber splitters, and installing the fiber.
Architecture At a carrier’s central office an Optical Line Terminal (OLT) is used to terminate a single strand of fiber. The fiber is then routed to a passive optical splitter that provides a 32-way maximum split. Transmit and receive signals on each of the up to 32 split fibers operate on different wavelengths, enabling bidirectional operations on each of the resulting fibers. Optical Network Units (ONUs) are then located at the curb in a neighborhood, the point of presence in a building, or at a demarcation point outside a home. The ONU provides an interface between a customer’s voice, video, and data networks and the PON. To do so, the ONU receives traffic in an optical format and converts it to the subscriber’s desired format, such as Ethernet, ATM, T1, T3, and so on. Figure 3.33 illustrates the use of an EPON from an OLT located in a carrier’s office to a subscriber’s ONU. In this example, up to N ONUs, where N ≤ 32, can be used based upon the use of a 1:N optical splitter. In examining Figure 3.33, note that the use of different wavelengths enables EPON to operate in full-duplex mode in a single-fiber, point-to-multi-point (P2MP) topology. Thus, there is no need for the use of the CSMA/CD access protocol.
Central Office OLT (Tx/Rx with WDM and medium access logic) connects through a 1:N optical splitter to the subscriber’s ONU and up to 31 additional ONUs, with wavelength 1 carrying downstream traffic and wavelength 2 carrying upstream traffic; 802.3 frames (header, payload, FCS) are carried end to end.
Figure 3.33 EPON from carrier to customer
Figure 3.34 Downstream delivery of Ethernet frames
Downstream Data Flow At the head end OLT located in the communication carrier’s central office, a TDMA protocol is used to enable data destined to one subscriber at a time to flow from the OLT towards the optical splitter. From the optical splitter, Ethernet frames are replicated onto each split fiber, with the ONU located at the subscriber’s premises filtering Ethernet frames so that only frames addressed to the subscriber are passed. The downstream delivery of Ethernet frames via an EPON is illustrated in Figure 3.34.
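The broadcast-and-filter behavior just described can be sketched as follows. The Frame and ONU classes here are hypothetical simplifications for illustration, not an implementation of the standard.

```python
# Sketch of EPON downstream delivery: the passive splitter replicates every
# frame to all ONUs, and each ONU keeps only frames addressed to it.

from dataclasses import dataclass

@dataclass
class Frame:
    dest: str       # destination MAC address
    payload: bytes

class ONU:
    def __init__(self, mac: str):
        self.mac = mac
        self.delivered = []   # frames passed up to the subscriber

    def receive(self, frame: Frame) -> None:
        # Filtering step: accept only frames addressed to this subscriber.
        if frame.dest == self.mac:
            self.delivered.append(frame)

def broadcast_downstream(frame: Frame, onus) -> None:
    """The passive splitter replicates the frame onto every split fiber."""
    for onu in onus:
        onu.receive(frame)

onus = [ONU("00:11:22:33:44:01"), ONU("00:11:22:33:44:02")]
broadcast_downstream(Frame("00:11:22:33:44:02", b"data"), onus)
print(len(onus[0].delivered), len(onus[1].delivered))  # 0 1
```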
Upstream Data Flow In the upstream direction, subscriber data reaches a relevant ONU, which uses TDMA to place Ethernet frames in defined time slots. By varying the size of each time slot, it becomes possible to vary the bandwidth allocated to subscribers. Figure 3.35 illustrates the flow of upstream data.
Figure 3.35 Upstream data flow
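The idea of allocating bandwidth by varying time-slot sizes can be sketched as a proportional division of a grant cycle. The weighting scheme below is an illustrative assumption; MPCP itself only carries the resulting slot assignments.

```python
# Sketch of upstream bandwidth allocation by time-slot sizing: the OLT
# divides one grant cycle among ONUs in proportion to configured weights.

def allocate_slots(cycle_us: float, weights: dict) -> dict:
    """Split one grant cycle (in microseconds) among ONUs by relative weight."""
    total = sum(weights.values())
    return {onu: cycle_us * w / total for onu, w in weights.items()}

slots = allocate_slots(1000.0, {"ONU1": 1, "ONU2": 2, "ONU3": 1})
print(slots)  # ONU2, with twice the weight, gets half of the cycle
```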
Table 3.12 Key Functions Performed by MPCP

Time slot assignment
Bandwidth assignment
Auto-discovery process
Timing reference to synchronize ONUs
MPCP The allocation of time slots as well as the allocation of bandwidth and other upstream functions are specified by the Multi-Point Control Protocol (MPCP). The MPCP specifies the operations that occur between ONUs and an OLT in the upstream and downstream directions. Table 3.12 lists the key functions performed by the protocol. MPCP defines five new MAC control messages. Two messages, GATE and REPORT, are used to assign and request bandwidth; three messages, REGISTER-REQ, REGISTER, and REGISTER-ACK, are used during the auto-discovery process. Although filtering by ONUs enables users to receive only the data destined to them in the downstream direction, in the upstream direction collisions would occur without an arbitration mechanism. That mechanism is obtained by the OLT allocating time slots to each ONU through the use of GATE and REPORT messages. The GATE message is transmitted from the OLT to the ONU and is used to assign a time slot to the ONU. In comparison, the REPORT message flows from an ONU to the OLT to request another time slot by reporting the amount of data queued. Both the GATE and REPORT messages are carried in a standard 64-byte MAC control frame. The ONU that receives a GATE message and responds with a REPORT is referred to as a logical link and is identified through the use of a Logical Link Identifier (LLID). To better understand the use of the LLID, we need to examine how MAC control messages are defined.
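A minimal sketch of the GATE/REPORT exchange follows. The opcodes are those given in the text; the message layout is deliberately simplified and is not the full 64-byte MAC control frame.

```python
# Simplified GATE/REPORT exchange between an OLT and one logical link (LLID).
# GATE (opcode 0x0002) assigns a slot; REPORT (opcode 0x0003) requests more
# bandwidth by reporting queue depth.

OPCODE_GATE = 0x0002
OPCODE_REPORT = 0x0003

def make_gate(llid: int, start_time: int, length: int) -> dict:
    """OLT -> ONU: grant a transmission slot to the logical link."""
    return {"opcode": OPCODE_GATE, "llid": llid,
            "start": start_time, "length": length}

def make_report(llid: int, queued_bytes: int) -> dict:
    """ONU -> OLT: report queued data to request another time slot."""
    return {"opcode": OPCODE_REPORT, "llid": llid, "queued": queued_bytes}

gate = make_gate(llid=7, start_time=1000, length=200)
report = make_report(llid=7, queued_bytes=4096)
print(gate["opcode"], report["opcode"])  # 2 3
```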
MAC Control Messages MAC control messages are defined via the use of the Ethertype field, with de-multiplexing performed through the use of a 2-byte MAC control opcode field that follows the 2-byte Ethertype field. Under MPCP, Ethernet frames are prefixed with an 8-byte preamble field that carries the LLID and a CRC. The format of this new preamble field is shown in Figure 3.36. The 2-byte LLID shown in Figure 3.36 consists of a 1-bit mode indicator and a 15-bit physical ID. The mode indicator is used to identify multicast or unicast transmission, while the CRC field provides a CRC8 protection to the
The 8-byte preamble consists of a 1-byte SOP (start of packet) field, 4 reserved bytes, the 2-byte Logical Link ID (LLID), and a 1-byte CRC. The LLID consists of a U/M bit (0 = unicast, 1 = multicast) followed by a 15-bit ID. In the downstream direction the LLID serves as a destination logical ID; in the upstream direction it serves as a source logical ID.
Figure 3.36 The EPON preamble field format
preamble. When receiving a frame, the ONU examines the preamble CRC. If it is in error, the frame is discarded. Otherwise the LLID is examined and, if it matches the registered logical and physical ID, the frame is passed to the MAC layer; otherwise it is also discarded.
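The ONU's accept/discard logic can be sketched as below. The CRC-8 polynomial x^8 + x^2 + x + 1 (0x07) is an assumption here; consult IEEE 802.3 Clause 65 for the exact polynomial, coverage, and bit ordering.

```python
# Sketch of the ONU's preamble check: verify the CRC over the preamble,
# then compare the LLID against the registered value.

def crc8(data: bytes, poly: int = 0x07) -> int:
    """Bitwise CRC-8; polynomial 0x07 (x^8 + x^2 + x + 1) is assumed."""
    crc = 0
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = ((crc << 1) ^ poly) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
    return crc

def onu_accepts(preamble: bytes, crc: int, llid: int, registered_llid: int) -> bool:
    """Discard on a bad CRC; otherwise pass the frame up only on an LLID match."""
    if crc8(preamble) != crc:
        return False          # corrupted preamble: discard
    return llid == registered_llid

pre = bytes([0x55, 0x55, 0xD5, 0x00, 0x00, 0x12])
good_crc = crc8(pre)
print(onu_accepts(pre, good_crc, llid=0x12, registered_llid=0x12))  # True
```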
GATE and REPORT Message Formats The left portion of Figure 3.37 illustrates the format of a GATE message; the right portion of that illustration indicates the format of a REPORT message. Note that the opcode for a GATE message is 00-02 and the opcode of a REPORT message is 00-03. Also note that both messages include a 4-byte time stamp field. A GATE message can convey up to four grants (transmission slots) to an ONU. In the event that an ONU has more than one backlogged queue, it is up to that device to determine the manner by which each grant is allocated to a queue. In turn, the REPORT message can convey the status of up to eight queues to the OLT.
Point-to-Point Operations EPON uses point-to-point emulation so that the medium functions as a collection of point-to-point links. To accomplish this the emulation mechanism tags Ethernet frames with LLIDs in the frame preamble. Figure 3.38 illustrates the downstream and upstream transmissions between the OLT and ONUs. Note that for point-to-point emulation the OLT needs one interface for each logical link. When transmitting a frame downstream, the emulator in the OLT inserts an LLID associated with a particular port on which the frame arrived. Although the frame flows through a splitter and is broadcast to each ONU, only the ONU that can match the frame’s LLID with its own assigned value
GATE message format (field, length in bytes):
Destination Address (6), Source Address (6), Length/Type (2), Opcode = 00-02 (2), Time Stamp (4), Number of Grants/Flags (1), Grant 1 Start Time (0/4), Grant 1 Length (0/2), Grant 2 Start Time (0/4), Grant 2 Length (0/2), Grant 3 Start Time (0/4), Grant 3 Length (0/2), Grant 4 Start Time (0/4), Grant 4 Length (0/2), Sync Time (0/2), Pad/Reserved (13-39), CRC (4)

REPORT message format (field, length in bytes):
Destination Address (6), Source Address (6), Length/Type (2), Opcode = 00-03 (2), Time Stamp (4), Number of Queue Sets (1), Report Bitmap (1), Queue 0 Report (0/2), Queue 1 Report (0/2), Queue 2 Report (0/2), Queue 3 Report (0/2), Queue 4 Report (0/2), Queue 5 Report (0/2), Queue 6 Report (0/2), Queue 7 Report (0/2), Pad/Reserved (0-39), CRC (4)

Figure 3.37 GATE and REPORT message formats
Downstream and upstream transmission paths between the OLT (one MAC plus emulation instance per logical link) and ONU1 through ONU3 (each with its own emulation function and MAC).
Figure 3.38 Point-to-point emulation in EPON
GATE message: OLT transmits at T1, ONU receives at T2. REPORT message: ONU transmits at T3 (local time stamp T4), OLT receives at T5.
Figure 3.39 Measuring round-trip time
will accept the frame. Thus, the P2MP transmission appears to be a point-to-point transmission due to emulation.
Delay Times To obtain an appreciation of the round-trip delay, we will examine the sequence of events between the issuance of a GATE message at an OLT and the OLT’s receipt of the REPORT message generated at the ONU. Using Figure 3.39 as an example of message flow, at time T1 the OLT transmits a GATE message. At time T2 the ONU receives the GATE message. In addition, the ONU then resets its local clock to the time stamp in the GATE message. Thus, the ONU clock now shows the time as T1. After a processing delay the ONU responds with a REPORT message at time T3, using the time stamp T4. The REPORT message flows to the OLT, which receives the message at time T5. From Figure 3.39 the Round-Trip Time (RTT) becomes
RTT = (T2 − T1) + (T5 − T3)
However, because T3 – T2 equals T4 – T1, then RTT = T5 – T4.
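Worked numerically, the derivation looks like this; the timestamp values are made up for illustration.

```python
# The RTT derivation above, in code. Times are in arbitrary "time quanta".

def rtt_from_timestamps(t1, t2, t3, t5):
    """RTT = downstream delay (T2 - T1) + upstream delay (T5 - T3)."""
    return (t2 - t1) + (t5 - t3)

t1, t2 = 100, 130          # GATE sent / received: 30 quanta downstream
t3 = 150                   # ONU transmits REPORT after a processing delay
t4 = t1 + (t3 - t2)        # ONU local clock was reset to T1 at T2
t5 = 180                   # REPORT arrives at the OLT

# Both expressions give the same answer, confirming RTT = T5 - T4.
assert rtt_from_timestamps(t1, t2, t3, t5) == t5 - t4
print(t5 - t4)  # 60

# RTT compensation: to receive data at time T, the OLT grants a slot
# whose start time is T - RTT.
arrival_target = 500
print(arrival_target - (t5 - t4))  # grant start time: 440
```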
RTT Compensation The OLT performs delay compensation by transmitting GATE messages to the ONU that reflect an arrival time compensated by an adjustment for the RTT.
Message exchange between OLT and ONU: a discovery GATE message is broadcast (opening a contention window); the ONU answers with a Register Request message carrying its MAC address; the OLT returns a REGISTER message carrying the Link ID, followed by a GATE message; the ONU replies with a REGISTER_ACK message, after which the channel is established.
Figure 3.40 The auto-discovery process
For example, assume the OLT is to receive data from an ONU at time T. Then, the OLT will adjust its GATE message so that it contains a slot start whose value is set to T − RTT.
Auto-Discovery In concluding our examination of MPCP, we will discuss its auto-discovery feature. This feature is necessary for a new ONU to be recognized by the OLT. To accomplish this, the ONU and OLT will negotiate such parameters as the laser turn-on and turn-off times and the round-trip time between the ONU and OLT. Figure 3.40 illustrates the flow of messages during the discovery process. First, a “discovery GATE” message is broadcast to all stations. This is a GATE message that uses a destination address set to multicast. The newly added ONU responds with a REGISTER REQUEST message that flows in the upstream contention channel. This message indicates the physical ID and capabilities of the ONU as well as an echo of the capabilities of the OLT. In response to the REGISTER REQUEST the OLT returns a REGISTER message, followed by a GATE message. The REGISTER message uses a destination address of the ONU’s MAC address, while the content of the message includes the Link ID and an echo of previously received ONU capabilities. Thus, the REGISTER message assigns a Link ID to the ONU, with the ONU’s MAC address serving as its identifier. The following GATE message uses the ONU MAC address as the destination address and its content is “GRANT.” In response, the ONU acknowledges the GATE message with a REGISTER-ACK message, with the content of the message being an echo of the registered Link ID. At this time the channel between the OLT and ONU is established.
Chapter 4
Frame Formats In prior chapters of this book we discussed the fact that a key advantage of the use of Ethernet is that it provides a common frame format used at data rates from 10 Mbps through 100 Gbps. Thus, the title of this chapter may appear a bit puzzling to readers, so permit this author to provide a degree of elaboration. Ethernet has used a 1500-byte data field and 26 bytes of header and trailer fields from its original development at 10 Mbps through Fast Ethernet operating at 100 Mbps and Gigabit Ethernet operating at 1 Gbps as well as 10 GbE. If we first focus our attention upon Ethernet operating at 10 Mbps, we will note that while the field lengths remained uniform, their use varied, with the result that we can consider those variations to represent changes in Ethernet’s frame format. Similarly, Fast Ethernet retained the frame structure of its older Ethernet cousin, but used a few tricks to maintain compatibility. When the speed of Ethernet increased to 1 Gbps, several vendors created support for what is now referred to as a jumbo frame as a mechanism to enhance LAN and WAN performance. Although jumbo frames have not been officially recognized as a standard by the IEEE, they deserve mention and will be discussed in this chapter. In addition, when operating in a half-duplex mode Gigabit Ethernet uses a carrier extension as a mechanism to increase the range of transmission. Another area that modified the Ethernet frame was the standardization of virtual LANs (VLANs) by the IEEE. Because the standard requires a 4-byte VLAN tag, the use of Ethernet in a VLAN environment means that the Ethernet frame will be modified. Although the primary orientation of this chapter is towards obtaining an understanding of the different Ethernet frame formats, because different frames can have an effect upon performance we will also discuss this topic. Thus, grab a beverage
Preamble (8 bytes) | Destination Address (6) | Source Address (6) | Type (2) | Data (46-1500) | Frame Check Sequence (4)
Figure 4.1 The Ethernet DIX frame
and perhaps a few munchies, and we will explore the wonderful world of Ethernet frame formats and, when applicable, their effect upon network performance.
Basic Ethernet Prior to the IEEE 802.3 Committee taking responsibility for the development of Ethernet, this LAN was administered by the so-called DIX consortium consisting of Digital Equipment Corporation, Intel, and Xerox Corporation. At that time the format of an Ethernet frame followed the DIX standard and in some literature is referred to as the Ethernet Version II or DIX frame, which is illustrated in Figure 4.1.
The Ethernet II/DIX Frame Because the Ethernet DIX frame provides a base for further modifications that occurred, we will examine each field in the frame in detail.
Preamble Field The Preamble field consists of 8 bytes of alternating binary 1s and 0s. The purpose of this field is to announce the frame as well as to enable receivers on the network to synchronize themselves to the incoming frame. In addition, frames are separated by a minimum interframe gap of 96 bit times, or 9.6 μs at 10 Mbps, which provides time between frames for error detection and recovery operations.
Destination Address Field The Destination address field identifies the recipient of the frame. Although this field appears to be simple, the first 2 bits are used to define how the following 46 bits are interpreted. Figure 4.2 illustrates the subdivision of the Destination address field into three subfields. As indicated, the setting of the I/G bit subfield to a binary 0 indicates that the 46-bit address is an individual or unicast address. When the value of the subfield is set to a binary 1, the address field represents a group address. One specific example of a group address is the assignment of all 1s to the address field. Here the
I/G bit | U/L bit | 46 address bits (22-bit vendor identifier plus 24 bits assigned by vendor)
I/G bit subfield: 0 = individual address, 1 = group address
U/L bit subfield: 0 = universally administered addressing, 1 = locally administered addressing
(The I/G bit is set to 0 in the Source address field.)
Figure 4.2 The Ethernet destination address field
address hex FFFFFFFFFFFF is recognized as a broadcast address and each station on a network will receive and accept frames with that address. The second subfield, which is labeled U/L, is set to a binary 0 to represent the fact that the address is a universally administered address, while a setting of binary 1 indicates that the address is a locally administered address. As a refresher, a universally administered address indicates that the address is unique and reflects a “burnt-in” ROM address on a NIC or Ethernet chipset. Initially Xerox was responsible for administering universal addresses by assigning one or more blocks representing the first 24 bits (3 bytes) to vendors. Later the IEEE became responsible for administrative functions to include assigning blocks of 24-bit addresses to vendors. Vendors would then use the remaining 24 bits to assign unique addresses to one or more blocks of addresses based upon the popularity of their products. That is, once they used all 2^24 − 1 addresses, they would begin anew with a new vendor code.
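The two flag bits can be read directly from the first address byte. Note that in the bit-transmission order used on the wire, the I/G bit is the least-significant bit of the first byte and the U/L bit is the next bit up; the helper below is a sketch based on that convention.

```python
# Parse the destination-address subfields described above from a raw
# 6-byte MAC address.

def parse_mac_flags(mac: bytes) -> dict:
    first = mac[0]
    return {
        "group": bool(first & 0x01),        # I/G bit: 1 = group address
        "local": bool(first & 0x02),        # U/L bit: 1 = locally administered
        "broadcast": mac == b"\xff" * 6,    # all-1s group address
        "oui": mac[:3].hex("-"),            # vendor identifier block
    }

print(parse_mac_flags(bytes.fromhex("ffffffffffff"))["broadcast"])  # True
print(parse_mac_flags(bytes.fromhex("001122334455"))["group"])      # False
```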
Source Address Field The Source Address field identifies the station that transmitted the frame. Similar to the destination address, the source address field is 6 bytes in length, with the first 3 bytes (actually the last 24 bits in the first 3 bytes) assigned initially by Xerox and later by the IEEE to the manufacturer for incorporation into each NIC’s ROM. The vendor then normally assigns the last 3 bytes to each of its NICs or chipsets so that each 46-bit address is unique. Because the first 3 bytes are assigned to manufacturers, another term used to reference this subfield is the “manufacturer subfield.”
Type Field The 2-byte Type field was used by Xerox and later by the DIX consortium to identify the protocol transported with an Ethernet frame. Because the minimum length
of an Ethernet frame is 64 bytes, each of the higher level protocols defined by a type field value required either a larger minimum message length or an internal field that could be used to distinguish data from padding.
Data Field The Data field must be a minimum of 46 bytes in length to ensure that an Ethernet frame without considering the 8-byte preamble is 64 bytes in length. This means that the transmission of 1 byte of information must be carried within a 46-byte data field and results in the padding of the remainder of the field if the information to be placed in the field is less than 46 bytes. Although some publications subdivide the data field to show a PAD subfield, the latter actually represents optional fill characters that are added to the information in the data field to ensure a length of at least 46 bytes. The maximum length of the data field is 1500 bytes, which results in the use of multiple frames to transport full screen images, photographs, and almost all file transfers.
Frame Check Sequence Field The Frame Check Sequence (FCS) is applied by the transmitter computing a Cyclic Redundancy Check (CRC) that covers both address fields, the type field, and the data field. The transmitter then places the CRC in the 4-byte FCS field. The CRC is developed by treating the composition of the previously named fields as one long binary number. The n bits to be covered by the CRC are considered to represent the coefficients of a polynomial M(x) of degree n − 1. Here, the first bit in the destination address field corresponds to the x^(n−1) term, whereas the last bit in the data field corresponds to the x^0 term. Next, M(x) is multiplied by x^32 and the result of that multiplication process is divided by the following polynomial:

G(x) = x^32 + x^26 + x^23 + x^22 + x^16 + x^12 + x^11 + x^10 + x^8 + x^7 + x^5 + x^4 + x^2 + x + 1

Readers should note that the term x^n represents the setting of a bit to a 1 in position n. Thus, the part of the generating polynomial x^5 + x^4 + x^2 + x + 1 represents the binary value 110111. The result of the division produces a quotient and a remainder. The quotient is discarded and the remainder becomes the CRC value, which is placed in the 4-byte FCS field. The 32-bit CRC reduces the probability of an undetected error to approximately 1 bit in every 4.3 billion, or 1 bit in 2^32 − 1 bits. Once a frame reaches its destination the receiver uses the same polynomial to perform the same operation upon the received data, resulting in the computation of a “local” CRC. If the locally generated CRC matches the one in the FCS field,
the frame is accepted. Otherwise, the receiver discards the frame as it is considered to have one or more bits in error. The receiver will also consider a received frame to be invalid and discard it under two additional conditions. Those conditions occur when the frame does not contain an integral number of bytes or when the length of the data field does not match the value contained in the length field. The latter condition is only applicable to the IEEE 802.3 frame, which we will examine next.
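For readers who want to experiment with the FCS computation, Python's zlib.crc32 uses this same generator polynomial, together with the bit reflection and final inversion that Ethernet applies. The widely published check value for the ASCII string "123456789" is 0xCBF43926.

```python
# Compute a CRC-32 over the fields an Ethernet FCS covers (addresses,
# type, and data), using the standard library.

import zlib

def ethernet_fcs(frame_fields: bytes) -> int:
    """CRC-32 over destination address, source address, type, and data."""
    return zlib.crc32(frame_fields) & 0xFFFFFFFF

# Well-known CRC-32 check value for the ASCII digits "123456789".
assert ethernet_fcs(b"123456789") == 0xCBF43926
print(hex(ethernet_fcs(b"123456789")))  # 0xcbf43926
```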
The 802.3 Frame The key differences between the Ethernet Type II/ DIX frame and the IEEE 802.3 Ethernet frame are in the preamble field and the change from a 2-byte type field to a 2-byte length field.
Length Field When the IEEE took over the standardization of Ethernet, it needed to ensure a minimum frame length of 64 bytes to enable collision detection to work properly. Although the Ethernet II/DIX frame solved the minimum frame length through the selection of higher level protocols that resulted in a minimum frame length of 64 bytes, for Ethernet to be interchangeable with other types of LANs the IEEE needed to replace the type field with a length field to distinguish data from padding.
Preamble Field Modification The second change to the Ethernet II/DIX frame occurred by “splitting” the 8-byte Preamble field into a 7-byte Preamble field and a 1-byte Start of Frame Delimiter (SFD) field. The 8-byte Preamble field consisted of a repeating sequence of binary 0s and 1s. When the field was split the new 7-byte Preamble field retained this sequence, which was then carried over into the start of frame delimiter field with the exception of the last 2 bits, which were now both set to binary 1s. Thus, the only difference between the old Preamble field and the new Preamble and start of delimiter fields is one set bit which assists the synchronization effort.
Type/Length Field Values When Xerox was the custodian of Ethernet, it had not assigned any important protocols a type value below 1536 (hex 0600). Because the maximum length of an Ethernet Data field is 1500 bytes and the change in the Preamble field only enhances synchronization, in essence DIX and 802.3 frames are compatible. That is, any Ethernet frame with a Type/Length field value of 1500 or less is an 802.3 frame,
Preamble (8 bytes) | Destination Address (6) | Source Address (6) | Length/Type (2) | Data (46-1500) | Frame Check Sequence (4)
802.2 header carried at the start of the Data field: DSAP (1 byte) | SSAP (1 byte) | Control (1 byte) | Data
Figure 4.3 The 802.2 header
and any frame with a field value of 1536 (hex 0600) or greater is in DIX format, with the field representing a type field. Thus, many times this field is referred to as a type/length field.
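The disambiguation rule can be written out directly. The 1536 (hex 0600) boundary is the one the IEEE later standardized; values between 1501 and 1535 are left undefined.

```python
# Classify the 2-byte field that follows the source address: a length in
# an 802.3 frame, or an Ethertype in an Ethernet II/DIX frame.

def classify_type_length(value: int) -> str:
    if value <= 1500:
        return "length (802.3 frame)"
    if value >= 0x0600:
        return "type (Ethernet II/DIX frame)"
    return "undefined"

print(classify_type_length(46))       # length (802.3 frame)
print(classify_type_length(0x0800))   # type (Ethernet II/DIX frame)
```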
The 802.2 Header When the IEEE created the 802.3 Ethernet frame, their desire for compatibility with fields in other types of LANs resulted in the 802.2 Logical Link Control (LLC) header following the 802.3 header. The 802.2 header is 3 bytes in length and is situated within the 1500-byte data field, reducing the amount of information that can be transported in that field. Figure 4.3 illustrates the 802.2 LLC header within an 802.3 frame. The first 2 bytes of the 3-byte 802.2 header identify the Destination Service Access Point (DSAP) and Source Service Access Point (SSAP). A bit later in this chapter we will discuss the role of each field. The following control field is 1 byte, resulting in the 802.2 header being 3 bytes in length. This LLC header is used for connectionless data transmitted by the older Ethernet II/DIX protocols. For SNA, NETBEUI, and other connection-oriented data, a 4-byte LLC header is used, with the two SAP fields set to hex 0404 for SNA and hex F0F0 for NETBEUI.
Subnetwork Access Protocol One of the problems associated with the IEEE 802.3 frame is the fact that its use of a length field precludes it from providing support for transporting multiple higherlayer protocols. Recognizing this limitation, the IEEE developed the Sub-Network Access Protocol (SNAP) as a mechanism for transporting multiple protocols over networks. SNAP can be considered as a mechanism for multiplexing data on networks using the IEEE 802.2 LLC.
LLC Header Operation The 802.2 LLC header follows the data link header. The purpose of the LLC header is to provide a position in buffer memory where an adapter can place the data frame. Thus, the LLC header informs the upper layers where data can be stored and retrieved. Due to the placement and retrieval of data resembling the manner by which Post Office employees sort mail into boxes, the operation of the first two fields is often referenced with wording that resembles Post Office operations.
The SNAP Frame SNAP can be considered as an extension of the 802.2 LLC header that is inserted into the data field of an 802.3 frame, as illustrated in Figure 4.4. The SNAP header consists of two fields that follow the LLC header: an organization code and an Ethertype field. Note from Figure 4.4 that the 8 bytes of additional LLC and SNAP subfields reduce the data field to a value between a minimum of 38 bytes and a maximum of 1492 bytes. In examining Figure 4.4, you might be a bit puzzled as to the rationale for having a separate subnetwork header. The reason for SNAP results from the fact that at the time the LLC was designed most pundits thought that the use of a single byte that could specify up to 256 values would be sufficient to register all protocol values. Unfortunately, as values began to be registered the IEEE realized that the LLC header would soon run out of available values. Thus, the hex values AA and AB were reserved and the SNAP subheader was developed. To obtain an appreciation of the LLC header and SNAP, we will turn our attention to each of the subfields that are placed into the beginning of the old 1500-byte data field.
Preamble (7 bytes) | SFD (1) | Destination Address (6) | Source Address (6) | Length (2) | LLC header: DSAP, SSAP, CTRL | SNAP header: OC, EtherType | Data (38-1492) | Frame Check Sequence (4)
Figure 4.4 The Ethernet SNAP frame
DSAP Subfield The DSAP is a 1-byte field that acts as a pointer to a memory buffer in the receiving workstation. The DSAP value in effect tells the receiving NIC where data should be placed. A DSAP value of hex AA (or the reserved value AB) indicates that the frame is a SNAP frame.
SSAP Subfield The SSAP is similar to the DSAP, specifying the service access point of the sending process. To specify that this is a SNAP frame, the SSAP is set to a value of hex AA.
Control Subfield The Control subfield is 1 byte in length. The purpose of this field is to specify the type of LLC frame. When SNAP is present, the value of the control field is set to hex 03.
Organization Code Subfield The Organization code (OC) field is 3 bytes in length. Usually this field has the same value as the first 3 bytes in the source address field, although upon occasion it is set to zero.
Ethertype Subfield The 2-byte Ethertype subfield specifies which protocol is encapsulated within the IEEE 802 network. This field provides backward compatibility with the Ethernet Version II / DIX frame.
Data Field Because the LLC and SNAP fields use 8 bytes within the Data field, its information-carrying capacity is reduced. That reduction results in a Data field with a minimum of 38 bytes and a maximum of 1492 bytes. That Data field typically consists of upper layer headers, such as TCP/IP or IPX, followed by user data. For example, an Ethertype value of 2048 decimal (hex 0800) denotes IP and a value of 2054 decimal (hex 0806) indicates ARP is being transported.
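Pulling the subfield values above together, a receiver can classify the encapsulated protocol roughly as follows. The byte layout is a simplified sketch of an LLC/SNAP header starting at the DSAP byte.

```python
# Pick the encapsulated protocol out of an LLC/SNAP header. Ethertype
# values 0x0800 (IP), 0x0806 (ARP), and 0x8137 (IPX) are from the text.

PROTOCOLS = {0x0800: "IPv4", 0x0806: "ARP", 0x8137: "IPX"}

def parse_snap(llc_snap: bytes):
    """Return the protocol name if this is a SNAP header, else None."""
    dsap, ssap, ctrl = llc_snap[0], llc_snap[1], llc_snap[2]
    if dsap != 0xAA or ssap != 0xAA or ctrl != 0x03:
        return None                                   # not a SNAP frame
    ethertype = int.from_bytes(llc_snap[6:8], "big")  # skip 3-byte OC field
    return PROTOCOLS.get(ethertype, hex(ethertype))

header = bytes([0xAA, 0xAA, 0x03, 0x00, 0x00, 0x00, 0x08, 0x00])
print(parse_snap(header))  # IPv4
```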
IPX over Ethernet In concluding our examination of the initial series of Ethernet frames designed for legacy 10 Mbps LANs, this author would be remiss if he did not mention the transmission of IPX over Ethernet.
Preamble (7 bytes) | SFD (1) | Destination Address (6) | Source Address (6) | Length (2) | LLC header: DSAP = AA, SSAP = AA, CTRL = 03 | SNAP header: OC, EtherType = 8137 | Data = FF… (38-1492) | Frame Check Sequence (4)
Figure 4.5 Ethernet 802.3 SNAP transporting IPX
The 802.3 Raw Frame The transmission of IPX over Ethernet is commonly referred to as an 802.3 raw frame. Here IPX data immediately commences after the 802.3 header. Because the first 2 bytes in IPX are set to hex FF, it is a relatively simple matter to examine the composition of the bytes following the length field to determine that it is an 802.3 frame.
Other Encapsulations In addition to being transmitted within an 802.3 frame, IPX can be encapsulated within an Ethernet II frame, an 802.2 frame, and an 802.3 SNAP frame. In the first situation (Ethernet II/DIX frame), the Ethertype value of hex 8137 will denote IPX is being transported. In an 802.2 frame, the 802.3 frame header (destination address, source address, length) is followed by the 3-byte LLC header. When the 3-byte LLC header is set to hex E0, E0, 03, IPX data will be in the Data field, as the hex E0 values in the DSAP and SSAP indicate the Novell protocol is being transported. The third additional method used to transport IPX is via an Ethernet SNAP frame. In this situation the DSAP and SSAP subfield values are each hex AA and the control field value is hex 03 to denote a SNAP frame. Then, the Ethertype value of hex 8137 is used to denote IPX encapsulation, as shown in Figure 4.5.
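The four IPX encapsulations can be distinguished programmatically by the bytes that follow the type/length field. The classifier below is a sketch; it assumes the input starts at the destination address with the preamble and SFD already stripped.

```python
# Classify the four IPX encapsulations described above by inspecting the
# bytes after the 2-byte type/length field (at offset 12).

def ipx_framing(frame: bytes):
    type_len = int.from_bytes(frame[12:14], "big")
    after = frame[14:]
    if type_len >= 0x0600:
        # A type value: IPX over Ethernet II/DIX uses Ethertype 0x8137.
        return "Ethernet II/DIX" if type_len == 0x8137 else None
    if after[:2] == b"\xff\xff":
        return "802.3 raw"                  # IPX checksum bytes follow directly
    if after[:3] == b"\xe0\xe0\x03":
        return "802.2 LLC"                  # Novell SAP values
    if after[:3] == b"\xaa\xaa\x03" and after[6:8] == b"\x81\x37":
        return "802.3 SNAP"                 # SNAP with Ethertype 0x8137
    return None

raw = bytes(12) + (30).to_bytes(2, "big") + b"\xff\xff" + bytes(28)
print(ipx_framing(raw))  # 802.3 raw
```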
Full Duplex and the Pause Frame Originally Ethernet used the CSMA/CD protocol that resulted in transmission occurring in a half-duplex mode of operation. This resulted from the fact that when
a station needed to transmit information, it first had to listen to the media and wait for an idle period to occur due to the shared media design of the protocol. With the introduction of 10BASE-T, it became possible for the physical media to support the simultaneous transmission and reception of data. In fact, in 1997 the IEEE released the 802.3x standard, which defined full-duplex transmission on point-to-point links that connect exactly two stations. In addition, the standard also incorporated DIX framing, which eliminated the 802.3/DIX frame split. Under the IEEE 802.3 standard, both stations on the link must be capable of, and configured for, full-duplex operation.
Advantages The ability of two stations to simultaneously transmit and receive data in effect doubles the potential data transfer capability. For example, assume a 10BASE-T station is connected to a 10BASE-T switch port and both support and are configured for full-duplex operation. Then, because data can be transmitted and received at the same time, the aggregate bandwidth of the link is 20 Mbps, even though the station can only transmit at 10 Mbps. Because full-duplex operation enables a device to transmit without first listening to the media, the efficiency of the link is improved. In addition, because no collisions can occur on a full-duplex link, the delays due to initiating a backoff algorithm are eliminated, further enhancing the efficiency of the point-to-point link. Perhaps the key advantage associated with full-duplex operation is the fact that segment lengths are no longer limited by collision timing. Under half-duplex Ethernet the segment length was limited to ensure collisions propagated to all stations within 512 bit times. Because no collisions are possible under full duplex, that constraint is eliminated.
Flow Control With the addition of a full-duplex mode of operation, the IEEE recognized that continuous transmission from one device to another could result in the inability of some devices to keep up with received data. In this event the receiving device’s buffers could fill, resulting in a loss of data. The solution to this problem was an optional flow control mechanism that could be used to regulate the flow of data. To implement flow control, the IEEE developed the PAUSE frame.
PAUSE Frame The function of the PAUSE frame is to enable one station to temporarily stop all traffic, other than MAC control frames, originating from the other station on a point-to-point link. To do so, the PAUSE frame contains a field whose value specifies the duration of the PAUSE event in units of 512 bit times.
Figure 4.6 The PAUSE frame (bytes: 7 Preamble | 1 SFD | 6 Destination Address, 01-80-C2-00-00-01 or unique DA | 6 Source Address | 2 Type/Length, hex 88-08 | 2 MAC Control Opcode, 00-01 | 2 MAC Control Parameters, 00-00 to FF-FF | 42 Reserved, set to zeros | 4 FCS)
Overview PAUSE frame support is an option limited to full-duplex operations. A station that has itself been paused may still issue PAUSE frames, because MAC control frames are exempt from the pause. To further complicate operations, a station may be able to issue PAUSE frames without having the ability to decode such frames; in other words, it may be limited to supporting half of the protocol.
Frame Fields Figure 4.6 illustrates the format of the PAUSE frame. The preamble and start of frame delimiter fields are the same as their IEEE 802.3 frame counterparts. The destination address field can be set to either the address of the station to be paused or the globally assigned multicast address of hex 01-80-C2-00-00-01. The length/type field value is set to hex 88-08 to indicate that the frame is a MAC control frame. Next, the MAC control opcode field is set to hex 00-01 to indicate that the type of MAC control frame is a PAUSE frame. The following field (MAC control parameters) uses 16 bits to define the duration of the PAUSE event in units of 512 bit times. Valid values are hex 00-00 to hex FF-FF. A PAUSE frame with a value of zero in this field informs the station at the other end of the link to resume transmission. The next-to-last field of 42 bytes is reserved, and the last field is the FCS field.
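The field layout just described can be sketched in code. The following is a minimal illustration; the function name is an assumption of this sketch, and the FCS is omitted because it is normally computed by the adapter hardware.

```python
# Sketch of PAUSE frame construction per the field layout described above:
# multicast DA 01-80-C2-00-00-01, length/type 88-08, opcode 00-01,
# a 16-bit pause duration, and 42 reserved zero bytes. No preamble/SFD/FCS.

PAUSE_MULTICAST_DA = bytes.fromhex("0180c2000001")

def build_pause_frame(source_mac: bytes, pause_quanta: int) -> bytes:
    """Build an 802.3x PAUSE frame body (without preamble, SFD, or FCS).

    pause_quanta is the pause duration in units of 512 bit times
    (valid range hex 00-00 to FF-FF; zero means "resume transmission").
    """
    if not 0 <= pause_quanta <= 0xFFFF:
        raise ValueError("pause duration must fit in 16 bits")
    frame = bytearray()
    frame += PAUSE_MULTICAST_DA                # destination address
    frame += source_mac                        # source address
    frame += (0x8808).to_bytes(2, "big")       # length/type: MAC control
    frame += (0x0001).to_bytes(2, "big")       # opcode: PAUSE
    frame += pause_quanta.to_bytes(2, "big")   # MAC control parameters
    frame += bytes(42)                         # reserved, set to zeros
    return bytes(frame)

frame = build_pause_frame(bytes.fromhex("001122334455"), 0x00FF)
assert len(frame) == 60   # the 64-byte minimum frame less the 4-byte FCS
```

Note that the 42 reserved bytes exist solely to pad the MAC control frame out to the minimum frame length.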
VLAN Tagging A year after the development of the 802.3z standard, the IEEE in 1998 approved its 802.3ac standard. This standard defines the frame format for the extension of an Ethernet frame to support VLAN tagging. In actuality, there are two IEEE standards associated with VLANs. The IEEE 802.1Q standard defines how VLANs operate, while the 802.3ac standard defines how to implement the VLAN protocol elements that are specific to Ethernet.
The 802.1Q Standard The IEEE 802.1Q standard was developed to enable multiple bridged networks to share the same physical network link without data from one network leaking into another. To accomplish this task the 802.1Q standard defined how a VLAN would operate with respect to bridging at the MAC layer and with respect to the 802.1D spanning tree protocol. The result of the 802.1Q effort was a tagging mechanism that allows switches to differentiate frames based upon the value of the tag. A VLAN can thus be considered to represent a single logical broadcast domain consisting of interfaces on one or more switches.
Advantages A major benefit of VLANs is the ability to subdivide one physical switch into multiple logical networks. Because broadcasts are restricted to the interfaces associated with a VLAN, this subdivision can reduce broadcast traffic as well as enhance efficiency. The “V” in VLAN results from the fact that multiple virtual networks can be formed from one physical switch, such as assigning accountants to one VLAN while engineers are assigned to another.
Frame Format The VLAN tag, when present, is inserted into an Ethernet frame between the source address field and the length/type field. The first 2 bytes of the VLAN tag are its Tag Protocol ID (TPID), which is always set to a value of hex 8100. This value is actually a reserved length/type field value that indicates the presence of the VLAN tag and tells software processing the frame that the normal length/type field value can be found 4 bytes further into the frame. The last 2 bytes of the VLAN tag consist of three fields: a user priority, a Canonical Format Indicator (CFI), and a VLAN Identifier (VID). The user priority field is 3 bits in length and is used to assign a priority level to the Ethernet frame. The CFI is a 1-bit field that is always set to zero for Ethernet switches. The CFI is used to provide compatibility between Ethernet and Token-Ring networks; a frame received at an Ethernet port with its CFI set to 1 is not forwarded to an untagged port. The third field is the VID. This 12-bit field specifies the VLAN to which the frame belongs. If the value is 0, then the frame does not belong to any VLAN; instead, the tag only specifies a priority and is then referred to as a priority tag.
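The tag insertion and bit layout described above can be sketched as follows. The function names are illustrative, and the sketch assumes a frame buffer that begins at the destination address and carries no FCS (which must be recalculated after tagging in any case).

```python
# Hypothetical sketch of 802.1Q tag handling: TPID hex 8100, followed by
# 2 bytes of tag control information (3-bit priority, 1-bit CFI, 12-bit VID).

TPID = 0x8100

def insert_vlan_tag(frame: bytes, priority: int, cfi: int, vid: int) -> bytes:
    """Insert the 4-byte tag between the source address and length/type."""
    tci = (priority << 13) | (cfi << 12) | vid   # pack the three tag fields
    tag = TPID.to_bytes(2, "big") + tci.to_bytes(2, "big")
    return frame[:12] + tag + frame[12:]         # 12 = two 6-byte addresses

def parse_vlan_tag(tagged: bytes):
    """Return (priority, cfi, vid) from a tagged frame."""
    assert int.from_bytes(tagged[12:14], "big") == TPID
    tci = int.from_bytes(tagged[14:16], "big")
    return tci >> 13, (tci >> 12) & 1, tci & 0x0FFF

untagged = bytes(12) + (0x0800).to_bytes(2, "big") + b"payload"
tagged = insert_vlan_tag(untagged, priority=5, cfi=0, vid=100)
assert parse_vlan_tag(tagged) == (5, 0, 100)
assert len(tagged) == len(untagged) + 4   # the tag adds exactly 4 bytes
```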
Figure 4.7 VLAN-tagged Ethernet frame format (bytes: 7 Preamble | 1 SFD | 6 Destination Address | 6 Source Address | 2 Tag Type, hex 8100 | 2 Tag Control Information, consisting of 3 bits User Priority, 1 bit CFI, and 12 bits VLAN ID | 2 Length/Type | 42–1500 Data | 4 FCS)
Figure 4.7 illustrates the Ethernet frame extended to support VLAN tagging. Note that the 802.3ac standard resulted in the maximum Ethernet frame being extended from 1518 bytes to 1522 bytes. Because some software and ROM chips consider a frame length beyond 1518 bytes to be in error, older adapter cards need to be carefully checked to determine if they can work in a more modern VLAN environment. Currently the hex value FFF in the VLAN ID field is reserved. All other values can be used as VLAN identifiers, enabling up to 4094 VLANs. Because 4 bytes are added to a tagged frame, the original FCS must be recalculated.
SNAP Frames For IEEE 802.2 SNAP frames, the use of tagging is similar to the previously described method. That is, the Ethertype value in the SNAP header is set to hex 8100 and the 4-byte tag is then inserted after the SNAP header.
Frame Determination A simple examination of the value in the length/type field followed by the value of the following 2 bytes can be used to determine the type of frame flowing on the LAN. Figure 4.8 illustrates the tests performed by software to determine the type of Ethernet frame.
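The tests just described can be expressed as a short routine. This is a sketch under the assumption that the frame buffer begins at the destination address (no preamble or SFD); the function name is illustrative.

```python
# Frame-type determination: examine the length/type field, then the 2 bytes
# that follow it, per the decision tests described in the text.

def classify_frame(frame: bytes) -> str:
    length_type = int.from_bytes(frame[12:14], "big")
    if length_type > 1500:
        # values above 1500 are Ethertypes; hex 8100 flags a VLAN tag
        return "VLAN-tagged frame" if length_type == 0x8100 else "DIX/Ethernet II frame"
    # a true length value: look at the bytes following the length field
    if frame[14:16] == b"\xff\xff":
        return "802.3 raw frame (IPX)"
    if frame[14] in (0xAA, 0xAB):
        return "802.3 SNAP frame"
    return "802.3 frame with 802.2 LLC header"

addresses = bytes(12)                          # placeholder DA + SA
assert classify_frame(addresses + b"\x08\x00") == "DIX/Ethernet II frame"
assert classify_frame(addresses + b"\x81\x00\x00\x64") == "VLAN-tagged frame"
assert classify_frame(addresses + b"\x00\x40\xff\xff") == "802.3 raw frame (IPX)"
assert classify_frame(addresses + b"\x00\x40\xaa\xaa\x03") == "802.3 SNAP frame"
```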
Figure 4.8 Determining the frame type: if the length/type field value exceeds 1500, the frame is either a VLAN-tagged 802.3 frame (value = hex 8100) or a DIX/Ethernet II frame; if the value is 1500 or less, the 2 bytes following the length field identify a SNAP frame (hex AA or AB) or an IPX-encapsulated raw frame (hex FF).
Fast Ethernet When Fast Ethernet (100BASE-TX) was developed for transmission at 100 Mbps, the 802.3 frame was left intact. However, prefix and suffix bytes referred to as Start of Stream Delimiter (SSD) and End of Stream Delimiter (ESD) were used to surround the frame.
4B5B Coding The SSD results from the use of 4B5B encoding, under which groups of 4 bits are mapped into groups of 5 bits. Because there are 32 possible combinations of 5 bits and only 16 combinations of 4 bits, the 16 five-bit groups with frequent transitions are used for data, as those transitions provide clocking information for the signal. Table 4.1 illustrates the manner by which groups of 4 bits are mapped into groups of 5 bits, ensuring enough transitions that the clock signal can be recovered at a receiver. Because there are 16 “unused” five-bit code groups, they can be used to detect errors or for special purposes, such as the SSD and ESD.
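The mapping of Table 4.1 can be expressed directly in code. The nibble ordering within a byte (high-order nibble first) is a simplifying assumption of this sketch.

```python
# The 4B5B data code groups of Table 4.1, plus the idle code group.

FOUR_B_FIVE_B = {
    0x0: "11110", 0x1: "01001", 0x2: "10100", 0x3: "10101",
    0x4: "01010", 0x5: "01011", 0x6: "01110", 0x7: "01111",
    0x8: "10010", 0x9: "10011", 0xA: "10110", 0xB: "10111",
    0xC: "11010", 0xD: "11011", 0xE: "11100", 0xF: "11101",
}
IDLE = "11111"

def encode_4b5b(data: bytes) -> str:
    """Map each 4-bit nibble to its 5-bit code group."""
    groups = []
    for byte in data:
        groups.append(FOUR_B_FIVE_B[byte >> 4])
        groups.append(FOUR_B_FIVE_B[byte & 0x0F])
    return "".join(groups)

# 8 data bits become 10 line bits: the 25 percent coding overhead of 4B5B
assert len(encode_4b5b(b"\xab")) == 10
# no data code group begins with three zeros, guaranteeing transitions
assert all(not g.startswith("000") for g in FOUR_B_FIVE_B.values())
```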
Delimiters Figure 4.9 illustrates the 100BASE-TX frame. Note that this frame differs from the 802.3 frame through the addition of delimiters that mark the beginning and end of the frame. Because at 100 Mbps the frames are known as streams, this resulted in the names assigned to the two delimiters.
Table 4.1 Mapping 4B5B

4B Binary    Hex     5B Binary
0000         0       11110
0001         1       01001
0010         2       10100
0011         3       10101
0100         4       01010
0101         5       01011
0110         6       01110
0111         7       01111
1000         8       10010
1001         9       10011
1010         A       10110
1011         B       10111
1100         C       11010
1101         D       11011
1110         E       11100
1111         F       11101
N/A          Idle    11111

Figure 4.9 The 100BASE-TX frame format (SSD | 7 bytes Preamble | 1 SFD | 6 Destination Address | 6 Source Address | 2 Length | 46–1500 Data | 4 Frame Check Sequence | ESD; SSD = Start of Stream Delimiter, ESD = End of Stream Delimiter)
Interframe Gap Another difference between the 802.3 frame and the 100BASE-TX frame concerns the interframe gap. At 10 Mbps the interframe gap is 9.6 μs between frames, while at 100 Mbps idle codes are used to mark a 0.96-μs interframe gap. The SSD 5B symbols are 11000 10001, and the ESD 5B symbols are 01101 00111. Both SSD and ESD fields can be considered to fall within the interframe gap of Fast Ethernet frames. Thus, computations comparing Ethernet/IEEE 802.3 and Fast Ethernet become simple, as the latter has an operating rate ten times the former and an interframe gap one tenth that of the former.
Gigabit Ethernet The introduction of the IEEE 802.3z standard for Gigabit Ethernet was accompanied by several changes to the Ethernet frame format. At a data rate of 1 Gbps maintaining a minimum frame length of 64 bytes (72 when the preamble and start of frame delimiter fields are considered) would reduce the network diameter to approximately 20 m. While this distance might be suitable for connecting a switch to a group of stations within close proximity of one another, it is not suitable for supporting horizontal wiring within a building where a 10-m distance is allowed from a wall faceplate to the desktop. To enable Gigabit Ethernet to support a network diameter of up to 200 m, a technique referred to as carrier extension was added to the technology.
Carrier Extension Carrier extension results in an extension of the Ethernet slot time from 64 bytes (512 bits) to a new value of 512 bytes (4096 bits). To accomplish this extension, frames less than 512 bytes in length are padded with special carrier extension symbols. Note that under Gigabit Ethernet the minimum frame length of 64 bytes is not changed. All frames less than 64 bytes in length are first padded out to a minimum of 64 bytes. The carrier signal placed on the network is then extended to provide a minimum carrier length of 512 bytes. The preceding discussion of frame length follows IEEE usage and does not consider the 8 bytes associated with the preamble and start of frame delimiter fields. Figure 4.10 illustrates the Gigabit Ethernet frame, including the location where non-data symbols are added. Note that the FCS is calculated only on the original, non-extended frame. At the receiver the extension symbols are removed before the FCS value is checked.
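The padding and extension rules above reduce to a few lines of arithmetic. The following sketch uses frame lengths that exclude the 8-byte preamble and start of frame delimiter, matching the IEEE convention followed in the text.

```python
# Carrier extension arithmetic for half-duplex Gigabit Ethernet:
# pad the data field to 46 bytes, then extend the carrier to 512 bytes.

def carrier_extension_symbols(data_bytes: int) -> int:
    """Extension symbols appended to a frame carrying data_bytes of data."""
    data_field = max(data_bytes, 46)       # pad the data field first
    frame_len = 18 + data_field            # addresses, length/type, and FCS
    return max(0, 512 - frame_len)         # extend the carrier to 512 bytes

assert carrier_extension_symbols(1) == 448    # 45 pads, then 448 symbols
assert carrier_extension_symbols(64) == 430
assert carrier_extension_symbols(512) == 0    # long frames need no extension
```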
Figure 4.10 Carrier extension on a Gigabit frame (bytes: 7 Preamble | 1 SFD | 6 Destination Address | 6 Source Address | 2 Length | 46–1500 Data | 4 Frame Check Sequence | Extension; the frame itself is a minimum of 64 bytes, while the duration of the carrier event is a minimum of 512 bytes)
Half-Duplex Use Carrier extension is only applicable to half-duplex transmission. This is because full-duplex transmission eliminates the possibility of collisions. Because carrier extension can significantly degrade the performance associated with short-packet transmission, a second modification to Gigabit Ethernet, referred to as frame bursting, was developed.
Frame Bursting Frame bursting represents a Gigabit Ethernet technique developed as a mechanism to compensate for the performance degradation associated with carrier extension. Under frame bursting a station with more than one frame to send can transmit multiple frames if the first is successfully transmitted. If the first frame is less than 512 bytes in length, carrier extension is applied to that frame. Succeeding frames in the burst may then be transmitted until a burst limit of 65536 bit times (8192 bytes) is reached. An interframe gap period is inserted between each frame in the burst. However, instead of allowing the medium to be idle between frames, the transmitting station fills the interframe gaps with nondata symbols that maintain an active carrier and which are then discarded by receiving stations. Bursting is only applicable to Gigabit and higher Ethernet speeds when transmission is half duplex. Figure 4.11 illustrates an example of Gigabit Ethernet frame bursting. In this example the first frame in the burst is transmitted successfully with an extension, and the station is assumed to have additional frames that need to be transmitted. Thus, transmission continues until either all frames are transmitted or 8192 byte times are reached, whichever comes first.
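The burst-limit behavior can be modeled roughly as follows. This is a simplified sketch of the rules described above (a frame may be started as long as the limit has not yet been reached, and a frame in progress is allowed to finish), not the letter of the standard.

```python
IFG_BYTES = 12       # interframe gap, filled with nondata symbols in a burst
BURST_LIMIT = 8192   # 65536 bit times expressed in bytes

def frames_in_burst(frame_lengths):
    """Count how many queued frames fit into one burst.

    Simplified model: the first frame is carrier-extended to 512 bytes if
    shorter; each succeeding frame may begin only while the burst limit
    has not yet been reached.
    """
    sent = elapsed = 0
    for i, length in enumerate(frame_lengths):
        if i > 0 and elapsed >= BURST_LIMIT:
            break                                   # limit reached: burst ends
        on_wire = max(length, 512) if i == 0 else length
        elapsed += on_wire + IFG_BYTES
        sent += 1
    return sent

# with a queue of minimum-length frames, far more than one frame fits,
# amortizing the cost of the single carrier extension over the whole burst
assert frames_in_burst([64] * 200) > 1
```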
Figure 4.11 Frame bursting (frame with extension | IFG | frame | IFG | … | frame, up to the burst limit)
Jumbo Frames Without considering the use of a VLAN tag or an Ethernet SNAP frame, the maximum data field in a frame is 1500 bytes. Although this amount of data was sufficient during the 1970s, when text-based e-mail prevailed, we will fast-forward to today. The inclusion of signature blocks in e-mail, the attachment of photographs and motion picture files, and a general increase in organizational data storage have resulted in the common occurrence of large e-mails and lengthy file transfers. Moving such data in 1500-byte fields can also place a processing load on computers that hinders other multitasking operations. For example, consider moving a 1-gigabyte file from a server to a workstation via a Gigabit Ethernet connection. This action would require the processing of approximately 666667 frames, which could consume 20 to 40 percent of the processing power of a computer just to handle the associated network interrupts. Based upon the preceding, in 1998 Alteon Networks proposed an initiative to increase the maximum length of the Ethernet data field from 1500 bytes to 9000 bytes. Although this initiative was not adopted by the IEEE, it was implemented by a large number of hardware vendors as a jumbo frame option.
Operation In the Alteon Networks proposal which was adopted as an option by several hardware vendors, the Ethernet data field is extended by a factor of six, from a maximum of 1500 to 9000 bytes. This extension can reduce the number of frames required to move a file by a factor of six, increasing application throughput while decreasing host CPU utilization. Because the resources used by a server to handle network traffic are proportional to the number of frames transmitted and received, using larger frames improves performance when compared to the use of smaller frames.
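The frame-count reduction is easy to verify with a sketch of the 1-gigabyte example from the text.

```python
import math

def frames_needed(payload_bytes: int, data_field_bytes: int) -> int:
    """Frames required to carry a payload at a given maximum data-field
    size, ignoring higher-layer headers as the text's estimate does."""
    return math.ceil(payload_bytes / data_field_bytes)

standard = frames_needed(10**9, 1500)   # standard Ethernet data field
jumbo = frames_needed(10**9, 9000)      # jumbo frame data field
assert standard == 666667               # the text's approximate frame count
assert jumbo == 111112                  # roughly a sixfold reduction
```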
Length Rationale One of the key considerations in sizing a jumbo frame was the CRC-32 algorithm. To maintain a level of compatibility with Ethernet, jumbo frames only changed the size of the data field. Due to the manner by which the CRC-32 algorithm operates, the probability of an undetected error is relatively unchanged until frames exceed
12000 bytes. Thus, to maintain the same undetected bit error rate, jumbo frames should not exceed 12000 bytes. At the other extreme, certain applications impose a relatively low maximum transfer size. For example, the maximum size of a Network File System (NFS) datagram is approximately 8000 bytes. Thus, a jumbo frame data field of 9000 bytes appears to be a good compromise.
Advantages In addition to reducing the number of frames required to transport files and the associated network overhead, the use of jumbo frames can result in other benefits. Those additional benefits can include a reduction in fragmentation, enhancing TCP throughput as well as the efficiency of switches and routers. Reducing fragmentation lowers the CPU overhead associated with fragmenting and reassembling packets. In a TCP environment, throughput has been shown to be directly proportional to the Maximum Segment Size (MSS). Because the MSS is equal to the Maximum Transmission Unit (MTU) less the TCP/IP headers, you can enhance throughput by increasing the Ethernet data field, which enables larger packets to be transported. Concerning switches and routers, because their efficiency is primarily a function of how much time they spend examining headers, a reduction in the number of frames needing processing will make these hardware devices more efficient.
Problems and Solutions One of the problems associated with the use of jumbo frames is intermediate hardware that uses a 1500-byte MTU. Because the smallest MTU used by any device in a given network path determines the maximum MTU for all traffic traveling along that path, frames traversing such hardware cannot be expanded. Thus, an organization with connections to other offices and the Internet may only be able to use a small portion of a local switch for jumbo frames. Although replacement routers and switches could be purchased, they are not inexpensive, and in certain situations economics might prevent more than a token replacement of hardware each year. One method that can be used to implement jumbo frames is to isolate their use to a portion of one’s network that is compatible with such frames. To accomplish this, the IEEE 802.1Q VLAN tagging specification could be used to keep jumbo frames and standard Ethernet frames separate from each other, even when they traverse the same physical link. Thus, through the use of VLAN tagging, jumbo-compatible hardware could communicate with other jumbo-compatible hardware using jumbo frames, while communicating with the remaining network devices using standard Ethernet frames. Then, as funds become available to upgrade additional equipment, the VLAN can be modified.
Performance In concluding this chapter we will examine the effect of frame overhead on different types of Ethernet. In doing so, we will first assume that the preamble and start of frame delimiter fields, as well as the destination address, source address, length/type, and frame check sequence fields, result in 26 bytes of overhead.
Basic Ethernet For a basic Ethernet frame a 1-byte character carried in the data field must be padded by the addition of 45 fill characters so that a minimum of 46 bytes are transmitted as data. In this situation the overhead required to carry a 1-byte character is 26 plus 45, or 71 bytes. Now, consider the situation in which you have 46 bytes of data to transmit. Here the 46 bytes of data would not require the addition of pad characters, because the frame length would be 64 bytes (72 when considering the preamble and start of frame delimiter fields), which is the minimum frame length. Thus, 46 bytes of data would result in a frame overhead of 26 bytes. Table 4.2 summarizes the overhead associated with an Ethernet non-SNAP frame as the number of bytes of information varies from 1 to the maximum of 1500 bytes that can be carried in the data field of the frame. As indicated in Table 4.2, the percentage of overhead (computed as the ratio of bytes of overhead to frame length times 100) can vary considerably, ranging from a high of 98.61 percent to a low of 1.70 percent when the maximum length data field is used to transfer information.

Table 4.2 Basic Ethernet Frame Overhead

Information Carried in    Ratio of Frame Overhead    Percent
Data Field (bytes)        to Frame Length            Overhead
   1                      71/72                      98.61
  10                      62/72                      86.11
  20                      52/72                      72.22
  30                      42/72                      58.33
  45                      27/72                      37.50
  46                      26/72                      36.11
  64                      26/90                      28.89
 128                      26/154                     16.88
 256                      26/282                      9.22
 512                      26/538                      4.83
1024                      26/1050                     2.48
1500                      26/1526                     1.70
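The entries of Table 4.2 follow from a single formula; a sketch:

```python
# Reproducing the Table 4.2 arithmetic: 26 fixed overhead bytes (preamble,
# SFD, addresses, length/type, FCS) plus pads that fill short data fields
# out to the 46-byte minimum.

def basic_frame_overhead(data_bytes: int):
    """Return (overhead_bytes, frame_length) for a non-SNAP Ethernet frame."""
    pad = max(0, 46 - data_bytes)
    overhead = 26 + pad
    frame_length = overhead + data_bytes
    return overhead, frame_length

# matches Table 4.2: 98.61, 36.11, 28.89, and 1.70 percent respectively
for d in (1, 46, 64, 1500):
    ovh, flen = basic_frame_overhead(d)
    print(f"{d:4d} data bytes -> {ovh}/{flen} = {100 * ovh / flen:.2f}% overhead")
```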
SNAP Frames The preceding computations were for a basic Ethernet frame. When the length/type field contains a length value and the DSAP and SSAP subfields are each set to hex AA, the frame is a SNAP frame. This means that the 8 bytes of additional LLC and SNAP subfields reduce the information-carrying capacity of the data field to between a minimum of 38 bytes and a maximum of 1492 bytes. Because the extra LLC and SNAP subfields are in the data field, they count toward the minimum frame length of 64 bytes, and the data field does not have to be padded whenever it transports 38 or more characters of information. Because the additional 8 bytes of LLC and SNAP subfields represent overhead, a 72-byte frame contains 34 overhead characters plus pads. Thus, the transmission of a 1-byte character requires 37 pad characters, for a total of 71 overhead characters. Similarly, a 10-byte message in the data field requires 28 pad characters plus 34 fixed overhead bytes (26 + 8 for the LLC and SNAP subfields), for a total of 62 overhead characters. Thus, the overhead associated with a basic Ethernet frame and a SNAP frame are the same until the data field transports more than 37 characters. At this point the overhead from the LLC and SNAP subfields exceeds the padding, resulting in additional overhead. For example, when 38 data characters are transported, the LLC, SNAP, and data fields carry 46 bytes in a frame length of 72 bytes. Thus, the overhead is 34 bytes/72 bytes, or 47.22 percent. When 64 data bytes are transported the overhead is 26 + 8, or 34 bytes, while the frame length is 26 + 8 + 64, or 98 bytes, resulting in an overhead of 34/98, or 34.69 percent. Table 4.3 summarizes the overhead associated with Ethernet SNAP frames.
Gigabit Ethernet The previous computations do not consider the overhead associated with half-duplex Gigabit Ethernet operation. As previously noted in this chapter, packets less than 512 bytes in length (not including their 8-byte preamble and start of frame delimiter) are extended in length through the use of carrier extension symbols. This means that a packet transporting a 1-byte character is first extended by the addition of 45 padding bytes and then further extended by 448 carrier extension symbols. Thus, a Gigabit Ethernet frame transporting 1 data byte contains 520 bytes, of which 519 represent overhead when operating in half duplex. To illustrate the additional overhead computations, we will assume a frame is transporting 64 bytes of data. In this case there is no requirement for padding
Table 4.3 Ethernet SNAP Frame Overhead

Information Carried in    Ratio of Frame Overhead    Percent
Data Field (bytes)        to Frame Length            Overhead
   1                      71/72                      98.61
  10                      62/72                      86.11
  20                      52/72                      72.22
  30                      42/72                      58.33
  37                      35/72                      48.61
  38                      34/72                      47.22
  45                      34/79                      43.03
  46                      34/80                      42.50
  64                      34/98                      34.69
 128                      34/162                     20.98
 256                      34/290                     11.72
 512                      34/546                      6.23
1024                      34/1058                     3.21
1492                      34/1526                     2.23
characters. However, because the frame must be 512 bytes in length without considering the preamble and start of frame delimiter, 430 carrier extension symbols must be appended to the frame (448 − 18). If you add the 26 normal overhead bytes to the 430 carrier extension symbols, you obtain a total overhead of 456 bytes. Table 4.4 summarizes the overhead associated with transporting information in Gigabit Ethernet frames as the number of bytes of data varies from 1 to a maximum of 1500. Note that this table is only applicable for half-duplex operations, as full-duplex Gigabit Ethernet does not require carrier extension.
Frame Rates One of the problems associated with Gigabit Ethernet is its use as a shared-media half-duplex network. Although its cost is low, the use of carrier extension to ensure a minimum frame length of 512 bytes can severely degrade performance. In fact, under certain conditions, half-duplex Gigabit Ethernet can represent at best a marginal improvement over Fast Ethernet.
Table 4.4 Gigabit Ethernet Half-Duplex Frame Overhead

Information Carried in    Ratio of Frame Overhead    Percent
Data Field (bytes)        to Frame Length            Overhead
   1                      519/520                    99.81
  10                      510/520                    98.08
  20                      500/520                    96.15
  30                      490/520                    94.23
  40                      480/520                    92.31
  45                      475/520                    91.35
  46                      474/520                    91.15
  64                      456/520                    87.69
 128                      392/520                    75.38
 256                      264/520                    50.77
 384                      136/520                    26.15
 512                      26/538                      4.83
1024                      26/1050                     2.48
1500                      26/1526                     1.70
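The entries of Table 4.4 can be reproduced by adding carrier extension to the basic overhead computation; a sketch following the text's conventions:

```python
# Half-duplex Gigabit Ethernet overhead: pads plus carrier extension symbols
# enforce a 512-byte slot, i.e. 520 bytes on the wire including preamble/SFD.

def gigabit_half_duplex_overhead(data_bytes: int):
    """Return (overhead_bytes, frame_length) including preamble and SFD."""
    pad = max(0, 46 - data_bytes)                      # data-field padding
    extension = max(0, 512 - (18 + data_bytes + pad))  # carrier extension
    overhead = 26 + pad + extension
    frame_length = 26 + data_bytes + pad + extension
    return overhead, frame_length

assert gigabit_half_duplex_overhead(1) == (519, 520)      # 99.81% overhead
assert gigabit_half_duplex_overhead(64) == (456, 520)     # the text's example
assert gigabit_half_duplex_overhead(1500) == (26, 1526)   # no extension needed
```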
Mathematical Oddities Simple mathematics tells us that 100 is ten times 10 and 1000 is ten times 100. If we apply logic and simple mathematics to the operating rates of Ethernet, Fast Ethernet, and Gigabit Ethernet, we would expect the operating rate of each local area network to differ by a similar amount. While operating rates do indeed increase by an order of magnitude from Ethernet to Fast Ethernet to Gigabit Ethernet, what can we say about the ability of each network to transport data? Does Fast Ethernet provide ten times the throughput of Ethernet? Similarly, does Gigabit Ethernet provide ten times the throughput of Fast Ethernet? As we will note shortly, the answer to the second question in certain networking situations is negative, and the reason for that answer has important implications for network managers and LAN administrators considering half-duplex Gigabit technology. Because many readers may be from Missouri, which is known as the “Show Me” state, we will examine the basic frame format of Ethernet as a starting point to determine the frame rate obtainable on different types of
Ethernet networks. By first computing the frame rate on a 10 Mbps network, we can use the resulting computations as a base to examine the effect of increasing the LAN operating rate upon the effective data transfer capability obtainable on Fast Ethernet and half-duplex Gigabit Ethernet. Readers should note that the effective data transfer rate represents the quantity of data per unit of time transferred over a network. Because all protocols have a degree of overhead in the form of headers and trailers wrapped around information formed into frames (Layer 2) or packets (Layer 3), the effective data rate is always lower than the operating rate of a transmission facility. Although most people only consider the LAN operating rate, the effective data transfer rate is a more important metric as it governs the ability of a network to transport data.
Frame Rate Computations Previously, Figure 4.1 illustrated the basic format of an Ethernet frame when the frame is placed onto a network. Although many publications reference a minimum frame length of 64 bytes, those publications are referencing the length of the frame prior to its placement onto a network. Once the latter occurs, 8 bytes are added to each frame for synchronization in the form of a 7-byte preamble field and a 1-byte start of frame delimiter field. This results in a minimum Ethernet frame length of 72 bytes. In examining Figure 4.1, note that the data field ranges from a minimum of 46 bytes to a maximum length of 1500 bytes. When a maximum length frame (non-VLAN tagged) is formed, its length is 1518 bytes in the network adapter. However, when the frame is placed on the network, the addition of 8 bytes in the form of the preamble and start of frame delimiter fields results in a maximum frame length of 1526 bytes. Thus, all Ethernet frames other than those in error range between 72 and 1526 bytes in length when flowing on a network. The only exceptions are jumbo frames and the use of VLAN tagging, with the latter adding 4 bytes to the length of the frame. For our computations, we will consider the maximum frame length to be limited to 1526 bytes, representing a conventional non-tagged frame. Because the length of an Ethernet frame can vary between 72 and 1526 bytes, the frame rate obtainable on a network will vary. However, because the minimum and maximum frame lengths are known, we can compute the frame rate in terms of each, which will provide the maximum and minimum frame rates on a network. Because frame rate is inversely proportional to frame length, the maximum frame length will enable us to compute the minimum frame rate, and the minimum frame length will provide us with the ability to compute the maximum frame rate. At a 10-Mbps operating rate Ethernet requires a dead time of 9.6 μs between frames.
The bit duration at a 10-Mbps operating rate is 1/10^7 s, or 100 ns. Based
upon the preceding we can compute the maximum number of frames per second (fps) for maximum length and minimum length frames. For example, consider the maximum length frame of 1526 bytes. Here the time per frame becomes 9.6 μs + 1526 bytes * 8 bits/byte * 100 ns/bit, which results in a frame time of 1.23 ms. Thus, in 1 s there can be a maximum of 1/1.23 ms, or 812, maximum length frames, each capable of transporting 1500 bytes of data. This means that if all frames were maximum length frames, the effective data transfer capability would be 812 fps * 1500 bytes/frame * 8 bits/byte, or 9.744 Mbps. We can revise the preceding computations to determine the number of minimum length frames that can flow on a 10-Mbps Ethernet network. For a minimum length frame of 72 bytes, the time per frame is 9.6 μs + 72 bytes * 8 bits/byte * 100 ns/bit, which results in a frame duration of 67.2 * 10^-6 s. Thus, in 1 s there can be a maximum of 1/(67.2 * 10^-6), or 14880, minimum length 72-byte frames, each capable of transporting between 1 and 46 data characters. This means that if all frames were minimum length frames, the effective data transfer rate would range from 14880 frames/s * 1 byte/frame * 8 bits/byte, or 119040 bps, to 14880 frames/s * 46 bytes/frame * 8 bits/byte, or 5.48 Mbps. Based upon the preceding computations, a file transfer between two stations on an Ethernet network that completely fills each frame’s data field to its maximum capacity of 1500 bytes will result in an effective data transfer of 9.744 Mbps, approaching the 10-Mbps LAN operating rate. However, as file transfers are replaced by interactive queries that result in fewer data characters transported in each frame, the effective data transfer rate decreases. At an admittedly absurd extreme, if each frame transported one data character, the effective data transfer rate would be reduced to 119040 bps.
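These computations generalize to any operating rate; a sketch, with frame lengths that include the 8-byte preamble and start of frame delimiter as in the text:

```python
# Frame-rate arithmetic: one interframe gap plus the serialization time
# of the frame itself, at a given operating rate.

def frames_per_second(frame_bytes: int, rate_bps: float, gap_seconds: float) -> float:
    """Maximum frames per second for back-to-back frames of one length."""
    frame_time = gap_seconds + frame_bytes * 8 / rate_bps
    return 1 / frame_time

# 10-Mbps Ethernet with its 9.6-us interframe gap
assert int(frames_per_second(1526, 10e6, 9.6e-6)) == 812     # maximum length
assert int(frames_per_second(72, 10e6, 9.6e-6)) == 14880     # minimum length
# Fast Ethernet multiplies the rate by ten and divides the gap by ten,
# so its frame rate is exactly ten times that of Ethernet
ratio = frames_per_second(72, 100e6, 0.96e-6) / frames_per_second(72, 10e6, 9.6e-6)
assert abs(ratio - 10) < 1e-9
```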
Even when each minimum length frame is filled with 46 data characters, the effective data transfer capacity is only slightly over half the network’s 10-Mbps operating rate. Fast Ethernet uses the same frame format as Ethernet, but the dead time between frames and the bit duration are one tenth of Ethernet’s 10-Mbps metrics. Thus, the frame rates for maximum and minimum length frames are ten times those of Ethernet. That is, Fast Ethernet supports a maximum of 8120 maximum length 1526-byte fps and a maximum of 148800 minimum length 72-byte fps. Similarly, the effective data transfer capability of Fast Ethernet is ten times that of Ethernet. Thus, concerning the performance of Fast Ethernet in comparison to Ethernet, network managers and LAN administrators truly get what they expect. Table 4.5 compares the frame rates and effective data transfer capability of Ethernet and Fast Ethernet. Although we might reasonably expect Gigabit Ethernet to extend Fast Ethernet’s frame rate and data transfer capability by a factor of ten, this is not the case in certain situations. Those situations involve the use of Gigabit Ethernet in a shared-media environment when the basic frame length is less than 512 bytes.
Table 4.5 Ethernet versus Fast Ethernet Frame Rates and Effective Data Transfer Capability

Average Frame Size (bytes)   Frame Rate (fps)   Effective Data Transfer (bps)

ETHERNET
1526                         812                9.744 Mbps
72                           14880              119.04 kbps to 5.48 Mbps

FAST ETHERNET
1526                         8120               97.44 Mbps
72                           148800             1.19 Mbps to 54.8 Mbps
Gigabit Constraints The use of Gigabit Ethernet in a shared-media environment requires a station on the network to be able to hear any resulting collision on the frame it is transmitting before it completes the transmission of that frame. This means that the transmission of the next-to-last bit of a frame that results in a collision should allow the transmitting station to hear the collision voltage increase prior to the transmission of the last bit. Thus, the maximum allowable Ethernet cabling distance is limited by the bit duration associated with the network operating rate and the speed of electrons flowing on the network. When Ethernet operates at 1 Gbps, the allowable cabling distance would normally have to be reduced from Fast Ethernet’s 200-m diameter to approximately 10 m or 33 ft. This would represent a major restriction on the ability of Gigabit Ethernet to be effectively used in a shared-media, half-duplex environment. To overcome this cabling limitation, a carrier extension scheme was proposed by Sun Microsystems. This scheme, which extends the time an Ethernet frame is on the wire, became part of the Gigabit Ethernet standard for half-duplex operations. Under the Gigabit Ethernet carrier extension scheme, the IEEE standard requires a minimum length frame of 512 bytes to be formed for shared-media, half-duplex operations. This means that the resulting frame, when placed onto the network, must be a minimum of 520 bytes in length due to the addition of a 7-byte preamble field and a 1-byte start of frame delimiter field. Previously, Figure 4.10 illustrated the Gigabit Ethernet carrier extension scheme associated with ensuring the flow of extended minimum length frames. In examining Figure 4.10, note that the timing extension occurs after the end of the standard Ethernet frame. The actual carrier extension occurs in the form of special symbols
that result in the occurrence of line transitions and inform other stations “listening” to the network that the wire is in use. The carrier extension extends each Gigabit frame time to guarantee a minimum 512-byte slot time (520 bytes on the network) for half-duplex operations. Note that the increase in the minimum length of frames does not change the contents of any frame. Instead, carrier extension technology only alters the time the frame is on the network. Thus, compatibility is maintained between the original Ethernet frame and frames flowing on half-duplex Gigabit Ethernet. It should also be noted that Gigabit Ethernet carrier extension technology is not applicable for non-shared media environments, such as transmission on fiber or workstation connections to full-duplex switch ports. This is because no collisions are possible in such network environments, alleviating the necessity to employ carrier extension technology to ensure each frame flows on the network for sufficient duration to enable the transmitting station to recognize the occurrence of a collision within a 200-m diameter. Although carrier extension technology enables the cable length of a half-duplex Gigabit Ethernet network to be extended to a 200-m diameter, the extension is not without cost. The primary cost is one of additional overhead and lower data throughput because extension symbols added to the end of short frames waste bandwidth. To obtain an appreciation for the manner by which carrier extension technology wastes bandwidth, consider the requirement to transmit a 64-byte record. When using Ethernet or Fast Ethernet, the record would be encapsulated within 26 bytes of overhead, resulting in a frame length of 90 bytes. When using Gigabit Ethernet as the transport mechanism, the minimum length frame must be 520 bytes when flowing on the network. Thus, the frame must be extended through the addition of 430 (520 − 90) carrier extension symbols.
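The extension arithmetic can be sketched in a few lines (an illustrative helper of our own; the 26-byte overhead figure is the one used in the text, covering preamble, start of frame delimiter, addresses, type, and FCS):

```python
OVERHEAD_BYTES = 26      # preamble(7) + SFD(1) + addresses(12) + type(2) + FCS(4)
MIN_DATA_BYTES = 46      # short data fields are padded with nulls to 46 bytes
GIG_MIN_ON_WIRE = 520    # 512-byte minimum slot plus preamble and SFD

def extension_symbols(record_bytes):
    """Carrier extension symbols needed to stretch a frame to the 520-byte minimum."""
    data = max(record_bytes, MIN_DATA_BYTES)   # pad the data field first
    frame = data + OVERHEAD_BYTES
    return max(0, GIG_MIN_ON_WIRE - frame)

print(extension_symbols(64))   # 430, matching the 64-byte record example
print(extension_symbols(20))   # 448, a 20-character query padded to 46 bytes
```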
To further complicate bandwidth utilization, when the amount of data to be transported by a frame is less than 46 bytes, nulls are added to the data field to produce a 64-byte minimum length frame (72 bytes on the wire) prior to extending the frame via carrier extension symbols. Thus, a simple query, such as “Mr Watson I hear you,” which consists of 20 characters including spaces between words, would be placed in the data field and padded with 26 nulls under each version of Ethernet to form a minimum length frame. However, under Gigabit Ethernet the frame would be extended further through the addition of 448 carrier extension symbols to obtain a minimum 512-byte slot time, or 520 bytes when the frame flows on the network. In this example, the ratio between actual information transported and total characters transmitted changes from 20 per 72 on Ethernet and Fast Ethernet to 20 per 520 on Gigabit Ethernet! That is, under Gigabit Ethernet a minimum length 64-byte frame (72 bytes on the network) requires the addition of 448 carrier extension symbols. To examine the effect upon the data transport capability of Gigabit Ethernet, we can compute the frame rate in a manner similar to prior computations. However, because the minimum length frame is 520 bytes on the network, we will use that value instead of 72 bytes to compute the maximum frame rate. In doing so, the dead time between frames becomes 0.096 μs and
the bit duration becomes 1 ns. Thus, the time per minimum length frame becomes 0.096 μs + 520 bytes × 8 bits/byte × 1 ns/bit or 4.256 × 10−6 s. Then, in 1 s a total of 1/(4.256 μs) or 234962 minimum length frames can flow on a Gigabit Ethernet shared-media network. Note that the frame rate ratio between Gigabit Ethernet and Fast Ethernet is 234962/148800 or 1.58, and not the ten to one you might expect. Concerning the effective data transfer capacity, each minimum length Gigabit Ethernet frame can transport between 1 and 46 data characters because pads and carrier extension symbols are used to ensure a minimum length frame of 520 bytes flows on the network. This means that the effective transfer rate for minimum length frames ranges between 234962 fps × 1 byte/frame and 234962 fps × 46 bytes/frame. Expressed as a conventional data rate, we obtain an effective data transfer rate between 1.88 Mbps and 86.5 Mbps. Only as frame lengths increase beyond the minimum length does half-duplex Gigabit Ethernet provide an enhanced data transfer beyond that obtainable by Fast Ethernet. The preceding computations illustrate that the effective data transfer rate when using Gigabit Ethernet is highly dependent upon the average frame length expected to flow on the network. Although a shared-media Gigabit Ethernet backbone is probably well suited for use in an ISP environment where browsing graphics fill most frames to their limit, the technology may not be particularly well suited for organizations where a high proportion of traffic is in the form of interactive query–response. Thus, it is important to fully investigate the technology, as well as the manner by which you propose to use it, prior to making an investment in the technology.
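These figures can be rechecked with a small sketch (the constants are the 1-Gbps values from the text; taking the padded 46-byte data field as the per-frame payload for the upper bound is our reading of the minimum-frame case):

```python
GAP_S = 0.096e-6    # interframe gap (dead time) at 1 Gbps
BIT_S = 1e-9        # bit duration at 1 Gbps
SLOT_BYTES = 520    # minimum carrier-extended frame length on the wire

slot_time_s = GAP_S + SLOT_BYTES * 8 * BIT_S   # 4.256 microseconds
fps = int(1 / slot_time_s)
print(fps)                        # 234962 minimum length frames per second
print(round(fps / 148800, 2))     # 1.58, the ratio versus Fast Ethernet
print(fps * 1 * 8)                # 1879696 bps, about 1.88 Mbps
print(fps * 46 * 8)               # 86466016 bps with a full 46-byte data field
```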
Chapter 5
LAN Switches The purpose of this chapter is to provide readers with detailed information concerning the operation and utilization of Ethernet LAN switches. To do so, we will first examine the basic operation of a bridge, because a LAN switch in effect represents a multiport bridge that supports concurrent operations. Once this is accomplished, we will examine the three basic types of Ethernet switches and their switching methods. Because the first series of Ethernet LAN switches operated at Layer 2 of the International Organization for Standardization (ISO) Open Systems Interconnection (OSI) Reference Model, we will first focus our attention upon Media Access Control (MAC) address switching. Building upon MAC address switching, we will also examine higher layer switching as a prelude to discussing virtual LANs (VLANs). Once the latter is accomplished, we will conclude this chapter by examining how modern Ethernet LAN switches can function as gateways from the edge of a customer network into different types of carrier networks, including a Carrier Ethernet service.
Bridge Operations Because the first series of LAN switches were based upon the manner by which bridges operate, we will examine the operation of bridges in this section. Bridges operate at the data link layer of the OSI Reference Model, using MAC addresses as a mechanism for controlling the flow of frames through the bridge.
Transparent and Translating Bridges As the need to link networks increased, two types of bridges were developed: transparent and translating. A transparent bridge connects similar networks and a translating
Figure 5.1 Using a transparent bridge to connect two Ethernet networks (stations A, B, and C on LAN A; stations D, E, and F on LAN B)
bridge is used to connect two dissimilar networks, such as an Ethernet LAN to a Token-Ring LAN. Because the LAN wars are over and Ethernet is the clear winner, we will focus our attention upon the operation of transparent bridges.
Plug-and-Play Operation A transparent bridge is a plug-and-play device. That is, you plug each LAN connection into a bridge port and power-on the bridge. Thereafter, the bridge uses a backward-learning algorithm, which we will describe shortly, to determine where frames should be routed. Thus, the bridge automatically begins to “play,” hence the term “plug-and-play.”
Bridge Operation To illustrate the operation of a transparent bridge, consider Figure 5.1, which illustrates the interconnection of two bus-based Ethernet networks. For simplicity, the MAC address of each station on each LAN is indicated by a letter instead of a 48-bit MAC address. A transparent bridge operates in a promiscuous mode, which means that it reads every frame transmitted on each network to which it is connected. Once the bridge is connected to each network and powered-on, it operates according to the three Fs, flooding, filtering, and forwarding, using a backward-learning algorithm. To illustrate the bridge operating process, we will assume the following activities take place. First, station A transmits to station E, which then responds to station A. Next, station B transmits a frame to station A, followed by station A transmitting a frame to station E. When the bridge is powered-on, its port/address table is empty. Thus, we can view the contents of the port/address table as follows:
Port    Address
-       -
Flooding Based upon our prior assumption, station A transmits a frame which flows on LAN A and reaches the bridge on its port 0. Here the destination address in the frame is E and the source address is A. The bridge examines its port/address table to determine where to route the frame whose destination address is E. Because the bridge was just powered-on, its port/address table is empty. Thus, not knowing where to route the frame, the bridge floods the frame. That is, the bridge transmits the frame out of all ports other than the port it was received on. In this example the frame is output on port 1 and is then received by the stations on LAN B, including the destination station E. In addition to flooding the frame, the bridge examines the frame’s source address and the port on which it entered the bridge. The bridge uses this data to determine if it should update its port/address table. Because the port/address table was empty, the bridge enters the source address and the port it arrived on into its port/address table. Thus, the port/address table begins to build and is now:

Port    Address
0       A
Forwarding Next, it was assumed that station E responds to station A. As the frame transmitted from station E arrives at the bridge via port 1, the bridge looks into its port/address table and notes that address A is associated with port 0. Thus, the bridge forwards the frame onto LAN A via port 0. In addition, the bridge notes that the source address of the new frame is E and that that address is associated with port 1. Thus, the bridge updates its port/address table as follows:

Port    Address
0       A
1       E
Filtering Next, it was assumed that station B on LAN A transmits a frame to station A on that network. As the frame flows on the network, it reaches port 0 of the bridge. The bridge notes from the entries in its port/address table that the destination address of the frame (A) is on port 0. Thus, the bridge filters the frame so that it does not flow through it. In addition, the bridge examines the source address of the frame
Table 5.1 Steady-State Port/Address Table

Port    Address
0       A
0       B
0       C
1       D
1       E
1       F
(B) and, because it is not in the port/address table, adds the station address and the port it arrived on. Thus, the port/address table entries are now as follows:

Port    Address
0       A
0       B
1       E
As more and more frames flow over each network, the bridge continues to learn addresses via a backward-learning algorithm by examining the source address of frames entering different bridge ports. This eventually results in the bridge learning the address of each station on each network and the port of the bridge connected to each network. The result of this learning process is the completion of a steady-state port/address table. Table 5.1 shows an example of a steady-state port/address table for the bridge shown in Figure 5.1. We can summarize the operation of a transparent bridge as follows:

1. If the destination and source addresses are on the same network, discard the frame.
2. If the destination and source addresses are on different networks, forward the frame.
3. If the destination address is not known, flood the frame.

When bridges were first developed during the 1970s, memory was relatively expensive and processing capabilities were a fraction of what they are today. In those “good old days” the port/address table included a time stamp for each entry. After a predefined amount of time, the entry was purged, in effect reducing the memory required to store entries as well as the processing required to search the table. If we fast-forward to the present, many bridges and switches still include a time stamp for each entry; however, users can now typically control when entries are purged, and devices can store far more entries than their 1970s-era “cousins.”
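The three Fs can be captured in a small simulation (a hypothetical, minimal sketch of our own; real bridges also age table entries, as noted above):

```python
class TransparentBridge:
    """Minimal model of backward learning with flooding, forwarding, and filtering."""

    def __init__(self, num_ports):
        self.table = {}            # learned MAC address -> port number
        self.num_ports = num_ports

    def receive(self, port, src, dst):
        """Process one frame; return the list of ports the frame is sent out of."""
        self.table[src] = port     # backward learning from the source address
        out = self.table.get(dst)
        if out is None:            # unknown destination: flood all other ports
            return [p for p in range(self.num_ports) if p != port]
        if out == port:            # destination on the same segment: filter
            return []
        return [out]               # known destination elsewhere: forward

bridge = TransparentBridge(2)
print(bridge.receive(0, "A", "E"))   # [1] -> flooded, table was empty
print(bridge.receive(1, "E", "A"))   # [0] -> forwarded to port 0
print(bridge.receive(0, "B", "A"))   # []  -> filtered, A is on port 0
print(bridge.table)                  # {'A': 0, 'E': 1, 'B': 0}
```

The three calls mirror the flooding, forwarding, and filtering examples above, and the resulting table matches the entries built in the text.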
Intelligent Switching Hubs Bridges were originally developed for use with shared-media LANs, such as 10BASE-5 and 10BASE-2 networks. Thus, the bandwidth constraints associated with shared-media networks are also associated with bridges. That is, a transparent bridge can route only one frame at a time received on one or more ports, with multiple-port frame broadcasting occurring during flooding or when a broadcast frame is received. Recognizing the limitations associated with the operation of bridges, equipment vendors incorporated parallel switching technology into devices referred to initially as intelligent switching hubs and now more commonly known as switches. Such devices were developed based upon technology used in matrix switches, which for years were successfully used in telecommunications operations. By adding buffer memory to store address tables as well as buffering frames, the switch could read the destination address of frames entering the device on multiple ports and either directly switch the frames to their destination or buffer them in memory until the destination port becomes available.
Basic Components Figure 5.2 illustrates the basic components of a four-port intelligent switch. Each switch consists of Buffers and Address Tables (BAT), logic, and a switching fabric that permits frames entering one port to be routed to any port in the switch. The destination address in a frame is used to determine, via a search of the port/address table, the port associated with that address, with that port address then used by the switching fabric for establishing the cross-connection.
Buffer Memory The amount and placement of buffer memory depends upon the type of switch and its design. Some switches use a small amount of buffer memory to hold frames arriving at each port until their destination address is read. Other switches may store frames as they are input to compute a Cyclic Redundancy Check (CRC) and determine if the frame is error free and, if not, discard the frame. In addition, some switches may store frames in a central memory area instead of using distributed memory.
Delay Times Switching occurs on a frame-by-frame basis, with the cross-connection torn down after being established to route a frame. Thus, frames can be interleaved from two
BAT: Buffer and Address Table

Figure 5.2 Basic components of an intelligent switch
or more ports to a common destination port with a minimum of delay. For example, consider a non-VLAN tagged frame whose maximum length is 1526 bytes, including the preamble and start of frame fields. At a 10-Mbps operating rate, each bit time is 1/10⁷ sec or 100 ns. For a 1526-byte frame, the minimum delay time, if one frame precedes another frame being routed to a common destination, becomes
1526 bytes × (8 bits/byte) × (100 ns/bit) = 1.22 ms
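A one-line helper shows how this delay scales with the operating rate (an illustrative sketch; as in the computation above, the frame length excludes the interframe gap):

```python
def blocking_delay_s(frame_bytes, bits_per_second):
    """Time to clock one frame onto the wire at the given operating rate."""
    return frame_bytes * 8 / bits_per_second

for label, rate in [("10 Mbps", 10e6), ("100 Mbps", 100e6),
                    ("1 Gbps", 1e9), ("10 Gbps", 10e9)]:
    print(label, blocking_delay_s(1526, rate))   # ~1.22 ms down to ~1.22 microseconds
```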
As you might expect, blocking delay decreases as the data rate increases. Table 5.2 lists the blocking delay times for a maximum length non-VLAN frame for Ethernet at 10 Mbps through 10 Gbps.

Table 5.2 Blocking Delay Times for a 1526-Byte Frame

Ethernet Version    Blocking Delay
10 Mbps             1.22 ms or 1220 μs
100 Mbps            0.122 ms or 122 μs
1 Gbps              0.0122 ms or 12.2 μs
10 Gbps             0.00122 ms or 1.22 μs
The previously computed delay times represent blocking resulting from frames entering on two ports having a common destination and should not be confused with another delay, referred to as latency. Latency represents the delay associated with the physical transfer of a frame from one port through the switch to another port and is based upon the architecture of the switch, which adds delays above and beyond those associated with the physical length of the frame being transported through the switch. In comparison, blocking delay depends upon the number of frames from different ports attempting to access a common destination port and the method by which the switch is designed to respond to blocking. Some switches have large buffers for each port and service ports in a round-robin fashion when frames from two or more ports attempt to access a common destination port. Other switches may implement a priority service scheme based upon the occupancy of the port buffers in the switch.
Parallel Switching The ability to support parallel switching is the key advantage obtained from the use of a switch. This feature permits multiple cross-connections to occur between source and destination ports at the same time. To illustrate this capability, we will return our focus to Figure 5.2 and assume that four 100BASE-T networks were connected to each port of the four-port switch. Assuming a station on two LANs communicates with stations on the other two LANs, two simultaneous cross-connections are supported, each at 100 Mbps. This results in an increase in bandwidth to 200 Mbps. Thus, from a theoretical perspective, an N-port switch where each port operates at 100 Mbps provides a potential throughput of up to N/2 * 100 Mbps. Thus, by connecting either individual workstations or network segments to switch ports, you can overcome the operating rate limitation of an Ethernet network. In addition, because the cross-connection through a switch represents a dedicated connection, there will never be a collision. Thus, the collision detection wire-pair can be used to provide a full-duplex capability, which can dramatically enhance the operation of servers connected to a switch. Now that we have an appreciation for the key advantages associated with a switch, we will turn our attention to the manner by which switches operate.
Switch Operations In this section we will examine two different types of switch operations. First, we will examine the three basic types of switching techniques as well as discuss the latency associated with each technique. Once this is accomplished, we will then focus our attention upon switching methods, obtaining an understanding of why
some switches are limited to connecting individual workstations to a switch port and others enable network segments to be connected to a port.
Switching Techniques There are three basic types of switching techniques: cut-through or “on-the-fly,” store-and-forward, and hybrid. The latter alternates between the first two methods based upon the frame error rate.
Cross-Point Switching A cross-point or cut-through switch has the minimum amount of latency due to the manner by which it operates. As a frame enters a switch port, the switch uses the destination address as a decision criterion to obtain a port destination from a look-up table. Once a port destination is obtained, the switch initiates a cross-connection, resulting in the frame being routed to a destination port. Switches, like bridges, initially operated only at the MAC layer and used a backward-learning process to construct a port/address table. Thus, a Layer 2 switch follows the three Fs: flooding, forwarding, and filtering frames.
Operation Figure 5.3 illustrates the basic operation of a cross-point or cut-through switch. When cut-through switching occurs, the frame enters a switch port and its destination address is read (1) prior to the entire frame being read and buffered. The destination address is forwarded to a look-up table (2) to determine the port destination address, which is used by the switching fabric to initiate a cross-connection to the destination port (3). Because this switching method only requires the storage of a
Figure 5.3 Cross-point switching
small portion of a frame until the switch can read the destination address and perform its table look-up operation to initiate switching to an appropriate output port, latency through the switch is minimized.
Latency You can consider latency as a brake on the transmission of frames, whose effect depends upon the application being routed via the switch. For example, in a conventional client–server data transfer environment the transmission of a frame by a workstation results in a server response. Thus, the minimum wait time, without considering the server processing delay, is two times the latency for each client–server exchange, lowering the effective data transfer rate. For a VoIP application with digitized voice routed through a switch, the additional latency can represent a different type of problem. That is, as 20-ms snippets of a digitized conversation pass through the switch, the latency could make the reconstructed speech sound awkward, which can be more of a problem than a lowering of the effective throughput of the switch. Because the cross-point switching technique results in a minimum amount of latency, its effect upon different applications routed through this type of switch is minimal.
Store-and-Forward A store-and-forward switch first stores a frame in memory prior to initiating its switching fabric to move the frame to its destination. Once the entire frame is stored, the switch checks the integrity of the frame by computing a local CRC on the contents of the frame and comparing its computed CRC against the CRC stored as part of the frame. If the two match, the frame is considered to be error free and will be switched to its destination. Otherwise, the frame is considered to have one or more bits in error and is sent to the great bit bucket in the sky.
Filtering Capability In addition to allowing frames to be error checked, their storage permits filtering against various frame fields to occur. Thus, frames transporting certain protocols could be routed to predefined destination ports or other fields could be used to create switching rules.
Operation Figure 5.4 illustrates the basic operation of a store-and-forward switch. In this example the switch is shown using shared buffer memory to store frames prior to
Figure 5.4 Store-and-forward switching
their switching to a port affiliated with the destination address of the frame. The store-and-forward switch first reads the frame’s destination address (1) as it flows into the shared memory buffer (2). As the frame is being read into memory, a lookup operation using the frame’s destination address (3) occurs to obtain the destination port address. Once the entire frame is stored in memory, a CRC is performed and one or more filtering operations can be initiated. If the CRC indicates that the frame is error free, it is then forwarded from memory to its destination (4), otherwise the frame is discarded.
Delay Time Without considering a VLAN frame, the minimum length of an Ethernet frame is 72 bytes on a LAN and 64 bytes when stored in memory, because the preamble and start of frame delimiter (SFD) fields are not stored. Thus, the minimum one-way delay when a station or LAN operating at 10 Mbps is connected to a switch port becomes
9.6 μs + 64 bytes × 8 bits/byte × 100 ns/bit or 9.6 × 10−6 + 512 × 100 × 10−9 or 60.8 × 10−6 sec
In the previous computation, 9.6 μs represents the Ethernet interframe gap, and 100 ns/bit is the bit duration of a 10 Mbps Ethernet LAN. Thus, the minimum one-way latency of a store-and-forward Ethernet switch is 60.8 × 10−6 sec, and the round-trip minimum latency is twice that duration. For a maximum-length, non-VLAN Ethernet frame with a data field of 1500 bytes, the frame length that is stored in a switch is 1518 bytes. Thus, the one-way maximum latency becomes
Table 5.3 Minimum and Maximum Ethernet Frame Delays

Ethernet Operating Rate    Minimum Frame Length Delay    Maximum Frame Length Delay
10 Mbps                    60.8 μs                       1224 μs
100 Mbps                   6.08 μs                       122.4 μs
1 Gbps                     0.608 μs                      12.24 μs
10 Gbps                    0.0608 μs                     1.224 μs
9.6 μs + 1518 bytes × 8 bits/byte × 100 ns/bit or 9.6 × 10−6 + 12144 × 100 × 10−9 or 1224 × 10−6 sec
Table 5.3 indicates the delay times for minimum and maximum length Ethernet frames at operating rates from 10 Mbps through 10 Gbps. As you might expect, as the data rate increases, the delay time resulting from storing frames decreases.
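The store-and-forward delays can be generated for any rate with a small helper (a sketch; 96 bit times is the interframe gap, and the stored frame excludes the preamble and SFD):

```python
GAP_BITS = 96   # interframe gap expressed in bit times

def store_delay_s(stored_bytes, bits_per_second):
    """One-way store-and-forward delay: interframe gap plus full frame receipt."""
    return (GAP_BITS + stored_bytes * 8) / bits_per_second

print(store_delay_s(64, 10e6))     # ~60.8 microseconds, minimum frame at 10 Mbps
print(store_delay_s(1518, 10e6))   # ~1224 microseconds, maximum frame at 10 Mbps
print(store_delay_s(1518, 1e9))    # ~12.24 microseconds at 1 Gbps
```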
Hybrid A third type of switch, which supports both cut-through and store-and-forward operation, is the hybrid switch. The hybrid switch monitors the frame error rate in the cut-through mode of operation. If the error rate exceeds a predefined level, the switch changes its mode of operation to store-and-forward, enabling frames with one or more bits in error to be discarded. The major benefit of a hybrid switch is the fact that it provides minimal latency when the frame error rate is low, while adapting to a store-and-forward switching method that discards errored frames once the error rate exceeds a predefined threshold.
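The mode-switching behavior can be sketched as follows (a hypothetical illustration; the error-rate threshold and measurement window are assumptions of ours, not values from any standard):

```python
class HybridSwitchMode:
    """Toggle between cut-through and store-and-forward based on observed errors."""

    def __init__(self, threshold=0.01, window=1000):
        self.threshold = threshold   # assumed error rate that triggers store-and-forward
        self.window = window         # assumed frames observed per measurement interval
        self.frames = 0
        self.errors = 0
        self.mode = "cut-through"

    def observe(self, frame_in_error):
        """Record one frame; re-evaluate the mode at the end of each window."""
        self.frames += 1
        self.errors += int(frame_in_error)
        if self.frames >= self.window:
            error_rate = self.errors / self.frames
            self.mode = ("store-and-forward" if error_rate > self.threshold
                         else "cut-through")
            self.frames = self.errors = 0
        return self.mode
```

For example, a monitor built with a window of 10 frames would switch to store-and-forward after observing 5 errored frames within that window, and fall back to cut-through after a clean window.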
Switch Port Address Support Switches fall into two general categories based upon their port address support: port-based switching and segment-based switching.
Port-Based Switching A switch which performs port-based switching supports only a single address per port. This restricts switching to one device per port while requiring a minimal amount of
Figure 5.5 Port-based switching
memory in the switch. In addition, the table look-up process is faster because this type of switch limits support to one device address per port. Figure 5.5 illustrates an example of the use of a port-based switch. In this example M user workstations use the switch to contend for access to N servers, where M >> N. If a switch operating at 10 Mbps per port has P ports in total, then the maximum throughput is (10 Mbps × P)/2, because up to P/2 simultaneous cross-connections can be supported. However, in the real world the number of workstations (M) connected to a switch greatly exceeds the number of servers (N) connected. Because the vast majority of communications occurs as client–server data exchanges, this means that the number of servers normally governs the throughput of the switch. Both cut-through and store-and-forward switches can employ port-based switching. When a cut-through switch uses port-based switching, its latency is minimized because only one address is stored per port, reducing the time required to search the device’s port/address table for routing frames to their destination.
Segment-Based Switching A segment-based switch supports the switching of multiple addresses per port. This means that you can connect individual workstations or network segments to each switch port. Figure 5.6 illustrates the use of a segment-based switch in a modern Ethernet environment where conventional 10BASE-T and 100BASE-T hubs long ago replaced the
Figure 5.6 Segment-based switching
use of coaxial cable bus-based network segments. Although two conventional hubs are shown attached to two segment-based switch ports, you can also connect individual workstations and servers to such ports. In fact, the top portion of Figure 5.6 illustrates the connection of two “corporate” servers to the segment-based switch. In examining Figure 5.6, note that each conventional hub functions as a repeater and forwards each frame transmitted to the hub to the switch port to which it is connected, regardless of whether or not the frame requires the resources of the switch. The switch examines the destination address of each frame against addresses in its look-up table, forwarding those frames that warrant forwarding, and also performing filtering and flooding as necessary. Because a segment-based switch stores more than one address per port, the search of its port/address table is more time consuming than an equivalent port-based switch. Thus, you can expect a worst-case latency to occur when you use a store-and-forward segment-based switch.
Applications To obtain an appreciation of the applications that can use Ethernet switches, a few words concerning the types of Ethernet switch ports are justified. Originally,
Figure 5.7 Using Fast Ethernet ports to service servers
Ethernet switch ports were limited to supporting 10BASE-T, and each port operated at 10 Mbps. When Fast Ethernet ports were first incorporated into Ethernet switches, their cost precluded every port operating at 100 Mbps. Instead, a few high-speed ports were used for server connections, and 10BASE-T ports were used for connecting individual workstations and LAN segments. An example of this architecture is shown in Figure 5.7. In examining Figure 5.7, note the two 100BASE-T connections to the server located in the upper right corner of the illustration. The use of two or more connections from a switch to a workstation, server, or other device is referred to as a fat pipe. Normally, fat pipes are used to enhance the data transfer between a switch and a server. Although switches that support 1 GbE and 10 GbE are available, the per-port cost can be high, making the use of fat pipes an attractive, cost-effective alternative.
Considering Port Capability

Because queries from workstations flow at 10 Mbps to servers that respond at 100 Mbps, dual-speed switches included additional buffers as well as a flow control mechanism to regulate the flow of data between devices connected to the switch
that operate at different data rates. With the growth in the manufacture of 10/100BASE-T NICs, which lowered costs, most switches today support 10/100 Mbps data rates as a minimum. In addition, the development of Gigabit Ethernet and 10 GbE resulted in switches being manufactured that support a few such ports, as well as some switches on which every port operates at 1 Gbps. Because there are many types of GbE and 10 GbE technologies, it is important to select the type of high-speed port(s) that will satisfy your organization’s transmission requirements. For example, differences in the transmission range of various types of GbE fiber can cause an application to fail unless the correct type of fiber port is available on a switch. Now that we have an appreciation for the necessity to carefully select the type of switch ports, we will turn our attention to the manner by which switches can be used to support different applications.
Basic Switching

For small organizations a centralized switch may be sufficient to support the entire organization. An example of the use of a 10/100 Mbps Ethernet switch to connect both individual workstations and workstations located on legacy segments to corporate servers is shown in Figure 5.8. Note that the failure of this switch would in effect disable network communications for the entire organization. For this reason, most switches can be obtained with redundant power supplies and central logic, resulting in a cutover if a primary element should fail.
Multi-Tier Networking

In some modern organizations, departments such as engineering and accounting may have their own servers, while at the corporate level one or more mail servers support the entire organization and a large router provides Internet access for all employees. In this type of environment, a multi-tier network consisting of interconnected switches can be considered to satisfy organizational networking requirements. Figure 5.9 illustrates the generic use of a two-tiered, Ethernet switch-based network. Here the switches at the lower tier provide individual departments with connectivity to their local server as well as access to a corporate e-mail server and a router that provides a connection to the Internet. One of the major benefits of a multi-tier switching arrangement is that the failure of a single switch or inter-switch connection still enables employees connected to other departmental switches to perform the majority of their work. For example, if the connection between one departmental switch and the higher-tier switch should fail, employees connected to other departmental switches could continue to access their departmental servers.
Figure 5.8 Using a single switch in a small department or corporation (diagram: a 10/100 Mbps Ethernet switch with 100BASE-T fat-pipe ports serving two servers and 10BASE-T ports serving conventional hubs and workstations)
Interconnecting Dispersed Offices

With the development of the Ethernet in the First Mile (EFM) IEEE 802.3ah standard, it has become possible to access a Carrier Ethernet service that enables data to flow between organizational locations as Ethernet frames. This ability can significantly improve network performance because it does away with protocol conversion at the network layer, enhancing throughput. Figure 5.10 illustrates the use of an Ethernet switch that includes a 1000BASE-BX10 port, which supports 1000-Mbps Ethernet transmission over an individual single-mode fiber at distances up to 10 km. Thus, if a communications carrier serving an office provides a single-mode fiber into the building, that fiber could be used as an EFM connection to the Carrier Ethernet service, which is shown in the middle of the figure. Similarly, other offices within the metropolitan area can use switches with one or more 1000BASE-BX10 or other EFM-compatible ports to access the Carrier Ethernet service. In this manner offices can communicate with one another with a minimum of latency because data is transported end-to-end at Layer 2.
Figure 5.9 Using a two-tiered Ethernet switch-based network (diagram: a higher-tier switch connecting a mail server and router to lower-tier departmental switches, each serving workstations and a departmental server; legend: DS = Departmental Server)
Virtual LANs

One of the key benefits associated with the use of switches is the support many devices provide for the operation of different types of VLANs. In this section, we will first examine the basic characteristics of a VLAN and the rationale for its use. Once this is accomplished, we will examine the different types of VLANs and the IEEE VLAN standard, concluding with a review of the advantages associated with the use of VLANs.
Characteristics

A VLAN can be considered to represent a broadcast domain. This means that a transmission generated by one station on a VLAN is only received by those stations predefined by some criteria to be in the domain.
Construction Basics

A VLAN is constructed by the logical grouping of two or more network nodes on a physical topology. Accomplishing this logical grouping requires the use of a “VLAN-aware” switch.
Figure 5.10 Using Ethernet to link geographically separated locations within a metropolitan area (diagram: Ethernet switches at dispersed offices connected to a Carrier Ethernet service via 1000BASE-BX10 or other EFM ports)
Implicit versus Explicit Tagging

Two methods can be used to form a VLAN: implicit tagging and explicit tagging. Implicit tagging eliminates the need for a special tagging field to be inserted into frames. Examples of implicit tagging include the use of MAC addresses, port numbers, protocols transported by a frame, or another parameter that permits nodes to be grouped into a broadcast domain. In comparison, explicit tagging requires the addition of a field to a frame or packet. One of the initial disadvantages associated with explicit tagging is that it increases the length of an Ethernet frame. This resulted in a degree of incompatibility between equipment until a significant base of VLAN-aware switches and network adapters reached the market. In the remainder of this section we will examine the use of implicit tagging prior to discussing the IEEE VLAN standard.
Using Implicit Tagging

As previously mentioned, MAC addresses, port numbers, and protocols, when used to create VLANs, can be considered to represent implicit tagging methods. To illustrate the creation of VLANs based upon implicit tagging, consider the eight-port Ethernet switch shown in Figure 5.11. In this example the ports are numbered 0 through 7, and for simplicity the MAC address of each device connected to the switch is indicated as A0 through A7. We will further assume that the clients with MAC addresses A0 through A5 will be associated with the first VLAN, and clients with MAC addresses A6 and A7 will be associated with the second VLAN. Also note that of the two servers shown connected to the switch, one has an address that places it in the first VLAN, while the address of the second server places that device in the second VLAN. In examining Figure 5.11, note that each VLAN represents a logical grouping of ports on top of the physical topology of the network. Each group of ports represents an independent broadcast domain, such that frames transmitted by a workstation or server on one domain remain constrained to that domain. In addition to MAC-based implicit tagging, it is also possible to create implicit-tagged VLANs based upon the port where a frame enters a switch or the protocol carried in a frame. For port-based VLANs, an administrator could simply use the switch console to assign a group of ports to each VLAN. The creation of protocol-based VLANs can be more difficult, because the switch has to look deeper into each frame to determine the protocol being transmitted.
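The broadcast-domain behavior described above can be modeled as follows, using a hypothetical MAC-based membership patterned after Figure 5.11:

```python
# Hypothetical MAC-based VLAN membership patterned after Figure 5.11.
vlan_membership = {
    1: {"A0", "A1", "A2", "A3", "A4", "A5"},
    2: {"A6", "A7"},
}

def broadcast_recipients(src_mac):
    """Return the stations that receive a broadcast from src_mac.

    The frame stays within the sender's broadcast domain (its VLAN);
    the sender itself does not receive its own frame.
    """
    for members in vlan_membership.values():
        if src_mac in members:
            return sorted(members - {src_mac})
    return []  # unknown station: belongs to no VLAN in this model
```

A broadcast from A0 thus reaches only A1 through A5, while a broadcast from A6 reaches only A7, illustrating how each VLAN forms an independent broadcast domain over one physical switch.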
Explicit Tagging

Explicit tagging involves the insertion of a VLAN tag into frames. Prior to the introduction of the IEEE 802.1Q standard, several vendors developed proprietary methods to support VLAN creation. Two examples of those proprietary methods are Cisco’s ISL (Inter-Switch Link) and 3Com’s VLT (Virtual LAN Trunk). Because the IEEE 802.1Q standard is today used by most, if not all, vendors offering VLANs, we will turn our attention to that standard.
The IEEE 802.1Q Standard

The IEEE 802.1Q standard defines a method for tagging Ethernet frames with VLAN membership information as well as defining how VLAN bridges (switches) operate. Under the IEEE 802.1Q standard, compliant switch ports can be configured to transmit tagged or untagged frames. A tagged frame contains fields for both VLAN and 802.1P priority information that can be transmitted between 802.1Q-compliant switches, enabling a VLAN to span multiple switches. However, it is important to ensure that intermediate as well as endpoint or edge switches are all 802.1Q compliant, as many NICs and legacy switches do not support VLANs. If such devices receive a tagged frame, they will more than likely discard it, either because the value in the legacy length field appears incorrect or because tagging increases the maximum length of a frame from 1518 to 1522 bytes.
Figure 5.11 Establishing a MAC-based VLAN (diagram: an eight-port switch, ports 0 through 7, connecting clients and servers with MAC addresses A0 through A7; legend: vLAN 1 = Ports 0, 2, 3, 4, 5; vLAN 2 = Ports 1, 5, 7)
The VLAN Tag

Figure 5.12 illustrates the VLAN Ethernet frame. Note that the VLAN tag, which consists of four bytes, is inserted between the source address and length fields. The first two bytes in the VLAN tag are referred to as the tag protocol identifier (TPID), which has a defined value of hex 8100, indicating that the frame carries IEEE 802.1Q/802.1P data.

Figure 5.12 The Ethernet VLAN frame (fields: Preamble, SFD, Destination Address, Source Address, 802.1Q Tag Type [hex 8100], Tag Control Information [User_Priority 3 bits, CFI 1 bit, vLAN ID 12 bits; CFI = Canonical Form Indicator], Length/Type, Data, FCS)
The second two bytes in the VLAN tag are referred to as the tag control information (TCI) field. This field consists of three subfields: user priority (3 bits), canonical format indicator (1 bit), and VLAN ID (VID), with the latter 12 bits in length. The user priority subfield provides eight (2³) priority levels. The 1-bit CFI subfield is used for compatibility between Ethernet and Token-Ring networks; the CFI is always set to 0 for Ethernet switches, and if the CFI is set to 1, the frame should not be forwarded to an untagged port. The third subfield, VID, allows up to 4096 (2¹²) VLANs to be identified; however, a VID of 0 is used to identify priority frames, and a value of hex FFF is reserved, reducing the maximum number of VLANs to 4094.
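The tag layout described above can be decoded programmatically; the following sketch (the function name is illustrative) extracts the three TCI subfields from the four tag bytes:

```python
def parse_vlan_tag(tag: bytes):
    """Decode a 4-byte 802.1Q tag: TPID (2 bytes) followed by TCI (2 bytes)."""
    tpid = (tag[0] << 8) | tag[1]
    if tpid != 0x8100:
        raise ValueError("not an 802.1Q tag")
    tci = (tag[2] << 8) | tag[3]
    priority = tci >> 13       # upper 3 bits: user priority (0-7)
    cfi = (tci >> 12) & 0x1    # next bit: canonical format indicator
    vid = tci & 0x0FFF         # lower 12 bits: VLAN ID
    return priority, cfi, vid
```

For example, the tag bytes hex 81-00-A0-64 decode to user priority 5, CFI 0, and VID 100.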
VLAN Traffic

The VLAN standard defines three types of traffic: untagged, priority tagged, and VLAN tagged. Untagged frames do not have a VLAN tag. Priority-tagged frames are VLAN-tagged frames that have a valid priority setting and a VID (VLAN ID) set to zero, also referred to as a NULL VID. The third type of traffic, VLAN tagged, contains a VLAN tag with a non-zero VID field value.
802.1P Signaling

The three priority bits enable network traffic to be prioritized at the MAC layer. The use of the 802.1P standard allows users to prioritize traffic into various classes; however, no bandwidth reservations are established. Switches, routers, servers, and even desktop computers that support VLAN tagging can set these priority bits. Although there is no standard that maps the priority field values to priority queues, the IEEE provided recommendations concerning how traffic classes correspond to priority values. From these recommendations, network managers can assign 802.1P values to queues supported by switches, routers, and other devices. For example, 802.1P values of 0 through 3 could be assigned to a low-priority queue, priority levels 4 through 6 to a medium-priority queue, and priority 7 to a high-priority queue.
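The example queue assignment above can be expressed as a simple mapping; the grouping into three queues is the policy choice from the text, not a mandate of the standard:

```python
def queue_for_priority(p8021p: int) -> str:
    """Map an 802.1P priority value to one of three queues.

    Mirrors the example assignment in the text (0-3 low, 4-6 medium,
    7 high); the grouping is a network-manager policy choice.
    """
    if not 0 <= p8021p <= 7:
        raise ValueError("802.1P priority is a 3-bit value (0-7)")
    if p8021p <= 3:
        return "low"
    if p8021p <= 6:
        return "medium"
    return "high"
```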
VLAN Operations

The IEEE 802.1Q standard is limited with respect to the manner by which it handles untagged frames. Under the 802.1Q standard, only a per-port VLAN solution is employed for untagged frames. This means that assigning untagged frames to VLANs considers only the port on which they were received. Each port has a parameter called the port VLAN identifier (PVID) that specifies the VLAN assigned to receive untagged frames. IEEE 802.1Q-compliant switches can
be configured either to admit only VLAN-tagged frames or to admit all frames, with the latter including untagged frames. Initially, each physical port on an 802.1Q switch is assigned a PVID value that represents its native VLAN ID, whose default value is VLAN 1. All untagged frames are assigned to the VLAN specified by the PVID parameter, with the PVID in effect functioning as the frame’s tag. Because untagged frames are classified in this manner, tagged (VLAN-aware) and untagged (VLAN-unaware) devices can co-exist on a common network infrastructure. Figure 5.13 illustrates how VLAN-aware and -unaware stations can co-exist. In examining Figure 5.13, note the two VLAN-unaware end stations shown in the lower left portion of the figure. Because they are VLAN-unaware, they will be associated with VLAN C, assuming that the PVIDs of the applicable ports on the VLAN-aware switches in this example are set equal to VLAN C. The VLAN-unaware stations only transmit untagged frames, which informs the VLAN-aware devices that receive such frames to assign them to VLAN C.
Ingress Rules

Each frame received by a VLAN switch can only belong to a single VLAN. This is accomplished by associating a VID value with the received frame. If the VID is set to 0 (null VLAN ID), then the tag transports only priority information, and the frame is classified using the port’s PVID. In fact, if the port is set to admit only VLAN-tagged frames, such frames are dropped. If the VID is not the null VLAN ID, then the VLAN identifier parameter in the tag is used. However, a VID of hex FFF is reserved and is not configurable as a PVID. All frames that are not discarded as a result of the application of ingress rules are sent to the forwarding process and learning process prior to exiting the switch based upon 802.1Q egress rules.
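A simplified model of these ingress rules follows; the function name and the representation of an untagged frame as a VID of None are illustrative assumptions:

```python
def classify_ingress(vid, port_pvid, admit_tagged_only=False):
    """Assign an incoming frame to a VLAN per simplified 802.1Q ingress rules.

    vid is the frame's VID, or None for an untagged frame. Returns the
    VLAN the frame belongs to, or None if the frame is discarded.
    """
    if vid == 0xFFF:             # reserved value: never a valid classification
        return None
    if vid is None or vid == 0:  # untagged, or priority-tagged (null VID)
        if admit_tagged_only:
            return None          # port admits only VLAN-tagged frames: drop
        return port_pvid         # classify using the port's PVID
    return vid                   # VLAN-tagged frame: use its own VID
```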
The Forwarding Process

The forwarding process allows frames to be forwarded for transmission on some ports, referred to as “transmission ports,” and discarded without being transmitted on other ports. A port can be considered a potential transmission port only if it is in a forwarding state, the frame was received on a port that was in a forwarding state, and the port considered for transmission is not the same port on which the frame was received. Once a frame is permitted to be forwarded, it is subject to filtering. Filtering can occur based upon the destination MAC address in the frame, its VID, information in the filtering database for that MAC address and VID, or a default filtering behavior specified for the potential transmission port.
Figure 5.13 Coexistence of VLAN-aware and -unaware stations (diagram: VLAN-aware switches with access ports for vLAN A [PVID = A], vLAN B [PVID = B], and vLAN C [PVID = C], interconnected with VLAN-unaware switches and end stations on vLAN C; legend: PVID = Port VLAN Identifier)
Frame Queuing

The forwarding process can result in frames being queued prior to flowing to a destination port. Both unicast and group-addressed frames with a given user priority are either sent in the order of receipt or assigned to storage queues based on their user priority. The user priority can be mapped to one of up to eight traffic classes, with the latter occurring when a VLAN switch supports priority queuing. Table 5.4 illustrates the IEEE-recommended user priority to traffic class mappings. In examining Table 5.4, note that up to eight traffic classes are supported by the traffic class tables, enabling separate queues for each level of user priority. Traffic classes are numbered 0 through N − 1, where N represents the number of traffic classes associated with a defined outbound port. If the forwarding process does not support expedited classes of traffic, the user priority value is mapped to traffic class 0, which corresponds to non-expedited traffic. Once a frame is queued it will normally be processed and transmitted through a port. However, there are several situations in which a frame can be removed from the queue: if the guaranteed buffering time is exceeded, if the maximum bridge transit time is reached before the frame can be transmitted, or if the associated port leaves the forwarding state.
Table 5.4 Recommended User Priority to Traffic Class Mappings

                      Number of Available Traffic Classes
User Priority        1    2    3    4    5    6    7    8
0 (default)          0    0    0    1    1    1    1    2
1                    0    0    0    0    0    0    0    0
2                    0    0    0    0    0    0    0    1
3                    0    0    0    1    1    2    2    3
4                    0    1    1    2    2    3    3    4
5                    0    1    1    2    3    4    4    5
6                    0    1    2    3    4    5    5    6
7                    0    1    2    3    4    5    6    7
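The mappings in Table 5.4 can be encoded directly as a lookup table:

```python
# Rows: user priority 0-7; columns: 1-8 available traffic classes (Table 5.4).
PRIORITY_TO_CLASS = [
    [0, 0, 0, 1, 1, 1, 1, 2],  # priority 0 (default)
    [0, 0, 0, 0, 0, 0, 0, 0],  # priority 1
    [0, 0, 0, 0, 0, 0, 0, 1],  # priority 2
    [0, 0, 0, 1, 1, 2, 2, 3],  # priority 3
    [0, 1, 1, 2, 2, 3, 3, 4],  # priority 4
    [0, 1, 1, 2, 3, 4, 4, 5],  # priority 5
    [0, 1, 2, 3, 4, 5, 5, 6],  # priority 6
    [0, 1, 2, 3, 4, 5, 6, 7],  # priority 7
]

def traffic_class(user_priority: int, num_classes: int) -> int:
    """Return the recommended traffic class for a user priority value
    on a port supporting num_classes (1-8) traffic classes."""
    return PRIORITY_TO_CLASS[user_priority][num_classes - 1]
```

For instance, on a port supporting all eight classes, the default priority 0 maps to class 2, placing it above priorities 1 and 2, which the recommendations treat as lower than best effort.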
Frame Selection

Once frames are ready for transmission, an algorithm is used for their selection. This algorithm selects frames for a given supported traffic class from the corresponding queue only if all queues corresponding to numerically higher values of traffic classes supported by the port are empty at the time of selection.
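This selection rule amounts to strict-priority scheduling, which can be sketched as follows (frames are represented simply as list entries):

```python
def select_frame(queues):
    """Select the next frame for transmission from per-class queues.

    queues maps traffic class number -> list of frames (oldest first).
    A frame is taken from a class only if every numerically higher
    class is empty, yielding strict-priority selection.
    """
    for tclass in sorted(queues, reverse=True):
        if queues[tclass]:
            return tclass, queues[tclass].pop(0)
    return None  # all queues empty
```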
Egress Rules

Once a frame is selected, egress rules are applied to control the exit of frames from switch ports. A transmitting port transmits only VLAN-tagged or untagged frames. This means a VLAN-compliant switch cannot transmit priority-tagged frames. Thus, a station that transmits a priority-tagged frame via a switch will receive a response that is either VLAN tagged or untagged, with the actual response dependent upon the state of the untagged set for the VLAN concerned.
The Learning Process

Similar to the manner by which a bridge learns MAC addresses and the ports on which frames enter the device, a VLAN switch observes the MAC source address of frames received on each port and their VID. As the source MAC address and VID are learned, the learning process updates the information in a filtering database. If the filtering database is full, an existing entry is removed to allow the new entry to be
entered into the database. Normally a default aging time of 300 sec purges rarely used entries. The filtering database can contain both static entries entered via a management configuration process and dynamically learned entries. Such entries can be used to control the forwarding of frames with particular VIDs, the inclusion or removal of tag headers in forwarded frames, and the use of the source MAC address, destination MAC address, VID, and the port on which a MAC address was learned to perform some predefined action.
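A sketch of the learning process with aging appears below; note that the text leaves the choice of which entry to remove from a full database open, so the least-recently-refreshed eviction used here is an assumption:

```python
import time

class FilteringDatabase:
    """Sketch of dynamic MAC learning with a 300-second aging time."""

    def __init__(self, aging_time=300.0, max_entries=1024):
        self.aging_time = aging_time
        self.max_entries = max_entries
        self.entries = {}  # (src_mac, vid) -> (port, last_seen)

    def learn(self, src_mac, vid, port, now=None):
        now = time.monotonic() if now is None else now
        if (src_mac, vid) not in self.entries and len(self.entries) >= self.max_entries:
            # Database full: evict the least recently refreshed entry
            # (an assumed policy; the text only says "an existing entry").
            oldest = min(self.entries, key=lambda k: self.entries[k][1])
            del self.entries[oldest]
        self.entries[(src_mac, vid)] = (port, now)

    def lookup(self, mac, vid, now=None):
        now = time.monotonic() if now is None else now
        entry = self.entries.get((mac, vid))
        if entry is None or now - entry[1] > self.aging_time:
            return None  # unknown, or aged out after 300 sec of disuse
        return entry[0]
```

The optional `now` parameter simply makes the aging behavior easy to exercise without waiting for real time to pass.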
Vendor Implementation

Although the IEEE 802.1Q standard defines a solid base for establishing and using VLANs, many vendors implemented subsets of the standard as well as tailored the capabilities of their switches to add other features, such as VLAN creation based upon IP addressing, IP applications, or another category beyond those available in the 802.1Q standard. In the remaining portion of this section, we will briefly examine how the 3Com SuperStack family of switches supports VLANs.
SuperStack Switch

The SuperStack switch is limited to supporting up to 16 VLANs using the 802.1Q standard. Each switch port can be associated with any single VLAN defined, or placed in multiple VLANs at the same time using 802.1Q tagging. Prior to the switch being able to forward traffic, you need to define the following information about each VLAN:

VLAN name, such as Engineering or Marketing
802.1Q VLAN ID, used to identify the VLAN when 802.1Q tagging is used across your network
Local ID, used to identify the VLAN within the switch, which corresponds to the VLAN ID used in legacy 3Com devices

The 3Com switch also supports VLT tagging, a proprietary 3Com scheme that allows a port to be placed in all VLANs defined for a switch.
The Default VLAN

A new or initialized switch contains a single VLAN, the default VLAN, which has an 802.1Q VLAN ID of 1 and a local ID of 1. Initially, all ports are placed in the default VLAN, and it is the only VLAN that can be used to access the switch’s 3Com management software.
Defining VLANs

For the first example we will assume we are creating VLANs using implicit (untagged) connections. Suppose we have a 24-port switch connected to four end stations and two servers as shown in Figure 5.14. Using the VLAN setup page, we would first define VLAN 2, because VLAN 1 is the default VLAN and already exists. Next, we would edit the port settings using the untagged VLAN list box so that ports 1, 3, and 13 of the switch are placed in VLAN 1 and ports 10, 12, and 24 of the switch are placed in VLAN 2.
Untagged Connections with Hubs

The VLAN shown in Figure 5.14 can be considerably expanded by connecting switch ports to hubs instead of to individual end stations. If we assume that the switch now has a Layer 3 module installed, it can also pass traffic between VLANs. In the example shown in Figure 5.15, a hub is connected via its port number 13 to port number 13 on a switch that has a Layer 3 switching module installed. In this example the switch has ports 13 and 14 assigned to VLAN 1, and ports 2, 7, and 24 assigned to VLAN 2. Using the switching module, VLANs 1 and 2 can communicate with each other. The configuration of the VLAN shown in Figure 5.15 again requires the use of the VLAN list box on the 3Com port setup page of the switch’s Web interface, resulting in ports 13 and 14 being placed in VLAN 1 and ports 2, 7, and 24 being placed in VLAN 2. To enable communications between the two VLANs, the Layer 3 module would then be configured. Finally, port 13 on the hub would be cabled to port 13 on the switch.
Figure 5.14 Creating two VLANs on a 3Com SuperStack 24-port switch (diagram: stations and a server in vLAN 1, stations and a server in vLAN 2)
Figure 5.15 Expanding a VLAN and enabling inter-VLAN communications (diagram: a conventional hub serving stations in vLAN 1 connected to a switch with a Layer 3 switching module; stations in vLAN 2 and servers in vLAN 1 and vLAN 2 attached to the switch)
802.1Q Tagged Connections

Although untagged VLANs are fine for use with a single switch in a 3Com environment, they are not able to support multiple switches. Thus, in a network where VLANs are distributed among multiple switches, 802.1Q tagging must be used. This requirement enables VLAN traffic to be passed along the trunks used to connect switches. Figure 5.16 illustrates the interconnection of two VLAN switches using 802.1Q tags. In this example each switch has end stations that are in VLAN 1 and VLAN 2. In addition, each switch has a server for a VLAN, with all stations in VLAN 1 requiring the ability to connect to the server attached to switch 1 and all stations in VLAN 2 requiring the ability to connect to the server attached to switch 2. In examining the two switches shown in Figure 5.16, note that the untagged VLANs are configured as previously discussed. To provide inter-switch communications, port 26 on switch 1 is assigned to VLANs 1 and 2, so that all traffic will be passed over the trunk to switch 2. On switch 2, port 25 would be assigned to VLANs 1 and 2, enabling all VLAN traffic to flow to switch 1. Once this is accomplished, port 26 on switch 1 would be connected to port 25 on switch 2, allowing end stations in both VLANs to communicate with their applicable servers, which can be on the same switch or on a different switch.
Supporting 802.1Q Learning

In our concluding example, we will examine the use of three switches that support the 802.1Q learning process. In the example shown in Figure 5.17, each end
Figure 5.16 Using 802.1Q tagged connections for interconnecting switches (diagram: switch 1, with stations in vLAN 1 and vLAN 2 [untagged] and a server in vLAN 1 [untagged], linked via 802.1Q-tagged port 26 to port 25 on switch 2, which serves stations in vLAN 1 and vLAN 2 [untagged] and a server in vLAN 2 [untagged])
station informs the network that it is to receive traffic for certain VLANs and the switches automatically place the end stations in those VLANs. In addition, the trunks between switches are automatically configured to forward traffic that contains unknown 802.1Q tags. To support the configuration shown in Figure 5.17, you would configure switch 1 so the end stations belong to VLANs 1, 2, and 3. Similarly, you would configure switch 2 so the stations are assigned to VLANs 4, 5, and 6. Next, you would enable 802.1Q learning on each switch and configure the Layer 3 module to allow communications between VLANs 1 through 6. To complete the network you would then connect port 26 on switch 1 to port 1 of switch 3, and port 25 of switch 2 to port 2 on switch 3. The 3Com SuperStack switch is limited to supporting 16 VLANs and the 802.1Q standard supports up to 4094. Thus, if a network contains stations that support 802.1Q, the 3Com switches may have to forward traffic that uses unknown 802.1Q tags. Such traffic is automatically forwarded if a 3Com SuperStack switch has 802.1Q learning enabled, as assumed in Figure 5.17. In this example each switch is assumed to have 802.1Q learning enabled so they can place stations in a VLAN. In addition, 802.1Q learning enables applicable VLAN traffic to reach a station from anywhere in the network. Thus, traffic from VLANs 1, 2, and 3 as well as unknown tags are forwarded to switch 3 from switch 1, and traffic from
Figure 5.17 Using 802.1Q learning (diagram: switch 1, with stations configured to belong to vLANs 1, 2, and 3 and 802.1Q learning enabled, and switch 2, with stations configured to belong to vLANs 4, 5, and 6 and 802.1Q learning enabled, each forwarding 802.1Q-tagged and unknown-tagged traffic to switch 3, which contains a Layer 3 module using 802.1Q learning)
VLANs 4, 5, and 6 as well as unknown tags are forwarded from switch 2 to switch 3. Under 802.1Q learning, stations transmit a packet with a known multicast address to the entire network. This traffic informs other devices that the station should receive traffic for specific VLANs. When such a packet arrives at a port on a switch with 802.1Q learning enabled, the switch places the receiving port in the specified VLANs and forwards the packet to all other ports. When the frame arrives at another switch with 802.1Q learning enabled, that switch also places the receiving port in the specified VLANs and forwards the frame to all other ports. In this manner VLAN membership information is propagated throughout the network, enabling VLAN traffic to reach stations from anywhere in the network.
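The declaration-and-propagation behavior described above can be sketched as follows; the data structures and function name are illustrative:

```python
def handle_vlan_declaration(switch, port, vlans):
    """Process a station's VLAN membership declaration (simplified).

    Mirrors the behavior described in the text: the switch places the
    receiving port in the declared VLANs, then forwards the declaration
    out every other port so it propagates through the network.
    """
    for vlan in vlans:
        switch["vlan_ports"].setdefault(vlan, set()).add(port)
    # Ports on which the declaration should be forwarded onward.
    return [p for p in switch["ports"] if p != port]
```

Each switch along the path repeats this step, so a single multicast declaration leaves every intermediate switch knowing which of its ports lead toward the declaring station's VLANs.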
Advantages of Use

Previously it was mentioned that a VLAN represents a broadcast domain. One of the key advantages associated with the use of VLANs is the ability to tailor operations by adjusting the broadcast domain. That is, you can increase the number of broadcast domains while reducing the number of stations in each to correspond to employee groupings, departments, or even floor locations. This in turn can reduce network traffic and even increase security, as a member of one domain can be restricted from
being able to view traffic in another domain. Other advantages associated with the use of VLANs include facilitating adds, moves, and changes and reducing the effort required to segment networks.
Chapter 6
Carrier Ethernet Services
Overview

In less than a decade Ethernet has evolved into a full-duplex, point-to-point Layer 2 protocol that no longer experiences collisions. Operating at data rates from 10 Mbps to 10 Gbps on a variety of copper and fiber media, Ethernet has so far exceeded its original distance limitations that the technology can now be used to connect geographically separated offices. Recognizing a new source of revenue, communications carriers have gradually responded to customer requirements and begun to offer metropolitan Ethernet, which in the past few years was renamed Carrier Ethernet. As its name implies, Carrier Ethernet represents a service offered by a communications carrier: the transportation of data in Ethernet frames at Layer 2 of the ISO Open System Interconnection Reference Model. In this chapter, we will turn our attention to Carrier Ethernet. In doing so, we will obtain an overview of the technology and discuss why many communications carriers do not provide Carrier Ethernet service at present. Then we will go into more detail, examining how Ethernet and MPLS can be combined as well as how we can access Carrier Ethernet.
The Metro Ethernet Forum

The concept for Carrier Ethernet dates to 2001, when the Metro Ethernet Forum (MEF) was formed to develop business services for customers of communications
carriers that would be accessed primarily over optical metropolitan networks as a mechanism to connect enterprise LANs. One key MEF objective concerns Carrier Ethernet services. In fact, the forum’s objective is stated as: “Carrier Ethernet services shall be delivered over native Ethernet-based Metro and Access networks and can be supported by other transport technologies.” The MEF defines Carrier Ethernet as a “ubiquitous, standardized, carrier-class service defined by five attributes that distinguish Carrier Ethernet from familiar LAN-based Ethernet.” Those attributes are standardized services, scalability, service management, reliability, and Quality of Service (QoS). Since 2001 the MEF has developed a series of 16 specifications, each defining one or more of those Carrier Ethernet attributes.
Requirements for Use

In extending Ethernet from the LAN into the metropolitan area, native Ethernet protocols need extensions to become scalable, obtain QoS capability and resiliency, and provide Operation, Administration, and Maintenance (OAM) support, with the latter of key importance for communications carriers to monitor provisioning and maintain their network infrastructure. Over the past decade two trends have emerged for transporting Ethernet into and through metropolitan area networks: (1) protocol extensions and (2) encapsulating Ethernet within another transport technology such as MPLS. Unlike an Ethernet LAN, which is dedicated for use by an organization or organizational departments, Carrier Ethernet needs the ability to provide service to different organizations. Thus, the first requirement for developing a Carrier Ethernet service is for the service to support multiple customers. This requirement was satisfied by the use of a relatively old Ethernet technology that was originally intended to represent an enterprisewide technology: the virtual LAN (VLAN). By tagging the Ethernet frames of customers, it becomes possible for the Carrier Ethernet service provider to allow different customers to use the same Ethernet infrastructure without incurring a security risk. To provide readers with a review of VLAN tagging, this author will be a bit redundant instead of simply referencing an earlier portion of this book.
VLAN Tagging

Under the IEEE 802.1Q standard four bytes are inserted into each Ethernet frame. For convenience, Figure 6.1 represents a duplicate of Figure 4.7, which indicates where the four bytes are inserted as well as the three subfields of the tag control information field.
AU6039.indb 158
2/13/08 9:23:05 AM
Carrier Ethernet Services ▪ 159

Figure 6.1 The 802.1Q frame format
bytes: Preamble (7) | SFD (1) | Destination Address (6) | Source Address (6) | Tag Type hex 8100 (2) | Tag Control Information (2) | Length/Type (2) | Data (42–1500) | FCS (4)
Tag Control Information bits: User Priority (3) | CFI (1) | VLAN ID (12)
In examining the 802.1Q bytes, note that the first two, which form the tag type field, are always set to hex 81-00 to identify the frame as an 802.1Q frame. The following 2 bytes of tag control information consist of a 3-bit user priority subfield, a 1-bit Canonical Format Indicator (CFI), and a 12-bit VLAN ID. The first subfield identifies a priority level for the frame, the second indicates whether bit order is canonical or non-canonical (or can have additional significance based upon the MAC protocol), and the third identifies the VLAN. The 12-bit VLAN ID field permits 2^12 − 2, or 4094, unique VLANs. Although 4094 unique VLANs are more than adequate for many metropolitan areas, this number is not sufficient for large cities where tens of thousands of traditional T1 and T3 access lines could be replaced by Carrier Ethernet. Thus, the IEEE modified the 802.1Q specification to significantly enhance the number of definable VLANs. That modification is referred to as the IEEE 802.1ad specification, titled “Provider Bridges.” This specification is actually an amendment to the IEEE 802.1Q-1998 standard. The purpose of this amendment, according to the IEEE, is to enable an architecture and bridge protocols, compatible and interoperable with existing bridged local area network protocols and equipment, to provide separate instances of the MAC services to multiple independent users of a bridged local area network in a manner that does not require cooperation among the users, and requires a minimum of cooperation between the users and the provider of the MAC service. By following the specifications in this amendment a service provider can offer the equivalent of separate LAN segments, bridged or virtual bridged LANs, to a number of users over the provider’s bridged network.
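The tag layout just described lends itself to a few lines of bit arithmetic. The following sketch (hypothetical helper names, not from the text) packs and unpacks the 16-bit tag control information field:

```python
# Sketch: packing and unpacking the 2-byte 802.1Q Tag Control Information
# field that follows the hex 81-00 tag type. Layout: 3-bit user priority,
# 1-bit CFI, 12-bit VLAN ID.

def pack_tci(priority: int, cfi: int, vlan_id: int) -> int:
    """Build the 16-bit TCI value from its three subfields."""
    assert 0 <= priority <= 7 and cfi in (0, 1)
    # VID 0 (priority-tagged frame) and VID 0xFFF (reserved) are unusable,
    # which is why only 2**12 - 2 = 4094 VLANs can be defined.
    assert 1 <= vlan_id <= 0xFFE
    return (priority << 13) | (cfi << 12) | vlan_id

def unpack_tci(tci: int):
    """Return (priority, cfi, vlan_id) from a 16-bit TCI value."""
    return tci >> 13, (tci >> 12) & 0x1, tci & 0x0FFF

tci = pack_tci(priority=5, cfi=0, vlan_id=100)
print(hex(tci))          # 0xa064
print(unpack_tci(tci))   # (5, 0, 100)
```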
160 n Carrier Ethernet: Providing the Need for Speed
While work proceeded on the IEEE 802.1ad amendment, various Layer 2 encapsulation schemes were either proposed or implemented in vendor equipment to address the scalability issue. Such schemes include VLAN stacking, MAC address stacking, and MPLS Layer 2 encapsulation. These techniques are discussed later in this chapter.
The 802.1P (Priority) Standard

One of the key requirements of Carrier Ethernet is to provide a QoS capability. While Ethernet cannot provide QoS directly, the first subfield in the tag control information field can be used to indicate the priority of a frame. It is important to note that the priority value corresponds to a Class of Service (CoS) and does not directly provide a QoS. This difference is significant and deserves a degree of explanation. For QoS to work, a path is set up from network access to network egress such that each device along the path provides a certain level of service at a specified data rate. As the path is established, each device obtains the ability to accept or reject the proposed connection based upon its available resources, similar to the routing of a call through the telephone network, which can result in a fast busy signal when too many customers attempt to dial long distance from a given serving switch. Ethernet, however, was developed as a connectionless technology. This means that it is not possible to predefine a path for a service nor to pre-allocate bandwidth along a path. Instead, QoS mechanisms are used to prioritize frames belonging to different traffic classes, while switches and routers use queuing to favor certain traffic classes over others. Unfortunately, this will not guarantee end-to-end bandwidth and QoS, although it will prioritize traffic based upon different classes of service. In comparison to a QoS environment where a path is set up, in a CoS environment frames are simply marked by a sender to indicate their priority and do not have to follow a specific path through a network. This means network devices have no ability to refuse higher-priority connections, in effect making the network administrator responsible for ensuring that the network is not over-committed with high-priority traffic.
For example, if you have a 1 Gbps link and only have 200 Mbps of priority traffic that will flow over the link, the use of CoS will not cause any problem. Thus, CoS provides a mechanism for expediting various types of network traffic as long as the network administrator takes care to provision such traffic with recognition of the capability of the link speeds in the network.
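The provisioning rule above amounts to simple arithmetic, which a short, hypothetical helper can make explicit:

```python
# Illustrative helper (not from the text): with CoS nothing stops senders
# from marking more priority traffic than a link can carry, so the
# administrator must check the arithmetic manually.

def cos_safe(link_mbps: float, priority_flows_mbps: list) -> bool:
    """True if the sum of priority traffic fits within the link rate."""
    return sum(priority_flows_mbps) <= link_mbps

# The example from the text: 200 Mbps of priority traffic on a 1 Gbps link.
print(cos_safe(1000, [200]))        # True: no problem
print(cos_safe(1000, [600, 500]))   # False: link over-committed
```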
Latency Considerations

Although file transfer and e-mail are not adversely affected by latency, the same cannot be said concerning such real-time applications as videoconferencing and VoIP. When considering the use of Carrier Ethernet, it is important to examine the delays
Figure 6.2 Latency on a Carrier Ethernet network (Customer A’s Locations A and B and Customer B’s Location A connect their customer switches over Gigabit EFM connections and links to a service provider network of carrier switches providing the Carrier Ethernet service)
along the route or path between the access and egress locations. Figure 6.2 illustrates the use of a Carrier Ethernet network to transport frames between two locations. In examining Figure 6.2, it is assumed that the Carrier Ethernet network is used to interconnect two geographically separated customer networks at 1 Gbps. The customer switch at each location is assumed to support 10/100/1000 Mbps, with the latter including a 1000BASE-BX10 interface that provides a bidirectional transmission rate of 1 Gbps over a single-mode fiber at distances up to 10 km.
Switch Latency

Switch latency, which represents the time it takes for a frame to flow from an input port to an output port, depends upon several factors, including but not limited to the type of switch (cut-through or store-and-forward), average frame length, switch congestion, and port operating rates. Table 6.1 indicates the average latency computed from an analysis of ten vendor switches that included either twelve autosensing 10/100/1000BASE-T and four 1000BASE-SX ports or eight 1000BASE-SX ports. The table indicates the effect of both unloaded and loaded conditions when the frame was 64, 512, and 1518 bytes in length. Here an unloaded switch is assumed to represent a switch that has a load at 50 percent or less of its maximum throughput, while it is considered to be loaded when data is flowing into N/2 ports, where N represents the total number of switch ports.
Table 6.1 Switch Latency (μsec)

Frame Length (bytes)    Unloaded    Loaded
64                        6.15       33.78
512                      12.48      125.06
1518                     27.04      132.06
In examining the data contained in Table 6.1, several comments are warranted. First, the entries represent the average delays associated with ten switches. Second, an increase in the average frame length resulted in an increase in latency regardless of the load on the switch. Third and most important, the load on a switch can have a significant effect upon latency: on average, a loaded switch exhibited slightly over five times the latency of an unloaded switch for 64-byte frames, roughly ten times for 512-byte frames, and roughly five times for 1518-byte frames. Because VoIP applications almost always use relatively short frames we can focus our attention upon the delays associated with 64-byte frames. For our simple example shown in Figure 6.2, assuming each switch is unloaded would result in a cumulative switch delay of 6.15 * 4 or 24.6 μs.
Access and Egress Delays

The previously computed switching delays represent only a portion of the end-to-end delay we need to consider. Other delays include the access and egress delays as well as the delay associated with moving data through the carrier network. The access or ingress delay represents the time required to transport the frame from the customer premises into the Carrier Ethernet network. If we assume a 1 Gbps EFM (Ethernet in the First Mile) connection, then each bit requires 1/10^9 or 1.0 × 10^−9 sec. If a 64-byte frame is transported, adding the header and trailer results in an additional 26 bytes, for a total of 90 bytes or 720 bits. Thus, the access or egress time to transfer the frame becomes 720 bits × 1.0 × 10^−9 sec/bit = 0.72 μs. If we assume the egress connection also operates at 1 Gbps, then the total ingress and egress delay is 0.72 × 2 or 1.44 μs.
Frame Transport

Because Carrier Ethernet makes use of an optical carrier, the transport of data occurs through the network at the speed of light. This means that the delay associated with the transport of frames through the network can be simplified by only considering the number of switches frames flow through and then computing the time required to place the frames onto the optical carrier. Returning to Figure 6.2, two carrier switches are shown on the path from location A to location B. Thus, at 1
Gbps the frame transport time is 0.72 μs. Adding up the switch delays, ingress and egress delays, and the frame transport delays, the total frame delay becomes 24.6 μs + 0.72 μs + 0.72 μs or 26.04 μs. Because the transport of digitized voice requires a cumulative delay of less than 150 ms, the network shown in Figure 6.2 does not appear to represent a problem. However, it should be noted that the prior example did not consider the fact that a higher-priority data stream could lock out a lower-priority data stream from being processed by routers or switches for a period of time that could significantly increase latency. In addition, not all carriers have optical solutions for EFM access that operate at 1 Gbps. In fact, some communications carriers offer copper-based solutions, such as 2BASE-TL, which provides a maximum data rate of 2 Mbps at distances up to 2700 m. When 2BASE-TL is used for access to a Carrier Ethernet network, the access delay becomes 720 bits / (2 × 10^6 bits/sec) = 360 × 10^−6 sec, or 0.36 ms. Thus, even if the egress line is also 2BASE-TL the total access and egress delay would be approximately 0.72 ms. When added to the switch delays and frame transport delays the total latency is still far below 150 ms, or 150,000 μs, when frames are prioritized. Thus, the use of Carrier Ethernet coupled with EFM access and egress is well suited for transporting VoIP and other real-time applications as long as a method exists to prioritize such frames.
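The delay arithmetic in this section can be collected into one short script. This is an illustrative sketch using the Table 6.1 figure of 6.15 μs per unloaded switch and the 720-bit wire size from the text; the function names are assumptions:

```python
# Sketch reproducing the chapter's latency arithmetic for a 64-byte frame.
FRAME_BITS = 90 * 8   # 64-byte frame plus 26 bytes of header/trailer

def serialization_us(rate_bps: float) -> float:
    """Time to clock 720 bits onto a link, in microseconds."""
    return FRAME_BITS / rate_bps * 1e6

def end_to_end_us(n_switches: int, switch_latency_us: float,
                  ingress_bps: float, egress_bps: float) -> float:
    """Cumulative switch delay plus ingress and egress serialization."""
    return (n_switches * switch_latency_us
            + serialization_us(ingress_bps)
            + serialization_us(egress_bps))

# Figure 6.2 example: four unloaded switches, 1 Gbps EFM access and egress.
print(round(end_to_end_us(4, 6.15, 1e9, 1e9), 2))   # 26.04 (microseconds)

# A 2BASE-TL copper access line at 2 Mbps instead of 1 Gbps:
print(serialization_us(2e6))                        # 360.0 us = 0.36 ms
```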
Fiber Connectivity

There are a large number of commonly available methods to access a Carrier Ethernet network. Those methods include both copper- and fiber-based EFM technology as well as unused fiber and various types of switch and router modules connected to EFM and dark fiber transports.
Dark Fiber

“Dark fiber” is a term used to define previously installed but currently unused fiber. Because of economics, many carriers install fiber bundles into office buildings and initially use only a fraction of the fibers in the bundle. Then, as traffic requirements increase, the carrier may “light” various dark fiber strands.
Gigabit Interface Converters

Although dark fiber and various EFM copper and fiber methods provide a transport facility, they require an interface to equipment. One of the most popular types of equipment interface is the Gigabit Interface Converter (GBIC). In a Cisco environment the GBIC can be plugged into certain models of Ethernet switches and router ports, after which you connect the optical cable to the GBIC. Table 6.2 lists three types of optical fiber for which you can obtain a GBIC from Cisco. At the
Table 6.2 Common GBICs for Optical Fiber

IEEE              Wavelength (nm)   Fiber Type    Maximum Distance (km)
1000BASE-SX       850               Multi-mode    0.2–0.5
1000BASE-LX/LH    1310              SMF, NDSF     10
1000BASE-ZX       1550              SMF, NDSF     70–100

Note: SMF = single-mode fiber, NDSF = non-dispersion shifted fiber, NZ-DSF = non-zero dispersion shifted fiber.
time this book was written a Cisco 1000BASE-ZX GBIC module was available for approximately $1700 and a 1000BASE-LX/LH long haul, long-wavelength module was obtainable for slightly more than $100. In comparison, a D-Link GBIC for 1000BASE-SX could be obtained for $160. A single-mode fiber is an optical fiber designed to transport a single ray of light. Because such fibers do not significantly exhibit dispersion, they can transport light pulses over greater distances than multi-mode fiber. Unfortunately, lasers emit a range of optical wavelengths, and those wavelengths become spread out in time as they traverse the fiber. This spreading of wavelengths is referred to as chromatic dispersion. Standard single-mode fiber has near-zero dispersion at 1310 nm and represents most of the optical cable installed during the 1980s. This type of cable is also referred to as non-dispersion shifted fiber. In comparison, non-zero dispersion shifted fiber is fabricated to support high-power signals over long distances as well as dense wavelength division multiplexing.
Transporting Ethernet in a Service Provider Network

Over the past decade carrier networks evolved from a reliance on copper-based technology to fiber-based technology. However, the selection of fiber does not provide a uniform transport mechanism. Instead, transport technologies were developed ranging from coarse and dense wavelength division multiplexing to SONET/SDH, ATM, Frame Relay, Switched Ethernet, Resilient Packet Ring (RPR), MPLS, and IP. Thus, the infrastructure of many communication carriers resembles a fruit salad of equipment acquired over a relatively long period of time.
Operating over Other Transports

In discussing Carrier Ethernet, it is important to note that the term does not imply that Ethernet is used end to end. Although Ethernet can be used as a transport
medium, it can also run over different types of transport facilities. Those facilities can include SONET, Resilient Packet Ring, and even MPLS.
Comparison to Other Layer 2 Protocols

The transmission of Ethernet at Layer 2 has several key differences when compared to such Layer 2 protocols as ATM and Frame Relay. The latter two provide an intelligent forwarding mechanism at Layer 2, which in effect is a routing protocol. In comparison, switched Ethernet at Layer 2 has no such intelligence. Instead, frames are processed according to the 3 Fs: filtering, forwarding, and flooding of frames based upon their destination MAC addresses. Because subscribers are associated with MAC addresses that cannot be grouped into a sequence of addresses (unlike IP addressing, where addresses can be subdivided into a network address, subnet address, and host address), each MAC address must be learned and maintained in tables. This represents another limitation associated with Carrier Ethernet, especially if a failure occurs in a metropolitan area serving a large number of subscribers: equipment relearning MAC addresses would require a considerable amount of time, which would adversely affect carrier resilience. This is one reason IP, MPLS, or another transport mechanism is often preferred, because such mechanisms can be scaled to support tens of thousands of customers as well as provide routing intelligence and carrier resilience.
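The 3 Fs can be illustrated with a minimal learning-bridge sketch (hypothetical class, not from the text): the switch learns source addresses per port, forwards to a known destination, floods when the destination is unknown, and filters when the destination sits on the arrival port.

```python
# Minimal sketch of Layer 2 switching: filtering, forwarding, and flooding.
class LearningSwitch:
    def __init__(self, ports):
        self.ports = set(ports)
        self.mac_table = {}                      # MAC address -> port

    def receive(self, in_port, src_mac, dst_mac):
        """Return the set of ports the frame should be sent out of."""
        self.mac_table[src_mac] = in_port        # learn the source
        out = self.mac_table.get(dst_mac)
        if out is None:
            return self.ports - {in_port}        # flood: unknown destination
        if out == in_port:
            return set()                         # filter: same segment
        return {out}                             # forward: known destination

sw = LearningSwitch(ports=[1, 2, 3])
print(sw.receive(1, "M1", "M6"))   # M6 unknown: flood to {2, 3}
print(sw.receive(2, "M6", "M1"))   # M1 learned on port 1: forward to {1}
```

Note that every customer MAC address ends up in `mac_table`, which is exactly the table-explosion concern raised above.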
Ethernet Topologies

When data is transmitted via Ethernet there are two general topologies that can be used: point-to-point and ring. The selection of an Ethernet topology is normally based upon the existing carrier infrastructure. For example, if a SONET/SDH network already exists, Ethernet could be laid over the physical SONET/SDH ring. If a fiber ring exists, Ethernet components could be daisy-chained by interconnecting Gigabit switches to form a ring, or coarse WDM could be used to provide a series of point-to-point Gigabit Ethernet circuits over the physical fiber ring. Thus, in many cases the current carrier infrastructure will be the driving force concerning how Ethernet is transported in the metropolitan area.
Carrier Ethernet Service Types

Although we speak of Carrier Ethernet as a service provider transport facility, we previously noted that an Ethernet service can be based upon virtually any transport technology, such as Ethernet over SONET, MPLS, and so on. Another method used by Carrier Ethernet service providers to differentiate their Carrier Ethernet offerings is by defining their offerings by service type. Thus, a specific Carrier
Figure 6.3 E-LINE service (two routers connected point-to-point across the Carrier Ethernet network)
Ethernet service type, which actually represents a topology, can also be transported as native Ethernet or carried by another transport facility. Presently, three types of Ethernet services can be offered by service providers: E-LINE, E-LAN, and E-TREE. Each type corresponds to a specific type of network topology or architecture. Of the three, E-LINE and E-LAN are well defined; E-TREE can be considered an evolving work in progress for which standards have yet to be finalized.
E-LINE

E-LINE represents a point-to-point Ethernet connection. This connection can be used to interconnect two geographically dispersed offices within a metropolitan area, as illustrated in Figure 6.3. E-LINE can be considered a leased line replacement that offers much higher bandwidth than such conventional telco services as T1 and T3 connections. In examining Figure 6.3, note that the most common endpoint for an E-LINE service is a router with the correct optical interface at each edge of the carrier network. Because most modern switches and routers accept various optical modules, interfacing to an E-LINE service offering is straightforward.
E-LAN

While E-LINE service is used to connect two locations, E-LAN represents an Ethernet service that can be used to connect multiple locations. Thus, you can view E-LAN as resembling a multi-point service that provides an “any-to-any” connection, similar to a VLAN operating over any type of public network. The primary use of an E-LAN Ethernet service is to provide an interconnection capability between multiple organizational sites within a metropolitan area. Figure 6.4 illustrates an E-LAN Ethernet service type.
Figure 6.4 E-LAN Ethernet service type (four routers interconnected in an any-to-any fashion across the Carrier Ethernet network)
E-TREE

A third type of Ethernet service can be considered to represent a point-to-multipoint transmission service. Referred to as E-TREE, this service is similar to an EPON Ethernet topology and is also commonly referred to as “hub and spoke,” “root to leaf,” or “leaf to root.” E-TREE represents a future service that may be widely used once standards are developed. Perhaps the primary use of E-TREE will be a multiplexed connection to an ISP, with branches flowing to different organizational sites. Figure 6.5 illustrates the topology of an E-TREE Carrier Ethernet service.
Encapsulation Techniques

Previously in this chapter we noted that the IEEE 802.1Q standard is limited to supporting 4094 VLANs. Although the 4094-VLAN limit represents a hard numerical constraint, there is also another problem associated with directly using VLAN tagging: switches in the core of a Carrier Ethernet network would have to learn all of the MAC addresses of each host in every customer VLAN (C-VLAN). This could result in extremely large MAC address tables being maintained by core switches, a situation referred to as a MAC address table explosion. Thus, a switch failure might result in a considerable delay as recovery operations require the switch to relearn a considerable number of MAC addresses. Another problem associated with the use of VLAN tags is the possibility that two or more customers may select the same VLAN identifier (VID). If this occurs the service provider must be able to differentiate between them within the Carrier Ethernet domain. Based upon these problems a variety of solutions in the form of encapsulation schemes have been either implemented or proposed to provide a
Figure 6.5 E-TREE (a root router connected through the Carrier Ethernet network to multiple leaf routers)
more scalable Layer 2 service. Such schemes insert additional tags or fields into customer-generated Ethernet frames at ingress nodes, which are removed at the egress nodes. Some of the encapsulation schemes include VLAN stacking and the use of MPLS-based Ethernet encapsulation. As we will note shortly, there are different approaches to each scheme.
VLAN Stacking

There are two methods of VLAN stacking that involve adding additional tagging fields to each Ethernet frame. The first method, which involves the use of VLAN (802.1Q) tags, is commonly referred to as Q-in-Q tagging and was standardized in the IEEE 802.1ad specification. A second stacking method occurs via the use of a virtual Metropolitan Area Network (VMAN) tag instead of a service provider Q-tag.
Q-in-Q Tagging

The first method of VLAN stacking results in an additional Q-tag being inserted into customer Ethernet frames at the ingress switch of a Carrier Ethernet domain. This action results in a frame having two Q-tags: one referred to as the provider (P) tag, while the second represents the customer (C) tag. Figure 6.6 illustrates an example of VLAN stacking, which is more formally defined as a Q-in-Q Ethernet frame under the IEEE 802.1ad specification. Through the use of stacked VLAN tags (Q-in-Q), it becomes possible to define up to 16,777,216 labels, resulting in a much more scalable network. Because customer equipment is not supposed to understand the Q-in-Q frame format, the second Q-tag is added at the ingress to the provider network and removed by the egress switch in the provider’s network.
Figure 6.6 IEEE 802.1ad VLAN stacking
Preamble | SFD | DA | SA | P-Ethertype | P-TCI | C-Ethertype | C-TCI | T/L | FCS
P-Ethertype = Provider Ethertype; P-TCI = Provider Tag Control Information (contains P-VLAN ID); C-Ethertype = Customer Ethertype; C-TCI = Customer Tag Control Information (contains C-VLAN ID)
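A minimal sketch of building the stacked tag bytes follows. It assumes the standard 802.1ad provider Ethertype of 0x88A8, a value not stated in the text (some pre-standard equipment used other values such as 0x9100):

```python
import struct

# Sketch: the 8 bytes of stacked provider + customer 802.1Q tags that sit
# between the source address and the length/type field. Each tag is a
# 2-byte Ethertype followed by a 2-byte TCI (priority in the top 3 bits,
# VID in the low 12 bits).

def qinq_tags(p_vid: int, c_vid: int, p_prio: int = 0, c_prio: int = 0) -> bytes:
    """Return provider tag (Ethertype 0x88A8) + customer tag (0x8100)."""
    p_tci = (p_prio << 13) | p_vid
    c_tci = (c_prio << 13) | c_vid
    return struct.pack("!HHHH", 0x88A8, p_tci, 0x8100, c_tci)

tags = qinq_tags(p_vid=1001, c_vid=100)
print(tags.hex())     # 88a803e981000064
print(4096 * 4096)    # 16777216 combined identifiers, as stated above
```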
Interpretation of Stacked Tags

There are two ways of interpreting stacked Q-tags. In the first method only the VID of the outer tag, which is inserted by an ingress switch, is used by the core Ethernet switches to identify the C-VLAN across the domain. The second method combines the VID fields of both the customer and provider tags to support a much larger number of C-VLANs.
Tagged versus Raw Mode

Edge routers in the service provider’s network can operate in one of two modes: tagged or raw. In tagged mode, frames with different VLAN IDs can belong to different customers, or if they belong to the same customer they may require different treatment by the service provider. For example, some frames with specific VLAN tags could be forwarded via different paths or even mapped to different CoS classes for custom QoS treatment. In comparison, when an edge switch operates in raw mode VLAN tags are not used to define a service to the network. Instead, the tag is part of the customer VLAN structure and is transparently passed through the network without processing.
Virtual MAN Tag Encapsulation

A second type of VLAN stacking occurs through the use of a virtual Metropolitan Area Network (VMAN) tag instead of a provider Q-tag. The VMAN tag functions similarly to the outer Q-tag assigned by the service provider in Q-in-Q encapsulation. However, the provider obtains control over a 24-bit VID instead of having to combine the provider and customer VIDs. Thus, the use of VMAN tag encapsulation makes it possible to transport traffic from more than 4094 VLANs over the MAN. Figure 6.7 illustrates VLAN stacking with a non-Q-tag. The M-bit is set to 0 if the 3-byte domain identifier (DI) is derived by mapping the customer tag and port, while a value of 1 indicates that the DI is derived by mapping the port alone. Concerning the 24-bit domain identifier, the value represents the VMAN-ID assigned by the service provider to a C-VLAN and has significance only within a given service
Figure 6.7 VLAN stacking with non-Q-tag
Preamble | SFD | DA | SA | New Non-Q Ethernet Type | VLAN Tag | Original Ethernet Type | Data | FCS
The VLAN tag carries an M-bit, Version (6 bits), Reserved (5 bits), and Priority (3 bits) control fields, a T-bit, the 802.1Q tag (hex 8100), and a 3-byte Domain Identifier
provider domain. The T-bit is set to 0 for client data, while a value of 1 indicates control data is being passed. Both of the previously mentioned VLAN stacking methods enable subscribers to maintain their own C-VLAN structure. Although two stacked levels are the most common, some vendor equipment enables up to 8 stacked VLANs. In addition, while both methods address the C-VLAN scalability issue they do not directly address the MAC address table explosion issue. Instead, a technique referred to as scalable Ethernet bridging, also known as MAC-in-MAC (M-i-M), can be used.
M-i-M Tagging

In the M-i-M tagging method, as its name implies, the service provider’s domain transports the customer (C-VLAN) frames based on the provider edge node’s MAC address. Specifically, each ingress node inserts two additional MAC address fields (destination and source) that have local significance into customer frames, as illustrated in Figure 6.8. In examining Figure 6.8, note that although only two provider MAC fields are shown as being added to the customer frame, other fields such as a Q-tag may be included in the stacked M-i-M header. As data flows through a service provider network using M-i-M stacked addresses, each core switch only needs to learn the edge switch MAC addresses, which significantly reduces MAC address table entries as well as the search times needed to locate an entry.
Data Flow

To illustrate the flow of data using M-i-M, consider Figure 6.9. In this example the service provider has two edge nodes (PE1 and PE2) and five core switches (PCore
Figure 6.8 Provider M-i-M MAC encapsulated Ethernet frame (the provider fields Provider DA and Provider SA are prepended to the customer frame: DA | SA | vLAN Tag | Original Ethertype | Data | FCS)
Figure 6.9 Data flow using M-i-M tagging (customer switches with hosts M1–M4 and M5–M9 attach to provider edge switches PE1 and PE2 around a five-switch core; PE1’s MAC address table maps port 0 to M1…M4 and port 1 to M5…M9, while PE2’s table maps port 0 to M5…M9 and port 1 to M1…M4; a frame from M1 to M6 crosses the core carrying PE1 and PE2 as the prepended provider addresses)
1 through PCore 5). We will assume a customer has two locations within a metropolitan area that they wish to interconnect using a Carrier Ethernet service. Four computers with addresses M1 through M4 are at one location, and five computers with addresses M5 through M9 are located at a second. Figure 6.9 illustrates the flow of data from one customer location to the other location through the provider network. Note that PE1 and PE2 represent the outgoing port numbers of the provider edge switches.
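The edge-node behavior in Figure 6.9 can be sketched as a table lookup followed by prepending the provider addresses. The structure below is hypothetical and only illustrates the idea:

```python
# Sketch: an ingress provider edge (PE) switch looks up which remote PE
# "owns" the destination customer MAC and prepends that PE pair as outer
# addresses, so core switches only ever learn PE addresses.

# Assumed edge-node view of which customer MACs sit behind the remote PE.
REMOTE_PE_FOR_MAC = {m: "PE2" for m in ("M5", "M6", "M7", "M8", "M9")}

def mim_encapsulate(local_pe: str, frame: dict) -> dict:
    """Wrap a customer frame in provider (outer) MAC addresses."""
    remote_pe = REMOTE_PE_FOR_MAC[frame["dst"]]
    return {"outer_dst": remote_pe, "outer_src": local_pe, "inner": frame}

f = mim_encapsulate("PE1", {"src": "M1", "dst": "M6", "payload": "..."})
print(f["outer_dst"], f["outer_src"])   # PE2 PE1, matching Figure 6.9
```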
Stack Problems

For each of the three previously mentioned VLAN stacking methods, it is important to separate customer use of the Spanning Tree Protocol from the provider’s network. Otherwise, changes in the customer’s spanning tree could affect the provider’s spanning tree, with unintended results. Currently IEEE standards do not support tagged Bridge Protocol Data Units (BPDUs). Thus, it may become necessary for some customer BPDUs to be tagged and transported across the service provider’s network to enable customer VLAN sites to function properly. Another area where VLAN stacking will need modification concerns the definition of new Ethertype values. Such values would enable switches to distinguish between encapsulated provider frames and regular 802.1Q customer frames. This would enable provider switches to be configured automatically, alleviating the need for manual configuration and minimizing the potential problems that result from misconfiguration.
MPLS Layer 2 Encapsulation

In concluding our examination of encapsulation techniques we will focus our attention on the use of MPLS Layer 2 encapsulation, referred to as Martini encapsulation in honor of the editor of a number of Internet RFCs, including RFC 4448, titled “Encapsulation Methods for Transport of Ethernet over MPLS Networks.” When Layer 2 services are configured over MPLS, Layer 2 traffic is encapsulated in MPLS frames and then transported via MPLS tunnels through an MPLS network. The encapsulation occurs when traffic reaches the edge of the service provider’s network, while decapsulation occurs when traffic exits the network. The advantages of MPLS encapsulation include the use of a large base of existing network equipment. In addition, this Layer 2 VPN technique takes advantage of MPLS label stacking, under which more than one label can be used to forward traffic through an MPLS network. Under MPLS Layer 2 encapsulation two labels are used: one label represents a point-to-point virtual circuit, and the second represents the tunnel through the network. As traffic is encapsulated, the ingress Label Switch Router (LSR) assigns it a virtual circuit label. This label identifies the VPN, VLAN, or connection endpoint, while the egress LSR uses the virtual circuit label to determine how to process the frame. Between the ingress and egress routers the core routers use the tunnel label to determine the path that data flows through the network. Figure 6.10 illustrates MPLS Layer 2 encapsulation of a tagged Ethernet frame. Note that the two MPLS labels that are inserted into customer Ethernet frames are based on the destination MAC address, port, and 802.1Q information. As previously mentioned, the tunnel label provides information required for transporting frames through the provider network. LSRs in the network only use information
Figure 6.10 MPLS Layer 2 encapsulation (two encapsulated fields, a tunnel label and a VC label, are prepended to the customer frame: DA | SA | vLAN Tag | Original Ethertype | Data | FCS)
in the tunnel label to switch labeled frames across the MPLS domain. At the hop prior to the egress Label Edge Router (LER) the tunnel label is removed, leaving the VC (virtual circuit) label, which is used by the egress LER to determine how to process the frame and where to deliver it on the destination network by outputting the frame on an outgoing port. Due to MPLS tunneling the VC label is not visible until the frame reaches the egress LER. Thus, two labels (VC and tunnel) are necessary under MPLS encapsulation. The MPLS encapsulation method can be compared with the previously described Ethernet extensions as follows: the VC label can be considered to correspond to the Q/VMAN tag, while the tunnel label corresponds to the M-i-M extensions (SA/DA). Under MPLS, the Label Distribution Protocol (LDP) and Border Gateway Protocol (BGP) would be used to distribute the labels, while the LSRs would establish the required Label-Switched Paths (LSPs). The MPLS labels in effect perform the same function as the stacked Q/VMAN tags, enabling more than 4094 customers to be supported by the Carrier Ethernet network operator. Through the use of Martini encapsulation Ethernet frames can be transported as a virtual private LAN service (VPLS). VPLS supports the connection of multiple locations while emulating a single bridged domain over a managed IP/MPLS network. Thus, all the sites in a VPLS will appear to be on the same LAN regardless of their actual location. Due to the wide availability of MPLS in carrier networks, this can be a very effective technology for transporting Ethernet. Because MPLS operates over a wide variety of transport facilities, this also means that Ethernet with Martini encapsulation can be transported over such facilities as T1, T3, SONET, and, in fact, just about any physical network. The methods mentioned in this chapter provide a mechanism to transport Ethernet over communications carrier facilities.
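The two-label stack just described can be sketched using the standard 4-byte MPLS label entry format (20-bit label, 3-bit EXP, bottom-of-stack S bit, 8-bit TTL); the helper names below are assumptions:

```python
import struct

# Sketch: encode the Martini two-label stack. The tunnel label is the
# outer entry; the VC label sits at the bottom of the stack, so only its
# bottom-of-stack (S) bit is set.

def mpls_entry(label: int, exp: int = 0, s: int = 0, ttl: int = 64) -> bytes:
    """Encode one 4-byte MPLS label stack entry."""
    assert 0 <= label <= 0xFFFFF            # labels are 20 bits wide
    word = (label << 12) | (exp << 9) | (s << 8) | ttl
    return struct.pack("!I", word)

def martini_stack(tunnel_label: int, vc_label: int) -> bytes:
    """Tunnel label (outer, S=0) followed by VC label (inner, S=1)."""
    return mpls_entry(tunnel_label, s=0) + mpls_entry(vc_label, s=1)

stack = martini_stack(tunnel_label=0x12345, vc_label=0x00042)
print(stack.hex())   # 1234504000042140
```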
However, in doing so both customers and service providers must consider the hidden “cell tax” in which the insertion of various tags used to encapsulate Ethernet reduces overall efficiency. Thus, it is
important for both customers and network operators to consider the effect of encapsulation and additional labels upon efficiency, especially if Ethernet is being used to transport VoIP, which in turn is transported as Ethernet over MPLS. When this occurs the headers and labels can result in actual data only representing a very small portion of the bandwidth. This obviously will affect the cost of the carrier service.
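The severity of this encapsulation overhead can be estimated with a back-of-the-envelope calculation. The sketch below uses assumed, typical header sizes; the 20-byte voice payload (typical of G.729 at 20 ms) and the two-label MPLS stack are illustrative values, not figures from the text:

```python
# Rough encapsulation-overhead estimate for VoIP carried as Ethernet
# over MPLS. All sizes are illustrative assumptions.

HEADERS = {
    "RTP": 12,            # real-time transport header
    "UDP": 8,
    "IPv4": 20,           # no options
    "Ethernet": 18,       # 14-byte header + 4-byte FCS
    "MPLS labels": 8,     # tunnel label + VC label, 4 bytes each
}

def efficiency(payload: int, headers: dict) -> float:
    """Fraction of transmitted bytes that is actual voice data."""
    total = payload + sum(headers.values())
    return payload / total

if __name__ == "__main__":
    payload = 20  # bytes of voice per packet (assumed)
    eff = efficiency(payload, HEADERS)
    print(f"payload {payload} B, overhead {sum(HEADERS.values())} B, "
          f"efficiency {eff:.1%}")
```

Under these assumptions only about 23 percent of each transmitted frame is voice data, which illustrates why the "cell tax" deserves attention when pricing a carrier service.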
Chapter 7
Service Level Agreements and Quality of Service

In this concluding chapter we will focus our attention upon two of the most difficult to implement but certainly very important aspects associated with using a Carrier Ethernet service: the Service Level Agreement (SLA) and Quality of Service (QoS). The SLA represents a contract between the service provider and the end user which defines a variety of network parameters. In comparison, QoS represents a series of resource reservation and control mechanisms that can provide different levels of performance to different users or different types of user data. As you might expect, some type of QoS is typically incorporated into an SLA.

In this chapter we will first define in some detail what an SLA means in an Ethernet environment and why it can be difficult to implement. As we discuss the SLA in an Ethernet environment we will become aware of the fact that different service providers offering a Carrier Ethernet service may not deploy the same devices, and hence may not enable similar measurements. In addition, when data flows from one service provider's network through another network to its destination it may not be possible to obtain a meaningful SLA. In concluding this chapter we will turn our attention to QoS, which governs the ability of real-time applications such as VoIP, streaming video, and teleconferences to operate correctly. As we discuss QoS we will note that Ethernet's Class of Service (CoS) must be used as a basis for obtaining a QoS.
The Service Level Agreement

In this section we will obtain an appreciation for the meaning of the term "SLA," including the parameters that are typically defined within a contract for service. An Ethernet SLA represents an agreement or contract between the service provider and the end user or customer. The SLA is provided on an end-to-end basis and sets out a series of parameters and values that the service provider must meet or be penalized for failing to meet.
Metrics

Table 7.1 lists some of the metrics that can be included in a Service Level Agreement and which we will elaborate upon in the next series of sections in this chapter.
Availability

Availability can be defined at both a component and system level, with the latter permitting an end-to-end computation. In this section we will first define the term and then examine its applicability to a Carrier Ethernet transmission facility.
Table 7.1 Service Level Agreement Metrics
Availability
Latency
  Intra-metro
  Inter-metro
  Route specific
Jitter
Mean time to repair (MTTR)
Installation time
Bandwidth provisioning
Packet loss
Guaranteed bandwidth

Component Availability

The availability of an individual component can be expressed in two ways that are directly related to one another. First, as a percentage, availability can be defined as
the operational time of a device divided by the total time, with the result multiplied by 100. This is indicated by the following equation:
A% = (operational time / total time) * 100
where A% is availability expressed as a percentage. As an example of availability consider a Carrier Ethernet service that operates continuously, 24 hours per day, 7 days per week. Over a 1-year period the network should be available for use for 365 days × 24 hours per day or 8760 hours if we assume a non-leap year. Now suppose the network was not available four times during the year, with the average downtime being two hours. Thus, the network is then operational 8760 hours less 8 hours or 8752 hours. Using our availability formula we obtain:
A% = (8752 / 8760) * 100 = 99.9%
MTBF and MTTR

Because some service providers express availability using Mean Time Before Failure (MTBF) and Mean Time to Repair (MTTR), we will briefly turn our attention to these metrics and how they provide a computation that determines availability. MTBF represents the average operational time of a device or facility prior to its failure. Thus, MTBF can be considered as being equivalent to the operational time of a device or facility. Once a failure occurs it must be fixed or repaired. The interval from the time the device or facility fails until it is repaired is known as the time to repair, and the average of the repair times is known as the Mean Time to Repair (MTTR). Because the total time is the sum of MTBF + MTTR, we can rewrite our availability formula as follows:
A% = MTBF / (MTBF + MTTR) * 100
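Both forms of the availability formula can be checked with a short script. This sketch reuses the chapter's example of four outages averaging two hours each in a non-leap year; no values beyond those in the text are assumed:

```python
# Availability computed two equivalent ways, using the chapter's example:
# 8,760 hours in a non-leap year, four outages averaging 2 hours each.

def availability_pct(operational: float, total: float) -> float:
    """A% = operational time / total time * 100."""
    return operational / total * 100

def availability_from_mtbf(mtbf: float, mttr: float) -> float:
    """A% = MTBF / (MTBF + MTTR) * 100."""
    return mtbf / (mtbf + mttr) * 100

total_hours = 365 * 24          # 8760
downtime = 4 * 2                # four failures, 2-hour average repair
a1 = availability_pct(total_hours - downtime, total_hours)

mtbf = (total_hours - downtime) / 4   # mean operational time per failure
mttr = 2.0
a2 = availability_from_mtbf(mtbf, mttr)

print(f"{a1:.2f}%  {a2:.2f}%")   # both forms give approximately 99.91%
```

Because total time is simply MTBF + MTTR per failure cycle, the two computations agree exactly.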
It is important to remember the “M” in MTBF and MTTR, as you must use the average or mean time before failure and average or mean time to repair. Otherwise, your calculations are subject to error. For example, if the use of a Carrier Ethernet network became unavailable halfway through the year, you might be tempted to assign 4380 hours to the MTBF. Then you would compute availability as follows:
A% = 4380 / (4380 + 8) * 100 = 99.82%
The problem with this computation is the fact that only one failure occurred, which results in the MTBF not actually representing a mean. Although the
computed MTBF is correct for a specific device or facility, the MTBF would be different for a second or third device or facility that when taken together provide an end-to-end transmission facility. Thus, if you are attempting to obtain an availability level for a network consisting of lines, switches, and routers you need to compute an average or mean level of availability through the use of an average MTBF. Then, the next logical question is how to obtain average MTBF information. Fortunately, most vendors and network operators provide the MTBF information for products they manufacture and services they offer instead of waiting for a significant period of time to obtain appropriate information.
Considering Published Statistics

Although many published MTBF statistics can be used as is, it is important to note that certain statistics represent extrapolations that deserve a degree of elaboration. For example, when a vendor introduces a new switch or router and quotes an MTBF of 50,000 or 100,000 hours, they obviously have not operated that device for that length of time. Instead, they either extrapolated MTBF statistics based upon improvements made to a previously manufactured product, or based their statistics on the MTBF values of individual components such as line cards, power supplies, and logic boards. If you are reading the brochure of a vendor or service provider and notice an asterisk next to an MTBF figure whose footnote indicates extrapolation, you might consider requesting additional information. After all, if the MTBF of some device is indicated as 100,000 hours, or almost 12 years, why is the warranty period typically a year or less? In such situations you may want to consider using the warranty period as the MTBF value instead of an extended MTBF value. Concerning the MTTR, this interval may also be included in vendor literature and may also require a degree of modification to be realistic.
Considering Travel Time

The MTTR figure is based upon the time required to repair a device or facility once a person is on site. Thus, you need to consider the location where equipment resides on an end-to-end transmission path and the travel time to potential failure locations. For example, assume your Ethernet switch will provide a connection to a Carrier Ethernet service provider via an EFM (Ethernet in the First Mile) optical transmission facility. If the specification sheet for the Ethernet switch lists an MTBF of 16,500 hours and an MTTR of two hours, the latter may not be accurate unless your organization has on-site maintenance support. Otherwise, you need to add travel time to the MTTR to obtain a more realistic value. For example, assume the Ethernet switch is located in a suburb of Atlanta and it takes a maintenance person two hours to travel to your organization's location. Then, a more realistic
MTTR would result from adding the expected travel time to the vendor’s MTTR metric provided in a product specification sheet. Now that we have an appreciation for MTBF and MTTR, we will turn our attention to how system availability is computed.
System Availability

In communications a system is considered to represent a collection of devices and line facilities which form a given topology. In a Carrier Ethernet environment system availability represents the end-to-end availability from source to destination. The Carrier Ethernet service provider will install demarcation points at each customer location. The demarcation point, which we will discuss in more detail later in this chapter, represents the location from which the communications carrier takes responsibility. Thus, an availability metric provided by the communications carrier represents an availability level between two demarcation points. Because both equipment and different types of line facilities occur on the end-to-end path, the level of availability more formally represents system availability. Because end-to-end data flows over devices and lines connected in series, we will next examine how the availability level of this type of topology is computed.
Components Connected in Series

To illustrate the computation of system availability we need a network to analyze. Thus, the top portion of Figure 7.1 shows the path of transmission from an end user's Ethernet switch through a service provider's Carrier Ethernet network to a second location within a metropolitan area. In this example we will assume that
Figure 7.1 Network components in series
the service provider DEMARC (demarcation) line terminates just in front of each end-user switch. The DEMARC or demarcation represents the boundary between the service provider or carrier's network and the customer's or end user's network. The purpose of the DEMARC is to define the endpoints of the carrier's responsibility as well as to enable the carrier to test and monitor its network up to the DEMARC located at the customer's premises. The latter is extremely important as it enables the service provider to determine where problems exist and to dispatch technicians to alleviate such problems.

An Ethernet Demarcation Device (EDD) represents a relatively new network component now manufactured by a few vendors for service providers. The EDD provides Operation, Administration, and Maintenance (OAM), loopback capabilities, a variety of statistics, and enables the measurement and tracking of such end-to-end SLA parameters as latency, jitter, and packet loss.

In examining Figure 7.1 note that L1 through L4 represent different types of transmission facilities and S1 through S3 represent three service provider switches. The network illustrated in Figure 7.1 represents a series of devices and facilities that could be drawn with respect to their availability levels as shown below:
A1 → A2 → … → An
When n components, including lines, are connected in series, the availability of those components as a system is computed by multiplying together the availabilities of each of the n individual components. Mathematically, this is expressed as follows for n components:

A = A1 * A2 * … * An
To illustrate the use of the prior formula, we will assume that each of the three switches in the carrier network has an availability level of 99.9 percent and each of the four links has an availability level of 99.98 percent. Then, the end-to-end availability level becomes:

(0.999)^3 * (0.9998)^4 = 0.9962

Although a 99.62 percent level of availability may appear quite high, it is important to remember that during a year with 8,760 hours or 525,600 minutes this means that you can expect 525,600 * (1 − 0.9962), or approximately 1,994 minutes of downtime, roughly 33 hours of outage. Whether this is good or bad depends upon when the outage occurs (day or night), its duration, and what your organization was attempting to transmit right before the outage occurred.
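The series-availability computation can be sketched in a few lines. The component counts below follow Figure 7.1's path between demarcation points (three carrier switches and four links); the 99.9 and 99.98 percent figures are the assumed values used in the example:

```python
from math import prod

def system_availability(components):
    """Availability of components in series: the product of the parts."""
    return prod(components)

# Figure 7.1 path between demarcation points: three carrier switches at
# 99.9% availability and four links at 99.98% (assumed example values).
avail = system_availability([0.999] * 3 + [0.9998] * 4)
minutes_down = 525_600 * (1 - avail)
print(f"availability {avail:.4f}, expected downtime {minutes_down:.0f} min/yr")
```

Adding a component to the list immediately shows how each additional device or line in series erodes the end-to-end figure.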
Based upon the preceding, it is important to note that a 99+ percent level of availability can still result in a long-duration outage once a year or a series of shorter-duration outages. Thus, end users must carefully examine the availability portion of an SLA to determine when the level of availability is guaranteed and what recourse they have, if any, if the service provider fails to meet the level of availability defined in the SLA.
Latency

Latency represents the delay experienced by data flowing through a network. That delay results from routers examining headers to transfer frames from one path to another as well as from frames, in the form of binary signals, flowing across the line segments connecting routers and switches. In the wonderful world of Carrier Ethernet all line segments, possibly other than the access line, are optical, minimizing latency because data flows at nearly the speed of light.
Application Considerations

It is important to compare a service provider's latency against your organization's application requirements and data flow. Concerning the former, real-time voice and video require low latency, whereas file transfers and Web page browsing can tolerate a higher degree of latency.
Types of Latency

There are several types of latency a service provider may quote, including intra-metro, inter-metro, and route-specific latency. Normally, latency in an SLA is defined as a monthly average.
Jitter

Jitter represents the unwanted variation in timing between successive signals. In a communications environment jitter usually occurs because routers and switches experience variable loads when processing a sequence of frames transmitted through the device to a common destination. Because a router or switch has many ports, the activity on the device at a particular point in time governs its ability to process the next frame in a sequence of frames flowing through the device, resulting in random delays.
Jitter Buffers

For non-real-time applications jitter really does not matter. However, when real-time voice or video is transferred, too much jitter can result in distortions to recreated
voice and video. Thus, most VoIP and teleconferencing systems include jitter buffers where the variations in frames reaching their destination can be compensated for by storing frames and then removing them via a predefined timing scheme. The storage area is referred to as a jitter buffer, which counters jitter by enabling a continuous playout of audio or video. Most if not all readers of this book have indirectly used a jitter buffer although they may not be aware of the fact. Each time you use Microsoft’s Windows Media Player, Apple’s QuickTime Player, or a similar program to view a video you will probably notice the “Buffering…” message displayed in the lower left or right corner of the program. What this message tells you is that to display a smooth video the program is placing data in its jitter buffer so that it can extract such data with precise timing. This action enables video to be displayed and audio to be sent to the speakers without the intermittent delays that occur between frames as they traverse a network. The maximum amount of network jitter should always be less than the capacity of the jitter buffer. For example, if the jitter buffer is set up to store 20 ms of audio, then the network jitter should be less than 20 ms.
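The playout mechanism described above can be sketched in a few lines. This is an illustrative model, not any product's implementation: the function name, the 20 ms frame period, and the 60 ms buffer depth are all invented for the example:

```python
# Minimal jitter-buffer sketch: frames arrive with variable network
# delay; playout is deferred by a fixed buffer depth so output timing
# stays smooth as long as jitter stays below that depth.

def playout_times(arrivals, period=20, depth=60):
    """arrivals: ms timestamps of frames originally sent every `period` ms.
    Returns the ms at which each frame is played, or None if it arrived
    too late for its slot (jitter exceeded the buffer depth)."""
    out = []
    for i, t in enumerate(arrivals):
        slot = arrivals[0] + depth + i * period   # fixed playout schedule
        out.append(slot if t <= slot else None)   # late frame is dropped
    return out

# Frames sent at 0, 20, 40, 60 ms arrive with uneven delay; the last
# one arrives so late that it misses its 125 ms playout slot:
print(playout_times([5, 40, 45, 140]))
```

Note how the playout schedule is perfectly periodic even though the arrivals are not; that regularity is exactly what the "Buffering…" delay buys.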
MTTR

Previously we noted that availability can be computed in terms of MTBF and MTTR. Some service providers in addition to specifying network availability also specify an MTTR. When the MTTR is specified in an SLA it is usually computed by summing the total network downtime during the month and dividing that number by the total number of service interruptions that occurred during the month. The result, expressed in minutes, becomes the specified MTTR. Obviously, the lower the MTTR, the more responsive the service provider to customer problems. However, customer locations can also affect the MTTR. For example, Los Angeles is a large metropolitan area with numerous businesses that have multiple locations within the city limits, and the city is significantly spread out from a downtown area to a literal maze of other areas where business office parks are located. Thus, the MTTR quoted for a company with locations spread out from the city center to the suburbs can be expected to exceed the MTTR for a company with offices concentrated in one general location.
Installation Time

One important metric that many end users forget to discuss is the guarantee provided by the Carrier Ethernet service provider for installing the service. Many Carrier Ethernet service providers will include an on-time installation metric in their SLA. That metric is typically specified in terms of days after a contract is signed.
Bandwidth Provisioning

Although we like to think that we have a reasonable prediction of the bandwidth our organization needs, many times new or improved applications come along that relegate our predictions to the waste basket. When this situation occurs we need to inform our Carrier Ethernet provider that our organization requires additional bandwidth. Because the connection to the service provider is usually through an EFM connection operating on a 1GbE optical line, it is relatively easy for the service provider to accept more information from the customer per unit of time. What is a bit more difficult is for the service provider to provision the internal network to receive and route additional customer data through the existing network infrastructure. When incorporated into an SLA the service operator normally specifies a period of time in hours for provisioning additional bandwidth.
Packet Loss

Many Carrier Ethernet service providers currently do not include a packet loss metric in their SLA. The reason for this is the fact that unless a true QoS exists, the service provider is hesitant to guarantee a packet loss level that can be adversely affected by users beyond their control. The exception to this occurs when the service provider has sufficient bandwidth to accommodate traffic peaks, allowing a packet loss value to be included in an SLA.
Guaranteed Bandwidth

Some Ethernet service providers offer customers a guaranteed bandwidth. They can accomplish this without a QoS capability because the service provider has provisioned sufficient bandwidth to enable the customer to burst traffic up to a certain bit rate without packet loss occurring.
SLA Problems

Currently Carrier Ethernet service providers are basically using a series of old metrics to provide parameters in their contracts. Unlike SONET/SDH, T-carrier circuits, and other transport facilities that have built-in protocols providing the ability to monitor the performance of a facility, tried and true Ethernet lacks this capability. In addition, although many network facilities support non-intrusive testing as well as an OAM capability, until recently Ethernet lacked both of these features. Thus, new or modified Ethernet protocols and frames were required to obtain data that service providers can use to test, administer, and maintain their expanding Ethernet infrastructure. Fortunately, two types of OAM were recently
being developed to provide this capability for Ethernet, one by the ITU and the IEEE and the second by the EFM Task Force. The former can be considered to represent a fully featured OAM mechanism that can provide an end-to-end OAM capability and which will provide performance measurements; the latter is limited to the monitoring of continuity on a single link providing a limited number of statistics and as such is designed for supporting access applications. To provide readers with an insight into the need for OAM and how it is being implemented within Ethernet we will probe a bit deeper into this topic.
OAM Overview

Operation, administration, and maintenance is used to describe the monitoring of a network by service providers. Through the use of OAM a service provider can detect network problems, activate such network fault prevention measures as the rerouting of data, and in general respond to various alarms. Because OAM can provide service providers with detailed metrics of network performance as well as the ability to respond to faults, it allows them to offer SLAs that go beyond basic measurements.
OAM and Ethernet

Ethernet was originally developed as a local area network with clusters of co-located stations sharing access to a common medium. As Ethernet evolved little thought was given to adding an OAM capability as testing was limited to a local environment and the service provider was in fact the company that operated the network. With the introduction of Carrier Ethernet networks the need for an OAM capability changed. The service provider now needed a method to detect and overcome faults as well as to measure network performance so SLAs could be guaranteed.
Ethernet OAMs

Previously we noted that there are two Ethernet OAMs, one developed for EFM applications and the other providing an end-to-end capability. The EFM OAM was developed by the IEEE 802.3ah Task Force in the 802.3 Working Group. Thus, it is often referred to as the 802.3ah or EFM OAM. This is a link-layer OAM, which was recently incorporated into the main 802.3 Ethernet specification as Clause 57. At the time this book was written, work on the service layer OAM for Carrier Ethernet was being performed by the IEEE under the 802.1ag specification and the ITU-T Y.1731 draft. In addition, the Metro Ethernet Forum (MEF) is also working on an Ethernet Service OAM, with all three organizations cooperating with each other. Because these three organizations are looking at Ethernet OAM from
a service level while EFM OAM is focused at the link level, the resulting protocols should be complementary and eventually work simultaneously.
Functions

The primary function of an OAM protocol is to detect network faults. When a fault reaches a certain time threshold an alarm should be generated to make the network operator aware of the situation. To detect some types of network outages, special messages are periodically transmitted that are either looped back at a demarcation or responded to by equipment. Such messages are known as continuity checks (CC) as they test to ensure a path is available. A response not received within a certain period of time indicates a loss of service, and an alarm is usually generated.
Testing

Testing can be divided into two general categories known as in-service (non-intrusive) and out-of-service (intrusive). In an Ethernet OAM environment frames transmitted that do not disrupt the normal operation fall into the first category. In comparison, a frame that, for instance, caused a demarcation point to be placed into a loopback for testing would result in intrusive testing.
Link-Layer OAM Ethernet’s link-layer OAM was developed for reaching customer locations from a service provider’s central office. Functions performed by this link-layer OAM include placing remote devices into and out of a loopback, querying the configuration of the remote device, and setting flags to indicate critical events. Because this OAM is limited to a single link it cannot provide data for an end-to-end service guarantee. Thus, the information provided about the state of the link is minimal.
Messages

Link-layer OAM messages are transmitted as slow protocol frames referred to as OAM Protocol Data Units (OAMPDUs). The reason for the term "slow protocol" is the fact that no more than 10 OAMPDU frames per second can be transmitted, limiting the bandwidth that OAM traffic can consume on the link. Figure 7.2 illustrates the EFM OAM frame format. Note that all slow protocols use an Ether Type value of hex 88-09 and that the link-layer OAM is defined by a value of hex 03, which appears in the first byte of the MAC client payload. The destination address (DA) is a specific multicast address that is link constrained because
EFM OAMPDUs only traverse a single link. Thus, they are never forwarded by bridges or switches.

Figure 7.2 EFM OAM frame format: Preamble | Destination Address | Source Address | Type (hex 88-09) | Subtype (hex 03) | Flags (2 bytes) | Code (1 byte) | Data (42-1496 bytes) | FCS (4 bytes)

Codes

Presently five codes are defined by the link-layer OAM. Information is encoded by setting the Code field value followed by encoding information. Table 7.2 lists the five codes currently defined for EFM OAM.

Table 7.2 EFM OAM Codes
Information
Event notification
Variable request and response
Loopback control
Organization-specific
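The slow-protocol pacing rule (no more than 10 OAMPDUs per second) can be sketched as a simple sliding-window check. The class and method names here are invented for illustration; this is not an implementation from the standard:

```python
# Sketch of the "slow protocol" pacing constraint: permit at most
# 10 OAMPDUs in any one-second window (illustrative logic only).

import collections

class OampduPacer:
    MAX_PER_SECOND = 10

    def __init__(self):
        self.sent = collections.deque()   # send timestamps, in seconds

    def may_send(self, now: float) -> bool:
        while self.sent and now - self.sent[0] >= 1.0:
            self.sent.popleft()           # drop timestamps older than 1 s
        if len(self.sent) < self.MAX_PER_SECOND:
            self.sent.append(now)
            return True
        return False

pacer = OampduPacer()
results = [pacer.may_send(0.05 * i) for i in range(12)]  # 12 frames in 0.6 s
print(results.count(True))   # only 10 are allowed through
```

Once the window slides past the earlier timestamps, sending resumes, which is why the cap bounds overhead without ever blocking OAM traffic for long.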
Information

Information OAMPDUs can be used to discover remote devices (auto-discovery) and exchange information about their capabilities, provide fault notifications, and serve as a heartbeat generator. Concerning the latter, OAMPDUs must be transmitted at least once per second if there are no other pending messages.
Event Notification

As the name of the code implies, event notification frames report various link statistics. Statistics can be reported for a specific period of time or as a cumulative total since the counter was last reset.
Variable Request and Response

Variable request frames are used by the service provider to obtain the configuration of customer equipment. To do so, the variable request will request specific SNMP MIB variables. The customer response to such requests occurs in variable response frames. Because Ethernet frame delivery is not guaranteed, even OAMPDUs may be transmitted several times to enhance the probability of reception.
Loopback Control

A fourth type of OAM frame is used to enable or disable intrusive loopback in a remote device. When loopback is enabled, statistics from local and remote clients can be queried.
Organization Specific

The last type of OAM frame allows an organization to tailor a new OAMPDU to satisfy a specific requirement.
Flags

If we return our attention to Figure 7.2 we can note that two bytes or 16 bits can be toggled. Referred to as flags, each OAMPDU has one or more bits set to denote critical events as well as to signal whether a remote device is ready to receive OAM messages.
Service OAM

The IEEE 802.1ag specification provides an end-to-end service, enabling up to eight OAM levels as well as allowing customers and service providers to run independent OAMs. By default, users are allocated three levels of OAM, service providers two levels, and operators three levels. OAM frames that are part of a higher level will be forwarded transparently by lower level devices.

One of the key problems in developing a service OAM for Ethernet is the fact that the protocol is connectionless. Another problem is the fact that previously developed OAM protocols were primarily designed to support point-to-point connections. In comparison, Ethernet represents a multipoint-to-multipoint protocol. Solving these problems is no trivial matter, which probably explains why the IEEE 802.1ag specification is a work in progress. Although this author believes that Ethernet service OAMs will eventually be fairly common, it is important to remember also that with sufficient thrust pigs will fly. What this author is implying is that the effort, coding, and devices required to establish a service OAM capability for Ethernet may have a level of complexity and cost that negates its full usefulness.
Quality of Service Overview

In this section we will turn our attention to one of the most important metrics normally missing from a formal Service Level Agreement: QoS. Because QoS is not currently incorporated into most Carrier Ethernet SLAs, it is important to understand the rationale behind this omission and how some service providers compensate for it.
As we noted earlier in this book, Ethernet provides a class of service (CoS) through 3 bits within the IEEE 802.1Q VLAN header. The 3 CoS bits enable 2^3, or 8, classes of service to be defined. In an IP environment where the Internet Protocol is transported over Ethernet a packet traverses both Layer 2 and Layer 3, so it is relatively easy to maintain QoS. This is because the IPv4 header's Type of Service (ToS) field uses 3 bits to provide up to eight classes of service. Thus, IP's ToS can be mapped to Ethernet's CoS, and vice versa. However, for QoS to be set on an end-to-end basis requires configuring each device, such as routers and switches, to classify, police, and operate their queues in a similar manner. Thus, we need to turn our attention to the manner by which different devices can be configured to provide a QoS capability. In doing so, we will first note the differences between soft and hard QoS, as the former does not provide a true bandwidth guarantee. Today, Carrier Ethernet service providers offer "soft QoS" and primarily use the large bandwidth available in their Carrier Ethernet infrastructure to provide customers with the bandwidth they require.
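The bit-level alignment between the two 3-bit fields can be shown directly. This sketch extracts the 802.1Q priority bits and the IPv4 precedence bits and maps one onto the other; the function names and the example field values are chosen for illustration:

```python
# How the 3 Ethernet CoS bits line up with the 3 IPv4 precedence bits.

def vlan_pcp(tci: int) -> int:
    """Priority Code Point: the top 3 bits of the 16-bit 802.1Q TCI field."""
    return (tci >> 13) & 0x7

def ip_precedence(tos: int) -> int:
    """IP precedence: the top 3 bits of the IPv4 ToS byte."""
    return (tos >> 5) & 0x7

# Map an IP precedence straight onto an Ethernet CoS value (a 1:1
# mapping is possible because both fields are 3 bits wide, i.e. both
# can express 2**3 = 8 classes):
tos = 0xB8            # example ToS byte, binary 101 110 00
cos = ip_precedence(tos)
print(cos)            # 5
```

Because the mapping is value-for-value, a switch or router only has to copy three bits when a packet crosses the Layer 2/Layer 3 boundary.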
Soft versus Hard QoS

In some literature, readers will encounter the terms "soft" and "hard" QoS. As we will note shortly, the former term is more of a marketing mechanism as it does not guarantee that bandwidth will be reserved through a network for a specific application or series of applications.
Soft QoS

Soft QoS refers to a QoS model where traffic is provided with precedence over other traffic within a network. To achieve a soft QoS capability the network operator uses multiple CoSs and assigns specific traffic to a specific CoS. Thus, a soft QoS actually does not represent a true QoS as it does not reserve bandwidth through a network. Instead, a soft QoS represents a priority scheme which, when bandwidth is sufficient for all customers, provides a similar traffic flow to a true QoS capability. The use of a soft QoS enables a service provider to differentiate between different types of network traffic. This in turn enables the service provider to offer multiple types of services, such as Platinum, Gold, Silver, Copper, and Bronze, where Platinum costs more than Gold, Gold costs more than Silver, and so on. Similarly, Platinum provides a higher priority through the network and Bronze has the lowest priority.

Although soft QoS enables traffic differentiation it does not allow the carrier to guarantee specific bandwidth or provide a packet loss value within an SLA. Today, most Carrier Ethernet providers get around this limitation by having a significant amount of bandwidth available in their infrastructure that vastly exceeds cumulative customer demands. In effect, the existing Carrier Ethernet networks can be
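The priority-scheme idea behind soft QoS can be illustrated with a strict-priority queue: the scheduler always drains the highest class first, so lower classes only receive whatever capacity the higher classes leave unused. The class names and frame labels below are invented for the example:

```python
# Strict-priority scheduling sketch for a soft-QoS priority scheme.

import heapq, itertools

CLASSES = {"Platinum": 0, "Gold": 1, "Silver": 2, "Bronze": 3}
_seq = itertools.count()     # FIFO tie-break within a class
queue = []

def enqueue(frame: str, cls: str):
    heapq.heappush(queue, (CLASSES[cls], next(_seq), frame))

def dequeue() -> str:
    return heapq.heappop(queue)[2]

enqueue("bulk-1", "Bronze")
enqueue("voice-1", "Platinum")
enqueue("web-1", "Silver")
enqueue("voice-2", "Platinum")

print([dequeue() for _ in range(4)])
# -> ['voice-1', 'voice-2', 'web-1', 'bulk-1']
```

Note that nothing here reserves bandwidth: if Platinum traffic arrives fast enough, Bronze frames simply wait, which is precisely why soft QoS cannot back a hard SLA guarantee.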
Service Level Agreements and Quality of Service n 189
considered over-engineered, so that network traffic does not reach a point where data must be discarded. Unfortunately, as the popularity of Carrier Ethernet increases, so will both the number of customers and the quantity of data they transmit. As use of Carrier Ethernet networks increases, it will become harder for service providers to over-engineer their networks to prevent traffic from being discarded. When that level of utilization is reached, many network operators will more than likely attempt to migrate to a hard QoS method.
Hard QoS

Hard QoS requires paths to be pre-provisioned through a network, with resources such as switches and routers allocated to guarantee bandwidth for different applications. To accomplish this, hard QoS requires a connection-oriented approach with the ability to reserve bandwidth prior to an application's commencing. Technologies that provide such a capability include Asynchronous Transfer Mode (ATM), Frame Relay, the Resource Reservation Protocol (RSVP), X.25, and some ADSL modems. MPLS provides eight traffic classes and can be considered to represent QoS in that end-to-end delay and packet loss can be defined. In comparison, the IEEE 802.1P priority field, which is carried in the 802.1Q VLAN tag, provides a CoS that neither allocates bandwidth nor defines packet loss or delay. Because Ethernet is a connectionless technology, in its current state it cannot provide a hard QoS capability. However, the MEF has ratified a series of technical specifications, and has begun working on new ones, that may eventually enable service providers to offer hard QoS services to their customers. In fact, during 2004 the MEF ratified MEF5 (Traffic Management Specification, Phase 1), which defines traffic management specifications that allow Carrier Ethernet to deliver hard, SLA-based broadband services.
MEF QoS

The Metro Ethernet Forum approaches QoS from a service definition point of view instead of from a protocol or implementation method. In doing so, the MEF defined the requirements for different Ethernet services and the manner in which they should be measured. Under MEF10 (MEF Ethernet Service Attributes, Phase 1), which merged MEF1 (Service Model) and MEF5, the MEF now defines a traffic profile. Table 7.3 lists the attributes that can be used to define a traffic profile and their meanings.
Bandwidth Allocation

An examination of the entries in Table 7.3 shows that it is possible to provide users with inexpensive non-guaranteed bandwidth while guaranteeing a portion of bandwidth.

Table 7.3 MEF Traffic Profile Attributes

Committed Information Rate (CIR): Defines the average rate, in bps, of ingress service frames up to which the network delivers service frames and meets the performance objectives defined by the CoS service attribute.

Committed Burst Size (CBS): Limits the maximum number of bytes available for a burst of ingress service frames, sent at the UNI speed, to remain CIR-conformant.

Excess Information Rate (EIR): Defines the average rate, in bps, of ingress service frames up to which the network may deliver service frames without any performance objectives.

Excess Burst Size (EBS): Limits the maximum number of bytes available for a burst of ingress service frames, transmitted at the UNI speed, to remain EIR-conformant.

Frame Delay: The delay experienced by frames transmitted over the network from ingress to egress.

Frame Delay Variation: The variation in the offset of frames, by time, from when they should appear; represents frame jitter.

Frame Loss Ratio: The ratio of the number of frames lost to the number of frames transmitted.

The non-guaranteed bandwidth, in the form of the Excess Information Rate (EIR), represents best-effort delivery. In comparison, the Committed Information Rate (CIR) represents guaranteed delivery of bandwidth. Service providers can assign traffic profiles for Ethernet users at the User Network Interface (UNI), per Ethernet Virtual Circuit (EVC), or per combined EVC and IEEE 802.1P CoS. Figure 7.3 illustrates three methods by which a service provider can assign bandwidth at the ingress to its network. In each of the three examples the EVCs are assumed to be established. In the first example, shown in the left portion of Figure 7.3, three EVCs share a bandwidth profile established for the UNI. In the middle portion of Figure 7.3 a bandwidth profile is established for each EVC, and the right portion of the figure shows how bandwidth profiles can be established using the CoS values on an individual EVC. Readers familiar with Frame Relay and ATM have more than likely noted the similarity of the MEF traffic profile attributes to those of the two mentioned protocols. As with those protocols, traffic up to the CIR is guaranteed and thus experiences a very low frame loss ratio. Traffic between the CIR and EIR will be delivered based upon the bandwidth currently available in the network, but can be dropped in the event congestion occurs.
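To make the interplay of the CIR/CBS and EIR/EBS attributes concrete, the sketch below is a simplified two-token-bucket meter of our own construction (not the MEF's normative bandwidth profile algorithm): arriving frames are declared "green" (CIR-conformant), "yellow" (EIR-conformant and discard-eligible), or "red" (non-conformant).

```python
class BandwidthProfile:
    """Simplified two-token-bucket meter in the spirit of Table 7.3."""

    def __init__(self, cir_bps, cbs_bytes, eir_bps, ebs_bytes):
        self.cir, self.cbs = cir_bps, cbs_bytes
        self.eir, self.ebs = eir_bps, ebs_bytes
        self.c_tokens = cbs_bytes   # committed bucket, replenished at CIR
        self.e_tokens = ebs_bytes   # excess bucket, replenished at EIR
        self.last = 0.0

    def classify(self, frame_len, now):
        """Return 'green', 'yellow' (discard-eligible), or 'red'."""
        elapsed = now - self.last
        self.last = now
        # Replenish token counts (in bytes), capped at CBS and EBS.
        self.c_tokens = min(self.cbs, self.c_tokens + elapsed * self.cir / 8)
        self.e_tokens = min(self.ebs, self.e_tokens + elapsed * self.eir / 8)
        if frame_len <= self.c_tokens:
            self.c_tokens -= frame_len
            return "green"
        if frame_len <= self.e_tokens:
            self.e_tokens -= frame_len
            return "yellow"
        return "red"

# 2 Mbps CIR with a 4 KB committed burst; 8 Mbps EIR with an 8 KB excess burst.
bp = BandwidthProfile(2_000_000, 4_000, 8_000_000, 8_000)
print([bp.classify(1_500, t * 0.001) for t in range(5)])
# ['green', 'green', 'green', 'yellow', 'yellow']
```

Once the committed burst is exhausted, 1500-byte frames arriving every millisecond exceed the 2 Mbps CIR and spill over into the excess (yellow) bucket, exactly the CIR/EIR behavior described above.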
AU6039.indb 190
2/13/08 9:23:18 AM
Figure 7.3 Ingress bandwidth profiles (left: a single bandwidth profile per ingress UNI shared by EVC1, EVC2, and EVC3; middle: a bandwidth profile per EVC; right: bandwidth profiles per EVC and 802.1p CoS, with EVC1 divided into CoS 0-3, CoS 4-5, and CoS 6-7 alongside EVC3)
Now that we have an appreciation for the MEF's service definition approach to QoS, which avoids altering the Ethernet protocol itself, we will examine how network operators can use existing routers and switches to expedite traffic based upon its CoS.
QoS Actions

When using applicable routers and switches, network engineers need to configure those devices to perform a series of functions to provide a QoS capability through the network. At the ingress, equipment must first classify data and then ensure that no more than a predefined amount of bandwidth is allowed to enter the network. Through policing and marking, data that exceeds a predefined CIR can be transmitted, marked and then transmitted, or dropped, based upon other activity occurring in the network. At the egress, QoS actions include queuing and scheduling data for delivery. To obtain an appreciation for these actions, we will briefly discuss each.
Classification

As data enters the service provider's network it is classified. In a Carrier Ethernet environment classification is based upon the CoS. Depending upon the equipment used and its configuration, frames can be placed into up to eight classes.
Policing

As previously mentioned, one of the functions of policing is to ensure that no more than a predefined amount of bandwidth is allowed into the network. Thus, policing can be used to enforce the CIR established for a customer on a particular network interface.
Depending upon the equipment used and its configuration, policing can allow data into the network that exceeds the CIR. Such data can be considered to represent an overflow that can be marked and dropped by the network, if required.
Queuing

As data flows through the network toward the egress, it can be placed into queues according to the marking it received at the ingress to the network. That marking can be the CoS in the IEEE 802.1Q customer VLAN tag (c-tag), the provider VLAN tag (p-tag), or another tag used by the service provider to facilitate the flow of data through its network. Routers within the service provider's network place data into output queues based upon a predefined tagging mechanism, such as the customer priority value in the c-tag. For example, a priority of 5 to 7 could be assigned to frames transporting VoIP, teleconferencing, and other real-time applications that require low latency. A priority of 3 or 4 might be assigned to near-real-time applications, such as bank teller terminal transactions, and a priority of 0 to 2 could be assigned to applications, such as file transfers, that are minimally affected by latency. Because the configuration of router queues requires a scheduling operation to become effective, we will turn our attention to this topic next.
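The priority groupings just described can be expressed as a simple classifier. The queue names below are illustrative (they anticipate a three-queue port), while the priority ranges follow the text:

```python
def select_queue(pcp: int) -> str:
    """Map an 802.1p priority value from the c-tag to an output queue."""
    if 5 <= pcp <= 7:
        return "Q1"   # real-time: VoIP, teleconferencing (low latency)
    if 3 <= pcp <= 4:
        return "Q2"   # near-real-time: bank teller terminal transactions
    return "Q3"       # latency-tolerant: file transfers (priority 0-2)

print([select_queue(p) for p in range(8)])
# ['Q3', 'Q3', 'Q3', 'Q2', 'Q2', 'Q1', 'Q1', 'Q1']
```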
Scheduling

Scheduling determines how frames placed in queues are extracted and exit each egress port, in effect controlling the servicing of queue entries. One of the earliest scheduling methods was round robin servicing of queues. For example, with three queues (Q1, Q2, Q3) associated with a port, data from Q1 would be extracted first, followed by data from Q2, and so on. The problem with this extraction method is that it does not prioritize the servicing of queues. A second scheduling method, which does prioritize the servicing of queues, is a weighted round robin extraction method. Under this method a weight is assigned to each queue for extraction. For example, Q1, which might represent a low latency queue, has data sampled seven out of every ten samplings of the queue buffers. Q2, which could represent a buffer holding near-real-time data, has a sampling rate twice that of Q3, which holds data with a CoS between 0 and 2. Figure 7.4 illustrates a weighted queue scheduling technique similar to the one just described. In examining Figure 7.4, note that if no entry exists in a queue, the scheduler proceeds to the next highest weighted queue. Although this technique enables a priority to be associated with each queue, it assumes that the frame lengths of entries in each queue are the same. Thus, a pure weighted round robin scheduling technique, where weights are assigned to queues regardless of their content, can be considered unfair. One solution to this unfairness, which takes into consideration that different queues can contain frames of different lengths, is referred to by Cisco Systems as deficit round robin (DRR). Thus, we will briefly turn our attention to this scheme.

Figure 7.4 Weighted queue scheduling example (port buffers Q1, Q2, and Q3 serviced at 70 percent, 20 percent, and 10 percent of the output)
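Before turning to DRR, the weighted servicing of Figure 7.4 can be sketched as follows. This is a simplified frame-count model of our own (real schedulers work per transmission opportunity); the 7:2:1 weights correspond to the 70/20/10 split, and empty queues are skipped in favor of the next highest weighted queue:

```python
from collections import deque

def weighted_round_robin(queues, weights, rounds):
    """Service each queue in proportion to its weight; when a queue is
    empty its turns pass to the remaining queues."""
    out = []
    for _ in range(rounds):
        # Visit queues from highest to lowest weight.
        for name, weight in sorted(weights.items(), key=lambda kv: -kv[1]):
            for _ in range(weight):
                if queues[name]:
                    out.append(queues[name].popleft())
    return out

queues = {"Q1": deque(f"a{i}" for i in range(8)),
          "Q2": deque(f"b{i}" for i in range(4)),
          "Q3": deque(f"c{i}" for i in range(4))}
weights = {"Q1": 7, "Q2": 2, "Q3": 1}   # the 70/20/10 split of Figure 7.4
print(weighted_round_robin(queues, weights, 2))
```

Note that the model counts frames, not bytes, which is exactly the unfairness described above: a queue holding jumbo frames would receive far more bandwidth than its weight suggests.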
Deficit Round Robin

Deficit round robin (DRR) scheduling solves the problem of different queues holding frames of different lengths. DRR represents a modified weighted round robin scheduling scheme that was proposed in 1995. Under DRR each queue maintains a deficit counter. On each scheduler visit the counter is increased by a fixed credit called a "quantum," and the scheduler then transmits packets from the head of the queue as long as the deficit counter is greater than or equal to the packet's size, decrementing the counter by the size of each packet sent. If the packet at the head of a queue is larger than the queue's deficit counter, it is held back until the scheduler's next visit; the unused credit remains in the counter and is effectively added to the quantum the queue receives in the next round. In this manner a queue sending large packets cannot consume more than its share of bandwidth over time.
Weighted DRR

Another queuing method, which provides the ability to assign a weight to each queue while still accounting for frame lengths, is weighted deficit round robin (WDRR). WDRR extends the use of the quantum from DRR to provide weighted throughput for each queue. Under WDRR queues have different weights, and the quantum assigned to each queue in a scheduling round is proportional to that queue's relative weight among all the queues serviced by the scheduler.
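Under the assumptions stated in the text, DRR might be sketched as below (a simplified model with hypothetical packet records of the form `(id, length)`). Note that choosing per-queue quanta in proportion to the weights, as in the 2:1 example here, is exactly the WDRR variant just described:

```python
from collections import deque

def drr(queues, quanta, rounds):
    """Deficit round robin: each round a queue's deficit counter grows by
    its quantum; head packets are sent while the deficit covers their size.
    Per-queue quanta proportional to weights yield WDRR."""
    deficit = {name: 0 for name in queues}
    sent = []
    for _ in range(rounds):
        for name, q in queues.items():
            if not q:
                deficit[name] = 0   # empty queues accumulate no credit
                continue
            deficit[name] += quanta[name]
            while q and q[0][1] <= deficit[name]:
                pkt_id, pkt_len = q.popleft()
                deficit[name] -= pkt_len
                sent.append(pkt_id)
    return sent

# Q1 holds long frames, Q2 short ones; quanta give Q1 twice Q2's share.
queues = {"Q1": deque([("A", 1500), ("B", 1500)]),
          "Q2": deque([("x", 300), ("y", 300), ("z", 300)])}
quanta = {"Q1": 1000, "Q2": 500}
print(drr(queues, quanta, 3))   # ['x', 'A', 'y', 'z', 'B']
```

In the first round Q1's 1000-byte credit cannot cover its 1500-byte head packet, so the packet is held back and the leftover credit carries over; by the second round the accumulated 2000 bytes of credit allow it to go out.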
Cisco ML-Series Card

For optical network environments, Cisco manufactures a series of optical network systems (ONSs) whose functionality is enhanced through the use of its ML-Series cards. Two of the key functions of those cards are to provide queuing and scheduling capabilities.
Queue Assignment

There are three methods by which queues can be assigned on an ML-Series card: (1) through the use of the "priority" command during the configuration process; (2) through the use of the "bandwidth" command; and (3) by allowing queues to be assigned automatically. Through the use of a weighting structure, traffic can be scheduled at a granularity of 1/2048 of the port rate. This equates to approximately 488 Kbps for traffic exiting a Gigabit Ethernet port, 293 Kbps for traffic exiting an OC-12c port, and approximately 49 Kbps for traffic exiting a Fast Ethernet port. Using an ML-Series card it is common to create three types of queues, as shown in Figure 7.5. The first type of queue is a low latency queue, which would be assigned a committed bandwidth of 100 percent, ensuring that data placed in that queue is serviced without delay. To limit the bandwidth used by this type of queue, you would need to assign a strict policy that limits ingress traffic for each low latency queue. The second type of queue shown in Figure 7.5 is for unicast frames addressed to specific addresses. Similar to low latency queues, unicast queues are created through an output service policy on egress ports. Each unicast queue is assigned a committed bandwidth, with the weight of the queue determined by normalizing the committed bandwidth across all defined unicast queues for a particular port. Any traffic in excess of the committed bandwidth on any queue is then treated by the scheduler based on the relative weight of the queue.

Figure 7.5 Cisco ML-Series card queues (assignment through the "priority" command, through the "bandwidth" command, or automatically, mapped to three queue types: low latency queues, unicast queues, and multicast/broadcast queues)
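The 1/2048 granularity figures quoted above can be checked with a few lines of arithmetic. We assume the OC-12c figure is computed against the roughly 599.04 Mbps STS-12c payload rate rather than the 622.08 Mbps line rate, since that matches the book's 293 Kbps value after rounding up:

```python
# Scheduling granularity of 1/2048 of the port rate.
ports = [("Gigabit Ethernet", 1_000_000_000),
         ("OC-12c (payload)", 599_040_000),   # assumed STS-12c payload rate
         ("Fast Ethernet", 100_000_000)]

for name, rate_bps in ports:
    print(f"{name}: {rate_bps / 2048 / 1000:.1f} Kbps")
# Gigabit Ethernet: 488.3 Kbps
# OC-12c (payload): 292.5 Kbps
# Fast Ethernet: 48.8 Kbps
```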
The third type of queue is a multicast/broadcast queue; multicast and broadcast frames are automatically placed in such queues. Because Ethernet frames use the CoS bits as markers, those bits can denote both prioritized and discard-eligible frames. When congestion occurs and a queue begins to fill, the first frames dropped are those with a discard-eligible setting in the CoS field. In comparison, committed frames will not be dropped until the total committed load exceeds the interface output rate.
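The drop preference just described can be sketched with a threshold model (our own simplification; real hardware typically applies per-class thresholds or weighted random early detection rather than this exact scheme). Discard-eligible frames are refused once the queue passes a lower threshold, so they are the first casualties of congestion, while committed frames are dropped only when the queue is completely full:

```python
from collections import deque

def enqueue(queue, frame, capacity, de_threshold):
    """Tail-drop with drop preference: discard-eligible (DE) frames are
    accepted only while queue depth is below de_threshold; committed
    frames are accepted until the queue reaches full capacity."""
    depth = len(queue)
    if frame["de"]:
        if depth >= de_threshold:
            return False      # congestion: discard-eligible frame dropped
    elif depth >= capacity:
        return False          # committed frame dropped only when full
    queue.append(frame)
    return True

q = deque()
results = [enqueue(q, {"de": de}, capacity=4, de_threshold=2)
           for de in [True, False, True, True, False, False]]
print(results)   # [True, True, False, False, True, True]
```

Once two frames are queued, the arriving discard-eligible frames are refused while committed frames continue to be accepted.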
Other Card Features

In addition to performing ingress and egress priority marking, the Cisco ML-Series card provides support for Q-in-Q (802.1Q tunneling). This enables a Carrier Ethernet service provider to transparently transport customer VLANs (C-VLANs) entering any given port across the service provider's network. Thus, without reserving bandwidth through a network, a provider can prioritize traffic and expedite it through queuing to provide a traffic flow that attains most of the qualities of a true QoS environment.
Index A Access delay, 162 Access point (see AP) Address resolution protocol (see ARP) ADSL, 6–7 ADSL2/ADSL2+, 6–7 Alteon Networks, 116 Alien FEXT, 48 Alien NEXT, 48 Aloha packet network, 26, 45 American National Standards Institute (see ANSI) ANSI, 26 Any Transport over MPLS (see ATOM) AP, 23–24 Application gateway, 36 ARP, 51 Asynchronous Transfer Mode (see ATM) ATM, 2–3, 15 ATOM, 10 Attachment Unit Interface (see AUI) AUI, 55 Auto-discovery, 98 Auto MDI/MDI-X, 58–59 Auto-negotiation, 66–74, 76–78 Auto-negotiation priorities, 73 Availability, 176–181
B Bandwidth, 183, 189–190 Basepage, 67, 77 Basic Service Set (see BSS) Blocking delays, 132–133
Boggs, David, 25 Bridge operations, 127–131 Broadcast address, 101 Broadcast domain, 12 BSS, 24 Bus-based network structure, 46, 57 Business continuity, 4–5
C Cable categories, 49–50, 65, 78–79, 85–86 Carrier Ethernet, 1–16, 18–20, 26–27, 157–174 Carrier extension, 114–115 Access method, 26 Applications, 17–18 Challenges, 18–20 Definition, 1 Enabling technologies, 5–16 Encapsulation techniques, 167–174 Frame transport, 162–163 IEEE involvement, 26–27 Metro Ethernet Forum, 157–158 Overview, 2 Rationale, 3–5 Service types, 165–167 Topologies, 165 Carrier Sense Multiple Access with Collision Detection (see CSMA/CD) CBS, 190 Channelized versus non-channelized, 38–39 Cheap-net, 56 CIR, 190 Circuit-level gateway, 36 Class I repeater, 63–64, 74–75 Class II repeater, 63–64, 74–75
Committed burst size (see CBS) Committed information rate (see CIR) CRC, 30 Cross-point switching, 133–134 CSMA/CD, 26, 53, 54 Cut-through switching (see cross-point switching) Cyclic Redundancy Check (see CRC)
D Dark fiber, 163 Data Networking Concepts, 21–43 Deficit round robin scheduling, 193 DEMARC (demarcation) line, 180 Distribution system, 24 DIX frame, 100–104, 108 DIX standard, 25–26, 46–47, 53, 108 DS0 time slot, 38 Dual fiber, 90
E EBS, 190 Echo cancellation, 27 EFM, 87–98 Egress delay, 162 EIR, 189–190 E-LAN, 166–167 E-LINE, 166 EPON, 29, 91–98 ESS, 24 E-TREE, 167–168 Ethernet DIX standard, 25–26 Evolution, 25 First Mile, 28–29 Frame formats, 96–126 Performance, 118–119 Ethernet in the First Mile (see EFM) Ethernet II, 46, 100–103 Ethernet over Passive Optical Network (see EPON) Ethertype, 47 Excess Burst Size (see EBS) Excess Information Rate (see EIR) Explicit tagging, 144–145 Extended service set (see ESS)
F Far-end cross talk (see FEXT) Fast Ethernet, 3, 27, 33, 60–66, 111–114 Fast Link Pulse (see FLP) FEXT, 48 Fiber-optic cable, 22, 27, 49–50, 90–91 Fiber-Optic Inter-Repeater Link (see FOIRL) Fiber-to-the-curb (see FTTC) Fiber-to-the-Neighborhood (see FTTN) Filtering, 129–130 Firewall, 35–36 Flooding, 129 Flow control, 108–109 FLP, 67 FOIRL, 59–60 Forwarding, 129 Frame bursting, 115 Frame check sequence, 102–103 Frame delay, 190 Frame delay variation, 190 Frame formats, 99–126 Frame loss ratio, 190 Frame size, 54–55 FTTC, 7 FTTN, 7 Full-duplex, 30, 52–53, 107–108
G GBIC, 163–164 Generic Routing Encapsulation, 37 Gigabit Ethernet, 3, 22, 27, 33, 75–80 Gigabit Interface Converter (see GBIC) Gigabit Media Independent Interface (see GMII) GMII, 77 Graded-index multi-mode fiber, 50–51, 74 10 Gigabit Ethernet, 22, 28, 40, 80–87, 114–117, 119–126 100 Gigabit Ethernet, 29, 87
H Half-duplex, 52, 108, 115 Hard QoS, 189 Hubs, 30–31, 57–58, 65 Hybrid switching, 137
I
M
ICMP, 35 IEEE 802.3 standardization, 48–64 IETF, 51 Implicit tagging, 144–145 Intelligent hub, 31–32 Intelligent switching hub, 131–133 Internet Control Message Protocol (see ICMP) Internet Engineering Task Force (see IETF) IPSec, 9–11, 37 IPX over Ethernet, 106–107 Iso Ethernet, 68
MAN, 3, 22 Managed hub, 30 MDI, 58, 62 MDI-X cable, 58 Mean time between failures (see MTBF) Mean time to repair (see MTTR) Media access control, 48, 52–53 Medium Dependent Interface (see MDI) Medium Independent Interface (see MII) MEF QoS, 189 Metro Ethernet Forum, 157–158 Metcalfe, Robert, 25, 45–46 Metropolitan Area Ethernet, 1 Metropolitan Area Network (see MAN) MII, 61 M-i-M tagging, 170–172 Modular connectors, 57–59 MPCP, 94 MPLS, 13–16, 172–174 MTBF, 177–180 MTTR, 177–180, 182 Multi-mode fiber, 49–50 Multi-Point Control Protocol (see MPCP) Multi-Protocol Label Switching (see MPLS)
J Jam signal, 26 Jitter, 181 Jitter buffers, 181–182 Jumbo frames, 116–117
K Keep-alive (see LIT and NLP)
L Label Edge Router (see LER) Label Forwarding Information Base (see LFIB) Label Switch Path (see LSP) Label Switch Router (see LSR) Lampson, Butler, 25 LAN, 21–23 Latency, 161–162, 181 Layer 2 operations, 10, 12, 14–16, 32 Layer 3 operations, 10, 13, 14–16, 32 Layer 2 Tunneling Protocol (see L2TP) LER, 14, 16 LFIB, 16 Link Integrity Test (see LIT) LIT, 66–67 Local Area Network (see LAN) Logical link control, 48, 52, 104–105 LSP, 14 LSR, 14, 16 L2TP, 9–11
N Near-end crosstalk (see NEXT) Network Interface Card (see NIC) Network-to-Network Interface (see NNI) NEXT, 48 Next page function, 69–72 NIC, 30, 56 NLP, 66–67 NNI, 29 Normal Link Pulses (see NLP) NWay (see auto-negotiation)
O OAM, 183–187 OC-192c, 81 OLT, 92–98 ONU, 92–98 Open Systems Interconnection (see OSI)
Operations, administration, and maintenance (see OAM) Optical Line Terminator (see OLT) Optical Network Unit (see ONU) Optical splitter, 92 OSI, 51
P Packet loss, 183 Passive hub, 31–32 Pause frame, 108–109 PCS, 77, 83 Peer-to-peer, 23 Performance, 118–126 Ethernet, 118–119 Gigabit Ethernet, 119–126 Physical Medium Dependent (see PMD) Physical Medium Independent (see PMI) PMD, 77, 83 PMI, 77 Policing, 191–192 Port-based switching, 137–138 Physical Coding Sublayer (see PCS)
Q Q-in-Q tagging, 168–170 QoS, 187–195 Quality of Service (see QoS) Queuing, 192–195
R Refractive index, 50 RJ-45 (Registered jack), 57–58, 62, 79 Repeater, 63–64, 74–75 Roaming, 24 Router, 33–35
S Secure Sockets Layer (see SSL)
Segment-based switching, 138–140 Service level agreements, 175–187 Service Set Identifier (see SSID) Shim header, 14–15 SHDSL, 8 Single-mode fiber, 49–50 SNAP, 54, 104–106, 111 Soft QoS, 188–189 SONET, 2, 39–43, 83–84 SSID, 24 SSL, 9–11, 37 Star topology, 57 Step-index multi-mode fiber, 50 Store-and-forward switching, 135–137 Sub-Network Access Protocol (see SNAP) Switch, 32–33, 127–156 Switch applications, 139–144
T T-Carrier Hierarchy, 38 Technology ability field, 68–69 Thacker, Chuck, 25 Thick-net, 55 Thin-net, 56 TIA/EIA-568 standard, 49, 62 Translating bridge, 127–128 Transparent bridge, 127–128 Transport technologies, 21–24 T1, 37–38
U UNI, 29, 190 User-to-Network Interface (see UNI)
V VDSL, 8, 22, 89 Virtual LANs (see VLANs) Virtual Private Network (see VPN) VLANs, 11–13, 109–113, 143–156, 158–160, 167–170 VPN, 9–11 VPN appliance, 36–37
W
X
WAN, 21–23 WAN-PHY, 40 Wide Area Network (see WAN) Wireless, 23–24, 28
XAUI, 82–83 XGMII, 81–82
Numbers Index 5-4-3 rule, 59 4B5B code, 112–113 64B/66B code, 80, 83 8B6T code, 66 8B/10B code, 77–78, 83 2 BASE-TL, 80 10 BASE-2, 27, 30, 56 10 BASE-5, 27, 30, 55–56 10 BASE-F, 27, 59 10 BASE-FB, 60 10 BASE-FL, 59–60, 75 10 BASE-FP, 60 10 BASE-T, 27, 30, 33, 49, 56, 66–67 10 PASS-TS, 88 100 BASE-BX, 75 100 BASE-BX10, 88 100 BASE-LX10, 88 100 BASE-SX, 75 100 BASE-T, 49, 60, 108 100 BASE-TX, 27, 49, 60, 62–63, 112–114 100 BASE- T2, 60, 66–67, 69 100 BASE-T4, 27, 60, 64–66 1000 BASE-BX10, 88 1000 BASE-CX, 27, 78–79 1000 BASE-LH, 78, 164 1000 BASE-LX, 27, 76–77, 164 1000 BASE-LX10, 88, 90 1000 BASE-PX10, 88 1000 BASE-PX20, 88
1000 BASE-SX, 27, 76–77 1000 BASE-T, 27, 49, 69, 71–72, 78 1000 BASE-ZX, 78, 164 10 BROAD-36, 56 10 GBASE-CX4, 84–85 10 GBASE-ER, 28 10 GBASE-LR, 28 10 GBASE-LX4, 28 10 GBASE-SR, 28 10 GBASE-ZR, 28 10 Gigabit Attachment Unit Interface (see XAUI) 10 Gigabit Media Independent Interface (see XGMII) 10 GBASE-T, 49, 67, 84–87 802.2 Header, 104 802.1D, 27 802.1P, 148, 160 802.1Q, 109–111, 145–155, 153–156, 158–160, 167–170 802.3 frame, 103–104 802.3ab, 27–28 802.3ae, 80 802.3ag, 184 802.3ah, 28, 184 802.3an, 85 802.3x, 108 802.3z, 27