Lecture Notes in Computer Science Commenced Publication in 1973 Founding and Former Series Editors: Gerhard Goos, Juris Hartmanis, and Jan van Leeuwen
Editorial Board David Hutchison Lancaster University, UK Takeo Kanade Carnegie Mellon University, Pittsburgh, PA, USA Josef Kittler University of Surrey, Guildford, UK Jon M. Kleinberg Cornell University, Ithaca, NY, USA Alfred Kobsa University of California, Irvine, CA, USA Friedemann Mattern ETH Zurich, Switzerland John C. Mitchell Stanford University, CA, USA Moni Naor Weizmann Institute of Science, Rehovot, Israel Oscar Nierstrasz University of Bern, Switzerland C. Pandu Rangan Indian Institute of Technology, Madras, India Bernhard Steffen TU Dortmund University, Germany Madhu Sudan Microsoft Research, Cambridge, MA, USA Demetri Terzopoulos University of California, Los Angeles, CA, USA Doug Tygar University of California, Berkeley, CA, USA Gerhard Weikum Max Planck Institute for Informatics, Saarbruecken, Germany
6641
Jordi Domingo-Pascual Pietro Manzoni Sergio Palazzo Ana Pont Caterina Scoglio (Eds.)
NETWORKING 2011 10th International IFIP TC 6 Networking Conference Valencia, Spain, May 9-13, 2011 Proceedings, Part II
Volume Editors Jordi Domingo-Pascual Universitat Politècnica de Catalunya (UPC) - Barcelona TECH Campus Nord, Mòdul D6, Jordi Girona 1-3, 08034 Barcelona, Spain E-mail:
[email protected] Pietro Manzoni Ana Pont Universitat Politècnica de València Camí de Vera, s/n, 46022 Valencia, Spain E-mail: {pmanzoni, apont}@disca.upv.es Sergio Palazzo University of Catania V.le A. Doria 6, 95125 Catania, Italy E-mail:
[email protected] Caterina Scoglio Kansas State University 2069 Rathbone Hall, Manhattan, KS 66506, USA E-mail:
[email protected]
ISSN 0302-9743 e-ISSN 1611-3349 ISBN 978-3-642-20797-6 e-ISBN 978-3-642-20798-3 DOI 10.1007/978-3-642-20798-3 Springer Heidelberg Dordrecht London New York Library of Congress Control Number: 2011925929 CR Subject Classification (1998): C.2, H.4, D.2, K.6.5, D.4.6, H.3 LNCS Sublibrary: SL 5 – Computer Communication Networks and Telecommunications
© IFIP International Federation for Information Processing 2011 This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, re-use of illustrations, recitation, broadcasting, reproduction on microfilms or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer. Violations are liable to prosecution under the German Copyright Law. The use of general descriptive names, registered names, trademarks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. Typesetting: Camera-ready by author, data conversion by Scientific Publishing Services, Chennai, India Printed on acid-free paper Springer is part of Springer Science+Business Media (www.springer.com)
Welcome Message from the General Chairs
It is our honor and pleasure to welcome you to the proceedings of the 2011 IFIP Networking Conference. This was the 10th edition of what is already considered one of the best international conferences in computer communications and networks. The objective of this edition was to attract innovative research in the areas of applications and services, next-generation Internet, wireless and sensor networks, and network science. This goal was reached, and we would like to thank our Technical Program Committee Co-chairs, Jordi Domingo-Pascual, Sergio Palazzo, and Caterina Scoglio, who organized the review of around 300 submissions, for the splendid technical program they provided. The 64 selected high-quality papers were organized in two parallel tracks. The conference also included three technical talks from very prestigious scientists: Jim Kurose, José Duato, and Antony Rowstron. We would like to express our gratitude to them for accepting; their presence was a privilege for us all. This edition took place at the Computer Engineering School of the Universitat Politècnica de València, Spain. All this would not have been possible without the hard and enthusiastic work of the many people who contributed to making Networking 2011 a successful conference. We would like to thank all of them: the Technical Committee Chairs and members, the Local Organizing Committee, the authors, and the staff of CFP-UPV who dealt with all local arrangements. Thanks also to the Steering Committee of Networking and all the members of IFIP TC6 for their support. Finally, we would like to encourage current and future authors to continue working in this direction and to participate in forums like this conference for the exchange of knowledge and experiences. May 2011
Pietro Manzoni Ana Pont
Technical Program Chairs’ Message
It is a great pleasure to welcome you to the proceedings of Networking 2011, the 10th event in the series of International Conferences on Networking sponsored by the IFIP Technical Committee on Communication Systems (TC6). The main objectives of these IFIP conferences are to bring together members of the networking community from both academia and industry, to discuss recent advances in the broad and fast-evolving field of computer and communication networks, and to highlight key issues, identify trends, and develop visions. This year's call for papers covered four main areas, namely, applications and services, next-generation Internet, wireless and sensor networks, and network science, which received, respectively, 18.8%, 45.2%, 20.6%, and 15.4% of the submitted papers. The conference received 294 submissions, a substantial increase over the figures of recent years, confirming this IFIP-supported initiative as a leading reference conference for researchers working in networking. Papers came from Europe, the Middle East and Africa (69.1%), Asia Pacific (16.5%), the USA and Canada (11.6%), and Latin America (2.8%). With so many papers to choose from, the Technical Program Committee's (TPC) task of selecting the final high-quality technical program was challenging and time consuming. The TPC comprised 106 researchers from 22 countries. All papers were evaluated through a three-phase review process: at least three Program Committee members provided regular reviews; then one of the three reviewers, acting as meta-reviewer, opened a discussion among the reviewers and provided a summary recommendation; finally, after a careful analysis of all recommendations, 64 papers were selected for the technical program, organized in 16 sessions covering the main research aspects of next-generation networks.
We would like to thank the members of the TPC, who shouldered a significant reviewing load due to the increase in the number of submissions. We also acknowledge the contribution of the additional reviewers who helped the TPC members in their task. This event would not have been possible without the hard and enthusiastic work of the many people who contributed to making Networking 2011 a successful conference. We would especially like to thank the General Co-chairs, Ana Pont and Pietro Manzoni, for their support throughout the whole review process, and the Steering Committee Chair, Guy Leduc, for his invaluable advice and encouragement. Finally, we would like to thank all participants for attending the conference. We truly hope you enjoy the proceedings of Networking 2011! Jordi Domingo-Pascual Sergio Palazzo Caterina Scoglio
Organization
Executive Committee Honorary Chair General Chairs
Technical Program Chairs
Tutorial Chairs
Publication Chair Technical Organization Chair Publicity Chair Financial Chair Workshops Chair
Ramón Puigjaner, Universitat Illes Balears, Spain Ana Pont, Universitat Politècnica de València, Spain Pietro Manzoni, Universitat Politècnica de València, Spain Jordi Domingo-Pascual, Universitat Politècnica de Catalunya, BarcelonaTECH, Spain Sergio Palazzo, University of Catania, Italy Caterina Scoglio, Kansas State University, USA Juan Carlos Cano, Universitat Politècnica de València, Spain Dongkyun Kim, Kyungpook National University, South Korea Josep Domenech, Universitat Politècnica de València, Spain José A. Gil, Universitat Politècnica de València, Spain Carlos T. Calafate, Universitat Politècnica de València, Spain Enrique Hernández-Orallo, Universitat Politècnica de València, Spain Vicente Casares, Universitat Politècnica de València, Spain
Steering Committee George Carle Marco Conti Pedro Cuenca Guy Leduc Henning Schulzrinne
TU Munich, Germany IIT-CNR, Pisa, Italy Universidad de Castilla-la-Mancha, Spain University of Liège, Belgium Columbia University, USA
Supporting and Sponsoring Organizations (Alphabetically) Departamento de Informática de Sistemas y Computadores (DISCA) Escuela Técnica Superior de Ingeniería Informática IFIP TC 6
Instituto de Automática e Informática Industrial Ministerio de Ciencia e Innovación Telefónica Investigación y Desarrollo Universitat Politècnica de València
Technical Program Committee Rui Aguiar Ozgur Akan Khaldoun Al Agha Ehab Al-Shaer Kevin Almeroth Tricha Anjali Pere Barlet-Ros Andrea Bianco Chris Blondia Fernando Boavida Olivier Bonaventure Azzedine Boukerche Raouf Boutaba Torsten Braun Wojciech Burakowski Albert Cabellos-Aparicio Eusebi Calle Antonio Capone Damiano Carra Augusto Casaca Claudio Casetti Baek-Young Choi Piotr Cholda Marco Conti Pedro Cuenca Alan Davy Marcelo Dias de Amorim Christian Doerr Jordi Domingo-Pascual Constantine Dovrolis Wolfgang Effelsberg Lars Eggert Gunes Ercal Laura Feeney
University of Aveiro, Portugal Koc University, Turkey University of Paris XI, France University of North Carolina, Charlotte, USA University of California, Santa Barbara, USA Illinois Institute of Technology, USA Universitat Politècnica de Catalunya, BarcelonaTECH, Spain Politecnico di Torino, Italy University of Antwerp, Belgium University of Coimbra, Portugal Université catholique de Louvain, Belgium University of Ottawa, Canada University of Waterloo, Canada University of Bern, Switzerland Warsaw University of Technology, Poland Universitat Politècnica de Catalunya, BarcelonaTECH, Spain University of Girona, Spain Politecnico di Milano, Italy University of Verona, Italy Instituto Superior Técnico, Lisbon, Portugal Politecnico di Torino, Italy University of Missouri, Kansas City, USA AGH University of Science and Technology, Poland IIT-CNR, Italy University of Castilla la Mancha, Spain Waterford Institute of Technology, Ireland UPMC Paris Universitas, France Delft University of Technology, The Netherlands Universitat Politècnica de Catalunya, BarcelonaTECH, Spain Georgia Institute of Technology, USA University of Mannheim, Germany Nokia Research Center, Finland University of California, Los Angeles, USA Swedish Institute of Computer Science, Sweden
Wu-chi Feng Markus Fiedler Luigi Fratta Laura Galluccio Zihui Ge Silvia Giordano Vera Goebel Sergey Gorinsky Timothy Griffin Carmen Guerrero Guenter Haring Paul Havinga Markus Hofmann David Hutchison Mohan Iyer Carlos Juiz Andreas J. Kassler Kimon Kontovasilis Georgios Kormentzas Yevgeni Koucheryavy Udo Krieger Fernando Kuipers Thomas Kunz Guy Leduc Kenji Leibnitz Jorg Liebeherr Richard Ma Pietro Manzoni Janise McNair Deep Medhi Tommaso Melodia Michael Menth Edmundo Monteiro Ioanis Nikolaidis Ilkka Norros Philippe Owezarski Sergio Palazzo Christos Papadopoulos Giovanni Pau
Portland State University, USA Blekinge Institute of Technology, Sweden Politecnico di Milano, Italy University of Catania, Italy AT&T Labs - Research, USA University of Applied Science - SUPSI, Switzerland University of Oslo, Norway Madrid Institute for Advanced Studies in Networks (IMDEA Networks), Spain University of Cambridge, UK University Carlos III of Madrid, Spain Universität Wien, Austria University of Twente, The Netherlands Bell Labs/Alcatel-Lucent, USA Lancaster University, UK Oracle Corporation, USA Universitat de les Illes Balears, Spain Karlstad University, Sweden NCSR Demokritos, Greece University of the Aegean, Greece Tampere University of Technology, Finland Otto Friedrich University Bamberg, Germany Delft University of Technology, The Netherlands Carleton University, Canada University of Liège, Belgium Osaka University, Japan University of Toronto, Canada National University of Singapore, Singapore Universitat Politècnica de València, Spain University of Florida, USA University of Missouri-Kansas City, USA State University of New York at Buffalo, USA University of Würzburg, Germany University of Coimbra, Portugal University of Alberta, Canada VTT Technical Research Centre of Finland, Finland LAAS, France University of Catania, Italy Colorado State University, USA University of California Los Angeles, USA
Harry Perros Thomas Plagemann George Polyzos Dario Pompili Ana Pont Guy Pujolle Peter Reichl James Roberts Paolo Santi Caterina Scoglio Aruna Seneviratne Siraj Shaikh Hanan Shpungin Raghupathy Sivakumar Josep Solé-Pareta Christoph Sommer Otto Spaniol Ioannis Stavrakakis Ralf Steinmetz James Sterbenz Burkhard Stiller Vijay Subramanian Violet Syrotiuk Tarik Taleb Phuoc Tran-Gia Vassilis Tsaoussidis Piet Van Mieghem Huijuan Wang Lars Wolf Tilman Wolf Guoliang Xue Martina Zitterbart
North Carolina State University, USA University of Oslo, Norway Athens University of Economics and Business, Greece Rutgers University, USA Universitat Politècnica de València, Spain University of Paris 6, France Telecommunications Research Center Vienna (FTW), Austria INRIA, France IIT-CNR, Italy Kansas State University, USA NICTA, Australia Coventry University, UK University of Calgary, Canada Georgia Institute of Technology, USA Universitat Politècnica de Catalunya, BarcelonaTECH, Spain University of Erlangen, Germany RWTH Aachen University, Germany National and Kapodistrian University of Athens, Greece Technische Universität Darmstadt, Germany University of Kansas, USA, and Lancaster University, UK University of Zürich, Switzerland National University of Ireland, Maynooth, Ireland Arizona State University, USA NEC Europe Ltd., Germany University of Würzburg, Germany Democritus University of Thrace, Greece Delft University of Technology, The Netherlands Delft University of Technology, The Netherlands Technische Universität Braunschweig, Germany University of Massachusetts, USA Arizona State University, USA KIT (Karlsruhe Institute of Technology), Germany
Additional Reviewers Saeed Al-Haj Carlos Anastasiades Emilio Ancillotti Carles Anton Panayotis Antoniadis Markus Anwander Shingo Ata Baris Atakan Jeroen Avonts Mohammad Awal Serkan Ayaz Sasitharan Balasubramaniam Pradeep Bangera Youghourta Benfattoum Mehdi Bezahaf Nikolaos Bezirgiannidis Ozan Bicen Alex Bikfalvi Alberto P. Blanc Norbert Blenn Thomas Bocek Chiara Boldrini Roksana Boreli Raffaele Bruno Shelley Buchinger Filipe Caldeira Valentín Carela-Español David Carrera Pietro Cassarà Egemen K. Çetinkaya Supriyo Chatterjea Ioannis Chatzigiannakis Lin Chen Baozhi Chen Luca Chiaraviglio Mosharaf Chowdhury Delia Ciullo Florin Coras Paul Coulton Joana Dantas Ignacio de Castro
Marcel Cavalcanti de Castro Sotiris Diamantopoulos Nikos Dimitriou Lei Ding Jerzy Domzal Falko Dressler Otto Carlos M.B. Duarte Michael Duelli Zbigniew Dulinski Roman A. Dunaytsev Juergen Eckert David Eckhoff Philipp Eittenberger Ozgur Ergul David Erman Chockalingam Eswaramurthy Wissam Fawaz Adriano Fiorese Hans Ronald Fischer Bryan Ford Anna Förster Dario Gallucci Wilfried Gansterer Xin Ge Andrea Ghittino Luca Giraudo Diogo Gomes Roberto Gonzalez Jorge Granjal Vijay Gurbani Thomas Haenselmann Matthias Hartmann Syed Anwar Ul Hasan Fabio Hecht Volker Hilt David Hock Michael Hoefling Philipp Hurni Fida Hussain
Johnathan Ishmael Jochen Issing Eva Jaho Loránd Jakab Matthew R. Jakeman Parikshit Juluri Frank Kargl Dominik Klein Murat Kocaoglu Stavros Kolliopoulos Ioannis Komnios Robert Kooij Efthymios Koutsogiannis Stein Kristiansen Michal Kryczka Adlen Ksentini Mirja Kuehlewind Berend W.M. Kuipers Harsha Kumara Li-Chung Kuo Andreas Lavén Yee Wei Law Fotis Lazarakis Eun Kyung Lee Hendrik Lemelson Sotirios-Angelos Lenas Nanfang Li Yunxin (Jeff) Li Morten Lindeberg Teck Chaw Ling Zeyu Liu Xuan Liu Dajie Liu Dimitris Loukatos Chris Yu Tak Ma Francesco Malandrino Jose Marinho Angelos K. Marnerides Steven Martin Alfons Martin Brian Meskill Jakub Mikians
Philip Mildner Gen Motoyoshi Xenia Mountrouidou Hoang Anh Nguyen Thanh Nguyen Jasmina Omic David Fonseca Palma Panagiotis Pantazopoulos Giorgos Papastergiou Ignasi Paredes-Oliva Parth Pathak Oscar Pedrola Pedro A. Vale Pinheiro Antonio Pinizzotto Bartosz Polaczyk Marc Portoles-Comeras Aiko Pras Daniele Puccinelli Muhammad Qasim Ali Haiyang Qian Massimo Reineri Elisabete Reis Cristiano Gato Rezende Fabio Ricciato Michal Ries André Rodrigues Justin P. Rohrer
Sylwia A. Romaszko Claudio Rossi Walid Saad Ehssan Sakhaee Konstantinos Samdanis Josep Sanjuàs-Cuxart Lambros Sarakis Bart Sas Damien Saucez Raimund Schatz Daniel Schlosser Sascha Schnaufer Charles Shen Benny Shimony William Somers Bruno Miguel Sousa Kathleen Spaey Barbara Staehle Rafal Stankiewicz Thomas Staub Moritz Steiner Martin Stiemerling Siyu Tang Orestis A. Telelis Plarent Tirana Wim Torfs Tonio Triebel Fani Tsapeli
Michael Tüxen Ruud van de Bovenkamp Daniel Van den Akker Andrei Vancea Salvatore Vanini Matteo Varvello Constantinos Vassilakis Hariharasudhan Viswanathan Ryan Vogt Michael Voorhaen Gerald Wagenknecht Naoki Wakamiya Anjing Wang Chih-Chiang Wang Xuetao Wei Christian Wilms Wynand Winterbach Robert Wójcik Piotr Wydrych Yufeng Xin Dejun Yang Yang Zhang Zhongliang Zhao Quanyan Zhu Fang Zhu Thomas Zinner Patrick Zwickl
Table of Contents – Part II
Peer-to-Peer UDP NAT and Firewall Puncturing in the Wild . . . . . . . . . . . . . . . . . . . . . Gertjan Halkes and Johan Pouwelse
1
Enhancing Peer-to-Peer Traffic Locality through Selective Tracker Blocking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Haiyang Wang, Feng Wang, and Jiangchuan Liu
13
Defending against Sybil Nodes in BitTorrent . . . . . . . . . . . . . . . . . . . . . . . . Jung Ki So and Douglas S. Reeves
25
Traffic Localization for DHT-Based BitTorrent Networks . . . . . . . . . . . . . . Matteo Varvello and Moritz Steiner
40
Pricing BGP and Inter-AS Economic Relationships . . . . . . . . . . . . . . . . . . . . . . . . . . Enrico Gregori, Alessandro Improta, Luciano Lenzini, Lorenzo Rossi, and Luca Sani
54
Network Non-neutrality Debate: An Economic Analysis . . . . . . . . . . . . . . . Eitan Altman, Arnaud Legout, and Yuedong Xu
68
Strategyproof Mechanisms for Content Delivery via Layered Multicast . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Ajay Gopinathan and Zongpeng Li
82
A Flexible Auction Model for Virtual Private Networks . . . . . . . . . . . . . . . Kamil Koltyś, Krzysztof Pieńkosz, and Eugeniusz Toczylowski
97
Resource Allocation Collaboration between ISPs for Efficient Overlay Traffic Management . . . Eleni Agiatzidou and George D. Stamoulis Optimal Joint Call Admission Control with Vertical Handoff on Heterogeneous Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Diego Pacheco-Paramo, Vicent Pla, Vicente Casares-Giner, and Jorge Martinez-Bauset
109
121
Balancing by PREFLEX: Congestion Aware Traffic Engineering . . . . . . . João Taveira Araújo, Richard Clegg, Imad Grandi, Miguel Rio, and George Pavlou
135
EFD: An Efficient Low-Overhead Scheduler . . . . . . . . . . . . . . . . . . . . . . . . . Jinbang Chen, Martin Heusse, and Guillaume Urvoy-Keller
150
Resource Allocation Radio Flexible Dynamic Spectrum Allocation in Cognitive Radio Networks Based on Game-Theoretical Mechanism Design . . . . . . . . . . . . . . . . . . . . . . José R. Vidal, Vicent Pla, Luis Guijarro, and Jorge Martinez-Bauset
164
Channel Assignment and Access Protocols for Spectrum-Agile Networks with Single-Transceiver Radios . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Haythem Bany Salameh and Marwan Krunz
178
The Problem of Sensing Unused Cellular Spectrum . . . . . . . . . . . . . . . . . . . Daniel Willkomm, Sridhar Machiraju, Jean Bolot, and Adam Wolisz
198
Adaptive Transmission of Variable-Bit-Rate Video Streams to Mobile Devices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Farid Molazem Tabrizi, Joseph Peters, and Mohamed Hefeeda
213
Resource Allocation Wireless Multiscale Fairness and Its Application to Resource Allocation in Wireless Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Eitan Altman, Konstantin Avrachenkov, and Sreenath Ramanath
225
Fast-Converging Scheduling and Routing Algorithms for WiMAX Mesh Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Salim Nahle and Naceur Malouch
238
OFDMA Downlink Burst Allocation Mechanism for IEEE 802.16e Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Juan I. del-Castillo, Francisco M. Delicado, and Jose M. Villalón
250
Adaptive On-The-Go Scheduling for End-to-End Delay Control in TDMA-Based Wireless Mesh Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Yung-Cheng Tu, Meng Chang Chen, and Yeali S. Sun
263
Social Networks SMS: Collaborative Streaming in Mobile Social Networks . . . . . . . . . . . Chenguang Kong, Chuan Wu, and Victor O.K. Li
275
Assessing the Effects of a Soft Cut-Off in the Twitter Social Network . . . Saptarshi Ghosh, Ajitesh Srivastava, and Niloy Ganguly
288
Characterising Aggregate Inter-contact Times in Heterogeneous Opportunistic Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Andrea Passarella and Marco Conti
301
Are Friends Overrated? A Study for the Social Aggregator Digg.com . . . Christian Doerr, Siyu Tang, Norbert Blenn, and Piet Van Mieghem
314
TCP Revisiting TCP Congestion Control Using Delay Gradients . . . . . . . . . . . . David A. Hayes and Grenville Armitage
328
NF-TCP: A Network Friendly TCP Variant for Background Delay-Insensitive Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Mayutan Arumaithurai, Xiaoming Fu, and K.K. Ramakrishnan
342
Impact of Queueing Delay Estimation Error on Equilibrium and Its Stability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Corentin Briat, Emre A. Yavuz, and Gunnar Karlsson
356
On the Uplink Performance of TCP in Multi-rate 802.11 WLANs . . . . . . Naeem Khademi, Michael Welzl, and Renato Lo Cigno
368
Author Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
379
Table of Contents – Part I
Anomaly Detection BotTrack: Tracking Botnets Using NetFlow and PageRank . . . . . . . . . . . . Jérôme François, Shaonan Wang, Radu State, and Thomas Engel
1
Learning Entropy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Lele Zhang and Darryl Veitch
15
Machine Learning Approach for IP-Flow Record Anomaly Detection . . . . Cynthia Wagner, Jérôme François, Radu State, and Thomas Engel
28
UNADA: Unsupervised Network Anomaly Detection Using Sub-space Outliers Ranking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Pedro Casas, Johan Mazel, and Philippe Owezarski
40
Content Management Efficient Processing of Multi-connection Compressed Web Traffic . . . . . . . Yehuda Afek, Anat Bremler-Barr, and Yaron Koral
52
The Resource Efficient Forwarding in the Content Centric Network . . . . . Yifan Yu and Daqing Gu
66
Modelling and Evaluation of CCN-Caching Trees . . . . . . . . . . . . . . . . . . . . . Ioannis Psaras, Richard G. Clegg, Raul Landa, Wei Koong Chai, and George Pavlou
78
Empirical Evaluation of HTTP Adaptive Streaming under Vehicular Mobility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Jun Yao, Salil S. Kanhere, Imran Hossain, and Mahbub Hassan
92
DTN and Sensor Networks MAC Layer Support for Delay Tolerant Video Transport in Disruptive MANETs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Morten Lindeberg, Stein Kristiansen, Vera Goebel, and Thomas Plagemann DTN Support for News Dissemination in an Urban Area . . . . . . . . . . . . . . Tuan-Minh Pham and Serge Fdida
106
120
Stochastic Scheduling for Underwater Sensor Networks . . . . . . . . . . . . . Dimitri Marinakis, Kui Wu, and Sue Whitesides Using SensLAB as a First Class Scientific Tool for Large Scale Wireless Sensor Network Experiments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Clément Burin des Roziers, Guillaume Chelius, Tony Ducrocq, Eric Fleury, Antoine Fraboulet, Antoine Gallais, Nathalie Mitton, Thomas Noël, and Julien Vandaele
134
147
Energy Efficiency Using Coordinated Transmission with Energy Efficient Ethernet . . . . . . . Pedro Reviriego, Ken Christensen, Alfonso Sánchez-Macián, and Juan Antonio Maestro
160
Online Job-Migration for Reducing the Electricity Bill in the Cloud . . . . Niv Buchbinder, Navendu Jain, and Ishai Menache
172
Stochastic Traffic Engineering for Live Audio/Video Delivering over Energy-Limited Wireless Access Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . Nicola Cordeschi, Tatiana Patriarca, and Enzo Baccarelli
186
VMFlow: Leveraging VM Mobility to Reduce Network Power Costs in Data Centers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Vijay Mann, Avinash Kumar, Partha Dutta, and Shivkumar Kalyanaraman
198
Mobility Modeling A Collaborative AAA Architecture to Enable Secure Real-World Network Mobility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Panagiotis Georgopoulos, Ben McCarthy, and Christopher Edwards
212
Markov Modulated Bi-variate Gaussian Processes for Mobility Modeling and Location Prediction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Paulo Salvador and António Nogueira
227
Mobility Prediction Based Neighborhood Discovery in Mobile Ad Hoc Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Xu Li, Nathalie Mitton, and David Simplot-Ryl
241
STEPS - An Approach for Human Mobility Modeling . . . . . . . . . . . . . . . . Anh Dung Nguyen, Patrick Sénac, Victor Ramiro, and Michel Diaz
254
Network Science Epidemic Spread in Mobile Ad Hoc Networks: Determining the Tipping Point . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Nicholas C. Valler, B. Aditya Prakash, Hanghang Tong, Michalis Faloutsos, and Christos Faloutsos
266
Small Worlds and Rapid Mixing with a Little More Randomness on Random Geometric Graphs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Gunes Ercal
281
A Random Walk Approach to Modeling the Dynamics of the Blogosphere . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Muhammad Zubair Shafiq and Alex X. Liu
294
A Nash Bargaining Solution for Cooperative Network Formation Games . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Konstantin Avrachenkov, Jocelyne Elias, Fabio Martignon, Giovanni Neglia, and Leon Petrosyan
307
Network Topology Configuration Optimal Node Placement in Distributed Wireless Security Architectures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Fabio Martignon, Stefano Paris, and Antonio Capone
319
Geographical Location and Load Based Gateway Selection for Optimal Traffic Offload in Mobile Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Tarik Taleb, Yassine Hadjadj-Aoul, and Stefan Schmid
331
Femtocell Coverage Optimisation Using Statistical Verification . . . . . . . . . Tiejun Ma and Peter Pietzuch
343
Points of Interest Coverage with Connectivity Constraints Using Wireless Mobile Sensors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Milan Erdelj, Tahiry Razafindralambo, and David Simplot-Ryl
355
Next Generation Internet A Deep Dive into the LISP Cache and What ISPs Should Know about It . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Juhoon Kim, Luigi Iannone, and Anja Feldmann
367
Data Plane Optimization in Open Virtual Routers . . . . . . . . . . . . . . . . . . . Muhammad Siraj Rathore, Markus Hidell, and Peter Sjödin
379
Performance Comparison of Hardware Virtualization Platforms . . . . . . . . Daniel Schlosser, Michael Duelli, and Sebastian Goll
393
A Novel Scalable IPv6 Lookup Scheme Using Compressed Pipelined Tries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Michel Hanna, Sangyeun Cho, and Rami Melhem
406
Path Diversity oBGP: An Overlay for a Scalable iBGP Control Plane . . . . . . . . . . . . . . . . Iuniana Oprescu, Mickaël Meulle, Steve Uhlig, Cristel Pelsser, Olaf Maennel, and Philippe Owezarski
420
Scalability of iBGP Path Diversity Concepts . . . . . . . . . . . . . . . . . . . . . . . . Uli Bornhauser, Peter Martini, and Martin Horneffer
432
MultiPath TCP: From Theory to Practice . . . . . . . . . . . . . . . . . . . . . . . . . . . Sébastien Barré, Christoph Paasch, and Olivier Bonaventure
444
Stealthier Inter-packet Timing Covert Channels . . . . . . . . . . . . . . . . . . . . . . Sebastian Zander, Grenville Armitage, and Philip Branch
458
Author Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
471
UDP NAT and Firewall Puncturing in the Wild Gertjan Halkes and Johan Pouwelse Faculty of Electrical Engineering, Mathematics and Computer Science Delft University of Technology, P.O. Box 5031, 2600 GA, Delft, The Netherlands
[email protected],
[email protected]
Abstract. Peer-to-Peer (P2P) networks work on the presumption that all nodes in the network are connectable. However, NAT boxes and firewalls prevent connections to many nodes on the Internet. For UDP based protocols, the UDP hole-punching technique has been proposed to mitigate this problem. This paper presents a study of the efficacy of UDP hole punching on the Internet in the context of an actual P2P network. To the best of our knowledge, no previous study has provided similar measurements. Our results show that UDP hole punching is an effective method to increase the connectability of peers on the Internet: approximately 64% of all peers are behind a NAT box or firewall which should allow hole punching to work, and more than 80% of hole punching attempts between these peers succeed. Keywords: UDP, NAT, Firewall, Puncturing, Measurements.
1 Introduction
In Peer-to-Peer (P2P) systems, computers on the Internet connect with each other in a symmetrical fashion. The computers simultaneously assume both the role of client and the role of server. This requires that random computers on the Internet must be able to connect with each other. However, the deployment of firewalls and Network Address Translator (NAT) boxes creates obstacles. By their very nature, firewalls are meant to regulate what connections are permitted. Moreover, firewalls are frequently configured to allow only outgoing connections, based on the assumption of the client-server model of communication. Of course, in a P2P setting connections may also be incoming, often on non-standard ports, which these firewalls do not allow. NAT boxes pose a separate but related problem. Although NAT by itself is not meant as a connection filtering technology, it does present an obstacle for setting up connections: the publicly visible communications endpoint (IP and port combination) is not visible to the computer behind the NAT box, and may even be different for each remote endpoint. To make matters worse, NAT
This work was partially supported by the European Community’s 7th Framework Programme through the P2P-Next and QLectives projects (grant no. 216217, 231200).
J. Domingo-Pascual et al. (Eds.): NETWORKING 2011, Part II, LNCS 6641, pp. 1–12, 2011. © IFIP International Federation for Information Processing 2011
technology is often combined with firewalling to create an even bigger obstacle for connection setup. The techniques for dealing with NATs and firewalls are well known. For example, the STUN [9] protocol details how a computer can determine what kind of NAT and firewall is between itself and the public Internet. Connection setup can be done through connection brokering or rendez-vous [3]. It should be noted though that the connection setup techniques are most useful for UDP traffic. Setting up a connection for TCP traffic when both computers are behind a NAT/firewall requires unusual or non-standard use of TCP and IP mechanisms, and may rely on specific NAT/firewall behaviour to work [2]. In this paper we present the results of a measurement study of UDP NAT and firewall puncturing “in the wild”. Using the known techniques, we have implemented a P2P solution for NAT and firewall puncturing which we then used to measure connection setup success. Our results show that using puncturing, connections between many more peers can be set up, which should ultimately increase robustness in the P2P network.
2 Related Work
The problems related to NATs and firewalls and work-arounds for these problems have been described extensively in previous work. The STUN protocol [9] describes how to detect the type of NAT and firewall between a computer and the public Internet and determine its publicly visible address. To do so, it uses UDP messages (although TCP is also supported) sent to pre-determined STUN servers with public IP addresses. Although the STUN protocol does not provide guarantees about whether the address learnt through it is in fact usable for connection setup, the techniques described are the most well-known method for determining NAT and firewall types. UDP hole punching was extensively described in [3] (although earlier descriptions exist). The idea is that to allow packets to come in from a remote endpoint, the computer behind the NAT or firewall needs to send something to the remote endpoint first. By doing so it creates a "hole" in the NAT or firewall through which communications can proceed. This even works when both sides are behind a NAT/firewall, provided both sides start by punching a hole in their respective NAT/firewalls. In this paper we only consider simple hole punching. More elaborate techniques exist to deal with NAT/firewalls with more complex behaviour [10]. However, these techniques only apply to a small percentage of the NAT/firewalls on the Internet, and are therefore less useful. Previous studies have shown that a significant portion of Internet hosts could be usable in P2P networks using the cited techniques [3,4,8]. However, none of these studies tries to determine to what extent the theoretical ability to connect is actually usable. The study presented in this paper tries to fill that void.
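The hole-punching exchange described above can be illustrated with a minimal sketch. This is hypothetical Python code, not part of the paper's implementation; it runs over loopback, where no real NAT intervenes, so it only demonstrates the message ordering: each side must send first, opening its own hole, before it can receive.

```python
import socket

def make_peer():
    # Each peer binds a UDP socket; behind a NAT this local endpoint
    # would be translated to an external (IP, port) mapping.
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.bind(("127.0.0.1", 0))
    s.settimeout(2.0)
    return s

def punch_and_connect(a, b):
    # Hole punching: each side sends to the other's endpoint first,
    # creating a hole in its own NAT/firewall, then waits for the
    # other side's packet to arrive through that hole.
    addr_a, addr_b = a.getsockname(), b.getsockname()
    a.sendto(b"PUNCH", addr_b)
    b.sendto(b"PUNCH", addr_a)
    msg_at_b, _ = b.recvfrom(1024)
    msg_at_a, _ = a.recvfrom(1024)
    return msg_at_a == b"PUNCH" and msg_at_b == b"PUNCH"
```

On loopback both packets trivially arrive; behind real NAT/firewalls the first packet in each direction may be dropped, which is exactly why each side transmits before it expects to receive.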
3 Terminology
In the rest of this paper we will use the terminology introduced by the BEHAVE working group [1]. Specifically we will use the following terms and their respective abbreviations (see Figure 1 for a graphical explanation of the mapping types):

Endpoint-Independent Mapping (EIM). The NAT reuses the port mapping for subsequent packets from the same internal IP address and port to any remote IP address and port. So when sending packets to hosts A and B from the same internal IP address and port, hosts A and B will see the same external IP address and port.

Address-Dependent Mapping (ADM). The NAT reuses the port mapping for subsequent packets from the same internal IP address and port to the same remote IP address, regardless of the remote port. This means that host A will always see the same external IP address and port, regardless of the port on host A, but host B will see a different mapping.

Address and Port-Dependent Mapping (APDM). The NAT only reuses the port mapping for subsequent packets using the same internal and remote IP addresses and ports, for the duration of the mapping. Even when communicating from the same internal IP address and port, host A will see two different external IP addresses and/or ports when different ports on host A are used.

Endpoint-Independent Filtering (EIF). The NAT or firewall allows packets from any remote IP address and port, for a specific endpoint on the NAT or firewall.

Address-Dependent Filtering (ADF). The NAT or firewall allows packets destined for a specific endpoint on the NAT or firewall from a specific remote IP, after a computer behind the NAT or firewall has sent a single packet to that remote IP.

Address and Port-Dependent Filtering (APDF). The NAT or firewall allows packets destined for a specific endpoint on the NAT or firewall from remote endpoints only after a packet has been sent to that remote endpoint from inside the NAT or firewall.
For the purposes of our experiments there is no difference between ADM and APDM as only single endpoints on the different hosts are used, which we will therefore combine into the abbreviation A(P)DM. Similarly we collapse the definitions of ADF and APDF into A(P)DF if the distinction is irrelevant. In [10] a more extensive subdivision is made, but again the distinctions made are relevant only when considering multiple ports or port-prediction techniques.
4 Implementation
We have implemented a UDP NAT/firewall puncturing scheme as part of the Tribler BitTorrent client. Although the Tribler client does not currently use UDP for setting up connections or exchanging data, we use it as a vehicle to
Fig. 1. When a NAT exhibits Endpoint-Independent Mapping (EIM) behaviour, hosts A and B see the same external IP address and port (Y:y) for a particular internal IP and port combination (X:x). If however the NAT type is Address (and Port)-dependent Mapping, hosts A and B see different IP/port combinations (Y:y and Z:z). Note that the IP addresses (Y and Z) can be the same, especially if the NAT has only a single external address.
Fig. 2. Rendez-vous for connection setup
deploy our puncturing test software on real users' machines. The puncturing test builds a swarm, much like the BitTorrent software. However, instead of having a separate tracker, we use a peer at a pre-programmed address and port and employ a form of Peer EXchange (PEX) to find new peers to connect with. The PEX messages include a (random) peer ID, the IP address and port at which the peer originating the PEX message communicates with the remote peer, and the NAT/firewall type of the remote peer. The latter is included so the receiving peer can determine whether it is useful to attempt to connect to the remote peer, as certain combinations of mapping and filtering are not able to connect to each other. Our implementation tries to make minimal use of centralised components. Therefore the detection of the NAT and firewall type is not done using STUN. Instead, the peers rely on information reported by other peers and on checking if other peers can connect to them without previous communication in the reverse direction. Furthermore, peers use other peers as rendez-vous servers (see Figure 2). If a peer R is connected to two other peers A and B, it can serve as rendez-vous server for them, even if it is itself behind a NAT box or firewall. It should be noted that due to the generic audience which participated in our trial we expect these numbers to be reasonably representative of generic Internet users. Because using P2P file-sharing software is typically discouraged in a corporate environment, we do expect a bias towards home users. Home users
tend to use less professional equipment that is more prone to misbehaviour, which may negatively impact our connection success rate results. In the following sections we will describe the tests used by peers to determine their NAT and firewall types.

4.1 NAT Type Detection
To allow determination of the NAT type of the NAT box (if any) that a peer is behind, all peers report the remote address and port they see when a connection is set up. So each peer will receive, from its communication partner, its own external address and port. If the reports from all communication partners are the same (or at least a large majority is the same), a peer will determine that there is either no NAT or the NAT has EIM behaviour. Note that the two cases (no NAT or EIM behaviour) are indistinguishable without a reliable local determination of the local address. Furthermore, the difference is mostly irrelevant. If, however, the reported external IP address and/or port are regularly different, then the peer concludes that it is behind an A(P)DM NAT.

4.2 Filtering Behaviour Detection
The filtering behaviour of a NAT/firewall is detected by checking whether a direct connection request arrives before a reverse connection request arrives. To enable this to work, when a peer tries to set up a connection using rendez-vous, it will always first send a direct connection request to the remote peer. In most cases this direct connection request will arrive at the remote peer before the reverse connection request via the rendez-vous, unless the NAT/firewall behaviour is A(P)DF. So when for a significant fraction of incoming requests the direct connection request arrives before any communication in the reverse direction, the peer concludes that the filtering behaviour is EIF (or there is no firewall, which again is indistinguishable). Otherwise it must conclude that the filtering type is A(P)DF. In principle it would be possible for the clients to distinguish between ADF and APDF. For example, peers could try to deliberately send a direct connection request to the wrong port. If the filtering is of APDF type, no connection can be set up, and the attempt will always fail. However, if the firewall uses ADF type filtering, the attempt will still succeed (assuming the reverse connection request arrives at the remote peer). Another option is to try to connect to peers behind an A(P)DM type NAT. This should theoretically only succeed for peers behind ADF firewalls. These experiments should be performed several times before concluding one way or the other. In our implementation however, we have not let peers try to distinguish between APDF and ADF type filtering. We did let A(P)DF peers attempt to connect to A(P)DM peers, such that from the collected logs we can later make the distinction.
Because distinguishing between APDF and ADF filtering after the experiments should not provide different results, and the fraction of both EIM-ADF and A(P)DM peers is small (see Section 5), we felt that including this distinction in the client software would provide little benefit.
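The two detection rules of Sections 4.1 and 4.2 amount to simple majority tests over observations collected from other peers. The sketch below is hypothetical Python; the 0.8 and 0.3 thresholds are illustrative assumptions, not values taken from the deployed client.

```python
from collections import Counter

def classify_mapping(reported_endpoints, majority=0.8):
    # reported_endpoints: external (IP, port) pairs that communication
    # partners saw for this peer. A large majority of identical reports
    # means EIM behaviour (or no NAT, which is indistinguishable);
    # regularly differing reports mean A(P)DM behaviour.
    if not reported_endpoints:
        return None
    _, count = Counter(reported_endpoints).most_common(1)[0]
    return "EIM" if count / len(reported_endpoints) >= majority else "A(P)DM"

def classify_filtering(direct_first, significant=0.3):
    # direct_first[i] is True when, for the i-th incoming connection,
    # the direct connection request arrived before any reverse-direction
    # traffic. A significant fraction of such events implies EIF
    # (or no firewall); otherwise the filtering must be A(P)DF.
    if not direct_first:
        return None
    frac = sum(direct_first) / len(direct_first)
    return "EIF" if frac >= significant else "A(P)DF"
```

For example, nine identical endpoint reports out of ten would classify as EIM, while ten distinct reports would classify as A(P)DM.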
5 Results
In this section we present the results for two trials. For the purposes of data collection, the peers in the network logged all interesting send and receive events to a local log file. In the first trial (907 peers), peers would not retry a failed connection attempt immediately. After analysing the results of this first trial, we conducted a second trial (1,531 peers) in which peers would perform up to three retries if connection attempts failed. In the first trial peers regularly sent the collected logs to a central collection server. However, in the second trial we used a less intrusive reporting method which led to peers with an unfiltered Internet connection being favoured in the results. Therefore we report the market share results from the first trial to avoid this bias. Both trials lasted several weeks.

5.1 Market Share
Figure 3 shows the detected NAT and firewall types for 646 out of the 907 peers in the first trial, for which we were able to draw a conclusion about the NAT/firewall type from the connections they made. The most dominant type of NAT/firewall is EIM-APDF (52%). This includes both simple firewalls and EIM NATs. Theoretically these can connect to each other through a rendez-vous peer. The fraction of peers that are behind A(P)DM NATs is only 11%. These NAT/firewalls are the biggest obstacle, i.e. they can only connect to EIM-EIF and EIM-ADF peers. The fraction of A(P)DM NATs is expected to go down, as more and more vendors start complying with the BEHAVE RFC [1].

Fig. 3. NAT/firewall type market share
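The compatibility constraints implied by this classification, e.g. that A(P)DM peers can only connect to EIM-EIF and EIM-ADF peers, can be approximated with a small check. This is a sketch based on the definitions in Section 3, assuming simple hole punching via rendez-vous and a stable external IP address even for A(P)DM NATs; it is not code from the paper.

```python
def reachable(receiver, sender):
    # receiver and sender are (mapping, filtering) classifications.
    mapping_r, filtering_r = receiver
    if filtering_r == "EIF":
        return True   # accepts packets from any remote endpoint
    if filtering_r == "ADF":
        return True   # hole is per remote IP; the sender's varying
                      # external port does not matter
    # APDF: the hole is per remote IP *and* port, so the sender's
    # external port must be predictable, i.e. the sender needs EIM.
    return sender[0] == "EIM"

def can_connect(a, b):
    # A connection can be set up via rendez-vous only if each side
    # can reach the other through its NAT/firewall.
    return reachable(a, b) and reachable(b, a)
```

Under this model two EIM-APDF peers can connect to each other, an A(P)DM peer can reach EIM-EIF and EIM-ADF peers, but an A(P)DM peer and an EIM-APDF peer cannot connect, matching the theoretical expectations stated above.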
5.2 Connection Success Rate
Next we look at the success rate in setting up a connection between two peers. For this we only consider those connections for which we have information on both ends of the connection, and for which we were able to determine the NAT/firewall type of both peers. Our results therefore represent 15,545 connections between 841 peers.
Table 1. Connection success rate per NAT/firewall type (rows: originating peer's type; columns: destination peer's type; the types are EIM-EIF, EIM-APDF, EIM-ADF, and A(P)DM). Each bar shows the successful attempts, where retries are shown as increasingly darker shades of green.
We have split the results out into the different NAT/firewall types we have distinguished (see Table 1). A first thing to note about the results is that even peers classified as EIM-EIF only show an 85% success rate. Detailed analysis of the results shows that this is caused by a small fraction of peers that consistently show poor connectability (see Figure 4). Perhaps they are experiencing high packet losses due to a saturated link. As the puncturing test is running in the background of a BitTorrent client this is certainly not impossible. To test our hypothesis, we tried to reproduce the packet dropping behaviour in a local test environment using (new) NAT routers. Using TCP side traffic to saturate the links, we were only able to produce small packet loss rates (≤ 5%). To see whether other methods of stressing the routers would produce different results, we also performed a similar test using UDP side traffic. Using large volumes of UDP packets we could get some routers to drop significant numbers of packets (50% or more), or even crash completely. We must stress that we used newly acquired NAT routers, which may be more resilient to the BitTorrent-like stress test that we subjected these routers to. Unfortunately, we currently do not have older router and modem models, which may be more susceptible to dropping packets under high TCP load. Also, although the consumer-grade NAT/firewalls are the prime suspect for dropping packets, they are by no means the only possible point at which packets may be dropped. Therefore we cannot discount packet loss as a cause for reduced connection success rates. Another possibility is that certain routers stop functioning correctly when their mapping tables are full. This is a well-known problem with some older types of routers, when used in combination with BitTorrent.
Note that this does not mean they do not function at all, but new connections can usually not be set up anymore, and existing connections can experience significant throughput reductions. A peer behind such a router may still be reachable over the existing connections, but all new connection attempts will fail, thereby reducing the connection success rate. The connections between EIM-ADF peers and EIM-A(P)DF peers succeed a little more often than the connections between EIM-EIF peers, but this result is not statistically significant as only 36 peers in our test are behind a EIM-ADF type NAT/firewall.
Fig. 4. Cumulative distribution of per-peer connection success rate, counting only connection attempts between compatible peers. Connections to peers with lower success rates are not counted against peers with a higher connection success rate.
Connections to and from peers behind EIM-APDF NAT/firewalls succeed a little less frequently than the connections between EIM-EIF peers. This is not entirely unexpected. As already noted in the RFC describing the STUN protocol [9], even though two peers should be able to communicate given the classification determined here, subtleties in the actual implementation may prevent actual connection setup (see also Section 5.2). Conversely, peers which should not be able to connect due to incompatible NAT/firewall types can on occasion set up a connection due to similar implementation subtleties. Behavioural Subtleties. In our analysis of the collected logs, we found several behaviours in NAT boxes that are not well described by the classification we have used so far. For example, we found that several EIM-APDF NATs would occasionally use a different external port (similar to A(P)DM NAT behaviour). This is one reason why these NATs have lower connection success rates. A second notable behaviour occurs in A(P)DM NATs. The conventional idea is that these NATs will use a different port for each remote endpoint. However, some A(P)DM NATs appear to use a small set of ports repeatedly. So although it is uncertain which of these ports will be used, there is only a small set to choose from. This could explain why sometimes a connection between an A(P)DM NAT and an EIM-APDF NAT/firewall does succeed: because the set is small there is a chance that the EIM-APDF NAT/firewall actually uses the port that is chosen by the A(P)DM NAT, allowing the packet to arrive. This may also explain why sometimes the connections between two A(P)DM NATs succeed. A(P)DM to EIM-ADF First Attempt Success. One very interesting result can be seen in the connections from A(P)DM peers to EIM-ADF peers.
If we assume that a packet sent directly between peers A and B will arrive before a packet sent at the same time but via a third peer, then the direct connection request will always arrive before the reverse connection request has punched a hole.
This results in the direct connection request being dropped at the NAT/firewall. If we consider the case where an A(P)DM peer is the originating peer, the reverse connection request sent from the EIM-ADF peer will also very likely not arrive at the port that the direct connection request was sent from, causing the packet to be dropped. This means that given the above assumption, we would expect that connection attempts from A(P)DM peers to EIM-ADF peers will always fail the first time. For the second and later attempts, the firewall on the EIM-ADF side will already have been opened for connections from the A(P)DM peer by the reverse connection request of the first attempt, allowing the direct connection request to pass through. As we can see from the results, the first connection attempt does succeed in approximately 41% of the cases. The reason for this is that the direct connection request is not always faster than the reverse connection request which is sent through the rendez-vous peer. This is known as a Triangle Inequality Violation (TIV). It is well known that TIVs exist in the round-trip time between nodes [11,12,13]. Our connection setup is dependent on one-way delay, and not round-trip time, but many of the underlying causes such as routing policies and peering agreements will affect the one-way delay as well. Studies of TIVs in round-trip time seem to indicate that the occurrence of TIVs can be as high as 40%, although lower numbers are more common. Another possible explanation for TIVs specific to our situation is that there is network equipment which requires some setup time when a packet needs to be sent to a host (or AS) with which it has not recently communicated. The consumer-grade NAT/firewalls are of course a first suspect for such behaviour, but an exploratory test with a small number of such boxes did not find any significant extra delay.
The only source of extra delay in communicating with "new" IP addresses that we were able to find is the ARP protocol used in LANs. However, in our situation it is unlikely that the ARP protocol is the cause of TIVs. Finally, there are other, albeit unlikely, possibilities for the first attempt to succeed. First, it is possible that some EIM-ADF firewalls have very long mapping/filter timeouts. If, after closing the connection, an A(P)DM peer tries to establish a new connection before the mapping/filter has timed out, the first attempt could succeed. To prevent such situations we did not allow new connections within 5 minutes, but this could be too short. Second, the two peers could simultaneously try to connect to each other. However, similarly to the previous explanation, the chances of such an occurrence are very small. Third, the previously mentioned behavioural subtleties may explain a small fraction of the successful connections as well. If the A(P)DM NAT uses only a small number of ports, this significantly increases the chance that the EIM-ADF peer actually attempts the reverse connection to the port the A(P)DM NAT is expecting, in which case the connection setup will succeed.

5.3 Timeout
A final measurement we made was the timeout for the mappings/firewall holes. This parameter is important when setting the keep-alive interval.

Fig. 5. Mapping/firewall hole timeout without (left) and with (right) handshake

To measure
the timeout, we sent UDP packets to a pre-programmed IP address and port, after which a reply was sent with a delay. If the reply was received, the timeout was determined to be larger than the delay of the reply. The largest timeout for which a message was received was determined to be the approximate timeout for the mapping/firewall hole. To ensure a rapid measurement, several messages requesting replies at different delays were sent at the same time from different ports. This setup can give wrong results for some EIM-ADF NAT/firewalls which refresh the timeout for incoming traffic as well as for outgoing traffic. Doing so is a security risk, so we expect few firewalls to behave this way. However, we have for this reason excluded the (small number of) EIM-ADF NAT/firewalls from our analysis. The graph on the left in Figure 5 shows the results of this measurement. These results indicate that many firewalls employ a fairly short timeout (1 minute or less) for the created mappings/firewall holes. However, some NAT/firewalls, particularly those based on the Linux kernel, use a longer timeout if more than two packets have been sent on a particular mapping/firewall hole, and at least one packet has passed in both directions. We therefore also measured the timeout where we include an initial handshake before requesting the delayed reply. The results are shown in the graph on the right in Figure 5. The results are markedly different. Most of the NAT/firewalls that have a 30 second timeout without the handshake now use a much longer timeout. The default value in the Linux kernel is 180 seconds, which may explain the large fraction of peers with that timeout when using an initial handshake. These differences between single packet timeouts and timeouts after an initial handshake are also demonstrated in [5].
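The estimation procedure can be sketched as follows. This is a simplified, simulated version: `reply_received` stands in for actually sending a probe and waiting for the delayed reply, and the delay schedule is an assumption loosely based on the bucket boundaries reported in Figure 5.

```python
def estimate_timeout(reply_received, delays=(30, 60, 90, 120, 180, 240)):
    # reply_received(d) models sending a probe that asks the remote
    # server to reply after d seconds; it returns True if the delayed
    # reply made it back through the mapping/firewall hole. The
    # estimated timeout is the largest delay that still worked.
    arrived = [d for d in delays if reply_received(d)]
    return max(arrived) if arrived else None

# A simulated NAT whose mapping expires after 100 seconds: the 90 s
# reply still arrives, the 120 s reply does not.
assert estimate_timeout(lambda d: d < 100) == 90
```

In the real measurement all probes were sent at once from different ports, so the per-delay checks ran in parallel rather than sequentially as above.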
6 Discussion
In the previous section we have seen that NAT/firewalls do not always behave as simply as the traditional classifications have suggested. The result is that some connections that are expected to succeed do not, and vice versa. Although there seems to be a convergence towards BEHAVE-compatible NAT
behaviour (as can be seen by the very small number of A(P)DM NAT boxes that are available from stores today), it is unlikely that all behavioural anomalies will fully disappear. Our results confirm that over three quarters of the peers on the Internet (79%) are not directly connectable. The lack of connectability has long been recognised as a problem in P2P networks [3,6], especially in video streaming [7]. However, using a simple rendez-vous mechanism, a further 64% of all peers could potentially set up connections between themselves when using UDP. This would reduce the connectability problem to a mere 15% of peers, practically eliminating the problem for most P2P applications. Some have suggested that the problem of NATs obstructing connectability will go away with the introduction of IPv6. However, it should be noted that most of the problems today are not so much caused by the NAT behaviour, but rather by the filtering. The vast majority of NAT/firewalls today are of the EIM-A(P)DF type, which means that their externally visible endpoint is constant and therefore not a big obstacle for connectability. The remaining problems are caused by the filtering behaviour, and it is likely that home routers will continue to include firewalling by default, even when the switch to IPv6 is made. As such, the connectability problems will continue to exist and the puncturing techniques studied in this paper will remain an important tool to overcome these problems.
7 Conclusions
In this paper we have presented the results of a real implementation of UDP NAT and firewall puncturing, running on random Internet users' machines. Our implementation makes minimal use of central components. Most notably, peers detect their NAT and firewall types solely through communication with other peers. Our results show that connectable (EIM-EIF) peers form a small minority (21%) on the Internet, and that most NATed and firewalled peers (64% of all peers, EIM-A(P)DF) should theoretically be able to set up connections through a simple rendez-vous mechanism when using UDP. Our results also show that connections between these peers can indeed be set up, with a success rate only fractionally lower than for connectable peers. As these hosts are the majority of the peers in Peer-to-Peer networks, we conclude that there is a large opportunity for UDP based Peer-to-Peer protocols to set up many more connections and therefore create a more robust network. Finally, our measurements of the NAT mapping/firewall hole timeout show that keep-alive messages should be sent at least every 55 seconds to ensure that mappings/holes will remain open on almost every NAT/firewall. This assumes that several messages are exchanged within the first 30 seconds, because some NAT/firewalls extend their timeout if there is more traffic than a simple request and reply.
References
1. Audet, F., Jennings, C.: Network Address Translation (NAT) Behavioral Requirements for Unicast UDP. RFC 4787 (Best Current Practice) (January 2007), http://www.ietf.org/rfc/rfc4787.txt
2. Biggadike, A., Ferullo, D., Wilson, G., Perrig, A.: NATBLASTER: Establishing TCP connections between hosts behind NATs. In: SIGCOMM Asia Workshop (April 2005)
3. Ford, B., Srisuresh, P., Kegel, D.: Peer-to-peer communication across network address translators. In: USENIX 2005 (April 2005)
4. Guha, S., Francis, P.: Characterization and measurement of TCP traversal through NATs and firewalls. In: Proc. of the 5th ACM SIGCOMM Conf. on Internet Measurement (IMC 2005), Berkeley, CA, October 2005, pp. 199–211 (2005)
5. Hätönen, S., Nyrhinen, A., Eggert, L., Strowes, S., Sarolahti, P., Kojo, M.: An experimental study of home gateway characteristics. In: Proc. of the 10th Internet Measurement Conference (IMC 2010), Melbourne, Australia (November 2010)
6. Liu, Y., Pan, J.: The impact of NAT on BitTorrent-like P2P systems. In: Proc. of the 9th Int. Conf. on Peer-to-Peer Computing (P2P 2009), Seattle, WA, September 2009, pp. 242–251 (2009)
7. Mol, J., Bakker, A., Pouwelse, J., Epema, D., Sips, H.: The design and deployment of a BitTorrent live video streaming solution. In: Proc. of the IEEE Int. Symp. on Multimedia (ISM 2009) (December 2009)
8. Noh, J., Baccichet, P., Girod, B.: Experiences with a large-scale deployment of Stanford peer-to-peer multicast. In: Proc. of the 17th Int. Packet Video Workshop (PV 2009), Seattle, WA, May 2009, pp. 1–9 (2009)
9. Rosenberg, J., Mahy, R., Matthews, P., Wing, D.: Session Traversal Utilities for NAT (STUN). RFC 5389 (Proposed Standard) (October 2008), http://www.ietf.org/rfc/rfc5389.txt
10. Roverso, R., El-Ansary, S., Haridi, S.: NATCracker: NAT combinations matter. In: Proc. of the 18th Int. Conf. on Computer Communications and Networks (ICCCN 2009), San Francisco, CA, August 2009, pp. 1–7 (2009)
11. Savage, S., Anderson, T., Aggarwal, A., Becker, D., Cardwell, N., Collins, A., Hoffman, E., Snell, J., Vahdat, A., Voelker, G., Zahorjan, J.: Detour: A case for informed internet routing and transport. IEEE Micro 19(1), 50–59 (1999)
12. Tang, L., Crovella, M.: Virtual landmarks for the internet. In: Proc. of the 3rd ACM SIGCOMM Conf. on Internet Measurement (IMC 2003), Miami Beach, FL, October 2003, pp. 143–152 (2003)
13. Zheng, H., Lua, E.K., Pias, M., Griffin, T.G.: Internet routing policies and round-trip-times. In: Dovrolis, C. (ed.) PAM 2005. LNCS, vol. 3431, pp. 236–250. Springer, Heidelberg (2005)
Enhancing Peer-to-Peer Traffic Locality through Selective Tracker Blocking Haiyang Wang, Feng Wang, and Jiangchuan Liu School of Computing Science, Simon Fraser University, British Columbia, Canada {hwa17,fwa1,jcliu}@cs.sfu.ca
Abstract. Peer-to-peer (P2P) applications, most notably BitTorrent (BT), are generating unprecedented traffic pressure on Internet Service Providers (ISPs). To mitigate the costly inter-ISP traffic, P2P locality, which explores and promotes local peer connections, has been widely suggested. Unfortunately, existing proposals generally require that an ISP control the neighbor selection of most peers, which is often not practical in a real-world deployment given that noncooperative trackers exist. In this paper, for the first time, we examine the characteristics and the impacts of these noncooperative trackers through real-world measurements. We find that tracker blocking has the potential to address this noncooperation problem and help the ISPs to control more peers for traffic locality. Yet, how to guarantee torrents' availability at the same time remains a significant challenge for the ISPs. To this end, we model the tracker blocking problem jointly with torrents' availability, and address it through a novel selective tracker blocking algorithm, which iteratively improves traffic locality under a given availability threshold. Our trace-driven evaluation shows that our solution successfully reduces the cross-ISP traffic in the presence of noncooperative trackers and yet with minimal impact on torrents' availability. Keywords: BitTorrent, Locality, Peer control, Tracker blocking.
1 Introduction Peer-to-Peer (P2P) networking has emerged as a successful architecture for content sharing over the Internet. BitTorrent (BT), the most popular P2P application, has attracted attention from network operators and researchers for its wide deployment. BitTorrent, however, also generates a huge amount of cross-ISP traffic. Since ISPs typically pay their peering or higher-level ISPs for global connectivity, the traffic between different ISPs is costly and presents significant network engineering challenges. In order to alleviate this cross-ISP traffic, P2P locality has been widely suggested. Unlike caching or blocking the P2P traffic, this method exploits the locality of existing peers to reduce long-haul traffic. It is well known that modifying the trackers [1] is one of the most efficient ways to deploy P2P locality. This method manipulates the neighbor selection of peers via a biased tracker deployment. In particular, the biased trackers will select the majority, but not all, of the neighbors for the BT peers within the same ISP. J. Domingo-Pascual et al. (Eds.): NETWORKING 2011, Part II, LNCS 6641, pp. 13–24, 2011. c IFIP International Federation for Information Processing 2011
In this paper, we explore the problem that many trackers are not owned by the ISPs and thus can hardly be modified for traffic locality. We take a first step towards understanding the characteristics and the impacts of these noncooperative trackers (the trackers that cannot be modified for locality) via extensive measurements. We show that the existence of noncooperative trackers greatly reduces the efficiency of traffic locality. In particular, we show that an ISP will lose control of 53% (67,132 out of 126,133) of its peers due to the existence of some Pirate Bay trackers1. The well-known tracker blocking approach has the potential to address this problem. Yet, this approach will also reduce torrents’ availability and unavoidably degrade peers’ downloading experience. Therefore, we discuss the main challenge in this design: how to control the neighbor selection of more peers while minimizing the impact on torrents’ availability? Fortunately, the great popularity of the multiple tracker configuration [2] gives us the opportunity to minimize the impact on torrents’ availability. We thus formulate the tracker blocking problem coherently with the torrents’ availability under this scenario. The problem is then addressed through a novel selective tracker blocking algorithm, which iteratively improves traffic locality subject to a given availability threshold. Our evaluation shows that it can successfully reduce cross-ISP traffic in the presence of noncooperative trackers, with minimal impact on torrents’ availability. The rest of this paper is organized as follows: Section 2 reviews related work. We discuss the existence of noncooperative trackers and our motivation in Section 3. Section 4 formalizes the problem and proposes a selective tracker blocking approach, and Section 5 presents our trace-based evaluation. We further discuss some practical issues in Section 6 and conclude the paper in Section 7.
2 Related Works There have been numerous studies on the analysis, optimization, and implementation of the BitTorrent system [3]. P2P locality has recently attracted particular attention following the pioneering work of Karagiannis et al. [4]. For example, Blond et al. [5] showed through a controlled environment that high locality values (defined by [4]) yield up to two orders of magnitude savings on cross-AS traffic. Xie et al. [6] suggested cooperation between peer-to-peer applications and ISPs through a new locality architecture, namely P4P, which can reduce both the external traffic and the average downloading time. Choffnes et al. also proposed Ono, a BitTorrent extension that leverages a CDN infrastructure to effectively locate peers that are close to each other. A recent study from Ren et al. [7] also confirms the possible benefit of a topology-aware and infrastructure-independent BitTorrent protocol. On the other hand, many studies have also addressed pitfalls of the locality mechanism. Piatek et al. [8] showed that a ”win-win” outcome is unlikely to be obtained for all the ISPs under locality; the reason is that reducing inter-domain traffic reduces costs
1 The Pirate Bay is a Swedish website that indexes BitTorrent content. It has been involved in a number of lawsuits, generally due to violation of copyright laws. Meanwhile, the ISPs can hardly modify the Pirate Bay trackers for traffic locality.
for some ISPs, while it also reduces revenue for others. Cuevas et al. [9] further investigated the maximum transit traffic reduction as well as the ”win-win” boundaries across the ISPs. These studies indicate that the ISPs are likely to be selfish during the locality deployment, especially given the considerable gain of traffic locality. Our work extends these studies through an Internet-wide measurement. We show that peer control is a very important issue in reducing cross-ISP traffic: the existence of noncooperative trackers may greatly reduce the efficiency of traffic locality. Given the popularity of the multiple tracker configuration, we propose a selective tracker blocking approach that can effectively help the ISPs to control more traffic while minimizing the impact on the torrents’ availability.
3 Motivation: Problem of Peer Control To investigate the deployment of traffic locality, we conducted a 3-month measurement and collected information on more than 9 million BT peers. Our measurement configuration and the raw dataset (including the torrent information) can be found at: http://netsg.cs.sfu.ca/n tracker.htm. It is well known that the trackers play a very important role in the deployment of traffic locality. Table 1 presents the site information of the Top-10 most popular trackers in our measurement. We can see that many of them belong to Pirate Bay and similar sites, which have been involved in a series of lawsuits, as plaintiffs or as defendants. In terms of traffic locality, unless the copyright and other related problems are well solved, we can hardly expect to organize these tracker sites together for traffic locality optimization. Table 1. Top-10 Most Popular Trackers
Rank   Peers     Torrents*   Tracker Sites (URLs)
1      607987    19915       open.tracker.thepiratebay.org
2      593205    16724       trackeri.rarbg.com
3      560580    23386       denis.stalker.h3q.com
4      509140    15308       tpb.tracker.thepiratebay.org
5      504173    12117       vtv.tracker.thepiratebay.org
6      442708    12821       vip.tracker.thepiratebay.org
7      414095    10019       eztv.tracker.prq.to
8      262991    6079        tracker.prq.to
9      184843    3016        tk2.greedland.net
10     142220    3114        www.sumotracker.org

* Note that the torrent-level popularity is obtained from the metainfo files, which can include multiple trackers.
Table 2. Number of peers that will choose the modified biased trackers with certain probability (in AS#3352)

Pr       0.1-0.2   0.3-0.4   0.5-0.6   0.7-0.8   0.9-1.0
Case A   0         0         0         0         126133
Case B   2360      7317      28685     23279     60331
Case C   11751     19672     32448     7469      47347
[Figure: number of peers (x 10^4) vs. probability of choosing the biased trackers, comparing three cases: all trackers can be modified; 4 public trackers cannot be modified; 8 trackers cannot be modified]
Fig. 1. The impact of noncooperative trackers
Therefore, we use the term noncooperative tracker to refer to them; intuitively, if the ISPs cannot modify these trackers, they will also fail to control the peers that are managed by them. In order to quantify their impact on deployment, we investigate the tracker blocking problem in AS#3352 as an example. This is the most popular AS, with 126,133 peers in our measurement; these peers are distributed in 6,065 torrents managed by 384 trackers. In this case study, we pick out a set of trackers that are managed by private organizations: in particular, 4 trackers belong to Demonoid and 4 trackers belong to Pirate Bay. Except for these noncooperative trackers, we assume that all other trackers have already been modified by the ISPs for traffic locality. Figure 1 shows the probability that the peers in AS#3352 will connect to the modified biased trackers (the probability that their traffic will be optimized). The detailed data can be found in Table 2 (where in Case A, all the trackers can be modified for traffic locality; in Case B, 4 noncooperative trackers from Pirate Bay cannot be modified for traffic locality; and in Case C, 8 noncooperative trackers from Pirate Bay and Demonoid cannot be modified for traffic locality). We can see in Figure 1 that when 4 noncooperative trackers (from Pirate Bay) cannot be modified, only 60,331 peers in this ISP will be optimized by traffic locality for sure. We find that 53% of peers (67,132 out of 126,133) will be affected by the
noncooperative trackers; none of these peers will be optimized for sure, and they will connect to the biased trackers with relatively low probability. Moreover, as shown in Figure 1, if more trackers are not willing to cooperate (8 in this case), the ISP will lose the control of even more peers. We can see that even a small number of noncooperative trackers can bring noticeable damage to the ISPs; a large number of noncooperative trackers may easily ruin the deployment of traffic locality. However, controlling the neighbor selection of numerous peers one by one is even more troublesome. It is thus important to see whether we can enhance the peer control for the ISPs.
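As a concrete illustration, the bucketing behind Table 2 can be sketched in a few lines: under the multi-tracker protocol, a peer picks one tracker from its torrent's announce-list, so the chance that its traffic is optimized is the fraction of that list that is biased. The torrent data below is a hypothetical example, and the bucket boundaries are an assumption modeled on the table headers.

```python
# Sketch: distribute an ISP's peers into probability buckets (cf. Table 2).
# Each tuple is (peers_in_isp, n_modified_trackers, n_total_trackers);
# the probability of meeting a biased tracker is modified / total.
LABELS = ["0.1-0.2", "0.3-0.4", "0.5-0.6", "0.7-0.8", "0.9-1.0"]

def bucketize(torrents):
    """torrents: list of (peers_in_isp, n_modified, n_total) tuples."""
    hist = {label: 0 for label in LABELS}
    for peers, modified, total in torrents:
        pr = modified / total
        hist[LABELS[min(int(pr * 5), 4)]] += peers
    return hist

data = [(1000, 4, 4),   # all 4 trackers modified -> optimized for sure
        (500, 1, 2),    # 1 of 2 modified
        (200, 1, 4)]    # 1 of 4 modified
print(bucketize(data))
# {'0.1-0.2': 0, '0.3-0.4': 200, '0.5-0.6': 500, '0.7-0.8': 0, '0.9-1.0': 1000}
```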
4 Tracker Blocking Problem Intuitively, the ISPs can prevent their peers from connecting to these noncooperative trackers via tracker blocking. Yet, this approach will also reduce the torrents’ availability and the experience of P2P users. In our study, we find that blocking all the noncooperative trackers greatly reduces the availability of torrents (see detailed discussions in Section 5). Fortunately, the latest BitTorrent metainfo file can include multiple tracker sites, stored in the announce-list section [10]. In particular, unless we block all the trackers in a torrent’s announce-list, its availability will not be affected. In our measurement, we also record the announce-lists of the torrents, pick out the cited trackers, and then compute the number of trackers that have been used by each torrent. Figure 2 confirms that more than 80% of torrents have specified at least two trackers for load-balance (or backup) purposes, and a few torrents even have announce-lists of several hundred trackers. This is much higher than an earlier measurement in 2007 [11] (which observed multi-trackers in 35% of the torrents), and thus suggests that the multiple tracker configuration has been quickly recognized and deployed in the BitTorrent community. We thus formulate the tracker blocking problem coherently with the torrents’ availability under the scenario of multiple tracker configuration.
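For reference, counting a torrent's distinct trackers from a decoded metainfo dictionary follows directly from the multi-tracker specification [10]: the announce-list is a list of tiers, each tier a list of tracker URLs, with the single announce field as fallback. The metainfo content below is a hypothetical example.

```python
# Count distinct trackers in a torrent's metainfo (multi-tracker spec [10]):
# 'announce-list' is a list of tiers, each tier a list of tracker URLs;
# 'announce' is the single-tracker fallback for older clients.

def tracker_set(metainfo):
    trackers = set()
    for tier in metainfo.get("announce-list", []):
        trackers.update(tier)
    if not trackers and "announce" in metainfo:
        trackers.add(metainfo["announce"])
    return trackers

# Hypothetical decoded metainfo with one primary tier and one backup tier.
meta = {
    "announce": "http://open.tracker.example/announce",
    "announce-list": [
        ["http://open.tracker.example/announce"],
        ["http://backup1.example/announce", "http://backup2.example/announce"],
    ],
}
print(len(tracker_set(meta)))   # 3
```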
[Figure: CDF of the number of load-balance trackers in the torrent]
Fig. 2. Popularity of multiple tracker configuration
[Figure: 3-D view of peer counts over Torrent Index x AS Index]
Fig. 3. Local view of matrix R_tor,as
4.1 Problem Formulation We now give a formal description of the tracker blocking problem in BitTorrent networks. We use ℵ to denote the set of all ASes on the Internet, 𝕊 to denote the set of existing torrents, and 𝕋 to denote the set of trackers. We define three variables A, S, and T: A takes on values over the set of ASes a ∈ ℵ, S takes on values over the set of torrents, and T takes on values over the set of trackers that manage the torrents. Based on the above components, two relationships can be learnt from the measurement: (1) the relationship between S (torrents) and T (trackers); (2) the relationship between A (ASes) and S (torrents). We use the binary matrix R_tor,tra to define the relationship between S and T; this matrix is learnt directly from the metainfo (.torrent) files. Each element R_tor,tra(s, t) is of a binary value, indicating whether torrent s includes tracker t in its metainfo file (1-Yes, 0-No). On the other hand, we use the matrix R_tor,as to refer to the frequency table of S and A. This matrix is learnt from the IP/AS information of the probed BT peers; each element R_tor,as(s, a) is an integer that refers to the number of peers (in torrent s) that belong to AS a. The peers’ AS information is learnt by the ’whois’ command on the Linux system, and most replies are from ’whois.cymru.com’. A local view of R_tor,as is shown in Figure 3. For a given ISP x (generally including n ASes A_x = {a_1, a_2, ..., a_n}, n ≥ 1) managed by k trackers T_x = {t_1, t_2, ..., t_k}, we use the set T_x^modified to refer to the trackers that are modified for traffic locality, the set T_x^blocked to refer to the trackers that are blocked by the ISP, and the set T_x^unchanged to refer to the trackers that are unchanged (neither modified nor blocked) during the locality deployment. Note that in this definition, the set of noncooperative trackers is T_x^noncoop = T_x^blocked ∪ T_x^unchanged.
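The two matrices can be represented compactly as sparse dictionaries. A minimal sketch follows, with hypothetical torrent, tracker, and AS identifiers:

```python
# Sketch of the two relationship matrices as sparse dicts rather than dense
# arrays. All torrent/tracker/AS names here are hypothetical examples.

# R_tor,tra: binary relation -- does torrent s list tracker t in its metainfo?
R_tor_tra = {
    ("torrent_a", "tracker_1"): 1,
    ("torrent_a", "tracker_2"): 1,
    ("torrent_b", "tracker_2"): 1,
}

# R_tor,as: frequency table -- number of peers of torrent s observed in AS a.
R_tor_as = {
    ("torrent_a", "AS3352"): 120,
    ("torrent_b", "AS3352"): 45,
    ("torrent_b", "AS6461"): 30,
}

def trackers_of(torrent, r_tor_tra):
    """Trackers cited in the torrent's metainfo."""
    return {t for (s, t), v in r_tor_tra.items() if s == torrent and v}

def peers_in_isp(torrent, isp_ases, r_tor_as):
    """Peer count of a torrent inside an ISP (a set of ASes A_x)."""
    return sum(v for (s, a), v in r_tor_as.items()
               if s == torrent and a in isp_ases)

print(sorted(trackers_of("torrent_a", R_tor_tra)))  # ['tracker_1', 'tracker_2']
print(peers_in_isp("torrent_b", {"AS3352", "AS6461"}, R_tor_as))  # 75
```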
Note that when we decide to block a noncooperative tracker, this decision will bring the ISP certain benefits and costs. In particular, the benefit is that some peers will switch to alternative trackers that can be modified for traffic locality; the cost, on the other hand, can be quantified by the number of peers that cannot find an alternative tracker and
thus fail to connect to the BT networks. The tracker blocking problem is: for ISP x (with a known set of noncooperative trackers T_x^noncoop), how to partition the tracker set T_x into the three parts T_x^blocked, T_x^unchanged, and T_x^modified. This partition should maximize the total benefit; meanwhile, the torrents’ availability (the total number of peers that cannot connect to the BT networks due to tracker unavailability) should also be bounded by a given threshold β, where β ∈ [0, 1] refers to the percentage of peers that remain available. We formulate this problem step by step as follows. For any partition, the peers in a given torrent (for example torrent s_1) will be optimized by the locality mechanism with probability:

P_s1 = Σ_{t ∈ T_x^modified} R_tor,tra(s_1, t) / Σ_{t ∈ T_x} R_tor,tra(s_1, t)    (1)
This probability is computed as the ratio of modified trackers to the total number of trackers that the torrent cites in its metainfo file. Therefore, a probability distribution over the torrents S can be given by P_S = {P_s1, P_s2, ..., P_si}. This distribution describes the probability that the peers (in each torrent) will connect to the biased trackers. On the other hand, for ISP x, the distribution of peer population across the torrents is given by:

D_s = Σ_{a ∈ A_x} R_tor,as(s, a)    (2)
Based on these two distributions, we use a normalized expectation to quantify the total benefit. The tracker blocking problem is therefore to maximize:

E(x) = Σ_S (D_s · P_s) / Σ_S D_s    (3)

s.t.

V(x) / Σ_S D_s ≥ β    (4)
where V(x) = Σ_S [Σ_{t ∈ T*} R_tor,tra(S, t)/k] · D_s indicates the total number of peers that can connect to at least one available tracker, and T* = T_x − T_x^blocked refers to the trackers that are not blocked. The physical meaning of E(x) is the average probability that the peers in ISP x will be optimized by the traffic locality; meanwhile, the constraint in Eqn. (4) helps us to bound the torrent availability during the tracker blocking. 4.2 Which Trackers Should Be Blocked? In this part, we discuss the details of the tracker blocking algorithm. It is easy to see that this problem can be transformed into a restricted knapsack problem, which is known to be NP-complete. Note that in a real-world deployment, both R_tor,tra and R_tor,as could be dynamic (we will further discuss this impact in Section 5). Therefore, instead of finding an optimal solution, we focus on a more efficient algorithm for real-world implementations.
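A direct transcription of Eqns. (1)-(4) into code may clarify the objective and the constraint. The toy torrents, trackers, and peer counts below are hypothetical, and the matrices use the sparse dict representation keyed by (torrent, tracker) and (torrent, AS).

```python
# Sketch of Eqns. (1)-(4) for a single ISP x; all inputs are hypothetical.

def p_s(s, modified, all_trackers, r_tor_tra):
    """Eqn. (1): probability that peers of torrent s meet a biased tracker."""
    cited = lambda ts: sum(r_tor_tra.get((s, t), 0) for t in ts)
    total = cited(all_trackers)
    return cited(modified) / total if total else 0.0

def d_s(s, isp_ases, r_tor_as):
    """Eqn. (2): peer population of torrent s inside ISP x."""
    return sum(r_tor_as.get((s, a), 0) for a in isp_ases)

def objective(torrents, modified, all_trackers, isp_ases, r_tor_tra, r_tor_as):
    """Eqn. (3): E(x), the expected fraction of ISP peers that get localized."""
    total = sum(d_s(s, isp_ases, r_tor_as) for s in torrents)
    gain = sum(d_s(s, isp_ases, r_tor_as) *
               p_s(s, modified, all_trackers, r_tor_tra) for s in torrents)
    return gain / total if total else 0.0

def availability(torrents, unblocked, k, isp_ases, r_tor_tra, r_tor_as):
    """LHS of Eqn. (4): V(x) / sum(D_s), with V as defined in the text."""
    total = sum(d_s(s, isp_ases, r_tor_as) for s in torrents)
    v = sum((sum(r_tor_tra.get((s, t), 0) for t in unblocked) / k) *
            d_s(s, isp_ases, r_tor_as) for s in torrents)
    return v / total if total else 0.0

# Toy example: torrent s1 cites trackers t1 and t2; s2 cites only t2.
R_tt = {("s1", "t1"): 1, ("s1", "t2"): 1, ("s2", "t2"): 1}
R_ta = {("s1", "AS1"): 100, ("s2", "AS1"): 100}
print(objective(["s1", "s2"], {"t1"}, {"t1", "t2"}, {"AS1"}, R_tt, R_ta))  # 0.25
```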
Algorithm Tracker_Blocking()
1:  while the tracker list is not empty and the total availability cost is
2:      equal to or less than β
3:    for ∀ tracker t ∈ T^noncoop
4:      compute the benefit B of blocking tracker t based on Eqn. (3)
5:      compute the cost H of blocking tracker t based on Eqn. (4)
6:    end for
7:    find the unchecked tracker with maximal W = B − H
8:    if the total cost after blocking this tracker is equal to or less than β then
9:      remove this tracker from the tracker list (this tracker is blocked)
10:     update this information in R_tor,tra and R_tor,as
11:   else
12:     mark this tracker as checked and go to 7
13:   end if
14: end while
Fig. 4. The selective tracker blocking algorithm
To this end, we design an algorithm that improves the solution quality iteratively. As shown in Fig. 4, in each iteration, we compute the blocking benefits B and the costs H for all the unblocked noncooperative trackers (as discussed in Eqns. (3) and (4)). If the total cost is within β, then we block the tracker with maximal W (W = B − H) in this iteration. After this blocking, we recompute the blocking benefits/costs (B and H) for the remaining trackers and go to the next round. Otherwise, if the availability constraint is violated or all the noncooperative trackers are blocked, the algorithm stops. In the next section, we show that our solution can bring considerable benefits for real-world ISPs.
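The iteration above amounts to a greedy knapsack-style selection. A simplified sketch follows, with the benefit B and cost H supplied as precomputed hypothetical numbers and the availability constraint expressed, for simplicity, as an accumulated cost budget; a full implementation would recompute B and H from the updated matrices after every block, as the text describes.

```python
# Greedy sketch of the selective tracker blocking loop. benefit[t] and
# cost[t] stand in for the B and H terms from Eqns. (3)-(4); the numbers
# below are hypothetical.

def select_blocked(noncoop, benefit, cost, beta_budget):
    """Iteratively block the noncooperative tracker with maximal W = B - H
    while the accumulated availability cost stays within the budget."""
    blocked, spent = [], 0.0
    candidates = set(noncoop)
    while candidates:
        # Pick the best unchecked candidate in this iteration.
        best = max(candidates, key=lambda t: benefit[t] - cost[t])
        if spent + cost[best] > beta_budget:
            candidates.discard(best)   # mark as checked, try the next one
            continue
        blocked.append(best)
        spent += cost[best]
        candidates.discard(best)
        # A full implementation would now update R_tor,tra / R_tor,as and
        # recompute B and H for the remaining candidates.
    return blocked

benefit = {"t1": 0.30, "t2": 0.10, "t3": 0.25}   # hypothetical values
cost    = {"t1": 0.05, "t2": 0.01, "t3": 0.08}
print(select_blocked(["t1", "t2", "t3"], benefit, cost, beta_budget=0.10))
# ['t1', 't2']  (t3 is skipped: blocking it would exceed the budget)
```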
5 Evaluation In this section, we evaluate the performance of our tracker blocking algorithm based on a real trace. First, we discuss the improvement of peer control for the ISPs. After that, we further investigate the traffic saving as well as the possible degradation of peer availability. We evaluate the tracker blocking problem in a real ISP with multiple ASes. This ISP includes AS#3352, AS#6461, AS#3301, AS#3243, and AS#2847, with 402,496 peers managed by 428 trackers. Among all these trackers, we find 104 noncooperative trackers that can hardly cooperate with the ISPs (based on their hosting of copyrighted content). We set β = 90% in this simulation, which indicates that the ISP needs to guarantee the access of 90% of its peers to the BT networks. Based on our algorithm, 10 noncooperative trackers are selected (blocked) in this ISP. The following two cases are discussed to evaluate its performance: Case #1, all the noncooperative trackers are neither modified nor blocked by the ISP; Case #2, a selective set of noncooperative trackers is blocked by the ISP while the other noncooperative
[Figure: number of peers (x 10^4) vs. probability of choosing the biased trackers, comparing the case where no noncooperative trackers are blocked against the case where some are blocked]
Fig. 5. Improvement of peer control

Table 3. Number of peers that will choose the modified biased trackers with certain probability

Pr       0.1-0.2   0.3-0.4   0.5-0.6   0.7-0.8   0.9-1.0
Case 1   30494     82496     137533    24100     52952
Case 2   20151     51943     108906    52819     154257
trackers are unchanged (and not blocked). Note that Case #2 is the proposed method; the comparison of these two cases indicates the benefit of our selective blocking approach. Figure 5 shows the comparison of these two cases. In particular, the E(x) values show that the gain is over 50%: E(x) in Case #1 is 0.4422 and E(x) in Case #2 is 0.6809 (recall that this value refers to the probability that the peers will be optimized in each case). In this figure, we can see that if we simply ignore these noncooperative trackers (Case #1), only 6.45% (25,952 out of 402,496) of peers will be optimized for sure, and most peers will be optimized with probability no greater than 0.5. On the other hand, with our selective blocking approach, 38.22% (153,833 out of 402,496) of peers will be optimized for sure. The detailed data of this figure can be found in Table 3. We also compute the case when all the noncooperative trackers are blocked. In this case, only 28.18% (113,419 out of 402,496) of peers are able to connect to the BT networks. This is less than our β value, which is set to 90%. The result indicates that over-blocking of noncooperative trackers will greatly harm the users’ experience and can hardly be appreciated. The improvement of peer control is encouraging, especially considering the low overhead of blocking only 10 trackers. To further investigate the saved cross-ISP traffic, we perform another simulation using the discrete-event BitTorrent simulator developed by Stanford University [12], as [1] did; we summarize the key network settings as follows:
[Figure: percentage of cross-ISP traffic vs. the β values in the algorithm]
Fig. 6. Improvement of traffic locality

[Figure: β values in the algorithm vs. the number of blocked noncooperative trackers]
Fig. 7. Degradation of peer availability
Table 4. Cross-ISP traffic statistics (in Figure 6)

Cross traffic   β = 0.9   β = 0.8   β = 0.7   β = 0.6
Max             0.887     0.837     0.784     0.738
Min             0.809     0.627     0.441     0.341
Median          0.833     0.677     0.456     0.368
Mean            0.861     0.719     0.566     0.491
Std             0.045     0.072     0.189     0.212
The network contains 13 ASes and 1,600 BT peers, where the peers are managed by 80 logical trackers, 75 of which are noncooperative. The peers are distributed among these trackers in a skewed fashion, as observed in our measurement. When one tracker is blocked, the peers will randomly switch to an alternative tracker as described in the standard BT protocol. Note that in this simulation, each AS belongs to an individual ISP. These ISPs will all benefit from the tracker blocking approach, and we compute the overall inter-AS (ISP) traffic to quantify this benefit. Regarding other configuration details, all peers inside the ISPs are modeled as nodes behind cable modem and DSL, with asymmetric upload/download bandwidth: the upload bandwidth of these peers is 100 Kbps and the download bandwidth is 1 Mbps. Considering peer arrival/departure, most peers join the network at once, i.e., the flash crowd scenario. We focus on this scenario since it is the most challenging for ISPs to handle. For each torrent, there is one original seeder that always stays online (with 400 Kbps uplink bandwidth), and other peers leave the BT network forever as soon as they finish downloading. We run multiple simulations to average out the randomness, and the results are shown in Figure 6 and Figure 7 with different β values. Figure 6 presents the traffic saving of our algorithm. We can see that when no trackers are blocked with β = 1.0 (only 5 out of 80 trackers are modified for traffic locality), the ratio of cross-ISP traffic is
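The fallback behavior assumed in the simulation, where a peer whose tracker is blocked retries a randomly chosen alternative from the announce-list, can be sketched as follows (the tracker names are hypothetical):

```python
import random

# Sketch of the simulated fallback: when a peer's tracker is blocked, it
# retries a random alternative from the torrent's announce-list, and gives
# up only when every listed tracker is blocked.

def pick_tracker(announce_list, blocked, rng=random):
    """Return a reachable tracker, or None if every tracker is blocked."""
    alive = [t for t in announce_list if t not in blocked]
    return rng.choice(alive) if alive else None

announce = ["tr1", "tr2", "tr3"]
print(pick_tracker(announce, blocked={"tr1", "tr2"}))          # tr3
print(pick_tracker(announce, blocked={"tr1", "tr2", "tr3"}))   # None
```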
around 90%. As we decrease the β value and block more noncooperative trackers, more peers will switch to the modified trackers, and the cross-ISP traffic will noticeably decrease. Comparing to Figure 7, we can see that blocking 20 trackers (β changed from 0.93 to 0.84) generally reduces cross-ISP traffic by about 17%. It is also worth noting that as we block more trackers, the peers’ availability decreases linearly, while the cross-ISP traffic decreases more slowly, with increased standard deviation. This result also indicates the inefficiency of over-blocking (see details in Table 4).
6 Further Discussions

This paper takes a first step towards the tracker blocking problem for traffic locality. There are still many practical issues that can be further explored.

[Figure: number of available trackers over time, July 2009 to March 2010]
Fig. 8. Number of available trackers over time

[Figure: CDF over the number of available months, separating highly available, normally available, and not available trackers]
Fig. 9. CDF of tracker availability
First, the trackers are dynamic, which can be considered in the locality deployment. We are currently probing the availability of more than 700 trackers (we have already obtained the data for over 8 months), as shown in Figure 8. We cluster the trackers into three classes: 1) highly available trackers (with total online time of more than 7 months); 2) normally available trackers (with total online time of more than 0 and less than 7 months); 3) not available trackers, as shown in Figure 9. We can see that more than 30% of trackers have very good availability. These trackers are more eligible to be blocked (if noncooperative) during the locality deployment. We are currently trying to add this information to further improve our model. Second, our model is based on the measurement information of R_tor,tra and R_tor,as. Therefore, an insufficient dataset may potentially reduce the benefit. Our ongoing work is to use the relationship between ASes and trackers to enhance our model. Note that this relationship can be easily computed/probed by the ISPs. We find that this relationship is quite consistent over time. More importantly, it can also be used to infer R_tor,tra and R_tor,as when either of them is missing.
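The three availability classes described above can be expressed as a simple threshold rule; the behavior at exactly 7 months is not specified in the text, so the boundary below is an assumption, and the tracker names and uptimes are hypothetical.

```python
# Sketch of the three availability classes from Figs. 8-9, keyed on the
# number of months a tracker was observed online. The treatment of exactly
# 7 months is an assumption; tracker names/uptimes are hypothetical.

def classify(online_months):
    if online_months > 7:
        return "highly available"
    if online_months > 0:
        return "normally available"
    return "not available"

uptimes = {"trk_a": 8, "trk_b": 3, "trk_c": 0}   # hypothetical probe results
print({t: classify(m) for t, m in uptimes.items()})
# {'trk_a': 'highly available', 'trk_b': 'normally available', 'trk_c': 'not available'}
```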
7 Conclusions In this paper, we studied the deployment of traffic locality when some tracker sites cannot be modified by the ISPs. Due to the existence of these noncooperative trackers, the ISPs will lose the control of many peers, which unavoidably reduces the efficiency of traffic locality. We show that a selective tracker blocking approach can well address this problem, and thus formulate the tracker blocking problem coherently with the torrents’ availability under the scenario of multiple tracker configuration. Our trace-based evaluation shows that this solution successfully reduces the cross-ISP traffic in the presence of noncooperative trackers, with minimal impact on torrents’ availability.
References
1. Bindal, R., Cao, P., Chan, W., Medved, J., Suwala, G., Bates, T., Zhang, A.: Improving Traffic Locality in BitTorrent via Biased Neighbor Selection. In: Proc. IEEE ICDCS (2006)
2. Liu, J., Wang, H., Xu, K.: Understanding Peer Distribution in the Global Internet. IEEE Network Magazine (2010)
3. Qiu, D., Srikant, R.: Modeling and Performance Analysis of BitTorrent-Like Peer-to-Peer Networks. In: Proc. ACM SIGCOMM (2004)
4. Karagiannis, T., Rodriguez, P., Papagiannaki, K.: Should Internet Service Providers Fear Peer-Assisted Content Distribution? In: Proc. ACM/USENIX IMC (2005)
5. Blond, S.L., Legout, A., Dabbous, W.: Pushing BitTorrent Locality to the Limit. INRIA Tech. Rep. (2008)
6. Xie, H., Yang, R.Y., Krishnamurthy, A., Liu, Y.G., Silberschatz, A.: P4P: Provider Portal for Applications. In: Proc. ACM SIGCOMM (2008)
7. Ren, S., Tan, E., Luo, T., Chen, S., Guo, L., Zhang, X.: TopBT: A Topology-Aware and Infrastructure-Independent BitTorrent Client. In: Proc. IEEE INFOCOM (2010)
8. Piatek, M., Madhyastha, H.V., John, J.P., Krishnamurthy, A., Anderson, T.: Pitfalls for ISP-friendly P2P Design. In: Proc. ACM HOTNETS (2009)
9. Cuevas, R., Laoutaris, N., Yang, X., Siganos, G., Rodriguez, P.: Deep Diving into BitTorrent Locality. Telefonica Research, Tech. Rep. (2009)
10. BitTorrent Multi-tracker Specification, http://www.bittornado.com/docs/multitracker-spec.txt
11. Neglia, G., Reina, G., Zhang, H., Towsley, D., Venkataramani, A., Danaher, J.: Availability in BitTorrent Systems. In: Proc. IEEE INFOCOM (2007)
12. BT-SIM, http://theory.stanford.edu/~cao/btsim-code.tgz
Defending against Sybil Nodes in BitTorrent Jung Ki So and Douglas S. Reeves Department of Computer Science North Carolina State University Raleigh, NC 27695-8206, USA {jkso,reeves}@ncsu.edu
Abstract. BitTorrent and its derivatives contribute a major portion of Internet traffic due to their simple and scalable operation. However, the lack of security mechanisms makes them vulnerable to attacks such as file piece pollution, connection slot consumption, and bandwidth exhaustion. These effects are made worse by the ability of attackers to manufacture new identities, or Sybil nodes, at will. The net effect of Sybil nodes and weak security leads to inefficient BitTorrent operation, or collapse. In this paper, we present defenses against threats from Sybil attackers in BitTorrent. A simple, direct reputation scheme called GOLF fosters peer cooperation to exclude potential attackers. Locality filtering tentatively identifies Sybil nodes based on patterns in IP addresses. Under the proposed scheme, Sybil attackers may still continue malicious behaviors, but their effect sharply decreases. Comparison to existing reputation models shows GOLF effectively detects and blocks potential attackers, despite false accusation. Keywords: BitTorrent; Sybil attacks; Reputation.
1 Introduction Peer-to-Peer (P2P) systems account for a major portion of Internet traffic. The P2P paradigm enables a wide range of applications to operate as scalable network services; examples are file sharing, VoIP, and media streaming. The BitTorrent protocol [1] is one of the most popular approaches to P2P file-sharing. This protocol encourages maximum peer cooperation to distribute files. BitTorrent-like systems, such as Vuze (Azureus), uTorrent, BitComet, Tribler, and PPLive, contributed more than 50% of all P2P traffic, and roughly one third of all Internet traffic, in 2008/2009 [2]. P2P systems in general are quite robust to failures, and adapt readily to rapidly-changing conditions. Unfortunately, systems based on BitTorrent may be vulnerable to deliberate attacks by determined adversaries [3,4,5,6]. This is because BitTorrent incorporates few security mechanisms, or mechanisms that are only partly effective. For instance, although the BitTorrent protocol includes coarse-grained data integrity checking (i.e., a SHA-1 hash image per piece), it is highly vulnerable to contamination by fine-grained data pollution (uploading of fake blocks). Dhungel et al. [7] showed that even one polluter in a channel can severely degrade a streaming service in PPLive (i.e., a BitTorrent-like streaming application). As another example, attackers are able to hinder a compliant peer from exchanging data with potential neighbors through fake control messages [6]. In addition, attackers can exhaust a legitimate peer’s upload bandwidth [8]. J. Domingo-Pascual et al. (Eds.): NETWORKING 2011, Part II, LNCS 6641, pp. 25–39, 2011. c IFIP International Federation for Information Processing 2011
Defending against attacks on P2P systems is made more difficult by the fact that one attacker can generate a great number of false identities at little cost; this is known as the Sybil attack [9]. The Sybil attack is a fundamental and pervasive problem in P2P systems. Attackers can use these identities to avoid detection, and to avoid repercussions for their malicious behavior. Since victims cannot differentiate Sybil attackers (Sybil nodes) from legitimate peers, it is difficult for a peer to avoid the above-mentioned attacks. Therefore, prevention or mitigation of Sybil attacks is key to making systems such as BitTorrent more robust. Sybil nodes can aggressively attempt to compromise the swarm, disseminate polluted (corrupted) file pieces, and exhaust peer resources. To address these problems, we propose a light-weight reputation scheme, called good leecher friends (GOLF), combined with locality filtering. GOLF detects polluted file blocks through a light-weight, fine-grained integrity check. Peers using GOLF share information with each other about attackers. This information is weighted by their history of previous, mutually-successful exchanges. By this means, peers can learn about and avoid attackers. Locality filtering flags possible Sybil nodes, based on similarities in their IPv4 addresses. The BitTorrent tracker maintains a locality filter that classifies participants. This filter is updated when a peer joins or leaves the swarm, and is distributed to seeders by the tracker. The primary aim of this paper is to mitigate the malicious impact from Sybil nodes through peer cooperation, in a way that is lightweight, and easily integrated with BitTorrent. As long as each peer cooperates with others, it can protect itself from attackers by use of GOLF with locality filtering. The proposed scheme has been implemented, and is shown to sharply reduce the impact of Sybil nodes. 
For example, the bandwidth cost is reduced by more than a factor of 10 in the presence of Sybil nodes. Comparison to other reputation schemes [10,11,12] shows that GOLF effectively detects Sybil nodes, despite the dissemination of false information from neighbors. GOLF is a decentralized approach, and does not require a central authority for collection or dissemination of reputation information. Finally, GOLF improves the detection of attackers in BitTorrent [11] by a factor of 3 or greater.
2 Related Work

Douceur [9] introduced the Sybil attack in distributed systems. One way to exclude Sybil nodes is a central authority: a trusted third party (TTP) can issue certificates for authorized participants, using public-key or identity-based cryptography. This approach has the standard drawbacks of a centralized infrastructure (overhead, lack of scalability and reliability), as well as requiring a sacrifice of anonymity. A system that charges for IDs can mitigate (but not prevent) the Sybil attack; the drawback is that barriers to entry discourage wide participation and cooperation. Decentralized approaches, such as resource testing [13,14], trusted networks [15,16], and reputation [17,18], are alternative defenses against the Sybil attack. Resource testing, based on the fact that a Sybil node has limited resources, may produce false positives in an environment where nodes have heterogeneous capacities. Yu et al. [15] showed that a trusted network (i.e., a social network) can mitigate the effects of Sybil nodes. Use of a trusted network may however incur cold-start problems (i.e., newcomer discrimination), increase reliance on a separate infrastructure, and limit scalability. Sybilproof [17]
Defending against Sybil Nodes in BitTorrent
considers Sybil strategies, where a user is only concerned with increasing his own reputation, and the impact of "badmouthing" (i.e., false accusations). Piatek et al. [12] attempted to achieve persistent incentives across swarms in BitTorrent systems. Their one-hop reputation scheme uses public/private key pairs for identity, which generates key-management overhead and limits scalability and anonymity. Lian et al. [19] evaluated private experience and shared history to achieve a balance of reputation coverage and accuracy. Such schemes are vulnerable to whitewashing (a type of Sybil attack) and collusion. Rowaihy et al. [14] reduced Sybil attacks with an admission control scheme that makes use of client puzzles and public key cryptography. Their scheme requires a trusted third party, creates artificial barriers to entry, and has the overhead of constructing a hierarchy. Sun et al. [20] investigated the effect of using Sybil nodes as a free-riding strategy. The MIS scheme [5] detects a fake block (pollution) attack in P2P streaming applications through the use of hash functions at the block level. The blacklisting approach [21] excludes the IP address ranges of attackers. The SafePeer plugin, a blacklist approach, requires a delay of between 2 and 20 minutes to import a database of blacklisted IP addresses [22]. This drawback has limited the usage of the SafePeer plugin. It may also mistakenly reject some benign peers in blacklisted IP address ranges. The rest of this paper describes a fully distributed scheme for dealing effectively with content pollution and Sybil attacks. There is no penalty for newcomers (no cold-start problem), and no sacrifice of anonymity. There is no reliance on a public key infrastructure, or on a trusted third party (other than the use of a tracker, which is a standard part of the BitTorrent protocol). There is no startup delay.
The scheme uses direct reputation evidence based on bartering volume in a swarm, and the effects of badmouthing and collusion are considered. Careful attention is given to the use of space- and communication-efficient encodings of information.
3 Assumptions and Threat Models

3.1 Assumptions

We consider a basic BitTorrent system¹. We assume the tracker and the torrent website provide correct information, and are available (methods of fail-over and redundancy are known and used). There is no central authority or trusted third party for peer authentication. Therefore, no peer can tell whether a peer identity has been faked, and all participants are initially assumed to be legitimate (non-malicious). A seeder can adopt different seeding algorithms to distribute file pieces to leechers. Each leecher follows the rate-based tit-for-tat (TFT) unchoking and LRF piece selection schemes [11].

¹ A BitTorrent system consists of a tracker, seeders, and leechers; this is collectively referred to as a swarm. The tracker is both a bootstrap server and a coordinator informing leechers of potential neighbors. Each peer can be either a leecher or a seeder: a leecher has an incomplete file, and a seeder has the complete file. Leechers obtain file pieces from other peers. Upon completion of file downloading, a leecher becomes a seeder. Readers are referred to [1] for more details.
[Figure 1 placeholder: a swarm with tracker, seeder, leechers, neighbor dictionary, and connection slots, annotated with attacks ➀–➃.]

Fig. 1. Overview of malicious behaviors from Sybil nodes
We assume that malicious nodes can act individually, or together (in collusion with one another). An individual node has limited resources but is able to generate fake identities. A determined adversary can create a large number of Sybil nodes and effectively control them. We believe it is considerably easier to create effective Sybil nodes in limited address ranges; [21] showed that attackers are usually located in small network ranges, and our measurement study supports this conclusion as well (Section 5.2).

3.2 Threat Models

Leechers may experience the effects of malicious behavior by Sybil nodes during piece exchange [4,7,6]. Malicious peers will cheat the seeder and the tracker [3]. Figure 1 shows how Sybil nodes can harm participants with the following attacks.

Connection slot attack (➀): Sybil nodes can aggressively request TCP connections to consume limited connection slots. Once a connection is established, the Sybil node can send its neighbors (seeders and leechers) fake control messages to maintain their interest. Although the cost of the control messages sent to neighbors is trivial, the attack can make it difficult for non-malicious peers to connect with other benign neighbors. The result will be slow download times, and a decrease in cooperation.

Bandwidth attack (➁): Sybil nodes may attempt to greedily consume the upload bandwidth of a seeder. In the event that Sybil nodes occupy most of the unchoke slots of the seeder, benign leechers may be starved (unable to download file pieces from the seeder). In addition, a Sybil node connecting with a benign peer may receive a considerable portion of the upload bandwidth of that peer.

Fake block attack (➂): Sybil nodes may send fake blocks to neighbors, to waste their download bandwidth and verification (computation) time. A Sybil node may initially appear to be complying with the TFT protocol.
Due to the coarse-grained file piece integrity mechanism (i.e., hash values are computed over whole file pieces), verification of fake blocks consumes a non-trivial amount of download bandwidth, reassembly effort, and buffer space, and the victim has to re-download the genuine pieces from other neighbors.

Swarm poisoning (➃): Malicious nodes create fake (Sybil) IDs and attempt to join a swarm. While the tracker may be trustworthy, it cannot discriminate whether a
joining peer is malicious without attack evidence. The tracker may therefore suggest Sybil nodes as potential neighbors whenever it is requested to provide neighbor lists.
4 GOLF Scheme and Locality Filtering

In this section, we present a simple reputation scheme, GOod Leecher Friends (GOLF), with locality filtering. The ultimate aim is to mitigate malicious attacks from Sybil nodes. Leechers cooperate with direct neighbors to combat Sybil nodes via GOLF. The tracker and seeders reduce the impact of Sybil nodes through locality filtering. The GOLF scheme enables a leecher to detect potential attackers by sharing its experiences with direct neighbors. Locality filtering helps the tracker and seeders discriminate against Sybil nodes, using an efficient data structure for the purpose.

4.1 GOLF Scheme

The goal of GOLF is to diminish the effect of attackers. GOLF relies upon cooperation among leechers. To identify possible Sybil nodes, each leecher uses a filter-based detection mechanism. GOLF expands the local view of attackers to immediate neighbors by exchanging information about past behavior. The local trust value is based on previous TFT volume, and the detection of corrupted blocks.

GOLF protocol: GOLF is based on good interactions, or exchanges of legitimate (non-corrupted) blocks between neighbors. If a neighbor interacts successfully and properly, the leecher regards the neighbor as a "friend". Otherwise, the leecher records the neighbor's ID and misbehavior in its attack history. The leecher will refuse connection requests from previously-misbehaving peers. The leecher propagates information about attackers to its direct neighbors, who can use that information in making their own decisions. Consequently, the gossip between friends can exclude potential attackers from connecting.

Block filter against the fake block attack: Sybil attackers can directly impact leechers by uploading corrupted blocks². Checking data integrity using the SHA-1 signature of a piece prevents leechers from accepting corrupted pieces, but at significant cost.
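To make the block-level check concrete, the following is a minimal sketch of a Bloom-filter-based block filter of the kind the scheme describes. This is a hedged illustration: the class name, parameters, and the SHA-1-derived hash functions are our choices, not prescribed by the paper. The seeder inserts every block of the file; a leecher tests each received block before accepting it.

```python
import hashlib

class BlockFilter:
    """Bloom-filter sketch of a per-file block filter (names are ours).

    m_bits: filter size in bits; k: number of hash functions, derived
    here from salted SHA-1 digests purely for illustration.
    """

    def __init__(self, m_bits=8 * 1024 * 1024, k=4):
        self.m = m_bits
        self.k = k
        self.bits = bytearray(m_bits // 8)

    def _indexes(self, block):
        # k pseudo-independent indexes from salted SHA-1 digests.
        for i in range(self.k):
            h = hashlib.sha1(bytes([i]) + block).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, block):
        # Seeder side: mark the k bits for one genuine block.
        for idx in self._indexes(block):
            self.bits[idx // 8] |= 1 << (idx % 8)

    def probably_genuine(self, block):
        # Leecher side: False => definitely corrupted;
        # True => genuine, up to the filter's false-positive rate.
        return all(self.bits[i // 8] >> (i % 8) & 1
                   for i in self._indexes(block))
```

A block that fails the test is definitely corrupted and its sender can be flagged immediately; a block that passes is genuine only up to the filter's false-positive rate, so the piece-level SHA-1 verification is still required.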
For instance, Sybil attackers may upload corrupted blocks of a piece, in return for being unchoked (TFT). Other blocks may be uploaded by other peers. When the piece signature fails verification, the leecher will not know which peer(s) provided the fake blocks. To tackle this problem, a block filter (BFilter) based on Bloom filtering [23] is used. The block filter is a summary of all blocks in the shared file. Figure 2(a) shows the creation steps for BFilter. The original seeder hashes each block in the file with k hash functions, and marks the corresponding k bits in the filter. After processing all blocks, the seeder adds this BFilter to the torrent metadata³. After obtaining the torrent file, leechers do not need to download it again when they rejoin the swarm. Although the size of BFilter in the torrent metadata depends on the number of blocks and the

² In the BitTorrent protocol, each file piece (e.g., 256KB) is further divided into blocks (e.g., 16KB per block) for exchange purposes.
³ The metadata contains information about the file name, its length, SHA-1 hashes, and the tracker location.
[Figure 2 placeholder: (a) Block filter creation; (b) Locality filter update.]

Fig. 2. The original seeder creates BFilter with all blocks, and the tracker updates LFilter with swarm participants. In (a), each piece is divided into even-size blocks (b1, b2, ..., bj). In (b), peer Pn indicates a newcomer and peer Pl indicates a leaver.
expected rate of false positives, it is very small relative to the size of most files being shared; detailed overhead costs are analyzed in Section 5.4. Additionally, unlike the MIS scheme, which relies on HMACs and a server's intervention [5], filter-based detection enables each leecher to directly identify the actual attacker (polluter).

Attacker detection using the block filter: GOLF uses BFilter to counter the fake block attack. Upon obtaining BFilter, leechers can check block integrity. Verification of a block involves repeating the k hash functions and checking that the expected k bits in the filter are set. Integrity checking can then be done on individual blocks, rather than solely at the file piece level. Failure to verify against the block filter indicates the block is corrupted, while successful verification means that the entire piece must still be downloaded and verified (via the SHA-1 hash). A leecher receiving a fake block, or a corrupted file piece, can set a flag indicating this neighbor is unreliable (assumed malicious). Each leecher independently maintains a history of attacks or misbehavior, based on its own direct interactions with other peers. Naturally, each leecher will prefer to cooperate with good leecher friends.

Countering false accusations: Sybil nodes may provide false information to their neighbors concerning their experiences. This has to be considered in the choice of information used to assess potential attackers. A Sybil node may falsely accuse a benign peer of malicious behavior. In order to reduce the effect of false accusations, trust is first based on individual (private) experience. Let D_i^t denote the total volume of genuine blocks downloaded from peer i through rechoke period t (in the normal TFT unchoking scheme, every rechoke period is 10 seconds), and let U_i^t denote the total volume uploaded to peer i. At every rechoke interval, a peer computes the contribution value C_i^t of each of its directly-connected neighbors i as

    C_i^t = D_i^t / (U_i^t + D_i^t),    where 0 <= C_i^t <= 1.

Note that symmetric exchange between neighboring peers will result in contribution values of 0.5. The bartering fraction of a neighbor i of a peer having N neighbors is simply D_i^t / Σ_{j=1..N} D_j^t. A peer computes the interaction value I_i^t of each of its neighbors i as the product of its bartering fraction and contribution value, i.e.,

    I_i^t = (D_i^t / Σ_{j=1..N} D_j^t) × C_i^t.

The interaction value can range from 0 (minimum interaction) to 1 (maximum interaction), and represents the importance of a neighbor. A neighbor uploading only a small fraction of the total received blocks, or downloading much more than it uploads, will have a small interaction value.

The trust value of a neighbor i, denoted T_i^t, is computed as

    T_i^t = I_i^t / Σ_{j=1..N} I_j^t.

The trust values represent the opinion of a peer about the neighbors with which it has directly bartered file pieces. A peer will compute a suspicion value S_k^t for other peers k based on the history of its direct interactions, and on information reported by other peers. This value ranges from 0 (not suspected of being malicious) to 1 (known to be malicious). If a peer has directly experienced an attack by neighbor i at rechoke period t', S_i^t will be set to 1 for all t >= t'. Peers exchange their suspicion values with each other, and use this reputation information to update their own suspicion values. A suspicion value reported by peer i about peer j at rechoke period t is denoted Δ_{i→j}^t. Upon receiving this reported suspicion value, a peer updates its own suspicion value S_j^t as

    S_j^t = [ (S_j^{t-1} × (t − 1) + T_i^t × Δ_{i→j}^t) / t ] − T_j^t        (1)
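The contribution, interaction, trust, and suspicion computations above can be sketched as follows. This is an illustrative reimplementation; the function names and dictionary-based bookkeeping are ours, not part of the protocol specification.

```python
def contribution(d, u):
    """C_i^t = D_i^t / (U_i^t + D_i^t); a symmetric exchange gives 0.5."""
    return d / (u + d) if (u + d) > 0 else 0.0

def interaction_values(downloads, uploads):
    """I_i^t = bartering fraction * C_i^t, for each neighbor i.

    downloads/uploads map neighbor id -> genuine volume exchanged."""
    total_d = sum(downloads.values())
    if total_d == 0:
        return {i: 0.0 for i in downloads}
    return {i: (downloads[i] / total_d) * contribution(downloads[i], uploads[i])
            for i in downloads}

def trust_values(interactions):
    """T_i^t = I_i^t / sum_j I_j^t."""
    total = sum(interactions.values())
    return {i: (v / total if total > 0 else 0.0)
            for i, v in interactions.items()}

def update_suspicion(s_prev, t, trust_reporter, delta, trust_subject):
    """Equation (1): running average of trust-weighted reports about peer j,
    discounted by the trust j has directly earned."""
    return (s_prev * (t - 1) + trust_reporter * delta) / t - trust_subject
```

For example, a neighbor that supplied 300 of the 400 genuine blocks a peer downloaded, in an even exchange, gets interaction value 0.75 × 0.5 = 0.375.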
The term inside the square brackets in Equation (1) represents the average degree of suspicion for peer j, while T_j^t reduces this according to the trust directly earned by j. The suspicion value is calculated for neighbors, and for peers for which Δ values are received. A peer makes an independent judgement about other peers, based on the received suspicion values and the stored trust values earned by successful interactions with its neighbors. Since the number of neighbors determines the possible bartering range in the swarm, the threshold for the suspicion value is set as a fraction of the number of connection slots. A high trust value based on direct experience diminishes the effect of other peers' prejudices against a neighbor. Each peer suspends the decision about whether to suspect a neighbor (to avoid a hasty judgement) until the provider of suspicion information has correctly bartered at least some minimum number of pieces. A malicious attacker will attempt to influence the suspicion value of a benign peer; false accusations correspond to inaccurately high suspicion values. In the following section, the impact of strategic Sybil nodes that attempt to compromise reputation information is considered.

4.2 Locality Filtering

Locality filtering reduces network resource exhaustion and swarm poisoning through IP address binning. In this approach, a bin represents peers who share the same IPv4 /24 address prefix (e.g., 10.9.8.6 and 10.9.8.7 share the same /24 prefix, while 10.9.8.6 and 10.9.5.6 do not). The tracker groups participants with the same /24 prefix using a locality filter (LFilter). Locality filtering helps a peer avoid Sybil nodes, thereby preserving network resources for benign leechers.

Locality tracking by the tracker: The tracker is charged with monitoring membership and participation in the swarm. The LFilter is an implementation of a counting Bloom filter [24].
As shown in Figure 2(b), the tracker maintains an LFilter that reflects a snapshot of current participants. The set of participants can be (and usually is) frequently changing; the tracker updates the LFilter whenever a peer joins or leaves. Each peer
reports its state to the tracker at regular intervals in the normal BitTorrent protocol. For example, when a newcomer joins, the tracker hashes its /24 prefix using k hash functions, and adds 1 to each resulting counter. Conversely, if a known peer leaves the swarm, the tracker decrements the corresponding k counters. The tracker shares the LFilter with seeders at regular intervals.

Locality tracking uses the LFilter to select neighbors in different /24 ranges. The tracker provides the requestor with suggestions for neighbors until it has sent a sufficient number. The tracker randomly selects candidate neighbors, and checks the /24 prefix of each candidate using the LFilter. If the number of peers in the swarm having the same /24 address prefix exceeds a threshold parameter, and one peer in this address range has already been suggested as a neighbor, the tracker will reject additional neighbors in this same address range before sending suggestions to the requestor.

Locality seeding by a seeder: In order to alleviate network resource exhaustion by Sybil nodes, a seeder uses the LFilter for effective unchoke allocation. If Sybil nodes take a majority of the unchoke slots, benign leechers will potentially suffer data starvation. Locality seeding is helpful in reducing the abnormal selection of Sybil nodes. Such seeding operates similarly to locality tracking. Requesting peers are sorted by some metric, such as download rate, random selection, or service priority. In this order, the seeder checks the requesting peer's /24 prefix against the LFilter. If the count for this address prefix is less than a threshold value, the seeder will assign the next unchoke slot to the requesting peer. Otherwise, the seeder will move on (in order) to the next candidate.
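The counting-Bloom-filter bookkeeping just described — join/leave updates over /24 prefixes, plus the threshold test used by locality tracking and seeding — can be sketched as follows. The class and method names are ours, and the default threshold of 5 follows the measurement in Section 5.2; treat this as an assumption-laden illustration rather than the paper's implementation.

```python
import hashlib

class LocalityFilter:
    """Counting Bloom filter over IPv4 /24 prefixes (a sketch; names are ours)."""

    def __init__(self, m=65536, k=3):
        self.m, self.k = m, k
        self.counts = [0] * m

    def _idx(self, ip):
        prefix = ".".join(ip.split(".")[:3])   # /24 prefix, e.g. "10.9.8"
        for i in range(self.k):
            h = hashlib.sha1(f"{i}:{prefix}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def join(self, ip):
        # Tracker side: a peer announced itself; bump its k counters.
        for i in self._idx(ip):
            self.counts[i] += 1

    def leave(self, ip):
        # Tracker side: a peer left; decrement its k counters.
        for i in self._idx(ip):
            if self.counts[i] > 0:
                self.counts[i] -= 1

    def prefix_count(self, ip):
        # Counting Bloom filters may over-count on collisions, never under-count.
        return min(self.counts[i] for i in self._idx(ip))

    def over_threshold(self, ip, threshold=5):
        """True if this /24 range is crowded enough to look like a Sybil range."""
        return self.prefix_count(ip) >= threshold
```

The tracker would call join/leave on peer announcements and consult over_threshold when suggesting neighbors; a seeder would consult it before granting an unchoke slot.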
5 Evaluation and Discussion

This section presents a trace measurement and the results of applying GOLF with locality filtering. The goal is to understand the performance of the proposed scheme against malicious behavior by Sybil nodes. The experimental setup is first described, followed by the results and discussion. In order not to impact a real BitTorrent swarm, we report the results of simulation, rather than mounting attacks in actual networks.

5.1 Experimental Setup

We developed a BitTorrent simulator that is a faithful implementation of the BitTorrent protocol, with the ability to enable or disable GOLF and locality filtering. The simulator is event-driven, and includes events such as joins and leaves, bartering pieces, unchoking (including optimistic unchoking), and exchange of piece messages. The normal BitTorrent TFT and LRF policies were implemented. Sybil actions such as sending fake blocks, discarding received data from leechers, consuming seeders' bandwidth, and making false accusations were also implemented. In the simulator, some fraction of the nodes were assumed to be Sybil nodes; the exact fraction is described for each experiment. Peer addresses, except for Sybil nodes, were for the most part located in different /24 address ranges. This assumption is consistent with the measurements described in Section 5.2. A random delay caused by the impact of network topology was added when sending a piece to all peers [8]. According to [25],
the volume of the control messages in BitTorrent is negligible compared to piece messages; thus, we do not model delays due to control messages. To reduce simulation complexity, the network was assumed to have no bottlenecks or transmission errors [8]. Each peer had an asymmetrical bandwidth capacity that reflects standard ADSL models [26]. Every peer had an upload rate between 500 Kbps and 1.3 Mbps; the original seeder had an upload rate of 5 Mbps. Locality tracking was implemented in the tracker module. The simulator included three different seeding algorithms (i.e., bandwidth-first, random, and round-robin seeding) for leecher selection. Results were similar for each, and only the evaluation results for round-robin (RR) seeding are described in this section. The RR seeding algorithm sorts leechers based on their service priority (i.e., leechers having received the least are given the highest priority). This seeding algorithm combined with locality filtering is denoted as CRR in the following. The number of peers was limited to 1,000 nodes, based on a previous measurement study [27]. Each simulation started with one seeder and one tracker, which served all participants in the swarm throughout the simulation. Peers joined the swarm based on an arrival process derived from a real BitTorrent trace log [28]. Once it had downloaded the entire file, a leecher became a seeder until it left the swarm. To explore malicious attacks, the fraction of Sybil nodes was varied from 5% to 50%. File sizes were set between 5 MB and 500 MB; results are shown only for smaller sizes (larger file sizes yielded similar results). A simulation run finished when all benign peers completed the file download. Each simulation was run 30 times to compute 95% confidence intervals.

5.2 Measurement Study with RedHat9

We analyzed the distribution of IPv4 addresses of peers in a RedHat9 (1.77GB) trace [28].
The trace reflects downloads over a period of 5 months, and contains all events seen by the tracker of the torrent. The log contains report time, IP address, port number, peer ID, upload and download size, and events. Results are presented for the distribution of IPv4 addresses during the first 5 days of flash crowd events, which are particularly challenging for file sharing systems.

Table 1. Number of peers per IP /24 prefix

# of peers in IP/24   Day 1          Day 2         Day 3         Day 4         Day 5
1                     13,306 96.2%   5,049 96.0%   3,451 97.0%   2,624 97.1%   2,230 97.4%
2                        439  3.2%     184  3.4%      81  2.3%      60  2.2%      38  1.7%
3                         46  0.3%      18  0.3%      11  0.3%       8  0.3%       1  0.0%
4                         16  0.1%       3  0.1%       0  0.0%       3  0.1%       1  0.0%
>=5                       31  0.2%       8  0.2%      13  0.4%       6  0.2%      19  0.8%
Table 1 shows the number of peers per /24 prefix. At least 96% of leechers were in a /24 address range with no other leechers present. Address ranges with 4 or fewer leechers present accounted for 99.2% of all leechers. Accordingly, in the following a threshold parameter value of 5 was used to identify potential Sybil node address ranges.
5.3 Experimental Results

We present the results of simulating the proposed scheme against Sybil nodes, covering both the peer and performance impacts.

Seeder impact: In the first experiment, the seeder was required to distribute 5 MB of content to all benign users. The total bandwidth required to achieve this included bandwidth wasted on malicious (Sybil) nodes. Figure 3(a) shows the total amount of data sent by the seeder for the round-robin seeding policy. Performance was measured with and without locality filtering. Locality filtering greatly reduces the impact of Sybil nodes: the bandwidth consumed by Sybil nodes is decreased by a factor of 10 or greater once the Sybil node percentage exceeds 10%. This is because the filter helps the seeder allocate most unchoke slots to benign leechers, not Sybil nodes.

Benign user impact: The second experiment evaluated the average number of downloaded fake blocks per leecher, in a swarm sharing a file of size 100 MB. Figure 3(b) shows the results. With RR seeding (without locality filtering), the average number of fake blocks downloaded per leecher increased exponentially as the Sybil node fraction increased. The proposed scheme (locality filtering + GOLF), however, decreased the downloading of fake blocks to almost zero. This is because each leecher discriminates against direct and reported attackers using GOLF.

Completion time: The next experiment investigated the average time for benign leechers to download an entire file of size 100 MB. The results are shown in Figure 3(c). BitTorrent without locality filtering showed exponential increases as the percentage of Sybil nodes increased. This is because Sybil nodes occupy unchoke slots of benign peers, reducing the opportunities for benign peers to exchange file pieces with one another. In contrast, the use of locality filtering resulted in near-constant download completion times, regardless of the fraction of Sybil nodes.
Collusion effect: Another experiment investigated the impact of collusion among attackers. In this scenario, Sybil nodes were distributed among multiple IP /24 prefixes. The number of distinct prefixes is referred to here as the number of colluders, and was varied, as was the percentage of Sybil nodes. The occurrence of collusion had little impact on the download completion time of benign users, and as such, is not shown. The seeder, however, was affected by the number of colluders. Figure 3(d) shows these results. Without locality filtering, the upload bandwidth ("total size of data" in the figure) of the seeder increased exponentially as a function of the percentage of Sybil nodes. Collusion also affected BitTorrent with locality filtering, until the number of Sybil nodes per /24 address range exceeded the threshold parameter. Thereafter, locality filtering greatly reduced the waste of seeder bandwidth (by a factor of 30 or greater for 50% Sybil nodes). An attacker who is able to spread Sybil nodes throughout the network will obviously have more impact, but at a significantly higher cost of implementation and deployment.

Attacker detection coverage of the GOLF scheme: BitTorrent with TFT is limited in its view. GOLF is intended to disseminate knowledge of attackers slightly more widely, but with limited overhead (no non-neighbor communication or global coordination
[Figure 3 placeholder: eight panels — (a) Seeder impact; (b) Leecher impact; (c) Completion time; (d) Collusion impact; (e) Attacker coverage; (f) GOLF vs. EigenTrust; (g) Impact of false accusation; (h) Effect of false positives.]

Fig. 3. Evaluation results. I-shaped lines indicate 95% confidence intervals. From (a) to (f), the x-axis indicates the percentage of Sybil nodes. In (g), the x-axis indicates the percentage of liars, with 20% Sybil nodes. In (h), the fraction of nodes that were benign and not suspected of being Sybil nodes (true negatives) is varied from 99% to 95% to 90%; the fraction of nodes that were benign but (incorrectly) suspected of being Sybil nodes (false positives), and the fraction of nodes that were Sybil nodes but not suspected (false negatives), are varied from 1% to 5% to 10%. In (b) and (h), the inner graph magnifies the result.
required). In the next experiment, the effectiveness of GOLF in identifying Sybil nodes was measured. The results are shown in Figure 3(e) as the probability of (correctly) detecting attackers. Three cases are considered: (1) attackers are detected only by direct experience (i.e., TFT) [11]; (2) attackers are detected by direct experience or by information provided by immediate neighbors; or (3) attackers are detected by direct experience, information from immediate neighbors, and information from their neighbors (i.e., one-hop neighbors) [12]. The use of information from immediate neighbors, weighted by their trustworthiness, results in a three-fold increase in the likelihood of detecting Sybil nodes, from about 25% to over 75%. The use of information from one-hop neighbors provided additional benefits in this experiment; on the other hand, one-hop GOLF incurs uncertainty and complexity concerning one-hop neighbors' trust. Liars (i.e., Sybil nodes that falsely accuse leechers) compromised peers' attacker histories, and some peers mistakenly rejected connection requests from benign peers. In this experiment, a maximum of 6.76% of peers never completed downloading the file because of false information.

Comparison to EigenTrust with false accusations: Generally, reputation systems are vulnerable to false information. Trust in EigenTrust [10] reflects global and local updates. The global vector is liable to be compromised by badmouthing from malicious attackers; even when a local trust value is high, a peer might mistakenly block a connection request from a benign peer. Similarly, liars (Sybil nodes) can make false accusations about other peers in the GOLF scheme. The last experiment compared GOLF to EigenTrust with respect to detection rates and false positive rates when there are false accusations. Figure 3(f) shows the probability of detecting Sybil nodes and of falsely accusing benign peers, for a file of size 100 MB.
With false accusations, the false positive rate of GOLF is lower than that of EigenTrust. The percentage of falsely rejected peers out of the total ranged from 1.5% (for 5% Sybil nodes) to 16% (for 50% Sybil nodes). EigenTrust achieves a better attacker detection rate; to accomplish this, however, EigenTrust requires pre-trusted peers and incurs much higher communication and computation overhead. By comparison, GOLF uses a simple computation based on empirical piece interactions, in a distributed manner.

5.4 Discussion

We discuss adversary reactions to GOLF, and the issue of false positives. After that, we analyze the overhead of deploying GOLF with locality filtering.

Adversary strategies against GOLF: Generally, reputation systems are vulnerable to counter-strategies. For example, Sybil nodes may be liars (make false accusations about other peers), traitors (engage in productive exchanges before providing false information about other peers), or whitewashers (when accused, leave and rejoin with a new identity). The effect of false accusations is mitigated by weighting reports by trust (inversely, suspiciousness). Figure 3(g) shows the average completion time of benign users as a function of the number of neighbors that lie, for a 100 MB file. In this experiment, 20% of the nodes are Sybil nodes. The completion time increases by about 500 seconds compared to the no
attack case. This is because the reports of liars are incorporated by benign neighbors and propagated to their friends. Adjusting the computation of trust to further reduce the effects of liars and traitors remains future work.

False positives by locality seeding: False positives may occur because of the innocent presence of benign peers in the same /24 address range as Sybil nodes. For instance, a number of benign peers behind NATs may be falsely identified as Sybil nodes. They may experience very slow download speeds because of the discrimination of locality seeding. In spite of the delay in obtaining initial currency (i.e., uploading 4 file pieces), locality tracking can help these peers overcome the seeder's discrimination. Figure 3(h) shows the effect of false positives by locality filtering. It compares average completion times under the CRR model for a 100MB file, with the detection category of each peer set by locality seeding. A benign node's completion time is not affected much, regardless of whether or not it is suspected. The false positives (i.e., benign peers behind NATs) are delayed around two minutes on average.

Deployment overhead: To deploy the proposed scheme, additional costs are incurred. The size of BFilter depends on the file size, and the size of LFilter depends on the number of participants. We propose the use of Bloom filters, which are well-known space-efficient data structures that are easily updated. The computational overhead consists of multiple hash operations to compare values. BFilter for a 2GB file requires 1MB of storage, and LFilter for 1,000 peers in the swarm requires 8KB of storage. Note that the information sent to the seeder does not have to be the entire counting filter. To reduce size overhead, the tracker can inform the seeder of locality violations using a much smaller Bloom filter. Communication overhead is due to the need to share BFilter, LFilter, and attacker information.
BFilter, shared among peers, can be included in the torrent file that is already downloaded when a peer first joins. LFilter can be updated whenever a seeder queries the tracker to harvest new neighbors. Attacker reports can also be combined with existing control messages.
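To make the space/computation trade-off above concrete, the following is a minimal Bloom filter sketch (with hypothetical parameters; not the authors' implementation): each membership test costs only k hash probes, and an 8KB bit array is a plausible stand-in for an LFilter over roughly 1,000 peers.

```python
import hashlib


class BloomFilter:
    """Minimal Bloom filter: m bits, k hash probes per item."""

    def __init__(self, m_bits, k_hashes):
        self.m = m_bits
        self.k = k_hashes
        self.bits = bytearray((m_bits + 7) // 8)

    def _probes(self, item):
        # Derive k probe positions from a single SHA-1 digest
        # (20 bytes supports up to 5 four-byte probes).
        digest = hashlib.sha1(item).digest()
        for i in range(self.k):
            chunk = digest[4 * i:4 * i + 4]
            yield int.from_bytes(chunk, "big") % self.m

    def add(self, item):
        for p in self._probes(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, item):
        return all(self.bits[p // 8] & (1 << (p % 8))
                   for p in self._probes(item))


# 8KB of bits with 4 probes, as a stand-in for an LFilter over ~1,000 peers.
lf = BloomFilter(8 * 1024 * 8, 4)
lf.add(b"10.0.0.7:6881")
```

Lookups never produce false negatives, so an added peer is always found; the only cost of the compact representation is a small false positive probability.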
6 Conclusion

This paper proposes the GOLF scheme with locality filtering to mitigate Sybil attacks in a BitTorrent system. GOLF uses cooperation between directly-connected peers to spread information about suspected attackers. Each leecher learns of such suspicions from neighbors with whom it exchanges file pieces. The input from neighbors is weighted by their past beneficial behavior. Locality filtering helps a seeder evade traffic exhaustion by Sybil nodes, and helps the tracker guide leechers to good neighbors in the swarm. The overhead of locality filtering is mitigated by the use of Bloom filters. Whereas Sybil nodes devastate the performance of the standard BitTorrent, the proposed scheme effectively defends against the malicious behavior of Sybil nodes. By virtue of GOLF with locality filtering, the expected download completion time for non-malicious nodes is affected very little by the Sybil attack. The data that must be uploaded by a seeder when Sybil nodes are present is reduced by a factor of 10 or greater.
J.K. So and D.S. Reeves
Acknowledgments. We thank the anonymous reviewers for their helpful feedback. This work was partly supported by the Secure Open Systems Initiative (SOSI) at North Carolina State University.
References

1. The BitTorrent protocol specification, http://wiki.theory.org/BitTorrentSpecification
2. Schulze, H., Mochalski, K.: Ipoque internet study (2008/2009), http://www.ipoque.com/study/ipoque-Internet-Study-08-09.pdf
3. Konrath, M.A., Barcellos, M.P., Mansilha, R.B.: Attacking a swarm with a band of liars: evaluating the impact of attacks on bittorrent. In: P2P Computing, pp. 37–44 (2007)
4. Dhungel, P., Wu, D., Ross, K.W.: Measurement and mitigation of bittorrent leecher attacks. Computer Communication 32(17), 1852–1861 (2009)
5. Wang, Q., Vu, L., Nahrstedt, K., Khurana, H.: MIS: malicious nodes identification scheme in network-coding-based peer-to-peer streaming. In: INFOCOM 2010, pp. 296–300 (2010)
6. Levin, D., LaCurts, K., Spring, N., Bhattacharjee, B.: Bittorrent is an auction: analyzing and improving bittorrent's incentives. In: SIGCOMM 2008, vol. 38(4), pp. 243–254 (2008)
7. Dhungel, P., Hei, X., Ross, K.W., Saxena, N.: The pollution attack in P2P live video streaming: measurement results and defenses. In: P2P-TV 2007. ACM, New York (2007)
8. Shin, K., Reeves, D.S., Rhee, I.: Treat-before-trick: Free-riding prevention for bittorrent-like peer-to-peer networks. In: IPDPS 2009, pp. 1–12 (2009)
9. Douceur, J.R.: The sybil attack. In: Druschel, P., Kaashoek, M.F., Rowstron, A. (eds.) IPTPS 2002. LNCS, vol. 2429, pp. 251–260. Springer, Heidelberg (2002)
10. Kamvar, S.D., Schlosser, M.T., Garcia-Molina, H.: The eigentrust algorithm for reputation management in P2P networks. In: WWW 2003, pp. 640–651 (2003)
11. Cohen, B.: Incentives build robustness in bittorrent. In: P2PECON 2003, Berkeley (May 2003)
12. Piatek, M., Isdal, T., Krishnamurthy, A., Anderson, T.: One hop reputations for peer to peer file sharing workloads. In: NSDI 2008, pp. 1–14 (2008)
13. Newsome, J., Shi, E., Song, D.X., Perrig, A.: The sybil attack in sensor networks: analysis & defenses. In: IPSN, pp. 259–268 (2004)
14. Rowaihy, H., Enck, W., McDaniel, P., La Porta, T.: Limiting sybil attacks in structured P2P networks. In: INFOCOM 2007, pp. 2596–2600 (2007)
15. Yu, H., Kaminsky, M., Gibbons, P.B., Flaxman, A.: Sybilguard: defending against sybil attacks via social networks. In: SIGCOMM, pp. 267–278 (2006)
16. Tran, N., Min, B., Li, J., Subramanian, L.: Sybil-resilient online content voting. In: NSDI 2009, pp. 15–28. USENIX Association, Berkeley (2009)
17. Cheng, A., Friedman, E.: Sybilproof reputation mechanisms. In: P2PECON 2005 (2005)
18. Yu, H., Shi, C., Kaminsky, M., Gibbons, P.B., Xiao, F.: DSybil: Optimal sybil-resistance for recommendation systems. In: IEEE Symposium on Security and Privacy, pp. 283–298 (2009)
19. Lian, Q., Peng, Y., Yang, M., Zhang, Z., Dai, Y., Li, X.: Robust incentives via multi-level tit-for-tat. Concurr. Comput.: Pract. Exper., pp. 167–178 (2008)
20. Sun, J., Banerjee, A., Faloutsos, M.: Multiple identities in BitTorrent networks. In: Akyildiz, I.F., Sivakumar, R., Ekici, E., de Oliveira, J.C., McNair, J. (eds.) NETWORKING 2007. LNCS, vol. 4479, pp. 582–593. Springer, Heidelberg (2007)
21. Liang, J., Naoumov, N., Ross, K.W.: Efficient blacklisting and pollution-level estimation in P2P file-sharing systems. In: Cho, K., Jacquet, P. (eds.) AINTEC 2005. LNCS, vol. 3837, pp. 1–21. Springer, Heidelberg (2005)
22. SafePeer, http://wiki.vuze.com/w/SafePeer
23. Broder, A.Z., Mitzenmacher, M.: Network applications of Bloom filters: A survey. Internet Mathematics 1(4) (2003)
24. Bonomi, F., Mitzenmacher, M., Panigrahy, R., Singh, S., Varghese, G.: An improved construction for counting bloom filters. In: Azar, Y., Erlebach, T. (eds.) ESA 2006. LNCS, vol. 4168, pp. 684–695. Springer, Heidelberg (2006)
25. Legout, A., Urvoy-Keller, G., Michiardi, P.: Rarest first and choke algorithms are enough. In: IMC 2006, pp. 203–216. ACM, New York (2006)
26. ADSL, http://en.wikipedia.org/wiki/Asymmetric_digital_subscriber_line
27. Legout, A., Liogkas, N., Kohler, E., Zhang, L.: Clustering and sharing incentives in bittorrent systems. In: SIGMETRICS 2007, pp. 301–312. ACM, New York (2007)
28. Redhat 9 torrent tracker trace, http://mikel.tlm.unavarra.es/~mikel/bt_pam2004/
Traffic Localization for DHT-Based BitTorrent Networks

Matteo Varvello and Moritz Steiner

Bell Labs, Alcatel-Lucent, USA
{matteo.varvello,moritz.steiner}@alcatel-lucent.com
Abstract. BitTorrent is currently the dominant Peer-to-Peer (P2P) protocol for file-sharing applications. BitTorrent is also a nightmare for ISPs due to its network-agnostic nature, which is responsible for high network transit costs. The research community has deployed a number of strategies for BitTorrent traffic localization, mostly relying on the communication between the peers and a central server called the tracker. However, BitTorrent users have been abandoning the trackers in favor of distributed tracking based upon Distributed Hash Tables (DHTs). The first contribution of this paper is a quantification of this trend: we monitor the BitTorrent traffic (both tracker-based and DHT-based) within a large ISP over four consecutive days. The second contribution of this paper is the design, prototype, and preliminary evaluation of the first traffic localization mechanism for DHT-based BitTorrent networks.

Keywords: peer-to-peer, measurement, traffic management.
1 Introduction
BitTorrent is by far the most popular Peer-to-Peer (P2P) protocol, adopted by several file-sharing applications such as µTorrent [27] and Azureus [3]. The BitTorrent protocol aims to maximize the volume of data exchanged among peers without taking into account their geographic location. This causes expensive inter-ISP traffic, and thus considerable monetary losses for ISPs.

Several interesting strategies have been proposed to achieve localization of BitTorrent traffic (Section 2). The common approach of these designs is to bias the peer selection strategy in favor of local peers, i.e., peers located at the same ISP. The most recent and effective design leverages the trackers in order to allow communication only among local peers [28]. The trackers are the central servers used in BitTorrent to coordinate a file exchange.

Recently, BitTorrent introduced a distributed tracking feature (Section 3). A client can discover which peers hold a copy or a portion of a file by querying a Distributed Hash Table (DHT) [10]. This feature makes traffic localization mechanisms based on the central trackers ineffective. Currently, two large and incompatible DHTs are used in the BitTorrent community: the Azureus [3] and the Mainline [17] DHT.

J. Domingo-Pascual et al. (Eds.): NETWORKING 2011, Part II, LNCS 6641, pp. 40–53, 2011. © IFIP International Federation for Information Processing 2011
The first contribution of this work is a quantification of user interest in BitTorrent distributed tracking (Section 4). Over four consecutive weekdays, we intercept the traffic exchanged between BitTorrent clients located at a large ISP and popular trackers. Meanwhile, we monitor the activity of the ISP subscribers in both the Azureus and Mainline DHT. We find that about 40% of the BitTorrent users in our sample have already abandoned the trackers in favor of the DHTs. We believe that this already large fraction of users will grow rapidly in the next years. This motivates our research on traffic localization for DHT-based BitTorrent networks.

The second contribution of this work is the design and prototype of the first localization mechanism for DHT-based BitTorrent networks (Section 5). Our localization mechanism works in two steps. First, we intercept within the DHT the announces for popular files. Then, we intercept the requests for these popular files so that we can reply with sets of peers located at the same ISP as the requesting peer. In order to intercept all announces and requests for popular files, we introduce a large number of peers in the DHT (controlled by a single entity) whose identifiers are very close to the identifiers of those files [23]. We focus on popular files for two reasons. First, only the traffic associated with files requested by more than one peer at the same ISP at the same time has potential for localization [8]. Second, we aim to minimize the number of files to be localized. In fact, the number of peers that we need to insert in the DHT to achieve traffic localization scales linearly with the number of files that we localize. Note that identifying popular content within a DHT is a non-trivial problem. In this paper, we assume that content popularity is known; the discovery of popular content within a DHT is left for future work.
The third contribution of this paper is a preliminary evaluation of the proposed localization mechanism for DHT-based BitTorrent networks. For this evaluation, we deploy a prototype for the Mainline DHT and monitor the benefits of traffic localization for different ISPs (Section 6). Our evaluation shows that virtually all of the traffic associated with the download of a very popular file is kept local within a large ISP. However, at ISPs where the file is less popular, only 3 to 25% of the transit traffic is saved on average. This problem arises only while running our system in the wild and could not be foreseen during the design phase. In fact, it is due to subtle differences in the DHT implementation across different BitTorrent clients. We are currently working to eliminate this limitation of the proposed localization mechanism.
2 Related Work
In order to achieve localization of P2P traffic, data exchanges between peers located at the same ISP need to be enforced when possible. Accordingly, several designs [6,14,16,21] propose a modification of the peer selection mechanism at the P2P client. These designs share the same rationale and differ mainly in the mechanism they use to identify the ISP of the remote peers. Ledlie et al. [14] collaborated with Azureus [3] in order to improve the network coordinate system integrated in the Azureus client. Their system is based on
Vivaldi [9], which assigns to each peer coordinates from a low-dimensional space such that the distance between peer coordinates reflects the delay between the corresponding peers. Azureus leverages the information provided by its network coordinate system to encourage communication among local peers. A different and more effective approach is proposed in [6]. This scheme uses the information collected by CDN providers such as Akamai in order to favor communication among local peers. Precisely, a pair of peers are considered local if they are associated with the same set of CDN caches most of the time.

Both designs [6,14] share a common limitation. Given that a peer is only aware of a small subset of the peers that hold a copy or a portion of a file, the probability that this subset contains peers from its ISP is very low. Thus, these designs manage to select only the few, if any, local peers from the received peer-set.

Motivated by the need to overcome this limitation, Xie et al. [28] propose to inform the trackers about the ISP of each peer. In this way, a tracker can reply to a peer request for a specific torrent with a list of peers located in the same ISP as the requesting peer. This approach is inspired by Aggarwal et al. [1] and Bindal et al. [4], who both suggest using ISP support to drive the construction of generic P2P networks. The drawback of these designs is that they require cooperation between an ISP and P2P networks, which is unlikely to occur.

Our work is the logical continuation of previous work in the P2P traffic localization space. Similarly, we aim to bias the peer selection strategy by exposing to a peer only the information about peers belonging to its ISP. The main departure from previous work is that we do not rely on the client-to-tracker communication.
3 Background
This Section presents a brief overview of peer discovery in BitTorrent. This is important as the mechanisms BitTorrent uses to discover peers impact the structure of the P2P network, and consequently the data dissemination. Thus, knowledge of peer discovery in BitTorrent is fundamental for a clear understanding of traffic localization. It is not our intention to present a complete overview of the BitTorrent protocol; the reader may find one in [7,15].

Traditionally, BitTorrent employs a tracker, or central server, in order to discover peers and coordinate file exchanges. Peers retrieve the address of the tracker from within a torrent file they download from the web, i.e., a metadata file that contains useful information for the file exchange. Initially, a peer contacts the tracker to retrieve a list of peers participating in the swarm, i.e., the group of peers that hold the file or a portion of it. The tracker answers with the peer-list, a random subset of active peers generally composed of 50 peers. Afterwards, a peer periodically interacts with the tracker in order to report the volume of bytes it has downloaded and uploaded. In response, the tracker sends the peer a new peer-list. The frequency of communication between client and tracker is regulated by the tracker via the min interval field contained in the tracker replies. Usually, it is set to 15 minutes.
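The tracker behavior just described can be sketched as follows. This is a hypothetical simplification for illustration only (real trackers speak HTTP with bencoded responses); the constants reflect the typical values mentioned above.

```python
import random

MIN_INTERVAL = 15 * 60   # seconds between client announces (typical value)
PEER_LIST_SIZE = 50      # typical size of the returned random peer subset


def tracker_announce(swarm, peer, uploaded, downloaded):
    """Register the announcing peer's stats and return a random peer-list."""
    swarm[peer] = {"up": uploaded, "down": downloaded}
    # The reply never includes the announcing peer itself.
    others = [p for p in swarm if p != peer]
    sample = random.sample(others, min(PEER_LIST_SIZE, len(others)))
    return {"interval": MIN_INTERVAL, "peers": sample}


# A swarm of 80 peers, then one new peer announces.
swarm = {}
for i in range(80):
    tracker_announce(swarm, ("10.0.0.%d" % i, 6881), 0, 0)
reply = tracker_announce(swarm, ("192.0.2.1", 6881), 1024, 4096)
```

The random sampling is exactly the locality-blind behavior that the localization designs in Section 2 try to bias toward local peers.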
Recently, BitTorrent introduced decentralized tracking, a feature that enables any peer to act as a tracker by means of a Distributed Hash Table (DHT)1 [10]. The BitTorrent DHT is used to store and locate information about which peers hold which files. Each peer and file is assigned a unique identifier computed using a hash function. We call nodeID the identifier of a peer and info hash the identifier of a file. Both identifiers are 160 bits long and share the same hash space [20].

Recent BitTorrent client implementations use both the central tracker and the DHT in order to discover peers. Originally, the DHT was intended to be a backup source of peers in case the tracker is unreachable. However, in some cases the file exchange is performed relying only on the DHT. This happens when users download magnet links from the web, i.e., pointers to the info hash of a file. This scenario is becoming very frequent since popular torrent indexing websites started to also index magnet links.

Besides the tracker and the DHT, the Peer-Exchange-Protocol (PEX) is the third mechanism to discover peers that participate in a file exchange. PEX allows peers that download the same file to exchange their peer-sets via gossiping.
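A magnet link carries the info hash directly in its `xt=urn:btih:` parameter, which is why no tracker contact is needed. A short sketch of extracting it (the example hash below is arbitrary, not from the paper):

```python
from urllib.parse import urlparse, parse_qs


def info_hash_from_magnet(link):
    """Extract the 160-bit info hash (as a hex string) from a magnet link."""
    params = parse_qs(urlparse(link).query)
    for xt in params.get("xt", []):
        if xt.startswith("urn:btih:"):
            return xt[len("urn:btih:"):].lower()
    raise ValueError("no BitTorrent info hash in magnet link")


link = ("magnet:?xt=urn:btih:C12FE1C06BBA254A9DC9F519B335AA7C1367A88A"
        "&dn=example")
```

With the info hash in hand, a client can go straight to the DHT's get_peers lookup, bypassing the tracker entirely.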
4 The Role of DHTs in the BitTorrent Network
In this Section, we aim to answer the following question: how many BitTorrent users rely only on the DHT to manage their file exchanges? To this end, we overview results obtained by monitoring the BitTorrent traffic in a large ISP.

4.1 Methodology and Data Collection
Our methodology is to intercept the client-to-tracker traffic at ISP scale while monitoring both the Azureus and Mainline DHT. We intercept the client-to-tracker traffic by setting up at ISP border routers several filtering rules that match the IP addresses of popular trackers. The rationale of this measurement strategy is that popular trackers do not reside at the ISP where we collect traces. Since we cannot set up an unlimited number of filtering rules at an ISP border router, we only focus on the most popular trackers. We identify them by crawling the torrent files stored at the major torrent indexing websites, namely PirateBay, BitTorrent, MiniNova, IsoHunt, SuprNova, and Vuze. We then rank the trackers according to the number of recent and popular torrents they host. We consider a torrent recent when it is less than one month old. We consider a torrent popular when it has more than five peers holding a complete copy (seeders). We build a sample of about 300,000 torrents from which we extract the URLs and IP addresses of 4,000 trackers. We then set up 2,000 filtering rules matching the IP addresses of the most popular trackers at a few border routers of a large ISP in Europe. IP address ranges within the ISP are statically attributed to the ISP border routers. Thus, we monitor the portion of ISP subscribers whose IP addresses fall in the IP ranges associated with the border routers we monitor. Accordingly, we track about 90,000 subscribers, i.e., 3.75% of the 2.4 Million subscribers of the ISP, over four consecutive weekdays. For each subscriber, we collect information about responsiveness with a frequency of 15 minutes, i.e., the time interval at which clients report their activity to the trackers (cf. Section 3).

Meanwhile, we monitor both the Azureus and Mainline DHT using a crawler application. Our crawler is derived from the KAD crawler by Steiner et al. [24]. The crawler recursively queries each peer in the DHT for its neighbor list, starting from a bootstrap node. When no new peers are discovered, the crawler assumes that an entire snapshot of the network has been obtained. We gather a snapshot of the entire Azureus and Mainline DHTs every six hours, comprising more than 1 Million and 8 Million unique users at each point in time, respectively.

1 A DHT is a structured P2P network used for content storage and retrieval.

Fig. 1. Evolution over time of the number of tracker-based, DHT-based, and both tracker&DHT-based BitTorrent users

4.2 Data Analysis
We now analyze the behavior of each ISP subscriber by comparing the client-to-tracker traces and the DHT traces. Accordingly, at a given time t we classify each subscriber who appears to be active in the BitTorrent network as either:

– tracker-based - if it only exchanges messages with the central tracker.
– tracker&DHT-based - if it exchanges messages with the tracker and it is active in the DHT.
– DHT-based - if it is only active in the DHT.

Figure 1 shows the evolution over time of the number of BitTorrent users from the monitored ISP that are tracker-based, tracker&DHT-based, and DHT-based. Globally, Figure 1 shows a daily cycle typical of Internet-based applications: low activity during the early morning, an increase towards the end of the day, and then a decrease during the night. Figure 1 also shows a previously unreported result: the relative majority of BitTorrent users (between 34 and 41%) rely only on the DHT in order to manage file exchanges. These users have probably retrieved
Traffic Localization for DHT-Based BitTorrent Networks
45
a magnet link on the Internet, which allows them to avoid communication with the tracker. A slightly smaller fraction of BitTorrent users are concurrently connected to both the DHT and the tracker. This is the usual behavior of a BitTorrent client: it contacts the tracker and the DHT in parallel. Finally, a minority of BitTorrent users (between 23 and 25%) rely only on the tracker to coordinate a file exchange. This behavior can be associated with peers that (1) run old BitTorrent clients not yet supporting the DHT, or (2) cannot access the DHT due to NAT traversal or bootstrapping issues.
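The classification above reduces to simple set operations per observation window. The following sketch (with hypothetical subscriber identifiers) makes that explicit:

```python
def classify_subscribers(tracker_active, dht_active):
    """Partition active subscribers by where they were observed at time t.

    tracker_active, dht_active: sets of subscriber identifiers seen in the
    client-to-tracker traces and in the DHT snapshots, respectively.
    """
    return {
        "tracker-based": tracker_active - dht_active,
        "tracker&DHT-based": tracker_active & dht_active,
        "DHT-based": dht_active - tracker_active,
    }


c = classify_subscribers({"a", "b", "c"}, {"b", "c", "d", "e"})
```

Repeating this classification at each 15-minute tick yields the three time series plotted in Figure 1.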
5 DHT Traffic Localization
This Section presents the design of the first localization mechanism for BitTorrent networks that rely on DHT-based tracking. First, we overview the design. Then, we discuss its specific implementation for the Mainline DHT [18].

5.1 Overview
A naive approach to traffic localization for DHT-based BitTorrent networks consists of modifying the tracker functionality implemented at the peers. Precisely, a peer that receives a request for addresses of peers holding a certain file could include in its answer only the peers located at the same ISP as the requesting peer. The major limitation of this approach is that it requires a modification of each BitTorrent client implementation.

The key design rationale of our localization mechanism is that it does not require the modification of any BitTorrent client implementation. Our localization mechanism works in two steps. First, we intercept all the messages from peers announcing in the DHT that they hold a file or a portion of it. Then, we intercept all the requests for these files and answer with local peer-sets, i.e., sets of peers located at the same ISP as the requesting peers. We now describe both steps in detail.

A single entity can join a P2P network many times with many distinct logical identities. These identities are called sybils [11]. In order to intercept announces and requests for a popular file, we insert in the DHT several sybils with nodeIDs close to the info hash of the file [23]. We only focus on popular files for two reasons. First, it is infeasible to monitor all files available in a DHT, as this would require introducing millions of sybils, with the consequence of very high load on the machines responsible for the sybils. Second, monitoring every file is unnecessary, given that the majority of the BitTorrent traffic, as well as the only traffic that can be localized, is associated with a few popular files [8].

Once the first step is in place, the sybils are constantly aware of the peers that hold the popular files as well as the peers requesting them. Under this premise, localization is straightforward: the sybils simply need to respond to the queries for popular files with localized peer-sets.
In case only a few local peers are available, a peer-set is completed with external peers.
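Building such a localized reply amounts to a filtered lookup with a fallback. The sketch below uses hypothetical names; `isp_of` stands in for an IP-to-ISP lookup and the reply size is an assumption:

```python
import random

PEER_SET_SIZE = 50  # hypothetical reply size


def localized_peer_set(sources, requester_isp, isp_of):
    """Return up to PEER_SET_SIZE sources, preferring the requester's ISP.

    sources: iterable of (ip, port) tuples currently announcing the file.
    isp_of:  callable mapping an IP address to an ISP name.
    """
    local = [s for s in sources if isp_of(s[0]) == requester_isp]
    external = [s for s in sources if isp_of(s[0]) != requester_isp]
    peer_set = local[:PEER_SET_SIZE]
    # If too few local peers exist, complete the set with external peers.
    if len(peer_set) < PEER_SET_SIZE:
        need = PEER_SET_SIZE - len(peer_set)
        peer_set += random.sample(external, min(need, len(external)))
    return peer_set


# Toy ISP mapping: 10.x addresses belong to "ISP-A", everything else to "ISP-B".
isp_of = lambda ip: "ISP-A" if ip.startswith("10.") else "ISP-B"
sources = ([("10.0.0.%d" % i, 6881) for i in range(30)]
           + [("192.0.2.%d" % i, 6881) for i in range(40)])
reply = localized_peer_set(sources, "ISP-A", isp_of)
```

All 30 local sources are returned first, and the remaining slots are filled with external peers, matching the fallback described above.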
5.2 Mainline Implementation
As a proof of concept, we implement a prototype of our localization mechanism for the Mainline DHT. We pick the Mainline DHT since it has the largest user base, with more than 8 Million active users at any point in time (cf. Section 4.1). Note that our localization mechanism can be implemented for any other DHT-based P2P network, e.g., Azureus [3], Emule [12], and Ares [2].

The Mainline DHT implements a simple remote procedure call mechanism based on back-to-back query and answer packets. We now summarize the main remote procedure calls available in the Mainline DHT. For more details, the interested reader is referred to [18].

– ping(dest_IP:port, src_ID) - verifies if a peer is alive and responsive. A peer that receives a ping message also learns about the existence of the peer with nodeID src_ID.
– find_node(I, src_ID) - requests the closest peers to a hash value I in order to populate the routing tables of the requesting peer. A peer responds to a find_node message with the IP addresses, ports, and nodeIDs of the peers whose nodeIDs are the closest to I in its routing tables.
– get_peers(I, src_ID) - retrieves information about a file F with info hash I. The nodeID of the querying peer is included in the message (src_ID). The get_peers remote procedure call works iteratively. At each intermediary hop of the iteration, peers respond to a get_peers message with the IP addresses, ports, and nodeIDs of the peers closest to I. At the final hop of the iteration, peers return the IP addresses and ports of the peers that hold a copy or a portion of F (see next bullet). In the latter case, the BitTorrent client might then request the content from these hosts. Note that a response to a get_peers message can contain both peers closer to I and sources of F.
– announce_peer(IP:port, I) - a peer, identified by "IP:port", announces that it holds a file (or a portion of it) with info hash I. The object of the publication is the tuple (I, IP:port). A peer sends announce_peer messages to the k peers whose nodeIDs are the closest to I. The value of k can be 3 or 8 according to the client implementation. These k closest peers have been previously looked up using get_peers messages. The tuple expires after a time-out that depends on the client implementation (15 or 30 minutes are typical values). The announcing peer is responsible for re-announcing the tuple over time.

In the remainder of this Section, we focus on the localization of the traffic associated with a single file identified by info hash I. In the first step of the localization mechanism, we intercept the announce_peer messages for I. To do so, we insert sybils in the DHT with nodeIDs closer to I than any real peer. The sybils share the first 47 bits of their nodeIDs with I, ensuring that the probability that another peer in the Mainline DHT (8 to 11 Million peers) is closer to I than our sybils is 1 − (1 − 2^(−47))^11,000,000 ≈ 10^−8. We construct the nodeIDs of the sybils by varying the bits of their nodeIDs in the bit interval 48-56, thereby creating 256 sybils for I. Note that, in theory, k sybils should be enough to control an info hash. However, it has been shown that peers fail to always find the k closest peers to an info hash [13]. In order to
guarantee that our sybils are always among the k closest peers discovered, we use 256 sybils.

The insertion of the sybils in the DHT consists of informing real peers whose nodeIDs are close to I about the existence of the sybils. These peers will then propagate this information to other peers in the DHT. We proceed as follows:

– First, we discover the peers whose nodeIDs fall within Z, the portion of the hash-space that shares a prefix of at least z bits with I. To do so, we send out multiple get_peers messages with target info hashes close to I.
– Then, we send ping messages with src_ID equal to the nodeIDs of the sybils to all the peers in Z. This operation populates the routing tables of the nodes in Z with information about our sybils.

The information derived from the received announce_peer messages is stored as a four-tuple in a database common to all the sybils. We use Maxmind [19] to resolve a peer's IP address to its ISP. For an entry in the database, we use the same timeout policy as currently implemented by the BitTorrent clients.

In the second step of the localization, we intercept the get_peers messages for I and reply to them with local peer-sets. Like the announce_peer messages, the get_peers messages are intercepted by the sybils at the final hop of their iteration along the DHT. We construct the replies to the get_peers messages as follows. First, we determine the ISP of the querying peer using Maxmind. Then, we form the peer-set to be returned by searching the shared database for peers located at the same ISP as the requesting peer. In case not enough local peers are found, we complete the peer-set with external peers.

In order to localize more than one file, the outlined procedure needs to be repeated. Resource consumption scales linearly with the number of files to localize, unless their info hashes are very close to each other.
In the latter case, sybils for close-by info hashes can be reused to localize the traffic for multiple files.
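The sybil nodeID construction described in this subsection can be sketched as follows: each sybil copies the first 47 bits of the target info hash, enumerates an 8-bit counter in the following bits, and randomizes the rest. This is a simplified illustration; the exact bit layout in the authors' prototype may differ.

```python
import hashlib
import os


def make_sybil_ids(info_hash, n=256, prefix_bits=47):
    """Generate n 160-bit nodeIDs sharing prefix_bits with info_hash."""
    target = int.from_bytes(info_hash, "big")
    keep = 160 - prefix_bits  # number of low-order bits we may change
    ids = []
    for counter in range(n):
        rand = int.from_bytes(os.urandom(20), "big")
        # Top prefix_bits come from the target, the next 8 bits hold the
        # counter, and the remaining low bits are randomized.
        node_id = (((target >> keep) << keep)
                   | (counter << (keep - 8))
                   | (rand & ((1 << (keep - 8)) - 1)))
        ids.append(node_id.to_bytes(20, "big"))
    return ids


target = hashlib.sha1(b"some torrent").digest()
sybils = make_sybil_ids(target)
```

Because the counter field differs across sybils, all 256 nodeIDs are distinct while still sharing the 47-bit prefix that places them closer to the info hash than (with overwhelming probability) any real peer.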
6 Evaluation
This Section presents a preliminary evaluation of the proposed localization mechanism for DHT-based BitTorrent networks. Our goal is twofold: (1) quantify the volume of traffic that we can localize, and (2) identify possible limitations of the localization mechanism when running in the wild.

6.1 Methodology
The evaluation works in three steps. First, we run our prototype in our data center in Chicago to attract the announce_peer and get_peers messages in the Mainline DHT for the most popular file as reported by the PirateBay website [25] on November 16th, 2010. In the following, we refer to this file as F. We count the announce_peer and get_peers messages to derive the number of unique peers (identified by the pair IP:port) interested in F at each ISP.
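This per-ISP popularity measure can be computed by grouping the unique observed peers by ISP. In the sketch below, `isp_of` stands in for the Maxmind lookup and the observations are hypothetical:

```python
from collections import Counter


def peers_per_isp(observed_peers, isp_of):
    """Count unique (ip, port) peers per ISP.

    observed_peers: iterable of (ip, port) tuples extracted from
    announce_peer and get_peers messages (duplicates allowed).
    """
    counts = Counter()
    for ip, port in set(observed_peers):
        counts[isp_of(ip)] += 1
    return counts


def cdf(values):
    """Fraction of ISPs with at most x peers, for each sorted x."""
    xs = sorted(values)
    n = len(xs)
    return [(x, (i + 1) / n) for i, x in enumerate(xs)]


# Toy mapping: the first octet names the ISP.
isp_of = lambda ip: "ISP-" + ip.split(".")[0]
obs = [("10.0.0.1", 6881), ("10.0.0.1", 6881), ("10.0.0.2", 6881),
       ("7.1.1.1", 6881)]
counts = peers_per_isp(obs, isp_of)
```

Deduplicating on (ip, port) before counting is what makes the measure a count of unique peers rather than of messages; the `cdf` helper produces the kind of distribution plotted in Figure 2.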
Fig. 2. CDF of peer distribution across the ISPs
This measure gives us insights about the popularity distribution of F across the ISPs. Figure 2 plots the Cumulative Distribution Function (CDF) of the number of peers that participate in the swarm of F per ISP at a given time. Since the distribution does not change significantly over seven days, we only plot the data for the initial distribution. Figure 2 shows that in half of the ISPs just one peer at a time is interested in F, i.e., no localization is possible. In 20% of the ISPs, 7 to 1,500 concurrent peers are interested in F. This means that only in those ISPs is there some localization potential for F.

In the second step, we activate the traffic localization mechanism for the following ISPs: Comcast Cable, SBC Internet Services, and Telefonica de Espana (abbreviated Comcast, SBC, and Telefonica in the following). Any BitTorrent user located at these ISPs who attempts to download F receives localized peer-sets. The localized file F is extremely popular at Comcast, where about 10% of the available worldwide sources are located (i.e., between 1,000 and 1,500 sources according to day and time), popular at SBC with between 500 and 600 sources, and relatively unpopular at Telefonica, where we measure only 30 to 50 available sources.

In the third step of the evaluation, we instrument a Transmission client [26] to repeatedly download F every 90 minutes; the download time is bounded by a timeout set to 30 minutes. For each peer the client is uploading to and downloading from, the instrumented client logs every two seconds the following statistics: upload/download rate and peer location (local or non-local). We run the instrumented Transmission client on a machine connected to a private cable connection provided by Comcast and on two PlanetLab [22] machines associated with SBC and Telefonica, respectively. The experiments run for seven days at SBC and Telefonica, whereas they run only 24 hours at Comcast due to a download cap imposed on the private cable connection.
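From the statistics logged by the instrumented client, the localization metric reported in Figure 3 can be computed per download. The sketch below assumes a hypothetical log format of (bytes, is_local) pairs:

```python
def local_download_fraction(log_entries):
    """Percentage of downloaded bytes received from local peers.

    log_entries: iterable of (bytes_downloaded, is_local) pairs, one per
    logged peer interval.
    """
    local = total = 0
    for nbytes, is_local in log_entries:
        total += nbytes
        if is_local:
            local += nbytes
    return 100.0 * local / total if total else 0.0


log = [(900, True), (50, False), (50, False)]
pct = local_download_fraction(log)
```

Aggregating this percentage per download and taking medians and quartiles over the seven-day runs yields the curves and error bars of Figure 3.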
In the experiments, we only enable the DHT for tracker operations by disabling the communication between the instrumented client and the central trackers. This configuration reproduces a scenario where a user clicks on a magnet link (cf. Section 3). We also disable the Peer Exchange Protocol (PEX). Even though this configuration is not realistic, it is still useful as a
Traffic Localization for DHT-Based BitTorrent Networks
49
Fig. 3. Local Download Traffic [Comcast, SBC, Telefonica]
preliminary benchmark of the localization performance while providing better control over the experiments.

6.2 Results
To start with, we analyze the traffic localization benefits at each ISP where we run our experiments. Figure 3 shows the percentage of incoming traffic per download that stays local, i.e., within the ISP. We focus on the download traffic only because it dominates by far the total volume of traffic; this is because the swarm is over-provisioned, i.e., many seeders are available. Each point on the curves is the median local download traffic at a given time over the seven-day measurement, and the error bars refer to the 25th and 75th percentiles, respectively (since the experiments at Comcast last just one day, i.e., there is just one value per time-stamp, all percentiles coincide there).

Globally, Figure 3 shows an unexpected result: traffic localization does not reach 100% all the time. On average, 99% of the traffic stays local in Comcast, whereas only 3 to 25% stays local in Telefonica and SBC for half of the experiments. Looking at the 75th percentiles, the percentage of local download traffic grows up to 15% in Telefonica and 60% in SBC, on average. It follows that localization works better at ISPs where F is more popular, since there the sybils can return a larger number of local peers.

In these experiments, we expected to measure 100% localization at each ISP: as the client-to-tracker communication and the PEX protocol are disabled, the client should receive peer-sets only from our sybils. By inspecting the DHT control traffic collected at each machine, we find that additional peers besides the sybils replied to the get_peers messages sent by our clients. This implies that some peers in the DHT receive announce messages for F despite the presence of our sybils. This happens because several BitTorrent clients (such as BitComet [5]) do not properly implement the DHT announcement mechanism and thus do not correctly announce to our sybils. Precisely, they fail to look up the closest peers to a given info-hash, thus sending announce messages for F to
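The per-time-slot statistics behind Figure 3 can be computed as in the following sketch. The nearest-rank percentile used here is illustrative; the exact percentile method is not specified in the text.

```python
def percentile(values, p):
    """Nearest-rank percentile of a non-empty list (p in [0, 100])."""
    s = sorted(values)
    k = max(0, min(len(s) - 1, int(round(p / 100.0 * (len(s) - 1)))))
    return s[k]

def summarize(local_shares):
    """25th percentile, median and 75th percentile of the local traffic
    share for one time-of-day slot, aggregated over the daily runs."""
    return (percentile(local_shares, 25),
            percentile(local_shares, 50),
            percentile(local_shares, 75))
```

Each curve point in Figure 3 corresponds to the middle value returned by `summarize`, while the error bars span the other two.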
M. Varvello and M. Steiner
Fig. 4. Speed Analysis [Comcast, SBC, Telefonica]: (a) Local Download Speed; (b) Non-local Download Speed
other peers than our sybils. Accordingly, our sybils compete with a few other peers when returning peer-lists. In order to win this competition, it is crucial to be the first to respond to the requesting peer. It follows that, besides the popularity of F within an ISP, the network delay between the sybils and the requesting peer is also relevant.

In order to better understand each curve of Figure 3, we plot both the local and non-local speed measured per download and ISP (Figure 4). As above, each point in Figures 4(a) and 4(b) is the median local and non-local download speed over seven days, and the error bars indicate the 25th and 75th percentiles.

We start by focusing on the experiments performed at Comcast. In Comcast, traffic localization reaches 100% in 6 out of the 16 download attempts and the local download speed is constant at around 1 MBps; in the remaining download attempts, some external peers provide at most two percent of the file with a download speed of a few KBps.

We now focus on SBC and Telefonica. At SBC, the median local download speed stays between 10 and 20 KBps whereas the median non-local download speed stays between 40 and 50 KBps. Conversely, at Telefonica the median local download speed is very low, between 1 and 9 KBps, while the median non-local download speed stays between 80 and 200 KBps, i.e., about four times more than the value measured for SBC. At this stage of the analysis, two explanations for this observation are possible: (1) a client at Telefonica receives fewer local sources from our sybils than a client at SBC (due to the different popularity of F); thus, the non-local sources retrieved from external peers have a much higher chance of contributing to the file download; (2) peers located at Telefonica have a low upload rate and cannot contribute much to the swarm. The latter result triggered our curiosity about the impact of a traffic localization mechanism on the “Quality of Experience” (QoE) perceived by the user.
We aim to answer the following question: does traffic localization positively or negatively affect the user download time? In order to answer this question, we
Fig. 5. Download Duration [Comcast, SBC, Telefonica]
turn off the localization mechanism and re-run the experiments at each ISP for one day. Figure 5 plots the CDF of the download duration measured with and without localization at each ISP. Due to the 30-minute timeout for a download attempt, if a download does not complete within this time we extrapolate the download duration using the average download speed computed during the attempt.

Figure 5 shows that in Comcast the download duration is systematically higher without traffic localization, e.g., the median download time grows significantly, from 15 minutes up to 35 minutes. This indicates that keeping traffic within Comcast provides clear benefits to the user experience. However, this is not the case for SBC and Telefonica, where the download time is shorter when the localization mechanism is turned off. For example, the median download time decreases from 75 minutes (with localization) to 60 minutes (without) in Telefonica, and from 260 to 170 minutes in SBC. This result indicates that traffic localization might not always be beneficial to the user QoE. The decrease of the download time without traffic localization is more evident in SBC. This suggests that the low fraction of traffic localized in Telefonica is caused not by the low upload rate of Telefonica customers but mostly by the low popularity of F in Telefonica.

While running the experiments with the localization mechanism disabled, we also measure the intrinsic localization, i.e., the amount of traffic naturally localized in BitTorrent. We find that, on average, 20% of the traffic stays local in Comcast, whereas no traffic at all stays local in either SBC or Telefonica.
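The extrapolation of timed-out downloads can be sketched as follows; the file size constant is illustrative, since the actual size of F is not given in the text.

```python
FILE_SIZE = 700 * 1024 * 1024   # assumed size of F in bytes (illustrative)

def download_duration(completed, elapsed, bytes_received):
    """Measured duration if the download finished within the timeout,
    otherwise an extrapolation from the average speed observed so far."""
    if completed:
        return elapsed
    avg_speed = bytes_received / elapsed   # bytes per second
    return FILE_SIZE / avg_speed
```

For example, a download that fetched a quarter of the file in the 30-minute window is extrapolated to take four times as long in total.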
7 Conclusion and Future Work
Recently, BitTorrent introduced a distributed tracking feature: the role of the trackers is distributed among peers via a Distributed Hash Table (DHT). This paper claims that existing solutions for BitTorrent traffic localization that leverage central trackers will soon become ineffective. In order to support this claim, we monitor over four weekdays the BitTorrent activity of 90,000 subscribers at a
large ISP in Europe. We find that about 40% of the BitTorrent users located at the monitored ISP already rely on the DHT only for tracker operations. Motivated by this observation, we design, prototype, and preliminarily evaluate the first traffic localization mechanism for DHT-based BitTorrent networks. This mechanism constantly intercepts the announce messages in the DHT for popular files in order to discover worldwide peers holding these files. Then, it constantly intercepts requests for these popular files and responds with replies that contain only peers from the ISP of the requesting peer (when available).

We evaluate our design by localizing the traffic associated with a popular file in the Mainline DHT. We then measure the amount of traffic localized by running an instrumented BitTorrent client on machines located at the following ISPs: “Comcast Cable”, “SBC Internet Service” and “Telefonica de Espana”. The evaluation shows that the proposed localization mechanism performs well in the wild, although it has to cope with the fact that several BitTorrent clients do not implement the DHT protocol correctly. This prevents our localization mechanism from keeping 100% of the BitTorrent traffic within an ISP in some cases.

For future work, we aim to investigate the latter problem further in order to understand whether the system design can be improved. Accordingly, we plan to systematically evaluate our traffic localization mechanism as follows. First, we will analyze the impact of torrent popularity within an ISP. Second, we will expand the measurements of the traffic localization benefits from just three ISPs to a larger number. Third, we will analyze the impact of the Peer Exchange Protocol on the traffic localization benefits.

Another avenue for future work is understanding the impact of BitTorrent traffic localization on user Quality of Experience (QoE). Is traffic localization going to decrease download times and therefore increase users’ QoE?
In this paper, we showed that a user located at a large ISP (Comcast Cable) with a relatively fast subscription profile always benefits from faster downloads if its traffic is kept within its ISP. However, this result was not confirmed at SBC Internet Service and Telefonica de Espana. We aim to investigate this research question further and to extend these preliminary results to other ISPs.

Finally, we intend to extract the information about the popularity of content directly from the DHT. In this work, we assumed that the content popularity distribution is given, e.g., can be learned from torrent indexing websites. However, this methodology has the following limitations: (1) content popularity can differ across ISPs (e.g., due to language differences), (2) content popularity in the DHT might deviate from the content popularity reported by torrent indexing websites, and (3) as tracker usage decreases over time, content popularity will only be measurable through the DHT.
References

1. Aggarwal, V., Feldmann, A., Scheideler, C.: Can ISPs and P2P users cooperate for improved performance? CCR 37(3), 29–40 (2007)
2. Ares, www.ares.net/
3. Azureus/Vuze, www.azureus.sourceforge.net/
4. Bindal, R., Cao, P., Chan, W., Medved, J., Suwala, G., Bates, T., Zhang, A.: Improving traffic locality in BitTorrent via biased neighbor selection. In: ICDCS, Lisbon, Portugal (July 2006)
5. BitComet, http://www.bitcomet.com/
6. Choffnes, D.R., Bustamante, F.E.: Taming the torrent: a practical approach to reducing cross-ISP traffic in peer-to-peer systems. In: SIGCOMM, Seattle, WA, USA (August 2008)
7. Cohen, B.: Incentives Build Robustness in BitTorrent. Technical report, bittorrent.org (2003)
8. Cuevas, R., Laoutaris, N., Yang, X., Siganos, G., Rodriguez, P.: Deep Diving into BitTorrent Locality. In: CoNEXT, Rome, Italy (December 2009)
9. Dabek, F., Cox, R., Kaashoek, F., Morris, R.: Vivaldi: A Decentralized Network Coordinate System. In: SIGCOMM, Portland, OR, USA (August 2004)
10. Dabek, F., Zhao, B., Druschel, P., Kubiatowicz, J., Stoica, I.: Towards a Common API for Structured Peer-to-Peer Overlays. In: Kaashoek, M.F., Stoica, I. (eds.) IPTPS 2003. LNCS, vol. 2735, pp. 33–44. Springer, Heidelberg (2003)
11. Douceur, J.R.: The Sybil attack. In: Druschel, P., Kaashoek, M.F., Rowstron, A. (eds.) IPTPS 2002. LNCS, vol. 2429, p. 251. Springer, Heidelberg (2002)
12. eMule, http://www.emule-project.net/
13. Kang, H., Chan-Tin, E., Hopper, N., Kim, Y.: Why KAD lookup fails. In: P2P, pp. 121–130 (November 2009)
14. Ledlie, J., Gardner, P., Seltzer, M.: Network coordinates in the wild. In: NSDI, Cambridge, MA, USA (April 2007)
15. Legout, A., Urvoy-Keller, G., Michiardi, P.: Rarest First and Choke Algorithms Are Enough. In: IMC, Rio de Janeiro, Brazil (October 2006)
16. Li, J., Sollins, K.: Exploiting autonomous system information in structured peer-to-peer networks. In: ICCCN, Chicago, IL, USA (October 2004)
17. Mainline BitTorrent, http://www.bittorrent.org/
18. Mainline DHT Specification, http://www.bittorrent.org/beps/bep_0005.html
19. Maxmind, http://www.maxmind.com/
20. Maymounkov, P., Mazieres, D.: Kademlia: A Peer-to-peer Information System Based on the XOR Metric.
In: Druschel, P., Kaashoek, M.F., Rowstron, A. (eds.) IPTPS 2002. LNCS, vol. 2429. Springer, Heidelberg (2002)
21. Nakao, A., Peterson, L., Bavier, A.: A routing underlay for overlay networks. In: SIGCOMM, Karlsruhe, Germany (August 2003)
22. PlanetLab, http://www.planet-lab.org/
23. Steiner, M., Biersack, E.W., En-Najjary, T.: Exploiting KAD: Possible Uses and Misuses. Computer Communication Review 37(5) (October 2007)
24. Steiner, M., Biersack, E.W., En-Najjary, T.: Long Term Study of Peer Behavior in the KAD DHT. Transactions on Networking 17(6), 1371–1384 (2009)
25. The Pirate Bay, http://www.thepiratebay.org/
26. Transmission, http://www.transmissionbt.com/
27. uTorrent, http://www.utorrent.com/
28. Xie, H., Yang, Y.R., Krishnamurthy, A., Liu, Y.G., Silberschatz, A.: P4P: provider portal for applications. In: SIGCOMM, Seattle, WA, USA (August 2008)
BGP and Inter-AS Economic Relationships

Enrico Gregori¹, Alessandro Improta²,¹, Luciano Lenzini², Lorenzo Rossi¹, and Luca Sani³

¹ Institute of Informatics and Telematics, Italian National Research Council, Pisa, Italy
{enrico.gregori|lorenzo.rossi}@iit.cnr.it
² Information Engineering Department, University of Pisa, Italy
{l.lenzini|alessandro.improta}@iet.unipi.it
³ IMT Lucca, Institute for Advanced Studies, Lucca, Italy
[email protected]
Abstract. The structure of the Internet is still unknown even though it provides services for most of the world. Its current configuration is the result of complex economic interactions developed over the last 20 years among important carriers and ISPs. Although with only partial success, in the last few years some research has tried to shed light on the economic relationships established among ASes. The typical approaches have two phases: in the first, data from BGP monitors are gathered to infer the Internet AS-level topology graph; in the second, algorithms are run on this graph to derive economic tags for all edges between nodes (i.e., ASes). This paper provides significant findings for both phases. Specifically, regarding the second phase, a small set of transit-free ASes and the lifespan of the AS paths are the input to an algorithm that we have devised which assigns an economic tag to each AS connection. The quality of an inferred tag is expressed by a weight which takes into consideration the lifespan of the connection it refers to, as well as the outcome of the so-called two-way validation approach. Regarding the first phase, the paper reports another valuable contribution targeted at identifying and removing fake connections induced by typos.

Keywords: BGP; Internet; Autonomous System; Tagging algorithm; Inter-AS Economics.
1 Introduction and Related Work

The Internet is a collection of Autonomous Systems¹ (ASes) connected to each other via the Border Gateway Protocol (BGP) on the basis of economic

This paper was partly supported by the MOTIA European Project (JLS/2009/CIPS/AG/C1-016). This publication reflects the views only of the authors, and the Commission cannot be held responsible for any use which may be made of the information contained therein.
¹ An AS is a connected group of one or more IP prefixes run by one or more network operators which has a single and clearly defined routing policy (RFC 1930).
J. Domingo-Pascual et al. (Eds.): NETWORKING 2011, Part II, LNCS 6641, pp. 54–67, 2011. © IFIP International Federation for Information Processing 2011
contracts that regulate the traffic exchanged between them. The real structure of the Internet is still unknown, since there are neither standard methods nor dedicated tools to retrieve the needed information from the Internet itself. Researchers have tried to retrieve this topology using existing tools (e.g., traceroute) and by exploiting BGP information obtained from projects that have deployed several monitors in significant locations (e.g., at IXPs) across the world. Typically, the Internet is studied as a graph in which nodes are ASes and edges are BGP connections between them, see for example [4,11,7,6].

A particular area of interest is the discovery of economic relationships among ASes. The undirected graph of the Internet is not sufficient to determine the real importance of each AS, since it does not allow deducing all the possible sequences of ASes that packets can traverse. Contractual agreements can override technical metrics (e.g., the length of the AS path [5]), thus some of the paths that can be extracted from the undirected graph might not actually be used in the real Internet, even though each of the connections that make up such a path exists. Given the heterogeneity of the ASes that form the Internet (e.g., ISPs, CDNs, universities, research networks, enterprises), AS relationships are fundamental to determine the routing policies that select the allowed paths over which inter-AS traffic can flow.

The availability of an Internet topology annotated with the economic relationships among ASes has several practical implications in the real world. For example, a CDN could use this knowledge to select the best places in which to deploy replicas of its servers, and a new regional ISP could select the best upstream ASes through which to connect to the rest of the Internet [13]. AS relationships are also important to yield a better insight into the business choices that led to the creation of the current Internet structure.
In the literature, economic relationships between ASes are usually classified into customer-to-provider (c2p), provider-to-customer (p2c), peer-to-peer (p2p) and sibling-to-sibling (s2s) [8,4]. In c2p and p2c, an AS (customer) pays another AS (provider) to obtain connectivity to the rest of the Internet. In p2p, a pair of ASes (peers) agree to exchange traffic between their respective customers, typically free of charge. In an s2s agreement, a pair of ASes (siblings) provide each other with connectivity to the rest of the Internet.

The first work on tagging the Internet AS-level topology was [4], which proposed applying a heuristic to public BGP routing information to infer economic relationships between ASes. The heuristic was based on the fact that the routes that two ASes exchange must reflect the economic relationship between them, and that a provider typically has a larger node degree² than its customers, while two peers typically have comparable degrees. In [4] it was also proved that if all ASes respect the export policies imposed by such economic relationships, then the AS path in any BGP routing table must be valley-free, i.e., after traversing a p2c or p2p edge, the AS path cannot traverse a c2p or p2p edge. Later, [13] formulated the problem of assigning a tag to each connection as an optimization problem, the Type of Relationships (ToR) problem, using the number of valley-free paths
² The node degree of a vertex of a graph is the number of edges incident to the vertex. In our case, the degree indicates the number of BGP neighbors of an AS.
as an objective function, and proposed a heuristic to solve it. The ToR problem has been proven to be NP-complete [1,3], thus several authors have proposed different heuristics and enhancements to solve it, for example [1,3,2,9]. Other interesting approaches to the tagging problem were developed in [14] and [11]. The algorithm proposed in [14] started from a partial set of information about the relationships between ASes, inferred using the BGP COMMUNITY attribute and gathered through the IRR databases, in order to obtain a complete set of AS relationships. However, there is no standard usage of the BGP COMMUNITY attribute that could lead to a systematic method for extracting useful information, and the data available in IRRs offer no guarantees of completeness or freshness. The algorithm proposed in [11] was based on the fact that BGP monitors at the top of the routing hierarchy, i.e., monitors at Tier-1 ASes, are able to reveal all the downstream p2c links over time, assuming that routes follow a no-valley policy. Considering the large number of variables that can affect AS commercial agreements and the entropy present in BGP data, we believe that the no-valley approach by itself can lead to inaccurate results.

In this work, we introduce a different kind of approach to this problem. In order to distinguish paths that can lead to correct economic tags from those that merely introduce noise, we believe that the lifetime of an AS path is a fundamental metric that should be considered before tagging. As far as we know, the only work that has considered the lifetime of routes is [11], but the authors only set a threshold to cut off routes that lasted less than two days, potentially cutting off short-lived backup connections. Our algorithm exploits the AS paths gathered from BGP monitors and also takes their lifespan into account during the inference of the tags, thus preserving backup links.
In addition, in order to quantify the reliability of each tag, we introduce the concept of two-way validation into the tagging algorithm, distinguishing between confirmed and unconfirmed tags.

The rest of the paper is structured as follows. In Sect. 2 we show the presence of false connections in BGP data and their causes. In Sect. 3 we analyze the nature of transient AS paths and describe the proposed algorithm in detail. In Sect. 4 we analyze the results, and we draw conclusions in Sect. 5.
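The valley-free property from [4], which underpins the tagging approaches discussed above, can be checked as in this sketch; the relationship map `rel` is an assumed input mapping each directed AS edge to its tag.

```python
def is_valley_free(path, rel):
    """Return True if `path` respects the valley-free rule: after a
    p2c or p2p edge, only p2c (and s2s) edges may follow.

    `rel[(a, b)]` gives the relationship of the edge from a to b,
    one of "c2p", "p2c", "p2p", "s2s".
    """
    downhill = False                       # have we traversed p2c or p2p yet?
    for a, b in zip(path, path[1:]):
        r = rel[(a, b)]
        if downhill and r in ("c2p", "p2p"):
            return False                   # a valley: uphill/peering after downhill
        if r in ("p2c", "p2p"):
            downhill = True
    return True
```

Note that s2s edges neither form a valley nor change the state, matching the export policies described above.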
2 BGP Data Gathering and Hygiene
BGP data is widely used in research on the Internet AS-level topology. However, to the best of our knowledge, not many works analyze the correctness of the topological information retrieved. As mentioned earlier, this kind of data can be affected by errors made during the manual configuration of BGP, which can lead to false AS paths and, thus, to non-existent connections in the AS-level topology. In this section we briefly describe our data sources and how we retrieved the data. We then analyze these data and propose a methodology to clean up human mistakes.

The main public projects currently available are RIPE RIS and RouteViews. Route collectors (or monitors) deployed by these projects are
devices that act like BGP AS border routers, but that do not send UPDATE messages, and thus do not announce any prefix. Their only purpose is to establish BGP sessions with BGP routers under the ownership of other ASes in order to gather routing information. To analyze the BGP table of each monitor, we downloaded the snapshot of its first RIB on October 1st, 2010 and all subsequent UPDATE messages up to the end of the month. We did not rely on snapshots alone because we would have missed all those links that are visible only between two snapshots, including some backup links. In addition, UPDATE messages allow us to trace the evolution of each single AS path and each AS connection during October 2010 in terms of its lifespan.

To gather the lifespan of each AS path, we rely on the BGP-4 specification (RFC 4271). A route is withdrawn from service if: a) the IP prefix that expresses the destination of a previously advertised route is advertised in the WITHDRAWN ROUTES field of an UPDATE message, b) a replacement route with the same prefix is advertised, or c) the BGP speaker connection is closed, which implicitly removes from service all routes the pair of speakers had advertised to each other. We consider the lifespan of an AS path to be the time span during which the BGP monitor considered it active, i.e., the time interval during which there is at least one active route that includes the considered AS path³ in its attributes.

Data gathered from BGP monitors need to be cleaned before being used. Several AS paths contain private AS Numbers (ASNs) and the AS_TRANS number 23456 (RFC 4893). In addition, some loops can be found even though the default behavior of BGP is supposed to prevent their formation. We investigated the possible causes of these loops in depth and found that some of them are caused by typos, as already highlighted in [4], and these may introduce false connections.
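A minimal sketch of how an AS path's lifespan can be accumulated from a time-ordered UPDATE stream, following withdrawal rules a) and b) above (rule c, session closure, could be handled by synthesizing withdrawals for all routes of the pair):

```python
from collections import defaultdict

def path_lifespans(events):
    """Accumulate, per AS path, the total time during which at least one
    active route carried it.

    `events` is a time-ordered list of (t, kind, prefix, path):
    kind "A" announces `path` for `prefix` (implicitly replacing any
    previous route for that prefix), kind "W" withdraws `prefix`.
    """
    current = {}                    # prefix -> (path, announce time)
    refcount = defaultdict(int)     # path -> number of routes carrying it
    started = {}                    # path -> time it became active
    total = defaultdict(float)      # path -> accumulated lifespan

    def drop(prefix, t):
        path, _ = current.pop(prefix)
        refcount[path] -= 1
        if refcount[path] == 0:     # last route carrying this path is gone
            total[path] += t - started.pop(path)

    for t, kind, prefix, path in events:
        if prefix in current:       # rules a) and b): withdraw or replace
            drop(prefix, t)
        if kind == "A":
            current[prefix] = (path, t)
            if refcount[path] == 0:
                started[path] = t
            refcount[path] += 1
    return dict(total)
```

AS paths are represented as tuples so they can be dictionary keys; a path stays active as long as any prefix still maps to it.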
We found three major causes of loops in BGP data:

a) Human error during AS path prepending. When a BGP router sends an announcement, it must prepend its local ASN to the AS path field. BGP allows the AS path length to be manipulated by prepending the owned ASN multiple times in order to influence the routing decisions of neighboring ASes. This feature is implemented through rules that identify which announcements must be affected by the manipulation of the AS path and how many times the prepending must be done. Typically these rules are set manually by administrators, so errors may be introduced during this setup which in turn generate loops. Table 1 uses real AS paths to highlight seven different kinds of human errors.

b) Network migration. Consider ISPs A and B, as represented in Fig. 1a. Suppose that A purchases B; then customers of B become customers of A (Fig. 1b). The external BGP (eBGP) peering sessions with B's customers have to be reconfigured, requiring significant coordination and planning efforts. The Cisco Local-AS feature allows the migrated routers (routers formerly owned by B) to
³ RFC 4271 defines a route as “the unit of information that pairs a destination with the attributes of a path to that destination”. In this case, the attribute we are considering is the AS path.
Table 1. Loops caused by human errors

Error type                       (Real) Example
Lack/excess of a trailer digit   3561 26821 27474 2747 27474 286
Lack/excess of a header digit    3549 9731 38077 8077 38077
Lack/excess of a middle digit    13030 1273 9329 929 9329 ...
Missing space between ASNs       2152 3356 35819 3581935819 35819 35819 ...
Error on a digit                 13030 22212 19024 25782 25785 25782
Error on two digits              11686 4436 3491 23930 23390 23930
Missing digit causes split ASN   7306 6939 5603 21441 21 41 21441
Fig. 1. Network Migration Scenario: (a) start scenario; (b) migrated scenario
participate in AS A while impersonating AS B towards the (previous) customers of AS B. Routers using the Local-AS feature retain in the AS path the information that the BGP routes have passed through the local AS: they prepend B to inbound eBGP updates and prepend both the current ASN A and B to outbound eBGP updates. In this environment, loops can be introduced if (previous) customers of B exchange UPDATE messages with each other passing through AS A.

c) Split ASes. A split AS is an AS that is divided into two (or more) islands. An AS can be steadily split, or can split temporarily due to an internal network failure. An example of a steadily split AS is AS 48285 (Robtex.com). Consider the following AS path: (44581 48285 16150 5580 48285). AS 16150 and AS 5580 are located in Sweden and the Netherlands, respectively. We contacted the Robtex.com administrator⁴ and learnt that AS 48285 is indeed steadily split. Thus, to obtain connectivity between the two islands, the path needs to pass through other ASes (in this case AS 16150 and AS 5580).

Note that AS paths matching case a) can introduce false connections, thus it is critical to fix them so as to obtain a reliable topology. On the other hand, AS paths in cases b) and c) are not fixed because they reflect a real-world situation. We therefore clean the topology by fixing typos and by discarding connections involving private ASNs and AS_TRANS. The cleaned topology contains 116,671 connections and 36,437 ASes.
⁴ We would like to thank Robert Olsson, administrator of robtex.com, for his collaboration.
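Loops and single-digit typos of the kind listed in Table 1 can be detected as sketched below. This illustrative check covers only edit-distance-one errors; some cases in Table 1 (e.g., errors on two digits) would need additional rules.

```python
def has_loop(path):
    """True if an ASN reappears after a different ASN intervened
    (consecutive duplicates are legitimate prepending, not a loop)."""
    seen = []
    for asn in path:
        if seen and asn == seen[-1]:
            continue                 # prepending, skip
        if asn in seen:
            return True              # non-adjacent repetition: a loop
        seen.append(asn)
    return False

def edit_distance_one(a, b):
    """True if ASN strings a and b differ by exactly one substituted,
    inserted or deleted digit -- our working definition of a typo."""
    if a == b:
        return False
    if len(a) == len(b):
        return sum(x != y for x, y in zip(a, b)) == 1
    if abs(len(a) - len(b)) != 1:
        return False
    short, long_ = sorted((a, b), key=len)
    return any(long_[:i] + long_[i + 1:] == short
               for i in range(len(long_)))
```

On the first example of Table 1, `has_loop` fires because 27474 reappears after 2747, and `edit_distance_one("2747", "27474")` flags 2747 as a likely truncation of 27474.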
3 From BGP Lies to a Keen Tagging Algorithm
The proposed tagging algorithm exploits a list of Tier-1 ASes, denoted by Tlist, and relies upon the following basic principle: an AS that is not included in Tlist should be able to reach all the Internet networks, thus there must exist at least one AS path including the considered AS and a Tier-1 AS. As in [11], we use the list gathered from Wikipedia⁵ to obtain the Tier-1 ASes. The algorithm also assumes that the export policies imposed by the p2c, c2p, p2p and s2s relationships described in [4] are respected by all ASes. More specifically, it assumes that an AS announces to its customers and siblings all the routes received from its customers, peers and providers, while to its providers and peers it announces only the routes received from its customers.

A fundamental characteristic of this algorithm is that it computes the economic relationships by also considering the lifespan of the AS paths. The presence of transient AS paths in BGP data has already been highlighted in [10], and we found that several of them, if used, can lead to misleading results in the tagging algorithm. We strongly believe that tagging decisions made via long-lasting paths must not be affected by transient AS paths that are not used to transit IP traffic, thus our algorithm handles them. This section analyzes this specific class of transient AS paths included in the BGP data. We then introduce a tagging algorithm that takes the presence of such transient paths into account.

3.1 Transient AS Paths
Several AS paths gathered by BGP monitors include AS connections that are not used to transit any traffic. This happens due to erroneous export policies implemented in BGP, as described in [10], which lists two classes of BGP misconfigurations that can induce false AS paths: origin misconfiguration and export misconfiguration. In the former, an AS accidentally announces a prefix that it should not announce, while in export misconfiguration an AS sends an announcement to a neighboring AS that violates the commercial agreement between them. The effects of these misconfigurations are amplified by BGP convergence times: as stated in [12], in response to path failures some BGP routers may try a number of transient paths before selecting the new best path or declaring a destination unreachable, performing so-called path exploration.

We analyzed BGP data and found a specific class of transient AS paths that contain two or more ASes in Tlist separated by one or more intermediate ASes, representing easily recognizable cases of no-valley-free paths. We investigated the set of AS paths to find this particular kind of AS path and noted that, on average, 1.3% of the AS paths on each monitor belong to this class, and that most of these paths lasted
⁵ From http://en.wikipedia.org/wiki/Tier_1_network, the Tier-1 ASes are: AS209 (Qwest), AS701 (Verizon), AS1239 (Sprint), AS1299 (TeliaSonera), AS2914 (NTT), AS3257 (TiNET), AS3356 (Level3), AS3549 (Global Crossing), AS3561 (Savvis), AS7018 (AT&T), AS6453 (Tata).
Fig. 2. CCDF of the minimum, maximum and average lifespan of no-valley-free AS paths
less than an hour. Figure 2 shows the distribution of the minimum, maximum and average lifetime of each no-valley-free AS path as seen by the route-views2 monitor of the RouteViews project. As can be seen, about 90% of these paths on route-views2 lasted 10,000 seconds or less. The same behavior was found on all the other monitors. The peak of lifetimes around 30 seconds in Fig. 2 can be explained by the fact that 30 seconds is the suggested value for the MinRouteAdvertisementIntervalTimer of BGP (RFC 4271). This timer indicates the minimum amount of time that should elapse between two consecutive announcements regarding the same route. If this default value is used by the vast majority of routers, then several short-lasting no-valley-free paths will be replaced at least every 30 seconds. One plausible explanation for several of these paths is that they are a consequence of the combined effect of BGP convergence upon a network failure and the use of a particular type of outbound policy by one of the ASes involved. In more detail, BGP allows outbound announcement filters to be set up on a prefix basis. Such a filter prevents an announcement carrying a network prefix that belongs to one of the AS's providers from being propagated to its peers and providers. However, this filter has a drawback that can show up after the end of a BGP connection, due either to a temporary network failure or to the end of an agreement. Consider the scenario in Fig. 3. C uses the AS path [D] to reach prefix P, which belongs to its customer D. However, C has also stored in its Adj-RIB-In the AS paths [A, B, D] and [B, D], received from its providers A and B respectively. Now suppose that due to a network failure in P, D sends a withdrawal to B and C. C's BGP decision process will remove from its RIB the AS path [D] to reach network P and will search for another way to reach it before declaring to its neighbors that network P has been withdrawn. Since it has not yet received any withdrawal message
Each BGP router maintains an Adj-RIB-In (Adjacent Routing Information Base, Incoming) containing the routes received from its neighbors.
BGP and Inter-AS Economic Relationships
61
Fig. 3. Scenario
1 foreach AS path, foreach direct connection [A,B]
2   if (T1 ∈ Tlist follows [A,B] in the AS path)
3     Tag[A,B] = c2p
4   if (T1 ∈ Tlist precedes [A,B] in the AS path)
5     Tag[A,B] = p2c
6   if (does not exist any T1 ∈ Tlist)
7     Tag[A,B] = p2p
Fig. 4. Step a) of the algorithm
from B concerning P, the direct consequence is that C will select [B, D]. If C performs outbound filtering implemented as described above, it will then announce to A the route [B, D] to reach P, even though this is clearly in contrast with the p2c agreement signed with B. This happens because network P appears in the list of networks that can be advertised to all the providers. Furthermore, since an AS typically prefers a route toward a customer over a route toward a peer or a provider, A will select the AS path [C, B, D] to reach P and will announce it to the monitor M, thus causing the creation of a no-valley-free path. In practice, M sees C transiting traffic between its providers A and B for a short time (see Fig. 2), since network P will be withdrawn from the Internet at the end of BGP convergence. Note that this is an issue for the tagging algorithm, but not for AS-level topology discovery tools: the connections among ASes that appear during these transients all exist, even though the traffic does not actually pass via the given AS path.

3.2 Tagging Algorithm
The main piece of information that can be exploited to infer AS relationships is the set of AS paths gathered from the BGP data stored by monitors, together with their lifespans. After the data hygiene phase, performed as described in Sect. 2, our algorithm assigns an economic tag to each connection of the topology, which also indicates a level of reliability for each of them. The algorithm is organized in three main steps: a) inference of all the possible economic relationships for each AS connection. In this step, the algorithm analyzes every single AS path separately from the others
For simplicity's sake, we consider the length of AS paths as the only relevant factor in the BGP decision process.
1 foreach direct connection [A,B]
2   foreach Tag[A,B] from the longest-lasting to the shortest-lasting
3     if (exists direct Tag[A,B])
4       if (lifespan[direct Tag[A,B]] and lifespan[Tag[A,B]] are comparable)
5         direct Tag[A,B] = direct merge(direct Tag[A,B], Tag[A,B])
6     else
7       direct Tag[A,B] = Tag[A,B]
Fig. 5. Step b) of the algorithm
and assigns a raw economic tag to each direct connection found in each AS path. It does this by exploiting the presence of ASes in Tlist in order to highlight which AS is transiting traffic for another AS. For each direct [A,B] connection inside the path considered, the algorithm proceeds as reported in Fig. 4. Consider an AS path such as (... A B ... T1 ...). The algorithm infers that B is a provider of A (line 2), because B is announcing to A routes retrieved from T1. In other words, B is providing A with connectivity to a portion of the Internet. Consider now an AS path such as (... T1 ... A B ...). In this case the algorithm infers that A is a provider of B (line 4), because their relationship can be neither p2p nor c2p. Indeed, if A and B established a p2p relationship, then A would transit traffic between one of its providers or peers (T1) and another peer (B), thus violating the export rules imposed by the p2p agreement; the same argument shows that A and B cannot have a c2p relationship. If the AS path considered does not contain any Tlist AS following or preceding [A,B], the algorithm infers that A and B have a p2p relationship, because neither A nor B uses the other as a provider (line 6). Note that the aim of the algorithm is not to infer the relationships among the ASes included in Tlist, since they are assumed to be p2p. This step also maintains, for each tagged connection, the maximum lifetime of the AS paths used to infer it.

b) inference of a single economic relationship for each direct AS connection. In this step, the algorithm uses the results of step a) to infer, for each direct AS connection [A,B], a unique economic tag among all the tags collected in step a) for the same direct connection. The algorithm first orders the tags inferred for the direct connection [A,B] in descending order of lifespan.
It then analyzes each tag from the longest-lasting to the shortest-lasting, as illustrated in Fig. 5 (line 2). The lifespan of each tag plays a major role: the algorithm allows the currently examined tag (Tag[A,B]) to affect the current resulting tag for the same direct connection (direct Tag[A,B]) iff its lifespan does not differ by more than NMAG orders of magnitude from that of the longest-lasting tag found for the same direct connection, i.e. iff the two lifespans are comparable (line 4). Note that the
We denote as direct connection a connection in which the direction is relevant, i.e. the direct connection [A,B] is different from the direct connection [B,A]. By definition, a Tier-1 AS does not have any provider.
1 foreach direct [A,B] connection
2   get direct Tag[A,B]
3   if (exists direct Tag[B,A])
4     if (lifespan[direct Tag[B,A]] and lifespan[direct Tag[A,B]] are comparable)
5       confirmed Tag[A,B] = inverse merge(direct Tag[A,B], direct Tag[B,A])
6     else
7       unconfirmed Tag[A,B] = direct Tag[A,B]
8   else
9     unconfirmed Tag[A,B] = direct Tag[A,B]
Fig. 6. Step c) Final tagging and two-way validation
algorithm examines each tag in descending order of lifespan; thus direct Tag[A,B] contains the longest-lasting tag for the direct connection [A,B]. With this solution, the algorithm simply ignores the transient paths when inferring the economic relationship for connections found in both transient and stable paths, while it still analyzes connections found only in short-living paths, since it assumes these are backup connections. The merging rules used in this step (direct merge, line 5) are reported in Table 2a and can be justified by the export policies described at the beginning of this section. If A and B have a p2c relationship, i.e. B and A have a c2p relationship, this means that A can reach only B's customers while B can reach A's customers, A's peers and A's providers. On the other hand, if A and B have a s2s relationship, A can reach B's providers, B's peers and B's customers, while if A and B have a p2p relationship, they can only reach their respective customers. Thus, merging a s2s tag with either a p2c or a p2p tag results in a s2s tag, and merging a p2c (c2p) tag with a p2p tag results in a p2c (c2p) tag.

c) Final tagging and two-way validation. In the final step, the algorithm uses the results of step b) to infer the economic relationship for each indirect connection, as illustrated in Fig. 6. For each direct [A,B] connection the algorithm first gets its resulting tag (direct Tag[A,B], line 2) and checks whether there is a resulting tag for the direct connection [B,A] (direct Tag[B,A], line 3). If both exist, it checks whether their lifespans are comparable (line 4) and, if so, merges the two tags (inverse merge, line 5). The merging rules, listed in Table 2b, are very similar to those introduced in step b), but they also take into account the fact that a p2c for the [A,B] connection is a c2p for the [B,A] connection. In this step it is also possible to find which tags have a two-way validation, i.e.
which tags are inferred from both the tags of the direct connections [A,B] and [B,A]. A label is then assigned to each tag indicating whether or not it is two-way validated, distinguishing between confirmed (line 5) and unconfirmed tags (lines 7 and 9).
We denote as indirect connection a connection in which the direction is not relevant, i.e. the indirect connection [A,B] is equal to the indirect connection [B,A].
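Taken together, steps a) to c) can be rendered as a short Python sketch (our own rendering of Figs. 4-6 and the Table 2 merge rules, not the authors' code; helper names such as raw_tags, resolve and final_tags are ours, and boundary cases where A or B is itself a Tier-1 follow one plausible reading of Fig. 4):

```python
import math

# A p2c on [B,A] is a c2p on [A,B], and vice versa.
INV = {"p2c": "c2p", "c2p": "p2c", "p2p": "p2p", "s2s": "s2s"}

def direct_merge(t1, t2):
    """Table 2a: s2s absorbs everything, p2p is neutral, and the
    conflicting pair p2c/c2p resolves to s2s (siblings)."""
    if t1 == t2:
        return t1
    if "s2s" in (t1, t2) or {t1, t2} == {"p2c", "c2p"}:
        return "s2s"
    return t1 if t2 == "p2p" else t2

def inverse_merge(tag_ab, tag_ba):
    """Table 2b: invert the [B,A] tag, then merge; result refers to [A,B]."""
    return direct_merge(tag_ab, INV[tag_ba])

def comparable(l1, l2, nmag=1):
    """Lifespans are comparable iff they differ by at most nmag orders
    of magnitude."""
    return abs(math.log10(l1) - math.log10(l2)) <= nmag

def raw_tags(as_path, tlist):
    """Step a) (Fig. 4): raw tag for each direct connection in one path.
    A Tier-1 at B or beyond means B gives A transit (c2p); a Tier-1 at A
    or before means A gives B transit (p2c); no Tier-1 at all yields p2p."""
    tags = {}
    for i in range(len(as_path) - 1):
        a, b = as_path[i], as_path[i + 1]
        if any(x in tlist for x in as_path[i + 1:]):
            tags[(a, b)] = "c2p"
        if any(x in tlist for x in as_path[:i + 1]):
            tags[(a, b)] = "p2c"
        if not any(x in tlist for x in as_path):
            tags[(a, b)] = "p2p"
    return tags

def resolve(tagged):
    """Step b) (Fig. 5): fold (tag, lifespan) pairs, longest-lasting
    first; a shorter-lived tag contributes only if comparable to the
    longest-lasting one."""
    tagged = sorted(tagged, key=lambda t: -t[1])
    tag, longest = tagged[0]
    for t, life in tagged[1:]:
        if comparable(longest, life):
            tag = direct_merge(tag, t)
    return tag

def final_tags(direct):
    """Step c) (Fig. 6): two-way validation.
    `direct` maps (A, B) -> (tag, lifespan)."""
    out = {}
    for (a, b), (tag, life) in direct.items():
        rev = direct.get((b, a))
        if rev is not None and comparable(life, rev[1]):
            out[(a, b)] = ("confirmed", inverse_merge(tag, rev[0]))
        else:
            out[(a, b)] = ("unconfirmed", tag)
    return out
```

For example, a connection tagged p2p by a long-lasting path and c2p by a comparably long one resolves to c2p, while a conflicting tag carried only by a transient path is ignored.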
Table 2. Rules to merge tags (resulting tags refer to the connection [A,B])

(a) Direct Merge

[A,B] \ [A,B]   p2c   p2p   c2p   s2s
p2c             p2c   p2c   s2s   s2s
c2p             s2s   c2p   c2p   s2s
p2p             p2c   p2p   c2p   s2s
s2s             s2s   s2s   s2s   s2s

(b) Inverse Merge

[A,B] \ [B,A]   p2c   p2p   c2p   s2s
p2c             s2s   p2c   p2c   s2s
c2p             c2p   c2p   s2s   s2s
p2p             c2p   p2p   p2c   s2s
s2s             s2s   s2s   s2s   s2s

4 Results
This section presents the results of the tagging algorithm obtained using NMAG = 1, 2, ∞. The results are shown in Table 3. Note that the percentage of confirmed tags ranges from 4.5% to 6.4% as NMAG grows. However, increasing NMAG also raises the probability that transient paths affect the final tag decision, thus lowering the reliability of the algorithm itself. This is confirmed by Fig. 7, which shows the CCDF of the minimum lifespan of the AS paths that participate in the tag of each connection. As can be seen from this figure, if the lifespan is not considered, a large portion of transient AS paths contribute to the tag of each connection, possibly leading to a wrong result. The results also highlight the presence of a large percentage of unconfirmed p2p relationships, which decreases for increasing values of NMAG. This is caused mainly by lack of information, but also by the fact that the reverse connection that could confirm the tag is short-lasting, and thus not considered by the algorithm. A particular scenario created by lack of information is depicted in Fig. 8. The figure shows that, due to BGP decision processes, the monitors cannot gather all the possible AS paths; thus the results of our algorithm can be misleading. Consider S to be the source AS, T an AS included in Tlist and M the monitor. Consider also that A, B, C and T are the transit providers of S, which in this case acts as a stub AS, i.e. it does not transit any IP traffic for any other AS, but only for end users. Supposing that T decides that the best path to S is via A, the AS paths gathered by M in the steady state of this scenario will be (T, A), (T, B), (T, C), (T, A, S), (B, S) and (C, S). Applying our algorithm to this scenario leads to a single unconfirmed p2c relationship for [A, S], while the couples [B, S] and [C, S] will be interpreted as unconfirmed p2p. This could be solved by connecting the BGP collector to the stub AS.
There would thus be a higher probability of gathering AS paths involving T, i.e. (S, A, T), (S, B, T) and (S, C, T), which would transform the relationships [S, B] and [S, C] into unconfirmed p2c. The same scenario applies to small and medium ISPs, since it is an effect of BGP decision processes, and this phenomenon also introduces a large number of unconfirmed p2p relationships which, in the real world, are p2c relationships. Some of
NMAG = ∞ represents the case in which the lifespan is not considered, i.e. every AS path contributes to the tagging algorithm outcome.
Table 3. Tag results

NMAG   Tag type   Unconfirmed: Total (%)   Unconfirmed: Involving Stubs (%)   Total
1      p2c        69707 (97.0%)            53089 (73.8%)                      71841
1      p2p        41507 (95.6%)            12990 (29.9%)                      43398
1      s2s        -                        -                                  1378
2      p2c        70034 (96.7%)            53331 (73.7%)                      72394
2      p2p        40716 (95.7%)            12748 (30.0%)                      42556
2      s2s        -                        -                                  1667
∞      p2c        71642 (95.9%)            54180 (14.8%)                      74949
∞      p2p        37235 (95.3%)            11898 (30.5%)                      39078
∞      s2s        -                        -                                  2590
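As a sanity check, the quoted 4.5-6.4% share of confirmed tags can be recomputed from Table 3 (confirmed = total minus unconfirmed; treating the dashes in the s2s rows as zero unconfirmed s2s tags is an assumption of ours):

```python
# (unconfirmed, total) per tag type, from Table 3; s2s unconfirmed
# assumed 0 where the table shows a dash.
table3 = {
    "NMAG=1":   {"p2c": (69707, 71841), "p2p": (41507, 43398), "s2s": (0, 1378)},
    "NMAG=inf": {"p2c": (71642, 74949), "p2p": (37235, 39078), "s2s": (0, 2590)},
}

def confirmed_share(rows):
    """Percentage of tags that are confirmed (total - unconfirmed)."""
    confirmed = sum(total - unconf for unconf, total in rows.values())
    total = sum(total for _, total in rows.values())
    return 100.0 * confirmed / total

for name, rows in table3.items():
    print(name, round(confirmed_share(rows), 1))
```

The recomputed shares land close to the 4.5% and 6.4% endpoints quoted in the text; the small discrepancy presumably comes from rounding and from the dash rows.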
Fig. 7. CCDF of min lifespan of AS paths used (P(X > x) versus lifespan [s]; curves: N = 1, N = 2, N = ∞)
Fig. 8. BGP monitor placement pitfalls
these connections can be spotted by analyzing the stub ASes. A typical stub AS has several p2c relationships with transit ASes due to multihoming, and it does not develop any p2p relationships. Real-world examples of this class of ASes are ASes owned by banks, car manufacturers, universities and local ISPs. To find these kinds of ASes, we analyzed all the available AS paths and considered as stub ASes all the ASes that only appear as the last hop in AS paths, i.e. those ASes that do not transit any IP traffic for other ASes. Due to their nature, these
ASes are likely to be customers in the relationships that involve them; thus there is a higher probability that unconfirmed p2p relationships involving them could be turned into p2c relationships. As reported in Table 3, the large number of unconfirmed p2p relationships involving stub ASes supports this rationale. The probability of being a real p2c relationship is very high for unconfirmed p2c relationships involving stubs. Note that if the connections involving stubs with unconfirmed p2c tags were considered as confirmed p2c, the percentage of confirmed tags would rise to nearly 60% for all the NMAG values considered. The tags inferred by the algorithm depend on the list of Tier-1 ASes considered, thus we investigated its correctness. Wikipedia is a web-based encyclopedia that anyone can edit, so the list could be the result of multiple manipulations. On the other hand, the web page containing the list of Tier-1 ASes concerns a very specific and technical topic, and it is hard to believe that a common user without any particular skill in this subject could build up such a detailed list. The presence of a small number of AS paths that include three consecutive ASes of Tlist would seem to indicate that the list is not entirely correct, because these patterns could only be seen if the Tier-1 ASes were not all interconnected via settlement-free peering relationships, since in this case one of them would be transiting traffic for the others. For example, route-views2 spotted 239 different AS paths showing this pattern. Since, on average, on each route collector 20% of the AS paths matching this pattern last more than one hour, they are very unlikely to represent only simple transient paths created by router misconfigurations. In our future work we plan to study in detail the ASes that make up the core of the Internet.
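The stub-detection heuristic just described, keeping only the ASes that never appear in a transit position, can be sketched as follows (our own rendering; AS paths are lists ending at the origin AS):

```python
def find_stub_ases(as_paths):
    """Return the ASes that only appear as the last hop of AS paths,
    i.e. that never transit IP traffic for another AS."""
    transit, seen = set(), set()
    for path in as_paths:
        seen.update(path)
        transit.update(path[:-1])  # every non-final hop transits traffic
    return seen - transit
```

An AS that is the last hop in one path but an intermediate hop in another is correctly excluded, since it transits traffic at least once.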
5 Conclusion
We have exploited BGP data to discover the economic relationships established between pairs of ASes, using an algorithm that works directly on BGP raw data. We found that BGP data can contain several anomalous paths that are clearly in contrast with the valley-free rule introduced in [4]. We traced their dynamics over a month and discovered that most of them lasted very few seconds and that they result from the combination of a particular common BGP misconfiguration and the BGP convergence delay. Our algorithm is aware of these events and is able to handle them using the lifespan of each AS path. To infer the relationships, the algorithm relies on a priori knowledge of a list of Tier-1 ASes in order to understand whether an AS is transiting traffic for another AS. Using the concept of two-way validation, this algorithm is also able to assign a level of reliability to each inferred tagged connection of the Internet topology, in order to compensate for the drawbacks introduced by the incompleteness of the BGP data in our possession. Our results indicate that current BGP datasets can two-way validate the economic relationship only for 4.5-6.4% of the connections, depending on the NMAG considered. Further work will study methodologies to obtain a much higher coverage.
References

1. Battista, G.D., Patrignani, M., Pizzonia, M.: Computing the types of the relationships between autonomous systems. In: IEEE INFOCOM (2003)
2. Dimitropoulos, X., Krioukov, D., Huffaker, B., Claffy, K.C., Riley, G.: Inferring AS relationships: Dead end or lively beginning? In: Nikoletseas, S.E. (ed.) WEA 2005. LNCS, vol. 3503, pp. 113-125. Springer, Heidelberg (2005)
3. Erlebach, T., Hall, A., Schank, T.: Classifying customer-provider relationships in the Internet. TIK-Report (145) (July 2002)
4. Gao, L.: On inferring autonomous system relationships in the Internet. IEEE/ACM Transactions on Networking 9(6), 733-745 (2001)
5. Gao, L., Wang, F.: The extent of AS path inflation by routing policies. In: Globecom 2002, Conference Records, vol. 1-3, pp. 2180-2184 (2002)
6. Gregori, E., Improta, A., Lenzini, L., Orsini, C.: The impact of IXPs on the AS-level topology structure of the Internet. Computer Communications 34(1), 68-82 (2011)
7. He, Y., Siganos, G., Faloutsos, M., Krishnamurthy, S.: Lord of the links: A framework for discovering missing links in the Internet topology. IEEE/ACM Transactions on Networking 17(2), 391-404 (2009)
8. Huston, G.: Interconnection, peering, and settlements. In: INET 1999 Abstracts Book (1999)
9. Kosub, S., Maaß, M.G., Täubig, H.: Acyclic type-of-relationship problems on the Internet. In: Erlebach, T. (ed.) CAAN 2006. LNCS, vol. 4235, pp. 98-111. Springer, Heidelberg (2006)
10. Mahajan, R., Wetherall, D., Anderson, T.: Understanding BGP misconfiguration. In: Proc. ACM SIGCOMM, vol. 32(4), pp. 3-16 (2002)
11. Oliveira, R., Pei, D., Willinger, W., Zhang, B., Zhang, L.: Quantifying the completeness of the observed Internet AS-level structure. UCLA Technical Report TR 080026 (September 2008)
12. Oliveira, R., Zhang, B., Izhak-Ratzin, R.: Quantifying path exploration in the Internet. In: ACM Internet Measurement Conference, IMC (October 2006)
13. Subramanian, L., Agarwal, S., Rexford, J., Katz, R.H.: Characterizing the Internet hierarchy from multiple vantage points. In: Proc. IEEE INFOCOM, pp. 618-627 (June 2002)
14. Xia, J., Gao, L.: On the evaluation of AS relationship inferences. In: Proc. of IEEE GLOBECOM, vol. 3, pp. 1373-1377 (2004)
Network Non-neutrality Debate: An Economic Analysis

Eitan Altman, Arnaud Legout, and Yuedong Xu

INRIA Sophia Antipolis, 2004 Route des Lucioles, France
{eitan.altman,arnaud.legout}@inria.fr, [email protected]
Abstract. This paper studies economic utilities and quality of service (QoS) in a two-sided non-neutral market where Internet service providers (ISPs) charge content providers (CPs) for content delivery. We propose new models that involve a CP, an ISP, end users and advertisers. The CP may have either a subscription revenue model (charging end users) or an advertisement revenue model (charging advertisers). We formulate the interactions between the ISP and the CP as a noncooperative game for the former and an optimization problem for the latter. Our analysis shows that the revenue model of the CP plays a significant role in a non-neutral Internet. With the subscription model, the side payment causes the ISP and the CP to receive better (or worse) utilities, as well as QoS, at the same time. With the advertisement model, the side payment impedes the CP from investing in its content.

Keywords: Network Non-neutrality, Side Payment, Nash Equilibrium, Bargaining.
1 Introduction
Network neutrality, one of the foundations of the Internet, is the commonly admitted principle that ISPs must not discriminate traffic in order to favor specific content providers [1]. However, this principle has been challenged recently. The main reason is that new broadband applications cause huge amounts of traffic without generating direct revenues for ISPs. Hence, ISPs want to obtain additional revenues from CPs that are not directly connected to them. For instance, a residential ISP might want to charge Youtube in order to give a premium quality of service to Youtube traffic. This kind of monetary flow, which violates the principle of network neutrality, is called two-sided payment. We use the term side payment to name the money charged by ISPs to CPs exclusively. On the one hand, the opponents of network neutrality argue that it gives ISPs no incentive to invest in the infrastructure. This incentive issue is even more severe in two cases: that of tier-one ISPs, which support a high load but do not get any revenue from CPs; and that of 3G wireless networks, which need to invest a huge amount of money to purchase spectrum. On the other
The work was supported by the INRIA ARC Meneur on Network Neutrality. Corresponding author.
J. Domingo-Pascual et al. (Eds.): NETWORKING 2011, Part II, LNCS 6641, pp. 68–81, 2011. c IFIP International Federation for Information Processing 2011
hand, advocates of network neutrality claim that violating it via side payments will lead to unbalanced revenues among ISPs and CPs, and thus to market instability. Recent work has addressed the problem of network neutrality from various perspectives [2,3,4,5,6,7,8,9]. Among these works, [2,3,4] are the closest to ours. Musacchio et al. [2] compare one-sided and two-sided pricing of ISPs. However, they only investigate an example where the joint investments of CPs and ISPs bring revenue from advertisers to CPs. In [3], the authors show how the side payment is harmful for all the parties involved, such as the ISP and the CP. Altman et al. [4] present an interesting bargaining framework to decide how much the ISP should charge the CP. However, their models might give a biased conclusion by overlooking the end users' sensitivity towards the prices of the CP and the ISP. The authors of [13] study a two-sided model where a continuum of CPs obtains revenue either from the end users or from the advertisers; the ISP charges the CPs according to the qualities of the connections they choose. Our work differs from [13] in two aspects. First, we consider the QoS provision of the ISP to the CP instead of the QoS differentiation among a continuum of CPs. Second, we formulate different mathematical models: the relative price sensitivity is considered in the subscription model, and the investment of the CP is incorporated into the advertisement model. In this paper, we unravel the conflicts around the side payment in a more general context. We consider a simplified market composed of one ISP, one CP, some advertisers, and a large number of end users. The ISP charges end users based on their usage and sets their QoS level according to the price paid. The CP can have either a subscription-based or an advertisement-based revenue model. In the subscription-based revenue model, the CP gets its revenue from the subscriptions paid by end users.
End users adapt their demand for contents based on the prices paid to the ISP and the CP. In the advertisement-based revenue model, the CP gets its revenue from advertisers, and end users adapt their demand according to the price paid to the ISP and to the CP's investment in its content. Our work differs from related work [2,3,4] by: i) incorporating the QoS provided by the ISP, ii) studying different revenue models of the CP, and iii) introducing the relative price sensitivity of end users in the subscription model. In particular, in the subscription model, the relative price sensitivity decides whether the side payment is beneficial (or harmful) to the ISP and the CP. This finding contradicts previous work (e.g. [3]) arguing that the side payment is harmful for all parties involved. In the advertisement model, the ability of the CP's investment to attract the traffic of end users plays a key role: it determines whether the side payment is profitable for the ISP and the CP. Our main contributions are the following.
– We present new features in the mathematical modeling that include the QoS, the relative price sensitivity of end users, and the CP's revenue models.
– We analytically show that the side payment from the CP to the ISP is beneficial to the ISP and the CP in terms of profits under certain conditions.
– We utilize bargaining games based on [4] to investigate how the side payment is determined.
The rest of this work is organized as follows. In Section 2, we model the economic behaviors of the ISP, the CP, advertisers and end users. Sections 3 and 4 study the impact of the side payment and its bargaining outcomes. Section 5 presents a numerical study to validate our claims. Section 6 concludes this paper.
2 Basic Model

In this section, we first introduce the revenue models of the ISP and the CP. Then, we formulate a game problem and an optimization problem for the selfish ISP and the CP. Finally, we describe the bargaining games in a two-sided market.

2.1 Revenue Models
We consider a simplified networking market with four economic entities, namely the advertisers, the CP, the ISP and the end users. All the end users can access the contents of the CP only through the network infrastructure provided by the ISP. The ISP collects subscription fees from end users. It sets two market parameters (ps, q), where ps is the non-negative price per unit of demand and q is the QoS measure (e.g. delay, loss or blocking probability). End users can decide whether to connect to the ISP or not, and how much traffic to request, depending on the bandwidth price and the QoS. The CP usually has two revenue models: user subscriptions, and advertisement revenue from users' clicks. These two models, though sometimes coexisting, are studied separately in this work for clarity. The CP and the ISP interact with each other in a way that depends on the CP's revenue model. In the subscription-based model, the CP competes with the ISP by charging users a price pc per unit of content within a finite time. End users respond to ps, pc and q by setting their demands elastically. Though pc has a different unit from ps, it can be mapped from a price per content to a price per bps (i.e. by dividing the price of a content by its size over a finite time). The price pc not only stands for a financial disutility, but can also represent the combination of this disutility with a cost per quality; thus a higher price may be associated with better quality (this quality would stand for parameters different from the parameter q introduced later). Without loss of generality, ps and pc can be positive or 0. In the advertisement-based model, instead of charging users directly, the CP attracts users' clicks on online advertisements: the more traffic demand the end users generate, the higher the CP's revenue. To better understand network neutrality and non-neutrality, we describe the monetary flows among the different components.
The arrows in Figure 1 represent the recipients of money. A "neutral network" does not allow an ISP to charge a CP for which it is not a direct provider for sending information to this ISP's users. On the contrary, a monetary flow from a CP to an ISP appears when network neutrality is violated. The ISP may charge the CP an additional amount of money that we denote by f(D) = pt·D, where pt is the price per unit of demand. We denote by δ ∈ [0, 1] the tax rate on this transferred revenue imposed by the regulator or the government.
Network Non-neutrality Debate: An Economic Analysis
71
Fig. 1. Money flow of a non-neutral network
We present market demand functions for the subscription and the advertisement based revenue models.

Subscription model. Let us define the average demand of all users by D, with

D(ps, pc, q) = max{0, D0 − α(ps + ρ·pc) + βq},   (1)

where D0, α, β and ρ are all positive constants. The parameter D0 reflects the total potential demand of users. The parameters α and β denote the responsiveness of demand to the price and the QoS level of the ISP. The physical meaning of (1) can be interpreted as follows: when the prices of the ISP and the CP increase (resp. decrease), the demand decreases (resp. increases); if the QoS of the ISP is improved, the demand from users increases correspondingly. The parameter ρ represents the relative sensitivity of pc with respect to ps. We deliberately set different sensitivities of end users to the prices of the ISP and the CP because pc and ps refer to different types of disutilities. If ρ = 1, the prices of the ISP and the CP are regarded as having the same effect on D. When ρ > 1, users are more sensitive to changes of pc than of ps. The positive prices ps and pc cannot be arbitrarily high; they must guarantee a non-negative demand D. We denote by SCP = {pc : pc ≥ 0} the pricing strategy of the CP. The utility (or, equivalently, revenue) of the CP is expressed as

Ucp = (pc − pt)·D(ps, pc, q).
(2)
Note that the variable D(ps, pc, q) is used interchangeably with D throughout. Next, we present the utility of the ISP with QoS consideration. We assume that the pricing strategy of the ISP is defined by SISP = {(ps, q) : ps ≥ 0; 0 < q ≤ qmax}. To sustain a certain QoS level for users, the ISP has to pay the costs of operating the backbone, the last-mile access, the upgrade of the network, etc. Let u(D, q) be the amount of bandwidth consumed by users, which depends on the demand D and the QoS level q. We assume that u(D, q) is a positive, convex and strictly increasing function in the 2-tuple (D, q). This is reasonable because a larger demand or a higher QoS usually requires more bandwidth from the ISP. We now adopt a natural QoS metric based on the expected delay. The expected delay is computed by the Kleinrock function, which corresponds to the delay of an M/M/1 queue with FIFO discipline or an M/G/1 queue under processor sharing [10]. Similar to [10], instead of using the actual delay, we consider the reciprocal
The QoS metric can be a function of the packet loss rate or the expected delay, etc.
of its square root, q = 1/√Delay = √(u(D, q) − D), i.e. u(D, q) = D + q². Thus, the cost C(D, q) can be expressed as C(D, q) = pr·u(D, q) = pr·D + pr·q², where pr is the price per unit of bandwidth invested by the ISP. The utility of the ISP is defined as the difference between revenue and cost:

Uisp = (ps − pr)D(ps, pc, q) + (1 − δ)pt·D(ps, pc, q) − pr·q².
(3)
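To make the subscription model concrete, here is a minimal numerical sketch of (1)–(3). The function names and all default parameter values are our own illustrative assumptions (they echo the evaluation section's D0 = 200, α = 10, β = 0.5, pr = 1), not part of the model itself.

```python
# Illustrative sketch of the subscription-model quantities (1)-(3).
# Default parameter values are assumptions echoing the evaluation section.

def demand(ps, pc, q, D0=200.0, alpha=10.0, beta=0.5, rho=0.5):
    """User demand (1): D = max{0, D0 - alpha*(ps + rho*pc) + beta*q}."""
    return max(0.0, D0 - alpha * (ps + rho * pc) + beta * q)

def utility_cp(ps, pc, q, pt, **kw):
    """CP utility (2): (pc - pt) * D."""
    return (pc - pt) * demand(ps, pc, q, **kw)

def utility_isp(ps, pc, q, pt, pr=1.0, delta=0.0, **kw):
    """ISP utility (3): (ps - pr)*D + (1 - delta)*pt*D - pr*q**2."""
    D = demand(ps, pc, q, **kw)
    return (ps - pr) * D + (1.0 - delta) * pt * D - pr * q * q
```

For instance, at (ps, pc, q, pt) = (5, 4, 2, 1) with the defaults above, the demand is 200 − 10·(5 + 2) + 1 = 131, giving Ucp = 3·131 = 393 and Uisp = 4·131 + 131 − 4 = 651.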
Advertisement model. Nowadays, only a small proportion of CPs, such as Rapidshare and IPTV providers, get their income from end users. Most other CPs provide contents for free, but collect revenues from advertisers. The demand from users is transformed into attentions such as clicks or browsing of online advertisements. To attract more eyeballs, a CP needs to invest money in its contents, incurring a cost c. The investment improves the potential aggregate demand D0 in return. Let D0(c) be a concave and strictly increasing function of the investment cost c. With a slight abuse of notation, we denote the strategy set of the CP by SCP = {c : c > 0}. Hence, the demand to the CP and the ISP is written as D = D0(c) − αps + βq.
(4)
The utility of the ISP is the same as that in (3). Next, we describe the economic interaction between advertisers and the CP. There are M advertisers interested in the CP, each of which has a fixed budget B in a given time interval (e.g., daily, weekly or monthly). An advertiser also has a valuation v declaring its maximum willingness to pay for each attention. The valuation v is a random variable in the range [0, v̄]. Suppose that v is characterized by the probability density function (PDF) x(v) and the cumulative distribution function (CDF) X(v). We assume that the valuations of all advertisers are i.i.d. Let pa be the price per attention charged by the CP. We denote by Da(pa) the demand of attentions from advertisers to the CP. Therefore, Da can be expressed as [11]

Da = M·B · Prob(v ≥ pa)/pa = M·B · (1 − X(pa))/pa.  (5)
When the CP increases pa, the advertisers reduce their purchase of attentions. It is also easy to see that the advertising revenue, pa·Da, decreases with pa. However, the attentions that the CP can provide are upper bounded by the demand of users through the ISP. Then, we can rewrite Da as in [11] by

Da = min{D, M·B · (1 − X(pa))/pa}.  (6)
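As a sanity check on (5)–(6), the sketch below computes the attention demand for the uniform-valuation case used later in the case study, where X(pa) = pa/v̄. The parameter values M, B and v̄ are illustrative assumptions.

```python
# Attention demand (5)-(6) under a uniform valuation on [0, vbar].
# M, B and vbar are illustrative assumptions, not values from the paper.

def attention_demand(pa, D, M=100, B=10.0, vbar=10.0):
    """Da = min{D, M*B*(1 - X(pa))/pa}, with X(pa) = pa/vbar (uniform case)."""
    unconstrained = M * B * (1.0 - pa / vbar) / pa   # budget-limited demand (5)
    return min(D, unconstrained)                     # capped by user demand (6)
```

With these defaults, at pa = 5 the budget side yields M·B·(1 − 0.5)/5 = 100 attentions, so the user-side demand D caps Da only when D < 100.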
Correspondingly, subtracting investment from revenue, we obtain the utility of the CP by Ucp = (pa − pt )Da − c.
(7)
Lemma 1. The optimal demand Da is a strictly decreasing function of pa if the PDF x(v) is nonzero in (0, v̄).
Network Non-neutrality Debate: An Economic Analysis
73
The proofs of all lemmas and theorems can be found in [12]. From (6), we can observe that the optimal price pa∗ is obtained at D = M·B · (1 − X(pa∗))/pa∗. Here, we denote a function y(·) such that pa∗ = y(D). According to the demand curve of attentions, y(·) is a decreasing function of D. The utility of the CP is then a function of the demand D and the cost c, i.e., Ucp = y(D)·D − pt·D − c.

2.2 Problem Formulation
With the subscription model, the strategy profile of the ISP is to set the 2-tuple (ps, q) and that of the CP is to set pc. This is actually a game in which the ISP and the CP compete by setting their prices. Since the ISP's QoS is tunable, we call this game "QoS Agile Price Competition". With the advertisement model, the strategy of the ISP is still the price paid by end users, while that of the CP is to determine the investment level c. The ISP and the CP maximize their own utilities selfishly, but do not compete with each other. We name this maximization "Strategic Pricing and Investment".

Definition 1. QoS Agile Price Competition. In the subscription model, the CP charges users based on their traffic demands. If the Nash equilibrium NE(1) = {ps∗, pc∗, q∗} exists, it can be expressed as (G1)

Uisp(ps∗, pc∗, q∗) = max_{(ps,q)∈SISP} Uisp(ps, pc∗, q),  (8)
Ucp(ps∗, pc∗, q∗) = max_{pc∈SCP} Ucp(ps∗, pc, q∗).  (9)
Definition 2. Strategic Pricing and Investment. In the advertisement model, the ISP sets (ps, q) and the CP sets c to optimize their individual utilities. If there exists an equilibrium {ps∗, q∗, c∗}, it can be solved by (G2)

Uisp(ps∗, q∗, c∗) = max_{(ps,q)∈SISP} Uisp(ps, q, c∗),  (10)
Ucp(ps∗, q∗, c∗) = max_{c∈SCP} Ucp(ps∗, q∗, c).  (11)

2.3 Bargaining Game
The side payment serves as a fixed parameter in the above two problems. A subsequent and important question is how large the side payment should be. If the ISP decided the side payment unilaterally, it would set a very high pt in order to obtain the best utility. However, this leads to a paradox when the ISP sets ps and pt at the same time. With the subscription-based model, if the ISP plays a strategy (pt, ps, q) and the CP plays pc, the noncooperative game leads to a zero demand and hence a zero income. This can be easily verified by taking the derivative of Uisp over pt (also see [3]). In other words, the ISP cannot set pt and ps simultaneously in the price competition. Similarly, with the advertisement-based model, the ISP meets the same paradox. There are two possible ways, the Stackelberg game and the bargaining game, to address this problem. Their basic principle is to let the ISP choose pt and ps asynchronously. In this work, we consider the bargaining game in a market where the ISP and the CP
usually have certain market powers. Our analysis in this work is close to the one presented in [4], but comes up with quite different observations. We analyze bargaining games over the side payment that are played at different times. The first one, namely pre-bargaining, models the situation in which the bargaining takes place before problem (G1) or (G2). The second one, defined as post-bargaining, models the occurrence of bargaining after problem (G1) or (G2). Let γ ∈ [0, 1] be the bargaining power of the ISP over the CP. They negotiate the transfer price pt determined by pt∗ = arg max_pt (Uisp)^γ (Ucp)^(1−γ). Since the utilities can only be positive, the optimal pt maximizes a virtual utility U:

pt∗ = arg max_pt U = arg max_pt [(1 − γ) log Ucp + γ log Uisp].  (12)

We use (12) to find pt as a function of the strategies of the ISP and the CP.
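A minimal numerical sketch of the bargaining rule (12): maximize the logarithmic Nash product over a grid of candidate transfer prices. The stand-in utility functions in the usage lines are hypothetical, chosen only so the optimum is easy to see; in the paper they would be the equilibrium utilities induced by pt.

```python
import math

def bargain_pt(u_isp, u_cp, gamma, candidates):
    """Return the candidate pt maximizing gamma*log(Uisp) + (1-gamma)*log(Ucp)."""
    best_pt, best_val = None, -math.inf
    for pt in candidates:
        ui, uc = u_isp(pt), u_cp(pt)
        if ui <= 0 or uc <= 0:          # the log requires positive utilities
            continue
        val = gamma * math.log(ui) + (1.0 - gamma) * math.log(uc)
        if val > best_val:
            best_pt, best_val = pt, val
    return best_pt

# Hypothetical utilities: the Nash product (10 + pt)*(10 - pt) peaks at pt = 0.
grid = [i * 0.5 for i in range(-10, 11)]
pt_star = bargain_pt(lambda pt: 10.0 + pt, lambda pt: 10.0 - pt, 0.5, grid)
```

Here the symmetric case γ = 0.5 picks pt = 0, the maximizer of (10 + pt)(10 − pt); shifting γ toward 1 tilts the chosen pt in the ISP's favor.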
3 Price Competition of the Subscription Model
In this section, we first investigate how the relative price sensitivity influences the price competition between the ISP and the CP. We then study the choice of the side payment under the framework of bargaining games.

3.1 Properties of Price Competition
This subsection investigates the impact of the side payment on the Nash equilibrium of the noncooperative game G1. Before presenting the main result, we show some basic properties of the subscription-based revenue model.

Lemma 2. The utility of the CP, Ucp(ps, pc, q), in (2) is a finite, strictly concave function with regard to (w.r.t.) pc.

Similarly, we draw the following conclusion.

Lemma 3. The utility of the ISP, Uisp(ps, pc, q), in (3) is a finite, strictly concave function w.r.t. the 2-tuple (ps, q) if the market parameters satisfy 4αpr > β².

For the QoS Agile Price Competition, we summarize our main results below.

Lemma 4. When the ISP and the CP set their strategies selfishly,
– the Nash equilibrium (ps∗, pc∗, q∗) is unique;
– the QoS level q∗ at the NE is influenced by the side payment as follows:
  • improved QoS with ρ + δ < 1;
  • degraded QoS with ρ + δ > 1;
  • unaffected QoS with ρ + δ = 1,
if (ps∗, pc∗, q∗) satisfies ps∗ > 0, pc∗ > 0, 0 < q∗ < qmax and 4αpr > β².
When the NE (ps∗, pc∗, q∗) is not at the boundary, we can obtain the following expressions by solving the best-response equations:

q∗ = β(D0 − αpr + αpt(1 − ρ − δ)) / (6αpr − β²),  (13)
pc∗ = 2pr(D0 − αpr + αpt(1 − ρ − δ)) / (ρ(6αpr − β²)) + pt,  (14)
ps∗ = 2pr(D0 − αpr + αpt(1 − ρ − δ)) / (6αpr − β²) + pr − (1 − δ)pt,  (15)
D∗ = 2αpr(D0 − αpr + αpt(1 − ρ − δ)) / (6αpr − β²).  (16)
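The interior NE (13)–(16) can be checked numerically: plugging (ps∗, pc∗, q∗) back into the demand function (1) must reproduce D∗. The helper below is an illustration, and the parameter values in the usage line are our own assumptions.

```python
def subscription_ne(D0, alpha, beta, rho, delta, pr, pt):
    """Interior Nash equilibrium (13)-(16) of game G1."""
    denom = 6.0 * alpha * pr - beta ** 2
    A = D0 - alpha * pr + alpha * pt * (1.0 - rho - delta)  # shared numerator
    q = beta * A / denom                                    # (13)
    pc = 2.0 * pr * A / (rho * denom) + pt                  # (14)
    ps = 2.0 * pr * A / denom + pr - (1.0 - delta) * pt     # (15)
    D = 2.0 * alpha * pr * A / denom                        # (16)
    return ps, pc, q, D

# Assumed parameters (D0, alpha, beta, rho, delta, pr, pt):
ps, pc, q, D = subscription_ne(200.0, 10.0, 0.5, 0.5, 0.0, 1.0, 2.0)
# Consistency check: D equals D0 - alpha*(ps + rho*pc) + beta*q at the NE.
```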
For the case where the NE is at the boundary, interested readers can find it in the technical report [12]. Lemma 4 means that the QoS provision of the ISP is influenced by the side payment. We interpret the results by considering ρ and δ separately. When users are indifferent between the price set by the ISP and that set by the CP (i.e., ρ = 1), a positive tax rate δ leads to the degradation of q in the presence of the side payment. Next, we let δ be 0 and investigate the impact of ρ. If users are more sensitive to the price of the ISP (i.e., ρ < 1), the side payment gives the ISP an incentive to improve its QoS. Otherwise, charging the side payment leads to an even poorer QoS of the ISP. Therefore, if users are more sensitive to the CP's price, a good strategy for the ISP is to share its revenue with the CP so that the latter sets a lower subscription fee.

3.2 Bargaining of the Side Payment
To highlight the bargaining of the side payment, we make the following two simplifications: i) the tax ratio δ is 0, and ii) pt can be positive, zero or negative. We let δ = 0 because it turns out to have a similar effect as ρ. We have shown that a negative pt might benefit both the ISP and the CP in some situations; hence, we do not require pt to be positive in the bargaining game. When q reaches qmax, the QoS is reflected as a fixed parameter in the demand model. To avoid considering the boundary cases of q, we also remove the constraint q ≤ qmax.

Pre-bargaining: In the pre-bargaining, pt is chosen based on the NE strategies of the ISP and the CP. Equations (13)∼(15) yield the expression of U:

U = 4 log(D0 − αpr + αpt(1 − ρ)) + constant.  (17)
The utility U is increasing or decreasing in pt depending on the sign of (1 − ρ). If ρ < 1, a positive pt improves not only the QoS level of the ISP, but also the utilities of the ISP and the CP. As pt increases, ps decreases and pc increases until ps hits 0. Hence, in the pre-bargaining, ps∗ = 0. The prices pt∗ and pc∗ are computed by

pt∗ = pr(4αpr + 2D0 − β²) / (4αpr + 2ραpr − β²),  (18)
pc∗ = 2pr(D0 − αpr + αpt∗(1 − ρ)) / (ρ(6αpr − β²)) + pt∗.  (19)
On the contrary, when ρ > 1, a negative pt benefits both of them. Then, pt∗ is a negative value such that pc∗ is 0. When ρ = 1, the QoS and the utilities are unaffected by any pt. In all these cases, the selection of pt is uninfluenced by the bargaining power γ.

Post-bargaining: In the post-bargaining, the ISP and the CP compete for the subscription of users first, knowing that they will bargain over pt afterwards [4]. In brief, we first find pt as a function of ps, pc and q. Then, the ISP and the CP compete with each other by setting the prices. To solve the maximization in (12), we let dU/dpt be 0 and obtain

pt = γpc − (1 − γ)(ps − pr) + (1 − γ)pr·q²/D.  (20)
Substituting (20) into Uisp, we rewrite the ISP's utility as

Uisp = γ[(ps + pc − pr)(D0 − α(ps + ρpc) + βq) − pr·q²].  (21)

The utility of the CP is proportional to that of the ISP, i.e., Uisp/γ = Ucp/(1 − γ). Knowing pt, we compute the derivatives dUisp/dps, dUisp/dq and dUcp/dpc by

dUisp/dps = γ(D − α(ps + pc − pr)),  (22)
dUisp/dq = γ(β(ps + pc − pr) − 2pr·q),  (23)
dUcp/dpc = (1 − γ)(D − αρ(ps + pc − pr)).  (24)
The best responses of Uisp and Ucp cannot hold at the same time unless ρ = 1 or ps + pc − pr = 0. The condition ps + pc − pr = 0 does not hold because it leads to a zero demand D and zero utilities. When ρ is not 1, only one of (22) and (24) is 0. Here, we consider the case ρ > 1. The utility Ucp reaches its maximum at D = αρ(ps + pc − pr), while Uisp is still strictly increasing w.r.t. ps. Thus, the ISP increases ps until the demand D is 0, which contradicts the condition of a nonzero D. If D = α(ps + pc − pr), dUcp/dpc is negative and dUisp/dps is 0. Then, the CP decreases pc until 0 and the ISP sets ps to achieve its best utility accordingly. By letting (22) and (23) be 0, we can find (ps, q) at the Nash equilibrium:

q∗ = β(D0 − αpr) / (4αpr − β²)  and  ps∗ = 2pr(D0 − αpr) / (4αpr − β²) + pr.

The price of the side payment, pt∗, is thus computed by

pt∗ = −(1 − γ)(D0 − αpr) / (2α).  (25)

When ρ = 1, ps∗ and pc∗ can be arbitrary values in their feasible region that satisfy ps∗ + pc∗ = pr + 2pr(D0 − αpr)/(4αpr − β²). A similar result was shown in [4]. The analysis of ρ < 1 is omitted here since it can be conducted in the same way.
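For the post-bargaining case with ρ > 1, the equilibrium quantities q∗, ps∗ and the transfer price (25) can be evaluated directly. The sketch below uses illustrative parameter values and also verifies the ISP's first-order condition D = α(ps∗ − pr), which holds once pc drops to 0.

```python
def post_bargaining(D0, alpha, beta, pr, gamma):
    """Post-bargaining outcome for rho > 1: q*, ps* and the transfer price (25)."""
    denom = 4.0 * alpha * pr - beta ** 2
    q = beta * (D0 - alpha * pr) / denom
    ps = 2.0 * pr * (D0 - alpha * pr) / denom + pr
    pt = -(1.0 - gamma) * (D0 - alpha * pr) / (2.0 * alpha)   # (25)
    return q, ps, pt

# Assumed parameters (D0, alpha, beta, pr, gamma):
q, ps, pt = post_bargaining(200.0, 10.0, 0.5, 1.0, 0.5)
# With pc* = 0, the demand D0 - alpha*ps + beta*q equals alpha*(ps - pr).
```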
4 Price, QoS and Investment Settings of the Advertisement Model
This section analyzes how the side payment influences the optimal strategies of the ISP and the CP under the advertisement model. The bargaining games are adopted to determine the amount of the side payment. Compared with the subscription-based model, the advertisement-based model exhibits quite different behaviors.

4.1 Properties of the Advertisement Model
In general, the subscription model is limited to file-storage CDNs, newspaper corporations, or big content owners such as movie producers. Most CPs are not able to provide enough unique contents, so they do not charge users, but make money from online advertisements. In this subsection, we present the general properties of the advertisement model and a couple of case studies.

Lemma 5. For any feasible investment c of the CP, there exists a best strategy of the ISP, (ps, q). When c increases, the price and the QoS (i.e., ps and q) become larger.

In G2, the CP and the ISP do not compete with each other. On one hand, the ISP sets the 2-tuple (ps, q) with the observation of c. On the other hand, the CP adjusts c based on (ps, q). The investment of the CP brings more demand from end users, which increases the revenues of not only the ISP, but also the CP. Hence, different from G1, the problem G2 is not a game. Instead of studying the NE, we look into the optimal strategies of the ISP and the CP in G2.

Theorem 1. There exists a unique best strategy, namely (ps∗, q∗, c∗), with the advertisement model if the revenue of the CP, D·y(D), is a concave function w.r.t. D ≥ 0.

Lemma 6. The side payment from the CP to the ISP leads to a decreased investment in the contents when the best strategy (ps∗, q∗, c∗) has ps∗ > 0, q∗ > 0 and c∗ > 0.

4.2 Case Study
In this subsection, we aim to find the best strategies of the ISP and the CP when the valuation of advertisers follows a uniform distribution or a normal distribution. Due to the page limit, the case of the normal distribution is put in the technical report [12]. Recall that the potential aggregate demand of users, D0(c), is strictly increasing and concave w.r.t. c. When the CP invests money in contents, D0 becomes larger, while its growth rate shrinks. Here, we assume a logarithmic form of D0(c),

D0(c) = D00 + K log(1 + c),  (26)
where the constant K denotes the ability of the CP's investment to bring demand. The nonnegative constant D00 denotes the potential aggregate demand of end users when c is zero (the CP only provides free or basic contents). The utility of the ISP remains unchanged.

Uniform Distribution: Suppose v follows a uniform distribution in the range [0, v̄]. Then, the CDF X(pa) is expressed as pa/v̄. The optimal price pa is obtained when D = (M·B/pa) · (1 − pa/v̄) in the range [0, v̄] (see subsection 2.1). Equivalently,

pa = M·B·v̄ / (M·B + D·v̄).  (27)
The above expressions yield the utility of the CP:

Ucp = M·B·v̄·D / (M·B + D·v̄) − c − pt·D.  (28)
Differentiating Ucp with respect to c, we obtain

dUcp/dc = ((M·B)²·v̄ / (M·B + D·v̄)² − pt) · K/(1 + c) − 1.  (29)
We let (29) be 0 and get

c = K((M·B)²·v̄ / (M·B + D·v̄)² − pt) − 1.  (30)
The rule of the ISP to decide (ps, q) is the same as that in the subscription model, except that the aggregate demand is not a constant, but a function of c:

c = exp( ( D(4αpr − β²)/(2αpr) − D00 + αpr − (1 − δ)αpt ) / K ) − 1.  (31)
Note that (30) is strictly decreasing and (31) is strictly increasing w.r.t. D. Together they constitute a fixed-point equation for the 2-tuple (D∗, c∗). Assume first that the optimal strategies are not on the boundary. When D approaches infinity, (30) is negative while (31) is positive. If (30) is larger than (31) when D is zero, there exists a unique fixed-point solution. At this fixed point, the ISP and the CP cannot benefit from changing their strategies unilaterally. We can solve for c∗ and D∗ numerically using a binary search. If (30) is smaller than (31) when D is 0, the best strategy of the CP is exactly c = 0. The physical interpretation is that the increased revenue from advertisers cannot compensate for the investment in the contents. Once D∗ and c∗ are derived, we can solve for ps∗ and q∗ subsequently. In this fixed-point equation, pt greatly influences the optimal investment c∗. When pt grows, the right-hand sides of (30) and (31) both decrease. The crossing point of the two curves (30) and (31) shifts toward smaller c, as shown in Lemma 6. Intuitively, the CP offers less content when the ISP charges a positive pt. The boundary case of the optimal strategies as well as the bargaining games over pt are analyzed in [12].
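The binary search mentioned above can be sketched as follows: since (30) is strictly decreasing and (31) strictly increasing in D, bisection on their difference locates the unique crossing. All parameter values below are illustrative assumptions, not taken from the paper.

```python
import math

def solve_fixed_point(K=20.0, M=100, B=10.0, vbar=10.0, pt=1.0,
                      D00=50.0, alpha=10.0, beta=0.5, pr=1.0, delta=0.0):
    """Bisection for the fixed point (D*, c*) of equations (30) and (31)."""
    MB = M * B

    def c_from_30(D):   # CP first-order condition (30), decreasing in D
        return K * (MB ** 2 * vbar / (MB + D * vbar) ** 2 - pt) - 1.0

    def c_from_31(D):   # ISP pricing relation (31), increasing in D
        expo = (D * (4.0 * alpha * pr - beta ** 2) / (2.0 * alpha * pr)
                - D00 + alpha * pr - (1.0 - delta) * alpha * pt) / K
        return math.exp(min(expo, 700.0)) - 1.0   # guard against overflow

    lo, hi = 0.0, 1e4
    if c_from_30(lo) <= c_from_31(lo):    # curves never cross: corner c* = 0
        return lo, 0.0
    for _ in range(200):                  # bisection on the decreasing gap
        mid = 0.5 * (lo + hi)
        if c_from_30(mid) > c_from_31(mid):
            lo = mid
        else:
            hi = mid
    D = 0.5 * (lo + hi)
    return D, max(0.0, c_from_30(D))
```

At the returned point the two curves agree, so (30) and (31) hold simultaneously; with pt large enough the corner branch returns c∗ = 0, matching the interpretation above.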
5 Evaluation
We present numerical results to reveal how the QoS, the prices of the ISP and the CP, as well as their utilities evolve when the price of the side payment changes. The impact of the bargaining power on the side payment is also illustrated. More numerical examples are given in the technical report [12].

Subscription Model: We consider a networking market where the demand function is given by D = 200 − 10(ps + ρpc) + 0.5q. The operational cost per unit of bandwidth is set to pr = 1. Two situations, ρ = 0.5 and ρ = 1.5, are evaluated. The tax rate δ is set to 0 for simplicity. As analyzed above, whether the side payment benefits the ISP and the CP depends on whether ρ is greater than 1 or not. In figure 2, pt has different impacts on the utilities of the ISP and the CP. When ρ > 1, end users are more sensitive to the change of pc than of ps. A positive pt leads to an increase of pc, causing a tremendous decrease of demand. Hence, both the ISP and the CP lose revenue under a positive pt. Figure 3 further shows that a positive pt yields a better QoS if ρ < 1 and a worse QoS if ρ > 1. Next, the ISP and the CP bargain with each other to determine pt. We relax the choice of pt so that it can be negative. In the pre-bargaining game, pt is independent of the bargaining power γ. The optimal pt is obtained when ps∗ decreases to 0, its lower bound. We evaluate pt by changing ρ and α in figure 4. When ρ increases from 0.2 to 2, pt decreases until it becomes negative. A negative pt means that the ISP needs to transfer revenue to the CP instead. When ρ = 1, pt can be an arbitrary value as long as ps∗ and pc∗ are nonnegative. Figure 4 also shows that a larger α results in a smaller absolute value of pt.
Fig. 2. Subscription Model: Utilities of the ISP and the CP vs. the price of side payment pt (ρ = 1.5 and ρ = 0.5)

Fig. 3. Subscription Model: The QoS level provided by the ISP vs. pt (ρ = 1.5 and ρ = 0.5)
Advertisement Model: In the advertisement model, we consider the demand function D = K log(1 + c) − 10ps + 0.5q. The coefficient K reflects the efficiency of the CP's investment in attracting end users. The valuation of each click/browsing follows a uniform distribution in the range [0, 10]. The total budget of advertisers is set to 1000. We conduct two sets of experiments. The first one is to evaluate
Fig. 4. Subscription Model: Pre-bargaining of pt (best pt charged by the ISP vs. ρ, for α = 5, 10, 15)

Fig. 5. Advertisement Model: The CP's investment at the equilibrium vs. pt (K = 10, 20, 30)

Fig. 6. Advertisement Model: The CP's utility at the equilibrium vs. pt (K = 10, 20, 30)

Fig. 7. Advertisement Model: The ISP's utility at the equilibrium vs. pt (K = 10, 20, 30)
the impact of the side payment on the best strategies of the ISP and the CP. The second one is to find the optimal pt in the pre-bargaining game. In figure 5, the CP's investment is a decreasing function of pt. When pt is large enough, c reduces to 0. Figure 6 illustrates the utility of the CP when pt and K change. For K = 10, the CP's utility first increases and then decreases as pt increases. For the cases K = 20 and 30, an increase of pt usually leads to a decrease of revenue. In figure 7, the utility of the ISP with K = 10 and 20 first increases and then decreases when pt grows. These curves present important insights into the interaction between the CP and the ISP. If the contents invested in by the CP can bring a large demand, the side payment is harmful to both the ISP and the CP. On the contrary, when the efficiency K is small, the CP can obtain more utility by paying money to the ISP.
6 Conclusion and Future Work
In this paper, we first answer in which situations the side payment charged by the ISP is beneficial for the ISP (or the CP). Then, we study how the price of the side payment is determined. Our models take into account three important features: the relative price sensitivity, the CP's revenue models, and the QoS provided by the ISP. With the subscription model, the relative price sensitivity determines whether the ISP should charge the side payment from the CP or not. With the advertisement model, the charging of the side payment depends on the ability of the CP's investment to attract demand.
References
1. Hahn, R., Wallsten, S.: The economics of net neutrality. The Berkeley Economic Press Economists' Voice 3(6), 1–7 (2006)
2. Musacchio, J., Schwartz, G., Walrand, J.: A two-sided market analysis of provider investment incentives with an application to the net-neutrality issue. Review of Network Economics 8(1) (2009)
3. Altman, E., Bernhard, P., Caron, S., Kesidis, G., Rojas-Mora, J., Wong, S.L.: A study of non-neutral networks with usage-based prices. In: 3rd ETM Workshop of ITC Conference (2010)
4. Altman, E., Hanawal, M.K., Sundaresan, R.: Nonneutral network and the role of bargaining power in side payments. In: NetCoop, Ghent, Belgium (November 2010)
5. Economides, N., Tag, J.: Net neutrality on the Internet: A two-sided market analysis. Working paper, http://ideas.repec.org/p/net/wpaper/0714.html
6. Claudia Saavedra, V.: Bargaining power and the net neutrality debate. Working paper (2010), http://sites.google.com/site/claudiasaavedra/
7. Choi, J.P., Kim, B.C.: Net neutrality and investment incentives. Working paper (2008), http://ideas.repec.org/p/ces/ceswps/_2390.html
8. Ma, T.B., Chiu, D.M., Lui, J.C.S., et al.: On cooperative settlement between content, transit and eyeball Internet service providers. To appear in IEEE/ACM Trans. on Networking
9. Bangera, P., Gorinsky, S.: Impact of prefix hijacking on payments of providers. In: Proc. of COMSNETS 2011 (January 2011)
10. El-Azouzi, R., Altman, E., Wynter, L.: Telecommunications network equilibrium with price and quality-of-service characteristics. In: Proc. of ITC (2003)
11. Liu, J., Chiu, D.M.: Mathematical modeling of competition in sponsored search market. In: ACM Workshop on NetEcon 2010, Vancouver (2010)
12. Altman, E., Legout, A., Xu, Y.D.: Network non-neutrality debate: An economic analysis. Technical report (2010), http://arxiv.org/abs/1012.5862
13. Hermalin, B.E., Katz, M.L.: The economics of product-line restrictions with an application to the network neutrality debate. Information Economics and Policy 19, 215–248 (2007)
Strategyproof Mechanisms for Content Delivery via Layered Multicast

Ajay Gopinathan and Zongpeng Li
Department of Computer Science, University of Calgary
{ajay.gopinathan,zongpeng}@ucalgary.ca
Abstract. Layered multicast exploits the heterogeneity of user capacities, making it ideal for delivering content such as media streams over the Internet. In order to maximize either its revenue or the total utility of users, content providers employing layered multicast need to carefully choose a routing, layer allocation and pricing scheme. We study algorithms and mechanisms for achieving either goal from a theoretical perspective. When the goal is maximizing social welfare, we prove that the problem is NP-hard, and provide a simple 3-approximation algorithm. We next tailor a payment scheme based on the idea of critical bids to derive a truthful mechanism that achieves a constant fraction of the optimal social welfare. When the goal is revenue maximization, we first design an algorithm that computes the revenue-maximizing layer pricing scheme, assuming truthful valuation reports. This algorithm, coupled with a new revenue extraction procedure for layered multicast, is used to design a randomized, strategyproof auction that elicits truthful reports. Employing discrete martingales to model the auction, we show that a constant fraction of the optimal revenue can be guaranteed with high probability. Finally, we study the efficacy of our algorithms via simulations.
1 Introduction
Data dissemination applications such as media streaming, file downloading and video conferencing represent an increasingly significant fraction of today's Internet traffic [1]. A natural protocol for one-to-many content delivery is multicast [2,3], which reduces redundant transmissions and utilizes network bandwidth efficiently. The Internet is an inherently heterogeneous 'network of networks' with diverse connection types, and therefore disparate download capabilities among its users. As a result, single-rate multicast lacks the flexibility required to cater simultaneously to the needs of every user. Layered multicast [4,5,6] offers an attractive solution by encoding the media content into layers of varying sizes: a base layer provides basic playback quality, and improvement is possible through the reception of a flexible number of enhancement layers. Users are then able to enjoy playback quality commensurate with their download capacities [4,5,6]. The Internet contains private entities driven by selfish and often commercial interests. Content providers are thus compelled to include social and economic considerations when computing an appropriate multicast scheme. Previous studies on layered multicast have focused on optimizing metrics such as throughput
J. Domingo-Pascual et al. (Eds.): NETWORKING 2011, Part II, LNCS 6641, pp. 82–96, 2011.
© IFIP International Federation for Information Processing 2011
[5,6], and do not consider the potential for strategic behaviour by users. Users subscribing to media content via layered multicast have different valuations for receiving the service [7,8], which is naturally proportional to the number of layers received. However, such a valuation is private to the user itself, who may strategically misreport its value for potential economic benefit, e.g., in the hope of receiving the multicast service with a lower payment. It is critical for content providers to take such strategic behaviour into consideration when routing and pricing the layers. An altruistic content provider aims to maximize social welfare, i.e., the sum of the user valuations for layers received. A commercially motivated content provider wishes to maximize its own revenue instead. The challenge for the content provider in either case is to determine simultaneously a layer allocation, routing and pricing scheme that approaches the stated goal, while ensuring users have no incentive to misreport their valuations. In this paper, we provide a theoretical study of the classical economic problems of maximizing social welfare and revenue in the layered multicast setting. We adopt a network information flow approach to model layered multicast, and design algorithms and mechanisms for maximizing either social welfare or revenue, all the while treating multicast receivers as strategic agents. We focus on the common cumulative layering scheme, where decoding layer k requires the successful reception of layers 1 through k. Our solutions are both efficiently computable and provably strategyproof. A mechanism is strategyproof if users have no incentive to lie about their true valuations of the multicast service to the content provider. We first show that the social welfare maximization problem for layered multicast is NP-Hard. Nevertheless, we design a 3-approximation algorithm for the problem.
Our algorithm exploits network coding [9] to find a layer allocation that simultaneously maximizes social welfare and guarantees a feasible routing scheme. We next focus on the problem of eliciting truthful valuation reports for maximizing social welfare. It turns out that the direct application of the well-known VCG mechanism [10,11,12] is not strategyproof. We design a payment scheme built on the technique of finding the minimum critical bid capable of changing the outcome of the approximation algorithm, and proceed to prove that the resulting mechanism using this payment scheme is truthful. We next focus on designing mechanisms that maximize revenue for content providers. The first challenge here is computing an appropriate layer pricing scheme for maximizing revenue. We show that a greedy algorithm that computes the best price for layer k, under the assumption that this same price will be used for all layers k′ ≥ k, is indeed optimal. The next challenge is to design a mechanism to elicit user valuations truthfully. We focus on the more realistic prior-free setting [13], where no information on the distribution of user valuations is known a priori. We first prepare a new revenue extraction procedure tailored to the layered multicast setting. This procedure, coupled with the revenue-maximizing pricing algorithm, is used as an ingredient in a strategyproof, randomized auction first proposed by Goldberg et al. [13]. We model the auction using discrete martingales [14], and show that one can achieve a
δ-fraction of the optimal revenue with probability at least 1 − 2 exp(−(1 − 2δ)²/(2α)), where α is the ratio of the maximum contribution by any agent to the optimal revenue. Due to space constraints, we omit a number of technical results and their corresponding proofs from this paper. The interested reader is urged to examine the full paper [15] for further details. The rest of this paper is organized as follows – we discuss related work in Sec. 2, and introduce our model and notations in Sec. 3. Sec. 4 and Sec. 5 study social welfare maximization, Sec. 6 studies revenue maximization. Simulation results are presented in Sec. 7 before we conclude the paper.
2 Previous Research
Traditionally, computing an optimal multicast routing scheme is modeled as optimal Steiner tree packing, a well-known NP-Hard problem [2,3]. The seminal work of Ahlswede et al. [9] on network coding dramatically changed the landscape of multicast algorithm design. Exploiting the fact that information can be both replicated and encoded, network coding leads to efficient polynomial-time algorithms for optimal multicast routing [16,17]. The field of mechanism design originates from economic theory, where the goal is to implement a desired social choice in the presence of agents that behave strategically [8]. The seminal work of Vickrey in auction design [10] and Clarke's pivot payment rule [11] pioneered research on strategyproof mechanism design, which culminated in the general VCG mechanism as presented by Groves [12]. The VCG mechanism is the best-known strategyproof method for maximizing social welfare, but in general fails to be truthful when the social welfare maximization problem is NP-Hard and approximate solutions are used [7]. Revenue-maximizing mechanisms are known as optimal auctions in the parlance of economic theory [8]. Classic literature on such auction mechanism design assumes that user valuations are drawn from a probability distribution known to the auctioneer [18]. The seminal work of Myerson [18] showed that applying the VCG mechanism [10,11,12] to the virtual valuations of users yields a revenue-maximizing mechanism. From a computer science perspective, however, the prior-free setting where such distribution information is not assumed is more realistic yet also more challenging. The work of Goldberg et al. [13] shows that deterministic, revenue-maximizing auctions that are symmetric (i.e., independent of any ordering on agents) and strategyproof do not exist. Consequently, they design a randomized, strategyproof auction that guarantees a constant fraction of the optimal revenue. Borgs et al. [19] further consider budget-constrained agents, and show that when budgets are private information, the VCG scheme is not truthful; they design a randomized revenue-maximizing auction instead.
3
Preliminaries
We model the network as a directed graph G = (V, E). Each directed edge uv ∈ E has a finite capacity C(uv). A distinguished source node s ∈ V provides a multicast
Strategyproof Mechanisms for Content Delivery via Layered Multicast
85
streaming service. The media is encoded into K layers, where the size of layer k ∈ [1..K] is l_k. We focus on the most commonly used layering technique, the cumulative layering scheme [20,21], in which decoding layer k requires the use of layers 1 through k − 1 as well. Let T denote the set of users or agents in the network potentially interested in this multicast service. We take the network flow approach to modeling the multicast routing scheme, and employ network coding within each data layer to enable polynomial time computability of the optimal flow for a given layer. Our model for computing multicast flows is based on a classic result on multicast network coding, which states that a multicast rate of d is feasible if and only if it is a feasible unicast rate to each receiver [9]. This allows us to view the optimal flow within a network coded layer as the union of conceptual unicast flows [17,16] to each receiver, where these flows do not compete for bandwidth. For an agent i ∈ T, let t_i ∈ V be the corresponding node in the network. We assume that an agent i is willing to pay v_i ∈ Z+ monetary units for receiving a multicast layer, where without loss of generality v_i is assumed to be an integer. The value v_i is private information known only to i. While we plan to consider more complex valuation functions in the future, the present work nonetheless provides valuable insight on the design of strategyproof mechanisms for layered multicast. An agent i may be charged a price p_k for receiving layer k. Let x be a 2D matrix with each entry x_i^k indicating whether agent i is allocated layer k or not. We also use x_i when referring to the layer allocation for agent i, while x^k denotes the allocation of layer k for all agents. If an agent receives layers 1 through k and is charged p_l for each layer l received, then its overall utility is given as u_i = v_i · k − Σ_{l=1}^{k} p_l.
Each agent is assumed to be selfish and rational, and behaves strategically with the aim of maximizing its utility function. The utility of the content provider, on the other hand, is simply the sum of all payments received. A mechanism is essentially a protocol that implements some desired social choice function. We will focus on revelation mechanisms [8], where the only strategy available to each agent in the network is to declare its valuation for receiving a data layer. In this case, the mechanism reduces to the well known auction problem, and hence we will also refer to the value declared by agent i as the bid b_i. Let b denote the bid vector of all agents, and denote by b_{−i} the bid vector of all agents but i. For a given bid vector, the utility of agent i when bidding b_i will be denoted as u_i(b_i, b_{−i}). Our goal is to design mechanisms that either maximize social welfare or revenue. The social welfare of a mechanism is simply the sum of the utilities of all agents in the system, including the mechanism designer (in this case, the content provider), which is equivalent to the sum of all user valuations since payments and revenue cancel each other. A revenue-maximizing mechanism or auction maximizes the payments collected from all agents in the system. The seminal work of Myerson [18] paved the way for strategyproof revenue-maximizing auctions, under the assumption that agent valuations were drawn from distributions known to the auctioneer.
86
A. Gopinathan and Z. Li
In contrast, recent work in the computer science community [13,19,8] considers revenue-maximizing auctions when this information is unavailable. We will focus on this latter, prior-free setting in this paper. A mechanism is said to be strategyproof if the dominant strategy of every agent is to bid its true valuation, regardless of the bids submitted by other agents, i.e., u_i(v_i, b_{−i}) ≥ u_i(b_i, b_{−i}), ∀b_i ≠ v_i, ∀b_{−i}. The following characterization of truthfulness, due to Myerson [18], will be particularly useful:

Lemma 1. [Myerson, 1981] Let x_i(b_i) be the allocation function for bidder i with bid b_i. A mechanism is strategyproof if and only if the following hold for a fixed b_{−i}:
– x_i(b_i) is monotonically non-decreasing in b_i
– Bidder i bidding b_i is charged b_i x_i(b_i) − ∫_0^{b_i} x_i(z) dz

Observe that the payment function is completely determined by the allocation function, and vice versa, for a fixed b_{−i}. This implies two equivalent ways of viewing a truthful mechanism: (i) there exists a critical bid b̂_i that depends only on b_{−i}, such that if i bids at least b̂_i, then i is allocated the item; or (ii) the payment of every agent should not depend on its own bid b_i. We will use the first point of view to design strategyproof mechanisms that achieve efficiency in Sec. 5, while the second perspective will be useful when we design truthful revenue maximization mechanisms in Sec. 6.
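To make Lemma 1 concrete, here is a small runnable illustration of our own (not from the paper): for a single-item, highest-bid-wins allocation, the charge b_i x_i(b_i) − ∫_0^{b_i} x_i(z) dz collapses to the critical bid, which here is the highest competing bid. The function names are ours.

```python
# Our own illustration of Lemma 1 (not from the paper): for a single-item,
# highest-bid-wins allocation, the Myerson charge reduces to the critical bid.

def alloc(bid, others):
    """x_i(b_i): 1 if the bid beats every competing bid, else 0. Monotone."""
    return 1 if bid > max(others) else 0

def myerson_payment(bid, others, step=0.01):
    """Numerically evaluate b_i * x_i(b_i) - integral_0^{b_i} x_i(z) dz."""
    integral = sum(alloc(z * step, others) * step
                   for z in range(int(bid / step)))
    return bid * alloc(bid, others) - integral

print(round(myerson_payment(7.0, [3.0, 5.0]), 1))   # -> 5.0, the highest competing bid
print(myerson_payment(4.0, [3.0, 5.0]))             # -> 0.0, a losing bid pays nothing
```

Raising the winner's bid from 7 to any higher value leaves the payment at the critical bid of 5, which is exactly why truthful bidding is dominant.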
4
The Social Welfare Maximization Problem
In the social welfare maximization (SWM) problem, we seek a feasible multicast routing and layer allocation scheme that maximizes the sum of the utilities for all agents and the utility of the service provider. We can formulate SWM as the following linear integer program (IP):

Maximize  Σ_i Σ_k v_i x_i^k                                            (1)

Subject to:
  Σ_{v ∈ N(u)} [f_i^k(uv) − f_i^k(vu)] = 0        ∀k, ∀i, ∀u
  f_i^k(uv) ≤ f^k(uv)                             ∀k, ∀i, ∀uv
  Σ_k f^k(uv) ≤ C(uv)                             ∀uv
  x_i^{k+1} ≤ x_i^k ≤ f_i^k(t_i s) / l_k          ∀k = 1..K−1, ∀i
  f_i^k(uv), f^k(uv) ≥ 0;  x_i^k ∈ {0, 1}         ∀k, ∀i, ∀uv
Since the payments by the agents cancel the content provider's utility, both terms can be ignored in the objective function. We use the flow vector f to denote the multicast flow. The variable f_i^k(uv) indicates the conceptual flow [17,16] to agent t_i carrying layer k on edge uv. Similarly, the variable f^k(uv) indicates the actual flow on edge uv. For succinctness, the above formulation assumes a
Algorithm 1. Approximation algorithm for SWM
Input: Set of agents T, network G
Output: A 3-approximate multicast routing and layer allocation scheme, x
1  Initialize x_i^k := 0, s(i) := 0 ∀i, ∀k ;
2  feasible := True ;
3  while feasible do
4    Compute max-flow values m_i for each agent i ;
5    k_i := max{k | Σ_{j=s(i)}^{k} l_j ≤ m_i} ∀i ;
6    if k_i = 0 for all i then
7      feasible := False ;
8    else
9      foreach k ∈ 1 . . . K do
10       W(k) := ∅ ;
11       foreach t_i ∈ T do
12         if k ≤ k_i then
13           W(k) := W(k) ∪ {i} ;
14     foreach k ∈ 1 . . . K do
15       S(k) := Σ_{i ∈ W(k)} v_i · k ;
16     k* := arg max_k S(k) ;
17     foreach i ∈ W(k*) do
18       x_i^k := 1 ∀k = 1 . . . k* ;  s(i) := k* + 1 ;
19     Solve the LP obtained from Eq. (1) on G with x fixed ;
20     Set G := residual network of G ;
virtual, uncapacitated directed edge from each t_i ∈ T to s. In the first constraint, N(u) denotes the set of u's neighbours in G. This constraint ensures the conceptual unicast flow is conserved at all nodes. The second constraint captures the notion that the true flow on an edge for a given layer k is the maximum of all conceptual flows on that edge carrying layer k. The third requirement ensures that capacity constraints are respected on all edges. The final constraint models: (i) the cumulative layering scheme requirement, i.e., a node is able to play layer k only if layers 1 through k − 1 have been obtained as well, and (ii) a layer can be decoded only if all its l_k bits have been received. However, even disregarding the routing dimension, the decision version of SWM is NP-Hard. The proof for this hardness result can be found in the full version of this paper [15]. We are thus motivated to design an approximation scheme for SWM. Algorithm 1 shows our approximation algorithm for the SWM problem. We first compute the individual max flows for each agent, to decide the maximum number of layers it can receive. We then create K sets, where the set W(k) contains all agents that can receive up to at least layer k. If the set W(k*) yields the maximum social welfare, we set all agents in W(k*) to receive up to k* cumulative layers. We are guaranteed that such a multicast scheme is feasible due to the classic result on multicast feasibility with network coding stated earlier. The next theorem shows that Algorithm 1 achieves a constant approximation ratio:
Theorem 2. Alg. 1 is a 3-approximation algorithm for SWM. A proof of the above theorem can be found in the full version of this paper [15].
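The selection step at the heart of Algorithm 1 can be sketched in a few lines. The sketch below is our own simplification: the max-flow values m_i are taken as given inputs (the paper computes them on the coded network), and the residual-network loop is omitted.

```python
# Sketch (ours, simplified) of one iteration of Algorithm 1's selection step:
# given each agent's max-flow value m_i and the layer sizes, pick the depth k*
# whose eligible set W(k*) maximizes welfare sum(v_i) * k*.

def select_layer_set(valuations, max_flows, layer_sizes):
    K = len(layer_sizes)

    def deepest(m):
        """k_i: deepest layer an agent with max-flow m can decode (cumulative sizes)."""
        total, k = 0, 0
        for size in layer_sizes:
            total += size
            if total > m:
                break
            k += 1
        return k

    k_of = [deepest(m) for m in max_flows]
    best_k, best_set, best_welfare = 0, [], 0
    for k in range(1, K + 1):
        W = [i for i in range(len(valuations)) if k_of[i] >= k]
        welfare = sum(valuations[i] for i in W) * k
        if welfare > best_welfare:
            best_k, best_set, best_welfare = k, W, welfare
    return best_k, best_set, best_welfare

# Three agents, two layers of sizes 2 and 3 (cumulative thresholds 2 and 5)
print(select_layer_set([3, 1, 1], [2, 5, 5], [2, 3]))   # -> (1, [0, 1, 2], 5)
```

In the example, serving all three agents one layer (welfare 5) beats serving the two deep agents two layers (welfare 4), matching the greedy rule in lines 14–16 of Algorithm 1.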
5
Strategyproof Social Welfare Maximization
In this section, we design a strategyproof mechanism for content providers who wish to seek a layer allocation scheme that maximizes social welfare. Our goal is to ensure that the mechanism is both efficiently computable and strategyproof. A common approach here is to directly apply the celebrated Vickrey-Clarke-Groves (VCG) mechanism [10,11,12] in conjunction with Algorithm 1. We have shown that the latter is efficiently computable with at most a constant factor loss in the social welfare, while the former is a well known strategyproof mechanism. However, applying the VCG mechanism while using suboptimal algorithms can harm truthfulness [7], and we demonstrate that this is also the case with Algorithm 1; please see the full paper for details [15]. Next, we design a payment scheme that makes Algorithm 1 strategyproof. The key insight is to ensure that each agent is made to pay a critical bid for every outcome selected during each iteration of Algorithm 1. Recall that the outcome of every feasible iteration of Algorithm 1 is a chosen set W(k), where all agents in this set are allocated up to layer k. If the absence of an agent i causes some other outcome W(k') to be selected by the algorithm, then this agent should be made to pay the minimum bid required to ensure W(k) is selected ahead of W(k'). Let p_i(r) be the payment charged to agent i in iteration r of Algorithm 1. The payment p_i(r) can be computed using the procedure shown in Algorithm 2, which is called during every iteration of Algorithm 1, for every agent i. The total payment of each agent is then the sum of all payments incurred over all feasible iterations of Algorithm 1, p_i = Σ_r p_i(r). It can be shown that this payment scheme yields a truthful mechanism. Theorem 3. Using the payment scheme in Algorithm 2, bidding truthfully is a dominant strategy for all agents when Algorithm 1 is used to solve SWM. The proof for this theorem can be found in the full version of this paper [15].
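The critical-payment idea can be made concrete with a self-contained sketch (our own, simplified): the network is abstracted into a fixed per-agent depth limit, and choose(T) mimics one Algorithm 1 iteration by picking the depth k that maximizes k times the total valuation of agents able to receive at least k layers.

```python
# Sketch (ours, simplified) of Algorithm 2's critical payment: an agent pays
# the welfare that its presence displaces from the counterfactual outcome.

def choose(T, v, depth):
    """One Algorithm-1-style iteration: (k, W(k)) maximizing k * sum of valuations."""
    best_k, best_W, best_val = 0, set(), 0
    for k in range(1, max(depth.values()) + 1):
        W = {i for i in T if depth[i] >= k}
        val = k * sum(v[i] for i in W)
        if val > best_val:
            best_k, best_W, best_val = k, W, val
    return best_k, best_W

def payment(i, T, v, depth):
    """p_i(r): rerun the selection without agent i and charge per Algorithm 2."""
    k, W = choose(T, v, depth)
    k2, W2 = choose(T - {i}, v, depth)
    if k2 < k:
        return sum(v[j] for j in W2 - W) * k2
    if k2 > k:
        return sum(v[j] for j in W & W2) * (k2 - k)
    return 0

v     = {1: 3, 2: 1, 3: 1}     # agent 1 is the high bidder
depth = {1: 1, 2: 2, 3: 2}     # agents 2 and 3 can receive two layers
print(payment(1, {1, 2, 3}, v, depth))   # -> 2: without agent 1, depth k = 2 wins
```

Here agent 1's presence swings the outcome from two layers for agents 2 and 3 to one layer for everyone, so it pays the displaced welfare of 2; agents 2 and 3 do not change the outcome and pay nothing.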
6
Strategyproof Revenue Maximization
It is known that mechanisms that achieve optimal social welfare have poor revenue-generating properties [8]. In this section, we design strategyproof mechanisms for optimizing the revenue of the content provider. To attain this goal, we first need to find a pricing scheme that maximizes the content provider's revenue, assuming truthful valuation reports are given. In the interest of fairness, we focus on the case where the content provider chooses a fixed price per layer. Once we know how to compute the optimal layer prices, the next challenge is to design a mechanism that is strategyproof. We focus on the prior-free setting, in which no Bayesian information on the valuations of the agents
Algorithm 2. Strategyproof Payment Scheme for Algorithm 1
Input: A chosen allocation W(k) in iteration r of Algorithm 1, set of agents T, an agent i
Output: Payment charged to agent i for iteration r, p_i(r)
1  T' := T \ {i} ;
2  Use Algorithm 1 to find allocation W(k') in round r for T' ;
3  if k' < k then
4    H := W(k') \ W(k) ;
5    p_i(r) := Σ_{j ∈ H} v_j · k' ;
6  else if k' > k then
7    H := W(k) ∩ W(k') ;
8    p_i(r) := Σ_{j ∈ H} v_j · (k' − k) ;
9  else
10   p_i(r) := 0 ;
are known. Such an assumption is both realistic and without loss of generality: it implies that any guarantee our mechanisms make applies for all distributions of user valuations.

6.1
Revenue-Maximizing Layer Prices
Let us first consider how to compute the optimal layer prices when agent valuations have been disclosed truthfully. First, observe that when there is only a single layer (K = 1), the optimal revenue-maximizing layer price p is given as p = arg max_{v_i} v_i · |{v_l | v_l x_l ≥ v_i}|. That is, the optimal price is the valuation v_i that maximizes v_i times the number of allocated agents whose valuations are at least v_i. One may be tempted to use this equation to find the best prices beginning from layer 1 through K. However, doing so ignores the cumulative layering requirement, and results in a revenue that has an unbounded gap from the optimal. For example, consider three agents with v_1 = 3 and v_2 = v_3 = 1, and x_1 = (1, 0), x_2 = x_3 = (1, 1). The greedy scheme just mentioned charges p_1 = 3 for layer 1, resulting in an overall revenue of 3. The optimal pricing scheme is p_1 = p_2 = 1, resulting in a revenue of 5. It is easy to make this example arbitrarily bad. Instead, one needs to recursively optimize the layer prices greedily beginning at each layer, while considering the potential revenue from higher layers. This is precisely the idea behind Algorithm 3. The algorithm takes as input an algorithm A for computing the social welfare maximizing layer allocation. Hence, A can either be Algorithm 1, or an integer program solver used in conjunction with Eq. (1). Algorithm 3 computes the best price for layer k, under the assumption that this price will be used for all layers k and above. Once this price is set, we fix the allocation for layer k by assigning it only to agents whose valuation is at least the current price. This may result in some agents being "deallocated" layer k, which in turn leads to excess capacity in the network. Hence, we re-solve the welfare maximizing layer allocation for layer
Algorithm 3. Computing Revenue-Maximizing Layer Prices
Input: Set of agents T, algorithm A for solving SWM, network G
Output: Optimal layer price vector p, revenue R
1  Initialize k := 1, R := 0 ;
2  while k ≤ K do
3    Fixing x^1 . . . x^{k−1}, use A to compute allocation x^k . . . x^K on G ;
4    Let v_i^l := v_i x_i^l ∀i, ∀l = k . . . K ;
5    p^k := arg max_{v_j^k} v_j^k · |{v_i^l | v_i^l ≥ v_j^k ∧ l ≥ k}| ;
6    ∀i such that v_i^k < p^k, fix x_i^k := 0 ;
7    Set R := R + p^k · |{v_i^k | v_i^k ≥ p^k}| and let k := k + 1 ;
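The pricing loop can be sketched as runnable code. This is our own simplification: the allocation x is fixed up-front, so the network re-solve by A in line 3 is omitted, and the candidate-revenue rule follows our reading of line 5. On the three-agent example in the text it recovers the optimal prices p_1 = p_2 = 1 and revenue 5, rather than the greedy revenue of 3.

```python
# Our simplified sketch of Algorithm 3's pricing loop (no network re-solve):
# walk layers 1..K, pick the revenue-maximizing price among surviving bids,
# and deallocate agents priced out of layer k from all layers >= k.

def layer_prices(v, alloc, K):
    prices, R = [], 0
    active = [row[:] for row in alloc]              # mutable copy of x
    for k in range(K):
        bids = [v[i] for i in range(len(v)) if active[i][k]]

        def rev(p):
            # candidate revenue: price p times every remaining (agent, layer >= k) slot
            return p * sum(1 for i in range(len(v)) for l in range(k, K)
                           if active[i][l] and v[i] >= p)

        p = max(bids, key=rev, default=0)
        for i in range(len(v)):
            if v[i] < p:
                for l in range(k, K):
                    active[i][l] = 0                # cumulative: lose k and above
        prices.append(p)
        R += p * sum(1 for i in range(len(v)) if active[i][k])
    return prices, R

print(layer_prices([3, 1, 1], [[1, 0], [1, 1], [1, 1]], 2))   # -> ([1, 1], 5)
```

The key difference from per-layer greedy pricing is the rev() lookahead over layers ≥ k, which stops a high layer-1 price from destroying the revenue available at higher layers.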
Algorithm 4. ProfitExtract(R, S, w)
Input: Target revenue R, bidder set S with valuation vector w
Output: Winner vector y
1  Set price p := R/|S|
2  For each bidder i ∈ S with w_i < p, remove i from S
3  If S = ∅, return failure
4  If no bidder was removed in Step 2, set y_i := 1 for all i ∈ S. Return price p and winner vector y
5  Otherwise repeat from Step 1
k + 1. Algorithm 3 is optimal; the proof can be found in the full version of the paper [15]. Theorem 4. Algorithm 3 computes revenue-maximizing prices with respect to the social welfare computed by A.

6.2
A Randomized Revenue-Maximizing Mechanism
We have shown that if user valuations are known, then the optimal prices can be computed using Algorithm 3. The next challenge is to elicit user valuations truthfully. In designing a strategyproof revenue-maximizing mechanism, the key obstacle in the prior-free setting is the lack of information on the distribution of user valuations. We require that the mechanism can somehow “guess” an optimal price to be charged to an agent i. We know from Lemma 1 that if the “guessing” process does not use i’s valuation, then the dominant strategy for i is to declare its valuation truthfully. This intuition naturally suggests a sampling based approach. We can pick a random sampling of agents, whose valuations can be used to compute a good price. This price is then offered to the other agents not in the sample. To ensure truthfulness, we ignore the potential revenue from the sampled agents. This technique was originally used by Goldberg et al. in a random sampling auction [13] for digital goods, and we apply it as a platform to build our own revenue-maximizing auction for layered multicast.
Algorithm 5. Profit Extraction with Layers
Input: Set of agents S, allocation x, target revenue R
Output: Layer price vector p, winner vector z
1  Initialize r(k) := 0 ∀k = 1 . . . K ;
2  Set k := 0 ;
3  while R > 0 do
4    k := k mod K + 1 ; r(k) := r(k) + 1 ; R := R − 1 ;
5  Set k := K ;
6  while k > 0 do
7    Set next := 1 and let w be the vector with w_i := v_i x_i^k ∀i ∈ S ;
8    while True do
9      Run ProfitExtract(r(k), S, w) ;
10     if ProfitExtract returns failure then
11       r(k) := r(k) − 1 ;
12       if r(k) = 0 or k = 1 then break ;
13       r(next) := r(next) + 1 ;
14       next := next mod (k − 1) + 1 ;
15     else
16       Let y and p be the winner vector and price respectively as returned by ProfitExtract ;
17       Set z_i^k := y_i ∀i ∈ S, and let p[k] := p ;
18       break ;
19   k := k − 1 ;
Algorithm 6. Competitive Auction Input: Set of agents, S Output: Total revenue R 1 Randomly assign bidders in S to one of two sets, A or B 2 Use Algorithm 3 to compute optimal price and revenue for sets A and B 3 Let RA and RB denote the revenue from sets A and B respectively 4 Run Algorithm 5 on A (resp. B) using target revenue RB (resp. RA ) 5 Return revenue gained from running Algorithm 5 successfully
We begin by introducing some key ingredients that will be required in the final mechanism. The first is a profit extraction algorithm [13], shown in Algorithm 4. The algorithm takes as input a target revenue R, and a set of agents S with a 1-dimensional valuation vector w for some item. Algorithm 4 essentially runs the Moulin-Shenker tâtonnement process, which attempts to whittle down the initial set of agents in S until the remaining agents can afford to share the target revenue R equally. Note that R, and therefore the price p offered to each agent i, is independent of i's valuation w_i, and the mechanism is therefore truthful. Let O be the optimal single-price revenue that can be made from S. Then observe that the mechanism is able to successfully find a set of agents to share the revenue R if and only if R ≤ O.
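The tâtonnement above can be written in a few lines. This is a runnable sketch with our own names; the returned winners are positions in the valuation list.

```python
# Runnable sketch of Algorithm 4 (ProfitExtract): repeatedly drop bidders who
# cannot afford an equal share of the target R; succeed when all survivors can
# pay R/|S|, fail when the set empties out.

def profit_extract(R, valuations):
    """Return (price, winners) sharing R equally, or None on failure."""
    S = list(range(len(valuations)))
    while S:
        price = R / len(S)
        survivors = [i for i in S if valuations[i] >= price]
        if len(survivors) == len(S):
            return price, S
        S = survivors
    return None            # R exceeds the optimal single-price revenue O

print(profit_extract(6, [1, 2, 4, 5]))    # -> (2.0, [1, 2, 3]): three bidders pay 2 each
print(profit_extract(12, [1, 2, 4, 5]))   # -> None: here O = 8, so R = 12 is unattainable
```

The two calls illustrate the R ≤ O characterization: for these valuations the best single price is 4, charged to two bidders, so any target up to 8 succeeds and anything above it fails.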
We wish to generalize the design to a truthful profit extraction mechanism for the case when valuations are multi-valued instead of single-valued as in Algorithm 4. Further, due to the cumulative layering scheme, we must ensure that if an agent is priced out of layer k, no revenue can be extracted from it for any layer k' > k. Our profit extraction scheme for agents with valuations for layers is shown in Algorithm 5. The algorithm again takes a target revenue R, and attempts to share it among agents in the set S. The allocation vector x is provided by a social welfare maximizing algorithm, and ensures that agents are only considered for layers which they can receive in the network. The algorithm utilizes the fact that the revenue extracted from any layer k cannot exceed the revenue from any layer k' < k. This follows since (a) agents have the same valuations for all layers, and (b) cumulative layering implies that the number of agents who can receive layer k does not exceed the number of agents that can receive any layer k' < k. As a result, Algorithm 5 begins by distributing R among all layers, while favouring lower layers. Beginning at layer k = K, it attempts to extract the target revenue r(k) for each layer using Algorithm 4 with the valid bids in layer k. If this fails, it reduces r(k) by 1 and redistributes the difference; since valuations are integral, the revenue available from each layer is integral. Once again, since the price offered at each layer to each agent is independent of its bid, Algorithm 5 is truthful. Further, if the revenue-maximizing pricing scheme for S generates a revenue of O and R ≤ O, Algorithm 5 will find a successful allocation of prices. We have now developed all the necessary machinery to design a strategyproof, revenue-maximizing mechanism. The full mechanism is shown in Algorithm 6. The algorithm randomly splits agents into two sets, A and B, and computes the optimal revenue R_A and R_B from each set.
It then attempts to extract R_A from agents in B and vice versa. Since the target revenue for each set in Algorithm 5 is independent of the bids of agents in that set, this algorithm is strategyproof. We are guaranteed a revenue of R = min(R_A, R_B). It remains now to analyze how R compares to the optimal revenue when all valuations are known exactly, which is at most R_A + R_B. Let p*(k) be the optimal layer price for layer k, and OPT be the optimal revenue when all agent valuations are known. In the optimal solution, let w(k) be the number of agents who contribute to OPT in each layer 1 . . . k. Therefore, we get OPT = Σ_k w(k) Σ_{l=1}^{k} p*(l). After the random splitting process, let w_A(k) and w_B(k) be the number of agents who end up in sets A and B respectively, who contribute to OPT up to k layers. Denote by M = Σ_{k=1}^{K} p*(k). Then M represents the highest possible contribution to OPT by any one agent. The parameter α = M/OPT is a measure of the ratio of the maximum contribution of any one agent to the optimal revenue. We will show that the performance of our mechanism hinges on α.

Theorem 5. For 0 < δ < 0.5, Algorithm 6 achieves a revenue of δ·OPT with probability at least 1 − 2 exp(−(1 − 2δ)²/(2α)).
Proof. First, observe that R_A ≥ Σ_k w_A(k) Σ_{l=1}^{k} p*(l), while a similar revenue guarantee holds for B. Since the revenue of Algorithm 6 is min(R_A, R_B), to achieve a revenue of at least δ·OPT we must have |R_A − R_B| ≤ (1 − 2δ)·OPT. Now, let X_i be the random variable which takes value 1 if agent i ends up in the set A during the random splitting process, and −1 if it ends up in B. Let t_i be the revenue contributed by i to OPT, so that t_i ≤ M for all i. Impose some arbitrary ordering on the set of agents, and define the random variable Y_j = Σ_{i ≤ j} X_i t_i, with Y_0 = 0. We then have in expectation:

E[Y_{j+1} | Y_1 . . . Y_j] = Y_j + E[X_{j+1} t_{j+1}] = Y_j + (1/2) t_{j+1} + (1/2)(−t_{j+1}) = Y_j

That is, the random variable Y_j forms a martingale sequence with respect to itself. If there are a total of n agents, then |Y_n − Y_0| = |Y_n| = |R_A − R_B|. Further, we have |Y_{j+1} − Y_j| ≤ M for all j ≤ n. Hence we can bound the probability of the event |R_A − R_B| ≥ (1 − 2δ)·OPT using Azuma's inequality [14] as follows:

Pr(|Y_n| ≥ (1 − 2δ)·OPT) ≤ 2 exp(−(1 − 2δ)²·OPT² / (2 Σ_{j=1}^{n} M²))
                         = 2 exp(−(1 − 2δ)²·OPT² / (2nM²))
                         ≤ 2 exp(−(1 − 2δ)²·OPT / (2M))
                         = 2 exp(−(1 − 2δ)² / (2α))

The first inequality is due to Azuma's concentration result on martingales with bounded differences [14], while the second comes about since nM ≤ OPT. We need to ensure strategyproofness is maintained at every stage of our mechanism. Hence, we use a strategyproof pricing scheme in conjunction with the algorithm A in Algorithm 3, and let q be the payment vector. We then run Algorithm 6 as described, and let r be the vector of revenue obtained from each agent. For each agent i in the winning set (either A or B), we charge max(q_i, r_i). It is easy to see that the entire mechanism is both strategyproof and revenue-maximizing.
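The structure of Algorithm 6 can be sketched for the single-layer case. This is our own simplification: profit_extract is the tâtonnement of Algorithm 4, and optimal_revenue computes the best single-price revenue of a set. Exactly one of the two cross-extractions can fail, so the auction earns min(R_A, R_B).

```python
import random

# Single-layer sketch (ours) of Algorithm 6: randomly split the bidders,
# compute each half's optimal single-price revenue, then try to extract each
# half's optimum from the *other* half. The target each side faces depends
# only on the other side's bids, which is what makes the auction strategyproof.

def optimal_revenue(vals):
    """Best single-price revenue: max over offered prices v of v * |{w >= v}|."""
    return max((v * sum(1 for w in vals if w >= v) for v in vals), default=0)

def profit_extract(R, vals):
    """Return R if the bidders can share target R equally, else 0 (failure)."""
    S = list(vals)
    while S:
        price = R / len(S)
        kept = [w for w in S if w >= price]
        if len(kept) == len(S):
            return R
        S = kept
    return 0

def competitive_auction(vals, rng):
    A, B = [], []
    for w in vals:
        (A if rng.random() < 0.5 else B).append(w)
    rA, rB = optimal_revenue(A), optimal_revenue(B)
    return max(profit_extract(rB, A), profit_extract(rA, B))

print(competitive_auction([5, 4, 3, 2, 1, 1], random.Random(0)))
```

Extraction of R_B from A succeeds exactly when R_B is at most A's own optimum R_A (and symmetrically), so the surviving extraction yields min(R_A, R_B), matching the revenue guarantee used in the analysis above.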
7
Simulations
We now present results from simulations performed to determine the effectiveness of our techniques in practical scenarios. We first studied the performance of Algorithm 1. We used BRITE [22] to generate random network topologies. Link capacities were distributed randomly to model the heterogeneous nature of the network. The multicast group was chosen randomly as well. For each topology, layer sizes were generated randomly between 1 and 5. Fig. 1a shows the social welfare achieved by Algorithm 1 and the optimal solution of Eq. (1) as computed by an integer linear program solver, for varying network sizes with 10 agents and 5 layers. In all cases, Algorithm 1 achieves at least 90% of the optimal social welfare. We next examined the effect of the number of layers on the performance
Fig. 1. The performance of Algorithm 1 is compared to the optimal social welfare for varying network sizes in (a), number of layers in (b) and number of multicast receivers in (c)
Fig. 2. The revenue obtained by Algorithm 6 is compared to the optimal revenue for varying number of agents in (a) and number of layers in (b)
of Algorithm 1. As can be seen in Fig. 1b, the greedy technique of Algorithm 1 starts to suffer when the number of layers increases beyond 3. The same effect is also observed in Fig. 1c, where Algorithm 1's approach becomes less effective as the number of agents increases. Nevertheless, in all cases observed, we obtain at least 90% of the optimal social welfare. We next implemented Algorithm 6, together with the pricing scheme of Algorithm 3. We then performed simulations on randomly generated sets of agents, with randomly chosen allocations and valuations. Fig. 2a shows the effect of the number of agents on the revenue gained by Algorithm 6. The revenue obtained by our mechanism improves significantly as the number of agents increases. This is due to the random splitting process: as the number of agents increases, the amount of revenue available from each set is more likely to be balanced. In Fig. 2b, we simulated the auction for different values of the maximum number of layers an agent may receive. Here, we find that the auction performs well in all cases and is not affected by the number of layers; it tends to obtain at least 40% of the optimal revenue.
8
Conclusion
In summary, we provide tools for maximizing social welfare as well as revenue for content providers using layered multicast to disseminate information over the Internet. The algorithms and mechanisms developed are both efficiently computable, and strategyproof. We provide a constant factor approximation algorithm for maximizing social welfare in layered multicast, which we augment with a tailored payment scheme for strategyproofness. We also show how to compute optimal prices for maximizing revenue, and design a randomized strategyproof mechanism with provable revenue guarantees. Simulation results confirm the effectiveness of the proposed solutions in practical settings.
References
1. Kurose, J., Ross, K.: Computer Networking: A Top-Down Approach Featuring the Internet. Addison-Wesley, Reading (2003)
2. Jain, K., Mahdian, M., Salavatipour, M.R.: Packing Steiner trees. In: Proceedings of the ACM-SIAM Symposium on Discrete Algorithms, SODA (2003)
3. Thimm, M.: On the approximability of the Steiner tree problem. In: Sgall, J., Pultr, A., Kolman, P. (eds.) MFCS 2001. LNCS, vol. 2136, p. 678. Springer, Heidelberg (2001)
4. McCanne, S., Jacobson, V., Vetterli, M.: Receiver-driven layered multicast. In: Proceedings of ACM SIGCOMM (1996)
5. Zhao, J., Yang, F., Zhang, Q., Zhang, Z., Zhang, F.: LION: Layered overlay multicast with network coding. IEEE Transactions on Multimedia 8, 1021 (2006)
6. Gopinathan, A., Li, Z.: Optimal layered multicast. ACM Transactions on Multimedia Computing, Communications and Applications 7(2) (2011)
7. Nisan, N., Ronen, A.: Algorithmic mechanism design (extended abstract). In: Proceedings of the ACM Symposium on Theory of Computing, STOC (1999)
8. Nisan, N., Roughgarden, T., Tardos, E., Vazirani, V. (eds.): Algorithmic Game Theory. Cambridge University Press, Cambridge (2007)
9. Ahlswede, R., Cai, N., Li, S.R., Yeung, R.W.: Network information flow. IEEE Transactions on Information Theory 46(4), 1204–1216 (2000)
10. Vickrey, W.: Counterspeculation, auctions, and competitive sealed tenders. Journal of Finance, 8–37 (1961)
11. Clarke, E.H.: Multipart pricing of public goods. Public Choice 11(1), 17–33 (1971)
12. Groves, T.: Incentives in teams. Econometrica: Journal of the Econometric Society, 617–631 (1973)
13. Goldberg, A., Hartline, J., Karlin, A., Saks, M., Wright, A.: Competitive auctions. Games and Economic Behavior 55(2), 242–269 (2006)
14. Azuma, K.: Weighted sums of certain dependent random variables. Tohoku Mathematical Journal 19(3), 357–367 (1967)
15. Gopinathan, A., Li, Z.: Strategyproof mechanisms for layered multicast, http://ajay.gopinathan.net
16. Li, Z., Li, B., Jiang, D., Lau, L.C.: On achieving optimal throughput with network coding. In: Proceedings of IEEE INFOCOM (2005)
17. Li, Z., Li, B.: Efficient and distributed computation of maximum multicast rates. In: Proceedings of IEEE INFOCOM (2005)
18. Myerson, R.B.: Optimal auction design. Mathematics of Operations Research, 58–73 (1981)
19. Borgs, C., Chayes, J., Immorlica, N., Mahdian, M., Saberi, A.: Multi-unit auctions with budget-constrained bidders. In: Proceedings of the ACM Conference on Electronic Commerce, EC (2005)
20. ISO/IEC: Generic coding of moving pictures and associated audio information. ISO/IEC 13818-2 (1995)
21. ITU: Video coding for low bit rate communication. ITU-T Recommendation H.263 (1998)
22. BRITE: Boston University Representative Internet Topology gEnerator, http://www.cs.bu.edu/brite/
A Flexible Auction Model for Virtual Private Networks Kamil Kołtyś, Krzysztof Pieńkosz, and Eugeniusz Toczyłowski Institute of Control and Computation Engineering, Warsaw University of Technology, Nowowiejska 15/19, 00-665 Warsaw, Poland [email protected]
Abstract. We consider the resource allocation problem related to Virtual Private Networks (VPNs). A VPN provides connections between geographically dispersed endpoints over a shared network. To realize a VPN service, a sufficient amount of bandwidth of network resources must be reserved for any traffic demand specified by a customer. We assume that there are many customers that want to purchase VPN services, and many network providers that offer their network resources for sale. We present a multicommodity auction model that enables the efficient management of the network resources in the market environment. At the same time, it is very convenient for the customers, as it allows them to specify the bandwidth requirements concerning a VPN in a very flexible way, including the pipe and hose VPN representations. The proposed model has also many other valuable properties, such as individual rationality and budget balance. The auction model has the form of an LP, for which the computational efficiency can be improved by applying the column generation technique. Keywords: auction model, virtual private network, bandwidth trading.
1
Introduction
A Virtual Private Network (VPN) is a logical network that is established over a shared network in order to provide VPN users with service comparable to dedicated private lines. A sufficient amount of bandwidth of a bundle of shared network resources must be reserved for a VPN service to satisfy the traffic demand pattern specified by the customer. The basic way of representing the set of traffic demand values of a VPN is in the form of the pipe model [1,2]. It requires that the VPN customer specifies, for each pair of endpoints, the maximum demand volume. This approach is applicable to VPNs for which the exact traffic demand matrix can be predicted. As the number of endpoints per VPN is constantly growing, the traffic demand patterns are becoming more and more complex. Therefore, for some VPNs, it is almost impossible to predict the maximum value of each traffic demand. Duffield et al. proposed in [1] the hose VPN model, in which the VPN customer specifies aggregate requirements for each VPN endpoint, and not for each pair of endpoints. In comparison to the pipe model, the hose model provides a simpler and more flexible way of specifying VPN traffic demands. The customer only defines the ingress and egress bandwidths of VPN endpoints, which can
J. Domingo-Pascual et al. (Eds.): NETWORKING 2011, Part II, LNCS 6641, pp. 97–108, 2011. © IFIP International Federation for Information Processing 2011
98
K. Kołtyś, K. Pieńkosz, and E. Toczyłowski
be more easily predicted than the traffic demand matrix. In the hose model it is assumed that all patterns of traffic demands that conform to the ingress and egress bandwidths of the VPN endpoints can be realized. In [6] the concept of an architecture for provisioning VPNs that arrive sequentially in a dynamic fashion is presented; however, it does not take into account the costs of network resources. The problem of determining the minimum-cost bandwidth reservation that satisfies all VPN traffic requirements in the hose model is analyzed in [3,4]. In [5] a more general model of VPN traffic demands, called the polyhedral model, is considered; it allows for specifying a set of required VPN traffic demands defined by some linear inequalities. In this paper we consider the general network resource allocation problem in the context of multilateral trade. In this problem the network resources are owned by several market entities, such as companies laying cables, network providers and other network link owners. The customers on the market are geographically spread organizations and other institutions that are interested in purchasing VPN services. We assume that sellers offer single links and buyers want to purchase VPN services between several nodes. Currently, the dominating form of bandwidth trading is bilateral agreements, in which two participants negotiate the contract terms. The negotiations are complex, nontransparent and time consuming. A customer that wants to purchase bandwidth between several nodes connected by a set of links owned by different providers must negotiate independently with all of them. If the negotiation fails with one of them (whereas agreements with the other sellers would already be drawn up and signed), the customer ends up with useless bandwidth, as it does not ensure the connection between all selected nodes.
Moreover, even if the buyer manages to purchase bandwidth that ensures connectivity between all VPN endpoints, there is a risk of overpaying, since the VPN service could perhaps have been provided by a cheaper set of links. Thus there is a need to design more sophisticated market mechanisms that will support customers in purchasing network resources of complex structure and enable efficient management of network resources distributed among several providers. Recent analyses of the bandwidth market give promise of new forms of bandwidth trading emerging in the future [7,8]. Most of the auction mechanisms for bandwidth trading considered in the literature concern auctioning bundles of links [9,10,11]. They support purchasing VPNs in a very limited way, because they require that the customer explicitly specifies the set of links realizing all VPN traffic demands, instead of allowing the customer a convenient VPN traffic demand specification as in the abovementioned pipe or hose model. In [12] the multicommodity auction model for balancing communication bandwidth trading (BCBT) is presented. Although it enables submitting buy offers for end-to-end demands, it does not guarantee that all end-to-end demands associated with a particular VPN are obtained by the customer. A generalization of the BCBT model that supports purchasing VPN services represented in the pipe model is proposed in [2]. In this paper we introduce a more flexible VPN auction model, AM-VPN, that allows the customer to
A Flexible Auction Model for Virtual Private Networks
99
specify the requirements for VPN traffic demands defined not only in the pipe but also in the hose and mixed pipe-hose representations.
2 Auction Model
The proposed AM-VPN model concerns a single-round sealed-bid double auction of network resources. The set $V$ represents all nodes of the network. We denote the set of sell offers by $E$ and the set of buy offers by $B$. The sell offer $e \in E$ concerns a single link, and the parameter $a_{ve}$, defined for each network node $v \in V$, states which node is the source of this link ($a_{ve} = 1$), which is the destination of this link ($a_{ve} = -1$), and which nodes are not associated with this link ($a_{ve} = 0$). Sell offer $e$ includes the minimum sell unit price $S_e$ and the maximum volume of bandwidth $x_e^{max}$ offered for sale on the particular link. We assume that the bandwidth of links is a fully divisible commodity and every sell offer can be partially accepted. We denote the contracted bandwidth of the link offered in sell offer $e$ by the variable $x_e$. The buy offer $m \in B$ concerns a VPN. It contains the maximum price $E_m$ that the buyer is willing to pay for the VPN and a specification of the VPN traffic demands. We denote the set of VPN endpoints by $Q_m$ ($Q_m \subseteq V$) and the set of traffic demands between VPN endpoints by $D_m$. Each demand $d \in D_m$ represents a required connection between source endpoint $s_d \in Q_m$ and destination endpoint $t_d \in Q_m$, where $s_d \neq t_d$. For demand $d \in D_m$ and each network node $v \in V$ the parameter $c_{vd}$ is defined, such that $c_{s_d d} = 1$, $c_{t_d d} = -1$, and $c_{vd} = 0$ if $v \neq s_d$ and $v \neq t_d$. The value of the traffic demand $d \in D_m$ is a variable denoted by $x_d$. The set $X_m$ contains all vectors of traffic demand values $(x_d)_{d \in D_m}$ that must be provided by the VPN service. We assume that a buy offer can be partially accepted and denote the accepted fraction of buy offer $m$ by the variable $x_m$. We propose the mixed pipe-hose model, which enables defining the set $X_m$ in a general way that combines the pipe and hose traffic demand models. In the mixed pipe-hose model the VPN customer is able to define two types of requirements.
The first type of requirements allows for specifying the egress bandwidth $b^+_{mv}$ and ingress bandwidth $b^-_{mv}$ for given endpoints $v \in Q^H_m$ ($Q^H_m \subseteq Q_m$). The second type of requirements allows for specifying the maximum volume $h_d$ of bandwidth for some demands $d \in D^P_m$ ($D^P_m \subseteq D_m$). Thus, in the mixed pipe-hose model the set $X_m$ is defined by the parameters $b^+_{mv}$, $b^-_{mv}$ and $h_d$ as follows:

$$X_m = \Big\{ (x_d)_{d \in D_m} : \sum_{d \in D_m : v = s_d} x_d \le b^+_{mv}, \;\; \sum_{d \in D_m : v = t_d} x_d \le b^-_{mv} \;\; \forall_{v \in Q^H_m}; \;\; x_d \le h_d \;\; \forall_{d \in D^P_m} \Big\}$$

The above mixed pipe-hose model is a generalization of the pipe and hose models. If we put $Q^H_m = \emptyset$ and $D^P_m = D_m$, we obtain the VPN specification in the pipe model. If we put $Q^H_m = Q_m$ and $D^P_m = \emptyset$, we obtain the VPN specification in the hose model. Define the two following sets: $X = X_{m_1} \times \ldots \times X_{m_{|B|}}$ and $D = D_{m_1} \cup \ldots \cup D_{m_{|B|}}$, and denote by $T$ the set of all scenarios of VPN demand values $(x_d)_{d \in D} \in X$.
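As an illustration, membership in the mixed pipe-hose set $X_m$ can be tested with a few lines of Python. The helper below is our own sketch (not part of the paper); the demand pairs and bound values are invented for the example:

```python
# Check whether a vector of demand values lies in the mixed pipe-hose set X_m.
# Demands are (source, destination) pairs; the hose bounds b_plus/b_minus are
# given only for endpoints in Q_m^H, and pipe bounds h only for demands in D_m^P.

def in_Xm(x, b_plus, b_minus, h):
    """x: dict mapping (s, t) -> demand value; b_plus/b_minus: dicts of
    egress/ingress bounds per hose endpoint; h: dict of pipe bounds per demand."""
    for v, bound in b_plus.items():          # egress: sum of demands sourced at v
        if sum(val for (s, t), val in x.items() if s == v) > bound:
            return False
    for v, bound in b_minus.items():         # ingress: sum of demands ending at v
        if sum(val for (s, t), val in x.items() if t == v) > bound:
            return False
    for d, bound in h.items():               # pipe: per-demand cap
        if x.get(d, 0) > bound:
            return False
    return True

# Pure pipe model: Q_m^H empty, every demand has a pipe bound.
pipe_ok = in_Xm({("A", "B"): 3}, {}, {}, {("A", "B"): 5})
# Pure hose model: D_m^P empty, only per-endpoint aggregates are bounded;
# here the egress sum at A is 4 + 3 = 7 > 5, so the vector is infeasible.
hose_bad = in_Xm({("A", "B"): 4, ("A", "C"): 3}, {"A": 5}, {"B": 4, "C": 8}, {})
```

Setting `b_plus`/`b_minus` empty recovers the pipe model, and setting `h` empty recovers the hose model, mirroring the specializations stated above.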
Let the parameter $x_d^\tau$ be the value of demand $d \in D$ in scenario $\tau \in T$. We assume that the routing for each demand is static and can be carried over many paths. For each demand $d$, we denote by the variable $f_{ed}$ the fraction of the traffic demand value $x_d^\tau$ (regardless of the scenario $\tau$) that is routed through the link offered in sell offer $e$. The AM-VPN model defines the allocation and pricing rules. The allocation rule determines the contracted bandwidth of sell offers and the realization of buy offers that provides the maximum social welfare $\hat{Q}$. It also settles a matching of accepted sell and buy offers by assigning the link bandwidth of accepted sell offers to the VPN services of accepted buy offers. The pricing rule defines the revenues of sellers and the payments of buyers. The allocation rule is defined as a linear programming (LP) problem. Such a formulation is very advantageous, because standard optimization solvers can be used to determine the optimal allocation, and dual prices can be used to define the pricing rule. Below we present both rules in detail.
3 Allocation Rule
The allocation rule can be formulated as the following optimization problem:

$$\hat{Q} = \max \sum_{m \in B} E_m x_m - \sum_{e \in E} S_e x_e \qquad (1)$$

$$\sum_{m \in B} \sum_{d \in D_m} f_{ed} x_d^\tau \le x_e \qquad \forall_{e \in E}, \forall_{\tau \in T} \qquad (2)$$

$$\sum_{e \in E} a_{ve} f_{ed} = c_{vd} x_m \qquad \forall_{v \in V}, \forall_{m \in B}, \forall_{d \in D_m} \qquad (3)$$

$$0 \le x_e \le x_e^{max} \qquad \forall_{e \in E} \qquad (4)$$

$$0 \le x_m \le 1 \qquad \forall_{m \in B} \qquad (5)$$

$$0 \le f_{ed} \qquad \forall_{e \in E}, \forall_{m \in B}, \forall_{d \in D_m} \qquad (6)$$
The objective function is defined by equation (1), which expresses the social welfare being maximized. Thus, the allocation rule provides the highest economic profit that may be obtained by the sellers and buyers as a result of the trade. Constraints (2) ensure that, for each scenario, the bandwidth sold at a link is sufficient to realize the appropriate fractions of the demand values of all buy offers. Equation (3) is a flow conservation constraint that must be met for each demand of every buy offer. If $x_m = 0$, then buy offer $m$ is rejected. If $x_m = 1$, then buy offer $m$ is fully accepted. Otherwise, buy offer $m$ is partially accepted, and a set $X'_m$ defined by the parameters $b'^+_{mv} = x_m b^+_{mv}$, $b'^-_{mv} = x_m b^-_{mv}$ and $h'_d = x_m h_d$ is provided. Note that in such a case the connectivity between all VPN endpoints is still ensured, but the values of traffic demands that can be realized by the VPN service are smaller, proportionally to $x_m$. The above LP problem is hard to solve directly, because constraints (2) must be defined for an immense number of scenarios $\tau \in T$. Below we present two different allocation models, called AR1 and AR2, that are based on a compact LP formulation and the column generation method, respectively.
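For intuition, the welfare-maximizing LP (1)-(6) can be solved for a toy instance with any LP solver. The sketch below is our own illustration (not from the paper): two nodes joined by one offered link (sell price $S_e = 1$, capacity 10) and one buy offer for a single pipe-model demand of volume 5 with willingness to pay $E_m = 20$. The pipe model makes the worst-case scenario in constraint (2) simply $x_d = 5$:

```python
from scipy.optimize import linprog

# Variables: [x_m, x_e, f] = buy-offer fraction, bandwidth bought, routed fraction.
# Maximize E_m*x_m - S_e*x_e  ->  minimize -20*x_m + 1*x_e.
c = [-20.0, 1.0, 0.0]

# Constraint (2), worst-case scenario x_d = 5:  5*f - x_e <= 0.
A_ub = [[0.0, -1.0, 5.0]]
b_ub = [0.0]

# Constraint (3), flow conservation on the single link:  f - x_m = 0.
A_eq = [[-1.0, 0.0, 1.0]]
b_eq = [0.0]

bounds = [(0, 1), (0, 10), (0, None)]  # bounds (5), (4), (6)

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=bounds, method="highs")
x_m, x_e, f = res.x
welfare = -res.fun
print(x_m, x_e, welfare)  # the offer is fully accepted: x_m = 1, x_e = 5, welfare = 15
```

The buyer values the full VPN at 20 while 5 units of bandwidth cost 5, so the solver accepts the offer fully and buys exactly the worst-case bandwidth.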
To formulate the AR1 model of the allocation problem, let us assume that the routing of all VPN demands $(f_{ed})$ is fixed. For a given buy offer $m$, we denote by $\bar{x}_{me}$ the bandwidth of the link involved with offer $e$ that is required to realize the worst-case scenario of VPN traffic demand values. Taking the vector $(x_d)_{d \in D_m}$ as decision variables and the inequalities defining the set $X_m$ as constraints, we can form the following optimization problem that determines the minimum sufficient value of $x_{me}$:

$$\bar{x}_{me} = \max \sum_{d \in D_m} f_{ed} x_d \qquad (7)$$

$$\sum_{d \in D_m : v = s_d} x_d \le b^+_{mv} \qquad \forall_{v \in Q^H_m} \qquad (8)$$

$$\sum_{d \in D_m : v = t_d} x_d \le b^-_{mv} \qquad \forall_{v \in Q^H_m} \qquad (9)$$

$$x_d \le h_d \qquad \forall_{d \in D^P_m} \qquad (10)$$

$$0 \le x_d \qquad \forall_{d \in D_m} \qquad (11)$$
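Problem (7)-(11) is itself a small LP. As a hypothetical illustration (the endpoint names, hose bounds and routing fractions are invented for this sketch), consider a hose VPN with endpoints A, B, C, egress bound 10 at A, ingress bounds 4 at B and 8 at C, and a link $e$ that fully carries both demands A→B and A→C ($f_{e,AB} = f_{e,AC} = 1$); the worst case over $X_m$ is then limited by the egress hose at A:

```python
from scipy.optimize import linprog

# Variables: [x_AB, x_AC]. Maximize f_AB*x_AB + f_AC*x_AC with f = (1, 1),
# i.e. minimize -(x_AB + x_AC), subject to the hose constraints (8)-(9).
c = [-1.0, -1.0]
A_ub = [
    [1.0, 1.0],  # egress at A: x_AB + x_AC <= 10
    [1.0, 0.0],  # ingress at B: x_AB <= 4
    [0.0, 1.0],  # ingress at C: x_AC <= 8
]
b_ub = [10.0, 4.0, 8.0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2, method="highs")
x_bar_me = -res.fun
print(x_bar_me)  # 10.0: the egress hose at A, not 4 + 8 = 12, limits the worst case
```

This is the quantity whose dual problem AR1 embeds via (13)-(14): instead of solving (7)-(11) per link, AR1 carries the dual variables explicitly.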
By $\pi^{v+}_{me}$, $\pi^{v-}_{me}$ and $\pi_{ed}$ we denote the dual variables corresponding to constraints (8), (9) and (10), respectively. For each buy offer $m \in B$ and demand $d \in D_m$ we define a parameter $\sigma_d$, such that $\sigma_d = 1$ if $d \in D^P_m$ and $\sigma_d = 0$ if $d \notin D^P_m$. Then, the allocation rule AR1 can be formulated as the following LP problem:

$$\hat{Q} = \max \sum_{m \in B} E_m x_m - \sum_{e \in E} S_e x_e \qquad (12)$$

$$x_{me} = \sum_{v \in Q^H_m} (\pi^{v+}_{me} b^+_{mv} + \pi^{v-}_{me} b^-_{mv}) + \sum_{d \in D^P_m} \pi_{ed} h_d \qquad \forall_{m \in B}, \forall_{e \in E} \qquad (13)$$

$$0 \le f_{ed} \le \sum_{v \in Q^H_m : v = s_d} \pi^{v+}_{me} + \sum_{v \in Q^H_m : v = t_d} \pi^{v-}_{me} + \sigma_d \pi_{ed} \qquad \forall_{e \in E}, \forall_{m \in B}, \forall_{d \in D_m} \qquad (14)$$

$$\sum_{m \in B} x_{me} \le x_e \qquad \forall_{e \in E} \qquad (15)$$

$$\sum_{e \in E} a_{ve} f_{ed} = c_{vd} x_m \qquad \forall_{v \in V}, \forall_{m \in B}, \forall_{d \in D_m} \qquad (16)$$

$$0 \le x_e \le x_e^{max} \qquad \forall_{e \in E} \qquad (17)$$

$$0 \le x_m \le 1 \qquad \forall_{m \in B} \qquad (18)$$

$$0 \le \pi^{v+}_{me}, \pi^{v-}_{me} \qquad \forall_{e \in E}, \forall_{m \in B}, \forall_{v \in Q^H_m} \qquad (19)$$

$$0 \le \pi_{ed} \qquad \forall_{e \in E}, \forall_{m \in B}, \forall_{d \in D^P_m} \qquad (20)$$
where (13) and (14) represent the objective function and constraints of the dual problem to (7)-(11). The optimization problem AR1 is an LP problem that can be solved directly using standard optimization solvers. A similar approach for solving the problem of provisioning VPNs in the hose model was proposed in [3]. An alternative formulation of the allocation rule, AR2, is also an LP problem, but it applies the column generation technique to achieve the optimal allocation.
For each buy offer $m \in B$ we define a set $F_m$ that contains scenarios of network resource allocation realizing the VPN service specified in buy offer $m$. For a given buy offer $m \in B$, scenario $\beta \in F_m$ and sell offer $e \in E$, the parameter $\alpha^\beta_{me}$ states how much bandwidth of the particular link associated with sell offer $e$ is required by scenario $\beta$. Let the variable $x^\beta_m$ denote the realization of buy offer $m$ in the scenario $\beta \in F_m$. The master problem of the column generation algorithm used in the AR2 problem determines the optimal allocation for the given sets of scenarios $F_m$ defined for each buy offer $m \in B$. It has the form of the following LP problem AR2-MP:

$$\hat{Q} = \max \sum_{m \in B} E_m x_m - \sum_{e \in E} S_e x_e \qquad (21)$$

$$x_{me} = \sum_{\beta \in F_m} \alpha^\beta_{me} x^\beta_m \qquad \forall_{m \in B}, \forall_{e \in E} \qquad (22)$$

$$\sum_{m \in B} x_{me} \le x_e \qquad \forall_{e \in E} \qquad (23)$$

$$x_m = \sum_{\beta \in F_m} x^\beta_m \qquad \forall_{m \in B} \qquad (24)$$

$$x_e \le x_e^{max} \qquad \forall_{e \in E} \qquad (25)$$

$$x_m \le 1 \qquad \forall_{m \in B} \qquad (26)$$

$$0 \le x_e \qquad \forall_{e \in E} \qquad (27)$$

$$0 \le x_m \qquad \forall_{m \in B} \qquad (28)$$

$$0 \le x^\beta_m \qquad \forall_{m \in B}, \forall_{\beta \in F_m} \qquad (29)$$
For each buy offer $m$ and prices $\xi_e$ obtained from the master problem AR2-MP, we define the optimization subproblem AR2-SP$_m(\xi_e)$ that determines the allocation $\alpha_{me}$ realizing the VPN service of buy offer $m$ with the lowest cost $\hat{\Psi}_m$:

$$\hat{\Psi}_m = \min \sum_{e \in E} \xi_e \alpha_{me} \qquad (30)$$

$$\sum_{v \in Q^H_m} (\pi^{v+}_{me} b^+_{mv} + \pi^{v-}_{me} b^-_{mv}) + \sum_{d \in D^P_m} \pi_{ed} h_d \le \alpha_{me} \qquad \forall_{e \in E} \qquad (31)$$

$$0 \le f_{ed} \le \sum_{v \in Q^H_m : v = s_d} \pi^{v+}_{me} + \sum_{v \in Q^H_m : v = t_d} \pi^{v-}_{me} + \sigma_d \pi_{ed} \qquad \forall_{e \in E}, \forall_{d \in D_m} \qquad (32)$$

$$\sum_{e \in E} a_{ve} f_{ed} = c_{vd} \qquad \forall_{v \in V}, \forall_{d \in D_m} \qquad (33)$$

$$0 \le \alpha_{me} \qquad \forall_{e \in E} \qquad (34)$$

$$0 \le \pi^{v+}_{me}, \pi^{v-}_{me} \qquad \forall_{e \in E}, \forall_{v \in Q^H_m} \qquad (35)$$

$$0 \le \pi_{ed} \qquad \forall_{e \in E}, \forall_{d \in D^P_m} \qquad (36)$$
The allocation rule of the AM-VPN model can now be formulated as the optimization problem AR2, which can be solved by the following algorithm based on the column generation technique:
1. For each $m \in B$ initialize the set $F_m$ (e.g., for each $m \in B$ solve AR2-SP$_m(S_e)$).
2. Solve AR2-MP. For the obtained optimal solution, let $\hat{\lambda}_e$ and $\hat{\omega}_m$ denote the values of the dual prices associated with constraints (23) and (24), respectively.
3. For each $m \in B$ solve AR2-SP$_m(\hat{\lambda}_e)$, determining $\hat{\Psi}_m$ and $\hat{\alpha}_{me}$.
4. If for each $m \in B$ the inequality $\hat{\Psi}_m \ge \hat{\omega}_m$ is met, then the allocation determined in step 2 is optimal (STOP). Otherwise, for each $m \in B$ that fulfills the inequality $\hat{\Psi}_m < \hat{\omega}_m$, create a new scenario $\beta$, such that $\alpha^\beta_{me} = \hat{\alpha}_{me}$, and add it to the set $F_m$. Go to step 2.

It can be proved that the optimization problems AR1 and AR2 define equivalent allocation rules of the AM-VPN model. In general, the result of the allocation rule is denoted as the vector $\hat{x} = (\hat{x}_e, \hat{x}_m, \hat{x}_{me})$.
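The four steps above can be sketched end-to-end on a deliberately tiny instance. The code below is our own illustration under invented data (one buy offer for a single pipe-model demand of volume 5 between two nodes joined by two parallel offered links, e1 with sell price 2 and capacity 3, e2 with sell price 3 and capacity 10, and willingness to pay $E_m = 30$); for this instance the subproblem reduces to putting the whole demand on the cheaper-priced link, and the master is solved with scipy, extracting $\hat{\lambda}_e$ and $\hat{\omega}_m$ from the solver's dual values:

```python
from scipy.optimize import linprog

# Toy instance (invented): one pipe demand of 5 units over two parallel links.
E_m, S, cap, vol = 30.0, [2.0, 3.0], [3.0, 10.0], 5.0

def subproblem(xi):
    """AR2-SP for this instance: put all 5 units on the cheaper-priced link."""
    k = 0 if xi[0] <= xi[1] else 1
    alpha = [0.0, 0.0]
    alpha[k] = vol
    return xi[k] * vol, alpha                   # (cost Psi_m, scenario alpha)

def master(scenarios):
    """AR2-MP with variables [x_m, x_e1, x_e2, x^beta_1, ..., x^beta_k],
    minimizing the negated welfare; returns welfare, primal x, and duals."""
    k = len(scenarios)
    c = [-E_m, S[0], S[1]] + [0.0] * k
    # (22) substituted into (23): sum_beta alpha_e^beta * x^beta - x_e <= 0.
    A_ub = [[0.0, -1.0, 0.0] + [sc[0] for sc in scenarios],
            [0.0, 0.0, -1.0] + [sc[1] for sc in scenarios]]
    A_eq = [[1.0, 0.0, 0.0] + [-1.0] * k]       # (24): x_m - sum_beta x^beta = 0
    bounds = [(0, 1), (0, cap[0]), (0, cap[1])] + [(0, None)] * k
    res = linprog(c, A_ub=A_ub, b_ub=[0.0, 0.0], A_eq=A_eq, b_eq=[0.0],
                  bounds=bounds, method="highs")
    lam = [-m for m in res.ineqlin.marginals]   # clearing prices lambda_e >= 0
    omega = -res.eqlin.marginals[0]             # dual of (24)
    return -res.fun, res.x, lam, omega

scenarios = [subproblem(S)[1]]                  # step 1: init F_m with AR2-SP(S_e)
while True:
    welfare, x, lam, omega = master(scenarios)  # step 2
    psi, alpha = subproblem(lam)                # step 3
    if psi >= omega - 1e-9:                     # step 4: no improving column left
        break
    scenarios.append(alpha)

print(round(welfare, 6))  # 18.0: buy 3 units on e1 and 2 on e2, cost 12, E_m = 30
```

The initial column places all 5 units on the cheap link e1, but its capacity of 3 caps the realization at 0.6; the subproblem then prices in a second column using e2, and the master mixes the two columns (0.6 and 0.4) to realize the full VPN.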
4 Pricing Rule
The pricing rule of the AM-VPN model is strictly connected with the formulation of the AM-VPN allocation rule, as it leverages the fact that the allocation rule is defined by the LP problems AR1 or AR2. For the allocation $\hat{x}$ determined by solving problem AR1 or AR2, we denote by $\hat{\lambda}_e$ the values of the corresponding dual prices associated with constraint (15) in the case of AR1, or constraint (23) in the case of AR2. The pricing rule of the AM-VPN model sets the unit clearing price of the link associated with offer $e$ equal to $\hat{\lambda}_e$. Then the revenue $p_e$ of the seller that submits sell offer $e$ equals:

$$p_e = \hat{\lambda}_e \hat{x}_e, \qquad (37)$$

and the payment $p_m$ of the buyer that submits buy offer $m$ equals:

$$p_m = \sum_{e \in E} \hat{\lambda}_e \hat{x}_{me}. \qquad (38)$$
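Given the clearing prices and the allocation, rules (37)-(38) are straightforward to apply. A minimal sketch with invented numbers (two links, two buy offers) also lets one observe the budget balance property proved in the next section:

```python
# Invented example data: dual clearing prices lambda_e, sold bandwidth x_e,
# and the per-offer assignment x_me with sum_m x_me = x_e for each link.
lam = {"e1": 3.0, "e2": 3.0}
x_e = {"e1": 3.0, "e2": 2.0}
x_me = {("m1", "e1"): 2.0, ("m1", "e2"): 0.5,
        ("m2", "e1"): 1.0, ("m2", "e2"): 1.5}

# Rule (37): seller revenues p_e = lambda_e * x_e.
revenue = {e: lam[e] * x_e[e] for e in lam}

# Rule (38): buyer payments p_m = sum_e lambda_e * x_me.
buyers = {m for (m, _) in x_me}
payment = {m: sum(lam[e] * v for (mm, e), v in x_me.items() if mm == m)
           for m in buyers}

print(sum(revenue.values()), sum(payment.values()))  # both 15.0: budget balance
```

Because each unit of sold bandwidth is both charged to a buyer and credited to a seller at the same clearing price, total payments equal total revenues.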
5 Model Properties
In this section we present some general properties of the proposed auction model AM-VPN. Denote the economic profit of the seller whose sell offer $e$ realization is $x_e$ and whose revenue equals $p_e$ as follows:

$$U_e(x_e, p_e) = p_e - S_e x_e. \qquad (39)$$

Analogously, define the economic profit of the buyer whose buy offer $m$ realization is $x_m$ and whose payment equals $p_m$ as follows:

$$U_m(x_m, p_m) = E_m x_m - p_m. \qquad (40)$$

Proposition 1. The AM-VPN model has the individual rationality property, i.e., for the given allocation $\hat{x}$, revenues $p_e$ and payments $p_m$ determined by the AM-VPN model, each seller obtains a non-negative economic profit, $U_e(\hat{x}_e, p_e) \ge 0$, and each buyer obtains a non-negative economic profit, $U_m(\hat{x}_m, p_m) \ge 0$.
Proof. This proof assumes that the allocation $\hat{x}$ is given by AR2; an analogous proof can be carried out in the case of AR1. Let $\hat{\gamma}_{me}$, $\hat{\lambda}_e$, $\hat{\omega}_m$, $\hat{\mu}_e$, $\hat{\mu}_m$ be the values of the dual variables corresponding to the optimal solution of AR2 related to constraints (22)-(26), respectively. From duality theory it follows that the above values satisfy the following complementary slackness conditions:

$$\hat{x}_e (S_e + \hat{\mu}_e - \hat{\lambda}_e) = 0 \qquad \forall_{e \in E} \qquad (41)$$

$$\hat{x}_m (-E_m + \hat{\mu}_m + \hat{\omega}_m) = 0 \qquad \forall_{m \in B} \qquad (42)$$

$$\hat{x}^\beta_m \Big( \sum_{e \in E} \hat{\gamma}_{me} \alpha^\beta_{me} - \hat{\omega}_m \Big) = 0 \qquad \forall_{m \in B}, \forall_{\beta \in F_m} \qquad (43)$$

$$\hat{x}_{me} (\hat{\lambda}_e - \hat{\gamma}_{me}) = 0 \qquad \forall_{m \in B}, \forall_{e \in E} \qquad (44)$$

The economic profit of the seller that submits sell offer $e$ is non-negative because:

$$U_e(\hat{x}_e, p_e) = p_e - S_e \hat{x}_e = (\hat{\lambda}_e - S_e) \hat{x}_e = \hat{\mu}_e \hat{x}_e \ge 0 \qquad (45)$$

The equalities in (45) result from (39), (37) and (41), respectively, and the last inequality holds because $\hat{\mu}_e \ge 0$, as it is a dual variable related to the inequality constraint (25). The economic profit of the buyer that submits buy offer $m$ is non-negative because:

$$U_m(\hat{x}_m, p_m) = E_m \hat{x}_m - p_m = E_m \hat{x}_m - \sum_{e \in E} \hat{\lambda}_e \hat{x}_{me} = \qquad (46)$$

$$= E_m \hat{x}_m - \sum_{e \in E} \hat{\gamma}_{me} \hat{x}_{me} = E_m \hat{x}_m - \sum_{e \in E} \hat{\gamma}_{me} \sum_{\beta \in F_m} \alpha^\beta_{me} \hat{x}^\beta_m = \qquad (47)$$

$$= E_m \hat{x}_m - \sum_{\beta \in F_m} \hat{\omega}_m \hat{x}^\beta_m = (E_m - \hat{\omega}_m) \hat{x}_m = \hat{\mu}_m \hat{x}_m \ge 0 \qquad (48)$$

The equalities in (46) result from (40) and (38), respectively; the equalities in (47) follow from (44) and (22), respectively; the equalities in (48) result from (43), (24) and (42), respectively, and the last inequality holds because $\hat{\mu}_m \ge 0$, as it is a dual variable related to the inequality constraint (26).
Proposition 2. The AM-VPN model has the budget balance property, i.e., for the given allocation $\hat{x}$, revenues $p_e$ and payments $p_m$ determined by the AM-VPN model, the following condition is met:

$$\sum_{m \in B} p_m = \sum_{e \in E} p_e. \qquad (49)$$

Proof. This proof assumes that the allocation $\hat{x}$ is given by AR2; an analogous proof can be carried out in the case of AR1. Let $\hat{\lambda}_e$ be the value of the dual variable corresponding to the optimal solution of AR2 related to constraint (23). From duality theory it follows that the following complementary slackness condition is satisfied:

$$\hat{\lambda}_e \Big( \sum_{m \in B} \hat{x}_{me} - \hat{x}_e \Big) = 0 \qquad \forall_{e \in E} \qquad (50)$$
Then the following equalities are met:

$$\sum_{m \in B} p_m = \sum_{m \in B} \sum_{e \in E} \hat{\lambda}_e \hat{x}_{me} = \sum_{e \in E} \hat{\lambda}_e \hat{x}_e = \sum_{e \in E} p_e \qquad (51)$$

The equalities in (51) result from (38), (50) and (37), respectively.
We have shown that the AM-VPN model is individually rational and budget balanced. These properties are very valuable. The first one guarantees that a trader will not lose by participating in the auction. The second one ensures that the operator organizing the auction does not have to pay extra money in order to run the auction, and that it does not obtain any unjustified economic profits.
6 Flexibility of the AM-VPN Auction Model
The AM-VPN model provides a flexible way of defining the customer's preferences related to a VPN service using the mixed pipe-hose representation. In this section we illustrate some benefits that the customer obtains by having the possibility of specifying a VPN service with the mixed pipe-hose representation rather than with the pipe or hose representation only. The example concerns the network presented in Fig. 1a. The network consists of 11 nodes and 10 pairs of directed links denoted by solid lines. Each directed link is involved with one sell offer. Thus, there are 20 sell offers, each having the same unit price 10 and maximum volume 1500. We consider one customer that wants to purchase a VPN service concerning nine nodes: A, B, C, D, F, G, H, J and K. The VPN topology is depicted in Fig. 1a. The numbers at the nodes denote the ingress and egress bandwidths of the corresponding endpoints. The dotted lines connecting the selected pairs of VPN endpoints denote the demands that are required to be satisfied by the VPN. All remaining demands between VPN endpoints do not have to be provided. The customer is willing to pay 60000 for this VPN service. The above resource allocation problem can be solved by means of the AM-VPN model with the customer requirements related to the VPN specified in the mixed pipe-hose representation. The achieved allocation is depicted in Fig. 1b. The numbers at the links denote the bandwidth sold to the customer. Note that the buy offer is fully
Fig. 1. The network resource allocation problem solved by the AM-VPN model: (a) the VPN buy offer specification; (b) the allocation given by the AM-VPN
accepted. The clearing prices of the links determined by the AM-VPN model are equal to 10. Thus, the customer has to pay 10*4000 = 40000 for the VPN service. Assume now that the customer is only able to use the pipe or hose model instead of the flexible mixed pipe-hose representation. In the case of the hose model the customer can only specify the ingress and egress bandwidth of each VPN endpoint. If the customer submits a buy offer with such a VPN service specification, the optimal allocation provided by the AM-VPN changes as follows: the bandwidth sold on the links between nodes E and G grows to 800, and the bandwidth sold on the links between nodes G and H grows to 400. The clearing prices remain the same. Thus, in comparison with the mixed pipe-hose case, the customer has to purchase an extra 400 units of bandwidth on the links between nodes E and G and an extra 200 units of bandwidth on the links between nodes G and H, resulting in a higher customer's payment, i.e., 10*(4000+2*400+2*200) = 52000. In the case of the pipe model the customer must define the maximum values of all demands. The maximum values of the non-zero demands (denoted by dotted lines) are defined according to the typical pipe mesh approach, namely as the minimum of the egress bandwidth of the source endpoint and the ingress bandwidth of the destination endpoint. Naturally, the maximum values of all remaining demands are set to 0. If the customer submits a buy offer with such a VPN service specification, the allocation provided by the AM-VPN changes, in comparison with the mixed pipe-hose case, on the links between nodes E and G, on which the bandwidth sold to the customer grows to 1000. The clearing prices remain the same. Thus, in comparison with the mixed pipe-hose case, the customer has to purchase an extra 600 units of bandwidth on the links between nodes E and G, resulting in a higher customer's payment, i.e., 10*(4000+2*600) = 52000.
Concluding this example: if the customer specifies his requirements for VPN traffic demands in the mixed pipe-hose model, he pays 40000 for the required VPN service, but if he is able to use merely the pipe or hose model, he has to pay considerably more, i.e., 52000. This results from the fact that the VPN specified in the pipe or hose model may require more network resources than the VPN specified in the mixed pipe-hose model, which restricts the required VPN traffic demands more accurately than the pipe or hose model does. Thus, as the above example illustrates, the flexibility of the AM-VPN model enables the customer to specify the VPN service requirements precisely in the mixed pipe-hose model, which may improve his economic profit.
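The payment arithmetic of the example can be recomputed directly from the figures quoted above (unit clearing price 10, 4000 units of bandwidth sold in the mixed pipe-hose case, plus the extra bandwidth on the E-G and G-H link pairs in the hose and pipe cases):

```python
price = 10                       # unit clearing price of every link in the example
base = 4000                      # total bandwidth sold in the mixed pipe-hose case

mixed = price * base                           # mixed pipe-hose payment
hose = price * (base + 2 * 400 + 2 * 200)      # extra on E-G and G-H link pairs
pipe = price * (base + 2 * 600)                # extra on the E-G link pair only

print(mixed, hose, pipe)  # 40000 52000 52000
```

Both coarser representations end up 30% more expensive than the mixed pipe-hose specification for this VPN.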
7 Computational Efficiency
In this section the AR1 and AR2 allocation problems are compared with respect to computational efficiency. We solved several resource allocation problems concerning three networks from the SNDlib library [13]: polska (12 nodes and 18 links), france (25 nodes and 45 links) and cost266 (37 nodes and 57 links). For each network, 12 problems were prepared, differing in the number of buy offers and the sizes of the VPNs specified in the hose model. We consider problems with 5, 10, 25
Table 1. The time of determining the optimal allocation by AR1 and AR2 [s]

Network polska (buy offers:    5       |      10       |      25       |      50       )
VPN endpoints   AR1     AR2    | AR1     AR2   | AR1     AR2   | AR1      AR2
3               0.02    3.02   | 0.05    2.38  | 0.39    5.02  | 1.89     13.07
6               0.52    9.72   | 1.19    8.55  | 9.63    12.37 | 58.49    30.09
9               4.81    14.83  | 25.18   28.74 | 78.72   58.24 | 682.74   143.54

Network france (buy offers:    5       |      10        |      25        |      50       )
VPN endpoints   AR1     AR2    | AR1     AR2    | AR1     AR2    | AR1      AR2
3               0.08    12.56  | 0.62    22.76  | 2.92    22.74  | 4.01     51.36
6               1.45    86.83  | 7.11    83.01  | 50.79   124.21 | 142.18   169.54
9               5.51    233.89 | 52.43   232.77 | 294.25  333    | 1653.20  283.62

Network cost266 (buy offers:   5       |      10        |      25        |      50       )
VPN endpoints   AR1     AR2    | AR1     AR2    | AR1     AR2    | AR1      AR2
3               0.11    33.08  | 0.61    19.03  | 3.51    22.04  | 27.74    31.49
6               3.17    78.15  | 10.72   63.18  | 117.52  123.41 | 692.82   188.77
9               12.57   266.56 | 57.24   222.32 | 17130   418.44 | 26101    703.2
and 50 buy offers, respectively, and with VPNs that consist of 3, 6 and 9 nodes. In every allocation problem there is one sell offer submitted on each link. Table 1 presents the times of determining the optimal solution by the allocation rules AR1 and AR2. The computations were performed on a computer with an Intel Core2 Duo T8100 2.1 GHz processor, 3 GB of main memory and the 32-bit MS Vista operating system. CPLEX 12.1 was used for solving the LP problems. For all problems concerning small VPNs (with 3 endpoints) or having a small number of buy offers (5 or 10), AR1 is faster than AR2. Nevertheless, as the size of the problem grows, the solution time of AR1 increases much more than in the case of AR2. The benefits of applying AR2 rather than AR1 are especially apparent for the largest resource allocation problem, defined for the cost266 network, for which AR1 needs more than 7 hours to determine the allocation, while AR2 calculates the optimal solution in about 11 minutes. Thus, for real resource allocation problems, which are usually large, it is better to use AR2 rather than AR1.
8 Conclusions
We have proposed the AM-VPN auction model that supports the allocation of network resources distributed among several providers to the customers of VPN services. The model matches many sell and buy offers, aiming at the maximization of the social surplus. The AM-VPN model has the individual rationality and budget balance properties. The main merit of the model is its flexibility in specifying the VPN traffic demands. It enables the VPN customer to employ the pipe, hose
or mixed pipe-hose representations. The presented example demonstrates the benefits gained by the VPN customer when using the mixed pipe-hose traffic demand model. The proposed model determines the optimal allocation and prices by solving an appropriate LP optimization model. The computational efficiency for large resource allocation problems can be improved by applying the column generation technique.
Acknowledgments

The authors acknowledge the Ministry of Science and Higher Education of Poland for partially supporting this research through Project N N514 044438.
References

1. Duffield, N.G., Goyal, P., Greenberg, A., Mishra, P., Ramakrishnan, K.K., van der Merive, J.E.: A flexible model for resource management in virtual private networks. In: Proc. ACM SIGCOMM, vol. 29(4), pp. 95–108 (1999)
2. Kołtyś, K., Pieńkosz, K., Toczyłowski, E., Żółtowska, I.: A bandwidth auction model for purchasing VPN. Przeglad Telekomunikacyjny 8-9, 1183–1189 (2009) (in Polish)
3. Altin, A., Amaldi, E., Belotti, P., Pinar, M.C.: Provisioning virtual private networks under traffic uncertainty. Networks 49(1), 100–115 (2007)
4. Erlebach, T., Rüegg, M.: Optimal bandwidth reservation in hose-model VPNs with multi-path routing. In: Proc. INFOCOM, pp. 2275–2282 (2004)
5. Ben-Ameur, W., Kerivin, H.: Routing of uncertain traffic demands. Optimization and Engineering 6(3), 283–313 (2005)
6. Chu, J., Lea, C.: New architecture and algorithms for fast construction of hose-model VPNs. IEEE/ACM Transactions on Networking 16, 670–679 (2008)
7. Iselt, A., Kirstadter, A., Chahine, R.: Bandwidth trading - a business case for ASON? In: Proc. 11th International Telecommunications Network Strategy and Planning Symposium NETWORKS, pp. 63–68 (2004)
8. Rabbat, R., Hamada, T.: Revisiting bandwidth-on-demand: enablers and challengers of a bandwidth market. In: Proc. 10th IEEE/IFIP Network Operations and Management Symposium, NOMS, pp. 1–12 (2006)
9. Dramitinos, M., Stamoulis, G.D., Courcoubetis, C.: An auction mechanism for allocating the bandwidth of networks to their users. Computer Networks 51(18), 4979–4996 (2007)
10. Jain, R., Varaiya, P.: Efficient market mechanisms for network resource allocation. In: Proc. 44th IEEE Conference on Decision and Control, and the European Control Conference, pp. 1056–1061 (2005)
11. Jain, R., Walrand, J.: An efficient mechanism for network bandwidth auction. In: Proc. IEEE/IFIP Network Operations and Management Symposium Workshops NOMS, pp. 227–234 (2008)
12. Stańczuk, W., Lubacz, J., Toczyłowski, E.: Trading links and paths on a communication bandwidth market. Journal of Universal Computer Science 14(5), 642–652 (2008)
13. Survivable network design library website, http://sndlib.zib.de/
Collaboration between ISPs for Efficient Overlay Traffic Management

Eleni Agiatzidou and George D. Stamoulis
Athens University of Economics and Business
{agiatzidou,gstamoul}@aueb.gr
Abstract. As peer-to-peer (P2P) applications (e.g., BitTorrent) impose high costs on Internet Service Providers (ISPs) due to the large volumes of inter-domain traffic generated, extensive research work has been done on locality-awareness approaches. Although such approaches are based on properties of the physical network topology, they take very little account of the real business relationships between ISPs (peering and transit agreements), as well as of the heterogeneous peer distributions among ISPs. In this paper, we propose an innovative way to exploit the business relationships between ISPs of either the same or different Tiers, by introducing new collaborative approaches for overlay traffic management on top of the locality-awareness ones. By means of simulations, we show that a win-win situation (reduced transit traffic for ISPs and better performance for users) can mostly be achieved under certain more elaborate collaborative approaches. We also show how the collaboration between upper-Tier ISPs can favorably affect both the inter-domain traffic on the transit links, where charging applies, and the performance of their customer ISPs, and how the latter can benefit from their collaboration with the transit ISPs. Keywords: Overlay Applications, Peering Agreements, Peer-to-Peer Networks, Traffic Management.
1 Introduction

Peer-to-peer (P2P) applications, particularly those based on BitTorrent, have become very popular recently, since they allow the users to distribute collaboratively large volumes of content, which accounts for 43% to 57% of the Internet traffic depending on the geographic area [1]. Despite the advantages that ISPs and users gained through the increasing demand for such applications, a significant problem has arisen for the ISPs. Indeed, overlay applications exchange their traffic through logical overlay connections, which are usually agnostic to the physical structure of the Internet. This imposes traffic engineering difficulties and high costs on the ISPs due to the inter-domain traffic exchanged. Extensive research work has been carried out aiming at the reduction of the transit inter-domain traffic and, consequently, of the associated costs of the ISPs. This work led to overlay traffic management techniques, referred to as locality-awareness, which are successful from the ISP's point of view
J. Domingo-Pascual et al. (Eds.): NETWORKING 2011, Part II, LNCS 6641, pp. 109–120, 2011.
© IFIP International Federation for Information Processing 2011
110
E. Agiatzidou and G.D. Stamoulis
since they increase the level of locality of the traffic. In Section 2, we present an overview of such approaches. In this paper, though, we build on the approach developed by the EU-funded project SmoothIT [2], which deals with the incentive-based management of overlay traffic so that it benefits both the ISP (w.r.t. inter-domain traffic and charges) and the users (w.r.t. overlay performance). In most of the locality-awareness approaches (including that of SmoothIT), ISPs provide information about the physical topology to the overlay application clients, thus helping peers to choose their overlay neighbors in a more localized way. However, neither this approach nor the other ones overviewed in Section 2 distinguish non-local peers according to the business relationships [8] among the ISPs. In our opinion, such a distinction is very important, because it may lead to a significant reduction of the transit inter-domain traffic, where charging applies, as will be seen in our work. The specific locality-aware BitTorrent peer selection algorithm employed by SmoothIT is based on a combination of Biased Neighbor Selection (BNS) and Biased Unchoking (BU), as presented in [4] and [5], and has already been evaluated successfully for a wide variety of simulated scenarios; see also Section 3. We introduce an innovative variation of this algorithm that employs the notion of interconnection agreements in the ranking method of the peers that is used by BNS and BU. Our objective is to investigate the benefits of employing this information for the ISPs and the peers. Moreover, we introduce and evaluate an innovative traffic management approach on top of the locality-awareness techniques that also exploits the business relationships, though in a different way. This approach, referred to as Splitting of Chunks, aims to avoid the download of redundant (duplicate) content via the costly inter-domain links, while allowing more traffic to be exchanged through the peering links.
This approach requires detailed coordination between the collaborating ISPs at the swarm level, which may be considered unrealistic for practical cases and possibly not acceptable to users. Nevertheless, Splitting of Chunks represents the "ultimate" way for ISPs to collaborate in order to reduce traffic redundancy on the charged transit links, which motivates our studying it. We present the specification of the BNS-BU algorithms that are based on the agreements of the ISPs (Collaborative BNS-BU approaches) and of the Splitting of Chunks approach, and we compare their performance to the standard BNS-BU algorithm. Note that the practical application of our proposed approaches does not involve any additional overhead compared to SmoothIT BNS-BU, except for a modification of the information already exchanged between the overlay client and the ISP server. To the best of our knowledge, there is no published work to date that takes into account interconnection agreements in locality promotion or employs some other collaboration between ISPs. The remainder of this paper is organized as follows. In the next section, we overview approaches for locality promotion proposed in the literature, while in Section 3 we describe briefly the BNS and BU algorithms. In Section 4, we present the simulation setup used for our experimental evaluation. Sections 5 and 6 follow the same structure: the first part presents the analysis of the Collaborative BNS-BU approaches and the Splitting of Chunks mechanism, respectively, while the second part is dedicated to the simulation results for the aforementioned mechanisms. Finally, in Section 7 we present our conclusions and future work.
Collaboration between ISPs for Efficient Overlay Traffic Management
111
2 Related Work

Extensive research has already been done on the promotion of locality in P2P networks. In this section we present an overview of proposals for locality-awareness mechanisms, as well as research that points out issues arising from these mechanisms, which provided extra motivation for our work. Aggarwal et al. propose an Oracle service hosted by the ISPs in order for them to cooperate with P2P users [3]. The Oracle ranks a list of potential neighbors based on locality (same AS) and proximity (AS-hop distance). The evaluation results showed that the properties of the overlay topologies, such as small diameter, small mean path lengths and node degree, are not affected by the use of the Oracle service, while at the same time the network locality increases. The P4P project [6] follows a similar approach: an overlay entity called iTracker communicates either with the peers or with application trackers, providing them with information about the underlay. Simulations in PlanetLab and real networks showed a reduction in transit inter-domain traffic and in download times as well. Bindal et al. [4], in their Biased Neighbor Selection (BNS) algorithm, assume that a peer can be provided with a biased list of peers to connect to. Their simulation results over 14 different ASes showed that the transit inter-domain traffic is reduced significantly, while the download times of the peers are not influenced much. Oechsner et al. propose in [5] a new algorithm, referred to as Biased Unchoking (BU), and combine it with the aforementioned BNS algorithm (BNS-BU). BU biases, based on the ranked list, the optimistic unchoking algorithm of a peer, which determines the neighbor to upload to. In [5] this list is received from the tracker, unlike in SmoothIT and our approaches, where it is received from an ISP-owned overlay server; see Section 3.
The experimental evaluation and comparison of plain BitTorrent, BNS, BU and BNS-BU in [5] showed that the two complementary mechanisms should be used together in order to achieve better performance in terms of transit inter-domain traffic, while the performance of the peers remains unaffected. A different implementation of the same idea is presented by Choffnes and Bustamante in [11]: biased neighbor selection is based on information collected from DNS lookups on popular CDN names, and the similarity of the lookups of two peers determines their level of proximity. In [13], Piatek et al. present three pitfalls of ISP-friendly P2P design. One of them concerns the conflicting interests of ISPs in different Tiers when inter-domain traffic is reduced, causing lower costs for some and lower revenues for others. Wang et al. in [9] examine different contents and peer properties with regard to locality issues. They conclude that peers belonging to a few large AS clusters are more likely to be affected by a locality mechanism; thus, a selective locality mechanism is more promising for optimizing the overhead and the robustness of BitTorrent. Lehrieder et al. in [10] investigate the way the BNS and BU mechanisms affect the performance of the users. For scenarios with homogeneous peer distributions and all peers having the same access speed, a win-no lose situation arises, i.e. a reduction of transit inter-domain traffic with no deterioration of the performance of users. Varying the percentage of local peers used for either BNS or BU, they argue that the actual impact for a specific peer depends heavily on that percentage as well as on the topology used.
112
E. Agiatzidou and G.D. Stamoulis
3 Locality-Aware Mechanisms: BNS-BU

In this section we briefly present the two main locality-awareness approaches, Biased Neighbor Selection (BNS) and Biased Unchoking (BU) [4], [5], since we build our collaborative mechanisms on top of their combination (referred to as BNS-BU), as implemented by SmoothIT [2]. Both approaches are based on the existence of an overlay server that resides in each AS and accepts from peers lists of remote peers acquired from a tracker. Its main role is to rank the remote peers in the list according to locality (same AS as the requesting peer, i.e. local ones), peering agreements and proximity (AS hops away from the requesting peer’s AS) [2]. Thus, the overlay server maps a specific value to every local peer, forming the highest or 1st-value group, and another value to every peer that belongs to the peering ASes, forming the 2nd-value group. The rest of the peers are ranked according to the proximity of their ASes based on BGP hops; more AS hops mean that the peers belong to smaller-value groups. BNS enables a peer that has already received a ranked list from the overlay server to connect to the most preferable peers. In our implementation, as also implemented in [5], 90% of a peer’s neighbors are chosen based on their ranking. The peer starts connections to the 1st-value group until it reaches the threshold of 90%. If the 1st-value group contains fewer peers than those needed to reach 90%, then the peer iteratively connects to peers from the next value group, i.e. from the peering AS or 2nd-value group, then from the ASes one hop away, etc. After reaching 90% of the connections, it randomly chooses the remaining neighbors from the rest of the groups. In BU, a peer chooses to unchoke one of its interested (in the currently available chunks) neighbors that belong to the highest-value group, i.e. to the same AS, according to the values obtained from the ranked list.
If there are no interested peers in this group, then the peer chooses randomly from the next one.
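To make the two mechanisms concrete, the selection logic just described can be sketched as follows. This is a minimal Python illustration, not the SmoothIT implementation; the peer addresses, group values and the 90% biased fraction are parameters of the sketch.

```python
import random

def bns_select(ranked, degree, biased_fraction=0.9):
    """BNS sketch: fill biased_fraction of the neighbor slots from the
    highest-value groups first, then pick the rest at random."""
    biased_quota = int(degree * biased_fraction)
    # sort candidates by descending group value (local > peering > farther ASes)
    ordered = sorted(ranked, key=lambda peer: -peer[1])
    biased = [addr for addr, g in ordered[:biased_quota]]
    rest = [addr for addr, g in ordered[biased_quota:]]
    random.shuffle(rest)  # remaining slots are filled randomly
    return biased + rest[:degree - len(biased)]

def bu_unchoke(interested, group_of):
    """BU sketch: unchoke a peer from the highest-value group among the
    currently interested neighbors; ties are broken at random."""
    if not interested:
        return None
    best = max(group_of[p] for p in interested)
    return random.choice([p for p in interested if group_of[p] == best])
```

Taking the maximum group value over the interested neighbors implements the fallback of the text automatically: if no local peer is interested, the highest remaining group wins.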
4 Simulation Setup

In this section we present the basic configuration of the simulations that we used to compare the BNS-BU mechanism with our variations that introduce collaboration between ISPs of the same or different Tiers, and with the Splitting of Chunks approach. Figure 1 depicts the topology used in our simulations. Each AS is identified by a code consisting of a number, which indicates the Tier this ISP belongs to, and a letter, which distinguishes it from others of the same Tier. There are eight Tier 3, three Tier 2 and two Tier 1 ISPs. Tier 2 ISPs have twice as many peers as Tier 3 ones, a setup that realistically reflects the Internet. To be able to show the operation of the collaboration mechanisms, we introduced five peering agreements (links) between ISPs of the same Tier. Furthermore, each ISP contains an overlay entity that, besides ranking the remote peers for a requesting peer, is aware of the business relationships of the ISP and adjusts its policy to the various ranking approaches analyzed in the next section. Moreover, this server assists peers in selecting which part of the content they will download from each peer in the Splitting of Chunks approach. This topology is rich and representative enough to motivate our mechanisms and better exploit their potential by employing the business relations
Fig. 1. Topology used in Simulation
among ISPs of two different Tiers; this would not have been possible with a topology comprising two Tiers only. At the same time, the topology is simple enough for the results to be easy to analyze and comprehend. In our experiments we used the SmoothIT simulator, which is based on ProtoPeer [7]. In order to incorporate the approaches introduced in this paper, we extended the simulator accordingly. The peers are connected to the ASes with access speeds of 16 Mbps for download and 1 Mbps for upload, roughly the average speed values of DSL lines today. The network delay between any two peers is 10 msec regardless of their physical location. All inter-domain links have symmetrical capacities, while the access links of the peers constitute the only bottlenecks in the network. All peers exchange a file whose size is 154.6 MB. For simplicity, we simulate a single swarm for evaluating our mechanisms. Peers arrive in the swarm following a deterministic distribution: every 30 and 60 sec, Tier 2 and Tier 3 ISPs respectively acquire one more peer. After downloading the whole file, each peer seeds it for 3 minutes before exiting the system. Thus, as measured during the simulations, the swarm contains about 100 to 200 peers on average, a typical size for medium-sized swarms, as reported in [12]. The main metrics of interest are the mean inter-domain traffic exchanged between ISPs 2a and 3a and between ISPs 1a and 2a, as well as the mean download times of the peers in 3a and 2a. The former are used to assess the savings for the ISPs, and the latter to assess the impact of the mechanisms on user performance. We calculated the mean values per minute over 10 runs of each experiment, and the confidence intervals for a confidence level of 95%. The size of the confidence intervals was below 6% of the corresponding mean value.
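For reference, such per-metric means and 95% confidence intervals over the 10 runs can be computed as below. This is a sketch under the assumption of a standard Student-t interval (the paper does not state the exact method); the constant is the two-sided 95% t quantile for 9 degrees of freedom.

```python
from math import sqrt

T_9_975 = 2.262  # Student-t 0.975 quantile, 9 degrees of freedom (10 runs)

def mean_ci(samples):
    """Return (mean, half-width of the 95% confidence interval)
    for one metric measured over independent simulation runs."""
    n = len(samples)
    m = sum(samples) / n
    var = sum((x - m) ** 2 for x in samples) / (n - 1)  # sample variance
    return m, T_9_975 * sqrt(var / n)
```

With run-to-run values close to the mean, the half-width stays well below the 6% bound reported above.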
5 Collaborative BNS-BU Approaches

5.1 Approach

Transit agreements between ISPs are used in order to provide connectivity to the rest of the Internet. Peering agreements, on the other hand, help ISPs to reduce their transit inter-domain traffic and, therefore, the implied costs, which are comparatively greater than the cost of maintaining a peering link [9]. In this paper we focus on transit and peering agreements between ISPs of the same or different Tiers.
In this section we propose a modification of the BNS-BU approach so that it takes into consideration the business relationships of the ISPs when ranking the peers in a list upon such a request. To explain this better, let us assume that the overlay entity returns to the requesting peer a pair of values, i.e. {remote peer IP address, remote peer group value} (a, g), one for each peer in the list. Thus, the requesting peer uses g to identify the group to which remote peer a belongs, as mentioned in Section 3. The overlay entity maps the highest g to local peers (relative to the requesting peer), the second highest g to peers from the peering AS, and then classifies the rest of the peers according to BGP hops; more hops away means smaller values. Nevertheless, if two ASes are the same number of hops away, this does not necessarily mean that the exchange of traffic with each one of them will cost the same. For example, a peer in 2a will receive the same g value (thus, forming one group) for peers from 3c, 3d and 2c. However, exchanging traffic with 3c and 3d is cheaper than exchanging traffic with 2c, since, as shown in Figure 1, 2a and 2b have a peering link with each other and 3c and 3d are customers of 2b. Preferably, the peer would connect to as many peers as possible from 3c and 3d before connecting to 2c. This knowledge, which results from the business agreements of the ISPs, is not explicitly available to ISP 2a; but since 2a and 2b collaborate with each other, they can exchange the preference values (from which this knowledge is extracted) that the overlay entity of each ISP assigns to other ASes. In such cases, peer selection can be optimized using the knowledge of the business relationships among ISPs in order for the groups to be formed differently. We propose two collaborative approaches that take advantage of this knowledge, referred to as Collaborative BNS-BU and Layered Collaborative BNS-BU.
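A hypothetical sketch of such a collaborative ranking function (merging local and peering-AS peers into the top group and ranking the customer ASes of either partner next, as Collaborative BNS-BU does) might look like the following. The value scale and the helper names are our own; only the ordering of the groups matters.

```python
def group_value(peer_as, my_as, peering, customers_of, hops):
    """Assign the group value g for a remote peer's AS, following the
    Collaborative BNS-BU grouping (assumed numeric scale; larger = preferred)."""
    if peer_as == my_as or peer_as in peering:
        return 100  # 1st-value group: local peers and peers of the peering AS
    partners = {my_as} | peering
    if any(peer_as in customers_of.get(p, set()) for p in partners):
        return 99   # 2nd-value group: customer ASes of either partner
    return 99 - hops[peer_as]  # remaining ASes ranked by BGP hop distance
```

On the example above, the AS 2c (one hop away, but reachable only via transit) ends up strictly below the customer ASes 3c and 3d, which is exactly the distinction plain BNS-BU cannot make.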
Collaborative BNS-BU takes into consideration the business relationships of the ISPs in order to form the groups of peers differently from BNS-BU. The 1st-value group contains both local peers and peers from the peering AS, thus forming a larger group than in BNS-BU. The 2nd-value group consists of peers from the customer ASes of the two ISPs. Larger groups were chosen in order to provide peers with more resources, thus boosting their performance. In case ISPs 2a and 2b collaborate using this approach, their 1st-value group consists of peers from 2a and 2b, while the 2nd-value group is formed by peers from the customer ASes, i.e. 3a, 3b, 3c, 3d. The rest of the groups are formed using AS hops. Tier 3 ISPs, when they collaborate with each other, cannot affect the grouping much, since they only know their own business relationships, as opposed to the ISPs in the Tiers above. Thus, only the highest-value group of local peers and peers from peering ASes is formed, and the rest stay unaffected. On the other hand, if Tier 3 overlay entities collaborate with the Tier 2 ones, the 2nd-value group of the Tier 3 ISPs 3a and 3b consists of peers from 2a, 2b, 3c and 3d. Layered Collaborative BNS-BU uses a slightly refined technique in order to form as many groups as possible according to the business relationships. This approach differs from the previous one in the number of groups: under this mechanism, a peer first connects to all resources found locally before it chooses a peer from the next group, i.e. from the peering ISP. Thus, the first group, as formed in the Collaborative BNS-BU case, is split into two groups, exactly as in plain BNS-BU. The 2nd-value group of the previous approach is split into multiple smaller groups according to BGP hops. Smaller groups are used in order to further promote proximity: due to such small groups, a peer will connect to all of the peers in a ‘closer’ group before moving to the next one. As an example, a peer in ISP 2a would receive the following
group of peers: 1st group: local peers; 2nd group: peers from 2b; 3rd group: peers from the customers of 2a (3a, 3b); and 4th group: peers from the customers of 2b (3c, 3d). The rest of the ASes are formed into groups according to BGP hops. It is worth noticing that for Tier 3 ISPs this approach forms groups that are identical to the plain BNS-BU groups, unless cross-Tier collaboration exists (e.g. Tier 2 with Tier 3).

5.2 Experimental Evaluation

In Table 1 we present the effect of the aforementioned approaches on the transit inter-domain traffic (in MB/s) of the 1a2a and 2a3a links, as well as on the download times (in min) that peers in ISPs 2a and 3a experience. Each column of the table corresponds to a different scenario: BNS-BU, Collaborative BNS-BU, or Layered Collaborative BNS-BU. Each collaborative scenario is separated into multiple columns, each corresponding to a group of simulations in which a different pair of ISPs runs the specific scenario; in every case, the rest of the ISPs run BNS-BU, since we consider it to be in the interest of the ISPs to run these two algorithms. For example, Collaborative BNS-BU Tier 2 means that ISPs 2a and 2b use the Collaborative BNS-BU approach while all other ISPs run BNS-BU; Collaborative BNS-BU Tier 3 means that only the Tier 3 ISPs 3a and 3b use the specific approach; Tier 2&3 refers to the case where 2a and 2b as well as their customers follow it. Each row comprises the mean values of a specific metric, i.e. the mean inter-domain traffic 1a2a, the mean inter-domain traffic 2a3a, and the mean download times in 2a and in 3a.

Table 1. Collaborative BNS-BU approaches

                      |        |    Collaborative BNS-BU    | Layered Collab. BNS-BU
 Metric / Scenario    | BNS-BU | Tier 2 | Tier 3 | Tier 2&3 |  Tier 2  |  Tier 2&3
 Traffic 1a2a (MB/s)  |  1.09  |  0.80  |  1.01  |   0.61   |   0.80   |   0.63
 Traffic 2a3a (MB/s)  | 11.11  | 11.29  | 11.03  |  11.54   |  11.02   |  10.86
 DT 2a (min)          |  2.92  |  2.99  |  2.98  |   2.03   |   3.06   |   3.03
 DT 3a (min)          |  8.95  |  8.85  |  9.01  |   9.11   |   8.96   |   9.06
As shown in Table 1, when only ISPs 2a and 2b run Collaborative BNS-BU (Collaborative BNS-BU Tier 2), they manage to reduce their transit inter-domain traffic by 26%, essentially without affecting the peers’ performance, thus attaining a win-no lose situation. The traffic between 2a and 3a and the download times for peers in 3a are marginally affected (1.6% and 1.1%, respectively); the slightly larger download times for peers in 3a are caused by the use of the BNS-BU mechanism. Thus, for Collaborative BNS-BU Tier 2 there is a win-no lose situation for ISPs in both Tiers (2a and 3a). For ISPs 3a and 3b, the transit inter-domain traffic is not significantly affected (Collaborative BNS-BU Tier 3); at the same time, this has very little effect on the peers’ performance in both Tiers and on the inter-domain traffic 1a2a. However, if ISPs 3a and 3b run Collaborative BNS-BU and 3c and 3d do so too, then the Tier 2 ISPs 2a and 2b have a clear incentive to also run Collaborative BNS-BU. Indeed, this extra collaboration between them will reduce the transit inter-domain traffic 1a2a by 44% and considerably improve the download times of peers in 2a (by 30.5%); see column Collaborative BNS-BU Tier 2&3 in Table 1. Since in this case Tier 3 ISPs are
indifferent to adopting the approach, ISP 2a can induce its customers to adopt it by sharing its own benefits with them. Similar results are obtained from the implementation of Layered Collaborative BNS-BU for Tier 2 ISPs and for Tier 2 & 3 ISPs, due to the large amount of resources from the local and the peering ISPs. However, download times deteriorate slightly more in this case, due to the extra constraints imposed by the finer layering. Thus, we conclude that especially large ISPs, such as the Tier 2 ones, gain from deploying the Collaborative BNS-BU approaches, due to the fact that their peers can find larger amounts of resources locally, from their peering ISP and from their customer ISPs; thus, the inter-domain traffic exchanged through links for which the Tier 2 ISPs are charged can be reduced.
6 Splitting of Chunks

6.1 Approach

On top of the aforementioned mechanisms, we develop an innovative traffic management approach, referred to as Splitting of Chunks. This mechanism is based on the collaboration of ISPs that have a peering agreement with each other; this provides them with the opportunity to further collaborate in order to reduce the redundancy in the content downloaded from non-local peers (or ISPs). The proposed mechanism suggests that the peering ISPs can share costs by deciding to split the content, download different parts of it through their transit links, and then exchange the rest of the parts via their peering link. Indeed, as each peering ISP will download different chunks using its transit link, it will be charged less by the ISP of the upper Tier. The peering ISPs first decide which chunks of the content each of them will download from remote ASes. The chunks can be partitioned into two subsets of equal size according to their ids, e.g. into even and odd ids, into first half and second half, or in any other similar way. The chunks that an ISP does not download from the remote ASes are retrieved via the peering link or from local peers. As in plain BNS-BU, a peer identifies its neighbors by the ranked list of peers that it retrieves from the overlay server. Since we consider that the peers already run the BNS-BU algorithms, only a few modifications have to be implemented. In particular, the ranked list needs to contain an additional field, which indicates the type of chunks (in other words, the set of ids) that the peer is allowed to download from every peer in the list. In our evaluation we split the chunks according to even and odd ids. In this case the list contains triplets of {address – group value – chunkTypeID}, (a, g, c). Thus, the querying peer uses g to identify the proximity of peer a for the BNS and BU algorithms, and c to identify which chunks it can download from this particular peer.
Let us suppose that ISPs 3a and 3b (as in Figure 1) collaborate using the proposed mechanism and have decided that peers from 3a will download the even chunks of the content (c=2) while peers from 3b will download the odd ones (c=1). Then a peer from 3a receives three triplets: one for a peer from 3a, (a1, g1, c1), one for a peer from 3b, (a2, g2, c2), and one for a peer from 2a, (a3, g3, c3). The first peer belongs to 3a, thus g1=1 and c1=0, which indicates that peer a1 is local and the requesting peer can download every chunk from it. For the second peer (in 3b), g2=2 and c2=0, which indicates that peer a2 belongs to a peering ISP and every chunk can be downloaded from it too. Finally, for the third peer (in 2a), g3=3 and c3=2, which indicates that peer a3 belongs to an ISP one hop away and only even chunks can be downloaded from it.
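The resulting chunk filter can be sketched as follows (a minimal illustration with hypothetical names; the chunkTypeID semantics are those of the example: 0 = any chunk, 1 = odd ids only, 2 = even ids only).

```python
def interesting_chunks(missing, neighbor_has, chunk_type):
    """Chunks a peer may request from a neighbor under Splitting of Chunks:
    missing locally, held by the neighbor, and matching the neighbor's
    chunkTypeID c (0 = any, 1 = odd ids, 2 = even ids)."""
    candidates = missing & neighbor_has
    if chunk_type == 0:
        return candidates  # local or peering neighbor: no restriction
    wanted_parity = 1 if chunk_type == 1 else 0
    return {c for c in candidates if c % 2 == wanted_parity}
```

The peer declares interest to the neighbor only if this set is non-empty, which is exactly the modified interest check described below.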
Also, in order for a peer to download content in accordance with Splitting of Chunks, two extra modifications have to be introduced. In particular, once a peer acquires the message with the list of the available chunks of a neighbor, it has to decide whether it is interested in any of the chunks this neighbor has (and eventually download them), as normally happens in the BitTorrent mechanism. Therefore, it needs to check whether the neighbor possesses any of its missing chunks and also whether those chunks belong to the subset of chunks that the field c indicates for that neighbor. If so, the peer declares its interest to that neighbor and requests the specific chunks.

6.2 Experimental Evaluation

In this section, we present the experimental performance of the Splitting of Chunks mechanism and compare it to the BNS-BU mechanism with or without collaboration between ISPs. In the first set of simulations, we ran Splitting of Chunks on top of the BNS-BU mechanism in order to evaluate our approach. The columns of Table 2 present the different scenarios: BNS-BU and Splitting of Chunks. BNS-BU runs in all ISPs of our topology, while there are different sets of experiments for Splitting of Chunks. SC Tier 2 refers to the set of experiments where only 2a and 2b run Splitting of Chunks; respectively, SC Tier 3 refers to the set of experiments where 3a and 3b support this approach, while in SC Tier 2&3, all of 2a, 2b, 3a and 3b support it. In all experiments the rest of the ISPs in our topology run BNS-BU. The rows of Table 2 comprise the mean traffic volume for the inter-domain links 1a2a and 2a3a and the mean download times for peers that belong to the 2a and 3a ASes. In the SC Tier 2 scenario, ISP 2a manages to reduce the inter-domain traffic exchanged with its transit ISP (1a) by 15.6%. The traffic exchanged via the transit link with its customers remains unaffected (~1%), as do the download times for peers in 2a and 3a (<4%).
The inter-domain traffic 2a3a is even slightly reduced. Thus, if ISP 2a runs Splitting of Chunks, there is a win-no lose situation, since it reduces transit traffic without affecting the performance of its users. At the same time, there is a win-no lose situation for its customer ISPs, since the action of ISP 2a affects neither their traffic nor their users’ performance. In fact, ISP 2a would not like to have caused a significant reduction of the transit traffic of its customers, because this would adversely affect its own revenue. From the Tier 3 ISPs’ (3a and 3b) perspective, in case they agree to support the Splitting of Chunks approach, they reduce the inter-domain traffic exchanged with their transit ISP by almost 30%. On the other hand, the performance of their users deteriorates strongly, and thus it is highly unlikely for them to proceed with such an approach. In fact, even if Tier 2 ISPs proceed in adopting Splitting of Chunks, Tier

Table 2. Splitting of Chunks comparison with the BNS-BU mechanism

 Metric / Scenario    | BNS-BU | SC Tier 2 | SC Tier 3 | SC Tier 2&3
 Traffic 1a2a (MB/s)  |  1.09  |   0.92    |   2.2     |    1.69
 Traffic 2a3a (MB/s)  | 11.11  |  10.99    |   7.84    |    7.74
 DT 2a (min)          |  2.92  |   3.04    |   3.00    |    2.99
 DT 3a (min)          |  8.95  |   9.25    |  11.19    |   11.28
3 ones will not make the same decision, despite the significant gain in their transit inter-domain traffic, since their users’ performance is still strongly affected. To reach safe conclusions regarding the problem that Tier 3 ISPs face, we also conducted experiments using Collaborative BNS-BU only in the Tier 3 ISPs 3a and 3b. Similarly, it is not in the interest of a small Tier 3 ISP to proceed with such a collaborative approach, due to the additional download delays that its users would experience. Since the introduction of Splitting of Chunks in Tier 2 ISPs may be considered applicable and promising, we continue our experimental analysis by also introducing Collaborative BNS-BU and Layered Collaborative BNS-BU in the Tier 2 ISPs and combining these approaches with Splitting of Chunks. In Table 3 we present the previous results for BNS-BU, Collaborative BNS-BU and Layered Collaborative BNS-BU, together with the Splitting of Chunks approach on top of those algorithms. The first column of the table comprises the BNS-BU scenario. The second and third columns refer to two scenarios based on Collaborative BNS-BU in the Tier 2 ISPs 2a and 2b (second column) and Splitting of Chunks on top of Collaborative BNS-BU in the Tier 2 ISPs 2a and 2b (third column). In the last two columns we replace Collaborative BNS-BU with Layered Collaborative BNS-BU. The rest of the ISPs run BNS-BU. The effect of Splitting of Chunks on top of Collaborative BNS-BU, either with or without layers, is almost the same. The new approach results in an inter-domain traffic reduction of about 9% compared to Collaborative BNS-BU and 33% compared to BNS-BU, essentially without affecting users’ performance in both Tiers. Thus, Tier 2 ISPs that have a peering agreement can adopt Splitting of Chunks on top of any BNS-BU approach that they follow.
Since Tier 2 ISPs can be expected to implement Splitting of Chunks on top of Collaborative BNS-BU approaches adopted only by them, we also have to investigate the reaction of the Tier 3 ISPs. In Table 4 there are two groups of experiments: one with Collaborative BNS-BU and one with Layered Collaborative BNS-BU. The first column of each group comprises the corresponding Collaborative BNS-BU approach that 2a, 2b and their customers run. The second, third and fourth columns of each group refer to the Splitting of Chunks approach when it is run only by the Tier 2 ISPs 2a and 2b, only by the Tier 3 ISPs 3a and 3b, or by all of 2a, 2b, 3a and 3b, respectively, while all of the aforementioned ISPs and their customers run Collaborative BNS-BU; this is different from the corresponding scenario of Table 3, where only 2a and 2b run Collaborative BNS-BU. The rest of the ISPs run BNS-BU. When only the Tier 2 ISPs 2a and 2b run Splitting of Chunks on top of Collaborative BNS-BU, they manage to reduce their transit inter-domain traffic

Table 3. Splitting of Chunks combined with BNS-BU collaboration approaches in Tier 2

                      |        | Collaborative BNS-BU | Layered Collab. BNS-BU
 Metric / Scenario    | BNS-BU |  Tier 2  | SC Tier 2 |  Tier 2   | SC Tier 2
 Traffic 1a2a (MB/s)  |  1.09  |   0.80   |   0.73    |   0.80    |   0.74
 Traffic 2a3a (MB/s)  | 11.11  |  11.29   |  11.16    |  11.02    |  11.26
 DT 2a (min)          |  2.92  |   2.99   |   2.97    |   3.06    |   3.03
 DT 3a (min)          |  8.95  |   8.85   |   8.88    |   8.88    |   9.02
Table 4. Splitting of Chunks combined with BNS-BU collaboration approaches in all Tiers

                      |    Collaborative BNS-BU (Tier 2&3)        |  Layered Collaborative BNS-BU (Tier 2&3)
 Metric / Scenario    | Collab. | SC Tier 2 | SC Tier 3 | SC T.2&3 | Layered | SC Tier 2 | SC Tier 3 | SC T.2&3
 Traffic 1a2a (MB/s)  |  0.61   |   0.43    |   0.80    |   0.40   |  0.63   |   0.61    |   0.89    |   0.41
 Traffic 2a3a (MB/s)  | 11.54   |  11.90    |   8.17    |  11.80   | 10.86   |  11.05    |   7.69    |  11.15
 DT 2a (min)          |  3.02   |   2.99    |   2.95    |   2.98   |  3.03   |   3.00    |   2.94    |   2.93
 DT 3a (min)          |  9.11   |   8.75    |  11.10    |   8.89   |  9.06   |   9.06    |  11.31    |   8.82
by 30% (60% compared to BNS-BU), essentially without affecting the peers’ performance, thus attaining a win-no lose situation. The traffic between 2a and 3a and the download times for peers in 3a are marginally affected (3% and 3.9%, respectively, and the download times are actually better). Thus, there is a win-no lose situation for ISPs in both Tiers. Tier 3 ISPs will not adopt Splitting of Chunks on their own, as in the previous scenarios, due to the slower download times of their users. In case the Tier 2 ISPs already deploy Splitting of Chunks on top of Collaborative BNS-BU, Tier 3 ISPs are indifferent to adopting the approach. However, in the case where all of them adopt Splitting of Chunks, we observe the lowest value of the inter-domain traffic 1a2a (34% and 63% lower compared to Collaborative BNS-BU and plain BNS-BU, respectively). Again, win-no lose situations arise between Tier 2 ISPs and their users and also between Tier 2 ISPs and their customer ISPs. Therefore, ISP 2a may ask its customers to follow this approach, sharing the benefits with them. The same observations apply to the case of Layered Collaborative BNS-BU. Besides comparing mean download times, we also measured the difference between the 95th percentile value of each scenario and that of BNS-BU; for all scenarios (except the SC Tier 3 approaches) it was below 3%. We also ran experiments where the two large peering ISPs and their customers have different numbers of peers in the swarm. We reached the same conclusions as before, though the download times of the peers deteriorated slightly more due to the lack of resources in the peering ISP. Thus, we conclude that even in the case of a different approach such as Splitting of Chunks, large ISPs gain from deploying it, due to the fact that their peers can find larger amounts of resources locally or from their peering and customer ISPs, and thus the inter-domain traffic exchanged through costly transit links can be reduced.
7 Conclusions and Further Work

In this paper, we introduce and evaluate collaborative variations of the BNS-BU (Biased Neighbor Selection - Biased Unchoking) locality promotion approach, namely Collaborative and Layered Collaborative BNS-BU, as well as the Splitting of Chunks approach. All of them exploit the business relationships between ISPs of either the same or different Tiers in order to reduce the costly transit inter-domain traffic. By means of simulations, we show that large ISPs (Tier 2 in our simulations) can attain this goal while, in general, not deteriorating the performance of their users. Indeed, in case peering Tier 2 ISPs collaborate with each other and with their customer ISPs,
running Collaborative BNS-BU, a win-win situation arises for the large ISPs and their users, without affecting the transit inter-domain traffic and the performance of the users of the Tier 3 ISPs. The introduction of Splitting of Chunks on top of the aforementioned scenario leads to an additional reduction of the transit inter-domain traffic of the Tier 2 ISPs, but just maintains user performance, thus leading to a win-no lose situation. In general, we conclude that large ISPs have the incentive to collaborate with each other, creating large clusters within which they promote locality without affecting the performance of their users, while they can further benefit if they provide their customer ISPs with the incentive to collaborate with them. In future research, we plan to focus on the case of P2P Video-on-Demand applications, and to consider different topologies and setups for our simulations, e.g. asymmetric cases.

Acknowledgments. This work has been performed in the framework of the EU ICT Project SmoothIT (FP7-2007-ICT-216259). The authors would like to thank all SmoothIT partners for useful discussions on the subject of the paper. This work has also been partly funded by the European Social Fund and National Resources (Greek Ministry of Education - HERAKLEITOS II Programme).
References

1. Ipoque: Internet Study (2008/2009), http://www.ipoque.com/
2. SmoothIT: Simple Economic Management Approaches of Overlay Traffic in Heterogeneous Internet Topologies, http://www.smoothit.org/
3. Aggarwal, V., Feldmann, A., Scheideler, C.: Can ISPs and P2P users cooperate for improved performance? ACM SIGCOMM Computer Communication Review 37(3), 29–40 (2007)
4. Bindal, R., Cao, P., Chan, W., Medved, J., Suwala, G., Bates, T., Zhang, A.: Improving traffic locality in BitTorrent via biased neighbor selection. In: 26th IEEE International Conference on Distributed Computing Systems, p. 66. IEEE, Washington, DC, USA (2006)
5. Oechsner, S., Lehrieder, F., Hoßfeld, T., Metzger, F., Pussep, K., Staehle, D.: Pushing the performance of Biased Neighbor Selection through Biased Unchoking. In: Peer-to-Peer Computing, pp. 301–310 (2009)
6. Xie, H., Yang, Y.R., Krishnamurthy, A., Liu, Y.G., Silberschatz, A.: P4P: provider portal for applications. SIGCOMM Comput. Commun. Rev. 38(4) (2008)
7. ProtoPeer, http://protopeer.epfl.ch/faq.html
8. Chang, H., Jamin, S.: To peer or not to peer: Modeling the evolution of the Internet’s AS-level topology. In: Infocom (2006)
9. Wang, H., Liu, J., Xu, K.: On the locality of BitTorrent-based video file swarming. In: 8th International Workshop on Peer-to-Peer Systems, IPTPS 2009 (2009)
10. Lehrieder, F., Oechsner, S., Hoßfeld, T., Despotovic, Z., Kellerer, W., Michel, M.: Can P2P-users benefit from locality-awareness? In: Peer-to-Peer Computing (2010)
11. Choffnes, D.R., Bustamante, F.E.: Taming the Torrent: A practical approach to reducing cross-ISP traffic in peer-to-peer systems. In: SIGCOMM (2008)
12. Hoßfeld, T., Hock, D., Oechsner, S., Lehrieder, F., Despotovic, Z., Kellerer, W., Michel, M.: Measurement of BitTorrent swarms and their AS topologies. Tech. Rep. 463, University of Würzburg (2009)
13. Piatek, M., Madhyastha, H.V., John, J.P., Krishnamurthy, A., Anderson, T.: Pitfalls for ISP-friendly P2P design. In: HotNets (2009)
Optimal Joint Call Admission Control with Vertical Handoff on Heterogeneous Networks Diego Pacheco-Paramo, Vicent Pla, Vicente Casares-Giner, and Jorge Martinez-Bauset Dept. Comunicaciones, Universidad Politecnica de Valencia, UPV Camino de Vera s/n, 46022, Valencia, Spain [email protected], {vpla,vcasares,jmartinez}@dcom.upv.es
Abstract. A joint management of radio resources in heterogeneous networks is considered to raise efficiency. In this paper we focus on joint schemes for admission control and access technology selection using vertical handoff. We study a system where both streaming and elastic traffic are supported, and two radio access technologies covering the same area are available: TDMA and WCDMA. Optimal solutions for different optimization criteria, expressed in terms of blocking probabilities and throughput, are found and described, and a new heuristic policy is proposed. Finally, in order to provide some understanding of the cost of a single vertical handoff, a new optimization criterion is defined in which this cost is set in relation to those of blocking voice and data calls. Keywords: Heterogeneous networks; Markov decision process; Joint call admission control.
1 Introduction
Coexistence of several access technologies demands the study of joint radio resource management (JRRM), given the objective of providing users with a permanent connection with the best possible performance [1]. Since coordinated control of several radio access technologies improves the use of limited resources [2], a challenge that arises is to find the most appropriate scheme for the main functions of JRRM, such as joint call admission control (JCAC) and vertical handoff (VH). JCAC proposals such as those in [3] and [4] show improvements in network stability based on a load balancing scheme for a heterogeneous network, but they do not make use of the advantages of vertical handoff. The complexity that vertical handoff adds to the system is discussed in [5], and several vertical handoff characteristics and algorithms are reviewed in [6]. Most vertical handoff schemes, like [7,8], base decisions on user preferences. We believe that the study of the vertical handoff decision from the operator's point of view is a constructive and important issue. In this article, we are interested in finding an optimal scheme for a heterogeneous network with two technologies, where both streaming and elastic J. Domingo-Pascual et al. (Eds.): NETWORKING 2011, Part II, LNCS 6641, pp. 121–134, 2011. © IFIP International Federation for Information Processing 2011
122
D. Pacheco-Paramo et al.
traffic are supported. In [9] a Markov model is used for a heterogeneous network (WLAN and CDMA), and a linear programming method is used to solve the optimization problem. That paper explicitly acknowledges the inherent complexity of the problem, which could make it computationally intractable for large systems. This is a main concern in this work. Therefore, we have simplified some characteristics of the system in such a way that the results can be extrapolated to more complex systems. Several solution methods for similar problems have been used, such as genetic algorithms [10] or fuzzy-based solutions [11], [12]. In this work, policy iteration is used since it allows us to discern the main characteristics of the optimal policies on a per-iteration basis as several parameters vary. The last part of the article focuses on the cost of a single vertical handoff, and how it relates to other costs such as those of blocking voice or data calls. Other solutions define a monetary cost of vertical handoff, and users decide according to this and other parameters [13]. In contrast, our interest is to explore the impact of this cost from the operator's point of view in relation to other performance parameters such as the blocking probabilities. The paper is structured as follows. In Section 2 we describe the Markov model of the system and the types of vertical handoff used. The solution method is described in Section 3. In Section 4 the optimal policies are analyzed and some heuristics are proposed and compared with the optimal solutions. A study of the cost of performing a vertical handoff in relation to those of blocking voice and data calls is presented in Section 5. Finally, Section 6 concludes the work.
2 System Description and Markov Model
The system of our interest is composed of two radio access technologies, TDMA and WCDMA. Both technologies provide voice and data services in the same area, as proposed in [14]. When a call arrives to the system, a decision has to be made about which technology serves it. Arrivals of both services (voice and data) follow Poisson processes, with rate λv (λd) for voice (data). The service time for voice is exponentially distributed with mean 1/μv. On the other hand, because data is best modelled as elastic traffic, its sojourn time depends on the available resources. Therefore, the service time for data is exponentially distributed with mean 1/F(σ), where σ is the average size of the information sent and F refers to its division by the transmission rate.
2.1 State Space
The state vector of the continuous-time Markov chain (CTMC) is s = (s1, s2, s3, s4), where s1 represents the number of ongoing voice sessions on TDMA, s2 the data sessions on TDMA, s3 the voice sessions on WCDMA and s4 the data sessions on WCDMA. We define C as the fixed number of channels per slot in TDMA. A voice
Optimal JCAC with Vertical Handoff on Heterogeneous Networks
123
session will always use a whole channel, so there can only be C simultaneous voice sessions on this technology. On the other hand, data sessions can share a channel when TDMA is at full capacity, in such a way that nc data sessions can be served per channel. This means that we can have a maximum of C · nc simultaneous data sessions in TDMA. According to this, the first condition that a state must fulfill to be feasible is given by

s1 · nc + s2 ≤ nc · C.    (1)
The capacity on WCDMA is defined by

s3 · (W/(BRw,v · (Eb/N0)v) + 1)^(−1) + s4 · (W/(BRw,d · (Eb/N0)d) + 1)^(−1) ≤ ηul,    (2)
where W is the chip rate, BRw,x is the bit rate used for transmitting service x in WCDMA, (Eb/N0)x is the bit energy to noise density ratio required for service x, and ηul is the uplink cell load factor. This is the same expression used in [15]. Considering that each technology has independent resources, the feasible combinations of data and voice users can be calculated for each technology separately. We define S as the set of feasible states, that is, all state vectors s that fulfill the conditions defined in (1) and (2).
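As an illustration, the feasible state space S can be enumerated directly from conditions (1) and (2). The following Python sketch uses the parameter values later listed in Table 3; the function and variable names are ours, not part of the paper.

```python
import itertools

# Parameters from Table 3 (illustrative values).
W = 3.84e6                      # WCDMA chip rate [chips/s]
C, nc = 4, 2                    # TDMA channels, data sessions per channel
BR_wv, BR_wd = 12.2e3, 44.8e3   # WCDMA bit rates for voice and data
EbN0 = 10 ** (14 / 10)          # 14 dB as a linear ratio (both services)
eta_ul = 1.0                    # uplink cell load factor

# Per-session uplink load contributions, the inverted terms in condition (2).
load_v = 1.0 / (W / (BR_wv * EbN0) + 1)
load_d = 1.0 / (W / (BR_wd * EbN0) + 1)

def feasible(s):
    s1, s2, s3, s4 = s
    return (s1 * nc + s2 <= nc * C                     # condition (1)
            and s3 * load_v + s4 * load_d <= eta_ul)   # condition (2)

# Bound each component and enumerate S exhaustively.
max_s3 = int(eta_ul / load_v)
max_s4 = int(eta_ul / load_d)
S = [s for s in itertools.product(range(C + 1), range(nc * C + 1),
                                  range(max_s3 + 1), range(max_s4 + 1))
     if feasible(s)]
print(len(S), "feasible states")
```

Since the two technologies have independent resources, the two conditions factor cleanly, which is what keeps the enumeration cheap.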
2.2 System Metrics
The parameters that define performance are the voice blocking probability, the data blocking probability, and the total throughput. The total blocking probability is the probability that the system is in a state where all calls are blocked, independently of the service required. In the same way, the voice (data) blocking probability is the probability of being in a state where voice (data) calls are blocked. To calculate the throughput we have to consider that the bit rate is independent for each service and technology and that data sessions in TDMA can share a channel, which is reflected in the min(C − s1, s2) factor of the following equation:

Th = Σs∈S P(s) · [s1 · BRt,v + s3 · BRw,v + min(C − s1, s2) · BRt,d + s4 · BRw,d],    (3)
where BRx,y is the bit rate used for transmitting service y (voice or data) in technology x (TDMA or WCDMA).
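The bracketed per-state term of (3) can be written as a small function (a sketch with the Table 3 bit rates as defaults; names are ours):

```python
def state_throughput(s, C=4, BR_tv=12.2e3, BR_wv=12.2e3,
                     BR_td=44.8e3, BR_wd=44.8e3):
    """Per-state throughput term of eq. (3).  The s2 data sessions share
    the C - s1 channels left free by voice, so their aggregate rate is
    min(C - s1, s2) * BR_td."""
    s1, s2, s3, s4 = s
    return (s1 * BR_tv + s3 * BR_wv
            + min(C - s1, s2) * BR_td + s4 * BR_wd)
```

The total throughput Th of (3) is then the expectation of this term under the stationary distribution P(s).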
2.3 Vertical Handoff
Inferred from the optimal policies obtained in [16], four vertical handoff types are defined:
Type I. Triggering event: voice arrival. Conditions: full occupation on WCDMA. Moved calls: N* data calls, WCDMA ⇒ TDMA. Used by: VH-A, VH-B.

Type II. Triggering event: data arrival. Conditions: channel sharing on TDMA. Moved calls: one voice call, TDMA ⇒ WCDMA. Used by: VH-A, VH-B.

Type III. Triggering event: voice or data departure from WCDMA. Conditions: channel sharing on TDMA. Moved calls: one voice call, TDMA ⇒ WCDMA. Used by: VH-B only.

Type IV. Triggering event: voice or data departure from TDMA. Conditions: the VH does not produce channel sharing on TDMA. Moved calls: one data call, WCDMA ⇒ TDMA. Used by: VH-B only.

* N is the necessary number of moved data calls such that one voice call can access WCDMA.
The system that uses vertical handoff types I and II is called VH-A, the one that uses all types is called VH-B, and the one that does not use vertical handoff is called NVH.
3 Optimization Problem
For each state s ∈ S, a decision must be made about whether an arriving call should be admitted, according to the service required. In a Markov decision process, a policy π defines which actions a = (avs, ads) should be taken at each state s for voice (avs) and data (ads) arrivals. It should be clear that decision epochs occur only at arrivals. The main objective is to find, among the possible policies, the one that optimizes a chosen function. The set of actions A defines the possible values for avs and ads, and is given in Table 1. Table 2 shows the transition rates rsu from state s to state u (s, u ∈ S).

Table 1. Set of actions A for MDPs with vertical handoff

value 0: Block call
value 1: Send call to TDMA
value 2: Send call to WCDMA
value 3: Vertical Handoff type I
value 4: Vertical Handoff type II
Table 2. Transition rates from state s to u
Voice arrival, action avs:
  avs = 1:  u = s + e1,  rsu = λv
  avs = 2:  u = s + e3,  rsu = λv
  avs = 3:  u = s + N·e2 + e3 − N·e4,  rsu = λv

Data arrival, action ads:
  ads = 1:  u = s + e2,  rsu = λd
  ads = 2:  u = s + e4,  rsu = λd
  ads = 4:  u = s − e1 + e2 + e3,  rsu = λd

(ei denotes the i-th unit vector.)
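The successor states of Table 2 can be encoded directly; for instance, action 3 (VH type I) moves N data calls from WCDMA to TDMA while admitting the new voice call in WCDMA. A sketch (names ours):

```python
def successor(s, service, action, N=1):
    """Next state u for an arrival of `service` handled with `action`,
    following Table 2.  Action 0 (block) leaves the state unchanged."""
    s1, s2, s3, s4 = s
    if service == 'voice':
        if action == 1: return (s1 + 1, s2, s3, s4)      # u = s + e1
        if action == 2: return (s1, s2, s3 + 1, s4)      # u = s + e3
        if action == 3:                                  # VH type I
            return (s1, s2 + N, s3 + 1, s4 - N)          # s + N*e2 + e3 - N*e4
    elif service == 'data':
        if action == 1: return (s1, s2 + 1, s3, s4)      # u = s + e2
        if action == 2: return (s1, s2, s3, s4 + 1)      # u = s + e4
        if action == 4:                                  # VH type II
            return (s1 - 1, s2 + 1, s3 + 1, s4)          # s - e1 + e2 + e3
    return s
```

In a full implementation each returned state would still be checked against the feasibility conditions (1) and (2).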
3.1 Cost Function
Since our interest lies in the data and voice blocking probabilities, as well as the total throughput, we have defined two different objective functions. The first one is the weighted sum of the voice and data blocking probabilities,

FBP = BPvoice · α + BPdata · (1 − α).    (4)
The parameter α, 0 ≤ α ≤ 1, gives more or less weight to each blocking probability; thus, α determines the proportion in which the two blocking probabilities are minimized. The cost function associated to this objective function for each feasible state s is

cost(s) = 1 − (α · Fv(as) + (1 − α) · Fd(as)),    (5)
where Fx(as) = 1 if as is 1 or 2, and 0 otherwise, x being the service. The second objective function is the aggregated throughput, so in that case we try to maximize the value defined by (3). The reward for each state s is

cost(s) = s1 · BRt,v + s3 · BRw,v + min(C − s1, s2) · BRt,d + s4 · BRw,d.    (6)
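A sketch of the per-state cost of (5) (the function name is ours; note that, following the paper, Fx counts only the direct assignments, actions 1 and 2):

```python
def blocking_cost(a_voice, a_data, alpha):
    """Per-state cost for the blocking objective, eq. (5).
    F_x(a) = 1 iff the call of service x is sent to TDMA (1) or WCDMA (2)."""
    Fv = 1 if a_voice in (1, 2) else 0
    Fd = 1 if a_data in (1, 2) else 0
    return 1 - (alpha * Fv + (1 - alpha) * Fd)
```

The reward of (6) is simply the per-state term already appearing inside the sum of (3).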
3.2 Solution Method
The method used to find the optimal policy πopt is policy iteration [17]. This method searches among the finite set of possible policies for the MDP and finds the optimal one in a finite number of steps. The relative values V allow us to relate the cost obtained in the current state with the costs expected from future actions, and are found using the following equation:

cπ − c̄π · e + Vπ · RπT = 0.    (7)
In the previous expression, cπ is the vector of costs associated with being in each state, Rπ is the transition matrix, and c̄π is the value of the objective function for policy π. Once V and c̄π are found using (7), it is possible to find the action in each state that minimizes the objective function using the following expression:
mina [ cas − c̄π + Σu≠s ras,u · (vπu − vπs) ],    (8)
where vπu and vπs are the relative values for states u and s, respectively, under policy π; cas is the cost associated to state s when action a is taken, and ras,u is the transition rate from state s to state u when the action for state s is a. The resulting set of actions defines a new policy, and the process is repeated until the optimal policy is found.
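To make the evaluate/improve loop of (7)-(8) concrete, the sketch below runs average-cost policy iteration on a deliberately tiny admission-control chain: an M/M/1/3 queue where the only decision is to admit or block arrivals, with cost accruing at rate λ while blocking. This toy model, and all names in it, are ours; it illustrates only the mechanics of the method, not the paper's full state space.

```python
import numpy as np

def relative_values(R, c):
    """Policy evaluation, eq. (7): solve c - g*e + R*v = 0 with v[0] = 0,
    where R is the CTMC generator of the current policy and c the vector
    of per-state cost rates.  Returns (v, g)."""
    n = len(c)
    A = np.hstack([R[:, 1:], -np.ones((n, 1))])  # unknowns: v[1:], g
    x = np.linalg.solve(A, -c)
    return np.concatenate(([0.0], x[:-1])), x[-1]

lam, mu, K = 1.0, 1.5, 3        # toy M/M/1/K admission-control example

def generator_and_cost(policy):
    R = np.zeros((K + 1, K + 1))
    c = np.zeros(K + 1)
    for s in range(K + 1):
        if s < K and policy[s] == 1:
            R[s, s + 1] = lam   # admit: arrival moves s -> s + 1
        else:
            c[s] = lam          # block: pay cost at rate lam
        if s > 0:
            R[s, s - 1] = mu    # service completions
        R[s, s] = -R[s].sum()   # generator diagonal
    return R, c

policy = [0] * (K + 1)          # start from the block-everything policy
for _ in range(50):
    R, c = generator_and_cost(policy)
    v, g = relative_values(R, c)
    # Policy improvement, eq. (8): departure terms are identical under
    # both actions and cancel, so only the arrival contribution matters.
    new = [1 if s < K and lam * (v[s + 1] - v[s]) < lam else 0
           for s in range(K + 1)]
    if new == policy:
        break
    policy = new

print(policy, round(g, 4))      # -> [1, 1, 1, 0] 0.1231
```

The loop converges in two iterations to "admit whenever the queue is not full", with average cost g = λ·P(K) = 8/65.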
4 Optimal Policy Analysis
To obtain and analyze the optimal solutions, the scenario defined in Table 3 will be used unless otherwise stated. The values are chosen in order to keep the problem computationally tractable while at the same time keeping capacity proportionality between the technologies used. In this section we study MDP VH-A and MDP VH-B, which are based on the systems defined in Section 2.3. Since MDP VH-B performs vertical handoff types III and IV each time a departure occurs, both MDPs share the same action set A of Table 1. The state space S for both MDPs is defined by (1) and (2).

Table 3. Initial Scenario for Policy Iteration

WCDMA: W = 3.84 Mcps; (Eb/N0)v = 14 dB; (Eb/N0)d = 14 dB; BRw,v = 12.2 kbps; BRw,d = 44.8 kbps; ηul = 1
TDMA: C = 4; nc = 2; BRt,v = 12.2 kbps; BRt,d = 44.8 kbps
Clients: λv = 0.025; λd = 0.134; μv = 0.0083; σ = 1 Mb
When using both optimization functions, the blocking function defined in (4) and the throughput function defined in (3), while varying λv and λd, the main characteristics of the optimal policies for both MDPs can be summarized as follows:

voice: send to WCDMA or use VH type I.
data: send to TDMA while no sharing is needed; otherwise, use VH type II or send to WCDMA.

However, there are differences for each case that should be analysed.
4.1 Blocking Function Optimization
When the blocking function is optimized and λv varies from 0.005 to 0.095, the number of states that are left unused grows with λv for both MDPs. For MDP VH-B it grows from 32.1% to 50.4%, and for MDP VH-A it grows from 10.4% to 41.7%. Therefore, MDP VH-B organizes calls in a more effective way, given the impact of the vertical handoffs performed at departures. Also, for the highest values of λv, when voice has a bigger influence on the optimization function, there is some blocking of voice calls even when there is space left. When λv = 0.095, 1.2% of the usable states block voice calls with MDP VH-B, while 1% do so with MDP VH-A. When λd grows from 0.05 to 0.225, the percentage of unused states for MDP VH-A remains constant at 41.6%, while it decreases from 49.4% to 39.6% with MDP VH-B. This decrease occurs because of vertical handoff type III, which is used more often as λd grows. Also, for high values of λd, there is some blocking of data calls even when there is space available. For λd = 0.225, 2.56% of the usable states block data calls for MDP VH-A, and 2.98% do so for MDP VH-B.
4.2 Throughput Optimization
When the throughput is optimized and λv grows from 0.005 to 0.095, the percentage of unused states also grows, from 16% to 41.7%, for MDP VH-A, but decreases from 61.2% to 49.5% for MDP VH-B. Also, when λv is low there is some blocking of voice calls while there is still space available. For λv = 0.005, 0.71% of the usable states block voice calls for MDP VH-A, while 0.25% do so for MDP VH-B. Therefore, MDP VH-B manages resources in a more organized way, since the number of states left unused is always higher. When λd grows from 0.05 to 0.225, the percentage of unused states also grows, from 41.6% to 43% for MDP VH-A and from 45.4% to 47.2% for MDP VH-B. Also, since voice calls contribute less to the total throughput, some states block voice calls when there is space left. This occurs for high values of λd. When λd = 0.225, 0.7% of the usable states block voice calls for MDP VH-A, while 0.56% do so for MDP VH-B.
4.3 Result Analysis for Vertical Handoff MDPs
Based on the results obtained in Sections 4.1 and 4.2, we propose two new heuristic policies that make use of their main characteristics. Heuristic VH-A makes use of vertical handoff types I and II, as was seen in the optimal policies found using MDP VH-A. On the other hand, Heuristic VH-B also includes vertical handoff types III and IV, as was done by the optimal policies obtained using MDP VH-B.
Heuristic VH-A:
  Voice arrival: send to WCDMA. If it is not possible, use VH type I. If it is not possible, send to TDMA.
  Data arrival: send to TDMA if no channel sharing is needed. If there is channel sharing, use VH type II. If it is not possible, send to WCDMA. If it is not possible, send to TDMA.

Heuristic VH-B:
  Voice arrival: send to WCDMA. If it is not possible, use VH type I. If it is not possible, send to TDMA.
  Data arrival: send to TDMA if no channel sharing is needed. If there is channel sharing, use VH type II. If it is not possible, send to WCDMA. If it is not possible, send to TDMA.
  Voice departure: use VH type III if it departs from WCDMA; use VH type IV if it departs from TDMA.
  Data departure: use VH type III if it departs from WCDMA; use VH type IV if it departs from TDMA.
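The arrival rules of both heuristics reduce to ordered preference lists where the first feasible option wins. A minimal sketch (the option names and the `feasible` set abstraction are ours; feasibility itself would be checked against (1), (2) and the VH conditions):

```python
PREFS = {
    'voice': ['wcdma', 'vh_type_I', 'tdma'],
    'data':  ['tdma_no_sharing', 'vh_type_II', 'wcdma', 'tdma'],
}

def heuristic_arrival(service, feasible):
    """Return the first feasible option in preference order, else block.
    `feasible` is the set of options currently possible in the state."""
    for option in PREFS[service]:
        if option in feasible:
            return option
    return 'block'
```

Heuristic VH-B additionally triggers, on every departure, VH type III (departure from WCDMA) or VH type IV (departure from TDMA) whenever their conditions hold.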
In this section we compare the results obtained by the two new heuristic policies with those of the MDPs that use vertical handoff and the one that does not (MDP NVH). Fig. 1(a) shows the blocking function for the MDPs and the two heuristic policies when λv varies from 0.005 to 0.095. It is evident that the use of vertical handoff enhances the system behavior, since both heuristics reach a lower value for all λv. Also, if we restrict BPv and BPd to below 2%, λv can grow up to 0.07 for Heuristic VH-A, while for MDP NVH the maximum λv was 0.062, a difference of about 1 Erl. However, it is difficult to notice the effect of vertical handoff at service completion in Fig. 1(a), since the improvement is not as pronounced as one could expect. Fig. 1(b) shows the blocking function as λd varies from 0.05 to 0.225. Again, the performance of the heuristic policies is better than that of MDP NVH. When λd = 0.225, the value of the blocking function for Heuristics VH-A and VH-B is very similar, and 85% of the value obtained by MDP NVH. As λd grows, the solutions found by the MDPs with vertical handoff differ more from those of the heuristics and from the optimal solution found using MDP NVH. This is due to the rise of BPv under the heuristic policies, which the optimal policies manage by creating some data blocking states. When optimizing throughput, there is only a significant improvement of the solutions with vertical handoff over MDP NVH for the highest values of λv, as can be seen in Fig. 2(a). When λv = 0.095, the throughput of the policies that use vertical handoff is very similar among them and above the value of MDP NVH by about 1.7%. This improvement represents about 44.3 kbps, almost 1 data
Fig. 1. Blocking Function Optimization: (a) λv variation; (b) λd variation.
channel or 4 voice channels. However, for this value of λv, BPv and BPd are well above acceptable levels, at 7.5% and 3.8% respectively for the solution of MDP VH-B, the one with the best results. Also, vertical handoff improves BPv and BPd. Restricting these values to 2%, the maximum acceptable λv for MDP VH-A is around 0.068, while for MDP NVH it is 0.064. This clearly indicates that the rearrangement done by vertical handoff improves the throughput by raising the maximum load capacity of the system. Fig. 2(b) shows the throughput for the same policies as λd varies from 0.05 to 0.225. In this case, the improvement in throughput of the new policies over MDP NVH is very low. Also, the values for BPd are very similar, showing how difficult it is for the system to deal with high rates of λd. However, there is a significant difference in the values obtained for BPv. When BPv reaches 2% for the solution obtained with MDP NVH, the value for Heuristic VH-A is 0.5%, a considerable reduction. This result also shows that the improvement in throughput as λd grows comes from voice calls, which contribute less to the total throughput given their data rate.
5 Cost of the Vertical Handoff
In this section, a new objective function based on the blocking rates and the vertical handoff rate is defined as

FVH = θ · ζVB + (1 − θ) · ζDB + CVH · ζVH,    (9)
where θ, 0 ≤ θ ≤ 1, is the factor that defines the cost for blocking a single voice or data call, and ζV B and ζDB are the mean voice and data blocking rates.
In the same way, CVH is the cost of performing a single vertical handoff, and ζVH is the mean rate of vertical handoffs performed. Hence, by assigning values to θ and CVH, a new optimal policy that minimizes FVH can be found using a Markov decision process.
Fig. 2. Throughput Optimization: (a) λv variation; (b) λd variation.
5.1 Markov Decision Process for VH Cost
In this section we define three new MDPs based on (9). The first one does not use vertical handoff (MDP BR), the second one uses vertical handoff types I and II (MDP C1), and the last one uses all four types of vertical handoff (MDP C2). These MDPs differ from those presented before in their objective functions. The state space S is the same for all of them, defined by (1) and (2), and the set of actions A is that of Table 1 for MDP C1 and MDP C2, while the set of actions for MDP BR does not include actions 3 and 4. The cost function associated to the objective function for each feasible state s is, for MDP BR,

cost(s) = λv · G(avs) · θ + λd · G(ads) · (1 − θ),    (10)

for MDP C1,

cost(s) = λv · G(avs) · θ + λd · G(ads) · (1 − θ) + CVH · (λv · R(avs) + λd · R(ads)),    (11)

and for MDP C2,

cost(s) = λv · G(avs) · θ + λd · G(ads) · (1 − θ) + CVH · (λv · R(avs) + λd · R(ads))
+ CVH · (TvTDMA(s) + TdTDMA(s) + TvWCDMA(s) + TdWCDMA(s)),    (12)

where the coefficients are explained in Table 4.
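Putting (10)-(12) together, the MDP C2 cost of a state can be sketched as below (names are ours; the vertical-handoff condition checks of Table 4 are abstracted into the `vh3_ok` and `vh4_ok` flags, and the numeric defaults come from Table 3):

```python
def cost_c2(s, a_v, a_d, lam_v, lam_d, theta, c_vh,
            vh3_ok=False, vh4_ok=False, N=1,
            mu_v=0.0083, sigma=1e6, C=4,
            BR_td=44.8e3, BR_wd=44.8e3):
    """Per-state cost rate for MDP C2, eq. (12)."""
    s1, s2, s3, s4 = s
    G = lambda a: 1 if a == 0 else 0           # blocking indicator, Table 4
    R_v = N if a_v == 3 else 0                 # VH type I moves N data calls
    R_d = 1 if a_d == 4 else 0                 # VH type II moves 1 voice call
    cost = lam_v * G(a_v) * theta + lam_d * G(a_d) * (1 - theta)   # eq. (10)
    cost += c_vh * (lam_v * R_v + lam_d * R_d)                     # eq. (11)
    T = 0.0
    if vh4_ok:   # departures from TDMA trigger VH type IV
        T += s1 * mu_v + min(C - s1, s2) * BR_td / sigma
    if vh3_ok:   # departures from WCDMA trigger VH type III
        T += s3 * mu_v + s4 * BR_wd / sigma
    return cost + c_vh * T                                         # eq. (12)
```

Dropping the departure term (vh3_ok = vh4_ok = False) recovers the MDP C1 cost (11), and additionally setting c_vh = 0 recovers the MDP BR cost (10).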
Table 4. Coefficients for the optimization function

G(axs): indicates whether s is a blocking state for service x. If action axs is blocking, G(axs) = 1; otherwise G(axs) = 0.

R(axs): indicates the number of calls that suffer vertical handoff when a call of service x arrives while the system is in state s. If avs = 3 and all the conditions for vertical handoff type I are fulfilled, R(avs) = N; if ads = 4 and all the conditions for vertical handoff type II are fulfilled, R(ads) = 1; otherwise R(axs) = 0.

TvTDMA(s): indicates the rate of calls that suffer vertical handoff when a voice call is served on TDMA while the system is in state s. If the conditions for vertical handoff type IV are fulfilled once a voice call is served on TDMA, TvTDMA(s) = s1 · μv; otherwise TvTDMA(s) = 0.

TdTDMA(s): indicates the rate of calls that suffer vertical handoff when a data call is served on TDMA while the system is in state s. If the conditions for vertical handoff type IV are fulfilled once a data call is served on TDMA, TdTDMA(s) = min(C − s1, s2) · BRt,d/σ; otherwise TdTDMA(s) = 0.

TvWCDMA(s): indicates the rate of calls that suffer vertical handoff when a voice call is served on WCDMA while the system is in state s. If the conditions for vertical handoff type III are fulfilled once a voice call is served on WCDMA, TvWCDMA(s) = s3 · μv; otherwise TvWCDMA(s) = 0.

TdWCDMA(s): indicates the rate of calls that suffer vertical handoff when a data call is served on WCDMA while the system is in state s. If the conditions for vertical handoff type III are fulfilled once a data call is served on WCDMA, TdWCDMA(s) = s4 · BRw,d/σ; otherwise TdWCDMA(s) = 0.
5.2 Results Analysis
In this section, policy iteration is used to solve the MDPs. The reference scenario is that of Table 3 with θ = 0.5, which means that the costs of blocking voice and data calls are equal. Fig. 3 shows the optimal values for each of the MDPs introduced in the last section for different values of CVH. MDP C2 has the lowest optimal value when CVH = 0 and is followed closely by MDP C1. However, as CVH grows, the value of MDP C2 grows rapidly, surpassing MDP C1 and MDP BR because of the large number of type III and IV vertical handoffs performed. In fact, for values of CVH as low as 0.01, the optimal value of MDP C2 is higher than that of MDP BR. This means that when CVH exceeds this value, the policy that this MDP uses is not justified. Keeping in mind that the cost of blocking voice and data calls is 0.5, the cost of blocking voice and data calls must be about 50 times higher than that of a vertical handoff for this policy to be useful.
Fig. 3. Optimization values for various CVH.
On the other hand, the results found using MDP C1 vary slowly and are bounded by those of MDP BR. This occurs because, if vertical handoff is what causes the objective function to grow, MDP C1 can always choose not to use it, in which case its optimal policy is identical to that of MDP BR. The interesting point is to find the value of CVH that makes both MDPs reach the same optimal policy and therefore the same results. In Fig. 3, when CVH = 0.9, both policies are the same. This means that when the cost of a vertical handoff is about 1.8 times higher than that of blocking a voice or data call, vertical handoff is no longer useful. It is interesting to notice that when CVH = 0, the optimal policies of MDP C1 and MDP C2 are very similar to those of MDP VH-A and MDP VH-B, even though the objective functions are different. Therefore, the values of parameters such as throughput and the blocking probabilities (BPv and BPd) are similar as well. As CVH grows, the performance of these parameters degrades, and this happens at a faster rate for MDP C2 than for MDP C1. Hence, while the ratio of the voice/data blocking cost to CVH is high, one can expect the heuristic policies VH-A and VH-B to fairly represent the optimal policies. This last remark holds over an even wider range of ratios in the case of MDP C1 and Heuristic VH-A, for the reasons explained earlier.
6 Conclusions
We have studied the optimal joint call admission control policy with vertical handoff in a system with heterogeneous technologies (WCDMA and TDMA) and services (voice and data). Two optimization functions were used, a blocking function and a throughput function. Four vertical handoff types are defined according to the service and the event that triggers them. Some simplifications
such as technologies’ capacity or coverage areas were restricted in order to obtain computational tractability. However, the solutions found apply for bigger systems and are useful as an starting point for more complex problems where areas do not always overlap or more networks are available. The analysis of the optimal policies allowed us to define new heuristic solutions that are simple enough as it is required, while at the same time improve the performance of other schemes that do not use vertical handoff. At the same time, it was shown how heuristic solutions perform very close to the optimal solutions for a wide range of arrival rates and both optimization criteria. Finally, a study that combines the cost of blocking voice and data calls to that of performing vertical handoff was done. It should be noted that this is a very important topic when resource availability and utilization efficiency are main concerns, that is, the optimization and decisions are made from the operator’s point of view, whose needs are important to understand. We concluded that only certain types of vertical handoff are useful, and only while the ratio blocking cost/handoff cost is between a specific range.
Acknowledgements. This work was supported by the Spanish Government, the European Commission and the Universidad Politécnica de Valencia through projects TIN2010-21378-C02-02, TIN2008-06739-C04-02/TSI and PAID-06-09.
References
1. Wu, L., Sandrasegaran, K.: A Survey on Common Radio Resource Management. In: 2nd Conference on Wireless Broadband and U-Wideband Communications, AusWireless (2007)
2. Tolli, A., Hakalin, P., Holma, H.: Performance Evaluation of Common Radio Resource Management. In: IEEE International Conference on Communications (ICC), vol. 5, pp. 3429–3433 (2002)
3. Pillekeit, A., Derakhshan, F., Jugl, E., Mitschele-Thiel, A.: Force-based Load Balancing in Co-located UMTS/GSM Networks. In: IEEE 60th VTC 2004-Fall, pp. 4402–4406 (2004)
4. Suleiman, K., Chan, A., Dlodlo, M.: Load Balancing in the Call Admission Control of Heterogeneous Wireless Networks. In: International Wireless Communications and Mobile Computing Conference, pp. 245–250 (2006)
5. Falowo, O.E., Chan, H.A.: Joint Call Admission Control Algorithms: Requirements, Approaches, and Design Considerations. 31, 1200–1217 (2007)
6. Márquez-Barja, J., Calafate, C.T., Cano, J.-C., Manzoni, P.: An Overview of Vertical Handover Techniques: Algorithms, Protocols and Tools. Computer Communications (2010)
7. Stevens-Navarro, E., Wong, V.: Comparison between Vertical Handoff Decision Algorithms for Heterogeneous Wireless Networks. In: IEEE 63rd Vehicular Technology Conference VTC 2006-Spring, pp. 947–951 (2006)
8. Goyal, P., Saxena, S.K.: A Dynamic Decision Model for Vertical Handoffs across Heterogeneous Wireless Networks. World Academy of Science, Engineering and Technology 31 (July 2008)
9. Yu, F., Krishnamurthy, V.: Optimal Joint Session Admission Control in Integrated WLAN and CDMA Cellular Networks with Vertical Handoff. IEEE Transactions on Mobile Computing 6, 126–139 (2007)
10. Karabudak, D., Hung, C., Bing, B.: A Call Admission Control Scheme Using Genetic Algorithms, pp. 1151–1158 (2004)
11. Agusti, R., Sallent, O., Perez-Romero, J., Giupponi, L.: A Fuzzy-Neural Based Approach for Joint Radio Resource Management in a Beyond 3G Framework. In: 1st International Conference on Quality of Service in Heterogeneous Wired/Wireless Networks, pp. 216–224 (2004)
12. Falowo, O.E., Chan, H.A.: Fuzzy Logic Based Call Admission Control for Next Generation Wireless Networks. In: 3rd International Symposium on Wireless Communication Systems, pp. 574–578 (2008)
13. Hasswa, A., Nasser, N., Hassanein, H.: Generic Vertical Handoff Decision Function for Heterogeneous Wireless Networks. In: 2nd IFIP International Conference on Wireless and Optical Communications Networks, pp. 239–243 (2005)
14. Gelabert, X., Perez-Romero, J., Sallent, O., Agusti, R.: A Markovian Approach to Radio Access Technology Selection in Heterogeneous Multiaccess/Multiservice Wireless Networks. IEEE Transactions on Mobile Computing 7(10) (2008)
15. Gilhousen, K., Jacobs, I., Padovani, R., Viterbi, A., Weaver, L., Wheatley, C.: On the Capacity of a Cellular CDMA System. IEEE Transactions on Vehicular Technology 40(2) (1991)
16. Pacheco-Paramo, D., Pla, V., Casares-Giner, V., Martinez-Bauset, J.: Optimal Radio Access Technology Selection on Heterogeneous Networks, http://personales.upv.es/diepacpa/HetNet.pdf
17. Bertsekas, D.P.: Dynamic Programming and Optimal Control. Athena Scientific (2001)
Balancing by PREFLEX: Congestion Aware Traffic Engineering João Taveira Araújo, Richard Clegg, Imad Grandi, Miguel Rio, and George Pavlou Network & Services Research Laboratory Department of Electronic & Electrical Engineering University College London, UK {j.araujo,r.clegg,i.grandi,m.rio,g.pavlou}@ee.ucl.ac.uk
Abstract. There has long been a need for a robust and reliable system which distributes traffic across multiple paths. In particular such a system must rarely reorder packets, must not require per-flow state, must cope with different paths having different bandwidths and must be self-tuning in a variety of network contexts. PREFLEX, proposed herein, uses estimates of loss rate to balance congestion. This paper describes a method of automatically adjusting how PREFLEX will split traffic in order to balance loss across multiple paths in a variety of network conditions. Equations are derived for the automatic tuning of the time scale and traffic split at a decision point. The algorithms described allow the load balancer to self-tune to network conditions. The calculations are simple and do not place a large burden on a router which would implement the algorithm. The algorithm is evaluated by simulation using ns-3 and is shown to perform well under a variety of circumstances. The resulting adaptive, end-to-end traffic balancing architecture provides the necessary framework to meet the increasing demands of users while simultaneously offering edge networks more fine-grained control at far shorter timescales. Keywords: Traffic engineering, congestion control.
1 Introduction
Traffic engineering, whether applied intradomain [1,2] or interdomain [3,4,5], focuses on distributing traffic across multiple paths whilst reducing an explicit cost, typically by minimizing the maximum link load observed over a period of time. Most research within traffic engineering however has been limited to optimizing traffic distribution from the vantage point of a single domain [6], failing to take into consideration the wider impact such actions may have on end-to-end performance. This myopic view of traffic has become a severe limitation as end-hosts become increasingly able to pool multiple paths according to their own preferences, optimizing for throughput, latency or reliability. While such trends are already noticeable due to peer-to-peer applications such as Bittorrent, it is
J. Domingo-Pascual et al. (Eds.): NETWORKING 2011, Part II, LNCS 6641, pp. 135–149, 2011. © IFIP International Federation for Information Processing 2011
expected that resource pooling will further tip toward the end-host with the deployment of multipath transport protocols such as MPTCP [7]. As a consequence of traffic patterns becoming more dynamic, traffic engineering has been forced to remain static. Since both congestion control and traffic engineering are intrinsically tied but unaware of each other by nature, there is a justified concern that updating routing too often may lead to oscillations in demand as hosts shift traffic to paths with better end-to-end performance. Coupled with an aversion to costly upgrades, this conservative approach to traffic management has meant that despite proposals for dynamic, online traffic engineering [8,9], such solutions have remained both untested and undeployed. While interest in traffic engineering has steadily waned within the research community due to these perceived limitations, the demand for fine grained traffic management tools remains high, as demonstrated by the continued proliferation of tools such as deep-packet inspection in order to shape traffic explicitly. Such mechanisms illustrate the continued tussle to circumvent the opaqueness inherent to the Internet’s layered architecture. Networks frequently attempt to act on information contained within the transport or application layer in order to override resource allocation as performed by TCP. Conversely, the transport layer often makes incorrect assumptions due to the limited information contained within the network layer. With no explicit information on the path taken by each packet, TCP assumes a single path is maintained for the entire flow, and as such has grown to expect in-order delivery from the IP layer [10]. PREFLEX [11] is an architecture which addresses this information asymmetry by providing hosts with explicit knowledge of paths and networks with insight into the loss rate experienced by TCP. 
PREFLEX provides the means by which edge networks can assign flows to paths, but has not yet defined a method for such an assignment. In this paper we formulate an autonomous tuning mechanism for balancing traffic according to expected congestion rather than load.
2 PREFLEX Overview
PREFLEX (Path RE-Feedback with Loss EXposure) [11] is an architecture which redraws the boundaries between transport and network layers in an attempt to make both aware of each other's actions. PREFLEX relies on three components:
1. Hosts provide the network with an estimate of loss using LEX (Loss EXposure).
2. Edge networks provide hosts with information on which outgoing path to use through PREF (Path RE-Feedback).
3. A mechanism which assigns "flowlets" [12] to paths in order to balance loss.
The latter is the focus of this paper. In the following sections we will briefly review the first two components.
2.1 Loss Exposure
Loss exposure is a mechanism by which hosts signal transport semantics at the network layer by marking the IP header, in a similar fashion to re-ECN [13]. If a host does not support LEX, all traffic is unmarked and will appear to the network as legacy traffic. For LEX-capable traffic, there are three possible codepoints:

Table 1. LEX markings

  Codepoint   Meaning
  Not-LECT    Not Loss Exposure Capable Transport
  LECT        Loss Exposure Capable Transport
  LEx         Loss Experienced
  FNE         Feedback Not Established
The FNE codepoint is set by the host when feedback on the path has not yet been established, and therefore there is no congestion control loop. This applies to the first packet of a flow, such as TCP SYN packets, or the first packet after a long idle time, such as keep-alives. An important distinction is that FNE marks the beginning of a "flowlet" [12], rather than a flow. We define flowlets as a burst of packets which the sender does not wish to be reordered. As such, it is the end-host's decision how and when to divide traffic into flowlets. While a flow is a transport association between two endpoints, a flowlet is a sequence of packets which is expected to follow the same path. As an example, an IMAP connection may remain open several hours, but is composed of several distinct flowlets each time a client requests data from the server. Operating on flowlets rather than flows allows networks to balance traffic at a finer granularity and in a transport agnostic manner. The remaining two codepoints, LECT and LEx in table 1, are used to signal path loss. By default, traffic is marked as being Loss Exposure capable, and for every loss experienced by a host, the ensuing packet is marked with the Loss Experienced codepoint. In reliable transport protocols such as TCP this corresponds to marking every retransmitted packet accordingly, but it can also be applied to non-reliable transport protocols with some form of feedback on loss, such as DCCP [14].
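As an illustration, the marking rules above can be sketched as follows. This is our own sketch: the enum and function names are illustrative and not part of the LEX wire format.

```python
from enum import Enum

class LexCodepoint(Enum):
    NOT_LECT = 0  # legacy traffic, not Loss Exposure capable
    LECT = 1      # Loss Exposure Capable Transport (the default)
    LEX = 2       # Loss Experienced: packet following a detected loss
    FNE = 3       # Feedback Not Established: first packet of a flowlet

def mark_packet(first_of_flowlet: bool, follows_loss: bool) -> LexCodepoint:
    """Pick the LEX codepoint for an outgoing packet of a LEX-capable flow."""
    if first_of_flowlet:
        return LexCodepoint.FNE   # no congestion-control loop yet (e.g. TCP SYN)
    if follows_loss:
        return LexCodepoint.LEX   # e.g. a TCP retransmission
    return LexCodepoint.LECT

# A short flowlet: first packet, two data packets, one retransmission.
marks = [mark_packet(True, False), mark_packet(False, False),
         mark_packet(False, True), mark_packet(False, False)]
```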
2.2 Path Re-Feedback
With LEX, hosts provide information on both path loss and flowlet start at the network layer. We now focus on how edge networks specifically, which are naturally multi-homed, can reflect path diversity back to hosts. In PREF (Path RE-Feedback) the network selects a preferred outgoing path for each incoming FNE packet (fig. 1(a)). Upon receiving the first packet of a flowlet, the router performs a reverse lookup on the source address (step 1) and selects a path according to the perceived performance (2). The router then associates the chosen path identifier with the packet and forwards the packet toward the host (3).
The treatment of outbound traffic by PREFLEX is illustrated in figure 1(b). The host, having received indication of the preferred path, tags all traffic deemed sensitive to reordering with the given path identifier. As the PREFLEX aware router receives this marked traffic it updates statistics associated with the path, aggregating loss for each destination prefix (4). It then discards the path identifier (5) and forwards traffic along the appropriate path (6).
[Figure: two panels, (a) Inbound FNE packet and (b) Outbound traffic, illustrating steps 1-7 of the path re-feedback exchange.]
Fig. 1. PREFLEX architecture
The resulting architecture provides many benefits. The balancing is both transport agnostic and allows flow state to be kept to a minimum, requiring policing at the edges only if congestion accountability is required. Additionally, path selection is receiver driven, aligning the stakeholder who can decide when to issue FNE packets with the stakeholder who benefits the most. The key to path selection is the information provided by the sender (point 7), which allows the network to calculate the percentage of path loss. We assume for the entirety of this paper that each flow follows a single path and that a host does not override the path selected by its network, although neither is strictly necessary. The implications of both assumptions being broken and the incentives involved are briefly reviewed in [11].
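The per-prefix bookkeeping performed in step 4 might look roughly like the following sketch: a counter table keyed by destination prefix and path identifier, updated from the LEx marks carried by outbound packets. All names here are ours, for illustration only.

```python
from collections import defaultdict

class PathStats:
    """Per-destination-prefix byte counters, as a PREFLEX router might keep
    them (step 4): Ti = bytes sent on path i, Ri = bytes marked LEx."""
    def __init__(self):
        # (prefix, path_id) -> [transmitted_bytes, lex_marked_bytes]
        self.counters = defaultdict(lambda: [0, 0])

    def on_outbound(self, prefix, path_id, size, lex_marked):
        c = self.counters[(prefix, path_id)]
        c[0] += size
        if lex_marked:
            c[1] += size

    def loss_rate(self, prefix, path_id):
        sent, marked = self.counters[(prefix, path_id)]
        return marked / sent if sent else 0.0

stats = PathStats()
stats.on_outbound("10.0.0.0/8", 1, 1500, False)
stats.on_outbound("10.0.0.0/8", 1, 1500, True)
print(stats.loss_rate("10.0.0.0/8", 1))  # 0.5
```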
3 Balancing Congestion
While PREFLEX provides an architecture within which transport and network layers exchange relevant information, it does not yet define how networks select outgoing paths. In this section we present a solution for balancing traffic according to expected congestion which works in a wide variety of network conditions.

3.1 Model
Consider LEX-capable traffic, that is explicitly marked as either being retransmitted or non-retransmitted, from a single origin prefix to a single destination
prefix, with a number of possible paths and within a single time period. Let N be the number of paths (numbered 1, . . . , N). Let Ti be the number of bytes sent down path i for the previous time period. Let Ri be the number of bytes marked as retransmissions down path i for the previous time period. Let R = Σ_i Ri and T = Σ_i Ti. While Ri does not strictly represent the number of lost bytes, the ratio Ri/Ti should be a good approximation of the loss rate within period i. A starting assumption is that it is desirable to equalise the proportion of lost bytes on all interfaces – that is, make Ri/Ti equal for all i – and by doing so loss is balanced. The loss rate ρi = Ri/Ti is an unknown function of Ti and of Bi, the bandwidth of the link. It is, however, likely that the loss rate is increasing (or at least non-decreasing) with Ti. Similarly, the loss rate is decreasing (or at least non-increasing) with the unknown Bi. Assume then that whatever the true function, for a small region around the current values of Ti and Bi it is locally linear, ρi = ki Ti / Bi, where ki is an unknown constant. Substituting gives

    Bi / ki = Ti² / Ri.    (1)

Now consider the next time period. Use the dash notation for the same quantities in the next time period. In typical time periods E[T'] = E[T] (since, for example, if the next time period on average had more traffic, the overall amount of traffic would be growing – while this is true in the year-on-year setting, this effect is negligible on the time scale discussed here). Now ρ'i = R'i/T'i = ki T'i/Bi. Choose the T'i to make all R'i/T'i = C (hence all equal), where C is some unknown constant. Therefore T'i = C Bi/ki, but if this is still near our locally linear region then B'i/k'i ≈ Bi/ki, and Bi/ki is determined by (1); hence the predicted distribution is T'i = C Ti²/Ri. It is now necessary to calculate C by summing over i: T = C Σ_i Ti²/Ri, and hence C = T / Σ_i (Ti²/Ri). This gives the final answer:

    T'i = T · (Ti² / Ri) / Σ_j (Tj² / Rj).    (2)
There is, of course, a problem when Ri = 0. Because of the assumption that traffic is assigned in inverse proportion to loss rate, a path with no loss would naturally have all the traffic assigned to it. To correct this, we increment Ri for each path by one maximum segment size (MSS). This problem will be further discussed in subsequent sections.
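The split in (2), together with the MSS increment just described, can be sketched as follows (the function name and the choice of bytes as units are ours):

```python
def loss_equalising_split(T, R, mss=1460.0):
    """Predicted per-path traffic for the next period, per eq. (2):
    T'_i = T * (T_i^2 / R_i) / sum_j (T_j^2 / R_j).
    Each R_i is incremented by one MSS (in bytes) so that a loss-free
    path does not capture all of the traffic."""
    R = [r + mss for r in R]
    total = sum(T)
    weights = [t * t / r for t, r in zip(T, R)]
    wsum = sum(weights)
    return [total * w / wsum for w in weights]

# Two paths with equal load but 4x different loss: the lossier path
# receives proportionally less traffic next period.
print(loss_equalising_split([100.0, 100.0], [1.0, 4.0], mss=0.0))  # [160.0, 40.0]
```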
3.2 Understanding the Design Space
Because only local linearity is assumed then large adjustments of the traffic split are likely to cause problems. It is also useful to send a small amount of probe traffic down each route even if the loss rate is high [15]. In the absence of other information (for example when there is no loss) it is useful to assign traffic according to the current traffic throughput split. Therefore there are three tendencies to be accounted for, the loss-equalisation tendency from (2), the conservative tendency to keep the traffic split the same as the throughput split in
[Figure: four panels, (a) Equalisation, (b) Conservative, (c) Loss-driven and (d) Balanced; each plots throughput (Mb/s), loss (%) and flow split (%) over time for Path 1 and Path 2.]
Fig. 2. Simulation using PREFLEX to balance traffic over two paths. Each simulation demonstrates the dynamics of a single mode, except for (d) which balances between conservative and loss-driven components.
the previous period, and the equalisation tendency to keep the traffic equally split on all interfaces. Call the traffic split assigned to path i by each of these schemes T(E)i, T(C)i and T(L)i, where E, C and L stand for "equal", "conservative" and "loss-driven". Then our final distribution of traffic across all links is T'i = βE T(E)i + βC T(C)i + βL T(L)i, where the β• are user-set parameters in (0, 1) such that βE + βC + βL = 1. Now T(C)i = Ti and T(E)i = T/N, where N is the number of interfaces. This gives a first equation for the desired flow split fi, which represents the probability with which path i will be assigned to a new flow:

    fi = T'i/T = βE · (1/N) + βC · (Ti/T) + βL · (Ti²/Ri) / Σ_j (Tj²/Rj).    (3)
While (3) describes a very broad design space, we provide some intuition on how each component acts through a simple example, which is described in full in section 4.1. Consider a network balancing traffic to a given prefix between two paths with equal bottleneck capacity L1 = L2 = 120Mbps. For the duration of the experiment, a source sends traffic according to the same distribution. At t = 300s, cross-traffic is generated towards client C through L1. At t = 600s, a greater quantity of cross-traffic is generated towards client C through L2. The results for this example are shown in figure 2 in four different guises, and in each case we show the throughput, loss rate and the proportion of flows assigned to each path as computed by the PREFLEX balancer. We first investigate the effect of equalisation, which is necessary to ensure all paths are continually probed. Equalisation alone however leads to an inefficient use of the network if there is a mismatch in path capacity, as is clearly visible in figure 2(a) as congestion arises on either link. Such behaviour arises in traditional traffic engineering, which resorts to equalising traffic weighted by a local link's capacity. Where a bottleneck is remote and distinct however, such behaviour will lead traffic across all links to be roughly bound by the capacity of the slowest path, as can be seen between t = 300s and t = 600s where throughput over the first path is dragged down by congestion on the second path. Figures 2(b) and 2(c) on the other hand show distinctive behaviour when using a solely conservative or solely loss-driven approach. Predictably, for small values of loss the loss-driven approach overreacts and flaps between the two paths. While the net effect of these oscillations does not result in significant losses, a conservative approach lends itself more naturally to situations where loss is too small to provide a reliable indicator of path quality.
Once loss becomes significant however a conservative approach is unable to drive traffic in order to balance loss across both paths. More worryingly, a pure conservative approach demonstrates the wrong dynamics, as highlighted between t = 250s and t = 300s. Despite being saturated, the balancer continues to push more traffic towards path 1 while the second path remains under-utilized, dropping its throughput further.
Note that in figure 2 the conservative approach obtains higher throughput than its loss-driven counterpart once loss settles in. While this may seem advantageous, in reality we shall show that this behaviour is detrimental to the system as a whole and only by balancing loss can social welfare be optimized [15].

3.3 Balancing between Conservative and Loss-Driven
We wish to tune PREFLEX to balance between conservative and loss-driven modes for differing regimes of loss. Equalisation is required in some measure as every path must attract some traffic if it is not to fall out of use ([15], remark 2). If this is not the case, a path with relatively high loss will never be probed to determine whether it has improved. The remaining two components must be adjusted to be able to respond adequately to loss while not being overly sensitive to statistically insignificant fluctuations in loss. We define γ to replace both βC and βL in (3) and adjust between conservative and loss-driven modes:

    fi = βE · (1/N) + (1 − βE) · [ γ · (Ti/T) + (1 − γ) · (Ti²/Ri) / Σ_j (Tj²/Rj) ].    (4)

As a result, βE may now vary within [0, 1]. A simple but effective value for γ is obtained by defining a minimum average loss μmin below which we do not react to loss, and reacting increasingly as the average loss μ becomes more significant:

    γ = μmin/μ  if μ > μmin,    γ = 1  if μ ≤ μmin.    (5)
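Equations (4) and (5) can be sketched in code as follows, assuming per-path byte counts Ti and Ri from the previous period (with Ri already incremented by one MSS, so Ri > 0); the function names are ours:

```python
def gamma(mu, mu_min):
    """Eq. (5): fully conservative below mu_min, increasingly
    loss-driven as the average loss mu grows."""
    return 1.0 if mu <= mu_min else mu_min / mu

def flow_split(T, R, beta_e, mu, mu_min):
    """Eq. (4): blend of equalisation, conservative and loss-driven splits.
    Returns fi, the probability of assigning a new flow to each path."""
    n, total = len(T), sum(T)
    g = gamma(mu, mu_min)
    loss_w = [t * t / r for t, r in zip(T, R)]  # Ti^2/Ri terms of eq. (2)
    lw_sum = sum(loss_w)
    return [beta_e / n
            + (1 - beta_e) * (g * t / total + (1 - g) * w / lw_sum)
            for t, w in zip(T, loss_w)]
```

With negligible loss (μ ≤ μmin) the split reduces to the conservative term plus the small equalisation floor βE/N, matching the behaviour described above.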
This completes the PREFLEX balancer, as shown in figure 2(d). The evolution of γ for the example shown in figure 2(d) is shown in figure 3(left). The value of μmin was set to 0.005.
Fig. 3. Parameters γ (left) and τ (right) for the example in figure 2(d)
3.4 Tuning Update Interval
Another issue is how often to update the flow split considering the sparseness of loss. Assume that for a given prefix packet loss is measured over a given time period τ. It would be useful to tune this τ per prefix so that the loss estimate is "accurate". If τ is short then only a very small number of packets will be lost. On the other hand, if τ is long then the control system will be unable to react quickly to changes. The idea is to set τ sufficiently small that an accurate measure of loss can be obtained. It is useful, therefore, to have a rough estimate of the time period over which it is necessary to measure in order that estimates of loss are accurate to a given degree. Because this time period of measurement is per prefix, it must to some extent take account of how important a given measurement is to the system as a whole (for example, not slowing down measurements because one route with a tiny amount of traffic has an inaccurate measurement). This will be achieved with the concept of a weighted coefficient of variation. Let ti be the number of packets transmitted down path i in the time period τ and let li be the number of packets which were lost in this time period. (To prevent a divide-by-zero, set li = 1 if no packets are lost.) Let pi be the probability that a given packet is lost on path i and assume that packet loss is a Bernoulli process. An unbiased estimate of pi is p̂i = li/ti (it is important to what follows that p̂i is only an estimate of pi – by "chance" more or fewer packets may have been lost). If packet loss is Bernoulli then li has a binomial distribution and its variance σ² is given by ti pi(1 − pi). The coefficient of variation (CV) is a dimensionless measure given by the standard deviation over the mean, cv = σ/μ. Keeping the coefficient of variation within some bound δ is a measure of the amount by which an estimate is likely to vary from the true mean. (Note, however, that this is not technically a confidence interval.)

For the number of lost packets on a given path i the estimated CV is cv̂(i) = √(ti p̂i (1 − p̂i)) / (ti p̂i). Let ri be the rate of packet arrival per unit time on i, giving ti = ri τ. Define W, the CV weighted by transmitted packets over the prefix, as W = Σ_i (ti/t) cv̂(i), where t = Σ_i ti, and this expands as

    W = Σ_i (ti/t) √((1 − p̂i)/(ri τ)).

The "accuracy" of the measurement of pi is determined by the accuracy of li and hence, for the prefix as a whole, by the weighted CV W. The aim now is to pick the time period for the next measurement, τ', such that W' ≤ δ for some δ. Assuming that the loss rates and traffic rates will be the same in the next time period gives a good indication of how to set τ'. Therefore for the next time period

    δ ≥ W' = (1/t) Σ_i ti √((1 − p̂i)/(ri τ')).

This gives an estimated minimum time period to set for the next measurement. In order to get a weighted CV of packet loss (and hence loss rate) equal to or below δ, the time period τ' is bounded by

    τ' ≥ τ · [ (1/(δt)) Σ_i √(ti − li) ]².

This equation gives the smallest value to set the next measurement period to in order that the weighted coefficient of variation of the loss measurement is a given δ. While a number of simplifying assumptions have been made, such as modelling loss as Bernoulli, the time scale choice is not a critical system parameter so long as it provides "good enough" estimates of loss. The evolution of τ for the example in figure 2(d) is plotted in figure 3. As throughput decreases, the time between updates is inflated to adjust to the lower occurrence of loss events.
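The resulting update rule can be sketched as a single function returning the lower bound on the next interval τ'; the function name is ours:

```python
import math

def next_interval(tau, t_pkts, l_pkts, delta):
    """Smallest next measurement interval keeping the weighted CV of the
    loss estimate at or below delta:
    tau' = tau * ( (1/(delta*t)) * sum_i sqrt(t_i - l_i) )^2
    where t_i / l_i are packets transmitted / lost per path and t = sum t_i."""
    t = sum(t_pkts)
    s = sum(math.sqrt(ti - li) for ti, li in zip(t_pkts, l_pkts))
    return tau * (s / (delta * t)) ** 2

# Two paths, 1% loss each: with delta = 0.005 the interval must grow.
print(next_interval(1.0, [1000, 1000], [10, 10], 0.005))
```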
4 Performance Analysis
We evaluate PREFLEX through simulation in ns-3 [16]. Since PREFLEX balances traffic using loss rather than load, there is a need to emulate the end-to-end behaviour of traffic. This proves more challenging than analysis of existing traffic engineering proposals, which typically only focus on adjusting load, since we wish to verify the impact of PREFLEX on end-user metrics.

4.1 Methodology
For all simulations we will use the topology displayed in figure 4. The topology links a client domain C to a server domain S through N paths with equal bottlenecks Li, and total bandwidth B = Σ_i Li. While a domain is represented as a single entity in figure 4, each domain is composed of a traffic generator connected to a router. Client C generates G simultaneous HTTP-like requests (or "gets") from S according to a specified distribution, described at the end of this section. As traffic flows from S to C, the router within S is responsible for balancing traffic over all available paths.
Fig. 4. Simulation topology
Across simulations, as the number of paths increases, total bandwidth B and the number of simultaneous requests G are fixed. In this manner we wish to analyze how PREFLEX balances traffic as the granularity with which it can split traffic becomes coarser. Since we are interested in evaluating how PREFLEX shifts traffic in response to loss, we introduce additional "dummy" servers Di which are connected to C through a single path. We partition the total simulation time T into N + 2 intervals starting at si, in which s0 and sN+1 have no traffic to Di. Starting at time si, client C generates gi requests to Di according to the same distribution as used for server S. All requests to Di end at time sN+1. Equation (6) sets the start time si for requests to Di as a function of total simulation time T and number of paths N. Likewise, equation (7) sets the number of simultaneous requests gi to Di as a function of G, the total number of requests to S, and N:

    si = T · i/(N + 2),    (6)

    θi = (1/(N + 1 − i)) / Σ_j (1/(N + 1 − j)),    gi = G θi.    (7)
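A sketch of this schedule in code, assuming (as the text implies by Σ gi = G) that the θi are normalised over the N dummy servers; the function name is ours:

```python
def cross_traffic_schedule(T, G, N):
    """Start times per eq. (6) and request counts per eq. (7) for the
    dummy servers D_1..D_N: s_i = T*i/(N+2), g_i = G*theta_i with
    theta_i proportional to 1/(N+1-i), normalised so sum(g_i) = G."""
    weights = [1.0 / (N + 1 - i) for i in range(1, N + 1)]
    wsum = sum(weights)
    starts = [T * i / (N + 2) for i in range(1, N + 1)]
    gets = [G * w / wsum for w in weights]
    return starts, gets

# N = 2 reproduces the example of section 3.2: cross-traffic starts at
# t = 300s and t = 600s, with more requests in the second burst.
print(cross_traffic_schedule(1200.0, 240.0, 2))  # ([300.0, 600.0], [80.0, 160.0])
```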
Figure 5 illustrates the number of simultaneous gets from C to Di for N = 2 (used in the example shown in figure 2) and N = 4. Generating cross-traffic in this manner serves two purposes. Firstly, Σ_i gi = G, so independently of the number of concurrent paths, the maximum load in the system is 2G. However, as the number of paths increases, the fluctuation in load on each path becomes smaller, and so we stress the sensitivity with which PREFLEX balances traffic. Secondly, the number of requests for each Di over time is the same. Over timescale T, equalisation appears to be an acceptable strategy; however, within each interval we will show it fails to achieve consistent behaviour. This is a fundamental limitation of offline traffic engineering, which is calculated over very long timescales and is unable to adapt as traffic routinely shifts.
Fig. 5. Number of requests from C to cross traffic servers Di for N = 2 (left) and N = 4 (right)
We now specify the settings common to all simulations, including those previously shown in figure 2. Total simulation time T is set to 1200 seconds, while total bandwidth B is fixed at 240Mbps. The number of requests G sent from C
to S is set to 240. Upon completion, a request is respawned after an idle period following an exponential distribution with a 15s mean. Transfer size follows a Weibull distribution with an average value of 2MB. These values attempt to reflect traffic to a single prefix with a file size that mimics the small but bursty nature of web traffic, which does not lend itself to being balanced by the end-host. PREFLEX is configured with βE = 0.05, μmin = 0.01/N and δ = 0.005.
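The workload just described can be sampled roughly as follows. The Weibull shape parameter is not given in the text, so the value below (k = 0.8) is an assumption; the scale is derived from the stated 2MB mean via mean = scale · Γ(1 + 1/k).

```python
import math
import random

def workload(n, mean_idle=15.0, mean_size=2e6, shape=0.8, seed=1):
    """Sample n idle times (exponential, 15 s mean) and n transfer sizes
    (Weibull, 2 MB mean; shape k is an assumed parameter)."""
    rng = random.Random(seed)
    scale = mean_size / math.gamma(1 + 1 / shape)
    idles = [rng.expovariate(1 / mean_idle) for _ in range(n)]
    sizes = [rng.weibullvariate(scale, shape) for _ in range(n)]
    return idles, sizes
```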
4.2 Varying Bottleneck Distribution
We start by examining the case where all bottlenecks share the same bandwidth, Li = B/N, and compare PREFLEX to equalisation, which mimics traffic engineering techniques based on hashing flow tuples and assigning them to a path. The goodput, calculated as the total data transferred to client C by flows completed within T, is shown for both equalisation and PREFLEX methods in figure 6. While both saturate most available bandwidth, equalisation leads to a disproportionate distribution of goodput amongst competing traffic. As loss is not equalised over all paths, the amount of goodput achieved by servers Di differs despite demand being similar. Equalisation, even when weighted according to local link capacity, is often prone to remote bottlenecks. We investigate the effect of differing bottlenecks by repeating previous simulations with the same total bandwidth B, but with Li set proportionally to B in a similar manner to (7), that is Li = θi B. Figure 6 shows the goodput as a proportion of total link bandwidth for the case where all links have equal bandwidth. We vary the number of links N, and for each case compare equalisation (as illustrated in figure 2(a)) and PREFLEX as the balancing methods used. The bulk of goodput originates from server S, which is the only domain to be connected to all links. If traffic is correctly balanced, we expect to see servers D1−N generate the same amount of goodput.
Fig. 6. Goodput relative to B achieved by each server for equal capacity links
Fig. 7. Goodput relative to B achieved by each server for different capacity links
In this scenario, equalisation can be seen as the optimal static TE solution, yet both approaches show similar performance. With no knowledge of topology, link bandwidth or expected traffic matrices, PREFLEX is able to adequately mimic the performance of the static TE solution for the case where such an approach is best suited. Where bottleneck bandwidth is unequal however, equalisation proves inadequate. Once again comparing goodput (figure 7), we highlight two significant shortcomings of equalisation which PREFLEX overcomes. Firstly, goodput for S drops as N increases. Unable to realize it is overloading a path, equalisation is reduced to sending traffic over each link at approximately the same rate as the most congested link. In contrast, PREFLEX detects congestion and adapts accordingly. Secondly, the incorrect distribution of traffic due to equalisation in S distorts the goodput of other servers. While in PREFLEX goodput from D1−N is perfectly balanced, with equalisation traffic crossing the most congested links is directly affected by another domain's inability to distribute its traffic appropriately. It may seem unfair to judge equalisation for cases where there is a mismatch in link capacity; however, this mismatch between link weight and path capacity arises regularly as operators continue to adjust traffic engineering according to local conditions, with little thought spared for the impact this may have further downstream.
Fig. 8. Mean average flow completion time for equal and differing bottleneck links
This impact is in turn perceived by users, who experience longer flow completion times, as shown in figure 8. In the equal bandwidth case the flow completion time is similar for both balancers. Where bandwidth differs however, PREFLEX outperforms equalisation and maintains a stable performance when balancing over all six paths. This shows that the algorithm scales well as the number of available paths increases.
5 Conclusions and Further Work
In this paper we have introduced congestion balancing using PREFLEX. PREFLEX uses packet marking to estimate and balance loss across multiple paths. It requires no per-flow state or significant changes at routers, is computationally simple to implement, and does not cause packet reordering. PREFLEX has been implemented and evaluated in ns-3 for dynamic traffic scenarios, where it balances traffic using different strategies which are weighted according to the network conditions it detects. In conditions where loss is deemed significant, PREFLEX balances congestion between paths. In the absence of sustained loss, PREFLEX assigns traffic based on current throughput. By balancing between these strategies PREFLEX can operate in a variety of dynamic traffic settings and has been shown to perform as well as the ideal static traffic assignment when bandwidths are equal. Where bandwidth asymmetry arises, PREFLEX successfully balances loss with no significant degradation of performance as both the number of paths and the inherent complexity of balancing increase. By readjusting traffic according to end-to-end metrics, PREFLEX is unique in proposing congestion, rather than just load, as an essential metric for traffic engineering and signals a novel approach in bridging the divide between traffic engineering and congestion control.
References 1. Fortz, B., Thorup, M.: Optimizing OSPF/IS-IS weights in a changing world. IEEE Journal on Selected Areas in Communications 20(4), 756–767 (2002), doi:10.1109/JSAC.2002.1003042 2. Wang, Y., Wang, Z.: Explicit routing algorithms for internet traffic engineering. In: Proceedings of the Eighth International Conference on Computer Communications and Networks, pp. 582–588 (1999) 3. Feamster, N., Borkenhagen, J., Rexford, J.: Guidelines for interdomain traffic engineering. SIGCOMM Computer Communication Review 33(5) (2003) 4. Gao, L., Rexford, J.: Stable internet routing without global coordination. IEEE/ACM Transactions on Networking 9(6), 681–692 (2001) 5. Quoitin, B., Pelsser, C., Swinnen, L., Bonaventure, O., Uhlig, S.: Interdomain traffic engineering with BGP. Communications Magazine 41(5), 122–128 (2003) 6. Wang, N., Ho, K., Pavlou, G., Howarth, M.: An overview of routing optimization for internet traffic engineering. IEEE Communications Surveys & Tutorials 10(1), 36–56 (2008)
7. Wischik, D., Handley, M., Braun, M.: The resource pooling principle. ACM SIGCOMM Computer Communication Review 38(5), 47–52 (2008) 8. Elwalid, A., Jin, C., Low, S., Widjaja, I.: MATE: multipath adaptive traffic engineering. Computer Networks 40(6), 695–709 (2002) 9. Kandula, S., Katabi, D., Davie, B., Charny, A.: Walking the tightrope: Responsive yet stable traffic engineering. ACM SIGCOMM Computer Communication Review 35(4), 264 (2005) 10. Thaler, D.: Evolution of the IP model. Internet Draft draft-iab-ip-model-evolution-02.txt, IETF, Work in Progress (2010) 11. Araújo, J.T., Rio, M., Pavlou, G.: A mutualistic resource pooling architecture. In: Third Workshop on Re-Architecting the Internet (Re-Arch), Philadelphia (2010) 12. Sinha, S., Kandula, S., Katabi, D.: Harnessing TCP's burstiness with flowlet switching. In: 3rd ACM SIGCOMM Workshop on Hot Topics in Networks, HotNets (2004) 13. Briscoe, B., Jacquet, A., Cairano-Gilfedder, C.D., Salvatori, A., Soppera, A., Koyabe, M.: Policing congestion response in an internetwork using re-feedback. ACM SIGCOMM Computer Communication Review 35(4), 288 (2005) 14. Kohler, E., Handley, M., Floyd, S.: Designing DCCP: Congestion control without reliability. ACM SIGCOMM Computer Communication Review 36(4), 38 (2006) 15. Kelly, F., Voice, T.: Stability of end-to-end algorithms for joint routing and rate control. ACM SIGCOMM Computer Communication Review 35(2), 12 (2005) 16. Network Simulator 3, http://www.nsnam.org
EFD: An Efficient Low-Overhead Scheduler

Jinbang Chen1, Martin Heusse2, and Guillaume Urvoy-Keller3

1 Eurecom, Sophia-Antipolis, France
[email protected]
2 Grenoble-INP / UJF-Grenoble 1 / UPMF-Grenoble 2 / CNRS, LIG UMR 5217, Grenoble, France
[email protected]
3 Laboratoire I3S CNRS, Université de Nice, Sophia Antipolis, France
[email protected]
Abstract. Size-based scheduling methods receive a lot of attention as they can greatly enhance the responsiveness perceived by users. In effect, they give higher priority to small interactive flows, which are the important ones for a good user experience. In this paper, we propose a new packet scheduling method, Early Flow Discard (EFD), which belongs to the family of Multi-Level Processor Sharing policies. Compared to earlier proposals, the key feature of EFD is the way flow bookkeeping is performed: flow entries are removed from the flow table as soon as no corresponding packet remains in the queue. In this way, the active flow table remains small at all times. EFD is not limited to a scheduling policy: it also incorporates a buffer management policy. We show through extensive simulations that EFD retains the most desirable property of more resource-intensive size-based methods, namely low response time for short flows, while limiting lock-outs of large flows and effectively protecting low/medium rate multimedia transfers.

Keywords: size-based scheduling, performance, LAS, Run2C.
J. Domingo-Pascual et al. (Eds.): NETWORKING 2011, Part II, LNCS 6641, pp. 150–163, 2011. © IFIP International Federation for Information Processing 2011

1 Introduction

Size-based scheduling has received a lot of attention from the research community, with applications to Web servers [15], Internet traffic [3,14,16] or 3G networks [2,10]. The key idea is to favor short flows at the expense of long ones, because short flows are in general related to interactive applications like Email, Web browsing or DNS requests/responses, unlike long flows, which represent background traffic. Such a strategy pays off as long as long flows are not completely starved, and this generally holds without further intervention for Internet traffic, where short flows represent a small portion of the load and thus cannot monopolize the bandwidth. Despite their unique features, size-based scheduling policies have not yet moved out of the lab. We believe the main reasons behind this lack of adoption are the following general concerns about size-based scheduling approaches:

– Size-based scheduling policies are in essence stateful: each flow needs to be tracked individually. Even though one can argue that those policies should be deployed at bottleneck links, which are presumably at the edge of the network – hence at
a location where the number of concurrent flows is moderate – the common belief is that stateful mechanisms are to be avoided in the first place.
– Size-based scheduling policies are considered to overly penalize long flows. Despite all its drawbacks, the legacy scheduling/buffer management policy, FIFO/drop tail, does not discriminate against long flows, while size-based scheduling solutions tend to affect both the mean response time of flows and its variance, as long flows might lock each other out.
– As their name indicates, size-based scheduling policies consider a single dimension of a flow, namely its accumulated size. Still, persistent low rate transfers often convey key traffic, e.g., voice over IP conversations. As a result, it seems natural to account both for the rate and for the accumulated amount of bytes of each flow.

A number of works partially address the aforementioned shortcomings of size-based scheduling policies. However, to the best of our knowledge, none of them fulfills all of these objectives simultaneously. This paper presents a new scheduling policy, EFD (Early Flow Discard), which aims to fulfill the following objectives: (i) low response time for small flows; (ii) low bookkeeping cost, i.e., the number of flows tracked at any given time instant remains consistently low; (iii) differentiating flows based on volume but also on rate; (iv) avoiding lock-outs. EFD manages the physical queue of an interface (at the IP level) as a set of two virtual queues corresponding to two levels of priority: the high priority queue first and the low priority queue at the tail of the buffer. Formally, EFD belongs to the family of Multi-Level Processor Sharing policies (see Section 2) and is effectively a PS+PS scheduling policy. The key feature of EFD is the way flow bookkeeping is performed. In EFD, we keep an active record only for flows that have at least one packet in the queue.
This simple approach fulfills the entire list of objectives above. Specifically, in EFD the active flow table size is bounded to a low value. Also, although EFD has a limited memory footprint, it can discriminate against bursty and high rate flows. EFD is not limited to a scheduling policy: it also incorporates a buffer management policy, where the packet with the lowest priority gets discarded when the queue is full, as opposed to drop tail, which blindly discards packets upon arrival. This mechanism is similar to the one used in previous works [13,4]. Section 2 gives an overview of the related works mentioned above. Section 3 presents the proposed scheduling scheme. The simulation environment, including network setup, network topology and workload, appears in Section 4. We then use simulations to evaluate EFD's performance and compare it with other schedulers in Section 5. Finally, we conclude the paper in Section 6.
2 Related Work

Classically, size-based scheduling policies are divided into blind and non-blind scheduling policies. A blind size-based scheduling policy is not aware of the job size1 while a non-blind one is. Non-blind scheduling policies are applicable to servers [15] where the job

1 A job is a generic entity in queueing theory. In the context of this work, a job corresponds to a flow.
size is related to the size of the content to transfer. A typical example of a non-blind policy is Shortest Remaining Processing Time (SRPT), which is optimal among all scheduling policies in the sense that it minimizes the average response time. For network appliances (routers, access points, etc.), the job size, i.e., the total number of bytes to transfer, is not known in advance. Several blind size-based scheduling policies have been proposed. The Least Attained Service (LAS) policy [13] bases its scheduling decision on the amount of service received so far by a flow. LAS is known to be optimal if the flow size distribution has a decreasing hazard rate (DHR), as it becomes, in this context, a special case of the optimal Gittins policy [5]. Some representatives of the family of Multi-Level Processor Sharing (MLPS) scheduling policies [8] have also been proposed to favor short flows. An MLPS policy consists of several levels corresponding to different amounts of attained service, with possibly a different scheduling policy at each level. In [3], Run2C, a specific case of MLPS policy, namely PS+PS, is proposed and contrasted with LAS. With Run2C, short jobs, defined as jobs shorter than a specific threshold, are serviced with the highest priority while long jobs are serviced in a background PS queue. Run2C features three key characteristics: (i) as (medium and) long jobs share a PS queue, they are less penalized than under LAS; (ii) it is proved analytically in [3] that an M/G/1/PS+PS queue offers a smaller average response time than an M/G/1/PS queue, which is the classical model of a network appliance featuring a FIFO scheduling policy and shared by homogeneous TCP transfers; (iii) Run2C avoids the lock-out phenomenon observed under LAS [7], where a long flow might be blocked for a large amount of time by another long flow. Run2C and LAS share a number of drawbacks. Flow bookkeeping is complex.
LAS requires keeping one state per flow. Run2C needs to check, for each incoming packet, whether it belongs to a short or to a long flow. The latter is achieved in [3] thanks to a modification of the TCP protocol that encodes, in the TCP sequence number, the actual number of bytes sent by the flow so far. Such an approach, which requires a global modification of all end hosts, is questionable2. Moreover, both LAS and Run2C classify flows based on the accumulated number of bytes they have sent, without taking the flow rate into account. Some approaches propose to detect long flows by inserting flows into the table probabilistically [4,12,9]. The key idea is to perform a simple random test (with a low probability of success) upon packet arrival to decide whether the corresponding flow should be inserted into the table. As long flows generate many packets, it is unlikely to miss them, while many short flows simply go unnoticed. These approaches differ in the way they trade the false positive rate against the speed of detection of a long flow. So far, a single work addresses the problem of accounting for rates in size-based scheduling [7]. It is a variant of LAS, Least Attained Recent Service (LARS), where the amount of bytes sent by each flow decays with time according to a fading factor β. LARS is able to handle differently two flows that have sent a similar amount of bytes but at different rates, and it also limits the lock-out duration of one long flow by another to a maximum tunable value.
2 Other works aim at favoring short flows by marking the packets at the edge of the network so as to relieve the scheduler from flow bookkeeping [11]. However, the deployment of DiffServ is not envisaged in the near future at the Internet scale.
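To make the rate-awareness of LARS concrete, the following is a minimal sketch of a fading attained-service counter. The exponential-decay form and the β value are our illustrative assumptions, not the exact definition from [7]:

```python
class LarsCounter:
    """Attained-service counter that fades over time (LARS-style sketch).

    Assumption: exponential decay with factor beta per second; the real
    LARS definition in [7] may differ in detail.
    """
    def __init__(self, beta=0.9):
        self.beta = beta      # fading factor per second, 0 < beta < 1
        self.bytes = 0.0      # decayed volume of attained service
        self.last = 0.0       # time of the last update, in seconds

    def on_packet(self, now, size):
        # Decay the counter over the idle interval, then add the new packet.
        self.bytes *= self.beta ** (now - self.last)
        self.bytes += size
        self.last = now
        return self.bytes     # lower value => higher priority under LARS

# Two flows that sent the same volume at different rates end up with
# different counters: the slow flow looks "smaller" and keeps priority.
fast, slow = LarsCounter(), LarsCounter()
for i in range(10):
    fast.on_packet(0.1 * i, 1500)   # 10 packets within 1 second
    slow.on_packet(10.0 * i, 1500)  # 10 packets spread over 90 seconds
assert slow.bytes < fast.bytes
```

This is what lets LARS bound lock-out durations: a flow that pauses sees its counter fade back toward zero.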
3 Early Flow Discard

In this section, we describe how EFD manages space and time priority. EFD belongs to the family of Multi-Level Processor Sharing scheduling policies. EFD features two queues; the low priority queue is served only if the high priority queue is empty. Both queues are drained in a FIFO manner at the packet level (which is in general modeled as a PS queue at the flow level). In terms of implementation, a single physical queue for packet storage is divided into two virtual queues: the first part of the physical queue is dedicated to the virtual high priority queue while the second part is the low priority queue. A pointer indicates the position of the last packet of the virtual high priority queue. This idea is similar to the one proposed in the Cross-Protect mechanism [9]. We now turn our attention to flow management in EFD and the enqueuing and dequeuing operations. We eventually discuss the spatial policy used when the physical queue gets full.

3.1 Flow Management

EFD maintains a table of active flows, a flow being defined here as the set of packets that share a common identity, consisting of a 5-tuple: source and destination addresses, source and destination ports, and protocol number. A flow remains in the table as long as it has one packet in the buffer and is discarded when its last packet leaves. Consequently, a TCP connection (or UDP transfer) may be split over time into several fragments handled independently of each other by the scheduler. Note that unlike most scheduling mechanisms that keep per-flow state, EFD does not need any garbage collection mechanism to clean its flow table: this happens automatically upon departure of the last packet of the flow. A flow entry keeps track of several attributes, including the flow identity, a flow size counter, and the number of packets in the queue. Packet enqueuing. For each incoming packet, a lookup is performed in the flow table of EFD.
A flow entry is created if the lookup fails, and the packet is put at the end of the high priority queue. Otherwise, the flow size counter of the corresponding flow entry is compared to a preset threshold th. If the flow size counter exceeds th, the packet is put at the end of the low priority queue; otherwise it is inserted at the end of the high priority queue. The purpose of th is to favor the start of each flow. In our simulations, we use a th of 20 packets (up to 30 Kbytes for packets of 1500 bytes each). If a connection is broken into several fragments, the scheduler handles each fragment as a new flow and assigns the start (within threshold th) of each fragment a high priority, by directing all packets making up the start of the fragment to the high priority queue. We believe that this makes sense, as it happens only if the connection has not been active for a significant time – it has not been backlogged for a while – and can thus be considered fresh. In practice, several phenomena can break a connection into many fragments. For instance, during connection establishment, the TCP slow start algorithm limits the number of packets in flight so that the connection does not continuously occupy the buffer. This is however not a problem, as those flows are smaller than th and thus the start of the TCP
Fig. 1. (a) Network topology; (b) average number of fragments per connection vs. connection size in MSS, workload of 8 Mbit/s
transfer will receive a high priority. If the flow lasts longer and is effectively able to use its share of the capacity, the connection will eventually occupy the buffer without interruption and therefore stay in the flow table. Figure 1(b) illustrates such a scenario (Section 4 details the experimental setup). It is apparent that, as the connection size increases, the number of fragments tends to reach a limit, so that, for the longest connections, a small number of fragments correspond to many packets. Packet dequeuing. When a packet leaves the queue or gets dropped, the number of queued packets of the corresponding flow entry is decreased. The flow entry stays in the table as long as one corresponding packet is in the queue, so the flow table size is bounded by the physical queue size in packets3. Indeed, in the worst case, there are as many entries as distinct flows in the physical queue, each with one packet. This policy ensures that the flow table remains small. Also, if a flow sends at a high rate for a short period of time, its packets will be directed to the low priority queue only for the limited period during which the flow is backlogged: EFD is sensitive to flow burstiness.

3.2 Buffer Management

When a packet arrives at a full queue, EFD first inserts the arriving packet at its appropriate position in the queue, and then drops the packet at the end of the (physical) queue. This buffer policy implicitly gives space priority to short4 flows, which differs from the traditional drop-tail buffer management policy. This approach is similar to the Knock-Out mechanism of [4] and the buffer management proposed to
3 In most, if not all, active equipment – routers, access points – queues are counted in packets, not in bytes.
4 Following the discussion in the paragraph above, a short flow is a part of a connection whose rate is moderate.
LAS in [13]. As large flows in the Internet are mostly TCP flows, we can expect them to recover from a loss event with a fast retransmit, unlike short flows, which might time out.
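The mechanisms of Section 3 can be sketched in a few dozen lines. The code below is our illustrative reading of the paper, not the authors' implementation: one physical queue holds two virtual queues separated by a split pointer, the flow table maps a flow identifier to a size counter and a count of queued packets, and an entry disappears as soon as its last packet leaves.

```python
from collections import namedtuple

Pkt = namedtuple("Pkt", "flow size")

class EFDQueue:
    """Sketch of EFD's flow and buffer management (assumptions noted above).

    Indices [0, split) of the physical queue form the high-priority virtual
    queue; [split, len) form the low-priority virtual queue.
    """
    def __init__(self, capacity=300, th=20):
        self.capacity = capacity        # physical queue size, in packets
        self.th = th                    # priority threshold, in packets
        self.q = []                     # physical packet queue
        self.split = 0                  # boundary of the high-priority queue
        self.flows = {}                 # 5-tuple -> [size_counter, pkts_queued]

    def enqueue(self, pkt):
        entry = self.flows.setdefault(pkt.flow, [0, 0])
        entry[0] += 1                   # accumulated size of this fragment
        entry[1] += 1                   # packets of this flow in the queue
        if entry[0] <= self.th:         # start of a flow (or of a fragment)
            self.q.insert(self.split, pkt)
            self.split += 1
        else:
            self.q.append(pkt)          # tail of the low-priority queue
        if len(self.q) > self.capacity:
            self._forget(self.q.pop())  # buffer full: drop the physical tail,
            self.split = min(self.split, len(self.q))  # i.e. lowest priority

    def dequeue(self):
        if not self.q:
            return None
        pkt = self.q.pop(0)             # head of the high-priority queue, if any
        self.split = max(self.split - 1, 0)
        self._forget(pkt)
        return pkt

    def _forget(self, pkt):
        entry = self.flows[pkt.flow]
        entry[1] -= 1
        if entry[1] == 0:               # last queued packet left: the flow
            del self.flows[pkt.flow]    # entry vanishes, no garbage collection

# A long flow "A" (4 packets) then a short flow "B": B overtakes the
# low-priority tail of A, and the flow table empties once drained.
q = EFDQueue(capacity=10, th=2)
for _ in range(4):
    q.enqueue(Pkt("A", 1500))
q.enqueue(Pkt("B", 1500))
assert [q.dequeue().flow for _ in range(5)] == ["A", "A", "B", "A", "A"]
assert q.flows == {}
```

Note how counting the flow size per fragment (the counter is created afresh whenever the flow re-enters the table) is what makes the scheme sensitive to rate and burstiness, not only to accumulated volume.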
4 Performance Evaluation Setup

In this section, we present the network setup – network topology and workload – used to evaluate the performance of EFD and to compare it with other scheduling policies. All simulations are done using QualNet [1].

4.1 Network Topology

We evaluate the performance of EFD and compare it to other scheduling policies for the case of a single bottleneck network, using the classical dumbbell topology depicted in Fig. 1(a). A group of senders (nodes 1 to 5) is connected to a router (node 6) by 100 Mbps links and a group of receivers (nodes 8 to 12) is connected to another router (node 7) by 100 Mbps links. The two aggregation routers are connected to each other by a 10 Mbps link. All links have a 1 ms propagation delay. All nodes use FIFO queues, except the bottleneck node, which uses one of the four scheduling policies compared in this work: FIFO, LAS, Run2C or EFD. The bottleneck buffer has a finite size of 300 packets.

4.2 Workload Generation

Data transfer requests arrive according to a Poisson process; the server and the client are picked at random and the size of the requested content follows a bounded Zipf distribution, a discrete analog of a continuous bounded Pareto distribution. Transfers are performed over TCP or UDP depending on the simulation. In all cases, the global load is controlled by tuning the arrival rate of requests. For each simulation setup, we consider an underload and an overload regime, corresponding respectively to workloads of 8 and 15 Mb/s (80% and 150% of the bottleneck capacity). For TCP simulations, we use the GENERIC-FTP model of QualNet, which corresponds to a unidirectional data transfer. For UDP transfers, we use a CBR application model where one controls the inter-packet arrival time, which makes it possible to control the exact rate at which packets are sent to the bottleneck.
In both TCP and UDP cases, IP packets have a size of 1500 bytes.
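The workload model above can be sketched as follows. The sampler and arrival generator are illustrative; the Zipf exponent and size bound are our assumptions, since the paper does not state them:

```python
import bisect
import random

class BoundedZipf:
    """Flow sizes in [1, n_max] with P(k) proportional to k**(-alpha):
    a bounded Zipf distribution, discrete analog of a bounded Pareto."""
    def __init__(self, n_max, alpha):
        cum, acc = [], 0.0
        for k in range(1, n_max + 1):
            acc += k ** -alpha
            cum.append(acc)          # cumulative (unnormalized) weights
        self.cum, self.total = cum, acc

    def sample(self, rng):
        # Inverse-CDF sampling via binary search over the cumulative weights.
        return bisect.bisect_left(self.cum, rng.random() * self.total) + 1

def request_arrivals(load_bps, mean_flow_bytes, horizon_s, rng):
    """Poisson request arrival times tuned so the offered load is load_bps."""
    rate = load_bps / (8.0 * mean_flow_bytes)   # requests per second
    t, times = 0.0, []
    while True:
        t += rng.expovariate(rate)
        if t > horizon_s:
            return times
        times.append(t)

rng = random.Random(0)
zipf = BoundedZipf(10_000, 1.2)              # sizes in MSS; parameters assumed
sizes = [zipf.sample(rng) for _ in range(1000)]
```

Tuning the request rate to `load / (8 * mean_size)` is what lets the same size distribution produce the 80% and 150% load regimes.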
5 Performance Evaluation In this section, we compare the performance of EFD to other scheduling policies. Our objective is to illustrate the ability of EFD to fulfill the 4 objectives listed in the introduction, namely (i) low bookkeeping cost, (ii) low response time to small flows, (iii) avoiding lock-outs, (iv) protecting long lasting delay sensitive flows.
To illustrate the first three items, we consider a TCP workload with homogeneous transfers, i.e., transfers that take place on paths having similar characteristics. For the last item – protecting long-lived delay-sensitive flows – we add a UDP workload to the TCP workload in the form of CBR traffic, in order to highlight the behavior of each scheduler in the presence of long-lasting delay-sensitive flows.

5.1 Overhead of Flow State Keeping

The approaches to maintaining the flow table in the size-based scheduling policies proposed so far can be categorized as follows:

– Full flow table approach, as in LAS [13]. An argument in favor of keeping one state per active flow is that the number of flows to handle remains moderate, as such a scheduling policy is expected to be implemented at the edge of the Internet.
– No flow table approach: an external mechanism marks the packets or the information is implicit (coded in the SEQ number in Run2C) [3,11].
– Probabilistic approaches: a test is performed at each packet arrival for flows that have not already been incorporated into the flow table [4,9,12]. The test is calibrated in such a way that only long flows should end up in the flow table. Still, false positives are possible. Several options have been envisaged to combat this phenomenon, especially a re-testing approach [12] or an approach where the flows in the flow table are only considered long flows once they have generated more than a certain amount of packets/bytes after their initial insertion [4].
– The EFD deterministic approach: the EFD approach is fully deterministic, as flow entries are removed from the flow table once they have no more packets in the queue.

In this section, we compare all the approaches presented, except the "No flow table approach", for our TCP workload scenario (see Section 4.2). We consider one representative of each family: LAS, X-Protect and EFD.
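The false-positive weakness of the probabilistic approach can be anticipated with a back-of-envelope calculation. The probability p and the flow sizes below are illustrative values, not taken from [4,9,12] or from our simulations:

```python
# With per-packet insertion probability p, a flow of n packets enters the
# flow table with probability 1 - (1 - p)**n.
p = 0.01                           # illustrative test probability
short_n, long_n = 10, 1000         # packets in a short / long flow

p_short = 1 - (1 - p) ** short_n   # a short flow slips into the table
p_long = 1 - (1 - p) ** long_n     # a long flow is detected

assert p_long > 0.9999             # long flows are essentially never missed
assert 0.09 < p_short < 0.10       # ...but roughly 10% of short flows get in

# Since short flows vastly outnumber long ones, false positives dominate:
false_positives = 50_000 * p_short   # about 4,800 spurious entries over a run
```

This is the race condition discussed below: however small p is, the sheer number of short flows keeps feeding spurious entries into the table.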
We term X-Protect a Multi-Level Processor Sharing scheduling policy that maintains two queues, similarly to Run2C, but uses the probabilistic mechanism proposed in [9] to track long flows5. As for the actual scheduling of packets, X-Protect mimics Run2C based on the information it possesses. If a packet does not belong to a flow in the flow table and does not pass the test, it is put in the high priority queue. If it belongs to a flow in the flow table, it is put either in the high priority queue or in the low priority queue, depending on the amount of bytes sent by the flow. We use a threshold of 30 KB, similar to the one used for EFD. The evolution of the flow table size over time for loads of 8 Mbit/s (underload) and 15 Mbit/s (overload) is shown in Fig. 2. For LAS and X-Protect, the flow table is visited every 5 seconds and the flows that have been inactive for 30 seconds are removed. We observe that X-Protect roughly halves the number of tracked flows compared to LAS. By contrast, EFD reduces it by one order of magnitude. The reason why X-Protect performs disappointingly is the race condition between the flow size distribution and the probabilistic detection mechanism. Indeed, even though a low probability, say 1%, is used to test whether a flow is a long one, there exist so many short
5 Note that this mechanism is proposed in [9] to perform admission control, not scheduling.
Fig. 2. Evolution of the flow table size over time under LAS, X-Protect and EFD: (a) workload of 8 Mbit/s (underload); (b) workload of 15 Mbit/s (overload)

Fig. 3. Histogram of the flow table size under LAS, X-Protect and EFD: (a) workload of 8 Mbit/s (underload); (b) workload of 15 Mbit/s (overload)
flows that the number of false positives becomes quite large, which prevents the flow table from being significantly smaller than under LAS. The histograms in Fig. 3 confirm the good performance of EFD in underload and also in overload, as EFD keeps the flow table size to a few tens of entries at most. Note that this is clearly smaller than the actual queue size (300 packets) that constitutes an upper bound on the flow table size in EFD, as explained before.

5.2 Mean Response Time

Response time is a key metric for a lot of applications, especially interactive ones. An objective of EFD, and of size-based scheduling policies in general, is to favor interactive applications, hence the emphasis put on response time. We consider four scheduling policies: FIFO, LAS, Run2C and EFD. FIFO is the current de facto standard and it is thus important to compare the performance of EFD to this policy. LAS can be considered a reference in terms of (blind) size-based scheduling policies, as a lot of other disciplines have positioned themselves with respect to LAS. Run2C, for instance, aims
at avoiding the lock-out of long flows, observed more often with LAS than with, e.g., FIFO. We do not consider the X-Protect policy discussed in Section 5.1, as Run2C can be considered a perfect version of X-Protect: Run2C distinguishes exactly between packets of flows below and above the threshold th (we use the same threshold th for both EFD and Run2C). Response times are computed only for flows that complete their transfer before the end of the simulation. When comparing response times, one must thus also consider the amount of traffic due to flows that terminated their transfer and to flows that did not complete. The lack of completion of a flow can be due to a premature end of the simulation. However, in overload and for long enough simulations as in our case, the main reason is that they were set aside by the scheduler. We first turn our attention to the aggregate volumes of traffic per policy for the underload and overload cases. We observe no significant difference between the different
Fig. 4. Distributions of incomplete transfer sizes under FIFO, LAS, Run2C, EFD and LARS: (a) workload of 8 Mbit/s (underload); (b) workload of 15 Mbit/s (overload)

Fig. 5. Conditional mean response time vs. file size in MSS under FIFO, LAS, Run2C, EFD and LARS: (a) workload of 8 Mbit/s (underload); (b) workload of 15 Mbit/s (overload)
scheduling policies in terms of both the number of complete and incomplete connections. The various scheduling policies lead to a similar level of medium6 utilization. In contrast, when looking at the distribution of incomplete transfers, it appears that the flows killed by the different scheduling policies are not the same. We present in Fig. 4 the distribution of incomplete transfers, where the size of a transfer is the total amount of MSS packets transferred by the end of the simulation. A transfer is deemed incomplete if we do not observe a proper TCP tear-down with two FIN flags. As expected, we observe that FIFO tends to kill a lot of small flows while the other policies discriminate against long flows. Distributions of the response times for the (complete) short and long transfers in underload and overload conditions are presented in Fig. 5. Under all load conditions, LAS, EFD and Run2C significantly improve the response time of short flows compared to FIFO. EFD and Run2C offer similar performance. They both exhibit a transition of behavior at about the th value (th = 20 MSS). Still, the transition of EFD is smoother than that of Run2C. This was expected, as Run2C applies a strict rule: below or above th for a given transfer, whereas EFD can further cut a long transfer into fragments which individually go first to the high priority queue. Overall, EFD provides similar or slightly better performance than Run2C at a minimal price in terms of flow bookkeeping. LAS offers the best response time of the size-based scheduling policies in our experiment for small and intermediate size flows. For large flows, its performance is equivalent to the other policies in underload and significantly better in the overload case. However, one has to keep in mind that in overload conditions, LAS deliberately killed a large set of long flows (see Fig. 4), hence its apparently better performance.
LARS behaves similarly to LAS in underload and degrades to fair queueing – which brings it close to FIFO in this case – when the network is overloaded.

5.3 Lock-Outs

The low priority queue of EFD is managed as a FIFO queue. As such, we expect EFD, similarly to Run2C, to avoid the lock-outs observed under LAS, whereby an ongoing long transfer is blocked for a significant amount of time by a newer transfer of significant size. This behavior of LAS is clearly observable in Figure 6(a), which shows the progress (accumulated amount of bytes sent) over time of the 3 largest transfers of one of the above simulations7. We indeed observe long periods of time during which the transfers make no progress, which leads to several plateaus. This is clearly in contrast to the cases of LARS, EFD and, to a lesser extent, Run2C, for the same connections, shown in Figures 6(b), 6(c) and 6(d) respectively. The progress of the connections in the latter cases is clearly smoother, with no noticeable plateau.

5.4 The Case of Multimedia Traffic

In the TCP scenario considered above, FTP servers were homogeneous in the sense that they had the same access link capacity and the same latency to each client. The transfer

6 The medium is the IP path, as those policies operate at the IP level.
7 Those 3 connections did not start at the same time; the time axis is relative to their starting dates.
Fig. 6. Time diagrams of the 3 largest TCP transfers under (a) LAS, (b) LARS, (c) EFD and (d) Run2C (underload), relative to the start of each transfer
rate was controlled by TCP. In such conditions, it is difficult to illustrate how EFD takes into account the actual transmission rate of data sources. In this section, we add a single CBR flow to the TCP workload used previously. We consider two rates, 64 Kb/s and 500 Kb/s, for the CBR flow, representing a typical audio stream (e.g., VoIP) and a typical video stream (e.g., a YouTube video, even though YouTube uses HTTP streaming) respectively. The background load also varies – 4, 8 and 12 Mbps – corresponding to underload, moderate load and overload regimes, as the bottleneck capacity is 10 Mbps. To avoid the warm-up period of the background workload, the CBR flow starts at time t = 10 s and keeps sending packets continuously until the end of the simulation. The simulation lasts 1000 seconds. Since small buffers are prone to packet loss, we assign the bottleneck a buffer of 50 packets, instead of the 300 packets used previously. The loss rates experienced by the CBR flow are given in Fig. 7, which also includes a well-known fair scheduling scheme, SCFQ [6], in addition to the disciplines mentioned before.
Fig. 7. Loss rate experienced by a CBR flow under different background loads (4, 8 and 12 Mb/s) and schedulers (FIFO, SCFQ, LAS, EFD, Run2C, LARS): (a) CBR flow at 64 Kb/s; (b) CBR flow at 500 Kb/s
As we can see from the figure, for the case of a CBR flow at 64 Kbps, LAS discards a large fraction of packets even at low load. This was expected, as LAS only considers the accumulated volume of traffic of the flow, and even at 64 Kbps the CBR flow has sent more than 8 MB of data in 1000 s (without taking the Ethernet/IP overhead into account). In contrast, FIFO, SCFQ and Run2C offer low loss rates, on the order of a few percent at most. As for EFD and LARS, they effectively protect the CBR flow under all load conditions. As the rate of the CBR flow increases from 64 Kbps to 500 Kbps, no packet loss is observed for EFD in underload/moderate load conditions, similarly to SCFQ, whereas the other scheduling disciplines (FIFO, LAS, Run2C and LARS) are hit to various degrees. In overload, EFD and LARS blow up similarly to LAS (which still represents an upper bound on the loss rate, as the CBR flow is continuously granted the lowest priority). EFD behaves slightly better than Run2C, as the load in the high priority queue is by definition lower under EFD than under Run2C. When looking at the above results from a high level perspective, one may think at first sight that FIFO and SCFQ do a decent job, as they provide low loss rates to the CBR flow in most scenarios (underload or overload). However, those apparently appealing results are a side effect of a well-known and undesirable behavior of FIFO: under FIFO, the non-responsive CBR flow adversely impacts the TCP workload, leading to high loss rates. This is especially true for the CBR flow at 500 Kbps. SCFQ tends to behave similarly if not paired with an appropriate buffer management policy [6]. In contrast, LARS and EFD offer a nice trade-off, as they manage to simultaneously grant low loss rates to the CBR flow and a low penalty to the TCP background workload.
Run2C avoids the infinite memory of LAS but still features quite high loss rates, since the CBR flow remains continuously stuck in the low-priority queue. Overall, EFD keeps the desirable properties of size-based scheduling policies and, at a low bookkeeping cost, additionally protects multimedia flows, as it implicitly accounts for the rate of a flow and not only its accumulated volume.
162
J. Chen, M. Heusse, and G. Urvoy-Keller
6 Conclusion

In this paper, we have proposed a simple but efficient packet scheduling scheme called Early Flow Discard (EFD) that uses a fixed threshold for flow discrimination while taking flow rates into account at the same time. EFD possesses the key feature of keeping an active record only for flows that have at least one packet in the queue. With this strategy, EFD caps the number of active flows that it tracks at the queue size in packets. Extensive network simulations revealed that EFD, as a blind scheduler, retains the good properties of LAS, such as small response times for short flows. In addition, a significant decrease in bookkeeping overhead, of at least one order of magnitude, is obtained compared to LAS, which is convincing from a practical point of view. Lock-outs, which form the Achilles' heel of LAS, are avoided in EFD, similarly to Run2C. In contrast to LAS and Run2C, EFD inherently takes both volume and rate into account in its scheduling decision, due to the way flow bookkeeping is performed. We further demonstrated that EFD can efficiently protect low/medium-rate multimedia flows in most situations. A future direction of research on EFD will be to test its applicability to WLAN infrastructure networks, where the half-duplex nature of the MAC protocol needs to be taken into account [16].
References

1. QualNet 4.5. Scalable Networks
2. Aalto, S., Lassila, P.: Impact of size-based scheduling on flow level performance in wireless downlink data channels. In: Managing Traffic Performance in Converged Networks, pp. 1096–1107 (2007)
3. Avrachenkov, K., Ayesta, U., Brown, P., Nyberg, E.: Differentiation between short and long TCP flows: Predictability of the response time. In: Proc. IEEE INFOCOM (2004)
4. Divakaran, D.M., Carofiglio, G., Altman, E., Primet, P.V.B.: A flow scheduler architecture. In: Crovella, M., Feeney, L.M., Rubenstein, D., Raghavan, S.V. (eds.) NETWORKING 2010. LNCS, vol. 6091, pp. 122–134. Springer, Heidelberg (2010)
5. Gittins, J.: Multi-armed Bandit Allocation Indices. Wiley Interscience, Hoboken (1989)
6. Golestani, S.: A self-clocked fair queueing scheme for broadband applications. In: Proceedings IEEE INFOCOM 1994, Networking for Global Communications, vol. 2, pp. 636–646 (June 1994)
7. Heusse, M., Urvoy-Keller, G., Duda, A., Brown, T.X.: Least attained recent service for packet scheduling over wireless LANs. In: WoWMoM 2010 (2010)
8. Kleinrock, L.: Queueing Systems, vol. 2: Computer Applications, 1st edn. Wiley Interscience, Hoboken (1976)
9. Kortebi, A., Oueslati, S., Roberts, J.: Cross-protect: Implicit service differentiation and admission control. In: IEEE HPSR (2004)
10. Lassila, P., Aalto, S.: Combining opportunistic and size-based scheduling in wireless systems. In: MSWiM 2008: Proceedings of the 11th International Symposium on Modeling, Analysis and Simulation of Wireless and Mobile Systems, pp. 323–332. ACM, New York (2008)
11. Noureddine, W., Tobagi, F.: Improving the performance of interactive TCP applications using service differentiation. Computer Networks Journal, 2002–2354 (2002)
EFD: An Efficient Low-Overhead Scheduler
163
12. Psounis, K., Ghosh, A., Prabhakar, B., Wang, G.: SIFT: A simple algorithm for tracking elephant flows, and taking advantage of power laws. In: 43rd Annual Allerton Conference on Control, Communication and Computing (2005)
13. Rai, I.A., Biersack, E.W., Urvoy-Keller, G.: Size-based scheduling to improve the performance of short TCP flows. IEEE Network 19, 12–17 (2004)
14. Rai, I.A., Urvoy-Keller, G., Vernon, M.K., Biersack, E.W.: Performance analysis of LAS-based scheduling disciplines in a packet switched network. In: SIGMETRICS 2004/PERFORMANCE 2004: Proceedings of the Joint International Conference on Measurement and Modeling of Computer Systems, vol. 32, pp. 106–117. ACM Press, New York (2004)
15. Schroeder, B., Harchol-Balter, M.: Web servers under overload: How scheduling can help. ACM Trans. Internet Technol. 6(1), 20–52 (2006)
16. Urvoy-Keller, G., Beylot, A.L.: Improving flow level fairness and interactivity in WLANs using size-based scheduling policies. In: MSWiM 2008: Proceedings of the 11th International Symposium on Modeling, Analysis and Simulation of Wireless and Mobile Systems, pp. 333–340. ACM, New York (2008)
Flexible Dynamic Spectrum Allocation in Cognitive Radio Networks Based on Game-Theoretical Mechanism Design

José R. Vidal, Vicent Pla, Luis Guijarro, and Jorge Martinez-Bauset
Universitat Politècnica de València, 46022 Valencia, Spain
{jrvidal,vpla,lguijar,jmartine}@dcom.upv.es
Abstract. In this paper we present an approach based on game-theoretical mechanism design for dynamic spectrum allocation in cognitive radio networks. Secondary users (SU) detect when channels can be used without disrupting any primary user and try to use them opportunistically. When an SU detects a free channel, it estimates the channel's capacity and sends its valuation of it to a central manager. The manager calculates a conflict-free allocation by implementing a truthful mechanism. The SUs have to pay for the allocation an amount that depends on the set of valuations, and they behave as benefit maximizers. We present and test two mechanisms implementing this idea that are provably truthful, computationally tractable and approximately efficient. We show the flexibility of these mechanisms by illustrating how they can be modified to achieve other objectives, such as fairness, and how they can operate without actually charging the SUs. Keywords: Cognitive radio, spectrum sharing, game theory, mechanism design.
1 Introduction
Cognitive radio is the technology that enables dynamic spectrum access (DSA) networks to fully utilize the scarce spectrum resources. In DSA networks, users who have no spectrum licenses, known as SUs, are allowed to use the spectrum. In this paper, we will focus on DSA networks with hierarchical and overlay access [14]. In the hierarchical access model, SUs use spectrum that is licensed to primary users (PUs). As PUs have priority in using the spectrum, when SUs coexist with PUs, they have to perform real-time wideband monitoring of the licensed spectrum to be used in order to avoid harmful interference to PUs. In overlay access, also referred to as opportunistic spectrum access, SUs only use the licensed spectrum when PUs are not transmitting. In order not to interfere with the PUs, SUs need to sense the licensed frequency band and detect the spectrum opportunities.
This work was supported by the Spanish government through projects TIN2008-06739-C04-02 and TIN2010-21378-C02-02, and by Universitat Politècnica de València through PAID-06-09.
J. Domingo-Pascual et al. (Eds.): NETWORKING 2011, Part II, LNCS 6641, pp. 164–177, 2011. © IFIP International Federation for Information Processing 2011
Flexible Dynamic Spectrum Allocation in Cognitive Radio Networks
165
The availability and quality of spectrum opportunities may change rapidly over time due to PU activity and to competition between SUs. Therefore, dynamic spectrum allocation and sharing schemes are needed to achieve flexible spectrum access in long-run scenarios. They should be able to adapt to the spectrum dynamics, e.g., channel variations, based on local observations. In centralized spectrum allocation, the opportunistic spectrum access of SUs is coordinated by a central element serving as a spectrum manager. The spectrum manager collects operation information from each SU and allocates the spectrum resources to achieve certain objectives, such as efficiency or fairness. This kind of spectrum sharing is non-cooperative: since SUs aim only at maximizing their own benefit, they might exchange false information about their channel conditions in order to get more access opportunities to the spectrum. Therefore, cheat-proof spectrum sharing schemes should be developed to meet the objectives.

Dynamic spectrum sharing has been extensively studied from a game-theoretical perspective [11]. Mechanism design is a game-theoretical approach that can be applied to dynamically redistribute spectrum across several players (PUs and SUs) to meet their demands. Mechanisms aim at achieving the desired equilibrium by enforcing SUs to play truthfully, so that the spectrum resources are allocated according to reliable information. This is attained by means of payments, which are collected and redistributed by a trusted entity. In spectrum auction games, a specific form of mechanism design, a spectrum manager collects bids, allocates spectrum resources to SUs and charges them according to some rules. By multiplexing spectrum supply and demand in time and space, dynamic auctions can improve spectrum utilization or fairness. Most works on spectrum auctions focus on the scenario where one or more PUs want to lease spectrum to SUs, and a monetary gain for the PUs is involved.
In [12], an auction model is proposed where a PU announces a portion of its licensed band and a unit price, and SUs bid for the desired amount of bandwidth. In [3], SUs bid for a pool of homogeneous channels by announcing a demand curve, from which the auction is cleared using an approximation algorithm. A belief-assisted double auction is proposed in [8], with collusion-resistant strategies based on the use of optimal reserve prices. Another solution for double auctions is presented in [15], where several PUs auction a channel each, while several SUs bid for just one of them, assuming that all the channels are homogeneous to the SUs.

Other works propose auctions for power allocation in spectrum sharing. In [2], sequential second-price auctions are proposed assuming complete information. In [6], an auction model is proposed where the utility of each SU is defined as a function of the received signal-to-interference-and-noise ratio (SINR). SUs are charged a unit price for their received SINR, so that the auction achieves the desired social utility. Mechanism design is used in [13]. There the underlay model is assumed, so SUs may transmit in the licensed spectrum when PUs are also transmitting, and their transmission power is restricted by the interference temperature limit. The objective power allocation is calculated using channel information obtained locally by SUs. A truthful mechanism with monetary transfers enforces SUs to reveal this information.
166
J.R. Vidal et al.
Our work differs from those cited above in two major aspects. Firstly, we address the problem of sharing spectrum opportunities between SUs without the intervention of PUs. These opportunities appear sparsely in time and space, so we propose mechanisms to allocate them in real time, i.e., mechanisms that have to be run every time a spectrum opportunity appears. We present two mechanisms (one deterministic and one randomized) for dynamic spectrum allocation which are truthful, computationally tractable and approximately efficient, and which are simple enough for use in real-time allocations. They have the flexibility to achieve long-run objectives other than efficiency, such as fairness, while maintaining their properties. Secondly, we investigate how these mechanisms can operate without monetary transactions, which would make the solution easier to implement. If no monetary gain is involved in the spectrum allocation problem, SU payments need not be made in money, but in an alternative form of 'virtual currency'. This internal currency is managed by the spectrum manager, which records the credit of every SU and distributes credit to them. In order for this currency to retain its value for the SUs, the mechanism itself should consider the credit held by each SU. That is, the choice and payment functions should depend on the credit, thus limiting what SUs spend, and a proper 'cash flow' should be redistributed to the SUs. We investigate how this can be done and how it affects the properties of the mechanisms.

The rest of the paper is organized as follows. In Section 2 we describe the model of spectrum sharing in networks with hierarchical and overlay access and with a central manager. In Section 3 we describe the mechanism design background applied to this problem, present two mechanisms based on the theoretical results, and describe how these mechanisms can be modified to achieve fairness and to operate with a virtual currency. We show and discuss experimental results in Section 4 and conclude in Section 5.
2 Spectrum Sharing Model
We assume that the spectrum is divided into non-overlapping orthogonal channels and that SUs are able to detect when a channel can be used without disrupting any PU. When one of these opportunities, called white spaces, appears, SUs may try to use it opportunistically. Every white space might be used by one or several SUs, on the condition that there is no conflict between them. We model the interference between SUs according to the protocol model [4,7]. This model assumes that, for a given channel, SU i has a communication range Ri and a larger interference range Ri′. The channel can be used by SU i if the receiver is at a distance smaller than Ri. Two SUs i and j are in conflict if the distance between them is smaller than Ri′ or smaller than Rj′, in which case they cannot transmit simultaneously. In this network, Ri and Ri′ depend on the transmitting power, which is imposed by the position and channel usage of nearby PUs. Additionally, the transmitting power and the distance to the receiver determine the transmission rate, hence for a given white space SU i will be able to transmit at Bi bps. With these assumptions, the conflicts for a given white space can be
modelled by a conflict graph whose vertices correspond to the SUs which are able to use the channel. There is an edge between vertices i and j if SUs i and j are in conflict. This graph can be written in matrix form:

$$C = [c_{ij}]_{N \times N} \quad \text{with } c_{ij} \in \{0, 1\}, \tag{1}$$

where c_{ij} = 1 if SUs i and j are in conflict. The sharing problem in this scenario is, for each white space, to find an allocation compatible with the conflict graph:

$$X = [x_i]_{N \times 1} \quad \text{where } x_i \in \{0, 1\} \text{ and } x_i \cdot x_j = 0 \text{ if } c_{ij} = 1, \tag{2}$$

where x_i = 1 if the channel is allocated to SU i. The allocation cannot fulfil efficiency and fairness simultaneously, so the objective will be a trade-off between them. This objective should be achieved on a long-term basis. Long-term efficiency is achieved if every white space is allocated efficiently. A single allocation is said to be efficient if

$$X_{\text{eff}} = \arg\max_{X} \sum_{i=1}^{N} x_i B_i. \tag{3}$$

However, fairness cannot be achieved for each white space, so long-run fairness should be defined as a time average. If we denote by X^j the allocation made at the j-th white space, the proportional fairness criterion for M consecutive allocations is

$$\bar{X}_{\text{fair}} = (X^1_{\text{fair}}, \ldots, X^M_{\text{fair}}) = \arg\max_{X} \sum_{j=1}^{M} \sum_{i=1}^{N} \log x^j_i B^j_i. \tag{4}$$
In our proposal we assume that there is a spectrum manager whose rules are abided by the SUs, and that there is a control channel dedicated to the communication between the manager and the SUs [5]. Every time a white space appears, the SUs detect it and estimate Bi. Those SUs willing to use the channel send Bi to the manager. They also detect and communicate which neighbours they conflict with (cij), from which the manager derives the conflict graph. The manager then calculates the allocation according to the objectives and communicates it to the SUs. We also assume that no SU can benefit from lying about the conflict graph. This is not strictly true, but it is a reasonable assumption in most situations: if SU i declares non-existing conflicts (it falsely declares cij = 1), this reduces the set of possible allocations, and hence its chances to obtain the channel; if it hides a conflict (it falsely declares cij = 0), the resulting allocation may be useless for both i and j. However, SUs could benefit from lying about the channel bit-rate estimation: a higher declared value of Bi raises the value of the sum in (3) when xi = 1, thus raising the manager's evaluation of allocations that include SU i. We propose a solution based on mechanism design to enforce that SUs, driven by self-interest, tell the truth to the spectrum manager.
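To make the objective concrete, the efficient allocation of Eq. (3) subject to the conflict matrix of Eq. (1) can be computed by exhaustive search for small topologies like those used later in the evaluation. This is our own illustrative sketch, not code from the paper; the function name and example values are assumptions.

```python
import itertools

def efficient_allocation(B, C):
    """Brute-force search for the allocation X maximizing sum(x_i * B_i)
    subject to x_i * x_j = 0 whenever c_ij = 1 (Eq. 3).
    Exponential in N, so only suitable for small conflict graphs."""
    n = len(B)
    best_value, best_x = 0.0, [0] * n
    for bits in itertools.product([0, 1], repeat=n):
        # discard allocations that violate the conflict graph
        if any(bits[i] and bits[j] and C[i][j]
               for i in range(n) for j in range(i + 1, n)):
            continue
        value = sum(b * x for b, x in zip(B, bits))
        if value > best_value:
            best_value, best_x = value, list(bits)
    return best_x, best_value

# Three SUs: 0 and 1 conflict, 2 is independent of both.
B = [3.0, 2.0, 1.5]                      # declared bit-rate valuations
C = [[0, 1, 0], [1, 0, 0], [0, 0, 0]]    # conflict matrix
x, v = efficient_allocation(B, C)
print(x, v)   # -> [1, 0, 1] 4.5
```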
3 Spectrum Sharing Based on Mechanism Design
We model the allocation procedure of each white space as a mechanism in which the players are the SUs and a spectrum manager that implements the game rules. Formally, a mechanism is a game in which the players do not know the utilities of the other players, and the rules of the game are designed such that the equilibrium of the game is guaranteed to have certain desired properties. It is defined by the tuple (N, O, Θ, p, u, A, M):
– N is a set of n players or, in the spectrum allocation problem, SUs.
– O is the set of possible outcomes and will include the allocations.
– Θ = Θ1 × · · · × Θn is the set of possible SU types. The type θi ∈ Θi is known only by SU i and determines its utility function. Here, θi is determined by Bi.
– p is the probability distribution of Θ. Here p depends on the positions of the SUs and on the interference restrictions.
– u = (u1, . . . , un), where ui : O × Θ → R is the utility function of SU i, depending on the outcome o and on its type θi.
– A = A1 × · · · × An is the action profile, where Ai is the set of actions available to SU i. In this problem, the only action allowed to SU i is to declare its type θi, i.e., Ai = Θi, which results in a so-called direct mechanism.
– M : A → Π(O) maps each action profile to a distribution over outcomes. If the mechanism is deterministic, then M : A → O.

If we want the resulting allocations to have certain properties, such as efficiency or fairness, the mechanism should implement the corresponding social choice function C : u → Π(O), i.e., the game must have an equilibrium a∗ in which M(a∗) = C(u). We also want the SUs to reveal their types truthfully, which is fulfilled if the equilibrium is a∗ = (θ1, . . . , θn). In this problem, the solution can be restricted to a so-called quasilinear mechanism [10], where the possible outcomes can be written as

$$O = X \times \mathbb{R}^n, \tag{5}$$

where X is a finite set. The outcome is then an allocation plus a vector of n real numbers; the i-th value of this vector is the price that SU i has to pay for the allocation. Thus, for a given vector of types θ ∈ Θ, the utility function is

$$u_i(o, \theta) = u_i(x, \theta) - p_i, \tag{6}$$

where u_i(x, θ) is the value that allocation x has for SU i, and p_i is the price that SU i has to pay when the allocation is x. This mechanism has the property of conditional utility independence, because the value u_i depends only on SU i's type and not on the other SUs' types. We refer to the value u_i as the valuation of SU i for allocation x, and write it as v_i(x), considering its dependence on θi implicit. Then:

$$u_i(o, \theta) = v_i(x) - p_i. \tag{7}$$
Here v_i(x) can be interpreted as the maximum amount that SU i would be willing to pay for allocation x. Let vi denote the mapping that assigns a valuation vi(x) to each x ∈ X. Revealing type θi is equivalent to revealing vi, and the set of allowed actions for SU i is the set of possible values of vi. Let v̂i denote the declared valuation of SU i, which might be different from vi. The mechanism can be interpreted as an auction, with v̂ = (v̂1, . . . , v̂n) ∈ V the bids and p = (p1, . . . , pn) the prices that the bidders have to pay. Given v̂, the manager calculates allocation and payment from:
– A choice rule: f : V → Π(X), or f : V → X if deterministic.
– A payment rule: p : V → R^n.

Our objective is to design these rules so as to achieve truthfulness [10]. A truthful mechanism is in equilibrium when v̂ = v, and no SU can benefit from declaring a false valuation. A theoretical result says that, for non-restricted quasilinear preference domains, the only existing truthful mechanism is the weighted Vickrey–Clarke–Groves (VCG) mechanism. This well-known mechanism implements efficiency but is not computationally tractable. However, here we need a mechanism able to implement other criteria, such as fairness, and simple to compute. Fortunately, the valuation setting described above belongs to the family of single-parameter valuations [10], in which a valuation vi is defined by a single value. Formally, for each SU i the set of allocations can be partitioned into a winning set Wi and a losing set, with

$$v_i(x) = \begin{cases} v_i & \text{if } x \in W_i \\ 0 & \text{if } x \notin W_i \end{cases} \tag{8}$$

Here x ∈ Wi if xi = 1, i.e., SU i wins if the channel is allocated to it, regardless of what happens to the other SUs. Let v̂−i denote the vector of valuations of all SUs except SU i. The mechanism has good properties if the choice function f is monotone:

$$\forall \hat{v}_{-i},\ \forall \hat{v}_i' > \hat{v}_i:\quad f(\hat{v}_i, \hat{v}_{-i}) \in W_i \Rightarrow f(\hat{v}_i', \hat{v}_{-i}) \in W_i. \tag{9}$$

That is, if an SU wins with a given valuation, it also wins with any higher valuation. Given v̂−i, the critical value for SU i is defined as the minimum value of v̂i for which SU i wins:

$$c_i(\hat{v}_{-i}) = \inf \{ \hat{v}_i : f(\hat{v}_i, \hat{v}_{-i}) \in W_i \}. \tag{10}$$

A deterministic single-parameter-domain mechanism is truthful if and only if every winning bidder pays ci(v̂−i) plus a function independent of its own valuation. The mechanism is said to be normalized if losing bidders pay 0, and every truthful mechanism can be turned into a normalized one. Thus, the payment rule can be expressed, without loss of generality, as

$$p_i(\hat{v}_i, \hat{v}_{-i}) = \begin{cases} c_i(\hat{v}_{-i}) + h_i(\hat{v}_{-i}) & \text{if } x \in W_i \\ 0 & \text{if } x \notin W_i \end{cases} \tag{11}$$
A randomized mechanism is truthful in expectation if, for every bidder, revealing its true valuation maximizes its expected benefit, i.e., ∀i ∀vi ∀v̂−i ∀v̂i:

$$E[v_i(f(v_i, \hat{v}_{-i})) - p_i(f(v_i, \hat{v}_{-i}))] \geq E[v_i(f(\hat{v}_i, \hat{v}_{-i})) - p_i(f(\hat{v}_i, \hat{v}_{-i}))]. \tag{12}$$

Let us denote by ωi(v̂i, v̂−i) = Pr[f(v̂i, v̂−i) ∈ Wi] the probability that SU i wins. Then, the expected benefit of SU i is

$$u_i(\hat{v}_i, \hat{v}_{-i}) = v_i\, \omega_i(\hat{v}_i, \hat{v}_{-i}) - p_i(f(\hat{v}_i, \hat{v}_{-i})). \tag{13}$$

If ωi(v̂i, v̂−i) is monotonically non-decreasing in v̂i and vi0 is the valuation below which SU i cannot win, the truthfulness condition for a normalized mechanism is [1]

$$p_i(\hat{v}_i, \hat{v}_{-i}) = \hat{v}_i\, \omega_i(\hat{v}_i, \hat{v}_{-i}) - \int_{v_i^0}^{\hat{v}_i} \omega_i(t, \hat{v}_{-i})\, dt + h_i(\hat{v}_{-i}). \tag{14}$$
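The truthfulness condition (14) is easy to verify numerically for the winning-probability form ω(v̂) = 1 − kj/v̂ used by the randomized mechanism of Sec. 3.2 (with hi = 0). The sketch below is our own illustration, with assumed function name and numeric values; it checks that the expected benefit v·ω(v̂) − p(v̂) peaks at v̂ = v.

```python
import math

def expected_benefit(v_true, v_hat, kj):
    """Expected benefit v*omega - p under Eq. (14), with the winning
    probability omega(v_hat) = 1 - kj/v_hat for v_hat > kj and h_i = 0."""
    if v_hat <= kj:
        return 0.0            # below the threshold the bidder cannot win
    omega = 1.0 - kj / v_hat
    # Eq. (14): p = v_hat*omega - integral_{kj}^{v_hat} omega(t) dt,
    # which evaluates to kj * ln(v_hat / kj)  (cf. Eq. 16)
    payment = kj * math.log(v_hat / kj)
    return v_true * omega - payment

# Truthful bidding (v_hat = v) should dominate any misreport.
v, kj = 2.0, 0.8
truthful = expected_benefit(v, v, kj)
assert all(expected_benefit(v, v + d, kj) <= truthful + 1e-12
           for d in (-1.0, -0.5, 0.5, 1.0))
```

Differentiating v(1 − kj/v̂) − kj ln(v̂/kj) with respect to v̂ gives (kj/v̂²)(v − v̂), which is zero exactly at v̂ = v, confirming the check.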
Based on these results, we propose two simple truthful mechanisms for spectrum sharing: a deterministic one and a randomized one. As the experimental results in Sec. 4 show, the randomized one exhibits better properties. They are executed every time a white space appears. When the SUs detect the white space, each of them estimates its valuation vi = Bi. Then, each SU willing to use the channel sends a bid containing its declared valuation v̂i to the spectrum manager. The manager starts an auction when it receives the first bid and, after a fixed time interval, closes it. The auction is cleared by applying the choice and payment rules to the vector v̂, resulting in a set of winning SUs and a vector p of payments. Finally, the manager sends a message to the winning SUs.

3.1 A Deterministic Truthful Mechanism
The following mechanism approximates efficiency by giving priority to higher valuations. The resulting choice function is monotone, so every winning bid has a critical value. The payment charged is the critical bid, so the mechanism is truthful. Furthermore, both choice and payment functions are computationally tractable.
– Choice function. Given a list of bids:
  1. Order the list from highest to lowest valuation.
  2. From the beginning of the list to the end: allocate the channel to bidder i if it does not conflict with any preceding winning bidder: xi = 1 if cij·xj = 0 ∀j preceding i in the list.
– Payment rule. Keep the order of the bid list above. For each bid i:
  1. Remove bid i from the list.
  2. Start a virtual allocation x with xj = 0 ∀j.
  3. From the beginning of the list to the end, or until a bidder j conflicting with i virtually wins:
     Allocate the channel virtually to bidder k if it does not conflict with any preceding virtual winner: xk = 1 if ckl·xl = 0 ∀l preceding k in the list. Then:
     • If the end of the list is reached, bidder i wins and pays 0.
     • If v̂i > v̂j, bidder i wins and pays the critical value ci = v̂j.
     • If v̂i < v̂j, bidder i loses and pays 0.
  4. Insert bid i back into the list at its original position.
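As a minimal sketch (our own, not the authors' implementation), the choice and payment steps above can be written as follows; `deterministic_auction` and the example bids are illustrative.

```python
def deterministic_auction(bids, C):
    """bids: {su: declared valuation}; C[i][j] = 1 if SUs i and j conflict.
    Returns the winners and their critical-value payments (Sec. 3.1)."""
    order = sorted(bids, key=bids.get, reverse=True)  # highest valuation first

    # Choice function: greedy conflict-free allocation down the ordered list.
    winners = []
    for i in order:
        if all(not C[i][j] for j in winners):
            winners.append(i)

    # Payment rule: re-run the allocation without bid i; the first virtual
    # winner conflicting with i (if any) sets i's critical value.
    payments = {i: 0.0 for i in bids}
    for i in order:
        virtual, critical = [], 0.0
        for k in (s for s in order if s != i):
            if all(not C[k][l] for l in virtual):
                virtual.append(k)
                if C[i][k]:
                    critical = bids[k]  # first conflicting virtual winner
                    break
        if i in winners:
            payments[i] = critical      # a winner pays its critical value
    return winners, payments

# SUs 0-1 and 1-2 conflict; 0 and 2 can share the channel.
bids = {0: 3.0, 1: 2.0, 2: 1.5}
C = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
print(deterministic_auction(bids, C))  # winners [0, 2]; SU 0 pays 2.0
```

Note that because the list is ordered by valuation, the first conflicting virtual winner j automatically satisfies v̂j > v̂i when i loses and v̂j < v̂i when i wins, matching the case analysis in the payment rule.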
3.2 A Randomized Truthful Mechanism
A randomization of the previous mechanism results in a more flexible choice function. The resulting winning-probability function ωi(v̂i, v̂−i) is monotonically non-decreasing, and Equation (14) is applied to obtain the payment function, ensuring the truthfulness of the mechanism. Both choice and payment functions are computationally tractable.
– Choice function. Given a list of bids:
  1. For each bid j, draw a random value kj uniformly distributed between 0 and v̂j.
  2. Order the list from highest to lowest value of kj.
  3. From the beginning of the list to the end: allocate the channel to each bidder i that does not conflict with any preceding winning bidder: xi = 1 if cij·xj = 0 ∀j preceding i in the list.
– Payment rule. Keep the order of the bid list. For each bid i:
  1. Remove bid i from the list.
  2. Start a virtual allocation x with xj = 0 ∀j.
  3. From the beginning of the list to the end, or until a bidder j conflicting with i virtually wins: allocate the channel virtually to bidder k if it does not conflict with any preceding virtual winner: xk = 1 if ckl·xl = 0 ∀l preceding k in the list. Then:
     • If the end of the list is reached, bidder i wins and pays 0.
     • If v̂i > kj, bidder i wins with probability¹

$$\omega_i(\hat{v}_i, k_{-i}) = 1 - \frac{k_j}{\hat{v}_i}, \tag{15}$$

and pays

$$p_i = \hat{v}_i\, \omega_i(\hat{v}_i, k_{-i}) - \int_{k_j}^{\hat{v}_i} \omega_i(t, k_{-i})\, dt = k_j \ln \frac{\hat{v}_i}{k_j}. \tag{16}$$

     • If v̂i < kj, bidder i cannot win and pays 0.
  4. Insert bid i back into the list at its original position.
¹ Here the function ωi(v̂i, k−i) has been used instead of ωi(v̂i, v̂−i). However, the proof of the condition expressed by (14) holds for both functions [1].
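A sketch of the randomized rules, under the expected-payment reading of Eq. (16) in which the charge kj·ln(v̂i/kj) applies whenever v̂i exceeds the threshold kj set by the first conflicting virtual winner. This is our own illustration; the function names are assumptions, not the authors' code.

```python
import math
import random

def randomized_auction(bids, C, rng=None):
    """bids: {su: declared valuation}; C[i][j] = 1 if SUs i and j conflict."""
    rng = rng or random.Random()
    # Choice function: draw a score k_j ~ U(0, v_j), allocate greedily by score.
    k = {i: rng.uniform(0.0, v) for i, v in bids.items()}
    order = sorted(bids, key=lambda i: k[i], reverse=True)
    winners = []
    for i in order:
        if all(not C[i][j] for j in winners):
            winners.append(i)

    # Payment rule: re-run the allocation without bid i; the first conflicting
    # virtual winner j sets the threshold k_j, and Eq. (16) gives the charge.
    payments = {i: 0.0 for i in bids}
    for i in bids:
        virtual = []
        for j in (s for s in order if s != i):
            if all(not C[j][l] for l in virtual):
                virtual.append(j)
                if C[i][j]:
                    if bids[i] > k[j]:
                        payments[i] = k[j] * math.log(bids[i] / k[j])
                    break
    return winners, payments
```

Since ki ≤ v̂i, a bidder with v̂i < kj can indeed never outscore the conflicting virtual winner, and when v̂i > kj the realized winning probability Pr[ki > kj] is exactly (15).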
3.3 Fairness
Unfairness has two main causes. Firstly, not all SUs have the same opportunities, because of physical restrictions and because SUs reporting higher valuations, if true, are favoured. This may be alleviated by limiting the budget of the SUs, as described in the next section. Secondly, the mechanism itself does not treat all bidders the same way: bidders facing more competition (more conflicting neighbours) have fewer chances to win and, when they win, they pay a higher price. The previous mechanisms can be modified to compensate the unfairness caused by the degree of competition. A simple way to do so is to weight the valuations by a function that increases with the number of edges in the conflict graph. Simulation results are shown in Sec. 4. Clearly, this does not affect truthfulness, because the original conditions still hold: the choice function remains monotonically non-decreasing, the payment for the deterministic mechanism is still the critical value and, for the randomized mechanism with weight wi, it is given by

$$p_i = \hat{v}_i\, \omega_i(\hat{v}_i, k_{-i}) - \int_{k_j / w_i}^{\hat{v}_i} \omega_i(t, k_{-i})\, dt = \frac{k_j}{w_i} \ln \frac{w_i \hat{v}_i}{k_j}. \tag{17}$$

3.4 Credit Restriction and Redistribution
So far we have considered that SUs behave as profit maximizers who value a unit of transmission rate the same as a monetary unit; consequently, they obtain the maximum profit when the rate they get minus the money they pay is maximum. This approach is valid if SUs are effectively charged for the spectrum. Another possibility is that what they pay is not real money, but a virtual currency internal to the system. This could be useful if the goal is just to share the spectrum fairly, not to trade with it, and it would also make the system simpler by avoiding monetary transactions. Furthermore, limiting the credit offered to SUs would help to achieve fairness. In this last approach, the manager should redistribute the payments and record the credit of each SU. The amount of credit given to each SU must be limited to approximately the amount it would spend when receiving its fair share of the spectrum. Credit limitation is the way to make SUs value the credit and still behave as profit maximizers. To achieve this, the mechanism has to be modified in two ways:
– SUs need to receive a flow of credit over time. For system stability, the total amount of credit they receive should equal the total amount of their payments. If this property holds for each auction, it is called budget balance. Here we only need it to hold in the long term, which is achieved if, in every auction, the expected value of what each bidder receives equals the expected value of what it pays.
– SUs should be punished when they run out of credit. However, SUs with negative credit cannot be excluded from auctions, because they must still get
their share of the redistribution and thus recover credit for successive auctions. Instead, when an SU has low credit, the mechanism grants it a lower winning probability.

There are several solutions to the redistribution problem that preserve truthfulness [9]. All of them satisfy the condition that what a bidder receives is independent of its own valuation. Based on this idea, we have tested the following redistribution method: for every auction, a parallel auction is created with the same bidders and random valuations, and every bidder receives what it would pay in this parallel auction. If the random valuations follow a distribution that approximates the distribution of the real valuations, and if bidders play truthfully, long-term budget balance is achieved. For credit limitation, we have added to the previous mechanisms another weighting factor qi which depends on credit. Valuations are weighted by qi and the payment functions are modified accordingly. Considering a single auction, this change preserves the truthfulness of the mechanism, as did the weighting for fairness described in Sec. 3.3. However, this does not hold over successive auctions, because qi depends on the credit of SU i, which in turn depends on v̂i. As a consequence, the valuation in the current auction affects not only the benefit obtained in this auction but also what happens in successive auctions. For this reason, it cannot be assured that SUs cannot obtain a long-term benefit from lying. Because successive auctions are not independent, this issue should be studied as a repeated game. To alleviate this problem, the influence of credit on the mechanism should be as small as possible: qi should be 1 most of the time, with qi < 1 only when SU i is close to running out of credit. Then, if the redistribution policy works properly, SUs will operate most of the time with qi = 1. When an SU overbids and spends all its credit, it is punished. On the other hand, an SU cannot benefit from hoarding credit because, once it has enough credit to have qi = 1, having more does not increase its winning probability. Simulation results shown in Sec. 4 suggest that the mechanism still behaves reasonably well, though truthfulness cannot be guaranteed in all situations.
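The credit-dependent weight qi described above can be sketched as a piecewise-linear function. This is our own illustration; the thresholds are assumptions for the sketch (Sec. 4 later uses ±50), not values fixed by the mechanism.

```python
def credit_weight(credit, low=-50.0, high=50.0):
    """Illustrative credit weight q_i (Sec. 3.4): 1 when the SU has ample
    credit, 0 when it is deeply in debt, and linear in between. Valuations
    are then weighted as q_i * v_hat_i before running the auction."""
    if credit >= high:
        return 1.0
    if credit <= low:
        return 0.0
    return (credit - low) / (high - low)
```

With this shape, an SU operating near its fair share keeps qi = 1 and the single-auction mechanism is unchanged; only an SU close to exhausting its credit sees its effective bid shrink.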
4 Experimental Results and Discussion

We have evaluated the properties of the previous mechanisms by means of discrete-event simulations. We simulated a static configuration of 6 SUs with the conflict graph of Fig. 1, located in the interference range of a PU which conveys traffic bursts whose inter-arrival time is exponentially distributed with mean 20 time units and whose duration is exponentially distributed with mean 10 units. The PU has a pool of 10 channels and allocates traffic bursts to them randomly. Idle periods of these channels are detected by the SUs as white spaces. The SU estimations of the bit rate of a white space are uniformly distributed between 0 and 2 bit-rate units. For every white space, all SUs send a bid containing their valuation of the channel to the central manager, which executes the mechanism. Then it sends a message
174
J.R. Vidal et al.
Fig. 1. Conflict graph of simulation set
[Plot: total benefit (units of value) vs. over/under bidding factor; curves: Deterministic, Random]
Fig. 2. Total benefit of SU c as a function of its h
to the winners, who occupy the channel until it is used by the PU again. The credit available to each SU is recorded and updated by the spectrum manager. We assume that SUs are backlogged and that their objective is to send as much traffic as possible. All simulation runs have a length of 10^7 time units, which has been checked to yield statistically significant results. The plot in Fig. 2 illustrates the truthfulness of both mechanisms without credit restriction and confirms the theoretical result. Here, the total benefit of SU c is the sum of the bit-rates of all the channels it won minus the total price it paid for them. This is plotted as a function of an over/under bidding factor h which measures the relation between the declared valuation and the true valuation: v̂c = hvc. When h > 1, SU c is overbidding, and when h < 1 it is underbidding. The rest of the SUs are truthful. It can be seen that SU c maximizes its benefit when h = 1, i.e., when it declares the truth. We have also tested the mechanisms with credit restriction and redistribution as described in Sec. 3.4. Here the weighting factor qi is set to 1 when crediti > 50, to 0 when crediti < −50, and varied linearly between 0 and 1 when −50 < crediti < 50. The initial credit is 100. By doing this, SUs are forced to keep their credit from falling far below 0; that is, they cannot spend much more than they receive. Since credit can neither be overspent nor accumulated, the benefit is now the obtained bit-rate, i.e., the mean traffic. Some results for SU c are shown in Figs. 3 and 4. We have run simulations varying h of other SUs and with other conflict graphs, obtaining similar results not shown here. It can be seen that, with the deterministic mechanism, SU c can benefit from overbidding. This does not happen with the random mechanism. The reason for
Flexible Dynamic Spectrum Allocation in Cognitive Radio Networks
175
[Plot: mean traffic vs. over/under bidding factor; curves: radio c, radio d, radios a and b, radios e and f]
Fig. 3. Mean traffic of SU c as a function of its h. Deterministic mechanism.
[Plot: mean traffic vs. over/under bidding factor; curves: radios e and f, radios a and b, radio c, radio d]
Fig. 4. Mean traffic of SU c as a function of its h. Randomized mechanism.
[Plot: unit price vs. over/under bidding factor; curves: Random, Deterministic]
Fig. 5. Unit price paid by SU c as a function of its h
this different behaviour can be found in Fig. 5, which plots the unit price that SU c pays as a function of h. For the deterministic mechanism, the price grows up to a maximum at h = 1, because the payment function depends only on the critical value. Therefore, if a bid wins, the price does not depend on the bid value; that is, two winning bids with different valuations pay the same. In contrast, with the randomized mechanism the price function is given by (16) and depends on the valuation, so the unit price keeps growing, as can be seen in the plot. This characteristic of the payment function adds stability to the random mechanism, because it makes overbidding more expensive.
[Plot: mean traffic vs. over/under bidding factor; curves: radio c, radio d, radios a and b, radios e and f]
Fig. 6. Mean traffic of SU c as a function of its h with fairness compensation
[Plot: total mean traffic vs. over/under bidding factor; curves: Deterministic, Efficient, Random]
Fig. 7. Mean traffic conveyed by all SUs as a function of h of SU c
The capability of the mechanisms to compensate for the unfairness due to a disadvantaged position in the conflict graph, as described in Sec. 3.3, has been evaluated in an experiment in which valuations v̂i are multiplied by a competition weight wi = ei^1.5, where ei is the number of edges incident to vertex i in the conflict graph. Figure 6 shows the result for SU c with the random mechanism. It can be seen that, compared with Fig. 4, a much fairer share is achieved. We have also evaluated the efficiency of these mechanisms by comparing the sum of the mean traffic of the 6 SUs without fairness compensation with that obtained by another mechanism with an efficient choice function implemented by exhaustive search. Results are plotted in Fig. 7. It can be seen that both mechanisms closely approximate the efficient allocation.
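The compensation step above is simple enough to sketch directly. The adjacency below is an illustrative placeholder, not the exact conflict graph of Fig. 1; only the weight formula wi = ei^1.5 comes from the text.

```python
# Illustrative conflict graph: each SU maps to the set of SUs it conflicts
# with (symmetric adjacency). NOT the exact graph of Fig. 1.
conflict = {
    "a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b", "d"},
    "d": {"c", "e", "f"}, "e": {"d", "f"}, "f": {"d", "e"},
}

def competition_weights(graph):
    """Competition weight w_i = e_i**1.5, where e_i is the degree of SU i
    in the conflict graph. Declared valuations are multiplied by w_i before
    the auction is run, favouring SUs in disadvantaged positions."""
    return {su: len(neigh) ** 1.5 for su, neigh in graph.items()}
```

SUs with more conflict edges (fewer winning chances) thus get their bids boosted, which is what produces the fairer share seen in Fig. 6.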
5 Conclusions
We have presented two truthful, low-complexity mechanisms for real-time spectrum allocation. We have shown how they can be modified to implement social fairness while maintaining their properties. We have also shown how they can work when a virtual currency is used instead of money, by controlling the credit of the SUs, making the choice and payment functions dependent on the credit, and redistributing cash to the SUs. However, when the mechanisms depend on credit, although they are still truthful on a single run, on repeated runs truthfulness does not
hold, because successive runs become dependent. Although truthfulness cannot be guaranteed in all situations, our experimental results show that under certain conditions the mechanisms still behave truthfully. We are currently working on defining the conditions that the credit restriction and the redistribution policy have to fulfil so that the resulting repeated mechanism is truthful.
References
1. Archer, A., Tardos, E.: Truthful mechanisms for one-parameter agents. In: IEEE Symposium on Foundations of Computer Science (FOCS 2001), pp. 482–491. IEEE Computer Society, Los Alamitos (2001)
2. Bae, J., Beigman, E., Berry, R.A., Honig, M.L., Vohra, R.V.: Sequential bandwidth and power auctions for distributed spectrum sharing. IEEE Journal on Selected Areas in Communications 26(7), 1193–1203 (2008)
3. Gandhi, S., Buragohain, C., Cao, L., Zheng, H., Suri, S.: Towards real-time dynamic spectrum auctions. Comput. Netw. 52, 879–897 (2008)
4. Gupta, P., Kumar, P.R.: The capacity of wireless networks. IEEE Transactions on Information Theory 46, 388–404 (2000)
5. Han, C., Wang, J., Yang, Y., Li, S.: Addressing the control channel design problem: OFDM-based transform domain communication system in cognitive radio. Computer Networks, 795–815 (2008)
6. Huang, J., Berry, R.A., Honig, M.L.: Auction-based spectrum sharing. Mob. Netw. Appl. 11, 405–418 (2006)
7. Jain, K., Padhye, J., Padmanabhan, V., Qiu, L.: Impact of interference on multi-hop wireless network performance. Wireless Networks 11(4), 471–487 (2005)
8. Ji, Z., Liu, K.: Belief-assisted pricing for dynamic spectrum allocation in wireless networks with selfish users. In: SECON 2006, vol. 1, pp. 119–127 (2006)
9. Moulin, H., Shenker, S.: Strategyproof sharing of submodular costs: budget balance versus efficiency. Economic Theory 18(3), 511–533 (2001)
10. Nisan, N., Roughgarden, T., Tardos, E., Vazirani, V.V. (eds.): Algorithmic Game Theory. Cambridge University Press, Cambridge (2007)
11. Wang, B., Wu, Y., Liu, K.R.: Game theory for cognitive radio networks: An overview. Comput. Netw. 54, 2537–2561 (2010)
12. Wang, X., Li, Z., Xu, P., Xu, Y., Gao, X., Chen, H.H.: Spectrum sharing in cognitive radio networks: an auction-based approach. Trans. Sys. Man Cyber. Part B 40, 587–596 (2010)
13. Wu, Y., Wang, B., Liu, K.J.R., Clancy, T.C.: Repeated open spectrum sharing game with cheat-proof strategies. Trans. Wireless Comm. 8, 1922–1933 (2009)
14. Zhao, Q.: A survey of dynamic spectrum access: signal processing, networking, and regulatory policy. IEEE Signal Processing Magazine, 79–89 (2007)
15. Zhou, X., Zheng, H.: TRUST: A general framework for truthful double spectrum auctions. In: INFOCOM, pp. 999–1007 (2009)
Channel Assignment and Access Protocols for Spectrum-Agile Networks with Single-Transceiver Radios
Haythem Bany Salameh(1) and Marwan Krunz(2)
(1) Department of Telecommunication Engineering, Yarmouk University, Irbid 21163, Jordan ([email protected])
(2) Department of Electrical and Computer Engineering, University of Arizona, Tucson, AZ 85721, USA ([email protected])
Abstract. Many spectrum access/sharing algorithms for cognitive radio networks (CRNs) have been designed assuming multiple transceivers per CR user. However, in practice, such an assumption may not hold due to hardware cost. In this paper, we address the problem of assigning channels to CR transmissions, assuming one transceiver per CR. The primary goal of our design is to maximize the number of feasible concurrent CR transmissions with respect to both spectrum assignment and transmission power. Energy conservation is also treated, but as a secondary objective. The problem is posed as a utility maximization problem subject to target rate demand and interference constraints. For multi-transceiver CRNs, this optimization problem is known to be NP-hard. However, under the practical setting of a single transceiver per CR user, we show that this problem can be optimally solved in polynomial time. Specifically, we present a centralized algorithm for the channel assignment problem based on bipartite matching. We then integrate this algorithm into distributed MAC protocols. First, we consider a single-hop CRN, for which we introduce a CSMA-like MAC protocol that uses an access window (AW) for exchanging control information prior to data transmissions. This approach allows us to realize a centralized algorithm in a distributed manner. We then develop a distributed MAC protocol (WFC-MAC) for a multi-hop CRN. WFC-MAC improves the CRN throughput through a novel distributed channel assignment that relies only on information provided by the two communicating users. We compare the performance of our schemes with CSMA/CA variants. The results show that our schemes significantly decrease the blocking rate of CR transmissions, and hence improve the network throughput.
Keywords: Opportunistic Access; Cognitive Radio; Single-transceiver.
This research was supported in part by NSF (under grants CNS-1016943, CNS-0721935, CNS-0904681, IIP-0832238), Raytheon, and the Connection One center. Any opinions, findings, conclusions, or recommendations expressed in this paper are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.
J. Domingo-Pascual et al. (Eds.): NETWORKING 2011, Part II, LNCS 6641, pp. 178–197, 2011. © IFIP International Federation for Information Processing 2011
1 Introduction
The widespread acceptance of the unlicensed wireless communication services and applications has significantly increased the demand for more transmission capacity. The unlicensed portions of the frequency spectrum (e.g., the ISM bands) have become increasingly crowded. At the same time, recent radio measurements conducted by the FCC and other agencies revealed vast temporal and geographical variations in the utilization of the licensed spectrum, ranging from 15% to 85% [1]. To overcome spectrum scarcity, cognitive radios have been proposed to allow opportunistic on-demand access to the spectrum [1,2]. CR technology offers such opportunistic capability without affecting licensed primary radio (PR) users. Specifically, CR users access the available spectrum opportunistically by continuously monitoring the operating channels so as to avoid degrading the performance of PR users [2]. They should frequently sense their operating channels for active PR signals, and should vacate these channels if a PR signal is detected. In such an opportunistic environment, the crucial challenge is how to allow CR users to efficiently share (utilize) the available spectrum opportunities while improving the overall achieved throughput. Several MAC protocols have been proposed for CRNs (e.g., [3,4,5,6,7]). We refer the interested reader to our technical report [8] for an extensive overview of related works. Most of these protocols assume that each CR is equipped with multiple transceivers, which may not often be the case. In addition, these protocols are often based on a greedy channel assignment strategy, which selects the “best” available channel (or channels) for a given transmission [3,9]. The best channel is often defined as the one that supports the highest rate. Hereafter, we refer to this strategy as the best multi-channel (BMC) approach. 
As shown later, when the BMC approach is employed in a CRN, the blocking probability for CR transmissions increases, leading to a reduction in network throughput. In contrast, in our work, we consider a single half-duplex transceiver per CR user. Our goal is to maximize the number of concurrent feasible CR transmissions, and conserve energy as a secondary objective, with respect to both spectrum assignment and transmission power. For multi-transceiver CRNs, this joint channel assignment and power allocation problem is known to be NP-hard [7,10]. However, for the single-transceiver case, we show that this problem can be optimally solved in polynomial time. Our optimization follows a “fall back” approach, whereby the secondary objective is optimized over the set of feasible channel assignments that are found to be optimal with respect to the primary objective. Contributions: The contributions of this paper are as follows. We first formulate the optimal channel assignment and power allocation problem. Then, we present an optimal centralized algorithm for this problem based on bipartite matching. For a single-hop CRN, we develop a CSMA-based MAC protocol, called AW-MAC, which realizes the centralized algorithm in a distributed manner. The centralized algorithm requires global information, which is hard to obtain in a multihop environment. Accordingly, for a multi-hop CRN, we present an efficient distributed channel assignment that relies only on information provided by the two
communicating users. Our distributed scheme improves CRN throughput through cooperative assignment among neighboring CR users. Specifically, a CR user that intends to transmit has to account for potential future transmissions in its neighborhood. Based on this distributed scheme, we then develop a novel CSMA-based MAC protocol (WFC-MAC). Our protocols do not require interaction with PR networks (PRNs), and can be adapted to existing multi-channel systems with little extra processing overhead. To evaluate the performance of our protocols, we conduct simulations for a single-hop and a multi-hop CRN with mobile users. Simulation results show that our protocols significantly improve the network throughput over two previously proposed schemes (i.e., BMC-MAC [9,3] and DDMAC [7]). The results also indicate that our protocols preserve (and even slightly improve) throughput fairness. For single-hop scenarios, we show that AW-MAC achieves the best throughput (up to 50% improvement over the BMC-MAC scheme) at no additional cost in energy consumption. In multi-hop scenarios, WFC-MAC achieves the best throughput at the cost of higher energy consumption. The rest of the paper is organized as follows. In Section 2, we introduce our system model, state our assumptions, and formulate the optimal channel assignment/power control problem. Section 3 introduces the centralized channel assignment algorithm. Section 3.2 describes the proposed AW-MAC protocol. In Section 4, we introduce the distributed channel assignment algorithm and the proposed WFC-MAC protocol. In Section 5, we analyze the throughput of our proposed protocols. Section 6 presents our simulation results. Our concluding remarks are presented in Section 7.
2 Problem Formulation and Design Constraints
2.1 Network Model
We consider a decentralized opportunistic CRN that geographically coexists with M different PRNs. The PRNs are licensed to operate on non-overlapping frequency bands, each of Fourier bandwidth W. For k = 1, . . . , M, the carrier frequency associated with the kth PRN is fk (in Hz). Let 𝓜 denote the set of all non-overlapping channels in all PRNs (i.e., M = |𝓜|). CR users continuously identify potential spectrum holes and exploit them for their transmissions. They employ power control to avoid harmful interference with PR receptions. Specifically, CR users adopt the following transmission power strategy: for band i, i = 1, . . . , M, the maximum CR transmission power is 0 if any PR user operates on band i, or limited to P_max^(i) if no PR signal is detected (i.e., the transmission power ≤ P_max^(i)). P_max^(i) is the smaller of the FCC regulatory maximum transmission power over band i and the maximum power supported by the CR's battery (P_CR). Note that identifying the list of idle channels that is potentially available for CR transmissions at a given time and in a given geographical location is a challenging problem. To deal with this challenge, the FCC recently adopted three principal methods that can be used to determine
the list of idle channels that is potentially available for CR transmissions at a given time and in a given geographical location [11]. The first method requires determining the location of a CR user and then accessing a database of licensed services (internal or external) to identify busy/idle PR channels. The second method is to integrate spectrum sensing capabilities into the CR device. The third method is to periodically (or on demand) transmit control information from a professionally installed fixed broadcast CR station. This control information contains the list of idle channels. Under this method, a CR transmitter can only transmit when it receives control information that positively identifies idle PR channels. According to the FCC, this control information can also be transmitted by external entities, such as PR base stations (e.g., broadcast TV and radio stations). For our purposes, we assume that the control-signal method is in place for determining the list of idle channels.
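The transmission power strategy above reduces to a one-line rule per band. A minimal sketch (the numeric P_CR value is hypothetical, chosen only for illustration):

```python
# Hypothetical battery-imposed power limit P_CR (watts); illustrative only.
P_CR = 0.1

def p_max(band_is_busy, regulatory_limit, p_cr=P_CR):
    """Maximum CR transmission power over band i: zero if a PR signal is
    detected on the band, otherwise the smaller of the FCC regulatory limit
    for that band and the battery limit P_CR."""
    if band_is_busy:
        return 0.0
    return min(regulatory_limit, p_cr)
```

For example, a band with a regulatory limit above P_CR is capped at P_CR, while a busy band contributes zero allowed power.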
2.2 Feasibility Constraints
For a CR transmission j, the transmitter and receiver need to cooperatively select an appropriate channel and transmission power while meeting the following constraints:
1. Exclusive channel occupancy policy: The selected channel cannot be assigned to more than one CR transmission in the same neighborhood (in line with the CSMA/CA mechanism).
2. One transceiver per CR user: Each CR user can transmit or receive on one channel only. The operation is half-duplex, i.e., a CR user cannot transmit and receive at the same time.
3. Maximum transmission power: For a CR transmission j and idle channel i, the transmission power P_j^(i) is limited to P_max^(i). If channel i is occupied by a PR user, P_j^(i) = 0.
4. Rate demand: Each CR transmission j requires a given data rate Rj. If none of the idle channels can support Rj, the CR transmission j will be blocked.
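The four constraints can be checked mechanically for any candidate assignment. A sketch under assumed names (the `rate` argument stands in for the paper's rate-power function f; constraint 2 is structural here, since the mapping gives each transmission exactly one channel):

```python
def feasible(assign, p_max, demand, rate):
    """Check a candidate assignment against constraints 1-4.
    assign: transmission j -> (channel i, power p)
    p_max:  channel i -> power cap (0 for PR-occupied channels)
    demand: transmission j -> required rate R_j
    rate:   monotone rate-power function, rate(i, p)."""
    used = set()
    for j, (i, p) in assign.items():
        if i in used:                          # 1: exclusive channel occupancy
            return False
        used.add(i)
        if not (0.0 <= p <= p_max.get(i, 0.0)):  # 3: power cap (0 if PR-busy)
            return False
        if rate(i, p) < demand[j]:             # 4: rate demand R_j
            return False
    return True
```

With a stand-in linear rate function `rate = lambda i, p: 10.0 * p`, a single transmission at power 0.5 on a channel capped at 1.0 meets a demand of 4.0, while two transmissions sharing one channel are rejected.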
2.3 Problem Formulation
At a given time t, let 𝒩(t) and 𝓜_Idle(t) ⊆ 𝓜 respectively denote the set of all CR transmission requests and the set of idle channels in a given neighborhood, with M_Idle(t) = |𝓜_Idle(t)| and N(t) = |𝒩(t)|. It has been shown that neighboring CR users in a given locality typically share a similar view of spectrum opportunities (i.e., the set of common idle channels) [12,2]. Given the rate demands (Rj, ∀j ∈ 𝒩(t)) and the set of idle channels 𝓜_Idle(t), our goal is to compute a feasible channel assignment that assigns channels and transmission powers to CR requests such that the number of feasible concurrent CR transmissions is maximized subject to the previously mentioned constraints.
182
H. Bany Salameh and M. Krunz
channel assignment at a given time t, in what follows, we drop the time subscript (i) (t) for notational convenience. Let αj be a binary variable that is defined as follows: 1, if channel i is assigned to transmission j (i) αj = (1) 0, otherwise. The resource assignment problem is stated as follows: maximizeα(i) ,P (i) j
j
(i)
(i)
αj 1[rj ≥ Rj ] −
i∈MIdle j∈N
Subject to
1 Ptot
(i)
(i)
αj Pj
i∈MIdle j∈N
(i)
αj ≤ 1, ∀i ∈ MIdle
j∈N
(i)
αj ≤ 1, ∀j ∈ N
i∈MIdle (i)
0 ≤ Pj
(i)
(i) ≤ Pmax , ∀i ∈ MIdle and ∀j ∈ N (2) (i)
where 1[.] is the indicator function, rj = f (Pj ) is the data rate for link j on channel i, f (.) is monotonically non-decreasing rate-power function (It can be (i) Shannons capacity or any other power-rate function), and Ptot = i∈M Pmax . The second term in the objective function ensures that if multiple solutions exist for the optimization problem, the one with the least amount of total transmission power will be selected. Note that the first two constraints in (2) ensure that at most one channel can be assigned per transmission and a channel cannot be assigned to more than one transmission. The third constraint (i)
ensures that
Pj
(i) Pmax
≤ 1. Given the above three constraints and noting that
MIdle ⊆ M, the second term of the objective function is always < 1 (i.e., (i) (i) 1 < 1). So, for any two feasible assignment Ω1 with i∈MIdle j∈N αj Pj Ptot N 1 of admitted CR transmissions and Ω2 with N2 < N1 of admitted CR transmissions, the above formulation will also selects Ω1 over Ω2 , irrespective of the total transmission power. The optimization problem in (2) is a mixed integer non-linear program (MINLP). Due to integrality constraints, one expects such a problem to be NPhard. However, we show that this MINLP is not NP-hard and may be solved optimally in polynomial time. Specifically, we show that this problem is the same as assigning channels to independent (distinct) links such that the number of feasible CR transmissions is maximized while using the minimum total transmission power. In Section 3, we propose an algorithm that transforms this optimization problem into the well-known maximum weighted perfect bipartite matching problem, which has a polynomial-time solution [13]. Remark: For multi-transceiver case, the joint channel/power assignment problem is known to be NP-hard [7,10].
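The role of the normalized power term can be made concrete by evaluating the objective of (2) for candidate assignments (a sketch with a stand-in linear rate function and assumed names; since the power term is always below 1, admitting one more transmission always outweighs any power saving):

```python
def objective(assign, demand, p_max, rate):
    """Objective of problem (2) for a candidate assignment:
    number of transmissions meeting their rate demand, minus total
    transmit power normalized by P_tot (which is strictly < 1 for any
    feasible assignment, so the admission count dominates)."""
    p_tot = sum(p_max.values())
    admitted = sum(1 for j, (i, p) in assign.items() if rate(i, p) >= demand[j])
    power = sum(p for (_, p) in assign.values())
    return admitted - power / p_tot
```

For example, with `rate = lambda i, p: 10.0 * p` and two unit-capped channels, an assignment admitting two transmissions scores higher than one admitting a single transmission at lower power, exactly as the formulation requires.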
3 Optimal Channel Assignment
In this section, we first present a centralized algorithm for the channel assignment problem based on bipartite matching. The objective of this algorithm is to maximize the total number of feasible concurrent CR transmissions by means of power management. Note that centralized algorithms are easy to implement in single-hop networks, where all users are within radio range of each other. Based on this centralized algorithm, we develop a CSMA-based MAC protocol that can be executed in a distributed manner.

3.1 Proposed Algorithm
In our context, a centralized algorithm implies that the instantaneous SINR values, location information, and rate demands are known to the decision-making entity that assigns channels and transmission powers. For a finite number of available channels and given rate demands, a CR user can compute the minimum required power over each channel. Using this fact and noting that the graph connecting the set of CR transmission requests and the set of available channels is a bipartite graph (a graph whose vertex set can be decomposed into two disjoint sets such that no two vertices in the same set are connected), our optimization problem can be transformed into a bipartite perfect matching problem. The maximum matching of this bipartite graph problem is the set containing the maximum number of feasible CR transmissions that can proceed concurrently. If there are multiple feasible channel assignments with maximum matching, the one requiring the smallest total transmission power will be selected. In the following, we develop an algorithm that transforms our optimization problem into a bipartite perfect matching problem. Formally, the algorithm proceeds in 3 steps:

Step 1. Compute the minimum required powers: For every CR transmission request j ∈ 𝒩 and every idle channel i ∈ 𝓜_Idle, the algorithm computes the minimum required transmission power P_j,req^(i) that can support the rate demand Rj, i.e., P_j,req^(i) = f^(-1)(Rj). Then, the algorithm identifies prohibited (infeasible) channel/transmission combinations (i, j) whose P_j,req^(i) does not satisfy the maximum transmission power constraint (i.e., P_j,req^(i) > P_max^(i)).

Step 2. Formulate the perfect bipartite matching problem: The algorithm creates M_Idle nodes, each corresponding to one of the idle channels. Let these nodes constitute the channel set C. The algorithm also creates N nodes to represent the CR transmission requests. Let these nodes constitute the request set R. If N > M_Idle, the algorithm creates N − M_Idle additional nodes C_D = {M_Idle + 1, . . . , N} to represent dummy channels and updates C as C = C ∪ C_D. On the other hand, if N < M_Idle, the algorithm creates M_Idle − N additional nodes R_D = {N + 1, . . . , M_Idle} to represent dummy requests and updates R as R = R ∪ R_D. Then, the algorithm connects the nodes in C to the
nodes in R. Any (i, j) assignment that contains a dummy node is also a prohibited assignment. Let w_j^(i) denote the arc weight of link (i, j) in the bipartite graph. For all prohibited assignments, the algorithm sets w_j^(i) to a very large number Γ ≫ P_CR. Formally,

    w_j^(i) = P_j,req^(i), if P_j,req^(i) ≤ P_max^(i), j ∈ R and i ∈ C
    w_j^(i) = Γ, if P_j,req^(i) > P_max^(i), j ∈ R and i ∈ C.    (3)

The above bipartite graph construction transforms the assignment problem into a perfect bipartite matching problem (because the number of CR transmissions is equal to the number of channels, and every node in the request set is connected to every node in the channel set). Now, the algorithm seeks a one-to-one assignment for the max{M_Idle, N} × max{M_Idle, N} bipartite graph constructed in Step 2, with the weights defined in (3), such that channel utilization is maximized while the minimum transmission powers are selected. In particular, the mathematical formulation of this assignment problem corresponds to a weighted perfect bipartite matching problem. Hence, the globally optimal channel assignment (i.e., α_j^(i)) can be found using the Hungarian algorithm, which has polynomial-time complexity (O(√K K), where K = max{N, M_Idle} [13]), and codes are readily available for its implementation [14].

Step 3. Find the optimal resource allocation: The algorithm removes all prohibited assignments from the optimal channel solution by setting them to 0, and modifies α_j^(i) as follows:

    α_j^*(i) = 1, if α_j^(i) = 1 and w_j^(i) < Γ, j ∈ R and i ∈ C
    α_j^*(i) = 0, if α_j^(i) = 1 and w_j^(i) = Γ, j ∈ R and i ∈ C.    (4)

Using the revised optimal assignment {α_j^*(i)}, the algorithm computes Z = Σ_{i∈C} Σ_{j∈R} α_j^*(i). Depending on Z, there are two possibilities:
– If Z = 0, there is no feasible channel assignment.
– If Z > 0, there is a feasible channel assignment. In this case, Z represents the maximum number of possible concurrent transmissions.
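The three steps can be prototyped directly. In the sketch below (assumed helper names), brute-force enumeration of perfect matchings stands in for the Hungarian algorithm; it finds the same optimum, only without the polynomial-time guarantee, so it is usable only for tiny instances:

```python
from itertools import permutations

GAMMA = 1e9  # stand-in for the very large weight Γ on prohibited/dummy pairs

def assign_channels(p_req, p_max):
    """Steps 1-3 of the algorithm, in miniature.
    p_req[j][i]: minimum power for request j on idle channel i (Step 1 input)
    p_max[i]:    power cap on channel i
    Returns (assignment dict request -> channel, Z)."""
    n, m = len(p_req), len(p_max)
    k = max(n, m)
    # Step 2: pad to a square K x K weight matrix; prohibited and dummy
    # entries get weight GAMMA, feasible entries get the required power.
    w = [[GAMMA] * k for _ in range(k)]
    for j in range(n):
        for i in range(m):
            if p_req[j][i] <= p_max[i]:
                w[j][i] = p_req[j][i]
    # Minimum-weight perfect matching (Hungarian-algorithm replacement).
    best = min(permutations(range(k)),
               key=lambda p: sum(w[j][p[j]] for j in range(k)))
    # Step 3: drop prohibited/dummy pairs; Z counts admitted transmissions.
    sol = {j: best[j] for j in range(n) if w[j][best[j]] < GAMMA}
    return sol, len(sol)
```

Because Γ dwarfs any real power, the matching first maximizes the number of admitted transmissions and only then minimizes total power, mirroring the "fall back" structure of the objective in (2).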
3.2 Channel Access Protocol for Single-Hop CRNs
Based on the channel assignment algorithm presented in Section 3.1, we now propose a distributed multi-channel MAC protocol for single-hop ad hoc CRNs with a single half-duplex radio per node. Before describing our protocol in detail, we first state our main assumptions.
Assumptions: For each frequency channel, we assume that its gain is stationary for the duration of a few control packets and one data packet. This assumption holds for typical mobility patterns and transmission rates [15]. We also assume symmetric gains between two users, which is a common assumption in RTS/CTS-based protocols, including the IEEE 802.11 scheme. Our protocols assume the availability of a prespecified common control channel. Such a channel is not necessarily dedicated to the CRN. It may, for example, be one of the unlicensed ISM bands. Note that the existence of a common control channel is a characteristic of many MAC protocols proposed for CRNs (e.g., [7,6,3]).
Each AS consists of the sum of an RTS duration, a CTS duration, and a maximum backoff interval (explained below), and two fixed short interframe spacing (SIFS) periods2 . Control packets are sent at the maximum (known) power Pctrl . This Pctrl is constrained by the maximum permissible transmission power imposed on the control channel. Upon receiving an RTS packet from a CR user, say A, that is initiating an AW, other CR users in the network synchronize their time reference with A’s AW. Suppose that a CR user C overhears A’s RTS, and has a data packet to send. C contends for the control channel in the next access slot of A’s AW as follows. It first backs off for a random duration of time (T ) that is uniformly distributed in the interval [0, Tmax ]; Tmax is a system-wide backoff counter. After this waiting time and if no carrier is sensed, user C sends its RTS packet in the current AS. Note that Tmax is in the order of a few microseconds whereas a time slot is in milliseconds, so the backoff mainly serves to prevent synchronized RTS attempts. For illustration purposes, Figure 1 shows a time diagram of the channel access 2
As defined in the IEEE 802.11b standard [2], a SIFS period consists of the processing delay for a received packet plus the turnaround time.
Fig. 1. Basic operation of AW-MAC
process, assuming fixed data-packet sizes and equal rate demands. Tctrl and Tdata in the figure denote the durations (in seconds) of one RTS/CTS packet exchange and of one data-plus-ACK packet transmission, respectively. After all the control packets have been exchanged, the channel assignment and power management algorithm of Section 3 is executed at every communicating node.

3.3 Remarks and Design Variants
Granularity of Channel Assignment: Depending on channel availability due to PR dynamics, the proposed channel assignment can be performed at the granularity of a packet or a link. In the latter case, the assignment applies to all packets of the current connection between the two end points of a link.

Fairness Properties of AW-MAC: According to AW-MAC, CR users contend over the control channel using a variant of the CSMA/CA mechanism. This gives all CR users the same probability of accessing channels, irrespective of their rate demands. Thus, our AW-MAC protocol preserves fairness among CR users. In our simulations (Section 6), we compare the fairness properties of AW-MAC to those of typical multi-channel CRN CSMA-based protocols. The results show that AW-MAC preserves (and slightly improves) network fairness.

Channel Assignment with a Multi-level Frequency-dependent Power Constraint: The problem of identifying spectrum holes and selecting appropriate channels/powers is complicated by the presumably non-cooperative nature of PRNs, which usually do not provide feedback (e.g., interference margins) to CR users. To address this problem, a multi-level time-varying frequency-dependent power mask (P_mask = {P_mask^(1), P_mask^(2), . . . , P_mask^(M)}) on the CR transmissions is often adopted (e.g., [7]). Enforcing such a power mask allows for spectrum sharing between neighboring CR and PR users. According to this approach, CR users can exploit both idle as well as partially-utilized bands, potentially leading to better spectrum utilization. However, the determination of
Channel Assignment and Access Protocols for Spectrum-Agile Networks
Fig. 2. Basic operation of 2-radio AW-MAC (note that MIdle(t0) = M0 and MIdle(t1) = M1)
an appropriate multi-level power mask is still an open issue, which has recently been investigated under certain simplifying assumptions (e.g., [3,2]). Although our proposed algorithm assumes a binary-level power constraint on CR transmissions, it remains valid for the case of a multi-level frequency-dependent power mask by setting the maximum CR transmission power over channel i to Pmax^(i) = min{Pmask^(i), PCR}, ∀i ∈ M.

AW-MAC with Two Transceivers: Another design possibility that can improve CRN throughput is to use two half-duplex transceivers per CR user. One transceiver would be tuned to the control channel, while the other could be tuned to any data channel in MIdle. Because there is no interference between data and control transmissions (the two are separated in frequency), the reservations of the subsequent AW can be conducted while current data transmissions are taking place (i.e., mimicking full-duplex operation). This reduces the control overhead and improves the overall throughput at the cost of an additional transceiver. We refer to the channel access mechanism that uses AW assignment with one transceiver as AW-MAC, and the one that uses AW assignment with two transceivers as 2-radio AW-MAC. Figure 2 shows the basic operation of 2-radio AW-MAC. In Section 5, we study the potential throughput improvement of 2-radio AW-MAC over AW-MAC.

RTS/CTS Handshake in AW-MAC: It should be noted that the RTS/CTS handshake is essential in multi-channel systems (e.g., CRNs). Besides mitigating hidden-terminal problems, there are two other main objectives for using RTS/CTS: (1) conducting and announcing the channel assignment, and (2) prompting both the transmitter and the receiver to tune to the agreed-upon channels before transmission commences. Simulation studies have shown that using RTS/CTS packets for data packets larger than 250 bytes is beneficial [17]³.
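The multi-level generalization above reduces to a per-channel clipping of the transmit power. A minimal sketch (the mask values are hypothetical; `p_cr` stands for the CR device's own power limit PCR):

```python
def max_tx_powers(p_mask, p_cr):
    """Per-channel maximum power under a multi-level frequency-dependent
    mask: Pmax(i) = min{Pmask(i), PCR} for every channel i."""
    return [min(p, p_cr) for p in p_mask]

# Hypothetical mask levels (Watts) and a 50-mW device limit:
print(max_tx_powers([0.1, 0.02, 0.5], 0.05))  # [0.05, 0.02, 0.05]
```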
It is also worth mentioning that our AW-MAC protocol is based on passive learning.
³ The RTS threshold depends on the number of users in the network [17,18]. It should be reduced for a large number of users.
H. Bany Salameh and M. Krunz
This is because in AW-MAC, CR users are within the transmission range of each other and always listen to the control channel in order to overhear control-packet exchanges, including those not destined to them. CR users use the control information to perform the channel assignment. Thus, AW-MAC does not introduce any additional control message overhead beyond the needed two-way handshake for every transmitted data packet.
4 Distributed Channel Assignment for Multi-hop CRNs
In this section, we present a distributed channel assignment scheme for a multi-hop CRN. It attempts to improve spectrum utilization in a purely distributed manner while relying only on information provided by the two communicating nodes. We first identify the key challenges involved in realizing the centralized algorithm in a distributed manner. Then, we describe our distributed scheme in detail.
4.1 Challenges
To execute our centralized algorithm in a multi-hop environment, the algorithm must run in a distributed manner at each CR device in a given locality (i.e., contention region). This implies that each CR user that belongs to a contention region must exchange instantaneous SINR information with other neighboring CR users in that region before selecting channels and powers. This incurs high control overhead and delay. Moreover, in a multi-hop environment, CR users may belong to multiple contention regions that differ in their views of the spectrum opportunity. To overcome such challenges, we develop a heuristic channel assignment scheme that provides a suboptimal solution with low complexity and that achieves good spectrum utilization.
4.2 Channel Assignment
The main consideration in our distributed scheme is to enable cooperation among neighboring CR users. A CR user that intends to transmit has to account for potential future transmissions in its neighborhood. It does that by assigning to its transmission the worst feasible channel, i.e., the least-capacity available channel that can support the required rate demand. We refer to this approach as the worst feasible channel (WFC) scheme. Note that a user determines the worst feasible channel for its transmission using only local information. The WFC scheme preserves better channels for potential future CR transmissions. Even though WFC requires a pair of CR users to communicate on a channel that may not be optimal from one user's perspective, it allows more CR transmissions to take place concurrently, especially under moderate to high traffic loads. Compared to previously proposed channel assignment schemes (evaluated in Section 6), our approach avoids unnecessary blocking of CR transmissions and has great potential to improve network throughput by means of cooperative channel assignment.
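The WFC selection rule can be sketched as follows (illustrative Python, not from the paper; the per-channel achievable rates would come from the SINR and power computations of Section 3):

```python
def worst_feasible_channel(capacities, rate_demand):
    """WFC rule: among channels whose achievable rate meets the demand,
    pick the one with the LEAST capacity, preserving better channels
    for potential future CR transmissions.
    capacities: dict channel_id -> achievable rate.
    Returns a channel id, or None if the transmission must be blocked."""
    feasible = {ch: c for ch, c in capacities.items() if c >= rate_demand}
    if not feasible:
        return None  # no feasible channel: the sender backs off
    return min(feasible, key=feasible.get)

# Hypothetical per-channel rates (Mbps) and a 5-Mbps demand:
print(worst_feasible_channel({1: 7.2, 2: 5.1, 3: 4.0, 4: 9.5}, 5.0))  # 2
```

A best-channel scheme (BMC) would instead return channel 4 here, blocking a later transmission that only channel 4 could have served.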
4.3 Channel Access Protocol
Protocol Overview: Based on the WFC algorithm, we propose a distributed multi-channel MAC protocol for multi-hop ad hoc CRNs with a single half-duplex radio per node. The proposed protocol is an extension of the single-channel RTS-CTS-DATA-ACK handshaking scheme used in the 802.11 standard. It differs from previous designs in that it exploits the "dual-receive single-transmit" capability of radios (i.e., each radio is capable of receiving over two channels simultaneously, but can transmit over only one channel at a time). The operation is half-duplex, i.e., while transmitting, the radio cannot receive/listen, even over other channels. This capability can be implemented using one transceiver with a slight upgrade to the receive chains of the circuitry, and is readily available in some recent radios. For example, QUALCOMM's RFR6500 radio [19] supports "simultaneous hybrid dual-receive operation, which allows for 1X paging signal monitoring during a 1xEV-DO connection, while monitoring other frequency bands for hand-off". Another example is Kenwood's TH-D7A Dual-Band Handheld Transceiver [20], which supports simultaneous reception over both data and voice channels using a single antenna. Though a simple enhancement of the transceiver circuitry, the dual-receive capability makes the MAC design much easier. In particular, if we assume a common (or coordinated) control channel, a CR user that is not transmitting any data can tune one of its two receive branches to the control channel while receiving data over the other receive branch. This way, the multi-channel hidden-terminal problem can be alleviated.

Operational Details: To facilitate multi-channel contention and reduce the likelihood of CR collisions, each CR user, say A, maintains a free-channel list (FCL) and a busy-node list (BNL). FCL(A) represents idle PR channels that are not occupied by other CR users within A's one-hop communication range.
BNL(A) consists of the IDs of CR users that are currently busy transmitting/receiving data packets in A's neighborhood. FCL(A) and BNL(A) are continuously updated according to the channel access dynamics and overheard control packets. The proposed protocol follows interframe spacings and collision-avoidance strategies similar to those of the 802.11 scheme (implemented here over the control channel), using physical carrier sensing and backoff before initiating control-packet exchanges. Upon accessing the control channel, communicating CR users perform a three-way handshake, during which they exchange control information, conduct the channel assignment, and announce the outcome of this channel assignment to their neighbors. The details of the channel access mechanism are now described. Suppose that CR user A has data to transmit to CR user B at a rate demand RA. If A does not sense a carrier over the control channel for a randomly selected backoff period, it proceeds as follows:
– If FCL(A) is empty or B is busy (based on BNL(A)), A backs off and attempts to access the control channel later. Otherwise, A sends an RTS message at power Pctrl. The RTS packet includes FCL(A) and RA.
– A's neighbors other than B that can correctly decode the RTS stay silent until either they receive another control packet from A, denoted by FCTS (explained below), or the expected time for the FCTS packet expires.
– Upon receiving the RTS packet, B determines the common channel list that is available for the A → B transmission, denoted by CCL(A, B). Then, B proceeds with the channel assignment process, whose purpose is to determine whether or not there exists a feasible channel assignment that can support RA.
– Depending on the outcome of the channel assignment process, B decides whether or not A can transmit. If not (i.e., none of the channels in CCL(A, B) can support RA), then B does not respond to A, prompting A to back off, with an increased backoff range similar to that of 802.11, and retransmit later. Otherwise, B sends a CTS message to A that contains the assigned channel, the transmit power, and the duration (Tpkt(A)) needed to reserve the assigned channel. The CTS implicitly instructs B's CR neighbors to refrain from transmitting over the assigned channel for the duration Tpkt(A).
– Once A receives the CTS, it replies with a "Feasible-Channel-to-Send" (FCTS) message, informing its neighbors of the assigned channel and Tpkt(A). Such a three-way handshake is typically needed in multi-channel CSMA/CA protocols designed for multi-hop networks (e.g., [9,7,3]). For single-hop networks, where all users can hear each other, there is no need for the FCTS packet. Likewise, in single-channel multi-hop networks, the FCTS packet is not needed.
– After completing the RTS/CTS/FCTS exchange, the A → B transmission proceeds. Once it is completed, B sends back an ACK packet to A over the assigned data channel.
When used with the WFC assignment, the above protocol is referred to as WFC-MAC.
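A sketch of B's receiver-side handling of the RTS (function and field names are illustrative, not from the paper; the WFC rule of Section 4.2 is applied to the common channel list):

```python
def handle_rts(fcl_A, rate_A, fcl_B, capacity):
    """Receiver-side (B) handling of an RTS in the three-way handshake.
    capacity: dict channel -> achievable rate for the A -> B link.
    Returns the CTS contents, or None (B's silence makes A back off)."""
    ccl = [ch for ch in fcl_A if ch in fcl_B]            # CCL(A, B)
    feasible = [ch for ch in ccl if capacity[ch] >= rate_A]
    if not feasible:
        return None                                      # no CTS is sent
    chosen = min(feasible, key=lambda ch: capacity[ch])  # WFC rule
    return {"channel": chosen, "rate": rate_A}

print(handle_rts([1, 2, 3], 5.0, [2, 3], {2: 6.0, 3: 4.0}))
# {'channel': 2, 'rate': 5.0}
```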
Note that, while receiving a data packet over a given data channel, a CR user still listens to other control packet exchanges taking place over the control channel, and can update its FCL and BNL accordingly. However, a CR user that is transmitting a data packet will not be able to listen to the control channel, so its FCL and BNL may become outdated. We refer to this problem as transmitter deafness, which is primarily caused by the half-duplex nature of the radios. To remedy this problem, when the receiver sends its ACK, it includes in this ACK any changes in the FCL and BNL that may have occurred during the transmission of the data packet. The transmitter uses this information to update its own tables. Because there is no interference between data and control packets, a CR user that hears the RTS (CTS) packet defers its transmission only until the end of the control packet handshaking. This allows for more parallel transmissions to take place in the same vicinity.
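The transmitter-deafness remedy amounts to a set-difference computation at the receiver and a merge at the transmitter; a sketch (function and field names are illustrative):

```python
def ack_updates(fcl_now, bnl_now, fcl_at_start, bnl_at_start):
    """Receiver side: piggyback on the ACK only the FCL/BNL changes
    that occurred while the (deaf) transmitter was sending."""
    return {"fcl_add": fcl_now - fcl_at_start, "fcl_del": fcl_at_start - fcl_now,
            "bnl_add": bnl_now - bnl_at_start, "bnl_del": bnl_at_start - bnl_now}

def apply_ack(fcl, bnl, upd):
    """Transmitter side: merge the piggybacked deltas into stale tables."""
    return ((fcl | upd["fcl_add"]) - upd["fcl_del"],
            (bnl | upd["bnl_add"]) - upd["bnl_del"])
```

Sending only deltas rather than the full lists keeps the ACK short, which matters because the ACK shares the data channel with the payload.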
5 Throughput Analysis
In this section, we use simplified analysis to evaluate the maximum achievable throughput of various channel access schemes in single-hop topologies. We assume that a CR user transmits data in the form of fixed-size packets at a fixed transmission rate. Recall that Tctrl denotes the transmission duration of one RTS plus one CTS packet, and Tdata denotes the duration of data plus ACK transmissions. Assume that Tctrl can be expressed in terms of Tdata as Tctrl = δTdata. It is worth mentioning that according to the IEEE 802.11 specifications, Tdata is at least an order of magnitude larger than Tctrl (i.e., 0 < δ ≪ 1). As an example, consider data- and control-packet sizes of 4 KB and 120 bits, respectively [21], and a transmission rate of 5 Mbps. Then, δ ≈ 0.0073. We now provide expressions for the maximum achievable throughput under the various schemes, assuming the availability of MIdle channels and a per-packet channel assignment. The maximum achievable throughput is defined as the maximum number of concurrent feasible CR transmissions that can be supported in a Tdata + MIdle·Tctrl = (1 + MIdle·δ)Tdata duration. For the single-transceiver AW-MAC, according to Figure 1, the maximum number of data packets that can be potentially transmitted in a Tdata + MIdle·Tctrl duration is MIdle. Under 2-radio AW-MAC, at steady state, the maximum number of data packets that can be potentially transmitted in the same duration is MIdle + Σ_{i=1}^{MIdle} MIdle·(Tctrl/Tdata) = MIdle + MIdle²·δ = MIdle(1 + MIdle·δ) (see Figure 2). Under both WFC-MAC and BMC-MAC (similar to WFC-MAC but uses the BMC channel assignment), for a given channel i, Figure 3 shows that an RTS/CTS exchange can immediately follow the transmission of the previous data packet over that channel. Thus, the maximum achievable throughput in the Tdata + MIdle·Tctrl duration is MIdle + Σ_{i=1}^{MIdle−1} (MIdle − i − 1)·(Tctrl/Tdata). With some manipulation, this quantity can be written as MIdle + δ(MIdle²/2 − (3/2)MIdle + 1).
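The three closed-form expressions can be checked numerically; a short sketch (packet counts per Tdata + MIdle·Tctrl duration; δ is computed from the packet sizes, neglecting the ACK's contribution to Tdata as the text does):

```python
def aw_mac(m, delta):
    """Single-transceiver AW-MAC: MIdle packets per duration."""
    return m

def aw_mac_2radio(m, delta):
    """2-radio AW-MAC: control exchanges overlap data transmissions,
    giving MIdle*(1 + MIdle*delta)."""
    return m * (1 + m * delta)

def wfc_bmc_mac(m, delta):
    """WFC-MAC / BMC-MAC: MIdle + delta*(MIdle^2/2 - (3/2)*MIdle + 1)."""
    return m + delta * (m**2 / 2 - 1.5 * m + 1)

# delta for 4-KB data packets and 120-bit control packets at 5 Mbps
# (one RTS + one CTS = 240 bits):
delta = 240 / (4 * 1024 * 8)  # ~0.0073, matching the text
for m in (5, 10, 20):
    print(m, aw_mac(m, delta), wfc_bmc_mac(m, delta), aw_mac_2radio(m, delta))
```

For any MIdle ≥ 3 the ordering is 2-radio AW-MAC > WFC/BMC-MAC > AW-MAC, but with δ this small the three values stay within a few percent of each other, as Figure 4 confirms.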
Fig. 3. Basic operation of the distributed spectrum access scheme
Fig. 4. Maximum achievable throughput (in packets per Tdata + MIdle·Tctrl duration) vs. total number of idle channels for 2-radio AW-MAC, AW-MAC, and WFC/BMC-MAC (control-packet size = 120 bits): (a) data-packet size = 4 KB; (b) data-packet size = 8 KB
Computing the maximum achievable throughput in this way is rather optimistic, since we are assuming that for 2-radio AW-MAC/AW-MAC all AW slots result in successful RTS/CTS exchanges, and that for BMC-MAC/WFC-MAC and a given data channel, an RTS/CTS exchange immediately follows the transmission of the previous data packet over that channel. Figure 4 shows the maximum achievable throughput as a function of MIdle for two data-packet sizes and various channel access schemes. For practical data- and control-packet sizes [21], where δ ≪ 1, the figures reveal that the various channel access schemes achieve comparable throughput performance. More importantly, the use of two half-duplex transceivers per CR user provides only a minor improvement in system throughput over a single-transceiver design. The figures also demonstrate that the throughput gain due to two transceivers is larger at smaller data-packet sizes (i.e., larger δ) and larger MIdle. This is because a larger δ (or MIdle) means a larger AW duration, which results in more overhead for the single-transceiver solution.
6 Performance Evaluation
We now evaluate the performance of the proposed protocols via simulations. Our proposed protocols (AW-MAC and WFC-MAC) are compared with two CRN MAC protocols: BMC-MAC [3] and DDMAC [7]. As mentioned before, BMC-MAC selects the best available channel for data transmission. DDMAC is a CSMA-based spectrum-sharing protocol for CRNs. It attempts to maximize the CRN throughput through a probabilistic channel assignment algorithm that exploits the dependence between the signal’s attenuation model and the transmission distance while considering current traffic and interference conditions. For a fair comparison, in BMC-MAC, WFC-MAC, and DDMAC, CR users employ the same channel access mechanism described in Section 4.3. They differ in the channel assignment approach. The maximum achievable throughput under
DDMAC channel access is the same as that obtained in Section 5 for WFC-MAC/BMC-MAC and is comparable to that for AW-MAC (see Figure 4). Note that, in all protocols, if there is no feasible channel assignment that can support the rate demand, no channel is assigned, prompting the transmitter to back off. It is worth mentioning that DDMAC involves more processing overhead, as it requires distance and traffic estimation. In our evaluation, we first study the network performance in a single-hop CRN, where all users can hear each other. Then, we study it in a multi-hop mobile CRN. Our results are based on simulation experiments conducted using CSIM, a C-based, process-oriented, discrete-event simulation package [22].
6.1 Simulation Setup
We consider four PRNs and one CRN that coexist in a 100 meter × 100 meter field. Users in each PRN are uniformly distributed. The PRNs operate in the 600 MHz, 900 MHz, 2.4 GHz, and 5.7 GHz bands, respectively. Each PRN consists of three 2.5-MHz-wide channels, resulting in a maximum of 12 channels for opportunistic transmissions. We divide time into slots, each of length 6.6 ms. A time slot corresponds to the transmission of one data packet of size 4 KB at a transmission rate of 5 Mbps. Each user in the kth PRN acts as an ON/OFF source: it is ON while transmitting and OFF otherwise. The source is further characterized by the distribution of its ON and OFF periods, both taken to be exponential. We set the average ON and OFF periods for the four PRNs to the duration of 10 and 190 time slots, respectively. The number of PR links in each PRN is 20. Each active link in the kth PRN transmits over one of the 3 channels in its own band. Thus, the available spectrum opportunity in each PR band is 66.7%. For the CRN, we consider 200 mobile users. The random waypoint model is used for mobility, with the speed of a CR user uniformly distributed between 0 and 2 meters/sec. For each generated packet, the destination node is selected randomly. Each CR user generates fixed-size (2-KB) data packets according to a Poisson process of rate λ (in packet/time slot). Each user requires a transmission rate of 5 Mbps. We set the CRN SINR threshold to 5 dB and the thermal noise power density to Pth^(i) = 10^−21 Watt/Hz for all channels. We set the maximum transmission power to Pmax^(1) = Pmax^(2) = . . . = Pmax^(12) = 50 mW and the control-packet size to 120 bits. The data rate of a CR transmission over a given channel is calculated according to Shannon's formula⁴. The reported results are averaged over 100 runs.
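As a sanity check on these parameters, Shannon's formula gives just over 5 Mbps for one 2.5-MHz channel at the 5-dB SINR threshold, so a single channel can carry the per-user demand (a sketch; `shannon_rate` is an illustrative helper):

```python
import math

def shannon_rate(bandwidth_hz, sinr_db):
    """Achievable rate r = B * log2(1 + SINR), with the SINR in dB."""
    return bandwidth_hz * math.log2(1 + 10 ** (sinr_db / 10))

# One 2.5-MHz channel at the 5-dB CRN SINR threshold:
print(shannon_rate(2.5e6, 5.0) / 1e6)  # ~5.14 Mbps >= the 5-Mbps demand
```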
Our performance metrics include: (1) the network throughput; (2) the CR blocking rate; (3) the average energy consumption for successfully transmitting one data packet (Ep); and (4) the fairness index. The CR blocking rate is defined as the percentage of CR requests that are blocked due to the unavailability of a feasible channel. We use Jain's fairness index [23] to quantify the fairness of a scheme according to the throughput of all the CR users in the network.
⁴ Other rate-vs-power relationships, such as a staircase function, can be used for calculating the achievable data rates.
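Jain's index used in metric (4) is computed over the per-user throughputs x_i as (Σx_i)²/(n·Σx_i²); a small sketch:

```python
def jain_index(throughputs):
    """Jain's fairness index: (sum x)^2 / (n * sum x^2).
    1.0 means perfectly equal throughputs; 1/n is the worst case."""
    n = len(throughputs)
    s = sum(throughputs)
    s2 = sum(x * x for x in throughputs)
    return s * s / (n * s2) if s2 > 0 else 1.0

print(jain_index([5, 5, 5, 5]))   # 1.0 (perfectly fair)
print(jain_index([10, 0, 0, 0]))  # 0.25 (one user gets everything)
```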
Fig. 5. CRN performance in single-hop scenarios: (a) blocking rate vs. λ; (b) throughput vs. λ; (c) Ep vs. λ; (d) fairness index; (e) CR channel usage under low, moderate, and high loads (in (d) and (e), 2-radio AW-MAC depicted behavior similar to AW-MAC)
6.2 Single-Hop Network
We first study the throughput performance. Figures 5(a) and (b) show that 2-radio AW-MAC provides only a minor improvement in network throughput over the single-transceiver AW-MAC (this result is in line with the analysis in Section 5). Because both 2-radio AW-MAC and AW-MAC use the same channel assignment algorithm and provide comparable throughput performance, in the following we focus on the performance of AW-MAC and compare it with that of the other protocols. Specifically, Figures 5(a) and (b) show that under moderate and high traffic loads, AW-MAC significantly outperforms the other protocols. At steady state, AW-MAC reduces the CR blocking rate and improves the overall one-hop throughput by up to 50% compared to BMC-MAC, 18% compared to DDMAC, and 12% compared to WFC-MAC. This improvement is mostly attributed to the increase in the number of simultaneous CR transmissions. WFC-MAC outperforms both BMC-MAC and DDMAC. This is because WFC-MAC attempts to serve a given CR transmission first using the worst feasible channel, preserving better channels for potential future transmissions. Under light loads, all protocols achieve comparable throughput performance. In Figure 5(c), we study the impact of the channel assignment strategy on Ep. It is clear that WFC-MAC and DDMAC perform the worst in terms of energy consumption. At the same time, the figure reveals that 2-radio AW-MAC, AW-MAC, and BMC-MAC have comparable performance with respect to Ep.
Thus, the throughput advantage of AW-MAC does not come at the expense of additional energy consumption. Figure 5(d) shows that all schemes achieve comparable fairness. This can be attributed to the fact that in all of these schemes CR users contend over the control channel using a variant of the CSMA/CA mechanism. Finally, Figure 5(e) depicts the channel usage, defined as the fraction of time in which a specific channel is used for CR transmissions. For WFC-MAC and DDMAC, channel usage is roughly evenly distributed among all channels, irrespective of the traffic load. For AW-MAC and BMC-MAC, under low and moderate traffic loads, channels with lower carrier frequencies are favored for CR transmissions (lower attenuation). On the other hand, under high traffic load, there are no significant differences in channel usage among the channels.
6.3 Multi-hop Network
In order to study the performance in a multi-hop environment, we use the same simulation setup described in Section 6.1, but with the following changes:
– A 500 meter × 500 meter field is considered for the 200 mobile CR users.
– The maximum transmission power is set to Pmax^(1) = Pmax^(2) = . . . = Pmax^(12) = 100 mW.
– Each CR user generates 2-KB data packets according to a Poisson process of rate λ. For each generated packet, the destination node is randomly selected to be any node in the network. We use a min-hop routing policy, but ignore the routing overhead. For all schemes (BMC-MAC, WFC-MAC, and DDMAC), the next-hop candidates are nodes within the transmission range of the transmitter.
The purpose behind these changes in the setup is to give rise to hidden terminals. Our simulations take into account the effect of the hidden-terminal problem due to imperfect control and inaccurate ACL at both the receiver and transmitter by considering the interference from active neighboring CR transmissions that use common channels (if any). As shown in Figures 6(a) and (b), WFC-MAC achieves a lower CR blocking rate and higher end-to-end network throughput than the other two protocols under
Fig. 6. CRN performance in multi-hop scenarios: (a) blocking rate vs. λ; (b) end-to-end throughput vs. λ; (c) Ep vs. λ
moderate and high traffic loads. On the other hand, under low traffic load, all protocols achieve comparable throughput performance. Figure 6(c) shows that BMC-MAC outperforms WFC-MAC and DDMAC in terms of Ep under different traffic loads. Similar fairness and channel usage properties to the single-hop scenarios are also observed here. Note that no single strategy is always best in all traffic regimes. Under light traffic, BMC-MAC provides the same throughput performance as WFC-MAC and DDMAC, but outperforms them in terms of Ep . However, under moderate and high traffic loads, WFC-MAC performs better in terms of throughput at the cost of Ep .
7 Conclusion
In this paper, we investigated the design of cooperative dynamic channel assignment for single-transceiver CR devices that employ adaptive power management. Our solutions attempt to maximize the network throughput as a primary objective, followed by minimizing energy consumption as a secondary objective. We first presented centralized and distributed channel assignment algorithms. For single-hop CRNs, we developed a CSMA-based MAC protocol with access window (AW) for exchanging control messages. Our AW-MAC realizes the optimal centralized channel assignment in a distributed manner. Based on our heuristic distributed assignment, we also developed a distributed, asynchronous MAC protocol (WFC-MAC) for multi-hop CRNs. We studied the performance of our protocols and contrasted them with two previously proposed MAC protocols (i.e., BMC-MAC and DDMAC). We showed that for single-hop CRNs, AW-MAC performs the best in terms of throughput and energy consumption under various traffic conditions. Under moderate-to-high traffic loads, AW-MAC achieves about 50% increase in throughput over BMC-MAC at no additional cost in energy. It achieves about 18% throughput improvement over DDMAC, with even less energy consumption and processing overhead. For multi-hop scenarios, our results show that WFC-MAC is the best strategy in terms of throughput at the cost of energy consumption under different traffic loads.
References
1. FCC, Spectrum Policy Task Force Report, ET Docket No. 02-155 (2002)
2. Bany Salameh, H., Krunz, M.: Channel access protocols for multihop opportunistic networks: Challenges and recent developments. IEEE Network, Special Issue on Networking over Multi-Hop Cognitive Networks (2009)
3. Bany Salameh, H., Krunz, M., Younis, O.: MAC protocol for opportunistic cognitive radio networks with soft guarantees. IEEE Transactions on Mobile Computing 8, 1339–1352 (2009)
4. Sabharwal, A., Khoshnevis, A., Knightly, E.: Opportunistic spectral usage: Bounds and a multi-band CSMA/CA protocol. IEEE/ACM Transactions on Networking 15, 533–545 (2007)
5. Bany Salameh, H.: Rate-maximization channel assignment scheme for cognitive radio networks. In: Proceedings of the IEEE GLOBECOM Conference (2010)
6. Yuan, Y., Bahl, P., Chandra, R., Chou, P., Ferrell, J., Moscibroda, T., Narlanka, S., Wu, Y.: KNOWS: Kognitive networking over white spaces. In: Proceedings of the IEEE DySPAN Conf., pp. 416–427 (2007)
7. Bany Salameh, H., Krunz, M., Younis, O.: Cooperative adaptive spectrum sharing in cognitive radio networks. IEEE/ACM Transactions on Networking (2010)
8. Bany Salameh, H., Krunz, M.: Spectrum sharing with adaptive power management for throughput enhancement in dynamic access networks. Technical Report TRUA-ECE-2009-1, University of Arizona (2009), http://www.ece.arizona.edu/krunz/Publications.htm/
9. Jain, N., Das, S., Nasipuri, A.: A multichannel CSMA MAC protocol with receiver-based channel selection for multihop wireless networks. In: Proceedings of the 9th Int. Conf. on Computer Communications and Networks (IC3N), pp. 432–439 (2001)
10. Behzad, A., Rubin, I.: Multiple access protocol for power-controlled wireless access nets. IEEE Transactions on Mobile Computing 3, 307–316 (2004)
11. Second Report and Order and Memorandum Opinion and Order, ET Docket No. 04-186; FCC 08-260 (2008)
12. Zhao, J., Zheng, H., Yang, G.H.: Distributed coordination in dynamic spectrum allocation networks. In: Proceedings of the IEEE DySPAN Conf., pp. 259–268 (2005)
13. Sedgewick, R.: Algorithms in C, Part 5: Graph Algorithms, 3rd edn. Addison-Wesley, London (2002)
14. Dendeit, V., Emmons, H.: Max-Min matching problems with multiple assignments. Journal of Optimization Theory and Application 91, 491–511 (1996)
15. Muqattash, A., Krunz, M.: POWMAC: A single-channel power control protocol for throughput enhancement in wireless ad hoc networks. IEEE Journal on Selected Areas in Communications 23, 1067–1084 (2005)
16. Acharya, A., Misra, A., Bansal, S.: MACA-P: A MAC for concurrent transmissions in multi-hop wireless networks. In: Proceedings of the First IEEE PerCom 2003 Conference, pp. 505–508 (2003)
17. Crow, B., Widjaja, I., Kim, J., Sakai, P.: IEEE 802.11 wireless local area networks. IEEE Communications Magazine 42, 116–126 (1997)
18. Bianchi, G.: Performance analysis of the IEEE 802.11 distributed coordination function. IEEE Journal on Selected Areas in Communications 18, 535–547 (2000)
19. Qualcomm Announces Sampling of the Industry's First Single-Chip Receive Diversity Device for Increased CDMA 2000 Network Capacity, http://www.qualcomm.com/press/releases/2005/050504-rfr6500.html
20. Kenwood TH-D7A Dual-Band Handheld Transceiver, http://www.kenwoodusa.com/Communications/Amateur-Radio/Portables/TH-D7AG
21. The Cisco Aironet 350 Series of wireless LAN, http://www.cisco.com/warp/public/cc/pd/witc/ao350ap
22. Mesquite Software Inc., http://www.mesquite.com
23. Jain, R.: The Art of Computer System Performance Analysis. John Wiley & Sons, New York (1991)
The Problem of Sensing Unused Cellular Spectrum
Daniel Willkomm¹, Sridhar Machiraju²,⋆, Jean Bolot³, and Adam Wolisz¹,⁴
¹ Telecommunication Networks Group, Technische Universität Berlin, Einsteinufer 25, 10587 Berlin, Germany [email protected]
² Google, Mountain View, CA 94043, USA [email protected]
³ Sprint, Burlingame, CA 94010, USA [email protected]
⁴ University of California, Berkeley, CA 94720, USA [email protected]
Abstract. Sensing mechanisms that estimate the occupancy of wireless spectrum are crucial to the success of approaches based on Dynamic Spectrum Access. In this paper, we present key insights into this problem by empirically investigating the design of sensing mechanisms applied to check the availability of excess capacity in CDMA voice networks. We focus on power-based sensing mechanisms since they are arguably the easiest and the most cost-effective. Our insights are developed using a unique dataset consisting of sensed power measurements in the band of a CDMA network operator as well as “ground-truth” information about primary users based on operator data. We find that although power at a single sensor is too noisy to help us accurately estimate unused capacity, there are well-defined signatures of call arrival and termination events. Using these signatures, we show that we can derive lower bound estimates of unused capacity that are both useful (non-zero) and conservative (never exceed the true value). We also use a combination of measurement data and analysis to deduce that multiple sensors are likely to be quite effective in eliminating the inaccuracies of single-sensor estimates. Keywords: Cognitive radio, spectrum sensing, dynamic spectrum access.
1 Introduction
Dynamic Spectrum Access (DSA) is often viewed as a remedy for the spectrum scarcity caused by existing static spectrum allocation schemes. In DSA-based approaches, primary users, who are often the licensed users of the spectrum, have strictly higher priority than secondary users, who must vacate the spectrum if and when it is needed by primary users. A fundamental problem in DSA is: how can secondary users know whether or not primary users are using spectrum? This problem has been studied most
The author was at Sprint when this work was done.
J. Domingo-Pascual et al. (Eds.): NETWORKING 2011, Part II, LNCS 6641, pp. 198–212, 2011. © IFIP International Federation for Information Processing 2011
The Problem of Sensing Unused Cellular Spectrum
199
frequently in the case of secondary usage of spectrum licensed to TV broadcasters [10]. For TV bands as well as other usage scenarios, secondary users need to reach a binary decision, namely, whether a primary user is present or not. In this paper, we address spectrum licensed to another set of primary users – cellular telephony operators. Our focus on cellular bands is justified given the significant recent interest in implementing DSA in these bands [2–5, 8]. Such interest is fueled by the large number of devices and networks using cellular bands. In addition, as VoIP services and wireless data networks proliferate, cellular voice bands may see reduced loads. Secondary usage of such spectrum is an attractive way by which providers can extract value. We study CDMA cellular bands since they are one of the most widely-used cellular technologies (along with GSM) and we had access to the ground truth of such a network. Since CDMA voice users share spectrum, the goal is not to detect whether a user is present or not but to identify the amount of unused spectrum capacity. Of course, even if there is unused spectrum capacity, secondary usage can have an impact on primary users. Hence, it is acceptable, perhaps even desirable, that some amount of spectrum is left un-utilized by secondary users. Thus, if secondary users want to use capacity X, we would like to check the availability of capacity Y > X + Δ beyond what is already used by primary users. Though understanding it is outside the scope of this paper, we do believe that – at the least – low-bandwidth applications (e.g., urban sensing) may be well placed to exploit unused capacity in CDMA voice networks. In this paper, we investigate how secondary users can estimate unused spectrum capacity by utilizing sensing mechanisms. In particular, we focus on what are arguably the simplest sensing mechanisms – those that are based on single sensors considering power alone. 
Not only are these mechanisms simple but they are also cheap (thus enabling large-scale secondary user deployment). We phrase our investigation in terms of the following problem: What information about primary user occupancy do spectrum measurements of power yield? Can this information be used to estimate the unused capacity that secondary users can exploit? To conduct our investigation, we leverage a large and unique dataset containing both spectrum measurements of transmitted power as well as the corresponding “ground truth”, i.e., the exact information about all the calls in progress at every point in time obtained from the call records collected at the network switches. In other words, we capture both the actual behavior of the primary users (calls in progress, which can only be measured inside the network) as well as the estimated behavior of the primary users, as would be measured by secondary sensing users. To our knowledge, this is the first study to execute and evaluate such synchronized measurements of sensing and ground-truth. Throughout this paper, we use the natural approach of estimating unused capacity by first estimating primary usage and subtracting it from the total capacity. We start by exploring if sensed power can be easily converted to the amount of primary usage. We find that this way of estimating primary usage is not
200
D. Willkomm et al.
practical. Therefore, we focus on investigating other, indirect ways of estimating unused capacity. In particular, we explore the possibility of identifying primary usage events (call arrivals or terminations). Using statistics of call durations that are published or well-known [7, 19], we can then convert the estimated intensity of event arrivals into estimates of primary usage. It turns out that sensed power, when averaged over time, contains distinct, well-defined power signatures corresponding to call arrival events and, to a lesser extent, call termination events. However, the noise in sensed power at any single time instant is large enough that we cannot exploit these signatures to accurately identify individual call events. We conclude that accurate estimation of primary usage (and hence unused capacity) using a single sensor is likely to be unachievable. Power sensed by a single sensor can nevertheless be of value to secondary users since it can provide estimates of unused capacity that are conservative (underestimates) yet useful (not equal to zero). We show that a single sensor can provide such useful underestimates because of the following reason: even though our event identification algorithms are hampered by noise, they can be suitably configured so that they never overestimate unused capacity but still yield useful estimates of unused capacity. For example, in the moderately-loaded cells that we monitored in our experiments, we can correctly estimate that at least 5% of total capacity is almost always available for secondary use without ever overestimating unused capacity. Thus, we can use power measurements from a single sensor to at least support a low-bandwidth secondary application. Finally, we show how accurate estimates of unused capacity can be obtained by using multiple sensors. Using our measurement-based characterization, we estimate the number of sensors required to achieve better accuracy.
2 Measurement Methodology
To evaluate if sensed power yields information about the underlying primary usage, we conducted a large number of sensing measurements spread over time and multiple locations. In addition, we simultaneously collected detailed information about primary usage. Thus, we were able to collect a large amount of unique data consisting of both measurements and ground truth. Our measurements were collected in the band used by a CDMA-based cellular operator, which is an important target for dynamic spectrum access as described in the introduction. In total, our experiments yielded approximately 90 GB of ground truth data on primary usage and 14 GB of data with spectrum power measurements. In this section, we describe our measurement methodology in detail as well as the salient aspects of the collected datasets.

2.1 Ground Truth on Primary Usage
To compute the ground-truth, we use call records collected at the switches of a CDMA-based cellular operator. These records capture the start time, duration,
Fig. 1. Dataset D1: (a) Sensed power over time for carriers C1 and C2. (b) Normalized load for strongest cell sector S1. (c) Normalized load for 2nd strongest cell sector S2.
initial and final sector as well as assigned carrier1 of the voice calls made in the network. The call duration reflects the RF emission time of the call and captures precisely what we want – the duration of primary usage. All timestamps were measured with a resolution of a few milliseconds. Using the call records, we calculate the ground-truth usage in each carrier of a cell sector. We split the call records based on the sector and carrier. We create two records for each call, corresponding to its initiation and termination. Then, we sort these records by time to obtain the sequence of call events known to have occurred in each sector/carrier. Using the sorted list of events, we also calculate the load of each sector and carrier. We do so by maintaining a running count of the number of ongoing calls: the count is incremented when a call begins and decremented when a call terminates. Since the switches only record the initial and final sector of each call, we are unable to account for the spectrum usage in other sectors that the user may have visited in between. This implies that our list of events is accurate but not necessarily complete (it may not count call initiation and termination events caused by handover). Similarly, for calculating load, we assign the whole call to the initial sector/carrier. We also tried other approximations of load, for example, assigning the first half of the call to the initial sector/carrier and the last half to the final sector/carrier. Since these approximations do not alter our results significantly, we do not present them. Furthermore, we believe that full mobility information is unlikely to change our results either.
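The ground-truth computation described above (splitting call records per sector/carrier, sorting initiation and termination events, and keeping a running count of ongoing calls) can be sketched in Python; the record fields used here are hypothetical names, not the operator's actual schema:

```python
from collections import defaultdict

def load_timeline(call_records):
    """Per-(sector, carrier) sequence of (event_time, ongoing_calls).

    Each record is assumed to carry 'start' and 'duration' in seconds plus
    'sector' and 'carrier'; as in the text, the whole call is assigned to
    its initial sector/carrier.
    """
    events = defaultdict(list)  # (sector, carrier) -> [(time, +1 or -1)]
    for r in call_records:
        key = (r['sector'], r['carrier'])
        events[key].append((r['start'], +1))                  # initiation
        events[key].append((r['start'] + r['duration'], -1))  # termination
    timelines = {}
    for key, evs in events.items():
        evs.sort()                      # chronological event sequence
        load, timeline = 0, []
        for t, delta in evs:
            load += delta               # running count of ongoing calls
            timeline.append((t, load))
        timelines[key] = timeline
    return timelines

records = [
    {'start': 0.0, 'duration': 60.0, 'sector': 'S1', 'carrier': 'C1'},
    {'start': 30.0, 'duration': 10.0, 'sector': 'S1', 'carrier': 'C1'},
]
tl = load_timeline(records)[('S1', 'C1')]
# tl: [(0.0, 1), (30.0, 2), (40.0, 1), (60.0, 0)]
```

Sorting the signed events once and accumulating makes the load a step function that can be sampled at any resolution.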
2.2 Power Sensing
We collected our power measurements using a W1314A Multi-band Wireless Measurement Receiver from Agilent [1]. We used the Model 110 so that we could sense the uplink and downlink CDMA bands (in the 1900 MHz frequency range) used by the network whose call records we had access to. This receiver captured, in real time, power measurements from all twelve 1.25 MHz carriers belonging to a
Each cell sector may be assigned one or more carriers – a 1.25 MHz portion of the spectrum. Each carrier is capable of supporting tens of calls.
single provider. The power measurements were reported two or three times a second on average. Unless stated otherwise, we convert this raw data into per-second averages (computed using the 2 or 3 readings). Our wireless measurement receiver is a sophisticated piece of equipment and can be viewed as capable of the most accurate power-based sensing that a secondary user could perform. We collected multiple datasets at four different urban locations (L1 to L4). For each of the locations, the power measurements were collected over multiple days and spanned all possible 1.25 MHz carriers of the CDMA band. We use C1, C2, . . . to refer to the various carriers. L1 was within line-of-sight and about 0.5 miles from the nearest base station. L2 was at a similar distance but without line-of-sight of the closest base station. In both L1 and L2, the antenna was placed close to a window on the second floor. L3 and L4 were about twice the distance (1 mile) from the closest base station, on the ground floor, and heavily shadowed from the closest base station. Our analysis showed the following trends: datasets collected at the same location (at different times) showed similar results. Furthermore, the datasets collected at locations L1 and L2 yielded better results than those collected at locations L3 and L4, due to the heavy shadowing at the latter locations. On account of this and space limitations, we provide results only for L1 and L2 in this paper. In particular, we use two datasets, which we label D1 and D2. D1 was collected over a period of 3 days at location L1 and D2 over a period of 4 days at location L2. In this paper, we present results for two carriers, C1 and C2, which were the most active carriers in the two locations. For each experiment, we also recorded the identities of the cell sectors with the strongest pilots. For each dataset, we refer to the cell sectors as S1, S2, . . . in decreasing order of pilot strength.
We observe that, for both of our datasets, this order reflected the distance of the sectors from the measurement locations. Usually a cell sector has activity in more than one carrier. We denote this by referring to the activity of sector 1 in carrier 1 with S1.1, in carrier 2 with S1.2, etc. Note that, for each of these cell sectors, we computed ground-truth (events and load) using the call records as described in Sect. 2.1.
3 Estimation via Power Thresholds
In this section, we examine how well sensed power is correlated with primary usage. We obtain these results by investigating the following question: can a simple scheme based on mapping sensed power information to load information work? We start with some preliminary observations and data analysis for synchronizing the power measurements and ground truth.

3.1 Power-Load Correlation
In Fig. 1, we plot the power sensed and network ground truth for dataset D1 (downlink). We plot results for the two strongest sectors S1 and S2 and two carriers C1 and C2 used by both sectors. For data confidentiality reasons, we normalize the load values by a randomly chosen number so that the absolute load
Fig. 2. Dataset D1: Crosscorrelation of sensed power and ground truth load

Fig. 3. Dataset D1: Distribution of power levels when the load varies. (a) Daytime. (b) Nighttime. We divide the observed load levels into three levels.
is obfuscated while preserving the trends. Notice that the day/night variations of the load are clearly visible. This is not surprising, since it corresponds to levels of human activity and has also been observed in prior work [9, 19]. The plots showing the sensed power also illustrate a distinct diurnal pattern, with higher power levels during the day and lower levels at night. Cross-correlation is a simple and well-known metric that we can use to measure the extent to which power "tracks" load. We plot the cross-correlation between sensed power and load for several lags in Fig. 2. The maximum is not reached at lag 0, since the clocks used for collecting call records and power measurements are only synchronized to within a few seconds of each other. In addition, for each sector/carrier, we see peaks separated by roughly one day. These local maxima are caused by the underlying diurnal pattern of the load. Overall, it is clear that the sensed power is indeed well correlated with load. It turns out that the cross-correlation curves for all sector/carriers reach a maximum at the same lag of 13 seconds, as shown in Fig. 2. We observed maxima at a similar lag with dataset D2 as well. For the rest of this paper, we therefore use this lag to synchronize the power measurements and call records.
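The lag search used for synchronization can be sketched as follows; this is a minimal normalized cross-correlation scan over synthetic series, not the exact procedure used on the datasets:

```python
import numpy as np

def best_lag(power, load, max_lag=60):
    """Return the lag (in samples) maximizing the normalized
    cross-correlation between two equally sampled series."""
    p = (power - power.mean()) / power.std()
    l = (load - load.mean()) / load.std()

    def xcorr(lag):
        # mean of p[i + lag] * l[i] over the overlapping region
        if lag >= 0:
            a, b = p[lag:], l[:len(l) - lag]
        else:
            a, b = p[:lag], l[-lag:]
        return np.mean(a * b)

    return max(range(-max_lag, max_lag + 1), key=xcorr)

rng = np.random.default_rng(0)
load = rng.random(1000)
# synthetic "sensed power" that trails the load by 13 samples, plus noise
power = np.roll(load, 13) + 0.01 * rng.standard_normal(1000)
lag = best_lag(power, load)  # recovers a lag of (about) 13
```

Once the maximizing lag is known, shifting one series by that amount aligns the power measurements with the call records, as done in the paper.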
3.2 Naive Threshold-Based Scheme
We start investigating whether there is a unique mapping of power level to sector load with Fig. 3(a). We plot the distribution of the sensed power levels for various coarse-grained levels of load during peak hours of the day (12PM to 6PM). As expected, the power levels tend to increase with increasing load. But observe that the same power levels are often seen for different values of load. Moreover, when the load is low, the power levels are more spread out. This indicates that it is challenging to distinguish small changes in load using the sensed power level alone. Power levels are better able to separate the coarse-grained measures of load at night (12AM to 6AM), as shown in Fig. 3(b). Note that, since the load at night is never high, we only show two levels. These levels are relatively well separated: for example, the power level is below −70 dBm for 60% of the time when the load is low, as opposed to 5% of the time when the load is medium. Such power-based thresholds separating coarse-grained load levels such as low, medium and high
Fig. 4. Dataset D1: 10-second t-average plots for (downlink) power for C1 (top) and C2 (bottom) during (a) call initiation events and (b) call termination events.
are likely to depend on location. For example, in separate short experiments, we found that the average power levels close (about 100 m) to a base station were around −50 dBm. Though we did not find much variation at a given location within a few days, it is unclear how long thresholds at a location remain valid. To summarize, load estimation based on power thresholds (that are gleaned on a per-location basis) can provide coarse-grained information about load especially at night. Though fine-grained information is difficult to extract, such coarse-grained information might be sufficient to decide whether or not to start secondary usage. The above threshold-based scheme is a black-box estimation of load in the sense that no information is required from the cellular operator.
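A minimal sketch of such a per-location threshold scheme; the threshold values below are illustrative only, loosely based on the nighttime figures discussed above:

```python
def coarse_load(power_dbm, thresholds=(-70.0, -68.0)):
    """Map a sensed power reading (dBm) to a coarse load level.

    `thresholds` holds the (low/medium, medium/high) boundaries and must
    be calibrated per location; the defaults here are made up for
    illustration, not taken from the datasets.
    """
    low_max, medium_max = thresholds
    if power_dbm < low_max:
        return 'low'
    if power_dbm < medium_max:
        return 'medium'
    return 'high'

level = coarse_load(-71.0)  # below the low/medium boundary -> 'low'
```

Because the boundaries are gleaned per location (and possibly per time of day), such a classifier is only as good as its most recent calibration.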
4 Event Signatures
During our analysis, we found that sensed power contains information about the "first derivative", i.e., the change of primary usage load. These changes are typically due to call initiation and termination in a CDMA voice network: the sensed power in our datasets often shows jumps and drops when events occur. To understand whether such event signatures exist, and to characterize them, we examine all initiation and termination events spread across multiple days in our datasets. Since the average power across these days may not be stationary, we rely on what we refer to as t-average plots: we extract the power for a short time period before and after each event and average the sensed power across all initiation and termination events. This allows us to look at time-averaged characteristics of power around events even in the absence of stationarity – a key advantage. In Fig. 4, we show t-average plots capturing the average behavior for 10 seconds before and after the call initiation and termination events of dataset D1. In this figure, we consider only the downlink bands. We notice two key patterns for the events involving the strongest cell sector S1:

– On average, call initiation events are characterized by a spike of about 0.3 − 0.4 dBm, followed by a general increase of power afterwards that is
Fig. 5. Dataset D2: 10-second t-average plots for (downlink) power for C1 (top) and C2 (bottom) during (a) call initiation events and (b) call termination events.
around 0.05 − 0.1 dBm. This trend is clearly seen during both daytime and nighttime (not shown). We believe that the spike is easily explained by the CDMA downlink power control loop [18]: this loop ensures that, when a call starts, the base station transmits with high power; the power level is then reduced to a minimal level, while maintaining call quality, using the rapid closed-loop power control of CDMA. The subsequent increase of 0.05 − 0.1 dBm reflects the additional power due to the new call.

– On average, call termination events are characterized by a dip of at least 0.05 − 0.1 dBm (a bit higher during nighttime) immediately after the event. This matches our intuition that a call termination corresponds to less power being emitted by the base station.
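The t-average construction used above can be sketched as follows, assuming per-second power samples and event times given as sample indices (both synthetic here):

```python
import numpy as np

def t_average(power, event_times, window=10):
    """Average per-second power over +/-`window` seconds around each
    event time (indices into `power`), skipping events too close to
    the edges of the series."""
    segments = [power[t - window : t + window + 1]
                for t in event_times
                if window <= t < len(power) - window]
    return np.mean(segments, axis=0)

# idealized series: flat power with a 0.4 dBm spike at each event second
power = np.zeros(100)
power[[20, 50, 80]] += 0.4
avg = t_average(power, [20, 50, 80], window=10)
# avg has 21 entries; the spike survives averaging at the center index
```

Averaging across many events suppresses non-stationary background variation while preserving any structure that is time-locked to the events, which is exactly why the signatures emerge.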
The absence of signatures in uplink power is not surprising given that the average power levels are about 25 − 30 dBm lower than the downlink power measurements. Such lower power levels are likely due to the stricter power budget of end-user devices. Also, the sources of uplink power are end-user devices, which are spatially distributed. We expect to see signatures when such
devices are nearby. To verify this hypothesis, we conducted “active” experiments by initiating phone calls using a mobile handset located near our power sensor. When these calls were initiated, we did observe identifiable spikes in power similar to the downlink initiation signature. These experiments confirm that uplink sensing is of use only if the sensor is close to all end-user devices. Since this is physically impossible, we do not further investigate uplink sensing in this paper.
5 Event Detection
In the previous section, we found well-defined signatures corresponding to call initiation and termination events on the downlink CDMA channels. In this section, we show that such average-case signatures do not, however, translate into algorithms for accurate event detection.

5.1 Discriminators of Initiation Signature
The t-average plots discussed in the previous section indicate that call initiation can potentially be identified by detecting spikes in the sensed power. Referring to the t-average plots of Fig. 4, we see that there are roughly three time periods: before, during, and after call initiation. The spike occurs during the call initiation and is significantly higher than the power before and after. We also expect the power after call initiation to be higher than before. Based on the above discussion, we consider three intuitive discriminators of initiation signatures. We use P(·) to denote the average power over a time interval and T to represent the second when a call is initiated. We divide a contiguous period of time around T into three periods: the call initiation period consisting of a window of w seconds including and after T, a window of w⁻ seconds prior to this period, and a window of w⁺ seconds after the call initiation period. We calculate the average power in each of these three periods and define the criteria for the three discriminators as follows:

1. The difference between the power in the call initiation period and the power in the period before call initiation exceeds a threshold τ1, i.e., P([T, T+w−1]) − P([T−w⁻, T−1]) ≥ τ1.

2. The difference between the power during the call initiation period and the power thereafter exceeds a threshold τ2, i.e., P([T, T+w−1]) − P([T+w, T+w+w⁺−1]) ≥ τ2.

3. The difference between the power after and before call initiation exceeds a threshold τ3, i.e., P([T+w, T+w+w⁺−1]) − P([T−w⁻, T−1]) ≥ τ3.
An advantage of using a window of several seconds in each period might be the potential reduction in the variance of the estimated power. At the same time, larger periods may be polluted by other call initiation and termination events.
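A sketch of the three discriminators on synthetic per-second power samples; the default window sizes (w = 1, w⁻ = w⁺ = 4) are illustrative, not prescribed by the text:

```python
import numpy as np

def discriminators(power, T, w=1, w_minus=4, w_plus=4):
    """Average power differences between the three periods around a
    candidate initiation second T, following the three criteria."""
    during = np.mean(power[T : T + w])                 # initiation period
    before = np.mean(power[T - w_minus : T])           # period before
    after = np.mean(power[T + w : T + w + w_plus])     # period after
    return {
        'crit1': during - before,   # compare against tau1
        'crit2': during - after,    # compare against tau2
        'crit3': after - before,    # compare against tau3
    }

# synthetic trace: 0.4 dBm spike at T=10, slight power increase afterwards
p = np.array([-67.6] * 10 + [-67.2] + [-67.5] * 10)
d = discriminators(p, T=10)
# crit1 = 0.4, so a threshold tau1 = 0.3 dBm would fire on this event
```

Each discriminator reduces to a single subtraction of window means, which is why the choice of window sizes and thresholds dominates its behavior.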
Fig. 6. Dataset D1: Fraction of initiation events and non-events that satisfy the discriminators defined by (a) Criterion 1, (b) Criterion 2, and (c) Criterion 3.
To better understand the impact of the various parameters, including the window sizes and thresholds, we rely on so-called cCDF (complementary CDF) plots. Consider the first discriminator. Recall that it looks at the power difference between the periods during and before call initiation and expects this difference to exceed a threshold when a call is initiated. A cCDF plot shows if this discriminator is justified by plotting the distribution of the difference for all T when a call was initiated and comparing it with the distribution for all T when a call was not initiated. Since we are interested in the number of cases in which the difference exceeds a threshold, we plot the cCDF (the CDF subtracted from 1) and experiment with several choices of w and w⁻ (plots not shown due to lack of space). The higher we choose τ1 to be, the fewer call initiation events satisfy the criterion. However, as we make τ1 smaller, more seconds during which no call was initiated satisfy the criterion. The sweet spot appears to be at around 0.4 − 0.5 dBm, below which the latter increase faster than the former. We also get marginally better results when we define the period during call initiation as consisting of exactly 1 second. Larger windows do not improve our results; it seems that the power during the additional seconds is not as high as that of the first second. Hence, any potential variance reduction from the additional power measurements comes at the cost of eliminating the signature itself. The size w⁻ of the period before call initiation impacts the results to a lesser extent, leading to marginally better results with a window of size 4. Using the cCDF plots for the other two discriminators, we find that they exhibit similar behavior, namely, smaller windows are better. A threshold of 0.3 − 0.5 dBm appears to be a good value for τ2. However, the third criterion is not as useful, since the difference in power before and after call initiation is not as clear.
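The cCDF comparison can be sketched as follows; the power differences below are invented for illustration:

```python
import numpy as np

def ccdf(values, thresholds):
    """Complementary CDF: fraction of `values` at or above each threshold."""
    values = np.asarray(values)
    return [float(np.mean(values >= t)) for t in thresholds]

# hypothetical crit1 differences (dBm) for event and non-event seconds
event_diffs = [0.5, 0.6, 0.3, 0.45, 0.8]
nonevent_diffs = [0.0, -0.1, 0.2, 0.05, -0.3]

hit_rate = ccdf(event_diffs, [0.4])       # events passing tau1 = 0.4
false_rate = ccdf(nonevent_diffs, [0.4])  # non-events passing tau1 = 0.4
```

Plotting both cCDFs over a range of thresholds exposes exactly the trade-off the text describes: raising the threshold suppresses false alarms but also drops true initiation events.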
The success of our criteria may vary with time, which we explore in Fig. 6. We choose the thresholds corresponding to the 0.5 and 0.2 cCDF quantiles. We then apply the criteria with the respective thresholds on an hourly basis and plot the fraction of initiation events and "non-events" satisfying them in Fig. 6. As expected, the fraction of initiation events satisfying each criterion does not vary significantly and stays around the quantile value (0.5 or 0.2) used to choose
Fig. 7. Dataset D1: (a) Fraction of initiation events identified by the detectors. (b) Fraction of non-events declared as initiation events by the detectors based on the 3 criteria with varying threshold values.

Fig. 8. Dataset D1: Probability of estimating at least X% of unused capacity.
the thresholds. However, the fraction of "non-events" passing the criteria shows a clear dip during nighttime for both thresholds. This implies significantly better performance during nighttime, on account of less noise in power.

5.2 Initiation Detectors
We now investigate how the various discriminators and their criteria can be used for the best possible detection of initiation events. Due to lack of space, we do not focus on detecting call terminations, since discriminators based on call termination events do not perform as well as those based on call initiation. We quantify discriminator performance by looking at the detection probability, i.e., the probability of detecting an initiation event given that a call was really initiated. In Fig. 7(a), we plot the detection probability when each of our three discriminators is used on dataset D1. For each detector, we vary the corresponding threshold from −1 to 2.5 dBm. As expected, the detection probability decreases with increasing thresholds. In Fig. 7(b), we show the probability of detecting a call initiation event although no call was initiated. Comparing Fig. 7(a) and Fig. 7(b) reveals a common problem with energy detection: choosing a threshold τ1 that achieves a high detection probability results in many non-events being declared as initiation events, and vice versa. For example, using criterion 1, a threshold of τ1 = 0 dBm results in detecting close to 75% of the initiation events, but also in mistakenly declaring about 50% of the non-events as initiations. In contrast, a threshold of τ1 = 0.5 dBm declares only 10% of the non-events as initiations but fails to detect 70% of the initiations.
6 Estimating Unused Capacity
In this section, we discuss how we can use power sensed by a single sensor to derive useful estimates of unused capacity. Specifically, we find lower bounds for the unused capacity so that – at least – low-bandwidth applications can utilize it. Note that these estimates are useful because they are significantly greater than
zero. We compute lower bounds by using the call initiation detectors to estimate call arrival rates. We then estimate the load in the system using Little's law as E[k] = λE[b], where E[b] is the mean call duration and λ the estimated arrival rate. We do assume that we have partial information about the system being studied, namely, the average call duration. Such information can be obtained from previous studies [7, 19] (since mean call durations are quite stable over time) or directly from providers. Since we use such information, we refer to this as a gray-box approach (in between black-box and white-box approaches). In Fig. 8 we show how well we achieve our goal of finding unused spectrum to satisfy the secondaries' bandwidth requirements. We show results for dataset D1 and criterion 1. We divide our dataset into hour-long time periods and calculate average values within these time periods. Remember that it is crucial to never overestimate the unused capacity. Using a single threshold for the whole dataset, the maximum threshold would be τ ≤ 0.5 dBm, resulting in X = 5% of capacity being correctly identified as unused in more than half of the time periods (figure not shown). If we use different thresholds for daytime (9am to 10pm) and nighttime (11pm to 8am) time periods, we can improve performance significantly. The maximum thresholds so that we never overestimate are then τ ≤ 0.7 dBm and τ ≤ 0.5 dBm, respectively. These are indicated by the solid black lines in Fig. 8. This figure also shows that, using the maximum daytime threshold (τ = 0.7 dBm), we correctly estimate that 5% (50%) of total capacity is unused almost always (in 14% of the daytime time periods). For the maximum nighttime threshold (τ = 0.5 dBm), we correctly estimate that 50% (80%) of total capacity is unused in 75% (46%) of the nighttime hours. For dataset D2, the results are similar, though the maximum thresholds (so that we never overestimate) are closer to 1 dBm.
These results show that power measurements at a single sensor can be quite useful especially for secondary applications such as urban sensing, which have low bandwidth demands but strict power constraints. Given the differences we observed between D1 and D2, local calibration might be necessary to choose the appropriate power thresholds.
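The gray-box estimate can be sketched as follows; all numbers (arrival counts, mean call duration, per-carrier capacity) are illustrative, since the paper does not disclose absolute capacities:

```python
def estimate_unused_capacity(detected_arrivals, hours, mean_call_s,
                             carrier_capacity):
    """Gray-box lower-bound estimate of unused capacity via Little's law,
    E[k] = lambda * E[b].  `carrier_capacity` (maximum simultaneous calls
    per carrier) is a hypothetical figure; Sect. 2 only says a carrier
    supports tens of calls."""
    lam = detected_arrivals / (hours * 3600.0)  # estimated arrivals/second
    mean_load = lam * mean_call_s               # E[k], mean ongoing calls
    # A low detection threshold over-fires (false positives), which inflates
    # lam and mean_load, keeping the unused-capacity estimate conservative.
    return max(0.0, 1.0 - mean_load / carrier_capacity)

# 360 detected arrivals in one hour, 100 s mean call duration, capacity 40:
frac_free = estimate_unused_capacity(360, 1, 100.0, 40)
# lambda = 0.1 calls/s -> E[k] = 10 calls -> 75% of capacity estimated free
```

The mean call duration E[b] is the only operator-side input, which is what makes the approach gray-box rather than white-box.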
7 Towards Spatial Diversity
An alternative approach is to improve the accuracy of event detection. Clearly, this is hard to achieve with a single sensor, because the signatures of individual events are obscured by additive white noise, which makes accurate detection very hard, especially in the low-SNR regime [15, 17]. The natural way to improve event detection accuracy is to use multiple sensors in spatially diverse locations so that the white noise can be averaged out. There are various proposals for cooperative spectrum sensing approaches in the literature, e.g., [13, 14, 16]. We now use our dataset to quantify the benefits of cooperative sensing. In particular, we consider the approach of soft decision combining as described in [13] and make the natural assumption of zero-mean additive Gaussian noise with variance σ². With k distributed (and independent) sensors, the white noise in the average sensed power can be approximated, by the Central Limit Theorem, as a zero-mean normal variable N(0, σ²/k) with variance σ²/k.
210
D. Willkomm et al.
We calculate the quality of event detection under the above model: Assume that a call is initiated at time T and that we use the first criterion of Sect. 5 with w = w⁻ = 1. At time T, the criterion shows a power spike of about 0.4 dBm plus the difference between the white noise at T and T − 1. Assuming that the white noise at these time instants is independent, their difference is a zero-mean normal variable with variance 2σ²/k. Hence, the criterion will have a false negative with probability Pfn and a false positive with probability Pfp, where

Pfn = P( N(0, 2σ²/k) + 0.4 ≤ τ ),    Pfp = P( N(0, 2σ²/k) ≥ τ ),

and τ is the decision threshold. Given a maximum tolerable false positive and false negative probability, we can solve the above to estimate the minimum number of sensors (k) required. For example, for Pfn ≤ p and Pfp ≤ n we get:

√k ≥ max( q_{1−p} √2 σ / (0.4 − τ),  q_{1−n} √2 σ / τ ).

Here, q_y is the value at which the quantile of the standard normal distribution is y. We estimate σ using D1. Since we do not want to capture temporal mean variations, we remove the moving average of the previous 5 seconds from each power reading. This yields an estimate σ = 0.39, with which we can achieve less than 10% false negatives and false positives using k = 13 sensors and τ = 0.2 dBm. In the previous section, we saw that it may be desirable to have a much smaller fraction of false negatives than false positives. It turns out that we can achieve at most 1% false negatives and 10% false positives with k = 26 sensors and τ = 0.25 dBm. Deploying such numbers of sensors per cell could very well be economical, especially if existing consumer devices can be leveraged.
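Under the Gaussian model above, the minimum sensor count can be evaluated numerically. The following sketch (our own; `min_sensors` is not from the paper, and the exact form of the bound follows our reading of the text) uses the standard normal quantile from Python's standard library and reproduces the k = 13 figure for 10% error probabilities:

```python
from math import ceil, sqrt
from statistics import NormalDist

def min_sensors(p_fn, p_fp, sigma, tau, spike=0.4):
    """Minimum number of independent sensors k so that, under zero-mean
    Gaussian noise with per-sensor std sigma and a power spike of `spike`
    dBm at call initiation, the threshold-tau detector has false-negative
    probability <= p_fn and false-positive probability <= p_fp."""
    q = NormalDist().inv_cdf
    # sqrt(k) >= q_{1-p} * sqrt(2) * sigma / (spike - tau)  (false negatives)
    # sqrt(k) >= q_{1-n} * sqrt(2) * sigma / tau            (false positives)
    k_fn = (q(1 - p_fn) * sqrt(2) * sigma / (spike - tau)) ** 2
    k_fp = (q(1 - p_fp) * sqrt(2) * sigma / tau) ** 2
    return ceil(max(k_fn, k_fp))

# With sigma = 0.39 (estimated from D1), tau = 0.2 dBm and 10% tolerable
# error probabilities, this reproduces the k = 13 sensors reported above.
print(min_sensors(0.10, 0.10, 0.39, 0.2))  # -> 13
```

Tightening the false-negative tolerance (smaller p) increases the required sensor count, as expected.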
8
Related Work
In recent years, many measurement studies [6, 11, 12] have been carried out to show the under-utilization of the licensed spectrum. Though these studies show the abundance of temporally unused spectrum, they give little insight into the dynamic behavior of the licensed users legally operating in those bands. The authors of [9] estimate the load in the New York cellular bands (CDMA as well as GSM) based on spectrum measurements. However, in addition to pure power measurements, the CDMA signals are demodulated to determine the number of active Walsh codes (i.e., the number of ongoing calls). To determine the number of calls in the GSM bands, image processing of spectrogram snapshots is used. This is in contrast to our study, which is based on power measurements and uses minimal processing. In addition, to the best of our knowledge, there is no study which correlates spectrum measurements with the actual load as recorded by the system. Sensing for TV bands has been studied previously, for example in [10], where energy detection with multiple sensors was found to often outperform feature detection. However, those results are for the relatively static TV bands and not for cellular bands.
The Problem of Sensing Unused Cellular Spectrum
9
Conclusions
We used a unique set of simultaneous sensing and network measurements to study the problem of sensing-based estimation of unused capacity in cellular spectrum. When averaged over time, we found well-defined signatures of call initiation and termination events using the power at a single sensor. However, sensing noise makes it challenging to use these signatures to estimate unused capacity by identifying call events. We found that useful underestimates can nevertheless be computed especially for low-bandwidth secondary applications. Alternatively, we can obtain accurate estimates by using multiple sensors. To our knowledge, our work is the first detailed study of how well sensing works in CDMA networks, which are often viewed as candidates for DSA. In the future, we intend to design and evaluate better sensing algorithms including those based on multiple sensors.
References
[1] Agilent: W1314A datasheet, http://www.agilent.com
[2] Daoud, A.A., Alanyali, M., Starobinski, D.: Secondary pricing of spectrum in cellular CDMA networks. In: Proc. of IEEE DySPAN 2007 (2007)
[3] Alyfantis, G., Marias, G., Hadjiefthymiades, S., Merakos, L.: Non-cooperative dynamic spectrum access for CDMA networks. In: Proc. of IEEE GLOBECOM 2007 (2007)
[4] Buddhikot, M., Ryan, K.: Spectrum management in coordinated dynamic spectrum access based cellular networks. In: Proc. of IEEE DySPAN 2005 (2005)
[5] Chen, D., Yin, S., Zhang, Q., Liu, M., Li, S.: Mining spectrum usage data: a large-scale spectrum measurement study, pp. 13–24 (2009)
[6] Chiang, R., Rowe, G., Sowerby, K.: A quantitative analysis of spectral occupancy measurements for cognitive radio. In: Proc. of VTC Spring 2007 (2007)
[7] Guo, J., Liu, F., Zhu, Z.: Estimate the call duration distribution parameters in GSM system based on K-L divergence method. In: Proc. of WiCom 2007 (2007)
[8] Hamouda, S., Hamdaoui, B.: Dynamic spectrum access in heterogeneous networks: HSDPA and WiMAX. In: Proc. of IWCMC 2009 (2009)
[9] Kamakaris, T., Buddhikot, M., Iyer, R.: A case for coordinated dynamic spectrum access in cellular networks. In: Proc. of IEEE DySPAN 2005 (2005)
[10] Kim, H., Shin, K.G.: In-band spectrum sensing in cognitive radio networks: energy detection or feature detection? In: Proc. of ACM MobiCom 2008 (2008)
[11] MacDonald, J.T.: A survey of spectrum occupancy in Chicago. Tech. rep., Illinois Institute of Technology (2007)
[12] McHenry, M.A., Tenhula, P.A., McCloskey, D., Roberson, D.A., Hood, C.S.: Chicago spectrum occupancy measurements & analysis and a long-term studies proposal. In: Proc. of TAPAS 2006 (2006)
[13] Mishra, S.M., Sahai, A., Brodersen, R.W.: Cooperative sensing among cognitive radios. In: Proc. of IEEE ICC 2006 (2006)
[14] Pham, H.N., Zhang, Y., Engelstad, P.E., Skeie, T., Eliassen, F.: Optimal cooperative spectrum sensing in cognitive sensor networks. In: Proc. of IWCMC 2009 (2009)
[15] Sahai, A., Tandra, R., Mishra, S.M., Hoven, N.: Fundamental design tradeoffs in cognitive radio systems. In: Proc. of TAPAS 2006 (2006)
[16] Sun, C., Zhang, W., Letaief, K.B.: Cooperative spectrum sensing for cognitive radios under bandwidth constraints. In: Proc. of IEEE WCNC 2007 (2007)
[17] Tandra, R., Sahai, A.: SNR walls for signal detection. IEEE J. Sel. Topics Signal Process. 2(1), 4–17 (2008)
[18] Vanghi, V., Damnjanovic, A., Vojcic, B.: The CDMA2000 System for Mobile Communications: 3G Wireless Evolution. Prentice Hall PTR, Englewood Cliffs (2004)
[19] Willkomm, D., Machiraju, S., Bolot, J., Wolisz, A.: Primary users in cellular networks: a large-scale measurement study. In: Proc. of IEEE DySPAN 2008 (2008)
Adaptive Transmission of Variable-Bit-Rate Video Streams to Mobile Devices Farid Molazem Tabrizi, Joseph Peters, and Mohamed Hefeeda School of Computing Science, Simon Fraser University 250-13450 102nd Ave, Surrey, BC, Canada {fma20,peters,mhefeeda}@cs.sfu.ca
Abstract. We propose a novel algorithm to efficiently transmit multiple variable-bit-rate (VBR) video streams from a base station to mobile receivers in wide-area wireless networks. The algorithm transmits video streams in bursts to save the energy of mobile devices. A key feature of the new algorithm is that it dynamically controls the buffer levels of mobile devices receiving different video streams according to the bit rate of the video stream being received by each device. Our algorithm is adaptive to changes in the bit rates of video streams and allows the base station to transmit more video data on time to mobile receivers. We have implemented the proposed algorithm as well as two other recent algorithms in a mobile video streaming testbed. Our extensive analysis and results demonstrate that the proposed algorithm outperforms the other two algorithms: it results in higher energy saving for mobile devices and fewer dropped video frames. Keywords: Mobile Video Streaming, Mobile Broadcast Networks, Energy Saving.
1
Introduction
Due to advances in mobile devices such as increased computing power and screen size, the demand for mobile multimedia services has been increasing in recent years [1]. However, video streaming to mobile devices still has many challenges that need to be addressed. For example, mobile devices are small and can only be equipped with small batteries that have limited lifetimes. Thus, conserving the energy of mobile devices during streaming sessions is needed to prolong the battery lifetime and enable users to watch videos for longer periods. Another challenge for mobile video is the limited wireless bandwidth in wide-area wireless networks. The wireless bandwidth is not only limited, but it is also quite expensive. For instance, Craig Wireless System Ltd. agreed to sell one quarter of its wireless spectrum to a joint venture of Rogers Communication and Bell Canada for $80 million [2], and AT&T sold a 2.5 GHz spectrum to Clearwire Corporation in a $300 million transaction [3]. Thus, for commercially viable mobile video services, network operators should maximize the utilization of their license-based wireless spectrum bands.
J. Domingo-Pascual et al. (Eds.): NETWORKING 2011, Part II, LNCS 6641, pp. 213–224, 2011.
© IFIP International Federation for Information Processing 2011
214
F. Molazem Tabrizi, J. Peters, and M. Hefeeda
In this paper, we consider the problem of multicasting multiple video streams from a wireless base station to many mobile receivers over a common wireless channel. This problem arises in wide-area wireless networks that offer multimedia content using multicast and broadcast services, such as DVB-H (Digital Video Broadcast-Handheld) [4], ATSC M/H (Advanced Television Systems Committee-Mobile/Handheld) [5], WiMAX [6], and 3G/4G cellular networks that enable the Multimedia Broadcast Multicast Services (MBMS) [7]. We propose a new algorithm to efficiently transmit the video streams from the base station to mobile receivers. The transmission of video streams is done in bursts to save the energy of mobile devices. Unlike previous algorithms in the literature, e.g., [8,9,10,11,12], the new algorithm adaptively controls the buffer levels of mobile devices receiving different video streams. The algorithm adjusts the buffer level at each mobile device according to the bit rate of the video stream being received by that device. The algorithm also uses variable-bit-rate (VBR) video streams, which are statistically multiplexed to increase the utilization of the expensive wireless bandwidth. By dynamically adjusting the receivers' buffers, our algorithm enables finer control of the wireless multicast channel and allows the base station to transmit more video data on time to mobile receivers. We have implemented our new algorithm in an actual mobile video streaming testbed that fully complies with the DVB-H standard. We have also implemented the recent algorithm proposed in [8] and an algorithm used in some commercial base stations for broadcasting VBR video streams [13], which we know through private communication [14]. Our empirical results demonstrate the practicality of our new algorithm. Our results also show that our algorithm outperforms the other algorithms as it delivers more video frames on time to mobile receivers and achieves high energy saving for mobile receivers.
The rest of this paper is organized as follows. We review related work in Section 2. In Section 3, we describe the wireless network model that we consider and we state the problem addressed in this paper. We present the proposed algorithm in Section 4. We empirically evaluate our algorithm and compare it against others in Section 5. We conclude the paper in Section 6.
2
Related Work
Energy saving at mobile receivers using burst transmission (aka time slicing) has been studied in [9, 10]. Simulations are used in these studies to show that time slicing can improve energy saving for mobile receivers. However, no burst transmission algorithms are presented. Burst transmission algorithms for constant-bit-rate (CBR) video streams are proposed in [11, 15]. These algorithms cannot handle VBR video streams with fluctuating bit rates. In this paper, we consider the more general VBR video streams, which can achieve better visual quality and bandwidth utilization [16]. In [12], we presented a burst transmission technique designed for scalable video streams. The algorithm in [12] adjusts the number of transmitted quality layers based on the available bandwidth. However, unlike the algorithm presented in this paper, our previous algorithm does not support
VBR streams, nor does it dynamically adjust the receivers' buffers. In addition, the new algorithm in this paper is designed for single-layer VBR video streams, which are currently the most common in practice. Transmitting VBR video streams over a wireless channel while avoiding buffer overflow and underflow at mobile devices is a difficult problem [17]. Rate smoothing is one approach to reducing the complexity of this problem. The smoothing algorithms in [18, 19] reduce the complexity by controlling the transmission rate of a single video stream to produce a constant-bit-rate stream. The minimum requirements of rate smoothing algorithms in terms of playback delay, lookahead time, and buffer size are discussed in [20]. The on-line smoothing algorithm in [21] reduces the peak bandwidth requirements for video streams. However, none of these smoothing algorithms is designed for mobile multicast/broadcast networks with limited-energy receivers. A different approach to handling VBR video streams is to employ joint rate control algorithms. For example, Rezaei et al. [22] propose an algorithm to assign bandwidth to video streams proportional to their coding complexities. This algorithm, however, requires expensive joint video encoders. A recent algorithm for transmitting VBR video streams to mobile devices without requiring joint video encoders is presented in [8]. This algorithm, called SMS, performs statistical multiplexing of video streams. SMS divides the receivers' buffers into two equal parts, one for receiving data and the other for playing out video. The proposed algorithm in this paper outperforms SMS because it dynamically controls the wireless channel and the buffer levels of the receivers. This allows the base station to exploit the varying nature of VBR streams in order to transmit more video frames on time. The VBR transmission algorithms deployed in practice are simple heuristics.
For example, in the Nokia Mobile Broadcast Solution (MBS) [13,14], the operator determines a bit rate value for each video stream and a time interval based on which bursts are transmitted. The time interval is calculated on the basis of the size of the receiver buffers and the largest bit rate among all video streams, and this time interval is used for all video streams to avoid buffer overflow instances at the receivers. In each time interval, a burst is scheduled for each video stream based on its bit rate. In practice, it is difficult for an operator to assign bit rate values to VBR video streams that achieve good performance while avoiding buffer underflow and overflow instances at the receivers. In this paper, we compare our proposed algorithm to the SMS algorithm [8] (which represents the state-of-the-art in the literature) as well as to the algorithm used in the Nokia Mobile Broadcast Solution (which represents one of the state-of-the-art algorithms in practice).
3
System Model and Problem Statement
We study the problem of transmitting several video streams from a wireless base station to a large number of mobile receivers. We focus on multicast and broadcast services enabled in many recent wide-area wireless networks such as DVB-H [4], MediaFLO [23], WiMAX [6], and 3G/4G cellular networks that
Fig. 1. The wireless network model considered in this paper
offer Multimedia Broadcast Multicast Services (MBMS) [7]. In such networks, a portion of the wireless spectrum can be set aside to concurrently broadcast multiple video streams to many mobile receivers. Since the wireless spectrum in wide-area wireless networks is license-based and expensive, maximizing the utilization of this spectrum is important. To achieve high bandwidth utilization, we employ the variable-bit-rate (VBR) model for encoding video streams. Unlike the constant-bit-rate (CBR) model, the VBR model allows statistical multiplexing of video streams [8], and yields better perceived video quality [16]. However, the VBR model makes video transmission much more challenging than the CBR model in mobile video streaming networks [17]. Mobile receivers are typically battery powered. Thus, reducing the energy consumption of mobile receivers is essential. To save energy, we employ the burst transmission model for transmitting video streams, in which the base station transmits the data of each video stream in bursts with a bit rate higher than the encoding bit rate of the video. The burst transmission model allows a mobile device to save energy by turning off its wireless interface between the reception of two bursts [15, 17]. The arrival time of a burst is included in the header of its preceding burst. Thus, the clocks at mobile receivers do not need to be synchronized. In addition, each receiver is assumed to have a buffer to store the received data. Figure 1 shows a high-level depiction of the system model that we consider. To achieve the burst transmission of video streams described in Figure 1, we need to create a transmission schedule which specifies for each stream the number of bursts, the size of each burst, and the start time of each burst. Note that only one burst can be transmitted on the broadcast channel at any time. 
The problem we address in this paper is to design an algorithm to create a transmission schedule for bursts that yields better performance than current algorithms in the literature. In particular, we study the problem of broadcasting S VBR video streams from a base station to mobile receivers in bursts over a wireless channel of bandwidth R Kbps. The base station runs the transmission scheduling algorithm every Γ sec; we call Γ the scheduling window. The base station receives the video data belonging to video streams from streaming servers and/or reads it from local video databases. The base station aggregates video data for Γ sec. Then, it computes for each stream s the required number of bursts. We denote the size of burst k of
video stream s by b_k^s (Kb), and the transmission start time for it by f_k^s sec. The end time of the transmission of burst k of stream s is f_k^s + b_k^s/R sec. After computing the schedule, the base station will start transmitting bursts in the next scheduling window. Each burst may contain multiple video frames. We denote the size of frame i of video stream s by l_i^s (Kb). Each video frame i has a decoding deadline, which is i/F, where F is the frame rate (fps). The goals of our scheduling algorithm are: (i) maximize the number of frames delivered on time (before their decoding deadlines) for all video streams, and (ii) maximize the average energy saving over all mobile receivers. We define the average energy saving as γ = (1/S) Σ_{s=1}^{S} γ_s, where γ_s is the fraction of time the wireless interfaces of the receivers of stream s are turned off.
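The energy-saving metric can be illustrated with a short sketch (our own; it assumes each burst costs a fixed wake-up overhead To plus its air time, and ignores overlapping wake-up windows):

```python
def energy_saving(bursts, total_time, overhead=0.1, rate_kbps=1000.0):
    """Fraction of time a receiver's radio can stay off, given the bursts
    scheduled for its stream.

    bursts: list of (start_time_sec, size_kb) pairs. Each burst occupies
    size/rate seconds on air, and the radio wakes up `overhead` (To)
    seconds before each burst to initialize and lock onto the channel."""
    on_time = sum(overhead + size / rate_kbps for _, size in bursts)
    return max(0.0, 1.0 - on_time / total_time)

# Illustrative numbers: one 250 Kb burst per second on a 1 Mbps channel
# with To = 100 ms means the radio is on 0.35 s of every second.
gamma = energy_saving([(t, 250.0) for t in range(10)], 10.0)
```

This makes the trade-off discussed later concrete: more, smaller bursts pay the To overhead more often and therefore reduce γ.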
4
Proposed Algorithm
4.1
Overview
We start by presenting an overview of the proposed algorithm and then we present the details. We have proved that our algorithm produces near-optimal energy saving and that it runs in time O(N S), where S is the number of video streams and N is the total number of control points defined for the video streams, but we have omitted the proofs due to space limitations. We propose a novel algorithm, which we call the Adaptive Data Transmission (ADT) algorithm, to solve the burst transmission problem for the VBR video streams described in Section 3. The key idea of the algorithm is to adaptively control the buffer levels of mobile devices receiving different video streams using dynamic control points. The buffer level at each mobile device is adjusted as a function of the bit rate of the video stream being received by that device. Since we consider VBR video streams, the bit rate of each video is changing with time according to the visual characteristics of the video. This means that the buffer level at each mobile device is also changing with time. The receiver buffer level is controlled through the sizes and timings of the bursts transmitted by the base station in each scheduling window. The sizes and timings of the transmitted bursts are computed by the proposed ADT algorithm at dynamically defined control points. Dynamic control points provide flexibility to the base station so that the algorithm has more control over the bandwidth when the bit rates of video streams increase. This results in transmitting more video data on time to mobile receivers. The ADT algorithm defines control points at which it makes decisions about which stream should have access to the wireless medium and for how long it should have access. Control points for each video stream are determined based on a parameter α, where 0 < α < 1. This parameter is the fraction of B, the receiver’s buffer, that is played out between two control points of a video stream. 
The parameter α can change dynamically between scheduling windows but is the same for all video streams. At a given control point, the base station selects a stream and computes the buffer level of the receivers for the selected stream and can transmit data as long as there is no buffer overflow at the receivers.
218
F. Molazem Tabrizi, J. Peters, and M. Hefeeda
Our algorithm is designed for broadcast networks where there is no feedback channel, so we cannot know the buffer capacity of every receiver. We assume B to be the minimum buffer capacity of the receivers. Therefore, the receivers have a buffer capacity of at least B Kb, and our solution still works if some of them have larger buffer capacities. For small values of α, control points are closer to each other (in time), which results in smaller bursts. This gives the base station more flexibility when deciding which video stream should be transmitted to meet its deadline. That is, the base station has more opportunities to adapt to the changing bit rates of the different VBR video streams being transmitted. For example, the base station can quickly transmit more bursts for a video stream experiencing a high bit rate in the current scheduling window and fewer bursts for another stream with a low bit rate in the current scheduling window. This dynamic adaptation increases the number of video frames that meet their deadlines from the high-bit-rate stream while not harming the low-bit-rate stream. However, smaller bursts may result in less energy saving for the mobile receivers. Each time a wireless interface is turned on, it incurs an energy overhead because it has to wake up shortly before the arrival of the burst to initialize its circuits and lock onto the radio frequency of the wireless channel. We denote this overhead by To, which is on the order of msec depending on the wireless technology.
4.2
Details
The proposed ADT algorithm is to be run by the wireless base station to schedule the transmission of S video streams to mobile receivers. The algorithm can be called periodically every scheduling window of length Γ sec, and whenever a change in the number of video streams occurs. We define several variables that are used in the algorithm. Each video stream s is coded at F fps. We assume that mobile receivers of video streams have a buffer capacity of B Kb. We denote the size of frame i of video stream s by l_i^s (Kb). We denote the time that the data in the buffer of a receiver of stream s can be played out by d_s sec. This means that it takes d_s sec until the buffer of a receiver of video stream s is drained. We use the parameter M (Kb) to indicate the maximum size of a burst that can be scheduled for a video stream in our algorithm when there are no other limitations such as buffer size. In some wireless network standards, there might be limitations on the value of M. A control point is a time when the scheduling algorithm decides which stream should be assigned a burst. A control point for video stream s is set every time the receiver has played out αB Kb of video data. Let us assume that the algorithm is currently computing the schedule for the time window tstart to tstart + Γ sec. The algorithm defines the variable t_schedule and sets it equal to tstart. Then, the algorithm computes bursts one by one and keeps incrementing t_schedule until it reaches the end of the current scheduling window, i.e., tstart ≤ t_schedule ≤ tstart + Γ. For instance, if the algorithm schedules a burst of size 125 Kb on a 1 Mbps channel, then the length of this burst will be 0.125 sec and the algorithm increments t_schedule by 0.125 sec. The number of video
frames of video stream s that are sent until time t_schedule is denoted by m_s. The number of frames of stream s that are played out at the receiver side of stream s until time t_schedule is p_s = min(m_s, t_schedule × F). If p_s < m_s, then frames p_s + 1 to m_s are still in the receivers' buffers. If p_s = m_s, then the buffers of the receivers of video stream s are empty. In this case, the receivers of video stream s are waiting for data and, if m_s < t_schedule × F, some video frames were not received on time. We define the playout deadline for stream s as:

d_s = (m_s − p_s)/F.    (1)

The next control point h_s of stream s after time t_schedule will be when the receivers of stream s have played out αB Kb of video data. We compute the number of frames g_s corresponding to this amount as follows:

Σ_{i=p_s}^{p_s+g_s} l_i^s ≤ αB < Σ_{i=p_s}^{p_s+g_s+1} l_i^s.    (2)

The control point h_s is then given by:

h_s = t_schedule + g_s/F.    (3)
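This per-stream bookkeeping can be sketched as follows (our own rendering of the playout deadline and control-point computation; the exact summation bounds follow our reading of the text, and the default parameter values merely mirror the setup used elsewhere in the paper):

```python
def playout_state(frame_sizes_kb, m_s, t_schedule, F=30.0, alpha=0.25,
                  B_kb=4000.0):
    """Playout deadline d_s (eq. 1) and next control point h_s (eq. 3)
    for one stream, given that m_s frames were sent by time t_schedule."""
    p_s = min(m_s, int(t_schedule * F))      # frames already played out
    d_s = (m_s - p_s) / F                    # eq. (1): buffered playout time
    # eq. (2): largest g_s such that the next g_s frames fit in alpha*B Kb
    g_s, acc = 0, 0.0
    for size in frame_sizes_kb[p_s:]:
        if acc + size > alpha * B_kb:
            break
        acc += size
        g_s += 1
    h_s = t_schedule + g_s / F               # eq. (3)
    return d_s, h_s

# Illustrative stream: 300 frames of 40 Kb each, 120 frames sent by t = 2 s.
d_s, h_s = playout_state([40.0] * 300, m_s=120, t_schedule=2.0)
```

With αB = 1000 Kb and 40 Kb frames, the next control point falls 25 frames (25/30 s) ahead of t_schedule.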
The high-level pseudo-code of the ADT algorithm is given in Figure 2. The algorithm works as follows. At each control point, the algorithm finds the stream s with the closest deadline d_s. Then it finds the stream s' with the closest control point h_s'. The algorithm schedules a burst for stream s until the control point h_s', provided this does not exceed the maximum burst size M or the available buffer space at the receivers of s. Otherwise, the size of the new burst is set to the minimum of M and the available buffer space. The algorithm repeats the above steps until there are no more bursts to be transmitted from the video streams in the current scheduling window, or until t_schedule exceeds tstart + Γ and there remains data to be transmitted. The latter case means that some frames will not be transmitted. In this case, the algorithm tries to find a better schedule by increasing the number of control points to introduce more flexibility. This is done by dividing α by 2. After decreasing α, the algorithm resets t_schedule to tstart and computes a new schedule. The algorithm keeps decreasing α until it reaches a preset minimum value αmin or until a schedule with all frames transmitted on time is found. If α is reduced in a scheduling window, then it is gradually increased in the following scheduling windows.
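For illustration, here is a much-simplified, runnable rendering of this scheduling loop (ours, not the authors' implementation: the dynamic α adaptation and per-frame deadlines are omitted, and the control-point spacing is reduced to capping each burst at αB):

```python
def adt_schedule(streams, R_kbps, window, alpha, B_kb, M_kb):
    """Greedy earliest-deadline-first burst scheduler for one window.

    streams: dicts with 'id', 'rate_kbps' (playout rate), and 'total_kb'
    (video data to deliver). Returns a list of (start_sec, id, size_kb)."""
    for s in streams:
        s['sent'] = 0.0                       # Kb already transmitted
        s['buf'] = 0.0                        # Kb buffered at the receiver

    def drain(dt):                            # receivers play out continuously
        for s in streams:
            s['buf'] = max(0.0, s['buf'] - s['rate_kbps'] * dt)

    t, bursts = 0.0, []
    while t < window:
        pending = [s for s in streams if s['sent'] < s['total_kb']]
        if not pending:
            break
        servable = [s for s in pending if s['buf'] <= B_kb - 1.0]
        if not servable:                      # all buffers ~full: wait a tick
            drain(0.01)
            t += 0.01
            continue
        # earliest deadline = least playout time left in the receiver buffer
        s = min(servable, key=lambda x: x['buf'] / x['rate_kbps'])
        size = min(alpha * B_kb, M_kb, B_kb - s['buf'],
                   s['total_kb'] - s['sent'])
        bursts.append((t, s['id'], size))
        dt = size / R_kbps                    # air time of this burst
        drain(dt)
        s['sent'] += size
        s['buf'] += size
        t += dt
    return bursts
```

Even this stripped-down version exhibits the intended behavior: a stream whose buffer drains faster (higher bit rate) is selected more often and receives more bursts.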
5
Evaluation
5.1
Testbed and Setup
The testbed that we used to evaluate our algorithm consists of a base station, mobile receivers, and data analyzers. The base station includes an RF signal modulator which produces DVB-H standard-compliant signals. The signals are
Adaptive Data Transmission (ADT) Algorithm
// Input: S VBR video streams
// Output: Burst schedule to transmit S video streams
1.  compute control points and deadlines for each stream s
2.  while there is a video stream to transmit {
3.    create a new scheduling window from tstart to tstart + Γ
4.    while the current scheduling window is not complete {
5.      pick video stream s having the earliest deadline
6.      schedule a burst until the next control point or until the buffer is full
7.      update t_schedule based on the length of the scheduled burst
8.      // gradually increase α if it was reduced in previous scheduling windows
9.      if α < αmax and α was not reduced in the current scheduling window
10.       update α (increase linearly)
11.     update d_s and h_s (control point) for video stream s
12.     if there is a stream s' which is late and α > αmin
13.       // go back within the scheduling window and reschedule bursts
14.       update t_schedule to tstart
15.       update α (decrease by a factor of 2)
16.     else
17.       move to the next control point greater than or equal to t_schedule
18.   }
19. }

Fig. 2. The proposed transmission scheduling algorithm
amplified to around 0 dB before transmission through a low-cost antenna to provide coverage to approximately 20m for cellular phones. The mobile receivers in our testbed are Nokia N96 cell phones that have DVB-H signal receivers and video players. Two DVB-H analyzers are included in the testbed system. The first one, a DiviCatch RF T/H tester [24], has a graphical interface for monitoring detailed information about the received signals, including burst schedules and burst jitters. The second analyzer, a dvbSAM [25], is used to access and analyze received data at the byte level to monitor the correctness of the received content. We implemented our ADT algorithm, the SMS algorithm [8], and the Nokia Mobile Broadcast Solution (MBS) [13, 14] in the testbed and integrated them with the IP encapsulator of the transmitter. We set the modulator to a QPSK (Quadrature Phase Shift Keying) scheme and a 5 MHz radio channel, and the overhead To was set to 100 msec. We fixed the maximum receiver buffer size B to be 4 Mb (500 KB). We prepared a test set of 17 diverse VBR video streams to evaluate the algorithms. The different content of the streams (TV commercials, sports, action movies, documentaries) provided a wide range of video characteristics, with average bit rates ranging from 25 kbps to 750 kbps and a ratio of maximum frame size to minimum frame size of nearly 300. Each video stream
Fig. 3. Total number of dropped video frames
Fig. 4. Minimum and maximum number of dropped video frames
Fig. 5. Dropped video frames over 1 sec periods: (a) ADT with α = 0.50, (b) ADT with α = 0.20, (c) SMS
was played at 30 fps and had a length of 566 sec. We transmitted the 17 VBR video streams concurrently to the receivers and collected detailed statistics from the analyzers. Each experiment was repeated for each of the three algorithms (ADT, SMS, and MBS).
5.2
Results for Dropped Video Frames
Dropped frames are frames that are received at the mobile receivers after their decoding deadlines or not received at all. The number of dropped frames is an important quality of service metric as it impacts the visual quality and smoothness of the received videos. Figure 3 shows the cumulative total over all video streams of the numbers of dropped frames for ADT with fixed values of α ranging from 0.10 to 0.50, and for SMS and MBS. The figure clearly shows that our ADT algorithm consistently drops significantly fewer frames than the SMS and MBS algorithms. The figure also shows the effect of decreasing the value of α for our ADT algorithm. The total number of dropped frames decreases from 24,742 with α = 0.50 to 7,571 with α = 0.20. No frames are dropped when α is reduced to 0.10. On the other hand, the SMS algorithm is significantly worse with 58,450 dropped frames. The results for MBS in Figure 3 were obtained by running the algorithm for each video stream with different assigned bit rates ranging from
0.25 times the average bit rate to 4 times the average bit rate of the video stream, and then choosing the best result for each video stream. Even in this best-case total, the number of dropped frames is more than 75,700. In practice, an operator heuristically chooses the assigned bit rates for video streams, so the results in practice will likely be worse. We counted the number of dropped frames for each video stream to check whether the ADT algorithm improves the quality of some video streams at the expense of others. A sample of our results is shown in Figure 4; the results for other values of α are similar. The curve in the figure shows the average over all streams of the percentage of dropped frames; each point on the curve is the average percentage of frames transmitted up to that point in time that were dropped. The bars show the ranges over all video streams. As shown in the figure, the difference between the maximum and minimum dropped frame percentages at the end of the transmission period is very small. Therefore, the ADT algorithm does not sacrifice the quality of service of some streams to achieve good aggregate results. We further analyzed the patterns of dropped video frames for each algorithm by plotting the total number of dropped frames during each 1 sec interval across all video streams. Some samples of our results are shown in Figure 5. For the ADT algorithm, reducing α from 0.50 to 0.20 resulted in finer control over the bandwidth allocation and significantly fewer dropped frames. Further reducing α to 0.10 eliminated all dropped frames, as we have already seen in Figure 3. The SMS algorithm, on the other hand, dropped up to 72% of the frames during the period in which the aggregate bit rate of all streams is high.

5.3 Results for Energy Saving
We compute the average energy saving γ achieved across all video streams, which represents the average amount of time that the wireless interface is in off mode. This is done based on the formulation for average energy saving described in Section 3. The results are shown in Figure 6. The figure also shows the impact of changing α on the energy saving achieved by the ADT algorithm. The average over all video streams of the energy saving is 87.59% for the ADT algorithm when α = 0.10, which is approximately the same as for the SMS algorithm. Increasing α to 0.50 increases the energy saving to 93.08%. The improvement of 5.49% in energy saving is non-trivial, but it might not be large enough in many practical situations to offset the advantage of minimizing the number of dropped frames by setting α = 0.10. Also, our experiments show that the energy saving achieved by ADT is considerably higher than that of MBS, which achieves an average energy saving of 49%. We measured the energy saving for the receivers of each individual stream to check whether the ADT algorithm unfairly saves more energy for some receivers at the expense of others. A sample of our results is shown in Figure 7. The figure confirms that ADT does not sacrifice energy saving for some streams to achieve good average energy saving.
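The energy saving metric can be illustrated with a small sketch. This is not the paper's code; the function name, the burst-schedule representation, and the overhead parameter are our own illustrative assumptions (the paper's exact formulation is in its Section 3). The saving is simply the fraction of time the wireless interface can stay off between bursts.

```python
# Illustrative sketch only: average energy saving for one receiver from a
# hypothetical burst schedule; names and parameters are invented here.

def energy_saving(bursts, total_time, overhead=0.0):
    """Fraction of time the wireless interface can stay off.

    bursts: list of (start, duration) pairs during which the radio must
            be on to receive the stream.
    overhead: assumed extra on-time per burst (e.g. wake-up/sync).
    """
    on_time = sum(duration + overhead for _, duration in bursts)
    return 1.0 - on_time / total_time

# A receiver that wakes for 20 ms in every 200 ms period of one second
# keeps its radio off roughly 90% of the time:
bursts = [(t / 1000.0, 0.020) for t in range(0, 1000, 200)]
saving = energy_saving(bursts, total_time=1.0)  # ~0.9
```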
Fig. 6. Average energy saving

Fig. 7. Min and max energy saving

6 Conclusions
We have presented a new algorithm for transmitting multiple VBR video streams to energy-constrained mobile devices. The algorithm is to be used by wireless base stations in wide-area wireless networks that offer multimedia services in broadcast/multicast modes, such as the MBMS (Multimedia Broadcast Multicast Service) of 3G/4G cellular networks, WiMAX networks, and DVB-H (Digital Video Broadcast–Handheld) networks. One of the novel aspects of the proposed algorithm is its ability to dynamically adjust the levels of receivers' buffers according to the bit rates of the video streams being received by each receiver. We presented a proof-of-concept implementation of the proposed algorithm in a mobile video streaming testbed. We also compared the new algorithm to recent algorithms from the literature and to algorithms used in practice. We conducted an extensive empirical analysis using a large number of VBR video streams with diverse visual characteristics and bit rates. Our results show that the proposed algorithm yields high energy saving for mobile receivers and reduces the number of video frames that miss their deadlines. The results also demonstrate that the proposed algorithm outperforms the current state-of-the-art algorithms.
Multiscale Fairness and Its Application to Resource Allocation in Wireless Networks Eitan Altman, Konstantin Avrachenkov, and Sreenath Ramanath INRIA, 2004 Route des Lucioles, BP 93, 06902 Sophia Antipolis, France {eitan.altman,konstantin.avrachenkov,sreenath.ramanath}@sophia.inria.fr http://www-sop.inria.fr/
Abstract. Fair resource allocation is usually studied in a static context, in which a fixed amount of resources is to be shared. In dynamic resource allocation one usually tries to assign resources instantaneously so that the average share of each user is split fairly. The exact definition of the average share may depend on the application, as different applications may require averaging over different time periods or time scales. Our main contribution is to introduce new refined definitions of fairness that take into account the time over which one averages the performance measures. We examine how the constraints on the averaging durations impact the amount of resources that each user gets. Keywords: Resource allocation; Multiscale fairness; α-fairness; T-scale fairness; Networking.
1 Introduction
Let us consider some set S of resources that we wish to distribute among I users by assigning user i a subset Si of it. We shall be interested in allocating subsets of the resource fairly among the users. The set S may actually correspond to one or to several resources. We shall consider standard fairness criteria for sharing the resources among users. We shall see, however, that the definition of a resource will have a major impact on the fair assignment. We associate with each user i a measurable function xi that maps each point in S to some real number. Then, we associate with each i a utility ui which maps all measurable subsets Si to the set of real numbers. We shall say that S is a resource if ui(Si) can be written for each Si ⊂ S as

ui(Si) = f( ∫(Si) xi(s) ds ).
As an example, consider I mobiles that wish to connect to a base station between 9h00 and 9h10 using a common channel. The time interval is divided into discrete time slots whose number is N. Assume that the utility for each mobile i of receiving a subset of Ni slots depends only on the number Ni of slots it receives. Then the set of N slots is considered to be a resource.

J. Domingo-Pascual et al. (Eds.): NETWORKING 2011, Part II, LNCS 6641, pp. 225–237, 2011. © IFIP International Federation for Information Processing 2011
Next assume that if mobile i receives the channel at time slot t then it can transmit at a throughput of Xi(t). Assume that the utility of user i is a function of the total throughput it has during this fraction of an hour. Then again the N slots are considered as a resource. We adopt the idea that fair allocation should not be defined in terms of the object that is split but in terms of the utility that corresponds to the assignments. This is in line with the axiomatic approach for defining the Nash bargaining solution, for example. With this in mind, we may discover that the set of N slots cannot always be considered as a resource to be assigned fairly. Indeed, a real-time application may consider the N slots as a set of n resources, each containing B = N/n consecutive slots. A resource may correspond to the number of time slots during a period of 100 msec. The utility of the application is defined as a function of the instantaneous rate, i.e. the number of slots it receives during each period of 100 msec. (With a playout buffer that can store 100 msec of voice packets, the utility of the mobile depends only on how many slots are assigned to it during 100 msec and not on which slots are actually assigned to it.)

Related work: Our work is based on the α-fairness notion introduced in [1]. This, as well as other fairness notions, can be defined through a set of axioms, see [2]. This paper is inspired by several papers which already observed or derived fairness at different time scales [3,4,5,6,7,8]. However, we would like to mention that the T-scale fairness (a unifying generalization of long- and short-term fairness) and multiscale fairness are new concepts introduced in the present work.

Structure of the paper: In Section 2, we introduce a resource sharing model which is particularly suitable for wireless applications. We also define several fairness criteria. In Section 3, we apply these new concepts to study spectrum allocation in fading channels.
Section 4 concludes the paper and provides avenues for future research.
2 Resource Sharing Model and Fairness Definitions
Consider n mobiles located at points x1, x2, ..., xn, respectively. We assume that the utility Ui of mobile i depends on its location xi and on the amount of resources si it gets. Let S be the set of assignments; an assignment s ∈ S is a function from the vector x to a point in the n-dimensional simplex. Its ith component, si(x), is the fraction of the resource assigned to mobile i.

Definition 1. An assignment s is α-fair if it is a solution of

Z(x, s, α) := max(s) Σ(i) Zi(xi, si, α)  such that  Σ(i) si = 1, si ≥ 0 ∀i = 1, ..., n,   (1)

where

Zi(xi, si, α) := (Ui(xi, si))^(1−α) / (1−α)  for α ≠ 1, and
Zi(xi, si, α) := log(Ui(xi, si))  for α = 1.
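For linear utilities Ui = si qi (see Definition 4 below), problem (1) with a single shared resource has the closed form si ∝ qi^((1−α)/α). A small sketch can check this against a brute-force grid search; this is purely illustrative, and the function names are ours, not the paper's.

```python
# Illustrative sketch (not from the paper): the alpha-fair split of one
# resource among users with linear utilities U_i = s_i * q_i.
import math

def alpha_fair_split(q, alpha):
    # Closed form: s_i proportional to q_i^((1-alpha)/alpha).
    w = [qi ** ((1 - alpha) / alpha) for qi in q]
    return [wi / sum(w) for wi in w]

def fairness_index(s, q, alpha):
    # The objective of Definition 1 for linear utilities.
    if alpha == 1:
        return sum(math.log(si * qi) for si, qi in zip(s, q))
    return sum((si * qi) ** (1 - alpha) / (1 - alpha) for si, qi in zip(s, q))

q, alpha = [2.0, 8.0], 2.0
s_star = alpha_fair_split(q, alpha)            # [2/3, 1/3]
# Brute-force check over a fine grid of feasible two-user splits:
grid = [i / 10000 for i in range(1, 10000)]
s_bf = max(grid, key=lambda x: fairness_index([x, 1 - x], q, alpha))
assert abs(s_bf - s_star[0]) < 1e-3
```

For α = 1 the weights become equal (proportional fairness gives an equal split here), while for large α the split approaches si ∝ 1/qi, which equalizes the utilities si qi, i.e. max-min fairness.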
We shall assume throughout that Ui is non-negative, strictly increasing, and concave in si. Then for any α > 0, Zi(xi, si, α) is strictly concave in si. We conclude that Z(x, s, α) is strictly concave in s for any α > 0 and therefore there is a unique solution s*(α) to (1).

Definition 2. [1] We call Zi(xi, si, α) the fairness utility of mobile i under si, and we call Z(x, s, α) the instantaneous degree of α-fairness under s.

In applications, the state X will be random, so that the instantaneous amount of resource assigned by an α-fair allocation will also be a random variable. Thus, in addition to instantaneous fairness, we shall be interested in the expected amount assigned by being fair at each instant.

Definition 3. We call E[Z(s, X, α)] the expected instantaneous degree of α-fairness under s.

In Section 2.1 we introduce the expected long-term fairness, in which the expected amount of resource is assigned fairly.

Definition 4. We say that a utility is linear in the resource if it has the form Ui(xi, si) := si qi(xi).

For example, consider transmission between a mobile source and a base station, and assume (i) that the base station is at the origin (x = 0) but at a height of one unit, whereas all mobiles are on the ground and have height 0, so that the distance between the base station and a mobile located on the ground at point x is sqrt(1 + ||x||²); and (ii) that the Shannon capacity can be used to describe the utility. If the resource that is shared is the frequency, then the utility has the linear form

U(C, x) := C q(x), with q(x) = log( 1 + P (||x||² + 1)^(−β/2) / σ² ).

2.1 Fairness over Time: Instantaneous versus Long-Term α-Fairness
Next we consider the case where xi(t), i = 1, ..., n, may change in time.

Definition 5. We define an assignment to be instantaneous α-fair if at each time t each mobile is assigned a resource so as to be α-fair at that instant.

Consider the instantaneous α-fair allocation and assume that time is discrete. We thus compute the instantaneous α-fair assignment over a period of T slots as the assignment that maximizes (for α ≠ 1)

Σ(i=1..n) (Ui(xi(t), si(t)))^(1−α) / (1−α)  for every t = 1, ..., T.

This is equivalent to maximizing

Σ(t=1..T) Σ(i=1..n) (Ui(xi(t), si(t)))^(1−α) / (1−α).   (2)

For α = 1, we replace (Ui(xi(t), si(t)))^(1−α) / (1−α) by log[Ui(xi(t), si(t))]. The optimization problem (2) corresponds to the α-fair assignment problem in which there are nT players instead of n players, where the utility of player i = kn + j (k = 0, ..., T − 1, j = 1, ..., n) is defined as Ui(xi, si) = Uj(xj(k + 1), sj(k + 1)).

Remark 1. Thus the expected instantaneous fairness criterion in the stationary and ergodic case regards assignments at different time slots of the same player as if it were a different player at each time slot! Note that when considering the proportional fair assignment, the resulting assignment is the one that maximizes Π(i=1..n) Π(t=1..T) Ui(xi(t), si(t)).

Definition 6. Assume that the state process X(t) is stationary ergodic. Let λ be the stationary probability measure of X(0). The long-term α-fairness index of an assignment s ∈ S of a stationary process X(t) is defined as

Zλ(s) := Σ(i=1..n) Zλi(s), with Zλi(s) = (Eλ[Ui(Xi(0), si(X(0)))])^(1−α) / (1−α).

An assignment s is long-term α-fair if it maximizes Zλ(s) over s ∈ S. As we see, instead of attempting to have a fair assignment of the resources at every t, it is the expected utility in the stationary regime that one assigns fairly according to the long-term fairness. Under stationarity and ergodicity conditions on the process X(t), this amounts to an instantaneous assignment of the resources in a way that the time-average amounts allocated to the users are α-fair.

2.2 Fairness over Time: T-scale α-Fairness
Next we define fairness concepts that are in between the instantaneous and the expected fairness. They are related to fairness over a time interval T. Either continuous time is considered, or discrete time where time is slotted and each slot is of one time unit. Below, we shall understand the integral to mean summation whenever time is discrete.
Definition 7. The T-scale α-fairness index of s ∈ S is defined as

ZT(s) := Σ(i=1..n) ZTi(s), with ZTi = ( (1/T) ∫(0..T) Ui(Xi(t), si(X(t))) dt )^(1−α) / (1−α).

The expected T-scale α-fairness index is its expectation. An assignment s is T-scale α-fair if it maximizes ZT(s) over s ∈ S.

Definition 8. The T-scale expected α-fairness index of s ∈ S is defined as

Z̄T(s) := Σ(i=1..n) Z̄Ti(s), with Z̄Ti = ( (1/T) ∫(0..T) E[Ui(Xi(t), si(X(t)))] dt )^(1−α) / (1−α).

Assume that the state process is stationary ergodic. Then for any assignment s ∈ S we have, by the Strong Law of Large Numbers,

lim(T→∞) (1/T) ∫(0..T) Ui(Xi(t), si(X(t))) dt = Eλ[Ui(Xi(0), si(X(0)))]  P-a.s.

Hence, for every i and s, we have P-a.s.

lim(T→∞) ZTi(s) = lim(T→∞) ( (1/T) ∫(0..T) Ui(Xi(t), si(X(t))) dt )^(1−α) / (1−α) = (Eλ[Ui(Xi(0), si(X(0)))])^(1−α) / (1−α) = Zλi(s).

Assume that Ui is bounded. Then ZTi is bounded uniformly in T. Bounded convergence then implies that

lim(T→∞) E[ZTi(s)] = Zλi(s).   (3)

Theorem 1. Assume that the convergence in (3) is uniform in s. Let s*(T) be the T-scale α-fair assignment and let s* be the long-term α-fair assignment. Then the following holds:
– s* = lim(T→∞) s*(T);
– for any ε > 0, s* is an ε-optimal assignment for the T-scale criterion for all T large enough;
– for any ε > 0, s*(T) is an ε-optimal assignment for the long-term fairness for all T large enough.

Proof. According to [9], any accumulation point of s*(T) as T → ∞ is an optimal solution to the problem of maximizing Zλ over S. Due to the strict concavity of Zλ in s, this problem has a unique solution, which coincides with every accumulation point of s*(T). This implies the first statement of the theorem. The other statements follow from Appendices A and B in [9].
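The convergence behind Theorem 1 can be seen in a toy computation (illustrative only; the values and the deterministic channel are our own, not the paper's setup): for an ergodic state sequence and a fixed assignment, the T-scale index approaches the long-term index as T grows. We use an alternating good/bad channel, whose stationary distribution is (1/2, 1/2), so the limit is exact.

```python
# Illustrative sketch: the T-scale index Z_T of a fixed assignment
# approaches the long-term index Z_lambda as the horizon T grows.
# The utility values u_good, u_bad are invented.

def t_scale_index(utilities, alpha):
    avg = sum(utilities) / len(utilities)        # (1/T) * sum_t U(t)
    return avg ** (1 - alpha) / (1 - alpha)

alpha, u_good, u_bad = 2.0, 8.0, 2.0
z_long = ((u_good + u_bad) / 2) ** (1 - alpha) / (1 - alpha)

gaps = []
for T in (11, 101, 1001):                        # odd horizons, so Z_T != Z_long
    trace = [u_good if t % 2 == 0 else u_bad for t in range(T)]
    gaps.append(abs(t_scale_index(trace, alpha) - z_long))

assert gaps[0] > gaps[1] > gaps[2]               # the gap shrinks with T
```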
2.3 Fairness over Different Time Scales: Multiscale Fairness
We consider real-time (RT) and non-real-time (NRT) traffic. The resource allocation policy for RT traffic is instantaneous-fair, while for NRT traffic it is expected-fair. The available resources are divided amongst the RT and NRT traffic so as to guarantee a minimum quality of service (QoS) requirement for the RT traffic and to keep the service time as short as possible for the NRT traffic. The real-time traffic would like the allocation to be instantaneously α-fair. For α > 0, this guarantees that at any time it receives a strictly positive allocation. The non-real-time traffic does not need to receive a positive amount of allocation at each instant. It may prefer the resources to be assigned according to the T-scale α-fair assignment, where T may be of the order of the duration of the connection. Moreover, different non-real-time applications may have different fairness requirements. For instance, a bulk FTP transfer may prefer fairness over a longer time scale than a streaming application. In order to be fair, we may assign part (say half) of the resource according to instantaneous α-fairness and the rest of the resources according to T-scale α-fairness. We thus combine fairness over different time scales. We may now ask how to choose what part of the resource should be split according to the instantaneous assignment and what part according to the T-scale assignment. We propose to determine this part using the same α-fair criterion. Specifically, we define multiscale fairness as follows:

Definition 9. The multiscale α-fairness index of s ∈ S is defined as

ZT1,...,Tn(s) := Σ(i=1..n) ZTii(s), with ZTii = ( (1/Ti) ∫(0..Ti) Ui(Xi(t), si(X(t))) dt )^(1−α) / (1−α).

The expected multiscale α-fairness index is its expectation. An assignment s is multiscale α-fair if it maximizes ZT1,...,Tn(s) over s ∈ S. We also say that a multiscale α-fair assignment is a (T1, ..., Tn)-scale fair assignment.
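Definition 9 can be sketched in a few lines of discrete-time code (illustrative only; the utility traces and horizons are invented): each user's utility is averaged over its own horizon Ti before the α-fair index is applied, so a delay-sensitive user and a bulk-transfer user fall under one criterion.

```python
# Illustrative sketch of the multiscale alpha-fairness index of
# Definition 9 in discrete time; traces and horizons are invented.

def multiscale_index(traces, horizons, alpha):
    total = 0.0
    for u, T in zip(traces, horizons):
        avg = sum(u[:T]) / T               # (1/T_i) * sum over [0, T_i)
        total += avg ** (1 - alpha) / (1 - alpha)
    return total

# User 1 is real-time and is judged on its first slot (T_1 = 1);
# user 2 is a bulk transfer judged over the whole horizon (T_2 = 6).
u1 = [4.0, 0.0, 4.0, 0.0, 4.0, 0.0]
u2 = [0.0, 6.0, 0.0, 6.0, 0.0, 6.0]
z = multiscale_index([u1, u2], [1, 6], alpha=2.0)   # -1/4 - 1/3
```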
3 Application to Spectrum Allocation in Fading Channels
We consider a fast-changing and a slowly-changing user (Fig. 1), whose channels are modeled by the Gilbert model. Each user can be either in a good or in a bad state. The dynamics of user i are described by a Markov chain {Yi(t)}, t = 0, 1, ..., i = 1, 2, with transition matrix and stationary distribution

Pi = [ 1 − εi αi,  εi αi ;  εi βi,  1 − εi βi ],   πi = ( βi/(αi + βi),  αi/(αi + βi) ).

Let ε1 = 1 and ε2 = ε. Note that the parameter ε does not affect the stationary distribution, but it influences how long the slowly-changing user stays in a state: the smaller ε, the more seldom the user changes state.
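The role of ε can be checked by a quick simulation (an illustrative sketch with parameter values of our choosing): the empirical state frequencies match πi regardless of ε, while smaller ε only stretches the sojourn times.

```python
# Illustrative simulation of the two-state Gilbert chain used here:
# state 0 (bad) is left with probability eps*alpha, state 1 (good)
# with probability eps*beta. Parameter values are invented.
import random

def state_frequencies(alpha, beta, eps, n, seed=0):
    rng = random.Random(seed)
    state, visits = 0, [0, 0]
    for _ in range(n):
        visits[state] += 1
        p_leave = eps * (alpha if state == 0 else beta)
        if rng.random() < p_leave:
            state = 1 - state
    return [v / n for v in visits]

freq = state_frequencies(alpha=0.2, beta=0.1, eps=1.0, n=200_000)
pi = [0.1 / 0.3, 0.2 / 0.3]   # (beta/(alpha+beta), alpha/(alpha+beta))
assert all(abs(f - p) < 0.02 for f, p in zip(freq, pi))
```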
Fig. 1. Spectrum allocation in random fading channels (User 1: fast; User 2: slow)
We assume that state 1 is a bad state and state 2 is a good state. Let hij represent the channel gain coefficient of user i in channel state j. The utility (achievable throughput via the Shannon capacity) of user i in state j is given by

Uij = sij log2( 1 + |hij|² pi / σ² ),

where sij and pi are the resource allocation and the power corresponding to user i.

First, we would like to analyze T-scale fairness and see the effect of the time scale on the resource allocation. Specifically, we consider the following optimization criterion:

Σ(i=1..2) (1/(1−α)) ( (1/T) Σ(t=0..T) Ui(t) )^(1−α)  →  max over s1, s2,   (4)

with Ui(t) = si(t) qi,Yi(t) and s1(t) + s2(t) = 1. Let us consider several options for the time horizon T.

Instantaneous fairness. If we take T = 1 we obtain the instantaneous fairness. Namely, the criterion (4) takes the form

(1/(1−α)) ( U1^(1−α)(0) + U2^(1−α)(0) )  →  max over s1, s2.

The solution of the above optimization problem is given by

si(0) = qi,Yi(0)^((1−α)/α) / ( q1,Y1(0)^((1−α)/α) + q2,Y2(0)^((1−α)/α) ).

This allocation results in the following expected throughputs:

θ1 = Σ(i,j) [ q1,i^(1/α) / ( q1,i^((1−α)/α) + q2,j^((1−α)/α) ) ] π1,i π2,j,
θ2 = Σ(i,j) [ q2,j^(1/α) / ( q1,i^((1−α)/α) + q2,j^((1−α)/α) ) ] π1,i π2,j.   (5)
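The closed-form throughputs (5) are easy to evaluate numerically. As a sanity check (illustrative code, not the paper's; the function name is ours): for α = 1 the split is 1/2 in every state, so under the symmetric Case-1 parameters of Table 1 user 1's expected throughput is (2·0.2 + 8·0.8)/2 = 3.4.

```python
# Illustrative evaluation of the expected-throughput formulas (5) for
# the instantaneous alpha-fair allocation of two Gilbert users.

def expected_throughputs(q1, pi1, q2, pi2, alpha):
    e = (1 - alpha) / alpha
    th1 = th2 = 0.0
    for i in range(2):
        for j in range(2):
            denom = q1[i] ** e + q2[j] ** e
            w = pi1[i] * pi2[j]
            th1 += q1[i] ** (1 / alpha) / denom * w
            th2 += q2[j] ** (1 / alpha) / denom * w
    return th1, th2

# Case 1 of Table 1 is symmetric: q = (2, 8), pi = (0.2, 0.8) for both.
q, pi = [2.0, 8.0], [0.2, 0.8]
th1, th2 = expected_throughputs(q, pi, q, pi, alpha=1.0)
assert abs(th1 - 3.4) < 1e-9 and abs(th2 - 3.4) < 1e-9
```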
Mid-term fairness. Let us take the time horizon as a function of the underlying dynamics time parameter ε, that is, T = T(ε), satisfying the following conditions: (a) T(ε) → ∞ and (b) ε T(ε) → 0. Condition (a) ensures that

(1/T(ε)) Σ(t=0..T(ε)) 1{Y1(t) = i} → π1,i,  as ε → 0,

and condition (b) ensures that

(1/T(ε)) Σ(t=0..T(ε)) 1{Y2(t) = i} → δY2(0),i,  as ε → 0.

This follows from the theory of Markov chains with multiple time scales (see, e.g., [10]). It turns out to be convenient to use the following notation for the resource allocation: we denote by s(t) the allocation for the fast-changing user and by 1 − s(t) the allocation for the slowly-changing user. Thus, we have s1(t) = s(t) and s2(t) = 1 − s(t). We denote s̄i,j = E[s(t) | Y1(t) = i, Y2(t) = j]. We note that since the fast-changing user achieves stationarity when T(ε) → ∞, we are able to solve (4) in stationary strategies. Then, the criterion (4) takes the form

(1/(1−α)) [ (π1,1 q1,1 s̄1,Y2(0) + π1,2 q1,2 s̄2,Y2(0))^(1−α) + ((1 − π1,1 s̄1,Y2(0) − π1,2 s̄2,Y2(0)) q2,Y2(0))^(1−α) ]  →  max over s̄1,Y2(0), s̄2,Y2(0).

The above nonlinear optimization problem can be solved numerically. The expected throughputs in the mid-term fairness case are given by

θ1 = (π1,1 q1,1 s̄1,1 + π1,2 q1,2 s̄2,1) π2,1 + (π1,1 q1,1 s̄1,2 + π1,2 q1,2 s̄2,2) π2,2,
θ2 = (1 − π1,1 s̄1,1 − π1,2 s̄2,1) q2,1 π2,1 + (1 − π1,1 s̄1,2 − π1,2 s̄2,2) q2,2 π2,2.   (6)

Long-term fairness. In the case of long-term fairness we set T = ∞, which results in the following criterion:

(1/(1−α)) ( E[U1]^(1−α) + E[U2]^(1−α) )  →  max over s1, s2.

Due to stationarity, we can solve the above optimization problem over stationary strategies. Namely, we have the following optimization problem:

(1/(1−α)) [ ( (π1,1 π2,1 s̄1,1 + π1,1 π2,2 s̄1,2) q1,1 + (π1,2 π2,1 s̄2,1 + π1,2 π2,2 s̄2,2) q1,2 )^(1−α) + ( (π2,1 − π1,1 π2,1 s̄1,1 − π1,2 π2,1 s̄2,1) q2,1 + (π2,2 − π1,1 π2,2 s̄1,2 − π1,2 π2,2 s̄2,2) q2,2 )^(1−α) ]  →  max over s̄1,1, s̄1,2, s̄2,1, s̄2,2.
The expected throughputs in the long-term fairness case are given by

θ1 = (π1,1 π2,1 s̄1,1 + π1,1 π2,2 s̄1,2) q1,1 + (π1,2 π2,1 s̄2,1 + π1,2 π2,2 s̄2,2) q1,2,
θ2 = (π2,1 − π1,1 π2,1 s̄1,1 − π1,2 π2,1 s̄2,1) q2,1 + (π2,2 − π1,1 π2,2 s̄1,2 − π1,2 π2,2 s̄2,2) q2,2.   (7)

Let us also consider the expected instantaneous fairness, which is given by the criterion

(1/(1−α)) ( E[U1^(1−α)(t)] + E[U2^(1−α)(t)] )  →  max over s1, s2,

which is equivalent to

(1/(1−α)) [ Σ(i,j) π1,i π2,j ∫(0..1) (s q1,i)^(1−α) dFij(s) + Σ(i,j) π1,i π2,j ∫(0..1) ((1 − s) q2,j)^(1−α) dFij(s) ]  →  max over Fij,

where Fij(s) is the distribution of s(t) conditioned on the event {Y1(t) = i, Y2(t) = j}. The above criterion is maximized by

Fij(s) = 0 if s < q1,i^((1−α)/α) / ( q1,i^((1−α)/α) + q2,j^((1−α)/α) ), and Fij(s) = 1 otherwise.

Thus, we can see that the expected instantaneous fairness criterion is equivalent to instantaneous fairness.

Multiscale fairness. Next, let us consider multiscale fairness over time. Specifically, (T1, T2)-scale fairness is defined by the following criterion:

(1/(1−α)) [ ( (1/T1) Σ(t=0..T1) U1(t) )^(1−α) + ( (1/T2) Σ(t=0..T2) U2(t) )^(1−α) ]  →  max over s1, s2.

In this particular example, there are 6 possible combinations of different time scales. It turns out that in this example only the (1, ∞)-scale fairness gives a new resource allocation; the other combinations of time scales reduce to some T-scale fairness. Thus, let us first consider the multiscale fairness when we apply instantaneous fairness to the fast-changing user and long-term fairness to the slowly-changing user. The (1, ∞)-scale fairness corresponds to the following optimization criterion:

(1/(1−α)) ( U1(0)^(1−α) + E[U2(t)]^(1−α) )  →  max over s1, s2,
which is equivalent to

(1/(1−α)) [ ( q1,Y1(0) (s̄Y1(0),1 π2,1 + s̄Y1(0),2 π2,2) )^(1−α) + ( q2,1 (1 − s̄Y1(0),1) π2,1 + q2,2 (1 − s̄Y1(0),2) π2,2 )^(1−α) ]  →  max over s̄Y1(0),1, s̄Y1(0),2.

The expected throughputs in the (1, ∞)-scale fairness case are given by

θ1 = q1,1 (s̄1,1 π1,1 π2,1 + s̄1,2 π1,1 π2,2) + q1,2 (s̄2,1 π1,2 π2,1 + s̄2,2 π1,2 π2,2),
θ2 = ( q2,1 (1 − s̄1,1) π2,1 + q2,2 (1 − s̄1,2) π2,2 ) π1,1 + ( q2,1 (1 − s̄2,1) π2,1 + q2,2 (1 − s̄2,2) π2,2 ) π1,2.

As we have mentioned above, the other combinations of time scales reduce to some T-scale fairness. In particular, (1, T(ε))-fairness reduces to instantaneous fairness, (T(ε), ∞)-fairness reduces to long-term fairness, and (T(ε), 1)-, (∞, 1)- and (∞, T(ε))-fairness all reduce to mid-term fairness.

Let us consider a numerical example. The parameters are given in Table 1. We consider three typical cases. The first case corresponds to the symmetric scenario. In the second case, the fast-changing user has in general better channel conditions. In the third scenario, the slowly-changing user (user 2) is more often in the good channel state than the fast-changing user (user 1). We plot the expected throughput of the mobiles for the various fairness criteria for case 3 in Fig. 2; plots and explanations for cases 1 and 2 are provided in [11]. In the third scenario, the second user always gets the better share in terms of throughput. This is expected, as the second user spends on average more time in a channel with a good state, and the long- or short-term throughput is the principal component of the optimization criteria. It is natural that long-term fairness gives the best efficiency for both types of users. However, we note that the (1, ∞)-scale fairness provides better control in terms of fairness. The (1, ∞)-scale fairness based allocation provides the second best efficiency after the long-term fairness based allocation. Thus, we conclude that multiscale fairness provides good sensitivity to the variation of the fairness parameter and, at the same time, good performance in expected throughput.
Below we shall see that the multiscale fairness has another good property with respect to variance of the throughput.
Table 1. Cases 1, 2 & 3: Shannon capacity (q) / probability (π)

         Case-1                 Case-2                 Case-3
         state-1    state-2     state-1    state-2     state-1    state-2
         (bad)      (good)      (bad)      (good)      (bad)      (good)
User-1   2/0.2      8/0.8       3/0.1      9/0.9       3/0.9      9/0.1
User-2   2/0.2      8/0.8       1/0.3      7/0.7       1/0.3      7/0.7
Fig. 2. Throughput (θ) as a function of α for the instantaneous, mid-term, long-term and (1, ∞)-scale fairness criteria (Case 3)
Fig. 3. Coefficient of variation in expected throughput as a function of α for the instantaneous, mid-term, long-term and (1, ∞)-scale fairness criteria (Case 3)
It is curious to observe that in this example the instantaneous fairness does not really help the fast-changing user.

Coefficient of variation. We compute the coefficient of variation for short-term, mid-term, long-term and multiscale fairness. For this, we first compute the second moment of the throughput and then find the ratio of the standard deviation to the mean: for any user i, the coefficient of variation is Γi = sqrt( E[θi²] − (E[θi])² ) / E[θi]. In Fig. 3, we plot the coefficient of variation in throughput for the various fairness criteria considered above. It is very interesting to observe that, except for the (1, ∞)-scale fairness criterion, all the fairness criteria behave similarly with respect to the coefficient of variation. Only in the case of (1, ∞)-scale fairness does the coefficient of variation decrease for the short-term fairness oriented user. This is a very desirable property of multiscale fairness, as a short-term fairness oriented user is typically a user with a delay-sensitive application. Also, here as in the case of the throughput, multiscale fairness provides at the same time good fairness and efficiency, now efficiency in terms of overall variance.
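The computation of the coefficient of variation can be sketched as follows (illustrative only; the two-point throughput distribution below is invented, not one of the paper's cases):

```python
# Illustrative sketch: coefficient of variation (std/mean) of a
# throughput taking value 2 w.p. 0.2 and 8 w.p. 0.8.
import math

def coeff_of_variation(values, probs):
    mean = sum(v * p for v, p in zip(values, probs))
    second = sum(v * v * p for v, p in zip(values, probs))
    return math.sqrt(second - mean ** 2) / mean

gamma = coeff_of_variation([2.0, 8.0], [0.2, 0.8])   # 2.4 / 6.8
```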
4 Conclusion and Future Research
We have introduced T-scale fairness and multiscale fairness. The notion of T-scale fairness allows one to address in a flexible manner the requirements of emerging applications (like YouTube) which demand quality of service between that of strict real-time traffic and that of best-effort traffic. The notion of multiscale fairness allows one to use a single optimization criterion for resource allocation when different applications are present in the network. We have compared the new fairness notions with the previously known instantaneous and long-term fairness criteria. We have illustrated the new notions by their application in wireless networks. Specifically, we have considered spectrum allocation when users with different dynamics are present in the system. We have demonstrated that multiscale fairness provides a versatile framework for resource allocation. In the near future we plan to investigate in detail how the multiscale fairness criterion allocates resources when a number of applications with different QoS requirements are present in the network. It is also interesting to investigate T-scale fairness in the non-stationary regime.
Acknowledgement This work was done in the framework of the INRIA and Alcatel-Lucent Bell Labs Joint Research Lab on Self Organized Networks and the Ecoscells project.
References
1. Mo, J., Walrand, J.: Fair End-to-end Window-based Congestion Control. IEEE/ACM Transactions on Networking 8(5), 556–567 (2000)
2. Lan, T., Kao, D., Chiang, M., Sabharwal, A.: An Axiomatic Theory of Fairness for Resource Allocation. In: Proc. of IEEE INFOCOM 2010 (March 2010)
3. Altman, E., Avrachenkov, K., Prabhu, B.J.: Fairness in MIMD Congestion Control Algorithms. In: Proc. of IEEE INFOCOM 2005, Miami, USA (March 2005)
4. Altman, E., Avrachenkov, K., Garnaev, A.: Generalized Alpha-Fair Resource Allocation in Wireless Networks. In: Proc. of 47th IEEE Conference on Decision and Control, Cancun, Mexico (December 9-11, 2008)
5. Altman, E., Avrachenkov, K., Garnaev, A.: Alpha-Fair Resource Allocation under Incomplete Information and Presence of a Jammer. In: Núñez-Queija, R., Resing, J. (eds.) NET-COOP 2009. LNCS, vol. 5894, pp. 219–233. Springer, Heidelberg (2009)
6. Bredel, M., Fidler, M.: Understanding Fairness and its Impact on Quality of Service in IEEE 802.11. arXiv (2008)
7. Ramaiyan, V., Kumar, A., Altman, E.: Fixed Point Analysis of Single Cell IEEE 802.11e WLANs: Uniqueness and Multistability. IEEE/ACM Transactions on Networking 16(5), 1080–1093 (2008)
8. Kelly, F.P., Maulloo, A.K., Tan, D.K.H.: Rate control for communication networks: shadow prices, proportional fairness and stability. Journal of the Operational Research Society 49(3), 237–252 (1998)
9. Tidball, M., Lombardi, A., Pourtallier, O., Altman, E.: Continuity of optimal values and solutions for control of Markov chains with constraints. SIAM J. Control and Optimization 38(4), 1204–1222 (2000)
10. Filar, J., Krieger, H.A., Syed, Z.: Cesaro limits of analytically perturbed stochastic matrices. Lin. Alg. Appl. 353, 227–243 (2002)
11. Altman, E., Avrachenkov, K., Ramanath, S.: Multiscale Fairness and its Application to Dynamic Resource Allocation in Wireless Networks. INRIA research report RR-7783, http://hal.inria.fr/inria-00515430/en/
Fast-Converging Scheduling and Routing Algorithms for WiMAX Mesh Networks

Salim Nahle and Naceur Malouch

Université Pierre et Marie Curie - Laboratoire LIP6/CNRS, 4 Place Jussieu, 75252 Paris Cedex 05
{name.surname}@lip6.fr
Abstract. In this paper, we present fast-converging algorithms that fit WiMAX mesh networks well. First, a centralized scheduling algorithm is presented. It calculates schedules by transforming the multi-hop tree into a single-hop one, and then repartitioning the resulting schedules over the multi-hop tree. Second, a routing metric called Multiple Channel One Pass (MCOP) is introduced. MCOP chooses routes by explicitly accounting for the coding and modulation schemes on each route as well as for the number of available channels. In addition, route construction is performed in a way that reduces the impact of bottlenecks on throughput. Numerical simulations show the superior performance of MCOP compared to other routing metrics, especially when more than two channels are available.

Keywords: WiMAX, Mesh Networks, scheduling, routing.
1 Introduction
Wi-Fi has been adopted as the de facto technology for Wireless Mesh Networks (WMN). Wi-Fi based mesh networks, however, suffer from scalability and coverage limitations. With the reduced transmission range of Wi-Fi devices, many access points are required to ensure connectivity even in a small zone. This, in turn, increases interference, reduces throughput and makes QoS guarantees difficult. The performance of Wi-Fi WMNs drops significantly beyond three hops [1], mainly because of the non-deterministic medium access of the IEEE 802.11 standard. A possible solution that overcomes these issues, and consequently increases the throughput capacity and ensures QoS in WMNs, is the use of WiMAX mesh networks [2]. As opposed to the Point-to-Multi-Point (PMP) mode, the WiMAX MESH mode allows direct transmissions between Subscriber Stations (SSs) that communicate through a multi-hop tree rooted at the base station (BS). The way the tree is built and schedules are performed has a deep impact on the capacity that a WiMAX backbone may offer. WMNs are mainly used to provide broadband access, and thus routing and scheduling operations must be sufficiently rapid so that no delay is added with each change in the network configuration. For this purpose we present in this paper two disjoint fast-converging routing and scheduling algorithms. First, the
J. Domingo-Pascual et al. (Eds.): NETWORKING 2011, Part II, LNCS 6641, pp. 238–249, 2011. © IFIP International Federation for Information Processing 2011
proposed centralized scheduling algorithm calculates schedules by transforming the multi-hop tree into a single-hop one. This is achieved by replacing each SS path by one link, on which the maximal end-to-end rate of the corresponding path is used. The algorithm calculates therein the allocations for all the SSs. The resulting schedules are then repartitioned over the multi-hop tree. Finally, the allocations are repartitioned over the available orthogonal channels to increase capacity. Second, a routing metric called Multiple Channel One Pass (MCOP) is introduced. MCOP chooses routes by explicitly accounting for the coding and modulation schemes, called burst profiles in WiMAX terminology. WiMAX employs adaptive modulation and coding (AMC) to choose the best burst profile on a particular link as a function of the signal-to-noise ratio (SNR) and the bit error rate (BER). Besides, MCOP accounts for the number of available channels when choosing routes. Moreover, route construction is performed in a way that reduces the impact of bottlenecks on throughput. In fact, we have found by simulation that routing metrics used in single-channel mesh networks do not fit multiple-channel networks [3], mainly because of the accumulation of traffic close to the BS and hence the creation of bottlenecks. The bottleneck node may be the BS itself. Nevertheless, we have seen that it can be any other SS, since the bottleneck node depends, in addition to the traffic, on the burst profiles of the ingoing and outgoing links. Numerical simulations show the superior performance of MCOP compared to other routing metrics, especially when more than two channels are available. The rest of the paper is organized as follows. In Section 2, we present a scheduling algorithm for WiMAX mesh networks. Section 3 discusses the impact of routing on multi-channel WiMAX meshes. Section 4 describes MCOP. In Section 5, we present our simulations and discuss the results.
In Section 6, we present related work before concluding the paper in Section 7.
2 Fair Scheduling Algorithm

2.1 Overview: Scheduling in IEEE 802.16 Mesh Mode
IEEE 802.16 introduces two scheduling mechanisms: centralized and distributed. In this work we focus on centralized scheduling, which is performed using two main types of messages: the Mesh Centralized Scheduling (MSH-CSCH) message and the Mesh Centralized Scheduling Configuration (MSH-CSCF) message. Each node gathers its children's requests and reports them, along with its own, in an MSH-CSCH request to its parent. The parent node is called the Sponsoring Node (SN) in WiMAX terminology. The whole process repeats recursively until the requests are propagated up to the BS. The BS then determines the flow assignments and broadcasts an MSH-CSCH Grant, which is rebroadcast by intermediate nodes until all the SS nodes in the network receive it. SSs determine their scheduling recursively by using a common algorithm that divides the frame proportionally.
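The recursive gathering of requests up the scheduling tree can be sketched as a bottom-up fold; the tree shape and field names below are illustrative, not taken from the standard:

```python
def aggregate_requests(node, children, demand):
    """Return the total demand that `node` reports in its MSH-CSCH
    request to its sponsoring node: its own demand plus the
    aggregated demands of its whole subtree.

    `children` maps a node to its child nodes in the scheduling tree;
    `demand` maps a node to its own bandwidth request."""
    total = demand[node]
    for child in children.get(node, []):
        total += aggregate_requests(child, children, demand)
    return total

# Tiny tree: BS <- SS1 <- {SS2, SS3}; SS1 reports 5 + 3 + 2 = 10 to the BS.
children = {"BS": ["SS1"], "SS1": ["SS2", "SS3"]}
demand = {"BS": 0, "SS1": 5, "SS2": 3, "SS3": 2}
```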
2.2 Scheduling Algorithm
Fairness among users is a crucial issue in WMNs, which are mainly used for providing wireless broadband access. In this section we propose a scheduling algorithm that allocates the resources (channels and time slots) fairly among the different SSs. For this purpose we identify two key objectives that must be fulfilled:

1. Guaranteed allocations per flow: The algorithm ensures that the allocated resources (time slots) are sufficient to realize a fair end-to-end rate. Suppose that SS_i and SS_j are respectively n_i and n_j hops away from the BS, that α_{i,k} is the number of minislots to be allocated on link¹ k of the path² of SS_i (∀k ∈ {1, 2, ..., n_i}) on behalf of SS_i (link k is also used by other SSs), and that r_i is the end-to-end data rate of SS_i. The scheduling algorithm must ensure that α_{i,k} and α_{j,k} (∀k ∈ {1, 2, ..., n_j}) are sufficient to guarantee the end-to-end rate, regardless of the number of hops of SS_i and SS_j.

2. Maximum utilization of resources: In order to maximize network utilization, the scheduling algorithm properly chooses the values of α_{i,j}. Assume, for example, that SS_i is 3 hops from the base station, where each link l_j on its path supports a data rate r_{l_j} (j ∈ {1, 2, 3}), known from the burst profile information carried by the control messages. Given that r_{l_1} > r_{l_2} > r_{l_3}, it may be sufficient to allocate the same α_{i,j} on all the links in order to satisfy the first condition. Nevertheless, this results in resource wastage, since the overall rate is then bounded by the lowest link rate r_{l_3} and equals r_{l_3}/3. Our algorithm seeks to ensure the required end-to-end data rate with the minimal number of time slots. It guarantees r_i for every SS_i by choosing the values of α_{i,j} to be inversely proportional to the link rates. The intuition behind this is to maintain the same throughput on each link. The second scheduling condition can thus be expressed as:

$$\alpha_{i,1} r_{l_1} = \alpha_{i,2} r_{l_2} = \ldots = \alpha_{i,n_i} r_{l_{n_i}} = r_i \quad \forall i. \qquad (1)$$

This latter condition can be used for calculating the maximal end-to-end rate (MEER) of each SS, as we will see in the following section.

2.3 Maximal End-to-End Rate
In wireless multihop networks, it has largely been considered that the end-to-end rate (throughput) of a multihop flow is bounded by the minimum link rate (throughput) on the path of this flow [4]. This may be right in 802.11-based networks, where access to the channel is non-deterministic, as in the case of the CSMA/CA 802.11 MAC. Nevertheless, in WiMAX mesh networks it is possible to achieve a higher end-to-end rate on a multihop flow, by explicitly accounting for the data rates on the different hops constituting it and by satisfying equation (1) when assigning time slots to the different links.

¹ A link i refers to the link between SS_i and its parent.
² A path of an SS corresponds to its route towards the BS, which is unique in the WiMAX mesh case.
Fig. 1. Network example: f1 is 3 hops
Consider the network graph given in Figure 1, where f1 is a 3-hop flow between nodes (SSs) 1 and 4. Given the link rates r_1 = 1 Mbps, r_2 = 2 Mbps and r_3 = 4 Mbps, assuming an equal share for each link results in an end-to-end data rate r = 0.33 Mbps. This leads to channel underutilization. The optimal channel utilization is achieved by satisfying equation (1). Accordingly, α_{1,1}, α_{1,2} and α_{1,3}, the portions of the total time T allocated to f1 on links l_{1,2}, l_{2,3} and l_{3,4} respectively, are computed as α_{1,1} = (4/7)T, α_{1,2} = (2/7)T and α_{1,3} = (1/7)T. For T = 1 second, the obtained MEER of f1 is r = 0.57 Mbps, which is a 72% rate improvement relative to r = 0.33 Mbps. The MEER of a multihop flow f_i corresponding to SS_i, which is n_i hops away, where r_{l_j} is the rate on link l_j of the path of SS_i, can be expressed as:

$$r_i = \frac{1}{\frac{1}{r_{l_1}} + \frac{1}{r_{l_2}} + \ldots + \frac{1}{r_{l_{n_i}}}} \qquad (2)$$
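Numerically, equation (2) is the reciprocal of the sum of reciprocal link rates, and the α shares follow from equation (1). A short sketch (illustrative code, using the link rates of the f1 example):

```python
def meer(link_rates):
    """Maximal end-to-end rate of a path, equation (2):
    the reciprocal of the sum of reciprocal link rates."""
    return 1.0 / sum(1.0 / r for r in link_rates)

def time_shares(link_rates):
    """Per-link time fractions alpha_{i,j} = r_i / r_{l_j}
    satisfying equation (1); by construction they sum to 1."""
    r = meer(link_rates)
    return [r / rl for rl in link_rates]

# f1 example: links of 1, 2 and 4 Mbps give MEER = 4/7 ~ 0.57 Mbps
# and time shares 4/7, 2/7 and 1/7 of the frame.
```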
Proof: Let α_{i,j} be the portion of the total time T_i allocated to f_i on link l_j. Satisfying equation (1) we obtain α_{i,1} r_{l_1} = α_{i,2} r_{l_2} = ... = α_{i,n_i} r_{l_{n_i}} = r_i, i.e., α_{i,1} = r_i/r_{l_1}, α_{i,2} = r_i/r_{l_2}, ..., α_{i,n_i} = r_i/r_{l_{n_i}}. Recall that the α_{i,j} sum to T_i: α_{i,1} + α_{i,2} + ... + α_{i,n_i} = T_i. To obtain the end-to-end data rate it is sufficient to replace T_i by 1 and α_{i,j} by r_i/r_{l_j}, which gives r_i/r_{l_1} + r_i/r_{l_2} + ... + r_i/r_{l_{n_i}} = 1, and we are done.

2.4 Mesh Scheduling
Scheduling multihop links in a way that satisfies the previously mentioned objectives is not straightforward. The proposed scheduling algorithm performs the following steps:

1. The BS calculates the MEER of each SS as well as the α_{i,j} values by satisfying equation (1).
2. It transforms the multihop mesh network into a single-hop one, by using the MEER of each SS.
3. Then, it distributes the available minislot space among the SSs, which now form a single cell. At this stage, any fairness model can be applied (max-min, proportional, maximum throughput). This can be accomplished by assigning weights to each SS on an end-to-end basis.
4. Then, having the number of minislots allocated to each SS_i, these minislots are distributed among the links that constitute its path towards the BS, according to the calculated α_{i,j} values.
5. Finally, each link i in the mesh tree is allocated the aggregate of its shares over the different flows that use it.

For a better understanding, consider the mesh network in Figure 2(a). Given a set of SSs and a BS with the rates on the different links, the algorithm calculates the MEER and the corresponding α_{i,j} for all the links on the route from each SS towards the BS. These values are given in Figure 2(b). Figure 2(c) shows the single-hop network obtained by using the MEER of every SS. Figure 2(d) shows the number of time slots of each SS (for its own traffic in the uplink direction). Note that we suppose that all the SSs must be served in the same manner; the calculations for the downlink direction are the same. The second line in the table of Figure 2(d) shows the repartition of the time slots of each SS among the links of its path, calculated using the α_{i,j} of each link. For instance, the 7 slots of SS_5 are repartitioned over links l_{5,3} (4 slots), l_{3,1} (2 slots) and l_{1,BS} (1 slot), since the α_{i,j} values are respectively 0.57, 0.29 and 0.14. Finally, the allocations on each link (for the different SSs) are added (line 3). The obtained allocations on each link are shown in Figure 2(e).
Fig. 2. Scheduling example
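Step 4, splitting an SS's end-to-end slot grant across its path links in proportion to the α values, can be sketched as follows (illustrative code with hypothetical link rates; rounding scheme is our own choice, not specified by the paper):

```python
def repartition_slots(total_slots, link_rates):
    """Split an SS's slot grant across its path links proportionally
    to alpha_j = (1/r_j) / sum_k (1/r_k): slower links receive more
    slots so that every link carries the same throughput."""
    inv = [1.0 / r for r in link_rates]
    s = sum(inv)
    raw = [total_slots * x / s for x in inv]
    slots = [int(v) for v in raw]
    # Largest-remainder rounding so the slot counts add up exactly.
    order = sorted(range(len(raw)), key=lambda j: raw[j] - slots[j], reverse=True)
    for j in order[: total_slots - sum(slots)]:
        slots[j] += 1
    return slots

# 7 slots over links of rate 1, 2 and 4 -> [4, 2, 1], matching the
# SS5 walkthrough (alpha ~ 0.57, 0.29, 0.14).
```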
3 Impact of Routing on Multi-channel WiMAX Mesh Networks
The way the mesh tree is constructed has a significant impact on the throughput capacity. In fact, routing metrics designed for single-channel WiMAX Mesh
networks do not necessarily fit multiple-channel meshes. This is mainly due to the creation of bottlenecks close to the BS. Consequently, even when several orthogonal channels are used, there is an upper bound on the gain that can be obtained in terms of fair throughput. This limit is referred to as the multiple channel gain (MCG) and expressed as follows:

$$MCG = \frac{T}{\max\left(\max_i \frac{T_i}{L_{SS}},\; \frac{T_{BS}}{L}\right)} \qquad (3)$$

where T corresponds to the whole scheduling period needed to satisfy the demands of all SSs, T_i is the time needed for SS_i to communicate with its parent and its children in the uplink and downlink directions, and T_{BS} is the transmission time needed by the BS. L_{SS} and L are the number of network cards of an SS and of the BS, respectively. T_i = \sum_{j \in Child_i} w_{j\hat{j}} + w_{i\hat{i}}, where w_{i\hat{i}} = d_{i\hat{i}}/r_{i\hat{i}} is the time needed to transfer the traffic on link l_{i\hat{i}}; d_{i\hat{i}} and r_{i\hat{i}} are respectively the traffic demand and the achievable data rate on the link between SS_i and its parent SS_{\hat{i}}. The SS with the maximum T_i, or the BS, constitutes the bottleneck of the multi-channel network; this is represented in the denominator of equation (3). From the MCG we can deduce the number of channels γ beyond which no further increase in fair throughput can be obtained: γ is the upper bound of the MCG. For more details on the calculations refer to [3]. In conclusion, routing has to take these bottlenecks into account in order to take full advantage of the available multiple channels. In the next section, we propose a routing metric that explicitly accounts for these bottlenecks during tree construction.
4 Multiple Channel One Pass Routing
MCOP aims at maximizing the fair throughput capacity while establishing routes in a distributed manner. The name MCOP reflects the fact that the entrance of a new node does not cause recalculations at already-joined nodes. The IEEE 802.16 Mesh mode uses Mesh Network Configuration (MSH-NCFG) and Mesh Network Entry (MSH-NENT) messages to advertise the mesh network and to help new nodes synchronize and join it. Active nodes within the mesh periodically advertise MSH-NCFG messages, which are used by newly joining nodes. Among all possible neighbors that advertise MSH-NCFG, the joining node (called Candidate Node, CN, in the 802.16 Mesh mode terminology) selects a potential Sponsoring Node to connect to. The latter is called a candidate sponsoring node (CSN). The MCOP algorithm chooses among the CSNs the SS that minimizes the bottleneck of the fair throughput capacity. In other words, the joining node, which knows the actual state of the tree (number of nodes, burst profiles of data links) from the MSH-NCFG and MSH-CSCH messages, locally calculates the MCG for every possible tree. Note that if there are n candidate SNs, it does only n calculations. The algorithm is sequential; its details are given in Algorithm 1. Nodes that are closer to the BS start earlier than others. An SS that does not have a
Algorithm 1. MCOP
Require: p, parent of SS_w
1: procedure ParentSelection(p, r)
2:   SS_w ← MESH-NCFG and MESH-CSCH
3:   metric ← 0
4:   for all i ∈ CSN do
5:     calculate T and MCG
6:     a ← MCG
7:     if a > m then
8:       a ← m
9:     end if
10:    if metric < a/T then
11:      metric ← a/T
12:    end if
13:   end for
14:   send(p, MSH-NENT)
15: end procedure
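A Python rendering of the selection loop may clarify the metric. This is an illustrative sketch, not the standard's message handling: `compute_T_and_MCG` is a hypothetical helper evaluating equation (3) for the tree obtained by attaching the new node under a given candidate, and we explicitly track which candidate achieves the best metric:

```python
def select_sponsoring_node(candidates, m, compute_T_and_MCG):
    """Pick the candidate sponsoring node maximizing a/T, where
    a = min(MCG, m) caps the multi-channel gain at the m available
    channels (the role of lines 6-9 of Algorithm 1)."""
    best, best_metric = None, 0.0
    for c in candidates:
        T, mcg = compute_T_and_MCG(c)
        a = min(mcg, m)
        metric = a / T
        if metric > best_metric:
            best, best_metric = c, metric
    return best

# Hypothetical candidate evaluations: (T, MCG) per candidate tree.
evals = {"SS2": (10.0, 4.0), "SS3": (8.0, 2.0)}
# With m = 3 channels: SS2 scores min(4,3)/10 = 0.30 and
# SS3 scores min(2,3)/8 = 0.25, so SS2 is chosen.
```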
neighboring node (a candidate SN in range) waits until another SS in its range connects to the mesh tree, and follows it. For a better understanding, we explain the algorithm line by line. Each CN SS_w accumulates knowledge of the actual mesh tree through the MESH-NCFG and MESH-CSCH messages (line 2), and thus knows the number of channels available for use (m). For each possible SN in the set CSN, it calculates T and the multiple channel gain MCG as explained in Section 3 (line 5). It assumes that all the SSs have the same load, so it assigns x bits to every SS and does the calculations accordingly. In fact, Thr_MC = MCG × Thr_SC, where Thr_SC is the single-channel throughput and Thr_MC is the multi-channel throughput. Nevertheless, MCG may be greater than m; in this case the gain a of using m channels is limited to m (line 8). The node then compares Thr_MC for each possible path and chooses the one that maximizes throughput capacity. In fact, the algorithm uses a/T as its metric, since it is proportional to Thr_MC, just as 1/T is proportional to Thr_SC. Finally, it sends a mesh network entry message to the chosen parent p.

4.1 Discussion
It is worth noticing that the MCOP algorithm may lead to quasi-optimal routing topologies if we permit the recalculation of paths after the arrival of a new SS. Nevertheless, this may also lead to route flaps when many SSs switch simultaneously to the newly joined node. In this case, convergence of the algorithm is crucial to performance. Studying the convergence of the algorithm is one of our future directions, though initial results are optimistic, especially when using an indexing strategy that gives priority to lower-hop SSs to connect to the new SS. Next, we study the performance of this algorithm without recalculations,
and as we will see, the results are also very interesting even with this simple algorithm that needs no changes in the configuration of the MESH mode in the IEEE 802.16 standard.
5 Performance Evaluation
In this section we study the performance of the MCOP algorithm by comparing it to other routing metrics, namely MEER [5], Hop Count and Blocking [6]. We suppose that a number m of orthogonal channels is available. The evaluation metric is the achievable fair capacity, i.e., the maximal fair throughput (maximum equal share). We implemented the MCOP routing algorithm as well as the other routing metrics in Matlab. We also implemented the scheduling algorithm that uses the round-robin approach defined in IEEE 802.16. The graph topologies used in these simulations are randomly generated: a set of n SSs and a BS are distributed in a d × d cell. The routing algorithm first constructs the mesh tree according to the routing metric (MCOP, Hop Count, MEER, Blocking). Then we apply the scheduling algorithm presented in Section 2. The output of this algorithm is a set of (link, number of slots) pairs. Then, using the available channels, we apply a graph coloring algorithm that reduces the scheduling time by exploiting the available number of channels and spatial reuse (more details about the channel assignment are found in [3]). The final output is a set of triplets (t, active links, channel) that determines, for each time slot t, the set of active links on each channel. These schedules in the time and frequency domains are broadcast in the scheduling frames. We present simulation results for two scheduling paradigms. In the first, we do not account for channel reuse; in this case, the results correspond to the case where the BS is not aware of the exact topology. In the second, channel reuse is enabled. Notice that only the receiver on each link is protected, by forbidding all nodes that interfere with it from transmitting, and hence there are no hidden terminals. Hereafter, we only present results for single-interface WiMAX mesh networks (L_SS = L = 1).
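The channel-assignment step above (detailed in [3]) can be approximated with a standard greedy coloring of the link-conflict graph; the following is a minimal illustrative sketch, not the paper's exact algorithm:

```python
def greedy_channel_assignment(links, conflicts, num_channels):
    """Assign each link the lowest-numbered channel not used by any
    conflicting link; links that cannot be colored within
    num_channels stay unassigned (they go to a later time slot).

    `conflicts` maps a link to the set of links it interferes with."""
    assignment = {}
    for link in links:
        used = {assignment[n] for n in conflicts.get(link, set()) if n in assignment}
        for ch in range(num_channels):
            if ch not in used:
                assignment[link] = ch
                break
    return assignment

# Three mutually conflicting links need three channels; a fourth link
# that conflicts only with the first can reuse one of them.
links = ["a", "b", "c", "d"]
conflicts = {"a": {"b", "c", "d"}, "b": {"a", "c"}, "c": {"a", "b"}, "d": {"a"}}
```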
Applying the model to multiple-interface networks is straightforward: it is sufficient to replace L_SS and L by their values in equation (3). Fig. 3 displays the maximal aggregate fair throughput as a function of the number of orthogonal (non-interfering) channels for the different routing metrics. We vary n from 20 to 40 SSs and d from 15 to 25 km. Each point is the average of 50 simulations over different randomly chosen topologies. The figure shows how this achievable throughput, for all the metrics, increases with the number of channels until it reaches γ, beyond which no further gain is obtained. Recall that γ is the upper bound of the MCG, as we have seen in Section 3. MCOP can employ more orthogonal channels than the others because it takes the number of channels into account and minimizes the bottleneck accordingly, as described in Section 4. Otherwise, MEER performs slightly better for m ≤ 2 in Figure 3(a) and m ≤ 4 in Figure 3(b): MEER routing is optimal when the available number of channels is less than the MCG, since it chooses routes with maximal end-to-end data rates [5]. Hence for m ≤ MCG there is a tradeoff when using MCOP,
[Figure 3: two panels plotting aggregate throughput capacity (Mbps) versus the number of orthogonal channels for Hop Count, MEER, Blocking and MCOP, each with and without FSR. (a) 15 km cell size, 20 SSs; (b) 25 km cell size, 40 SSs.]
Fig. 3. Routing metrics comparison in terms of throughput over multiple channels, with and without frequency spatial reuse (FSR)
since it selects, with each network entry, paths that can exploit the m available channels, rather than paths that optimize the end-to-end rate as MEER does. Consequently, if the available number of channels is not sufficient (i.e., less than the MCG of the MEER-based tree), MEER can perform better. Moreover, MCOP does not allow recalculation of existing paths when a new SS joins the network, which is allowed with MEER, which converges rapidly. In fact, we can adapt the routing algorithm to use MEER when m ≤ MCG (of the MEER-based tree) and MCOP otherwise. As seen in the figure, MCOP routing outperforms all the other routing metrics, even the Blocking metric that was specifically designed to increase FSR, which performs even worse than Hop Count in the multiple-channel case. The poor performance of these metrics occurs because they do not account for burst profiles in route construction, which is crucial. Note that it is not surprising that the same fair throughput capacity is obtained with or without FSR, since the same routing is used in both cases and hence the same bottleneck is created. On the other hand, an important observation from Figure 3(a) is that, for all the routing metrics used, the optimal throughput is obtained with almost the same number of channels with or without FSR. For instance, 3 channels are needed to obtain the maximal throughput with Blocking and Hop Count routing, and 5 channels with MEER. MCOP yields the best performance with 5 channels with FSR, but needs 6 channels without it. Nevertheless, there is only a slight difference between the throughput capacities obtained in the two cases. This means that the gain obtainable with FSR is not very large if a sufficient number of channels is available, especially since accumulating exact topology information requires exchanging a non-negligible amount of traffic to convey the interference information.
In Figure 3(b), more nodes are used, which implies that more channels can be used [7]. Besides, the larger area enables better reuse of the channels. It is
clear from the figure that MCOP outperforms the other routing approaches, and moreover increases its lead, especially over MEER. Interestingly, it is able to exploit up to 10 channels to increase the throughput capacity. We can also observe that as the area size increases (from 15 km to 25 km), more FSR can be achieved; hence, whenever FSR is used, at least one channel is saved for obtaining the best performance, and two channels in the case of MEER.
6 Related Work
Various joint schemes for multi-channel 802.11-based WMNs have been proposed [8,9,10]. 802.16 networks, however, have different characteristics, such as contention-free transmissions, unlike the 802.11 standard where nodes must contend for the medium. Still, some works can be considered a benchmark for the WiMAX case since they assume TDMA link scheduling for 802.11-based WMNs [11]. These proposals either use mathematical formulations that assume global knowledge of the network, or propose distributed algorithms relying on local knowledge accumulated by exchanging messages in a certain neighborhood. The numerical solving of these models makes them "black boxes", which results in less understanding of the system behavior, for instance of the impact of bottlenecks. Knowing that scheduling and channel assignment are NP-hard [10], we use simple yet efficient algorithms for improving the throughput capacity. These algorithms are sequential. First, a routing tree is built with MCOP or another metric. Then, the scheduling algorithm assigns time slots by transforming the multihop tree into a single-hop cell. Then, by exploiting the available number of channels, it reduces the total scheduling period. Note that it is sufficient to protect the receiver side of a transmission by forbidding SSs that are in its range from transmitting. In many works [12,11] the RTS/CTS/DATA/ACK model was assumed, and hence the sender had to be protected too. In [7], the destination bottleneck constraint on capacity corresponds to the node that is the destination of the maximum number of flows. In the WiMAX case, the BS is the destination of all the flows, but it is not necessarily the bottleneck of the throughput capacity: an SS with fewer flows can be this bottleneck, depending on the data rates of the links incident on it. On the other hand, many works have addressed multiple-channel WiMAX mesh networks [13,14,15,16]. In [15], an iterative tree construction algorithm is presented.
However, constraints are imposed such that an SS cannot be more hops away (from the BS) than a possible parent. In our work, MCOP prefers multi-hopping in order to reduce the bottleneck time and thereby exploit the available channels well. Jiao et al. compared different routing metrics under a proposed centralized scheduling [14]. In this paper, we have shown that routing metrics designed for single-channel use may not be suitable for the multiple-channel case, due to the creation of bottlenecks, and we have therefore proposed heuristic algorithms that reduce these bottlenecks and thus increase the exploitation of multiple channels.
MCOP routing can exploit more channels than other routing metrics since it was designed to reduce the bottleneck of multiple-channel use. The number of channels it can exploit even exceeds the results in [12], where it is found that no further gain can be obtained beyond L + 1 channels, L being the number of radios, and in [16], where the authors recommend using 2L channels.
7 Conclusions and Future Work
In this paper we have proposed scheduling and routing algorithms aimed at enhancing the throughput capacity, but also fairness among SSs, in a WiMAX mesh network. The scheduling algorithm maximizes the utilization of the network and guarantees fairness among the different SSs on an end-to-end basis. The algorithm transforms the multi-hop tree into a single-hop cell by considering the maximal end-to-end rate of each SS. It can enforce any kind of fairness in the single-hop network, and then, returning to the multi-hop tree, it distributes the share of each SS among the different links. We have also proposed a distributed routing algorithm called MCOP, which with each SS entry selects the routes that minimize the time needed by the bottleneck SS. We have shown by simulation the efficiency of this algorithm compared to other approaches in terms of throughput capacity and fairness. MCOP can employ more channels than other metrics, thereby improving the fair throughput capacity. We have also found that frequency reuse can reduce the number of channels needed to obtain the maximal capacity for large networks. Interestingly, MCOP outperforms the other routing metrics. This suggests that it is possible to open the "black box" and understand the real behavior of wireless multihop networks. Accordingly, "simple" routing techniques can significantly enhance the performance of these systems. For instance, MCOP is built on an understanding of the bottlenecks in WiMAX mesh networks; it explicitly accounts for them in route construction, which improves system capacity. Nevertheless, MCOP is proposed as a fast route-construction algorithm in which recalculation of paths after the entry of a new SS is not allowed. In fact, allowing recalculations exposes well-known convergence issues in routing, such as route flaps. Resolving these issues constitutes one of our future works.
We believe that resolving them can yield quasi-optimal performance, since MCOP in its current version approaches the optimum for some topologies. Moreover, MCOP favors multi-hopping, which may incur some delay. In this work we have not studied delay, which constitutes another of our future directions. IEEE 802.16 stipulates that the transmission of the same packet (corresponding to an SS) cannot occur more than once in the same frame. This means that a packet from an SS that is 7 hops away from the BS must wait at least 7 frames to arrive at the BS, which incurs some delay. A possible improvement may be to change the scheduling adopted in the standard, which assigns time slots to each SS in a row.
References
1. Nahle, S., Malouch, N.: Graph-based Approach for Enhancing Capacity and Fairness in Wireless Mesh Networks. In: Proceedings of IEEE Globecom (2009)
2. IEEE 802 Standard Working Group: IEEE Standard for Local and Metropolitan Area Networks–Part 16: Air Interface for Fixed Broadband Wireless Access Systems. Standard 802.16d-2004, IEEE (2004)
3. Technical report, http://www-rp.lip6.fr/ nahle/Files/cc.pdf
4. Gao, Y., Chiu, D.M., Lui, J.C.: Determining the end-to-end throughput capacity in multi-hop networks: methodology and applications. SIGMETRICS Perform. Eval. Rev. 34(1), 39–50 (2006)
5. Nahle, S., Malouch, N.: Joint Routing and Scheduling for Maximizing Fair Throughput in WiMAX Mesh Networks. In: Proceedings of IEEE PIMRC (2008)
6. Wei, H., Ganguly, S., Izmailov, R., Haas, Z.: Interference-aware IEEE 802.16 WiMAX mesh networks. In: Proc. IEEE Vehicular Technology Conference (VTC) (2005)
7. Kyasanur, P., Vaidya, N.H.: Capacity of multi-channel wireless networks: impact of number of channels and interfaces. In: MobiCom 2005. ACM Press, New York (2005)
8. Chiueh, T.-c., Raniwala, A.: Architectures and algorithms for an IEEE 802.11-based multi-channel wireless mesh network. In: INFOCOM (2005)
9. Alicherry, M., Bhatia, R., Li, L.E.: Joint channel assignment and routing for throughput optimization in multi-radio wireless mesh networks. In: MobiCom 2005: Proceedings of the 11th Annual International Conference on Mobile Computing and Networking. ACM Press, New York (2005)
10. Raniwala, A., Gopalan, K., Chiueh, T.-c.: Centralized channel assignment and routing algorithms for multi-channel wireless mesh networks. SIGMOBILE Mob. Comput. Commun. Rev. 8(2), 50–65 (2004)
11. Wang, W., Li, X.Y., Frieder, O., Wang, Y., Song, W.Z.: Efficient interference-aware TDMA link scheduling for static wireless networks. In: MobiCom 2006: Proceedings of the 12th Annual International Conference on Mobile Computing and Networking, pp. 262–273. ACM Press, New York (2006)
12. Kodialam, M., Nandagopal, T.: Characterizing the capacity region in multi-radio multi-channel wireless mesh networks. In: MobiCom 2005: Proceedings of the 11th Annual International Conference on Mobile Computing and Networking, pp. 73–87. ACM Press, New York (2005)
13. Zhou, M.T., Harada, H., Wang, H.G., Ang, C.W., Kong, P.Y., Ge, Y., Su, W., Pathmasuntharam, J.S.: Multi-channel WiMAX mesh networking and its practice in sea. In: IEEE ITST (2008)
14. Jiao, W., Jiang, P., Liu, R., Li, M.: Centralized scheduling tree construction under multi-channel IEEE 802.16 mesh networks. In: Proc. GLOBECOM (2007)
15. Ghiamatyoun, A., Nekoui, M., Esfahani, S.N., Soltan, M.: Efficient Routing Tree Construction Algorithms for Multi-Channel WiMax Networks. In: IEEE ICCCN (2007)
16. Du, P., Jia, W., Huang, L., Lu, W.: Centralized Scheduling and Channel Assignment in Multi-Channel Single-Transceiver WiMax Mesh Network. In: Proc. IEEE Wireless Communications and Networking Conference (WCNC) (2007)
OFDMA Downlink Burst Allocation Mechanism for IEEE 802.16e Networks
Juan I. del-Castillo, Francisco M. Delicado, and Jose M. Villalón
Instituto de Investigación en Informática de Albacete (I3A)
Universidad de Castilla–La Mancha (UCLM), Spain
{juanignacio,franman,josemvillalon}@dsi.uclm.es
Abstract. Orthogonal Frequency Division Multiple Access (OFDMA) is one of the most promising, in-demand and widely researched modulation and access methods for future mobile wireless networks. Emerging technologies such as Worldwide Interoperability for Microwave Access (WiMAX) and Long Term Evolution (LTE) are adopting OFDMA due to its high spectral efficiency, its scalability to large numbers of users and its mobility support. In OFDMA, before data is actually transmitted, it must be mapped into a time-frequency matrix through a resource allocation process. This step is critical for correct network behaviour, and several factors must be taken into account, such as efficiency, Quality of Service (QoS) fulfillment and power consumption. In this paper, we propose a novel approach to the mapping process of IEEE 802.16e networks, and we compare its performance with several existing algorithms. The effectiveness of our proposal is evaluated in different scenarios by means of extensive simulation. Keywords: WiMAX, IEEE 802.16, OFDMA, Resource allocation.
1 Introduction
Forthcoming wireless communication technologies are expected to support a large number of simultaneous users, high spectral efficiency, low power consumption (especially in the case of portable devices) and high-speed mobility. At the physical layer (PHY), Orthogonal Frequency Division Multiple Access (OFDMA) is beginning to be considered the best solution for some 4th Generation (4G) wireless networks. Worldwide Interoperability for Microwave Access (WiMAX), defined by the IEEE 802.16 standard [1], and other multicarrier-based equipment such as Long Term Evolution (LTE) systems [2] are examples of emerging OFDMA-based technologies. OFDMA enables the system to allocate spectral resources in an efficient and flexible way, thanks to the partial usage of the time and frequency domains by several users. However, this gain in flexibility poses a substantial resource allocation challenge.
This work was supported by the Spanish MEC and MICINN, as well as European Commission FEDER funds, under Grants CSD2006-00046 and TIN2009-14475-C04-03. It was partly supported by JCCM under Grants PII2I09-0045-9916 and PEII09-0037-2328.
J. Domingo-Pascual et al. (Eds.): NETWORKING 2011, Part II, LNCS 6641, pp. 250–262, 2011. c IFIP International Federation for Information Processing 2011
Fig. 1. Scheduler and resource allocator are components of the MAC layer
Fig. 2. OFDMA Downlink subframe structure in IEEE 802.16
In the context of the IEEE 802.16 standard, the specific downlink (DL) resource allocation mechanism is not standardized, leaving it open for design differentiation among manufacturers. The resource allocator is one of the components of the Media Access Control (MAC) common part sublayer of an IEEE 802.16 network and, along with the scheduler, is responsible for fulfilling the Quality of Service (QoS) requirements of user connections (Figure 1). High radio resource utilization also depends on the efficiency of the resource allocator. In Figure 1, the scheduler and the resource allocator are enclosed in a dotted box because they can be designed independently or jointly. In an independent design, the goal of the scheduler is to grant bandwidth to the different connections in order to guarantee their QoS requirements, while the resource allocator is responsible for distributing time–frequency resources among users through a process in which packets from the scheduler component are mapped into an OFDMA matrix (Section 2). A joint design must achieve both goals as well, but it can take advantage of channel state information to maximize throughput [3]. In this paper we consider the Partial Usage of Sub-Channels (PUSC) subcarrier permutation mode of the IEEE 802.16 standard. With PUSC, subsets of subcarriers are distributed to form subchannels in order to achieve full channel diversity. In this mode all subchannels are equally adequate for all users, so the resource allocation problem is significantly simplified. The rest of this paper is structured as follows: in Section 2 we define the DL resource allocation problem, highlighting related factors that may affect the performance of IEEE 802.16 networks, and we review the most relevant proposals in the literature.
We state our proposal in Section 3 and we carry out a performance evaluation in Section 4 by means of extensive simulation. Section 5 concludes this paper.
2 OFDMA Downlink Resource Allocation Problem
2.1 DL Subframe Structure and Resource Allocation Restrictions
Point-to-Multipoint (PMP) IEEE 802.16 networks are usually deployed in a cellular architecture [4], where each cell is composed of two types of stations: one
Base Station (BS) that acts as a central controller, and the Subscriber Stations (SSs). All the network traffic flows through the BS, which is in charge of allocating the time-frequency resources among the connections. In an isolated implementation, the resource allocator receives traffic packets from the scheduler component and maps them into the DL subframe. The IEEE 802.16 standard [1] defines the structure of the DL subframe (Figure 2). The DL subframe is usually represented as shown: an OFDMA matrix with the x-axis being time (symbols) and the y-axis being frequency (subchannels). After an initial preamble, needed to allow the synchronization of the SSs, the BS broadcasts the DL-MAP message (in column-wise order). The remaining space in the subframe is then allocated as rectangular data regions, called bursts, through a certain resource allocation mechanism. The position and size of each burst are specified through an Information Element (IE) in the DL-MAP. This implies that the greater the number of bursts, the larger the DL-MAP will be. As the DL-MAP competes for space that could otherwise be used to transmit data, its size should be reduced as much as possible. Bursts must be rectangular in shape, and their sizes must be a multiple of the minimum resource allocation unit (slot). The size of a slot depends on the permutation mode; in the case of PUSC it is 2x1: two symbols in time by one subchannel in frequency. These two restrictions lead to a problem known as overallocation, which occurs when more space than strictly needed is reserved in a certain burst. For example, suppose we have to reserve seven symbols for a certain user. In PUSC mode we could reserve a burst of size 4x2 (a total of eight symbols), leading to a waste of one overallocated symbol.
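The slot-rounding arithmetic behind overallocation can be sketched as follows — a minimal illustration assuming the PUSC 2x1 slot described above; the function name and the "cells" terminology (symbol-subchannel units) are our own:

```python
import math

SLOT_SYMBOLS = 2  # PUSC DL slot: 2 symbols (time) x 1 subchannel (frequency)

def burst_for(demand_cells, height_subch):
    """Smallest rectangular burst of the given height (in subchannels)
    holding `demand_cells` symbol-subchannel cells; the width is rounded
    up to a whole number of slots (multiples of 2 symbols)."""
    width_slots = math.ceil(demand_cells / (height_subch * SLOT_SYMBOLS))
    width_symbols = width_slots * SLOT_SYMBOLS
    capacity = width_symbols * height_subch
    return width_symbols, height_subch, capacity - demand_cells

# The example from the text: seven cells for one user, mapped as 4x2.
print(burst_for(7, 2))  # -> (4, 2, 1): one cell overallocated
```

The last element of the returned tuple is the overallocation, which is what the mapping mechanisms below try to minimize.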
Traffic from several users may be mapped into the same burst under one condition: data in a given burst must be transmitted using the same Modulation and Coding Scheme (MCS). If the resource allocation mechanism is able to group traffic from different users into the same burst, then the number of bursts is reduced, and therefore the size of the DL-MAP is reduced too.

2.2 Allocation Mechanism Design Factors
Resource allocation mechanisms have to be designed taking into account the goals to be achieved and the factors that complicate achieving them. The goals of a resource allocator should be the following: High throughput. High network throughput can be achieved by efficiently mapping the traffic into the OFDMA matrix. This efficiency is inversely proportional to the amount of frame space that is not used to transmit data. This wasted space includes control information (DL-MAP and MAC overhead) and unused space inherent to the allocation process (overallocation and unused slots). As described in Section 2.1, a portion of the DL subframe is used to send the DL-MAP. The size of this message depends directly on the number of bursts. In order to minimize DL-MAP overhead, the number of bursts should be minimized. This can be achieved by grouping traffic from different users into the same burst, provided it is transmitted using the same MCS. As specified in the IEEE
802.16 standard, the DL-MAP may be transmitted multiple times (2, 4 or 6 times) in order to prevent its incorrect reception by the SSs due to channel errors, so in those cases DL-MAP size minimization is even more critical. Traffic packets are mapped into the frame as Protocol Data Units (PDUs). Each PDU includes a 48-bit MAC header, a payload, a 32-bit Cyclic Redundancy Check (CRC) and, in some cases, packing or fragmentation subheaders (8-bit and 16-bit, respectively). This control overhead also affects performance and should be minimized. A resource allocation mechanism may divide a given allocation into several pieces, probably at the cost of added fragmentation overhead. Grouping traffic from the same connection into the same burst makes it possible to take advantage of the packing mechanism established in the standard. Finally, regarding mapping efficiency, there are two factors that may impair throughput. As stated in Section 2.1, due to the mandatory rectangular shaping of bursts and because their size must be proportional to the slot size, some overallocation appears. In addition, a poor resource allocator may leave certain slots completely unassigned to any burst, wasting further space. Both overallocation and unused slots should be minimized. Quality of Service fulfillment. In an isolated scheduler/resource allocator design, the latter should always respect the allocation order established by the former. The traffic prioritization of the scheduler is established to fulfill certain QoS requirements of the user connections. If an allocation mechanism does not respect this order, QoS guarantees may be impaired. Connections may fail to meet their requirements due to this problem, especially those involving real-time applications like VoIP. QoS requirements may include maximum delay, allowed jitter and bit-loss rate. Even best-effort applications may be impaired if a fairness criterion is used by the scheduler.
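The PDU control overhead just described can be quantified with a toy calculator — a sketch using only the header, CRC and subheader sizes given above; the helper name and the simplification that every packed SDU carries one packing subheader are our own assumptions:

```python
MAC_HEADER = 48  # bits, generic MAC header (per the text)
CRC = 32         # bits
PACKING_SH = 8   # bits per packed SDU (per the text)
FRAG_SH = 16     # bits per fragment (per the text)

def pdu_overhead(payload_bits, packed_sdus=1, fragmented=False):
    """Control bits added around a payload when it is sent as one PDU.
    Simplified model: one packing subheader per packed SDU, one
    fragmentation subheader when the payload is a fragment."""
    overhead = MAC_HEADER + CRC
    if packed_sdus > 1:
        overhead += packed_sdus * PACKING_SH
    if fragmented:
        overhead += FRAG_SH
    return overhead

# Splitting a 1000-bit payload into two fragments more than doubles the
# fixed control cost, which is why fragmenting allocations is expensive:
print(pdu_overhead(1000), 2 * pdu_overhead(500, fragmented=True))  # 80 192
```

This is why the mechanisms below try both to pack traffic of the same connection and to avoid splitting allocations across bursts.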
Ideally, a resource allocator should always preserve the order established by the scheduler. As we will see in Section 2.3, some allocation mechanisms [5] alter this order. Power consumption. One important goal for any emerging wireless technology, including IEEE 802.16 networks, is the minimization of the power consumption of client stations. This factor is even more critical for mobile devices, which have a limited energy budget. In the case of OFDMA, the global energy consumption of the served SSs may be reduced by minimizing the average duration of the bursts within a given frame, so that an SS needs to be awake to receive data for a shorter time. Obviously, burst shape affects this factor: bursts spanning few time symbols reduce global awake times. As the DL-MAP is broadcast to all SSs, minimizing its size also reduces the awake period of every SS. Some algorithms [6] specifically address this goal, as we will see in Section 2.3. Algorithm complexity. The resource allocation problem is similar to the "bin packing" problem [7], which is NP-complete. As the frame duration in the OFDMA mode of the IEEE 802.16 standard is short (2–20 ms), the resource allocation algorithm cannot be too complex, ruling out optimal solvers and forcing a heuristic approach. The input determining the complexity of a resource
254
J.I. del-Castillo, F.M. Delicado, and J.M. Villal´ on
allocator will be the number of allocations that need to be mapped into the frame, which in turn depends on the network load (number of connections). Thus, a resource allocator should scale properly as the number of connections increases.

2.3 Related Work
OFDMA resource allocation in IEEE 802.16 networks has attracted much attention in recent years. The authors of [8] propose the Raster algorithm. Frame slots are allocated from left to right and from top to bottom (in rows). The problem with this approach is that, as the DL-MAP grows from left to right, an initial column for data must be established. This does not allow the DL-MAP to grow dynamically when needed, so once the space reserved for the DL-MAP is used up, no more data bursts can be allocated. Also, allocations may be split at the right edge of the frame, so PDU fragmentation increases. Other proposals avoid this problem by assigning slots from right to left or by keeping track of the total width of the mapped bursts. In [9], the authors propose the SDRA algorithm. Allocations are mapped bottom-up and from right to left, allowing the DL-MAP to grow dynamically. During the mapping process, a given allocation is not assigned more slots than it needs, at the cost of splitting it into several (at most three) bursts, introducing some PDU fragmentation. However, the authors assume that all data directed to users with the same MCS are combined into the same data region, which is not directly achievable without altering the scheduling order. Despite this assumption, the allocations are mapped into the frame in order, but the performance of this algorithm highly depends on the size of the allocations: with many small allocations, the DL-MAP grows excessively. The goal of the algorithm proposed in [10] is to reduce the number of bursts by grouping data for different users with the same MCS. This is done by allocating bursts of fixed height, called buckets. The width of a bucket grows as needed when data of the same MCS is progressively introduced. The problem with this mechanism is that it generates a high amount of overallocation, because whole columns are allocated regardless of the actual required space.
However, it allocates only one burst per MCS, so the DL-MAP size is minimal. The authors of [5] propose the eOCSA mapping mechanism. It allocates bursts like SDRA, bottom-up and from right to left, seeking to minimize unused slots and energy consumption (by minimizing burst width) and to allow the DL-MAP to grow dynamically. However, to optimize the mapping process the allocations are initially sorted in decreasing order. This implies that the scheduler order is not preserved, so connections with QoS requirements may be severely impaired. Allocations are not grouped by MCS, so the DL-MAP size is not expected to be minimized. A comparison of three of the above-mentioned algorithms was carried out in [11]. In [6], the authors seek to optimize the receiver awake cycle by reducing the average duration and delay of the bursts within a given frame. Although this is an important goal for a resource allocator, as stated in Section 2.2, the algorithm uses a full-search approach, making it impractical for more than eight users.
OFDMA Downlink Burst Allocation Mechanism for IEEE 802.16e Networks
3 Proposed Allocation Mechanism
In this section we describe a novel proposal for the resource allocation problem described in Section 2. Our proposal is based on the following key concepts, in order to achieve the goals stated in Section 2.2: – Traffic of the same MCS should be grouped into the same burst whenever possible, thereby reducing DL-MAP size and PDU fragmentation. – The mapping process should be efficient enough to avoid unused slots and overallocation as much as possible. – The traffic order established by the scheduler should be preserved, to avoid impairing QoS requirements. – The average duration of bursts should be minimized, in order to achieve a low global power consumption of the served SSs. – The algorithm complexity must make its implementation feasible. The proposed resource allocation mechanism is divided into two phases. In order to allow a dynamic growth of the DL-MAP, the algorithm reserves n columns, which remain available for the DL-MAP to grow at any moment. The reserved columns are only used for data allocations when no other space is available (for the performance evaluation of Section 4 we set n = 1 reserved column). We define the concept of container as a set of columns of the DL subframe. In the first phase containers are created, one per processed MCS, each initially holding just one burst. The width of a container is the same as that of the burst it contains. When the first phase ends, the widths of the containers become fixed and do not change during the whole second phase.
Fig. 3. Flow chart of the first phase. (Available frame width is checked considering the DL-MAP and the reserved columns; container widths become fixed at the end of this phase.)
Fig. 4. Flow chart of the second phase. (When updating, the algorithm searches for a burst of the same MCS with enough space for the packet, or with enough space in its container to grow vertically and fit the packet. When creating a burst, if the computed height exceeds the free height of the container, the free height is used instead and the packet may be fragmented.)
Figure 3 shows a flow chart of the first phase. The width (W) and height (H) (re)computed in the first phase are given by:

W = ⌈A / H_DL⌉    (1)
H = ⌈A / W⌉    (2)
where A is the area in slots of the packet being mapped, H_DL is the height in slots of the DL subframe, and W and H are the computed burst width and height, respectively. In the case of a packet of an already processed MCS, the packet is grouped into the existing burst, recomputing its width and height. The first phase ends when the sum of the widths of the containers plus the width of the DL-MAP takes up the whole DL subframe (except for the n reserved columns). At the end of the first phase, fixed-width containers have been established (one for each MCS), and unused space remains below the initial bursts. During the second phase those unused areas are mapped (Figure 4). In the case of a packet of an existing MCS we proceed as in the first phase. If an existing burst of that MCS can hold the packet (increasing its height if needed and if there
Fig. 5. Example of the mapping process with our proposal: (a) during the 1st phase, (b) end of the first phase, (c) 2nd phase (updating a burst), (d) 2nd phase (best-fit area), (e1) 2nd phase, case 1 (reserved column used for data), (e2) 2nd phase, case 2 (reserved column used for the DL-MAP)
is enough area in the container), then the packet is grouped into that burst. If there is no burst of that MCS, or if the existing burst cannot grow vertically to hold the packet, then a new burst is created. All newly created bursts have the width of their containers. To select a free area for the new burst, we first use a best-fit criterion. If the new burst does not completely fit in that area, then the biggest area is selected, whether it fits completely or not (the packet may be fragmented in this case). When all free areas are used, we proceed by freeing one reserved column. The process finishes when all space is used. Figure 5 shows an example of the mapping process of our proposal. Figures 5(a) and 5(b) depict the first phase of the algorithm. It can be seen that packet D uses the same MCS as packet C, so they are grouped into the same burst, recomputing the width and height of that burst. The first phase finishes when there is no more free width. Figures 5(c)–5(e2) depict the second phase, where bursts can be updated and/or created. Figure 5(e1) covers the mapping of packet H1 (case 1 in the figure), where the reserved column is used for data. Figure 5(e2) shows the mapping of packets H2, I and J (case 2 in the figure), where the reserved column is used for the DL-MAP instead.
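The first phase can be sketched in a few lines. This is a simplified model, not the full mechanism: the subframe height (8 slots) and usable width (12 slot columns, with the DL-MAP and reserved columns assumed already subtracted) are example values, burst heights and the whole second phase are omitted, and all names are our own:

```python
import math

H_DL = 8          # DL subframe height in slots -- example value
FRAME_WIDTH = 12  # usable width in slot columns -- example value

def dims(area, h_max):
    """Eqs. (1)-(2): burst width and height in slots for an `area`-slot
    packet, rounded up to whole slots."""
    w = math.ceil(area / h_max)
    h = math.ceil(area / w)
    return w, h

def phase1(packets):
    """One container (holding one burst) per MCS until the frame width is
    exhausted. Containers are kept as {mcs: [width, grouped_area]};
    returns them together with the packets left for the second phase."""
    containers, used_width = {}, 0
    for i, (mcs, area) in enumerate(packets):
        if mcs in containers:                   # group into the existing burst
            c = containers[mcs]
            w, _ = dims(c[1] + area, H_DL)      # recompute width and height
            if used_width - c[0] + w > FRAME_WIDTH:
                return containers, packets[i:]  # no available frame width
            used_width += w - c[0]
            c[0], c[1] = w, c[1] + area
        else:                                   # create burst and container
            w, _ = dims(area, H_DL)
            if used_width + w > FRAME_WIDTH:
                return containers, packets[i:]
            containers[mcs] = [w, area]
            used_width += w
    return containers, []

packets = [(1, 8), (2, 7), (3, 4), (3, 6), (4, 4)]  # (MCS, slots)
containers, rest = phase1(packets)
print(containers)  # -> {1: [1, 8], 2: [1, 7], 3: [2, 10], 4: [1, 4]}
```

Note how the two MCS-3 packets are grouped into one burst (like packets C and D in Figure 5), widening its container from one slot column to two.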
4 Performance Evaluation
In this section we compare the performance of our algorithm with three previously proposed allocation mechanisms, by means of extensive simulation. We have selected SDRA [9], Ohseki et al. [10] and eOCSA [5] for the evaluation. This selection is based on the results and conclusions obtained in [11]. In the case of our proposal we reserve one column (n = 1), as stated in Section 3. For our simulations we consider a single–cell PMP IEEE 802.16 network. The physical bandwidth is 10 MHz which corresponds to a 1024–FFT. We only
consider the DL direction and we use the PUSC permutation mode. Frame duration is 5 ms and the duplexing technique is Time Division Duplexing (TDD), with 50% of the frame duration devoted to the DL. The wireless channel is modelled as a Rayleigh fading channel based on the ITU Pedestrian A multi-path model [12]. Because of this, SSs change their MCS during the simulation in order to use the most efficient scheme possible. Two different scenarios are simulated: one where the DL-MAP is transmitted once (x1) and another where the DL-MAP is transmitted four times (x4), as allowed by the standard. In order to check the behaviour of the algorithms under different network loads, we set one BS and a variable number of SSs (from 30 to 80). DL traffic is generated as a mix of traffic models for different types of users: 65% individual subscribers, 20% small businesses and 15% medium businesses. Traffic consists of voice and best-effort classes, modelled as specified in [13]. We use a strict-priority scheduler, which gives precedence to packets according to their service class. Voice packets with an end-to-end delay greater than 10 ms are dropped by the scheduler. Each scheduled packet is forwarded to the mapper as an allocation to insert into the OFDMA matrix. For each scenario we simulate 3 minutes of operation, measuring statistics after the first 10 seconds. We run 30 different executions for each scenario, in order to obtain the average value and a 95% confidence interval for each metric. Confidence intervals are, however, not drawn, since they are negligible with respect to the estimated average.

4.1 Metrics
The following metrics are defined in order to evaluate and compare the performance of our proposal and the other resource allocators. These metrics are based on the factors defined in Section 2.2. – DL subframe waste (%): the sum of unused slots, overallocation, DL-MAP overhead and MAC overhead. Each component is obtained by dividing the number of symbols it takes (e.g., overallocated symbols) by the total number of symbols of the DL subframe. – DL throughput efficiency (%): the total traffic sent by the BS to the SSs divided by the total DL traffic that arrives at the BS. – Skipped traffic (%): as a resource allocator may change the packet order initially established by the scheduler, some traffic may be skipped in a given frame. This metric is computed as:

S = 100 · B_skipped / B_total    (3)

where B_skipped is the sum in bits of the skipped packets according to the scheduler order and B_total is the sum in bits of all the processed packets. – Bit-loss rate (%): the total traffic dropped by the scheduler (deadline) divided by the total DL traffic that arrives at the BS.
– Active time (%): this metric indicates the average active time of all SSs. For each frame it is computed as:

A_t = 100 · [ Σ_{i=1}^{n} ( W_0 + Σ_{j=1}^{m_i} W_{i,j} ) ] / (n · W)    (4)

where n is the number of SSs, m_i is the number of bursts that contain data intended for the i-th SS, W_0 is the width in slots of the DL-MAP region, W_{i,j} is the width in slots of the j-th burst of the i-th SS, and W is the total width in slots of the DL subframe.

4.2 Simulation Results
We start by evaluating DL subframe waste and throughput. Figures 6 and 7 depict waste (vertical bars) and throughput (lines) for the one and four DL-MAP repetition scenarios, respectively. In both cases it can be seen that the total subframe waste of our proposal (O.P.) is the lowest of the four algorithms. As network load increases, our proposal is able to maintain unused slots and overallocation at very low values. In the first scenario (one repetition), the eOCSA algorithm generates more overallocation and leaves more slots unused, because its allocation performance highly depends on the size distribution of the arriving packets. Our proposal groups packets of the same MCS more efficiently. MAC overhead is greater with O.P., but the overall waste is still lower because we only fragment packets in some specific cases of the process.
Fig. 6. DL subframe waste and DL throughput efficiency (DL-MAP x1)
Fig. 7. DL subframe waste and DL throughput efficiency (DL-MAP x4)
In the case of the second scenario (four repetitions), the eOCSA algorithm behaves significantly worse because the effect of DL-MAP overhead is greater, and eOCSA generates big DL-MAPs since it does not group packets of the same MCS. Ohseki and O.P. group packets, generating smaller DL-MAPs, but Ohseki suffers from overallocation because it reserves whole columns for individual MCSs, so much of that space is wasted. SDRA generates overly large DL-MAPs, because it does not group packets and a given allocation may even be divided into up to three bursts. DL throughput efficiency is inversely proportional to DL waste, so our proposal performs better than the other three, as seen in Figures 6 and 7. Figures 8 and 9 show the skipped traffic and bit-loss rate (BLR) of voice packets for both scenarios. It can be seen that the eOCSA algorithm does not preserve the scheduler order, because it first sorts allocations by size. Although this may improve the performance of the mapping process, it also has a harmful effect on voice BLR. With four DL-MAP repetitions the space available for data transmission is lower, so even though the relative disorder is lower, its effect on BLR is greater. On the other hand, Ohseki skips packets from time to time, but this does not significantly affect BLR. Our proposal and the SDRA algorithm never disorder traffic, because packets are processed in strict order, so no voice packets are dropped and the BLR is zero. The average SS active time (%) is shown in Table 1. In the one-repetition scenario there is not much difference between the four algorithms, and active time is relatively low. When we transmit the DL-MAP four times in a
Fig. 8. Skipped traffic and BLR (x1)
Fig. 9. Skipped traffic and BLR (x4)
Table 1. Active time (%)

DL-MAP  Alg.      30   40   50   60   70   80
x1      SDRA      22   22   22   22   21   21
x1      Ohseki    22   24   24   24   23   23
x1      eOCSA     21   21   21   21   21   20
x1      O.P.      22   24   23   24   23   23
x4      SDRA      30   37   38   37   37   37
x4      Ohseki    22   24   24   24   23   23
x4      eOCSA     25   29   30   31   31   31
x4      O.P.      22   24   25   30   30   30

(95% confidence intervals, all between ±1·10^-5 and ±5·10^-3, are omitted as negligible.)
frame, more variation is observed. In this case Ohseki consumes less energy than any other algorithm, because it generates very small DL-MAPs and packets are arranged in columns. Our algorithm behaves better than eOCSA and SDRA, also because its DL-MAP size is relatively low and we try to minimize burst width throughout the allocation process. Finally, we compare the algorithmic complexities of the four algorithms. SDRA and Ohseki have linear complexity, O(N), where N is the number of allocations that need to be mapped in a given frame. They are simple and fast algorithms, and they scale very well. On the other hand, eOCSA has a complexity of O(N^2), since it iterates over the allocation list seeking the best one to fit the next hole. Our proposal achieves a worst-case complexity of O(N log N), due to the need to maintain an ordered list of holes; this makes a real implementation perfectly feasible.
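The ordered hole list behind the second-phase area selection can be sketched with a sorted sequence — a one-dimensional simplification in which a hole is just its size in slots plus an identifier; all names are our own. `bisect` gives the logarithmic best-fit lookup, although a balanced tree would be needed to make insertion and removal logarithmic as well:

```python
import bisect

class HoleList:
    """Free areas kept sorted by size, so a best-fit lookup is a binary
    search. A sketch, not the full two-dimensional bookkeeping of the
    actual allocator."""
    def __init__(self):
        self._holes = []  # list of (size_in_slots, hole_id), kept sorted

    def add(self, size, hole_id):
        bisect.insort(self._holes, (size, hole_id))

    def best_fit(self, need):
        """Smallest hole that still fits `need`; when none is large
        enough, the biggest hole is taken (the packet may then have to
        be fragmented, as in the second-phase rule)."""
        i = bisect.bisect_left(self._holes, (need,))
        entry = self._holes[i] if i < len(self._holes) else self._holes[-1]
        self._holes.remove(entry)
        return entry

holes = HoleList()
for size, hid in [(3, "A"), (10, "B"), (6, "C")]:
    holes.add(size, hid)
print(holes.best_fit(5))   # -> (6, 'C'): smallest hole that fits
print(holes.best_fit(20))  # -> (10, 'B'): biggest area, may fragment
```

Each packet triggers one binary search over the holes, which is where the O(N log N) worst case for N allocations comes from.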
5 Conclusions
In this paper, we have proposed a new resource allocation algorithm for OFDMA DL of IEEE 802.16 systems. It is clear that the process of resource allocation is not straightforward, and that there are several factors that may affect it.
262
J.I. del-Castillo, F.M. Delicado, and J.M. Villalón
A correct and clever balance of these factors is needed to achieve the desired goals; usually, tuning one factor up implies worsening others. Through computer simulation, it has been confirmed that our proposal performs better than the other three evaluated algorithms, preserving QoS ordering while keeping power consumption and algorithmic complexity low.
References
1. IEEE 802.16-2009: IEEE Standard for Local and Metropolitan Area Networks - Part 16: Air Interface for Broadband Wireless Access Systems (2009)
2. Seba, V., Modlic, B.: Multiple Access Techniques for Future Generation Mobile Networks. In: International Symposium Electronics in Marine, pp. 339–344 (2005)
3. Gutiérrez, I., Bader, F., Aquilué, R., Pijoan, J.L.: Contiguous Frequency-Time Resource Allocation and Scheduling for Wireless OFDMA Systems with QoS Support. EURASIP Journal on Wireless Communications and Networking (2009)
4. Andrews, J., Ghosh, A., Muhamed, R.: Fundamentals of WiMAX: Understanding Broadband Wireless Networking. Prentice-Hall, Englewood Cliffs (2007)
5. So-In, C., Jain, R., Al-Tamimi, A.: eOCSA: An Algorithm for Burst Mapping with Strict QoS Requirements in IEEE 802.16e Mobile WiMAX Networks. In: IEEE Wireless Communication and Networking Conference (2008)
6. Desset, C., de Lima, E.B., Lenoir, G.: WiMAX Downlink OFDMA Burst Placement for Optimized Receiver Duty-Cycling. In: International Conference on Communications, pp. 5149–5154 (2007)
7. Johnson, D.S.: Fast Algorithms for Bin Packing. Journal of Computer and System Sciences, 272–314 (1974)
8. Ben-Shimol, Y., Kitroser, I., Dinitz, Y.: Two-Dimensional Mapping for Wireless OFDMA Systems. IEEE Transactions on Broadcasting 52(3), 388–396 (2006)
9. Bacioccola, A., Cicconetti, C., Lenzini, L., Mingozzi, E., Erta, A.: A Downlink Data Region Allocation Algorithm for IEEE 802.16e OFDMA. In: International Conference on Information, Communication and Signal Processing, pp. 1–5 (2007)
10. Ohseki, T., Morita, M., Inoue, T.: Burst Construction and Packet Mapping Scheme for OFDMA Downlinks in IEEE 802.16 Systems. In: GLOBECOM 2007, pp. 4307–4311 (2007)
11. Del-Castillo, J.I., Delicado, F.M., Delicado, J., Villalon, J.M.: OFDMA Resource Allocation in IEEE 802.16 Networks: A Performance Comparative. In: Wireless and Mobile Networking Conference (WMNC), pp. 1–6 (2010)
12. WiMAX Forum: WiMAX System Methodology v2.1, p. 230 (2008)
13. Baugh, C.R., Huang, J.: Traffic Model for 802.16 TG3 MAC/PHY Simulations. IEEE 802.16 BWA Working Group (2001)
Adaptive On-The-Go Scheduling for End-to-End Delay Control in TDMA-Based Wireless Mesh Networks*

Yung-Cheng Tu1, Meng Chang Chen2, and Yeali S. Sun3

1 Research Center for Information Technology Innovation, Academia Sinica, Taipei, Taiwan
2 Institute of Information Science, Academia Sinica, Taipei, Taiwan
3 Department of Information Management, National Taiwan University, Taipei, Taiwan
Abstract. Providing an end-to-end delay bound for real-time applications is a major challenge in wireless mesh networks (WMNs) because the bandwidth requirements of flows vary over time and the channel condition is unstable due to wireless interference between links. In this paper, we present a two-stage slot allocation mechanism for TDMA-based WMNs. First, we assume that the bandwidth requirement of each flow is given in the form of a range and use a distributed algorithm to pre-allocate time slots to each link. Then, we implement an On-The-Go scheduling scheme, which enables each link to schedule its transmission time promptly without coordinating with others. In contrast to traditional approaches, our method allows a degree of control over the collision probability, yet it requires only a few control messages and has lower computational overhead. The simulation results show that our mechanism performs efficiently and flexibly in supporting real-time applications in WMNs.

Keywords: TDMA-based scheduling; wireless mesh networks; slot allocation.
1 Introduction

Owing to their affordability and ease of deployment, wireless mesh networks (WMNs) have been developed as cost-efficient networking platforms to support ubiquitous broadband Internet access in several cities [1][2][3]. A communication flow from source to destination in a WMN usually requires multiple hops. Because the available bandwidth of each hop varies with the current load and the interference between wireless links, supporting end-to-end quality of service (QoS) guarantees for communication flows in WMNs is a challenging task.

In this paper, we focus on QoS provisioning for real-time applications, such as VoIP, video conferencing and multimedia streaming. Real-time applications usually generate variable-bit-rate (VBR) traffic and require an end-to-end QoS guarantee. It is known that contention-based protocols, like CSMA/CA used in IEEE 802.11 [4], cannot provide strict QoS guarantees because they result in service unfairness between wireless links [5] in multi-hop wireless networks. In contrast, time division multiple access (TDMA) based MAC protocols, such as the 802.16 mesh protocol [6] *
The project was partly supported by NSC Taiwan under contract NSC98-2221-E-001005-MY3.
J. Domingo-Pascual et al. (Eds.): NETWORKING 2011, Part II, LNCS 6641, pp. 263–274, 2011. © IFIP International Federation for Information Processing 2011
and the 802.11s mesh coordination function (MCF) coordinated channel access protocol [7], provide collision-free communications and allow fine control of the throughput and delay of network traffic.

Several TDMA-based scheduling algorithms with different objective functions have been proposed for WMNs, e.g., maximizing system throughput [8][9][10], fairness [10][11], and flow utility [12][13], or minimizing end-to-end transmission delays [14]. These algorithms can be classified into two types: centralized and distributed. With a centralized algorithm, the central controller requires a substantial amount of time to collect the bandwidth requirements of all links, apply the scheduling algorithm and deliver the scheduling results to all the mesh nodes. In contrast, a distributed scheduling algorithm is applied by each mesh node without information about the whole network; however, coordination between the mesh nodes is necessary to ensure that the transmissions of different links do not conflict with each other. As a result, the coordination mechanism incurs extra overhead and scheduling waiting time [8][11][13][15].

One of the major challenges in providing end-to-end QoS guarantees for real-time application flows in TDMA-based WMNs is how to rapidly adjust the slot allocation of each link when the traffic load of flows and the network condition vary frequently. As mentioned above, for both centralized and distributed scheduling algorithms, there is a certain amount of latency between the time a link requests scheduling and the time it gets the scheduling result. Therefore, existing TDMA-based scheduling algorithms assume that the state of each link is fixed; when a link's state changes, it is usually necessary to re-schedule the transmission time of all the links in the network. To address this problem, we propose an adaptive on-the-go scheduling scheme that allocates time slots for each link dynamically.
The contributions of this work are as follows:
1. We propose a two-stage slot allocation mechanism for TDMA-based WMNs. The first stage provides minimum end-to-end QoS guarantees to all real-time application flows; the second stage allows each link to adjust its transmission time dynamically to maximally satisfy the QoS requirements of all flows on the link and prevent transmission collisions with other links.
2. In contrast to traditional conflict-free slot allocation schemes, we allocate both conflict-free slots and multi-access slots to each link. The transmissions in the conflict-free slots of a link are not interfered with by those of other links. For the multi-access slots, we can control the number of interfering nodes and maximize network utilization by selecting appropriate transmission time slots for each link.
3. We present an adaptive on-the-go scheduling scheme in which each link can schedule its transmission times without coordinating with other links. The mechanism performs more flexibly on real-time application flows than traditional one-stage conflict-free slot allocation schemes.

The remainder of this paper is organized as follows. In Section 2, we define the network and system model of our work. In Section 3, we present our two-stage slot allocation mechanism. In Section 4, we introduce the adaptive on-the-go scheduling scheme for the two-stage slot allocation mechanism. In Section 5, we evaluate the scheme's performance via simulations. Finally, we conclude in Section 6.
2 Network and System Models

We model a WMN by a directed network graph NG=(N,V), where N={na,nb,nc,...} is the set of nodes and V={v1,v2,v3,...} is the set of directed links. In a WMN, a node na can transmit data to another node nb if nb is in the transmission range DTR of na. A link from node na to node nb means that na has the ability to transmit data to nb. Two links in a WMN interfere with one another if they cannot transmit packets simultaneously. In this paper, we adopt the protocol model proposed in [16] as our interference model. Under this model, a transmission from node na to node nb is successful if and only if no other node within the interference range DIR of nb is transmitting at the same time. With the protocol model, we can construct an undirected contention graph CG=(V,E), where V is the same as above and an edge (vk,vl) is in E if links vk and vl cannot transmit data simultaneously. Then, we define link vk's neighbor set as NB(vk)={vl | (vk,vl) ∈ E}.

A real-time application flow fi in a WMN consists of a routing path, defined as Pathi={vk | vk ∈ V, fi passes through the link vk}, and the range of the flow's demand rate (ri,k^min, ri,k^max) at all vk ∈ Pathi, where ri,k^min and ri,k^max are, respectively, the minimum and maximum demand rates of fi at link vk. The bandwidth requirement of link vk is then bounded by (Rk^min, Rk^max), where Rk^min is the summation of ri,k^min over all flows that pass through vk; Rk^max is obtained in a similar manner.

Like most TDMA-based protocols, we divide the timeline into recurrent frames, each comprising M fixed-length time slots. Suppose the demand rate of flow fi at link vk in frame t is ri,k(t) and Ck is the channel capacity of link vk. We can estimate the total bandwidth requirement Rk(t) and slot requirement Tk(t) of link vk in frame t as follows:

Rk(t) = Σ_{i | vk ∈ Pathi} ri,k(t)    (1)

Tk(t) = ⌈M · Rk(t) / Ck⌉    (2)

Similarly, we can derive the minimum and maximum slot requirements, denoted Tk^min and Tk^max, of vk from Rk^min and Rk^max. We assume that the demand rate ri,k(t) of each flow i at link vk lies between ri,k^min and ri,k^max, so the slot requirement of link vk is bounded by (Tk^min, Tk^max).
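As a concrete check of Eqs. (1)-(2), the following sketch computes Rk(t) and Tk(t) for one link; the traffic figures in the usage note are illustrative, while M = 50 slots and an 11 Mbps capacity match the simulation settings used later in Table 1.

```python
import math

def slot_requirement(demand_rates, M, capacity):
    """Eqs. (1)-(2): total bandwidth requirement R_k(t) of link v_k and the
    slot requirement T_k(t) = ceil(M * R_k(t) / C_k).  Demand rates and
    capacity must share one unit (Kbps here)."""
    R_k = sum(demand_rates)                # Eq. (1): sum over flows on v_k
    T_k = math.ceil(M * R_k / capacity)    # Eq. (2)
    return R_k, T_k
```

For example, three 512 Kbps flows sharing one link give Rk = 1536 Kbps and Tk = 7 of the 50 slots per frame: `slot_requirement([512, 512, 512], 50, 11000)` returns `(1536, 7)`.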
3 The Two-Stage Slot Allocation Mechanism The system framework of our two-stage slot allocation mechanism with on-the-go scheduling is shown in Fig. 1. The demand rate of a flow fi is estimated based on the traffic load, network condition and the end-to-end QoS requirement of the flow. Any bandwidth estimation algorithm that supports end-to-end delay control, such as the per-node delay assignment scheme [17] or the bulk scheduling scheme [18], can be used in our mechanism.
Fig. 1. The framework of the two-stage slot allocation mechanism with On-The-Go scheduling
In the first stage, we pre-allocate slots to each link vk according to the link's minimum and maximum slot requirements. Since Tk^min and Tk^max are determined by the minimum and maximum demand rates of all flows on vk, slot pre-allocation is only required when a flow joins or leaves the link. It is not necessary to dynamically adjust the pre-allocated slots as the traffic load varies; therefore, coordination between the links is allowed in this stage. The second stage of our slot allocation mechanism implements the on-the-go scheduling scheme. The scheme dynamically selects slots from those pre-allocated in the first stage, so as to maximally satisfy the immediate bandwidth requirements of all flows on the link while preventing transmission collisions with other links.

3.1 Conflict-Free and Multi-Access Slots

The objective of the slot pre-allocation scheme is to allocate slots for each link vk such that Tk^min is guaranteed and Tk^max is satisfied as much as possible in the second stage. Our pre-allocation scheme therefore assigns two types of slots to each link vk: conflict-free slots and multi-access slots. The former are not allocated to any neighbor of vk, while the latter can be allocated to some of its neighbors. Let mk and m'k be, respectively, the number of conflict-free slots and of all pre-allocated slots. In our pre-allocation scheme, mk equals Tk^min, and m'k must be no less than Tk^min but cannot exceed Tk^max. Our scheme has the same constraint as [14], which requires that a link's pre-allocated slots be contiguous in order to reduce the overhead of coordination between links. In addition, we assume that each active link contains at least one conflict-free slot, to prevent flow starvation.
The conflict-free slots of each link must then also be contiguous, because pre-allocated slots lying between two conflict-free slots of a link vk cannot be pre-allocated to any neighbor of vk; otherwise the pre-allocated slots of vk would be fragmented. Thus the pre-allocated slots of each link can be divided into three periods, as in Fig. 2. Suppose the starting positions of all pre-allocated slots and of the conflict-free slots of link vk in the frame table are slot s'k and slot sk, respectively. We define the three periods of the pre-allocated slots of vk, namely the head period Zk^head, the body period Zk^body and the tail period Zk^tail, as follows:

Zk^head = {i | Δ(s'k, i) < Δ(s'k, sk)}    (3)

Zk^body = {i | Δ(s'k, sk) ≤ Δ(s'k, i) < Δ(s'k, sk + mk)}    (4)

Zk^tail = {i | Δ(s'k, sk + mk) ≤ Δ(s'k, i) < m'k}    (5)
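Eqs. (3)-(5) amount to comparing modular distances from s'k. A small sketch that classifies a slot into the head, body or tail period (parameter names follow the text; the example layout in the usage note is invented):

```python
def dist(i, j, M):
    """Delta(i, j) = (j - i) mod M: distance from slot i to slot j in
    recurrent frames of M slots."""
    return (j - i) % M

def period_of(i, s_prime, s, m, m_prime, M):
    """Classify slot i for a link whose pre-allocated slots start at s_prime,
    whose conflict-free slots start at s, with m conflict-free and m_prime
    total pre-allocated slots (Eqs. 3-5); slots outside the allocation are
    'idle'."""
    if dist(s_prime, i, M) >= m_prime:
        return "idle"                                  # not pre-allocated
    if dist(s_prime, i, M) < dist(s_prime, s, M):
        return "head"                                  # Eq. (3)
    if dist(s_prime, i, M) < dist(s_prime, s + m, M):
        return "body"                                  # Eq. (4)
    return "tail"                                      # Eq. (5)
```

With M = 20, s'k = 3, sk = 6, mk = 4 and m'k = 10, slots 3-5 form the head, 6-9 the body and 10-12 the tail, while slot 15 is idle.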
Fig. 2. The head, body and tail periods of a link
where Δ(i, j) = (j − i) mod M represents the distance from slot i to slot j in recurrent frames. The relationships between the three periods are shown in Fig. 2. A transmission of link vk in a slot i during its head period will only compete for transmission opportunities with a link vl where vl ∈ NB(vk) and vl's tail period Zl^tail contains slot i. Likewise, a transmission of vk in a slot j ∈ Zk^tail will only compete with a link vm where vm ∈ NB(vk) and j ∈ Zm^head. Thus, the contention degree of a link vk in our two-stage slot allocation mechanism can be bounded. Let CG(vk) be the subgraph of CG that contains only the neighbors of vk and the edges between those neighbors. Under our pre-allocation scheme, the number of links that compete for transmission opportunities with vk in any time slot is not greater than the size of the maximum independent set of CG(vk), because the links that can compete with vk in any given slot cannot be neighbors of each other.

3.2 The Slot Pre-Allocation Scheme

The objective of our slot pre-allocation scheme is to maximally satisfy the slot requirements of all links. The pre-allocation scheme can be centralized, or distributed with some coordination. However, a scheduler needs information about all the links in the WMN to obtain the global optimal solution, which incurs a large communication overhead and requires a great deal of computation time. Moreover, because all links in a WMN must adjust their pre-allocated slots once a link changes its pre-allocation, each link's pre-allocation would change frequently. This results in instability in the short-term throughput of each link and higher delay jitter between links. Therefore, we consider the following local optimization problem for each link, which can be solved by a distributed pre-allocation algorithm:

Given: M, Tk^min, Tk^max, NB(vk), and (s'l, sl, ml, m'l) of all vl ∈ NB(vk)
Find: s'k, sk, mk, m'k
Maximize: m'k
s.t.  0 < mk = Tk^min ≤ m'k ≤ Tk^max
      0 ≤ Δ(s'k, sk) ≤ m'k − mk
      Δ(s'k, sk) ≤ Δ(sl + ml, sk), ∀vl ∈ NB(vk)
      Δ(sk + mk, s'k + m'k) ≤ Δ(sk + mk, sl), ∀vl ∈ NB(vk)
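The constraint set of this local optimization problem can be checked directly for a candidate tuple (s'k, sk, mk, m'k); the sketch below encodes only the feasibility test, with the search over candidates left out, and all example values in the test are invented.

```python
def dist(i, j, M):
    """Delta(i, j) = (j - i) mod M."""
    return (j - i) % M

def feasible(cand, neighbors, M, T_min, T_max):
    """Check a candidate pre-allocation (s'_k, s_k, m_k, m'_k) of link v_k
    against the constraints of the local optimization problem.  `neighbors`
    holds the (s'_l, s_l, m_l, m'_l) tuples of all v_l in NB(v_k)."""
    sp_k, s_k, m_k, mp_k = cand
    # 0 < m_k = T_k^min <= m'_k <= T_k^max
    if not (0 < m_k == T_min <= mp_k <= T_max):
        return False
    # 0 <= Delta(s'_k, s_k) <= m'_k - m_k
    if not (0 <= dist(sp_k, s_k, M) <= mp_k - m_k):
        return False
    for sp_l, s_l, m_l, mp_l in neighbors:
        # Delta(s'_k, s_k) <= Delta(s_l + m_l, s_k)
        if dist(sp_k, s_k, M) > dist(s_l + m_l, s_k, M):
            return False
        # Delta(s_k + m_k, s'_k + m'_k) <= Delta(s_k + m_k, s_l)
        if dist(s_k + m_k, sp_k + mp_k, M) > dist(s_k + m_k, s_l, M):
            return False
    return True
```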
To solve this problem, each node only needs to gather the pre-allocated slots, which consist of the head, body and tail periods, from all neighbors; the problem can then be solved by a linear search. Since more than one region may meet the above
requirements, choosing an appropriate one is also part of the pre-allocation scheme. In this paper, we choose to pre-allocate slots in the longest available period for vk, to achieve the objective of maximizing m'k. Any existing TDMA-based distributed slot allocation protocol can be applied to our pre-allocation scheme with a small modification in which the allocated slots are represented by the four parameters (s'k, sk, mk, m'k). For example, slot allocations in IEEE 802.16 are performed by a three-way handshake: request, grant and confirm. To apply our mechanism in IEEE 802.16, a node sends a request message containing the minimum and maximum slot requirements (Tk^min, Tk^max) to its receiver only when the bandwidth requirement (Rk^min, Rk^max) of the link changes. The receiver schedules a pre-allocation as described above, broadcasts the result as a grant in the (s'k, sk, mk, m'k) format and waits for the confirm from the sender. Thus, each link in the WMN knows the pre-allocation of its neighbors and keeps a list of available conflict-free and multi-access slots, as in Fig. 3. Our algorithm only needs to retrieve all available slots and find the feasible allocation with maximum m'k. The complexity of the algorithm is O(M) because the maximum number of available slots is M.
Fig. 3. The illustration of retrieving the frame to find the feasible allocations
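The region-selection step above — choose the longest available period, found by a linear O(M) scan of the recurrent frame — can be sketched as follows; representing neighbours' occupancy as a flat set of blocked slots is a simplification, since the real scheme also distinguishes multi-access head and tail slots.

```python
def longest_free_run(occupied, M):
    """Linear O(M) scan for the longest run of free slots in a recurrent
    frame of M slots.  `occupied` is the set of slots already taken by
    neighbours (a simplification of the scheme's per-period bookkeeping)."""
    best_start, best_len = None, 0
    start, length = None, 0
    for i in range(2 * M):              # scan twice so wrap-around runs are seen
        if (i % M) not in occupied and length < M:
            if start is None:
                start = i % M
            length += 1
            if length > best_len:
                best_start, best_len = start, length
        else:
            start, length = None, 0
    return best_start, best_len
```

With M = 10 and slots {2, 3, 7} occupied, the longest free run starts at slot 8 and wraps around for 4 slots.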
4 On-The-Go Scheduling Scheme

In the previous section, we introduced the pre-allocation scheme for each link vk. However, we still need an efficient scheduling scheme to select slots from the pre-allocated slots to transmit data in each frame. Therefore, we propose the on-the-go scheduling scheme, which 1) chooses slots dynamically for transmissions without coordinating with other links or waiting for scheduling from a central server; and 2) prevents collisions between links and maximizes the network throughput. Specifically, the on-the-go scheduling scheme assigns an index value to each slot and chooses slots according to the immediate slot requirement and the index values.

4.1 Index Values and Transmission Slot Selection

In the following, we use xk to denote the vector of all index values of link vk, where xk(i) is the i-th element of xk and represents the index value of slot i of link vk. With our design, the index value of a slot in the body period is zero and that of a slot in the idle period is infinite. Each multi-access slot has a unique integer index value between 1 and m'k − mk. Let xk^max(t) denote the maximum index value of slots in which vk may transmit data in frame t.
4.2 The Near-Body-First (NBF) Policy

In our mechanism, the slot with the smaller index value has higher priority to be selected for transmission. The vector xk is used to prevent transmission collisions and maximize the network throughput. This is achieved by the Near-Body-First policy. An index value assignment xk of vk is a Near-Body-First (NBF) index assignment if the index values of any two adjacent slots i and j, j = (i + 1) mod M, of vk satisfy the constraints:

xk(i) > xk(j), if i, j ∈ Zk^head
xk(i) < xk(j), if i, j ∈ Zk^tail    (6)
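Constraint (6) is easy to verify mechanically: index values must strictly decrease toward the body within the head period and strictly increase away from it within the tail. A small checker sketch (the slot numbering in the test is illustrative):

```python
def is_nbf(x, head, body, tail):
    """Check constraint (6) for one link: `head`, `body` and `tail` list the
    slots of each period in frame order, and `x` maps slot -> index value.
    Head values strictly decrease toward the body, tail values strictly
    increase away from it, and body values are zero."""
    ok_head = all(x[i] > x[j] for i, j in zip(head, head[1:]))
    ok_tail = all(x[i] < x[j] for i, j in zip(tail, tail[1:]))
    ok_body = all(x[i] == 0 for i in body)
    return ok_head and ok_body and ok_tail
```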
Theorem 1. Given the head, body, and tail periods of each link and an index value assignment xk whose expected throughput for each link vk is θk, we can always find an NBF index assignment x'k such that the expected throughput of each link vk with x'k is higher than or equal to θk.

Theorem 1 can be proved by iteratively exchanging the index values of any two adjacent slots that violate the NBF policy and showing that each exchange does not diminish the expected throughput of any link. The details of the proof are omitted here due to space limitations.

From Theorem 1, we know that NBF index assignments provide the highest expected throughput for all links. However, given the head, body and tail periods of a link vk, many index assignments qualify as NBF index assignments. Dynamically calculating the index assignment for each link according to the links' channel conditions, so as to avoid collisions, is a major concern of the on-the-go scheduling scheme. In this paper, we define the parameter wk^h2t(t), called the weight of the head over the tail, for the index assignment of vk in frame t as follows:

wk^h2t(t) = (gk^head(t−1) + δ) / (gk^tail(t−1) + δ)    (7)
where gk^head(t−1) and gk^tail(t−1) are the numbers of packets transmitted successfully during the head and tail periods of vk in the previous frame. The parameter δ is a very small value that avoids division by zero and lets wk^h2t(t) be 1 when both gk^head(t−1) and gk^tail(t−1) are equal to 0.

Since a link vk competes for transmission opportunities with different neighbors in its head and tail periods, it experiences different interference in the two periods. The parameter wk^h2t(t) represents the ratio between the interference in the head period and that in the tail period: a higher value of wk^h2t(t) indicates that link vk suffered fewer collisions during its head period. Thus, we calculate the index value of each multi-access slot in frame t as follows:
xk(i) = Δ(i, sk) + min( |Zk^tail|, ⌊(Δ(i, sk) − 1) / wk^h2t(t)⌋ ),  if i ∈ Zk^head
xk(i) = Δ(sk + mk, i) + min( |Zk^head|, wk^h2t(t) · Δ(sk + mk, i) ),  if i ∈ Zk^tail    (8)
Fig. 4. Three examples of NBF index assignment:
Case 1 (wk^h2t(t) → ∞): ∞ ∞ ∞ 5 4 3 2 1 0 0 0 0 6 7 8 9 10 ∞ ∞ ∞ ∞
Case 2 (wk^h2t(t) → 0): ∞ ∞ ∞ 10 9 8 7 6 0 0 0 0 1 2 3 4 5 ∞ ∞ ∞ ∞
Case 3 (wk^h2t(t) = 2): ∞ ∞ ∞ 7 5 4 2 1 0 0 0 0 3 6 8 9 10 ∞ ∞ ∞ ∞
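With the distance Δ taken as 1, 2, ... from the body boundary, Eq. (8) reproduces the assignments of Fig. 4; the sketch below regenerates Cases 1 and 3. The 1-based distance convention is our reading of the figure, and Case 2 depends on how the w → 0 limit is taken, so it is not asserted here.

```python
def nbf_index_values(head_len, tail_len, w):
    """Index values of Eq. (8) for a link with `head_len` head slots and
    `tail_len` tail slots, given w = w_k^h2t(t).  Delta runs 1..period
    length (our reading of Fig. 4); int() floors the non-negative quotient
    as Eq. (8) requires."""
    head = [d + int(min(tail_len, (d - 1) / w)) for d in range(1, head_len + 1)]
    tail = [d + int(min(head_len, w * d)) for d in range(1, tail_len + 1)]
    # Fig. 4 lists head slots from the frame start toward the body, i.e.
    # largest index value first, so reverse the head list for display.
    return head[::-1], tail
```

Case 3 of Fig. 4 (w = 2) yields head values 7 5 4 2 1 and tail values 3 6 8 9 10; Case 1 (w → ∞) yields 5 4 3 2 1 and 6 7 8 9 10.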
where |Zk^tail| and |Zk^head| are the lengths of the tail and head periods of vk. Fig. 4 shows three examples of index assignments in which wk^h2t(t) is infinite, 0 and 2, respectively.

4.3 Congestion Control

Although the design of wk^h2t(t) reflects the difference between the interference in the head and tail periods, simply adjusting the index values of each slot cannot avoid collisions when the slot requirement is large, because the link will tend to use all the multi-access slots to transmit data. Therefore, we incorporate a congestion control mechanism into our on-the-go scheduling scheme. Congestion control is achieved by introducing the parameter xk^max(t) and permitting link vk to transmit data only in slots with index values smaller than xk^max(t). If collisions occurred in the transmissions of vk in the previous frame, the value of xk^max(t) is reduced to slow down the transmission rate of vk as follows:
xk^max(t) = gk^head(t−1) + gk^tail(t−1) − 1    (9)
If there were no collisions in the transmissions of vk in the previous frame, we gradually raise the transmission rate by increasing the xkmax(t):
xk^max(t) = xk^max(t−1) + 1    (10)
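Eqs. (9)-(10) form a simple decrease-on-collision, increase-by-one window control, in the spirit of AIMD; a one-function sketch:

```python
def update_x_max(x_max_prev, g_head, g_tail, collided):
    """Per-frame window update: Eq. (9) shrinks x_k^max to one less than the
    number of successful transmissions after a collision; Eq. (10) grows it
    by one otherwise."""
    if collided:
        return g_head + g_tail - 1   # Eq. (9)
    return x_max_prev + 1            # Eq. (10)
```

For example, a link that transmitted 3 head and 2 tail packets successfully but suffered a collision drops its window to 4; without a collision, a window of 7 grows to 8.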
4.4 Drop Tail Policy
The drop tail policy is another collision avoidance mechanism. Its function is to stop all transmissions of vk after a collision of vk has occurred during its tail period, until the next head period begins. The policy is based on the property that the transmissions of vk during its tail period are only affected by the transmissions of the neighbors of vk in their head periods. Moreover, according to the NBF policy, once a slot i is selected for transmission by vk, the slots following slot i in the head period of vk will also be selected. Therefore, if a collision occurs in the tail period of vk, the residual slots in the same tail period will also fail, and discarding the transmissions in these slots lowers the collision probabilities of vk and its neighbors.
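The drop-tail rule reduces to a small piece of per-link state: a collision in the tail suppresses the remaining tail slots until the next head period. A sketch (the method names are ours):

```python
class DropTail:
    """Per-link state for the drop-tail rule: a collision during the tail
    period suppresses the link's remaining transmissions until the next
    head period begins (method names are ours, not the paper's)."""

    def __init__(self):
        self.suppressed = False

    def on_head_start(self):
        self.suppressed = False      # a new head period lifts the suppression

    def on_tail_collision(self):
        self.suppressed = True       # residual tail slots would fail anyway

    def may_transmit(self):
        return not self.suppressed
```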
5 Performance Evaluation In this section, we evaluate the performance of the proposed two-stage slot allocation mechanism and on-the-go scheduling scheme by simulations. All of the simulations were performed using the ns-2 network simulator [19] with TDMA enhancements in the MAC layer of IEEE 802.11. We show the benefit of the proposed mechanism by
comparing it with two other types of transmission schemes: IEEE 802.11, and traditional conflict-free TDMA-based schemes with average-rate and peak-rate slot allocations, denoted TDMA-avg and TDMA-peak, respectively. The system configuration of our simulations is listed in Table 1. All flows use UDP as their transport protocol. Each simulation lasts 3000 seconds. The distance between the source and destination of each link is 200 meters in all scenarios.

Table 1. The system configuration of our simulations

Parameter                        Value
Channel Bandwidth                11 Mbps
Interface Queue Size             100 packets
Packet Size                      1500 bytes
Flow type                        Exponentially distributed ON/OFF model;
                                 average ON/OFF period: 1000/1000 ms
Transmission/Interference Range  250 m / 420 m
TDMA slot/frame duration         1.2 ms / 60 ms (50 slots per frame)
5.1 Chain Topology
In this scenario, we examine the cases of one flow and of multiple flows on a 6-hop chain topology, as shown in Fig. 5(a). We first consider the wireless network with a single 6-hop flow. We evaluate the performance of our mechanism by increasing the average sending rate of the 6-hop flow; Fig. 5(b), 5(c) and 5(d) show the simulation results. Our on-the-go (OTG) scheduling provides higher end-to-end throughput than IEEE 802.11 and TDMA-avg, as shown in Fig. 5(c). Although TDMA-peak has a shorter end-to-end delay than OTG, it cannot admit the flow when its average sending rate exceeds 1200 Kbps, whereas OTG can allow the sending rate of the 6-hop flow to reach 1600 Kbps. Note that in the cases of TDMA-peak and OTG with sending rates beyond their schedulable limits, we still admit the flow but allocate only the maximum schedulable slots. TDMA-avg has a shorter average end-to-end delay when the traffic load is high because most packets are dropped by the interface queue. We also show the performance of our enhancements for on-the-go scheduling, congestion control and drop tail, in Fig. 5(d). The results show that the two enhancements reduce the collision probability of on-the-go scheduling when the traffic load is high.

In the second part of this scenario, we consider multiple flows with different path lengths: one 6-hop, two 3-hop and six 1-hop flows, as in Fig. 5(a). We fix the sending rates of the 1-hop and 3-hop flows to 512 Kbps while the sending rate of the 6-hop flow varies from 200 Kbps to 1000 Kbps. We show the average end-to-end delay of each flow and the collision probability under IEEE 802.11 and OTG in Fig. 5(e) and 5(f). The results of TDMA-peak and TDMA-avg are not presented because their performance relative to OTG is similar to that in the single-flow scenario shown in Fig. 5(b). We can see that OTG provides a constant end-to-end delay for each flow when the requests are schedulable.
The average scheduling delay [14] of each hop is about 30 ms in our simulation, which adds about 150 ms and 60 ms of end-to-end scheduling delay to the 6-hop and 3-hop flows, respectively. If we consider only the queueing delay, on-the-go scheduling performs fairly across flows with different hop counts.
Fig. 5. The topology and simulation results in the 6-hop chain wireless network
5.2 Mesh Topology
In this section, we consider four flows in a wireless mesh network, as shown in Fig. 6(a). We fix the sending rates of flows 1, 2 and 4 to 1000 Kbps and vary the sending rate of flow 3 from 100 Kbps to 1000 Kbps to examine the performance of flows with different channel conditions. The average end-to-end delay of each flow and the collision probability under the different mechanisms are shown in Fig. 6(b), 6(c) and 6(d). From the results, we can see that OTG preserves isolation between flows: as shown in Fig. 6(b), the end-to-end delays of the OTG flows remain constant when the rate of flow 3 increases. Similarly, TDMA-peak also provides isolation between flows, but it can admit flow 3 only when its traffic load is less than 300 Kbps. We do not show the collision probabilities of TDMA-avg and TDMA-peak in Fig. 6(d) because they are all zero. The drop tail policy yields only a small improvement because, when the traffic load is not heavy, almost all collisions occur in the last slot of the tail period of each link.
Fig. 6. The topology and simulation results of four flows in a 4x4 grid wireless network
6 Conclusion and Discussion

In this paper, we proposed a two-stage slot allocation mechanism for TDMA-based transmission protocols in WMNs. In contrast to traditional one-stage slot allocation algorithms, the mechanism allocates not only conflict-free slots but also multi-access slots to each link. We also introduced an adaptive on-the-go scheduling scheme for the two-stage slot allocation mechanism, which dynamically schedules the transmission time slots within the multi-access slots of each link. The on-the-go scheduling scheme selects slots for transmission and avoids collisions without coordinating with other links, so it can rapidly adjust the slot allocation to support real-time applications with variable-bit-rate traffic. The simulation results show that our on-the-go scheduling scheme achieves higher utilization than the IEEE 802.11 MAC protocol and performs more flexibly and efficiently than traditional TDMA-based slot allocations.
Y.-C. Tu, M.C. Chen, and Y.S. Sun
SMS: Collaborative Streaming in Mobile Social Networks

Chenguang Kong¹, Chuan Wu¹, and Victor O.K. Li²

¹ Department of Computer Science, The University of Hong Kong, {cgkong,cwu}@cs.hku.hk
² Department of Electrical and Electronic Engineering, The University of Hong Kong, [email protected]
Abstract. Mobile social applications have emerged in recent years. They explore social connections among mobile users in a variety of novel scenarios, including friend finding, message routing, and content sharing. However, efficiently supporting resource-demanding, delay-sensitive streaming applications on the mobile platform remains a significant challenge. In this paper, we study collaborative VoD-type streaming of short videos among small groups of mobile users, so as to effectively exploit their social relationships. Such an application arises in a number of practical usage scenarios, for example streaming introductory video clips of exhibition items to visitors' mobile devices in a museum. We design SMS, an architecture that engineers such Streaming over Mobile Social networks. SMS constructs a collaborative streaming overlay by carefully inspecting social connections among users and the characteristics of the underlying Bluetooth infrastructures. We evaluate our design with a prototype implementation on the Android platform.

Keywords: Applications and Services, Mobile Social Networks, Peer-Assisted Streaming.
1 Introduction
Novel mobile social applications have emerged in recent years. They explore advanced mobile technologies and social connections among mobile users for interactions and exchanges anywhere at any time. Representative mobile social networks can be grouped into two categories. Mobile social networks in the first category are a natural extension of online social networks over the Internet, where users participate in Facebook-like social applications using their mobile devices, e.g., Tencent QQ [3] and Cyworld [2], both of which are enabled with web and mobile access. Exploiting the geographic positions of the mobile users, location-based features may be added, such as sharing of geotagged photos and news captured by mobile devices. In the second category, mobile users in physical proximity directly connect with each other, for friend making based on common interests (e.g., MagnetU [1]), information searching by propagating queries (e.g., PeopleNet [15]), and blog and photo sharing (e.g., Micro-Blog [9]).

J. Domingo-Pascual et al. (Eds.): NETWORKING 2011, Part II, LNCS 6641, pp. 275–287, 2011.
© IFIP International Federation for Information Processing 2011
276
C. Kong, C. Wu, and V.O.K. Li
Internet access via 3G or WiFi is required for mobile devices to participate in social networks of the first category, while Bluetooth is largely exploited in the second category. We target the latter type of self-organized mobile social networks, where more challenges must be overcome to enable richer functionalities (comparable to those available via Internet access) at lower device cost. In particular, we are interested in effective designs for supporting delay-sensitive streaming applications, which are more resource-demanding than the sharing of blogs and photos with typical sizes of a few tens or hundreds of Kbytes. We seek to design an efficient architecture and detailed protocols for collaborative VoD-type streaming of short videos in a peer-to-peer (P2P) fashion, which effectively exploit participants' social relationships. This objective is representative of a number of practical usage scenarios. For example, in a museum or a botanic garden, introductory videos commonly accompany items on exhibition to give visitors rich information about the items. Compared to the current way of broadcasting each video on a dedicated screen, a superior solution is cooperative streaming of the video to the mobile devices of the group of visitors surrounding the exhibition item in a P2P VoD fashion, to save hardware cost and provide viewing flexibility (a visitor can begin watching from the beginning whenever he approaches the item and drag to view any part of the video at will). Other usage scenarios include sharing amusing video clips from one's cellphone with friends at a party. To achieve the above objective, we design and implement SMS, an architecture to engineer VoD-type Streaming over Mobile Social networks, consisting of virtual communities of mobile users interested in particular videos.
Our contributions in the novel design of SMS are three-fold: First, we explore social connections among participants in the virtual community, to facilitate voluntary collaborative streaming according to users’ personal preferences. Specifically, we quantify social connections into different levels, namely friends, matches of social attributes, and others. Users are allowed to specify personal preferences in video streaming, to friends only, to friends and attribute matches, or to anyone, based on their own levels of social selfishness, altruism, or practical concerns such as the current battery levels. Second, we construct streaming overlays by synergistically combining participants’ preferences, video segment availability, and characteristics of Bluetooth infrastructures. The broadcast nature of wireless transmissions is also exploited in efficient streaming of segments of common interests. Third, we design detailed protocols to implement SMS, which effectively tackle dynamics of the mobile users, including movements and VCR operations. We also implement our design on the Android platform and carry out extensive evaluations on streaming performance and battery consumption. The remainder of the paper is organized as follows. We present the architecture and detailed design of SMS in Sec. 2 and Sec. 3, respectively. Sec. 4 gives results of the experimental evaluation. We discuss related work in Sec. 5 and conclude the paper in Sec. 6.
2 System Architecture
We focus on collaborative streaming of short videos among small groups of mobile users. For each video, there is one source S, which could be the video-emitting device beside an exhibition item in the museum scenario, or the mobile user who owns the video to be shared in the party scenario. The video is partitioned into n consecutive segments for dissemination. A set of mobile users, V, in close proximity of the source and interested in viewing the video, form a virtual community and retrieve the video segments in a P2P fashion via Bluetooth connections. We assume the source and the mobile users are within Bluetooth transmission range of each other. Each mobile user may start to view the video at any time, e.g., when he approaches the exhibition item or first joins the group of friends sharing the video, and can drag the sliding bar on his video player to view any part of the video at will. Therefore, Video on Demand (VoD) type streaming is investigated in our design. We regard mobile users as social entities, with different levels of selfishness and altruism. In collaborative streaming within each virtual community, the mobile users may have different preferences in contributing resources (battery, bandwidth) to upload video streams to others, which we classify into three categories: (1) the strongly socially selfish type, mobile users who wish to upload only to their friends; (2) the weakly socially selfish type, those who are willing to upload to friends as well as to strangers with certain matching social attributes (e.g., the same hometown, the same hobby, etc.); (3) the altruistic type, users willing to help anyone else in streaming. Completely selfish users, who do not wish to upload to anyone else, are excluded from the system, by enforcing that only users who pick one of the above options and enable at least one upload connection can participate in streaming.
To achieve efficient streaming over such mobile social networks, we design SMS, an architecture for collaborative streaming with voluntary resource contributions, by exploring social preferences of the mobile users. Our architecture consists of four functional modules. Bluetooth Protocols. Video transmissions among mobile users are based on Bluetooth in SMS. The Bluetooth protocols deal with connection setup and data transfer between Bluetooth devices. The detailed functions include device discovery, service discovery, connection establishment, and data transmission. Specifically, we carefully explore characteristics of Bluetooth infrastructures, i.e., piconet and scatternet, when constructing the P2P streaming topology among mobile users, for high transmission efficiency and low latency during streaming. Social Preferences. The social preference module handles inquiry and matching of preferences and attributes among the mobile users. Each user maintains a profile, which records his upload preference, friend list, as well as a number of attributes he wishes to share with others (e.g., hometown, university/school graduated from, hobbies, etc.). Potential streaming links will be established only between users with matching preference and attributes.
[Fig. 1 depicts the four SMS modules above the SMS video player: Bluetooth Protocols (connection setup, data transmission), Social Preferences (user profile, preference and attribute inquiry and match), P2P Streaming (streaming topology construction, block request/transfer), and the User Interface]
Fig. 1. Architecture of SMS
P2P Streaming. The P2P streaming module constructs the P2P streaming topology among mobile users in the virtual community, and implements the request and upload of video segments among users. The establishment of streaming links from one mobile user to another is contingent on whether the upstream peer has the video segments the downstream peer needs, as well as the upload preference of the upstream peer, the social connections between the two, and the transmission efficiency of the resulting Bluetooth infrastructures. In particular, broadcast opportunities are explored as much as possible among subgroups of users with similar viewing progress, by organizing them into a piconet, for the best dissemination efficiency.

User Interface. This module provides interfaces on the mobile devices for users to maintain their profiles, i.e., indicating their upload preferences, friend lists, and attribute values. It also provides the video player interface for users to view the video and control the playback progress.

An illustration of the SMS architecture is given in Fig. 1. We design detailed protocols to implement each module in the following section.
3 SMS: Detailed Design and Protocols
We first present detailed approaches to construct efficient collaborative streaming topology, based on social preferences and Bluetooth characteristics. We then present practical protocols to implement the design on individual dynamic users.
3.1 Collaborative Streaming Design
The key design issue is to decide which other peers each mobile user streams from and uploads video segments to at each time, i.e., the dynamic construction of the collaborative streaming topology. Three aspects are investigated.

Segment Availability. To minimize the number of costly teardowns and setups of Bluetooth connections, a mobile user should connect to a peer owning the largest number of video segments it needs. The suitability for user y to establish a download connection from user x can be decided as follows. Let S_x = (s_x^1, s_x^2, ..., s_x^n) be the bitmap indicating which video segments user x has, where

    s_x^i = 1 if x holds video segment i, and 0 otherwise,   i = 1, ..., n.

Let R_y = (r_y^1, r_y^2, ..., r_y^n) denote the segment request list at a mobile user, where

    r_y^i = ρ^(i−p) if i ≥ p, and 0 otherwise,   i = 1, ..., n,

with parameter ρ < 1 and p indicating the index of the segment the user is currently playing. Here different segments are assigned different weights according to how close they are to their playback deadlines, used to prioritize the request sequence of the segments. The suitability for user y to stream from user x can then be calculated as

    F(x, y) = Σ_{i=1}^{n} r_y^i × s_x^i.
A larger F(x, y) means that user x may supply user y more segments it urgently needs.

Social Preferences in Uploading. Whether y can download segments from x depends on the social preference of x. There are three scenarios in which x is willing to upload to y: (1) y is a friend of x; (2) y shares social attributes with x (e.g., the same hometown, the same hobby, etc.), and x is willing to upload to users with matching social attributes; (3) y is neither a friend nor an attribute match, but x is willing to upload to anyone. Let W(x, y) denote the level of preference for x to upload to y, and let a, b, and c be three system parameters satisfying a + b + c = 1, a > b > c > 0.
We define W(x, y) = a in Case (1), b in Case (2), c in Case (3), and 0 otherwise.
Combining segment availability with social preferences, we derive the following download preference for y to prioritize his connection requests towards potential supplying peers: P(x, y) = W(x, y) · F(x, y). The larger P(x, y) is, the more needed segments x is likely to upload to y.

Bluetooth Transmission Efficiency. The construction of the streaming topology should be efficiently combined with the Bluetooth transmission technology, and exploit the broadcast nature of wireless transmissions for optimized streaming efficiency. In our design, we organize mobile users into an efficient Bluetooth scatternet, and explore broadcast opportunities within each piconet as much as possible. Fig. 2 illustrates the Bluetooth network the mobile users form. Specifically, there are three different types of nodes in this Bluetooth scatternet: master nodes (MNs), master/slave nodes (MSNs), and slave nodes (SNs). The MNs own the complete video (e.g., the original source of the video or mobile users who have completed the download) and serve as seeds in the streaming network. Each MSN serves as the master node in one piconet and a slave node in another piconet concurrently, corresponding to mobile users who upload cached video segments to peers while downloading further needed video segments. The SNs are mobile users who have just started downloading (e.g., those who just joined the system) and have not served others yet. The Bluetooth network is dynamically constructed based on users' upload preferences and evolves with the streaming progress and the dynamics of users. We
Fig. 2. Bluetooth network structure: an illustration
make two design choices in constructing the collaborative streaming overlay over the Bluetooth network. First, each mobile user maintains only one download connection at a time and maximally retrieves all segments the upstream peer can provide, before tearing down the connection and reconnecting to another supplier. Mapping to the Bluetooth scatternet, each node may serve as a slave in one piconet only. This design aims to minimize the cost of establishing Bluetooth connections, as well as to guarantee the download speed at each mobile user. In a Bluetooth network, each piconet maintains a unique pseudo-random frequency hopping sequence (FHS), and time division duplex (TDD) is used for transmission scheduling. If a mobile user participates in more than two piconets, time is partitioned among the different transmissions; the effective transmission time of each connection could be reduced, leading to degraded download speed. Second, mobile users with similar segment requests are maximally arranged into the same piconet, while taking into account whether the master node is willing to upload to those peers. In this way, broadcast opportunities within each piconet are maximally explored: master node x periodically collects segment requests from its slave nodes, and prioritizes the sequence of segment broadcasts according to the sum of download preferences, i.e., Σ_{y∈Y} r_y^i · W(x, y) for each segment i requested by the slaves in set Y. Among the slaves in a piconet, some may serve as master nodes in other piconets (i.e., the case of MSNs). Since an MSN may only be active in one piconet at a time (by synchronizing to its FHS), it could miss data broadcast in the piconet where it serves as a slave. To resolve this issue, an MSN may send the same request again if it has not received a segment for some time.

3.2 Practical Protocols
We next discuss practical protocols to implement the design in realistic dynamic environments. Joining the system. When a mobile user first joins the streaming system, he customizes his user profile, including upload preference, social attributes, and friend list. His Bluetooth device enters the inquiry mode and discovers other Bluetooth devices within the radio range. Then it inquires service records of other devices through Service Discovery Protocol (SDP). A Bluetooth device providing service publishes its service record enclosing a number of service attributes; some service attributes are common to all (e.g., ServiceClassIDList, ProviderName), but others can be defined by the service providers themselves. [20] proposes an approach for information exchange by modifying the attributes of service records. In our implementation, we make use of a similar approach to exchange information for collaborative streaming, including upload preferences, social attributes, and segment availability. Making use of the information acquired, a newly joined user identifies the most appropriate supplying peer to connect to, based on the design in Sec. 3.1. Then the user establishes a temporary connection with the selected supplier and
sends a connection request. The supplier then invites the user into its piconet. If there are already seven slaves including the new user, the supplier stops publishing its service record to prevent new connection requests. According to the design on segment availability, the user will join a piconet in which the slaves have similar requests.

Streaming video segments. After a new slave node joins a piconet, it sends its segment request to the master node at the first time slot assigned to it. The master node in a piconet collects segment requests from the slave nodes and schedules segment transmissions. At a Bluetooth device, the frequency-hopping rate is 1600 hops per second, and each time slot for a frequency hop is 625 μs. A regular Bluetooth packet can carry at most 2745 bits of data. In our streaming system, the size of a segment is typically a few Kbytes; therefore, multiple time slots are needed to send a video segment in a piconet. Each receiver of the segment remains active and follows the frequency-hopping sequence of the piconet. A slave node in a piconet listens for a packet from the master node at the beginning of each odd time slot, and identifies the destination of a packet based on its LT_ADDR field. A master node can broadcast packets to all slave nodes on the ASB-U, PSB-C, or PSB-U logical transport; when the master node is broadcasting, the LT_ADDR field of the packets is set to zero.

Switching download connections. As the streaming progresses, segment availability at each mobile user changes. Such updated information is published by the user via updated service records. Mobile users may alter their download connections when their current suppliers can no longer serve video segments they need, e.g., when the slave user has performed VCR operations during video playback, or when the master user moves beyond the radio range of the slave.
When a change of the download link is necessary, the mobile user tears down the connection in its original piconet, and restarts the procedure to search for a new supplier, following the same service discovery steps as carried out by a newly joined user. Detailed steps of our protocols are summarized in Algorithm 1.
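The master-side broadcast prioritization of Sec. 3.1 — rank each segment i by the sum Σ_{y∈Y} r_y^i · W(x, y) over the slaves Y of the piconet and broadcast the highest-ranked segment first — could be sketched as follows. The function and variable names are our assumptions for illustration.

```python
def next_broadcast_segment(requests, upload_pref):
    """Pick the next segment for the master to broadcast in its piconet.

    requests:    {slave id: [r^1, ..., r^n]} request weights collected from slaves
    upload_pref: {slave id: W(x, y)} the master x's upload preference per slave
    Returns the index of the segment with the highest total weighted demand,
    or None when no segment is currently requested."""
    n = len(next(iter(requests.values())))
    totals = [
        sum(upload_pref[y] * r[i] for y, r in requests.items())
        for i in range(n)
    ]
    best = max(range(n), key=totals.__getitem__)
    return best if totals[best] > 0 else None
```

A segment requested by several slaves accumulates their weights, so broadcasting it serves the whole subgroup at once, which is exactly why users with similar requests are grouped into the same piconet.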
4 Performance Evaluation
We evaluate SMS by implementing our design and protocols on the Android 2.1 platform. Six HTC Wildfire mobile phones are used in our experiments; they are equipped with a 528 MHz Qualcomm MSM7225 processor, 512 MB ROM, 384 MB RAM, and a lithium-ion battery with 1300 mAh capacity. In our experiments, a 15 MB video file is distributed, with a playback time of 320 seconds. One mobile device serves as the source, with the video file preloaded. The other mobile devices join the streaming network one by one, with joining times uniformly distributed within a total duration of 9 minutes. Upon joining, a user starts requesting and viewing the video from the beginning. The users are configured with different upload preferences, friend relationships, and social
Algorithm 1. SMS work flow

Initialize user profile and segment request list

procedure Download
    if (request list ≠ NULL) and (connection = NULL) then
        Identify the most appropriate supplying peer to connect to
        Join the piconet in which the selected supplier acts as master
        Send request list
        while connection ≠ NULL do
            if the coming segment is in the request list then
                Receive data from master
                Update request list
                Update supply information in service record
            end if
            if (supply list of master = NULL) or (request list = NULL) then
                Disconnect from master
            end if
        end while
    end if
end procedure

procedure Upload
    Modify service record and start service
    while (service is not stopped) and (supply list ≠ NULL) do
        if a slave disconnects then
            Clear the request information about that user
            Update the request information
        end if
        if a new user joins the piconet then
            if the number of slaves = 7 then
                Stop service
            end if
            Receive request list from the new user
            Update request information
        end if
        Identify the segment with the highest weight and broadcast it
        Update request information
    end while
end procedure
attributes: the type of each user is randomly chosen among the three (strongly socially selfish, weakly socially selfish, and altruistic); the number of friends and the number of peers with matching attributes at each user are both randomly chosen from 1, 2, and 3. Given the limited number of mobile devices, we restrict the number of slaves in each piconet to two, in order to form a Bluetooth scatternet for evaluation.
4.1 Streaming Performance
[Fig. 3: average download time (seconds) versus the index of experiments (1–10), comparing SMS with streaming among all altruistic users]
Fig. 3. Average video download time
We first evaluate the streaming performance at the mobile users, by measuring the time taken for each user to complete the video download and the total waiting time during the download due to seeking suitable segment suppliers. We note that the streaming rate of the video is 15 × 10^6 × 8 bits / 320 seconds = 375 Kbps, while the data rate along Bluetooth connections is around 540 Kbps. Therefore, segment download should be faster than the viewing progress, which guarantees smooth playback at the users. On the other hand, the total waiting time reflects the delay overhead incurred by our streaming protocols, due to service discovery (exchanging segment availability, social attributes, etc.) and tearing down and establishing new connections.
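The rate comparison above is simple arithmetic; a quick check using the figures quoted in the text:

```python
# A 15 MB video played back over 320 seconds: required streaming rate.
video_bits = 15 * 10**6 * 8           # 15 MB expressed in bits
streaming_rate_kbps = video_bits / 320 / 1000
assert streaming_rate_kbps == 375.0   # 375 Kbps, as stated above

# Measured Bluetooth connection rate quoted in the text: ~540 Kbps.
bluetooth_rate_kbps = 540
# The link rate exceeds the streaming rate, so download outpaces playback.
assert bluetooth_rate_kbps > streaming_rate_kbps
```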
[Fig. 4: total waiting time (seconds) versus the index of experiments (1–10), comparing SMS with streaming among all altruistic users]
Fig. 4. Average waiting time during download
We repeat our experiment 10 times and plot the average results over all mobile users in Fig. 3 and Fig. 4, respectively. We observe from Fig. 3 that the average video download time is around 220 seconds, much less than the playback duration of the video, showing that smooth playback can be achieved at the mobile users. Fig. 4 shows that the waiting time is as low as 1.2% of the entire download process, exhibiting the low delay overhead of our protocols. We have also compared SMS with a P2P streaming protocol over mobile networks in which mobile users select suppliers based only on segment availability and all users are willing to upload to anyone else. The results in Fig. 3 show that when practical considerations of users' social selfishness are included (the case of SMS), the download performance is only slightly worse than the performance upper bound achieved when all users are altruistic. Fig. 4 confirms this conclusion: in most cases, the difference in total waiting time between SMS and the all-altruistic P2P streaming protocol is small, with little influence on the user experience.
[Fig. 5: battery consumption (%) versus running time (seconds), for a master device, a slave device, a master/slave node with one slave, and a master/slave node with two slaves]
Fig. 5. Power consumption at devices of different roles
Fig. 6. Power consumption at devices of different upload preferences

4.2 Power Consumption
We next evaluate battery consumption on the mobile devices when running our collaborative streaming protocol. In our experiments, the mobile devices are fully charged before the start. We first select four mobile devices with representative roles in the Bluetooth network, i.e., a master node with one slave, a slave node, a master/slave node with one slave, and a master/slave node with two slaves, and monitor their battery levels when running SMS using PowerTutor [4]. Fig. 5 shows that the more nodes a mobile device uploads to, the more energy it consumes; in addition, uploading consumes more energy than downloading. Nevertheless, we observe that the battery consumption during a time interval sufficient to download the entire video (about 250 seconds) is less than 1% of the full battery capacity at all devices. This shows that SMS is efficient in energy consumption. Next, we compare the battery consumption at mobile users with different upload preferences. We measure the total power consumption during a 250-second runtime at a user who wishes only to upload to friends, a user who serves both friends and strangers with matching attributes, and a user who is willing to serve anyone. Fig. 6 shows that the altruistic user could spend 19% more battery power than a strongly socially selfish user. However, the overall energy consumption at each user is still lower than 1% of the full battery capacity. This shows that a mobile user can be less concerned with battery consumption when participating in SMS, and can afford to be more altruistic to boost the overall download speed in the system (the case of all altruistic users in Fig. 3).
5 Related Work
Mobile social applications that are natural extensions of online social networks have been extensively explored. In Micro-Blog [9], users generate geotagged data using their mobile phones, and share them with others via the Internet.
Other applications in this category include PeopleTones [12], Just-for-Us [11], MobiClique [16], FriendLee [5], and Social Serendipity [8]. In the other category, mobile users in physical proximity connect with each other directly; they store data (e.g., profiles) locally in their mobile devices, and no central Internet server is involved. E-SmallTalker [20] is a mobile communication system facilitating social networking among people in physical proximity, which can discover and suggest topics of common interest for conversations. PeopleNet [15] presents an architecture that facilitates information seeking over a wireless virtual social network. SocialNet [14] is an interest-matching application that uses patterns of collocation over time to infer common interests between users. Besides mobile social application design, other work exploits social connections of users in system design [10][13], but none of it categorizes social relationships in the same way as we do. There is also work exploiting Bluetooth technology and protocols: [6][7][17] propose approaches to form stable Bluetooth scatternets, without taking the video streaming and social network scenario into account. Sewook et al. [19,18] propose P2P streaming system designs over interconnected Bluetooth piconets; their work focuses on Bluetooth network construction without taking the social relationships and selfishness of mobile users into consideration. In contrast, our work explores users' social connections and Bluetooth characteristics in an integral design, for voluntary collaborative streaming with high efficiency.
6 Concluding Remarks
This paper presents SMS, an architecture for VoD-type Streaming over Mobile Social networks, to achieve efficient collaborative distribution of short videos among mobile users. Our contributions in this paper are three-fold. First, we explore social connections among participants in the mobile social network, to facilitate voluntary collaborative streaming according to users' own upload preferences. Second, we construct streaming overlays by synergistically combining participants' preferences, video segment availability, and characteristics of the Bluetooth infrastructures, while effectively exploiting the broadcast nature of wireless transmissions. Third, we design detailed protocols to implement SMS, which efficiently tackle the various dynamics of the mobile users. We also implement our design on Android platforms and carefully evaluate the runtime performance of SMS. Preliminary results over a small-scale mobile network have shown satisfactory streaming performance as well as low battery consumption with our protocols. In our ongoing work, we are implementing a practical Bluetooth emulation platform, in order to overcome the limitation of available hardware devices and carry out evaluations at a larger scale under realistic settings.
Acknowledgement This research is supported in part by the University of Hong Kong Strategic Research Theme of Information Technology.
References

1. MagnetU, http://magnetu.com/
2. Cyworld, http://uscyworld.com/
3. Tencent QQ, http://www.imqq.com/
4. PowerTutor, http://www.powertutor.org/
5. Ankolekar, A., Szabo, G., Luon, Y., Huberman, B.A., Wilkinson, D.: Friendlee: A Mobile Application for Your Social Life. In: Proc. of the 11th International Conference on Human-Computer Interaction with Mobile Devices and Services (2009)
6. Cano, J., Manzoni, P., Toh, C.-K.: UbiqMuseum: A Bluetooth and Java Based Context-Aware System for Ubiquitous Computing. Wireless Personal Communications 38, 187–202 (2006)
7. Donegan, B.J., Doolan, D.C., Tabirca, S.: Mobile Message Passing Using a Scatternet Framework. International Journal of Computers, Communications & Control 3, 51–59 (2008)
8. Eagle, N., Pentland, A.: Social Serendipity: Mobilizing Social Software. IEEE Pervasive Computing, Special Issue: The Smartphone 4, 28–34 (2005)
9. Gaonkar, S., Li, J., Choudhury, R.R., Cox, L., Schmidt, A.: Micro-Blog: Sharing and Querying Content Through Mobile Phones and Social Participation. In: Proc. of ACM MobiSys (June 2008)
10. Jahanbakhsh, K., Shoja, G.C., King, V.: Social-Greedy: A Socially-Based Greedy Routing Algorithm for Delay Tolerant Networks. In: Proc. of the Second International Workshop on Mobile Opportunistic Networking (2010)
11. Kjeldskov, J., Paay, J.: Just-for-Us: A Context-Aware Mobile Information System Facilitating Sociality. In: Proc. of the 7th International Conference on Human Computer Interaction with Mobile Devices & Services (September 2005)
12. Li, K., Sohn, T., Huang, S., Griswold, W.: PeopleTones: A System for the Detection and Notification of Buddy Proximity on Mobile Phones. In: Proc. of ACM MobiSys (June 2008)
13. Mei, A., Stefa, J.: Give2Get: Forwarding in Social Mobile Wireless Networks of Selfish Individuals. In: Proc. of IEEE ICDCS (2010)
14. Terry, M., Mynatt, E.D., Ryall, K., Leigh, D.: Social Net: Using Patterns of Physical Proximity over Time to Infer Shared Interests. In: Proc. of CHI 2002 Extended Abstracts on Human Factors in Computing Systems (April 2002)
15. Motani, M., Srinivasan, V., Nuggehalli, P.: PeopleNet: Engineering a Wireless Virtual Social Network. In: Proc. of ACM MobiCom (August 2005)
16. Pietiläinen, A., Oliver, E., LeBrun, J., Varghese, G., Diot, C.: MobiClique: Middleware for Mobile Social Networking. In: Proc. of the 2nd ACM Workshop on Online Social Networks, WOSN (2009)
17. Sewook, J., Chang, A., Gerla, M.: Performance Comparison of Overlaid Bluetooth Piconets (OBP) and Bluetooth Scatternet. In: Proc. of WCNC, pp. 505–510 (April 2006)
18. Sewook, J., Chang, A., Gerla, M.: Peer to Peer Video Streaming in Bluetooth Overlays. Multimedia Tools and Applications 37(3), 263–292 (2008)
19. Sewook, J., Lee, U., Chang, A., Cho, D.K., Gerla, M.: BlueTorrent: Cooperative Content Sharing for Bluetooth Users. In: Proc. of the Fifth Annual IEEE International Conference on Pervasive Computing and Communications (2007)
20. Yang, Z., Zhang, B., Dai, J., Champion, A.C., Xuan, D., Li, D.: E-SmallTalker: A Distributed Mobile System for Social Networking in Physical Proximity. In: Proc. of IEEE ICDCS (2010)
Assessing the Effects of a Soft Cut-Off in the Twitter Social Network

Saptarshi Ghosh1, Ajitesh Srivastava2, and Niloy Ganguly1

1 Department of CSE, IIT Kharagpur - 721302, India
[email protected]
2 Department of CSIS, BITS Pilani, India
Abstract. Most popular OSNs currently restrict the number of social links that a user can have, in order to deal with the problems of increasing spam and of scalability in the face of a rapid rise in the number of users in recent years. However, such restrictions are often criticized by socially active and popular users; hence the OSN authorities face serious design-choices while imposing restrictions. This is evident from the innovative 'soft' cut-off recently imposed in Twitter, in contrast to the traditional 'hard' cut-offs in other OSNs. Our goal in this paper is to develop an analytical framework, taking the restriction in Twitter as a case-study, that can be used to make proper design-choices considering the conflicting objectives of reducing system-load and minimizing user-dissatisfaction. We consequently define a simple utility function considering the above two objectives, and find that Twitter's policy balances both well. From a network science perspective, this is, to the best of our knowledge, the first analysis of 'soft' cut-offs in any sort of network. Keywords: Online social network, Twitter, soft cut-off, restricted network growth, utility function for restrictions.
1 Introduction
Online Social Networks (OSNs) have experienced an exponential rise in the number and activity of users in recent years. As a result, these OSNs are frequently facing scalability issues such as high latency and increased down-time [17] which lead to discontent among users. The situation is aggravated by spammers who typically establish social links with thousands of users and then use the methods of communication provided to disseminate spam. Several popular OSNs have adopted a common ‘tool’ to deal with these issues: they have imposed a limit or cut-off on the number of friends/social links that a user can have (i.e. on the node-degree), e.g. 1000 in Orkut and 5000 in Facebook. Such limits help in reducing the load on the OSN infrastructure - since most OSNs support real-time one-to-all-friends communications, controlling the number of friends of users is an effective way to reduce message overhead. Moreover, these restrictions also prevent spammers from indiscriminately increasing their links. Twitter (www.twitter.com), one of the OSNs worst affected by the above problems, has placed a more intelligent ‘soft’ cut-off [1] on the number of links a user
Corresponding Author.
J. Domingo-Pascual et al. (Eds.): NETWORKING 2011, Part II, LNCS 6641, pp. 288–300, 2011.
© IFIP International Federation for Information Processing 2011
can create. The Twitter social network is a directed network where an edge u → v implies that user u 'follows' user v, i.e. u has subscribed to receive all messages posted by v. In Twitter terminology, u is a 'follower' of v and v is a 'following' of u. The out-degree (number of followings) of u is thus a measure of u's social activity or her interest in collecting information from other users. Analogously, the in-degree of u (number of followers who are interested in u's posts) is a measure of u's popularity in Twitter. The growing popularity of Twitter in recent years has led not only to high system-load due to increasing user-activity, but also to high levels of "Follow Spam" [2], where spammers indiscriminately follow numerous users, hoping to get followed back. To reduce strain on the website [1] and control follow spam, Twitter enforced a restriction on the number of people that a user can follow (i.e. on the out-degree) in August 2008 [2]. Every user is allowed to follow up to 2000 others, but "once you've followed 2000 users, there are limits to the number of additional users you can follow: this limit is different for every user and is based on your ratio of followers to following.", as stated in the Twitter Support webpages [1]. However, Twitter does not specify the restriction fully in public [2] (security through obscurity). This has led to several conjectures regarding the Twitter follow-limit; among these, the most widely believed one, known as the "10% rule" [3], is as follows: if a user u has uin followers (in-degree), then the maximum number of users whom u can herself follow (maximum possible out-degree) is uout_max = max{2000, 1.1 · uin}. However, restrictions on the number of links are presently being frequently criticised by the socially active and popular legitimate users of OSNs, as an encroachment on their freedom to have more friends [6].
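For concreteness, the conjectured limit can be written as a one-line function. This is a sketch of the widely believed '10% rule' only; the actual server-side rule is deliberately unpublished, so the constants s = 2000 and κ = 10 below are assumptions taken from the conjecture above.

```python
def max_following(followers: int, s: int = 2000, kappa: float = 10.0) -> int:
    """Conjectured Twitter follow-limit ('10% rule'): a user with the given
    number of followers may follow at most max(s, (1 + 1/kappa) * followers)
    other users. s and kappa are the publicly conjectured values, not
    confirmed by Twitter."""
    return max(s, int((1 + 1 / kappa) * followers))

# A user with 500 followers is capped at the baseline of 2000 followings,
# while a user with 10,000 followers may follow up to 11,000 others.
```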
In fact, the 'soft' cut-off in Twitter is the first attempt by an OSN to design restrictions that adapt to the requirements of popular legitimate users (unlike the 'hard' cut-offs in Facebook/Orkut), and hence aim to minimize user-dissatisfaction along with fulfilling other objectives (e.g. reducing system-load). Evidently, the OSN authorities today face several design-choices while designing restrictions, such as: at what degree should the restriction be imposed so that a desired reduction in the system-load can be achieved without affecting a large number of legitimate users? In order to explore and utilise the full potential of restrictions on node-degree, an analytical model that helps to make such design-choices, rather than ad-hoc engineering solutions, has become a necessity. The goal of this paper is to formulate such a model using the methods of network science, taking the Twitter follow-limit as a case study. Restrictions on node-degree have significant effects on the topology of present-day OSNs, as was first observed by us for the Twitter network in [8]. In this paper, we extend the rudimentary model proposed in [8] to develop a complete analytical framework that can be used to predict the emerging degree distribution of an OSN in the presence of different forms of restrictions. We demonstrate the effectiveness of enumerating the degree distribution (for restricted growth) by our model by formulating a simple utility function for restrictions, whose optimization would enable the OSN authorities to design restrictions that suitably
balance the two conflicting objectives of reducing system-load and minimizing dissatisfaction among users. There have been several studies on the topological characteristics that emerge as a result of various growth dynamics in OSNs [5, 13, 15]; however, to the best of our knowledge, ours is the first work analysing the effects of restrictions on node-degree on these dynamics. From a network science perspective, though there have been studies on the effects of 'hard' cut-offs on node-degree (e.g. in peer-to-peer networks [9, 16]), there has not been any prior analysis of network-growth in the presence of 'soft' cut-offs (as has been imposed in Twitter), according to our knowledge. The rest of the paper is organized as follows. Section 2 describes the effects of the Twitter restriction on the topology of the OSN. The analytical framework for modeling network growth in the presence of restrictions is developed in Sect. 3, while the insights drawn using the model are discussed in Sect. 4. Conclusions from the study are drawn in Sect. 5.
2 Empirical Measurements on the Twitter Social Network
The Twitter OSN has been of interest to researchers since 2007 and there have been several attempts [8, 10, 12, 14] to crawl the Twitter network¹. Recently a large crawl of the entire Twitter social network in July 2009, containing about 41.7 million nodes and 1.47 billion follow-edges, has been made publicly available [14]; we use this data for empirical measurements in this paper. In this section, we discuss the statistics of followers (in-degree) and followings (out-degree) of users in the Twitter social network, which clearly show the effects of the restriction on the network topology. Scatter plot: Fig. 1a compares the scatter plot of the followers-followings spread in Twitter as of July 2009 with the corresponding scatter plot in February 2008, which was before the restriction was imposed (reproduced from [12] as an inset). While the scatter plot in 2008 is almost symmetrical about x = y, the scatter plot in 2009 has a sharp edge at x = 2000 due to the restriction at this degree. Users having more than 2000 followings (out-degree x) now need to have a sufficient number of followers (in-degree y), such that their out-degree remains less than 110% of their in-degree (i.e. they lie to the left of the x = 1.1y line); this verifies the '10% rule' stated in Sect. 1. Note that there exists a small fraction of users who violate the 10% rule; possibly Twitter relaxes its restriction for some users, such as those who joined the OSN before the restriction was imposed. Degree Distributions: The in-degree and out-degree distributions of the Twitter OSN, as of July 2009, are shown in Fig. 1b. The in-degree distribution (inset) shows a power-law decay pi ∼ i^−2.06 over a large range of in-degrees (the power-law exponents are estimated by the method in [7]); however, the out-degree

¹ We ourselves crawled 1 million users during Oct-Nov 2009; though these data exhibited the effects of the restriction on the network properties, as observed in [8], they suffered from the known bias of partial BFS-sampling towards high-degree nodes.
Fig. 1. (a) Scatter plot of followers-followings spread in Twitter: main figure - in July 2009 (along with the lines x = 1.1y and x = 2000), inset - in Jan-Feb 2008 (reproduced from [12]) (b) Degree distributions of Twitter OSN as in July 2009: main plot - outdegree distribution, inset - in-degree distribution
distribution clearly shows a departure from the power-law nature that was observed in measurements on Twitter before the restriction was imposed [10, 12]. Now, the power-law pj ∼ j^−1.92 for out-degrees below the point of restriction is followed by a sharp spike at around out-degree j = 2000, and a rapid decay in the distribution beyond this point. This is because a significant number of users are unable to increase their out-degree beyond a certain limit near 2000, as they do not have a sufficient in-degree (followers). The out-degree distribution also shows a peak at out-degree 20 because, until 2009, Twitter used to recommend an initial set of 20 people for every newcomer to follow with a single click, and many newcomers took up this offer (as also observed in [14]).
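Power-law exponents such as the 2.06 and 1.92 above are usually fitted by maximum likelihood rather than by least-squares on the log-log histogram. The snippet below is a minimal continuous-MLE estimator of this kind; it is offered as an illustration of the general approach, not as a reproduction of the specific method cited as [7].

```python
import math
import random

def mle_exponent(values, x_min):
    """Continuous maximum-likelihood estimate of a power-law exponent:
    gamma_hat = 1 + n / sum(ln(x / x_min)), using only observations >= x_min."""
    xs = [x for x in values if x >= x_min]
    return 1.0 + len(xs) / sum(math.log(x / x_min) for x in xs)

# Sanity check on synthetic samples drawn from p(x) ~ x^-2 via inverse-CDF
# sampling (x = (1 - u)^(-1/(gamma-1)) with x_min = 1, gamma = 2):
rng = random.Random(0)
samples = [(1.0 - rng.random()) ** -1.0 for _ in range(100000)]
gamma_hat = mle_exponent(samples, x_min=1.0)  # should be close to 2.0
```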
3 Modeling Restricted Growth Dynamics of OSNs
In this section, we extend the model we proposed in [8] to develop a complete analytical framework for modeling the growth of OSNs in general and Twitter in particular. We model the growth dynamics in an OSN (i.e. the joining of new users and the creation of new social links) by the preferential attachment model [4], which has been experimentally shown to occur in several OSNs [13, 15]. It also produces power-law degree distributions similar to the empirical distributions in Twitter before the restriction was imposed [10, 12]. Our proposed model is a customized version of the network-growth model proposed by Krapivsky et al. [11] (henceforth referred to as the KRR model), which we modify by introducing restrictions on out-degree, similar to the follow-limit imposed in Twitter. We first briefly discuss the modification introduced by us in [8] for the sake of completeness.

3.1 The Model Proposed in [8]

In this model, any one of the following events occurs at each discrete time-step: (1) with probability p, a new node is introduced and it forms a directed out-edge to an existing node, or (2) with probability q = 1 − p, a new directed edge is created between two existing nodes.
The probability that a new node (event 1) links to an (i, j)-node (i.e. a node having in-degree i and out-degree j) is assumed to be proportional to (i + λ), since intuitively a new user is more likely to link to (follow) a popular user having many followers (high in-degree). Analogously, the probability that a new edge (event 2) is created from an (i1, j1)-node to an (i2, j2)-node is assumed to be proportional to (i2 + λ)(j1 + μ). Here λ and μ are model parameters that introduce randomness in the preferential attachment rules [11]. Let Nij(t) be the average number of (i, j)-nodes in the network at time t. The model considers the following rate-equations to track how Nij changes with time. Change in Nij due to change in out-degree of nodes: Restrictions on out-degree are incorporated in the model by introducing the βij terms in (1) below, where βij is defined to be 1 if users having in-degree i are allowed (by the restriction) to have out-degree j, and 0 otherwise. Nij increases when an (i, j−1)-node forms a new out-edge (event 2); however, only those (i, j−1)-nodes are allowed to do this for which βij = 1. This event occurs with the rate q(j−1+μ)Ni,j−1 βij divided by the normalization factor Σij (j+μ)Nij βi,j+1. Similarly, Nij is reduced when an (i, j)-node (having βi,j+1 = 1) forms a new out-edge (event 2). Thus the rate of change in Nij(t) due to change in out-degree of nodes is:

  dNij/dt |out = q · [ (j−1+μ) Ni,j−1 βij − (j+μ) Nij βi,j+1 ] / Σij (j+μ) Nij βi,j+1    (1)
Change in Nij due to change in in-degree of nodes: This case is similar to the above, except that we do not consider any restriction on in-degrees:

  dNij/dt |in = [ (i−1+λ) Ni−1,j − (i+λ) Nij ] / Σij (i+λ) Nij    (2)
Hence the total rate of change in Nij(t) is given by

  dNij/dt = dNij/dt |in + dNij/dt |out + p δi0 δj1    (3)
where the last term accounts for the introduction of new nodes with in-degree 0 and out-degree 1 (Kronecker's delta δxy is 1 for x = y and 0 otherwise). This model can be used to study various restrictions by suitably defining the βij terms in (1). To study the Twitter follow-limit, we define βij for a generalized 'κ-% rule' starting at out-degree s (κ = 10 and s = 2000 in Twitter, see Sect. 1) as

  βij = 1 if j ≤ max{s, (1 + 1/κ) i}, for all i; and βij = 0 otherwise.
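The two events and the βij restriction above translate directly into a simple simulation. The sketch below is illustrative rather than the authors' simulator: it uses naive linear scans, arbitrary small values of s and of the number of steps, does not exclude self-edges, and selects nodes with the preferential weights (i + λ) and (j + μ) defined by the model.

```python
import random

def beta(i, j, s=100, kappa=10.0):
    # beta_ij = 1 iff in-degree i permits out-degree j under the kappa-% rule
    return j <= max(s, (1 + 1 / kappa) * i)

def simulate(steps=2000, p=0.1, lam=1.0, mu=6.0, s=100, kappa=10.0, seed=7):
    rng = random.Random(seed)
    indeg, outdeg = [0], [1]  # bootstrap: one node with a single (dangling) out-edge
    for _ in range(steps):
        # target of the new edge: an existing node, chosen with weight (i + lam)
        tgt = rng.choices(range(len(indeg)), weights=[i + lam for i in indeg])[0]
        if rng.random() < p:
            indeg.append(0)   # event 1: a new node joins with out-degree 1,
            outdeg.append(1)  # its single out-edge pointing at tgt
        else:
            # event 2: an existing node adds an out-edge to tgt; the source is
            # chosen with weight (j + mu) among nodes with beta_{i,j+1} = 1
            cand = [v for v in range(len(outdeg))
                    if beta(indeg[v], outdeg[v] + 1, s, kappa)]
            src = rng.choices(cand, weights=[outdeg[v] + mu for v in cand])[0]
            outdeg[src] += 1
        indeg[tgt] += 1
    return indeg, outdeg

indeg, outdeg = simulate()
# Every node respects the soft cut-off at the end of the run.
assert all(beta(i, j) for i, j in zip(indeg, outdeg))
```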
3.2 Extending the Model to Find Degree Distributions Analytically
The preliminary model in [8] is extended by solving (3) to analytically compute the emerging degree distributions in the presence of 'soft' cut-offs. We demonstrate the solution for the commonly believed version of the Twitter restriction; other variations of 'soft' cut-offs can be analysed by a similar technique. At time t, let N(t) be the total number of nodes in the network, and let I(t) and J(t) be the total in-degree and total out-degree respectively. Since at every time-step a new edge is added, and a new node is added with probability p,

  N(t) = Σij Nij = p t,  I(t) = Σij i Nij = J(t) = Σij j Nij = t    (4)
Thus parameter p controls the relative number of nodes and edges in the network. The denominator (normalizing factor) in (2) equals (I + λN). For the denominator in (1), we make a simplifying approximation: we assume that at a given time, the number of nodes actually blocked by the restriction from increasing their out-degree (i.e. the number of (i, j)-nodes for which βi,j+1 is 0) is negligibly small compared to the total number of nodes in the network, which implies

  Σij (j+μ) Nij βi,j+1 ≈ Σij (j+μ) Nij = (J + μN)    (5)
Note that this approximation is valid only for large values of μ, when the fraction of nodes blocked by the restriction actually becomes very small (see Sect. 4.2). By solving (3) with the above approximation for a few small values of i, j, it is seen that Nij(t) grows linearly with time [11]; hence we can substitute

  Nij(t) = nij t    (6)
where nij is the (constant w.r.t. time) rate of increase in the number of (i, j)-nodes. Substituting (4) and (6) in (3) gives a recursion relation for nij:

  nij = [ (i−1+λ) ni−1,j − (i+λ) nij ] / (1 + λp) + [ q(j−1+μ) ni,j−1 βij − q(j+μ) nij βi,j+1 ] / (1 + μp) + p δi0 δj1    (7)

For brevity, we denote the first fraction on the right-hand side in (7) as Aij. To simplify the computation of the functional form of the degree distribution, we assume (as in the original KRR model [11]) that the power-law exponents of the in-degree and out-degree distributions are equal, which implies λ = (μ+1)/q. The exponents were actually found [10] to be equal for the Twitter OSN before the restriction was imposed. Since we are studying restrictions only on out-degree (as in Twitter), we shall henceforth consider only the out-degree distribution. The in-degree distribution can be computed by the original KRR model [11] and will be of the form of a power-law for the entire range of in-degrees. Let Njout(t) = Σi Nij(t) be the number of nodes with out-degree j at time t; using (6), Njout(t) = t Σi nij = t gj, where gj = Σi nij. Thus the out-degree distribution at j (i.e. the fraction of nodes with out-degree j) can be obtained as Njout(t)/N(t) = gj/p. To obtain the complete out-degree distribution, we solve (7) to get gj = Σi nij for all j by considering the following cases.
Case 1: j < s (before the starting point of the cutoff): Since there is no restriction for j < s, the model behaves similarly to the original KRR model [11]; hence

  gj = G · Γ(j + μ) / Γ(j + 1 + q⁻¹ + μq⁻¹) ∼ j^−(1 + q⁻¹ + μpq⁻¹)    (8)
where Γ(·) is the Euler gamma function, and G is a constant. Note that (8) is actually an approximation under assumption (5); in reality, the out-degree distribution for j < s is also slightly affected by the restriction (see Sect. 4.1).

Case 2: j = s (at the starting point of the cutoff): Let α denote the fraction 1/(1 + 1/κ) in case of a κ-% rule (κ = 10 in Twitter). A node can have an out-degree j > s only if it has an in-degree i ≥ αj, implying that for βi,j+1 (for j ≥ s) to be 1, i ≥ α(j+1). Hence, for j = s, (7) becomes

  nis = Ais + q(s−1+μ) ni,s−1 / (1 + μp)                       for i < α(s+1)
  nis = Ais + [ q(s−1+μ) ni,s−1 − q(s+μ) nis ] / (1 + μp)      for i ≥ α(s+1)    (9)

We use a standard technique [11] to solve rate equations: summing (9) over all i ≥ 0, the terms in Ais disappear (they cancel each other out, except the first term in the first equation, i.e. for the case i = 0, but that term is zero), and we get

  gs = (s−1+μ) / (s + (1+μ)q⁻¹) · gs−1 + (s+μ) / (s + (1+μ)q⁻¹) · cs    (10)

where gs−1 can be computed by (8) and cs = Σi=0..α(s+1) nis is the rate of increase in the number of nodes that have out-degree s but cannot increase their out-degree further (i.e. (i, j)-nodes for which j = s and βi,s+1 = 0). Let α(s+1) be denoted by d. To compute cs, we sum (9) in the range 0 ≤ i ≤ d to get
  cs = (1 / (1 + λp)) · [ (s−1+μ) Σi=0..d ni,s−1 − (d+λ) nds ]    (11)
where nds can be obtained as

  nds = (s+μ−1) Γ(d+λ) / Γ(d + λ(1+p) + 2) · Σk=0..d [ Γ(k + λ(1+p) + 1) / Γ(k+λ) ] nk,s−1    (12)
from (9) after some algebraic manipulations (omitted for the sake of brevity). The terms ni,s−1 in (11) and (12) can be evaluated from the original KRR model (eqn. 18 in [11]) since they are not affected by the restriction starting from j = s. Substituting cs from (11) into (10), we can obtain a closed-form expression for the degree distribution gs/p at j = s. Equations (10) and (11) can be used to estimate the fraction of members blocked at the point of cut-off, as detailed in Sect. 4.

Case 3: j > s (beyond the starting point of the cutoff): In this region, (7) becomes

  nij = 0                                                      for i < αj
  nij = Aij + q(j−1+μ) ni,j−1 / (1 + μp)                       for αj ≤ i < α(j+1)
  nij = Aij + [ q(j−1+μ) ni,j−1 − q(j+μ) nij ] / (1 + μp)      for i ≥ α(j+1)    (13)
Fig. 2. (a) Agreement of simulation and proposed model (b) Fitting empirical Twitter data with model (main plots: out-degree distributions, inset: in-degree distributions)
since nodes having in-degree i < αj cannot have out-degree j (> s), nodes having in-degree αj ≤ i < α(j+1) can have out-degree j but not j+1, and nodes with in-degree i ≥ α(j+1) can increase their out-degree from j to j+1. Proceeding similarly as in the case j = s, and adding (13) over all i ≥ 0, we get

  gj = (j−1+μ) / (j + (1+μ)q⁻¹) · [gj−1 − cj−1] + (j+μ) / (j + (1+μ)q⁻¹) · cj    (14)
where cj = Σi=0..α(j+1) nij is the rate of increase in the number of nodes which have out-degree j but cannot increase their out-degree further unless their in-degree increases. Proceeding from (14) in a similar way as in the case j = s, we can derive analytical expressions for gj and cj for j > s iteratively, using the values of gj−1 and cj−1 (e.g. gs+1 and cs+1 can be derived using gs and cs, and so on). Details are omitted for brevity.

3.3 Values of Model Parameters Used for Experiments

The parameter p (ratio of nodes to edges in the network) is set to 0.028, as measured from the empirical data described in Sect. 2. Estimating λ and μ (which indicate the level of randomness in link-creation dynamics) for an OSN is a challenging issue; moreover, they can change with time, e.g. due to the recommendation of popular users to others in Twitter. Hence we conduct experiments for different values of these parameters. Since the model assumes λ = (μ+1)/(1−p), we report results for different values of μ only. Parameters of the restriction function are set to κ = 10 and s = 2000 (as in Twitter) unless otherwise stated.

3.4 Validating Proposed Model with Simulated and Empirical Data

Correctness of the proposed model is validated by simulating the restricted growth of the network. Since experiments at the scale of the empirical Twitter data are infeasible, simulations were performed for 100,000 nodes and cut-off s = 100 (Fig. 2a). Though the model gives approximate solutions for low values of μ (as stated in Sect. 3.2), Fig. 2a shows almost exact agreement between the
theory and simulation for μ = 6.0 (this value fits the empirical distributions for Twitter). Exact agreement was obtained in our experiments for μ > 50.0 (results not reported for brevity). The empirical in-degree and out-degree distributions of Twitter (described in Sect. 2) show excellent fit with those obtained from the model using μ = 6.0 (Fig. 2b). This signifies that the proposed model successfully captures the growth dynamics of the Twitter OSN. However, the empirical out-degree distribution deviates from the theoretical one in two aspects: (i) the empirical distribution has a peak at out-degree 20, which is explained in Sect. 2, and (ii) the spike at out-degree s = 2000 is lower in the empirical data as compared to that in the theory; this can be explained by the following two factors. First, there exist a few thousand users in the empirical data who violate the 10% rule, as stated in Sect. 2. Second, we have observed that many Twitter users who actually get blocked by the restriction reduce their out-degree by un-following some of their current followings; this naturally leads to a smaller spike at s and a corresponding rise in the fraction of users having out-degree a little less than s.
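Independently of the simulation, the gamma-function form (8) can be checked numerically with log-gamma functions. The snippet below is our own sanity check (not part of the paper's validation), using the fitted value μ = 6.0 and the measured p = 0.028:

```python
import math

P, MU = 0.028, 6.0
Q = 1.0 - P

def g_unnormalized(j):
    """Gamma-function form of g_j from (8), up to the constant G."""
    return math.exp(math.lgamma(j + MU) - math.lgamma(j + 1 + 1 / Q + MU / Q))

# Predicted power-law exponent 1 + 1/q + mu*p/q (about 2.20 for these values) ...
predicted = 1 + 1 / Q + MU * P / Q
# ... versus the log-log slope of g_j measured between j = 10^4 and j = 10^5:
slope = (math.log(g_unnormalized(10**5)) - math.log(g_unnormalized(10**4))) / (
    math.log(10**5) - math.log(10**4))
assert abs(-slope - predicted) < 0.01
```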
4 Insights from the Model
Now that the model is validated and is able to reproduce the degree distributions of the Twitter OSN, we use it to draw various insights on 'soft' cut-offs.

4.1 Effects of Restrictions on Degree Distributions
'Hard' cut-offs in peer-to-peer networks are known to cause a reduction in the absolute value of the power-law exponent γ of the degree distribution below the cut-off degree [9]. Our experiments [8] show a similar effect on the exponent |γout| of the out-degree distribution due to 'soft' cut-offs in directed networks like Twitter; this can be explained by re-considering our approximation in (5). The denominator in (1), which needs to be evaluated only for those nodes that are currently not blocked by the restriction (i.e. (i, j)-nodes for which βi,j+1 = 1), is in fact

  Σj (j+μ) Σi Nij βi,j+1 = Σj (j+μ) Σi≥α(j+1) nij t = (1 + μp)t − ζt    (15)
where ζ = Σj (j+μ) cj is the unknown term. Thus the denominator of the second fraction on the right-hand side in (7) should actually be (1 + μp − ζ). Proceeding as in Sect. 3, it can be shown that in the range j < s, |γout| reduces from (1 + q⁻¹ + μpq⁻¹) in the absence of any restriction (as stated in (8)) to (1 + (1−ζ)q⁻¹ + μpq⁻¹) in the presence of the 'soft' cut-off modeled in Sect. 3. A smaller |γ| indicates a more homogeneous structure of the network with respect to node-degrees. This provides scalability to OSNs, as the messages produced get equitably distributed among various users, and hence various servers, rather than being directed towards a small group of users (servers). The theoretical reduction in |γout| is also validated by real data of the Twitter OSN, where |γout| has decreased after the imposition of the cut-off, from 2.412 as reported in [10] to 1.92 in the data described in Sect. 2.
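For a quick numeric illustration (our arithmetic, not the paper's), the exponent formula above can be inverted to estimate ζ from the observed drop of |γout| to 1.92, using p = 0.028 and μ = 6.0 from Sect. 3.3:

```python
P, MU = 0.028, 6.0
Q = 1.0 - P

def restricted_exponent(zeta):
    """|gamma_out| for j < s under the soft cut-off: 1 + (1 - zeta)/q + mu*p/q."""
    return 1 + (1 - zeta) / Q + MU * P / Q

# zeta = 0 recovers the unrestricted exponent 1 + 1/q + mu*p/q (about 2.20 here);
unrestricted = restricted_exponent(0.0)

# inverting the formula for the observed post-restriction exponent 1.92 suggests
# that the blocked-node term zeta is roughly 0.27 under these parameter values.
zeta_hat = 1 - (1.92 - 1 - MU * P / Q) * Q
```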
Fig. 3. Variation of the fraction of users blocked at j = s (i.e. height of spike in out-degree distribution) (a) with s (log-log plot) (b) with κ (p = 0.028, μ = 6.0)
4.2 Quantifying the Fraction of Users Blocked due to the Restriction

In the absence of any restriction, gj decays as gj = (j−1+μ) gj−1 / (j + (1+μ)q⁻¹) [11]. Comparing this with (10), we see that due to the 'soft' cut-off at j = s, the fraction of nodes having out-degree s (i.e. gs/p) includes the following additional term, which accounts for the spike in the out-degree distribution at this point:

  φs = (s+μ) / (s + (1+μ)q⁻¹) · cs/p    (16)
where cs is obtained from (11). For s ≫ μ and q ≈ 1 (for a real-world OSN, the cut-off s is typically large and p = 1 − q is very small), φs ≈ cs/p, which is an estimate of the fraction of nodes (users) blocked at the point of cut-off. The effects of different parameters on φs are discussed below. Our experiments indicate that φs varies approximately inversely proportionally to the network density p (graphs not shown for lack of space), since for higher p (i.e. when the joining of new users dominates link-creation by existing users), the number of nodes reaching the cut-off is reduced. The network density of OSNs is known to vary non-monotonically over time [13]; hence in practice, parameters of the restriction function (e.g. s and κ) may be varied depending on the dynamics of the network at different stages. φs also reduces rapidly with increase in the randomness parameter μ (graphs not shown): for more random dynamics, new links get distributed among a large number of nodes, resulting in a smaller fraction of nodes approaching the cut-off. Figures 3a and 3b show the variation in φs with the restriction parameters s and κ respectively; we use different values of μ to investigate varying link-creation dynamics (from highly preferential to more random). φs shows a power-law decay with increasing s (Fig. 3a, in log-log scale); for lower values of s, a larger fraction of users get blocked, leading to a greater reduction in the system-load but at the risk of increased user-dissatisfaction. Similarly, with increase in κ, a higher in-degree becomes necessary to cross the cut-off, resulting in a larger fraction of blocked users; as shown in Fig. 3b, φs increases parabolically with κ.
S. Ghosh, A. Srivastava, and N. Ganguly
Fig. 4. (a) Variation of utility U with cut-off s (b) Variation of U with κ (c) Comparing in-degree distribution of nodes having out-degree 2000 according to empirical Twitter data (shown in grey) and model (shown in black) (for all plots, p = 0.028, μ = 6.0)
4.3 Using the Proposed Framework to Design Restrictions
A restriction imposed in an OSN can be said to be effective only if it achieves both of the conflicting objectives: a desired reduction in system-load and minimal dissatisfaction among blocked users. Our proposed model can be used in the process of designing effective restrictions, as demonstrated below. We define a utility function for a restriction as U = L − w_u B, where L is the reduction in the number of links due to the restriction (an estimate of the reduction in system-load caused by message communication along social links) and B is the fraction of blocked (dissatisfied) users; w_u is the relative weight given to the objective of minimizing user-dissatisfaction and can be chosen suitably by design engineers. For a restriction at out-degree s, we compute L = \sum_{j \ge s} j g_j^0 - \sum_{j \ge s} j g_j, where g_j is as obtained in Sect. 3 in the presence of the restriction, while g_j^0, the corresponding quantity in an unrestricted network, is computed using the original KRR model (see (8)). Note that our model assumes g_j = g_j^0 for j < s, as stated in Sect. 3. As discussed above, B can be approximated as φ_s ≈ c_s/p. Figure 4a shows the variation in utility U with s for a soft cut-off with κ = 10%, for different w_u. In each case, the maximum value of U attained is marked. For low w_u, when much higher emphasis is laid on reducing system-load, a low cut-off degree is the best choice. However, as w_u increases, low values of s reduce U since a large fraction of users gets blocked; hence the optimal s occurs at higher values. Interestingly, the optimal value of s in the case w_u = 50 matches the value of 2000 chosen in Twitter. The variation in U with κ (for fixed s = 2000) is shown in Fig. 4b. For low w_u (higher emphasis on reducing system-load), U increases with κ as more users get blocked from creating new links; on the contrary, U decreases with κ for higher w_u. For w_u = 50, the decrease in U stabilizes around the value κ = 10, which matches the value chosen in Twitter.
Such analyses are an efficient way for the OSN authorities to make design-choices while imposing restrictions, so that both the objectives of reducing system-load and minimizing user-dissatisfaction can be balanced.
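The utility sweep behind Fig. 4a can be sketched numerically. Since the restricted distribution of eqs. (10)-(12) is not reproduced here, the sketch below stands in for it with a hard-cut-off approximation: g_j^0 comes from the unrestricted KRR recurrence, links beyond s are treated as pruned, and every node that would exceed s counts as blocked. This proxy and the grid of candidate cut-offs are illustrative assumptions, not the paper's exact model.

```python
# Sketch: sweeping the utility U = L - w_u * B over candidate cut-offs s.
# Hard-cut-off proxy for the restricted network (an assumption): links
# beyond s are pruned and all nodes reaching s count as blocked.
import numpy as np

p, mu = 0.028, 6.0
q = 1.0 - p
J = 200000

j = np.arange(1, J + 1, dtype=float)
ratios = (j[:-1] + mu) / (j[1:] + (1 + mu) / q)   # g0_{j+1} / g0_j
g0 = np.concatenate(([1.0], np.cumprod(ratios)))
g0 *= p / g0.sum()                                # normalise: sum_j g0_j = p

def utility(s, wu):
    tail = g0[s - 1:]
    degs = j[s - 1:]
    L = float(np.sum((degs - s) * tail))   # links pruned by the cut-off
    B = float(tail.sum() / p)              # fraction of blocked users
    return L - wu * B

s_grid = np.arange(100, 10001, 100)
best = {wu: int(s_grid[np.argmax([utility(s, wu) for s in s_grid])])
        for wu in (10, 30, 50)}
print(best)
```

The optimal cut-off grows as user-dissatisfaction is weighted more heavily; under these assumptions the w_u = 50 optimum lands in the low thousands, echoing the qualitative story of Fig. 4a.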
Assessing the Effects of a Soft Cut-Off in the Twitter Social Network

4.4 Estimating the Population of Spammers in the OSN
The population of spammers in an OSN like Twitter can be roughly estimated from the in-degree distribution of users who get blocked at the cut-off. Since the in-degree and out-degree of most legitimate users in Twitter are highly correlated [10], among the users blocked at the cut-off, the legitimate ones can be expected to have relatively high in-degrees (numbers of followers); on the contrary, spammers are likely to have very low in-degrees even when their out-degrees reach the cut-off. According to the model, the number of (i, s)-nodes (with i < α(s + 1) for nodes blocked at s) at time t is N_is(t) = n_is · t, where n_is can be computed from (12) by substituting i for d. Since the number of nodes having out-degree s at time t is g_s t (as computed in Sect. 3), n_is/g_s gives the value of the in-degree distribution (conditioned on having out-degree s) at in-degree i. Figure 4c compares the in-degree distribution of nodes having out-degree s = 2000, as obtained from the model (for μ = 6.0), with that from the empirical Twitter data. The sharp drop in the theoretical distribution occurs at the minimum in-degree of 1820 required to overcome the restriction. Since the model does not consider follow-spammers, most nodes having out-degree s have relatively high in-degrees (corresponding to legitimate users) in the theoretical distribution. In contrast, the Twitter data contain a much higher fraction of 'follow spammers' having low in-degrees and out-degree 2000.
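Once the conditional in-degree distribution is in hand, the spammer estimate reduces to a threshold count below the minimum in-degree needed to pass the restriction. The sketch below applies this idea to a synthetic sample: the 1820 threshold is the value quoted above, but the in-degree sample itself is invented purely for demonstration.

```python
# Sketch: estimating the spammer share among users blocked at the cut-off.
# The in-degree sample is synthetic (an assumption): legitimate users sit
# at or above the minimum in-degree needed to pass the restriction, while
# follow-spammers sit far below it.  Only the thresholding logic matters.
import random

random.seed(42)
MIN_INDEG = 1820   # minimum in-degree to overcome the restriction (from the model)

legit = [random.randint(MIN_INDEG, 3000) for _ in range(850)]
spammers = [random.randint(0, 300) for _ in range(150)]
blocked_indegrees = legit + spammers

spam_share = sum(1 for d in blocked_indegrees if d < MIN_INDEG) / len(blocked_indegrees)
print(f"estimated spammer share among blocked users: {spam_share:.2f}")
```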
5 Conclusion
In this paper, we take the first step towards analysing restrictions on node-degree in OSNs, as well as towards modeling 'soft' cut-offs in any type of network. We analyse the dependence of the fraction of blocked users on the restriction parameters, showing a power-law reduction with the cut-off degree s and a parabolic increase with κ. We also propose a utility function for restrictions that helps to balance the conflicting objectives of reducing system-load and minimizing user-dissatisfaction; this gives practical insights on the choice of values for the restriction parameters, and justifies the choices made in Twitter. Such analyses will be essential for OSN authorities in the near future for systematically designing restrictions that meet their goals. Soft cut-offs can be expected to become the preferred type of restriction in all types of OSNs, instead of the frequently criticized 'hard' cut-offs, as they can easily be tuned to the demands of different types of users. Soft cut-offs can also be applied in undirected OSNs (e.g. Facebook, Orkut) by differentiating between the initiator of a social link and the acceptor, so that users can be restricted from initiating an arbitrary number of links.
References

1. Twitter help center: Following rules and best practices, http://support.twitter.com/forums/10711/entries/68916
2. Twitter blog: Making progress on spam (August 2008), http://blog.twitter.com/2008/08/making-progress-on-spam.html
3. The 2000 following limit on Twitter (March 2009), http://twittnotes.com/2009/03/2000-following-limit-on-twitter.html
4. Barabasi, A.L., Albert, R.: Emergence of scaling in random networks. Science 286(5439), 509–512 (1999)
5. Bonato, A., Janssen, J., Pralat, P.: A geometric model for on-line social networks. In: WOSN (June 2010)
6. Catone, J.: Twitter's follow limit makes Twitter less useful (August 2008), http://www.sitepoint.com/blogs/2008/08/13/twitter-follow-limit-makes-twitter-less-useful/
7. Clauset, A., Shalizi, C.R., Newman, M.E.J.: Power-law distributions in empirical data. SIAM Review 51(4), 661–703 (2009)
8. Ghosh, S., Korlam, G., Ganguly, N.: The effects of restrictions on number of connections in OSNs: A case-study on Twitter. In: Workshop on Online Social Networks (WOSN) (June 2010)
9. Guclu, H., Yuksel, M.: Scale-free overlay topologies with hard cutoffs for unstructured peer-to-peer networks. In: IEEE ICDCS (2007)
10. Java, A., Song, X., Finin, T., Tseng, B.: Why we twitter: An analysis of a microblogging community. In: Zhang, H., Spiliopoulou, M., Mobasher, B., Giles, C.L., McCallum, A., Nasraoui, O., Srivastava, J., Yen, J. (eds.) WebKDD 2007. LNCS, vol. 5439, pp. 118–138. Springer, Heidelberg (2009)
11. Krapivsky, P.L., Rodgers, G.J., Redner, S.: Degree distributions of growing networks. Phys. Rev. Lett. 86(23), 5401–5404 (2001)
12. Krishnamurthy, B., Gill, P., Arlitt, M.: A few chirps about Twitter. In: WOSN (2008)
13. Kumar, R., Novak, J., Tomkins, A.: Structure and evolution of online social networks. In: ACM KDD, pp. 611–617 (2006)
14. Kwak, H., Lee, C., Park, H., Moon, S.: What is Twitter, a social network or a news media? In: ACM WWW, pp. 591–600 (2010)
15. Mislove, A., Koppula, H.S., Gummadi, K.P., Druschel, P., Bhattacharjee, B.: Growth of the Flickr social network. In: WOSN (2008)
16. Mitra, B., Dubey, A., Ghose, S., Ganguly, N.: How do superpeer networks emerge? In: IEEE INFOCOM, pp. 1514–1522 (2010)
17. Owyang, J.: The many challenges of social network sites (February 2008), http://www.web-strategist.com/blog/2008/02/11/the-many-challenges-of-social-networks/
Characterising Aggregate Inter-contact Times in Heterogeneous Opportunistic Networks

Andrea Passarella and Marco Conti

IIT-CNR, Via G. Moruzzi 1, 56124 Pisa, Italy
{a.passarella,m.conti}@iit.cnr.it
Abstract. A pioneering body of work in the area of mobile opportunistic networks has shown that characterising inter-contact times between pairs of nodes is crucial. In particular, when inter-contact times follow a power-law distribution, the expected delay of a large family of forwarding protocols may be infinite. The most common approach adopted in the literature to study inter-contact times consists in looking at the distribution of the inter-contact times aggregated over all node pairs, assuming it correctly represents the distributions of the individual pairs. In this paper we challenge this assumption. We present an analytical model that describes the dependence between the individual pairs' distributions and the aggregate distribution. Using the model, we show that in heterogeneous networks - when not all pairs' contact patterns are the same - most of the time the aggregate distribution is not representative of the individual pairs' distributions, and that looking at the aggregate can lead to completely wrong conclusions about the key properties of the network. For example, we show that aggregate power-law inter-contact times (suggesting infinite expected delays) can frequently emerge in networks where individual pairs' inter-contact times are exponentially distributed (meaning that the expected delay is finite). From a complementary standpoint, our results show that the heterogeneity of individual pairs' contact patterns plays a crucial role in determining the aggregate inter-contact time statistics, and that focusing on the latter only can be misleading.

Keywords: opportunistic networks, analytical modelling.
1 Introduction
Foundational results in the area of mobile opportunistic networks have clearly shown that characterising inter-contact times between nodes is crucial [Aug07, Kar07]. In this paper we thoroughly investigate the dependence between the distributions of individual node pairs' inter-contact times and the distribution of the aggregate inter-contact times. Specifically, an individual pair distribution is the distribution of the time elapsed between two consecutive contacts between that pair of nodes. The aggregate distribution is the distribution of inter-contact times of all pairs considered together, i.e., the distribution of all inter-contact times measured in the network between any two nodes.

This work was partially funded by the European Commission under the FET-PERADA SOCIALNETS (FP7-IST-217141), FIRE SCAMPI (FP7-IST-258414), and FET-AWARE RECOGNITION (FP7-IST-257756) Projects.

J. Domingo-Pascual et al. (Eds.): NETWORKING 2011, Part II, LNCS 6641, pp. 301–313, 2011.
© IFIP International Federation for Information Processing 2011

A clear understanding of the dependence between the individual pairs' and the aggregate distributions is very important, but has not yet been achieved in the literature. It has been clearly shown that, depending on the distribution of pairs' inter-contact times, families of forwarding protocols may produce infinite expected delays [Aug07]. However, most of the literature has focused on the aggregate distribution (see Section 2 for a review), assuming it is representative of the individual pairs' distributions. This is mainly due to the fact that, in real traces, it is much easier to measure and characterise the aggregate distribution than the individual pairs' distributions, as gathering enough samples for each and every pair is often very difficult. Aggregate inter-contact times have frequently been found to be distributed according to a power-law with or without exponential cut-off. This has been perceived as a severe challenge for forwarding in opportunistic networks, as an important class of protocols yields infinite expected delay if individual pairs' distributions are power-law [Aug07]. In this paper we carefully review the hypothesis that the aggregate distribution is representative of individual pairs' distributions, by deriving an analytical model that describes the dependence between the two. We consider a general heterogeneous networking environment, in which the individual pairs' distributions are all of the same type (e.g., exponential, Pareto, ...), but whose parameters are unknown a priori. We assume that the rates of the pairs' inter-contact times (the reciprocals of the averages) are drawn from a given distribution, which therefore determines the specific parameters of each pair's inter-contact times.
In other words, as the distribution of the rates controls the parameters of the inter-contact time distributions, it allows us to control the type of heterogeneity in the network. The model described in the paper shows that both the distribution of the rates and the distributions of individual pairs' inter-contact times impact the aggregate distribution. In particular, we use the model to find, among others, the conditions under which the aggregate distribution features the main characteristics often found in traces, i.e., a power-law with or without exponential cut-off. We can summarise the key findings presented in the paper as follows.

– Starting from exponentially distributed individual pairs' inter-contact times, the aggregate is distributed exactly according to a Pareto law iff the rates of the pairs' inter-contact times are drawn from a Gamma distribution.
– As an exponential distribution is a special case of a Gamma distribution, Pareto aggregate inter-contact times can result from a network where both the individual inter-contact times and their rates are exponentially distributed.
– When pairs' inter-contact times are exponential, and rates are drawn from a Pareto distribution, the asymptotic behaviour of the aggregate distribution (for large inter-contact times) is a power-law with or without exponential cut-off. In particular, the exponential cut-off appears when rates cannot be arbitrarily close to 0.
– Under exponentially distributed individual pairs' inter-contact times, the distribution of the rates plays a crucial role in generating aggregate inter-contact times featuring a heavy tail. Specifically, whenever rates can be arbitrarily close to 0, a power-law appears in the aggregate distribution.

Our findings clearly show that relying only on the aggregate inter-contact time distribution for assessing key properties of opportunistic networks is not appropriate in general, and can lead to wrong conclusions. In particular, finding a power-law in the aggregate inter-contact time distribution is not necessarily an indication that individual pairs' distributions feature a heavy tail as well, and that therefore forwarding protocols may not converge. On the contrary, the heterogeneity of the network, represented in our study by the distribution of the individual pairs' inter-contact rates, plays a crucial role in determining the nature of the aggregate distribution, which can be totally different from the distributions of the individual pairs.

The rest of the paper is organised as follows. We review the relevant state-of-the-art in Section 2. Then, Section 3 presents the general model describing the dependence between the individual pairs' inter-contact times, the distribution of their rates, and the aggregate inter-contact time distribution. In Section 4 we use the model to investigate under which conditions aggregate distributions featuring the main characteristics found in real traces can be generated. We also present simulation results validating our analytical findings. Finally, in Section 5 we draw the main conclusions of this study.
2 Related Work
The first work, to the best of our knowledge, that highlighted the importance of inter-contact times for studying opportunistic networks is [Aug07]. In this work, the authors show by analysis that a popular family of routing protocols may produce infinite expected delays if individual pairs' inter-contact time distributions are heavy-tailed. The same work also analyses a set of traces, showing that the aggregate inter-contact times actually follow a power-law distribution. Assuming that the same property holds true for individual pairs as well, the authors conclude that those forwarding protocols may not converge in real opportunistic networks. This very pessimistic result has been somewhat softened by the work in [Kar07], where the authors re-analyse the same traces and suggest that the aggregate inter-contact time distribution might indeed present an exponential cut-off in the tail. Assuming, again, that the same property holds true also for individual pairs, they conclude that forwarding protocols might actually not yield infinite delay. In this work the authors discuss the fact that the aggregate and the individual pairs' distributions may be different. They propose an initial model for studying the dependence between the two, which we exploit as a starting point in our paper. However, they do not study this aspect further, after checking that, in their traces, a subset of individual pairs' inter-contact times are distributed according to a power-law with exponential cut-off.
These two papers informed most of the subsequent literature, which has mostly assumed that the distributions of individual pairs and the aggregate distribution can be used interchangeably. Only a few papers have paid attention to individual pairs' distributions. Among them, [Con07] analysed another set of popular traces, finding that a significant fraction of pairs' inter-contact times may follow exponential, Pareto or log-normal distributions. The authors also provided a model similar in spirit to that presented in our work, in which they analyse conditions under which exponential pair distributions result in a power-law aggregate. As we highlight in the following, the model in [Con07] does not incorporate a fundamental aspect, and thus obtains imprecise results. The work in [Gao09] also analyses popular traces, finding that over 85% of the pairs' distributions fit an exponential law according to a χ² test. The dependence on the aggregate distribution is not studied, though. Beyond this body of work, most of the literature on opportunistic networks takes for granted that aggregate inter-contact times feature a power-law with exponential cut-off, and does not pay attention to the possible difference of the individual pairs' distributions. For example, the vast majority of the mobility models proposed for opportunistic networks share this assumption, and aim at reproducing individual pairs' and/or aggregate power-law distributions (e.g., [Lee09, Bor09, Bol10, Rhe08]). Similarly, other papers try to highlight which characteristics of reference mobility models generate a power-law in individual pairs' inter-contact times [Cai07, Cai08]. With respect to this body of work, in this paper we provide a thorough analysis of the dependence and the key differences between individual pairs' and aggregate inter-contact time distributions, clearly showing that in general the latter cannot be used as a substitute for the former.
With respect to the models presented in [Kar07] and [Con07], we provide a much more general and accurate analysis and results. To the best of our knowledge, no previous work has dealt with this specific problem at the level of detail presented here.
3 Analytical Model of Aggregate Inter-contact Times
In this section we present an analytical model that describes the dependence between the inter-contact times of individual pairs and the resulting distribution of aggregate inter-contact times. This is the key result that we then exploit in the following analysis.

3.1 Preliminaries
As a first step, it is important to recall a result found in [Kar07], which shows the relationship between the distribution of individual pairs' inter-contact times and the aggregate distribution, in a network where the parameters of the individual pairs' distributions are known. Let us assume that we monitor individual pairs' inter-contact times over a large time interval T. Let us denote with P the number of pairs for which at least one inter-contact time is measured over T. Moreover,
denote with F_p(x) the CCDF of inter-contact times for pair p, p ∈ {1, . . . , P}, with n_p(T) and N(T) the number of inter-contact times of pair p and the total number of inter-contact times over T, respectively. Finally, denote with θ_p the rate of inter-contact times for pair p (i.e., the reciprocal of the average inter-contact time) and with θ = \sum_p θ_p the total rate of inter-contact times. Then, the CCDF of the aggregate inter-contact times F(x) can be expressed as in the following lemma.

Lemma 1. In a network where P pairs of nodes exist for which inter-contact times can be observed, the CCDF of the aggregate inter-contact times is:

F(x) = \lim_{T \to \infty} \sum_{p=1}^{P} \frac{n_p(T)}{N(T)} F_p(x) = \sum_{p=1}^{P} \frac{\theta_p}{\theta} F_p(x) .    (1)

Proof. See [Kar07].
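Lemma 1 can be checked with a short simulation: generate inter-contact times for a few pairs with known rates, pool them, and compare the empirical aggregate CCDF against the rate-weighted mixture on the right-hand side of (1). The three rates and the observation window below are arbitrary illustrative choices; the pairs are made exponential purely for convenience (the lemma itself holds for any F_p).

```python
# Monte Carlo check of Lemma 1: the aggregate CCDF is the mixture of the
# per-pair CCDFs weighted by theta_p / theta.  Rates are illustrative.
import math
import random

random.seed(1)
rates = [0.2, 1.0, 5.0]        # theta_p for three hypothetical pairs
T = 50000.0                    # observation window

pooled = []
for th in rates:
    n = int(th * T)            # ~ expected number of inter-contact times in T
    pooled.extend(random.expovariate(th) for _ in range(n))

theta = sum(rates)

def ccdf_mixture(x):           # right-hand side of (1)
    return sum((th / theta) * math.exp(-th * x) for th in rates)

def ccdf_empirical(x):         # left-hand side, from the pooled samples
    return sum(1 for t in pooled if t > x) / len(pooled)

for x in (0.1, 0.5, 2.0):
    print(x, ccdf_empirical(x), ccdf_mixture(x))
```

The two columns agree to within Monte Carlo noise, as the lemma predicts: pairs with higher contact rates contribute proportionally more samples to the pool.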
Lemma 1 is rather intuitive. The distribution of aggregate inter-contact times is a mixture of the individual pairs' distributions. Each individual pair 'weights' in the mixture proportionally to the number of its inter-contact times that can be observed in any given interval (or, in other words, proportionally to its rate of inter-contact times).

3.2 General Results
In this section we extend the result of Lemma 1 to the case in which the parameters of the individual pairs' inter-contact times are not known a priori. Specifically, we consider the general case in which the rates of individual pairs' inter-contact times are independent and identically distributed (iid) according to a continuous random variable Λ with density f(λ), λ ≥ 0 (for the generic pair p, λ_p denotes its rate). We also assume that all individual pairs' inter-contact times follow the same type of distribution. For the generic pair p, the distribution parameters are set such that the resulting rate is equal to λ_p. Note that we are able to model heterogeneous networks, as the inter-contact time distributions of different pairs are in general different, since their rates are different. With respect to the notation used in Section 3.1, we hereafter denote with F_λ(x) the CCDF of the inter-contact times between a pair of nodes whose rate is equal to λ.¹ Under these assumptions, the CCDF of the aggregate inter-contact times becomes as in Theorem 1.

Theorem 1. In a network where the rates of individual pairs' inter-contact times are distributed with density f(λ), the CCDF of the aggregate inter-contact times is as follows:

F(x) = \frac{1}{E[\Lambda]} \int_0^{\infty} \lambda f(\lambda) F_{\lambda}(x) \, d\lambda .    (2)

¹ Note that, when F_λ(x) is defined by more than one parameter, additional conditions besides the rate should be identified to derive all parameters. Our analysis holds true for any definition of such additional conditions.
Proof. The complete proof is available in [Pas11]; here we provide an intuitive sketch. Formally, we prove the theorem by conditioning F(x) on a particular set of individual pairs' inter-contact rates, and applying the law of total probability. Note however that we can also obtain Equation 2 by considering a modified network in which we assume that all rates are possibly available, each with probability f(λ)dλ. F(x) is thus the aggregate over all such individual inter-contact time distributions. As the number of distributions becomes infinite and is indexed by Λ (a continuous random variable), the summation in Equation 1 becomes an integral over λ. Furthermore, the weight of each distribution (θ_p in Equation 1) becomes λ · p(λ) = λ f(λ)dλ, while the total rate (θ in Equation 1) becomes \int_0^{\infty} \lambda f(\lambda) \, d\lambda = E[Λ]. The expression in Equation 2 follows immediately.

Note that generalising Lemma 1 as in Theorem 1 results in a much richer tool for understanding the dependence between individual pairs' and aggregate inter-contact time distributions. Specifically, in the model provided by Theorem 1 the individual pairs' distributions are not pre-defined, but can be tuned through the random variable Λ. This allows us to 'steer' and control the heterogeneity of the network. As we show in Section 4, this model allows us to study the relationship between individual pairs' and aggregate inter-contact time distributions by assuming that i) individual pairs are heterogeneous; ii) their inter-contact times follow an arbitrary family of distributions (F_λ(x)); and iii) their rates follow another arbitrary distribution (f(λ)). These degrees of flexibility are not provided by the model in Lemma 1. As a final remark, a similar generalisation was also attempted in [Con07].
However, the formulation in [Con07] is not exact, as it does not take into account the fact that, in the mixture defining F(x), distributions of more frequent contact patterns should 'weigh more' than distributions of less frequent contact patterns.
4 Aggregated Inter-contact Times Emerging in Different Heterogeneous Networks
In this section we exploit the model provided by Theorem 1 to investigate the dependence between the distributions of individual pairs' inter-contact times and their aggregate distribution. Specifically, we consider exponentially distributed individual pairs' inter-contact times (i.e., we assume that F_λ(x) = e^{−λx} holds true), and study how the aggregate CCDF F(x) varies for different distributions f(λ) of the individual pairs' inter-contact rates. Considering exponential individual pairs' inter-contact times is sensible, as analysis of traces indicates that this hypothesis cannot be ruled out in general [Gao09, Con07].

4.1 Preview of the Main Results
As a preview of the results, we will show that power-law distributions (with or without exponential cut-off) for the aggregate inter-contact times can appear even starting from exponentially distributed individual pairs inter-contact times.
This is a very interesting outcome indeed. It clearly indicates that, in general, looking at the aggregate distribution of inter-contact times is not enough to infer the distributions of individual pairs' inter-contact times, and can indeed be misleading. We show, for example, that observing a power-law aggregate distribution with shape α ∈ (1, 2] is not sufficient to conclude that a large family of forwarding protocols yields infinite expected delay [Aug07]. In such a case, individual pairs' inter-contact times may actually be exponentially distributed, which would guarantee finite expected delay. The key reason behind this finding is that when the network is heterogeneous, and not all individual pairs' contact patterns are statistically equivalent, the heterogeneity of the individual pairs' distributions plays a crucial role in determining the aggregate distribution of the inter-contact times, which may be of a completely different type with respect to the individual pairs' distributions. The detailed results are hereafter presented grouped into two classes. Firstly, in Section 4.2, we investigate under which conditions the aggregate inter-contact times follow exactly a given distribution. Specifically, we impose that F(x) in Equation 2 is equal to such a distribution, and find the corresponding distribution of the individual pairs' inter-contact rates f(λ). Then, in Section 4.3 we find additional cases in which it is not possible to exactly map a given aggregate distribution F(x) to a specific rate distribution f(λ), but it is possible to identify rate distributions such that the tail of the aggregate follows a certain pattern. As those are among the most interesting cases to study, we focus on aggregate inter-contact times distributed according to i) a power-law, ii) a power-law with exponential cut-off, and iii) an exponential law. Proofs are available in [Pas11].

4.2 Exact Aggregate Inter-contact Times Distributions
First of all, we wish to identify rate distributions f(λ) that result in power-law (Pareto) aggregate distributions. From Equation 2, and recalling that we assume individual inter-contact times are exponentially distributed, we have to find f(λ) such that

\frac{1}{E[\Lambda]} \int_0^{\infty} \lambda f(\lambda) e^{-\lambda x} \, d\lambda = \left( \frac{b}{b+x} \right)^{\alpha} ,    (3)

where α and b are the shape and scale parameters of the Pareto distribution. Note that in this case we consider the definition of the Pareto distribution in which all positive values are admitted, i.e., x > 0 holds true. The rate distribution f(λ) satisfying Equation 3 is provided by Theorem 2.²

Theorem 2. When individual pairs' inter-contact times are exponentially distributed, aggregate inter-contact times are distributed according to a Pareto law with parameters α > 1 and b > 0 iff the rates of individual inter-contact times follow a Gamma distribution Γ(α − 1, b), i.e.

F(x) = \left( \frac{b}{b+x} \right)^{\alpha} \iff f(\lambda) = \frac{b^{\alpha-1}}{\Gamma(\alpha-1)} \lambda^{\alpha-2} e^{-b\lambda} .    (4)

² A qualitatively similar result was also found in [Con07]. However, due to the inexact formulation of F(x) highlighted before, the exact result differs.
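Theorem 2 can be verified numerically: plugging the Gamma density into Equation 2 and integrating should reproduce the Pareto CCDF exactly. The sketch below does the integral by the trapezoidal rule on a truncated grid; the parameter values, grid, and checkpoints are arbitrary illustrative choices.

```python
# Numerical check of Theorem 2: with f(lambda) = Gamma(alpha-1, b),
# Equation (2) evaluates to the Pareto CCDF (b/(b+x))^alpha.
# alpha and b are arbitrary illustrative values.
import numpy as np
from math import gamma

alpha, b = 2.5, 1.0
lam = np.linspace(1e-8, 60.0, 600001)    # integration grid for lambda
f = b**(alpha - 1) / gamma(alpha - 1) * lam**(alpha - 2) * np.exp(-b * lam)
mean_rate = (alpha - 1) / b              # E[Lambda] for Gamma(alpha-1, b)

def trapezoid(y, x):
    return float(np.sum((y[:-1] + y[1:]) * np.diff(x)) / 2.0)

for x in (0.5, 2.0, 10.0):
    F_model = trapezoid(lam * f * np.exp(-lam * x), lam) / mean_rate
    F_pareto = (b / (b + x))**alpha
    print(x, F_model, F_pareto)
```

The two columns match to numerical precision, confirming the closed form in (4) for these parameters.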
Theorem 2 is one of the most interesting results of this paper. It was found in [Aug07] that a large family of forwarding protocols yields infinite expected delay when the individual pairs' inter-contact time distributions are Pareto with α ∈ (1, 2]. Based on this result, it has been common in the literature to assume that, if the aggregate inter-contact time distribution is Pareto with α ∈ (1, 2], those forwarding protocols yield infinite delay. Theorem 2 clearly shows that this is not correct, as aggregate power-laws with α ∈ (1, 2] can be obtained starting from exponential individual pairs' inter-contact times. In such a case, the expected delay of forwarding protocols is finite. As a special case of Theorem 2, the following corollary holds true.

Corollary 1. When individual pairs' inter-contact times are exponentially distributed, aggregate inter-contact times are distributed according to a Pareto distribution with parameters α = 2 and b > 0 iff the rates of individual inter-contact times follow an exponential distribution with rate b, i.e.

F(x) = \left( \frac{b}{b+x} \right)^{2} \iff f(\lambda) = b e^{-b\lambda} .    (5)
Proof. This follows immediately from Equation 4 by recalling that a Gamma distribution Γ(1, b) is actually an exponential distribution with rate b.

Corollary 1 further stresses the result of Theorem 2, stating that a power-law distribution of aggregate inter-contact times can be obtained starting from both exponentially distributed individual pairs inter-contact times and pairs rates. An interesting physical intuition can be highlighted that justifies the above results. Recall that the aggregate inter-contact time distribution is a mixture of the individual pairs inter-contact times. From a physical standpoint, a power-law aggregate means that some inter-contact times in the mixture can take extremely large values, possibly diverging. Intuitively, such a behaviour can therefore be generated, irrespective of the distribution of individual pairs inter-contact times, by including in the mixture individual pairs whose inter-contact rate is extremely small, arbitrarily close to 0. This is exactly the effect of drawing rates from Gamma or exponential distributions, which admit values of the rates arbitrarily close to 0. The same physical intuition is also confirmed by other results we present in Section 4.3. The final result we present in this section shows under which conditions aggregate inter-contact times follow an exponential distribution, i.e., F(x) = e^{−μx}. This is shown in Theorem 3.

Theorem 3. When individual pairs inter-contact times are exponentially distributed, aggregate inter-contact times are distributed according to an exponential distribution with rate μ > 0 iff the network is homogeneous, i.e. iff all individual pairs inter-contact times are exponentially distributed with rate μ:

F(x) = e^{−μx} ⇐⇒ f(λ) = δ(λ − μ) ,  (6)

where δ(·) is the Dirac delta function.
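The mixture mechanism behind Corollary 1 can be checked numerically. The sketch below is our own illustration, not the paper's code: it assumes, consistently with the statement of Corollary 1, that the aggregate CCDF is the rate-weighted mixture F(x) = E[Λ e^{−Λx}]/E[Λ], reflecting the fact that pairs with higher contact rates contribute proportionally more samples to the aggregate. Function names and numerical parameters are ours.

```python
import math

def aggregate_ccdf(x, f, lam_max=80.0, n=100000):
    """Rate-weighted mixture CCDF E[lam * exp(-lam * x)] / E[lam],
    computed with a midpoint rule over the rate density f."""
    h = lam_max / n
    num = den = 0.0
    for i in range(n):
        lam = (i + 0.5) * h
        w = lam * f(lam) * h          # rate-weighted mixture weight
        num += w * math.exp(-lam * x)
        den += w
    return num / den

b = 1.0
f_exp = lambda lam: b * math.exp(-b * lam)   # rates Lambda ~ Exp(b)
for x in (0.5, 1.0, 2.0, 5.0):
    # Corollary 1 predicts F(x) = (b / (b + x))**2
    print(x, round(aggregate_ccdf(x, f_exp), 4), round((b / (b + x)) ** 2, 4))
```

For exponential rates the two columns agree to numerical precision, matching the claimed Pareto aggregate with α = 2.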
Characterising Aggregate Inter-contact Times
309
Interestingly, Theorem 3 shows that it is sufficient to look at the aggregate intercontact time distribution only when it turns out being exactly exponential, as, starting from individual pairs exponential distributions, the only possibility is that all pairs inter-contact times are distributed with exactly the same exponential law found in the aggregate. 4.3
Asymptotic Behaviour of Aggregate Inter-contact Times Distributions
In this section we present a further set of results characterising the asymptotic behaviour of the aggregate inter-contact times. We still assume that individual pairs inter-contact times are exponential, and study the aggregate when pairs rates are drawn from Pareto distributions. For this set of results we are not able to obtain necessary and sufficient conditions for obtaining a given distribution of the aggregate. However, we are still able to show interesting sufficient conditions for obtaining aggregate distributions that asymptotically decay as a power-law with or without exponential cut-off. These results are quite interesting, as several papers in the literature have observed aggregate distributions whose tail decays as a power-law with exponential cut-off. Note that studying the asymptotic behaviour is relevant, as it is the tail of the inter-contact times distribution that determines the convergence properties of forwarding algorithms [Aug07]. Firstly, we assume that individual pairs rates are distributed according to a Pareto distribution whose CCDF is F(λ) = (k/λ)^γ, λ > k, and derive the asymptotic behaviour of F(x) for large x. Note that in this case rates are drawn from a Pareto distribution that does not admit values arbitrarily close to 0. Theorem 4 provides the expression for F(x).

Theorem 4. When individual pairs inter-contact times are exponentially distributed and rates are drawn from a Pareto distribution F(λ) = (k/λ)^γ, λ > k, the tail of the aggregate inter-contact times decays as a power-law with exponential cut-off, i.e.:

F(λ) = (k/λ)^γ, λ > k ⇒ F(x) ∼ e^{−kx}/(kx) for large x  (7)

Two interesting insights can be drawn from Theorem 4. First, an aggregate distribution whose tail decays as a power-law with exponential cut-off can emerge also when individual pairs inter-contact times are exponential.
Again, this challenges common hypotheses used in the literature, which assume individual inter-contact times are power-law with exponential cut-off because aggregate inter-contact times are distributed according to this law. Second, this result confirms our intuition about the fact that a key reason for a heavy-tailed aggregate distribution is the existence of individual pairs with inter-contact rates arbitrarily close to 0. In the case considered by Theorem 4 this is not possible, and indeed the tail of the aggregate inter-contact time decays faster than a power-law.
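The contrast between rates bounded away from 0 and rates admitting values near 0 can be probed numerically. The sketch below is our own illustration, not the paper's derivation: it treats the aggregate CCDF as a rate-weighted mixture E[Λ e^{−Λx}]/E[Λ], and the parameters k = 1, γ = 2 are chosen arbitrarily. We only check the qualitative claim: with λ > k the tail collapses (exponential cut-off), while with λ > 0 the tail decays slowly (heavy tail).

```python
import math

def mixture_ccdf(x, f, lo, hi, n=200000):
    """Aggregate CCDF as a rate-weighted mixture E[lam*e^{-lam*x}]/E[lam],
    integrated numerically over [lo, hi] with a midpoint rule."""
    h = (hi - lo) / n
    num = den = 0.0
    for i in range(n):
        lam = lo + (i + 0.5) * h
        w = lam * f(lam) * h
        num += w * math.exp(-lam * x)
        den += w
    return num / den

k, g = 1.0, 2.0
f_cut = lambda lam: g * k**g / lam**(g + 1)          # density of CCDF (k/lam)^g, lam > k
f_lomax = lambda lam: g * k**g / (k + lam)**(g + 1)  # density of CCDF (k/(k+lam))^g, lam > 0

# how much of the CCDF survives when x doubles from 10 to 20
r_cut = mixture_ccdf(20.0, f_cut, k, k + 5.0) / mixture_ccdf(10.0, f_cut, k, k + 5.0)
r_lomax = mixture_ccdf(20.0, f_lomax, 0.0, 10.0) / mixture_ccdf(10.0, f_lomax, 0.0, 10.0)
print(r_cut, r_lomax)   # cut-off case drops by orders of magnitude; heavy tail barely moves
```

The ratio for the truncated-Pareto rates is dominated by the e^{−kx} factor, while the heavy-tailed case stays close to the constant ratio a pure power-law would give.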
310
A. Passarella and M. Conti
Finally, we study the asymptotic behaviour of the aggregate distribution when inter-contact rates are drawn from a Pareto distribution in the form F(λ) = (k/(k + λ))^γ, λ > 0. The following theorem holds.

Theorem 5. When individual pairs inter-contact times are exponentially distributed and rates are drawn from a Pareto distribution F(λ) = (k/(k + λ))^γ, λ > 0, the tail of the aggregate inter-contact times decays as a power-law with shape equal to 1, i.e.:

F(λ) = (k/(k + λ))^γ, λ > 0 ⇒ F(x) ∼ 1/x for large x  (8)

Theorem 5 confirms once more that the presence of individual pairs with contact rates arbitrarily close to 0 results in heavy tails in the aggregate inter-contact times. Again, it also confirms that the presence of even significantly heavy tails (shape equal to 1) in the aggregate inter-contact time distribution is not necessarily an indication that the individual pairs distributions also follow a power-law.

4.4 Validation
In this section we validate the results presented above by comparing the analytical results with simulations. In our simulation model we consider a network of 150 pairs that meet each other with exponential inter-contact times. Rates are drawn at the beginning of each simulation run according to the specific distribution f(λ) we want to test. For each pair we generate at least 100 inter-contact times. Specifically, each simulation run reproduces an observation of the network for a time interval T, defined according to the following algorithm. For each pair, we first generate 100 inter-contact times, and then compute the total observation time after 100 inter-contact times, Tp, as the sum of the pair inter-contact times. T is defined as the maximum of Tp, p = 1, . . . , P. To guarantee that all pairs are observed for the same amount of time, we generate additional inter-contact times for each pair until Tp reaches T. In this way we generate at least 150 · 100 samples of the aggregate inter-contact time distribution, which we consider enough to obtain a reasonably accurate empirical distribution of F(x).

Fig. 1. F(x), inter-contact rates Λ ∼ Γ(2, 1) (loglog (a) and linlog (b))

Fig. 2. F(x), inter-contact rates Λ ∼ Exp(0.1) (loglog (a) and linlog (b))

Fig. 3. F(x), inter-contact rates Λ ∼ Pareto(0.001, 2), λ > 0 (loglog (a) and linlog (b))

Fig. 4. F(x), inter-contact rates Λ ∼ Pareto(0.001, 2), λ > 0.001 (loglog (a) and linlog (b))

Figure 1 shows the aggregate inter-contact times CCDF F(x) when inter-contact rates are drawn from a Gamma distribution Γ(2, 1) (inter-contact times are reported on the x-axis in seconds). According to Theorem 2, this results in
aggregate inter-contact times distributed according to a Pareto law with shape α = 3. The power-law behaviour is clearly highlighted by the slower-than-linear decay in the linlog scale (Figure 1(b)). It is also clear that simulation and analytical results are in very good agreement. Figure 2 shows F(x) when the individual pairs inter-contact rates are exponentially distributed with rate 0.1 s−1. In this case, according to Corollary 1, the aggregate inter-contact time follows a Pareto distribution with shape α = 2. Figure 2 shows that also in this case the analytical results are very well aligned with the simulations. Finally, Figures 3 and 4 show F(x) when the pairs rates are distributed according to a Pareto law F(λ) = (k/(k + λ))^γ, λ > 0 and F(λ) = (k/λ)^γ, λ > k, respectively. From Theorems 5 and 4, the key difference is the fact that in the former case rates can be arbitrarily close to 0, while in the latter case they cannot. The effect on F(x) is to generate a heavy tail decaying as 1/x in the former case, and a light tail decaying as e^{−kx}/(kx) in the latter. Recall that in these cases the analysis is not able to capture the complete distribution F(x), but only its asymptotic behaviour for large x. Figures 3 and 4 confirm that also in this case the analytical and simulation results are aligned.
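The validation procedure described in this section is straightforward to reproduce. The sketch below is our own minimal implementation (function names are ours): 150 pairs with exponential inter-contact times, rates drawn from Γ(2, 1), at least 100 samples per pair, and each pair then extended up to the common observation window T. For Λ ∼ Γ(2, 1), Theorem 2 predicts a Pareto aggregate with shape α = 3; we assume scale b = 1 here, matching the rate-1 Gamma.

```python
import random

def simulate_aggregate(P=150, n_min=100, seed=1):
    """Validation procedure of Section 4.4: P pairs, exponential
    inter-contact times, per-pair rates drawn from Gamma(2, 1)."""
    rng = random.Random(seed)
    rates = [rng.gammavariate(2.0, 1.0) for _ in range(P)]  # shape 2, scale 1
    times = [[rng.expovariate(l) for _ in range(n_min)] for l in rates]
    T = max(sum(t) for t in times)        # common observation window
    for l, t in zip(rates, times):        # extend every pair until Tp reaches T
        tot = sum(t)
        while tot < T:
            s = rng.expovariate(l)
            t.append(s)
            tot += s
    return [x for t in times for x in t]  # pooled (aggregate) samples

samples = simulate_aggregate()
emp = sum(1 for s in samples if s > 1.0) / len(samples)
print(len(samples), emp, (1.0 / (1.0 + 1.0)) ** 3)  # empirical CCDF at x=1 vs Pareto(3)
```

Note that observing every pair for the same window T automatically weights each pair's contribution by its contact rate, which is what makes the pooled samples follow the aggregate law.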
5 Conclusions
In this paper we have investigated through an analytical model the dependence between the distributions of (i) individual pairs inter-contact times, (ii) the rates of individual pairs inter-contact times, and (iii) the aggregate inter-contact times in mobile opportunistic networks. Understanding this dependence is important, as most of the literature assumes that the aggregate distribution is representative of the individual pairs inter-contact times distributions, and checks network properties that depend on the latter by considering the former. Our analytical results clearly show that, in general, this approach is not correct. As one of the most popular cases considered in the literature, we have studied under which conditions the aggregate distribution features a heavy tail, with or without an exponential cut-off. We have shown that in heterogeneous networks (i.e., when not all the pairs distributions are the same), heavy-tailed aggregate distributions can appear starting from exponentially distributed individual pairs inter-contact times. Therefore, the aggregate distribution is not representative, in general, of the individual pairs distributions, and focusing on the former to check properties that depend on the latter can thus be misleading. Furthermore, we have highlighted the key impact of the distribution of the rates of individual pairs inter-contact times on the aggregate distribution. Whenever rates arbitrarily close to 0 are permitted, heavy tails appear in the aggregate, even when the individual pairs distributions are light-tailed. This shows the critical role played by the heterogeneity of individual pairs on the aggregate inter-contact times distribution.
References

[Abr72] Abramowitz, M., Stegun, I.: Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables. Dover Publications, Mineola (1972), ISBN 978-0-486-61272-0
[Aug07] Chaintreau, A., Hui, P., Crowcroft, J., Diot, C., Gass, R., Scott, J.: Impact of human mobility on opportunistic forwarding algorithms. IEEE Trans. on Mob. Comp. 6(6), 606–620 (2007)
[Bol10] Boldrini, C., Passarella, A.: HCMM: Modelling spatial and temporal properties of human mobility driven by users’ social relationships. Elsevier Comput. Commun. 33(9), 1056–1074 (2010)
[Bor09] Borrel, V., Legendre, F., Dias de Amorim, M., Fdida, S.: SIMPS: Using sociology for personal mobility. IEEE/ACM Trans. Netw. 17(3), 831–842 (2009)
[Cai07] Cai, H., Eun, D.Y.: Crossing over the bounded domain: From exponential to power-law inter-meeting time in MANET. In: ACM MobiCom (2007)
[Cai08] Cai, H., Eun, D.Y.: Toward stochastic anatomy of inter-meeting time distribution under general mobility models. In: ACM MobiHoc (2008)
[Con07] Conan, V., Leguay, J., Friedman, T.: Characterizing pairwise inter-contact patterns in delay tolerant networks. In: Autonomics (2007)
[Gao09] Gao, W., Li, Q., Zhao, B., Cao, G.: Multicasting in delay tolerant networks: A social network perspective. In: ACM MobiHoc (2009)
[Kar07] Karagiannis, T., Le Boudec, J.-Y., Vojnovic, M.: Power law and exponential decay of inter contact times between mobile devices. In: ACM MobiCom (2007)
[Lee09] Lee, K., Hong, S., Kim, S.J., Rhee, I., Chong, S.: SLAW: A new mobility model for human walks. In: IEEE INFOCOM (2009)
[Pas11] Passarella, A., Conti, M.: Characterising aggregate inter-contact times in heterogeneous opportunistic networks. IIT-CNR Tech. Rep. 02/201, http://cnd.iit.cnr.it/andrea/docs/net11_tr.pdf
[Rhe08] Rhee, I., Shin, M., Hong, S., Lee, K., Chong, S.: On the Levy-walk nature of human mobility. In: IEEE INFOCOM (2008)
Are Friends Overrated? A Study for the Social Aggregator Digg.com

Christian Doerr, Siyu Tang, Norbert Blenn, and Piet Van Mieghem

Department of Telecommunication, TU Delft, Mekelweg 4, 2628CD Delft, The Netherlands
{C.Doerr,S.Tang,N.Blenn,P.F.A.VanMieghem}@tudelft.nl
Abstract. The key feature of online social networks is the ability of users to become active, make friends and interact with those around them. Such interaction is typically perceived as critical to these platforms; therefore, a significant share of research has investigated the characteristics of social links, friendship relations and community structure, searching for the role and importance of individual members. In this paper, we present results from a multi-year study of the online social network Digg.com, indicating that the importance of friends and the friend network in the propagation of information is less than originally perceived. While we note that users form and maintain social structure, the importance of these links and their contribution is very low: even nearly identically interested friends are only activated with a probability of 2%, and only in 50% of the stories that became popular do we find evidence that the social ties were critical to the spread.
1 Introduction
The recent explosive growth of online social network (OSN) platforms such as Facebook, Twitter, or Digg has sparked a significant interest. As several hundred million users now regularly frequent these sites to gather and exchange ideas, researchers have begun to use this comprehensive record to analyze how these networks grow by friendship relations, how information is propagated, and who the most influential users are. A good understanding of these principles would have many applications, such as effective viral marketing [1], targeted advertising [2], the prediction of trends [3] or the discovery of opinion leaders [4]. A fundamental assumption of previous research is that friendship relations are a critical component for the proper functioning of social networks [5], i.e., it is assumed that information, opinions and influences are sourced by single individuals and then propagated and passed on along the social links between members. The extent, density, layout and quality of the social links and the network of links will therefore determine how information can be spread effectively. In this paper, we report on results from a multi-year empirical study of the OSN Digg.com, a so-called social news aggregator, which indicate that the criticality and importance of individual friendship relations and the friendship network is less than previously perceived. In these social news aggregators, users submit

J. Domingo-Pascual et al. (Eds.): NETWORKING 2011, Part II, LNCS 6641, pp. 314–327, 2011.
© IFIP International Federation for Information Processing 2011
Are Friends Overrated? A Study for the Social Aggregator Digg.com
315
news items (referred to as “stories”), communicate with peers through direct messages and comments, and collaboratively select and rate submitted stories to arrive at a real-time compilation of what is currently perceived as “hot” and popular on the Internet. Yet, despite the many means to communicate, interact and spread information, an analysis of eleven million stories and the commenting and voting patterns of two million users revealed that the impact of the friendship relations on the overall functioning of the social network is actually surprisingly low. In particular, we find that, while users indeed form friendship relations according to common interests and physical proximity, these friendship links are only activated in 2% of the information propagations. Furthermore, in 50% of all stories that became “hot”, there was insufficient prior contribution by the friend network to trigger the popularity of the story. Instead, we find that a critical mass was reached through the participation of random spectators. The remainder of this paper is structured as follows: Section 2 discusses related work and prior findings on the role and characteristics of friendship links and the friendship network in OSNs. Section 3 describes background information about the social network used in our experimentation and our data collection methodology. Sections 4 and 5 discuss the role of friendships and selected individuals in the successful propagation of information. Section 6 presents an outlook on the issue of time dependencies in Digg. Section 7 summarizes our findings.
2 Related Work
Ever since the publication of Katz and Lazarsfeld’s argument for the origin and spread of influence through communities [6], researchers have investigated the mechanisms by which ideas and opinions are passed along social relationships. The wide-spread popularity of OSNs now provides an easily accessible, machine-readable data source for broad-scale analysis. As it is commonly assumed that friendship interactions are the backbone of social networking platforms [5], along which a “web of influence” [7] is extended and maintained, the investigation of how links are formed and used has received significant attention. The mechanisms by which these social ties are formed are still subject to investigation. Fono and Raynes-Goldie [8], for example, studied the semantics of friendship relations in the OSN LiveJournal and find that a number of overlapping processes drive the formation of links: ties may be formed as measures of trust, reciprocity, an extension of offline acquaintance or as a facilitator for information transmission. This function of friends as content providers [9] is a strong force, as it even drives the friend selection of users: in a usage survey on Facebook, subscribers named their interest in obtaining information from friends as a reason to form friendship relations [10]. According to network theory, such content dissemination should work best across “weak ties” [11] linking tightly connected clusters, as these should transmit information with the least amount of redundancy. Therefore, the predominant share of OSN research has been conducted to investigate the topological properties of OSNs and to understand how users behave and exchange content and information across their social links.
316
C. Doerr et al.
Along these lines, Mislove et al. [12] studied the topological properties of four OSNs at large scale: Flickr, YouTube, LiveJournal, and Orkut, for which they evaluated different network metrics, e.g. link symmetry, node degree, assortativity and clustering coefficient. The OSNs were characterized by a high fraction of symmetric links, and contained a large number of highly connected clusters, indicating tight and close (transitive) relationships. The degree distributions in the OSNs followed a power-law with similar coefficients for in- and out-degree, showing the mixed importance of nodes in the network: there are a few well-connected and important hubs to which the majority of users connect. In [13], Leskovec et al. presented an extensive analysis of the communication behaviors of Microsoft Messenger instant-messaging users. The results of 30 billion conversations among 240 million people showed that people with similar characteristics (e.g., age, language, and geographical location) tend to communicate more. Structurally, in terms of node degree, clustering coefficient, and shortest path length, it was shown that the communication graph is well connected, robust against node removal, and exhibits the small-world property. Another line of research aims to discover factors for content popularity and propagation in OSNs. For instance, it is shown in [14] that there are different photo propagation patterns in Flickr and that photo popularity may increase steadily over years. A similar study was performed with YouTube and Daum in [15]. Here, video popularity is mostly determined at the early stage after content submission. Such observations are also voiced in the viral marketing literature, where it is assumed that “few important trends reach the mainstream without passing the Influentials in the early stages, ... they give the thumbs-up that propel a trend” [16, p. 22].
As it is, however, quite difficult to evaluate concretely and at a population scale how content is disseminated, there exists to date no population-level (on the order of millions of users, thereby also considering possible emergent properties) evaluation of these hypotheses. This paper aims to address this void, and from this current state of research we therefore formulate two hypotheses that will be addressed and investigated for the scope of an entire social network:

H1. There exist critical members inside the community who have better or earlier access to important information.

H2. Inter-personal relations and the overall network of friendships are the key component of the successful spread of information.
3 The “Digg” Data Collection
The news portal digg.com is a social content website founded in 2004. According to the Alexa rating (alexa.com), Digg belongs to the top 120 most popular websites. At the time of this work, 2.2 million users were registered at the webpage, submitting about 15,000 to 26,000 stories per day. Out of those, approximately 180 stories are voted to be popular per day. Users are able to submit and vote on content they like, an activity called “digging”. A submission consists of the link to a webpage, where a news story, an image or a video is stored, and a short
Fig. 1. Different components of the Digg crawling process (site, story, user, and social network perspectives)
description of the content. New submissions get enqueued to the upcoming pages, where they stay for a maximum of 24 h. Users may explore the upcoming section by topic or through a recommendation system, which displays stories that have already gained attention and votes. A secret algorithm by digg.com chooses stories from the upcoming section and promotes them to the front pages, which constitute the default view when entering the Digg website. After this promotion a story will therefore obtain a lot of attention. Within the Digg network, it is possible for users to create friendship connections to other users. One may either be a fan or a mutual friend of another person. Fans and friends are notified via the friends interface of Digg if their friend has “digged” or submitted a story.1 We studied different aspects of Digg, such as the friendships, user characteristics and activities, and the properties and dynamics of the published content. While most social network traces are crawled using friendship relations, e.g. [12] and [17], the Digg dataset was obtained by a simultaneous exploration from four different perspectives, as shown in Fig. 1:

Site perspective: The Digg website lists popular and upcoming stories in different topic areas. Periodically, all front pages with all popular stories and all upcoming stories were collected. All discovered stories are added to an “all-known story” list maintained by us.
Story perspective: For all stories, a complete list of all diggs performed by users is collected. Newly discovered users are added for future exploration.
User perspective: For each discovered user, a list of their previous activities is obtained. If new stories are found, the entire activity history is retrieved.
Social network perspective: A list of friends is downloaded for every user. If a user is newly found, he is added to the data discovery process, and a list of all friends and the user profile information are fetched.
It should be noted at this point that the semantics of a friend in Digg (obtaining information) is certainly different from a friendship in Facebook (personal acquaintance) or LinkedIn (business contact) [9], as also the main function differs between these social networks. As this paper investigates information propagation and social news aggregators such as digg.com focus on the exchange of information, these results are only immediately applicable to this type of OSN. To what extent these findings can be extended towards other types needs more investigation.
The above procedure is continued until no new users or stories can be found, and is periodically repeated afterwards to discover new activity. By using the above crawling methodology, we are able to collect (nearly) the entire information about friendships, activities of users and the published content in the Digg network. This is important, as traditional crawling techniques will only discover those users who are linked within the social network, and will overlook, for example, all users without any friendship relations who are still otherwise active members. In our case, our crawling technique discovered nearly twice as many users as could have been identified by a pure crawl of the social network alone. This outcome might (partially) explain some of the contrary findings in our paper. Our data covers the entire span from the beginning of Digg in 2004 until the end of our study in July 2010 and comprises more than 600 GB, covering the history of 2.2 million registered users and 11 million stories.
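The four-perspective crawl described above can be sketched as a double worklist algorithm. The code below is our own illustration, not the authors' crawler: the fetch_* callables stand in for hypothetical scrapers of the Digg pages and are injected so the sketch stays self-contained.

```python
from collections import deque

def crawl(front_page_stories, fetch_diggers, fetch_activities, fetch_friends):
    """Sketch of the four-perspective crawl: seed from the site's story
    listings, then alternate story -> diggers, user -> activities, and
    user -> friends until no new stories or users appear."""
    stories, users = set(), set()
    story_q, user_q = deque(front_page_stories), deque()
    while story_q or user_q:
        while story_q:                           # story perspective
            s = story_q.popleft()
            if s in stories:
                continue
            stories.add(s)
            for u in fetch_diggers(s):
                if u not in users:
                    user_q.append(u)
        while user_q:                            # user + social network perspectives
            u = user_q.popleft()
            if u in users:
                continue
            users.add(u)
            for s in fetch_activities(u):
                if s not in stories:
                    story_q.append(s)
            for v in fetch_friends(u):
                if v not in users:
                    user_q.append(v)
    return stories, users

# toy dataset: user "c" has no friendship links, yet is found via a story
diggers = {"s1": ["a", "b"], "s2": ["c"]}
acts = {"a": ["s1"], "b": ["s1", "s2"], "c": ["s2"]}
friends = {"a": ["b"], "b": ["a"], "c": []}
stories, users = crawl(["s1"],
                       lambda s: diggers.get(s, []),
                       lambda u: acts.get(u, []),
                       lambda u: friends.get(u, []))
print(stories, users)
```

The toy run illustrates the point made above: user "c" has no friendship relations and would be invisible to a friendship-only crawl, but the story perspective still discovers it.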
4 Information Spread through the Network of Friends
As discussed in the introduction, it is commonly assumed that the friendship relations within a social network are a critical component of the successful spread of information. This section dissects this process and investigates, for the case of the Digg OSN, whether the propagation of news is indeed the result of the activation of users’ ties.

4.1 Self-organization of the Friendship Network
According to sociological theory, friendship relations in an OSN grow directed by common interests and tastes [9]. Within Digg, all news stories are classified into eight major topic areas, subdivided into 50 special interests. When matching the users’ concrete digging behavior with the topic area into which a story was classified, we find that the subscribers exhibit quite strong and distinct preferences and tastes for individual topic areas. As shown in Fig. 2, if a particular user reads, diggs and is therefore interested in two distinct topic areas, say for example “Science” and “Technology”, more than 70% of all consumed stories fall within the most preferred genre. For three topic areas, the favorite one draws 65%, and even for users interested in all eight categories the top two will on average still account for 60% of read stories. These rankings of user interest provide a direct measure of how similar the tastes and preferences of users in their information acquisition are. When comparing two users and their rankings of topics, we use the number and distance of permutation steps required to transform one list into the other (the Wasserstein rank distance [18]) as a measure of user similarity. A network-wide analysis of the similarities between friends shows that users directly connected to each other have a very high alignment of their preferences and tastes: 36% of rank lists are identical, 20% require one transformation, and within three transformation steps 80% of all friendship relations are aligned. While the interests and tastes of friends thus overlap almost perfectly, there is a surprisingly low amount of common activity among friends, and only
Fig. 2. The average ratio of the k-th favorite topic of users that are interested in n topics after ranking

Fig. 3. Most of the entire friendship network spreading out from the submitter is actually already covered by the first hop
2% of all friend pairs actually digg on the same story. The hypothesis that common interests result in the formation of friendships in order to gain information from neighboring peers [8] would also predict that the more similar the tastes between friends are, the closer the alignment of their clicking patterns would be. In practice, however, we found this not to be the case. Yet, pairs of friends do not exist in isolation, but are embedded within a larger network of the friends of their friends. This network of friends, which is very dense in OSNs [12], may be a powerful promoter, as theoretically a large group can be reached if information can be passed on from friend to friend over several steps. Our analysis shows that information can indeed travel over multiple hops from the original submitter (see Fig. 3(a)) and on average does reach 3.7 hops from the source until the propagation dies down. The actual contribution of the multi-hop network, i.e., the number of friends of friends that can be activated, is however rather limited. As shown in Fig. 3(b), nearly 70% of the ultimately participating network of friends consists of the submitter’s direct friends, while the benefit of the additional hops decreases super-exponentially. This result is not astonishing given the generally low activation ratios of friends and possible redundancies in the spread, as indicated by the dashed line in Figure 3(a).

4.2 Reaching Critical Momentum
All news stories submitted to Digg are initially collected in the “upcoming” list, which, with more than 20 000 submissions a day, has a very high turnover rate (more than 800/h). In order to become promoted to the front pages, a story has to attract sufficient interest, i.e., a large enough number of diggs, within 24 hours; as shown in Fig. 4, the majority of promoted stories reaches this within 16 hours. We experimentally determined that about 7 diggs/h are necessary to qualify for promotion, so that stories should gather on average around 112 diggs. A story can rally this support initially from random spectators or friends of the submitter, who were notified about the newly placed story. Figure 5 shows the probability that a given number of friends is active on the website on the same day, and we compute the likelihood that at least 112 friends are online within 24 hours to be about 0.01. This expectation corresponds with the actually observed promotion success ratio. The fine line between failure and success, i.e., whether a story will be forgotten or become popular, therefore strongly depends on the underlying stochastic process, namely whether at a certain time a sufficient number of friends are online to support the story. In the remaining 99% of the cases, when not enough friends are online, additional support needs to be rallied from users outside the friend network to reach the promotion threshold.

Table 1. Ratio of friends/non-friends among the number of diggs for popular stories

                      Before popular           After popular
  Average ratio     Friends  Non-friends    Friends  Non-friends
  a) 63484 stories   0.72      0.28          0.25      0.75
  b) 51679 stories   0.23      0.77          0.14      0.86

4.3 Promotion without Friendships
As the likelihood that a story becomes popular solely through the submitter’s friendship network is rather slim (given the low activation ratio of friends, the limited contribution of the network of friends and the low probability of a critical mass of friends being active on the same day), in most cases the contribution of non-friends is necessary to push a story up to the promotion threshold. Table 1 shows the ratio of friends and non-friends active on a story both before and after the promotion, for all stories that became popular within the Digg network. Here, two distinct groups emerge. In about 54% of all cases, a story was marketed predominantly by friends, although a contribution of non-friends (28%) was necessary for the story to reach critical mass. Figure 6(a) shows this aggregated pattern for an example story in this class. In the remaining cases (46%), stories were spread and digged predominantly by users outside the submitter’s friendship network. Figure 6(b) shows a typical example of this pattern. Once the promotion threshold is crossed, both types of stories are read more by non-friends, as their number is usually significantly larger and the contribution of the submitter’s friendship network may already be exhausted.
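The order of magnitude of the promotion likelihood discussed in Section 4.2 can be illustrated with a toy model. The sketch below is our own simplification, not the paper's estimation method: it assumes each of a submitter's friends is independently active on a given day with some probability p, and computes the chance that at least 112 of them (about 7 diggs/h over 16 h) are online. The parameter values are hypothetical.

```python
import math

THRESHOLD = 7 * 16   # ~7 diggs/h over the typical 16 h window -> about 112 diggs

def p_enough_friends(n_friends, p_active, need=THRESHOLD):
    """P(at least `need` of n_friends are active on the same day),
    assuming independent activity with probability p_active (toy model)."""
    return sum(
        math.comb(n_friends, k) * p_active**k * (1 - p_active)**(n_friends - k)
        for k in range(need, n_friends + 1)
    )

# with a few hundred friends and modest daily activity, crossing the
# threshold through friends alone is very unlikely
for n in (100, 200, 400):
    print(n, p_enough_friends(n, 0.2))
```

The binomial tail is extremely sensitive to the friend count, which matches the paper's observation that promotion through friends alone hinges on a stochastic process with a success probability of roughly 0.01.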
Fig. 4. Stories have to gain enough momentum within 24h to be promoted to the front pages

Fig. 5. The probability for a given number of friends to be active within 24h decreases rapidly
Fig. 6. Propagation pattern of a popular story since publication. (a) The story (id 10471007) is promoted by friends. (b) The story (id 1083159) is promoted by non-friends. (log-log scale)
5
The Criticality of Individuals
The successful spread of information cannot be explained directly from the social ties inside our investigated online social network, neither through the relationships among individual friends nor from the usage and outreach of users into their friendship network. This naturally raises the question whether all users are equal inside the network, or whether there are some individuals in the social community (a) who themselves have better (or earlier) access to important content and are therefore able to produce a high number of popular stories, (b) who can use the friendship network more efficiently, acting as motivators able to recruit friends overproportionally, or (c) who are able to spot early on content that will later resonate with the masses and become a hit. These questions will be the focus of this section. There exist a number of ways to define the importance or criticality of individuals in networks. In complex network theory and social network analysis, importance is typically defined from a structural perspective, using topological metrics such as node degree or betweenness [19], which measure how well a particular node is connected to its surrounding peers and how many possible communication paths between nodes in the network will traverse this node. Using this definition of importance, most studies of online social networks find a small number of topologically critical nodes [12,20,13], resulting from the power-law degree distribution of these complex networks: there exist a few well-connected nodes with whom a large number of users are friends. In our analysis, we confirm these findings and will (for now) use this definition of critical individuals. Contrary to other OSNs, however, we observe a skewed distribution not only in the degree and connectivity of nodes, but also in the symmetry of relationships among users.
While most OSNs show high levels of link symmetry (for example, 74% of links in LiveJournal and 79% of links in YouTube are found to be bidirectional [12]), the relationships in Digg are less reciprocal (38% on average) and also vary with the degree of the node: the more connections an individual B already has, the less likely B is to match an incoming new friendship request from A. In Digg, A thus becomes a “fan” of B, thereby receiving notifications about the activities of B, but not vice versa.
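The degree-dependent reciprocity can be measured directly from a follower graph. A minimal sketch, assuming the graph is given as a dict of follow-sets and binning users by out-degree (toy data, not the Digg crawl):

```python
# Sketch: fraction of reciprocated ("symmetric") links per degree bin.
# `following` maps each user to the set of users they follow; the bin
# edges mirror the degree ranges used in the text and are illustrative.
from collections import defaultdict

def symmetry_by_degree(following, bins=(0, 10, 100, 1000)):
    stats = defaultdict(lambda: [0, 0])  # bin lower edge -> [symmetric, total]
    for u, outs in following.items():
        b = max(lo for lo in bins if len(outs) >= lo)  # lower edge of u's bin
        for v in outs:
            stats[b][1] += 1
            if u in following.get(v, set()):           # does v follow u back?
                stats[b][0] += 1
    return {b: s / t for b, (s, t) in stats.items() if t}

toy = {"a": {"b", "c"}, "b": {"a"}, "c": set()}
print(symmetry_by_degree(toy))  # {0: 0.666...}: 2 of 3 links reciprocated
```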
C. Doerr et al.
[Figure 7: Lorenz curve of the ratio of stories over the ratio of users, showing the line of equality, all stories (Gini coefficient = 0.48) and popular stories (Gini coefficient = 0.992).]
Fig. 7. While a significant share of users submit stories into the system, only the stories of a select few reach the critical mass to become popular.
Degree        # Users   Sym. link
[0, 10)       282536    0.53
[10, 100)     49416     0.42
[100, 1000)   13993     0.39
≥ 1000        111       0.31
Fig. 8. Fraction of symmetric links in the Digg network
This observation of decreasing reciprocity is consistent with sociological theory and ethnographic studies of social networks, which showed that friendship requests in OSNs are often driven by users’ interest in becoming passively informed by means of these social ties [8,10]. The fact that the average symmetry is significantly lower and also dependent on the degrees of remote nodes underlines (a) that users are engaging in friendships in Digg with the intention of information delivery and (b) the existence of individuals who act (or view themselves) as sources and broadcasters of knowledge, which according to [16] would embody the critical influentials in the network.
5.1
Submitting Successful Stories
When looking at all stories submitted in the past 4 years, we indeed find that users are very different when it comes to generating and sourcing important information. While the content published on Digg is followed by about 25 million visitors a month, only a limited number of registered users actively submit content. The activity patterns of these users are furthermore biased, as shown in the Lorenz curve [21] in Fig. 7: the 80% least active users of the network together submit only about 20% of the entire content, as indicated by the dashed red line. While this is far from an equitable system, the same skew – commonly referred to as the 80-20 or Pareto rule – has been found repeatedly in economics and sociology. The skew is, however, much more drastic when considering only those stories that gained enough support to be promoted to popular. As the figure shows, these successful stories can be attributed to a select minority of 2% of the community, which finds and submits 98% of all stories. This effect is, however, not the result of pure quantity. In other words, there exists no statistically significant relationship between the number of stories a person has submitted and the ratio of those stories that become popular (Pearson correlation r = −0.01). While the presence of such a highly skewed distribution seems to suggest the existence of a few “chosen ones”, a closer inspection reveals that these highly successful submitters are also not the users responsible for the effective spread of information. First of all, the average ratio of popular to submitted stories of the top 2% successful submitters is only 0.23.
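The inequality summarized by the Lorenz curve can be condensed into the Gini coefficient reported in Fig. 7. A minimal sketch using the standard rank-based formula, with toy submission counts rather than the Digg data:

```python
# Sketch: Gini coefficient of story submissions per user, the summary
# statistic behind a Lorenz curve. Uses the rank-based formula
# G = 2*sum(i*x_i)/(n*sum(x)) - (n+1)/n over ascending-sorted counts.

def gini(counts):
    xs = sorted(counts)
    n = len(xs)
    weighted = sum(i * x for i, x in enumerate(xs, start=1))
    total = sum(xs)
    return (2 * weighted) / (n * total) - (n + 1) / n

# a perfectly equal community has Gini 0; one dominant submitter pushes it up
print(gini([1, 1, 1, 1]))              # 0.0
print(round(gini([0, 0, 0, 100]), 2))  # 0.75
```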
Second, the group of users who rank among the top successful members of the community is highly volatile, and the set of successful users changes substantially between studied time intervals. As we do not find a significant number of members who are able to continuously repeat their previous successes, we may conclude that there exists no conceptual difference or strategic advantage held by those who do score successful stories. It appears that they were simply in the right place at the right time. In conclusion, it is not predominantly the well-connected nodes that are the originators of wide-spreading content, as there is no significant relationship between a user’s success ratio and its degree (p-value > 0.5).
5.2
Activation of the Social Network
While there do not exist any particular nodes that are overly successful in injecting content, it may be possible that there exist users highly successful in activating their friendship network, who would therefore be a key component in helping stories reach widespread popularity. It turns out that the activation ratio of a node’s direct friends is surprisingly low. On average, a particular node is only able to generate 0.0069 clicks per friendship link. This recruitment is furthermore quite stable across the structural properties of the network nodes. While the literature predicts that nodes in a social network achieve an exponentially increasing influence compared to their importance [16, p. 124], we find a linear relationship (r2 = 0.76) between the size of a node’s friendship network and the number of users it can recruit to click on a story. As the slope of the linear regression is low (a = 0.102), there is no overproportional impact of higher-degree nodes: 1 activated user with 100 friends is on average about as effective as 10 activated users with 10 friends. While we find no quantitative difference in the friendship network around the important nodes, there may be a qualitative difference in terms of structural characteristics and the information propagation along links. As complex networks evolve, certain growth processes such as preferential attachment [22] create sets of highly connected clusters, which are interconnected by fewer links. According to social network theory [11,23], these links among clusters, commonly referred to as “weak ties”, act as a critical backbone for information propagation. Information within a cluster is communicated and replicated between nodes, thereby creating high redundancy, while the weak ties transport other, previously unknown information between groups of nodes (see Fig. 9(a)).
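Before turning to that evaluation, note that the linear recruitment relationship reported above (slope a = 0.102) amounts to an ordinary least-squares fit of recruited clicks against friendship network size. A sketch with synthetic data points chosen to roughly reproduce that slope:

```python
# Sketch: ordinary least-squares slope of recruited clicks vs. friendship
# network size. The data points are synthetic, constructed to lie near the
# slope of ~0.102 reported in the text; they are not the study's data.

def ols_slope(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    return sxy / sxx

friends  = [10, 20, 50, 100, 200, 500]
recruits = [1.0, 2.1, 5.0, 10.3, 20.1, 51.2]
print(round(ols_slope(friends, recruits), 3))  # 0.102
```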
To evaluate this hypothesis, we classified the network into weak and strong ties according to their edge betweenness and compared their theoretical importance to the actual amount of content that was propagated between each pair of nodes. Figure 9(b) shows a Lorentz curve of the betweenness and the actual information conductivity, demonstrating that the distributions are in general comparable. As there is no hard threshold for what characterizes a weak or strong tie, we compared the top and bottom 20% of the distribution as weak and strong ties respectively to the amount of stories propagated along a certain link. As shown
[Figure 9: (a) schematic of weak ties connecting highly connected clusters; (b) normalized link weight, in terms of betweenness and in terms of the amount of propagated stories, over the ratio of users; (c) edge betweenness versus story propagation per link, with weak and strong ties indicated (log scales).]
Fig. 9. While according to the weak ties hypothesis, (fig. a) the links connecting different clusters and communities (high edge betweenness) are critical to the spread of information and the topology and usage patterns of the social network showed similar network characteristics (e.g. the weight/degree distribution in fig. b), there existed no relationship between the strength of the tie and the amount of information propagated, neither for the entire network nor for the subclasses of strong and weak links (fig. c).
in Fig. 9(c), there exists no relationship (r2 = 0.00006); thus information is not propagated more effectively along weak ties.
5.3
Early Predictors
Finally, we investigated whether there exist certain individuals who might be called important and influential in the sense that they are able to identify early on content that will later become popular (see for example [16]). In the months of April and May 2009, we followed the voting patterns of all registered users on all stories to determine how successful users were in finding and clicking on content that would become popular within the next hours or days. Of all activity within this two-month time period, users identified and reacted to only 11.9% of content on average before it was promoted. Given the absence of any high performers, there are thus no specific individuals who are able to consistently and repeatedly find emergent trends. This observation did not change for the high-degree individuals or the users with a high ratio of successful submissions either; there exists no statistically significant difference in their ability to find content in the social network before it actually reaches widespread popularity.
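The per-user "early predictor" measurement described above boils down to the fraction of a user's diggs cast before the story's promotion. A minimal sketch with illustrative vote tuples (the data layout is an assumption for the example):

```python
# Sketch: a user's "early predictor" score - the fraction of their diggs
# cast before the story was promoted. Vote records are illustrative tuples.

def early_ratio(votes):
    """votes: list of (vote_time, promotion_time) for one user's diggs."""
    if not votes:
        return 0.0
    early = sum(1 for t, promo in votes if t < promo)
    return early / len(votes)

votes = [(5, 10), (12, 10), (3, 8), (9, 8)]  # two of four diggs pre-promotion
print(early_ratio(votes))  # 0.5
```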
6
Beyond Pure Friend Relations
The discussions in Sections 4 and 5 show that neither the importance of individual users nor the dynamics of individual friendship relations or the network of friends can solely explain whether a certain story will become a success. Furthermore, as in nearly 50% of all stories the promotion process took place without any dominant contribution by the friendship network, we further investigated how the low participation values of the friendship network may be explained and which features are the dividing force between those stories pushed by friends and those promoted by the general public.
[Figure 10: rank positions of popular stories on front pages 1-3 and beyond over a 48-hour interval (20.4.2009 0h to 22.4.2009 0h), with the activity periods of users A and B marked.]
Fig. 10. The high turnover rate of even the popular stories and the limited attention span and activity period of users can offer an explanation of the low importance of friendships. The intensity of a line indicates the rate of diggs a story is accumulating.
6.1
Spread without Friends – A Matter of Timely Relevance
To investigate why one story is propagated by friends while another is pushed by random users, we conducted a controlled experiment and presented the 158 most successful stories of the last year to a group of non-experts. In the experiment, stories were displayed to participants using the same user interface as on digg.com, except that only one story was displayed at a time to eliminate distractions. As we could classify these stories in retrospect as promoted by friends or non-friends, the stories in the experiment were balanced in terms of topic areas to mimic a distribution similar to that on Digg. Looking at the title, description, link, digg count and, if available, preview image, we asked participants to rate each story in terms of general appeal, their own personal interest and the general importance of the particular story. From the experiment it became evident that the difference between friend and non-friend promotion was a direct result of how important and relevant the participants rated a particular story. Whenever participants marked a story as being “of general interest to the public”, in other words one that might appear in the evening news, or attributed it with a high level of timely relevance, the same story had also been pushed to popularity on Digg by non-friends. Thus, whether stories become friend or non-friend promoted seems to be a function of a story’s content and appeal (both statistically significant at p = 0.05).
6.2
Explaining Critical Mass through Temporal Alignment
As a large number of factors previously assumed to be of importance to information spread turned out in our study of Digg to be rather insignificant and highly volatile over time, we further investigated the influence of time on the story propagation process. We found that some of the unexpected low or highly fluctuating factors are to some extent dependent upon the temporal alignment of users, i.e., whether users in general (and friends in particular) are visiting the site within the same narrow time window or not. Figure 10 visualizes this idea of temporal alignment on a snapshot of the front pages from April 2009, which shows the position of all popular stories with at least 100 diggs over a 48 hour time interval on the first 50 front pages. There
exists a high flux in the amount of stories passing through: within 3 hours on average, the entire content of the first front page has been replaced by newer items. From a combined analysis of voting patterns and front page traces, we are able to determine the usual search strategy and search depth of users on Digg. Stories accumulate 80% of the attention they receive after promotion on the first and second pages alone, while the ratio of users who scan more than the first 4 front pages is practically zero. Considering the case of two users active on 20.4.2009, this can explain the surprisingly low amount of common friendship activations, as nearly 70% of the stories visible to user A during the two morning visits are already outside of user B’s attention window when B visits the site just six hours later. Unless B actively looks for and follows up on A’s activity, the abundance of content and the high turnover rate of information, combined with a limited attention span, will therefore largely limit the potential for commonality. This demonstrates that whether a story reaches critical mass depends to a significant degree upon who and how many people are active on the site within a short time window. A combination of this temporal perspective with interest and friendship data can go a long way and provide a much more detailed understanding of user behavior; we were able to improve the accuracy of our analysis of the activation ratio of friendship links by a factor of 15. Note however that while a temporal view is currently able to reveal in retrospect why certain users clicked on a particular story, it is not yet possible to predict how users will interact with a story in the future, for a variety of reasons. Most importantly, an accurate prediction will require a good model of users’ future activity periods at a fine enough resolution to minimize the prediction error of which stories users will see.
Furthermore, it will be necessary to further understand the concrete decision process that leads to a user actively clicking on a story.
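The attention-window argument above can be captured in a toy model: a story stays visible for roughly (pages scanned × page turnover) hours, so the overlap between two users' views decays linearly with the gap between their visits. The parameters below are illustrative defaults, not measured constants from the study.

```python
# Toy model: overlap of two users' "attention windows" on a fast-moving
# front page. Assumes a fixed turnover per page and that users scan only
# the first few pages; all parameter values are illustrative.

def visible_overlap(pages_scanned=2, turnover_h=3.0, visit_gap_h=6.0):
    """Fraction of stories seen by user A that are still visible to user B."""
    window = pages_scanned * turnover_h  # hours a story stays within view
    return max(0.0, 1.0 - visit_gap_h / window)

print(visible_overlap())                 # 0.0 -> nothing A saw remains for B
print(visible_overlap(visit_gap_h=3.0))  # 0.5 -> half of A's window remains
```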
7
Conclusion
In this paper, we have evaluated the assumption made in OSNs that friendship relations are the critical factor for information propagation. While we find evidence that friendships are formed based on common interests, their actual effectiveness is surprisingly low and does not confirm the high importance attributed to them – at least on digg.com. We furthermore notice that although there exists a significant skew in the characteristics of network nodes from a topological perspective, we do not find any evidence that some users are more effective at spreading information. They have no better access to information, are not more efficient in triggering their friends, nor do they predict trends better. Various outcomes of our analysis point to a factor that has not received sufficient attention in the past: time period overlaps. We find that when incorporating this factor, the conductivity of friendships and our ability to explain the spread of information improve many-fold. This will be the focus of future research.
References
1. Richardson, M., Domingos, P.: Mining knowledge-sharing sites for viral marketing. In: ACM SIGKDD (2002)
2. Yang, W.-S., Dia, J.-B., Cheng, H.-C., Lin, H.-T.: Mining social networks for targeted advertising. In: 39th Hawaii International Conf. (2006)
3. Surowiecki, J.: The Wisdom of Crowds. Anchor (2005)
4. Davitz, J., Yu, J., Basu, S., Gutelius, D., Harris, A.: iLink: search and routing in social networks. In: ACM SIGKDD (2007)
5. Boyd, D.M., Ellison, N.B.: Social network sites: Definition, history, and scholarship. J. of Computer-Mediated Communication 13(1), 210 (2007)
6. Katz, E., Lazarsfeld, P.F.: Personal Influence. Free Press, New York (1955)
7. Koller, D.: Representation, reasoning and learning (2001)
8. Fono, D., Raynes-Goldie, K.: Hyperfriends and beyond: Friendship and social norms on LiveJournal. Internet Research Annual 4 (2006)
9. Raynes-Goldie, K.: Pulling sense out of today’s informational chaos: LiveJournal as a site of knowledge creation and sharing. First Monday 8(12) (2004)
10. Ellison, N.B., Steinfield, C., Lampe, C.: The benefits of Facebook “friends”: Social capital and college students’ use of online social network sites. Journal of Computer-Mediated Communication 12, 1143–1168 (2007)
11. Granovetter, M.: The strength of weak ties. Am. J. of Sociology 78 (1973)
12. Mislove, A., Marcon, M., Gummadi, K., Druschel, P., Bhattacharjee, B.: Measurement and analysis of online social networks. In: IMC (2007)
13. Leskovec, J., Horvitz, E.: Planetary-scale views on a large instant-messaging network. In: WWW 2008 (2008)
14. Cha, M., Mislove, A., Gummadi, K.P.: A measurement-driven analysis of information propagation in the Flickr social network. In: WWW 2009 (2009)
15. Cha, M., Kwak, H., Rodriguez, P., Ahn, Y.Y., Moon, S.: Analyzing the video popularity characteristics of large-scale user generated content systems. IEEE/ACM Transactions on Networking (2009)
16. Keller, E.B., Berry, J.: The Influentials: One American in Ten Tells the Other Nine How to Vote, Where to Eat, and What to Buy. The Free Press, New York (2003)
17. Ahn, Y.Y., Han, S., Kwak, H., Moon, S., Jeong, H.: Analysis of topological characteristics of huge online social networking services. In: WWW 2007 (2007)
18. Rüschendorf, L.: The Wasserstein Distance and Approximation Theorems (1985)
19. Scott, J.P.: Social Network Analysis: A Handbook. Sage, London (2000)
20. Benevenuto, F., Rodrigues, T., Cha, M., Almeida, V.: Characterizing user behavior in online social networks. In: IMC 2009 (2009)
21. Lorenz, M.O.: Methods of measuring the concentration of wealth. Publications of the American Statistical Association 9(70), 209–219 (1905)
22. Simon, H.A.: On a class of skew distribution functions. Biometrika (1955)
23. Csermely, P.: Weak Links: Stabilizers of Complex Systems from Proteins to Social Networks. Springer, Berlin (2006)
Revisiting TCP Congestion Control Using Delay Gradients David A. Hayes and Grenville Armitage Centre for Advanced Internet Architectures Swinburne University of Technology, Melbourne, Australia {dahayes,garmitage}@swin.edu.au
Abstract. Traditional loss-based TCP congestion control (CC) tends to induce high queuing delays and perform badly across paths containing links that exhibit packet losses unrelated to congestion. Delay-based TCP CC algorithms infer congestion from delay measurements and tend to keep queue lengths low. To date most delay-based CC algorithms do not coexist well with loss-based TCP, and require knowledge of a network path’s RTT characteristics to establish delay thresholds indicative of congestion. We propose and implement a delay-gradient CC algorithm (CDG) that no longer requires knowledge of path-specific minimum RTT or delay thresholds. Our FreeBSD implementation is shown to coexist reasonably with loss-based TCP (NewReno) in lightly multiplexed environments, share capacity fairly between instances of itself and NewReno, and exhibit improved tolerance of non-congestion related losses (86% better goodput than NewReno in the presence of 1% packet losses).
1
Introduction
A key goal of TCP (transmission control protocol) [23] is to expedite the reliable transfer of byte-streams across the IP layer’s packet-based service while minimizing congestion inside both end hosts and the underlying IP network(s) [2]. Congestion control (CC) for TCP is a challenging, yet practical, research topic attracting interest from both academia and industry [9]. Traditional loss-based TCP CC considers IP packet loss to indicate network or end-host congestion. The source retransmits the lost packet and briefly slows its transmission rate. Yet TCP’s own probing for a path’s maximum capacity will also induce packet losses when that capacity is reached. This is broadly reasonable where internet traffic flows over link layers with low intrinsic bit error rates (such as wires or optical fibers) and the traffic is largely loss-tolerant. However, loss-based CC is an increasingly unreasonable solution. Today’s internet encompasses a mix of TCP-based loss-tolerant and UDP-based loss-sensitive traffic (such as interactive online games or Voice over IP) flowing over a mixture of fixed and wireless link layer technologies (such as 802.11-based wireless LANs, 802.16 WiMAX last-mile services, IEEE 802.15.4 ZigBee wireless links to smart energy meters, and so on). Wireless link layers tend to exhibit packet losses that are unrelated to congestion.
J. Domingo-Pascual et al. (Eds.): NETWORKING 2011, Part II, LNCS 6641, pp. 328–341, 2011.
© IFIP International Federation for Information Processing 2011
In 1989 Jain suggested an alternative CC approach where congestion along an end to end path is inferred from measurements of round trip time (RTT, or delay) [15]. CC based on delay measurements can optimise transmission rates without inducing packet losses, and offers the potential to be insensitive to packet losses that are not being caused by congestion. Many variations have emerged since [15] (as noted in Section 2). However, most of these are what we refer to as delay-threshold algorithms – they infer congestion when path delays hit or exceed certain thresholds. Unfortunately, meaningful thresholds are hard to set if little is known about the network path that packets will take. In addition, competing delay-threshold flows can suffer relative unfairness if their inability to accurately estimate a path’s base RTT (the smallest possible RTT for the path) leads to thresholds established on the basis of different estimates of base RTT [19]. We propose a novel delay-gradient CC technique, and implement it in FreeBSD 9.0. Our approach exhibits improved tolerance to non-congestion related packet losses, and improved sharing of capacity with traditional NewReno TCP [10] in lightly multiplexed environments (such as home Internet scenarios). Unlike typical delay-threshold CC techniques, we do not rely on a priori knowledge of a path’s minimum or typical RTT levels to infer the onset of congestion. The rest of our paper is structured as follows. Section 2 summarises key delay-based CC algorithms and their issues. Section 3 describes our delay-gradient CC algorithm. Section 4 covers our implementation, experimental analysis and results, while future work and conclusions are covered in Sections 5 and 6.
2
Background
In this section we summarise past efforts at measuring and interpreting network layer delay for TCP CC, and at differentiating between congestion-related and non-congestion-related packet loss.
2.1
Using Network Delay for TCP Congestion Control
Proposals to use network delay for TCP congestion control rely on there being a correlation between delay and congestion. Although some studies have shown a low correlation between loss events and increases in RTT [20], this is not an obstacle to CC since it is the aggregate behavior of flows which is important [22]. Proposals differ in the way they measure delay (RTT, one way delay, per packet measurements, etc.), how they infer congestion (set thresholds, etc.), and how they adjust the sender’s congestion window (cwnd) in response to congestion. In the following descriptions, β is the multiplicative decrease factor, θ represents a delay threshold, ith RTT measurement = τi, smallest RTT = τmin, largest RTT = τmax, and the ith one way delay = di. Jain’s 1989 CARD (Congestion Avoidance using Round-trip Delay) algorithm [15] utilises the normalized delay gradient of RTT, (τi − τi−1)/(τi + τi−1) > 0, to infer
congestion, and Additive Increase Multiplicative Decrease (AIMD, β = 7/8) to adjust cwnd. DUAL [27] uses τi > (τmin + τmax)/2, with delay measured every 2nd RTT, and AIMD (β = 7/8) to adjust cwnd. Vegas [6] uses τi > τmin + θ (normalized by the data sent), delay measured every RTT and Additive Increase Additive Decrease (AIAD) to adjust cwnd. Fast TCP [28], TCP-Africa [16] and Compound TCP (CTCP) [26] all infer congestion similarly to Vegas, but use smoothed RTT measurements and differ in how they adjust cwnd upon detecting congestion. TCP-LP [17] uses di > dmin + δ(dmax − dmin), based on smoothed one way delay (d) using TCP time stamps, and AIMD with a minimum time between successive window decreases to adjust cwnd. Probabilistic Early Response TCP (PERT) [3] uses dynamic thresholds based on inferred queuing delay (qj = τj − τmin) using smoothed RTT measurements and probabilistic reaction to queuing delay inspired by Random Early Discard (RED), with loss probability matching when qj ≥ 0.5 qmax. Hamilton Delay [7, 18] (“HD”, a CC algorithm from the Hamilton Institute), and our own variant of HD [14], implement probabilistic adjustment of cwnd based on queuing delay, RTT thresholds and a backoff function.
2.2
Differentiating Congestion and Non-congestion Related Loss
Using delay to infer congestion opens the possibility to differentiate between congestion and non-congestion related packet losses. For example, Biaz and Vaidya [5] investigated techniques based on TCP Vegas rate, normalised throughput gradients and normalised delay gradients, but found them to be inadequate. Cen et al. [8] investigated a number of proposals including inter-arrival time [4], a threshold based RTT scheme (Spike), a statistical RTT based scheme (ZigZag), and hybrids of these. Recently, Zhao et al. [29] proposed WMTA, which uses comparative loss rates of small and large packets to infer whether a wireless link is suffering wireless or congestion related losses. All of these techniques provide insight into the problem, but fall short of a robust and accurate solution.
3
Delay-Gradient TCP Congestion Control
In this section we describe how CDG (“CAIA Delay-Gradient”) modifies the TCP sender in order to: (a) use the delay gradient as a congestion indicator, (b) have an average probability of backoff that is independent of the RTT, (c) work with loss-based congestion control flows (such as NewReno), and (d) tolerate non-congestion packet loss, but back off for congestion related packet loss. First we answer the question of why it is important to revisit CC based on delay gradients, and then we describe our proposed delay-gradient algorithm. 3.1
Why Use Delay Gradient?
Inspired by Jain’s CARD [15], we have two reasons for revisiting the use of delay gradient for delay-based congestion control.
First, delay-threshold CC algorithms typically require an accurate estimate of a path’s base (smallest possible) RTT in order to properly share capacity among themselves, and ensure network queuing delay stays low [19]. Second, choosing thresholds is difficult. The right compromise between queuing delay and network utilisation requires knowing each flow’s path – a challenge if flows traversing differing numbers of hops are to compete fairly for available capacity. These limitations make delay-threshold algorithms, on their own, problematic for Internet-wide deployment. In contrast, delay-gradient CC relies on relative movement of RTT, and adapts to particular conditions along the paths each flow takes. 3.2
Delay Gradient Signal
RTT is a noisy signal – a cleaner signal is required for inferring congestion from the gradient of RTT over time. CDG uses the maximum RTT (τmax) seen in a measured RTT interval¹, along with the minimum RTT (τmin) seen within a measured RTT interval. Based on these, two measures of gradient (change in RTT measurement per RTT interval) are kept, where n is the nth RTT interval:

gmin,n = τmin,n − τmin,n−1    (1)

gmax,n = τmax,n − τmax,n−1    (2)
The maximum and minimum measurements are less noisy than per packet RTT measurements. Nevertheless we apply the moving average smoothing of equation 3, which may be calculated iteratively using equation 4 (where a is the number of samples in the moving average window²):

ḡn = (1/a) Σ gi, summed over i = n−a+1 … n    (3)

ḡn = ḡn−1 + (gn − gn−a)/a    (4)

where gi = gmin,i for calculating ḡmin,n or gi = gmax,i when calculating ḡmax,n. We implemented an enhanced RTT measuring module [13] to obtain live measurements without the noise caused by duplicate acknowledgments, segmentation offload and SACK (see [22] for more on RTT sampling). An alternative to the moving average would be exponential smoothing. However, if the measured gmax,n ceased to grow because a queue along the path was full, an exponential average would only approach ḡmax,n = 0 in the limit. A moving average would achieve ḡmax,n = 0 in a samples.
3.3
Differentiating Congestion and Non-congestion Related Loss
In order to tolerate the kinds of packet loss common in, say, wireless environments, we must infer whether or not packet loss is related to congestion. For simple drop-tail queues, congestion related loss is due to overflow of a queue along the packet’s path. To infer such events CDG uses both ḡmin and ḡmax.

¹ The same time interval that is used in Vegas [6].
² Although the probabilistic backoff described in 3.4 can provide sufficient smoothing, moving average smoothing helps with the loss tolerance and coexistence heuristics. When operating in slow start mode, gmin,n and gmax,n are used without smoothing for a more timely response to the rapid w increases.
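The gradient signal of Section 3.2 can be sketched as follows: per-interval gradients (Eqs. 1-2) smoothed with the iterative a-sample moving average of Eq. 4. This is an illustrative model of the computation, not the FreeBSD kernel code; RTT values, units and window size are made up.

```python
# Sketch of the CDG gradient signal: feed tau_min (or tau_max) of each RTT
# interval, get back the a-sample moving average of its gradient (Eq. 4).
# Values and the window size a are illustrative, not the implementation's.
from collections import deque

class GradientSignal:
    def __init__(self, a=8):
        self.a = a
        self.window = deque([0.0] * a, maxlen=a)  # last a gradient samples
        self.g_bar = 0.0                          # smoothed gradient
        self.prev = None                          # previous interval's RTT stat

    def update(self, rtt_stat):
        if self.prev is None:                     # first interval: no gradient yet
            self.prev = rtt_stat
            return self.g_bar
        g = rtt_stat - self.prev                  # Eq. 1/2: per-interval gradient
        self.prev = rtt_stat
        self.g_bar += (g - self.window[0]) / self.a  # Eq. 4: iterative update
        self.window.append(g)                     # drops the oldest sample
        return self.g_bar

sig = GradientSignal(a=4)
for tau_min in [100, 102, 104, 106, 108]:         # RTT rising 2 ms per interval
    g_bar = sig.update(tau_min)
print(round(g_bar, 3))  # 2.0 once the window holds four +2 gradients
```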
Fig. 1. Queue full and queue empty scenarios, highlighting the detection areas. (a) Idealised RTT dynamics (τmax, τmin) for queue full and empty events. (b) Idealised gradient dynamics (ḡmax, ḡmin) for queue full and empty events (ḡmin and ḡmax are vertically offset for clarity).
Figure 1a illustrates our assumption that when a queue fills to capacity, τmax stops increasing before τmin stops increasing, and that the reverse is true for a queue moving from full to empty. Figure 1b shows the idealised gradients for these two conditions (with the lines for ḡmin and ḡmax offset slightly for clarity). Based on this, CDG estimates the state of the path queue to be Q ∈ {full, empty, rising, falling, unknown}. Only when Q = full are packet losses treated as congestion signals.
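The text does not spell out the exact classification rules, but one plausible reading of the Figure 1 dynamics can be sketched as follows; the tolerance and decision rules here are illustrative assumptions, not the paper's FreeBSD implementation:

```python
def queue_state(gmin_bar, gmax_bar, tol=1e-3):
    """Illustrative classification of the bottleneck queue state from the
    smoothed min/max RTT gradients, following the idealised dynamics of
    Figure 1 (assumed rules, not the authors' exact heuristics)."""
    if gmin_bar > tol and abs(gmax_bar) <= tol:
        return "full"      # tau_max has flattened while tau_min still rises
    if gmax_bar < -tol and abs(gmin_bar) <= tol:
        return "empty"     # tau_min has flattened while tau_max still falls
    if gmin_bar > tol and gmax_bar > tol:
        return "rising"    # both envelope edges still climbing
    if gmin_bar < -tol and gmax_bar < -tol:
        return "falling"   # both envelope edges still draining
    return "unknown"
```

Only the "full" outcome would cause a subsequent packet loss to be treated as a congestion signal.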
3.4 RTT Independent Backoff
We use Equation 5 as a probabilistic backoff mechanism to achieve fairness between flows having different base RTTs:

P[backoff] = 1 − e^(−ḡn/G)    (5)
where G > 0 is a scaling parameter and ḡn will either be ḡmin,n or ḡmax,n. Our implementation uses a lookup table for e^x and a configurable FreeBSD kernel variable for G. The exponential nature of P[backoff] means that a source with a small RTT, which sees smaller differences in the RTT measurements, will on average have the same P[backoff] as a source with a longer RTT, which sees larger differences in the RTT measurements.
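Equation 5 is straightforward to sketch (here in Python using libm-style `math.exp`; the kernel implementation uses a lookup table for e^x):

```python
import math

def backoff_probability(gbar, G=3.0):
    """Eq. 5: P[backoff] = 1 - exp(-gbar/G) for a positive smoothed
    gradient; no backoff when the gradient is zero or negative.
    G is the scaling parameter (G = 3 in the paper's experiments)."""
    if gbar <= 0.0:
        return 0.0
    return 1.0 - math.exp(-gbar / G)
```

A small-RTT flow sees smaller gradients per interval but evaluates the backoff decision more often per unit time, which is how Equation 5 evens out backoff behaviour across base RTTs.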
3.5 Congestion Window Progression
In congestion avoidance mode, CDG updates the congestion window (w) once every RTT according to Equation 6:

wn+1 = { wn · β    if X < P[backoff] ∧ ḡn > 0
       { wn + 1    otherwise                      (6)

where w is the size of the TCP congestion window in packets³, n is the nth RTT, X ∈ [0, 1] is a uniformly distributed random number, and β is the multiplicative decrease factor (β = 0.7 in our testbed experiments). Since the effect of this update will not be measured in the next RTT interval, the next calculation of gmin or gmax is ignored. Thus a delay-gradient congestion indication will cause CDG to back off at most once every two RTT intervals⁴. In slow start mode w increases identically to NewReno. The decision to reduce w and enter congestion avoidance mode is made per RTT as per Equation 6, or on packet loss as in NewReno, whichever occurs first.
³ CDG increments a byte-based w by the maximum segment size every RTT.
Revisiting TCP Congestion Control Using Delay Gradients 333
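Equation 6's per-RTT update can be sketched as follows (Python; the `rng` parameter is an illustrative test hook, and slow start and the skipped post-backoff gradient sample are omitted):

```python
import math
import random

def advance_cwnd(w, gbar, G=3.0, beta=0.7, rng=random.random):
    """One per-RTT congestion-avoidance step (Eq. 6): with probability
    P[backoff] = 1 - exp(-gbar/G), and only while the smoothed gradient
    is positive, shrink the window to beta*w; otherwise grow by one
    packet per RTT as NewReno would."""
    if gbar > 0.0 and rng() < 1.0 - math.exp(-gbar / G):
        return w * beta          # delay-gradient backoff
    return w + 1                 # NewReno-like linear growth
```

In the real module w is byte-based and grows by the maximum segment size per RTT; packets are used here to match Equation 6's units.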
3.6 Competing with Loss-Based CC
CDG uses two mechanisms to mitigate the loss of fair share of available capacity when sharing bottleneck queues with loss-based TCP flows: ineffectual backoff detection, and a loss-based shadow window from [14].

Ineffectual backoff detection. If CDG backs off multiple times, b, due to delay-gradient congestion indications, but ḡmin or ḡmax are still not negative, then CDG presumes that its backoffs have been ineffectual because it is competing with a loss-based flow. Consequently, CDG does not back off due to delay-gradient congestion indications for a further b′ delay-gradient congestion indications, unless either ḡmin or ḡmax becomes negative in the process. (In our CDG implementation both b and b′ are configurable FreeBSD kernel variables.)

Shadow window. CDG recovers some of its lost sending capability by utilising the shadow window idea from [14] to mimic the loss-based backoffs of TCP NewReno. The shadow window (s) is updated as follows:

si+1 = { max(wi, si)    delay-based backoff
       { 0              Q = empty               (7)
       { si             otherwise

If delay-gradient triggers a backoff then si+1 = max(wi, si); if CDG infers that the bottleneck queue has emptied then si+1 = 0; otherwise s is unchanged.
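Equation 7's shadow-window maintenance reduces to a three-way case analysis; a sketch (Python; names and the event encoding are illustrative):

```python
def update_shadow(s, w, event):
    """Shadow-window update (Eq. 7). `event` is one of
    'delay_backoff', 'queue_empty', or None (no relevant event)."""
    if event == "delay_backoff":
        return max(w, s)   # remember the pre-backoff sending capability
    if event == "queue_empty":
        return 0           # path drained: nothing left to shadow
    return s               # otherwise s is unchanged
```

Between these events, s grows NewReno-like so that it tracks what a loss-based flow's window would have been.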
3.7 Window Update on Packet Loss
If a packet loss occurs, the congestion window (w) is updated as follows:

wi+1 = { max(si, wi)/2    Q = full ∧ packet loss
       { wi               otherwise                 (8)
In the case of packet losses, the multiplicative decrease factor is 0.5 (as in NewReno), and w is set to half the larger of s (the shadow window) and w. Using the shadow window concept from [14] improves CDG's coexistence with loss-based flows. We do not reclaim the lost transmission opportunities, but this approach does lessen the impact of the extra delay-gradient based backoffs. Figure 2 gives an example illustrating how this works.
⁴ TCP Vegas [6] uses a similar idea during slow start.

[Figure 2: evolution of the shadow window s and congestion window w (in packets) over a number of round-trip times. Referring to the regions indicated by circled numbers:
1. w grows as normal for TCP congestion avoidance (one packet per RTT)
2. A delay-gradient congestion indication meets Equation 6's criteria; s is initialised to w, then w = βw
3. w continues to react to delay-gradient congestion indications
4. s shadows NewReno-like growth
5. A packet loss occurs (Q = full), so w is set to s/2 rather than w/2 (per Equation 8), gaining a transmission opportunity that would be lost without this w recovery.]
Fig. 2. Behaviour of shadow window (s) and congestion window (w) when competing with loss-based TCP flows
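A sketch of Equation 8's loss response (Python; the queue-state value is assumed to come from the Section 3.3 estimator as a string, and names are illustrative):

```python
def loss_update(w, s, q_state):
    """Eq. 8: on packet loss, halve from the larger of the congestion
    window w and the shadow window s, but only when the queue state Q
    is inferred to be 'full' (loss attributed to congestion); other
    losses leave w untouched."""
    if q_state == "full":
        return max(s, w) / 2.0
    return w
```

Because s tracks NewReno-like growth between delay-gradient backoffs, halving max(s, w) on a congestion loss leaves CDG with roughly the window a NewReno flow would have had, which is the behaviour Figure 2 illustrates.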
4 Experimental Analysis
Here we present our experimental evaluation of CDG, primarily focusing on:
– Tolerance of NewReno and CDG to non-congestion related losses
– Sharing dynamics between three homogeneous flows (CDG or NewReno)
– Competition of up to two NewReno flows and up to two CDG flows
– Sharing dynamics between two homogeneous flows of different RTTs
The “source” hosts of Figure 3 implement CDG as a FreeBSD 9.0 kernel module [1], whilst the “sinks” are unmodified FreeBSD hosts. We used Gigabit Ethernet links to the dummynet-based [24] router. This router provides a 10 Mbps bottleneck link and base RTTs of 40 ms (20 ms each way) and 70 ms (35 ms each way) as needed. The bottleneck queue is 84 packets long, corresponding to a maximum queuing delay of about 100 ms with 1500 byte packets.
[Figure 3: CDG sources and NewReno sources (FreeBSD) connect through a dummynet router (FreeBSD) to a CDG sink and a NewReno sink (FreeBSD), over legs with 20 ms and 35 ms one-way delay.]
Fig. 3. Experimental Testbed
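The dummynet bottleneck described above can be configured with FreeBSD's ipfw along these lines (a sketch only; rule and pipe numbers are illustrative, and a real testbed would match specific host addresses rather than `any`):

```shell
# 10 Mbps bottleneck, 20 ms each-way delay (40 ms base RTT),
# 84-slot queue (~100 ms of queuing with 1500-byte packets)
ipfw pipe 1 config bw 10Mbit/s delay 20ms queue 84
ipfw pipe 2 config bw 10Mbit/s delay 20ms queue 84
ipfw add 100 pipe 1 ip from any to any out   # forward (data) path
ipfw add 200 pipe 2 ip from any to any in    # reverse (ACK) path

# experiments with non-congestion loss add random drops on the
# forward path only, e.g. a 1% packet loss rate:
ipfw pipe 1 config bw 10Mbit/s delay 20ms queue 84 plr 0.01
```

The `plr` option is how dummynet emulates random, congestion-independent packet loss.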
[Figure 4: goodput (bps, 0 to 10×10^6) versus probability of non-congestion related loss (0 to 0.05) for NewReno, Vegas, CDG, and the 1/sqrt(p) model curve.]
Fig. 4. Goodput of NewReno, Vegas and CDG with non-congestion losses
TCP traffic is generated using Netperf (http://www.netperf.org/). NewReno flows use the default parameters. CDG operates with moving average sample window a = 8, exponential scaling parameter G = 3, ineffectual backoff trigger b = 5 and ineffectual backoff ignore count b′ = 5. The non-congestion related loss is implemented as random packet loss added by dummynet in the forward (data) path only. Each experiment is repeated 10 times. Where appropriate, graphs show the 20th, 50th and 80th percentiles (marker at the median, and error bars spanning the 20th to 80th percentiles).
4.1 Tolerance to Non-congestion Related Losses
First we look at the impact of non-congestion related losses on TCP goodput⁵. Figure 4 shows the average goodput achieved over 60 s versus the probability of non-congestion related packet loss for NewReno, TCP Vegas [6], and CDG. We also show the theoretical maximum throughput under loss conditions given by the model B = (pkt size/rtt) · C/√p proposed by Mathis et al. [21] (where B is the expected throughput, rtt = 40 ms, p is the probability of packet loss, and C = √(3/2)).

NewReno and Vegas goodput decreases markedly with non-congestion related losses, tracking [21]'s 1/√p curve. Vegas reacts to both loss and delay as congestion signals. High-speed TCP variants, such as CUBIC [12], which reduce w by less on packet loss and increase w much more quickly during congestion avoidance than NewReno, can recover from packet loss more quickly than NewReno. This capability in CDG is not tested here for fair comparison with NewReno. Note that Compound TCP [26] performs worse than NewReno at these levels of non-congestion packet loss (Compound TCP begins performing slightly worse than NewReno when losses exceed about 0.5 %).

CDG is noticeably better at tolerating non-congestion losses. Although still reacting to loss, CDG's use of delay-gradient information improves its ability to infer whether any given loss event is due to congestion along the path. CDG is conservative, preferring a false positive to a false negative. Nevertheless, CDG's goodput still drops as loss rates increase: it spends proportionally more time retransmitting lost packets and less time growing cwnd (Equation 6's w), as CDG does not increment cwnd during the recovery process.
⁵ Usable data transferred per unit time, excluding retransmitted payloads.
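The Mathis et al. [21] bound plotted in Figure 4 can be computed directly (a sketch; the unit handling and C = √(3/2) deterministic-loss constant are as described above):

```python
import math

def mathis_throughput(pkt_size_bytes, rtt_s, p, C=math.sqrt(1.5)):
    """Mathis et al. [21] bound B = (pkt_size/rtt) * C/sqrt(p),
    returned in bits per second; p is the packet loss probability."""
    return (pkt_size_bytes * 8.0 / rtt_s) * C / math.sqrt(p)
```

At p = 1 %, rtt = 40 ms and 1500-byte packets this bound is roughly 3.7 Mbps, consistent with NewReno capturing only about a third of the 10 Mbps bottleneck at that loss rate (Section 6).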
4.2 Homogeneous Capacity Sharing
Here we contrast the way NewReno and CDG share capacity with instances of their own 'type'. Each experiment uses three 60 s NewReno or CDG flows sharing the bottleneck link, with the first, second and third flows starting at 0 s, 20 s and 40 s respectively. We examine the case where all losses are due to congestion, and then artificially add an extra 1 % packet loss rate unrelated to congestion.

Figure 5 (goodput over time) shows that both NewReno and CDG share quite fairly among themselves when there is no non-congestion related packet loss. Figure 6 (the path RTT over time) reveals that NewReno induces far higher, and more oscillatory, queuing delays than CDG. Each NewReno flow (Figure 5a) takes longer to converge on its fair share of link rate than the equivalent CDG flow (Figure 5b). This is because NewReno consistently pushes RTT up to 140 ms (40 ms base RTT plus 100 ms queuing delay when the queue is full), so its feedback loop cannot react to the onset of self-induced congestion during slow start as quickly as CDG's feedback loop. CDG can allow the link to become idle for brief periods of time, so a single CDG flow will never quite match the best goodput of a NewReno flow in the absence of non-congestion related losses (Figure 5 between t = 10 s and t = 20 s).

CDG's relatively benign impact on RTT is likely to be attractive to other applications sharing a congestion point. In contrast, NewReno cyclically fills the queue until packet loss occurs (Figure 6a). (In similar scenarios CUBIC induces similar or higher average delays than NewReno [25]. Compound TCP's congestion window is always cwnd + dwnd, where dwnd uses a Vegas-style delay calculation, so it will also not induce lower queuing delays than NewReno.)
[Figure 5: goodput (bps, 0 to 10×10^6) versus time (s) for (a) three NewReno flows sharing the link and (b) three CDG flows sharing the link.]
Fig. 5. Homogeneous capacity sharing between three flows on a link having no non-congestion related losses. Flows begin at 20 s intervals and transmit for 60 s. Goodput averaged every 4 s with the point at the middle of the 4 s interval.
[Figure 6: RTT (ms, 40 to 140) versus time (s) for (a) NewReno RTT dynamics due to induced queuing delays and (b) CDG RTT dynamics due to induced queuing delays.]
Fig. 6. Path RTT versus time during the trials in Figure 5. (The number of points has been reduced by a factor of 20 for clarity.)
[Figure 7: goodput (bps, 0 to 10×10^6) versus time (s) for (a) three NewReno flows sharing the link and (b) three CDG flows sharing the link.]
Fig. 7. Homogeneous capacity sharing between three flows on a link with a 1 % random probability of non-congestion related losses. Flows begin at 20 s intervals and transmit for 60 s. 4 s averages, with the point in the middle of the 4 s interval.
Figure 7 repeats the experiment with 1 % probability of additional packet loss (unrelated to congestion). The additional losses dominate and cripple NewReno flows, but CDG (cf. Figure 4) continues to utilise and share the available capacity.
4.3 Competing with NewReno
Practical deployment of delay-based CC algorithms is made difficult by the need to coexist with loss-based flows that tend to cyclically overfill queues. We start four 80 s flows (CDG, NewReno, NewReno, and CDG) at 20 s intervals to demonstrate how CDG’s use of the shadow window (Sections 3.6 and 3.7 and Figure 2) helps it compete with NewReno for available capacity. Figure 8a shows coexistence where congestion is the only source of packet loss. The first CDG flow retains about 24 % of the available capacity once the first NewReno flow starts up and takes the rest. This is mainly due to CDG’s conservative coexistence heuristic which at first does reduce cwnd before the shadow window adjustment is made. CDG does slightly better as the next NewReno and CDG flows join, but never quite claims its fair share. Figure 8b shows how the dynamics change when there is a 1 % probability of non-congestion packet loss. NewReno’s sensitivity to packet loss prevents it from fully utilising the available capacity. In contrast, CDG’s intrinsic tolerance
[Figure 8: goodput (bps, 0 to 10×10^6) versus time (s) for the CDG, NewReno, NewReno and CDG flows, under (a) no non-congestion losses and (b) 1 % random non-congestion loss.]
Fig. 8. Coexistence between NewReno and CDG – Goodput averaged every 20 s (plotted point is in the middle of the averaging period)
[Figure 9: goodput (bps, 0 to 10×10^6) versus time (s) for a 40 ms and a 70 ms base RTT flow: (a) two NewReno flows sharing a link, shorter RTT flow starting first (S-L); (b) two NewReno flows, longer RTT flow starting first (L-S); (c) two CDG flows, shorter RTT flow starting first (S-L); (d) two CDG flows, longer RTT flow starting first (L-S).]
Fig. 9. Homogeneous capacity sharing between two flows with different base RTTs, with no non-congestion related losses. Each flow transmits for 60 s, with the second flow starting 20 s later. 10 s averages, with the point in the middle of the 10 s interval.
to non-congestion losses allows it to utilise more of the capacity that NewReno is unable to use. (CDG does not capture capacity at the expense of NewReno.)
4.4 Competition between Flows Having Different Base RTTs
Finally we briefly explore capacity sharing between flows using the same CC algorithm but having different base RTTs (40 ms and 70 ms). Figure 9 shows the goodput results (each point showing goodput averaged over 10 s) for NewReno
and CDG in two scenarios: (a) the source with 40 ms base RTT starts first (S-L), and (b) the source with 70 ms base RTT starts first (L-S). In both S-L and L-S cases the NewReno and CDG flows with higher (70 ms) base RTT end up with a smaller (roughly 30 %) share of available capacity. The most notable difference between NewReno and CDG is the speed with which capacity sharing stabilises in each case. As noted in Section 4.2, NewReno induces much higher overall RTT and thus its feedback loop reacts more slowly (compared to CDG) to the addition or removal of a competing flow. In such lightly multiplexed environments with little noise, phase effects can dramatically alter the function of traditional TCP and the resulting throughput [11]. This can lead to larger error bars (particularly evident in Figures 9a,b). CDG does not suffer as much from these effects due to its probabilistic backoff.
5 Further Work
We have considered CDG in lightly multiplexed environments where congestion is dominated by a single router (such as might exist with home Internet connections). More work is required to characterise the utility of CDG in highly multiplexed environments, and in multi-hop, multi-path environments. CDG's cwnd increase mechanism could adopt a more aggressive approach: increasing cwnd during loss recovery when the loss is deemed not to be due to congestion, to better cope with higher loss rates; and increasing cwnd more quickly during congestion avoidance than traditionally allowed, relying on CDG's ability to infer congestion before packets are lost. We explore delay-gradient CC because it does not require accurate knowledge of a path's base RTT. Future work might explore whether combining delay-gradient and absolute queuing delay congestion signals could create a more robust CC algorithm.
6 Conclusion
We have proposed, implemented (under FreeBSD 9.0) and demonstrated CDG – a novel sender-side delay-gradient TCP congestion control algorithm that requires no changes to existing TCP receivers. CDG avoids a key limitation of delay-threshold CC algorithms – their need to establish an accurate measure of a path's base RTT and set thresholds based on actual network path delay characteristics. CDG's improved tolerance to non-congestion related packet losses makes it attractive over paths containing links with non-negligible intrinsic packet loss rates (such as wireless links). CDG can coexist with loss-based TCPs such as NewReno, though it achieves less than its fair share of the capacity in such cases. CDG uses the maximum RTT and minimum RTT gradient envelope to estimate whether loss is congestion or non-congestion related, and consequently exhibits improved tolerance to non-congestion related losses: a single CDG flow achieved 65 % of the available capacity at 1 % packet loss, compared to 35 % for NewReno (an 86 % improvement over NewReno). CDG utilises [14]'s NewReno-like shadow window to help it compete with loss-based TCP CC algorithms.
Acknowledgments. This work was made possible in part by a grant from the Cisco University Research Program Fund at Community Foundation Silicon Valley.
References
[1] NewTCP project tools (August 2010), http://caia.swin.edu.au/urp/newtcp/tools.html (accessed December 2, 2010)
[2] Allman, M., Paxson, V., Stevens, W.: TCP Congestion Control. RFC 2581 (Proposed Standard) (April 1999), http://www.ietf.org/rfc/rfc2581.txt (updated by RFC 3390)
[3] Bhandarkar, S., Reddy, A.L.N., Zhang, Y., Loguinov, D.: Emulating AQM from end hosts. In: SIGCOMM 2007: Proceedings of the 2007 Conference on Applications, Technologies, Architectures, and Protocols for Computer Communications, pp. 349–360. ACM Press, New York (2007)
[4] Biaz, S., Vaidya, N.: Discriminating congestion losses from wireless losses using inter-arrival times at the receiver. In: Proceedings of the IEEE Symposium on Application-Specific Systems and Software Engineering and Technology, ASSET 1999, pp. 10–17 (1999)
[5] Biaz, S., Vaidya, N.H.: Distinguishing congestion losses from wireless transmission losses. In: Seventh International Conference on Computer Communications and Networks (IC3N) (October 1998)
[6] Brakmo, L.S., Peterson, L.L.: TCP Vegas: end to end congestion avoidance on a global internet. IEEE J. Sel. Areas Commun. 13(8), 1465–1480 (1995)
[7] Budzisz, L., Stanojevic, R., Shorten, R., Baker, F.: A strategy for fair coexistence of loss and delay-based congestion control algorithms. IEEE Commun. Lett. 13(7), 555–557 (2009)
[8] Cen, S., Cosman, P.C., Voelker, G.M.: End-to-end differentiation of congestion and wireless losses. IEEE/ACM Trans. Netw. 11(5), 703–717 (2003)
[9] Floyd, S.: Congestion Control Principles. RFC 2914 (Best Current Practice) (September 2000), http://www.ietf.org/rfc/rfc2914.txt
[10] Floyd, S., Henderson, T., Gurtov, A.: The NewReno Modification to TCP's Fast Recovery Algorithm. RFC 3782 (Proposed Standard) (April 2004), http://www.ietf.org/rfc/rfc3782.txt
[11] Floyd, S., Jacobson, V.: On traffic phase effects in packet-switched gateways. Internetworking: Research and Experience 3(3), 115–156 (1992)
[12] Ha, S., Rhee, I., Xu, L.: CUBIC: A new TCP-friendly high-speed TCP variant. ACM SIGOPS Operating Systems Review 42(5), 64–74 (2008)
[13] Hayes, D.: Timing enhancements to the FreeBSD kernel to support delay and rate based TCP mechanisms. Tech. Rep. 100219A, Centre for Advanced Internet Architectures, Swinburne University of Technology, Melbourne, Australia (February 19, 2010), http://caia.swin.edu.au/reports/100219A/CAIA-TR-100219A.pdf
[14] Hayes, D.A., Armitage, G.: Improved coexistence and loss tolerance for delay based TCP congestion control. In: 35th Annual IEEE Conference on Local Computer Networks (LCN 2010), Denver, Colorado, USA (October 2010)
[15] Jain, R.: A delay-based approach for congestion avoidance in interconnected heterogeneous computer networks. SIGCOMM Comput. Commun. Rev. 19(5), 56–71 (1989)
[16] King, R., Baraniuk, R., Riedi, R.: TCP-Africa: An adaptive and fair rapid increase rule for scalable TCP. In: IEEE INFOCOM 2005, pp. 1838–1848 (2005)
[17] Kuzmanovic, A., Knightly, E.: TCP-LP: low-priority service via end-point congestion control. IEEE/ACM Trans. Netw. 14(4), 739–752 (2006)
[18] Leith, D., Shorten, R., McCullagh, G., Heffner, J., Dunn, L., Baker, F.: Delay-based AIMD congestion control. In: Proc. Protocols for Fast Long Distance Networks, California (2007)
[19] Leith, D., Shorten, R., McCullagh, G., Dunn, L., Baker, F.: Making available base RTT for use in congestion control applications. IEEE Communications Letters 12(6), 429–431 (2008)
[20] Martin, J., Nilsson, A., Rhee, I.: Delay-based congestion avoidance for TCP. IEEE/ACM Trans. Netw. 11(3), 356–369 (2003)
[21] Mathis, M., Semke, J., Mahdavi, J., Ott, T.: The macroscopic behavior of the TCP congestion avoidance algorithm. SIGCOMM Comput. Commun. Rev. 27(3), 67–82 (1997)
[22] McCullagh, G., Leith, D.J.: Delay-based congestion control: Sampling and correlation issues revisited. Tech. rep., Hamilton Institute, National University of Ireland, Maynooth (2008)
[23] Postel, J.: Transmission Control Protocol. RFC 793 (Standard) (September 1981), http://www.ietf.org/rfc/rfc793.txt (updated by RFC 3168)
[24] Rizzo, L.: Dummynet: a simple approach to the evaluation of network protocols. ACM SIGCOMM Computer Communication Review 27(1), 31–41 (1997)
[25] Stewart, L., Armitage, G., Huebner, A.: Collateral damage: The impact of optimised TCP variants on real-time traffic latency in consumer broadband environments. In: Fratta, L., Schulzrinne, H., Takahashi, Y., Spaniol, O. (eds.) NETWORKING 2009. LNCS, vol. 5550, pp. 392–403. Springer, Heidelberg (2009)
[26] Tan, K., Song, J., Zhang, Q., Sridharan, M.: A compound TCP approach for high-speed and long distance networks. In: Proceedings of the 25th IEEE International Conference on Computer Communications, INFOCOM 2006, pp. 1–12 (April 2006)
[27] Wang, Z., Crowcroft, J.: Eliminating periodic packet losses in the 4.3-Tahoe BSD TCP congestion control algorithm. SIGCOMM Comput. Commun. Rev. 22(2), 9–16 (1992)
[28] Wei, D.X., Jin, C., Low, S.H., Hegde, S.: FAST TCP: Motivation, architecture, algorithms, performance. IEEE/ACM Trans. Netw. 14(6), 1246–1259 (2006)
[29] Zhao, H., Dong, Y.-n., Li, Y.: A packet loss discrimination algorithm in wireless IP networks. In: 5th International Conference on Wireless Communications, Networking and Mobile Computing, WiCom 2009, Beijing, pp. 1–4 (September 2009)
NF-TCP: A Network Friendly TCP Variant for Background Delay-Insensitive Applications
Mayutan Arumaithurai¹, Xiaoming Fu¹, and K.K. Ramakrishnan²
¹ Institute of Computer Science, University of Goettingen, Germany
{arumaithurai,fu}@cs.uni-goettingen.de
² AT&T Labs-Research, U.S.A.
[email protected]
Abstract. Delay-insensitive applications, such as P2P file sharing, generate substantial amounts of traffic and compete with other applications on an equal footing when using TCP. Further, to optimize throughput, such applications typically open multiple connections. This results in unfair and potentially poor service for applications that have stringent performance objectives (including sensitivity to delay and loss). In this paper, we propose NF-TCP, a TCP variant for P2P and similar delay-insensitive applications that can afford to have communication in the “background”. NF-TCP aims to be submissive to delay-sensitive applications under congestion. A major component of NF-TCP is to integrate measurement as an integral component of the congestion control framework. This enables the transport to exploit available bandwidth, so that it can aggressively utilize spare capacity. We implemented NF-TCP on Linux and ns-2. Our evaluations of the NF-TCP Linux implementation on ns-2 show that NF-TCP outperforms other network friendly approaches (e.g., LEDBAT, TCP-LP and RAPID). NF-TCP achieves high utilization, fair bandwidth allocation among NF-TCP flows and maintains a small average queue. Our evaluations further demonstrate that with NF-TCP, the available bandwidth can be efficiently utilized. Keywords: Network-Friendly, submissive, bandwidth-estimation.
1 Introduction

Peer-to-Peer (P2P) file sharing applications form a significant part of the Internet traffic. According to a study from Ipoque [1], P2P accounted for a range between 43% and 70% of the total traffic in large parts of Europe, Middle East, Africa and South America. To optimize their throughput over a wide range of conditions, such P2P applications use TCP (which provides fairness among all coexisting flows). These applications seek to improve throughput further by opening multiple connections, thereby resulting in an unfair and potentially poor service for applications that have stringent performance objectives to meet user requirements for interactive usage. Based on user expectations and the current technology available in the Internet, applications may be broadly classified into delay-sensitive and delay-insensitive applications. Users have a higher expectation and less tolerance for delays caused to delay-sensitive applications such as video conferencing and streaming, voice over IP, and even web-browsing. On the other hand,

J. Domingo-Pascual et al. (Eds.): NETWORKING 2011, Part II, LNCS 6641, pp. 342–355, 2011.
© IFIP International Federation for Information Processing 2011
NF-TCP: A Network Friendly TCP Variant
343
users generally perceive applications such as software updates, "download and play" and P2P file sharing, to name a few, as having lower priority. To overcome the perceived unfair and sometimes unnecessary disadvantage that delay-sensitive applications are subjected to by traffic from background delay-insensitive applications, ISPs have resorted to throttling or even blocking traffic from heavy users during congestion. Future trends, such as congestion-based charging based on approaches such as [2], also motivate the need for solutions that make background traffic more network friendly. Such solutions would aid applications in seamlessly identifying congestion and facilitate delay-insensitive applications deferring their use of the network. This has the potential to benefit both users and ISPs, resulting in a better distribution of the network load and greater user satisfaction. Delay-insensitive traffic needs to be both submissive during congestion periods and aggressive during non-congestion periods. Our view is that this is most suitably done as a transport layer protocol. The primary argument behind this is driven by considerations of the time-scale at which the transport layer (rather than a different layer, such as the application) can react to the onset or the absence of congestion. In addition, congestion control and avoidance have typically required the transport protocol to place a load on network resources, create congestion, and then react to the effect of this congestion to effectively (i.e., both in terms of efficiency and fairness) use the network. This approach is in conflict with the basic goal of a network-friendly, submissive protocol. Our motivation for developing NF-TCP is based on the realization of the difficulties observed with other recent efforts to arrive at a network-friendly transport protocol.
Recently, the IETF LEDBAT working group has been formed to develop such a network friendly protocol [3], which is primarily an end-host, delay-based congestion avoidance protocol. Other delay-based network friendly mechanisms include TCP-LP [4] and RAPID [5]. A limitation of these delay-based approaches is that they have to cause queuing (and potentially significant-enough queueing) to be able to operate effectively. In addition, they require high precision packet time-stamps and accurate delay measurements in the implementation to identify the onset of congestion. Delay measurements are also susceptible to noise in low latency networks and also in the presence of dynamic background traffic [6]. Additional difficulties include robustness to randomness and widely varying RTTs for competing traffic [7]. Our evaluations also highlight the fact that LEDBAT is unfriendly to standard TCP and contributes to queue buildup in high bandwidth-delay product (BDP) networks. Efficiently utilizing available bandwidth is another consideration driving existing solutions for high BDP environments, such as High-speed TCP [8], FAST [9], Compound TCP [10], CUBIC [11], Quick-Start [12] and XCP [13]. However, part of their ability to opportunistically utilize the large bandwidths in these types of environments is due to their aggressive increase policies. Thus, they are not necessarily “submissive” or “friendly” to other delay-sensitive applications, and have the potential to cause a period of congestion. In fact, evidence of their “collateral damage” on existing TCP traffic has been documented (e.g., [14]). In this paper, we propose a network-friendly congestion control protocol (NF-TCP), which is a TCP variant for P2P and similar background delay-insensitive applications. NF-TCP is able to be network friendly by being submissive to delay-sensitive
344
M. Arumaithurai, X. Fu, and K.K. Ramakrishnan
applications under congestion. A key idea within NF-TCP is to integrate measurement as an integral component of the congestion control framework. It uses the measured value of the available bandwidth to aggressively utilize that spare capacity during non-congestion periods, thus minimizing the reduction in throughput that it would otherwise suffer because of its submissiveness. It differs from existing approaches in two main ways:
• NF-TCP is a network-friendly, submissive variant of TCP that depends on Explicit Congestion Notification (ECN) [15] based network feedback. We propose a simple but efficient enhancement to standard Active Queue Management (AQM) routers, configuring them to use a lower threshold to begin marking or dropping packets belonging to NF-TCP flows. NF-TCP exploits this early marking scheme to reliably identify incipient congestion and have delay-insensitive applications aggressively defer their load on a congested network.
• NF-TCP is designed to exploit information provided by an available bandwidth measurement component that is carefully crafted to rapidly obtain the estimate. Based on this estimate, NF-TCP utilizes spare capacity when the network is uncongested in an aggressive but informed manner, thereby allowing NF-TCP to compensate for the loss of throughput during the submissive phase.
To the best of our knowledge, this is the first work that incorporates a separate bandwidth measurement mechanism into a congestion control framework. Existing approaches estimate available bandwidth either by placing load or by switching to rate-based sending of data packets. We propose a novel ECN-enhanced bandwidth estimation mechanism (ProECN) to guide NF-TCP to aggressively utilize the bandwidth during non-congestion periods. Our LANMAN 2010 [16] work described the philosophy and overall approach of NF-TCP. This paper provides the details of the protocol (with significant improvements and enhancements) and analyzes it extensively.
Our results are primarily based on measurements of our Linux-based NF-TCP implementation, which was ported to ns-2 using the Linux TCP implementation tool [17], both to study NF-TCP behavior at large scale and to compare our approach with several alternatives.
2 Design Objectives
The main goal of our work is to develop a network friendly protocol that addresses the challenges described above. Our solution is built on the following requirements:
Requirement-I: Be submissive to standard TCP when encountering network congestion. This ensures that packets of delay-insensitive applications do not cause substantial queueing that can impact existing or newly arriving TCP flows; such buffer occupancy increases the latency or drop rate experienced by delay-sensitive applications. The submissiveness also enables standard TCP flows to utilize the available capacity. For NF-TCP to be submissive, it must detect the onset of congestion earlier than standard TCP. During congestion periods, when the queue is building up, the rate-deterministic model of [18] (T_N ∝ √(a_N/p_N)) gives the following condition for NF-TCP:

    √(a_N / p_N) → 0    (1)
NF-TCP: A Network Friendly TCP Variant
345
where T_N is the throughput of an NF-TCP flow, p_N is the loss rate of the NF-TCP flow and a_N is its rate of increase. Therefore, to satisfy condition (1), either NF-TCP's increase factor (a_N) should be close to zero or the loss rate of NF-TCP (p_N) should be very high.
Requirement-II: Ability to saturate available bandwidth as fast as possible in the absence of other TCP flows. An NF-TCP flow must be capable of aggressively capturing available bandwidth during non-congestion periods without negatively impacting co-existing TCP flows. Moreover, in the presence of other NF-TCP flows, the bandwidth should be shared equally among all flows.
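Condition (1) can be checked numerically under the rate-deterministic model. The following is a toy check, in which we take the proportionality T_N ∝ √(a_N/p_N) as equality (the constant factor does not affect the comparison):

```python
import math

def relative_throughput(a, p):
    """Throughput under the rate-deterministic model of [18],
    up to a constant factor: T is proportional to sqrt(a / p)."""
    return math.sqrt(a / p)

# A standard-TCP-like flow: increase factor 1.0, loss rate 1%.
t_tcp = relative_throughput(1.0, 0.01)

# NF-TCP must drive sqrt(a_N / p_N) toward zero: either a tiny
# increase factor or a much higher marking/loss rate achieves it.
t_small_a = relative_throughput(0.001, 0.01)  # a_N -> 0
t_big_p   = relative_throughput(1.0, 0.9)     # p_N high

assert t_small_a < t_tcp and t_big_p < t_tcp
```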
3 NF-TCP Design
In this section we present the design of NF-TCP. We begin by describing how NF-TCP is able to be submissive to standard TCP by detecting congestion early and reliably. Next, we explain how NF-TCP uses a separate bandwidth estimation mechanism to utilize spare bandwidth aggressively during non-congestion periods.
3.1 Be Submissive to Standard TCP When Encountering Network Congestion
NF-TCP's network friendly congestion control is achieved by a congestion detection mechanism that detects incipient congestion earlier than standard TCP. NF-TCP exploits the availability of Active Queue Management (AQM) routers that are configured to use a lower threshold for marking or dropping packets belonging to NF-TCP flows. NF-TCP is designed to use the standard ECN bits and can, for example, use the low-priority DSCP code point [19] to identify an NF-TCP flow. The aim is to ensure that packets of network friendly applications do not contribute to queue build-up, which results in higher latencies for delay-sensitive traffic. The standard AQM mechanism is slightly modified to provide ECN-based feedback for NF-TCP on time-scales of an RTT. ECN-unaware routers can instead drop NF-TCP packets earlier to indicate the onset of congestion. Next we illustrate how a modified RED queue [20] is used for this purpose.
Modified RED queue for NF-TCP. The modified RED queue uses different parameters for NF-TCP than for standard TCP. To detect the onset of congestion as early as possible, while ensuring that we do not respond to truly short-term transients, we set the marking thresholds in the RED queue much lower than those for standard TCP. Fig. 1 illustrates the setting of the queue threshold values, and Table 1 lists the values used in our evaluation. The early marking, with ECN feedback to the source, enables early reaction to the onset of congestion.

Table 1. RED queue parameters used for evaluation

                      NF-TCP                                Standard TCP
Min-threshold (pkts)  75 (= 1% of queue size)               6800
Max-threshold (pkts)  3750 (= 50% of total queue size)      7500

Fig. 1. A router queue performing network friendly marking: marking probability vs. queue size, with the NF-TCP thresholds (Q_min^N, Q_max^N) set well below the standard TCP thresholds (Q_min^T, Q_max^T). [figure omitted]

The queue thresholds are based on the same mechanism as for a RED queue, except that the MinThreshold for NF-TCP flows (Q_min^N) is set much lower, and the MaxThreshold for NF-TCP flows (Q_max^N) is set to about half of the buffer size. This ensures that once the "average" queue (an exponentially weighted moving average, EWMA) begins to build up, NF-TCP packets are probabilistically marked (with probability p). All NF-TCP packets are marked or dropped when the average queue size exceeds the MaxThreshold, Q_max^N. Note that this mechanism uses a FIFO queue and is therefore different from the approach adopted by a priority queue [21]. It ensures that delay-insensitive applications receive timely feedback and are therefore able to become submissive on their own.
Congestion window decrease. On detecting congestion via network feedback, NF-TCP reduces its sending window to yield to higher priority applications. Note that an NF-TCP flow does not differentiate between the presence or absence of standard TCP and therefore always decreases its congestion window as follows:

    CONGESTION DETECTED: w ← w − b · w    (2)
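The two-threshold marking of the modified RED queue described in Section 3.1 can be sketched as follows. This is a minimal model: the function and parameter names are ours, the real RED queue computes the average queue as an EWMA, and we omit RED's count-based spacing of marks:

```python
def marking_probability(avg_queue, min_th, max_th, max_p=0.1):
    """RED-style marking probability with per-class thresholds.

    NF-TCP packets use a much lower min_th (75 pkts in Table 1) than
    standard TCP (6800 pkts), so they are marked while the average
    queue is still small.
    """
    if avg_queue < min_th:
        return 0.0
    if avg_queue >= max_th:
        return 1.0  # mark/drop everything above MaxThreshold
    # linear ramp between the two thresholds, as in classic RED
    return max_p * (avg_queue - min_th) / (max_th - min_th)

# Thresholds from Table 1 (in packets), at an average queue of 500:
p_nf  = marking_probability(500, min_th=75,   max_th=3750)
p_tcp = marking_probability(500, min_th=6800, max_th=7500)
assert p_nf > 0.0 and p_tcp == 0.0  # only NF-TCP packets are marked
```

With these thresholds an NF-TCP source sees ECN marks well before a standard TCP source sees any, which is exactly the early-congestion signal eq. (2) reacts to.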
With a small value of b, NF-TCP flows take longer to attain fairness among themselves; a high b, on the other hand, potentially results in low link utilization. Our evaluations use b = 12.5%, similar to the value proposed in [22]. It strikes a balance between being submissive to standard TCP, maintaining high link utilization, and achieving fairness when only NF-TCP flows are present.
3.2 Ability to Saturate Available Bandwidth as Fast as Possible in the Absence of Other TCP Flows
NF-TCP explores a novel combination of bandwidth measurement and congestion control. The bandwidth estimation mechanism guides the decisions of the congestion control framework on the appropriate timescale to opportunistically use spare bandwidth. This feature is especially applicable to a network friendly transport that needs to be both submissive during congestion and aggressive during non-congestion periods. NF-TCP uses the bandwidth estimate as a target value up to which the flow can increase its rate, as long as it has not received an ECN marking. This enables NF-TCP to be opportunistic in using available bandwidth, improving both its throughput and overall network utilization without causing congestion. The NF-TCP bandwidth estimation mechanism
separates the measurement process for obtaining the available bandwidth from the congestion control. Whenever an estimate is not available, NF-TCP falls back to the standard conservative increase of 1 packet per RTT, so as to achieve fairness among NF-TCP flows. Probing based on ECN (ProECN). We propose an ECN-complemented probing mechanism based on PathChirp [23]. Like PathChirp, ProECN uses a series of packets with exponentially decreasing inter-packet spacing to measure a wide range of available bandwidths. The sender sends this stream of packets to simulate an increasing sending rate and uses self-induced congestion to identify the available bandwidth. ProECN differs from PathChirp by using an ECN-complemented approach to measure the available bandwidth instead of depending only on increasing delay estimates. For this purpose we use a modified AQM queue that performs instantaneous marking instead of the traditional EWMA-based RED marking. The modified AQM queue identifies probe packets by the DSCP bits set in the header and marks them if the instantaneous queue size is greater than 1, to indicate self-induced congestion. RAPID [5] also uses a PathChirp-like mechanism, performing rate-based transmission in which all data packets are part of a continuous logical group of N probe packets; NF-TCP, on the other hand, uses ProECN only for probing and employs window-based transmission for the data packets. An NF-TCP flow generates two kinds of packets: normal data packets and probe packets. The bandwidth-estimation module starts after the first RTT, on receiving acknowledgments for the initial data packets, and only if no loss or ECN markings have been received. This ensures that newly starting flows do not contribute to congestion caused by the probes.
The N probe packets are sent with varying inter-packet spacing so that available bandwidth can be measured in the range MinRate to MaxRate, with the probe rates r_i given by:

    r_i = MinRate · SF^(i−1)    (3)

    MaxRate = MinRate · SF^(N−2)    (4)

MinRate is the minimum probe rate, MaxRate is the maximum probe rate and N is the number of probe packets. The ratio of successive packet inter-spacing times within a chirp is the spread factor SF. The estimated available bandwidth BWest (bps) is given by:

    BWest = MinRate · SF^(N−1),  if BWavail > MaxRate
    BWest = MinRate · SF^(N−k),  if MinRate < BWavail < MaxRate    (5)
    BWest = 0,                   if BWavail < MinRate

where k is the first packet in the series that arrives with an ECN marking and/or at a time greater than all the previous packets, and BWavail is the actual available bandwidth on the link. The NF-TCP bandwidth-estimation mechanism sends the probe packets with varying inter-packet gaps to emulate a range of sending rates. When the sending rate is higher
than the available bandwidth, the probe packets undergo self-induced congestion. The packets then have to wait in the queue and are thus ECN marked. The ECN marking acts as a reliable and early indicator of a packet having to wait in the queue, and therefore enables the bandwidth-estimation module to obtain a reliable estimate of the available bandwidth. The ECN marking complements the delay-based approach of PathChirp, since it exploits feedback from the intermediate routers instead of depending only on delay measurements, which can be noisy in low-latency networks. With this enhancement, a source can identify excursion segments (periods of increasing delays) more accurately. The analysis and heuristics used to account for bursty traffic are similar to those described in [23]. Dynamic probing rate adjustment. NF-TCP's bandwidth estimation module dynamically adjusts the measurement probe rate to obtain an estimate of the available bandwidth quickly while limiting the overhead introduced in the network. This ensures that the probing mechanism can function efficiently in networks ranging from low-BDP to high-BDP. By design, the minimum probe rate is currently set to 1 Mbps, to prevent probing in networks whose available capacity is less than 1 Mbps. Similar to RAPID [5], the bandwidth-estimation module starts probing in a slow-start manner, beginning with 2 packets and doubling the number of packets each round. The slow-start phase is exited when the size of the probe train reaches N or when the estimated bandwidth is lower than MaxRate. On exiting slow-start, the probing transitions into a dynamic probing mode, in which the average sending rate (r_avg) is set to α · BWest. The probe packets of a chirp are limited in quantity for a particular probe event, and SF is set to a fixed value (for our evaluations we use N = 15 and SF = 1.2).
On receiving an estimate of the available bandwidth, the probing mechanism is restarted after a uniformly distributed time period whose mean equals the baseRTT. The new MinRate is calculated according to the last BWest and is given by:

    MinRate = (SF^(N−1) − 1) / ((N − 1)(SF − 1) · SF^(N−2)) · r_avg    (6)
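Equations (3)-(6) can be collected into a small sketch. The names are ours, the excursion-detection heuristics of [23] are omitted, and the handling of an unmarked or immediately-marked chirp is our reading of Eq. (5):

```python
def chirp_rates(min_rate, sf, n):
    """Probe rates of one chirp: r_i = MinRate * SF**(i-1)  (Eq. 3).
    N probe packets give N-1 inter-packet gaps, so the highest rate
    is MaxRate = MinRate * SF**(N-2)  (Eq. 4)."""
    return [min_rate * sf ** (i - 1) for i in range(1, n)]

def estimate_bandwidth(min_rate, sf, n, k):
    """Eq. (5). k is the index of the first probe that arrives
    ECN-marked (or later than all previous probes); k is None when
    no probe is marked (available bandwidth above MaxRate)."""
    if k is None:
        return min_rate * sf ** (n - 1)
    if k <= 1:  # congestion from the very first probe: below MinRate
        return 0.0
    return min_rate * sf ** (n - k)

def next_min_rate(r_avg, sf, n):
    """Eq. (6): recompute MinRate so the chirp averages r_avg."""
    return (sf ** (n - 1) - 1) / ((n - 1) * (sf - 1) * sf ** (n - 2)) * r_avg

rates = chirp_rates(1e6, 1.2, 15)  # the paper uses N = 15, SF = 1.2
assert len(rates) == 14 and rates[0] == 1e6
```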
Congestion window increase. NF-TCP is designed to be aggressive during non-congestion periods, in order to be opportunistic in using available bandwidth. NF-TCP uses the bandwidth estimate to determine its rate of increase, giving it an informed aggressive increase mechanism. This is unlike approaches that use an aggressive increase for large BDP networks, which potentially cause congestion before backing off; our approach enables NF-TCP to be truly friendly to existing transport connections. The estimate allows NF-TCP flows to aggressively utilize a certain percentage of the remaining available bandwidth. The increase is limited to a factor α of the estimated available bandwidth to allow for inaccuracies in the measured estimate as well as differences in time scales. Based on our evaluations, we recommend α = 0.5. On receiving an estimate of the available bandwidth (BWest), NF-TCP switches to an aggressive increase phase in which the congestion window is adjusted as follows:

    ACK: w ← w + (α · BWest · RTT) / (w · packet_size),  if BWest > 0    (7)

    ACK: w ← w + 1/w,  if BWest = 0    (8)

where ACK stands for an acknowledgment received, RTT is the round-trip time in seconds and packet_size is the average packet size in bits.
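The window dynamics of Section 3 (Eqs. (2), (7) and (8)) can be summarized in a short sketch. The function names and the driver values at the bottom are ours, not the paper's implementation:

```python
def on_congestion(w, b=0.125):
    """Eq. (2): multiplicative decrease with b = 12.5%."""
    return w - b * w

def on_ack(w, bw_est, rtt, packet_size_bits, alpha=0.5):
    """Eqs. (7)-(8): informed aggressive increase toward a fraction
    alpha of the estimated available bandwidth when an estimate
    exists, else the standard 1-packet-per-RTT growth."""
    if bw_est > 0:
        return w + alpha * bw_est * rtt / (w * packet_size_bits)
    return w + 1.0 / w

w = 100.0
# Uncongested: a 100 Mbps estimate, 100 ms RTT, 1000-byte packets.
w = on_ack(w, bw_est=100e6, rtt=0.1, packet_size_bits=8000)
# ECN feedback arrives: yield to standard TCP.
w = on_congestion(w)
```

Note that over one RTT (w acknowledgments), Eq. (7) grows the window by roughly α · BWest · RTT / packet_size packets, i.e. half the estimated spare capacity with α = 0.5.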
4 Performance Evaluation We implemented NF-TCP on Linux kernel 2.6.31 and used the NS-2 TCP-Linux tool [24, 17] to import our Linux-based implementation of NF-TCP, as well as the existing TCP Reno and TCP-LP code, into ns-2. This enabled us to perform tests across a wide range of scales, topologies and simulation times. It also allowed us to compare NF-TCP against other candidate proposals, such as LEDBAT and RAPID, which we implemented in ns-2. We start with a single-bottleneck scenario and then illustrate more sophisticated scenarios with RTT heterogeneity and multiple bottlenecks with several flows (Fig. 3). The bottleneck-link routers (Routers 1, 2 and 3) maintain a modified RED queue with different threshold values for NF-TCP flows, and a normal RED queue for TCP flows, as shown in Table 1. Routers have a buffer capacity equal to the link BDP. We use FTP traffic and generate a SACK for every received data packet. The packet size is 1000 bytes (including the IP header) and the initial ssthresh for Reno/SACK is set to 100 packets. The bottleneck capacity is 600 Mbps and the RTT is 100 ms. We refer readers to [25] for results illustrating the performance of NF-TCP in networks with different BDPs. 4.1 Comparison with Other Approaches
Fig. 2. Single bottleneck: instantaneous throughput of a candidate flow in the presence of a TCP flow. (a) NF-TCP with TCP; (b) TCP-LP with TCP; (c) LEDBAT with TCP. [figure omitted]

In this section, we evaluate NF-TCP and other candidate approaches. We first focus on the performance of a single candidate (NF-TCP/LEDBAT/TCP-LP/RAPID) flow in the presence of a single standard TCP flow. We use the topology shown in Fig. 3, with the candidate and reference flows traversing the bottleneck link at Router R1. Fig. 2 illustrates the instantaneous throughput of the network friendly flows in the presence of a competing standard TCP flow. Fig. 2(a) shows that the NF-TCP flow is able to opportunistically utilize the bandwidth in the period from 0-500 s with the support of its ProECN bandwidth estimation. NF-TCP is comparable in its aggressive
increase phase to the most aggressive of the alternatives, RAPID. RAPID was designed for use in high-BDP networks and is hence aggressive in its startup. On the other hand, TCP-LP (Fig. 2(b)) and LEDBAT (Fig. 2(c)) are much slower in their increase and hence are unable to fully utilize the uncongested network during this time period. From 500 s onwards, as TCP increases its demand, NF-TCP quickly reduces its load as a result of ECN marking, allowing TCP to grow its bandwidth as much as it desires. NF-TCP is thus submissive to TCP, which is not impacted after time t = 1200 s. In this particular case, RAPID is also submissive. Again, TCP-LP and LEDBAT are much less submissive, yielding bandwidth to TCP more slowly. Further, once TCP nearly attains its full window at 1200 s, TCP-LP and LEDBAT impact the TCP flow to different extents. In fact, when co-existing with LEDBAT, TCP continually experiences significant loss and reduced throughput, and thus incurs both additional delay and lower throughput. This experiment thus demonstrates both capabilities of NF-TCP: opportunistic use of available bandwidth, and submissiveness in the presence of TCP. We also evaluated the performance of a LEDBAT flow in the presence of a standard TCP flow. LEDBAT flows are network friendly to standard TCP flows, as reported in [26], but only in low-BDP scenarios. When the bottleneck bandwidth is higher (i.e., 200 Mbps and more) and the buffer size is set to the link BDP (more realistic for higher-speed links), our results show that LEDBAT flows are no longer friendly to TCP flows. Fig. 2(c) demonstrates that LEDBAT is more aggressive than NF-TCP (and TCP-LP) during congestion periods. Fig. 5(a) shows the bottleneck queue building up and, as it does, the base delay stored by LEDBAT increasing (Fig. 5(b)). This is due to the resetting of the baseRTT every 2-10 minutes, and it results in LEDBAT increasing its throughput.
In short, the results demonstrate that LEDBAT does not satisfy the requirements of a network friendly protocol, namely maintaining low queues and being submissive to TCP, over a reasonably wide range of system parameters.

Fig. 3. Multi-hop topology with multiple bottlenecks (labels: R1, TCP source, NF-TCP source). [figure omitted]

4.2 Fairness among NF-TCP Flows
To study the fairness among NF-TCP flows, we choose the following two scenarios: (a) 5 NF-TCP flows started at the same time; (b) 5 NF-TCP flows starting one after another, every 50 s. Fig. 6(a) and Fig. 6(b) show that the 5 NF-TCP flows are fair to one another. Fig. 6(b) also shows that since the NF-TCP flow that started at time zero (NF-TCP0) had no competing flow, it was able to utilize the available bandwidth completely, and then yield its share of the bandwidth when the other flows start. A decrease factor of 12.5%, instead of the standard 50%, means it takes longer to achieve fairness but ensures that the overall link utilization remains high. With different RTTs, NF-TCP is also able to achieve fairness similar to that achieved by standard TCP under the same scenario.

4.3 Candidate Flows with Multihop, Varying RTTs
We now look at more realistic scenarios in which the TCP flow traverses multiple hops over several congested routers, competing with flows that have different RTTs. For this purpose, we set up the testbed shown in Fig. 3 with three 'bottleneck' links. The RTT on the longest path, from R1 to R4, is three times the RTT on the shorter single-hop paths. We perform evaluations with a TCP flow and competing candidate flows traversing one of R1, R2 and R3. The candidate flows are started at 0 s and the TCP flow at 500 s. Fig. 4(a) illustrates that although NF-TCP has a much shorter RTT, it is friendly towards the TCP flow while also opportunistically utilizing the spare bandwidth. RAPID (Fig. 4(b)) and LEDBAT (Fig. 4(c)) are both not submissive. For RAPID, this is due to the combination of its aggressive nature and its complete reliance on delay-based probing to measure available bandwidth.

Fig. 4. Multiple bottlenecks: instantaneous throughput of candidate flows in the presence of a TCP flow (RTT of candidate flows = 1/3 RTT of TCP flow). (a) NF-TCP vs TCP; (b) RAPID vs TCP; (c) LEDBAT vs TCP. [figure omitted]

Fig. 5. LEDBAT is unfriendly: causes substantial delay. (a) Bottleneck queue; (b) Delay measurements (base, actual and perceived delay). [figure omitted]

Fig. 6. Fairness among NF-TCP flows in a single hop scenario. (a) All flows begin together; (b) Staggered start. [figure omitted]

4.4 NF-TCP in the Presence of Standard TCP
ProECN dynamic bandwidth estimation. We evaluate the performance of the ProECN bandwidth estimation tool within NF-TCP with a varying measurement range. Probes are sent about once per 2 RTTs. Fig. 7(c) illustrates that ProECN provides NF-TCP with an estimate of the available bandwidth while incurring an average probe throughput of only about 0.6 Mbps. The minimum rate that can be probed is limited to 1 Mbps, to prevent probing when the network capacity is less than 1 Mbps. With ProECN, NF-TCP should ideally switch off during congestion periods; our experiments confirm this: in the period from 1800-2200 s, and from 2700 s onwards, NF-TCP becomes completely submissive.
NF-TCP vs UDP cross traffic. We evaluate the ability of NF-TCP to opportunistically utilize available bandwidth in the presence of UDP cross traffic (with rates generated from a Poisson distribution). Fig. 7(b) illustrates that NF-TCP opportunistically gets close to the available bandwidth and then resorts to a slower increase. This results in better link utilization and also allows it to be friendly to other flows.

Fig. 7. Need for bandwidth estimation and the low overhead it produces. (a) NF-TCP without bandwidth estimation; (b) NF-TCP with Poisson cross traffic; (c) Probe overhead. [figure omitted]
5 Related Work Given the emergence of P2P-like delay-insensitive traffic, recent developments have attempted to provide means to support such applications. P4P [27] and the IETF Application-Layer Traffic Optimization (ALTO) [28] protocol exploit the use of dedicated servers to assist in the selection of peers, to avoid overloading any specific part
of the network. Such application-layer solutions, when available, could complement NF-TCP, since they function on different time-scales. To relax the dependency on dedicated servers and to react to instantaneous congestion levels in the network, delay-based transport layer mechanisms such as TCP-LP [4] and LEDBAT [3] have been developed. NF-TCP differs from LEDBAT and TCP-LP in that it depends on ECN feedback for early congestion notification and aggressively utilizes bandwidth during non-congestion periods. Another aspect is the application of bandwidth measurement mechanisms to aid congestion control, as in RAPID [5]. However, to obtain an accurate measurement of the available bandwidth, RAPID is tightly coupled with PathChirp-based probing [23]. TCP Westwood [29] uses an agile-probing bandwidth estimation mechanism to repeatedly reset the ssthresh value. In contrast to RAPID and TCP Westwood, NF-TCP employs a probing-based measurement scheme in addition to a window-based transmission of data packets; there is a clear separation of the lightweight measurement framework from data transmission and flow control. Since delay-based schemes are known to be prone to measurement errors, NF-TCP introduces ECN-complemented probing instead of the pure delay-based probing approach of RAPID. DC-TCP [30] is a congestion control proposal developed for data center environments. Both DC-TCP and NF-TCP aim to maintain low buffers, albeit by different mechanisms: DC-TCP requires the intermediate queues to perform instantaneous marking, whereas NF-TCP uses an EWMA-based mechanism that allows it to function better in a heterogeneous and dynamic environment. Moreover, DC-TCP does not support an aggressive start and is not designed to be submissive to TCP. Approaches such as VCP [31], MLCP [32], BMCC [33], XCP [13], RCP [34], and the rate feedback in Quick-Start [12] rely on intermediate routers providing more extensive feedback and, more importantly, are not meant to be submissive to other flows.
6 Summary In this paper, we presented a network friendly TCP variant, NF-TCP, that allows delay-insensitive applications to be submissive to standard TCP flows during congestion periods. Additionally, NF-TCP exploits a novel combination of adaptive measurement of available bandwidth and traditional window-based congestion control to efficiently utilize network capacity. Our extensive evaluations illustrate that NF-TCP meets the requirements of a network friendly transport protocol and outperforms other candidate approaches in a wide range of network scenarios, while contributing very little to the queueing at bottlenecks. We are currently experimenting with NF-TCP in a real testbed of Linux routers and PCs to further demonstrate its system performance. We believe that NF-TCP is viable and practical as an efficient network friendly protocol for delay-insensitive applications.
Acknowledgements We would like to thank Fabian Glaser for his help with the implementation and the anonymous reviewers for their insightful comments.
References
1. Internet Study (2008/2009), http://www.ipoque.com/resources/internet-studies/internet-study-2008_2009
2. Moncaster, T., Krug, L., Menth, M., Araujo, J., Blake, S., Woundy, R.: The Need for Congestion Exposure in the Internet. IETF, Internet-Draft draft-moncaster-conex-problem-00 (March 2010) (work in progress)
3. Shalunov, S., Hazel, G.: Low Extra Delay Background Transport (LEDBAT). IETF, Internet-Draft draft-ietf-ledbat-congestion-00.txt (July 2010) (work in progress)
4. Kuzmanovic, A., Knightly, E.: TCP-LP: A distributed algorithm for low priority data transfer. In: Proc. INFOCOM (2003)
5. Konda, V., Kaur, J.: RAPID: Shrinking the Congestion-Control Timescale. In: Proc. INFOCOM (2009)
6. Biaz, S., Vaidya, N.H.: Is the round-trip time correlated with the number of packets in flight? In: Proc. IMC (2003)
7. Jain, R.: A delay-based approach for congestion avoidance in interconnected heterogeneous computer networks. SIGCOMM Comput. Commun. Rev. (1989)
8. Floyd, S.: HighSpeed TCP for Large Congestion Windows. RFC 3649 (December 2003)
9. Jin, C., Wei, D., Low, S.: FAST TCP: Motivation, architecture, algorithms, performance. In: Proc. INFOCOM (2004)
10. Tan, K., Song, J.: A compound TCP approach for high-speed and long distance networks. In: Proc. INFOCOM (2006)
11. Ha, S., Rhee, I., Xu, L.: CUBIC: A new TCP-friendly high-speed TCP variant. SIGOPS Oper. Syst. Rev. (2008)
12. Floyd, S., Allman, M., Jain, A., Sarolahti, P.: Quick-Start for TCP and IP. RFC 4782 (January 2007)
13. Katabi, D., Handley, M., Rohrs, C.: Congestion control for high bandwidth-delay product networks. In: Proc. SIGCOMM (2002)
14. Stewart, L., Armitage, G., Huebner, A.: Collateral damage: The impact of optimised TCP variants on real-time traffic latency in consumer broadband environments. In: Proc. Networking (2009)
15. Ramakrishnan, K., Floyd, S., Black, D.: The Addition of Explicit Congestion Notification (ECN) to IP. RFC 3168 (September 2001)
16. Arumaithurai, M., Fu, X., Ramakrishnan, K.K.: NF-TCP: Network Friendly TCP. In: Proc. LANMAN (2010)
17. A Linux TCP implementation for NS2, http://netlab.caltech.edu/projects/ns2tcplinux/ns2linux/index.html
18. Baccelli, F., Carofiglio, G., Foss, S.: Proxy Caching in Split TCP: Dynamics, Stability and Tail Asymptotics. In: Proc. INFOCOM (2008)
19. Babiarz, J., Chan, K., Baker, F.: Configuration Guidelines for DiffServ Service Classes. RFC 4594 (August 2006)
20. Floyd, S., Jacobson, V.: Random Early Detection Gateways for Congestion Avoidance. IEEE/ACM Trans. on Netw. (January 1993)
21. Demers, A., Keshav, S., Shenker, S.: Analysis and simulation of a fair queueing algorithm. In: Proc. SIGCOMM (1989)
22. Ramakrishnan, K.K., Jain, R.: A binary feedback scheme for congestion avoidance in computer networks with a connectionless network layer. In: Proc. SIGCOMM (1988)
23. Ribeiro, V.J., Riedi, R.H., Baraniuk, R.G., Navratil, J., Cottrell, L.: PathChirp: Efficient Available Bandwidth Estimation for Network Paths. In: Proc. Passive and Active Measurement Workshop (2003)
24. Wei, D.X., Cao, P.: NS-2 TCP-Linux: An NS-2 TCP implementation with congestion control algorithms from Linux. In: Proc. WNS2 (2006)
25. Network friendly transport for delay-insensitive background traffic, http://www.net.informatik.uni-goettingen.de/research_projects/nft
26. Rossi, D., et al.: News from the Internet congestion control world. CoRR abs/0908.0812 (2009)
27. Xie, H., Yang, R., Krishnamurthy, A., Liu, Y., Silberschatz, A.: P4P: Provider portal for (P2P) applications. In: Proc. SIGCOMM (2008)
28. Seedorf, J., Burger, E.: Application-Layer Traffic Optimization (ALTO) Problem Statement. RFC 5693 (October 2009)
29. Yamada, K., Wang, R., Sanadidi, M.Y., Gerla, M.: TCP Westwood with agile probing: Dealing with dynamic, large, leaky pipes. In: Proc. ICC (2004)
30. Alizadeh, M., Greenberg, A., Maltz, D., Padhye, J., Patel, P., Prabhakar, B., Sengupta, S., Sridharan, M.: DCTCP: Efficient Packet Transport for the Commoditized Data Center. In: Proc. SIGCOMM (2010)
31. Xia, Y., Subramanian, L., Stoica, I., Kalyanaraman, S.: One More Bit is Enough. IEEE/ACM Transactions on Networking (2008)
32. Qazi, I.A., Znati, T.: On the Design of Load Factor based Congestion Control Protocols for Next-Generation Networks. In: Proc. INFOCOM (2008)
33. Qazi, I.A., Znati, T., Andrew, L.L.H.: Congestion Control using Efficient Explicit Feedback. In: Proc. INFOCOM (2009)
34. Dukkipati, N., McKeown, N., Fraser, G.: RCP-AC: Congestion Control to make flows complete quickly in any environment. In: Proc. High-Speed Networking Workshop (2006)
Impact of Queueing Delay Estimation Error on Equilibrium and Its Stability Corentin Briat, Emre A. Yavuz, and Gunnar Karlsson ACCESS Linnaeus Center, KTH, SE-100 44 Stockholm, Sweden {cbriat,emreya,gk}@kth.se http://www.access.kth.se
Abstract. Delay-based transmission control protocols need to separate round-trip time (RTT) measurements into their constituent parts: the propagation delay and the queueing delay. We consider two means for this: the first is to take the propagation delay as the minimum observed RTT value; the second is to measure the queueing delay at the routers and feed it back to the sources. We choose FAST-TCP as a representative delay-based transmission control protocol and study the impact of errors in delay knowledge on its performance. We show that while the first method destroys fairness and the uniqueness of the equilibrium, the stability of the protocol can easily be obtained by tuning the protocol parameters appropriately. Even though the second technique preserves fairness and the uniqueness of the equilibrium point, we show that unavoidable oscillations can occur around the equilibrium point. Keywords: Congestion control; FAST-TCP; time-delay systems; fairness; stability.
1 Introduction
Most recent developments for the internet have concerned delay-based congestion control and its instantiation in the form of FAST-TCP [22]. When control is based on a continuous state variable, rather than on the binary signal of a packet drop, it is possible to get better performance in terms of shorter queues and lower losses, both resulting in lower end-to-end delay for a transfer. We have shown in a previous work that knowledge of both queueing and propagation delays is necessary and sufficient to obtain stability, fairness, and efficiency in distributed congestion control, such as TCP [18]. In this paper, we study how these measures are obtained and the impact any imperfection could have on the control performance. Motivation. The motivation for studying congestion control relates to its importance for network operation. Mission-critical information systems, for instance supervisory control and data acquisition systems used for controlling the power
Corresponding author.
J. Domingo-Pascual et al. (Eds.): NETWORKING 2011, Part II, LNCS 6641, pp. 356–367, 2011. c IFIP International Federation for Information Processing 2011
Impact of Queueing Delay Estimation Error Equilibrium
357
grid, are increasingly relying on internet communication for both system status data and control commands. The trend to port such systems from proprietary low-speed data networks to the internet is driven by the ready availability of higher capacity and lower delay that improve system operation (as in the smart grid); the cost of providing similar connection quality over proprietary networks might not be justifiable for competitive reasons. A network without congestion control may still be efficient given perfect error handling in the form of selective-repeat ARQ or erasure coding, as shown in a recent work by Bonald et al. [1]. The proviso is that one of two conditions must be met: the first is that the capacities of access links are small with respect to the shared links, in which case the efficiency is high for drop-tail FIFO queues; the second is the use of a fair-drop policy in routers, in which case the sharing will also be fair. Noting the increase in access-link capacities, owing to commercial offerings of 100 Mb/s and higher-rate DSL connections and to fiber to the home being increasingly installed, we conclude that a fair-drop mechanism appears necessary before daring to dismantle congestion control. An additional disadvantage of unregulated elastic flows is that the sharing with inelastic flows is obliterated, while it is possible to share resources between the two types by qualitative differentiation of traffic controls (rate control for elastic flows and admission control for inelastic flows) [13]. Hence, we believe congestion control remains a vital function for sound internet operation for the foreseeable future.

Contributions. This paper reports new analytical results on the performance of delay-based congestion control. We show how the estimation of queueing delay from samples of round-trip delay may cause unfairness among competing flows, and instability if the protocol tuning parameters are not chosen accordingly.
The alternative is to measure the queueing delay in the routers and feed back those measures to all senders, ensuring fairness since all act on the same information. In such a case, the error at equilibrium is the same for all users, leading to a unique and fair equilibrium point. However, due to the quantization of the measurements, we prove that the network will inevitably exhibit an oscillating behavior. Our analysis is based on time-delay systems theory and involves large polynomials and complex transcendental equations, which are typically unsolvable analytically except in very simple cases. Thus, we have considered only the single-user/single-buffer case for the stability analysis. It is however important to stress that our results may be scaled to a more general context, at least qualitatively, since issues observed for the single-user/single-buffer topology are expected to appear also in more complex topologies.

Outline of paper. The paper has the following outline. Section 2 covers the related work and in Section 3 we give a brief overview of FAST-TCP. Section 4 presents the model for delay-based congestion avoidance protocols. In Section 5 we analyze the impact of the queuing delay estimation error on the equilibrium and its stability. Section 6 provides a similar analysis, yet considers the quantization error when measuring the queuing delay. We conclude the work in Section 7.
C. Briat, E.A. Yavuz, and G. Karlsson

2 Related Work
The congestion control algorithm implemented in TCP Reno [8] has performed well and has gone through several enhancements since its introduction. It is however well known that, as the bandwidth-delay product continues to grow, it will become a performance bottleneck itself. The poor performance of TCP Reno in such networks is due to the slowness of its linear increase of one packet per round-trip time, the severity of the multiplicative decrease per loss event, the difficulty of maintaining a large average congestion window, which requires an extremely small equilibrium loss probability, and the use of a binary congestion signal based on packet loss, which causes oscillations. Delay-based congestion control has been proposed in [10,21,3] to overcome these difficulties. Control protocols based on delay-based congestion avoidance (DCA) algorithms have been shown to outperform protocols whose congestion avoidance relies on packet loss as the congestion indicator, with respect to network efficiency, stability and latency [22,10,2]. Queuing delay can be estimated more accurately than loss since packet losses are rare events, and information on queuing delay has a finer resolution than what loss samples provide. In [18], several objectives (stability, fairness, efficiency) are considered and necessary conditions for a delay-based protocol to achieve them are provided. It is shown that knowledge of both the (aggregate) queuing delay and the constant propagation delay is necessary to ensure both fairness and efficiency. DCA protocols react to shifts in the measured RTT under the assumption that a shift indicates either congestion or excess capacity in the bottleneck link. Since these protocols try to sustain buffer occupancy at an approximately constant level, a new flow can easily overestimate the propagation delay by mistakenly including a constant queueing delay in the minimum RTT measured when it starts.
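To make the "extremely small equilibrium loss probability" concrete, the following sketch (our code, not from the paper) uses the well-known Mathis et al. approximation, rate ≈ (MSS/RTT)·sqrt(3/2)/sqrt(p), to compute the loss probability p that TCP Reno would need in order to sustain a given rate.

```python
# Illustrative sketch (not from the paper): required equilibrium loss
# probability for TCP Reno under the Mathis approximation.
def required_loss_probability(rate_bps, mss_bytes, rtt_s):
    """Loss probability needed for Reno to average `rate_bps`."""
    mss_bits = mss_bytes * 8
    # rate = (mss/rtt) * sqrt(1.5) / sqrt(p)  =>  p = 1.5 * (mss/(rtt*rate))^2
    return 1.5 * (mss_bits / (rtt_s * rate_bps)) ** 2

# Example: 1 Gb/s with 1500-byte packets and 100 ms RTT needs p ~ 2e-8,
# i.e. roughly one loss per fifty million packets.
p = required_loss_probability(1e9, 1500, 0.1)
```

Such loss rates are far below what real links and buffers can guarantee, which is one motivation for using queuing delay as the congestion signal instead.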
This overestimation eventually causes unfairness among contending flows, known as the persistent congestion problem, as the sender assumes the link to be less congested than it actually is. Many methods have been proposed to correct this problem [11,12,5,20], yet almost all of them require intermediate routers to be upgraded and thus redeployed. A few other attempts to solve the problem without support from the queuing management mechanisms in routers have been presented in [6,17], under certain limitations such as sufficiently large and homogeneous propagation delays, not too many simultaneous flows, and no uncontrolled cross traffic.
3 A DCA Protocol: FAST-TCP
FAST-TCP is a representative DCA protocol that achieves higher throughput, lower latency and fewer packet drops compared to previous versions of TCP. Loss-based protocols such as TCP Reno and its variants drive the network into congestion so that they can receive the feedback they need in order to adjust the size of their congestion windows. In contrast, TCP protocols with delay-based congestion avoidance mechanisms, such as TCP Vegas and
FAST-TCP, keep an estimate of the round-trip propagation delay, taken as the minimum RTT observed throughout a connection, and track changes in the queuing delay by measuring the RTT continuously. The queue length is estimated from the difference between the observed RTT and the estimated propagation delay. FAST-TCP is an enhanced version of TCP Vegas that does not penalize flows with large bandwidth-delay products and that has good convergence properties. FAST-TCP also differs from TCP Vegas in the way the rate is adjusted when the number of enqueued packets is too small or too large: TCP Vegas makes fixed-size adjustments, independently of how far it is from the equilibrium point, whereas FAST-TCP takes larger steps when the system is far from equilibrium and smaller steps otherwise, to improve the speed of convergence and stability. FAST-TCP implements a learning procedure consisting of estimating the propagation delay by the minimum observed RTT (plus some filtering), and this has been modelled accurately in [9]. An additional feature of FAST-TCP is the independence of the equilibrium from the propagation delays; hence sources with large propagation delays are not penalized, as is usually the case for classical implementations of TCP. The congestion window update law explicitly depends on both the queuing and the propagation delays [22] in order to achieve fairness even in the case of heterogeneous propagation delays.
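The window update described above can be sketched as follows (our code, following the update law published in [22]; parameter values are illustrative). baseRTT plays the role of the estimated propagation delay and alpha is the target number of enqueued packets.

```python
# Sketch of the periodic FAST-TCP window update from [22] (variable names ours).
def fast_window_update(w, base_rtt, rtt, alpha=200.0, gamma=0.5):
    """One FAST-TCP congestion-window update step."""
    target = (base_rtt / rtt) * w + alpha  # step size shrinks near equilibrium
    return min(2.0 * w, (1.0 - gamma) * w + gamma * target)

# At equilibrium w = (base_rtt/rtt)*w + alpha, i.e. about alpha packets
# are kept queued. With base_rtt = 24 ms and rtt = 40 ms the fixed point
# is w = alpha / (1 - 0.6) = 500 packets.
w = 1000.0
for _ in range(100):
    w = fast_window_update(w, base_rtt=0.024, rtt=0.040)
```

Note how the step taken is proportional to the distance from the target, which is exactly the "larger steps far from equilibrium" behavior contrasted with Vegas above.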
4 Analysis of the Ideal Case
In this section some preliminary results from [4] are recalled. We assume that both propagation and queuing delays are perfectly known (ideal case). We consider the following continuous-time fluid-flow nonlinear model, very often used to analyze the behavior of a single-bottleneck network (with unlimited queue length) shared by N users using FAST-TCP [9,4]:

τ̇(t) = (1/c) Σ_{i=1}^{N} x_i(t − T_i^f)/(τ(t) + T_i) + δ(t) − 1    (1)

ẋ_i(t) = γ ( −τ(g(t − T_i^b))/(T_i + τ(g(t − T_i^b))) x_i(t) + α_i ),  i = 1, . . . , N    (2)
where x_i, τ, c, T_i = T_i^f + T_i^b, T_i^f and T_i^b are the window size of user i, the queuing delay, the buffer output capacity, the total constant propagation delay of source i, and the forward and backward constant propagation delays of source i, respectively. The disturbance term δ represents the normalized cross-traffic, modeling unregulated flows and non-FAST-TCP flows such as the usual TCP. The terms γ and α_i are protocol tuning parameters: the first one acts on the speed of reaction of the protocol (the bandwidth, in the control-theoretic sense) and α_i is the desired number of enqueued packets at equilibrium. The function g(·) is the inverse of f(t) = t + τ(t), as defined in [4], and allows one to
write the overall network rigorously as a nonlinear time-delay system with state-dependent delay. The unique equilibrium point of this model is given by:

τ* = Σ_i α_i / (c(1 − δ*)),  x_i* = α_i (1 + T_i/τ*)  and  φ_i* = x_i*/(T_i + τ*) = α_i/τ*    (3)
where φ_i* denotes the flow of source i at equilibrium, i = 1, . . . , N. From the above result we can see that the equilibrium is both proportionally fair and efficient [19,18]. The equilibrium flows do not depend on the propagation delays, hence sources with large propagation delays are not penalized. Since the theoretical local/global stability analysis of the nonlinear model (1)-(2) is a very difficult, still open problem, we restrict the analysis to the single-source case (i.e. N = 1). Using results from time-delay systems theory [7,16,15,4] and robust analysis [23,7], the following theorem is proved in [4]:

Theorem 1 (Network Local Stability [4]). Let us consider the network model (1)-(2) with a single user. Then the equilibrium point (x*, τ*) is
1. locally delay-independent stable if and only if τ* > T.
2. locally delay-dependent stable if τ* < T and τ*(T − 1) + T² ≤ 0.
3. locally delay-dependent stable if τ* < T, τ*(T − 1) + T² > 0 and γ < 1/(τ*(T − 1) + T²).

In the subsequent sections, the impact of imperfect knowledge of the queuing delay on the overall network stability will be studied.
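The equilibrium (3) and the case analysis of Theorem 1 are easy to evaluate numerically. The following sketch (our code, with assumed parameter values; the conditions are implemented exactly as stated in the theorem) computes the equilibrium and the largest admissible γ for a single source.

```python
# Sketch (our code): equilibrium (3) and the stability classification of
# Theorem 1 for the single-source case. Parameter values are illustrative.
def equilibrium(alphas, c, delta, Ts):
    tau = sum(alphas) / (c * (1.0 - delta))            # queuing delay, eq. (3)
    xs = [a * (1.0 + T / tau) for a, T in zip(alphas, Ts)]
    phis = [a / tau for a in alphas]                   # flows, independent of Ts
    return tau, xs, phis

def max_gamma(tau, T):
    """Largest gamma guaranteeing local stability per Theorem 1 (single user)."""
    if tau > T:
        return float('inf')                            # case 1: delay-independent
    if tau * (T - 1.0) + T ** 2 <= 0.0:
        return float('inf')                            # case 2: any gamma works
    return 1.0 / (tau * (T - 1.0) + T ** 2)            # case 3

# Example: alpha = 200 pkts, c = 12500 pkts/s, no cross-traffic, T = 24 ms.
tau, xs, phis = equilibrium([200.0], c=12500.0, delta=0.0, Ts=[0.024])
```

With these values τ* = 16 ms, x* = 500 packets and φ* = 12500 pkts/s, i.e. the single source fills the link.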
5 Learning the Queuing Delay
In the FAST-TCP protocol implementation, the propagation delays are estimated as the minimal observed RTTs: T̂_i(t) := inf_{s∈[t_i,t]} {RTT_i(s)}, where the RTT is RTT_i(t) := T_i + τ(g(t − T_i^b)), T̂_i(t) is the estimated propagation delay for source i at time t, and t_i is the arrival time of source i in the network. The sources thus learn their propagation delays through an iterative process using the RTT measurements. The actual advantage of such a procedure lies in its simplicity: only the senders' routines need to be modified, while the entire infrastructure (routers/servers) remains unchanged. However, depending on the scenario, the sources may be unable to estimate their propagation delay accurately, resulting in an underestimation of the queueing delays and a loss of fairness. The only way to observe the actual propagation delay is to communicate when the queueing delay is 0; however, DCA protocols have an antagonistic effect since they strive to maintain a non-zero queue for efficiency and flow fairness. In order to take the learning process into account in the analysis, the protocol model (2) is refined to

ẋ_i(t) = γ ( −x_i(t) + (T_i + ε_i(t))/(τ(g(t − T_i^b)) + T_i) x_i(t) + α_i )    (4)
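The learning procedure above reduces to keeping a running minimum of the RTT samples. The following sketch (our code, with invented numbers) shows how a source that never observes an empty queue ends up with a persistent error ε equal to the minimum queuing delay it ever saw.

```python
# Sketch (our code) of the min-RTT learning procedure described above.
class PropagationDelayEstimator:
    def __init__(self):
        self.T_hat = float('inf')      # estimated propagation delay
    def observe(self, rtt):
        self.T_hat = min(self.T_hat, rtt)
        return self.T_hat

T = 0.024                              # true propagation delay (s), assumed
est = PropagationDelayEstimator()
# The queue never empties: queuing delay stays >= 16 ms in these samples.
for q in [0.016, 0.020, 0.018]:
    est.observe(T + q)
epsilon = est.T_hat - T                # persistent error = min queuing delay seen
```

Here ε = 16 ms: the source mistakes 16 ms of queuing delay for propagation delay, exactly the effect analyzed next.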
where the learning errors ε_i(t) are defined by ε_i(t) = T̂_i(t) − T_i = inf_{s∈[t_i,t]} τ(g(s − T_i^b)). Since ε_i(t) can be anything depending on the scenario, we rather focus on asymptotic properties (equilibrium points and local stability) of the above protocol model for any possible values of the errors at equilibrium.

5.1 Impact on the Equilibrium Point
Solving for the equilibrium points of (1)-(4) yields the expressions:

x_i* = α_i (T_i + τ*)/(τ* − ε_i*),  φ_i* = α_i/(τ* − ε_i*)  and  ε_i* < τ*    (5)

where the ε_i*'s are the estimation errors at equilibrium and the delay at equilibrium τ* solves

Σ_{i=1}^{N} φ_i* − c(1 − δ*) = 0.    (6)

From the above equations, we can clearly see that when the errors at equilibrium differ between sources, proportional fairness cannot be achieved. Hence, fairness is only reachable under the very strong assumption of exact knowledge of all the propagation delays. The equation (6) defining the equilibrium delay can be rewritten as the polynomial equation P_N(τ) = 0, where

P_N(τ) := Σ_{i=1}^{N} α_i Π_{j≠i} (τ − ε_j*) − c(1 − δ*) Π_{i=1}^{N} (τ − ε_i*)    (7)

and for which only the nonnegative solutions must be considered. It is well known that, in general, there is no analytical solution to P_N(τ) = 0 when N is large. Hence we restrict the analysis to simple cases where analytical results can be obtained.

Equilibrium points analysis - Single source case. In the single-source case, the equilibrium queuing delay is given by τ* = α/(c(1 − δ*)) + ε*, which is nothing but a simple shift of the ideal equilibrium (3), leading to an increase of the queuing delay. When ε* → 0, we recover the equilibrium point of the ideal case.

Equilibrium points analysis - Two sources case. In the two-source case, the equilibrium delay is defined by the polynomial P_2(τ) = −η_2 τ² + η_1 τ − η_0 with η_2 = c(1 − δ*), η_1 = α_1 + α_2 + η_2(ε_1* + ε_2*) and η_0 = η_2 ε_1* ε_2* + α_1 ε_2* + α_2 ε_1*. Since, by definition, we have c, δ*, α_i, ε_i* > 0, i = 1, 2, then η_2 > 0, η_1 > 0 and η_0 > 0. Hence, the real parts of the roots of P_2 are positive. To see that the solutions are all real, it is easy to show that the discriminant Δ := η_1² − 4η_2η_0 is positive. Thus, the network admits two positive equilibrium points given by

τ* = τ_n/2 + (ε_1* + ε_2*)/2 ± √Δ/(2c(1 − δ*))    (8)

where τ_n = (α_1 + α_2)/(c(1 − δ*)) is the equilibrium delay in the ideal case.
So, in the two-source problem, an overestimation of the propagation delays leads to the creation of two distinct positive queueing-delay equilibrium points. Note also that the solutions satisfy τ* → τ_n as ε_1, ε_2 → 0 (the zero limit disappears due to a pole/zero cancellation).
Fig. 1. Network topology used for simulation

Table 1. Comparison of theoretical and simulation results (Np is packet size)

Experiment 1 (α = 200, c = 100 Mb/s, δ* = 0, T1 = 24 ms, T2 = 24 ms, Np = 8 kb):

  variable | Theory | NS-2
  φ1*      | 5006   | 4614
  φ2*      | 8100   | 7413
  τ2*      | 40 ms  | 43 ms
  q2*      | 524    | 520
  x1*      | 320    | 310
  x2*      | 518    | 499

Experiment 2 (α = 200, c = 100 Mb/s, δ* = 0.2, T1 = 24 ms, T2 = 24 ms, Np = 8 kb):

  variable | Theory | NS-2
  φ1*      | 4005   | 3701
  φ2*      | 6481   | 5923
  τ*       | 50 ms  | 54 ms
  q*       | 655    | 653
  x1*      | 296    | 289
  x2*      | 479    | 462
Example 1. Let us consider the topology depicted in Fig. 1, where we establish a FAST-TCP connection between each source Si and its corresponding destination Di. We assume that the packets carry a payload of 1000 bytes and that the maximal queue size is chosen sufficiently large to avoid packet dropping. In the considered scenario, we assume that the two flows arrive consecutively. The first one arrives at time t = 10 seconds, when the queue is empty. Hence the first source can estimate its propagation delay exactly, so ε_1* = 0, and the queuing delay converges to the equilibrium value τ_1* = α/η where η = c(1 − δ*). The second flow then arrives at t = 30 seconds and makes the queue length increase; hence the minimal RTT measured by the second source is the one measured at t = 30 seconds, so we have ε_2* = τ_1*. Solving for the delay equilibrium points when both flows are active, we get τ_2* = (α/(2η))(3 ± √5), which are both positive. Solving now for the flows, we get φ_1* = 2η/(3 + √5) and φ_2* = 2η/(1 + √5), whose sum equals η, showing an efficient but unfair (φ_1* < φ_2*) equilibrium. The simulation of the topology in Fig. 1 is performed with the NS-2 simulator and the obtained results are gathered in Table 1 together with a comparison with the theoretical results of Section 5.1. Two scenarios are considered: the first one considers homogeneous propagation delays and no cross-traffic, while the second adds a cross-traffic of 20 Mb/s. We can see that, for the considered scenarios, the window sizes, the queuing delay and the queue length are quite well predicted. Fig. 2 shows the rate evolution of the sources in the scenario without cross-traffic. As also noticed in [17], the flows fail to converge to a fair equilibrium.
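A quick numeric check of Example 1 (our code; the packet-rate value of c is derived from 100 Mb/s and 8 kb packets) confirms the qualitative claim: the flows sum to η (efficiency) but are unequal (unfairness).

```python
# Numeric check (our code) of Example 1: epsilon1* = 0, epsilon2* = tau1*.
import math

alpha = 200.0                        # packets
c = 100e6 / (8 * 1000.0)             # 12500 pkts/s for 8 kb packets
eta = c                              # eta = c*(1 - delta), delta = 0 here
tau1 = alpha / eta                   # first source alone (epsilon1* = 0)
# Larger root of P2: tau2* = (alpha/(2*eta)) * (3 + sqrt(5))
tau2 = (alpha / (2 * eta)) * (3 + math.sqrt(5))
phi1 = alpha / tau2                  # = 2*eta/(3 + sqrt(5))
phi2 = alpha / (tau2 - tau1)         # = 2*eta/(1 + sqrt(5))
```

One finds φ_1* ≈ 4775 pkts/s and φ_2* ≈ 7725 pkts/s with φ_1* + φ_2* = η = 12500 pkts/s: the link is fully used, but the late-arriving source gets the larger share.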
Fig. 2. Rates evolution for Source 1 (plain) and Source 2 (dashed), no cross-traffic; rates (pkts/sec) vs. time (sec)
5.2 Impact on the Stability of the Equilibrium Point
It is interesting to study the local stability of the equilibrium points in order to answer the question of the protocol's stability at an unfair equilibrium point. To this aim, the following linearized model of (1)-(4) is devised:

ẏ_i(t) = −γ (τ* − ε_i*)/(T_i + τ*) y_i(t) − γ (T_i + ε_i*) x_i*/(T_i + τ*)² ν(t − T_i − τ*)

ν̇(t) = Σ_i 1/(c(T_i + τ*)) y_i(t) − Σ_i x_i*/(c(T_i + τ*)²) ν(t) + (1/c) ζ(t)    (9)
where y_i(t) := x_i(t) − x_i*, ν(t) := τ(t) − τ* and ζ(t) := δ(t) − δ*. Restricting ourselves to the single-user case, we get the following theorem for delay-independent stability:

Theorem 2. The system (9) is locally delay-independent stable if and only if the inequality ε* < α/(c(1 − δ*)) − T holds.

Proof. The proof is similar to that of [4, Theorem 4.3].

Since ε* ≥ 0, in order to tolerate positive errors the right-hand side must at least be positive. Based on this, we can conclude that the error has a negative impact on stability. By extension, the same problem will occur in more complex topologies. Whenever delay-independent stability is not achieved, we have the following theorem:

Theorem 3. The system (9) is delay-dependent stable if one of the following conditions holds:
1. T + ε* − α/(c(1 − δ*)²) < 0; or
2. T + ε* − α/(c(1 − δ*)²) ≥ 0 and γ < c(1 − δ*)²/(c(T + ε*)(1 − δ*)² − α).
Proof. The proof is identical to that of Theorem 1; see [4].

According to the above results, we can conclude that stability can be ensured through an appropriate choice of the tuning parameter γ, similarly to the ideal case. Additionally, the error term penalizes the maximal admissible speed of the protocol and thus has an impact on the overall efficiency of the network. Conversely, if γ is not chosen so as to account for the possible error term, stability may be lost.
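The penalty on protocol speed is easy to see by evaluating the Theorem 3 bound for growing errors. The sketch below (our code, illustrative parameter values, conditions implemented as stated in the theorem) shows the admissible γ shrinking as ε* grows.

```python
# Sketch (our code): the Theorem 3 bound on gamma shrinks with the error eps*.
def gamma_bound(T, eps, alpha, c, delta):
    d2 = (1.0 - delta) ** 2
    if T + eps - alpha / (c * d2) < 0.0:
        return float('inf')                          # condition 1: any gamma works
    return c * d2 / (c * (T + eps) * d2 - alpha)     # condition 2

# alpha = 200 pkts, c = 12500 pkts/s, delta = 0, T = 24 ms; eps* = 0, 8, 16 ms.
bounds = [gamma_bound(T=0.024, eps=e, alpha=200.0, c=12500.0, delta=0.0)
          for e in (0.0, 0.008, 0.016)]
```

With these numbers the bound drops from 125 (no error) to 62.5 and then to about 41.7 as the error grows, illustrating how the estimation error caps the protocol's reaction speed.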
6 Measuring the Queuing Delay – Case Study
The second solution to the estimation of both queuing and propagation delays consists of an explicit feedback of the queuing delays by the routers. In such a framework, part of the packet header is dedicated to containing the sum of all the queueing delays the packet has experienced over its path. Thus, the source gets an explicit value for the total queuing delay and can subtract it from the measured RTT to compute the propagation delay. This solution needs, however, an update of all routers to add this feature. It is hence less simple to deploy than the estimation procedure for the queuing delay that is based solely on RTT measurements. Moreover, since the size of the packet header is constant, the measured aggregate queuing delay is stored with fixed precision and a measurement error is consequently introduced. Therefore, the influence of these errors on the equilibrium points and on their stability deserves to be studied. Interestingly, the protocol model incorporating the use of a quantized measure of the queuing delay is very similar to the one incorporating the learning error:

ẋ_i(t) = γ ( (−τ(g(t − T_i^b)) − ε_i(g(t − T_i^b)))/(τ(g(t − T_i^b)) + T_i) x_i(t) + α_i ).    (10)
However, in the present case, the error term satisfies |ε_i(·)| ≤ q/2, where q is the resolution of the quantizer. The only differences lie in the sign of the errors, which are not restricted to be positive, and in the boundedness of the errors. Indeed, the worst-case error depends only on the choice of the quantization step, while in the previous case it depended on the state of the network.

6.1 Impact on the Equilibrium Point
At equilibrium, the queuing delay is constant and hence the quantization error is identical for all users, i.e. ε_i = ε*, i = 1, . . . , N. This is an important property of the current approach. Indeed, simple computations yield:

x_i* = α_i (T_i + τ*)/(τ* − ε*),  φ_i* = α_i/(τ* − ε*)  and  τ* = Σ_i α_i/(c(1 − δ*)) + ε*.    (11)
In such a case, the equilibrium point is unique, efficient and proportionally fair. This was not the case with the learning strategy due to the imbalance between the errors. This is one of the benefits of the approach.
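The fairness claim can be illustrated numerically (our code; the 1 ms quantization step and the parameter values are assumed, not from the paper): since every source applies the same bounded error ε*, the equilibrium flows (11) stay proportional to the α_i's.

```python
# Sketch (our code): a common quantization error preserves fairness, eq. (11).
def quantize(x, q):
    """Mid-tread quantizer with step q; the error magnitude is at most q/2."""
    return q * round(x / q)

def flows_at_equilibrium(alphas, c, delta, eps):
    # (11): tau* = sum(alpha)/(c(1-delta)) + eps*, phi_i* = alpha_i/(tau* - eps*)
    tau = sum(alphas) / (c * (1.0 - delta)) + eps
    return [a / (tau - eps) for a in alphas]

q = 0.001                                  # assumed 1 ms quantization step
eps = quantize(0.0163, q) - 0.0163         # one common error, |eps| <= q/2
phis = flows_at_equilibrium([200.0, 200.0], 12500.0, 0.0, eps)
```

Both sources obtain the same flow (6250 pkts/s here), unlike in the learning case where unequal ε_i* broke proportional fairness.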
6.2 Impact on the Stability
As previously, we consider the single-user problem. We first assume that ε* = 0 to avoid complex calculations; a discussion will be provided for the case ε* ≠ 0. When the quantization error is 0 at equilibrium, the quantization function ϕ(·) is an odd function which belongs to the sector (0, 2), i.e. we have 0 ≤ ϕ(s)/s ≤ 2 for any s ≠ 0. Many works have been devoted to the analysis of linear systems interconnected with sector nonlinearities [14]. The problem can be rewritten as the negative feedback interconnection of ϕ(·) and

F(s) = ξ_2 ψ_1 e^{−s(τ* + T)} / (μ(s + μξ_1)(s + μξ_3)).

A very important result for the stability analysis of such interconnections is the circle criterion, which consists of a generalization of the Nyquist criterion:
Theorem 4 (Circle Criterion). Let us consider an interconnection (with negative feedback) of an asymptotically stable system F(s) and a nonlinear element ϕ(·) satisfying the sector condition (k_1, k_2). The interconnection is asymptotically stable if the graph of F(jω), ω ∈ R, does not enter the circle passing through the points −1/k_1 and −1/k_2 in the complex plane.

In the considered case, the 'circle' coincides with the half-plane to the left of the vertical line passing through the point −1/2 in the complex plane. It is easy to show that the circle condition is equivalent to showing that F(jω) does not encircle the point −1/2 in the complex plane. This actually amounts to a scaling of the Nyquist criterion and is equivalent to the stability of F̂_2(s) = 2N(s)/(D(s) + 2N(s)), where F(s) = N(s)/D(s). Since the structure of the quasipolynomial D(s) + 2N(s) is very similar to that of the ideal case, the same approach is used and yields the following result:

Theorem 5. Assuming that the quantization error at equilibrium is 0, the equilibrium point (x*, τ*) of system (1)-(10) is
1. locally delay-independent stable if τ* > 2T.
2. locally delay-dependent stable if τ* < 2T and 2T(T + τ*) − τ* ≤ 0.
3. locally delay-dependent stable if τ* < 2T, 2T(T + τ*) − τ* > 0 and γ < 1/(2T(T + τ*) − τ*).

Proof. The proof follows the same lines as for the other results; see [4].

We can conclude that, in the case ε* = 0, the tuning parameter γ can be chosen so as to avoid oscillations. We now explain why this is not possible in the general case ε* ≠ 0. Indeed, when ε* ≠ 0, the quantization function is not odd anymore since it must be horizontally shifted to be centered around the equilibrium point. Therefore the sector takes the more general form (0, θ), where θ is defined by θ := 2(1 − 2|κ|)^{−1} with ε* = κq, κ ∈ [−1/2, 1/2]. Hence, the term θ can reach arbitrarily large values, translating the 'circle' horizontally to the right.
When the error is maximal (κ = ±1/2), the forbidden area of the complex plane is the entire open left-half plane. Due to the exponential term of F(s), the graph of F(jω), ω > 0, always enters the open left-half plane, and thus limit cycles cannot be avoided.
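For the sector (0, 2), the circle-criterion test amounts to checking that the Nyquist plot of F(jω) stays to the right of the vertical line through −1/2. The sketch below (our code) performs this check numerically for a hypothetical delayed second-order transfer function of the same shape as F(s); the gains k, a, b and the delay are placeholders, not the paper's ψ_1, ξ_i, μ values.

```python
# Illustrative sketch (our code, assumed parameters): numeric circle-criterion
# check for sector (0, 2) -- the plot of F(jw) must satisfy Re F(jw) > -1/2.
import cmath

def nyquist_ok(F, omegas, bound=-0.5):
    return all(F(1j * w).real > bound for w in omegas)

def F(s, k=0.2, a=1.0, b=2.0, delay=0.05):
    """Hypothetical stand-in with the shape of F(s): delayed, 2nd order."""
    return k * cmath.exp(-s * delay) / ((s + a) * (s + b))

omegas = [0.01 * n for n in range(1, 10000)]
stable = nyquist_ok(F, omegas)
```

With the small gain chosen here, |F(jω)| never exceeds k/(ab) = 0.1, so the plot cannot reach the forbidden half-plane; a larger sector bound θ (as in the ε* ≠ 0 case) would push the critical line toward the imaginary axis and the test would eventually fail for any delayed F.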
7 Conclusion
In this paper, we have addressed the fairness and stability properties of delay-based transmission control protocols, in particular FAST-TCP. The update law of the FAST-TCP congestion window requires knowledge of both the propagation and queuing delays if fairness and efficiency are to be provided. We have incorporated two approaches to estimating the propagation delay into the model we developed for such protocols. The first approach, which is the one implemented in FAST-TCP, employs an iterative learning process to estimate the propagation delay as the minimal observed RTT. The propagation delay is always overestimated in this case unless the queuing delay drops to zero. It is shown that such an estimation procedure leads to a loss of fairness, due to diverse estimation errors at each source, along with multiple equilibrium points for the queuing delay. We have developed a model for delay-based congestion avoidance protocols to analyze the impact of the queuing delay estimation error on the equilibrium and its stability. Using this model, we have observed that the stability of the equilibrium points, in the single-source case, can be ensured through an appropriate choice of the tuning parameter γ of the protocol. The developed model, which is able to predict the congestion window size, the queuing delay and the number of enqueued packets quite accurately, is validated by NS-2 simulations. The second approach we have analyzed is based on the assumption that the routers feed back a quantized measure of their queuing delay. Hence both propagation and queuing delays (modulo the quantization error) can be estimated easily. We have shown that this strategy manages to preserve the uniqueness of the equilibrium point as well as fairness, yet it cannot prevent limit cycles (oscillations) around the equilibrium points when the quantization error is too large.
References
1. Bonald, T., Feuillet, M., Proutiere, A.: Is the law of the jungle sustainable for the internet? In: Proc. IEEE INFOCOM (2009)
2. Brakmo, L.S., O'Malley, S.W., Peterson, L.L.: TCP Vegas: new techniques for congestion detection and avoidance. SIGCOMM Comput. Commun. Rev. 24(4), 24–35 (1994)
3. Brakmo, L.S., Peterson, L.L.: TCP Vegas: end to end congestion avoidance on a global internet. IEEE Journal on Selected Areas in Communications 13, 1465–1480 (1995)
4. Briat, C., Hjalmarsson, H., Johansson, K.H., Karlsson, G., Jönsson, U., Sandberg, H.: Nonlinear state-dependent delay modeling and stability analysis of internet congestion control. In: 49th IEEE Conference on Decision and Control, Atlanta, USA (2010)
5. Chan, Y.C., Chan, C.T., Chen, Y.C., Ho, C.Y.: Performance improvement of congestion avoidance mechanism for TCP Vegas. In: International Conference on Parallel and Distributed Systems, p. 605 (2004)
6. Cui, T., Andrew, L., Zukerman, M., Tan, L.: Improving the fairness of FAST TCP to new flows. IEEE Communications Letters 10(5), 414–416 (2006)
7. Gu, K., Kharitonov, V.L., Chen, J.: Stability of Time-Delay Systems. Birkhäuser, Basel (2003)
8. Jacobson, V.: Congestion avoidance and control. In: SIGCOMM 1988: Symposium Proceedings on Communications Architectures and Protocols, pp. 314–329. ACM, New York (1988)
9. Jacobsson, K.: Dynamic Modeling of Internet Congestion Control. Ph.D. thesis, KTH School of Electrical Engineering (2008)
10. Jain, R.: A delay-based approach for congestion avoidance in interconnected heterogeneous computer networks. SIGCOMM Comput. Commun. Rev. 19(5), 56–71 (1989)
11. La, R.J., Walrand, J., Anantharam, V.: Issues in TCP Vegas (2001)
12. Low, S.H., Peterson, L., Wang, L.: Understanding TCP Vegas: a duality model. SIGMETRICS Perform. Eval. Rev. 29(1), 226–235 (2001)
13. Lundqvist, H., Ivars, I.M., Karlsson, G.: Edge-based differentiated services. In: de Meer, H., Bhatti, N. (eds.) IWQoS 2005. LNCS, vol. 3552, pp. 259–270. Springer, Heidelberg (2005)
14. Lur'e, A., Postnikov, V.: On the theory of stability of control systems. Prikl. Mat. i Mekh. (Applied Mathematics and Mechanics) 8(3), 3–13 (1944)
15. Michiels, W., Niculescu, S.: Stability and Stabilization of Time-Delay Systems: An Eigenvalue Based Approach. SIAM, Philadelphia (2007)
16. Niculescu, S.I.: Delay Effects on Stability: A Robust Control Approach, vol. 269. Springer, Heidelberg (2001)
17. Rodríguez-Pérez, M., Herrería-Alonso, S., Fernández-Veiga, M., López-García, C.: The persistent congestion problem of FAST-TCP: analysis and solutions. European Transactions on Telecommunications 21(6), 504–518 (2010)
18. Sandberg, H., Hjalmarsson, H., Jönsson, U., Karlsson, G., Johansson, K.: On performance limitations of congestion control. In: 48th IEEE Conference on Decision and Control, Shanghai, China, pp. 5869–5876 (2009)
19. Srikant, R.: The Mathematics of Internet Congestion Control. Birkhäuser, Boston (2004)
20. Tan, L., Yuan, C., Zukerman, M.: FAST TCP: fairness and queuing issues. IEEE Communications Letters 9(8), 762–764 (2005)
21. Wang, Z., Crowcroft, J.: Eliminating periodic packet losses in the 4.3-Tahoe BSD TCP congestion control algorithm. SIGCOMM Comput. Commun. Rev. 22(2), 9–16 (1992)
22. Wei, D.X., Jin, C., Low, S.H., Hegde, S.: FAST TCP: motivation, architecture, algorithms, performance. IEEE/ACM Trans. Netw. 14(6), 1246–1259 (2006)
23. Zhou, K., Doyle, J.C., Glover, K.: Robust and Optimal Control. Prentice Hall, Upper Saddle River (1996)
On the Uplink Performance of TCP in Multi-rate 802.11 WLANs

Naeem Khademi (1), Michael Welzl (1), and Renato Lo Cigno (2)

(1) Department of Informatics, University of Oslo, Norway {naeemk,michawe}@ifi.uio.no
(2) DISI, University of Trento, Italy [email protected]
Abstract. IEEE 802.11 defines several physical-layer data rates to provide more robust communication by falling back to a lower rate in the presence of high noise levels. The choice of the current rate can be automated; e.g., Auto-Rate Fallback (ARF) is a well-known mechanism in which the sender adapts its transmission rate in response to link noise using up/down thresholds. ARF has been criticized for not being able to distinguish MAC collisions from channel noise. It has however been shown that, in the absence of noise and in the face of collisions, ARF does not play a significant role in TCP's downlink performance. The interactions of ARF, DCF and uplink TCP have not yet been deeply investigated. In this paper, we present our findings on the impact of collision-induced rate fallback in ARF on the uplink performance of various TCP variants, using simulations.

Keywords: TCP; 802.11 WLAN; Auto-Rate Fallback.
1 Introduction
Rate adaptation is a mechanism that exploits the multi-rate capability of the physical layer defined in the IEEE 802.11 standards to provide more robust communication in the presence of channel failures (e.g. noise, fading, interference). This is achieved by adapting the physical-layer (PHY) rate selection based on the channel quality. Four (1–11 Mbps) and eight (6–54 Mbps) PHY rates are mandated by the 802.11b and 802.11g standards, respectively. The choice of the specific adaptation algorithm is however left up to the vendor in the standard, yet it is critical to system performance. Several rate adaptation algorithms have been proposed in the literature, most notably Auto-Rate Fallback (ARF) [6], which chooses the rate based on the number of consecutive successful or unsuccessful transmission attempts (up/down thresholds). For instance, in Cisco Aironet 350 cards it falls back to a lower rate after two consecutive transmission failures (e.g. an ACK frame is not received) and increases the rate after ten successful transmission attempts [1]. ARF is well known and widely adopted in wireless cards due to its simplicity, but it has been criticized for not being able to distinguish losses due to collision

J. Domingo-Pascual et al. (Eds.): NETWORKING 2011, Part II, LNCS 6641, pp. 368–378, 2011. © IFIP International Federation for Information Processing 2011
On the Uplink Performance of TCP in Multi-rate 802.11 WLANs
369
from losses due to channel noise which, in combination with DCF, may affect the overall system performance in typical multi-user wireless scenarios. There has been a significant amount of research on the interaction of DCF and ARF indicating ARF’s poor performance at the MAC level, considering constant bitrate UDP traffic scenarios [9,12,16]. However, less work has been done to study the inter-layer dependencies of ARF, DCF and TCP; exceptions are [1,2]. These references reveal that, in scenarios with downlink traffic, TCP throughput hardly depends on the number of contending stations. It has been shown that, in pure TCP downlink scenarios, the number of active stations participating in multiple access contention at an instant of time stays extremely low due to the n:1 traffic ratio of TCP ACK to data packets, resulting in a very low collision rate and therefore gaining the maximum achievable throughput. These results are valid for many typical client-server (e.g. http access) scenarios. Considering today’s popular p2p file sharing, VoIP, email attachments and multimedia streaming applications, there is a need to study the overall behavior of the system in the presence of ARF in TCP uplink traffic scenarios, where several stations are contending to access the medium to upload large data packets simultaneously. In this paper we will address such scenarios. The rest of this paper is organized as follows: in section 2 we provide some background on the current research trend of rate adaptation mechanisms and their pro’s and con’s compared to ARF as well as cross-layer interactions of ARF, DCF and TCP. In section 3 we will evaluate the uplink performance of TCP in presence of ARF with simulations; in section 4 we extend our scope to high speed TCP variants and finally, in section 5, we propose future work and conclude.
2
Background
Rate adaptation mechanisms in 802.11 WLANs adapt the PHY transmission rate to the channel conditions in order to optimize network parameters such as application-level throughput or power consumption; the former is the main focus of this paper. Several algorithms have been proposed in the literature to this end [6,8,9,15,5]. ARF [6] is the first published rate adaptation algorithm, originally designed for WaveLAN II devices and later employed by several 802.11 Wi-Fi NICs. The basic idea of ARF is to increase the PHY transmission rate after a certain number of consecutive successful attempts (up threshold) and to fall back to a lower rate after a certain number of consecutive failures (down threshold), in addition to setting a timer. After a fallback, when the number of per-frame ACKs reaches the up threshold or the timer expires, the PHY rate is increased. If the first frame transmission after increasing the rate (the probing frame) fails, ARF immediately switches back to the previous rate and restarts the timer.

Several problematic issues arise when deploying ARF. First, it takes frame loss as an indication of high channel noise and therefore mistakes collisions for noise. Second, it is too aggressive in rate reduction due to the normally small down-threshold value (e.g., one or two) and can therefore not utilize the total available bandwidth. Third, it attempts to increase the rate after every up-threshold number (e.g., ten) of successful transmissions, but this may not be an indication that channel conditions have in fact improved.

Some work has been done to solve these problems [8,9,15]. The Collision-Aware Rate Adaptation (CARA) algorithm proposed in [8] exploits the RTS/CTS frames' functionality to differentiate frame collisions from frame transmission failures due to noise. The main idea behind CARA is that a transmission failure of a small RTS frame, which is normally encoded at the lowest rate, is unlikely to be caused by channel noise and is most probably the result of a collision, while the transmission failure of a larger data frame following a successful RTS/CTS handshake is probably due to channel noise. CARA's practicality in infrastructure-based WLANs is limited because of its mandatory usage of RTS/CTS.

Adaptive ARF (AARF) [9] is an extension of ARF that reduces the number of probing-packet failures by multiplying the default up threshold (e.g., 10) by two after a probing-packet failure; the up threshold is reset to its default value after a rate fallback. While AARF decreases the number of failed transmissions and retransmissions, it still inherits the other weaknesses of ARF.

Unlike ARF, other rate adaptation mechanisms such as [5] require incompatible changes to the 802.11 standard and are therefore out of our scope. In this paper we focus on ARF only, since it is one of the few open-source and 802.11-compatible mechanisms that is widely employed in wireless NICs. As already mentioned, ARF's performance has been studied extensively at the PHY/MAC level. However, few works investigate it at the transport layer, focusing on the cross-layer interaction of ARF, DCF and TCP [1,2].
In these references, it is claimed that ARF has a negligible impact on the performance of TCP, thanks to TCP's self-clocking mechanism and the AP's 1/n chance of gaining access to the channel, which keeps the number of stations actively competing for channel access low. Since the cases studied in these works are strictly based on download traffic scenarios, they do not support a general conclusion about whether ARF plays a significant role for TCP or not. The impact of ARF on uplink TCP traffic is discussed in the next section.
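Before moving on, the ARF/AARF threshold logic just described can be condensed into a toy model. This is a sketch under stated assumptions — the 802.11b rate set, the 10/2 up/down thresholds, a 60-transmission timer and the cap on AARF's doubled threshold are illustrative choices, not the behavior of any particular NIC or of the ns-2 module used later:

```python
# Toy model of ARF and the AARF tweak described above. Thresholds, timer
# granularity and the rate set are illustrative assumptions.

RATES_80211B = [1.0, 2.0, 5.5, 11.0]  # Mbps

class ARF:
    def __init__(self, up_threshold=10, down_threshold=2, timeout=60):
        self.rates = list(RATES_80211B)
        self.idx = len(self.rates) - 1  # start at the highest rate
        self.up = up_threshold
        self.down = down_threshold
        self.timeout = timeout          # transmissions since the last change
        self.successes = self.failures = self.timer = 0
        self.probing = False            # True right after a rate increase

    @property
    def rate(self):
        return self.rates[self.idx]

    def on_tx_result(self, ok):
        """Feed one transmission outcome (True = MAC ACK received)."""
        self.timer += 1
        if ok:
            self.successes += 1
            self.failures = 0
            self.probing = False
            if self.successes >= self.up or self.timer >= self.timeout:
                self._try_increase()
        else:
            self.failures += 1
            self.successes = 0
            # a failed probing frame falls back immediately
            if self.probing or self.failures >= self.down:
                self._fallback()

    def _try_increase(self):
        if self.idx < len(self.rates) - 1:
            self.idx += 1
            self.probing = True   # the next frame is the probing frame
        self.successes = self.timer = 0

    def _fallback(self):
        self.idx = max(0, self.idx - 1)
        self.probing = False
        self.failures = self.timer = 0

class AARF(ARF):
    """AARF: double the up threshold after a failed probe (cap of 50 is an
    assumption); reset it to the default after an ordinary fallback."""
    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        self.default_up = self.up

    def on_tx_result(self, ok):
        was_probing = self.probing
        super().on_tx_result(ok)
        if not ok and was_probing:
            self.up = min(self.up * 2, 50)
        elif not ok and self.failures == 0:   # failures reset => fallback
            self.up = self.default_up
```

Because the model treats every failed transmission the same way, a burst of collisions drives the rate down exactly as channel noise would — the ARF weakness discussed above.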
3
TCP Performance in Multi-rate WLANs
To study the behavior of ARF in the presence of uplink and downlink flows, we performed a set of simulations using ns-2 [14] and an extension [13] that provides Auto-Rate Fallback support (AARF)¹. Ten stations are located at the same
¹ The performance evaluations in this paper use an improved variant of ARF (AARF) [9], referred to as ARF hereafter. Since AARF inherits the two major problems of ARF (being collision-reactive and aggressive in rate reduction) and only reduces the number of failed probing frames, any problem found in AARF applies to ARF as well.
distance of 10 m from an access point, so that all receive equal signal power. Noise and interference effects are omitted from the scenarios in order to isolate frame-collision events from other sources of frame loss. The main focus of this work is to study the potential negative impact of collisions, coupled with ARF, on TCP performance when the channel condition is relatively good, in a simplified and idealized scenario; we leave the study under noisy channel conditions, which are harder to model, for future work. The parameters common to the different sets of simulations in this paper are listed in Table 1.

Table 1. Simulation parameters

Parameter          IEEE 802.11b            IEEE 802.11g
Packet size        1500 bytes              1500 bytes
Propagation model  TwoRayGround            TwoRayGround
Interface queue    DropTail (50 packets)   DropTail (50 packets)
CWMin              32                      16
CWMax              1024                    1024
SIFS               10 µs                   10 µs
DIFS               50 µs                   28 µs
Slot time          20 µs                   9 µs
RTS/CTS option     Disabled                Disabled
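As a rough illustration of why a rate downshift hurts 1500-byte data frames much more than short ACKs, per-frame airtime can be estimated from the Table 1 timings. The long PLCP preamble, the mean backoff of CWMin/2 slots and the 14-byte ACK below are simplifying assumptions, not a full 802.11 timing model:

```python
# Rough per-frame airtime over 802.11b, using the Table 1 timings. The long
# PLCP preamble (192 us), the mean backoff and the ACK size are simplifying
# assumptions; real 802.11 timing has more components.

SIFS, DIFS, SLOT = 10e-6, 50e-6, 20e-6   # seconds (802.11b column)
PLCP = 192e-6                            # long preamble + PLCP header
CWMIN = 32

def airtime(payload_bytes, rate_mbps, ack_rate_mbps=1.0, ack_bytes=14):
    """Seconds of channel time for DIFS + mean backoff + DATA + SIFS + ACK."""
    backoff = (CWMIN / 2) * SLOT
    data = PLCP + payload_bytes * 8 / (rate_mbps * 1e6)
    ack = PLCP + ack_bytes * 8 / (ack_rate_mbps * 1e6)
    return DIFS + backoff + data + SIFS + ack

t11 = airtime(1500, 11.0)   # data frame at the top 802.11b rate
t1 = airtime(1500, 1.0)     # same frame after a fallback to 1 Mbps
print(f"1500 B at 11 Mbps: {t11 * 1e3:.2f} ms; at 1 Mbps: {t1 * 1e3:.2f} ms")
```

With these assumptions, a 1500-byte frame occupies the channel roughly 6-7 times longer at 1 Mbps than at 11 Mbps, while a short ACK (already sent at the base rate) is barely affected — which is why a collision-triggered downshift on a data-heavy uplink is so much more costly.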
Figure 1(a) shows the per-second aggregate goodput of these stations over an 802.11b channel when each of them downloads unlimited FTP data, carried by a TCP NewReno flow, from a server connected to the AP via a 100 Mbps wired link². As observed, ARF does not affect the performance of the download flows over the whole simulation period: the aggregate goodput stays at the same maximum level as when ARF is disabled. This phenomenon is due to TCP's self-clocking mechanism and the AP's 1/n probability of gaining access to the shared channel. Simply put, data packets depart from the AP's downstream queue to the wireless stations. Upon successful reception of a data packet at the destination's NIC, that station's queue is backlogged with an ACK packet to be sent back after winning the medium-access contention. Since the AP has the same probability of accessing the channel as any other station, yet is in charge of transferring all data packets, the stations' backlogged queues grow only as a consequence of the AP's successful contention for the channel. Collisions occur only when already-backlogged stations and the AP compete for concurrent channel access.

Using birth-death Markov chains, it has been shown in [1,2] that TCP's self-clocking mechanism keeps the number of backlogged stations (stations actively participating in contention) in equilibrium at 2-3 contending stations for scenarios with 2-100 wireless stations. This level of contention yields a very low likelihood of collision, which mitigates the negative effect of ARF on the downlink's

² Unless otherwise noted, the same parameter settings are also applied in all other simulations in this paper.
Fig. 1. Per-second aggregate goodput (pkts) over time of 10 stations in 802.11b (wired delay = 50 ms), with ARF and without ARF: (a) Download, (b) Upload.

Fig. 2. cwnd (packets) over time of a single NewReno upload flow vs. PHY rate (11, 5.5, 2 and 1 Mbps; PER = 0.0).
TCP performance. In addition, the almost fixed collision rate causes what the authors called TCP's “scalable performance” for a varying number of nodes.

In contrast to these findings, in upload scenarios the wireless stations participate in contention to transfer data to the AP, and their queues are backlogged with data packets in accordance with their window sizes. A collision – in this case, between data packets – can be more harmful than in the download scenario, where ACK packets collide with each other or with data packets sent by the AP. The packet size also plays a role in determining the impact of collisions: a rate downshift triggered by ARF affects the transmission of subsequent large data packets and leads to under-utilization of the wireless channel bandwidth for a longer period of time than the transmission of a short ACK packet would. This may also cause the other stations' backlogged queues to grow larger, resulting in a higher level of collisions.

Figure 1(b) shows the per-second aggregate goodput of the uploading stations. It reveals that, when ARF is disabled, uploading stations experience stronger fluctuations than downloading stations because of the difference in collision probability and impact mentioned earlier. However, they achieve a better total performance because they try more aggressively to obtain access to the medium [7].
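The asymmetry between the two regimes can be illustrated with a toy slotted-contention model: each backlogged node draws a backoff slot, and a shared minimum counts as a collision. This deliberately ignores DCF details (freezing, exponential backoff); it only shows how the collision fraction grows with the number of backlogged contenders:

```python
import random

# Toy slotted-contention model: every backlogged node draws a backoff slot
# uniformly from a contention window; if the smallest slot is drawn by more
# than one node, that transmission attempt is a collision. This is a
# deliberate simplification of DCF, not the 802.11 backoff procedure.

def collision_fraction(n_backlogged, cw=32, rounds=20000, seed=1):
    rng = random.Random(seed)
    collisions = 0
    for _ in range(rounds):
        slots = [rng.randrange(cw) for _ in range(n_backlogged)]
        if slots.count(min(slots)) > 1:
            collisions += 1
    return collisions / rounds

# Download-like regime: only ~3 stations backlogged at a time (AP-clocked).
# Upload-like regime: all 10 stations permanently backlogged with data.
print("3 contenders:", collision_fraction(3))
print("10 contenders:", collision_fraction(10))
```

With the download-like 2-3 backlogged contenders reported by [1,2], collisions stay rare in this toy model; with all ten uploaders backlogged, the collision fraction grows several-fold.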
Fig. 3. cwnd (packets) over time of a single NewReno upload flow, for PER = 0.0 and PER = 0.1: (a) without ARF, (b) with ARF.

Fig. 4. RTT (ms) over time of a single NewReno upload flow, for PER = 0.0 and PER = 0.1: (a) without ARF, (b) with ARF.
With ARF enabled in the upload scenarios, we observe a significant goodput reduction when collisions between data packets lead to rate downshifts (falling to 15-20% of the achievable goodput on average). This confirms our argument that upload flows are more vulnerable to ARF than download flows.

To better understand the characteristics of the cross-layer interaction between TCP and ARF, it is necessary to investigate the impact of collisions (packet losses) on TCP dynamics when ARF is enabled or disabled. The impact of the PHY transmission rate on the cwnd size for a single NewReno flow is plotted in Figure 2. As a rule of thumb, the maximum cwnd size equals the bandwidth-delay product, which means that a lower PHY transmission rate (channel bandwidth) leads to a lower maximum cwnd, as depicted in Figure 2.

Collisions at the MAC layer are perceived as packet errors (bit errors at the PHY layer). Hence, the impact of packet errors on the cwnd of a single NewReno upload flow is investigated in Figure 3. The packet error rate perceived at the MAC layer of the NICs was chosen high enough (PER = 0.1) and uniformly distributed in order to produce a clearly observable effect. An error model
was defined at the MAC layer of the AP and of each wireless node to discard received frames randomly with a uniform probability (here 10%). Figure 3(a) makes it obvious that frame losses due to packet errors have a minimal impact on the cwnd size in the absence of ARF. This is because of the 802.11 MAC retransmission mechanism, which ensures the receipt of a packet within a time that is normally shorter than a TCP retransmission timeout (RTO). An unsuccessful transmission of a MAC frame and/or its corresponding ACK frame is therefore compensated for by a frame retransmission, and consequently the TCP cwnd is kept almost intact at moderate MAC frame error rates (a PER of 0.1 is high here). On the contrary, in the presence of ARF, the maximum cwnd drops to 60 packets (Figure 3(b)), indicating that the physical-layer rate of 1 Mbps was mostly used as a result of ARF rate downshifts when PER = 0.1. There is also a difference in the sawtooth behavior of cwnd: with ARF enabled, cwnd growth is slower. This is because the delay (RTT) is increased by the MAC frame transmission and retransmission attempts at a reduced PHY rate; it therefore takes a flow under ARF longer to fill up the AP buffer and experience a TCP packet drop. Figure 4 shows TCP's RTT for this scenario, revealing that retransmission attempts have a negligible impact on the RTT in the absence of ARF, while rate downshifts significantly increase the RTT and consequently lead to slower cwnd growth in the presence of ARF.
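The bandwidth-delay-product rule of thumb can be made concrete. Taking the raw PHY rate as the available bandwidth (an overestimate, since MAC overhead and contention are ignored), assuming a flat 100 ms RTT, and adding the 50-packet interface queue from Table 1:

```python
# Back-of-the-envelope maximum cwnd: bandwidth-delay product plus the
# 50-packet bottleneck queue from Table 1. Using the raw PHY rate as the
# bandwidth and a flat 100 ms RTT are deliberate simplifications.

PKT_BITS = 1500 * 8      # bits per TCP packet
QUEUE_PKTS = 50          # interface queue size (Table 1)

def max_cwnd_pkts(phy_rate_mbps, rtt_s=0.1):
    bdp_pkts = phy_rate_mbps * 1e6 * rtt_s / PKT_BITS
    return bdp_pkts + QUEUE_PKTS

for rate in (11.0, 5.5, 2.0, 1.0):
    print(f"{rate:>4} Mbps -> max cwnd ~ {max_cwnd_pkts(rate):.0f} packets")
```

At 1 Mbps this estimate gives roughly 58 packets, the same order of magnitude as the ~60-packet cwnd ceiling observed in Figure 3(b).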
4
Performance of High-Speed TCPs
TCP, the dominant transport protocol in the Internet, has evolved in recent years to better utilize higher physical-layer link speeds. Standard TCP performs poorly in networks with a large bandwidth×delay product. There is a vast number of proposals to overcome this limitation; some have been suggested for standardization in the Internet Engineering Task Force (IETF), and one of them (CUBIC [4]) is the default mechanism in Linux at the time of writing. These proposals are usually called high-speed TCP flavors; in addition to CUBIC, we pick HTCP [10] and HSTCP [3] for our evaluation (because HTCP is commonly compared with CUBIC, and HSTCP is standardized). These variants can all achieve very large cwnd sizes compared to standard NewReno in long-distance networks.

To give a short overview of these TCP variants: in CUBIC, cwnd is a cubic function of the time since the last congestion event, with the inflection point set to the cwnd prior to the event. HTCP uses Additive Increase/Multiplicative Decrease (AIMD) to control TCP's cwnd, increasing its aggressiveness (in particular, the rate of additive increase) as the time since the previous loss grows. Finally, in HSTCP, when an ACK is received in the congestion avoidance phase, cwnd is increased by a(w)/w, and when a loss is detected via a triple duplicate ACK, cwnd is decreased by (1-b(w))w, where w is the current window size and the values of the functions a and b get larger and smaller, respectively, with a growing w.

We evaluated the performance of these high-speed TCP flavors under the aforementioned uplink wireless-cum-wired scenario by using the ns-2 Linux TCP patch [11] with the original Linux kernel 2.6.16.3 source code. Figure 5 shows the average aggregate throughput of each TCP variant over a duration of 1000 seconds for a varying number of uploading stations, with the wired delay set to 100 ms.

Fig. 5. Aggregate throughput (B/s) of TCP variants vs. number of nodes: (a) 802.11b, (b) 802.11g; curves for CUBIC, HSTCP, NewReno and HTCP, each with and without ARF.

In the absence of ARF, all high-speed TCP variants perform almost the same and reach the maximum achievable throughput of 650-700 KB/s (Figure 5(a)) and 3.3 Mbps (Figure 5(b)) on average, which are the achievable throughputs associated with the maximum wireless-channel PHY rates of 802.11b and 802.11g (11 Mbps and 54 Mbps), respectively. Due to the limited wireless channel bandwidth, they cannot outperform NewReno by letting cwnd grow to a high value. The lines of HSTCP and NewReno are not distinguishable because their values are almost exactly the same.
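The three window-update rules summarized above can be sketched as follows. The CUBIC constants (C = 0.4, β = 0.7) and HTCP's one-second low-speed threshold are commonly cited defaults, and the HSTCP a(w)/b(w) pair follows the analytic form behind RFC 3649's lookup table — all illustrative assumptions rather than the exact kernel code used in the simulations:

```python
import math

# Sketches of the three window-growth rules; constants are commonly cited
# defaults, and the HSTCP response follows RFC 3649's analytic form rather
# than its lookup table.

def cubic_window(t, w_max, C=0.4, beta=0.7):
    """CUBIC: cwnd as a cubic of the time t (s) since the last loss, with
    the inflection point at the pre-loss window w_max."""
    K = ((w_max * (1 - beta)) / C) ** (1.0 / 3.0)
    return C * (t - K) ** 3 + w_max

def htcp_alpha(delta, delta_l=1.0):
    """HTCP: additive-increase factor vs. time delta (s) since last loss."""
    if delta <= delta_l:
        return 1.0
    d = delta - delta_l
    return 1.0 + 10.0 * d + (d / 2.0) ** 2

def hstcp_ab(w):
    """HSTCP: (a(w), b(w)); NewReno-like (1, 0.5) up to w = 38 packets."""
    if w <= 38:
        return 1.0, 0.5
    b = 0.5 + (0.1 - 0.5) * (math.log(w) - math.log(38)) / (
        math.log(83000) - math.log(38))
    p = 0.078 / w ** 1.2          # target loss rate at window w
    a = (w ** 2) * p * 2.0 * b / (2.0 - b)
    return a, b
```

These rules also suggest why all variants collapse to NewReno-like behavior once ARF pins cwnd at small values: HSTCP's a(w)/b(w) only depart from (1, 0.5) above w = 38, and CUBIC and HTCP only become aggressive when loss-free periods are long.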
Fig. 6. Aggregate throughput (B/s) of TCP variants vs. wired delay (ms): (a) 802.11b, (b) 802.11g; curves for CUBIC, HSTCP, NewReno and HTCP, each with and without ARF.
When coupled with ARF, their aggregate throughput declines significantly, down to 100-200 KB/s on average in 802.11b (at the same level as NewReno), revealing that rate downshifts caused by collisions outweigh the possible performance improvement that the high-speed TCP variants' faster cwnd growth could provide. It is worth noting that these high-speed TCP variants are designed to behave like NewReno for small cwnd sizes; having a small cwnd, because a low PHY rate (e.g., 1 Mbps in 802.11b) is used most of the time, therefore leads to the same achievable throughput as with NewReno. In 802.11g, HSTCP performs slightly better than the other TCP variants, achieving only 1.4 Mbps on average, while the throughput of the other variants declines to 1 Mbps on average. The total aggregate throughput is almost invariant to the number of wireless nodes participating in contention due to the “scalable
performance” of TCP in uplink scenarios caused by TCP's self-clocking mechanism (similar to the downlink scenarios). However, as argued above, the impact of collisions between data packets in uplink scenarios, and of their consequent rate downshifts on the cwnd size, is entirely different from that of collisions between ACK packets in downlink scenarios.

To validate these results for larger bandwidth-delay products, where cwnd is expected to reach larger values, we repeated the simulations for different values of the wired delay, ranging from 10 to 500 ms, with 10 uploading stations. Figure 6 shows an extensive performance deterioration with ARF enabled. Although the TCP variants perform almost the same over the low-bandwidth 802.11b channel (Figure 6(a)), they behave differently over the higher-bandwidth 802.11g channel, where, surprisingly, HSTCP and NewReno reach a better throughput level than CUBIC and HTCP (Figure 6(b)). Without ARF, however, CUBIC and HTCP are able to exploit almost the total bandwidth provided by the 802.11g channel, while the throughput of NewReno and HSTCP declines as the wired delay increases. In 802.11b, all TCP variants reach the total achievable throughput. Taken together, our results reveal a poor uplink TCP performance in general, and also for the high-speed TCP variants, in multi-rate 802.11 networks.
5
Conclusive Remarks and Future Work
The uplink performance of TCP in multi-rate 802.11 networks has been studied and evaluated in this paper. It has been shown that ARF can significantly impede TCP performance in uplink scenarios, in contrast to the assumption in previous work that TCP's self-clocking behavior mitigates the impact of ARF. Real-life tests to validate these findings are left as future work. Further, newer 802.11 standards with higher available bandwidth (e.g., 802.11n) must be taken into consideration, and an analytical model will be proposed in the future. The impact of collisions on TCP performance with ARF under noisy channel conditions should be studied analytically and experimentally. In addition, TCP could be modified to perform better in the presence of ARF (e.g., we found that a buggy, severely rate-limited version of HSTCP in the Linux kernel 2.6.16.3 sometimes performed better than the other TCP variants). A performance improvement could therefore consider active queue management policies that make use of such a window-size limitation; this will be investigated in the future.
References

1. Choi, J., Park, K., Kim, C.K.: Cross-layer analysis of rate adaptation, DCF and TCP in multi-rate WLANs. In: 26th IEEE International Conference on Computer Communications, INFOCOM 2007, pp. 1055–1063. IEEE, Los Alamitos (2007)
2. Choi, S., Park, K., Kim, C.-k.: On the performance characteristics of WLANs: revisited. SIGMETRICS Performance Evaluation Review 33, 97–108 (2005)
3. Floyd, S.: HighSpeed TCP for large congestion windows. RFC 3649 (Experimental) (December 2003)
4. Ha, S., Rhee, I., Xu, L.: CUBIC: a new TCP-friendly high-speed TCP variant. SIGOPS Operating Systems Review 42, 64–74 (2008)
5. Holland, G., Vaidya, N., Bahl, P.: A rate-adaptive MAC protocol for multi-hop wireless networks. In: Proceedings of the 7th Annual International Conference on Mobile Computing and Networking, pp. 236–251. ACM, New York (2001)
6. Kamerman, A., Monteban, L.: WaveLAN-II: A high-performance wireless LAN for the unlicensed band. Bell Labs Technical Journal 2(3), 118–133 (1997)
7. Khademi, N., Othman, M.: Size-based and direction-based TCP fairness issues in IEEE 802.11 WLANs. EURASIP Journal on Wireless Communications and Networking (2010)
8. Kim, J., Kim, S., Choi, S., Qiao, D.: CARA: Collision-aware rate adaptation for IEEE 802.11 WLANs. In: Proceedings of the 25th IEEE International Conference on Computer Communications, INFOCOM 2006, pp. 1–11 (April 2006)
9. Lacage, M., Manshaei, M.H., Turletti, T.: IEEE 802.11 rate adaptation: a practical approach. In: Proceedings of the 7th ACM International Symposium on Modeling, Analysis and Simulation of Wireless and Mobile Systems, MSWiM 2004, pp. 126–134. ACM, New York (2004)
10. Leith, D., Shorten, R.: H-TCP: TCP for high-speed and long-distance networks. In: Proc. PFLDnet 2004, Argonne (2004)
11. A Linux TCP implementation for NS2, http://netlab.caltech.edu/projects/ns2tcplinux/ns2linux/
12. Maguolo, F., Lacage, M., Turletti, T.: Efficient collision detection for auto rate fallback algorithm. In: IEEE Symposium on Computers and Communications, ISCC 2008, pp. 25–30 (July 2008)
13. NS-2.29 Wireless Update Patch, http://perso.citi.insa-lyon.fr/mfiore/
14. The Network Simulator NS-2, http://www.isi.edu/nsnam/ns/
15. Pang, Q., Leung, V.C.M., Liew, S.C.: A rate adaptation algorithm for IEEE 802.11 WLANs based on MAC-layer loss differentiation. In: 2nd International Conference on Broadband Networks, BroadNets 2005, vol. 1, pp. 659–667 (October 2005)
16. Xi, Y., Kim, B.-S., Wei, J.-b., Huang, Q.-Y.: Adaptive multirate auto rate fallback protocol for IEEE 802.11 WLANs. In: IEEE Military Communications Conference, MILCOM 2006, pp. 1–7 (October 2006)