Lecture Notes in Computer Science
6347
Commenced Publication in 1973 Founding and Former Series Editors: Gerhard Goos, Juris Hartmanis, and Jan van Leeuwen
Editorial Board David Hutchison, UK Josef Kittler, UK Alfred Kobsa, USA John C. Mitchell, USA Oscar Nierstrasz, Switzerland Bernhard Steffen, Germany Demetri Terzopoulos, USA Gerhard Weikum, Germany
Takeo Kanade, USA Jon M. Kleinberg, USA Friedemann Mattern, Switzerland Moni Naor, Israel C. Pandu Rangan, India Madhu Sudan, USA Doug Tygar, USA
Advanced Research in Computing and Software Science Subline of Lecture Notes in Computer Science Subline Series Editors Giorgio Ausiello, University of Rome ‘La Sapienza’, Italy Vladimiro Sassone, University of Southampton, UK
Subline Advisory Board Susanne Albers, University of Freiburg, Germany Benjamin C. Pierce, University of Pennsylvania, USA Bernhard Steffen, University of Dortmund, Germany Madhu Sudan, Microsoft Research, Cambridge, MA, USA Deng Xiaotie, City University of Hong Kong Jeannette M. Wing, Carnegie Mellon University, Pittsburgh, PA, USA
Mark de Berg Ulrich Meyer (Eds.)
Algorithms – ESA 2010 18th Annual European Symposium Liverpool, UK, September 6-8, 2010 Proceedings, Part II
Volume Editors

Mark de Berg
Department of Mathematics and Computing Science, TU Eindhoven, Eindhoven, The Netherlands
E-mail: [email protected]

Ulrich Meyer
Institute for Computer Science, Goethe University, Frankfurt/Main, Germany
E-mail: [email protected]
Library of Congress Control Number: 2010933821
CR Subject Classification (1998): F.2, I.3.5, C.2, E.1, G.2, D.2
LNCS Sublibrary: SL 1 – Theoretical Computer Science and General Issues
ISSN 0302-9743
ISBN-10 3-642-15780-7 Springer Berlin Heidelberg New York
ISBN-13 978-3-642-15780-6 Springer Berlin Heidelberg New York
This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, re-use of illustrations, recitation, broadcasting, reproduction on microfilms or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer. Violations are liable to prosecution under the German Copyright Law. springer.com © Springer-Verlag Berlin Heidelberg 2010 Printed in Germany Typesetting: Camera-ready by author, data conversion by Scientific Publishing Services, Chennai, India Printed on acid-free paper 06/3180 543210
Preface
This volume contains the 69 papers presented at the 18th Annual European Symposium on Algorithms (ESA 2010), held in Liverpool during September 6–8, 2010, including three papers by the distinguished invited speakers Artur Czumaj, Herbert Edelsbrunner, and Paolo Ferragina. ESA 2010 was organized as a part of ALGO 2010, which also included the 10th Workshop on Algorithms in Bioinformatics (WABI), the 8th Workshop on Approximation and Online Algorithms (WAOA), and the 10th Workshop on Algorithmic Approaches for Transportation Modeling, Optimization, and Systems (ATMOS).

The European Symposium on Algorithms covers research in the design, use, and analysis of efficient algorithms and data structures. As in previous years, the symposium had two tracks: the Design and Analysis Track and the Engineering and Applications Track, each with its own Program Committee. In total 245 papers adhering to the submission guidelines were submitted. Each paper was reviewed by three or four referees. Based on the reviews and the often extensive electronic discussions following them, the committees selected 66 papers in total: 56 (out of 206) for the Design and Analysis Track and 10 (out of 39) for the Engineering and Applications Track. We believe that these papers together made up a strong and varied program, showing the depth and breadth of current algorithms research.

Three papers deserve special mention: the papers "When LP Is the Cure for Your Matching Woes: Improved Bounds for Stochastic Matchings" by N. Bansal, A. Gupta, J. Li, J. Mestre, V. Nagarajan, and A. Rudra and "Feasibility Analysis of Sporadic Real-Time Multiprocessor Task Systems" by V. Bonifaci and A. Marchetti-Spaccamela, which won the award for the best paper, and the paper "Shortest Paths in Planar Graphs with Real Lengths in O(n log² n / log log n) Time" by S. Mozes and C. Wulff-Nilsen, which won the award for the best student paper. We congratulate the authors on this success.

ESA 2010 was sponsored by the European Association for Theoretical Computer Science, the International Society of Computational Geometry, the London Mathematical Society, Springer, and the University of Liverpool. Besides the sponsors, we also wish to thank the people behind the EasyChair conference system; using their wonderful system saved us an enormous amount of work during the whole process. Finally, we thank all authors who submitted their work to ESA 2010, all Program Committee members for their hard work, and all reviewers who helped the Program Committees in evaluating the submitted papers. We hope the readers will find the papers in these proceedings instructive and enjoyable.
July 2010
Mark de Berg Ulrich Meyer
Organization
Program Committee

Design and Analysis Track
Mark de Berg (Chair), TU Eindhoven, The Netherlands
Hans Bodlaender, Utrecht University, The Netherlands
Peter Bro Miltersen, Aarhus University, Denmark
Sergio Cabello, University of Ljubljana, Slovenia
Kenneth L. Clarkson, IBM Almaden, USA
Khaled Elbassioni, MPI Saarbrücken, Germany
Leah Epstein, University of Haifa, Israel
Leszek Gąsieniec, University of Liverpool, UK
Roberto Grossi, Università di Pisa, Italy
Michael Kaufmann, Universität Tübingen, Germany
Samir Khuller, University of Maryland, USA
Mikko Koivisto, University of Helsinki, Finland
Sylvain Lazard, INRIA Nancy Grand Est, France
Mohammad Mahdian, Yahoo! Research, USA
S. Muthu Muthukrishnan, Rutgers University & Google, USA
Petra Mutzel, TU Dortmund, Germany
Leen Stougie, VU and CWI Amsterdam, The Netherlands
Yusu Wang, Ohio State University, USA
Christos Zaroliagis, CTI and University of Patras, Greece
Engineering and Applications Track
András Benczúr, Hungarian Academy of Sciences, Hungary
Gerth Brodal, Aarhus University, Denmark
Peter Eades, University of Sydney, Australia
Lars Engebretsen, Google Zürich, Switzerland
Andrew Goldberg, Microsoft Research, USA
Gunnar Klau, CWI Amsterdam, The Netherlands
Kishore Kothapalli, IIIT Hyderabad, India
Stefano Leonardi, La Sapienza University, Rome, Italy
Ulrich Meyer (Chair), Goethe University Frankfurt, Germany
Marina Papatriantafilou, Chalmers University, Sweden
Sylvain Pion, INRIA Sophia Antipolis - Méditerranée, France
Anita Schöbel, University of Göttingen, Germany
Laura Toma, Bowdoin College, USA
Prudence Wong, University of Liverpool, UK
Norbert Zeh, Dalhousie University, Canada
Organizing Committee

The Organizing Committee from the University of Liverpool consisted of: Andrew Collins, Leszek Gąsieniec (Chair), Russell Martin, Igor Potapov, Thelma Williams, and Prudence Wong.
Referees Ittai Abraham Louigi Addario-Berry Isolde Adler Deepak Ajwani Marjan van den Akker Saeed Alaei Aris Anagnostopoulos Spyros Angelopoulos Elliot Anshelevich Sunil Arya Dominique Attali Evripidis Bampis Nikhil Bansal Jérémy Barbay Andreas Beckmann Anton Belov Oren Ben-Zwi Sergey Bereg Hoda Bidkhori Philip Bille Vincenzo Bonifaci Ilaria Bordino Prosenjit Bose David Bremner Patrick Briest Andrej Brodnik Costas Busch Sebastian Böcker Saverio Caminiti Stefan Canzar Alberto Caprara Ioannis Caragiannis Manuel Caroli
Daniel Cederman Ho-Leung Chan T-H. Hubert Chan Timothy Chan Frédéric Chazal Jianer Chen Ning Chen Siu-Wing Cheng Otfried Cheong Flavio Chierichetti Giorgos Christodoulou Ferdinando Cicalese Raphael Clifford David Cohen-Steiner Éric Colin de Verdière Atlas F. Cook IV José R. Correa Ovidiu Daescu Peter Damaschke Atish Das Sarma Pooya Davoodi Pedro M. M. de Castro Daniel Delling Camil Demetrescu Tamal Dey Yuanan Diao Martin Dietzfelbinger Thomas C. van Dijk Shahar Dobzinski Benjamin Doerr Vida Dujmovic Laurent Dupont Steph Durocher
Christoph Dürr Alon Efrat Edith Elkind Amr Elmasry David Eppstein Funda Ergun Thomas Erlebach Claus Ernst William Evans Hazel Everett Angelo Fanelli Mohammad Farshi Sandor Fekete Henning Fernau Paolo Ferragina Irene Finocchi Johannes Fischer Rudolf Fleischer Fedor Fomin Dimitris Fotakis Nikolaos Fountoulakis Kimmo Fredriksson Tom Friedetzky Zachary Friggstad Zhang Fu Stanley Fung Stefan Funke Hal Gabow Bernd Gärtner Frantisek Galcik Iftah Gamzu Jie Gao William Gasarch
Georgios Georgiadis Loukas Georgiadis Arpita Ghosh Panos Giannopoulus Matt Gibson Anders Gidenstam Joan Glaunes Marc Glisse Michael Gnewuch Xavier Goaoc Peter Gottschling Vineet Goyal Fabrizio Grandoni Alexander Grigoriev Gaël Guennebaud Jiong Guo Carsten Gutwenger Nima Haghpanah M.T. Hajiaghayi Olaf Hall-Holt K. Arnsfelt Hansen T. Dueholm Hansen Sariel Har-Peled David Hartvigsen Rafael Hassin Pinar Heggernes Danny Hermelin John Hershberger Martin Hoefer Wing Kai Hon Han Hoogeveen Chien-Chung Huang Thore Husfeldt Falk Hüffner Csanád Imreh Kazuo Iwama Satoru Iwata Bart Jansen Bin Jiang Satyen Kale Marcin Kaminski Sanjiv Kapoor Chinmay Karande Petteri Kaski Matthew Katz
Steven Kelk David Kempe Elena Kleiman Karsten Klein Christian Knauer Stavros Kolliopoulos Spyros Kontogiannis Amos Korman Guy Kortsarz Nitish Korula Adrian Kosowski Annamária Kovács Richard Kralovic Dieter Kratsch Stefan Kratsch Stephan Kreutzer Nils Kriege Sven Krumke Piyush Kumar Juha Kärkkäinen Stefan Langerman Kasper Dalgaard Larsen Francis Lazarus Lap-Kei Lee Joshua Letchford Asaf Levin Joshua Levine Maarten Löffler Jian Li Christian Liebchen Daniel Lokshtanov Zvi Lotker Anna Lubiw Tamas Lukovszki Meena Mahajan Kazuhisa Makino Johann Makowsky David Malec Azarakhsh Malekian Sven Mallach David Manlove Bodo Manthey A. Marchetti-Spaccamela Vangelis Markakis Russell Martin
Dániel Marx Nicole Megow Julian Mestre Aranyak Mehta Pauli Miettinen Matus Mihalak Matthias Mnich Bojan Mohar Ankur Moitra Pat Morin Gabriel Moruz David Mount M. Müller-Hannemann Tobias Muller Wolfgang Mulzer Veli Mäkinen Thomas Mølhave Seffi Naor Giri Narasimhan Hariharan Narayanan Gonzalo Navarro Hamid Nazerzadeh Frank Neumann Alantha Newman Ilan Newman Hung Ngo Kim Thang Nguyen Rolf Niedermeier Nicolas Nisse Marc Noy Krzysztof Onak Jim Orlin Rasmus Pagh K. Panagiotou Gyula Pap Gregor Pardella Kunsoo Park Britta Peis Rudi Pendavingh Michal Penn Xavier Pennec Marko Petkovšek Jeff Phillips Greg Plaxton Valentin Polishchuk
Matthias Poloczek Laura Poplawski-Ma Marc Pouget E. Pountourakis Kirk Pruhs Geppo Pucci Yuri Rabinov Luis Rademacher Tomasz Radzik Harald Räcke Arash Rafiey Balaji Raghavachari S. Raghavan M. Sohel Rahman Rajmohan Rajaraman Rajiv Raman Jörg Rambau Pasi Rastas Imran Rauf Dror Rawitz Saurabh Ray Peter Reiter Liam Roditty Dana Ron Johan M. M. van Rooij Adi Rosén Günter Rote Thomas Rothvoß Kunihiko Sadakane Barna Saha Saket Saurabh Rahul Savani Francesco Scarcello Guido Schäfer Elad Michael Schiller
Florian Schoppmann Anna Schulze Celine Scornavacca Danny Segev C. Seshadhri Jiří Sgall Hadas Shachnai Rahul Shah Mordechai Shalom Jessica Sherette Junghwan Shin Somnath Sikdar Rodrigo Silveira Amitabh Sinha René Sitters Alexander Skopalik Michiel Smid Andreas Spillner Yannis Stamatiou Ulrike Stege David Steurer Håkan Sundell Wing-Kin Sung Rob van Stee Jukka Suomela Zoya Svitkina Tami Tamir Siamak Tazari Orestis Telelis Kavitha Telikepalli Dimitrios Thilikos Mikkel Thorup Hans Raj Tiwary Kostas Tsichlas Elias Tsigaridas
Andy Twigg George Tzoumas Steve Uhlig Takeaki Uno Jan Vahrenhold Gabriel Valiente Sergei Vassilvitskii Gert Vegter Marinus Veldhorst S. Venkatasubramanian Angelina Vidali Yngve Villanger Niko Välimäki Uli Wagner Tomasz Walen Haitao Wang Lei Wang Volker Weichert Oren Weimann Renato Werneck Matthias Westermann Christopher Whidden Peter Widmayer Ryan Williams Gerhard J. Woeginger Nicola Wolpert Hoi-Ming Wong Lirong Xia Qiqi Yan Neal Young Mariette Yvinec Bernd Zey Lintao Zhang Yong Zhang Binhai Zhu
Table of Contents – Part II
Invited Talk Data Structures: Time, I/Os, Entropy, Joules! . . . . . . . . . . . . . . . . . . . . . . . Paolo Ferragina
1
Session 8a Weighted Congestion Games: Price of Anarchy, Universal Worst-Case Examples, and Tightness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Kshipra Bhawalkar, Martin Gairing, and Tim Roughgarden
17
Computing Pure Nash and Strong Equilibria in Bottleneck Congestion Games . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Tobias Harks, Martin Hoefer, Max Klimm, and Alexander Skopalik
29
Combinatorial Auctions with Verification Are Tractable . . . . . . . . . . . . . . . Piotr Krysta and Carmine Ventre
39
How to Allocate Goods in an Online Market? . . . . . . . . . . . . . . . . . . . . . . . . Yossi Azar, Niv Buchbinder, and Kamal Jain
51
Session 8b Fr´echet Distance of Surfaces: Some Simple Hard Cases . . . . . . . . . . . . . . . . Kevin Buchin, Maike Buchin, and Andr´e Schulz
63
Geometric Algorithms for Private-Cache Chip Multiprocessors . . . . . . . . . Deepak Ajwani, Nodari Sitchinava, and Norbert Zeh
75
Volume in General Metric Spaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Ittai Abraham, Yair Bartal, Ofer Neiman, and Leonard J. Schulman
87
Shortest Cut Graph of a Surface with Prescribed Vertex Set . . . . . . . . . . Éric Colin de Verdière
100
Session 9a Induced Matchings in Subcubic Planar Graphs . . . . . . . . . . . . . . . . . . . . . . Ross J. Kang, Matthias Mnich, and Tobias M¨ uller
112
Robust Matchings and Matroid Intersections . . . . . . . . . . . . . . . . . . . . . . . . Ryo Fujita, Yusuke Kobayashi, and Kazuhisa Makino
123
A 25/17-Approximation Algorithm for the Stable Marriage Problem with One-Sided Ties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Kazuo Iwama, Shuichi Miyazaki, and Hiroki Yanagisawa Strongly Stable Assignment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Ning Chen and Arpita Ghosh
135 147
Session 9b Data Structures for Storing Small Sets in the Bitprobe Model . . . . . . . . . Jaikumar Radhakrishnan, Smit Shah, and Saswata Shannigrahi On Space Efficient Two Dimensional Range Minimum Data Structures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Gerth Stølting Brodal, Pooya Davoodi, and S. Srinivasa Rao
159
171
Pairing Heaps with Costless Meld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Amr Elmasry
183
Top-k Ranked Document Search in General Text Databases . . . . . . . . . . . J. Shane Culpepper, Gonzalo Navarro, Simon J. Puglisi, and Andrew Turpin
194
Best-Paper Session Shortest Paths in Planar Graphs with Real Lengths in O(n log² n / log log n) Time . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Shay Mozes and Christian Wulff-Nilsen When LP Is the Cure for Your Matching Woes: Improved Bounds for Stochastic Matchings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Nikhil Bansal, Anupam Gupta, Jian Li, Julián Mestre, Viswanath Nagarajan, and Atri Rudra
206
218
Feasibility Analysis of Sporadic Real-Time Multiprocessor Task Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Vincenzo Bonifaci and Alberto Marchetti-Spaccamela
230
Author Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
243
Table of Contents – Part I
Invited Talk The Robustness of Level Sets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Paul Bendich, Herbert Edelsbrunner, Dmitriy Morozov, and Amit Patel
1
Session 1a Solving an Avionics Real-Time Scheduling Problem by Advanced IP-Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Friedrich Eisenbrand, Karthikeyan Kesavan, Raju S. Mattikalli, Martin Niemeier, Arnold W. Nordsieck, Martin Skutella, José Verschae, and Andreas Wiese
11
Non-clairvoyant Speed Scaling for Weighted Flow Time . . . . . . . . . . . . . . . Sze-Hang Chan, Tak-Wah Lam, and Lap-Kei Lee
23
A Robust PTAS for Machine Covering and Packing . . . . . . . . . . . . . . . . . . Martin Skutella and José Verschae
36
Session 1b Balancing Degree, Diameter and Weight in Euclidean Spanners . . . . . . . . Shay Solomon and Michael Elkin
48
Testing Euclidean Spanners . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Frank Hellweg, Melanie Schmidt, and Christian Sohler
60
Fast Approximation in Subspaces by Doubling Metric Decomposition . . . Marek Cygan, Lukasz Kowalik, Marcin Mucha, Marcin Pilipczuk, and Piotr Sankowski
72
f -Sensitivity Distance Oracles and Routing Schemes . . . . . . . . . . . . . . . . . . Shiri Chechik, Michael Langberg, David Peleg, and Liam Roditty
84
Session 2a Fast Minor Testing in Planar Graphs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Isolde Adler, Frederic Dorn, Fedor V. Fomin, Ignasi Sau, and Dimitrios M. Thilikos
97
On the Number of Spanning Trees a Planar Graph Can Have . . . . . . . . . . Kevin Buchin and André Schulz
110
Contractions of Planar Graphs in Polynomial Time . . . . . . . . . . . . . . . . . . . Marcin Kamiński, Daniël Paulusma, and Dimitrios M. Thilikos
122
Session 2b Communication Complexity of Quasirandom Rumor Spreading . . . . . . . . Petra Berenbrink, Robert Elsässer, and Thomas Sauerwald A Complete Characterization of Group-Strategyproof Mechanisms of Cost-Sharing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Emmanouil Pountourakis and Angelina Vidali Contribution Games in Social Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Elliot Anshelevich and Martin Hoefer
134
146 158
Session 3a Improved Bounds for Online Stochastic Matching . . . . . . . . . . . . . . . . . . . . Bahman Bahmani and Michael Kapralov
170
Online Stochastic Packing Applied to Display Ad Allocation . . . . . . . . . . . Jon Feldman, Monika Henzinger, Nitish Korula, Vahab S. Mirrokni, and Cliff Stein
182
Caching Is Hard – Even in the Fault Model . . . . . . . . . . . . . . . . . . . . . . . . . Marek Chrobak, Gerhard J. Woeginger, Kazuhisa Makino, and Haifeng Xu
195
Session 3b Superselectors: Efficient Constructions and Applications . . . . . . . . . . . . . . Ferdinando Cicalese and Ugo Vaccaro Estimating the Average of a Lipschitz-Continuous Function from One Sample . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Abhimanyu Das and David Kempe Streaming Graph Computations with a Helpful Advisor . . . . . . . . . . . . . . . Graham Cormode, Michael Mitzenmacher, and Justin Thaler
207
219 231
Session 4a Algorithms for Dominating Set in Disk Graphs: Breaking the log n Barrier . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Matt Gibson and Imran A. Pirwani Minimum Vertex Cover in Rectangle Graphs . . . . . . . . . . . . . . . . . . . . . . . . Reuven Bar-Yehuda, Danny Hermelin, and Dror Rawitz
243 255
Feedback Vertex Sets in Tournaments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Serge Gaspers and Matthias Mnich
267
Session 4b n-Level Graph Partitioning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Vitaly Osipov and Peter Sanders Fast Routing in Very Large Public Transportation Networks Using Transfer Patterns . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Hannah Bast, Erik Carlsson, Arno Eigenwillig, Robert Geisberger, Chris Harrelson, Veselin Raychev, and Fabien Viger Finding the Diameter in Real-World Graphs: Experimentally Turning a Lower Bound into an Upper Bound . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Pierluigi Crescenzi, Roberto Grossi, Claudio Imbrenda, Leonardo Lanzi, and Andrea Marino
278
290
302
Session 5a Budgeted Red-Blue Median and Its Generalizations . . . . . . . . . . . . . . . . . . MohammadTaghi Hajiaghayi, Rohit Khandekar, and Guy Kortsarz All Ternary Permutation Constraint Satisfaction Problems Parameterized above Average Have Kernels with Quadratic Numbers of Variables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Gregory Gutin, Leo van Iersel, Matthias Mnich, and Anders Yeo Strong Formulations for the Multi-module PESP and a Quadratic Algorithm for Graphical Diophantine Equation Systems . . . . . . . . . . . . . . . Laura Galli and Sebastian Stiller Robust Algorithms for Sorting Railway Cars . . . . . . . . . . . . . . . . . . . . . . . . Christina B¨ using and Jens Maue
314
326
338 350
Session 5b Cloning Voronoi Diagrams via Retroactive Data Structures . . . . . . . . . . . . Matthew T. Dickerson, David Eppstein, and Michael T. Goodrich
362
A Unified Approach to Approximate Proximity Searching . . . . . . . . . . . . . Sunil Arya, Guilherme D. da Fonseca, and David M. Mount
374
Spatio-temporal Range Searching over Compressed Kinetic Sensor Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Sorelle A. Friedler and David M. Mount
386
Constructing the Exact Voronoi Diagram of Arbitrary Lines in Three-Dimensional Space: with Fast Point-Location . . . . . . . . . . . . . . . . . . Michael Hemmer, Ophir Setter, and Dan Halperin
398
Invited Talk Local Graph Exploration and Fast Property Testing . . . . . . . . . . . . . . . . . . Artur Czumaj
410
Session 6a A Fully Compressed Algorithm for Computing the Edit Distance of Run-Length Encoded Strings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Kuan-Yu Chen and Kun-Mao Chao Fast Prefix Search in Little Space, with Applications . . . . . . . . . . . . . . . . . Djamal Belazzougui, Paolo Boldi, Rasmus Pagh, and Sebastiano Vigna On the Huffman and Alphabetic Tree Problem with General Cost Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Hiroshi Fujiwara and Tobias Jacobs Medium-Space Algorithms for Inverse BWT . . . . . . . . . . . . . . . . . . . . . . . . . Juha K¨ arkk¨ ainen and Simon J. Puglisi
415 427
439 451
Session 6b Median Trajectories . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Kevin Buchin, Maike Buchin, Marc van Kreveld, Maarten L¨ offler, Rodrigo I. Silveira, Carola Wenk, and Lionov Wiratma
463
Optimal Cover of Points by Disks in a Simple Polygon . . . . . . . . . . . . . . . . Haim Kaplan, Matthew J. Katz, Gila Morgenstern, and Micha Sharir
475
Stability of ε-Kernels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Pankaj K. Agarwal, Jeff M. Phillips, and Hai Yu
487
The Geodesic Diameter of Polygonal Domains . . . . . . . . . . . . . . . . . . . . . . . Sang Won Bae, Matias Korman, and Yoshio Okamoto
500
Session 7a Polyhedral and Algorithmic Properties of Quantified Linear Programs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Ulf Lorenz, Alexander Martin, and Jan Wolf
512
Approximating Parameterized Convex Optimization Problems . . . . . . . . . Joachim Giesen, Martin Jaggi, and Sören Laue
524
Approximation Schemes for Multi-Budgeted Independence Systems . . . . . Fabrizio Grandoni and Rico Zenklusen
536
Session 7b Algorithmic Meta-theorems for Restrictions of Treewidth . . . . . . . . . . . . . Michael Lampis Determining Edge Expansion and Other Connectivity Measures of Graphs of Bounded Genus . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Viresh Patel
549
561
Constructing the R* Consensus Tree of Two Trees in Subcubic Time . . . Jesper Jansson and Wing-Kin Sung
573
Author Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
585
Data Structures: Time, I/Os, Entropy, Joules!
Paolo Ferragina
Dipartimento di Informatica, Università di Pisa, Italy
1 The Scenario
Data volumes are exploding as organizations and users collect and store increasing amounts of information, for their own use and for sharing it with others. To cope with these large datasets, software developers typically take advantage of faster and faster I/O subsystems and multi-core processors, and/or they exploit virtual memory to make the caching and delivery of the data requested by their algorithms simple and effective whenever the working set is small. Sometimes they gain an additional speed-up by reducing the storage usage of their algorithms, because this impacts on the number of machines/disks required for a given computation and on the amount of data that is transferred to, and cached in, the faster memory levels closer to the CPUs. However, it is well known in both the algorithm and software-engineering communities that the principled exploitation of all these issues, via a proper arrangement of the data and a properly structured algorithmic computation, can abundantly surpass the best expected technology advancements and the help coming from (sophisticated) operating systems or heuristics.

As a result, data compression and indexing nowadays play a key role in the design of modern algorithms for applications that manage massive datasets. But their combination, to be effective, is not easy, for three main reasons:

– Each memory level (cache, DRAM, disk, ...) has its own cost, capacity, latency and bandwidth, so accessing data in the memory levels closer to the CPUs is orders of magnitude faster than accessing data at the last levels. Therefore, space-efficient algorithms and data structures should be I/O-conscious and deploy (temporal and spatial) locality of reference as a key principle in their design.
– Compressed space typically comes at a cost, namely compression and/or decompression time, so compression should be plugged into algorithms and data structures without impairing the efficiency of their operations.
– Data compression and indexing seem "opposite approaches": the former aims at removing data redundancies, whereas the latter introduces extra data in the index to support faster operations.

So it is not surprising that, until recently, algorithm and software designers were faced with a dilemma: achieve either efficient compression at the cost of slow operations, or vice versa.
Partially supported by Yahoo! Research, FIRB Linguistica 2006, and PRIN MadAlgo.
This dichotomy was successfully addressed starting from the year 2000 [41], thanks to various scientific achievements which showed how to relate Information Theory with String-Matching concepts, in such a way that the index regularities showing up when data is compressible are discovered and exploited to reduce index occupancy without impairing query efficiency (see the surveys [72,58] and references therein). The net result has been the design of compressed data structures for indexing texts (aka compressed indexes, or compressed and searchable data formats) that take space close to the k-th order entropy of the input text, and support powerful substring queries and the extraction of arbitrary portions of the data in time close to that required by (optimal) uncompressed indexes. Given this latter feature, these data structures are sometimes called self-indexes.

Originally designed for string data, compressed indexes have recently been extended to deal with other data types, such as sequences (see e.g. [43,49,75,55,51]), dictionaries (see e.g. [48,38,56]), unlabeled trees (see e.g. [15,59,31,79]), labeled trees (see e.g. [40,47]), graphs (see e.g. [23,30,26]), binary relations and permutations (see e.g. [11,9]), and many others. (It goes without saying that some citations are missing, mainly the older ones; the cited papers provide a good seed for browsing backward through the significant literature.) The consequence of this impressive flow of results is that, nowadays, it is known how to index almost any (complex) data type in compressed space and support various kinds of queries over it fast. From a theoretical point of view, and as far as the RAM model is concerned, the above dichotomy may be considered successfully addressed!

This is, in a nutshell, the current scenario of compressed data structures. Space and time do not allow me to go into the technical details of these solutions, and indeed this is not the goal of this paper, which rather aims at raising some questions from the theoretical and the engineering sides of algorithmic investigation, as well as at depicting a few novel scenarios that I think algorithm researchers should be aware of and, hopefully, attack soon!
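To make the self-index interface concrete, here is a minimal sketch in Python. It exposes the three canonical operations (count, locate, extract) but backs them with a plain suffix array and an explicit copy of the text, so it illustrates only the API of an FM-index-like structure, not its entropy-bounded space; the class and method names are illustrative and do not come from any specific library.

```python
import bisect

class SelfIndexSketch:
    """Toy stand-in for the API of a compressed self-index.

    A real self-index answers the same three queries while storing the text in
    space close to its k-th order entropy; this class keeps everything
    uncompressed and is meant only for small illustrative inputs.
    """
    def __init__(self, text: str):
        self._text = text                                   # a real self-index needs no copy of T
        self._sa = sorted(range(len(text)), key=lambda i: text[i:])
        self._suffixes = [text[i:] for i in self._sa]       # toy search structure

    def _interval(self, pattern: str):
        # Suffixes prefixed by `pattern` form a contiguous run in sorted order.
        lo = bisect.bisect_left(self._suffixes, pattern)
        hi = lo
        while hi < len(self._suffixes) and self._suffixes[hi].startswith(pattern):
            hi += 1
        return lo, hi

    def count(self, pattern: str) -> int:
        lo, hi = self._interval(pattern)
        return hi - lo

    def locate(self, pattern: str):
        lo, hi = self._interval(pattern)
        return sorted(self._sa[lo:hi])

    def extract(self, start: int, length: int) -> str:
        # A self-index can reconstruct any substring of the indexed text.
        return self._text[start:start + length]

idx = SelfIndexSketch("mississippi")
print(idx.count("ssi"), idx.locate("ssi"), idx.extract(3, 4))   # 2 [2, 5] siss
```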
2 On the Engineering of Compressed Data Structures
We think that the theory of compressed indexes is mature enough to ask ourselves whether and how it can be a breakthrough for compressed data storage, as suggested in [83]. In this section we argue about this by commenting on some recent achievements in the algorithm-engineering realm, and by pointing out some new directions of research that are worthy of attention.

Last year we described in [36] the Pizza&Chili effort (see http://pizzachili.di.unipi.it), joint between the University of Pisa and the University of Chile, consisting of a set of tuned implementations of the most successful compressed text indexes, together with effective test-beds and scripts for their automatic validation and testing. These indexes, all adhering to a standardized API, were extensively tested on various datasets, showing that they take roughly the same space used by traditional compressors (like gzip and bzip2), with the additional feature of being able to extract arbitrary portions of compressed data at a speed of 2-4 MB/sec, and to search for arbitrary patterns (hence, not necessarily words) at a few μsec per occurrence. The searching capability over compressed data is clearly novel, very powerful, and well documented in [36]. So, in what follows, we comment on the interesting random-access capability to compressed data that this technology also offers and that can be the backbone of advanced (compressed) storage systems.

A blind comparison would state that the decompression speed of compressed indexes is much slower than the one achievable by gunzipping the compressed data, given that the latter proceeds at hundreds of MBs/sec. But this observation does not take into account the fact that a gzip-based storage scheme must operate in chunks whose size impacts on the final compression ratio and on the actual decompression speed: the larger the chunk size, the better the compression ratio, but the slower the actual decompression speed when "short" substrings have to be extracted! Conversely, the decompression speed of these novel compressed formats depends linearly on the length of the extracted substring, and not on the size of the whole data upon which the compressed index is built. Hence, compressed indexes might be competitive in scenarios where short records have to be retrieved but large chunks are needed to achieve effective compression ratios (such as with blogs, emails, tweets, etc.).
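The chunk-size tradeoff just described is easy to make tangible. The following sketch (with zlib standing in for the dictionary-based compressors discussed below; function names are illustrative and not taken from any existing storage system) packs records into fixed-size chunks and shows why fetching one short record forces the decompression of its entire chunk.

```python
import zlib

def build_store(records, chunk_size):
    """Pack records into chunks of roughly chunk_size bytes, compress each chunk
    with zlib, and keep a small directory (chunk id, offset, length) per record."""
    chunks, directory, buf = [], [], bytearray()
    for rec in records:
        if buf and len(buf) + len(rec) > chunk_size:
            chunks.append(zlib.compress(bytes(buf), 9))
            buf = bytearray()
        directory.append((len(chunks), len(buf), len(rec)))
        buf.extend(rec)
    if buf:
        chunks.append(zlib.compress(bytes(buf), 9))
    return chunks, directory

def get_record(chunks, directory, rid):
    cid, off, length = directory[rid]
    # The whole chunk must be decompressed even to read one record: larger
    # chunks mean a better compression ratio but more wasted decoding work
    # per point access.
    return zlib.decompress(chunks[cid])[off:off + length]
```

Growing chunk_size improves the overall compression ratio of the store while increasing the amount of data decoded per get_record call, which is exactly the tension between gzip-style storage schemes and the per-substring decoding of compressed indexes.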
Recently, [42] investigated this question in the context of a Web-storage system which must support fast access to the individual pages of a large compressed collection (of a few TBs), distributed among the DRAMs of many PCs. This question is particularly challenging because nowadays it is known how to effectively compress the most basic components of a search engine, i.e. the posting lists and the hyperlinked structure of the Web (see e.g. [85,17,84]), but not much is known about the compression of the raw Web pages which are needed by modern search engines to support two precious functions: snippet retrieval (showing the context of a user query within the result pages) and cached links (showing the page as it was crawled by the search engine). Known results are scattered over multiple papers which either do not consider large datasets [33], or refer to simple compressors (e.g. [84]), or do not provide the fine details of the proposed solutions (as in the Google BigTable paper [20]). In all these cases the compression performance is far from what we could achieve with state-of-the-art compressors and compressed indexes.

In [42] we made a first step in this direction by experimentally comparing the random-access capability of dictionary-based compressors (which constitute the choice of modern Web-storage systems, see [27,20]) versus the novel compressed-index technology. The results were somewhat surprising: modern dictionary-based compressors, such as lzma, achieve a 5-8% compression ratio by using chunks of a few MBs (thus much less than the folklore rule of 20% when gzipping single pages) and a significant decompression throughput of a few hundred MBs/sec; on the other hand, compressed indexes allow the use of larger chunk sizes (up to 200 MBs), but their compression ratio is about 8 times worse than lzma, and their decoding speed is two orders of magnitude slower! This seems to be a negative result for this algorithmic technology when tested in the wild. However, there are a few subtle comments that I wish to point out in order to spur, hopefully, new investigations. (The reader should recall that compressed indexes offer search capabilities which are absent in classic compressors, such as lzma, gzip, bzip2, etc. So the previous comparison penalizes the compressed indexes but, nevertheless, it is useful for raising some interesting questions.)

First, on arbitrarily permuted pages dictionary-based compressors get significantly worse both in compression ratio and in decompression speed, whereas compressed indexes offer stable performance thanks to the longer chunks they may compress and index. This robustness is particularly useful when the Web-storage system cannot change the ordering of its pages, or when no "useful ordering" is known for the indexed items (such as in textual DBs or tweets). Second, compressed indexes are slower in decompression speed but they must process only the requested item, whereas dictionary-based compressors must decompress the entire chunk containing that item: a simple back-of-the-envelope calculation shows that the throughput is in favor of compressed indexes whenever items are of a few KBs and chunks of a few MBs, since in that regime the chunk-based scheme decodes roughly a thousand times more data than requested, which cancels its raw-speed advantage. Third, even if competitive in speed, current implementations of compressed indexes are not competitive in compression ratio. This is somewhat surprising because the compressor underlying those indexes is the state of the art in theory [35,33] and achieves a 4% compression ratio on Web data, which is much lower than the 40% reported in [42] for the compressed indexes. This means that the additional information kept by those indexes is not very compressed: the sub-linear terms summed to the k-th order entropy costs are not negligible, indeed.

As a consequence of these observations and experimental results we foresee, on the one hand, an algorithm-design effort in reducing those terms (as pursued e.g. in [75,55,68], provided that this is possible [53,54]) and, on the other hand, an algorithm-engineering effort in improving the implementations available at the Pizza&Chili site, possibly extending them to SSDs, multi-cores and GPUs. We believe that these actions are needed in order to successfully turn the compressed-index technology into an effective algorithmic tool for modern applications. (A comment is in order at this point: this "limitation" did not prevent the technology from being applied successfully in bio-informatics, as witnessed by the numerous papers building upon Pizza&Chili (see e.g. [63,64,65] and references therein). The reason is the "small/medium" compression ratio one can achieve on collections of DNA or protein sequences, which somehow hides these inefficiencies.) We also believe that the simple heuristics proposed in [20,27], which use fixed-length chunks for gzip-based compressors, are too elementary and non-optimal; so, for this issue, we foresee the design of new approaches that define variable-length chunks (such as [45]) or novel LZ-based parsings that achieve controlled space/time (de)compression bounds. Sect. 4 will address this question in more depth.

As a second, more ambitious algorithm-engineering challenge, we foresee the design of a library of compressed data structures, built as an inspired mix of the LEDA and Pizza&Chili experiences, which should offer a large spectrum of approaches and (time/compression) trade-offs, adhere to a user-friendly API,
and possibly take advantage of the deep experimental effort pursued recently on a wide spectrum of compressed data types, such as arrays [80,28,74,24], dictionaries [48], trees [39,7], and graphs [26], just to cite a few. This library would also play an important role in the mobile setting, as a backbone for developing applications which optimize both MIPS and joules, and thus achieve an effective combination of end-user satisfaction and battery saving, as we will comment on in Sect. 5. We are actually working in this direction!
3 On I/O-Conscious Compressed Data Structures, and Beyond
Compressed indexes are mainly designed to work in the RAM model and thus elicit many I/O faults when executed in a hierarchical memory. To make compressed-index technology effective in practice, we need to simultaneously achieve I/O efficiency in the search and (de)compression operations plus entropy bounds in space occupancy. This is a hot research topic with some preliminary contributions which open promising scenarios. We will not go into the technical details of those issues, because the reader can find an overview of the results in [58] (see also [8,38,32]); rather, we prefer to sketch some questions that we believe deserve attention in the near future.

Apart from a few attempts made in recent years to turn compressed indexes into I/O-aware compressed data structures, researchers were able only recently to make a significant step in discerning the interplay between the I/O bottleneck and data compression. The most notable example is a new transform, called the geometric Burrows-Wheeler transform (shortly GBWT [21]), that converts a set of 2D or 3D points into a text, and vice versa. Such a transformation shows a two-way connection between problems in text indexing and in orthogonal range searching, the latter being a well-studied problem in the external-memory setting and in terms of lower bounds [3]. The final result shows that the GBWT takes the space of the indexed text and achieves an I/O query bound identical to that of four-sided 2D range search. If space occupancy is a primary concern, then the GBWT can be further sparsified via a variable-length blocking scheme that achieves locality for good I/O performance, and the classic k-th order entropy bound in space [57]. However, it seems that if one wants to use non-replicating data structures, then the optimal query bound of O(log_B n + occ/B) I/Os, achievable by uncompressed indexes [37,14], cannot be matched in the compressed setting, because of the relation with geometric range queries [3].

Given these negative premises, we foresee two alternative scenarios for improving compressed indexes: one is related to the use of solid-state disks (SSDs), which offer faster random reads at the cost of slower random writes; the other aims at deploying knowledge about the query distribution in order to make compressed indexes weighted or self-adjusting, and thus speed up frequent queries and slow down the infrequent ones in a controlled way (rather than delegating this to the underlying OS). The first issue has received some attention recently, and a few SSD-aware data structures are currently known (see e.g. [4,66,71]). We believe that our experiments in [42], and the efficient performance of compressed
indexes in the RAM model, deserve more research on the combination of SSDs and compressed indexes. On the other hand, although the (self-)adjusting concept is well known to the algorithmic community, here it becomes particularly challenging because, in the past, data structures exposing this feature were implemented via pointers, which ease the re-organization of the data as queries are executed. This approach is particularly difficult to implement in the compression context: how can we self-adjust the data, avoid pointers when moving it around, and still be effective in compression? A result in [50] shows that it is possible to design a weighted-like compressed index that achieves minimum average retrieval time for the occurrences of a pattern, provided that a query distribution is given and a space constraint is imposed in advance by the user on the bit occupancy of the final compressed index (see also the concept of multi-objective optimization in Sect. 4). This is clearly a preliminary result, but nonetheless very promising!

All these results can also be read from another angle, namely that new forms of scientific hybridization, or transversal contributions, are foreseeable to make progress in data-structure design. The pioneering results in [41] showed how to relate Information Theory with String-Matching concepts in order to design the first compressed index. These ideas then reached a high level of sophistication with the numerous scientific results that followed and overflowed this area, achieving impacts which go well beyond the String-Matching field. Now, the results in [21,57,22] have added Computational Geometry to the game, allowing progress on the I/O efficiency of compressed (text) indexes as well as the first significant I/O limitations on the use of this algorithmic technology in a two-level memory setting. It is therefore natural to ask: which other algorithmic fields may allow us to progress and/or plug new features into (I/O-)compressed data structures? An answer to this question can possibly come from Graph Theory, as we have preliminarily shown in [46,45] and as we comment in the next section.
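Returning to the query-distribution idea above, the following sketch conveys only its flavor and is emphatically not the weighted compressed index of [50]: it layers a frequency-aware result cache, bounded by a user-chosen space budget, on top of any index exposing a locate method (for instance the SelfIndexSketch shown earlier). All names and the cost model are invented for the example.

```python
from collections import Counter

class QueryAdaptiveWrapper:
    """Crude illustration of exploiting a query distribution: cache the answers
    of frequently asked patterns within a fixed space budget.  A principled
    weighted index would instead bake the distribution into the index layout
    itself, under a bit-occupancy constraint."""
    def __init__(self, index, budget_bytes):
        self.index, self.budget = index, budget_bytes
        self.freq, self.cache, self.used = Counter(), {}, 0

    def locate(self, pattern):
        self.freq[pattern] += 1
        if pattern in self.cache:
            return self.cache[pattern]              # frequent query: no index work
        occ = self.index.locate(pattern)
        cost = 8 * len(occ) + len(pattern)          # rough size of the cached answer
        if self.freq[pattern] > 1 and self.used + cost <= self.budget:
            self.cache[pattern] = occ               # keep it only if it repeats and fits
            self.used += cost
        return occ
```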
4 Multi-objective Data-Structure Design
Reorganizing data in order to improve the performance of a given compressor C is a recent and important paradigm in data compression (see e.g. [16,35]) and, subtly, it also occurs in the design of compressed indexes (see e.g. the FM-index [41] and all the indexes based on the Burrows-Wheeler Transform, such as [72,40]). The basic idea consists of permuting an input string T to form a new string T', which is then partitioned into substrings T' = T'_1 T'_2 ... T'_h that are compressed individually by the base compressor C. The goal is to find the best instantiation of the two steps Permuting+Partitioning so that the compression of the individual substrings T'_i minimizes the total length of the compressed output. This approach (abbreviated as PPC) is clearly at least as powerful as the classic data-compression approach that applies C to the entire T: just take the identity permutation and set h = 1. The question is whether it can be more powerful than that!
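As a toy illustration of the PPC scheme, the sketch below uses zlib as the base compressor C, a naive quadratic BWT as one possible permutation step, and hand-picked cut points as the partition; it is only a sketch under these assumptions, not the construction of any of the cited papers, and it simply lets one compare the output sizes of a few instantiations.

```python
import zlib

def bwt(s: bytes) -> bytes:
    """Naive Burrows-Wheeler transform, O(n^2 log n); assumes the text contains
    no 0x00 byte, which is appended as the sentinel.  Fine for toy strings."""
    s = s + b"\x00"
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return bytes(r[-1] for r in rotations)

def ppc_size(text: bytes, permute, cuts) -> int:
    """Permuting+Partitioning+Compressing: permute the text, split it at the
    given cut points, compress each piece separately with C (= zlib here),
    and return the total compressed size."""
    t = permute(text)
    pieces = [t[i:j] for i, j in zip([0] + cuts, cuts + [len(t)])]
    return sum(len(zlib.compress(p, 9)) for p in pieces)

# The classic approach is the special case: identity permutation, one piece.
text = (b"aaaab" * 200) + (b"abcde" * 200)
print(ppc_size(text, lambda t: t, []))        # plain zlib on the whole string
print(ppc_size(text, lambda t: t, [1000]))    # split at the boundary of the two parts
print(ppc_size(text, bwt, []))                # BWT as the permutation step
```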
Intuition leads us to think favorably about it: by grouping together objects that are "related", one can hope to obtain better compression even using a very weak compressor C. This intuition is frequently deployed in the Web context, where storage systems (such as Google's BigTable [20], see Sect. 2) group Web pages by their host address or URL, and then proceed to compress chunks of data whose size depends on the application at hand: small chunks to support random access to compressed data (such as in search engines), large chunks to support streaming access to compressed data (such as in the Map-Reduce framework).

These intuitions and heuristics have been sustained by convincing theoretical and experimental results only recently. In fact, the PPC paradigm was introduced in [16], where T is a table formed by fixed-size columns (see also Buchsbaum et al., SODA 2000): the authors showed that the PPC problem is MAX-SNP hard in its full generality, devised a link between PPC and the classical asymmetric-TSP problem, and then resorted to known heuristics to find approximate solutions to the permuting and partitioning steps, based on several measures of correlation between the table's columns.

Subsequently the problem was attacked with T being a text string. Here the most famous instantiation of the PPC paradigm has been obtained by combining the Burrows-Wheeler Transform [18,2] (shortly BWT) with proper zero-th order entropy compressors (like MTF, RLE, Huffman, Arithmetic, or their variants). This is exactly the context in which compressed indexes were born [41]. In the data-compression setting, this instantiation takes the name of compression booster [35,34] because the net result it produces is to boost the performance of the base compressor C from zero-th order entropy bounds to k-th order entropy bounds, simultaneously over all k ≥ 0 and independently of the source generating the input string T. The BWT permutes the single characters of T according to their following context (substring) in the original string, so that "related" characters are located contiguously and then compressed together via the partitioning step. Interestingly [52], the BWT seems to be the unique character-based permutation which is fast to compute and achieves effective compression ratios both in theory [69,34,62,67] and in practice [33].

The PPC paradigm is even more challenging when T is obtained by concatenating a collection of (variable-length) files, such as in the serialization induced by the Unix tar command, or by other more sophisticated heuristics like the ones discussed in [42,29]. In these cases, the partitioning step looks for homogeneous groups of contiguous files which can be effectively compressed together by the base compressor C. Taking the largest memory footprint offered by C, or using a fixed-length chunk, is not necessarily the best choice, because real collections are typically formed by homogeneous groups of dramatically different sizes. So the use of the PPC paradigm is general, and this is the reason why it has been investigated from various angles, considering different data formats (strings [35], trees [40], tables [16], etc.), different granularities for the items to be permuted (characters, node labels, columns, blocks, files [29,42], etc.), different permutations (see e.g. [52,82,29]), and different base compressors to be boosted (zero-th order compressors, gzip, bzip2, etc. [42]).
But, surprisingly enough, until recently no approach was able to achieve either efficient time bounds or guaranteed minimality of the length of the compressed output, even for the restricted (but still interesting!) case of a fixed permutation.

Last year we made a step forward by designing in [45] an efficient algorithm that, given the base compressor C to be boosted (it can be either a zero-th order compressor, like Huffman or Arithmetic coding [84], or a k-th order compressor, like PPM) and an input string T[1, n] (already permuted), computes in O(n log_{1+ε} n) time and O(n) space a partition whose final compressed size is (1 + ε)-far from the shortest (optimal) one. Apart from the time/space bounds and the underlying technicalities, this result is interesting because it is achieved by rephrasing the partitioning problem as a single-source shortest-path (SSSP) computation over a weighted DAG consisting of n nodes (one per character of T) and O(n²) edges (one per substring of T), whose costs are derived from the compressibility of the corresponding substrings via the base compressor C. By exploiting some interesting structural properties of this graph, [45] showed how to restrict the computation of that SSSP to a subgraph consisting of O(n log_{1+ε} n) edges only, and proved that this subgraph can be computed on the fly, as the SSSP computation proceeds over the DAG, via the proper use of time- and space-efficient dynamic data structures.
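To make the reduction tangible, here is a brute-force (quadratic, hence only didactic) version of that shortest-path formulation over the partition DAG, with zlib standing in for the base compressor C; the edge pruning to O(n log_{1+ε} n) edges and the on-the-fly graph generation of [45] are precisely what this sketch omits, and the function names are mine.

```python
import zlib

def optimal_partition(text: bytes, compress=lambda s: zlib.compress(s, 9)):
    """Exact (quadratic) partitioning step of the PPC paradigm.

    Node i of the DAG represents the prefix of length i; edge (i, j) is weighted
    with the compressed size of text[i:j].  A shortest path from node 0 to node n
    gives the partition of minimum total compressed size."""
    n = len(text)
    best = [0] + [float("inf")] * n      # best[j] = min compressed size of text[0:j]
    pred = [0] * (n + 1)                 # predecessor, to recover the partition
    for i in range(n):                   # nodes in topological (left-to-right) order
        for j in range(i + 1, n + 1):
            w = len(compress(text[i:j]))          # cost of edge (i, j)
            if best[i] + w < best[j]:
                best[j], pred[j] = best[i] + w, i
    # Walk the predecessors back to recover the right endpoints of the pieces.
    ends, j = [], n
    while j > 0:
        ends.append(j)
        j = pred[j]
    return best[n], sorted(ends)

size, cuts = optimal_partition((b"aaaab" * 200) + (b"abcde" * 200))
print(size, cuts)   # heterogeneous inputs tend to be cut near the file boundaries
```

This sketch performs O(n²) compressor calls, so it is usable only on tiny inputs; its sole purpose is to show that the partitioning problem really is a shortest-path problem on a DAG whose edge weights are compressed sizes.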
Similar algorithmic ideas, but dressed with different techniques, have been deployed in [46] to address another parsing problem, namely the one concerning the most famous lossless data-compression scheme, introduced by Lempel and Ziv more than 30 years ago [86]. This compression scheme is known as a "dictionary-based compressor" and consists of squeezing an input string by replacing some of its substrings (aka phrases) with possibly shorter codewords, which are actually pointers to a dictionary built as the string is processed. Although many fundamental results were known about the speed and effectiveness of this compression process [76,84], it was not known how to "achieve optimality when the LZ77-dictionary is in use under any constraint on the codewords other than being of equal length" [76, pag. 159]. Optimality here means achieving the shortest compressed output, and not the minimum number of phrases, for which the well-known greedy parsing, which always selects the longest phrase in the dictionary, is optimal! To deal with the final compressed bit-sequence, the authors of [46] considered a natural class of phrase encoders typically used in practice [84] and showed that the proposed LZ77-parsing scheme achieves bit-optimality in O(n) optimal working space and in time proportional to O(n log n) or, even, to O(n) for (most of) the encodings used by gzip.

Apart from the technicalities involved, these two results show that bit-optimal compression can be obtained in efficient time, but nothing is yet known about the efficiency of the decompression step, which may be crucial for a wide range of applications in which the paradigm is "compress once & decompress many times" (like in Web search engines and IR systems, see Sect. 2), or where the decompression system is less powerful than the one compressing the data (like a server that distributes data to clients, possibly mobile phones, see Sect. 5).
The decompression speed clearly depends on the size of the chunk to be (de)compressed. This is true also for the classic gzip, and indeed it is not difficult to come up with examples of parsings which achieve effective compression ratios but induce many random I/Os in the decoding phase, as well as parsings which are close to the optimal compression ratio but are much more efficient to decompress. To address these issues, some practical solutions (such as [20]) trade space occupancy for decompression efficiency by either copying far back only substrings that are sufficiently long to amortize the cost of their retrieval, or by using chunks of fixed length that balance decompression time and compressed space. In each case, the choice of the length is hand-made and hence we have no guarantee on its impact on the final result! On the other hand, the literature offers heuristics or principled approaches (such as [49,73,45,25]) whose decompression time is possibly optimal in the RAM model, but non-optimal and, anyway, inefficient in terms of I/Os and/or space occupancy. As a result, known effective compressors are able to optimize only one of the two key parameters daily afflicting software developers: either decompression time/I/Os or compressed space!

In light of this scenario, we pose the following question: given two approximation factors ε and δ, can we design a compression format that is decodable in O((1 + δ) T_opt) I/Os and takes (1 + ε) S_opt space? Here T_opt is the optimal number of I/Os required to decompress (part of) the input data, and S_opt is the optimal space in which that data can be compressed. More ambitiously, we could fix either ε or δ equal to zero, and thus optimize one of the two resources given an upper bound on the other. The ultimate goal is to achieve space/time-controlled (de)compression, with applications to data-storage systems (who cares about an x% increase in space occupancy if this significantly reduces the decoding time?) or to the mobile context (who cares about a y% increase in decoding time if this significantly reduces the space occupancy and/or the transmission time and/or the battery life?). We have preliminary evidence that this problem might be solved by resorting to graph techniques originally devised for the resource-constrained shortest-path problem [70], following the reductions sketched above. Hence Graph Theory could help here, but it should attack graphs with billions of nodes/edges!

We mention that the above question has been partially addressed, in the simpler context of dictionary compression, by the so-called Locality-Preserving Front-Coding [14]: a novel scheme that nicely scales with the parameter ε by achieving O(s/(εB)) I/Os when decoding a dictionary string of length s, while taking (1 + ε) times more space than the classic Front-Coding scheme [84] applied to the input dictionary. It goes without saying that this approximation is with respect to the space required by the FC-scheme, and not to the optimal dictionary compression (cf. [38]). Nonetheless, this result lets us argue favorably about the solvability of our problem above. (A preliminary extension to LZ-parsings, which trades I/O-efficient decompression for compressed-space occupancy, is provided in [44].)
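For readers unfamiliar with front coding, here is a minimal sketch of the classic scheme with periodic restarts; the restart period is the crude space/decoding-time knob that the locality-preserving variant of [14] tunes in a principled, ε-parameterized way. Function names and the encoding layout are illustrative only.

```python
def front_code(sorted_strings, restart_every):
    """Classic front-coding of a lexicographically sorted dictionary: each string
    is stored as (lcp with the previous one, remaining suffix).  A full copy is
    re-inserted every `restart_every` strings, so decoding never scans more than
    one bucket; smaller buckets cost extra space but bound the decoding work."""
    encoded, prev = [], ""
    for k, s in enumerate(sorted_strings):
        if k % restart_every == 0:
            encoded.append((0, s))                  # restart: store the string in full
        else:
            lcp = 0
            while lcp < min(len(prev), len(s)) and prev[lcp] == s[lcp]:
                lcp += 1
            encoded.append((lcp, s[lcp:]))
        prev = s
    return encoded

def decode(encoded, i, restart_every):
    start = (i // restart_every) * restart_every    # beginning of i's bucket
    s = ""
    for lcp, tail in encoded[start:i + 1]:          # scan only within the bucket
        s = s[:lcp] + tail
    return s

enc = front_code(["alpha", "alphabet", "alpine", "beta", "betray"], restart_every=3)
print(decode(enc, 2, 3))   # "alpine"
```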
We conclude by noticing that the idea of trading space for time is also present in some papers on compressed indexes (see e.g. [72,75]), but that notion is less
ambitious than what we are aiming for above. In fact, if we translate what we have said about data compressors to compressed indexes, we obtain as a goal the design of compressed data structures that are able to optimize (rather than trade) space or time, provided that a bound on the other resource (time or space) is given! (A preliminary example of this idea is provided in [50].)
5 Energy-Aware Data-Structure Design
Energy efficiency has now become a key issue for servers and data center operations because, as stated in [12], “The cost of power and cooling is likely to exceed that of hardware...”. Studies have shown that every 1W used to power servers requires an additional 0.5-1W of power for cooling equipment, which is necessary because under-cooled equipments exhibit higher failure rates. A huge amount of effort is therefore going on worldwide to deal with the sustainability of the computing infrastructure, because energy and power have not traditionally been first order design constraints for IT technology. It is commonly believed that improvements in the energy efficiency of IT devices will be much more dramatic, and eventually have much greater impact, than in other areas of technology. In fact, a recent FET-EU report [1] stated that “some eight orders of magnitude separate the energy efficiency of conventional computers from what is theoretically possible. Closing this gap would lead to a significant improvement in the energy efficiency of information and communication technology (ICT)”. For computer systems, energy efficiency is roughly defined as the ratio of “computing work” done per unit energy [12]: EE = workdone/energy = performance/power. This definition is extremely vague, and its metric varies from application to application since the notion of work done varies: it might be transactions/Joule for OLTP systems, searches/Joule for a search engine, or in general MIPs/Watt for a mobile device. However, even if general, this formula evidences that software that is optimized for performance would also be power efficient, but power efficiency is not synonymous with computational efficiency: in fact, we can improve energy efficiency not only by maximizing computational performance, but also by reducing power or time. Much prior work was concerned with electrical and computer systems engineering, with a relatively smaller amount in the core areas of computer science. This work can be classified in three main approaches. The first one is engineering-like and consists of using system knobs and internal parameters to achieve the most energy-efficient configuration for the underlying hardware (see e.g. [60,77]). This rich body of work affects different levels of the solution stack (such as hardware and software), stages of the life cycle (such as design and runtime), components of the system (such as CPU, cache, memory, display, peripherals, etc.), target domains (such as mobile devices, wireless networks, and high-end servers). The goal is to design systems that are energy efficient at the peak performance point (maximum utilization) and remain such as load fluctuates or drops [12]. The second and third approaches come from the observation that, in the future, significant improvements in power and energy efficiency are likely to result 8
8 A preliminary example of this idea is provided in [50].
mainly from rethinking algorithms and applications. In particular, [61] proposed the shifting of computations and relocation of data to consolidate resource use both in time and space, in order to facilitate powering down individual hardware components. This scenario may be particularly effective with the advent of new intelligent CPUs (like the Intel Core i5), which accelerate in response to demanding tasks, or SS-disks, which trade lower power usage and read-access time against write performance and reliability. These forms of adaptivity demand on-line algorithms that dynamically manage power and resource allocation by trading off performance, energy and reliability [5]. The third approach, the one most pertinent to our discussion, aims for a “Science of Power Management” which should develop, as illustrated in [19,61]: theoretical models of power-performance tradeoffs at multiple levels of granularity and a set of canonical algorithmic techniques that should make it easier for new methods and protocols to filter into active use and, in turn, feed more progress. Certainly a program’s energy consumption strongly depends on the number of instructions executed and the number of accesses to the memory hierarchy. But the situation is more complicated than that, and in fact [61] stated that “the correspondence between completion time and energy consumption is not one-to-one. [...] The average power consumption and computation rates are intricately tied together, making it difficult to speak of power complexity in isolation. [...] This also indicates that models for the study of energy-computation tradeoffs would need to address more than just the CPU [...] and be developed at all abstraction levels and granularities.” There is currently a significant effort in this direction but, as far as we know, little attention9 has been paid to the design of energy-aware algorithms and data structures, which we believe is central to the design of energy-aware systems and can even lead to richer dividends than the ones obtainable via engineering system knobs and parameters. To support this claim, let us consider a simple micro-benchmark, denoted A_b, which consists of taking a byte array A[1, n], logically decomposed into n/b blocks of b bytes each, and then reading all blocks of A in some order. All benchmarks A_b are equivalent from a computational point of view, since they read exactly n bytes; but from a practical point of view they will, of course, have different time performance, which depends on the pattern of memory accesses (i.e. sequential vs random) and on the architectural features of the underlying memory levels. What is rather surprising, however, is that each benchmark also has a different energy consumption, with unexpected gaps! In order to quantify these energy gaps we executed our benchmark on a smartphone, i.e., a machine where energy is a primary concern. We varied the block size b from 1 byte to 8KB (unblocked vs blocked data retrieval), and took n so that A spans from the cache to the internal memory, up to the flash memory of the phone. Figure 1 reports the battery consumption expressed as μJoule/sec. Although very preliminary, this simple test shows that accessing data residing in different memory levels may increase the battery consumption by three to four orders of
9 A few notable exceptions are [78,13], mainly concentrated on sorting.
Fig. 1. Battery performance of our micro-benchmark on a Nokia N82
magnitude; moreover, blocked access to the various memory levels saves energy even at small block sizes. We have just scratched the surface of these issues, and we believe that in this setting the multi-objective design scenario discussed in Sect. 4, as well as the design of I/O-conscious compressed data structures (Sect. 3), become particularly interesting, because they could have a direct impact on trading MIPS (aka user experience) against energy consumption (aka battery life). We are working in this direction, designing and implementing a library of I/O-conscious algorithms and compressed data structures for mobile devices that takes full advantage of the above figures!
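The micro-benchmark A_b described above can be sketched as follows. The original experiment measured battery drain on a Nokia N82; this sketch only times the accesses, and the file name, the array size and the block sizes are placeholder choices made for illustration.

# Rough sketch of the A_b micro-benchmark: read an n-byte array in blocks of b
# bytes, in a random block order, and time it. Reading from a file (rather than
# a RAM array), the small 4 MB size, and the timing-only measurement (no battery
# or energy meter) are simplifying assumptions.
import os, random, time

def run_Ab(path, n, b):
    """Read the n-byte file at path in blocks of b bytes, in random block order."""
    blocks = list(range(n // b))
    random.shuffle(blocks)                    # random (non-sequential) access pattern
    with open(path, "rb") as f:
        t0 = time.perf_counter()
        for blk in blocks:
            f.seek(blk * b)
            f.read(b)
        return time.perf_counter() - t0

if __name__ == "__main__":
    n = 1 << 22                               # 4 MB; vary n to span cache/RAM/flash
    path = "payload.bin"                      # placeholder file name
    if not os.path.exists(path):
        with open(path, "wb") as f:
            f.write(os.urandom(n))
    for b in (1, 64, 1024, 8192):             # from unblocked access up to 8 KB blocks
        print("b =", b, "bytes:", round(run_Ab(path, n, b), 3), "s")   # b = 1 is slow in pure Python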
Acknowledgements The ideas presented in this paper are the result of hours of enlightening and, sometimes fatiguing, discussions with many researchers, friends and students: R. Baeza-Yates, A. Barilari, M. Coppola, A. Cisternino, M. Farach, V. Mäkinen, G. Manzini, Muthu, G. Navarro, I. Nitto, R. Venturini, J. Vitter, and many others.
References 1. Future and Emerging Technologies – Proactive: Disruptive Solutions for Energy Efficient ICT. In: EU Expert Consultation Workshop (February 2010) 2. Adjeroh, D., Bell, T., Mukherjee, A.: The Burrows-Wheeler Transform: Data Compression, Suffix Arrays and Pattern Matching. Springer, Heidelberg (2008) 3. Agarwal, P.K., Erickson, J.: Geometric Range Searching and Its Relatives. Advances in Discrete and Computational Geometry 23, 156 (1999) 4. Ajwani, D., Beckmann, A., Jacob, R., Meyer, U., Moruz, G.: On computational models for flash memory devices. In: Procs. SEA. LNCS, vol. 5526, pp. 16–27. Springer, Heidelberg (2009)
5. Albers, S.: Energy-efficient algorithms. Comm. ACM 53(5), 86–96 (2010) 6. Arge, L., Brodal, G.S., Fagerberg, R.: Cache-Oblivious Data Structures. In: Handbook of Data Structures. CRC Press, Boca Raton (2005) 7. Arroyuelo, D., C´ anovas, R., Navarro, G., Sadakane, K.: Succinct trees in practice. In: Procs. ALENEX, pp. 84–97. SIAM, Philadelphia (2010) 8. Arroyuelo, D., Navarro, G.: A Lempel-Ziv text index on secondary storage. In: Ma, B., Zhang, K. (eds.) CPM 2007. LNCS, vol. 4580, pp. 83–94. Springer, Heidelberg (2007) 9. Barbay, J., Claude, F., Navarro, G.: Compact rich-functional binary relation representations. In: L´ opez-Ortiz, A. (ed.) LATIN 2010. LNCS, vol. 6034, pp. 170–183. Springer, Heidelberg (2010) 10. Barbay, J., He, M., Munro, J.I., Srinivasa Rao, S.: Succinct indexes for string, bynary relations and multi-labeled trees. In: Procs. SODA, pp. 680–689 (2007) 11. Barbay, J., Navarro, G.: Compressed representations of permutations, and applications. In: Procs. STACS, pp. 111–122 (2009) 12. Barroso, L.A., H¨ olzle, U.: The case for energy-proportional computing. IEEE Computer 40(12), 33–37 (2007) 13. Beckmann, A., Meyer, U., Sanders, P., Singler, J.: Energy-Efficient Sorting using Solid State Disks. In: Procs. IEEE Green Computing Conference (2010) 14. Bender, M., Farach-Colton, M., Kuszmaul, B.: Cache-oblivious String B-trees. In: Procs. ACM PODS, pp. 233–242 (2006) 15. Benoit, D., Demaine, E., Munro, I., Raman, R., Raman, V., Rao, S.: Representing trees of higher degree. Algorithmica 43, 275–292 (2005) 16. Buchsbaum, A.L., Fowler, G.S., Giancarlo, R.: Improving table compression with combinatorial optimization. J. ACM 50(6), 825–851 (2003) 17. Buehrer, G., Chellapilla, K.: A scalable pattern mining approach to web graph compression with communities. In: Procs. ACM WSDM, pp. 95–106 (2008) 18. Burrows, M., Wheeler, D.: A block-sorting lossless data compression algorithm. Technical Report 124, Digital Equipment Corporation (1994) 19. Cameron, K.W., Pruhs, K., Irani, S., Ranganathan, P., Brooks, D.: Report of the science of power management workshop. NSF Report (August 2009) 20. Chang, F., Dean, J., Ghemawat, S., Hsieh, W.C., Wallach, D.A., Burrows, M., Chandra, T., Fikes, A., Gruber, R.E.: Bigtable: A distributed storage system for structured data. ACM Trans. on Computer Systems 26(2) (2008) 21. Chien, Y.-F., Hon, W.-K., Shah, R., Vitter, J.S.: Geometric BWT: Linking range searching and text indexing. In: Procs. IEEE DCC, pp. 252–261 (2008) 22. Chiu, S.Y., Hon, W.K., Shah, R., Vitter, J.: I/O-efficient compressed text indexes: From theory to practice. In: Procs. IEEE DCC (2010) 23. Claude, F., Navarro, G.: A fast and compact web graph representation. In: Ziviani, N., Baeza-Yates, R. (eds.) SPIRE 2007. LNCS, vol. 4726, pp. 118–129. Springer, Heidelberg (2007) 24. Claude, F., Navarro, G.: Practical Rank/Select queries over arbitrary sequences. In: Amir, A., Turpin, A., Moffat, A. (eds.) SPIRE 2008. LNCS, vol. 5280, pp. 176–187. Springer, Heidelberg (2008) 25. Claude, F., Navarro, G.: Self-Indexed Text Compression using Straight-Line Programs. In: Kr´ aloviˇc, R., Niwi´ nski, D. (eds.) MFCS 2009. LNCS, vol. 5734, pp. 235–246. Springer, Heidelberg (2009) 26. Claude, F., Navarro, G.: Extended compact web graph representations. In: Elomaa, T. (ed.) Ukkonen Festschrift 2010. LNCS, vol. 6060, pp. 77–91. Springer, Heidelberg (2010)
27. Cutting, D.: Apache Lucene (2008), http://lucene.apache.org/ 28. Delpratt, O., Rahman, N., Raman, R.: Compressed prefix sums. In: van Leeuwen, J., Italiano, G.F., van der Hoek, W., Meinel, C., Sack, H., Pl´ aˇsil, F. (eds.) SOFSEM 2007. LNCS, vol. 4362, pp. 235–247. Springer, Heidelberg (2007) 29. Ding, S., Attenberg, J., Suel, T.: Scalable techniques for document identifier assignment in inverted indexes. In: Procs. WWW, pp. 311–320 (2010) 30. Farzan, A., Munro, I.: Succinct Representations of Arbitrary Graphs. In: Halperin, D., Mehlhorn, K. (eds.) ESA 2008. LNCS, vol. 5193, pp. 393–404. Springer, Heidelberg (2008) 31. Farzan, A., Raman, R., Rao, S.S.: Universal succinct representations of trees? In: Albers, S., Marchetti-Spaccamela, A., Matias, Y., Nikoletseas, S., Thomas, W. (eds.) ICALP 2009. LNCS, vol. 5555, pp. 451–462. Springer, Heidelberg (2009) 32. Ferragina, P., Gagie, T., Manzini, G.: Lightweight data indexing and compression in external memory. In: L´ opez-Ortiz, A. (ed.) LATIN 2010. LNCS, vol. 6034, pp. 697–710. Springer, Heidelberg (2010) 33. Ferragina, P., Giancarlo, R., Manzini, G.: The engineering of a compression boosting library: Theory vs practice in BWT compression. In: Azar, Y., Erlebach, T. (eds.) ESA 2006. LNCS, vol. 4168, pp. 756–767. Springer, Heidelberg (2006) 34. Ferragina, P., Giancarlo, R., Manzini, G.: The myriad virtues of wavelet trees. In: Bugliesi, M., Preneel, B., Sassone, V., Wegener, I. (eds.) ICALP 2006. LNCS, vol. 4051, pp. 561–572. Springer, Heidelberg (2006) 35. Ferragina, P., Giancarlo, R., Manzini, G., Sciortino, M.: Boosting textual compression in optimal linear time. J. ACM 52, 688–713 (2005) 36. Ferragina, P., Gonzalez, R., Navarro, G., Venturini, R.: Compressed text indexes: From theory to practice. ACM Journal of Experimental Algorithmics (2009) 37. Ferragina, P., Grossi, R.: The string B-tree: A new data structure for string search in external memory and its applications. J. ACM 46(2), 236–280 (1999) 38. Ferragina, P., Grossi, R., Gupta, A., Shah, R., Vitter, J.S.: On searching compressed string collections cache-obliviously. In: Procs. ACM PODS, pp. 181–190 (2008) 39. Ferragina, P., Luccio, F., Manzini, G., Muthukrishnan, S.: Compressing and searching XML data via two zips. In: Procs. WWW, pp. 751–760 (2006) 40. Ferragina, P., Luccio, F., Manzini, G., Muthukrishnan, S.: Compressing and indexing labeled trees, with applications. J. ACM 57(1) (2009) 41. Ferragina, P., Manzini, G.: Indexing compressed text. J. ACM 52(4), 552–581 (2005) 42. Ferragina, P., Manzini, G.: On compressing the textual web. In: Procs. ACM WSDM, pp. 391–400 (2010) 43. Ferragina, P., Manzini, G., M¨ akinen, V., Navarro, G.: Compressed representations of sequences and full-text indexes. ACM Trans. on Algorithms 3(2) (2007) 44. Ferragina, P., Nitto, I.: A delta-compressed storage scheme supporting I/O-efficient random access. Draft (2010) 45. Ferragina, P., Nitto, I., Venturini, R.: On optimally partitioning a text to improve its compression. In: Fiat, A., Sanders, P. (eds.) ESA 2009. LNCS, vol. 5757, pp. 420–431. Springer, Heidelberg (2009) 46. Ferragina, P., Nitto, I., Venturini, R.: On the bit-complexity of lempel-ziv compression. In: Procs. ACM-SIAM SODA, pp. 768–777 (2009) 47. Ferragina, P., Rao, S.S.: Tree compression and indexing. In: Kao, M.-Y. (ed.) Encyclopedia of Algorithms. Springer, Heidelberg (2008) 48. Ferragina, P., Venturini, R.: Compressed permuterm index. In: Procs. ACM SIGIR, pp. 535–542 (2007)
49. Ferragina, P., Venturini, R.: A simple storage scheme for strings achieving entropy bounds. In: Procs. ACM-SIAM SODA, pp. 690–696 (2007) 50. Ferragina, P., Venturini, R.: Weighted compressed self-indexes. Draft (2010) 51. Fischer, J.: Optimal succinctness for range minimum queries. In: L´ opez-Ortiz, A. (ed.) LATIN 2010. LNCS, vol. 6034, pp. 158–169. Springer, Heidelberg (2010) 52. Giancarlo, R., Restivo, A., Sciortino, M.: From first principles to the Burrows and Wheeler transform and beyond, via combinatorial optimization. Theoretical Computer Science 387(3), 236–248 (2007) 53. Golynski, A.: Optimal lower bounds for rank and select indexes. Theoretical Computer Science 387, 348–359 (2007) 54. Golynski, A., Grossi, R., Gupta, A., Raman, R., Rao, S.S.: On the size of succinct indices. In: Arge, L., Hoffmann, M., Welzl, E. (eds.) ESA 2007. LNCS, vol. 4698, pp. 371–382. Springer, Heidelberg (2007) 55. Grossi, R., Orlandi, A., Raman, R., Rao, S.S.: More haste, less waste: Lowering the redundancy in fully indexable dictionaries. In: Procs STACS, pp. 517–528 (2009) 56. Hon, W.K., Lam, T., Shah, R., Tam, S., Vitter, J.S.: Compressed index for dictionary matching. In: Procs. IEEE DCC, pp. 23–32 (2008) 57. Hon, W.K., Shah, R., Thankachan, S.V., Vitter, J.: On entropy-compressed text indexing in external memory. In: Karlgren, J., Tarhio, J., Hyyr¨ o, H. (eds.) SPIRE 2009. LNCS, vol. 5721, pp. 75–89. Springer, Heidelberg (2009) 58. Hon, W.K., Shah, R., Vitter, J.S.: Compression, indexing, and retrieval for massive string data. In: Amir, A., Parida, L. (eds.) CPM 2010. LNCS, vol. 6129, pp. 260– 274. Springer, Heidelberg (2010) 59. Jansson, J., Sadakane, K., Sung, W.K.: Ultra-succinct representation of ordered trees. In: Procs ACM-SIAM SODA, pp. 575–584 (2007) 60. Kant, K.: Data center evolution: A tutorial on state of the art, issues, and challenges. Computer Networks 53(17), 2939–2965 (2009) 61. Kant, K.: Toward a science of power management. IEEE Computer 42(9), 99–101 (2009) 62. Kaplan, H., Landau, S., Verbin, E.: A simpler analysis of Burrows-Wheeler-based compression. Theoretical Computer Science 387(3), 220–235 (2007) 63. Lam, T.W., Sung, W.K., Tam, S.L., Wong, C.K., Yiu, S.M.: Compressed indexing and local alignment of DNA. BioInformatics 24(6), 791–797 (2008) 64. Langmead, B., Trapnell, C., Pop, M., Salzberg, S.L.: Ultrafast and memory-efficient alignment of short dna sequences to the human genome. Genome Biology 10(R25) (2009) 65. Li, H., Durbin, R.: Fast and accurate long-read alignment with burrows-wheeler transform. BioInformatics 26(5), 589–595 (2010) 66. Li, Y., He, B., Luo, Q., Yi, K.: Tree indexing on flash disks. In: Procs. ICDE, pp. 1303–1306 (2009) 67. M¨ akinen, V., Navarro, G.: Implicit compression boosting with applications to selfindexing. In: Ziviani, N., Baeza-Yates, R. (eds.) SPIRE 2007. LNCS, vol. 4726, pp. 229–241. Springer, Heidelberg (2007) 68. M¨ akinen, V., Navarro, G., Siren, J., Valimaki, N.: Storage and Retrieval of Highly Repetitive Sequence Collections. Journal of Computational Biology 17(3), 281–308 (2010) 69. Manzini, G.: An analysis of the Burrows-Wheeler transform. J. ACM 48(3), 407– 430 (2001) 70. Mehlhorn, K., Ziegelmann, M.: Resource constrained shortest paths. In: Paterson, M. (ed.) ESA 2000. LNCS, vol. 1879, pp. 326–337. Springer, Heidelberg (2000)
71. Nath, S., Gibbons, P.B.: Online maintenance of very large random samples on flash storage. VLDB Journal 19(1), 67–90 (2010) 72. Navarro, G., M¨ akinen, V.: Compressed full-text indexes. ACM Computing Surveys 39(1) (2007) 73. Navarro, G., Russo, L.M.: Re-pair achieves high-order entropy. In: Procs. IEEE DCC, p. 537 (2008) 74. Okanohara, D., Sadakane, K.: Practical entropy-compressed Rank/Select dictionary. In: Procs. ALENEX (2007) 75. Patrascu, M.: Succincter. In: Procs. IEEE FOCS, pp. 305–313 (2008) 76. Rajpoot, N., S ¸ ahinalp, C.: Dictionary-based data compression. In: Khalid, S.(ed.) Handbook of Lossless Data Compression. Academic Press, London (2002) 77. Ranganathan, P.: Recipe for efficiency: Principles of power-aware computing. Comm. ACM 53(4), 60–67 (2010) 78. Rivoire, S., Shah, M.A., Ranganathan, P., Kozyrakis, C.: Joulesort: a balanced energy-efficiency benchmark. In: Procs. ACM SIGMOD, pp. 365–376 (2007) 79. Sadakane, K., Navarro, G.: Fully-functional succinct trees. In: Procs. ACM-SIAM SODA, pp. 134–149 (2010) 80. Vigna, S.: Broadword implementation of rank/select queries. In: McGeoch, C.C. (ed.) WEA 2008. LNCS, vol. 5038, pp. 154–168. Springer, Heidelberg (2008) 81. Vitter, J.: External memory algorithms and data structures. ACM Computing Surveys 33(2), 209–271 (2001) 82. Vo, B.D., Vo, K.-P.: Compressing table data with column dependency. Theoretical Computer Science 387(3), 273–283 (2007) 83. Willets, K.: Full-text searching & the Burrows-Wheeler transform. Dr Dobbs Journal (December 2003) 84. Witten, I.H., Moffat, A., Bell, T.C.: Managing Gigabytes: Compressing and Indexing Documents and Images, 2nd edn. Morgan Kaufmann Publishers, San Francisco (1999) 85. Yan, H., Ding, S., Suel, T.: Inverted index compression and query processing with optimized document ordering. In: Procs. WWW, pp. 401–410 (2009) 86. Ziv, J., Lempel, A.: A universal algorithm for sequential data compression. IEEE Trans. Information Theory 23, 337–343 (1977)
Weighted Congestion Games: Price of Anarchy, Universal Worst-Case Examples, and Tightness
Kshipra Bhawalkar1, Martin Gairing2, and Tim Roughgarden1
1 Stanford University, Stanford, CA, USA {kshipra,tim}@cs.stanford.edu
2 University of Liverpool, Liverpool, U.K. [email protected]
Abstract. We characterize the price of anarchy in weighted congestion games, as a function of the allowable resource cost functions. Our results provide as thorough an understanding of this quantity as is already known for nonatomic and unweighted congestion games, and take the form of universal (cost function-independent) worst-case examples. One noteworthy byproduct of our proofs is the fact that weighted congestion games are “tight”, which implies that the worst-case price of anarchy with respect to pure Nash, mixed Nash, correlated, and coarse correlated equilibria are always equal (under mild conditions on the allowable cost functions). Another is the fact that, like nonatomic but unlike atomic (unweighted) congestion games, weighted congestion games with trivial structure already realize the worst-case POA, at least for polynomial cost functions. We also prove a new result about unweighted congestion games: the worst-case price of anarchy in symmetric games is, as the number of players goes to infinity, as large as in their more general asymmetric counterparts.
1
Introduction
The class of congestion games is expressive enough to capture a number of otherwise unrelated applications – including routing, network design, and the migration of species (see references in [20]) – yet structured enough to permit a useful theory. Such a game has a ground set of resources, and each player selects a subset of them (e.g., a path in a network). Each resource has a univariate cost function that depends on the load induced by the players that use it, and each player aspires to minimize the sum of the resources’ costs in its chosen strategy (given the strategies chosen by the other players). Congestion games have played a starring role in recent research on quantifying the inefficiency of game-theoretic equilibria. They are rich enough to encode the
Supported by a Stanford Graduate Fellowship. Partly supported by a fellowship within the Postdoc-Programme of the German Academic Exchange Service (DAAD). Supported in part by NSF CAREER Award CCF-0448664, an ONR Young Investigator Award, an ONR PECASE Award, an AFOSR MURI grant, and an Alfred P. Sloan Fellowship.
Prisoner’s Dilemma, and more generally can have Nash equilibria in which the sum of the players’ costs is arbitrarily larger than that in a minimum-cost outcome. Thus the research goal is to understand how the parameters of a congestion game govern the inefficiency of its equilibria, and in particular to establish useful sufficient conditions that guarantee near-optimal equilibria. A simple observation is that the inefficiency of equilibria in a congestion game depends fundamentally on the “degree of nonlinearity” of the cost functions. Because of this, we identify a “thorough understanding” of the inefficiency of equilibria in congestion games with a simultaneous solution to every possible special case of cost functions. In more detail, an ideal theory would include the following ingredients. 1. For every set C of allowable resource cost functions, a relatively simple recipe for computing the largest-possible price of anarchy (POA) – the ratio between the sum of players’ costs in an equilibrium and in a minimum-cost outcome – in congestion games with cost functions in C. 2. For analytically simple classes C like bounded-degree polynomials, an exact formula for the worst-case POA in congestion games with cost functions in C. 3. An understanding of the “game complexity” required for the worst-case POA to be realized. Ideally, such a result should refer only to the strategy sets and be independent of the allowable cost functions C. 4. An understanding of the equilibrium concepts – roughly equivalently, the rationality assumptions needed – to which the POA guarantees apply. This paper is about the fundamental model of weighted congestion games [16, 17, 18], where each player i has a weight wi and the load on a resource is defined as the sum of the weights of the players that use it. Such weights model non-uniform resource consumption among players and can be relevant for many reasons: for different durations of resource usage [22]; for modeling different amounts of traffic (e.g., by Internet Service Providers from different “tiers”); and even for collusion among several identical users, who can be thought of as a single “virtual” player with weight equal to the number of colluding players [9, 12]. Our main contribution is a thorough understanding of the worst-case POA in weighted congestion games, in the form of the four ingredients listed above. 1.1
Overview of Results
Result 1: Exact POA of general weighted congestion games with general cost functions. We provide the first characterization of the exact POA of general weighted congestion games with general cost functions. For a given set of cost functions C, the properties of the functions in this class determine certain “feasible values” of two parameters λ, μ, which lead to upper bounds of the form λ/(1 − μ). We denote the best upper bound that can be obtained using this two-parameter approach by ζ(C). The hard work then lies in proving that there always exists a weighted congestion game that realizes this upper bound. The abstract approach is to make use of the inequalities used in the upper bound proof — in the spirit of “complementary slackness” arguments in
linear programming — with the cost functions and loads on the resources that make these inequalities tight employed in the worst-case example. Ultimately, we can exhibit examples with POA arbitrarily close to our upper bound of ζ(C). This approach is similar to the one taken in [21] for unweighted congestion games, although non-uniform player weights create additional technical issues that we need to resolve. Our constructions here are completely different from those used for unweighted congestion games in [21]. A side effect of our upper bound proof is that ζ(C) is actually the “Robust POA” defined by Roughgarden [21]. As a consequence, the same upper bound also applies to the POA of the mixed Nash, correlated, and coarse correlated equilibria of a weighted congestion game (see e.g. [24, chapter 3] for definitions). Since we prove a matching lower bound for the pure POA — which obviously extends to the three more general equilibrium concepts — our bound of ζ(C) is the exact worst-case POA with respect to all four equilibrium concepts. We note that exact worst-case POA formulas for weighted congestion games with polynomial cost functions and nonnegative coefficients were given previously in [1]. Characterizing the POA with respect to an arbitrary set of cost functions is more technically challenging and is also a well-motivated goal: for example, M/M/1 delay functions of the (non-polynomial) form 1/(u − x) for a parameter u pervade the networking literature (e.g. [4]). Result 2: Exact POA of congestion games on parallel links with polynomial cost functions. We prove that, for polynomial cost functions with nonnegative coefficients and maximum degree d, the worst-case POA is realized on a network of parallel links (for each d). We note that even for affine cost functions, only partial results were previously known about the worst-case POA of weighted congestion games in networks of parallel links. Our work implies that, at least for polynomial cost functions, the worst-case POA of weighted congestion games is essentially independent of the allowable network topologies (in the same sense as [19]). This result stands in contrast to unweighted congestion games, where the worst-case POA in networks of parallel links is provably smaller that the worst-case POA in general unweighted congestion games [2, 8, 13]. Result 3: POA of symmetric unweighted congestion games is as large as asymmetric ones. Our final result contributes to understanding how the worst-case POA of unweighted congestion games depends on the game complexity. We show that the POA of symmetric unweighted congestion games with general cost functions is the same value γ(C) as that for asymmetric unweighted congestion games. This fact was previously known only for affine cost functions [7]. Thus, the gap between the worst-case POA of networks of parallel links and the worst-case POA of general unweighted atomic congestion games occurs inside the class of symmetric games, and not between symmetric and asymmetric games. 1.2
Further Related Work
Koutsoupias and Papadimitriou [14] initiated the study of the POA of mixed Nash equilibria in weighted congestion games on parallel links (with a different
objective function). For our objective function, the earliest results on the pure POA of unweighted congestion games on parallel links are in [15,23]. The first results for general congestion games were obtained independently by Christodoulou and Koutsoupias [7] and Awerbuch, Azar, and Epstein [3]. Christodoulou and Koutsoupias [7, 6] established an exact worst-case POA bound of 5/2 for unweighted congestion games with affine cost functions. They obtained the same bound for the POA of pure Nash, mixed Nash, and correlated equilibria [6]. They also provided an asymptotic POA upper bound of d^{Θ(d)} for cost functions that are polynomials with nonnegative coefficients and degree at most d. In their concurrent paper, Awerbuch, Azar, and Epstein [3] provided the exact POA of weighted congestion games with affine cost functions (1 + φ ≈ 2.618, where φ is the golden ratio). They also proved an upper bound of d^{Θ(d)} for the POA of weighted congestion games with cost functions that are polynomials with nonnegative coefficients and degree at most d. Later, Aland et al. [1] obtained exact worst-case POA bounds for both weighted and unweighted congestion games with cost functions that are polynomials with nonnegative coefficients. Caragiannis et al. [5] analyzed asymmetric singleton congestion games with affine cost functions. They proved lower bounds of 5/2 and 1 + φ on the worst-case POA of unweighted and weighted such games, respectively. Gairing and Schoppmann [10] provided a detailed analysis of the POA of singleton congestion games. They generalized the results in [5] to polynomial cost functions, showing that the worst-case POA in asymmetric singleton games is as large as in general games. For symmetric singleton congestion games (i.e., networks of parallel links) and polynomial cost functions they showed that the POA is bounded below by certain Bell numbers. This last result is subsumed by our second contribution.
2
Preliminaries
Congestion Games. A general weighted congestion game Γ is composed of a set N of N players, a set of resources E, and a set of non-negative and nondecreasing cost functions from R+ to R+. For each player i ∈ N a weight w_i and a set S_i ⊆ 2^E of strategies are specified. The congestion on a resource is the total weight of all players using that resource, and the associated congestion cost is specified by a function c_e of the congestion on that edge. An outcome is a choice of strategies s = (s_1, s_2, ..., s_N) by the players, with s_i ∈ S_i. The congestion on a resource e is given by x_e = Σ_{i: e ∈ s_i} w_i. A player’s cost is C_i(s) = w_i Σ_{e ∈ s_i} c_e(x_e). The social cost of an outcome is the sum of the players’ costs: C(s) = Σ_{i=1}^{N} C_i(s). The total cost can also be written as C(s) = Σ_{e ∈ E} x_e c_e(x_e). In unweighted congestion games, all players have unit weight. A congestion game is symmetric if all players have the same set of strategies: S_i = S ⊆ 2^E for all i. In a singleton congestion game, every strategy of every player consists of a single resource. Symmetric singleton games are also called congestion games on parallel links.
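As a small, hedged illustration of the definitions just given (loads x_e, player costs C_i(s), and social cost C(s)), consider the following sketch; the two-player instance is invented purely for illustration.

# Minimal sketch of the definitions above: loads x_e, player costs C_i(s) and
# social cost C(s) for a toy weighted congestion game.
weights = {1: 1.0, 2: 2.0}                               # player weights w_i
cost_fn = {"e1": lambda x: x, "e2": lambda x: 2.0 * x}   # cost functions c_e

def loads(state):
    """x_e = total weight of the players whose strategy contains e."""
    x = {e: 0.0 for e in cost_fn}
    for i, strategy in state.items():
        for e in strategy:
            x[e] += weights[i]
    return x

def player_cost(i, state):
    """C_i(s) = w_i * sum_{e in s_i} c_e(x_e)."""
    x = loads(state)
    return weights[i] * sum(cost_fn[e](x[e]) for e in state[i])

def social_cost(state):
    """C(s) = sum_i C_i(s) = sum_e x_e c_e(x_e)."""
    x = loads(state)
    return sum(x[e] * cost_fn[e](x[e]) for e in cost_fn)

s = {1: {"e1"}, 2: {"e1", "e2"}}                         # an outcome: s_i is a subset of E
print(loads(s))                                          # {'e1': 3.0, 'e2': 2.0}
print(player_cost(1, s), player_cost(2, s))              # 3.0 14.0
print(social_cost(s))                                    # 17.0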
Pure Nash Equilibrium. A pure Nash equilibrium is an outcome in which no player has an incentive to deviate from its current strategy. Thus the outcome s is a pure Nash equilibrium if for each player i and alternative strategy s∗i ∈ Si , Ci (s) ≤ Ci (s∗i , s−i ). Here, (s∗i , s−i ) denotes the outcome that results when player i changes its strategy in s from si to s∗i . We refer to this inequality as the Nash condition. A pure Nash equilibrium need not have the minimum-possible social cost. The price of anarchy (POA) captures how much worse Nash equilibria are compared to the cost of the best social outcome. The POA of a congestion game is defined as the largest value of the ratio C(s)/C(s∗ ), where s ranges over pure Nash equilibria and s∗ ranges over all outcomes. The POA of a class of games is the largest POA among all games in the class. Weighted congestion games might not admit pure Nash equilibria [16]; see Harks and Klimm [11] for a detailed characterization. Fortunately, our lower bound constructions produce games in which such equilibria exist; and, our upper bounds apply to more general equilibrium concepts that do always exist (mixed Nash, correlated, and coarse correlated equilibria, see e.g.. [24, chapter 3]). The POA with respect to these other equilibrium concepts is defined in the obvious way, as the worst-case ratio between the expected social cost of an equilibrium and the minimum-possible social cost.
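The Nash condition and the POA can be checked by brute force on a tiny parallel-link instance, as in the following sketch: it enumerates all outcomes, keeps those in which no unilateral deviation is profitable, and divides the worst equilibrium cost by the optimum. The instance and all names are illustrative assumptions, and the enumeration is exponential in the number of players.

# Brute-force illustration of the Nash condition and the price of anarchy on an
# invented two-player parallel-link instance.
from itertools import product

weights    = {1: 1.0, 2: 1.0}
strategies = {1: [("a",), ("b",)], 2: [("a",), ("b",)]}     # singleton strategies
delay      = {"a": lambda x: x, "b": lambda x: 2.0}          # one congestible link, one constant

def cost(i, state):
    """C_i(s) under the sum aggregation defined above."""
    load = {e: sum(weights[j] for j in weights if e in state[j]) for e in delay}
    return weights[i] * sum(delay[e](load[e]) for e in state[i])

def social(state):
    return sum(cost(i, state) for i in weights)

outcomes = [dict(zip(weights, combo)) for combo in product(*strategies.values())]

def is_pure_nash(state):
    """Nash condition: no player can strictly decrease its cost by deviating alone."""
    return all(cost(i, state) <= cost(i, {**state, i: alt})
               for i in weights for alt in strategies[i])

equilibria = [s for s in outcomes if is_pure_nash(s)]
opt = min(social(s) for s in outcomes)
poa = max(social(s) for s in equilibria) / opt
print(len(equilibria), "pure Nash equilibria, POA =", round(poa, 3))   # 3 equilibria, POA = 1.333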
3
Upper Bounds for Weighted Congestion Games
In this section we provide an upper bound on the POA of weighted congestion games with general cost functions. To explain the upper bound, consider a set C of cost functions and the set of parameter pairs
A(C) = {(λ, μ) : μ < 1; x∗ c(x + x∗) ≤ λ x∗ c(x∗) + μ x c(x)},   (1)
where the constraints range over all functions c ∈ C and real numbers x ≥ 0 and x∗ > 0. Each parameter pair (λ, μ) in A(C) yields the following upper bound on the POA of weighted congestion games with cost functions in C. Proposition 1. For every class C of cost functions and every pair (λ, μ) ∈ A(C), the POA of every weighted congestion game with cost functions in C is at most λ/(1 − μ). The upper bound in Proposition 1 is proved via a “smoothness argument” in the sense of [21]. Many earlier works also proved upper bounds on the POA in congestion games in this way (e.g. [1, 7]). As proved in [21], every such upper bound automatically applies to (among other things) the POA of mixed Nash equilibria, correlated equilibria, and coarse correlated equilibria [21]. The best upper bound implied by Proposition 1 is denoted by ζ(C). Definition 1. For a class of functions C, define ζ(C) := inf { λ/(1 − μ) : (λ, μ) ∈ A(C) }. Define ζ(C) := +∞ if A(C) is empty.
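As a numerical illustration of Definition 1 (not part of the formal development), the following sketch estimates ζ(C) for the single cost function c(x) = x by discretizing the ratio t = x/x∗ and the parameter μ; the grid resolutions are arbitrary choices, and for affine costs the search should land close to the known exact value (3 + √5)/2 = 1 + φ ≈ 2.618 for weighted games [3].

# For c(x) = x, constraint (1) divided by (x*)^2 reads t + 1 <= lambda + mu t^2
# with t = x / x*, so the smallest feasible lambda for a given mu is the maximum
# of t + 1 - mu t^2 over t >= 0; we approximate it on a finite grid.
def zeta_linear(t_max=50.0, t_steps=5000, mu_steps=500):
    ts = [k * t_max / t_steps for k in range(t_steps + 1)]
    best = float("inf")
    for m in range(1, mu_steps):
        mu = m / mu_steps                                  # mu in (0, 1)
        lam = max(t + 1.0 - mu * t * t for t in ts)        # smallest feasible lambda
        best = min(best, lam / (1.0 - mu))
    return best

print(round(zeta_linear(), 3))   # ~ 2.618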
4
Lower Bounds for Weighted Congestion Games
In this section we describe two different lower bounds that match the upper bound ζ(C) given in Definition 1. The first lower bound applies to every class C of allowable cost functions satisfying a mild technical condition, and makes use of asymmetric congestion games. The second lower bound applies only to polynomial cost functions with nonnegative coefficients, but uses only networks of parallel links. For each lower bound, we are given a class of cost functions C, and we will describe a series of games with POA approaching ζ(C). For each game we specify player weights, player strategies, and resource cost functions. Additionally, we describe two outcomes s and s∗ . Justifying the example as a proper lower bound requires checking that s is a (pure) Nash equilibrium and that the ratio of costs of the outcomes s and s∗ is close to ζ(C). Our lower bound constructions are guided by the aspiration to satisfy simultaneously all of the inequalities in the proof of Proposition 1 exactly. This goal translates to the following conditions. (a) In the outcome s, each player is indifferent between its strategy si and the deviation s∗i . (b) For each player i, the strategies si and s∗i are disjoint. (c) Cost functions and resource congestion in the outcomes s and s∗ form tuples (c, x, x∗ ) that correspond to binding constraints in the infimum in (1). (d) In the outcome s∗ , each resource is used by a single player. We believe that satisfying all of these conditions simultaneously is impossible (and can prove it for congestion games on parallel links). Nevertheless, we are able to “mostly” satisfy these conditions, which permits an asymptotic lower bound of ζ(C) as the number of players and resources tend to infinity. 4.1
Weighted Congestion Games with General Cost Functions
Now we present the lower bound examples that obtain POA arbitrarily close to ζ(C) for most classes of cost functions C. We assume that the class C is closed under scaling and dilation, meaning that if c(x) ∈ C and r ∈ IR+ , then rc(x) and c(rx) are also in C. Standard scaling and replication tricks (see [19]) imply that closure under scaling is without loss of generality. Closure under dilation is not without loss but is satisfied by most natural classes of cost functions. To generically construct lower bound examples, we examine the set of constraints to find ones that are binding, and use them to get close to the values of (λ, μ) that yield ζ(C). First, we observe that scaling and dilation does not change the set of constraints in the definition of A(C). The set of cost functions C can then be seen as composed of a number of disjoint equivalence classes (where the relation is differing by scaling and dilation). For simplicity, we assume that the number of equivalence classes of functions in C is countable.
The set A(C) is restricted by a number of constraints – one for each cost function c ∈ C and non-negative real numbers x, x∗. In the next lemma we establish that the uncountable set of constraints on the set A(C) can be refined to a countable set of constraints that yields the same set. This proves useful later when we need to reason about constraints that are “most binding”. Lemma 1. Given a set of cost functions C containing a countable number of equivalence classes, the set A(C) can be represented as constrained by a countable set of constraints. Order this countable set of constraints and let An denote the set of (λ, μ) pairs that satisfy the first n constraints. Let ζn be the minimum value of λ/(1 − μ) on this set. We define ζn = +∞ if the set An is empty. Observe that ζn is a nondecreasing sequence with limit ζ(C). For every finite set of constraints we can identify precisely the point where the least value of λ/(1 − μ) is obtained. We next note some properties of this infimum that are crucial in building our lower bound example. Lemma 2. Fix n, and let An, ζn be defined as above. Suppose that there exist λn, μn such that ζn = λn/(1 − μn). Then there exist z1, z2 s.t. for every w > 0 there exist c1, c2, η ∈ [0, 1] s.t. cj(w · (zj + 1)) = λn cj(w) + μn cj(w · zj) · zj for j ∈ {1, 2},   (2) and η · c1(w · (z1 + 1)) + (1 − η) · c2(w · (z2 + 1)) = η · c1(w · z1) · z1 + (1 − η) · c2(w · z2) · z2.   (3)
For a fixed index n, if ζn is attained for some λ, μ, the lemma above provides z1 , z2 and c1 , c2 that correspond to the most binding constraints. We use these in our lower bound construction. The following example serves as a lower bound. Lower Bound 1. For some parameter k ∈ N (chosen later) we construct a weighted congestion game with player set N and resource set E as follows (see Figure 1). Player strategies: Organize the resources in a tree of depth k, which is a complete binary tree of depth k − 1 with each leaf extended by a path of length 1. For each non-leaf node i in the tree there is a player i with 2 strategies: either choose node i or all children of i. Player weights: If i is the root then wi = 1 otherwise if node i is the left (right) child of some node j then wi = wj · z1 (wi = wj · z2 ), where z1 , z2 are chosen for (λn , μn ) as in Lemma 2. Let NL ⊂ N be the set of players connected to a leaf. Cost functions: The cost functions of the resources are defined recursively: – For the root we can choose any cost function c ∈ C with c(1) = 1. By a scaling argument such a cost function exists. – Any leaf resource gets the same cost function as its parent.
Fig. 1. Construction for the Asymmetric Lower Bound (k = 4). Hollow circles denote resources and solid circles denote players.
– Consider an arbitrary resource e that is neither a leaf nor the parent of a leaf. Let ce be its cost function and let we be the weight of the corresponding player e. Let l, r be the left and right child of e, respectively. By construction the corresponding players have weights wl = z1 · we and wr = z2 · we. Among all pairs of cost functions c1, c2 that satisfy Lemma 2 for x∗ = we, choose those that also satisfy cj(we · zj) · zj = ce(we)   (4) for j ∈ {1, 2}. By a scaling argument such a pair always exists. Let ηe be the corresponding value of η in Lemma 2 and define cl = ηe · c1 and cr = (1 − ηe) · c2.   (5)
Nash strategy: The Nash outcome in this example is the outcome s where each player chooses the resource closer to the root. Optimal strategy: the optimal outcome s∗ is the outcome where each player chooses its strategy further from the root. We have to show that, in the construction above, the outcome s is indeed a Nash equilibrium and that the ratio C(s)/C(s∗ ) is at least λn /(1−μn ). We prove these claims in the following lemma: Lemma 3. The POA of the congestion game in Lower Bound 1 is λn /(1 − μn ). The analysis above still leaves out the case when ζn is not obtained by any feasible values of λn , μn . A slightly modified example serves as the lower bound in this case. Combining all of the analysis above we obtain the following result, Theorem 1. For every class C of allowable cost functions that is closed under dilation, the worst-case POA of weighted congestion games with cost functions in C is precisely ζ(C).
4.2
A Lower Bound for Parallel-Link Networks
In the following, we focus on weighted congestion games with cost functions that are polynomials with nonnegative coefficients and maximum degree d. For such games we show that the POA does not change if we restrict to (symmetric) weighted congestion games on parallel links. Recall that for the general case the POA was shown to be φ_d^{d+1} [1], where φ_d is such that (φ_d + 1)^d = φ_d^{d+1}. We establish the following theorem. Theorem 2. For weighted congestion games on parallel-link networks with cost functions that are polynomials with nonnegative coefficients and maximum degree d, the worst-case POA is precisely φ_d^{d+1}. We reiterate that the lower bound in Theorem 2 is new even for affine cost functions. We describe the example below.
Fig. 2. Structure of the symmetric lower bound example for branching factor α = 4 and number of levels k = 3. Hollow circles denote resources and solid circles denote players.
Lower Bound 2. Let k be an integer. We construct the following congestion game on parallel links. (See Figure 2 for reference.) Let φ_d satisfy (φ_d + 1)^d = φ_d^{d+1}. Let α be an integer satisfying α^d ≥ φ_d^{2d+2}, which implies α^d ≥ φ_d^{d+1}. Player strategies: Player strategies are single resources and all players have access to all resources. Cost functions: Group the resources into groups A0, A1, ..., Ak. For each i = 0, 1, ..., k − 1, group Ai contains α^i resources with cost function ci(x) = (α^d/φ_d^{d+1})^i x^d. The last group Ak contains α^k resources with the cost function ck(x) = α^d (α^d/φ_d^{d+1})^{k−1} x^d. These resources are arranged in a tree with resources from group Ai at level i of the tree. Player weights: Group the players into groups P1, ..., Pk. For i = 1, 2, ..., k, group Pi contains one player for each resource in Ai, with player weight wi = (α/φ_d)^{k−i}. Optimal strategy: Players in group Pi play on resources in group Ai in the optimal strategy. Denote this outcome by s∗. Nash strategy: Players in group Pi play on resources in group Ai−1 in the Nash strategy, with α players on each resource. Denote this outcome by s.
The example described above is symmetric — all players have access to all of the strategies. Theorem 2 then follows from the upper bound in [1] and the following lemma. Lemma 4. For the games in Lower Bound 2, the outcome s is a pure Nash equilibrium and lim_{k→∞} C(s)/C(s∗) = φ_d^{d+1}.
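The constant φ_d from Theorem 2 is easy to evaluate numerically; the following sketch finds the positive solution of (φ + 1)^d = φ^{d+1} by bisection and reports the resulting worst-case POA φ_d^{d+1}. For d = 1 this recovers the golden ratio and a POA of about 2.618, matching the affine case mentioned in Sect. 1.2; the bracket and iteration count are arbitrary implementation choices.

# Numerical sketch: phi_d is the unique solution > 1 of phi^(d+1) = (phi + 1)^d.
def phi_d(d, iters=100):
    lo, hi = 1.0, d + 2.0                        # f(1) < 0 and f(d + 2) > 0
    f = lambda p: p ** (d + 1) - (p + 1) ** d    # single sign change for p >= 1
    for _ in range(iters):
        mid = (lo + hi) / 2.0
        if f(mid) > 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2.0

for d in (1, 2, 3):
    p = phi_d(d)
    print("d =", d, " phi_d ~", round(p, 4), " worst-case POA ~", round(p ** (d + 1), 4))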
5
Unweighted Congestion Games
We show that for symmetric unweighted congestion games the worst-case POA is the same as that for general unweighted congestion games. A result in [21] gives an upper bound on the POA of unweighted congestion games. For a class of congestion cost functions C, one uses the set of parameter pairs A(C) = {(λ, μ) : μ < 1; x∗ c(x + 1) ≤ λ x∗ c(x∗) + μ x c(x)},   (6)
where the constraints range over cost functions c ∈ C and integers x ≥ 0, x∗ > 0. Then for each (λ, μ) ∈ A(C), the pure POA is at most λ/(1 − μ) in unweighted congestion games with cost functions in C. We use γ(C) to denote the best such upper bound: γ(C) = inf {λ/(1 − μ) : (λ, μ) ∈ A(C)}. Roughgarden [21] also showed how to construct an asymmetric unweighted congestion game that matches the upper bound γ(C) (for each C). Here we show that even symmetric games can be used to achieve this upper bound, in the limit as the number of players and resources tends to infinity. We establish the following theorem. Theorem 3. For every set of cost functions C, there exist symmetric congestion games with cost functions in C and POA arbitrarily close to γ(C). Along the lines of [21] we define γ(C, n) as the minimal value of λ/(1 − μ) that can be obtained when the load on each edge is restricted to be at most n. Then γ(C, n) approaches γ(C) as n approaches ∞. For any finite n the feasible region for (λ, μ) is the intersection of a finite number of half planes, one for each value of x and x∗. An additional constraint on the feasible region is that μ < 1. We now recall the following lemma from [21]. Lemma 5. Fix finite n and a set of functions C, and suppose there exists (λ̂, μ̂) such that λ̂/(1 − μ̂) = γ(C, n). Then there exist c1, c2 ∈ C, x1, x2 ∈ {0, 1, ..., n}, x∗1, x∗2 ∈ {1, 2, ..., n} such that cj(xj + 1) x∗j = λ̂ cj(x∗j) x∗j + μ̂ cj(xj) xj for j ∈ {1, 2}, βc1,x1,x∗1 < 1, and βc2,x2,x∗2 ≥ 1, where βc,x,x∗ is defined as x c(x) / (x∗ c(x + x∗)).
Note that the lemma as stated here differs from the one in [21] in the additional condition on βc1,x1,x∗1 and βc2,x2,x∗2. However, the modified version is easily obtained by noting that we are always guaranteed a half plane with β < 1 and, as long as the value of γ(C, n) is attained, there is another half plane with β ≥ 1. We now describe the lower bound.
Lower Bound 3. Let N be an integer denoting the number of players. Let c1, x1, x∗1 and c2, x2, x∗2 be defined as in the above lemma. Resources and cost functions: There are two groups of resources, A1 and A2. For j = 1, 2, group Aj contains (N choose xj) · (N choose x∗j) resources with the congestion cost function αj cj(x). We choose α1, α2 later. Players: There are N players, each with unit weight. Each player i has access to only two strategies, si and s∗i. Optimal strategy: For each j ∈ {1, 2} and i ∈ {1, 2, ..., N}, define a set O(i, j) composed of Δj = (N choose xj) · (N−1 choose x∗j − 1) resources of type Aj. Each resource is contained in x∗j such sets. Define the optimal strategy s∗i of player i as the union of O(i, 1) and O(i, 2). Equilibrium strategy: For each j ∈ {1, 2} and i ∈ {1, 2, ..., N}, define a set E(i, j) composed of xj Δj /(N x∗j) resources from each O(i′, j), for all i′ ∈ {1, 2, ..., N}. The number of resources in E(i, j) is (xj/x∗j) Δj. Each resource is contained in xj such sets. Define the equilibrium strategy si of player i as the union of E(i, 1) and E(i, 2). We choose α1, α2 s.t. each player is indifferent between his strategy in the equilibrium and optimal outcomes when all other players play their equilibrium strategies. The following lemma establishes that such non-negative α1, α2 exist. Lemma 6. If the tuples c1, x1, x∗1 and c2, x2, x∗2 with x1, x2 ≥ 0, x∗1, x∗2 > 0 are s.t. βc1,x1,x∗1 < 1 and βc2,x2,x∗2 ≥ 1, then for the game instance described in Lower Bound 3 there exist α1, α2 ≥ 0 s.t. for each player i, Ci(si, s−i) = Ci(s∗i, s−i). It now remains to prove that Lower Bound 3 has the desired POA. Lemma 7. The POA of the game in Lower Bound 3 approaches λ̂/(1 − μ̂) as N → ∞. Even when γ(C, n) is not attained by any pair (λ, μ), a similar construction yields an instance with POA arbitrarily close to γ(C, n). Lemma 8. For every class C of functions and integer n such that the value γ(C, n) is not attained by any feasible pair (λ, μ), there exist symmetric unweighted congestion games with cost functions in C and POA arbitrarily close to γ(C, n). Lemmas 7 and 8 together establish Theorem 3.
References 1. Aland, S., Dumrauf, D., Gairing, M., Monien, B., Schoppmann, F.: Exact price of anarchy for polynomial congestion games. In: Durand, B., Thomas, W. (eds.) STACS 2006. LNCS, vol. 3884, pp. 218–229. Springer, Heidelberg (2006) ´ Wexler, T., Roughgarden, 2. Anshelevich, E., Dasgupta, A., Kleinberg, J., Tardos, E., T.: The price of stability for network design with fair cost allocation. SIAM Journal on Computing 38(4), 1602–1623 (2008)
3. Awerbuch, B., Azar, Y., Epstein, A.: Large the price of routing unsplittable flow. In: STOC, pp. 57–66 (2005) 4. Bertsekas, D.P., Tsitsiklis, J.N.: Parallel and Distributed Computation: Numerical Methods. Athena Scientific, Belmont (1997) 5. Caragiannis, I., Flammini, M., Kaklamanis, C., Kanellopoulos, P., Moscardelli, L.: Tight bounds for selfish and greedy load balancing. In: Bugliesi, M., Preneel, B., Sassone, V., Wegener, I. (eds.) ICALP 2006, Part I. LNCS, vol. 4051, pp. 311–322. Springer, Heidelberg (2006) 6. Christodoulou, G., Koutsoupias, E.: On the price of anarchy and stability of correlated equilibria of linear congestion games. In: Brodal, G.S., Leonardi, S. (eds.) ESA 2005. LNCS, vol. 3669, pp. 59–70. Springer, Heidelberg (2005) 7. Christodoulou, G., Koutsoupias, E.: The price of anarchy of finite congestion games. In: STOC, pp. 67–73 (2005) 8. Fotakis, D.: Congestion games with linearly independent paths: Convergence time and price of anarchy. In: Monien, B., Schroeder, U.-P. (eds.) SAGT 2008. LNCS, vol. 4997, pp. 33–45. Springer, Heidelberg (2008) 9. Fotakis, D., Kontogiannis, S.C., Spirakis, P.G.: Atomic congestion games among coalitions. In: Bugliesi, M., Preneel, B., Sassone, V., Wegener, I. (eds.) ICALP 2006, Part I. LNCS, vol. 4051, pp. 572–583. Springer, Heidelberg (2006) 10. Gairing, M., Schoppmann, F.: Total latency in singleton congestion games. In: Deng, X., Graham, F.C. (eds.) WINE 2007. LNCS, vol. 4858, pp. 381–387. Springer, Heidelberg (2007) 11. Harks, T., Klimm, M.: On the existence of pure nash equilibria in weighted congestion games. In: ICALP (2010) ´ Wexler, T.: The effect of collusion in congestion 12. Hayrapetyan, A., Tardos, E., games. In: STOC, pp. 89–98 (2006) 13. Holzman, R., Law-Yone, N.: Network structure and strong equilibrium in route selection games. Mathematical Social Sciences 46, 193–205 (2003) 14. Koutsoupias, E., Papadimitriou, C.: Worst-case equilibria. In: Meinel, C., Tison, S. (eds.) STACS 1999. LNCS, vol. 1563, pp. 404–413. Springer, Heidelberg (1999) 15. L¨ ucking, T., Mavronicolas, M., Monien, B., Rode, M.: A new model for selfish routing. In: STACS 2004, TCS 2008, vol. 406(3), pp. 187–206 (2008) 16. Milchtaich, I.: Congestion games with player-specific payoff functions. Games and Economic Behavior 13(1), 111–124 (1996) 17. Monderer, D., Shapley, L.S.: Potential games. Games and Economic Behavior 14(1), 124–143 (1996) 18. Rosenthal, R.W.: A class of games possessing pure-strategy Nash equilibria. International Journal of Game Theory 2(1), 65–67 (1973) 19. Roughgarden, T.: The price of anarchy is independent of the network topology. Journal of Computer and System Sciences 67(2), 341–364 (2003) 20. Roughgarden, T.: Potential functions and the inefficiency of equilibria. In: Proceedings of the ICM, vol. III, pp. 1071–1094 (2006) 21. Roughgarden, T.: Intrinsic robustness of the price of anarchy. In: STOC, pp. 513– 522 (2009) 22. Shapley, L.S.: Additive and Non-Additive Set Functions. PhD thesis, Department of Mathematics, Princeton University (1953) 23. Suri, S., T´ oth, C., Zhou, Y.: Selfish load balancing and atomic congestion games. Algorithmica 47(1), 79–96 (2007) 24. Young, H.P.: Strategic Learning and its Limits. Oxford University Press, London (2005)
Computing Pure Nash and Strong Equilibria in Bottleneck Congestion Games
Tobias Harks1, Martin Hoefer2, Max Klimm1, and Alexander Skopalik2
1 Department of Mathematics, TU Berlin, Germany {harks,klimm}@math.tu-berlin.de
2 Department of Computer Science, RWTH Aachen University, Germany {mhoefer,skopalik}@cs.rwth-aachen.de
Abstract. Bottleneck congestion games properly model the properties of many real-world network routing applications. They are known to possess strong equilibria – a strengthening of Nash equilibrium to resilience against coalitional deviations. In this paper, we study the computational complexity of pure Nash and strong equilibria in these games. We provide a generic centralized algorithm to compute strong equilibria, which has polynomial running time for many interesting classes of games such as, e.g., matroid or single-commodity bottleneck congestion games. In addition, we examine the more demanding goal to reach equilibria in polynomial time using natural improvement dynamics. Using unilateral improvement dynamics in matroid games pure Nash equilibria can be reached efficiently. In contrast, computing even a single coalitional improvement move in matroid and single-commodity games is strongly NP-hard. In addition, we establish a variety of hardness results and lower bounds regarding the duration of unilateral and coalitional improvement dynamics. They continue to hold even for convergence to approximate equilibria.
1 Introduction One of the central challenges in algorithmic game theory is to characterize the computational complexity of equilibria. Results in this direction yield important indicators of whether game-theoretic solution concepts are plausible outcomes of competitive environments in practice. Probably the most prominent stability concept in (non-cooperative) game theory is the Nash equilibrium – a state from which no player wants to unilaterally deviate. The complexity of the Nash equilibrium has been under increased scrutiny for quite some time. A drawback of the Nash equilibrium is that in general it exists only in mixed strategies. There are, however, practically important classes of games that allow pure Nash equilibria (PNE), most prominently congestion games. In a congestion game [20], there is a set of resources, and the pure strategies of players are subsets of this set. Each resource has a delay function depending on the load, i.e., the number of players that select strategies containing the respective resource. The individual cost for a player in a regular congestion game is given by the sum over the delays of the resources in his strategy. Congestion games are an elegant model to study the effects of resource usage and congestion with strategic agents. They have been used frequently to model competitive
network routing scenarios [21]. For these games the complexity of exact and approximate PNE is now well-understood. A detailed characterization in terms of, e.g., the structure of strategy spaces [1, 9] or the delay functions [5, 23] has been derived. However, regular congestion games have shortcomings, especially as models for the prominent application of routing in computer networks. The delay of a stream of packets is usually determined by the latency experienced due to available bandwidth or capacity of links. Hence, the total delay of a player is closely related to the performance of the most congested (bottleneck) link (see, e.g., [4, 6, 14, 19]). A model that captures this aspect more realistically are bottleneck congestion games, in which the individual cost of a player is the maximum (instead of sum) of the delays in his strategy. Despite being a more realistic model for network routing, they have not received similar attention in the literature. For classes of non-atomic (with infinitesimally small players) and atomic splittable games (finite number of players with arbitrarily splittable demand) existence of PNE and bounds on the price of anarchy were considered in [6, 17]. For atomic games with unsplittable demand PNE do always exist [4]. In fact, Harks et al. [12] establish the finite improvement property via a lexicographic potential function. Interestingly, they are able to extend these conditions to hold even if coalitions of players are allowed to change their strategy in a coordinated way. This implies that bottleneck congestion games do admit even (pure) strong equilibria (SE), a solution concept introduced by Aumann [3]. In a SE, no coalition (of any size) can deviate and strictly decrease the individual cost of each member. Every SE is a PNE, but the converse holds only in special cases (e.g., for singleton games [13]). SE represent a very robust and appealing stability concept. In general games, however, they are quite rare, which makes the existence guarantee in bottleneck congestion games even more remarkable. For instance, even in dominant strategy games such as the Prisoner’s Dilemma there might be no SE. Not surprisingly, for regular congestion games with linear aggregation the existence of SE is not guaranteed [13, 15]. The existence of PNE and SE in bottleneck congestion games raises a variety of important questions regarding their computational complexity. In which cases can PNE and SE be computed efficiently? As the games have the finite improvement property, another important issue is the duration of natural (coalitional) improvement dynamics. More fundamentally, it is not obvious that even a single such coalitional improving move can be found efficiently. These are the main questions that we address in this paper. 1.1 Our Results We examine the computational complexity of PNE and SE in bottleneck congestion games. In Section 2 we focus on computing PNE and SE using (centralized) algorithms. Our first main result is a generic algorithm that computes a SE for any bottleneck congestion game. The algorithm iteratively decreases capacities on the resources and relies on a strategy packing oracle. The oracle decides if a given set of capacities allows to pack a collection of feasible strategies for all players and outputs a feasible packing if one exists. The running time of the algorithm is essentially determined by the running time of this oracle. 
We show that there are polynomial-time oracles for three important and fundamental combinatorial structures: matroids, a-arborescences, and single-commodity networks. The case of matroids contains, among others, spanning trees in
undirected graphs, which have applications in various areas of networking. Regarding directed graphs, a-arborescences arise as a natural generalization of spanning trees. Our generic algorithm yields an efficient algorithm to compute SE for all the corresponding classes of games. For general games, however, we show that the problem of computing a SE is NP-hard, even in two-commodity networks. In Section 3 we study the duration and complexity of sequential improvement dynamics that converge to PNE and SE. We first observe that for every matroid bottleneck congestion game the lazy best response dynamics presented in [1] converge in polynomial time to a PNE. In contrast to this positive result for unilateral dynamics, we show that it is NP-hard to decide if a coalitional improving move exists, even for matroid and single-commodity network games, and even if the deviating coalition is fixed a priori. This highlights an interesting contrast for these two classes of games: while there are polynomial-time algorithms to compute a SE, it is impossible to decide efficiently if a given state is a SE – the decision problem is co-NP-hard. For more general games, we observe in Section 3.2 that constructions of [23] regarding the hardness of computing PNE in regular games can be adjusted to yield similar results for bottleneck games. In particular, in (a) symmetric games with arbitrary delay functions and (b) asymmetric games with bounded-jump delay functions computing a PNE is PLS-complete. In addition, we show that in both cases there exist games and starting states from which every sequence of improvement moves to a PNE is exponentially long. We extend this result to the case when moves of coalitions of size O(n^{1−ε}) are allowed, for any constant ε > 0. In addition, we observe that all of these hardness results generalize to the computation of α-approximate PNE and SE, for any polynomially bounded factor α. An α-approximate PNE (SE) is a relaxation of a PNE (SE), which is stable only against (coalitional) improving moves that decrease the delay of the (every) moving player by at least a factor of α > 1. We conclude the paper in Section 4 by outlining some interesting open problems regarding the convergence to approximate equilibria.
1.2 Preliminaries
Bottleneck congestion games are strategic games G = (N, S, (c_i)_{i∈N}), where N = {1, . . . , n} is the non-empty and finite set of players, S = ×_{i∈N} S_i is the non-empty set of states or strategy profiles, and c_i : S → ℕ is the individual cost function that specifies the cost value of player i for each state S ∈ S. A game is called finite if S is finite. For the sake of a clean mathematical definition, we define strategies and costs using the general notion of a congestion model. A tuple M = (N, R, S, (d_r)_{r∈R}) is called a congestion model if N = {1, . . . , n} is a non-empty, finite set of players, R = {1, . . . , m} is a non-empty, finite set of resources, and S = ×_{i∈N} S_i is the set of states or profiles. For each player i ∈ N, the set S_i is a non-empty, finite set of pure strategies S_i ⊆ R. Given a state S, we define ℓ_r(S) = |{i ∈ N : r ∈ S_i}| as the number of players using r in S. Every resource r ∈ R has a delay function d_r : S → ℕ defined as d_r(S) = d_r(ℓ_r(S)). In this paper, all delay functions are non-negative and non-decreasing. A congestion model M is called a matroid congestion model if for every i ∈ N there is a matroid M_i = (R, I_i) such that S_i equals the set of bases of M_i.
We denote by rk(M) = maxi∈N rk(Mi ) the rank of the matroid congestion model. (Bottleneck) congestion games corresponding
to matroid congestion models will be called matroid (bottleneck) congestion games. Matroids exhibit numerous nice properties. For a comprehensive overview see standard textbooks [16, Chapter 13] and [22, Chapters 39–42]. Let M be a congestion model. The corresponding bottleneck congestion game is the strategic game G(M) = (N, S, (c_i)_{i∈N}) in which c_i is given by c_i(S) = max_{r∈S_i} d_r(ℓ_r(S)). We drop M whenever it is clear from context. We define the corresponding regular congestion game in the same way, the only difference is that c_i(S) = ∑_{r∈S_i} d_r(ℓ_r(S)). For a coalition C ⊆ N we denote by −C its complement and by S_C = ×_{i∈C} S_i the set of states of players in C. A pair (S, (S'_C, S_{−C})) ∈ S × S is called an α-improving move of coalition C if c_i(S) > α·c_i(S'_C, S_{−C}) for all i ∈ C and α ≥ 1. For α = 1 we call (S, (S'_C, S_{−C})) an improving move (or profitable deviation). A state S is a k-strong equilibrium (k-SE) if there is no improving move (S, ·) for a coalition of size at most k. We say S is a strong equilibrium (SE) if and only if it is an n-SE. Similarly, S is a pure Nash equilibrium (PNE) if and only if it is a 1-SE. We call a state S an α-approximate SE (PNE) if no coalition (single player) has an α-improving move (S, ·). We denote by I(S) the set of all possible α-improving moves (S, S') to other states S' ∈ S. We call a sequence of states (S^0, S^1, . . .) an improvement path if (S^k, S^{k+1}) ∈ I(S^k) for all k = 0, 1, 2, . . . . Intuitively, an improvement path is a path in a so-called state graph G(G) derived from G, where every state S ∈ S corresponds to a node in G(G) and there is a directed edge (S, S') if and only if (S, S') ∈ I(S).
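To make the preceding definitions concrete, here is a minimal Python sketch (ours, purely illustrative; the paper contains no code) that computes the load ℓ_r(S), the bottleneck cost max_{r∈S_i} d_r(ℓ_r(S)), the corresponding sum cost, and searches for a unilateral improving move.

```python
def loads(state):
    """state: dict player -> set of resources (the strategy S_i); returns l_r(S)."""
    load = {}
    for strategy in state.values():
        for r in strategy:
            load[r] = load.get(r, 0) + 1
    return load

def bottleneck_cost(i, state, delay):
    """c_i(S) = max over r in S_i of d_r(l_r(S)); delay: dict r -> function of the load."""
    load = loads(state)
    return max(delay[r](load[r]) for r in state[i])

def sum_cost(i, state, delay):
    """Individual cost of player i in the corresponding regular congestion game."""
    load = loads(state)
    return sum(delay[r](load[r]) for r in state[i])

def unilateral_improving_move(state, strategies, delay):
    """Return (i, S_new) if a single player can strictly decrease her bottleneck
    cost by switching to another of her strategies, otherwise None."""
    for i, choices in strategies.items():
        current = bottleneck_cost(i, state, delay)
        for S_new in choices:
            trial = dict(state)
            trial[i] = S_new
            if bottleneck_cost(i, trial, delay) < current:
                return i, S_new
    return None
```

A state with no unilateral improving move is a PNE (a 1-SE); checking deviations of larger coalitions is exactly the problem whose hardness is discussed in Section 3.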
2 Computing Strong Equilibria In this section, we investigate the complexity of computing a SE in bottleneck congestion games. We first present a generic algorithm that computes a SE for an arbitrary bottleneck congestion game. It uses an oracle that solves a strategy packing problem (see Definition 1), which we term strategy packing oracle. For games in which the strategy packing oracle can be implemented in polynomial time, we obtain a polynomial algorithm computing a SE. We then examine games for which this is the case. In general, however, we prove that computing a SE is NP-hard, even for two-commodity bottleneck congestion games. The Dual Greedy. The general approach of our algorithm is to introduce upper bounds ur (capacities) on each resource r. The idea is to iteratively reduce upper bounds of costly resources as long as the residual capacities admit a feasible strategy packing, see Definition 1 below. Our algorithm can be interpreted as a dual greedy, or worst out algorithm as studied, e.g., in the field of network optimization, see Schrijver [22]. Definition 1. [Strategy packing oracle] Input: Finite set of resources R with upper bounds (ur )r∈R , and n collections S1 , . . . , Sn ⊆ 2R given implicitly by a certain combinatorial property. Output: Sets S 1 ∈ S1 , . . . , S n ∈ Sn such that |i ∈ {1, . . . , n} : r ∈ S i | ≤ ur for all r ∈ R, or the information, that no such sets exist. More specifically, when the algorithm starts, no strategy has been assigned to any player and each resource can be used by n players, thus, ur = n. If r is used by n players, its
cost equals dr (n). The algorithm now iteratively reduces the maximum resource cost by picking a resource r with maximum delay dr (ur ) and ur > 0. The number of players allowed on r is reduced by one and the strategy packing oracle checks, if there is a feasible strategy profile obeying the capacity constraints. If the strategy packing oracle outputs such a feasible state S , the algorithm reiterates by choosing a (possibly different) resource that has currently maximum delay. If the strategy packing oracle returns ∅ after the capacity of some r ∈ R was reduced to ur − 1, we fix the strategies of those ur many players that used r in the state the strategy packing oracle computed in the previous iteration and decrease the bounds ur of all resources used in the strategies accordingly. This ensures that r is frozen, i.e., there is no residual capacity on r for allocating this resource in future iterations of the algorithm. The algorithm terminates after at most n · m calls of the oracle. Theorem 1. Dual Greedy computes a SE. It is worth noting that the dual greedy algorithm applies to arbitrary strategy spaces. If the strategy packing problem can be solved in polynomial time, this algorithm computes a SE in polynomial time. Hence, the problem of computing a SE is polynomial time reducible to the strategy packing problem. For general bottleneck congestion games the converse is also true. Theorem 2. The strategy packing problem is polynomial time reducible to the problem of computing a SE in a bottleneck congestion game. Note that in the proof of the upper theorem the combinatorial structure of the strategy packing problem is not preserved. However, it is easy to establish a one to one correspondence for the case n = 2. That is, a SE of the two player game G = (N, S1 × S2 , (ci )i∈N ) can be computed in polynomial time for all delay functions if and only if the strategy packing oracle with input S1 , S2 can be implemented in polynomial time for every vector of capacities. In the next section we will present some interesting cases, in which the strategy packing problem can be solved in polynomial time, or in which computation becomes NP-hard. Complexity of Strategy Packing Theorem 3. The strategy packing problem can be solved in polynomial time for matroid bottleneck congestion games where the strategy set of player i equals the set of bases of a matroid Mi = (R, Ii ) given by a polynomial independence oracle. Proof. For each matroid Mi = (R, Ii ), we construct a matroid Mi = (R , Ii ) as follows. For each resource r ∈ R, we introduce ur resources r1 , . . . , rur to R . We say that r is the representative of r1 , . . . , rur . Then, a set I ⊂ R is independent in Mi if the set I that arises from I by replacing resources by their representatives is independent in Mi . This construction gives rise to a polynomial independence oracle for Mi . Now, we regard the matroid union M = M1 ∨ · · · ∨ Mn , which again is a matroid. Using the algorithm proposed by Cunningham [7] we can compute a maximum-size set B in I1 ∨ · · · ∨ In in time polynomial in n, m, rk(M), and the maximum complexity of the n independence oracles.
Clearly, if |B| < ∑_{i∈N} rk(M_i), there is no feasible packing of the bases of M_1, . . . , M_n. If, in contrast, |B| = ∑_{i∈N} rk(M_i), we obtain the corresponding strategies (S_1, . . . , S_n) using the algorithm.
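For illustration, the Dual Greedy of Section 2 can be sketched as follows in Python; the strategy packing oracle of Definition 1 is passed in as a black box that either returns a feasible packing (a dict mapping each still unfixed player to a strategy) or None, and the data layout is ours, so this is a sketch under those assumptions rather than the authors' implementation.

```python
def dual_greedy(resources, players, strategies, delay, packing_oracle):
    """Worst-out greedy: iteratively lower the capacity of a maximum-delay
    resource as long as the residual capacities still admit a packing."""
    u = {r: len(players) for r in resources}        # initial capacities u_r = n
    fixed = {}                                      # players whose strategy is frozen
    packing = packing_oracle(u, {i: strategies[i] for i in players})
    while True:
        candidates = [r for r in resources if u[r] > 0]
        if not candidates:
            break
        r = max(candidates, key=lambda x: delay[x](u[x]))   # resource of maximum delay d_r(u_r)
        u[r] -= 1
        free = {i: strategies[i] for i in players if i not in fixed}
        new_packing = packing_oracle(u, free)
        if new_packing is not None:                 # reduction keeps the instance packable
            packing = new_packing
            continue
        u[r] += 1                                   # undo the reduction and freeze
        for i, S in list(packing.items()):
            if i in fixed or r not in S:
                continue
            fixed[i] = S                            # freeze the players that used r
            for res in S:                           # charge their capacity usage
                u[res] -= 1
    fixed.update({i: S for i, S in packing.items() if i not in fixed})
    return fixed
```

If the oracle runs in polynomial time, this sketch performs O(n·m) oracle calls, matching the bound stated for the Dual Greedy above.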
We now consider strategy spaces defined as a-arborescences, which are in general not matroids. Let D = (V, R) be a directed graph with |R| = m. For a distinguished node a ∈ V, we define an a-arborescence as a directed spanning tree, where a has in-degree zero and every other vertex has in-degree one.
Theorem 4. The strategy packing problem can be solved in time O(m²n²) for a-arborescence games in which the set of strategies of each player equals the set of a-arborescences in a directed graph D = (V, R).
For single-commodity networks efficient computation of a SE is possible using well-known flow algorithms to implement the oracle. When we generalize to two commodities, however, a variety of problems concerning SE become NP-hard by a simple construction.
Theorem 5. The strategy packing problem can be solved in time O(m³) for single-commodity bottleneck congestion games.
Proof. Assigning a capacity of u_r to each edge and using the algorithm of Edmonds and Karp we obtain a maximum flow within O(m³). Clearly, if the value of the flow is smaller than n, no admissible strategies exist and we can return ∅. If the flow is n or larger we can decompose it into at least n unit flows and return n of them.
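As an illustration of the proof of Theorem 5, a single-commodity strategy packing oracle could be realized with an off-the-shelf max-flow routine as in the sketch below; it assumes integral capacities (so that the computed maximum flow is integral) and uses networkx only for brevity, so treat it as a sketch rather than the exact algorithm analyzed above.

```python
import networkx as nx

def single_commodity_packing(arcs, capacity, s, t, n):
    """arcs: iterable of directed edges (v, w); capacity: dict edge -> u_r.
    Returns n s-t paths respecting the capacities, or None if no packing exists."""
    G = nx.DiGraph()
    for (v, w) in arcs:
        G.add_edge(v, w, capacity=capacity[(v, w)])
    value, flow = nx.maximum_flow(G, s, t)
    if value < n:
        return None                                  # no admissible strategies
    paths = []
    for _ in range(n):                               # peel off n unit s-t paths
        positive = nx.DiGraph([(v, w) for v in flow
                               for w, f in flow[v].items() if f > 0])
        path = nx.shortest_path(positive, s, t)      # some path of positive-flow edges
        for v, w in zip(path, path[1:]):
            flow[v][w] -= 1
        paths.append(path)
    return paths
```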
Theorem 6. In two-commodity network bottleneck games it is strongly NP-hard to (1) compute a SE, (2) decide for a given state whether any coalition has an improving move, and (3) decide for a given state and a given coalition if it has an improving move. Proof. We reduce from the 2 Directed Arc-Disjoint Paths (2DADP) problem, which is strongly NP-hard, see Fortune et al. [10]. The problem is to decide if for a given directed graph D = (V, A) and two node pairs (s1 , t1 ), (s2 , t2 ) there exist two arc-disjoint (s1 , t1 )and (s2 , t2 )-paths. For the reduction, we define a corresponding two-commodity bottleneck game by introducing non-decreasing delay functions on every arc r by dr (x) = 0, if x ≤ 1 and 1, else. We associate every commodity with a player. Then, 2DADP is a Yes-instance if and only if every SE provides a payoff of zero to every player. For the other problems we simply construct a solution, in which the strategies are not arc-disjoint. The remaining results follow.
3 Convergence of Improvement Dynamics
In the previous section, we have outlined some prominent classes of games for which SE can be computed in polynomial time. Furthermore, it is known [12] that sequential improvement dynamics converge to PNE and SE. We now show that the Nash dynamics converges quickly to a PNE in matroid games. For the convergence to SE one has to consider deviations of coalitions of players. However, deciding if such a deviation exists is NP-hard even in matroid games or single-commodity network games.
3.1 Matroid and Single-Commodity Network Games
We first observe that bottleneck congestion games can be transformed into regular congestion games while preserving useful properties regarding the convergence to PNE. This allows us to show fast convergence to PNE in matroid bottleneck games.
Convergence to Pure Nash Equilibria. The following lemma establishes a connection between bottleneck and regular congestion games. For a bottleneck congestion game G we denote by Gsum the regular congestion game with the same congestion model as G except that we choose d'_r(·) = m^{d_r(·)}, r ∈ R.
Lemma 1. Every PNE for Gsum is a PNE for G.
Proof. Suppose S is a PNE for Gsum but not for G. Thus, there is a player i ∈ N and a strategy S'_i ∈ S_i such that max_{r∈S_i} d_r(ℓ_r(S)) > max_{r∈S'_i} d_r(ℓ_r(S'_i, S_{−i})). We define d̄ := max_{r∈S'_i} d_r(ℓ_r(S'_i, S_{−i})). This implies max_{r∈S_i} d_r(ℓ_r(S)) ≥ d̄ + 1. We obtain a contradiction by observing
∑_{r∈S_i} d'_r(ℓ_r(S)) ≥ max_{r∈S_i} d'_r(ℓ_r(S)) ≥ m^{d̄+1} > (m − 1)·m^{d̄} ≥ ∑_{r∈S'_i} d'_r(ℓ_r(S'_i, S_{−i})).
We analyze the lazy best response dynamics considered for regular matroid congestion games presented in [2] and combine their analysis with Lemma 1. This allows us to establish the following result.
Theorem 7. Let G be a matroid bottleneck congestion game. Then the lazy best response dynamics converges to a PNE in at most n²·m·rk(M) steps.
Proof. We consider the lazy best response dynamics in the corresponding game Gsum. In addition, we suppose that a player accepts a deviation only if his bottleneck value is strictly reduced. It follows that the duration is still bounded from above by n²·m·rk(M) best responses as shown in [1].
Convergence to Strong Equilibria. For matroid bottleneck congestion games we have shown above that it is possible to converge to a PNE in polynomial time by a kind of best-response dynamics with unilateral improving moves. While previous work [12] establishes convergence to SE for every sequence of coalitional improving moves, it may already be hard to find one such move. In fact, we show that an α-improving move can be strongly NP-hard to find, even if strategy spaces have simple matroid structures. This implies that deciding whether a given state is an α-approximate SE is strongly co-NP-hard – even if all delay functions satisfy the β-bounded-jump condition1 for any β > α. Regular congestion games with β-bounded-jump delays are known to allow improved complexity results for approximate equilibria (see, e.g., [5]). Theorem 8. In matroid bottleneck congestion games it is strongly NP-hard to decide for a given state S if there is some coalition C ⊆ N that has an α-improving move, for every polynomial time computable α. 1
Delay function dr satisfies the β-bounded-jump condition if dr (x + 1) ≤ β · dr (x) for any x ≥ 1.
Proof. We reduce from Set Packing. An instance of Set Packing is given by a set of elements E and a set U of sets U ⊆ E, and a number k. The goal is to decide if there are k mutually disjoint sets in U. Given an instance of Set Packing we show how to construct a matroid game G and a state S such that there is an improving move for some coalition of players C if and only if the instance of Set Packing has a solution. The game will include |N| = 1 + |U| + |E| + U∈U |U| many players. First, we introduce a master player p1 , which has two possible strategies. He can either pick the coordination resource rc or the trigger resource rt . For each set U ∈ U, there is a set player pU . Player pU can choose either rt or a set resource rU . For each set U and each element e ∈ U, there is an inclusion player pU,e . Player pU,e can use either the set resource rU or an element resource re . Finally, for each element e, there is an element player pe that has strategies {rc , re } and {rc , ra } for some absorbing resource ra . The state S is given as follows. Player p1 is on rc , all set players use rt , all inclusion players the corresponding set resources rU , and all element players the strategies {rc , re }. The coordination resource rc is a bottleneck for the master player and all element players. The delays are drc (x) = α + 1, if x > |E| and 1, otherwise. The trigger resource has delay drt (x) = 1, if x ≤ |U| − k + 1, and α + 1, otherwise. For the set resources rU the delay is drU (x) = 1, if x ≤ 1 and α + 1, otherwise. Finally, for the element resources the delay is dre (x) = 1 if x ≤ 1 and α + 1 otherwise.
The previous theorem shows hardness of the problem of finding a suitable coalition and a corresponding improving move. Even if we specify the coalition in advance and search only for strategies corresponding to an improving move, the problem remains strongly NP-hard. Corollary 1. In matroid bottleneck congestion games it is strongly NP-hard to decide for a given state S and a given coalition C ⊆ N if there is an α-improving move for C, for every polynomial time computable α. We can adjust the previous two hardness results on matroid games to hold also for single-commodity network games. Theorem 9. In single-commodity network bottleneck congestion games it is strongly NP-hard to decide for a given state S (1) if there is some coalition C ⊆ N that has an α-improving move, and (2) if a given coalition C ⊆ N has an α-improving move, for every polynomial time computable α. 3.2 General Games and Approximation The results of the previous sections imply hardness of the computation of SE or coalitional deviations, even in network games. Therefore, when considering general games we here restrict ourselves mostly to unilateral improving moves and PNE. Unfortunately, even in this restricted case the hardness results for regular congestion games in Skopalik and V¨ocking [23] immediately imply identical results for bottleneck congestion games. The main result of [23] shows that computing an approximate PNE is PLShard. The proof is a reduction from CircuitFlip. We can regard the resulting congestion game as a bottleneck congestion game. It is straightforward to adjust all arguments in
the proof of [23] to remain valid for bottleneck congestion games. A standard transformation [9] immediately yields the same result even for symmetric games, in which S_i = S_j for all i, j ∈ N.
Corollary 2. Finding an α-approximate PNE in a symmetric bottleneck congestion game with positive and increasing delay functions is PLS-complete, for every polynomial-time computable α > 1.
A second result in [23] reveals that sequences of α-improving moves do not reach an α-approximate PNE quickly – even if all delay functions satisfy the β-bounded-jump condition with a constant β. Again, the proof remains valid if one regards the game as an asymmetric bottleneck congestion game. This yields the following corollary.
Corollary 3. For every α > 2, there is a β > 1 such that, for every n ∈ ℕ, there is a bottleneck congestion game G(n) and a state S with the following properties. The description length of G(n) is polynomial in n. The length of every sequence of α-improving moves leading from S to an α-approximate equilibrium is exponential in n. All delay functions of G(n) satisfy the β-bounded-jump condition.
Using the same trick as before to convert an asymmetric game into a symmetric one yields a similar result for symmetric games. However, we must sacrifice the β-bounded-jump condition of the delay functions, for every β polynomial in n. Despite the fact that (coalitional) improving moves are NP-hard to compute, one might hope that the state graph becomes sufficiently dense such that it allows short improvement paths. Unfortunately, we can show that this is not true, even if we consider all improving moves of coalitions of size up to O(n^{1−ε}), for any constant ε > 0. Again, the same result holds for symmetric games when sacrificing the bounded-jump condition.
Theorem 10. For every α > 2, there is a β > 1 such that, for every n ∈ ℕ and for every k ∈ ℕ, there is a bottleneck congestion game G(n, k) and a state S with the following properties. The description length of G(n, k) is polynomial in n and k. The length of every sequence of α-improving moves of coalitions of size at most k leading from S to an α-approximate k-SE is exponential in n. All delay functions of G(n, k) satisfy the β-bounded-jump condition.
4 Conclusion We have provided a detailed study of the computational complexity of exact and approximate pure Nash and strong equilibria in bottleneck congestion games. However, some important and fascinating open problems remain. While we have shown that results from [23] essentially translate, we were not able to establish the positive result of [5] about quick convergence to approximate PNE for symmetric games with bounded-jump delays. In addition, there are open problems regarding the duration of unilateral dynamics in symmetric network games and hardness of computing PNE in asymmetric networks. Finally, it would be interesting to see how results on centralized computation of SE extend to the computation of α-approximate SE and k-SE, for 1 < k < n.
References
1. Ackermann, H., Röglin, H., Vöcking, B.: On the impact of combinatorial structure on congestion games. J. ACM 55(6), 1–22 (2008)
2. Ackermann, H., Röglin, H., Vöcking, B.: Pure Nash equilibria in player-specific and weighted congestion games. Theor. Comput. Sci. 410(17), 1552–1562 (2009)
3. Aumann, R.: Acceptable points in general cooperative n-person games. In: Contributions to the Theory of Games, vol. 4 (1959)
4. Banner, R., Orda, A.: Bottleneck routing games in communication networks. IEEE J. Selected Areas Comm. 25(6), 1173–1179 (2007)
5. Chien, S., Sinclair, A.: Convergence to approximate Nash equilibria in congestion games. In: Proc. 18th Symp. Disc. Algorithms (SODA), pp. 169–178 (2007)
6. Cole, R., Dodis, Y., Roughgarden, T.: Bottleneck links, variable demand, and the tragedy of the commons. In: Proc. 17th Symp. Disc. Algorithms (SODA), pp. 668–677 (2006)
7. Cunningham, W.H.: Improved bounds for matroid partition and intersection algorithms. SIAM J. Comput. 15, 948–957 (1986)
8. Edmonds, J.: Matroid partition. In: Dantzig, G.B., Veinott, A.F. (eds.) Mathematics of the Decision Sciences, pp. 335–345 (1968)
9. Fabrikant, A., Papadimitriou, C., Talwar, K.: The complexity of pure Nash equilibria. In: Proc. 36th Symp. Theory of Computing (STOC), pp. 604–612 (2004)
10. Fortune, S., Hopcroft, J.E., Wyllie, J.C.: The directed subgraph homeomorphism problem. Theor. Comput. Sci. 10, 111–121 (1980)
11. Gabow, H.N.: A matroid approach to finding edge connectivity and packing arborescences. J. Comput. Syst. Sci. 50(2), 259–273 (1995)
12. Harks, T., Klimm, M., Möhring, R.H.: Strong Nash equilibria in games with the lexicographical improvement property. In: Leonardi, S. (ed.) WINE 2009. LNCS, vol. 5929, pp. 463–470. Springer, Heidelberg (2009)
13. Holzman, R., Law-Yone, N.: Strong equilibrium in congestion games. Games Econ. Behav. 21(1-2), 85–101 (1997)
14. Keshav, S.: An engineering approach to computer networking: ATM networks, the Internet, and the telephone network. Addison-Wesley, Reading (1997)
15. Konishi, H., Le Breton, M., Weber, S.: Equilibria in a model with partial rivalry. J. Econ. Theory 72(1), 225–237 (1997)
16. Korte, B., Vygen, J.: Combinatorial Optimization: Theory and Algorithms. Springer, Heidelberg (2002)
17. Mazalov, V., Monien, B., Schoppmann, F., Tiemann, K.: Wardrop equilibria and price of stability for bottleneck games with splittable traffic. In: Spirakis, P.G., Mavronicolas, M., Kontogiannis, S.C. (eds.) WINE 2006. LNCS, vol. 4286, pp. 331–342. Springer, Heidelberg (2006)
18. Nash-Williams, C.: An application of matroids to graph theory. In: Rosenstiehl, P. (ed.) Theory of Graphs; Proc. Intl. Symp. Rome 1966, pp. 263–265 (1967)
19. Qiu, L., Yang, Y.R., Zhang, Y., Shenker, S.: On selfish routing in internet-like environments. IEEE/ACM Trans. Netw. 14(4), 725–738 (2006)
20. Rosenthal, R.W.: A class of games possessing pure-strategy Nash equilibria. Intl. J. Game Theory 2(1), 65–67 (1973)
21. Roughgarden, T.: Routing games. In: Nisan, N., Tardos, É., Roughgarden, T., Vazirani, V. (eds.) Algorithmic Game Theory, ch. 18. Cambridge University Press, Cambridge (2007)
22. Schrijver, A.: Combinatorial Optimization: Polyhedra and Efficiency. Springer, Heidelberg (2003)
23. Skopalik, A., Vöcking, B.: Inapproximability of pure Nash equilibria. In: Proc. 40th Symp. Theory of Computing (STOC), pp. 355–364 (2008)
Combinatorial Auctions with Verification Are Tractable Piotr Krysta and Carmine Ventre Department of Computer Science, University of Liverpool, Liverpool, UK {p.krysta,carmine.ventre}@liverpool.ac.uk Abstract. We study mechanism design for social welfare maximization in combinatorial auctions with general bidders given by demand oracles. It is a major open problem in this setting to design a deterministic truthful auction which would provide the best possible approximation guarantee in polynomial time, even if bidders are double-minded (i.e., they assign positive value to only two sets in their demand collection). On the other hand, there are known such randomized truthful auctions in this setting. In the general model of verification (i.e., some kind of overbidding can be detected) we design the first deterministic truthful auctions which indeed provide essentially the best possible approximation guarantees achievable by any polynomial-time algorithm. This shows that deterministic truthful auctions have the same power as randomized ones if the bidders withdraw from unrealistic lies.
★ Work supported by EPSRC grant EP/F069502/1 and by DFG grant Kr 2332/1-3 within Emmy Noether Program.
1 Introduction
Algorithmic mechanism design attempts to marry computational and economic considerations. A mechanism has to deal with the strategic behavior of the participants but still has to compute the outcome efficiently. Facing a truthful mechanism, participants are always rationally motivated to correctly report their private information. (For the basics of mechanism design we refer to Ch. 9 in [17].) Many works in the literature (including this one) require truthtelling to be a dominant strategy equilibrium. This solution concept is very robust, but sometimes it may be too strong to simultaneously guarantee truthfulness and computational efficiency. This is the case for the arguably main technique of the field: VCG mechanisms [17]. VCGs are truthful once the exact optimal outcome is achieved. This clashes with computational aspects, as for many interesting applications exact optimization is an NP-hard problem. So, we have to resort to efficient approximation algorithms. Unfortunately, VCGs cannot be applied in these cases [15]. The main challenge in algorithmic mechanism design is to go beyond VCG and design efficient truthful mechanisms for those hard applications. The design of truthful combinatorial auctions (CAs, see §2 for definition) is the canonical problem in the area suffering from this VCG drawback. The computational optimization problem in CAs is NP-hard to solve optimally or even to approximate: neither an approximation ratio of m^{1/2−ε}, for any constant ε > 0, nor of O(d/log d) can be obtained in polynomial time [14,13,10], where m is the number of goods to sell and d denotes the
maximum size of subsets of goods bidders are interested in. Therefore VCGs cannot be used to solve CAs efficiently and strategically. To date we do not yet have a complete picture of the hardness of CAs. That is, the question in Ch. 12 of [17] is still unanswered: “What are the limitations of deterministic truthful CAs? Do approximation and dominant-strategies clash in some fundamental and well-defined way for CAs?”
Related work and our contributions. In an attempt to answer the questions above, a large body of literature focused on the design of efficient truthful CAs under different assumptions. The first results of tractability rely on the restriction of the bidders’ domains. If bidders are interested in only one set, the so-called single-minded domain, CAs are very well understood: a certain monotonicity property is sufficient for truthfulness and can be guaranteed efficiently, achieving the best approximation ratio (in terms of m) possible (the best here refers to any, even not truthful, polynomial-time algorithm), see Lehmann et al. [13]. For single-minded domains, a host of other truthful CAs are known, e.g., [1,4]. Also, a number of truthful CAs were found under different assumptions (restrictions) on the valuation domains (see Fig. 11.2 in [17] for a complete picture). The situation is very different for multi-dimensional domains where bidders can valuate different sets of goods differently. Very few results are known and still do not answer the questions above. In Holzman et al. [11] an algorithm that optimizes over a carefully chosen range of solutions, i.e., a maximal-in-range (MIR) algorithm, is shown to be a truthful O(m/√log m) approximation. A second result is the mechanism of Bartal et al. [2] that applies only to the special case of auctions with many duplicates of each good. No other deterministic positive results are known even for the simplest case of double-minded bidders, i.e., bidders with only two non-zero valuations. On the contrary, obtaining efficient truthful CAs is not tractable when using deterministic MIR algorithms, see Buchfuhrer et al. [6]. Positive results are instead known for randomized truthful (in the universal sense) CAs. In particular, Dobzinski [7] presents a universally truthful CA providing an O(√m)-approximate solution with high probability (for this auction no approximation guarantee in terms of d is claimed). Due to this error probability, these solutions do not guarantee the approximation ratio. The success probability cannot be amplified either: repeating the auction would destroy truthfulness [7].
In this paper, we show deterministic truthful CAs with multi-dimensional bidders that run in polynomial time and return (essentially) best possible approximate solutions under a general and well motivated assumption. We use an approach which is orthogonal to the previously used restriction of the domain for CAs: we keep the most general valuation domains but restrict the way bidders can lie. We employ well motivated economic settings [8,9] and the idea of verification [16].
Verification and CAs: Motivation. It was observed in [8] that in an economic scenario of a “regulation of a monopolist” it makes sense to assume that no bidder will ever overbid: overbidding can sometimes be infinitely costly. One of such cases arises when the mechanism designer could “inspect” the profits (valuations in CAs terminology) generated by the bidder, who can only hide a portion of the profits but cannot inflate them [8]. For a concrete example, consider a government (auctioneer) auctioning business licenses for a set U of cities under its administration. A business company (bidder) wants to get a license for some subset of cities (subset of U) to sell her product stock to the market. Consider the bidder’s profit for a subset of cities S to be equal to a unitary
publicly known product price (e.g., for some products, such as drugs, the government could fix a social price) times the number of product items available in the stocks the company has in the cities of S (note that bidders will sell products already in stock, i.e., no production costs are involved as they have been sustained before the auction is run; this is conceivable when a government runs an auction for urgent needs, e.g., salt provision for icy roads). In this scenario, the bidder is strategic on her stock availability and thus cannot overbid her profits: the concealment of existing product items in stock is costless, but disclosure of unavailable ones is prohibitively costly (if not impossible!). Our assumption on top of the motivations in [8] is that the “inspection” is only carried out for implemented solutions and thus the assumption of no-overbidding is, in our model, only made for the actual outcome of the mechanism. This assumption follows the one used in the mechanisms with verification literature (see, e.g., [16,20,18,19]) and generalizes similar no-overbidding assumptions used in the literature of sponsored search auctions [5]. As we show in this paper it is challenging to design truthful CAs in this verification model. (For more details see §2.) Observe moreover that in cases like the business licenses auction above, it makes sense to consider truthful auctions with no money, i.e., without payments. Indeed, the government could not ask money from business companies as its own objective is to set up a market in the country in a way to maximize the social welfare. In fact, some of our auctions are without money.
The results. To summarize, intuitively, our verification model (for precise definitions see §2) assumes CAs with multi-minded bidders where each bidder i can express valuations v_i(S) on any subset S of goods, and if set S is awarded to her she may not overbid the true value v_i(S) (this is the verification step); otherwise the bid is unrestricted. Moreover, our model also allows bidder i to misreport the true sets S themselves. Our truthful auctions therefore enforce the bidders to be truthful both with respect to their true values and demanded sets. We prove the following results in our verification CAs model. We first look in §4 at uniform-size bidders, i.e., bidders that have non-zero valuations for bundles of the same size, and we provide a (min{d, √m} + 1)-approximate polynomial-time deterministic truthful combinatorial auction with verification and with no money. Then in §5 and §7 the general setting is considered. For any ε > 0, we provide a ((1 + ε)·min{d + 1, 2√m})-approximate polynomial-time deterministic truthful combinatorial auction with verification for bidders bidding from finite domains. The assumption of finite domains easily fits real-world auctions: currency is, by its nature, discrete and with natural upper and lower bounds. We stress that the approximation guarantees of our CAs are essentially best possible even in a non-strategic computational sense (cf. computational lower bounds mentioned in the introduction). Our results can be read in terms of the open questions about CAs in [17] reported above as follows. Although it seems there is a clash between approximation and dominant strategies [6], our results show that this is solely because of the presence of unrealistic lies. Moreover, we introduce verification to the realm of CAs and give another evidence of the power of verification [16,19]. Interestingly, our results are also the first non-MIR truthful mechanisms with verification for multi-dimensional agents. Our auctions use a greedy algorithm (see §3) which is not MIR. We show that for this algorithm there exist payments leading to truthful CAs by means of the cycle monotonicity technique (see, e.g., [21]). This technique features a certain graph whose shape
depends, among others, on the algorithm, and amounts to showing that no negative-weight cycle belongs to the graph. We prove that the existence of certain edges in this graph is in contradiction with the algorithm’s greedy rule. Due to this, we can show that the graph contains only non-negative-weight cycles. The novelty of our arguments in the field of mechanisms with verification is in handling more general kinds of cycles; previous analyses rely on having “simple” cycles comprised of all zero-weight edges. We specifically show that the cycle-monotonicity technique surprisingly couples with our greedy algorithm, allowing for the full truthfulness analysis where both valuations and sets themselves are private data of the bidders. To analyze the approximation guarantee of the algorithm (see §6) and show that it is (essentially) as good as possible in terms of both parameters m and d we rely on LP-duality. Similarly to [12], we express the problem as an LP (relaxation of the natural IP for CAs), consider its dual, and study the properties of some dual solutions defined upon the greedy algorithm. Our LP-duality analyses can be viewed as extensions and generalizations of those in [12]. We finally show how to carefully employ demand queries to efficiently represent the input and obtain polynomial-time truthful approximate CAs with general bidders (see §7).
2 Model and Preliminaries
In a combinatorial auction we have a set U of m goods and n bidders. Each bidder i has a private valuation function v_i and is interested in obtaining only one set in a private collection S_i of subsets of U. The valuation function maps subsets of goods to non-negative real numbers (v_i(∅) is normalized to be 0), i.e., we are interested in multi-minded XOR bidders. Agents’ valuations are monotone: for S ⊇ T we have v_i(S) ≥ v_i(T). Notice that the number of these valuations is exponential in m, while we need mechanisms running in time polynomial in m and n. So, we have to make an assumption on how these valuations are encoded. As in, e.g., [3,7], we assume that the valuations are represented as black boxes which can answer a specific natural type of queries, demand queries (in a demand query with bundle prices the bidder is presented with a compact representation of prices p(S) for each S ⊆ U, and the answer is the bundle S that maximizes the profit v(S) − p(S); strictly speaking, XOR bidders are equivalent to bidders given by value queries, while demand queries are strictly more powerful than value queries; see [17,3] for details). The goal is to find a partition S_1, . . . , S_n of U such that ∑_{i=1}^n v_i(S_i) – the social welfare – is maximized. To ease our presentation we will assume that |S_i| = poly(n, m), and in §7 we will show how to use demand queries to replace this assumption. As an example consider U = {1, 2, 3} and the first bidder to be interested in S_1 = {{1}, {2}, {1, 2}}. The valuation function of bidder i for a given set S ∉ S_i is v_i(S) = max_{S'∈S_i : S⊇S'} {v_i(S')} if S ⊇ S' for some S' ∈ S_i, and is v_i(S) = 0 otherwise. Accordingly, we say that v_i(S) ≠ 0 (for S ∉ S_i) is defined by an inclusion-maximal set S' ∈ S_i such that S' ⊆ S and v_i(S') = v_i(S). If v_i(S) = 0 then we say that ∅ defines it. So in the above example v_1({1, 2, 3}) is defined by {1, 2}. Throughout the paper we assume that bidders are interested in sets of cardinality at most d ∈ ℕ, i.e., d = max{|S| : ∃ i s.t. S ∈ S_i ∧ v_i(S) > 0}. We want to design an allocation algorithm A and a payment function P. The auction (A, P), for a given input of bids from the bidders, outputs an assignment (i.e., at most
one of the requested sets is given to each bidder) and charges the bidder i a payment P_i. Allocations and payments should be defined so that no bidder has an incentive to misreport her preferences. More formally, we let T_i be a set of poly(n, m) non-empty subsets of U and let z_i be the corresponding valuation function of agent i, i.e., z_i : T_i → ℝ₊. We call b_i = (z_i, T_i) a declaration of bidder i. We let t_i = (v_i, S_i) be the true type of agent i. We let D_i denote the set of all the possible declarations of agent i and call D_i the declaration domain of bidder i. We also say that bidders are uniform-size if for any i and all (·, T_i) ∈ D_i it holds: |T| = |T'| for any T, T' ∈ T_i. Fix the declarations b_{−i} of all the agents but i. For any declaration b_i = (z_i, T_i) in D_i, we let A_i(b_i, b_{−i}) be the set that A on input b = (b_i, b_{−i}) allocates to bidder i. If no set is allocated to i then we naturally set A_i(b_i, b_{−i}) = ∅. We say that (A, P) is a truthful auction if for any i, b_i ∈ D_i, b_{−i}:
v_i(A_i(t_i, b_{−i})) − P_i(t_i, b_{−i}) ≥ v_i(A_i(b)) − P_i(b).   (1)
Recall that A_i(t_i, b_{−i}) may not belong to the set of demanded sets S_i. In particular, there can be several sets in S_i (or none) that are subsets of A_i(t_i, b_{−i}). However, as observed above, the valuation is defined by a set in S_i ∪ {∅} which is an inclusion-maximal subset of the set A_i(t_i, b_{−i}) that maximizes the valuation of agent i. We denote such a set as σ_i(A(t_i, b_{−i})|t_i), i.e., v_i(A_i(t_i, b_{−i})) = v_i(σ_i(A(t_i, b_{−i})|t_i)). In our running example above, it can be, for some algorithm A and some b_{−1}, that A_1(t_1, b_{−1}) = {1, 2, 3} ∉ S_1, whose valuation is defined as observed above by {1, 2}; the set {1, 2} is denoted as σ_1(A(t_1, b_{−1})|t_1). (Similarly, we define σ_i(A(b_i, b_{−i})|b_i) ∈ T_i ∪ {∅} w.r.t. A_i(b_i, b_{−i}) and declaration b_i.) Following the same reasoning, we let σ_i(A(b_i, b_{−i})|t_i) denote the set in S_i ∪ {∅} such that v_i(A_i(b_i, b_{−i})) = v_i(σ_i(A(b_i, b_{−i})|t_i)). We focus on exact algorithms in the sense of [13] (in our setting, an algorithm is exact if, to each bidder, either only one of the declared sets is awarded or none). This means that A_i(b_i, b_{−i}) ∈ T_i ∪ {∅}. This implies that A_i(b_i, b_{−i}) = σ_i(A(b_i, b_{−i})|b_i) and then the definition of σ_i(·|·) yields the following for any t_i and b_i in D_i:
σ_i(A(b_i, b_{−i})|t_i) ⊆ A_i(b_i, b_{−i}) = σ_i(A(b_i, b_{−i})|b_i).   (2)
In the verification model each bidder can only declare lower valuations for the set she is awarded. More formally, bidder i whose type is t_i = (v_i, S_i) can declare a type b_i = (z_i, T_i) if and only if whenever σ_i(A(b_i, b_{−i})|b_i) ≠ ∅:
z_i(σ_i(A(b_i, b_{−i})|b_i)) ≤ v_i(σ_i(A(b_i, b_{−i})|t_i)).   (3)
In particular, bidder i evaluates the assigned set σ_i(A(b_i, b_{−i})|b_i) ∈ T_i as σ_i(A(b_i, b_{−i})|t_i) ∈ S_i ∪ {∅}, i.e., v_i(σ_i(A(b_i, b_{−i})|t_i)) = v_i(σ_i(A(b_i, b_{−i})|b_i)), and therefore the set σ_i(A(b_i, b_{−i})|b_i) can be used to verify a posteriori that bidder i has overbid declaring z_i(σ_i(A(b_i, b_{−i})|b_i)) > v_i(σ_i(A(b_i, b_{−i})|b_i)) = v_i(σ_i(A(b_i, b_{−i})|t_i)). To be more concrete, consider the economic motivation above. The set of cities σ_i(A(b_i, b_{−i})|b_i), for which the government assigns licenses to bidder i when declaring b_i, can be used a posteriori to verify overbidding by simply counting the product items available in bidder i’s stocks in the granted cities. When (3) is not satisfied then the bidder is caught lying by the
verification step and the auction punishes her so to make this behavior very undesirable (i.e., for simplicity we can assume that in such a case the bidder will have to pay a fine of infinite value). This way (1) is satisfied directly when (3) does not hold (as in such a case a lying bidder would have an infinitely bad utility because of the punishment/fine). Thus in our model, truthfulness with verification of an auction is fully captured by (1) holding only for any i, b−i and bi = (zi , Ti ) ∈ Di such that (3) is fulfilled. Cycle monotonicity. The technique we will use to derive truthful auctions for multiminded XOR bidders is the so-called cycle monotonicity. Consider an algorithm A. Fix bidder i and declarations b−i . The declaration graph associated to algorithm A has a vertex for each possible declaration in the domain Di . We add an arc between a = (z, T ) and b = (w, U) in Di whenever a bidder of type a can declare to be of type b obeying (3). Following the definition of the verification setting, edge (a, b) belongs to the graph if and only if z(σ(b|a)) ≥ w(σ(b|b)). 6 The weight of the edge (a, b) is defined as z(σ(a|a)) − z(σ(b|a)) and thus encodes the loss that a bidder whose type is (z, T ) incurs by declaring (w, U). The following known result relates the cycles of the declaration graph to the existence of payments leading to truthful auctions. Proposition 1 ([21]). If each declaration domain Di is finite and each declaration graph associated to algorithm A does not have negative-weight cycles then there exists a payment function P such that (A, P ) is a truthful auction with verification for multi-minded XOR bidders. Above proposition is adapted to the verification setting as in, e.g., [20]. Therein it is showed that cycle monotonicity and verification couple only for finite domains. This is the technical reason for which one of our auctions assumes finite domains. Concerning our model, a stronger assumption would require (3) to hold for all subsets and not just for the output set. However, in such a model, obtaining truthfulness would be immediate (the declaration graph associated to any algorithm would be acyclic!). On the contrary in our model the design of truthful CAs with verification is challenging because of our weaker assumption (3). Our verification step applies a posteriori and thus assuming (3) is actually weaker. Indeed, here a millionaire bidder among billionaire bidders could speculate on his declaration knowing that the algorithm will not award a big set to her even if she slightly overbids (assuming that the algorithm, which is publicly known, has a reasonable approximation guarantee). This kind of misbehavior can be harmful for obtaining truthful CAs with verification as showed in Example 1. The latter shows how, in connection with our greedy auction, the bidders could speculate on their declarations in our model. Note that using the outcome to verify overbidding, and thus giving the bidder the chance to speculate on valuations of unselected outcomes, is the core of the entire mechanisms with verification literature (e.g., [16,18,19]).
3 The Algorithm
In this paper we consider a very simple but effective greedy algorithm inspired by [13]. The algorithm orders the bids received in non-increasing order of efficiency.
⁶ We let σ(b|a) be a shorthand for σ_i(A(b, b_{−i})|a) when A, i, b_{−i} are understood.
Algorithm 1. The greedy algorithm component of our truthful auctions.
1. Let l denote the number of different bids, i.e., l = ∑_{i=1}^n k_i, with k_i = |T_i|.
2. For any bidder i and S ∈ T_i, consider the efficiency of bid b_i(S) defined as b_i(S)/√|S|.
3. Let b_1, b_2, . . . , b_l be the non-zero bids and S_1, . . . , S_l be all the demanded sets ordered by non-increasing efficiency, i.e., b_1/√|S_1| ≥ . . . ≥ b_l/√|S_l|. In case of ties: (i) if the tie is between declarations of different bidders consider first the bid of the lexicographically bigger bidder, (ii) otherwise consider first bigger sets.
4. For each j = 1, . . . , l let β(j) ∈ {1, . . . , n} be the bidder bidding b_j for the set S_j.
5. P ← ∅, B ← ∅.
6. For i = 1, . . . , l do
7.   If β(i) ∉ B ∧ S_i ∩ S = ∅ for all S in P then (a) P ← P ∪ {S_i}, (b) B ← B ∪ {β(i)}.
8. Return P.
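For concreteness, the following small Python sketch mirrors Algorithm 1 for explicitly listed bids; the tie-breaking of line 3 is only approximated by the sort key, and all identifiers are ours, so it is an illustration rather than the reference implementation.

```python
from math import sqrt

def greedy_auction(bids):
    """bids: list of (bidder, frozenset_of_goods, value) with value > 0.
    Returns a dict bidder -> granted set."""
    # sort by non-increasing efficiency b/sqrt(|S|); break ties roughly as in line 3
    order = sorted(bids,
                   key=lambda bid: (bid[2] / sqrt(len(bid[1])), bid[0], len(bid[1])),
                   reverse=True)
    granted, used_goods, served = {}, set(), set()
    for bidder, S, value in order:
        if bidder in served or S & used_goods:
            continue                    # colliding bid: skip it
        granted[bidder] = S             # line 7(a)
        used_goods |= S
        served.add(bidder)              # line 7(b)
    return granted

# e.g. a double-minded bidder 1 and a single-minded bidder 2:
# greedy_auction([(1, frozenset({'a'}), 3), (1, frozenset({'a', 'b'}), 4),
#                 (2, frozenset({'b'}), 2)])  ->  {1: {'a'}, 2: {'b'}}
```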
The efficiency of a bid for a certain set is defined as the ratio between the declared valuation and the square root of the size of the demanded set. Then the algorithm scans the list in this order and grants the sets that do not intersect with any of the previously selected sets to unselected bidders (see Algorithm 1). Note that in Algorithm 1, sets S_1, . . . , S_l are all the sets demanded by all bidders (with non-zero bids), i.e., {S_1, . . . , S_l} = T_1 ∪ . . . ∪ T_n. An important aspect of the algorithm is how it breaks efficiency ties, if any, in line 3. Whenever there is a tie between declarations of different bidders a fixed bid-independent tie-breaking rule is used. Such a rule uses a lexicographic order among the bidders which is fixed beforehand. When instead the tie is between declarations of the same bidder the tie-breaking rule depends on the bids. However, we will show that such a rule couples with truthfulness (with verification). We say that a set S ∈ T_i (for some i) collides with a set S' if S' is granted and either S ∩ S' ≠ ∅ or S' is granted to bidder i. Algorithm 1 does not grant colliding sets.
Example 1. We argue that Algorithm 1 needs payments to lead to truthful CAs with verification. The instance has a single double-minded bidder interested in sets S_1 and S_2 and whose true valuation is such that v(S_1)/√|S_1| > v(S_2)/√|S_2| with v(S_1) < v(S_2). Algorithm 1 allocates to the bidder the set S_1, guaranteeing a utility of v(S_1). Consider a declaration b = (z, T) = ((z(T_1), v(S_2)), (T_1, S_2)) such that S_2 ⊆ T_1 ⊃ S_1, z(T_1) > v(S_1) (thus a declaration compatible with (3)) and z(T_1)/√|T_1| < v(S_2)/√|S_2|. Algorithm 1 on input b allocates the set S_2 to the bidder, guaranteeing a utility of v(S_2) greater than the utility v(S_1) derived by truthtelling.
We now show structural properties of the declaration graph associated to Algorithm 1.
Lemma 1. For any i and b_{−i} the following holds. Let a = (z, T), b = (w, U) be two declarations in D_i such that (a, b) is an edge of the declaration graph associated to Algorithm 1. If σ(b|b) ≠ ∅ then σ(b|a) ≠ ∅.
Lemma 2. For any i and b_{−i} the following holds. Let a = (z, T), b = (w, U) be two declarations in D_i. Let S be a set granted by Algorithm 1 on input (a, b_{−i}). If S collides with U' ∈ U and precedes U' in the ordering considered by Algorithm 1 on input (b, b_{−i}) then U' is not granted on input (b, b_{−i}).
Lemma 3. For any i and b_{−i} the following holds. Let a = (z, T), b = (w, U) be two declarations in D_i such that σ(b|b) ≠ ∅. If σ(b|a) ≠ ∅ is not granted on input (a, b_{−i}) because it collides with some granted set S ∉ T, then in the declaration graph associated to Algorithm 1 there is no edge (a, b).
Proof. Fix bidder i and declarations b_{−i}. Assume for a contradiction that there is an edge (a = (z, T), b = (w, U)) in the declaration graph with σ(b|b) ≠ ∅ and where σ(b|a) is not granted because of the collision with some set S ∉ T. Since S ∉ T it must be that S ∩ σ(b|a) ≠ ∅. Also, since a non-granted set can only be considered after the granted set it collides with, we have that S is considered before σ(b|a) in the ordering with respect to bid a. From the existence of (a, b), since σ(b|b) ≠ ∅, we have z(σ(b|a)) ≥ w(σ(b|b)) and by (2) we observe that σ(b|a) ⊆ σ(b|b), thus implying z(σ(b|a))/√|σ(b|a)| ≥ w(σ(b|b))/√|σ(b|b)|. But since S ∉ T the algorithm uses in line 3 a bid-independent tie-breaking rule. Because of this, and as the reported valuation for S is unchanged, S will be considered by the algorithm earlier than σ(b|b) in the ordering with respect to bid b. Also, since (as argued above) S ∩ σ(b|a) ≠ ∅ and σ(b|a) ⊆ σ(b|b), we have that S ∩ σ(b|b) ≠ ∅, i.e., S collides also with σ(b|b). But then by Lemma 2 we reach the contradiction that σ(b|b) = A_i(b, b_{−i}) is not granted on input (b, b_{−i}).
5 General Bidders In this section we consider multi-minded XOR bidders that can misreport both valuation and sets in an unrestricted way (but obeying (3)). We begin with the following fact. Lemma 4. For any i and b−i the following holds. Let a = (z, T ), b = (w, U) be two = ∅. In the declaration graph declarations in Di such that σ(a|a) = ∅ and σ(b|b) associated to Algorithm 1 there is no edge (a, b). Thus, edges outgoing from vertices a such that σ(a|a) = ∅ only belong to cycles (if any) of all vertices for which Algorithm 1 grants no set. These cycles thus have weight 0. Proof. Fix i, b−i , and assume that edge (a = (z, T ), b = (w, U)) is in the declaration graph with σ(a|a) = ∅ and σ(b|b) = ∅. By Lemma 1, σ(b|a) = ∅. Since σ(a|a) = ∅, σ(b|a) is not granted on input (a, b−i ) and then σ(b|a) collides with some granted set S. Because non-granted sets collide with granted set, σ(a|a) = ∅ implies that S ∈ T. By Lemma 3 we reach a contradiction that (a, b) does not belong to the graph.
Lemma 5 shows that there are no negative-weight edges in cycles of the declaration graph associated to Algorithm 1, which will be used to prove Theorem 2.
Lemma 5. No declaration graph associated to Algorithm 1 has negative-weight cycles.
Proof. Assume by contradiction that for some i and b_{−i} there exists a declaration graph associated to Algorithm 1 with a negative-weight cycle C := a^0 = (z^0, T^0) → a^1 = (z^1, T^1) → . . . → a^k = (z^k, T^k) → a^{k+1} = (z^{k+1}, T^{k+1}) = a^0. Since C has negative weight, by the second part of Lemma 4, it involves only vertices for which Algorithm 1 grants some set, i.e., σ(a^j|a^j) ≠ ∅ for j = 0, . . . , k. This and the existence of edge (a^{j−1}, a^j), j = 1, . . . , k, implies, by Lemma 1, that σ(a^j|a^{j−1}) ≠ ∅ as well. Moreover, the cycle has at least one negative-weight edge. Without loss of generality assume that (a^0, a^1) has negative weight z^0(σ(a^0|a^0)) − z^0(σ(a^1|a^0)) < 0, thus implying that σ(a^0|a^0) ≠ σ(a^1|a^0). We can prove that z^0(σ(a^0|a^0))/√|σ(a^0|a^0)| > z^0(σ(a^1|a^0))/√|σ(a^1|a^0)| and that for any j = 0, . . . , k − 2 it holds: z^j(σ(a^{j+1}|a^j))/√|σ(a^{j+1}|a^j)| ≥ z^{j+1}(σ(a^{j+2}|a^{j+1}))/√|σ(a^{j+2}|a^{j+1})|. Using these inequalities, by letting k' = k − 1, we obtain that:
z^k(σ(a^0|a^k))/√|σ(a^0|a^k)| ≥ z^0(σ(a^0|a^0))/√|σ(a^0|a^0)| > z^0(σ(a^1|a^0))/√|σ(a^1|a^0)| ≥ z^{k'}(σ(a^k|a^{k'}))/√|σ(a^k|a^{k'})| ≥ z^k(σ(a^k|a^k))/√|σ(a^k|a^k)|,
where the first and last inequality follow from the existence of edges (a^k, a^0) and (a^{k'}, a^k) respectively and from (2). The chain of inequalities above implies that σ(a^k|a^k) ≠ σ(a^0|a^k) and so contradicts the fact that Algorithm 1 grants set σ(a^k|a^k) ≠ σ(a^0|a^k) for declaration a^k. Indeed, as for a^0 the set granted is σ(a^0|a^0), then no set preceding it in the ordering with respect to a^0 collides with it. Since ∅ ≠ σ(a^0|a^k) ⊆ σ(a^0|a^0) (from the existence of the edge (a^k, a^0)) then no set collides with σ(a^0|a^k) as well in the ordering with respect to a^k (as the only difference between the orderings with respect to a^k and a^0 is in the declarations of bidder i and as we are under the hypothesis that Algorithm 1 grants σ(a^k|a^k) ≠ σ(a^0|a^k) for declaration a^k). Thus a contradiction.
Theorem 2. There exists a polynomial-time truthful auction with verification for multi-minded XOR bidders bidding from finite declaration domains. For any ε > 0, the auction is ((1 + ε) min{d + 1, 2√m})-approximate. The running time is polynomial in m, n, l, and the number of bits to represent the bids.
Proof. By Lemma 5 and Proposition 1, Algorithm 1 admits a payment function leading to a truthful auction for finite domains. For payments we need to compute a polynomial number of shortest paths (one for each bidder) on (possibly) exponentially large graphs. As bids are expressed explicitly, for a single bidder the input has size O(log b_max), where b_max is the maximum bid in D_i. The graph has O(|D_i|) nodes. To keep its size polynomial, we round up the declarations to powers of (1 + ε) and run Algorithm 1. Thus we obtain polynomial-time computable payment functions: the graph's size is now O(log_{1+ε} b_max), i.e., polynomial in the input size. Truthfulness follows since the rounding selects a subgraph with only vertices that are powers of (1 + ε). Since we showed that the whole graph has only non-negative-weight cycles, the same holds for the subgraph. This implies that no bidder has an incentive to lie so as to be rounded to a different value. The approximation guarantee follows from Lemma 6 in §6 and from the rounding step.
6 Approximation Guarantee

We will use linear programming duality to prove the approximation guarantees of our algorithm. We let S = ∪_{i=1}^n S_i, where bidder i demands the sets in S_i. For a given set S ∈ S_i we denote by b_i(S) the bid of bidder i for that set. Let [n] be the set {1, . . . , n}. The linear programming (LP) relaxation of our problem and its dual are:

  max   Σ_{i=1}^n Σ_{S∈S_i} b_i(S) x_i(S)                               (4)
  s.t.  Σ_{i=1}^n Σ_{S∈S_i: e∈S} x_i(S) ≤ 1       ∀e ∈ U                (5)
        Σ_{S∈S_i} x_i(S) ≤ 1                      ∀i ∈ [n]              (6)
        x_i(S) ≥ 0                                ∀i ∈ [n], ∀S ∈ S_i    (7)

  min   Σ_{e∈U} y_e + Σ_{i=1}^n z_i                                     (8)
  s.t.  z_i + Σ_{e∈S} y_e ≥ b_i(S)                ∀i ∈ [n], ∀S ∈ S_i    (9)
        z_i, y_e ≥ 0                              ∀i ∈ [n], ∀e ∈ U.     (10)
In this dual LP the dual variable z_i corresponds to constraint (6). The approximation factor of Algorithm 1 is very close to the best possible factors in terms of m and d for this problem:

Lemma 6. Algorithm 1 is min{d + 1, 2√m}-approximate. Moreover, if for each bidder i we have that |S| = |S′| for all S, S′ ∈ S_i, then the approximation ratio of this algorithm is min{d, √m} + 1.

Proof sketch: We sketch only the 2√m-approximation ratio. Let P be a solution output by Algorithm 1, and Ψ_P = ∪_{S∈P} S. For each set S ∈ S \ P there either is an element e ∈ Ψ_P ∩ S which is the witness of the fact that ∃S′ ∈ P s.t. e ∈ S′, or there exists a bidder i and S′ ∈ P such that S, S′ ∈ S_i. For each S ∈ S \ P we keep in Ψ_P an arbitrary witness for S. We define two fractional dual solutions, y^1 and y^2. y^1 is defined after Algorithm 1 has terminated. Let μ = Σ_{S_i∈P} b_{β(i)}(S_i), and let P(S) = S ∩ Ψ_P if S ∩ Ψ_P ≠ ∅ and P(S) = S if S ∩ Ψ_P = ∅. For each e ∈ U, y^1_e = μ/m. y^2 is defined during the execution of Algorithm 1. We add to Algorithm 1 – in line 5: y^2_e := 0 for all e ∈ U and z^2_i := 0 for all i ∈ [n]; and in line 7(a): y^2_e := b_{β(i)}(S_i)/|P(S_i)| for all e ∈ P(S_i), and z^2_{β(i)} := b_{β(i)}(S_i). Both dual solutions provide lower bounds on the cost of P:

  Σ_{e∈U} y^j_e ≤ Σ_{S_i∈P} b_{β(i)}(S_i),    for j = 1, 2.    (11)

We define a dual solution y as a convex combination of y^1 and y^2: y_e = ½(y^1_e + y^2_e) for each e ∈ U, and z as z_i = z^2_i for each i ∈ [n]. We next show that the scaled solution (√m · y, √m · z) is dual LP feasible, i.e., that (9) is fulfilled, which means that for each i and set S ∈ S ∩ S_i,

  √m z_i + √m Σ_{e∈S} y_e ≥ b_i(S).    (12)

If S ∈ S \ P, then either S collided with some set chosen earlier into P via a witness e ∈ S, or P already contained a set S′ with S, S′ ∈ S_i. We prove (12) by using the greedy rule of the algorithm, and in the first case by showing a lower bound of b_i(S)/√m on the
convex combination ½(y^1_e + y^2_e), and in the second case by showing the same lower bound on z_i. If S ∈ P ∩ S_i, then (12) follows from the definition of z_i. Finally, because (√m · y, √m · z) is dual LP feasible, by weak duality √m Σ_{i=1}^n z_i + √m Σ_{e∈U} y_e is an upper bound on the value opt of the optimal integral solution, and so by (11):

  opt ≤ √m Σ_{i=1}^n z_i + √m Σ_{e∈U} y_e = √m Σ_{S_i∈P} z_{β(i)} + √m Σ_{e∈U} y_e ≤ √m Σ_{S_i∈P} b_{β(i)}(S_i) + √m Σ_{S_i∈P} b_{β(i)}(S_i) = 2√m Σ_{S_i∈P} b_{β(i)}(S_i).
7 Bidders Given by Demand Oracles

Suppose now that each bidder declares her demand oracle to the mechanism. Although the number of alternative sets for the bidders, denoted by l in the text of Algorithm 1, can be exponential in m, our goal is an auction with running time poly(m, n). A demand oracle for bidder j, given a bundle pricing function p : 2^U → R_+, outputs a bundle (set) in arg max{b_j(S) − p(S) : S ⊆ U}.⁷ We assume here that in case of ties there is a fixed tie-breaking rule, independent from the values b_j(·). We will now show how to implement Algorithm 1 in polynomial time assuming that the bidders are given by such demand oracles.

We first show how, for a given bidder j and a given C ⊂ U, to compute in polynomial time a set S ⊆ U that maximizes the efficiency b_j(S)/√|S| among sets S such that S ∩ C = ∅. For each possible size s = 1, 2, . . . , m (or s = 1, . . . , d if bidders are interested in sets of size at most d) of set S do the following: Define p : 2^U → R_+ as p(S) = +∞ if (|S| ≠ s or S ∩ C ≠ ∅) and p(S) = 0 if (|S| = s and S ∩ C = ∅). Then ask the following demand query T(s) := arg max{b_j(S) − p(S) : S ⊆ U}; thus set T(s) maximizes b_j(·) among sets of size s that are disjoint from set C. Now, to maximize b_j(S)/√|S| compute arg max{b_j(T(s))/√s : T(s) ∈ {T(1), T(2), . . . , T(m)}}.

To implement an iteration i of Algorithm 1 in steps 6–7, first, for each bidder j ∈ B (i.e., we only consider bidders who have not been assigned any set up to iteration i), we compute by the above method the set T_j := arg max{b_j(S)/√|S|}, where we set C := ∪_{Q∈P} Q (i.e., to maximize b_j(S)/√|S| we only take sets that are disjoint from the ones in the current solution P). Now as set S_i in step 7 of Algorithm 1 we take S_i := T_j where j := arg max{b_j(T_j)/√|T_j| : j ∈ B}. We terminate the algorithm when either C becomes the whole universe U, or B becomes empty, or there is no set T_j output for any j ∈ B, and in such a case we output the current P. We can then prove the following.

Theorem 3. For any ε > 0, there exists a truthful polynomial-time ((1 + ε) min{d + 1, 2√m})-approximate auction with verification for general bidders given by demand oracles bidding from finite declaration domains. There exists a polynomial-time (min{d, √m} + 1)-approximate truthful auction with verification and with no money for uniform-size bidders given by demand oracles. The running time is polynomial in m, n, and the number of bits needed to represent the bids.
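The per-bidder step above can be phrased as a short routine. The sketch below is our illustration only: it assumes bundles are Python sets, that the bundle-price demand oracle is available as a callable, and (beyond what the paper requires) that the oracle also reports the bid value of the bundle it returns.

    import math

    def best_disjoint_bundle(demand_oracle, m, allocated, d=None):
        # One demand query per candidate size s; keep the bundle maximizing
        # b_j(S)/sqrt(|S|) among bundles disjoint from the already allocated items.
        best_set, best_eff = None, 0.0
        for s in range(1, (d or m) + 1):
            def price(S, s=s):
                # bundle-price query: free for size-s bundles avoiding `allocated`,
                # prohibitively expensive otherwise
                return 0.0 if (len(S) == s and not (S & allocated)) else math.inf
            T, bid = demand_oracle(price)   # assumed to return an argmax bundle and its bid
            if T is None or len(T) != s or (T & allocated):
                continue
            eff = bid / math.sqrt(s)
            if eff > best_eff:
                best_set, best_eff = T, eff
        return best_set, best_eff

An iteration of the greedy step then calls this routine for every still-unassigned bidder and grants the bundle of the bidder with the largest returned efficiency.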
⁷ Demand queries with item prices are also known. However, it was observed in [3] that bundle-price demand queries and item-price queries are incomparable in their power when one carefully analyzes the representation of bundle-price demand queries. We note that our bundle-price queries have a compact representation (namely, given by a number and a set of goods).
Acknowledgments. We thank Ron Lavi for the helpful discussions, Paolo Penna for pointing out [8] and Dimitris Fotakis for an observation used in the proof of Lemma 6.
References ´ An approximate truthful mechanism 1. Archer, A., Papadimitriou, C.H., Talwar, K., Tardos, E.: for combinatorial auctions with single parameter agents. In: Proc. of SODA (2003) 2. Bartal, Y., Gonen, R., Nisan, N.: Incentive compatible multi unit combinatorial auctions. In: The Proc. of the 9th TARK, pp. 72–87 (2003) 3. Blumrosen, L., Nisan, N.: On the computational power of demand queries. SIAM J. Comput. 39(4), 1372–1391 (2009) 4. Briest, P., Krysta, P., V¨ocking, B.: Approximation techniques for utilitarian mechanism design. In: Proc. of STOC, pp. 39–48 (2005) 5. Bu, T., Deng, X., Qi, Q.: Multi-bidding strategy in sponsored keyword auction. In: Preparata, F.P., Wu, X., Yin, J. (eds.) FAW 2008. LNCS, vol. 5059, pp. 124–134. Springer, Heidelberg (2008) 6. Buchfuhrer, D., Dughmi, S., Fu, H., Kleinberg, R., Mossel, E., Papadimitriou, C.H., Schapira, M., Singer, Y., Umans, C.: Inapproximability for vcg-based combinatorial auctions. In: The Proc. of SODA (2010) 7. Dobzinski, S.: Two randomized mechanisms for combinatorial auctions. In: Charikar, M., Jansen, K., Reingold, O., Rolim, J.D.P. (eds.) RANDOM 2007 and APPROX 2007. LNCS, vol. 4627, pp. 89–103. Springer, Heidelberg (2007) 8. Gorkem, C.: Mechanism design with weaker incentive compatibility constraints. Games and Economic Behavior 56(1), 37–44 (2006) 9. Green, J.R., Laffont, J.: Partially Verifiable Information and Mechanism Design. The Review of Economic Studies 53, 447–456 (1986) 10. Hazan, E., Safra, S., Schwartz, O.: On the complexity of approximating k-set packing. Computational Complexity 15(1), 20–39 (2006) 11. Holzman, R., Kfir-Dahav, N., Monderer, D., Tennenholtz, M.: Bundling equilibrium in combinatorial auctions. Games and Economic Behavior 47, 104–123 (2004) 12. Krysta, P.: Greedy approximation via duality for packing, combinatorial auctions and routing. In: Jedrzejowicz, J., Szepietowski, A. (eds.) MFCS 2005. LNCS, vol. 3618, pp. 615–627. Springer, Heidelberg (2005) 13. Lehmann, D.J., O’Callaghan, L., Shoham, Y.: Truth revelation in approximately efficient combinatorial auctions. J. ACM 49(5), 577–602 (2002) 14. Nisan, N.: The communication complexity of approximate set packing and covering. In: Widmayer, P., Triguero, F., Morales, R., Hennessy, M., Eidenbenz, S., Conejo, R. (eds.) ICALP 2002. LNCS, vol. 2380, pp. 868–875. Springer, Heidelberg (2002) 15. Nisan, N., Ronen, A.: Computationally feasible vcg mechanisms. In: Proc. of EC (2000) 16. Nisan, N., Ronen, A.: Algorithmic Mechanism Design. Games and Economic Behavior 35, 166–196 (2001) ´ Vazirani, V.: Algorithmic Game Theory (2007) 17. Nisan, N., Roughgarden, T., Tardos, E., 18. Penna, P., Ventre, C.: Collusion-resistant mechanisms with verification yielding optimal solutions. In: Halperin, D., Mehlhorn, K. (eds.) ESA 2008. LNCS, vol. 5193, pp. 708–719. Springer, Heidelberg (2008) 19. Penna, P., Ventre, C.: Optimal collusion-resistant mechanisms with verification. In: Proc. of EC, pp. 147–156 (2009) 20. Ventre, C.: Mechanisms with verification for any finite domain. In: Spirakis, P.G., Mavronicolas, M., Kontogiannis, S.C. (eds.) WINE 2006. LNCS, vol. 4286, pp. 37–49. Springer, Heidelberg (2006) 21. Vohra, R.V.: Paths, cycles and mechanism design. Technical report, Kellogg School of Management (2007)
How to Allocate Goods in an Online Market?

Yossi Azar¹, Niv Buchbinder², and Kamal Jain³

¹ Tel-Aviv University, Tel-Aviv, 69978, Israel
² Microsoft Research, New England
³ Microsoft Research, Redmond
Abstract. We study an online version of Fisher's linear case market. In this market there are m buyers and a set of n divisible goods to be allocated to the buyers. The utility that buyer i derives from good j is u_ij. Given an allocation Û in which buyer i has utility Û_i, we suggest a quality measure that is based on taking an average of the ratios U_i/Û_i with respect to any other allocation U. We motivate this quality measure, and show that the market equilibrium is the optimal solution with respect to this measure. Our setting is online, and so the allocation of each good should be done without any knowledge of the upcoming goods. We design an online algorithm for the problem that is only worse by a logarithmic factor than any other solution with respect to our proposed quality measure, and in particular competes with the market equilibrium allocation. We prove a tight lower bound which shows that our algorithm is optimal up to constants. Our algorithm uses a primal-dual convex programming scheme. To the best of our knowledge this is the first time that such a scheme is used in the online framework. We also discuss an application of the framework in the display advertising business in the last section.
1 Introduction Allocating goods to buyers in a way that maximizes social welfare and fairness guarantees has been studied extensively. One well established framework for achieving a desirable allocation is the market equilibrium concept [1]. In a general market setting known as Fisher’s linear case market we are given m buyers and n divisible goods. Each buyer has a budget ei . The utility functions are linear, which means that buyer i derives a utility uij out of good j. It is well known that in this market (as well as many other more general markets) there exist prices for the goods and a corresponding (equilibrium) allocation of the goods to the buyers with several desirable properties. First, all goods are fully allocated and all buyers fully extract their budgets. Second, buyers are only assigned goods that belong to their optimal basket. That is, buyers are only allocated goods that maximize the utility per price for the buyer. This allocation has two additional nice properties. First, when considering splitting a buyer’s budget into two buyers whose utilities are the same as the original buyer, the sum of allocations of the market equilibrium solution after the split is also optimal before the split. This property shows the robustness of the market equilibrium allocation since buyers do not benefit
Research supported in part by the Israel Science Foundation.
from simulating themselves using several "smaller" buyers. A second property is that scaling all utilities of some buyer by a constant does not change the allocation, which is a desirable property because utilities of buyers are incomparable. This again means that buyers cannot benefit from boosting up their utilities. As early results capture only the existence of the allocation, computing the market equilibrium allocation (and prices) for various markets in polynomial time has received much attention recently [13,11,16,19,12].

Consider an allocation Û in which buyer i has utility Û_i. We would like to measure the quality of the allocation with respect to any other allocation U. Since utilities of different buyers are incomparable and may be scaled up or down, it is only meaningful to look at the ratios U_i/Û_i, for 1 ≤ i ≤ m. A natural way to measure the quality of Û with respect to some other allocation U is to select some average, for example an arithmetic average, and to look at the average of U_i/Û_i. The lower the average is, the better Û is. When considering the quality and fairness of Û, this evaluation can be done with respect to any other allocation U. It is then natural to evaluate the quality of the allocation Û as the maximum over all possible allocations U of these averages. Our proposed quality measure for Û is therefore the following:

  max_U avg_{i=1}^m ( U_i / Û_i )    (1)

where avg can be any weighted average with the budgets e_i of the buyers playing the role of weights. Looking for the best or fairest allocation then corresponds to looking for an allocation that minimizes the maximum. That is, we are looking for U* that is the solution to the following:

  U* = arg min_Û max_U avg_{i=1}^m ( U_i / Û_i ).

We explore this definition for several natural averages: the arithmetic, geometric and harmonic averages. A simple observation shows that market equilibrium is the optimal offline allocation for both the harmonic and the geometric averages. This follows directly from the convex program suggested by Eisenberg and Gale [14] as a way to compute market equilibrium in Fisher's linear case. Interestingly, we show that market equilibrium is also optimal for the arithmetic average. See Section 2.1 for a detailed proof. These results mean that the market equilibrium allocation achieves a value of 1 (which is optimal) with respect to quality measure (1), which further motivates our definition.

Market equilibrium is, indeed, a desirable allocation. However, in many settings the allocation of goods should be done in an online fashion, where goods are arriving one-by-one, and previous allocation decisions are irrevocable. As one motivating example one may think of a wireless router (base station) with many users. At any point in time there is a quality of transmission for each user, and the router should decide how to split the bandwidth between the users [21]. The problem is, of course, online, where every time slot corresponds to a new product. Achieving a fair allocation in this online environment is a natural goal. A second example is allocating impressions to advertisers on the internet, so as to maximize social welfare and fairness. It is worth mentioning that our setting considers fractional allocations while impressions
are indivisible goods. However, fractional allocations (like the one that is generated later by our algorithm) can be simulated in many cases using simple randomization with only a small additional loss. See further discussion in Section 5. The online setting raises a natural question of whether it is possible to achieve some of the desirable qualities of the offline market equilibrium allocation within an online framework.

1.1 Our Results

We study the quality of allocations that may be achieved in an online setting with respect to our proposed quality measure. Our main contribution is designing an online allocation mechanism with the following properties:

Theorem 1. Let U be any allocation of the goods to the buyers. Our online algorithm computes an allocation x̂_ij with utilities Û_i such that:

  max_U Σ_{i=1}^m e_i · (U_i/Û_i) ≤ 1 + ln m + ln n + ln(u_max/u_min),

where u_max/u_min is the ratio of the maximum utility a buyer derives from a good divided by the minimum non-zero utility the buyer derives from any other good.

Remark 1.1. While Theorem 1 refers to an arithmetic average, we immediately get the same performance guarantee with respect to the harmonic and geometric averages by the Arithmetic-Geometric-Harmonic Means Inequality.

This result shows that even in an online setting we may achieve an allocation whose average is only worse by a logarithmic factor than any offline allocation. We show that the performance of our algorithm is tight up to constants by proving the following lower bounds on the performance of any online algorithm, even when all budgets are equal. We prove two lower bounds: a tight lower bound for the arithmetic and geometric averages, and an almost tight lower bound for the harmonic average.

Theorem 2. Let Û_i be the utilities achieved by any online algorithm and let U*_i be the utilities of the market equilibrium allocation. Then there exists an instance with n ≥ m² such that:
– (1/m) Σ_{i=1}^m U*_i/Û_i ≥ (Π_{i=1}^m U*_i/Û_i)^{1/m} = Ω(min{m, ln n + ln(u_max/u_min)}),
– 1 / ((1/m) Σ_{i=1}^m Û_i/U*_i) = Ω(min{m/ln m, (ln n + ln(u_max/u_min)) / (ln ln n + ln ln(u_max/u_min))}),
where u_max/u_min is the ratio of the maximum utility a buyer derives from a good divided by the minimum non-zero utility the buyer derives from any other good.

Techniques: Whenever a good arrives, our algorithm computes an allocation to the newly arrived good by solving a small linear program that can be interpreted as invoking the Karush–Kuhn–Tucker (KKT) optimality conditions of the convex program with respect to the current allocation and the utility function of the new good. Along with the
allocation of the good, we get a dual variable that is later used in our analysis. To the best of our knowledge this is the first time that such a primal dual convex programming scheme is used in the online framework. Another main difference from previous works is that while the performance measure in most works is essentially a ratio of sums, our performance measure is a sum (average) of ratios. We believe that this measure of quality is very reasonable and is applicable to other scenarios as well. 1.2 Previous Results Existence of a market equilibrium allocation in a very general setting was proved by Arrow and Debreu [1]. Algorithmic aspects of the offline linear case of Fisher’s model [6,20] were studied in [13]. They designed a polynomial time algorithm that computes prices for the goods and the market equilibrium allocation. This was done by designing an algorithm that solves the convex program suggested by Eisenberg and Gale [14] (see our preliminaries). Computing a market equilibrium allocation in the offline case for other markets has also been studied recently [16] (See also [19]). Allocation of goods in an online fashion was studied in many different settings [4,17,18,9]. For instance, Blum et al. consider a setting in which sellers and buyers are trading a single commodity. Sell/Buy bids arrive online with an expiration time, and the goal of the auctioneer is to match these bids so as to maximize revenue, or social welfare. Competing against offline solutions in an adversarial setting was studied in many settings [5]. The closest to our setting is the problem of scheduling jobs on unrelated machines. In this problem, there is a set of m machines and a set of n jobs. The load of job j on machine i is lij . The basic problem is to minimize the load on the most loaded machine. Awerbuch et al. [2] designed an online O(log m)-competitive algorithm for this problem. This result is tight. Later an equivalent primal-dual interpretation of the algorithm was shown in [7,8] (See also [10]). A more general measure of performance of minimizing the lp norm for any p was studied in [3]. They showed that the greedy algorithm is O(p)-competitive which is the best possible. Notice, however, several important differences between the load balancing problem and our problem. First, the current problem is a maximization problem and not a minimization problem. Second, while the performance measure in all these works is essentially a ratio of sums, our performance measure is a sum (average) of ratios.
2 Preliminaries

We study here Fisher's linear case model. Our market consists of a set of m buyers and n divisible goods. Let e_i be the budget of buyer i. The budget represents here the importance of the buyer, and for simplicity of representation we normalize the e_i so that Σ_{i=1}^m e_i = 1. The utility functions of the goods to each buyer are linear. Let u_ij be the utility buyer i derives from good j. Let u_max,i := max_{j=1,...,n} u_ij be the maximum utility the buyer gets from a good. Let u_min,i := min_{j: u_ij>0} u_ij be the minimum non-zero utility from a good of buyer i. Let u_max/u_min := max_{i=1,...,m} u_max,i/u_min,i be the maximum ratio of utilities over the buyers. Without loss of generality we assume here that for each good
j there exists a buyer i such that u_ij > 0 (otherwise the good may be discarded). Also, for each buyer i there exists a good j such that u_ij > 0 (otherwise buyer i gets utility 0 in any allocation). In an allocation, let x_ij be the amount of good j allocated to buyer i. Given an allocation, the utility derived by buyer i is U_i = Σ_{j=1}^n u_ij x_ij. We study here an online setting in which goods arrive one-by-one in an online fashion. Upon arrival of a good the algorithm should decide how to allocate it to the buyers, and this decision is irrevocable.

The market equilibrium allocation: It is well known that the following convex program, suggested first by Eisenberg and Gale [14], can be used in order to compute the market equilibrium allocation in Fisher's linear model:

  max  Σ_{i=1}^m e_i · ln(U_i)
  subject to:
    U_i = Σ_{j=1}^n u_ij x_ij      ∀ 1 ≤ i ≤ m
    Σ_{i=1}^m x_ij ≤ 1             ∀ 1 ≤ j ≤ n
    x_ij ≥ 0                       ∀ 1 ≤ i ≤ m, 1 ≤ j ≤ n

Let x*_ij be the optimal solution to the convex program. We may define a Lagrangian variable p_j for each good j in the program. These variables may be interpreted as prices for the goods. The KKT optimality conditions define a relationship between the optimal values of x*_ij and p_j. First, all p_j are strictly positive (assuming that each good has an interested buyer). In the optimal allocation x*_ij each good is fully allocated and also the two following conditions are satisfied:

Optimality conditions:
1. For each buyer i and item j:  p_j/e_i ≥ u_ij / Σ_{j'=1}^n u_ij' x*_ij'
2. For each buyer i and item j:  x*_ij > 0 ⇒ p_j/e_i = u_ij / Σ_{j'=1}^n u_ij' x*_ij'

Using these conditions it is also possible to prove that Σ_{j=1}^n p_j = Σ_{i=1}^m e_i = 1, which means that the prices p_j clear the market (extract all the budgets of the buyers).

2.1 Our Performance Measure

As explained in the introduction, we chose our performance measure for an allocation Û to be:

  max_U avg_{i=1}^m ( U_i / Û_i ).    (2)

We choose to concentrate on studying the arithmetic, geometric and harmonic averages. We prove first that the market equilibrium allocation is optimal with respect to all these measures, which gives motivation for our study of this performance measure.

Lemma 1. Let x*_ij be the market equilibrium allocation and let U*_i be the utilities of the buyers. Then for any other allocation x_ij with utilities U_i to the buyers:
  1 / (Σ_{i=1}^m e_i · U*_i/U_i)  ≤  Π_{i=1}^m (U_i/U*_i)^{e_i}  ≤  Σ_{i=1}^m e_i · (U_i/U*_i)  ≤  1.

We also note that the maximum ratio of 1 is tight, since it is attained by the allocation U_i = U*_i.

Proof. We remark first that it is clear that

  Π_{i=1}^m (U_i/U*_i)^{e_i} ≤ 1,

since the convex program above actually maximizes Π_{i=1}^m U_i^{e_i}. Therefore, it is left to prove only the last inequality. Let p_j be the market clearance prices as computed by the convex program.

  Σ_{i=1}^m e_i · (U_i/U*_i)
    = Σ_{i=1}^m e_i · (Σ_{j=1}^n u_ij x_ij) / (Σ_{j=1}^n u_ij x*_ij)
    ≤ Σ_{i=1}^m e_i · Σ_{j=1}^n (p_j/e_i) x_ij                          (3)
    = Σ_{j=1}^n p_j Σ_{i=1}^m x_ij ≤ Σ_{j=1}^n p_j = 1,                 (4)

where Inequality (3) follows by the first optimality condition and the last equality follows from the fact that the prices clear the market.
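For concreteness, the Eisenberg–Gale program above can be handed to an off-the-shelf convex solver. The sketch below is our illustration only (the paper does not rely on it); it assumes the cvxpy package is available and reads the prices p_j off the dual variables of the supply constraints.

    import numpy as np
    import cvxpy as cp

    def market_equilibrium(u, e):
        # u: m x n matrix of utilities u_ij; e: budgets e_i (normalized to sum to 1).
        # Maximize sum_i e_i * log(U_i) subject to allocating each good at most once.
        m, n = u.shape
        x = cp.Variable((m, n), nonneg=True)
        U = cp.sum(cp.multiply(u, x), axis=1)          # U_i = sum_j u_ij x_ij
        supply = cp.sum(x, axis=0) <= np.ones(n)       # each good allocated at most once
        prob = cp.Problem(cp.Maximize(e @ cp.log(U)), [supply])
        prob.solve()
        return x.value, supply.dual_value              # allocation x*_ij and prices p_j

On a solved instance one can check numerically that the prices sum to 1 and that the two optimality conditions above hold for every pair (i, j) with x*_ij > 0.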
3 The Algorithm

In this section we design and analyze our main algorithm for finding an allocation of the goods in an online fashion. Note the similarity between the way our algorithm computes the current allocation and the optimality conditions of the convex program used to compute the market equilibrium allocation. Our algorithm works as follows. When a new good j arrives with utilities u_ij:
– For any k < j let x̂_ik be the allocation of the algorithm to the previous goods.
– Compute an allocation for the current good by solving the following optimization problem:

    min  p_j
    s.t. Σ_{i=1}^m x_ij ≤ 1
         p_j/e_i ≥ u_ij / (u_ij x_ij + Σ_{k<j} u_ik x̂_ik)    for each buyer i such that u_ij > 0

We remark that for all k < j, the x̂_ik – that is, the allocation of the online algorithm to the previous goods – are constants. The only variables in the program are the x_ij of the current good and p_j. We state the algorithm in this particular way because it is convenient for our proof. However, we illustrate here a different way of viewing the algorithm that shows an easy
way to compute the allocation and gives additional intuition. To find the allocation one may define a variable p̃_j = 1/p_j. Minimizing p_j corresponds to maximizing p̃_j. Using this variable we get the following simple linear program:

    max  p̃_j
    s.t. Σ_{i=1}^m x_ij ≤ 1
         p̃_j ≤ x_ij/e_i + (Σ_{k<j} u_ik x̂_ik)/(u_ij · e_i)    for each buyer i and good j such that u_ij > 0

In the constraint imposed on each buyer and good, the second term is simply a constant: the utility obtained so far by the buyer, normalized by the utility of the current good and the budget (importance) of the buyer. It is possible to obtain an optimal solution to the LP by increasing x_ij at a rate proportional to e_i for all buyers for which the RHS of the constraint is minimized. This is done until Σ_{i=1}^m x_ij = 1. Viewing the process that way we obtain a simple observation that we later use in our proof.

Observation 3.1. Let j be the first good for which u_ij > 0; then x̂_ij ≥ e_i.

More intuition for our algorithm can be obtained by considering the simpler special case where the utilities are either 0 or 1, and all budgets are equal. For this special case the algorithm reduces to a "water level" algorithm that tries to balance the utilities of the buyers. Therefore, the algorithm can be viewed as an adaptation of the water level algorithm to this more complex setting.

Additional properties of the algorithm: The main idea of the algorithm is to invoke the offline KKT optimality conditions with respect to our current solution. This allows us later to bound the quality of our solution as a function of the dual variables obtained in the process. We assume here without loss of generality that for any good j there is a buyer i for which u_ij > 0. This means that for every good j, the p_j computed by our algorithm is strictly larger than 0. Given this condition, the following important lemma states an immediate property of the algorithm with respect to the x̂_ij and the dual values p_j it computes. The lemma is an online version of the complementary slackness conditions that are obtained offline.

Lemma 2. Let x̂_ij and p_j be the allocation and the values p_j computed during the execution of the online algorithm. Then:
1. For each buyer i and good j:  p_j/e_i ≥ u_ij / Σ_{k=1}^j u_ik x̂_ik,  or  u_ij = Σ_{k=1}^j u_ik x̂_ik = 0.
2. For each buyer i and good j:  x̂_ij > 0 ⇒ p_j/e_i = u_ij / Σ_{k=1}^j u_ik x̂_ik.
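The "water level" view of the per-good linear program can be turned into a small routine. The sketch below is our illustration (not the authors' code) and assumes utilities, budgets and previously accumulated utilities are given as plain Python lists, with at least one buyer interested in the arriving good; it returns the allocation of the good together with the attained level, which equals the optimal p̃_j (so p_j = 1/level).

    def allocate_good(u_j, acc_util, budgets):
        # u_j[i]: utility of the arriving good to buyer i; acc_util[i]: utility buyer i
        # has accumulated so far; budgets[i]: e_i.  Buyers with u_j[i] = 0 get nothing.
        interested = [i for i, u in enumerate(u_j) if u > 0]
        c = {i: acc_util[i] / (u_j[i] * budgets[i]) for i in interested}
        order = sorted(interested, key=lambda i: c[i])
        level = c[order[0]]
        active, remaining = [], 1.0
        for pos, i in enumerate(order):
            active.append(i)                       # buyer i joins once the level reaches c[i]
            rate = sum(budgets[t] for t in active)
            nxt = c[order[pos + 1]] if pos + 1 < len(order) else float('inf')
            absorbed = rate * (nxt - level)        # good absorbed while raising level to nxt
            if absorbed >= remaining:
                level += remaining / rate
                remaining = 0.0
                break
            level, remaining = nxt, remaining - absorbed
        x = {i: budgets[i] * (level - c[i]) for i in active}   # sums to 1
        return x, level

For example, with two buyers of equal budget 0.5, zero accumulated utility, and utilities (1, 1) for the arriving good, both constants c are 0, the level rises to 1, and each buyer receives half of the good.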
3.1 Analysis of the Algorithm

In this section we prove our main result bounding the quality achieved by the algorithm. We prove that the online algorithm achieves an allocation that is at most a logarithmic factor worse than any other allocation with respect to quality measure (1), and in particular than the best allocation with respect to this measure, which is the market equilibrium allocation.
Theorem 3. Let U be any allocation of the goods to the buyers. Our online algorithm computes an allocation x̂_ij with utilities Û_i such that:

  max_U Σ_{i=1}^m e_i · (U_i/Û_i) ≤ 1 + ln m + ln n + ln(u_max/u_min),

where u_max/u_min is the ratio of the maximum utility a buyer derives from a good divided by the minimum non-zero utility the buyer derives from any other good.

Proof. Let p_1, p_2, . . . , p_n be the dual variables computed by the online algorithm. Let x̂_ij be the amount of good j that was allocated by our algorithm to buyer i. Let x_ij be the amount of item j allocated to buyer i in any other fixed solution. We assume here without loss of generality that x_ij > 0 only when u_ij > 0. By the properties of our algorithm we get that:

  Σ_{i=1}^m e_i · (U_i/Û_i)
    = Σ_{i=1}^m e_i · (Σ_{j=1}^n u_ij x_ij)/(Σ_{k=1}^n u_ik x̂_ik)
    = Σ_{i=1}^m e_i · Σ_{j=1}^n (u_ij x_ij)/(Σ_{k=1}^n u_ik x̂_ik)
    ≤ Σ_{i=1}^m e_i · Σ_{j=1}^n (u_ij x_ij)/(Σ_{k=1}^j u_ik x̂_ik)                                 (5)
    ≤ Σ_{i=1}^m e_i · Σ_{j=1}^n (p_j/e_i) x_ij = Σ_{j=1}^n p_j Σ_{i=1}^m x_ij                      (6)
    ≤ Σ_{j=1}^n p_j                                                                                (7)
    = Σ_{j=1}^n p_j Σ_{i=1}^m x̂_ij                                                                 (8)
    = Σ_{i=1}^m e_i · Σ_{j=1}^n (p_j/e_i) x̂_ij = Σ_{i=1}^m e_i · Σ_{j=1}^n (u_ij x̂_ij)/(Σ_{k=1}^j u_ik x̂_ik)   (9)

Inequality (5) follows by summing over fewer values in the denominator. Inequality (6) follows by the first property in Lemma 2. Inequality (7) follows since in any allocation Σ_{i=1}^m x_ij ≤ 1. Equality (8) follows since in our allocation Σ_{i=1}^m x̂_ij = 1. Finally, Equality (9) follows by the second property of Lemma 2. To bound (9) we prove the following claim.

Claim. For each buyer 1 ≤ i ≤ m:

  Σ_{j=1}^n (u_ij x̂_ij)/(Σ_{k=1}^j u_ik x̂_ik) ≤ 1 + ln(Σ_{j=1}^n u_ij x̂_ij) − ln(u_ij x̂_ij) = 1 + ln(Û_i) − ln(u_ij x̂_ij),

where, in the last term, j denotes the first good for which x̂_ij > 0.
Proof. The proof is by induction on the number of goods n. Let j be the first good for which x̂_ij > 0. For this good the LHS as well as the RHS equal 1, and so the claim holds. It is easy to see that both sides of the inequality change only when x̂_ij > 0; thus, we only consider such goods. Assume that the claim holds for the first ℓ − 1 goods for which x̂_ij > 0. Then for the ℓ-th such good we get that:

  Σ_{j=1}^ℓ (u_ij x̂_ij)/(Σ_{k=1}^j u_ik x̂_ik)
    ≤ 1 + ln(Σ_{j=1}^{ℓ−1} u_ij x̂_ij) − ln(u_ij x̂_ij) + (u_iℓ x̂_iℓ)/(Σ_{k=1}^ℓ u_ik x̂_ik)    (10)
    ≤ 1 + ln(Σ_{j=1}^{ℓ} u_ij x̂_ij) − ln(u_ij x̂_ij)                                           (11)

Inequality (10) follows by the induction hypothesis. Inequality (11) reduces to proving that:

  ln(Σ_{j=1}^{ℓ−1} u_ij x̂_ij) − ln(Σ_{j=1}^{ℓ} u_ij x̂_ij) + (u_iℓ x̂_iℓ)/(Σ_{k=1}^ℓ u_ik x̂_ik)
    = ln(1 − (u_iℓ x̂_iℓ)/(Σ_{k=1}^ℓ u_ik x̂_ik)) + (u_iℓ x̂_iℓ)/(Σ_{k=1}^ℓ u_ik x̂_ik) ≤ 0.

The final inequality is true since for any 0 < a < 1, ln(1 − a) ≤ −a.

Plugging the Claim into Inequality (9), we get:

  Σ_{i=1}^m e_i · (U_i/Û_i) ≤ Σ_{i=1}^m e_i · (1 + ln Û_i − ln(u_ij x̂_ij)) = 1 + Σ_{i=1}^m e_i · (ln Û_i − ln(u_ij x̂_ij)).    (12)

To bound (12) we bound the second term. By Observation 3.1 the worst case is when, for i and j, u_ij = u_min,i, where u_min,i is the minimum non-zero utility of buyer i, and all other buyers have received no utility so far. In this case the algorithm assigns a fraction e_i of the good to buyer i. Thus, −ln(u_ij x̂_ij) ≤ ln(1/e_i) − ln u_min,i, and we get:

  Σ_{i=1}^m e_i · (U_i/Û_i) ≤ 1 + Σ_{i=1}^m e_i · (ln(Û_i/u_min,i) + ln(1/e_i))
    ≤ 1 + Σ_{i=1}^m e_i ln(1/e_i) + ln n + ln(u_max/u_min)    (13)
    ≤ 1 + ln m + ln n + ln(u_max/u_min).                       (14)

Inequality (13) follows since Û_i ≤ n·u_max,i, where u_max,i is the maximum utility of buyer i. Inequality (14) follows since the entropy on m different values is at most ln m. This concludes our proof.
4 Lower Bound

In this section we show that the performance of our algorithm is almost tight. We show a lower bound that is tight up to constants on the performance of any online allocation algorithm for the geometric and arithmetic averages. For the harmonic average we show an almost tight lower bound. Due to lack of space the proof of Theorem 4 is omitted.

Theorem 4. Let Û_i be the utilities achieved by any online algorithm and let U*_i be the utilities of the market equilibrium allocation. Then there exists an instance with n ≥ m² such that:
– (1/m) Σ_{i=1}^m U*_i/Û_i ≥ (Π_{i=1}^m U*_i/Û_i)^{1/m} = Ω(min{m, ln n + ln(u_max/u_min)}),
– 1 / ((1/m) Σ_{i=1}^m Û_i/U*_i) = Ω(min{m/ln m, (ln n + ln(u_max/u_min)) / (ln ln n + ln ln(u_max/u_min))}),
where u_max/u_min is the ratio of the maximum utility a buyer derives from a good divided by the minimum non-zero utility the buyer derives from any other good.
5 Further Discussion We believe that an interesting part of our work is the suggestion of our quality measure. This quality measure which is essentially an average of ratios is quite natural in the setting of Fisher’s linear case market, and we believe it may be beneficial in other scenarios as well. The choice of what average to use is quite flexible and may be explored further in terms of fairness. While we consider an online setting, it is also interesting to study simple offline dynamics that lead to good allocations in terms of our proposed quality measure, as an example consider the following applications. Consider the following setting of display advertising (See also [15] for a closely related model). Suppose there is a company which sells display advertising and has a very detailed technology of ad-targeting. Whenever a user views any of its webpages, the company can make reliable prediction of the worth of the user to the advertisers. Since the advertisers are relatively less sophisticated than the company, the company offers a limited bid expressive language for the advertisers to describe their ad-targeting needs. At the time of placing an advertising order, the advertisers could describe the age, gender, geographical location etc, of the desired audience being targeted. During the run time, i.e. when users are visiting the websites, the company is likely to have more detailed information about the users, and using the sophisticated machine learning algorithms could make prediction about the worth of an user to the advertisers. The advertising order in the display industry is usually guaranteed, i.e., once the order is accepted then the company would have to serve the order or pay severe penalties of not fulfilling the orders. An advertising order usually has a minimum impression count which must be fulfilled, and a maximum dollar budget which could be charged. Usually when the company accepts an advertising order, it has sure that with high probability the order can be fulfilled. Usually the company has access to low quality ad inventory which could also be used to fulfill the orders.
Note that the company can’t keep allocating the low quality ad inventory to an advertiser. Doing so may mean that the advertiser may not return back to buy more advertising in the future. So the company needs some kind of method to fairly allocate the advertising space. In summary, the company has N advertisers with their budgets described. Users arrive in an online fashion. Whenever a user arrives, the company predicts the value of the user (i.e., utility) to the advertisers. Based on this predicted value, the company assigns the user to one of the advertisers. Since a large number of users are expected to arrive, one could assume fractional allocation too, which could be converted into integral allocation by using randomized rounding on a large number of users. Note that, unlike the AdWords problem, the entire budget will be charged here, since the obligation towards an advertiser is assumed be matched with low quality ad inventory, if needed. The AdWords problem makes advertisers satisfied by not charging a part of their budget when the ad-inventory is not delivered. In display advertising there is no such flexibility, and the advertisers must be made happy by allocating proportionately a high quality ad-inventory. This is precisely the model studied in this paper.
Acknowledgement. We would like to thank Gagan Goel for suggesting the wireless router motivation. We also acknowledge a talk presented by Nitish Korula; a discussion between the audience members and the speaker inspired the application mentioned in the discussion section.
References 1. Arrow, K., Debreu, G.: Existence of an equilibrium for competitive economy. Econometrica, 265–290 (1954) 2. Aspnes, J., Azar, Y., Fiat, A., Plotkin, S.A., Waarts, O.: On-line routing of virtual circuits with applications to load balancing and machine scheduling. J. ACM 44(3), 486–504 (1997) 3. Awerbuch, B., Azar, Y., Grove, E.F., Kao, M.-Y., Krishnan, P., Vitter, J.S.: Load balancing in the lp norm. In: Proc. of 36th FOCS, pp. 383–391 (1995) 4. Blum, A., Sandholm, T., Zinkevich, M.: Online algorithms for market clearing. J. ACM 53(5), 845–879 (2006) 5. Borodin, A., El-Yaniv, R.: Online computation and competitive analysis. Cambridge University Press, Cambridge (1998) 6. Brainard, W.C., Scarf, H.E.: How to compute equilibrium prices in 1891. In: Cowles Foundations Discussion paper, p. 1270 (2000) 7. Buchbinder, N., Naor, J.: Online primal-dual algorithms for covering and packing problems. In: 13th Annual European Symposium on Algorithms (2005) 8. Buchbinder, N., Naor, J.: Improved bounds for online routing and packing via a primal-dual approach. In: Proceedings of the 47th Annual IEEE Symposium on Foundations of Computer Science (FOCS 2006) (2006) 9. Buchbinder, N., Jain, K., Naor, J.(S.).: Online primal-dual algorithms for maximizing adauctions revenue. In: Arge, L., Hoffmann, M., Welzl, E. (eds.) ESA 2007. LNCS, vol. 4698, pp. 253–264. Springer, Heidelberg (2007)
10. Buchbinder, N., Naor, J.(S.): The design of competitive online algorithms via a primaldual approach. Foundations and Trends in Theoretical Computer Science 3, 93–263 (2009) 11. Chakrabarty, D., Devanur, N.R., Vazirani, V.V.: New results on rationality and strongly polynomial time solvability in eisenberg-gale markets. In: Spirakis, P.G., Mavronicolas, M., Kontogiannis, S.C. (eds.) WINE 2006. LNCS, vol. 4286, pp. 239–250. Springer, Heidelberg (2006) 12. Devanur, N.R., Kannan, R.: Market equilibria in polynomial time for fixed number of goods or agents. In: FOCS, pp. 45–53 (2008) 13. Devanur, N.R., Papadimitriou, C.H., Saberi, A., Vazirani, V.V.: Market equilibrium via a primal-dual-type algorithm. In: FOCS, pp. 389–395 (2002) 14. Eisenberg, E., Gale, D.: Consensus of subjective probabilioties: The pari-mutuel method. Annual of mathematical statistics 30, 165–168 (1959) 15. Feldman, J., Korula, N., Mirrokni, V.S., Muthukrishnan, S., P´al, M.: Online ad assignment with free disposal. In: Leonardi, S. (ed.) WINE 2009. LNCS, vol. 5929, pp. 374–385. Springer, Heidelberg (2009) 16. Jain, K., Vazirani, V.V.: Eisenberg-gale markets: algorithms and structural properties. In: STOC 2007: Proceedings of the thirty-ninth annual ACM symposium on Theory of computing, pp. 364–373 (2007) 17. Mahdian, M., Saberi, A.: Multi-unit auctions with unknown supply. In: EC 2006: Proceedings of the 7th ACM conference on Electronic commerce, pp. 243–249 (2006) 18. Mehta, A., Saberi, A., Vazirani, U., Vazirani, V.: Adwords and generalized online matching. J. ACM 54(5), 22 (2007) 19. Nisan, N., Roughgarden, T., Tardos, E., Vazirani, V.V.: Algorithmic Game Theory. Cambridge University Press, New York (2007) 20. Scarf, H.: The computation of economic equilibria (with collaboration of t. hansen). In: Cowles foundation monograph, No. 24 (1973) 21. Stolyar, A.L.: Greedy primal-dual algorithm for dynamic resource allocation in complex networks. Queueing Syst. Theory Appl. 54(3), 203–220 (2006)
Fr´ echet Distance of Surfaces: Some Simple Hard Cases Kevin Buchin1 , Maike Buchin2 , and Andr´e Schulz3 1
3
Department of Mathematics and Computer Science, TU Eindhoven
[email protected] 2 Department of Information and Computing Sciences, Utrecht University
[email protected] Institut f¨ ur Mathematsche Logik und Grundlagenforschung, Universit¨ at M¨ unster
[email protected]
Abstract. We show that it is NP-hard to decide the Fr´echet distance between (i) non-intersecting polygons with holes embedded in the plane, (ii) 2d terrains, and (iii) self-intersecting simple polygons in 2d, which can be unfolded in 3d. The only previously known NP-hardness result for 2d surfaces was based on self-intersecting polygons with an unfolding in 4d. In contrast to this old result, our NP-hardness reductions are substantially simpler. As a positive result we show that the Fr´echet distance between polygons with one hole can be computed in polynomial time.
1 Introduction
The Fréchet distance is a similarity measure for curves [11] and surfaces [12]. It compares distances at pairs of points given by parameterizations of the curves or surfaces. To be independent of the given parameterizations, re-parameterizations by arbitrary homeomorphisms are allowed. If the curves or surfaces are homeomorphic to their parameter space, and thus to each other, the Fréchet distance can be simplified by considering homeomorphisms directly between the surfaces. This is the case for non-intersecting surfaces, such as terrains and polygons (with and without holes), which we consider in this paper. For two homeomorphic surfaces P, Q the Fréchet distance is formally defined as

  δ_F(P, Q) := inf_{σ hom} max_{t∈P} dist(t, σ(t)),
where dist(·, ·) denotes the underlying metric on R^d (typically the Euclidean metric) and σ : P → Q ranges over all orientation-preserving homeomorphisms from P to Q (see for instance [1] for the more general definition). For curves, the Fréchet distance can be intuitively illustrated by a man walking his dog. Man and dog are walking on the two given curves, and the man is holding the dog on a leash. They may choose their speed independently, but they may not back up on their respective curves. The Fréchet distance then equals the shortest necessary leash length. For surfaces, the man-dog illustration does not
apply anymore, because 2d (surface) homeomorphisms in contrast to 1d (curve) homeomorphisms cannot be characterized as monotone onto continuous maps (i.e., cannot be interpreted as time component in the illustration). The missing characterization of homeomorphisms in 2d is also the reason that the Fr´echet distance is much harder to handle for 2d surfaces. For curves the Fr´echet distance is well studied. Alt and Godau [2] gave the first polynomial time algorithm and many more results followed. Due to its natural definition, the Fr´echet distance of curves occurs in many applications like recognizing handwritten characters [16], analyzing proteins in bio-informatics [14], and analyzing GPS-tracks in geographic information science [4,7,5]. Like for curves, similarity is typically better captured by the use of a mapping between surfaces than by the Hausdorff distance. In computer graphics and geometric processing the importance of mappings between surfaces stems from the central role of parametrizations (see the survey by Floater and Hormann [10]). In particular, mappings between a surface and a simplified version of it, are often required, e.g., for texture mapping or 3d morphing. We refer to Chazal et al. [8] for an overview of applications. They give conditions under which the Fr´echet distance between the surfaces equals their Hausdorff distance. An extended version of the Fr´echet distance (with an additional constraint on the size of determinant of the Jacobian) was recently used by Dey, Ranjan, and Wang to prove the convergence of the Laplace spectrum for meshes that approximate a smooth surface [9]. While these examples show the importance of the Fr´echet distance for surfaces, little is known about algorithms for computing it.The Fr´echet distance between surfaces was first studied from a computational perspective by Godau in his thesis [13]. He showed that the decision problem for the Fr´echet distance between surfaces in general is NP-hard. More specifically, he showed that the decision problem for self-intersecting 2d surfaces in 2d is NP-hard. These surfaces can be “unfolded” in 4d, i.e., the decision problem is NP-hard for non-intersecting surfaces in 4d. Godau stated as an open problem the question whether decision problem for the natural case of non-intersecting 2d surfaces in 3d is also NP-hard, which we resolve in this paper. Interestingly, the question whether the Fr´echet distance between surfaces is computable is still open. Alt and Buchin [1] recently showed that the Fr´echet distance between surfaces is semi-computable and that the weak Fr´echet distance between surfaces (a variant of the Fr´echet distance) is polynomial time computable. Furthermore, Buchin et al. [6] showed that the Fr´echet distance between simple polygons, i.e., non-intersecting 2d surfaces in 2d, is polynomially time computable (in contrast to Godau’s NP-hardness proof for intersecting 2d surfaces in 2d). The two remaining natural open problems, for which it was not known whether they are NP-hard or not, address non-intersecting 2d surfaces in 3d, and polygons with holes. An interesting special case are terrains, i.e., 2d surfaces in 3d which are one-to-one in z-direction. Furthermore, the option of modifying the Fr´echet distance by replacing the homeomorphisms in its definition with “more reasonable” maps, seemed promising. (In this spirit, Buchin et al. [6] showed
that for the Fr´echet distance between simple polygons the homeomorphisms can be naturally restricted to a smaller, well-behaved class of maps). However, in this paper, we close the existing gap by showing NP-hardness for all of these cases. That is, we show NP-hardness for the decision problem of the Fr´echet distance between 2d polygons with holes, between terrains and between simple 2d polygons “folded” in 2d. Our NP-hardness proofs also hold if we restrict the homeomorphisms to x- and y-monotone onto continuous maps. In particular, for terrains and with restricted homeomorphisms, this is an unexpected result. The results presented in this paper leave only little room for positive results on the Fr´echet distance between surfaces. One remaining open question is whether polynomial time algorithms exist for the Fr´echet distance between polygons with a fixed number of holes. In this direction, we give a polynomial time algorithm for the Fr´echet distance between polygons with one hole. For more than one hole, this question remains open (because the topological characterization becomes more complex). The rest of this paper is structured as follows: In Section 2 we give the NPhardness reductions and in Section 3 the algorithm for polygons with one hole. In Section 4 we discuss natural directions to proceed. In particular, we demonstrate how to obtain a polynomial time or even fixed-parameter tractable algorithm for a bounded number of holes by restricting the class of homeomorphisms. Further we discuss the computability of the Fr´echet distance for graphs, and in particular sketch a polynomial time algorithm for the Fr´echet distance for trees.
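By way of contrast with the surface case treated in this paper, the curve case mentioned in the introduction is computationally easy. The sketch below is our illustration only, not part of this paper: it computes the discrete variant of the distance (defined on the vertices of two polygonal curves) with the standard O(|P||Q|) dynamic program.

    import math
    from functools import lru_cache

    def discrete_frechet(P, Q):
        # P, Q: polygonal curves given as lists of 2d points (x, y).
        def d(a, b):
            return math.hypot(a[0] - b[0], a[1] - b[1])

        @lru_cache(maxsize=None)
        def c(i, j):
            # shortest leash needed to traverse P[0..i] and Q[0..j] in step
            if i == 0 and j == 0:
                return d(P[0], Q[0])
            if i == 0:
                return max(c(0, j - 1), d(P[0], Q[j]))
            if j == 0:
                return max(c(i - 1, 0), d(P[i], Q[0]))
            return max(min(c(i - 1, j), c(i - 1, j - 1), c(i, j - 1)), d(P[i], Q[j]))

        return c(len(P) - 1, len(Q) - 1)

For example, discrete_frechet([(0, 0), (1, 0), (2, 0)], [(0, 1), (1, 1), (2, 1)]) returns 1.0. No analogous simple recursion is available for surfaces, in line with the hardness results shown next.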
2 NP-Hardness

2.1 Plane Polygons with Holes
We study the decision problem, i.e., deciding if two flat 2d polygons with holes lying in the plane have a Fr´echet distance less than a given value ε. NP-hardness is shown by reducing from monotone one-in-three SAT, which is a variation of one-in-three SAT where each variable occurs only positive in all clauses, and a clause is fulfilled if exactly one of the three variables is true. This problem was shown to be NP-hard by Schaefer [15]. Let us start with the high level idea of the reduction. Every homeomorphism that is a candidate for the Fr´echet distance has to map the boundaries of the holes of the first polygon to the boundaries of the holes of the second polygon. In other words, a bijection of the boundaries of the holes is a necessary ingredient for such a homeomorphism. For simplicity we refer to one polygon as the blue polygon and to the other as the red polygon. Similarly, we use these colors for the corresponding holes, boundaries, points, etc. The bijection between the red and blue holes is called hole assignment. If every two mapped holes have Fr´echet distance at most ε we call the hole assignment feasible. For most of the holes there will be exactly two candidates, where the hole boundary can be mapped to. Roughly speaking, the choice made by the bijection corresponds to the truth assignments of the variables of the monotone one-in-three SAT formula.
Fig. 1. Schematic construction of the reduction for variables v1 , . . . , vl and clauses c1 , . . . , ck
We associate the given monotone one-in-three SAT formula with a graph. The variables and clauses are vertices of the graph, and we have an edge between a variable and a clause, whenever the variable appears in the clause. Moreover we have an additional extra node in the graph that is adjacent to all variables. Assume that the boundary of the two polygons is a sufficiently large rectangle and that both boundaries coincide. In order to embed the graph of the formula we place all variables on a vertical line and all clauses on a horizontal line. The edges between variables and clauses are drawn orthogonally, that means we emanate at a variable in horizontal direction and make a turn when we are above the associated clause and continue vertically downwards. The edges between the extra node and the variables are also drawn using horizontal and vertical edge segments only. See Figure 1 for a schematic illustration. We substitute the embedding of the graph of the formula by several gadgets. Each gadget consists of a collection of red and blue holes. In particular, we replace each edge by a wire gadget, each variable by a synchronization gadget, each clause by a clause gadget and the extra node by the pool of truths gadget. Moreover, we introduce for every crossing of two edges a cross-over gadget. Next, we explain these gadgets one by one. Wire. A wire consists of two parallel rows of holes (squares) of alternating color. The second row is a displaced version of the first row and two neighboring squares in a row have Fr´echet distance ε. We call a single row of squares a halfwire. Figure 2 shows a part of a wire. Any feasible hole assignment has to map every hole either to its right or to its left neighbor within its row. Therefore, we have only two ways to pick a feasible hole assignment for a wire. Our choice represents the variable assignment and the wires propagate this information to the clause gadgets. We associate the case where the blue holes are mapped in direction of the clauses with a variable set to true, otherwise false. Notice, that bends can be realized easily, for example as shown in Figure 2.
Fig. 2. The wire gadget (showing a straight wire and a bent wire)
Synchronization. A variable is connected to each clause it appears in by one wire. We have to guarantee that the wires for one variable carry the same information. In order to enforce this, we introduce the synchronization gadget. A synchronization gadget of size m synchronizes m single wires, i.e., it ensures that the m wires are all transmitting the same signal. Such a gadget consists of three columns of m holes that have the shape of wedges, as illustrated in Figure 3. The holes of the middle column are red, all other holes blue. By placing the holes at distance ε, each hole in each row can be matched to exactly its two neighbors. Furthermore, all middle wedges need to be matched to the same side (i.e., all to the left or all to the right). To see this, consider the dashed line segment pq in Figure 3. In the case that the upper red wedge would be matched to the left and the red wedge beneath would be matched to the right, the point p would be mapped to p′ and the point q to the point q′. A homeomorphism realizing this would then have to map the segment pq to a path connecting p′ with q′ in the blue polygon. Because the neighboring blue wedges separate the ε-neighborhood of the segment pq (see Figure 3), such a path would have to leave the ε-neighborhood of pq, which violates a Fréchet distance of ε. Thus, p and q need to be matched to the same side. In order to connect wires to the synchronization gadgets we "morph" the squares of the wire to the wedges of the synchronization gadgets, while maintaining the property that each hole can be matched to exactly its two neighbors.

Clause. In every clause gadget three wires meet (one for each variable occurring in the clause). Figure 3 shows the gadget. The wires enter the gadget from opposite sides and the last blue holes are a little smaller. This does not interfere with our property that we have exactly two candidates where a hole can be mapped onto. The six smaller blue holes "compete" for the two very small red holes in the center (remember that the homeomorphism has to be injective). More specifically, every wire of a variable that is set to true will want to match its small blue holes to the red holes of the clause. Since the half-wires are synchronized, the two small red holes can only be assigned to exactly one of the wires. This assures that exactly one of the incoming wires comes from a variable that has been set to true.
Fig. 3. The synchronization gadget (left) and clause gadget (right). In the center of the clause gadget are two tiny red holes.
Pool of Truths. Each wire, corresponding to the occurrence of a variable, starts in the pool of truths and ends in a clause (passing the variable synchronization in between). The wire starts and ends with two blue holes at each end. Thus, in a feasible hole assignment two further red holes are needed either at the start or at the end of the wire. For wires of variables that are set to true, these are the red holes in the clauses. For wires of variables that are set to false, these are provided by the pool of truths. Let k be the number of clauses. In the pool of truths we place 4k small squares inside an ε/2 ball and connect all wires with the boundary of the ball. In order to achieve this we make the squares of the wires small enough (via morphing), such that they all fit in the boundary of the ε/2 ball. Since the wires lie very close together, we do not guarantee that the hole assignment matches only holes of each wire. However, this is not a problem, since only after the synchronization the wires have to maintain their truth assignment. Cross-Over. In the cross-over gadget we need the two (synchronized) halfwires. See Figure 4 for a schematic drawing of the gadget. We can always assume that a vertical wire crosses a horizontal wire. We split the vertical wire into its two half-wires. Furthermore we shrink the squares, such that they become sufficiently small. We have to guarantee the correct propagation of signals along both wires. For this, consider the 7 squares that are involved in each of the two crossings of the horizontal wire with one of the vertical half-wires. Three of these squares are blue and four are red, thus one of the red squares has to be matched with a blue square not within these 7 squares. The propagation is correct if this is a blue square of the vertical wire. This, however, follows from the synchronization of the horizontal double wire, because a square of a synchronized wire can only be “unassigned” if its partner is also. Note that since the 7 squares that are placed around a crossing are relatively small and close to each other, we can extend the hole assignment to a homeomorphism between the polygons. The correctness of the reduction follows from the correctness of the gadgets. If no satisfying variable assignment exists, then we cannot find a feasible hole assignment and hence no desired homeomorphism exists. Notice that without
Fig. 4. The pool of truths gadget (left) and the cross-over gadget (right)
the synchronization gadget a feasible hole assignment could be computed as a bipartite perfect matching in polynomial time. If there is a truth assignment for the variables that fulfills all clauses, we can obtain a feasible hole assignment. The feasible hole assignment can be extended to a homeomorphism that certifies a Fréchet distance of ε or less. For this we define the homeomorphism locally for each gadget and as the identity everywhere else. For straight wires and synchronization gadgets, the homeomorphism can simply be chosen as a shift to the right or left by ε. For the other gadgets, we enclose each pair of an assigned red and blue hole by a simple polygon, such that the set of simple polygons are non-intersecting. Then we define the homeomorphism inside these polygons, such that it maps the red to the blue hole and is the identity on the boundary. It is easy to see that this can be done for bent wires and clauses. For cross-over gadgets, this can be done because the holes around the crossing are relatively small and close to each other. For the pool of truths, we start with a "nice" hole assignment, where holes are not mapped in between wires and holes of the pool are mapped to nearby wire holes. The size of the constructed polygons is clearly polynomially bounded by the size of the formula. We can therefore conclude with:

Theorem 1. The decision problem for the Fréchet distance of 2d polygons with holes is NP-hard. This problem remains NP-hard even if the polygons are embedded in the plane.
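To make the remark about bipartite matching concrete: if the synchronization constraints are ignored, testing whether every red hole boundary can be paired with a distinct blue hole boundary within Fréchet distance ε is a perfect-matching question. The sketch below is our illustration only; the predicate deciding whether two hole boundaries are within curve Fréchet distance ε is assumed to be supplied by the caller.

    def feasible_hole_assignment(red_holes, blue_holes, within_eps):
        # within_eps(r, b) -> True iff the boundaries of holes r and b are within
        # Frechet distance eps.  Kuhn's augmenting-path algorithm for bipartite matching.
        candidates = [[j for j, b in enumerate(blue_holes) if within_eps(r, b)]
                      for r in red_holes]
        match_blue = {}                      # blue index -> red index

        def augment(i, seen):
            for j in candidates[i]:
                if j in seen:
                    continue
                seen.add(j)
                if j not in match_blue or augment(match_blue[j], seen):
                    match_blue[j] = i
                    return True
            return False

        for i in range(len(red_holes)):
            if not augment(i, set()):
                return None                  # some red hole cannot be matched
        return {i: j for j, i in match_blue.items()}   # red -> blue assignment

The hardness construction works precisely because the synchronization gadgets add constraints that such a matching alone cannot capture.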
2.2  Terrains and Self-intersecting Surfaces in 2d
The NP-hardness reduction for polygons with holes can easily be adapted to terrains. On each hole, simply place a (plateau) mountain, which has the hole as base and nearly the hole as plateau top (i.e., it has a very steep slope). All arguments hold as before. Thus, we get the following corollary from Theorem 1.
Corollary 1. The decision problem for the Fréchet distance of terrains is NP-hard.
For terrains, NP-hardness can also be shown by a simpler construction, which we give in the full version of this paper. Our NP-hardness reductions can also be modified to self-intersecting 2d-surfaces in 2d without holes. For this, simply take
(one of) the NP-hardness reductions for terrains and "fold" all mountains into the plane in the same direction. Thus, we get a further corollary from Theorem 1.
Corollary 2. The decision problem for the Fréchet distance of self-intersecting 2d-surfaces in 2d is NP-hard.
For this type of surfaces Godau already showed NP-hardness [13]; however, again, our reduction is much simpler. The surfaces and homeomorphisms we use in our NP-hardness reduction are all "very nice". In all reductions, the homeomorphisms can even be restricted to x- and y-monotone onto continuous mappings (for this, we place the pool of truths, variables and clauses such that all wires are monotone in these directions).
Corollary 3. The decision problem for a variant of the Fréchet distance, where only x- and y-monotone onto continuous mappings are allowed as re-parameterizations, of polygons with holes, terrains and self-intersecting 2d-surfaces in 2d, is NP-hard.
This does not leave much room for the possibility of giving polynomial time algorithms for a modified Fréchet distance of terrains where the homeomorphisms are restricted to "nicer maps". Furthermore, our proof holds for a modified Fréchet distance as suggested in [9] where the determinant of the Jacobian of the homeomorphism has to be bounded.
3  Algorithm for Polygons with One Hole
In this section, we give a polynomial time algorithm for computing the Fréchet distance between polygons with one hole by extending the polynomial time algorithm for simple polygons. For this to work, we need to restrict the homotopy types of the paths to which the diagonals of a convex decomposition of one polygon are mapped. For simple polygons, Buchin et al. showed that to compute the Fréchet distance it suffices to consider only shortest paths maps between the polygons, instead of arbitrary homeomorphisms (cf. Proposition 6 in [6]). A shortest paths map between simple polygons P, Q is a homeomorphism from the boundary of P to the boundary of Q which is extended to the diagonals of a convex decomposition of P by mapping diagonals to the shortest paths between the images of their endpoints in Q. For computing the Fréchet distance it suffices to take the maximum of a shortest paths map on the edges of the convex decomposition, i.e., the boundary and diagonals. This result directly generalizes to polygons with holes, i.e., it again suffices to map a convex decomposition of one polygon to decompositions of the other polygon, where diagonals are mapped to shortest paths, and the maximum is taken only over the edges of the convex decomposition. For simple polygons, the convex decomposition consisted of the boundary and diagonals between boundary vertices. Now, the edges of a convex decomposition consist of the
outer boundary, inner boundary of the polygon and diagonals, which are of three types: outer-outer, inner-inner, and outer-inner. We can handle the outer and the inner boundary and the outer-outer and the inner-inner diagonals independently using the algorithm for simple polygons. However, we also need to integrate the placements of outer-inner diagonals. For this, we use an extra graph and we synchronize the sweep over the two free space diagrams.
Like the algorithm for simple polygons, our algorithm is based on a characterization of the Fréchet distance in the free space diagrams. For simple polygons, the Fréchet distance is less than or equal to a given value ε if and only if there is a path in the free space diagram that also maps the diagonals of a convex decomposition to shortest paths with Fréchet distance less than or equal to ε. To decide this, Buchin et al. combined the information of paths in reachable free space with correct diagonal placements in a graph called the reachability graph, which can be computed by a sweep over the free space diagram. For polygons with one hole, the Fréchet distance is less than or equal to ε if and only if there are monotone paths in the outer and inner free space diagram that map all three types of diagonals correctly, i.e., to shortest paths with Fréchet distance less than or equal to ε. For outer-outer and inner-inner diagonals, the mapping to a shortest path is determined by the path in the respective free space diagram. For an outer-inner diagonal, only the mapping of its endpoints is determined by the two paths in both free space diagrams. Furthermore, there is the choice of infinitely many shortest paths resulting from different homotopy types of the image of the diagonal.
To map the outer-inner diagonals, we first show that only a constant number of homotopy types for the images of diagonals need to be considered, for which we can compute homotopic shortest paths [3]. Then, for each fixed homotopy type, the algorithm sweeps over both free space diagrams in parallel, synchronizing at endpoints of outer-inner diagonals (in between these the two free space diagrams can be computed independently). When reaching the endpoints of an outer-inner diagonal, the possible mappings are determined and this information is integrated into the two reachability structures.
Homotopy types of diagonal images. For an outer-inner diagonal, the intervals to which its endpoints are mapped are determined by the paths in the outer and inner free space diagrams. For each mapping of its endpoints, however, there are infinitely many homotopic shortest paths, winding left or right n times around the hole, for all natural numbers n. First, note that choosing a homotopic shortest path for one outer-inner diagonal determines the homotopy type of the shortest paths for all other outer-inner diagonals. Thus, it suffices to determine the homotopy type of the shortest path for the first diagonal. Furthermore, we have the following observation and lemma.
Observation 1. If the homotopic shortest path of one outer-inner diagonal winds more than twice around the hole, then all homotopic shortest paths of the other outer-inner diagonals wind around the hole at least once.
Lemma 1. If all homotopic shortest paths wind around the hole at least once, then we can remove one loop from each without increasing the Fréchet distance
Fig. 5. Five possible cases for the first outer-inner diagonal
between any of the outer-inner diagonals and their corresponding homotopic shortest paths.
Proof. Assume the Fréchet distance of an outer-inner diagonal and its matched homotopic shortest path is less than or equal to ε. In the corresponding matching, a segment of the diagonal is matched to the loop. This implies that both endpoints of the segment have distance at most ε to the start/endpoint of the loop. This again implies that all points of the segment have distance at most ε to the start/endpoint of the loop. Thus, instead of matching the segment to the loop, it can also be matched to this point without increasing the Fréchet distance.
By Lemma 1 we do not need to consider homotopic shortest paths that wind more than twice around the hole. This leaves (at most) five cases, which are illustrated in Figure 5. The shortest path
1. touches the inner boundary only in its endpoint,
2. traverses part of the inner boundary, but not the whole, in clockwise direction,
3. traverses part of the inner boundary, but not the whole, in counter-clockwise direction,
4. traverses the inner boundary once, but less than twice, in clockwise direction,
5. traverses the inner boundary once, but less than twice, in counter-clockwise direction.
Note that "traverses" does not mean it visits all vertices along the way, but only some (because it is a shortest path). The above cases apply to diagonal endpoints mapped to points. A placement of a diagonal, however, is a mapping to intervals. The above case distinction easily generalizes to this situation: we consider the two shortest paths bounding the hourglass (together with the given intervals). For an hourglass, the "higher" of the cases for these two paths applies. E.g., if cases 1 and 2 apply to these two shortest paths, then case 2 applies to the hourglass. For the first outer-inner diagonal that we place, we test these five placements and run the remaining algorithm for each valid placement. The first case does not occur if the second point lies "on the back side of the hole". In the full version of the paper we give the further details of the algorithm. We can conclude with the following theorem.
Theorem 2. The decision problem for the Fréchet distance between two polygons with one hole can be solved in polynomial time.
4  Discussion
As said in the introduction, our NP-hardness results leave only little room for positive results on the Fréchet distance between surfaces. We now discuss some extensions and open problems. More details are in the full version of the paper.
It remains an open question whether deciding the Fréchet distance between two polygons in 2d, parametrized by the number of holes, is fixed-parameter tractable. It is also open whether the Fréchet distance between polygons with a fixed number of holes can be computed in polynomial time. The main obstacle in generalizing the algorithm for one hole is a missing bound on the number of homotopy types. By restricting the class of homeomorphisms we can obtain a bound as follows. Consider only homeomorphisms that map line segments to curves with a total absolute curvature¹ less than c > 0. We can again restrict ourselves to shortest paths as images of diagonals. The number of holes such a path goes around is bounded in terms of c and the number of holes h. This gives a bound on the number of homotopy types for one path. We first place h diagonals to connect all holes and the boundary and test all feasible combinations of homotopy types. For all remaining diagonals the homotopy type is then fixed. This results in a polynomial-time algorithm for constant c and h, and for point holes even in a fixed-parameter tractable algorithm with parameter max(c, h).
The Fréchet distance can also be considered for geometric graphs. The NP-hardness reduction presented in Section 2.1 is based on a feasible hole assignment. Such an assignment can be considered as a matching between two (geometric) graphs that maps vertices within ε distance. Notice that the reduction in Section 2.1 relies on a mapping of the space between the holes, in order to make the synchronization gadget work. Thus, the complexity of the decision problem whether two straight-line drawings have Fréchet distance at most ε remains open. Since every homeomorphism on the two drawings would induce an isomorphism of the graphs, the problem is at least GraphIsomorphism-hard.
If the two graphs are trees, one can answer the decision problem in polynomial time. Let the height of a node be the length of the shortest path to a leaf of the tree. A level of a tree contains all nodes with the same height. We say that two nodes with the same height have an ε-matching if their subtrees of nodes with smaller height have Fréchet distance not more than ε. We store this information in an n × n array and fill the array by checking every pair of vertices within each level, starting with pairs of level zero and then increasing the level. If two children have an ε-matching they are candidates for being matched when matching u with v, otherwise not. Thus, we can decide if the pair (u, v) has an ε-matching by finding a perfect matching in a bipartite graph, plus checking if u and v lie close enough together. Finding such a matching in the highest level implies that the trees have an ε-matching. The running time of this algorithm is dominated by the time to compute a perfect matching on a bipartite graph, which can be done in O(n^{5/2}) time using Dinitz' algorithm (see for example [17]).
¹ For C² curves the total absolute curvature is the integral of the curvature; for piecewise C² curves we add the exterior angles at non-smooth points.
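To make the tree algorithm above concrete, the following sketch decides an ε-matching between two geometric trees. It is only an illustration of the idea, not the algorithm as stated: it assumes the trees are rooted (so "children" replaces the height-based levels), it treats node positions as 2-D points, and it uses a simple augmenting-path bipartite matching instead of Dinitz' algorithm, so its worst-case running time is worse than the O(n^{5/2}) bound. All function and variable names are ours.

```python
import math

def eps_matching(children1, root1, pos1, children2, root2, pos2, eps):
    """Decide whether two rooted geometric trees admit an eps-matching.

    children1/children2: dict node -> list of children.
    pos1/pos2:           dict node -> (x, y) coordinates.
    A pair (u, v) matches if dist(u, v) <= eps and the children of u can be
    perfectly matched to the children of v among already-matching pairs.
    """
    memo = {}

    def dist(u, v):
        (x1, y1), (x2, y2) = pos1[u], pos2[v]
        return math.hypot(x1 - x2, y1 - y2)

    def ok(u, v):
        if (u, v) in memo:
            return memo[(u, v)]
        if dist(u, v) > eps or len(children1[u]) != len(children2[v]):
            memo[(u, v)] = False
            return False
        cu, cv = children1[u], children2[v]
        match = {b: None for b in cv}       # bipartite matching: child of v -> child of u

        def augment(a, seen):
            for b in cv:
                if b not in seen and ok(a, b):
                    seen.add(b)
                    if match[b] is None or augment(match[b], seen):
                        match[b] = a
                        return True
            return False

        memo[(u, v)] = all(augment(a, set()) for a in cu)
        return memo[(u, v)]

    return ok(root1, root2)
```

To reach the stated O(n^{5/2}) bound one would replace the augmenting-path matching by Dinitz' (or Hopcroft-Karp) matching, as in the text.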
Acknowledgements. The authors would like to thank Erik Demaine for helpful discussions. The first author is supported by the Netherlands Organisation for Scientific Research (NWO) under project no. 639.022.707. The second and third authors are supported by the German Research Foundation (DFG) under grant numbers BU 2419/1-1 and SCHU 2458/1-1, respectively.
References
1. Alt, H., Buchin, M.: Can we compute the similarity between surfaces? Discrete & Computational Geometry 43(1), 78–99 (2010)
2. Alt, H., Godau, M.: Computing the Fréchet distance between two polygonal curves. Internat. J. Computational Geometry and Applications 5, 75–91 (1995)
3. Bespamyatnikh, S.: Computing homotopic shortest paths in the plane. J. Algorithms 49(2), 284–303 (2003)
4. Brakatsoulas, S., Pfoser, D., Salas, R., Wenk, C.: On map-matching vehicle tracking data. In: Proc. 31st Internat. Conf. Very Large Data Bases (VLDB), pp. 853–864 (2005)
5. Buchin, K., Buchin, M., Gudmundsson, J., Luo, J., Löffler, M.: Detecting commuting patterns by clustering subtrajectories. In: Hong, S.-H., Nagamochi, H., Fukunaga, T. (eds.) ISAAC 2008. LNCS, vol. 5369, pp. 644–655. Springer, Heidelberg (2008)
6. Buchin, K., Buchin, M., Wenk, C.: Computing the Fréchet distance between simple polygons. Comput. Geom.: Theory and Applications 41(1-2), 2–20 (2008)
7. Buchin, K., Buchin, M., Gudmundsson, J.: Constrained free space diagrams: a tool for trajectory analysis. Internat. J. Geographical Information Science 24, 1101–1125 (2010)
8. Chazal, F., Lieutier, A., Rossignac, J., Whited, B.: Ball-map: Homeomorphism between compatible surfaces. GVU Tech. Report GIT-GVU-06-05 (2005); To appear in the Internat. J. Comput. Geom. and Appl.
9. Dey, T.K., Ranjan, P., Wang, Y.: Convergence, stability, and discrete approximation of Laplace spectra. In: Proc. 21st Annu. ACM-SIAM Sympos. Discr. Algorithms (SODA), pp. 650–663 (2010)
10. Floater, M.S., Hormann, K.: Surface parameterization: a tutorial and survey. In: Dodgson, N.A., Floater, M.S., Sabin, M.A. (eds.) Advances in Multiresolution for Geometric Modelling, Mathematics and Visualization, pp. 157–186. Springer, Berlin (2005)
11. Fréchet, M.: Sur quelques points du calcul fonctionnel. Rendiconti Circ. Mat. Palermo 22, 1–74 (1906)
12. Fréchet, M.: Sur la distance de deux surfaces. Ann. Soc. Polonaise Math. 3, 4–19 (1924)
13. Godau, M.: On the complexity of measuring the similarity between geometric objects in higher dimensions. PhD thesis, Freie Universität Berlin, Germany (1998)
14. Jiang, M., Xu, Y., Zhu, B.: Protein structure-structure alignment with discrete Fréchet distance. J. Bioinformatics and Computational Biology 6(1), 51–64 (2008)
15. Schaefer, T.J.: The complexity of satisfiability problems. In: Proc. 10th Annu. ACM Sympos. Theory of Computing, pp. 216–226. ACM, New York (1978)
16. Sriraghavendra, E., Karthik, K., Bhattacharyya, C.: Fréchet distance based approach for searching online handwritten documents. In: Proc. 9th Int. Conf. Doc. Analy. Recog. (2007)
17. Tarjan, R.E.: Data Structures and Network Algorithms. CBMS-NSF Regional Conference Series in Applied Mathematics, vol. 44. SIAM, Philadelphia (1983)
Geometric Algorithms for Private-Cache Chip Multiprocessors (Extended Abstract)
Deepak Ajwani¹, Nodari Sitchinava¹, and Norbert Zeh²
¹ MADALGO, Department of Computer Science, University of Aarhus, Denmark
{ajwani,nodari}@cs.au.dk
² Faculty of Computer Science, Dalhousie University, Halifax, Canada
[email protected]
Abstract. We study techniques for obtaining efficient algorithms for geometric problems on private-cache chip multiprocessors. We show how to obtain optimal algorithms for interval stabbing counting, 1-D range counting, weighted 2-D dominance counting, and for computing 3-D maxima, 2-D lower envelopes, and 2-D convex hulls. These results are obtained by analyzing adaptations of either the PEM merge sort algorithm or PRAM algorithms. For the second group of problems—orthogonal line segment intersection reporting, batched range reporting, and related problems—more effort is required. What distinguishes these problems from the ones in the previous group is the variable output size, which requires I/O-efficient load balancing strategies based on the contribution of the individual input elements to the output size. To obtain nearly optimal algorithms for these problems, we introduce a parallel distribution sweeping technique inspired by its sequential counterpart.
1  Introduction
With recent advances in multicore processor technologies, parallel processing at the chip level is becoming increasingly mainstream. Current multicore chips have 2, 4 or 6 cores, but Intel recently announced a 48-core chip [20], and the trend to increasing numbers of cores per chip continues. This creates a need for algorithmic techniques to harness the power of increasing chip-level parallelism [17]. A number of papers have made progress towards addressing this need [2, 3, 9, 11–13]. Ignoring the presence of a memory hierarchy, current multicore chips resemble a PRAM, with all processors having access to a shared memory and communicating with each other exclusively through shared memory accesses. However, each processor (core) has a low-latency private cache inaccessible to other processors. In order to take full advantage of such architectures, now commonly known as private-cache chip multiprocessors (CMP’s), algorithms have to be designed with
Supported in part by the Natural Sciences and Engineering Research Council of Canada and the Canada Research Chairs programme. MADALGO is the Center for Massive Data Algorithmics, a center of the Danish National Research Foundation.
a focus on minimizing the number of accesses to shared memory. In this paper, we study techniques to address this problem for a number of geometric problems, specifically for 2-D dominance counting, 3-D maxima, 2-D lower envelope, 2-D convex hull, orthogonal line segment intersection reporting, batched 2-D orthogonal range reporting, and related problems. For these problems, optimal PRAM [5, 7, 14, 18] and sequential I/O-efficient algorithms [10, 19] are known, and some of these problems have also been studied in coarse-grained parallel models [15, 16]. The previous parallel algorithms and the I/O-efficient sequential algorithms achieve exactly one of our goals—parallelism or I/O efficiency—while the algorithms in this paper achieve both.
1.1  Model of Computation and Previous Work
Our algorithms are designed in the parallel external memory (PEM) model of [2]; see Figure 1. This model considers a machine with P processors, each with a private cache of size M. Processors communicate with each other through access to a shared memory of conceptually unlimited size. Each processor can use only data in its private cache for computation. The caches and the shared memory are divided into blocks of size B. Data is transferred between the caches and shared memory using parallel input-output (I/O) operations. During each such operation, each processor can transfer one block between shared memory and its private cache. The cost of an algorithm is the number of I/Os it performs.
Fig. 1. The PEM model: P processors, each with a private cache of M/B blocks of size B, attached to a shared memory
As in the PRAM model, different assumptions can be made about how to handle multiple processors reading or writing the same block in shared memory during one I/O operation. Throughout this paper, we allow concurrent reading of the same block by multiple processors but disallow concurrent block writes; in this respect, the model is similar to a CREW PRAM. The cost of sorting in the PEM model is sort_P(N) = O((N/(PB)) log_{M/B}(N/B)) [2], provided P ≤ N/B² and M = B^{O(1)}.
The PEM model provides the simplest possible abstraction of current multicore chips, focusing on the fundamental I/O issues that need to be addressed when designing algorithms for these architectures, similar to the I/O model [1] in the sequential setting. The hope is that the developed techniques are also applicable to more complicated multicore models. For the PEM graph algorithms developed in [3], this has certainly been the case already [13].
A number of other results have been obtained in more complicated multicore models. In [8], Bender et al. discussed how to support concurrent searching and updating of cache-oblivious B-trees by multiple processors. In [9, 11, 12], different multicore models are considered and cache- and processor-oblivious divide-and-conquer and dynamic programming algorithms are presented whose performance is within a constant factor of optimal for the studied problems. An important difference between the work presented in this paper and the previous results mentioned above is that the algorithms in this paper are
output-sensitive. This creates a challenge in allocating input elements to processors so that all processors produce roughly equal fractions of the output. To the best of our knowledge, output-sensitive computations have not been considered before in any of the multicore models mentioned above. However, there exists related work in the sequential I/O [1, 19] and cache-oblivious models [4, 10], and in the PRAM model [14, 18]. The PRAM solutions rely on very fine-grained access to shared memory, while the cache-efficient solutions seem inherently sequential.
1.2  New Results
In this paper, we focus on techniques to solve fundamental computational geometry problems in the PEM model. The main contribution of the paper is a parallelization of the distribution sweeping paradigm [19], which has proven very successful as a basis for solving geometric problems in the sequential I/O model. Using this technique, we obtain solutions for reporting orthogonal line segment intersections, batched range searching, and related problems. The above problems can be solved using Θ(sort_P(N) + K/(PB)) I/Os in the sequential I/O model (P = 1) and in the CREW PRAM model (M, B = O(1)). Here, K denotes the output size. Thus, it seems reasonable to expect that a similar I/O bound can be achieved in the PEM model. We do not achieve this goal in this paper but present two algorithms that come close to it: one performing O(sort_P(N + K)) I/Os, the other O(sort_P(N) log_d P + K/(PB)), for d := min(√(N/P), M/B). The main challenge in obtaining these solutions is to balance the output reporting across processors, as different input elements may make different contributions to the output size. Our solutions are obtained using two different solutions to this balancing problem. Our solutions are based on O(sort_P(N)) I/O algorithms for the counting versions of these problems. We also obtain optimal O(sort_P(N)) I/O solutions for computing the lower envelope of a set of non-intersecting line segments in the plane, the maxima of a 3-D point set, and the convex hull of a 2-D point set.
2  Tools
In this section, we define primitives we use repeatedly in our algorithms. Unless stated otherwise, we assume P ≤ min(N/B², N/(B log N)) and M = B^{O(1)}.
Prefix sum and compaction. Given an array A[1..N], the prefix sum problem is to compute an array S[1..N] such that S[i] = Σ_{j=1}^{i} A[j]. Given a second boolean array M[1..N], the compaction problem is to rearrange the array A such that all elements A[i] with M[i] = true are together at the beginning of A and the relative order of elements with same value of M[i] remains unchanged. PEM algorithms for these problems with I/O complexity O(N/(PB) + log P) are presented in [2] (also see [21]). Since we assume P ≤ N/(B log N), the I/O complexity of both operations reduces to O(N/(PB)).
Global load balancing. Let A_1, A_2, ..., A_r be a collection of arrays with r ≤ P and Σ_{j=1}^{r} |A_j| = N, and assume each element x has a positive weight w_x. Let
78
D. Ajwani, N. Sitchinava, and N. Zeh
wmax = maxx wx , Wj = x∈Aj wx and W = rj=1 Wj . A global load balancing operation assigns contiguous subarrays of A1 , A2 , . . . , Ar to processors so that O(1) subarrays are assigned to each processor and the total weight of the elements assigned to any processor is O(W/P + wmax ). This operation can be implemented using O(1) prefix sum and compaction operations and, thus, takes O(N/P B) I/Os. Details of this operation will appear in the full paper. Transpose and compact. Given P arrays A1 , A2 , . . . , AP of total size N and such that each array Ai is segmented into d sub-arrays Ai,1 , Ai,2 , . . . , Ai,d , a transpose and compact operation generates d arrays A1 , A2 , . . . , Ad , where Aj is the concatenation of arrays A1,j , A2,j , . . . , AP,j . The segmentation is assumed to be given as a P × d matrix M stored in row-major order and such that M [i, j] is the size of array Ai,j . A transpose and compact operation can be implemented using O(N/P B + d) I/Os as follows. We copy M into a matrix M and round every entry in M up to the next multiple of B. We add a 0th column to M and a 0th row to M , all of whose entries are 0, and compute row-wise prefix sums of M and column-wise prefix sums of M . Let the resulting matrices be M r and M c , respectively. Array Ai,j needs to be copied from position M r [i, j − 1] in Ai to position M c [i − 1, j] in Aj . We assign portions of the arrays A1 , A2 , . . . , AP to processors using a global load balancing operation so that no processor receives more than O(N/P + B) = O(N/P ) elements and the pieces assigned to processors, except the last piece of each array Ai,j , have sizes that are multiples of B. Each processor copies its assigned blocks of arrays A1 , A2 , . . . , AP to arrays A1 , A2 , . . . , Ad . Finally, we use a compaction operation to remove the gaps introduced in arrays A1 , A2 , . . . , Ad by the alignment of the sub-arrays Ai,j at block boundaries. Note that the size of the arrays A1 , A2 , . . . , Ad with the sub-arrays Ai,j padded to full blocks is at most N + P d(B − 1). Thus, the prefix sum, compaction, and global load balancing operations involved in this procedure can be carried out using O(N/P B + d) I/Os. The row-wise and column-wise prefix sums on matrices M and M can also be implemented in this bound. However, M needs to be stored in column-major order for this operation. This is easily achieved by transposing M using O(d) I/Os (as its size is only (P + 1) × d) and then transposing it back into row-major order after performing the prefix sum.
3  Counting Problems
Interval stabbing counting and 1-D range counting. Let I be a set of intervals, S a set of points on the real line, and N := |I| + |S|. The interval stabbing counting problem is to compute the number of intervals in I containing each point in S. The 1-D range counting problem is to compute the number of points in S contained in each interval in I.
Theorem 1. Interval stabbing counting and 1-D range counting can be solved using O(sort_P(N)) I/Os in the PEM model. If the input is given as an x-sorted list of points and interval endpoints, interval stabbing counting and 1-D range counting take O(N/(PB)) and O(sort_P(|I|) + |S|/(PB)) I/Os, respectively.
Proof. Given the x-sorted list of points and interval endpoints, the number of intervals containing a point q ∈ S is the prefix sum of q after assigning a weight of 1 to every left interval endpoint, a weight of −1 to every right interval endpoint, and a weight of 0 to every point in S. Thus, the interval stabbing problem can be solved using a single prefix sum operation, which takes O(N/(PB)) I/Os. The number of points contained in an interval in I is the difference of the prefix sums of its endpoints after assigning a weight of 1 to every point in S and a weight of 0 to every interval endpoint. This prefix sum operation takes O(N/(PB)) I/Os again. To compute the differences of the prefix sums of the endpoints of each interval, we extract the set of interval endpoints from the x-sorted list using a compaction operation and sort the resulting list to store the endpoints of each interval consecutively. This takes another O(sort_P(|I|) + |S|/(PB)) I/Os, for a total of O(sort_P(|I|) + |S|/(PB)) I/Os. If the x-sorted list of points and interval endpoints is not given, it can be produced from I and S using O(sort_P(N)) I/Os, which dominates the total cost of the computation.
2-D weighted dominance counting. Given two points q_1 = (x_1, y_1) and q_2 = (x_2, y_2) in the plane, we say that q_1 1-dominates q_2 if y_1 ≥ y_2; q_1 2-dominates q_2 if, in addition, x_1 ≥ x_2. The latter is the standard notion of 2-D dominance. In the 2-D weighted dominance counting problem, we are given a set S of points, each with an associated weight w(q), and our goal is to compute the total weight of all points in S 2-dominated by each point in S. Our algorithm in Section 4 for orthogonal line segment intersection reporting requires us to count the number of intersections of each segment. This problem and the problem of 2-D batched range counting reduce to 2-D weighted dominance counting by assigning appropriate weights to segment endpoints or points [6]. Thus, it suffices to present a solution to 2-D weighted dominance counting here.
Theorem 2. 2-D weighted dominance counting can be solved using O(sort_P(N)) I/Os in the PEM model, provided P ≤ N/B² and M = B^{O(1)}.
Proof. We start by sorting the points in S by their x-coordinates and partitioning the plane into vertical slabs σ_i, each containing N/P points. Each processor p_i is assigned one slab σ_i and produces a y-sorted list U(σ_i) of points in this slab, each annotated with labels W^1_{σ_i}(q) and W^2_{σ_i}(q), which are the total weights of the points within σ_i that q 1- and 2-dominates, respectively. After the initial sorting step to produce the slabs, which takes O(sort_P(N)) I/Os, the lists U(σ_i) and the labelling of the points in these lists can be produced using O(sort_1(N/P)) I/Os using standard I/O-efficient techniques [19] independently on each processor. We merge these lists using the d-way cascading merge procedure of PEM merge sort [2], which takes O(sort_P(N)) I/Os and can be viewed as a d-ary tree with leaves σ_1, σ_2, ..., σ_P and log_d P levels. At each tree node v, the procedure computes a y-sorted list U(v), which is the merge of the y-sorted lists U(σ_i) associated with the leaves of the subtree with root v. Next we observe that we can augment the merge procedure at each node v to compute weights W^1_v(q) and W^2_v(q), which are the total weights of the points in U(v) 1- and 2-dominated by q,
respectively. For the root r of the merge tree, we have U(r) = S, and W^2_r(q) is the total weight of the points dominated by q, for each q ∈ U(r). So consider a node v with children w_1, w_2, ..., w_d. The cascading merge produces list U(v) in rounds, in each round merging finer samples of the lists U(w_1), U(w_2), ..., U(w_d) than in the previous round. In the round that produces the full list U(v) from full lists U(w_1), U(w_2), ..., U(w_d), the processor placing a point q ∈ U(w_i) into U(v) also accesses the predecessor prd_{w_j}(q) of q in list U(w_j), for all 1 ≤ j ≤ d, which is the point in U(w_j) with maximum y-coordinate no greater than q's. Now it suffices to observe that W^1_v(q) and W^2_v(q) can be computed as W^1_v(q) = Σ_{j=1}^{d} W^1_{w_j}(prd_{w_j}(q)) and W^2_v(q) = W^2_{w_i}(q) + Σ_{j=1}^{i−1} W^1_{w_j}(prd_{w_j}(q)). This does not increase the cost of the merge step, and the total I/O complexity of the algorithm is O(sort_P(N)).
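As a concrete illustration of the weight-assignment idea in the proof of Theorem 1, the following sequential Python sketch computes both the stabbing counts and the range counts in one sweep over the x-sorted list of points and interval endpoints. It treats intervals as closed, computes the range counts directly during the sweep rather than via the extract-and-sort step of the proof, and is purely illustrative; the PEM algorithm performs the same prefix sums blockwise in parallel. All names are ours.

```python
def stabbing_and_range_counts(intervals, points):
    """Interval stabbing counting and 1-D range counting via one sorted sweep.

    intervals: list of (l, r) with l <= r; points: list of x-coordinates.
    Returns (stab, cover): stab[q] = number of intervals containing points[q],
    cover[i] = number of points contained in intervals[i].
    """
    events = []                            # (x, priority, kind, index)
    for i, (l, r) in enumerate(intervals):
        events.append((l, 0, 'L', i))      # left endpoints first at equal x
        events.append((r, 2, 'R', i))      # right endpoints last at equal x
    for q, x in enumerate(points):
        events.append((x, 1, 'P', q))
    events.sort()

    stab = [0] * len(points)
    cover = [0] * len(intervals)
    open_intervals = 0                     # prefix sum of +1 (L) / -1 (R)
    seen_points = 0                        # prefix sum of +1 (P)
    points_before_left = {}                # points seen when interval i opened
    for x, _, kind, idx in events:
        if kind == 'L':
            open_intervals += 1
            points_before_left[idx] = seen_points
        elif kind == 'R':
            open_intervals -= 1
            cover[idx] = seen_points - points_before_left[idx]
        else:
            seen_points += 1
            stab[idx] = open_intervals
    return stab, cover
```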
4  Parallel Distribution Sweeping
We discuss our parallel distribution sweeping framework using orthogonal line segment intersection reporting as an example. Batched orthogonal range reporting and rectangle intersection reporting can be solved in the same complexity using adaptations of the procedure in this section. Details of these adaptations will be given in the full version of the paper.
The distribution sweeping technique recursively divides the plane into vertical slabs, starting with the entire plane as one slab and in each recursive step dividing the given slab into d child slabs, for an appropriately chosen parameter d. This division is chosen so that each slab at a given level of recursion contains roughly the same number of objects (e.g., segment endpoints and vertical segments). In the sequential setting [19], d = M/B, and the recursion stops when the input problem fits in memory. In the parallel setting, we set d := min{√(N/P), M/B},¹ and the lowest level of recursion divides the plane into P slabs, each containing about N/P input elements. Viewing the recursion as a rooted tree, we talk about leaf invocations and children of a non-leaf invocation. We refer to an invocation on slab σ at the kth recursive level as I^k_σ.
We describe two variants of parallel distribution sweeping. In both variants, each invocation I^k_σ receives as input a y-sorted list Y^k_σ containing horizontal segments and vertical segment endpoints, and the root invocation I^0_{R^2} contains all horizontal segments and vertical segment endpoints in the input. For a non-leaf invocation I^k_σ, let I^{k+1}_{σ_1}, I^{k+1}_{σ_2}, ..., I^{k+1}_{σ_d} denote its child invocations, E^k_{σ_j} the y-sorted list of horizontal segments in Y^k_σ with an endpoint in σ_j, S^k_{σ_j} the y-sorted list of horizontal segments in Y^k_σ spanning σ_j and with an intersection in σ_j, and V^k_{σ_j} the y-sorted list of vertical segment endpoints in Y^k_σ contained in σ_j. The first distribution sweeping variant constructs Y^{k+1}_{σ_j} as the merge of lists E^k_{σ_j}, S^k_{σ_j}, and V^k_{σ_j} and recurses on each child invocation I^{k+1}_{σ_j} with this input. The second variant constructs a y-sorted list R^k_{σ_j} := S^k_{σ_j} ∪ V^k_{σ_j}, for each child
¹ The choice of d comes from the d-way PEM mergesort of [2] and ensures that d = O(N/(PB)).
slab σ_j, reports all intersections between segments in R^k_{σ_j}, and then recurses on each child invocation I^{k+1}_{σ_j} with input Y^{k+1}_{σ_j} := E^k_{σ_j} ∪ V^k_{σ_j}; see Figure 2. In both variants, every leaf invocation I^k_σ finds all intersections between the elements in Y^k_σ using sequential I/O-efficient techniques, even though some effort is required to balance the work among processors.
The first variant, with I/O complexity O(sort_P(N + K)), defers the reporting of intersections to the leaf invocations and ensures that the input to every leaf invocation I^k_σ is exactly the list of vertical segment endpoints in σ and of all horizontal segments with an endpoint or an intersection in σ. The second variant achieves an I/O complexity of O(sort_P(N) log_d P + K/(PB)) and is similar to the sequential distribution sweeping technique in that each non-leaf invocation I^k_σ finds all intersections between vertical segments in each child slab σ_j and horizontal segments spanning this slab and then recurses on each slab σ_j to find intersections between segments with at least one endpoint in this slab.
Fig. 2. When deferring intersection reporting to the leaves, we have h ∈ Y^{k+1}_{σ_j} for j ∈ {1, 2, 4}. When reporting intersections immediately, we have h ∈ Y^{k+1}_{σ_j} for j ∈ {1, 4} and h ∈ R^k_{σ_2}.
First we discuss how to produce the lists Y^{k+1}_{σ_j} (for both variants) and R^k_{σ_j} at non-leaf invocations, as this step is common to both solutions. Then we discuss each of the two distribution sweeping variants in detail.
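The recursion skeleton common to both variants can be summarized by the following Python sketch. It is schematic only: the real algorithm processes all invocations of a level in parallel and splits slabs by object counts rather than by naive slicing, and the two callbacks stand in for the leaf and non-leaf processing described in Sections 4.1–4.3. All names are ours.

```python
import math

def distribution_sweep(slab, level, N, P, d, handle_leaf, handle_internal):
    """Skeleton of the invocation tree.

    slab: the invocation's objects, sorted by x-coordinate.
    An invocation either solves its slab sequentially (leaf, at most about
    N/P objects) or splits it into d child slabs and recurses on each.
    """
    if len(slab) <= max(1, N // P):
        handle_leaf(slab)                       # sequential I/O-efficient reporting
        return
    handle_internal(slab)                       # produce the Y (and R) child lists
    size = math.ceil(len(slab) / d)
    for j in range(d):
        child = slab[j * size:(j + 1) * size]   # child slab sigma_j
        if child:
            distribution_sweep(child, level + 1, N, P, d,
                               handle_leaf, handle_internal)
```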
4.1  Generating Lists Y^{k+1}_{σ_j} and R^k_{σ_j} for Non-leaf Invocations
We process all invocations I^k_σ at the kth recursive level in parallel. Let N_k := Σ_σ |Y^k_σ| and P_σ := P|Y^k_σ|/N_k. Since N_k = Ω(N), N_k can be computed using O(N_k/(PB)) I/Os using a prefix sum operation. Within each vertical slab σ, we define P_σ horizontal slabs, each containing |Y^k_σ|/P_σ = N_k/P elements of Y^k_σ. The P_σ horizontal slabs and d vertical child slabs σ_j define a P_σ × d grid. We refer to the cell in row i and column j as C_{ij}. Our first step is to compute the number of vertical segments intersecting the horizontal boundaries between adjacent grid cells. Then we use this information to count, for each horizontal segment h ∈ Y^k_σ, the number of grid cells that h spans and where it has at least one intersection. Finally, we generate y-sorted lists Y_{ij} and R_{ij}, for each grid cell C_{ij}, which are the portions of Y^{k+1}_{σ_j} and R^k_{σ_j} containing elements from the ith horizontal slab. The lists Y^{k+1}_{σ_j} and R^k_{σ_j} are then obtained from the lists Y_{ij} and R_{ij}, respectively, using transpose and compact operations. Next we discuss these steps in detail.
1. Intersection counts for horizontal grid cell boundaries. Using global load balancing, we allocate O(N_k/P) elements of each list Y^k_σ to a processor. This partition of Y^k_σ defines the P_σ horizontal slabs in σ's grid. The processor associated with the ith horizontal slab sequentially scans its assigned portion of Y^k_σ and generates y-sorted lists V_{ij} of vertical segment endpoints in each cell C_{ij}.
It also adds an entry representing the top boundary of the cell C_{ij} as the first element in each list V_{ij}. Using a transpose and compact operation, we obtain y-sorted lists V_{σ_j} of vertical segment endpoints and cell boundaries in each of the d child slabs σ_j. Observing that N_k = Ω(N) and d = O(N/(PB)) [2], the intersection counts for all cell boundaries in σ_j can now be computed using O(N_k/(PB)) I/Os by treating these cell boundaries as stabbing queries over V_{σ_j}. The total I/O complexity of this step is therefore O(N_k/(PB)).
2. Counting cells with intersections for each horizontal segment. Each processor performs a vertical sweep of the portion of Y^k_σ assigned to it in Step 1. For each vertical slab σ_j, it keeps track of the number of vertical segments in σ_j that span the current y-coordinate, starting with the intersection count of the top boundary of C_{ij} and updating the count whenever the sweep passes a top or bottom endpoint of a vertical segment. When the sweep passes a horizontal segment h, this segment has an intersection in a cell C_{ij} spanned by h if and only if the count for slab σ_j is non-zero. By testing this condition for each cell, we can determine t_h, the number of slabs σ_j spanned by h and where h has an intersection. We assign weights w_h := 1 + t_h and w_q := 1 to each horizontal segment h and vertical segment endpoint q. The I/O complexity of this step is O(N_k/(PB)) I/Os because each processor scans N_k/P elements in this step and keeps d ≤ M counters in memory.
3. Generating child lists. Using a global load balancing operation with the weights computed in Step 2, we reallocate the elements in Y^k_σ to processors so that the elements assigned to each processor have total weight W_k/P, where W_k = Σ_σ Σ_{e∈Y^k_σ} w_e. This partitioning of Y^k_σ induces new horizontal slabs in σ's grid. We repeat Step 1 to count the number of vertical segments intersecting each horizontal cell boundary and repeat the sweep from Step 2, this time copying every horizontal segment with an endpoint in C_{ij} to Y_{ij} and, depending on the distribution sweeping variant, adding every horizontal segment spanning σ_j and with an intersection in σ_j to Y_{ij} or R_{ij}, and every vertical segment endpoint in σ_j to Y_{ij} and R_{ij}. Finally, we obtain the lists Y^{k+1}_{σ_j} and R^k_{σ_j} using a transpose and compact operation. The I/O complexity of this step is O(W_k/(PB)) = O((N_k + L_k)/(PB)) I/Os, where L_k = Σ_h t_h with the sum taken over all horizontal segments h ∈ Y^k_σ.
By summing the costs of these three steps, we obtain the following lemma.
Lemma 1. At the kth recursive level, the y-sorted lists Y^{k+1}_{σ_j} and R^k_{σ_j} can be generated using O((N_k + L_k)/(PB)) I/Os, where N_k = Σ_σ |Y^k_σ| and L_k = Σ_h t_h with the second sum taken over all horizontal segments in slab lists Y^k_σ.
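A sequential Python sketch of the sweep in Step 2 above (counting, for each horizontal segment, the spanned child slabs in which it has an intersection) is given below. The event encoding and all names are ours; the PEM version runs one such sweep per processor over its horizontal slab.

```python
def count_cells_with_intersections(events, init_open, spans):
    """Sweep one horizontal slab of the grid from top to bottom.

    events:    that slab's portion of Y, sorted by decreasing y; an event is
               ('top', j) or ('bottom', j) for a vertical segment endpoint in
               child slab j, or ('horizontal', h) for a horizontal segment h.
    init_open: init_open[j] = number of vertical segments crossing the top
               boundary of cell C_{i,j} (computed in Step 1).
    spans:     spans[h] = list of child slabs spanned by horizontal segment h.
    Returns t with t[h] = number of spanned child slabs in which h has at
    least one intersection; the weight of h is then w_h = 1 + t[h].
    """
    open_count = list(init_open)
    t = {}
    for kind, arg in events:
        if kind == 'top':
            open_count[arg] += 1
        elif kind == 'bottom':
            open_count[arg] -= 1
        else:                                   # a horizontal segment
            t[arg] = sum(1 for j in spans[arg] if open_count[j] > 0)
    return t
```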
4.2  An O(sort_P(N + K)) Solution
Our O(sort_P(N + K)) I/O solution defers the reporting of intersections to the leaf invocations, ensuring that the input to each leaf invocation I^k_σ includes all segments with an endpoint in σ and all horizontal segments with an intersection in σ. We achieve this by setting Y^{k+1}_{σ_j} := V^k_{σ_j} ∪ E^k_{σ_j} ∪ S^k_{σ_j}, for each child slab σ_j of a non-leaf invocation I^k_σ. By Lemma 1, the input lists for level k + 1 can be generated using O((N_k + L_k)/(PB)) = O((N + K)/(PB)) I/Os because N_k ≤ N + K and L_k ≤ K.
Since there are log_d P recursive levels, the cost of all non-leaf invocations is O(((N + K)/(PB)) log_d P) = O(sort_P(N + K)) I/Os. At the leaf level, we balance the reporting of intersections among processors based on the number of intersections of each horizontal segment. The details are as follows.
1. Counting intersections. We partition each list Y^k_σ into y-sorted lists H_σ and V_σ of horizontal segments and vertical segment endpoints. This takes O(N_k/(PB)) I/Os by copying each element of Y^k_σ into the corresponding position of H_σ or V_σ and compacting the two lists. Using global load balancing, we allocate O(N_k/P) = O((N + K)/P) horizontal segments from O(1) slabs to each processor. Applying sequential I/O-efficient orthogonal intersection counting [19] to its assigned horizontal segments and the vertical segments in the corresponding slabs, each processor computes t_h, the number of intersections of each of its horizontal segments h, and assigns weight w_h := 1 + t_h to h. Since |V_σ| = O(N/P), the cost of this step is O(sort_1((N + K)/P)) = O(sort_P(N + K)).
2. Reporting intersections. Using global load balancing with the weights computed in the previous step, we re-allocate horizontal segments to processors so that each processor is responsible for segments of total weight W/P = (Σ_σ Σ_{h∈H_σ} w_h)/P = O((N + K)/P). Each processor runs a sequential I/O-efficient orthogonal line segment intersection reporting algorithm [19] on its horizontal segments and the vertical segments in the corresponding O(1) slabs. This step takes O(sort_1(N/P + W/P)) = O(sort_P(N + K)) I/Os.
By summing the costs of all invocations, we obtain the following theorem.
Theorem 3. In the PEM model, orthogonal line segment intersection reporting takes O(sort_P(N + K)) I/Os, provided P ≤ min{N/(B log N), N/B²} and M = B^{O(1)}.
4.3  An O(sort_P(N) log_d P + K/(PB)) Solution
In our O(sort_P(N) log_d P + K/(PB)) solution, each invocation I^k_σ generates lists Y^{k+1}_{σ_j} := V^k_{σ_j} ∪ E^k_{σ_j} and R^k_{σ_j} := V^k_{σ_j} ∪ S^k_{σ_j}, for each child slab σ_j of σ, and then reports all intersections between elements in R^k_{σ_j} before recursing on each slab σ_j with input Y^{k+1}_{σ_j}. The leaf invocations are the same as in the O(sort_P(N + K)) solution, and we process all invocations at each level of recursion simultaneously.
Generating all lists Y^{k+1}_{σ_j} and R^k_{σ_j} at the kth recursive level takes O((N_k + L_k)/(PB)) I/Os; see Section 4.1. Since each list Y^k_σ contains only segments with an endpoint in σ, we have N_k ≤ 2N and Σ_k N_k = O(N log_d P). Since we also have Σ_k L_k ≤ K, the cost of generating lists Y^{k+1}_{σ_j} and R^k_{σ_j} for all non-leaf invocations is O((N/(PB)) log_d P + K/(PB)), while the cost of all leaf invocations is O(sort_P(N) + K/(PB)) (each processor processes elements from only O(1) slabs, and each slab contains only O(N/P) vertical segments and horizontal segment endpoints). Next we discuss how to report all intersections between elements of the lists R^k_{σ_j} at the kth recursive level using O(sort_P(N) + (K_k + K/log_d P)/(PB)) I/Os, where K_k is the number of intersections reported at the kth recursive level. This sums to a cost of O(sort_P(N) log_d P + K/(PB)) I/Os for all non-leaf invocations and dominates the total cost of the algorithm. This proves the following result.
Theorem 4. In the PEM model, orthogonal line segment intersection reporting takes O(sort_P(N) log_d P + K/(PB)) I/Os, if P ≤ min{N/(B log N), N/B²} and M = B^{O(1)}.
To achieve a cost of O(sort_P(N) + (K_k + K/log_d P)/(PB)) I/Os per recursive level, we assume every vertical segment has at most K' := max{N/P, K/(P log_d P)} intersections. Below we sketch how to eliminate this assumption by splitting vertical segments with more than K' intersections into subsegments with at most K' intersections as needed.
To report the intersections at the kth recursive level, we process all lists R^k_{σ_j} in parallel. We do this in three steps. First we count the number of intersections of each vertical segment in such a list. Then we split each list R^k_{σ_j} into y-sorted lists V_{σ_j} and H_{σ_j} containing the top endpoints of vertical segments and horizontal segments, respectively. Each endpoint in V_{σ_j} also stores the bottom endpoint and the number of intersections of the corresponding segment. In the third step, we allocate portions of the lists V_{σ_j} to processors, and each processor reports the intersections of its allocated vertical segments. The details are as follows.
1. Counting intersections. Counting the number of intersections for each vertical segment in R^k_{σ_j} is equivalent to answering 1-D range counting queries over R^k_{σ_j}, as each horizontal segment in R^k_{σ_j} completely spans σ_j. Thus, by applying Theorem 1 to all lists R^k_{σ_j} simultaneously, this step takes O(sort_P(N) + K_k/(PB)) I/Os because there are O(N) vertical segments and at most K_k horizontal segments in all lists R^k_{σ_j} at the kth recursive level.
2. Generating lists H_{σ_j} and V_{σ_j}. Splitting R^k_{σ_j} into lists H_{σ_j} and V_{σ_j} can be done as the splitting of Y^k_σ for leaf invocations. Before doing this, however, we annotate every vertical segment endpoint q with the index scc(q) such that H_{σ_j}[scc(q)] is the first horizontal segment below q in the list H_{σ_j}. This is done by assigning a weight of 0 to vertical segment endpoints and 1 to horizontal segments and computing prefix sums on these weights. Thus, the I/O complexity of this step is O((N + K_k)/(PB)).
3. Reporting intersections. Let t_q be the number of intersections of the vertical segment with top endpoint q, and w_q := 1 + t_q. We allocate portions of the lists V_{σ_j} to processors by using global load balancing with these weights. Since every vertical segment has at most K' intersections, this assigns segment endpoints with total weight O((N + K_k)/P + K') to each processor. The cost of this assignment step is O((N + K_k)/(PB)) I/Os. Now each processor performs a sequential sweep of its assigned portion V' of a list V_{σ_j} and of a portion H' of H_{σ_j}, starting with position scc(q), where q is the first point in V'. The elements in V' and H' are processed by decreasing y-coordinates. When processing a segment endpoint in V', its vertical segment is inserted into an active list A. When processing a segment h in H', we scan A to report all intersections between h and vertical segments in A and remove all vertical segments from A that do not intersect h. The sweep terminates when all points in V' have been processed and A is empty. The I/O complexity per processor p_i is easily seen to be O(r_i + (W_i + Z_i)/B), where r_i = O(1) is the number of portions of lists V_{σ_j} assigned to p_i, W_i is
the total weight of the elements in these portions, and Z_i is the total number of scanned elements in the corresponding lists H'. Our goal is to show that Z_i = O(W_i + K'), which bounds the cost of reporting intersections by O(1 + (N + K_k)/(PB) + K'/B) = O((N + K_k)/(PB) + K'/B). To this end, we show that there are only O(K') horizontal segments scanned by p_i that do not intersect any vertical segments assigned to p_i. Consider the last segment h in a portion H' of a list H_{σ_j} scanned by p_i and which does not intersect a segment in the corresponding sublist V' of V_{σ_j} assigned to p_i. Since every horizontal segment in H_{σ_j} has at least one intersection at this recursive level, h must intersect some vertical segment v assigned to another processor. Observe that the top endpoint of v must precede V' in V_{σ_j}, which implies that v intersects all segments in H' scanned by p_i but without intersections with segments in V'. Since v has at most K' intersections, there can be at most K' such segments in H', and p_i scans portions of only O(1) lists H_{σ_j}.
By adding the costs of the different steps, we obtain a cost of O(sort_P(N) + (K_k + K/log_d P)/(PB)) I/Os per recursive level, as claimed in Theorem 4.
Our algorithm relies on the assumption that every vertical segment has at most K' intersections in two places: balancing the reporting load among processors and bounding the number of elements in H_{σ_j}-lists scanned by each processor. After Step 1, the top endpoint q of each vertical segment in V_{σ_j} stores its intersection count t_q and the index scc(q) of the first segment in H_{σ_j} below q. For each endpoint q with t_q > K', we generate l_q := ⌈t_q/K'⌉ copies q_1, q_2, ..., q_{l_q}, each with an intersection count of K'—q_{l_q} has intersection count t_q mod K'—and successor index scc(q_i) := scc(q) + (i − 1)K'. We sort the resulting augmented V_{σ_j}-list by the successor indices of its entries and modify the reporting step to remove a vertical segment from the active list when a number of intersections matching its intersection count have been reported. This is equivalent to splitting each vertical segment with more than K' intersections at the current recursive level into subsegments with at most K' intersections each. The full version of this paper will show that the number of elements in the V_{σ_j}-lists at each recursive level remains O(N) and that this does not alter the cost of the algorithm.
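A sequential Python sketch of the reporting sweep in Step 3 (before the splitting modification just described) is shown below. It relies on the fact that horizontal segments in H_{σ_j} span the child slab, so a horizontal segment at height y intersects exactly the active vertical segments whose y-range contains y; the scc-based start position and the load balancing across processors are omitted. All names are ours.

```python
def report_intersections(v_segments, h_segments):
    """Report intersections by a sweep in decreasing y.

    v_segments: vertical segments as (y_top, y_bottom, id), sorted by
                decreasing y_top.
    h_segments: horizontal segments spanning the slab as (y, id), sorted by
                decreasing y.
    Yields (vertical_id, horizontal_id) pairs.
    """
    active = []                       # the active list A from the text
    vi = 0
    for y, hid in h_segments:
        # insert all vertical segments whose top endpoint lies at or above y
        while vi < len(v_segments) and v_segments[vi][0] >= y:
            active.append(v_segments[vi])
            vi += 1
        still_active = []
        for y_top, y_bot, vid in active:
            if y_bot <= y:            # v reaches down to y: intersection
                yield (vid, hid)
                still_active.append((y_top, y_bot, vid))
            # else: v ended above y and cannot intersect any later horizontal
        active = still_active
```

For example, `list(report_intersections(V, H))` returns all pairs for one slab; the cost is dominated by the scans of the active list, which the text bounds by O(W_i + K') elements per processor.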
5  Additional Problems
Theorem 5. The lower envelope of a set of non-intersecting 2-D line segments, the convex hull of a 2-D point set, and the maxima of a 3-D point set can be computed using O(sort_P(N)) I/Os, provided P ≤ N/B² and M = B^{O(1)}.
Proof (Sketch). The lower envelope of a set of non-intersecting line segments and the maxima of a 3-D point set can be computed by merging point lists sorted along one of the coordinate axes and computing appropriate labels of the points in each list U(v) from the labels of their predecessors in v's child lists, using the same strategy as for 2-D weighted dominance counting [6]. The result on the convex hull is obtained using a careful analysis (whose details will be presented in the full paper) of an adaptation of the CREW PRAM algorithm of [7].
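For reference, a sequential sketch of 3-D maxima computation is given below. This is the classical sweep (sort by decreasing z and maintain the 2-D maxima of the projections seen so far as a staircase), not the PEM merge-based algorithm of Theorem 5; duplicate points are treated as dominated, and all names are ours.

```python
from bisect import bisect_left

def maxima_3d(points):
    """Return the maximal points of a 3-D point set, i.e. the points p such
    that no other point q has q >= p in every coordinate."""
    pts = sorted(points, key=lambda p: (-p[2], -p[0], -p[1]))
    xs = []          # staircase x-coordinates, increasing
    ys = []          # corresponding y-coordinates, decreasing
    result = []
    for x, y, z in pts:
        i = bisect_left(xs, x)
        # the staircase point with smallest x' >= x has the largest y among
        # candidates; if it dominates (x, y), some earlier point dominates p
        if i < len(xs) and ys[i] >= y:
            continue
        result.append((x, y, z))
        # remove staircase points that (x, y) dominates: x' < x and y' <= y
        j = i
        while j > 0 and ys[j - 1] <= y:
            j -= 1
        del xs[j:i]
        del ys[j:i]
        xs.insert(j, x)
        ys.insert(j, y)
    return result
```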
References
1. Aggarwal, A., Vitter, J.S.: The input/output complexity of sorting and related problems. Communications of the ACM 31(9), 1116–1127 (1988)
2. Arge, L., Goodrich, M.T., Nelson, M.J., Sitchinava, N.: Fundamental parallel algorithms for private-cache chip multiprocessors. In: SPAA, pp. 197–206 (2008)
3. Arge, L., Goodrich, M.T., Sitchinava, N.: Parallel external memory graph algorithms. In: IPDPS (2010)
4. Arge, L., Mølhave, T., Zeh, N.: Cache-oblivious red-blue line segment intersection. In: Halperin, D., Mehlhorn, K. (eds.) ESA 2008. LNCS, vol. 5193, pp. 88–99. Springer, Heidelberg (2008)
5. Atallah, M.J., Cole, R., Goodrich, M.T.: Cascading divide-and-conquer: A technique for designing parallel algorithms. SIAM J. Comp. 18(3), 499–532 (1989)
6. Atallah, M.J., Goodrich, M.T.: Efficient plane sweeping in parallel. In: SOCG, pp. 216–225 (1986)
7. Atallah, M.J., Goodrich, M.T.: Parallel algorithms for some functions of two convex polygons. Algorithmica 3, 535–548 (1988)
8. Bender, M.A., Fineman, J.T., Gilbert, S., Kuszmaul, B.C.: Concurrent cache-oblivious B-trees. In: SPAA, pp. 228–237 (2005)
9. Blelloch, G.E., Chowdhury, R.A., Gibbons, P.B., Ramachandran, V., Chen, S., Kozuch, M.: Provably good multicore cache performance for divide-and-conquer algorithms. In: SODA, pp. 501–510 (2008)
10. Brodal, G.S., Fagerberg, R.: Cache oblivious distribution sweeping. In: Widmayer, P., Ruiz, F.T., Bueno, R.M., Hennessy, M., Eidenbenz, S., Conejo, R. (eds.) ICALP 2002. LNCS, vol. 2380, pp. 426–438. Springer, Heidelberg (2002)
11. Chowdhury, R.A., Ramachandran, V.: The cache-oblivious gaussian elimination paradigm: Theoretical framework, parallelization and experimental evaluation. In: SPAA, pp. 71–80 (2007)
12. Chowdhury, R.A., Ramachandran, V.: Cache-efficient dynamic programming for multicores. In: SPAA, pp. 207–216 (2008)
13. Chowdhury, R.A., Silvestri, F., Blakeley, B., Ramachandran, V.: Oblivious algorithms for multicores and network of processors. In: IPDPS (2010)
14. Datta, A.: Efficient parallel algorithms for geometric partitioning problems through parallel range searching. In: ICPP, pp. 202–209 (1994)
15. Dehne, F., Fabri, A., Rau-Chaplin, A.: Scalable parallel geometric algorithms for coarse grained multicomputers. In: SOCG, pp. 298–307 (1993)
16. Fjällström, P.O.: Parallel algorithms for batched range searching on coarse-grained multicomputers. Linköping Electronic Articles in Computer and Information Science 2(3) (1997)
17. Gibbons, P.: Theory: Asleep at the switch to many-core. In: Workshop on Theory and Many-Cores (T&MC) (May 2009)
18. Goodrich, M.T.: Intersecting line segments in parallel with an output-sensitive number of processors. SIAM J. Comp. 20(4), 737–755 (1991)
19. Goodrich, M.T., Tsay, J.J., Vengroff, D.E., Vitter, J.S.: External-memory computational geometry. In: FOCS, pp. 714–723 (1993)
20. Intel Corp.: Futuristic Intel chip could reshape how computers are built, consumers interact with their PCs and personal devices (December 2009), http://www.intel.com/pressroom/archive/releases/2009/20091202comp_sm.htm
21. Sitchinava, N.: Parallel external memory model – a parallel model for multi-core architectures. Ph.D. thesis, University of California, Irvine (2009)
Volume in General Metric Spaces
Ittai Abraham¹, Yair Bartal², Ofer Neiman³, and Leonard J. Schulman⁴
¹ Microsoft Research
[email protected]
² Hebrew University
[email protected]
³ Courant Institute of Mathematical Sciences
[email protected]
⁴ Caltech
[email protected]
Abstract. A central question in the geometry of finite metric spaces is how well can an arbitrary metric space be "faithfully preserved" by a mapping into Euclidean space. In this paper we present an algorithmic embedding which obtains a new strong measure of faithful preservation: not only does it (approximately) preserve distances between pairs of points, but also the volume of any set of k points. Such embeddings are known as volume preserving embeddings. We provide the first volume preserving embedding that obtains constant average volume distortion for sets of any fixed size. Moreover, our embedding provides constant bounds on all bounded moments of the volume distortion while maintaining the best possible worst-case volume distortion. Feige, in his seminal work on volume preserving embeddings defined the volume of a set S = {v_1, ..., v_k} of points in a general metric space: the product of the distances from v_i to {v_1, ..., v_{i−1}}, normalized by 1/(k−1)!, where the ordering of the points is that given by Prim's minimum spanning tree algorithm. Feige also related this notion to the maximal Euclidean volume that a Lipschitz embedding of S into Euclidean space can achieve. Syntactically this definition is similar to the computation of volume in Euclidean spaces, which however is invariant to the order in which the points are taken. We show that a similar robustness property holds for Feige's definition: the use of any other order in the product affects volume^{1/(k−1)} by only a constant factor. Our robustness result is of independent interest as it presents a new competitive analysis for the greedy algorithm on a variant of the online Steiner tree problem where the cost of buying an edge is logarithmic in its length. This robustness property allows us to obtain our results on volume preserving embeddings.
Part of the research was done while the author was at the Center of the Mathematics of Information, Caltech, CA, USA, and was supported in part by a grant from the Israeli Science Foundation (195/02) and in part by a grant from the National Science Foundation (NSF CCF-0652536). Partially funded by NSF Expeditions in Computing award, 2009-2010. Supported in part by NSF CCF-0515342 and NSA H98230-06-1-0074.
1 Introduction
Recent years have seen a large outpouring of work in analysis, geometry and theoretical computer science on metric space embeddings guaranteed to introduce only small distortion into the distances between pairs of points. Euclidean space is not only a metric space, it is also equipped with higher-dimensional volumes. General metrics do not carry such structure. However, a general definition for the volume of a set of points in an arbitrary metric was developed by Feige [10]. In this paper we extend the study of metric embeddings into Euclidean space by, first, showing a robustness property of the general volume definition, and then using this robustness property, together with existing metric embedding methods, to show an embedding that guarantees small distortion not only on pairs, but also on the volumes of sets of points.

The robustness property (see Theorem 2) is that the minimization over permutations in the volume definition affects it by only a constant. This result is of independent interest as it provides a competitive analysis for the greedy algorithm on a variant of the online Steiner tree problem where the cost of buying an edge is logarithmic in its length, showing that the cost of greedy is within an additive term of the minimum spanning tree, implying a constant competitive ratio.

Our main result is an algorithmic embedding (see Theorem 3) with constant average distortion for sets of any fixed size. In fact, our bound on the average distortion scales logarithmically with the size of the set. Moreover, this bound holds even for higher moments of the distortion (the ℓ_q-distortion), while the embedding still maintains the best possible worst-case distortion bound, simultaneously. Hence our embedding generalizes both [17] and [2] (see Related Work below).

Volume in general metric spaces. Let d_E denote Euclidean distance, and let affspan denote the affine span of a point set. The (n-1)-dimensional Euclidean volume of the convex hull of points X = {v_1, ..., v_n} ⊆ R^d is

  vol_E(X) = (1/(n-1)!) ∏_{i=2}^{n} d_E(v_i, affspan(v_1, ..., v_{i-1})).
This definition is, of course, independent of the order of the points. Feige’s notion of volume. Let (X, dX ) be a finite metric space, X = {v1 , . . . , vn }. Let Sn be the symmetric group on n symbols, and let πP ∈ Sn be the order in which the points of X may be adjoined to a minimum spanning tree by Prim’s algorithm. (Thus vπP (1) is an arbitrary point, vπP (2) is the closest point to it, etc.) Feige’s notion of the volume of X is (we have normalized by a factor of (n − 1)!): volF (X) =
  (1/(n-1)!) ∏_{i=2}^{n} d_X(v_{π_P(i)}, {v_{π_P(1)}, ..., v_{π_P(i-1)}}).   (1)
π_P minimizes the above expression¹. It should be noted that even if X is a subset of Euclidean space, vol_E and vol_F do not agree. (The latter can be arbitrarily larger than the former.) The actual relationship that Feige found between these notions is nontrivial. Let L_2(X) be the set of non-expansive embeddings from X into Euclidean space. Feige proved the following:

Theorem 1 (Feige). For any n-point metric space (X, d):

  1 ≤ ( vol_F(X) / sup_{f ∈ L_2(X)} vol_E(f(X)) )^{1/(n-1)} ≤ 2.

Thus, remarkably, vol_F(X) is characterized to within a factor of 2 (after normalizing for dimension) by the Euclidean embeddings of X.

Our work, part I: Robustness of the metric volume. What we show first is that Feige's definition is insensitive to the minimization over permutations implicit in Equation (1), and so also a generalized version of Theorem 1 can be obtained.

Theorem 2. There is a constant C such that for any n-point metric space (X, d), and with π_P defined as above, and for every π ∈ S_n:

  1 ≤ ( ∏_{i=2}^{n} d_X(v_{π(i)}, {v_{π(1)}, ..., v_{π(i-1)}}) / ∏_{i=2}^{n} d_X(v_{π_P(i)}, {v_{π_P(1)}, ..., v_{π_P(i-1)}}) )^{1/(n-1)} ≤ C.

An alternative interpretation of this result can be presented as the analysis of the following online problem. Consider the following variant of the online Steiner tree problem [14]. Given a weighted graph (V, E), at each time unit i, the adversary outputs a vertex v_i ∈ V and an online algorithm can buy edges E_i ⊆ E. At each time unit i, the edges bought E_1, ..., E_i must induce a connected graph among the current set of vertices v_1, ..., v_i. The competitive ratio of an online algorithm is the worst ratio between the cost of the edges bought and the cost of the edges bought by the optimal offline algorithm. This problem has been well-studied when the cost of buying an edge is proportional to its length. Imase and Waxman prove that the greedy algorithm is O(log n) competitive, and show that this bound is asymptotically tight. It is natural to consider a variant where the cost of buying is a concave function of the edge length. In this case a better result may be possible. In particular we analyze the case where this cost function is logarithmic in edge length. Such a logarithmic cost function may capture the economy-of-scale effects where buying multiplicatively longer edges costs only additively more. Theorem 2 can be interpreted as a competitive analysis of the greedy algorithm in this model, showing that the cost of the greedy algorithm is within an O(n) additive term of the minimum spanning tree, which implies an O(1) competitive ratio for this problem.
¹ This is because the (sorted) vector of edge lengths created by Prim's algorithm is smaller than or equal, in each coordinate, to the sorted vector of edge lengths of any spanning tree.
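For concreteness, the following small Python sketch (an illustration, not code from the paper) evaluates vol_F of a finite metric given as a distance matrix, using the Prim ordering from Equation (1); the function name is ours.

```python
from math import factorial

def feige_volume(d):
    """Feige volume of a finite metric space.

    d: symmetric n x n matrix of pairwise distances (d[i][i] == 0).
    Returns vol_F(X): the product, over Prim's order, of the distance
    from each newly added point to the already added set, divided by (n-1)!.
    """
    n = len(d)
    in_tree = [False] * n
    dist_to_set = [float("inf")] * n
    in_tree[0] = True                      # start from an arbitrary point
    for j in range(1, n):
        dist_to_set[j] = d[0][j]
    product = 1.0
    for _ in range(n - 1):
        # next point: the one closest to the current set {v_pi(1),...,v_pi(i-1)}
        i = min((j for j in range(n) if not in_tree[j]), key=lambda j: dist_to_set[j])
        product *= dist_to_set[i]
        in_tree[i] = True
        for j in range(n):
            if not in_tree[j]:
                dist_to_set[j] = min(dist_to_set[j], d[i][j])
    return product / factorial(n - 1)
```

For example, feige_volume([[0, 1, 2], [1, 0, 1], [2, 1, 0]]) returns (1 · 1)/2! = 0.5; since the product equals the product of the MST edge lengths, the value does not depend on the starting point.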
Our work, part II: Volume Preserving Embeddings. We use Theorem 2 and recent results on metric embeddings [2] to show that their algorithm provides a non-contractive embedding into Euclidean space that faithfully preserves volume in the following sense: the embedding obtains simultaneously both O(log k) average volume distortion and O(log n) worst-case volume distortion for sets of size k.

Given an n-point metric space (X, d), an injective mapping f : X → L_2 is called an embedding. An embedding is (k-1)-dimensional non-contractive if for any S ∈ \binom{X}{k}: vol_E(f(S)) ≥ vol_F(S). Let f be a (k-1)-dimensional non-contractive embedding. For a set S ∈ \binom{X}{k} define the (k-1)-dimensional distortion of S under f as:

  dist_f(S) = ( vol_E(f(S)) / vol_F(S) )^{1/(k-1)}.
For 2 ≤ k ≤ n define the (k-1)-dimensional distortion of f as

  dist^{(k-1)}(f) = max_{S ∈ \binom{X}{k}} dist_f(S).

More generally, for 2 ≤ k ≤ n and 1 ≤ q ≤ ∞, define the (k-1)-dimensional ℓ_q-distortion of f as:

  dist_q^{(k-1)}(f) = E_{S ∼ \binom{X}{k}}[dist_f(S)^q]^{1/q},

where the expectation is taken according to the uniform distribution over \binom{X}{k}. Observe that the notion of (k-1)-dimensional distortion is expressed by dist_∞^{(k-1)}(f) and the average (k-1)-dimensional distortion is expressed by the dist_1^{(k-1)}(f)-distortion. It is worth noting that Feige's definition of volume is related to the maximum volume obtained by non-expansive embeddings, while the definitions of average distortion and ℓ_q-distortion use non-contractive embeddings. We note that these definitions are crucial in order to capture the coarse geometric notion described above and achieve results that significantly beat the usual worst-case lower bounds (which depend on the size of the metric). It is clear that one can modify the definition to allow arbitrary embeddings (in particular non-contractive) by defining distortions normalized by taking their ratio with respect to the largest contraction.²

² There are other notions of average distortion that may be of interest; in particular, notions which normalize with respect to the maximum distortion have been considered. While these have advantages of their own, they take a very different geometric perspective which puts emphasis on small distance scales (as opposed to the coarse geometric perspective in this paper), and the worst-case lower bounds hold for these notions.

Our main theorem on volume preserving embeddings is:

Theorem 3. For any metric space (X, d) on n points and any 2 ≤ k ≤ n, there exists a map f : X → L_2 such that for any 1 ≤ q ≤ ∞, dist_q^{(k-1)}(f) ∈ O(min{(q/(k-1)) · log k, log n}). In particular, dist_∞^{(k-1)}(f) ∈ O(log n) and dist_1^{(k-1)}(f) ∈ O(log k).

On top of the robustness property of the general volume definition of Theorem 2, the proof of Theorem 3 builds on the embedding techniques developed in [2] (in the context of pairwise distortion), along with combinatorial arguments that enable the stated bounds on the average and ℓ_q-volume distortions. Our embedding preserves well sets with typically large distances and can be viewed within the context of coarse geometry, where we desire a "high level" geometric representation of the space. This follows from a special property formally stated in Lemma 4.
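As an illustration of these definitions (not part of the paper), the following NumPy sketch evaluates the (k-1)-dimensional distortion dist_f(S) of a single set S: vol_E is obtained from the Gram determinant of the difference vectors, and vol_F from the feige_volume sketch given earlier; all names are ours.

```python
import numpy as np
from math import factorial

def euclidean_volume(points):
    """vol_E of k points (rows of a k x d array): the (k-1)-dimensional volume
    of their convex hull, i.e. sqrt(det(A^T A)) / (k-1)!, where the columns of A
    are the difference vectors p_i - p_0."""
    P = np.asarray(points, dtype=float)
    A = (P[1:] - P[0]).T                     # d x (k-1) matrix of differences
    gram = A.T @ A                           # (k-1) x (k-1) Gram matrix
    return np.sqrt(max(np.linalg.det(gram), 0.0)) / factorial(len(P) - 1)

def set_distortion(d_S, f_S):
    """dist_f(S) = (vol_E(f(S)) / vol_F(S))^(1/(k-1)) for one set S,
    given the k x k distance matrix d_S of S and its embedded points f_S.
    feige_volume is the illustrative sketch shown earlier in these notes."""
    k = len(d_S)
    return (euclidean_volume(f_S) / feige_volume(d_S)) ** (1.0 / (k - 1))
```

For a (k-1)-dimensional non-contractive embedding this ratio is at least 1 by definition, so averaging set_distortion over random k-subsets estimates the ℓ_1 quantity dist_1^{(k-1)}(f).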
1.1 Related Work
Embeddings of metric spaces have been a central field of research in theoretical computer science in recent years, due to the fact that metric spaces are important objects in the representation of data. A fundamental theorem of Bourgain [5] states that every n-point metric space (X, d) can be embedded in L_2 with distortion O(log n), where the distortion is defined as the worst-case multiplicative factor by which a pair of distances change. Our work extends this result in two aspects: (1) bounding the distortion of sets of arbitrary size, and (2) providing bounds for the ℓ_q-distortion for all q ≤ ∞.

Volume preserving embeddings. Feige [10] introduced volume preserving embeddings. He showed that Bourgain's embedding provides an embedding into Euclidean space with (k-1)-dimensional distortion of O(√(log n) · √(log n + k log k)). Following Feige's work some special cases of volume preserving embeddings were studied, where the metric space X is restricted to a certain class of metric spaces. Rao [20] studies the case where X is planar or is an excluded-minor metric, showing constant (k-1)-dimensional distortions. Gupta [12] showed an improved approximation of the bandwidth for trees and chordal graphs. As the Feige volume does not coincide with the standard volume of a Euclidean set, it is also interesting to study this special case when the metric space is given in Euclidean space. This case was studied by Rao [20], Dunagan and Vempala [8] and by Lee [19]. We note that our work provides the first average distortion and ℓ_q-distortion analysis also in the context of this special case.

The first improvement on Feige's volume distortion bounds comes from the work of Rao [20]. As observed by many researchers, Rao's embedding gives more general results depending on a certain decomposability parameter of the space. This provides a bound on the (k-1)-dimensional distortion of O((log n)^{3/2}) for all k ≤ n. This bound has been further improved to O(log n) in work of Krauthgamer et al. [17]. Krauthgamer, Linial and Magen [18] show a matching Ω(log n) lower bound on the (k-1)-dimensional distortion for all k < n^{1/3}. In this paper we provide an embedding with guarantees on the (k-1)-dimensional ℓ_q-distortion for all q ≤ ∞ simultaneously. As a special case, our bounds imply the best possible worst-case (k-1)-dimensional distortion of O(log n).
Average and ℓ_q Distortion. The notions of average distortion and ℓ_q-distortion are tightly related to the notions of partial embeddings and scaling embeddings³, which demand strong guarantees for a (1-ε) fraction of the pairwise distances. These notions were introduced by Kleinberg, Slivkins and Wexler [15], largely motivated by the study of distances in computer networks. In [1] partial embeddings into L_p with tight O(log 1/ε) partial distortion were given. The embedding method of [2] provides a scaling embedding with O(log 1/ε) distortion for all values of ε > 0 simultaneously. As a consequence of having a scaling embedding, they show that any metric space can be embedded into L_p with constant average distortion, and more generally that the ℓ_q-distortion is bounded by O(q), while maintaining the best worst-case distortion possible of O(log n), simultaneously. Previous results on average distortion have applications for a variety of approximation problems, including uncapacitated quadratic assignment [2], and in addition have been used in solving graph-theoretic problems [9]. Following [15, 1, 2], related notions have been studied in various contexts [6, 16, 3, 7].

³ Alternatively known as embeddings with slack and embeddings with gracefully degrading distortion.
2 Robustness of the Metric Volume
Proof of Theorem 2. For a tree T on n vertices {v_1, ..., v_n} let vol(T) be the product of the edge lengths. Because of the matroid exchange property, this product is minimized by an MST. Thus for any metric space on points {v_1, ..., v_n} and any spanning tree T, vol_F(v_1, ..., v_n) ≤ vol(T)/(n-1)!; the inequality is saturated by any (and only a) minimum spanning tree.

Definition 1. A forced spanning tree (FST) for a finite metric space is a spanning tree whose vertices can be ordered v_1, ..., v_n so that for every i > 1, v_i is connected to a vertex that is closest among v_1, ..., v_{i-1}, and to no other among these. (We call such an ordering admissible for the tree.)

An MST is an FST with the additional property that in an admissible ordering v_i is a closest vertex to v_1, ..., v_{i-1} among v_i, ..., v_n.

Definition 2. For a tree T let Δ(T) denote its diameter (the largest distance between any two points in the tree). Let the diameter Δ(F) of a forest F with components T_1, T_2, ..., T_m be Δ(F) = max_{1≤i≤m} Δ(T_i). For a metric space (X, d) let Δ_k(X) = min{Δ(F) | F is a spanning forest of X with k connected components}.

Lemma 1. Let (X, d) be a metric space. Let k ≥ 1. An FST for X has at most k-1 edges of length greater than Δ_k(X).

Proof. Let v_1, ..., v_n be an admissible ordering of the vertices of the FST. Assign each edge to its higher-indexed vertex. Since the ordering is admissible, this assignment is injective. The lemma is trivial for k = 1. For k ≥ 2, cover X by
the union of k trees each of diameter at most Δ_k(X). Only the lowest-indexed vertex in a tree can be assigned an edge longer than Δ_k(X). (Note that v_1 is assigned no edge, hence the bound of k-1.)

Corollary 1. For any n-point metric space (X, d) and any FST T for X, vol(T) ≤ ∏_{k=1}^{n-1} Δ_k(X).

Proof. Order the edges from 1 to n-1 by decreasing length. The k'th edge is no longer than Δ_k(X).

Using Corollary 1, our proof of Theorem 2 reduces to showing that for any MST T of X, ∏_{k=1}^{n-1} Δ_k(X) ≤ e^{O(n-1)} vol(T). Specifically we shall show that for any spanning tree T,

  ∏_{k=1}^{n-1} Δ_k(X) ≤ (1/n²) (4π²/3)^{n-1} vol(T).
(Observe incidentally that the FST created by the Gonzalez [11] and Hochbaum-Shmoys [13] process has vol at least 2^{1-n} ∏_{k=1}^{n-1} Δ_k(X).)

The idea is to recursively decompose T by cutting an edge; letting the two remaining trees be T_1 (with some m edges) and T_2 (with n-2-m edges), we shall upper bound ∏_{k=1}^{n-1} Δ_k(T) in terms of ∏_{k=1}^{m} Δ_k(T_1) and ∏_{k=1}^{n-2-m} Δ_k(T_2). More on this after we show how to pick an edge to cut. Recall: Σ_{j≥1} 1/j² = π²/6.

Edge selection. Find a diametric path γ of T, i.e., a simple path whose length |γ| equals the diameter Δ(T). For appropriate ℓ ≥ 2 let u_1, ..., u_ℓ be the weights of the edges of γ in the order they appear on the path. Select the j'th edge on the path, for a 1 ≤ j ≤ ℓ for which u_j/|γ| > 1/(2(π²/6) min{j, ℓ+1-j}²). Such an edge exists, as otherwise Σ_{j=1}^{ℓ} u_j ≤ (6/π²)|γ| Σ_{j≥1} j^{-2} < |γ|. Without loss of generality j ≤ ℓ+1-j (otherwise flip the indexing on γ); hence cutting u_j contributes overhead |γ|/u_j < 2(π²/6)j² to the product ∏_{k=1}^{n-1} Δ_k, and yields subtrees T_1 and T_2 each containing at least j-1 edges.

Think of this recursive process as successively breaking the spanning tree into a finer and finer forest. Note that we haven't yet specified which tree of the forest is cut, but we have specified which edge in that tree is cut. The order in which trees are chosen to be cut is: F_k(T) (which has k components) is defined by (a) F_1(T) = T; (b) For 1 < k < n, F_k(T) is obtained from F_{k-1}(T) by cutting an edge in the tree of greatest diameter. Note that by definition Δ_k(X) ≤ Δ(F_k(T)).

Induction. Now we show that

  ∏_{k=1}^{n-1} Δ(F_k(T)) ≤ (1/n²) (4π²/3)^{n-1} vol(T).

It will be convenient to do this by an induction showing that there are constants c_1, c_2 > 0 such that

  ∏_{k=1}^{n-1} Δ(F_k(T)) ≤ e^{c_1(n-1) - c_2 log n} vol(T),
and finally justify the choices c_1 = log(4π²/3) and c_2 = 2. As to base cases, n = 1 is trivial, and n = 2 is assured for any c_1 ≥ 0.

For n > 2 let the children of T be T_1 and T_2, that is to say, F_2(T) = {T_1, T_2}. Let m and n-2-m be the numbers of edges in T_1 and T_2 respectively. Observe that with j as defined above, min{m, n-2-m} ≥ j-1 ≥ 0. Examine three sequences of forests: the T sequence, F_1(T), ..., F_{n-1}(T); the T_1 sequence, F_1(T_1), ..., F_m(T_1); the T_2 sequence, F_1(T_2), ..., F_{n-2-m}(T_2). As indicated earlier, in each forest f in the T sequence other than F_1(T), choose a component t of greatest diameter, i.e., one for which Δ(t) = Δ(f). (In case of ties some consistent choice must be made within the T, T_1 and T_2 sequences.) If t lies within T_1, assign f to the forest in the T_1 sequence that agrees with f within T_1. Similarly if t lies within T_2, assign f to the appropriate forest in the T_2 sequence. Due to the process defining the forests F_k(T), this assignment is injective. Moreover, a forest in the T sequence, and the forest it is assigned to in the T_1 or T_2 sequence, share a common diameter. Hence

  ∏_{k=2}^{n-1} Δ(F_k(T)) = ( ∏_{k=1}^{m} Δ(F_k(T_1)) ) ( ∏_{k=1}^{n-2-m} Δ(F_k(T_2)) ),

  ∏_{k=1}^{n-1} Δ(F_k(T)) = Δ(T) · ∏_{k=2}^{n-1} Δ(F_k(T)) = Δ(T) · ( ∏_{k=1}^{m} Δ(F_k(T_1)) ) ( ∏_{k=1}^{n-2-m} Δ(F_k(T_2)) ).

Now by induction:

  ∏_{k=1}^{n-1} Δ(F_k(T)) ≤ Δ(T) · e^{c_1 m - c_2 log(m+1)} · vol(T_1) · e^{c_1(n-2-m) - c_2 log(n-1-m)} · vol(T_2).

As vol(T) = u_j · vol(T_1) vol(T_2) we get

  ∏_{k=1}^{n-1} Δ(F_k(T)) ≤ (Δ(T)/u_j) · exp{c_1(n-2) - c_2(log(m+1) + log(n-1-m))} vol(T)
    ≤ exp{log(2(π²/6)j²) + c_1(n-2) - c_2(log(m+1) + log(n-1-m))} vol(T)
    ≤ exp{log(π²j²/3) + c_1(n-2) - c_2(log j + log(n/2))} vol(T)
    ≤ exp{c_1(n-1) - c_2 log n - (c_2-2) log j - (c_1 - c_2 log 2 - log(π²/3))} vol(T).

Choose c_2 ≥ 2 to take care of the third term in the exponent, and choose c_1 ≥ log(π²/3) + c_2 log 2 to take care of the fourth term in the exponent. (In the theorem statement, both of these choices have been made with equality.) So

  ... ≤ exp{c_1(n-1) - c_2 log n} vol(T).
3 Volume Preserving Embeddings
In this section we prove Theorem 3. In [2] a general framework for embedding metrics into normed spaces was introduced. In particular we define an embedding
f̂ : X → L_2 in O(log n) dimensions and show that the (pairwise) distortion is O(log n) and the ℓ_q-distortion is O(q). Here, we extend this work to apply to sets of points of higher cardinality. We use the same map of [2] while taking more dimensions, O(k log n), so that the map has the stronger property of being volume preserving.
3.1 The Embedding
Our embedding is an extension of the embedding of [2], which is partition-based (as, e.g., in [4, 20]). It is constructed by concatenating O(k log n) random maps X → R, where each such map is formed by summing terms over all scales, and each scale is an embedding created using an approach similar to [20], using the uniform probabilistic partition techniques of [2]. See the full version for the required definitions and notation for uniform probabilistic partitions and their properties. In a nutshell, a partition of X is a pairwise disjoint collection of clusters covering X. In a Δ-bounded partition the diameter of every cluster is at most Δ, and the (η, δ)-padding property of a distribution over Δ-bounded partitions is that for all x ∈ X, Pr[B(x, ηΔ) ⊆ P(x)] ≥ δ (where P(x) denotes the cluster containing x), and η : X → (0, 1].

Let D = c · k log n, Δ_0 = diam(X), I = [⌈log_4(Δ_0)⌉], where c is a constant to be determined later. For all j ∈ I, Δ_j = Δ_0/4^j. Fix some h ∈ [D]. For all j ∈ I create a Δ_j-bounded (η_j, 1/2)-padded probabilistic partition P_j^{(h)} sampled from a certain distribution P̂_j over a set of partitions P_j (for details see [2]). This distribution P̂_j is accompanied by a collection of uniform functions⁴ {ξ_P : X → {0,1} | P ∈ P_j} and {η_P : X → (0,1] | P ∈ P_j}. Roughly speaking, η_P(x) is the inverse logarithm of the local growth rate of the space in the cluster containing x, and ξ_P(x) is an indicator for sufficient local growth rate. Define for x ∈ X and 0 < j ∈ I, φ_j^{(h)} : X → R_+, by φ_j^{(h)}(x) = ξ_{P_j^{(h)}}(x) / η_{P_j^{(h)}}(x).

⁴ A function f is uniform with respect to a partition P if for any x, y ∈ X, P(x) = P(y) implies that f(x) = f(y).

Let {σ_j^{(h)}(A) | A ∈ P_j, 0 < j ∈ I} be i.i.d. symmetric {0,1}-valued Bernoulli random variables. Define the embedding f : X → L_2^D by defining for all h ∈ [D] a function f^{(h)} : X → R_+ and letting f = D^{-1/2} ⊕_{h∈[D]} f^{(h)} (the concatenation of the coordinates f^{(h)}, scaled by D^{-1/2}). For all j ∈ I define a function f_j^{(h)} : X → R_+ and let f^{(h)} = Σ_{j>0} f_j^{(h)}. For x ∈ X define

  f_j^{(h)}(x) = σ_j^{(h)}(P_j^{(h)}(x)) · min{ φ_j^{(h)}(x) · d(x, X \ P_j^{(h)}(x)), Δ_j }.

Finally we let f̂ = C · f be a scaled version of f, where C is a universal constant.
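Purely as a reading aid, here is a deliberately simplified Python sketch of one coordinate f^{(h)}; it is not an implementation of the construction of [2]. The helper sample_padded_partition, and the functions xi and eta it is assumed to return, are hypothetical stand-ins for the padded-partition machinery, and the Bernoulli sign is sampled once per scale here rather than once per cluster.

```python
import math
import random

def one_coordinate(x, X, d, Delta0, sample_padded_partition):
    """Schematic sketch of f^(h)(x) = sum_j f_j^(h)(x), following Section 3.1.

    sample_padded_partition(X, d, Delta) is a HYPOTHETICAL helper assumed to
    return (cluster_of, xi, eta): a Delta-bounded partition given as a map from
    points to cluster ids, together with the uniform functions xi (0/1-valued)
    and eta (in (0,1]) accompanying the distribution in the paper."""
    value = 0.0
    num_scales = int(math.log(Delta0, 4)) + 1
    for j in range(1, num_scales + 1):
        Delta_j = Delta0 / 4 ** j
        cluster_of, xi, eta = sample_padded_partition(X, d, Delta_j)
        sigma = random.randint(0, 1)   # simplification: one sign per scale
        if sigma == 0 or xi(x) == 0:   # if xi(x) = 0 then phi_j(x) = 0
            continue
        # distance from x to the complement of its own cluster
        outside = [y for y in X if cluster_of(y) != cluster_of(x)]
        d_out = min((d(x, y) for y in outside), default=Delta_j)
        value += min((xi(x) / eta(x)) * d_out, Delta_j)
    return value
```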
3.2 Analyzing the (k-1)-Dimensional Distortion
In what follows we give the necessary definitions and state the main technical lemma (Lemma 2) that summarizes the distortion properties of the embedding needed to prove Theorem 3. We start with some definitions.
Definition 3. For a point x ∈ X and radius r ≥ 0 let B(x, r) = {y ∈ X | d(x, y) ≤ r}. For x ∈ X and ε > 0 let r_ε(x) be the minimal radius r such that |B(x, r)| ≥ εn.

In [2] it was shown that 1 ≤ ‖f̂(x) - f̂(y)‖_2 / d(x, y) ≤ O(log(1/ε)) for any 0 < ε < 1/2 such that min{r_{ε/2}(x), r_{ε/2}(y)} < d(x, y). In this section we generalize the analysis to sets of size k: first we define the ε values for a set, then in Lemma 2 we show an analogue for pair distortion on some pairs in the set (even a stronger bound is given, with respect to the affine span), then we show in Lemma 4 that the volume distortion of a set S is bounded, and finally we conclude the appropriate bounds on the various ℓ_q-distortions.

For any sequence S = (s_0, s_1, ..., s_{k-1}), define a sequence ε(S) = (ε_1^{(S)}, ..., ε_{k-1}^{(S)}) as follows. For any i ∈ {1, ..., k-1} let t(i) ∈ {0, ..., i-1} be the index of the point satisfying d(s_i, {s_0, ..., s_{i-1}}) = d(s_i, s_{t(i)}); then ε_i^{(S)} = 2^{-j}, where j is the minimal integer such that min{ r_{ε_i^{(S)}/2}(s_i), r_{ε_i^{(S)}/2}(s_{t(i)}) } < d(s_i, s_{t(i)}). In other words, if ε_i^{(S)} = 2^{-j} then either the radius of B_1 or the radius of B_2 is smaller than d(s_i, s_{t(i)}), where B_1 is the ball around s_i that contains n2^{-(j+1)} points and B_2 is the ball around s_{t(i)} that contains n2^{-(j+1)} points.
Lemma 2. Let (X, d) be an n-point metric space, 2 ≤ k ≤ n, and let f̂ : X → L_2 be the embedding defined in Section 3.1. Then with high probability, for any S ∈ \binom{X}{k} there exists an ordering S = (s_0, s_1, ..., s_{k-1}) such that for all 1 ≤ i < k:

  1 ≤ d_E(f̂(s_i), affspan(f̂(s_0), ..., f̂(s_{i-1}))) / d(s_i, {s_0, ..., s_{i-1}}) ≤ O(log(1/ε_i^{(S)})),

where d_E denotes the Euclidean distance.

We defer the proof of Lemma 2 to the full version. In what follows we show that this lemma implies the desired distortion bounds for the embedding. From now on fix some 2 ≤ k ≤ n. We first make use of Theorem 2 to bound the (k-1)-dimensional distortion of each set S ∈ \binom{X}{k} as a function of ε(S), as implied by Lemma 2.

Lemma 3. The embedding f̂ is (volume) non-contractive.

Proof. Fix some S = (s_0, ..., s_{k-1}). Let q_i = d_E(f̂(s_i), affspan(f̂(s_0), ..., f̂(s_{i-1}))). By definition, vol_E(f̂(S)) = (∏_{i=1}^{k-1} q_i) / (k-1)!. By the lower bound part of Lemma 2:

  vol_F(S) ≤ (1/(k-1)!) ∏_{i=1}^{k-1} d(s_i, {s_0, ..., s_{i-1}}) ≤ (∏_{i=1}^{k-1} q_i) / (k-1)! = vol_E(f̂(S)).

Lemma 4. For any S ∈ \binom{X}{k}:

  dist_{f̂}(S) ≤ O( ( ∏_{i=1}^{k-1} log(1/ε_i^{(S)}) )^{1/(k-1)} ).
Proof. Let S = (s_0, ..., s_{k-1}) be the sequence determined by Lemma 2. By definition, vol_E(f̂(S)) = (∏_{i=1}^{k-1} q_i) / (k-1)!, where q_i = d_E(f̂(s_i), affspan(f̂(s_0), ..., f̂(s_{i-1}))). By the upper bound part of Lemma 2:

  vol_E(f̂(S)) = (1/(k-1)!) ∏_{i=1}^{k-1} q_i ≤ (1/(k-1)!) ∏_{i=1}^{k-1} ( c_1 log(1/ε_i^{(S)}) · d(s_i, {s_0, ..., s_{i-1}}) ),

where c_1 is an appropriate constant. Now Theorem 2 guarantees that:

  (1/(k-1)!) ∏_{i=1}^{k-1} d(s_i, {s_0, ..., s_{i-1}}) ≤ c_2^{k-1} · vol_F(S),

for an appropriate constant c_2, implying that:

  vol_E(f̂(S)) ≤ (c_1 c_2)^{k-1} · vol_F(S) ∏_{i=1}^{k-1} log(1/ε_i^{(S)}).
3.3 Analyzing the Worst Case (ℓ_∞) Volume Distortion
Lemma 5. The (k-1)-dimensional distortion of f̂ is O(log n), i.e., dist_∞^{(k-1)}(f̂) = O(log n).

Proof. Use Lemma 4, noting that for any S ∈ \binom{X}{k} and i ∈ [k], ε_i^{(S)} ≥ 1/n.

3.4 Analyzing the Average (ℓ_1) Volume Distortion
Lemma 6. The average (k-1)-dimensional distortion of f̂ is O(log k), i.e., dist_1^{(k-1)}(f̂) = O(log k).

Proof. Define for every S ∈ \binom{X}{k} and for any s_i ∈ S, ε̂_i^{(S)} as a power of 1/2, the maximal such that d(s_i, S \ {s_i}) > r_{ε̂_i^{(S)}/2}(s_i). By definition r_{ε̂_i^{(S)}/2}(s_i) ≤ r_{ε_i^{(S)}/2}(s_i) and hence ε̂_i^{(S)} ≤ ε_i^{(S)}. Let C be an appropriate constant. The average (k-1)-dimensional distortion can be bounded as follows:

  dist_1^{(k-1)}(f̂) / C ≤ E_{S ∈ \binom{X}{k}}[ ( ∏_{i=1}^{k-1} log(1/ε_i^{(S)}) )^{1/(k-1)} ]
    ≤ E[ ( ∏_{i=1}^{k-1} log(1/ε̂_i^{(S)}) )^{1/(k-1)} ]
    ≤ E[ (1/(k-1)) Σ_{i=1}^{k-1} log(1/ε̂_i^{(S)}) ]
    = (1/(k-1)) Σ_{i=1}^{k-1} E[ log(1/ε̂_i^{(S)}) ],

using the arithmetic-geometric mean inequality and the linearity of expectation.

For every set S ∈ \binom{X}{k} let m = m(S) be the maximal integer such that for all i ∈ {0, 1, ..., k-1}, B(s_i, r_{m/n}(s_i)) ∩ S = {s_i}. That is, for every point s ∈ S the first m-1 nearest neighbors (in X) of s are not in S. Since ε̂_i^{(S)} ≥ m/(2n), then E[log(1/ε̂_i^{(S)})] ≤ E[log(n/m)] + 1, for all S ∈ \binom{X}{k} and i ∈ {0, 1, ..., k-1} (note that here the range for i includes the first index 0). We now proceed to bound E[log(n/m)]. Let A(s, t) be the event that the t-th nearest neighbor of a point s ∈ S is also in S (using a consistent lexicographic order on the points so that the t-th nearest neighbor is unique). The probability that A(s, t) occurs is exactly (k-1)/(n-1), since given that s ∈ S there are k-1 additional points to choose for S uniformly at random. Hence by a union bound

  Pr[m(S) = t] ≤ Pr[ ∪_{s∈S} A(s, t) ] ≤ Σ_{s∈S} Pr[A(s, t)] ≤ k(k-1)/(n-1).

Let h = n/k²; it follows that Pr[m(S) = t] ≤ 2/h. Hence,

  E_S[log(n/m(S))] ≤ Σ_{t=1}^{h} Pr[m(S) = t] · log(n/t) + Pr[m(S) > h] log(n/h)
    ≤ (2/h) ( h log n - Σ_{t=1}^{h} log t ) + log(k²).

Note that Σ_{t=1}^{h} log t = log(h!) ≥ h log(h/e), hence

  E[log(n/m)] ≤ 2( log n - log(n/(ek²)) ) + 2 log k = O(log k).
References

1. Abraham, I., Bartal, Y., Chan, T.-H.H., Dhamdhere, K., Gupta, A., Kleinberg, J., Neiman, O., Slivkins, A.: Metric embeddings with relaxed guarantees. In: Proceedings of the 46th Annual IEEE Symposium on Foundations of Computer Science, Washington, DC, USA, pp. 83–100. IEEE Computer Society, Los Alamitos (2005)
2. Abraham, I., Bartal, Y., Neiman, O.: Advances in metric embedding theory. In: Proceedings of the 38th Annual ACM Symposium on Theory of Computing, pp. 271–286. ACM Press, New York (2006)
3. Abraham, I., Bartal, Y., Neiman, O.: Embedding metrics into ultrametrics and graphs into spanning trees with constant average distortion. In: Proceedings of the 18th Annual ACM-SIAM Symposium on Discrete Algorithms, pp. 502–511 (2007)
4. Bartal, Y.: Probabilistic approximation of metric spaces and its algorithmic applications. In: Proceedings of the 37th Annual Symposium on Foundations of Computer Science (Burlington, VT, 1996), pp. 184–193. IEEE Comput. Soc. Press, Los Alamitos (1996)
5. Bourgain, J.: On Lipschitz embedding of finite metric spaces in Hilbert space. Israel J. Math. 52(1-2), 46–52 (1985)
6. Chan, T.-H.H., Dinitz, M., Gupta, A.: Spanners with slack. In: Proceedings of the 14th Annual European Symposium on Algorithms, London, UK, pp. 196–207. Springer, Heidelberg (2006)
7. Dinitz, M.: Compact routing with slack. In: Proceedings of the 26th Annual ACM Symposium on Principles of Distributed Computing, pp. 81–88. ACM, New York (2007)
8. Dunagan, J., Vempala, S.: On Euclidean embeddings and bandwidth minimization. In: Goemans, M.X., Jansen, K., Rolim, J.D.P., Trevisan, L. (eds.) RANDOM 2001 and APPROX 2001. LNCS, vol. 2129, p. 229. Springer, Heidelberg (2001)
9. Elkin, M., Liebchen, C., Rizzi, R.: New length bounds for cycle bases. Information Processing Letters 104(5), 186–193 (2007)
10. Feige, U.: Approximating the bandwidth via volume respecting embeddings. J. Comput. System Sci. 60(3), 510–539 (2000)
11. Gonzalez, T.F.: Clustering to minimize the maximum intercluster distance. Theoretical Computer Science 38, 293–306 (1985)
12. Gupta, A.: Improved bandwidth approximation for trees and chordal graphs. J. Algorithms 40(1), 24–36 (2001)
13. Hochbaum, D.S., Shmoys, D.B.: A best possible heuristic for the k-center problem. Math. Oper. Res. 10, 180–184 (1985)
14. Imase, M., Waxman, B.M.: Dynamic Steiner tree problem. SIAM J. Discrete Math. 4(3), 369–384 (1991)
15. Kleinberg, J., Slivkins, A., Wexler, T.: Triangulation and embedding using small sets of beacons. J. ACM 56(6), 1–37 (2009)
16. Konjevod, G., Richa, A.W., Xia, D., Yu, H.: Compact routing with slack in low doubling dimension. In: Proceedings of the 26th Annual ACM Symposium on Principles of Distributed Computing, pp. 71–80. ACM, New York (2007)
17. Krauthgamer, R., Lee, J.R., Mendel, M., Naor, A.: Measured descent: A new embedding method for finite metrics. In: 45th Annual IEEE Symposium on Foundations of Computer Science, pp. 434–443. IEEE, Los Alamitos (October 2004)
18. Krauthgamer, R., Linial, N., Magen, A.: Metric embeddings - beyond one-dimensional distortion. Discrete Comput. Geom. 31(3), 339–356 (2004)
19. Lee, J.R.: Volume distortion for subsets of Euclidean spaces. In: Proceedings of the 22nd Annual Symposium on Computational Geometry, pp. 207–216. ACM, New York (2006)
20. Rao, S.: Small distortion and volume preserving embeddings for planar and Euclidean metrics. In: Proceedings of the 15th Annual Symposium on Computational Geometry, pp. 300–306. ACM, New York (1999)
Shortest Cut Graph of a Surface with Prescribed Vertex Set

Éric Colin de Verdière

Laboratoire d'informatique, École normale supérieure, CNRS, Paris, France
[email protected]
Abstract. We describe a simple greedy algorithm whose input is a set P of vertices on a combinatorial surface S without boundary and that computes a shortest cut graph of S with vertex set P . (A cut graph is an embedded graph whose removal leaves a single topological disk.) If S has genus g and complexity n, the running-time is O(n log n+(g + |P |)n). This is an extension of an algorithm by Erickson and Whittlesey [Proc. ACM-SIAM Symp. on Discrete Algorithms, 1038–1046 (2005)], which computes a shortest cut graph with a single given vertex. Moreover, our proof is simpler and also reveals that the algorithm actually computes a minimum-weight basis of some matroid.
1 Introduction
A basic tool to compute with topological surfaces is the notion of decomposition: cutting a surface S into topologically simpler elements, so that S is entirely described by how these elements are assembled together. In the last decade, algorithms for computing several types of decompositions have been given. The most important type of decomposition is the notion of cut graph: a graph G embedded (drawn without crossing) on S such that S \ G is a topological disk. When the surface is endowed with a metric, it is an asset to be able to compute a shortest decomposition of some kind. From a theoretical perspective, this provides a “canonical” decomposition of the surface. For applications, like parameterization or texture mapping, it is often necessary to make the surface planar, by cutting it along curves. The way the curves are chosen affects the quality of the parameterization; so shortest curves are desirable (where the “metric” can also be adapted to favor cutting along areas of high curvature, for example). See, e.g., the discussion by Erickson and Har-Peled [13] and references therein. In this paper, we are interested in computing a shortest cut graph of a surface with a given vertex set. Before presenting our result, we survey related works. 1.1
Previous Work
Most of the previous works in the algorithmic study of curves on surfaces consider the combinatorial surface model [8, 9], where the input surface S comes with a
Supported by the Agence Nationale de la Recherche under the Triangles project of the Programme blanc ANR-07-BLAN-0319.
representation of a fixed graph embedding M . By definition, each edge of each subsequent graph embedding G is required to be a walk in M ; however, these walks may share edges and vertices of M : it is only required that G be the limit of a sequence of real graph embeddings on the surface. Each edge of M bears a non-negative weight, used to measure the length of the graph embeddings. (See Section 2.4 for a more detailed description of this model.) Erickson and Har-Peled [13] are the first to study the problem of computing a shortest cut graph of a surface. They prove that the problem is NP-hard, by reduction to a minimum Steiner tree problem, and provide a polynomial-time approximation algorithm. So far, the only known algorithm to compute a shortest decomposition of a surface is that of Erickson and Whittlesey [14]. Specifically, a system of loops of a surface S without boundary is a cut graph G with a single vertex, its basepoint. Relying on ideas by Eppstein on tree-cotree decompositions [12], Erickson and Whittlesey provide a greedy algorithm to compute a shortest system of loops with a given basepoint on an orientable surface without boundary, with runningtime O(n log n + gn), where n is the complexity of the combinatorial surface and g is its genus. (Actually, O(n log n) is sufficient to compute an implicit description of the system of loops, which has O(gn) complexity.) In contrast, it is striking that no algorithm has been found to compute, even approximately, shortest decompositions of other kinds, like canonical systems of loops [18], pants decompositions [10], or octagonal decompositions [8]. The algorithm by Erickson and Whittlesey [14] has been used as a subroutine in other problems, like computing shortest non-contractible or non-separating cycles [2, 17]. Two other structural properties of the shortest system of loops prove useful in this context: each of the loops is as short as possible in its homotopy class, and is the concatenation of two shortest paths plus an edge. 1.2
Our Result
There is a natural generalization of this greedy algorithm which, instead of a system of loops, computes a cut graph whose vertex set is exactly a prescribed set of k points. The algorithm runs in O(n log n) time plus the output size, which is O((g + k)n). In this paper, we prove that this generalization (on a possibly non-orientable surface) computes a shortest cut graph on S with that vertex set. This should be put in perspective with the NP-hardness of computing a shortest cut graph of a surface [13]: our result implies that this is easy once the vertex set has been “guessed”. Compared to the paper by Erickson and Whittlesey [14], our proof is also simpler and more natural. In particular, to prove that a greedy algorithm is optimal, a generic strategy is to express the underlying combinatorial structure as a matroid. Erickson and Whittlesey show that the partial homotopy bases and the partial systems of loops do not form matroids, so they prove optimality of their result by more indirect means. We show, however, that their algorithm and its generalization fit within the matroid framework: they compute a minimum-weight independent set in some matroid, and that independent set turns out to be a shortest cut graph.
The main additional tool we need to introduce for our extension is relative homology, which provides an algebraic condition on whether a given set of paths separates the surface. We give an ad-hoc, self-contained description of relative homology that is sufficient for our purposes. In contrast, for the shortest system of loops [14], only standard homology was required. When S has at least one boundary component, a variant of the problem is the computation of a shortest system of arcs: a family of simple, pairwise disjoint paths whose endpoints are on the boundary of the surface and that cut S into a disk. Our result implies that we can also compute a shortest system of arcs. While such systems of arcs have already been described and used for various purposes, such as computing shortest splitting cycles [5], shortest curves with prescribed homotopy [8], or minimum cuts [4], it was not known that they were as short as possible. 1.3
Overview
At a high level, the algorithm for computing a shortest cut graph with prescribed vertex set P of a surface S is simple enough to be presented here. We begin by growing disks around all vertices of P simultaneously, and consider the curves on which these disks collide. This is essentially the Voronoi diagram of P , except that both sides of a given “Voronoi” edge may be incident to the same cell (Figure 1(a)). Define the weight of a “Voronoi” edge to be the length of the dual “Delaunay” edge (the shortest path crossing that “Voronoi” edge and connecting the two points in P on either side of it). Compute a maximum spanning tree of the “Voronoi” diagram with respect to these weights (Figure 1(b)). Return the “Delaunay” edges whose dual “Voronoi” edges are not in that maximum spanning tree (Figure 1(c),(d)). The rest of the paper is organized as follows. We describe some necessary background on matroids and topology of surfaces in Section 2. Then we state and prove our main result on shortest cut graphs with prescribed vertex set (Section 3). Finally, we discuss a few extensions.
2 Preliminaries

2.1 Matroids and Greedy Algorithms
A matroid is a pair (S, Σ) where S is a finite set and Σ is a non-empty collection of subsets of S satisfying the two following conditions: (1) if I ∈ Σ and J ⊆ I, then J ∈ Σ; (2) if I, J ∈ Σ and |I| < |J|, then I ∪ {z} ∈ Σ for some z ∈ J \ I. For example, if S is a finite vector space and Σ is the collection of all linearly independent subsets of S, then (S, Σ) is a matroid. We refer to the elements in Σ as the independent sets. A basis of (S, Σ) is an inclusionwise maximal independent set. By (2), all bases have the same cardinality. Consider a weight function w : S → R. For every subset I of S, define w(I) = Σ_{z∈I} w(z). It is well-known [11, Sect. 16.4] that the following greedy algorithm finds a basis I of (S, Σ) minimizing w(I): maintain an initially empty
independent set I; repeatedly choose y ∈ S \ I with I ∪ {y} in Σ and with w(y) as small as possible; stop when no such y exists.

Fig. 1. Overview of the algorithm, on a torus. (a): The vertex set P and its "Voronoi" diagram; light curves lie on the back of the surface. (b): In bold, a maximum spanning tree (in this case, a path) of that "Voronoi" diagram, where the weight of an edge is the length of the dual "Delaunay" edge. Most of the path is on the back of the torus. (c): Here, the "Voronoi" edges not in the maximum spanning tree are replaced by their dual "Delaunay" edges. (d): Those "Delaunay" edges form the shortest cut graph with vertex set P.
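As a minimal illustration of this generic greedy procedure (not code from the paper), the following Python sketch computes a minimum-weight basis given only a membership oracle for Σ; the names are ours.

```python
def greedy_min_weight_basis(ground_set, is_independent, weight):
    """Greedy minimum-weight basis of a matroid (ground_set, Sigma).

    is_independent(I) is an oracle deciding whether the frozenset I is in Sigma;
    weight(z) gives the weight of element z.  Scanning the elements by
    increasing weight and keeping those that preserve independence is
    equivalent to repeatedly choosing the cheapest extendable element.
    """
    basis = set()
    for z in sorted(ground_set, key=weight):
        if is_independent(frozenset(basis | {z})):
            basis.add(z)
    return basis
```

For the graphic matroid of a connected graph (independent sets are the forests), this specializes to Kruskal's minimum spanning tree algorithm.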
2.2 Surfaces and Embeddings
We recall here standard definitions on topology of surfaces. For general background on topology, see for example Stillwell [19] or Hatcher [16]. In this paper, unless noted otherwise, S is a compact, connected surface without boundary; g denotes its Euler genus. Thus, if S is orientable, g ≥ 0 is even, and S is a sphere with g/2 handles attached; if S is non-orientable, S is a sphere with g ≥ 1 disks replaced by M¨obius strips. We consider curves drawn on S . A path p is a continuous map from [0, 1] to S ; it is simple if it is one-to-one, except that its endpoints p(0) and p(1) may coincide. Similarly, we say that two paths are disjoint if their images are disjoint, except that some of their endpoints may coincide. A loop with basepoint x is a path with both endpoints equal to x. A cycle is a continuous map from the unit circle S 1 to S . We often identify a curve with its image on S . Two paths p and q are homotopic if, informally, there is a continuous deformation between them that keeps their endpoints fixed. More precisely, a homotopy is a continuous map h : [0, 1] × [0, 1] → S such that h(0, ·) = p, h(1, ·) = q, and such that h(·, 0) and h(·, 1) are constant maps. Similarly, a homotopy between two cycles γ and δ is a continuous map h : [0, 1] × S 1 such that h(0, ·) = γ and h(1, ·) = δ. A loop or cycle homotopic to a constant loop or cycle is contractible.
An embedding of a graph G on S is a “crossing-free” drawing of G on S : it maps the vertices of G to distinct points on S and the edges of G to simple paths on S that intersect only at common endpoints. Often, we will identify the graph G with its embedding on S . A face of an embedding of G on S is a connected component of S minus (the image of) G. A graph is cellularly embedded on S if every face of the graph is an open disk. Given a graph embedding G, we denote by S \\G the surface S cut along G. In particular, if G is cellularly embedded, then S \\G is a disjoint union of closed disks. A cut graph of S is a graph G embedded on S whose unique face is a disk (sometimes called a polygonal schema of S ). The proof of the following lemma is omitted (the first point follows from Euler’s formula and the second one from the classification of surfaces with boundary): Lemma 1. The following properties hold: i. A cut graph with k vertices has g + k − 1 edges. ii. Let G be a graph embedded on S . Then every face of G is a disk if and only if no edge can be added to G without increasing its number of faces or creating a new vertex. 2.3
Homology
In this paper, we will use 1-dimensional singular homology for graphs embedded on surfaces without boundary, over Z/2Z, relatively to a finite set of points [16, p. 115]. For convenience, we give a non-standard, self-contained presentation of this tool that is sufficient for our purposes. Note that our notion of homology differs from the one used in the previous papers in the area [1–3, 5, 6, 13, 14, 17]: here, relative homology is needed. Let P be a finite set of points on S. A P-path is a path intersecting P exactly at its endpoints. The definition of non-oriented P-path is similar, except that one P-path and its reversal are considered to be the same non-oriented P-path. If p is an arbitrary non-oriented P-path, we denote by p→ and p← the two oriented versions of p. A homology cycle is a finite set of non-oriented P-paths. The homology cycles form a vector space over the field Z/2Z: addition is symmetric difference, multiplication by zero gives the empty set, and multiplication by one is the identity. Let (p_1, ..., p_m) be a sequence of non-oriented P-paths. Assume that there exists (τ_1, ..., τ_m) ∈ {→, ←}^m such that the concatenation of p_1^{τ_1}, ..., p_m^{τ_m} is a contractible loop ℓ. Then {p_1} + ··· + {p_m} (the set of non-oriented P-paths appearing an odd number of times in (p_1, ..., p_m)) is called a homology boundary. The homology boundaries form a vector subspace of the set of homology cycles. (Proof: Let S and S′ be two homology boundaries. Let ℓ be a contractible loop as in the definition, witnessing the fact that S is a homology boundary, and similarly ℓ′ for S′. Let q be a P-path from the basepoint of ℓ to the basepoint of ℓ′. Then the concatenation of ℓ, q, ℓ′, and the reverse of q is a contractible loop witnessing the fact that the symmetric difference of S and S′ is a homology boundary.)
Two homology cycles are homologous if their sum (symmetric difference) is a homology boundary. Thus, the homology cycles are partitioned into homology classes. Together, the homology classes form the homology space; formally, it is the vector space that is the quotient of the space of homology cycles Z by the space of homology boundaries B. Although Z and B have infinite dimension, it will turn out that the dimension of their quotient is finite. Given a P -path p, we denote by [p] its homology class (more precisely, the homology class of {p}, where p is the non-oriented version of p). If p and p are homotopic P -paths, then [p] = [p ]: homology is a coarser relation than homotopy. 2.4
Endowing Surfaces with a Metric
Our algorithm needs a way to measure lengths of curves and graph embeddings. Like most earlier works in the area, our result is described in the combinatorial surface model [9, 10]. Actually, most of the description below is in the equivalent, dual, cross-metric setting [8]. See Colin de Verdi`ere and Erickson [8] for a more precise discussion, and also the connection with the informal description given in the introduction. A combinatorial surface (S , M ) is a surface S together with a fixed graph M cellularly embedded on S ; each edge of M has a non-negative weight. Let M ∗ be the dual graph of M , also embedded on S : it has a vertex inside each face of M ∗ , and, for every edge e of M , an edge e∗ that crosses e exactly once and no other edge of M . The weight of e∗ equals, by definition, the weight of e. We only consider graph embeddings (in particular, curves) that are regular with respect to M ∗ ; namely, every vertex of the embedding is in a face of M ∗ , and every edge of the embedding intersects the edges of M ∗ at finitely many points in their relative interior, where a crossing occurs. The length of a graph embedding on (S , M ) is the sum of the weights of the edges of M ∗ crossed by the curve, counted with multiplicity. A graph embedding can be represented combinatorially by its arrangement with M ∗ on S . The complexity of a combinatorial surface is the total number of vertices, edges, and faces of M (or, equivalently, M ∗ ). In this paper, (S , M ) is a combinatorial surface, and M ∗ is the dual graph of M .
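As a tiny illustration (not from the paper) of how lengths are measured in this model: if a regular curve is represented by the sequence of edges of M* it crosses, its length is just the weighted crossing count.

```python
def curve_length(crossed_dual_edges, weight):
    """Length of a curve in the cross-metric model: the sum, with multiplicity,
    of the weights of the dual edges it crosses.

    crossed_dual_edges: sequence of edges of M* crossed by the curve,
    listed in order and with repetitions; weight(e) gives the weight of e."""
    return sum(weight(e) for e in crossed_dual_edges)
```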
3 Shortest Cut Graph with Prescribed Vertex Set
Our main result is the following: Theorem 2. Let (S , M ) be a combinatorial surface, possibly non-orientable, without boundary. Let g be its genus and n be its complexity. Let P be a set of k vertices of M . We can compute a shortest cut graph of (S , M ) with vertex set (exactly) P in O(n log n + (g + k)n) time. In particular, if S is orientable and P is a single vertex, this is the result by Erickson and Whittlesey [14, Theorem 3.9]. Again, we note that the O((g + k)n) term is present because it is the worst-case complexity of the output, but an implicit representation of the solution can be found in O(n log n) time.
3.1 Homology Bases and Cut Graphs
In this paper, a homology basis is a set of P -paths {p1 , . . . , pm } such that [p1 ], . . . , [pm ] form a basis of the homology vector space of S with respect to P . (Note that this definition differs from the one by Erickson and Whittlesey [14].) We will see that the algorithm for Theorem 2 computes a shortest homology basis. The connection with cut graphs is the following. Proposition 3. Let G be a graph embedded on S whose vertices belong to P . The edges of G form a basis of the homology vector space if and only if G is a cut graph with vertex set P . This result is standard in the simpler case of Erickson and Whittlesey [14], and is indeed implicit in their work, but not in our case, where relative homology is used. The proof relies on the three following lemmas. The first two lemmas can be proved directly if one assumes the equivalence between simplicial and singular homology, but we provide alternate proofs that do not rely on this equivalence. Lemma 4. If G has at least two faces, then the edges of G are homologically dependent. Proof. Let f be an arbitrary face of G, and let S be the set of edges of G incident exactly once to f . Since G has at least two faces, the set S is non-empty. We next prove that S is a homology boundary, which concludes. By Lemma 1(ii), we can find a (possibly empty) set {p1 , . . . , pm } of pairwise disjoint simple P -paths such that f \\{p1 , . . . , pm } is a disk. In other words, if we add to the edge set of G the P -paths p1 , . . . , pm , obtaining a new graph G with vertex set in P , then f corresponds to a single face f of G that is a disk. We may follow the boundary of f , recording the P -paths in order along that boundary. This boundary is contractible since f is a disk. Furthermore, the set of non-oriented P -paths appearing an odd number of times on the boundary of f is exactly S, so S is a homology boundary. Lemma 5. If G has a single face, then the edges of G are homologically independent. Proof. Assume that a non-empty subset E of edges of G forms a homology boundary: some P -paths can be concatenated to form a contractible loop , such that E is exactly the set of non-oriented P -paths appearing an odd number of times in this concatenation. We actually view as a contractible cycle γ. Let e be a fixed edge of G; we shall prove that e does not belong to E . Since this is valid for every e, this proves E = ∅, a contradiction. Because G has a single face, there exists a cycle δ that crosses e once, and crosses no other edge of G. Without loss of generality, we may assume that γ and δ meet at a finite number of points, where they actually cross. It is wellknown [15, p. 79] that the parity of the number of intersection points between γ and δ depends only on the homotopy classes of γ and δ: intuitively, changing γ
continuously only creates or removes pairs of intersection points with δ. Since γ is contractible, γ must thus cross δ an even number of times. Recall that E is the set of non-oriented P -paths appearing an odd number of times in γ. Hence, the P -paths in E cross δ an even number of times in total. However, the only edge of E crossing δ is e, since E is a subset of edges of G. Thus e crosses δ an even number of times, and does not belong to E . Lemma 6. If G is a cut graph with vertex set P , then the edges of G generate the homology vector space. Proof. It suffices to prove that every P -path p is homotopic to the concatenation of some edges of G. We can assume that p crosses G regularly. So p is the concatenation of paths p1 , . . . , pm intersecting G exactly at their endpoints. (Recall that every point in P is a vertex of G.) Since S \\G is a disk, every path pi is homotopic to a path in the image of G. Thus p is homotopic to a path in the image of G, and therefore to a concatenation of edges of G. Proof of Proposition 3. By Lemmas 5 and 6, if G is a cut graph with vertex set P , then its edges form a basis of the homology vector space. Conversely, assume that the edges of G form a basis of the homology vector space; then G has a single face by Lemma 4. If G is not a cut graph with vertex set P , then an edge can be added to G while keeping S \\G connected, but without changing its vertex set (Lemma 1(ii)); these edges are homologically independent (Lemma 5). So the edges of G did not form an inclusionwise maximal homologically independent set; this is a contradiction. 3.2
Algorithm (A): Shortest Homology Basis
The algorithm for Theorem 2 can be formulated in the following abstract form (let us call it (A)): Maintain an initially empty set I of non-oriented P -paths. At each step, add some shortest P -path in I homologically independent with the P -paths already in I. Stop when no such path exists. Proposition 7. (A) computes a shortest homology basis. Proof. Let S be the set of all homology classes, and Σ be the set of subsets of S that are linearly independent. Define the weight of s ∈ S to be the minimum length of a P -path in s. (Of course, only P -paths that are regular with respect to M ∗ are considered. Furthermore, the weight is ∞ if there is no P -path in s.) Consider the following greedy algorithm: Let I be an initially empty set of homology classes. Iteratively add to I a minimum-weight homology class linearly independent with those already in I; stop when no such homology class exists. Since S is a vector space of finite dimension (this follows from Proposition 3), (S, Σ) is a matroid, and this algorithm gives a minimum-weight basis of the homology vector space. The regular P -paths redundantly generate the homology vector space, so this algorithm only chooses homology classes of P -paths. It therefore coincides with the algorithm (A).
3.3 Algorithm (A′): Shortest Cut Graph
In this section, we prove that, under an additional restriction, the algorithm (A) computes a shortest cut graph with vertex set P . Let F be a spanning forest of shortest paths in M , starting from every vertex of P simultaneously. In other words, each connected component of F is a tree containing exactly one vertex of P , and F contains a shortest path from every vertex of M to its nearest vertex in P . Let C ∗ be the graph obtained from M ∗ by removing each edge e∗ whose primal edge e belongs to F ; this graph C ∗ is the “Voronoi” diagram we alluded to in the introduction. Lemma 8. S \\C ∗ is a set of closed disks, each containing a single vertex in P . Proof. F contains all vertices of M , and thus S \\C ∗ is obtained by gluing the faces of M ∗ along the edges of F ∗ . For each tree in the forest F , we attach the corresponding faces of M ∗ according to this tree. Since attaching disks together in a tree-like fashion gives a disk, we indeed obtain that S \\C ∗ is a set of closed disks. Furthermore, every such disk contains a point in P . For each edge e∗ in C ∗ , let σ(e∗ ) be a shortest P -path crossing e∗ exactly once and no other edge of C ∗ . Thus, each part of σ(e∗ ) on either side of its crossing with e∗ is a shortest path. So we can, without loss of generality, assume that all the paths σ(e∗ ) are simple and pairwise disjoint. A P -path is primitive if it is of the form σ(e∗ ) for some edge e∗ of C ∗ . The following proposition proves a structural property on the P -paths chosen by the algorithm. Similar arguments have been used in recent papers [1, 2, 14] while the core of the idea, the 3-path condition, is older [20]. Proposition 9. At each step, we may assume that (A) chooses primitive paths. Proof. Every P -path computed by (A) has to cross C ∗ at least once, for otherwise it is contractible and thus homologically trivial. Assume some path p crossing edges e∗1 , . . . , e∗m of C ∗ is chosen at some step of the algorithm (A). This path is homotopic to the concatenation of σ(e∗1 ), . . . , σ(e∗m ), since the faces of C ∗ are disks. Thus [p] = [σ(e∗1 )]+ . . .+ [σ(e∗m )]. Since p is homologically independent with the already chosen P -paths, one of the σ(e∗i ) must be homologically independent with these P -paths. Moreover, σ(e∗i ) is no longer than p, since p crosses e∗i and since σ(e∗i ) is a shortest P -path among those crossing e∗i . Therefore, σ(e∗i ) can be chosen by (A). We call (A ) the version of algorithm (A) that always chooses primitive paths. In particular, (A ) chooses P -paths that are simple and pairwise disjoint. Corollary 10. Algorithm (A ) computes a shortest cut graph with vertex set P . Proof. Since the P -paths chosen by (A ) are simple and pairwise disjoint, (A ) computes a homology basis that is a graph with vertex set included in P , and thus a cut graph with vertex set P (Proposition 3). Also by Proposition 3, any shorter cut graph with vertex set P would be a shorter homology basis, which is impossible (Proposition 7).
Fig. 2. The retraction in the proof of Lemma 11. In this example, E ∗ = {e∗1 , e∗2 }.
3.4 Algorithm (A′′): Concrete Algorithm
In this section, we describe an algorithm (A ), already sketched in the introduction, that will be shown to give the same result as (A ) and that can be efficiently implemented. The following lemma was noted and used by Erickson and Whittlesey [14, Section 3.4] in the particular case where P is a single vertex. Lemma 11. Let E ∗ be a set of edges of C ∗ . Then C ∗ − E ∗ is connected if and only if S \ (P ∪ σ(E ∗ )) is connected. (σ(E ∗ ) is the set {σ(e∗ ) | e∗ ∈ E ∗ }.) Proof. We will actually show that Y := C ∗ − E ∗ is a deformation retract of X := S \ (P ∪ σ(E ∗ )): there is a continuous map ϕ : [0, 1] × X → X such that ϕ(0, ·) is the identity, ϕ(1, ·) has its image in Y and is the identity when restricted to Y . This in particular implies the result. For every edge e∗ of C ∗ , let τ (e∗ ) be the intersection of e∗ with σ(e∗ ). Consider a face f of C ∗ . We can retract f \(P ∪σ(E ∗ )) onto (∂f )\τ (E ∗ ) (Figure 2). Gluing these retractions together, we obtain a deformation retract of S \ (P ∪ σ(E ∗ )) to C ∗ \ τ (E ∗ ). This in turn retracts to C ∗ − E ∗ . (A ) can be described as follows: For every edge e∗ of C ∗ , define the weight of e∗ to be the length of the P -path σ(e∗ ). Compute a maximum spanning tree T ∗ of C ∗ with respect to these weights. Return {σ(e∗ ) | e∗ ∈ C ∗ \ T ∗ }. Proof of Theorem 2. We prove that Algorithm (A ) is equivalent to (A ), which computes a shortest cut graph (Corollary 10). In light of Proposition 3, the algorithm (A ) can be rephrased as follows: Maintain an initially empty set of edges I ∗ of C ∗ ; iteratively add to I ∗ the minimum-weight edge e∗ of C ∗ − I ∗ such that S \ σ(I ∗ ∪ {e∗ }) is connected. This latter condition is equivalent to having C ∗ − (I ∗ ∪ {e∗ }) connected (Lemma 11). Thus (A ) iteratively removes minimum-weight edges of C ∗ while keeping it connected. No such edge belongs to a maximum spanning tree of C ∗ [11, Exercise 23.1-5], so (A ) computes a maximum spanning tree and is therefore equivalent to (A ). There remains to analyze the complexity of (A ). The graph C ∗ , together with the weights of its edges, can be computed in O(n log n) time using Dijkstra’s
The maximum spanning tree T∗ can also be computed in O(n log n) time using any textbook algorithm [11, Ch. 23]. The edges in C∗ − T∗ form an implicit description of the cut graph; to compute it explicitly, one has to build the paths σ(e∗) for each edge e∗ ∈ C∗ − T∗. Each such P-path has O(n) complexity, and the number of P-paths is g + k − 1 (Lemma 1(i)), so the explicit cut graph can be output in O((g + k)n) additional time. In particular, each edge of a shortest cut graph with vertex set P enjoys the following useful properties: it is a primitive P-path, and it is as short as possible in its homology class (and therefore in its homotopy class).
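To make the structure of (A″) concrete, the following is a minimal sketch in Python. It assumes that the dual graph C∗ has already been built and that the length of the primitive path σ(e∗) is known for every dual edge e∗; the use of the networkx library and the helper names `cut_graph_dual_edges` and `sigma_length` are illustrative assumptions, not part of the paper.

```python
# Sketch of algorithm (A''): keep the dual edges outside a maximum
# spanning tree of C*, weighted by the lengths of their primitive paths.
import networkx as nx

def cut_graph_dual_edges(dual_graph: nx.Graph, sigma_length) -> list:
    """dual_graph: the graph C*; sigma_length: maps a dual edge (u, v)
    to the length of the primitive P-path sigma(e*)."""
    weighted = nx.Graph()
    for u, v in dual_graph.edges():
        weighted.add_edge(u, v, weight=sigma_length((u, v)))
    # Maximum spanning tree T* with respect to the sigma-lengths.
    tree = nx.maximum_spanning_tree(weighted, weight="weight")
    tree_edges = {frozenset(e) for e in tree.edges()}
    # The edges of C* - T* implicitly describe the shortest cut graph:
    # the cut graph is the union of the paths sigma(e*) over these edges.
    return [e for e in weighted.edges() if frozenset(e) not in tree_edges]
```

Expanding each returned dual edge into its path σ(e∗) then yields the g + k − 1 edges of the cut graph.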
4 Conclusion
We sketch a few extensions of our result. It is sometimes useful to cut a surface S with b boundaries into a disk along a system of arcs, whose endpoints are on the boundary of S [4, 5, 8]. It follows from Theorem 2 that the algorithm of these papers computes a shortest such system of arcs. Roughly, it works as follows: Fill each boundary of S with a disk, obtaining a surface without boundary S′; let P be a set of points, one inside each disk; compute a shortest cut graph with vertex set P on S′, where crossing a boundary of S corresponds to a large (symbolically infinite) weight; the restriction of that cut graph to S is the shortest system of arcs. The running time is O(n log n + (g + b)n).

As also noted by Erickson and Whittlesey [14], the algorithm extends to piecewise-linear (PL) surfaces, obtained by assembling Euclidean polygons; the running time becomes O(n² + (g + k)n), because an algorithm to compute shortest paths on such surfaces has to be used [7]. Since C∗ is a graph embedded on S with k faces, Euler's formula implies that it has O(g + k) edges (if we iteratively remove vertices of degree one together with their incident edges, which belong to any spanning tree of C∗ anyway, and vertices of degree two, merging the two incident edges). Since shortest paths on PL surfaces may overlap for some time and then diverge, the output of the algorithm is a set of non-crossing paths, which can be transformed into a graph embedding by an arbitrarily small perturbation. Actually, everything extends to the case of Riemannian surfaces, except, of course, that we cannot in general compute shortest paths and the graph C∗.

Acknowledgments. I would like to thank Jeff Erickson for helpful discussions, and Francis Lazarus for comments on a preliminary version of this paper.
References
1. Cabello, S., Colin de Verdière, É., Lazarus, F.: Finding shortest non-trivial cycles in directed graphs on surfaces. In: Proc. ACM Symp. on Computational Geometry, pp. 156–165 (2010)
2. Cabello, S., Colin de Verdière, É., Lazarus, F.: Output-sensitive algorithm for the edge-width of an embedded graph. In: Proc. ACM Symp. on Computational Geometry, pp. 147–155 (2010)
3. Cabello, S., Mohar, B.: Finding shortest non-separating and non-contractible cycles for topologically embedded graphs. Disc. Comput. Geom. 37(2), 213–235 (2007)
4. Chambers, E.W., Erickson, J., Nayyeri, A.: Minimum cuts and shortest homologous cycles. In: Proc. ACM Symp. on Computational Geometry, pp. 377–385 (2009)
5. Chambers, E.W., Colin de Verdière, É., Erickson, J., Lazarus, F., Whittlesey, K.: Splitting (complicated) surfaces is hard. Comput. Geom.: Theory Appl. 41(1-2), 94–110 (2008)
6. Chambers, E.W., Erickson, J., Nayyeri, A.: Homology flows, cohomology cuts. In: Proc. ACM Symp. on Theory of Computing, pp. 273–282 (2009)
7. Chen, J., Han, Y.: Shortest paths on a polyhedron. Int. J. Comput. Geom. Appl. 6, 127–144 (1996)
8. Colin de Verdière, É., Erickson, J.: Tightening non-simple paths and cycles on surfaces. In: Proc. ACM-SIAM Symp. on Discrete Algorithms, pp. 192–201 (2006)
9. Colin de Verdière, É., Lazarus, F.: Optimal system of loops on an orientable surface. Disc. Comput. Geom. 33(3), 507–534 (2005)
10. Colin de Verdière, É., Lazarus, F.: Optimal pants decompositions and shortest homotopic cycles on an orientable surface. J. ACM 54(4), Article No. 18 (2007)
11. Cormen, T.H., Leiserson, C.E., Rivest, R.L., Stein, C.: Introduction to algorithms. MIT Press, Cambridge (2001)
12. Eppstein, D.: Dynamic generators of topologically embedded graphs. In: Proc. ACM-SIAM Symp. on Discrete Algorithms, pp. 599–608 (2003)
13. Erickson, J., Har-Peled, S.: Optimally cutting a surface into a disk. Disc. Comput. Geom. 31(1), 37–59 (2004)
14. Erickson, J., Whittlesey, K.: Greedy optimal homotopy and homology generators. In: Proc. ACM-SIAM Symp. on Discrete Algorithms, pp. 1038–1046 (2005)
15. Guillemin, V., Pollack, A.: Differential topology. Prentice-Hall, Englewood Cliffs (1974)
16. Hatcher, A.: Algebraic topology. Cambridge University Press, Cambridge (2002), http://www.math.cornell.edu/~hatcher/
17. Kutz, M.: Computing shortest non-trivial cycles on orientable surfaces of bounded genus in almost linear time. In: Proc. ACM Symp. on Computational Geometry, pp. 430–438 (2006)
18. Lazarus, F., Pocchiola, M., Vegter, G., Verroust, A.: Computing a canonical polygonal schema of an orientable triangulated surface. In: Proc. ACM Symp. on Computational Geometry, pp. 80–89 (2001)
19. Stillwell, J.: Classical topology and combinatorial group theory, 2nd edn. Springer, Heidelberg (1993)
20. Thomassen, C.: Embeddings of graphs with no short noncontractible cycles. J. Comb. Theory, Series B 48(2), 155–177 (1990)
Induced Matchings in Subcubic Planar Graphs

Ross J. Kang¹, Matthias Mnich², and Tobias Müller³

¹ Durham University, Durham, United Kingdom
[email protected]
² Technische Universiteit Eindhoven, Eindhoven, The Netherlands
[email protected]
³ Centrum Wiskunde & Informatica (CWI), Amsterdam, The Netherlands
[email protected]
Abstract. We present a linear-time algorithm that, given a planar graph with m edges and maximum degree 3, finds an induced matching of size at least m/9. This is best possible.
1 Introduction
For a graph G = (V, E), an induced matching is a set M ⊆ E of edges such that the graph induced by the endpoints of M is a disjoint union of edges. Put otherwise, a shortest path in G between any two edges in M has length at least 2. In this article, we prove that every planar graph with maximum degree 3 has an induced matching of size at least |E(G)|/9 (which is best possible), and we give a linear-time algorithm that finds such an induced matching. The problem of computing the size of a largest induced matching was introduced in 1982 by Stockmeyer and Vazirani [15] as a variant of the maximum matching problem. They motivated it as the “risk-free” marriage problem: find the maximum number of married couples such that no married person is compatible with a married person other than her/his spouse. Recently, the induced matching problem has been used to model the capacity of packet transmission in wireless ad hoc networks, under interference constraints [2]. It was already shown by Stockmeyer and Vazirani that the maximum induced matching problem is NP-hard even for quite a restricted class of graphs: bipartite graphs of maximum degree 4. Other classes in which this problem is NP-hard include that of planar bipartite graphs and of line graphs. Despite these discouraging negative results, there is large body of work showing that the maximum induced matching number can be computed in polynomial time in other classes of graphs, e.g. trees, chordal graphs, cocomparability graphs, asteroidal-triple free
Research partially supported by the Engineering and Physical Sciences Research Council (EPSRC), grant EP/G066604/1. Research partially supported by the Netherlands Organisation for Scientific Research (NWO), grant 639.033.403. Research partially supported by a VENI grant from Netherlands Organization for Scientific Research (NWO).
graphs, graphs of bounded cliquewidth. See Duckworth, Manlove and Zito [4] for a survey of and references to these complexity results. Since our main focus in this paper is the class of planar graphs of maximum degree 3, we point out that Lozin [10] showed that the maximum induced matching problem is NP-hard for this class; on the other hand, the problem admits a polynomial-time approximation scheme for subcubic planar graphs [4]. There have been recent efforts to determine the parameterised complexity of the maximum induced matching problem. In general, the problem of deciding if there is an induced matching of size k is W[1]-hard with respect to k [13]. It is even W[1]-hard for the class of bipartite graphs, as shown by Moser and Sikdar [12]. Therefore, the maximum induced matching problem is unlikely to be in FPT. Consult the monograph of Niedermeier [14] for a recent detailed account of fixed-parameter algorithms. On the positive side, Moser and Sikdar showed that the problem is in FPT for the class of planar graphs as well as for the class of bounded degree graphs. Notably, by examining a greedy algorithm, they showed that for graphs of maximum degree at most 3 the maximum induced matching problem has a problem kernel of size at most 26k [12]. Furthermore, Kanj et al. [9], using combinatorial methods to bound the size of a largest induced matching in twinless planar graphs, contributed an explicit bound of 40k on kernel size for the planar maximum induced matching problem; this was subsequently improved to 28k by Erman et al. [5]. (A graph is twinless if it contains no pair of vertices both having the same neighbourhood.) We provide a result similar in spirit to the last-mentioned results. In particular, we promote the use of a structural approach to derive explicit kernel size bounds for planar graph classes. Our main result relies on graph properties proven using a discharging procedure. Recall that the discharging method was developed to establish the famous Four Colour Theorem. Theorem 1. Every planar graph of maximum degree 3 with m edges has an induced matching of size at least m/9, and such a matching can be found in time linear in the number of vertices. Corollary 1. Every 3-regular planar graph with n vertices has an induced matching of size at least n/6. Theorem 1 implies that the problem of determining if a subcubic planar graph has an induced matching of size at least k has a problem kernel of size at most 9k. To see this, here is the kernelisation: take as input G = (V, E); if k ≤ |E|/9, then answer “yes” and produce an appropriate matching by way of Theorem 1; otherwise, |E| < 9k and we have obtained a problem kernel with fewer than 9k edges. Similarly, for cubic planar graphs, Theorem 1 implies a problem kernel of size at most 6k. Our result gives lower bounds on the maximum induced matching number for subcubic or cubic planar graphs that are best possible: consider the disjoint union of multiple copies of the triangular prism. The condition on maximum degree in Theorem 1 cannot be weakened: the disjoint union of multiple copies of the octahedron is a 4-regular planar graph
with m edges that has no induced matching with more than m/12 edges. Also, the condition on planarity cannot be dropped: the disjoint union of multiple copies of the graph in Figure 1 is a subcubic graph with m edges that has no induced matching with more than m/10 edges.
Fig. 1. A subcubic graph with no induced matching of size 2
There has been considerable interest in induced matchings due to their intimate connection with the strong chromatic index. Recall that a strong edge k-colouring of G is a proper k-colouring of the edges such that no edge is adjacent to two edges of the same colour, i.e. a partition of the edge set into k induced matchings. If G has m edges and admits a strong edge k-colouring, then a largest induced matching in G has size at least m/k. Thus, Theorem 1 is related to the longstanding Erdős–Nešetřil conjecture (cf. Faudree et al. [6,7], Chung et al. [3]). Our result lends support to a conjecture of Faudree et al. [7], claiming that every planar graph of maximum degree 3 is strongly edge 9-colourable. This conjecture has an earlier origin: it is implied by one case of a thirty-year-old conjecture of Wegner [16], asserting that the square of a planar graph with maximum degree 4 can be 9-coloured. (Observe that the line graph of a planar graph with maximum degree 3 is a planar graph with maximum degree 4.) Independently, Andersen [1] and Horák, Qing and Trotter [8] demonstrated that every subcubic graph has a strong edge 10-colouring, which implies that every subcubic graph with m edges has an induced matching of size at least m/10. The remainder of this paper is organised as follows. In Section 2 we introduce necessary terminology. We exhibit the linear-time algorithm in Section 3 and provide the details of the discharging procedure in Section 4.
2 Notation and Preliminaries
Recall that a plane graph is a planar graph for which an embedding in the plane is fixed. The algorithm that we shall present in Section 3 does not need any information about the embedding of the input graph. However, later on, Lemmas 2 and 3 do make use of a particular embedding of the graph under consideration. Throughout this paper, G will be a subcubic planar graph with vertex set V = {1, . . . , n} and edge set E. In cases when we have also fixed the embedding, we will denote the set of faces by F. A vertex of degree d is called a d-vertex. A vertex is a (≤d)-vertex if its degree is at most d and a (≥d)-vertex if its degree is at least d. The notions of d-face, (≤d)-face, (≥d)-face, d-cycle, (≤d)-cycle and (≥d)-cycle are defined
analogously as for the vertices, where the degree of a face or cycle is the number of edges along it, with the exception that a cut-edge on a face is counted twice. Let deg(v), respectively deg(f), denote the degree of vertex v, respectively face f. Given u, v ∈ V, the distance dist(u, v) between u and v in G is the length (in edges) of a shortest path from u to v. Given two subgraphs G1 and G2 of G, the distance dist(G1, G2) between G1 and G2 is defined as the minimum distance dist(v1, v2) over all vertex pairs (v1, v2) ∈ V(G1) × V(G2). Note that another way to say that M ⊆ E is an induced matching is that dist(e, f) ≥ 2 for all distinct e, f ∈ M.

For a set E′ ⊆ E of edges we will set Ψ(E′) := {e ∈ E : dist(e, E′) < 2}. We say that a set of edges E′ ⊆ E is good if E′ is an induced matching, 1 ≤ |E′| ≤ 5 and |Ψ(E′)| ≤ 9|E′|. We say that E′ is minimally good if it is good and no proper subset of E′ is good. The following straightforward observation will be needed later on.

Lemma 1. If E′ ⊆ E is minimally good, then 2 ≤ dist(e, f) ≤ 15 for all distinct e, f ∈ E′.

Proof. That dist(e, f) ≥ 2 for all distinct e, f ∈ E′ was already observed above. Let us now note that no nonempty proper subset E″ ⊊ E′ can exist with dist(e, f) ≥ 4 for all e ∈ E″, f ∈ E′ \ E″. This is because otherwise Ψ(E″) ∩ Ψ(E′ \ E″) = ∅, which implies that |Ψ(E′)| = |Ψ(E″)| + |Ψ(E′ \ E″)| and at least one of E″ and E′ \ E″ must be good, contradicting that E′ is minimally good. It follows that we can list E′ as e1, e2, e3, e4, e5 with dist(ei, {e1, . . . , ei−1}) ≤ 3. This shows that for any e, f ∈ E′ there is a path of length at most 15 between an endpoint of e and an endpoint of f. (Note that the distance is not 12, because we may have to use up to three of the edges ei in the path between e and f.)

Given v ∈ V, let N(v) denote the set of vertices adjacent to v, and for k ∈ N let N^k(v) denote the set of vertices at distance at most k from v. For a subgraph H ⊆ G we will set N^k(H) := ∪_{v∈V(H)} N^k(v). Two distinct cycles or faces are adjacent if they share at least one edge. Two cycles or faces are in sequence if they share exactly one edge. Similarly, we say that a collection C1, C2, . . . , Ck of cycles or faces are in sequence if Ci and Ci+1 are in sequence and Ci is at distance 1 from Ci+2 for all (appropriate) i. A double 4-face refers to two 4-faces in sequence.
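To make the definitions of Ψ(E′) and of a good set concrete, here is a minimal sketch in Python; the graph is stored with the networkx library, and the helper names `psi` and `is_good` are illustrative assumptions rather than names used in the paper.

```python
# Sketch of the definitions of Psi(E') and of a good edge set.
import networkx as nx

def edge_distance(G: nx.Graph, e, f) -> float:
    """dist(e, f): minimum distance between an endpoint of e and one of f."""
    best = float("inf")
    for u in e:
        lengths = nx.single_source_shortest_path_length(G, u)
        best = min(best, min(lengths.get(v, float("inf")) for v in f))
    return best

def psi(G: nx.Graph, E_prime):
    """Psi(E') = { e in E : dist(e, E') < 2 }."""
    return {tuple(sorted(e)) for e in G.edges()
            if any(edge_distance(G, e, f) < 2 for f in E_prime)}

def is_induced_matching(G: nx.Graph, E_prime) -> bool:
    return all(edge_distance(G, e, f) >= 2
               for e in E_prime for f in E_prime if e != f)

def is_good(G: nx.Graph, E_prime) -> bool:
    """E' is good if it is an induced matching, 1 <= |E'| <= 5,
    and |Psi(E')| <= 9 |E'|."""
    return (is_induced_matching(G, E_prime)
            and 1 <= len(E_prime) <= 5
            and len(psi(G, E_prime)) <= 9 * len(E_prime))
```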
3 The Algorithm
Our algorithm makes use of the following result. Theorem 2. Every subcubic planar graph contains a good set of edges. Theorem 2 follows immediately from Lemmas 2 and 3 below. We postpone the proof until later and first describe the algorithm.
Theorem 2 allows us to adopt a greedy approach for building an induced matching. We start from M = ∅ and H = G. In each iteration, we find a minimally good E′ in H, then augment M by E′ and delete Ψ(E′) from H, i.e. we set M := M ∪ E′ and H := H \ Ψ(E′). The theorem guarantees that we may iterate until H is the empty graph. By the definition of a good set of edges (in particular, that |Ψ(E′)| is at most nine times the number of edges in the set), the matching M at the end of the process must have size at least |E|/9.

Apart from proving Theorem 2, it also remains to show that this procedure can be implemented in linear time. For convenience, we adopt the Random Access Machine (RAM) model of computation. (See for instance Section 2.2 of [11] for a detailed description of the RAM model.) Our algorithm assumes that the graph is stored as a neighbour-list array, i.e. an array with an entry for each vertex v ∈ {1, . . . , n}, which contains a list with the labels of all (up to three) neighbours of v. Observe that if a graph is stored as a list of edges or an adjacency matrix then it is possible to create a neighbour-list array in linear time (cf. Exercise 8.1 of [11]).

The algorithm examines the vertices one after another in some order given by a queue Q. We store Q by means of a variable that tells us which is the first element of Q and a variable that tells us which is the last element, together with an array with n entries, where this time the entry for vertex v stores the labels of the preceding and succeeding element in Q. This ensures that the operations of deleting an arbitrary element from Q, and inserting elements in the first or last positions, all take constant time. Our algorithm repeats the following steps as long as Q is non-empty. Let v denote the first element of Q.
1. If v is isolated, then we remove v from Q.
2. If v is not isolated, then we check whether there is a minimally good set of edges E′ such that v is the endpoint of some edge e ∈ E′, and
2a. if such an E′ does not exist, then we move v to the back of the queue,
2b. or if E′ does exist, then we set M := M ∪ E′, H := H \ Ψ(E′), and we put the vertices of N^20(E′) at the front of Q in an arbitrary order.

The check for a suitable E′ in Step 2 can be done in constant time: by Lemma 1, we need only consider sets E′ of up to five edges such that each edge e ∈ E′ has at least one endpoint at distance at most 16 from v. Hence, all vertices incident with edges of Ψ(E′) will be within distance 18 of v. Thus, to find a minimally good E′ with at least one edge incident to v, we need only examine the subgraph H[N^18(v)] of H induced by all vertices at distance at most 18 from v. Now this subgraph has at most 3 · 2^17 = O(1) vertices, and it can be computed in constant time from the neighbour-list array data structure for H. (We read in constant time which are the neighbours of v, then in constant time which are the neighbours of the neighbours, and so on until depth 18.) Since H[N^18(v)] has constant size, we can clearly find a set E′ of the required form in constant time, if one exists.
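A simplified sketch of this queue-driven greedy procedure is given below. The local search inside H[N^18(v)] is abstracted into a helper `find_minimally_good_at` (a brute-force search of that constant-size subgraph, reusing the `psi` helper of the earlier sketch); this helper, and the use of networkx, are illustrative assumptions, not the paper's implementation.

```python
# Simplified sketch of the greedy procedure of Section 3 (Steps 1, 2, 2a, 2b).
from collections import deque
import networkx as nx

def ball(H: nx.Graph, vertices, radius: int):
    """All vertices of H within distance `radius` of the given vertices."""
    reached = set()
    for v in vertices:
        if v in H:
            reached.update(nx.single_source_shortest_path_length(H, v, cutoff=radius))
    return reached

def greedy_induced_matching(G: nx.Graph, find_minimally_good_at, psi):
    H = G.copy()
    M = set()
    Q = deque(H.nodes())
    while Q:
        v = Q.popleft()
        if H.degree(v) == 0:                      # Step 1: drop isolated vertices
            continue
        E_prime = find_minimally_good_at(H, v)    # Step 2: search H[N^18(v)]
        if E_prime is None:
            Q.append(v)                           # Step 2a: move v to the back
        else:
            M |= set(E_prime)                     # Step 2b: augment M ...
            affected = ball(H, {u for e in E_prime for u in e}, 20)
            H.remove_edges_from(psi(H, E_prime))  # ... and delete Psi(E') from H
            Q.extendleft(affected)                # re-examine N^20(E') first
    return M
```

For simplicity the sketch tolerates duplicate queue entries, so it does not by itself achieve the strict linear-time bound argued below; the doubly linked queue described above avoids this.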
By moving N^20(E′) to the front of Q in Step 2b, we make sure that vertices u for which H[N^18(u)] has been affected will be examined before other vertices. (When we remove Ψ(E′) from H, H[N^18(u)] is only affected if u ∈ N^20(E′).) Note that we can again find the vertices of N^20(E′) in constant time. Also, note that removing an edge e = uv from the neighbour-list data structure can be done in constant time, since we just need to update the entries of the array for u and the entry for v. Hence, removing Ψ(E′) from H can also be done in constant time. We still need to establish the following.

Theorem 3. Given a subcubic planar graph G = (V, E), the algorithm computes an induced matching of size at least |E|/9 in time O(|V|).

Proof. As discussed earlier, Theorem 2 establishes correctness of the greedy procedure. Moreover, we showed that each of Steps 1, 2, 2a and 2b of the algorithm will take only O(1) operations. It therefore suffices to show that these steps are iterated at most O(n) times. We prove this by showing that each vertex u occurs only O(1) times as the first element of Q.

Let us first observe that a vertex u is moved to the front of Q at most 3 · 2^19 times. This is because at each iteration in which u was moved to the front of Q, some minimally good E′ was found with u ∈ N^20(E′). Hence, at least one edge in H[N^20(u)] was removed (namely, the edge of Ψ(E′) closest to u). Since H[N^20(u)] has at most 3 · 2^19 edges, it must indeed hold that u cannot have been moved to the front of Q more than 3 · 2^19 times.

Next, we claim that a vertex u is moved to the back of Q at most 3 · 2^19 + 1 times. To see this, let u be the first element of Q and suppose that it was also the first element during a prior iteration. Suppose further that at the last iteration in which u was the first element of Q, it got sent to the back and that it did not get sent to the front of Q in any later iteration. Let us write Q = (v0, v1, . . . , vk) where v0 = u. Then v1, . . . , vk must have been examined after u, and no minimally good E′ was found in each case, so that they were sent to the back of Q. All other vertices of G must have been deleted already because they became isolated at some point. Moreover, for each l ∈ {0, . . . , k}, no edge of H[N^18(vl)] was deleted after vl was examined for the last time. (Otherwise, a minimally good E′ such that Ψ(E′) hits H[N^18(vl)] would have been found earlier. But then we would also have vl ∈ N^20(E′) and vl would thus have been sent to the front of Q.) Since we can determine whether there is a minimally good E′ with v ∈ e for some e ∈ E′ from H[N^18(v)] alone, there can be no minimally good E′ in H at all. But this contradicts Theorem 2. We have thus just shown that, if a vertex u is sent to the back of the queue Q, then by the next time we encounter it as the first element of the queue it has been sent to the front of the queue Q at least once. This implies that the number of times that u is sent to the back of Q is at most one more than the number of times that u is sent to the front of Q, as claimed. (The additional plus one arises because the first time we encountered u we may have sent it to the back of Q.) Finally, observe that the number of times vertex u occurs as the first element of Q is at most the number of times that u is sent to the back of Q plus the number
of times that u is sent to the front of Q, plus one. (The additional plus one arises when u is isolated and is deleted from Q.) By our previous observations, this adds up to at most 2 · 3 · 2^19 + 2 = O(1), which concludes the proof.
4 The Proof of Theorem 2
Theorem 2 is a direct consequence of the following two lemmas. Recall that a plane graph is a planar graph with a fixed embedding in the plane. Fixing an embedding has the advantage that we can speak unambiguously of the faces of the graph.

Lemma 2. Let G be a subcubic plane graph. If G contains one of the following structures, then G contains a good set of edges:
(C1) a 1-vertex;
(C2) a 2-vertex incident to a (≤7)-cycle;
(C3) a 2-vertex at distance at most 2 from a 2-vertex;
(C4) a 2-vertex at distance at most 2 from a (≤5)-cycle;
(C5) a 3-cycle in sequence with a (≤6)-cycle or a 7-face;
(C6) a 4- or 5-cycle in sequence with a 5- or 6-cycle;
(C7) a 3-cycle at distance 1 from a (≤5)-cycle;
(C8) a double 4-face adjacent to a (≤7)-cycle;
(C9) a 4-cycle, (≤8)-cycle and 4-cycle in sequence;
(C10) a 4-cycle, 7-cycle and 5-cycle in sequence;
(C11) a 3-cycle or double 4-face at distance at most 2 from a 3-cycle or double 4-face; and
(C12) a double 4-face at distance 1 from a 5-cycle.

Lemma 3. Every subcubic plane graph contains one of the structures (C1)–(C12) listed in Lemma 2.

The proof of Lemma 2 is a rather lengthy case analysis, which will be provided in the journal version of this paper. We now prove Lemma 3 by means of a discharging procedure.

4.1 The Proof of Lemma 3
Suppose that G is a subcubic plane graph that does not contain any of the structures (C1)–(C12). We will obtain a contradiction by using the discharging method, which is commonly used in graph colouring. The rough idea of this method is as follows. Each vertex and face of G is assigned an initial “charge”. Here the charges are chosen such that their total sum is negative. We then apply certain redistribution rules (the discharging procedure) for exchanging charge between the vertices and faces. These redistribution rules are chosen such that the total sum of charges is invariant. However, we will prove by a case analysis that if G contains none of (C1)–(C12), then each vertex and each face will have non-negative charge after
the discharging procedure has finished. This contradicts that the total sum of the charges is negative. Hence G must have at least one of (C1)–(C12). We now proceed to the details.

Initial charge. For every vertex v ∈ V, we define the initial charge ch(v) to be 2 deg(v) − 6, while for every face f ∈ F, we define the initial charge ch(f) to be deg(f) − 6. We claim that this way the total sum of initial charges will be negative. To see this, note that by Euler's formula 6|E| − 6|V| − 6|F| = −12. It follows from Σ_{v∈V} deg(v) = 2|E| = Σ_{f∈F} deg(f) that

−12 = (4|E| − 6|V|) + (2|E| − 6|F|) = Σ_{v∈V} (2 deg(v) − 6) + Σ_{f∈F} (deg(f) − 6),
which proves the claim.

Discharging procedure. To describe a discharging procedure, it suffices to fix how much each vertex or face sends to each of the other vertices and faces. In our case, vertices and (≤6)-faces do not send any charge. The (≥7)-faces redistribute their charge as follows. Each (≥7)-face sends
• 1/5 to each adjacent 5-face,
• 1/2 to each adjacent 4-face that is not in a double 4-face,
• 1/2 to each adjacent 4-face in a double 4-face if it is adjacent to both 4-faces,
• 1 to each adjacent 4-face in a double 4-face if it is adjacent to only one,
• 1 to each adjacent 3-face, and
• 1 to each incident 2-vertex.
When we say that a (≥7)-face sends charge to an adjacent face or incident vertex, we mean that the charge is sent as many times as these elements are adjacent or incident to each other. For v ∈ V and f ∈ F, we denote their final charges (that is, the charges after the redistribution) by ch∗(v) and ch∗(f), respectively. In what follows we will often say that something holds “by (Cx)” for some x = 1, . . . , 12. By this we of course mean “by the absence of (Cx)”.

Final charge of 2-vertices. The initial charge of a 2-vertex is −2. By (C2) it is incident to two (≥8)-faces. Hence it receives 2, so that its final charge is non-negative.

Final charge of 3-vertices. A 3-vertex has initial charge 0. Since it sends no charge, its final charge is non-negative.

Final charge of 3-faces. A 3-face has initial charge −3. By (C5) it is only adjacent to (≥8)-faces. Hence it receives a charge of 3 and its final charge is non-negative.
Final charge of 4-faces. Let f be a 4-face; then its initial charge is ch(f) = −2. If f is not in a double 4-face then by (C5) and (C6) f is only adjacent to (≥7)-faces, and receives a charge of at least 1/2 from each of them; thus ch∗(f) ≥ 0. Otherwise, if f is in a double 4-face, then f is adjacent to exactly one 4-face and three (≥8)-faces by (C8). Thus, f receives a charge of 1 from one (≥8)-face and charges of at least 1/2 from the other two, and so ch∗(f) ≥ 0.

Final charge of 5-faces. Let f be a 5-face; then its initial charge is ch(f) = −1. Since f is not adjacent to any (≤6)-faces by (C5) and (C6), it receives a charge of 1/5 from each adjacent face, and so ch∗(f) ≥ 0.

Final charge of 6-faces. The initial charge of a 6-face is 0 and it sends no charge, so its final charge is non-negative.

Final charge of 7-faces. Let f be a 7-face; then its initial charge is ch(f) = 1. By (C5), (C8), (C9) and (C10), f is adjacent to no 3-faces, no double 4-faces and at most two 4- or 5-faces. Thus, f sends a charge of at most 2 · 1/2, and so ch∗(f) ≥ 0.

Final charge of 8-faces. Let f be an 8-face; then its initial charge is ch(f) = 2. We consider several cases.

First, suppose that f is incident to a 2-vertex. By (C3), f is incident to at most two 2-vertices. However, if f is incident to exactly two 2-vertices, then by (C2) and (C4) f is adjacent only to (≥6)-faces; thus, ch∗(f) = 0. So assume that f is incident to exactly one 2-vertex v. By (C4), faces that are adjacent to f but at distance at most two from v must be (≥6)-faces. There remain two other faces adjacent to f, and they are adjacent to each other. If one of these is a 3-face then the other is a (≥8)-face by (C5), so that ch∗(f) ≥ 0. If both are 4-faces, then both receive 1/2 from f so that ch∗(f) ≥ 0. If one is a 4-face and the other a 5-face then the 4-face is not in a double 4-face by (C8), so that again ch∗(f) ≥ 0. If one is a 4-face and the other a (≥6)-face then ch∗(f) ≥ 0. Finally, if both are (≥5)-faces then also ch∗(f) ≥ 0. Thus, we may hereafter assume that f is not incident to a 2-vertex.

Second, suppose that f is adjacent to a 3-face f′. By (C5) and (C7), faces that are adjacent to f but at distance at most two from f′ must be (≥6)-faces. There remain three other faces adjacent to f, call them f1, f2, f3, in sequence. By (C7), (C8), (C9), (C11) and (C12), if one of these is a 3-face or part of a double 4-face, then the others are (≥6)-faces, so that ch∗(f) ≥ 0. We can thus suppose none of f1, f2, f3 is a 3-face or part of a double 4-face. By (C9), at most one of f1, f2, f3 is a 4-face (that is not part of a double 4-face), and by (C6), at most two are 5-faces. Hence ch∗(f) ≥ 0. Thus, we may hereafter assume that f is not adjacent to a 3-face.

Third, suppose that f is adjacent to a 4-face f′. Assume that f′ is part of a double 4-face. By (C6), (C8), (C9), (C11), and (C12), faces that are adjacent to f but at distance at most two from f′ must be (≥6)-faces. There remain at most three other faces adjacent to f and we can proceed as in the previous case. Thus f′ is not part of a double 4-face. By (C6) and (C9), of the faces that are adjacent to f but at distance at most two from f′, none are 4-faces and at most two are 5-faces.
Thus in total, at most 1/2 + 2/5 < 1 charge is sent to f′ and these four faces. Again, there remain at most three other faces adjacent to f and we proceed as in the previous case. Thus, we may hereafter assume that f is not adjacent to a (≤4)-face. Finally, by (C6), f is adjacent to at most four 5-faces, and so f sends a total charge of at most 4 · 1/5 < 2 and ch∗(f) > 0.

Final charge of (≥9)-faces. Let f be a (≥9)-face and let v1 e1 v2 e2 v3 e3 v4 e4 v5 be a path of four edges along f. Denote by fi the face adjacent to f via the edge ei. We first show that the combined charge sent through these four edges (counting half of the charge contributed to the end-vertices v1, v5 if they are 2-vertices) is at most 3/2.

First, suppose that at least one of the vi is a 2-vertex. By (C3), at most two are 2-vertices. If two are, then, without loss of generality, either v1 and v4 are 2-vertices, or v1 and v5 are 2-vertices. In both cases, f1, . . . , f4 are all (≥6)-faces by (C4) and the total charge sent is at most 3/2. If exactly one of the vi is a 2-vertex, then without loss of generality, either v1 is a 2-vertex, or one of v2 or v3 is. By (C4), f1, f2 and f3 are (≥6)-faces and the total charge sent is at most 3/2 (since f4 is sent charge at most 1).

Second, suppose that some fi is a 3-face. Without loss of generality, there are two sub-cases to consider: i = 1 or i = 2. In the former sub-case, we have by (C5), (C7) and (C11) that f2 and f3 are both (≥6)-faces and f4 is forbidden from being a 3-face or part of a double 4-face, in which case the total charge sent is at most 3/2. In the latter sub-case, we have by (C5) and (C7) that f1, f3 and f4 are (≥6)-faces, in which case the total charge sent is 1.

Third, suppose that some fi is part of a double 4-face. Without loss of generality, there are two sub-cases to consider: i = 1 or i = 2. In the former sub-case, suppose that f2 is also part of the same double 4-face. Then by (C8) and (C11), f3 is a (≥6)-face and f4 is not part of a double 4-face; thus, the total charge sent is at most 3/2. Therefore, in the former case we may suppose that f2 is not a 4-face. By (C6), (C8) and (C11), at most one of f2, f3, f4 is a 4- or 5-face and none is part of a double 4-face, in which case the total charge sent is at most 3/2. Next, in the latter sub-case, we have by (C8) that f1, f3 are (≥6)-faces, and by (C11) that f4 is not part of a double 4-face, in which case the total charge sent is at most 3/2.

We now have that none of the vi is a 2-vertex and none of the fi is a 3-face or part of a double 4-face. By (C6), not every fi is a 4- or 5-face. Thus, the total charge sent is at most 3/2, completing our proof of the claim.

We complete the analysis of the final charge for f. Let us denote the facial cycle by v1 e1 v2 e2 v3 · · · vk ek v1 and denote by fi the face adjacent to f via the edge ei. Note that we may assume without loss of generality that deg(v1) = deg(v2) = 3 and f1 is a (≥6)-face. By the above claim, the total charge sent through e2, e3, . . . , e9 is at most 3. Every face fi, i > 9, receives a charge of at most 1 from f (where we count 1/2 for each 2-vertex that f shares with fi). Hence, f sends total charge at most 3 + deg(f) − 9 = deg(f) − 6 = ch(f), and so ch∗(f) ≥ 0.
We have seen that every vertex and every face of G has non-negative final charge, which gives the required contradiction and finishes the proof of Lemma 3.
References
1. Andersen, L.D.: The strong chromatic index of a cubic graph is at most 10. Discrete Math. 108(1-3), 231–252 (1992); Topological, algebraical and combinatorial structures. Frolík's memorial volume
2. Balakrishnan, H., Barrett, C.L., Kumar, V.S.A., Marathe, M.V., Thite, S.: The distance-2 matching problem and its relationship to the MAC-layer capacity of ad hoc wireless networks. IEEE Journal on Selected Areas in Communications 22(6), 1069–1079 (2004)
3. Chung, F.R.K., Gyárfás, A., Tuza, Z., Trotter, W.T.: The maximum number of edges in 2K2-free graphs of bounded degree. Discrete Math. 81(2), 129–135 (1990)
4. Duckworth, W., Manlove, D.F., Zito, M.: On the approximability of the maximum induced matching problem. J. Discrete Algorithms 3(1), 79–91 (2005)
5. Erman, R., Kowalik, L., Krnc, M., Walen, T.: Improved induced matchings in sparse graphs. In: IWPEC, pp. 134–148 (2009)
6. Faudree, R.J., Gyárfás, A., Schelp, R.H., Tuza, Z.: Induced matchings in bipartite graphs. Discrete Math. 78(1-2), 83–87 (1989)
7. Faudree, R.J., Schelp, R.H., Gyárfás, A., Tuza, Z.: The strong chromatic index of graphs. Ars Combin. 29(B), 205–211 (1990); The 12th BCC (Norwich, 1989)
8. Horák, P., He, Q., Trotter, W.T.: Induced matchings in cubic graphs. J. Graph Theory 17(2), 151–160 (1993)
9. Kanj, I.A., Pelsmajer, M.J., Xia, G., Schaefer, M.: On the induced matching problem. In: STACS, pp. 397–408 (2008)
10. Lozin, V.V.: On maximum induced matchings in bipartite graphs. Inform. Process. Lett. 81(1), 7–11 (2002)
11. Mehlhorn, K., Sanders, P.: Algorithms and data structures. Springer, Berlin (2008)
12. Moser, H., Sikdar, S.: The parameterized complexity of the induced matching problem. Discrete Appl. Math. 157(4), 715–727 (2009)
13. Moser, H., Thilikos, D.M.: Parameterized complexity of finding regular induced subgraphs. J. Discrete Algorithms 7(2), 181–190 (2009)
14. Niedermeier, R.: Invitation to fixed-parameter algorithms. Oxford Lecture Series in Mathematics and its Applications, vol. 31. Oxford University Press, Oxford (2006)
15. Stockmeyer, L.J., Vazirani, V.V.: NP-completeness of some generalizations of the maximum matching problem. Inform. Process. Lett. 15(1), 14–19 (1982)
16. Wegner, G.: Graphs with given diameter and a coloring problem. Technical report, Institut für Mathematik, Universität Dortmund (1977)
Robust Matchings and Matroid Intersections

Ryo Fujita¹, Yusuke Kobayashi², and Kazuhisa Makino³

¹ Canon Inc., 3-30-2 Shimomaruko, Ohta, Tokyo 146-0092, Japan
² Department of Mathematical Informatics, Graduate School of Information Science and Technology, University of Tokyo, Tokyo 113-8656, Japan
[email protected]
³ Department of Mathematical Informatics, Graduate School of Information Science and Technology, University of Tokyo, Tokyo 113-8656, Japan
[email protected]
Abstract. In a weighted undirected graph, a matching is said to be α-robust if for all p, the total weight of its heaviest p edges is at least α times the maximum weight of a p-matching in the graph. Here a p-matching is a matching with at most p edges. In 2002, Hassin and Rubinstein [4] showed that every graph has a 1/√2-robust matching and that it can be found by the k-th power algorithm in polynomial time. In this paper, we show that this result can be extended to the matroid intersection problem, i.e., there always exists a 1/√2-robust matroid intersection, which is polynomially computable. We also study the time complexity of the robust matching problem. We show that a 1-robust matching can be computed in polynomial time (if one exists), and that for any fixed number α with 1/√2 < α < 1, the problem to determine whether a given weighted graph has an α-robust matching is NP-complete. These together with the positive result for α = 1/√2 in [4] give us a sharp border for the complexity of the robust matching problem. Moreover, we show that the problem is strongly NP-complete when α is a part of the input. Finally, we show the limitations of the k-th power algorithm for robust matchings, i.e., for any ε > 0, there exists a weighted graph such that no k-th power algorithm outputs a (1/√2 + ε)-approximation for computing the most robust matching.
1 Introduction
Let G = (V, E) be a graph with a nonnegative weight function w on the edges. A p-matching is a matching with at most p edges. A matching M in G is said to be α-robust if for all positive integer p, it contains min{p, |M |} edges whose total weight is at least α times the maximum weight of a p-matching in G. The concept of the robustness was introduced in [4], and studied for several combinatorial
Corresponding author. Supported by the Global COE Program “The research and training center for new development in mathematics”, MEXT, Japan. This research was partially supported by the Scientific Grant-in-Aid from Ministry of Education, Science, Sports and Culture of Japan.
optimization problems such as trees and paths [2,5]. Hassin and Rubinstein [4] showed that the k-th power algorithm (i.e., the one for computing a maximum matching in the graph with respect to the k-th power weights w^k) provides a min{2^{(1/k)−1}, 2^{−1/k}}-robust matching in polynomial time. In particular, when k = 2, this implies the existence of a 1/√2-robust matching in any graph. They also show that the 1/√2-robustness is the best possible for matchings by providing a weighted graph which does not contain an α-robust matching for any α > 1/√2.

In this paper, we extend this result to the matroid intersection problem. Let M1 = (E, I1) and M2 = (E, I2) be two matroids with independent sets I1 and I2, respectively, and let w be a nonnegative weight on the ground set E. The matroid intersection problem is to compute a maximum common independent set I ∈ I1 ∩ I2 of the two matroids M1 and M2. The matroid intersection is a natural generalization of bipartite matching, and one of the most fundamental problems in combinatorial optimization (see e.g. [7]). We show that the k-th power algorithm computes a min{2^{(1/k)−1}, 2^{−1/k}}-robust common independent set in polynomial time, which implies that the matroid intersection problem admits a 1/√2-robust solution, where the 1/√2-robustness is the best possible [4]. In order to obtain the result, we make use of optimal dual values for the linear programming formulation of the matroid intersection problem.

We next consider the complexity of the robust matching problem. We show that (1) a 1-robust matching can be computed in polynomial time (if one exists), and (2) for any fixed number α with 1/√2 < α < 1, the problem to determine whether a given weighted graph has an α-robust matching is NP-complete. These together with the positive result for α = 1/√2 in [4] give us a sharp border for the complexity of the robust matching problem, although the NP-hardness is in the weak sense. We also show that deciding if G has an α-robust matching is strongly NP-complete when α is a part of the input. We remark that all the negative results use bipartite graphs, and hence they lead to the hardness of robust bipartite matching and matroid intersection.

We finally analyze the performance of the k-th power algorithm for the robust matching problem. Recall that the k-th power algorithm provides a min{2^{(1/k)−1}, 2^{−1/k}}-robust matching, and we might expect that for some k, the k-th power algorithm provides a good approximate solution for the robust matching. However, we show that this is not the case, i.e., 1/√2 is the best possible for the approximation of the robustness. More precisely, we show that for any ε > 0, there exists a weighted graph such that no k-th power algorithm outputs a (1/√2 + ε)-approximation for computing the most robust matching.

The rest of the paper is organized as follows. In the next section, we recall some basic concepts and introduce notation. Section 3 shows the 1/√2-robustness of the matroid intersection problem, and Section 4 shows the hardness of the robust matching problem. Finally, in Section 5, we analyze the performance of the k-th power algorithm for the robust matching problem, which includes the polynomial solvability of the 1-robust matching. Due to space constraints, some of the proofs are omitted.
2 Preliminaries
For a finite set E and a nonempty collection of its subsets I ⊆ 2^E, the pair (E, I) is called an independence system if I satisfies the hereditary condition: I′ ⊆ I, I ∈ I ⇒ I′ ∈ I. An independent set I ∈ I of size at most p (i.e., |I| ≤ p) is called p-independent. An independence system (E, I) is a matroid if I satisfies: for all I, J ∈ I with |I| > |J| there exists i ∈ I \ J such that J ∪ {i} ∈ I. Given an independence system (E, I) and a nonnegative weight function w : E → R+, the maximum (p-)independent set problem is to compute a (p-)independent set I ∈ I that maximizes the weight w(I) = Σ_{e∈I} w(e).

Let w : E → R+ be a weight function on a ground set E, and let J = {e1, e2, . . . , eq} be a subset of E with w(e1) ≥ w(e2) ≥ · · · ≥ w(eq). Define J(p) = {e1, e2, . . . , ep} if p ≤ q, and J(p) = J otherwise. For an independence system (E, I), we denote by I^(p) a maximum p-independent set. An independent set J is called α-robust if w(J(p)) ≥ α · w(I^(p)) for all p = 1, 2, . . . , |E|. Note that any matroid has a 1-robust set, since a greedy algorithm solves the maximum independent set problem for matroids (a small sketch of this greedy procedure is given at the end of this section).

Given two matroids M1 = (E, I1) and M2 = (E, I2), the matroid intersection is the independence system of the form M1 ∩ M2 = (E, I1 ∩ I2), and the matroid intersection problem is to compute a maximum independent set of the matroid intersection. Let ri : 2^E → Z+ denote the rank function of Mi, which is defined as ri(A) = max{|I| | I ⊆ A, I ∈ Ii}. Edmonds [1] showed that the following linear program solves the matroid intersection problem:
maximize    w · x
subject to  x(A) ≤ r1(A)    (∀A ⊆ E)
            x(A) ≤ r2(A)    (∀A ⊆ E)
            x_e ≥ 0         (∀e ∈ E),

where x ∈ R^E and x(A) = Σ_{e∈A} x_e. Consider the dual of the problem:

minimize    Σ_{A⊆E} ( r1(A) y^1_A + r2(A) y^2_A )
subject to  Σ_{e∈A⊆E} ( y^1_A + y^2_A ) ≥ w(e)    (∀e ∈ E)            (1)
            y^1_A, y^2_A ≥ 0                      (∀A ⊆ E).
For an optimal solution (y^1, y^2) of this dual, we define weight functions w1 and w2 as follows:

w1(e) = Σ_{e∈A⊆E} y^1_A,    w2(e) = Σ_{e∈A⊆E} y^2_A.            (2)
Then we have the following result.

Theorem 1 (Edmonds [1]). Let J be an optimal solution of the matroid intersection problem. Then it is a maximum independent set of Mi with respect to wi, i = 1, 2.

By definition of w1 and w2, we have w1(e) + w2(e) ≥ w(e) for any e ∈ E, and the complementary slackness implies that

x_e > 0 ⟹ w1(e) + w2(e) = w(e)    (∀e ∈ E),
y^i_A > 0 ⟹ x(A) = ri(A)          (∀A ⊆ E, i = 1, 2).
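As an illustration of the remark above that a single matroid always has a 1-robust independent set, here is a minimal sketch of the matroid greedy algorithm; the independence oracle `is_independent` is an assumed interface, not something defined in the paper.

```python
# Sketch: greedy algorithm for a matroid given by an independence oracle.
# Every prefix of the greedy solution is a maximum p-independent set,
# so the returned set is 1-robust for the matroid.
def matroid_greedy(elements, weight, is_independent):
    """elements: ground set E; weight: e -> nonnegative number;
    is_independent: frozenset -> bool (assumed matroid oracle)."""
    independent = set()
    for e in sorted(elements, key=weight, reverse=True):
        if is_independent(frozenset(independent | {e})):
            independent.add(e)
    return independent
```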
3 Robust Matroid Intersection
In this section, we prove the following theorem.

Theorem 2. Let M1 = (E, I1) and M2 = (E, I2) be two matroids. For any weight function w : E → R+ and k ≥ 1, let J be a maximum common independent set of M1 and M2 with respect to w^k, i.e., w^k(J) = max{w^k(I) | I ∈ I1 ∩ I2}. Then J is a min{2^{(1/k)−1}, 2^{−1/k}}-robust independent set for the matroid intersection (E, I1 ∩ I2).

Since the matroid intersection problem is polynomially solvable, as a corollary (k = 2), we have the following result.

Corollary 1. The matroid intersection problem admits a (1/√2)-robust solution, and furthermore, it can be computed in polynomial time.
Let J be a maximum common independent set of M1 and M2 with respect to w^k. For p ≥ 1, we show that w(J(p)) ≥ 2^{(1/k)−1} · w(I^(p)) if |J| ≤ |I^(p)|, and w(J(p)) ≥ min{2^{(1/k)−1}, 2^{−1/k}} · w(I^(p)) otherwise. Due to space constraints, we only consider the case when |J| ≤ |I^(p)|; the other case can be shown similarly, but the proof requires a more complicated analysis.

Let q = |I^(p)| − |J| ≥ 0. To make the discussion clear, let us modify the problem instance by adding q new elements F = {f1, . . . , fq} to E:

E := E ∪ F,   Ii := {I ∪ F′ | I ∈ Ii, F′ ⊆ F},   w(fj) := 0 (j = 1, . . . , q),   J := J ∪ F.

We furthermore truncate the two matroids by |I^(p)|: Ii := {I ∈ Ii | |I| ≤ |I^(p)|}.
It is not difficult to see that after this transformation, M1 and M2 are still matroids, and J and I^(p) are common bases (i.e., maximal independent sets for both M1 and M2). Hence it suffices to show w(J(p)) = w(J) ≥ 2^{(1/k)−1} · w(I^(p)). We show this by proving w(J \ I^(p)) ≥ 2^{(1/k)−1} · w(I^(p) \ J).

For a common base B and a common independent set L of M1 and M2, let us construct a bipartite graph G(B, L) = (V, A = A1 ∪ A2) as follows:

V = (B \ L) ∪ (L \ B),
Ai = {(x, y) | x ∈ B \ L, y ∈ L \ B, (B \ {x}) ∪ {y} ∈ Ii}   (i = 1, 2).

For a vertex x ∈ V and for X ⊆ V, we define δi(x) = {v ∈ V | (x, v) ∈ Ai} and δi(X) = ∪_{x∈X} δi(x) (i = 1, 2).
Lemma 1. For any X ⊆ L \ B, (B \ δ1(X)) ∪ X ∈ I1 and (B \ δ2(X)) ∪ X ∈ I2.

The following lemma follows from Lemma 1 and Hall's theorem.

Lemma 2. The bipartite graph G(B, L) has two matchings Mi ⊆ Ai (i = 1, 2) which cover L \ B.

Let us now consider the bipartite graph G(J, I^(p)). Since J and I^(p) are common bases, Lemma 2 implies that G(J, I^(p)) contains two perfect matchings M1 ⊆ A1 and M2 ⊆ A2. Therefore, each connected component of M1 ∪ M2 forms an alternating cycle with A1 and A2. Consider one of these cycles with vertices e1, f1, . . . , ed, fd, e1 such that ei ∈ J \ I^(p), fi ∈ I^(p) \ J, (ej, fj) ∈ A1 (j = 1, . . . , d), (fj, ej+1) ∈ A2 (j = 1, . . . , d − 1), and (fd, e1) ∈ A2. For every such cycle, we shall show

[Σ_{j=1}^{d} w(ej)] / [Σ_{j=1}^{d} w(fj)] ≥ [Σ_{j=1}^{d} (w1(ej) + w2(ej))^{1/k}] / [Σ_{j=1}^{d} (w1(fj) + w2(fj))^{1/k}] ≥ 2^{(1/k)−1},            (3)
where w1 and w2 are weight functions constructed by an optimal dual solution for the LP formulation (1) of the matroid intersection problem with respect to the weight w^k (see (2)). This completes the proof for the case in which |J| ≤ |I^(p)|. Note that the first inequality in (3) holds by w1(e) + w2(e) ≥ w^k(e) for every e ∈ E and the complementary slackness (i.e., w1(e) + w2(e) = w^k(e) for every e ∈ J). To show the second inequality in (3), we introduce 4d variables x0, . . . , x2d−1, y1, . . . , y2d, where x2j−1, x2j−2, y2j−1 and y2j (j = 1, . . . , d) correspond to w1(ej), w2(ej), w1(fj) and w2(fj), respectively. Since (ej, fj) ∈ A1, by Theorem 1, we have w1(ej) ≥ w1(fj) (j = 1, . . . , d). Similarly, we have
w2(ej+1) ≥ w2(fj) (j = 1, . . . , d − 1), and w2(e1) ≥ w2(fd). Therefore, we consider the following optimization problem to prove (3):

minimize   Z = [(x0 + x1)^{1/k} + (x2 + x3)^{1/k} + · · · + (x2d−2 + x2d−1)^{1/k}] / [(y1 + y2)^{1/k} + (y3 + y4)^{1/k} + · · · + (y2d−1 + y2d)^{1/k}]
subject to xj ≥ yj ≥ 0 (j = 1, . . . , 2d − 1),
           x0 ≥ y2d ≥ 0,
           the denominator of Z is positive.

Since Z is clearly minimized when xj = yj (1 ≤ j ≤ 2d − 1) and x0 = y2d, it is consequently enough to prove that

minimize   Z(x) = [(x0 + x1)^{1/k} + (x2 + x3)^{1/k} + · · · + (x2d−2 + x2d−1)^{1/k}] / [(x1 + x2)^{1/k} + (x3 + x4)^{1/k} + · · · + (x2d−1 + x0)^{1/k}]
subject to xj ≥ 0 (j = 0, . . . , 2d − 1),                                     (4)
           the denominator of Z(x) is positive

is at least 2^{(1/k)−1}. Let us consider the following operation REDUCE(j), which transforms a nonnegative x without increasing Z(x).

REDUCE(j): If x2j−2 + x2j−1 ≥ x2j + x2j+1, then x2j−1 := x2j−1 + x2j and x2j := 0. Otherwise, x2j := x2j−1 + x2j and x2j−1 := 0. Here we define x2d = x0.

Note that for any nonnegative a, b and c with a ≥ b ≥ c, we have a^{1/k} + b^{1/k} ≥ (a + c)^{1/k} + (b − c)^{1/k} by the concavity of the function f(x) = x^{1/k} for k ≥ 1. Thus REDUCE(j) creates a new feasible solution x to (4) without increasing Z(x). By repeatedly applying REDUCE(j), j = 1, . . . , d, we obtain x with at least one 0 in each of the pairs (x1, x2), (x3, x4), . . . , (x2d−1, x0). Then, by removing variables of value 0, Z(x) can be represented as
Z(x) = [Σ_{j∈J1} (x2j + x2j+1)^{1/k} + Σ_{j∈J2} (x2j)^{1/k} + Σ_{j∈J3} (x2j+1)^{1/k}] / [Σ_{j∈J1} ((x2j)^{1/k} + (x2j+1)^{1/k}) + Σ_{j∈J2} (x2j)^{1/k} + Σ_{j∈J3} (x2j+1)^{1/k}]
     ≥ min{ min_{j∈J1} (x2j + x2j+1)^{1/k} / ((x2j)^{1/k} + (x2j+1)^{1/k}), 1 }

for some sets J1, J2, J3 ⊆ {1, . . . , d}. Since

(a + b)^{1/k} / (a^{1/k} + b^{1/k}) ≥ (a + b)^{1/k} / (((a + b)/2)^{1/k} + ((a + b)/2)^{1/k}) = 2^{(1/k)−1}

for any positive a and b, we can conclude that Z(x) ≥ 2^{(1/k)−1}.
4 Complexity of α-Robust Matching
In this section, we study the time complexity of the following problem.
α-ROBUST-MATCHING
Instance: A graph G = (V, E) and a weight w(e) ∈ Z+ for each e ∈ E.
Question: Is there an α-robust matching in G?
Theorem 3. α-ROBUST-MATCHING is NP-complete when 1/√2 < α < 1, and it is polynomially solvable when α ≤ 1/√2 or α = 1.
Note that the polynomial result for α ≤ 1/√2 is given in [4]. This theorem gives us a sharp border for the complexity of α-ROBUST-MATCHING. The proof of Theorem 3 consists of the following three parts. In Sections 4.1 and 4.2, we deal with the cases when (2 + √2)/4 < α < 1 and 1/√2 < α ≤ (2 + √2)/4, respectively. A polynomial-time algorithm for α = 1 is presented in Section 5.1 (see Theorem 5). We also show that when α is a part of the input, detecting an α-robust matching is NP-complete in the strong sense. For a precise description, we introduce the following problem.
Theorem 4. ROBUST-MATCHING is NP-complete in the strong sense, that is, it is NP-complete even if the size of the input is Θ(|V |+|E|+w(E)+α1 +α2 ). The proof of Theorem 4 is omitted. 4.1
NP-Hardness for α-Robust Matching when √
√ 2+ 2 4
<α<1
In this subsection, we deal with the case when 2+4 2 < α < 1. For a concise description, we first discuss the case when α = 78 , and then show how to modify √ the reduction for a general α with 2+4 2 < α < 1. Our proof of Theorem 3 is based on the NP-completeness of PARTITION. For a finite set S and a function f : S → R+ , PARTITION is the problem to find a partition (A, B) of S with f (A) = f (B), and it is one of Karp’s NP-complete problems [6]. By NP-completeness of PARTITION, we can see that the following problem is also NP-complete. PARTITION Instance: A finite set S and a non-negative number f (e) for each e ∈ S. Question: Is there a partition (A, B) of S such that |A| = |B| and f (A) = f (B) ?
130
R. Fujita, Y. Kobayashi, and K. Makino
Proposition 1. PARTITION is NP-complete. Proof. PARTITION is obviously in NP. Given an instance S of PARTITION, construct an instance of PARTITION by adding |S| new elements e with f (e) = 0 to S. Then it is not difficult to see that S has a solution if and only if so does the corresponding instance of PARTITION , which implies that PARTITION is NP-hard.
In what follows, we show that there exists a polynomial-reduction from PARTITION to 78 -ROBUST-MATCHING. Suppose a set S with |S| = n and a function f : S → R+ are an instance of PARTITION . We construct an instance of 78 -ROBUST-MATCHING as follows. Let G = (V, E) be a graph defined by V = {vi,j | i = 1, 2, 3, 4, j ∈ S}, E = {ei,j | ei,j = (vi,j , vi+1,j ), i = 1, 2, 3, j ∈ S}. Let L be a large enough integer relative to f (S) (for example, L = 7f (S) + 1) and define the weights of edges as w(e1,j ) = 7(L + f (j)), w(e2,j ) = 12(L + f (j)), w(e3,j ) = 9(L + f (j)) for j ∈ S. For a maximal matching M ⊆ E, we define a partition (A, B) of S by A = {j | j ∈ S, e1,j ∈ M, e3,j ∈ M }, B = {j | j ∈ S, e2,j ∈ M }. Conversely, a partition (A, B) of S defines a maximal matching M in G in a similar way. We say that such a maximal matching M corresponds to a partition (A, B) of S. Then, we have the following proposition, where the proof is omitted. Proposition 2. Suppose that a maximal matching M corresponds to a partition (A, B) of S. Then M is a 78 -robust matching if and only if |A| = |B| = n2 and f (A) = f (B). Since it suffices to deal with maximal matchings in 78 -ROBUST-MATCHING, Proposition 2 implies that there exists a partition (A, B) of S such that |A| = |B| and f (A) = f (B) if and only if G has a 78 -robust matching with respect to the weight function w. This shows that PARTITION can be reduced to 78 -ROBUSTMATCHING, and hence 78 -ROBUST-MATCHING is NP-hard. In order to show√ the NP-hardness of α-ROBUST-MATCHING for a general number α with 2+4 2 < α < 1, define the weight of the edges of G as w(e1,j ) = a(L + f (j)), w(e2,j ) = b(L + f (j)), w(e3,j ) = c(L + f (j)),
where 0 < a < c < b < a + c,

(a + b + c) / (2(a + c)) = (b + c) / (2b) = α,

and L is a large enough integer relative to f(S). For example, a = 4α(1 − α), b = 2α − 1, and c = (2α − 1)² satisfy the above conditions if (2 + √2)/4 < α < 1. By the same argument as the case α = 7/8, we obtain the NP-hardness for α-ROBUST-MATCHING when (2 + √2)/4 < α < 1.
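For concreteness, the following sketch builds the weighted graph used in the reduction of this subsection for the case α = 7/8 (edge weights 7, 12 and 9 times (L + f(j))); the use of networkx and the function name `partition_prime_to_matching_instance` are illustrative assumptions.

```python
# Sketch of the reduction from PARTITION' to 7/8-ROBUST-MATCHING:
# one path v1-v2-v3-v4 per element j of S, with weights
# 7(L+f(j)), 12(L+f(j)), 9(L+f(j)).
import networkx as nx

def partition_prime_to_matching_instance(S, f):
    L = 7 * sum(f[j] for j in S) + 1        # large enough relative to f(S)
    G = nx.Graph()
    for j in S:
        v = [(i, j) for i in range(1, 5)]   # vertices v_{1,j}, ..., v_{4,j}
        base = L + f[j]
        for i, coeff in zip(range(3), (7, 12, 9)):
            G.add_edge(v[i], v[i + 1], weight=coeff * base)
    return G
```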
4.2 NP-Hardness for α-Robust Matching when 1/√2 < α ≤ (2 + √2)/4
In this subsection, we modify the proof in the previous subsection to apply to the case when 1/√2 < α ≤ (2 + √2)/4. As in the previous subsection, we reduce an instance of PARTITION′ to the problem. Let a set S with |S| = n and f : S → R+ be an instance of PARTITION′. We construct an instance of α-ROBUST-MATCHING as follows. Let G = (V, E) be a graph defined by

V = {v_{i,j} | i = 1, 2, 3, 4, j ∈ S} ∪ {u1, u2, u3, u4, u5},
E = {e_{i,j} | e_{i,j} = (v_{i,j}, v_{i+1,j}), i = 1, 2, 3, j ∈ S} ∪ {(u1, u2), (u2, u3), (u3, u4), (u4, u5)}.

Define the weight of the edges of G as

w(e_{1,j}) = a(L + f(j)),   w(e_{2,j}) = b(L + f(j)),   w(e_{3,j}) = c(L + f(j))

for j ∈ S, and

w(u1, u2) = w(u3, u4) = N(nL + f(S)),
w(u2, u3) = (1/α + ε) N(nL + f(S)),
w(u4, u5) = (√2 − 1/α − ε) N(nL + f(S)),
where 0 < ε < √2 − 1/α, 0 < a < c < b < a + c, N > 0, w(u4, u5) > w(e_{i,j}) for every i, j, L is large enough relative to f(S), and

(√2 N + (a + b + c)/2) / (2N + (a + c)) = (√2 N + (b + c)/2) / (2N + b) = α.
For example, a = 2α, b = 10, c = 11 − 2α, ε = (√2 − 1/α)/2, and N = (21 − 22α)/(4α − 2√2) satisfy the conditions if n is large enough. Then we can show that S has a desired partition if and only if G has an α-robust matching, and hence α-ROBUST-MATCHING is NP-hard when 1/√2 < α ≤ (2 + √2)/4.
5 k-th Power Algorithm
As shown in the previous section, α-ROBUST-MATCHING is NP-complete when 1/√2 < α < 1. However, it is known [4] that every graph has a 1/√2-robust matching which can be computed by the k-th power algorithm.

k-th power algorithm
Step 1. For an original weight function w : E → Z+, define w^k : E → Z+ by w^k(e) = {w(e)}^k.
Step 2. Find a matching (or an independent set) M maximizing w^k(M), and output M.
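A minimal sketch of the k-th power algorithm for the matching case is given below, using the max_weight_matching routine from networkx; it assumes each edge carries a "weight" attribute, and for k = 2 the returned matching is 1/√2-robust by the result of [4].

```python
# Sketch of the k-th power algorithm for matchings (Steps 1 and 2).
import networkx as nx

def kth_power_matching(G: nx.Graph, k: int):
    powered = nx.Graph()
    for u, v, data in G.edges(data=True):
        powered.add_edge(u, v, weight=data["weight"] ** k)   # Step 1: w^k
    # Step 2: a matching maximizing the total k-th power weight.
    return nx.max_weight_matching(powered, maxcardinality=False, weight="weight")
```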
In this section, we analyze the performance of the k-th power algorithm.

5.1 1-Robust Matching
When k = 1 and k = 2, the k-th power algorithm finds an ordinary maximum matching and a (1/√2)-robust matching, respectively. In this section, we show that when k is large enough, the k-th power algorithm finds a 1-robust matching, if one exists.

Theorem 5. Let (E, I) be an independent system. Suppose that k is large enough to satisfy w_k(e) > |E|·w_k(e′) if w(e) > w(e′). If (E, I) has a 1-robust independent set, then it can be found by the k-th power algorithm.

Proof. Let F = {f₁, f₂, ..., f_s} be a 1-robust independent set for (E, I), and G = {g₁, g₂, ..., g_t} be the output of the k-th power algorithm, where w(f₁) ≥ w(f₂) ≥ ··· ≥ w(f_s) and w(g₁) ≥ w(g₂) ≥ ··· ≥ w(g_t). Note that w(I^(p)) = w(F_(p)) holds for any p by the definition of 1-robustness, where I^(p) denotes a maximum p-independent set. Assume that w(I^(p−1)) = w(G_(p−1)) and w(I^(p)) > w(G_(p)) for some p (≥ 1). Then we have

w_k(I^(p)) − w_k(G_(p)) = w_k(f_p) − w_k(g_p) > (|E| − 1)·w_k(g_p) > Σ_{j ≥ p+1} w_k(g_j),
which implies wk (I (p) ) > wk (G). This contradicts the maximality of wk (G). Thus, wk (I (p) ) = wk (G(p) ) holds for every p.
As a corollary of Theorem 5, for sufficiently large k, the k-th power algorithm computes in polynomial time, for example, a 1-robust matching or a 1-robust common independent set of two matroids, provided one exists.
We remark that, instead of using the k-th power of the original weight, we can use any weight function w′ that satisfies w′(e₁) > |E| · w′(e₂) for all pairs of edges e₁ and e₂ with w(e₁) > w(e₂). For example, when we have t different weights w₁ < w₂ < ··· < w_t, let f(w_i) = (|E| + 1)^i and w′(e) = f(w(e)). Then w′ satisfies this condition. We can find a 1-robust independent set (if one exists) by finding a maximum independent set with respect to the weight function w′.
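The re-weighting in this remark can be sketched as follows; the helper name and the toy weights are illustrative assumptions, not notation from the paper.

```python
def rank_based_weights(weights, num_edges):
    """Sketch of the remark above: map the i-th smallest distinct weight
    to (|E|+1)^i, so that w'(e1) > |E| * w'(e2) whenever w(e1) > w(e2)."""
    distinct = sorted(set(weights.values()))
    rank = {w: i + 1 for i, w in enumerate(distinct)}
    return {e: (num_edges + 1) ** rank[w] for e, w in weights.items()}

# toy weights (hypothetical)
weights = {("a", "b"): 3, ("b", "c"): 5, ("c", "d"): 3}
print(rank_based_weights(weights, num_edges=len(weights)))
```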
5.2 Negative Results
We have already seen that the k-th power algorithm outputs a meaningful solution when k = 1, 2, +∞, and so it might be expected that by choosing an appropriate parameter k depending on an instance, the k-th power algorithm outputs a good approximate solution for the robustness. However, in this subsection, we give a result against this expectation. We consider the following optimization problem corresponding to α-ROBUST-MATCHING.

MAX-ROBUST-MATCHING
Instance: A graph G = (V, E) and a weight w(e) ∈ Z₊ for each e ∈ E.
Find: The maximum α such that G has an α-robust matching.
Since this problem is NP-hard by Theorem 3, we consider approximation algorithms for the problem. For an instance of MAX-ROBUST-MATCHING whose maximum value is α* and for 0 < β < 1, a matching M in G is β-approximately robust if M is (α*β)-robust. Obviously, for any instance of the problem, the k-th power algorithm finds a (1/√2)-approximately robust matching when k = 2. The following theorem shows that 1/√2 is the best approximation ratio of the k-th power algorithm.

Theorem 6. For any ε > 0, there exists an instance of MAX-ROBUST-MATCHING such that the k-th power algorithm does not output a (1/√2 + ε)-approximately robust matching for any k.

Proof. It suffices to show that for any small ε > 0, there exists an instance of MAX-ROBUST-MATCHING satisfying the following conditions: (A) There exists a (1 − ε)-robust matching. (B) For any k, the output of the k-th power algorithm is not (1/√2 + ε)-robust. We consider the following instance of the problem. Define γ = 1/√2 for a concise description, and let L be an integer such that L > 5/ε. Let S₀, S₁, ..., S_{L²} be finite sets with |S₀| = L and |S_t| = (√2)^t for t = 1, 2, ..., L². Let G = (V, E) be a graph defined by
V₀ = {v_{i,j} | i = 1, 2, j ∈ S₀},
V_t = {v_{i,j} | i = 1, 2, 3, 4, j ∈ S_t} for t = 1, 2, ..., L²,
V = (∪_{t=0}^{L²} V_t) ∪ {u₁, u₂, u₃, u₄},
E₀ = {(v_{1,j}, v_{2,j}) | j ∈ S₀},
E_t = {e_{i,j} | e_{i,j} = (v_{i,j}, v_{i+1,j}), i = 1, 2, 3, j ∈ S_t} for t = 1, 2, ..., L²,
E = (∪_{t=0}^{L²} E_t) ∪ {(u₁, u₂), (u₂, u₃), (u₃, u₄)}.

Define a weight function w : E → R₊ by

w(e) = √2 − ε    if e = (u₂, u₃),
w(e) = 1         if e ∈ E₀ ∪ {(u₁, u₂), (u₃, u₄)},
w(e) = γ^t       if e = e_{2,j} for j ∈ S_t,
w(e) = γ^{t+1}   if e = e_{1,j} or e = e_{3,j} for j ∈ S_t,

for t = 1, 2, ..., L². Then, we can show that (A) and (B) hold in this instance.

Lemma 3. There exists a (1 − ε)-robust matching in G.

Lemma 4. For any k, the output of the k-th power algorithm is not (1/√2 + ε)-robust.
Proofs of these lemmas are omitted. By Lemmas 3 and 4, we complete the proof of Theorem 6.
References
1. Edmonds, J.: Submodular Functions, Matroids, and Certain Polyhedra. In: Guy, R., Hanani, H., Sauer, N., Schönheim, J. (eds.) Combinatorial Structures and Their Applications, pp. 69–87. Gordon and Breach, New York (1970)
2. Fukunaga, T., Halldórsson, M., Nagamochi, H.: Robust cost colorings. In: SODA 2008, pp. 1204–1212 (2008)
3. Garey, M.R., Johnson, D.S.: Complexity results for multiprocessor scheduling under resource constraints. SIAM J. Comput. 4, 397–411 (1975)
4. Hassin, R., Rubinstein, S.: Robust matchings. SIAM J. Discrete Math. 15, 530–537 (2002)
5. Hassin, R., Segev, D.: Robust subgraphs for trees and paths. ACM Transactions on Algorithms 2, 263–281 (2006)
6. Karp, R.M.: On the computational complexity of combinatorial problems. Networks 5, 45–68 (1975)
7. Schrijver, A.: Combinatorial Optimization. Springer, Heidelberg (2003)
A 25/17-Approximation Algorithm for the Stable Marriage Problem with One-Sided Ties

Kazuo Iwama¹, Shuichi Miyazaki², and Hiroki Yanagisawa³

¹ Graduate School of Informatics, Kyoto University
[email protected]
² Academic Center for Computing and Media Studies, Kyoto University
[email protected]
³ IBM Research - Tokyo
[email protected]
Abstract. The problem of finding a largest stable matching where preference lists may include ties and unacceptable partners (MAX SMTI) is known to be NP-hard. It cannot be approximated within 33/29 (> 1.1379) unless P=NP, and the current best approximation algorithm achieves the ratio of 1.5. MAX SMTI remains NP-hard even when preference lists of one side do not contain ties, and it cannot be approximated within 21/19 (> 1.1052) unless P=NP. However, even under this restriction, the best known approximation ratio is still 1.5. In this paper, we improve it to 25/17 (< 1.4706).
1 Introduction
The stable marriage problem [1,3] is a classical bipartite matching problem. In its original setting, an instance consists of n men, n women, and each person's preference list, where a preference list is a totally ordered list of all the members of the opposite sex according to his/her preference. A matching is a set of disjoint pairs of a man and a woman. For a matching M, a pair of a man m and a woman w is called a blocking pair if both prefer each other to their current partners. A matching with no blocking pair is called stable. Gale and Shapley [1] showed that every instance admits at least one stable matching, and proposed an O(n²)-time algorithm, known as the Gale-Shapley algorithm, to find one. There are several examples of using the stable marriage problem in assignment systems, including residents/hospitals matching [3], students/schools matching [19], etc. Clearly, the restrictions that each preference list must be strict (i.e., it is a totally ordered list) and complete (i.e., it includes all the members of the opposite side) are unrealistic for such applications. Two natural relaxations are then to allow for ties and incompleteness. (There are three definitions of stability when ties are allowed [3,6]. In this paper, we consider the weak stability, which is the most natural notion among the three.) Applying either one or both of these extensions does not affect the validity of the properties that a stable matching exists for any instance and that one can be found in polynomial time. Hence a stable matching can be found efficiently even with these extensions.
However, if one is insistent on the size of stable matchings, the situation changes significantly. In the original setting and an extension with only ties, a stable matching is a perfect matching by definition. In an extension with only incomplete preference lists, a stable matching may no longer be perfect, but for a fixed instance all of the stable matchings have the same size due to the famous Rural Hospitals Theorem [2]. Thus the problem of finding a largest stable matching is solvable in polynomial time for all of these cases. In contrast, if we allow both extensions, one instance can have stable matchings of different sizes, and hence the problem of finding a largest stable matching (which we call hereafter MAX SMTI (MAXimum Stable Marriage with Ties and Incomplete lists)) is no longer trivial. In fact, this problem is NP-hard [9,15]. Since a stable matching is a maximal matching, any two stable matchings differ in size by at most a factor of two. Hence, constructing a polynomial-time 2-approximation algorithm is easy. There has been a sequence of improvements of the approximation ratio. The first attempt was made by using a local search type algorithm, which successively increases the size of a stable matching at each iteration [10,11], but these upper bounds approach 2 as n goes to infinity. The first upper bound strictly better than 2 was obtained along this line. Iwama et al. [12] obtained an upper bound of 1.875 by modifying the aforementioned local search. Later, Király [13] improved it to 5/3 by using a completely different idea. His algorithm is a modification of the Gale-Shapley algorithm so that each man may propose to the same woman more than once, and the roles of men and women are exchanged during the execution. McDermid [16] improved Király's algorithm by exploiting a classical result in graph theory, the Gallai-Edmonds decomposition, and obtained an upper bound of 1.5, which is the current best approximation ratio. Meanwhile, the best-known lower bound is 33/29 (> 1.1379) under the assumption that P≠NP [20]. For a given instance, let Mopt be a largest stable matching and M be any stable matching. Consider a union of Mopt and M, which can be seen as a bipartite graph. Each connected component is an alternating path or cycle. If every connected component is a path of length three that contains two Mopt-edges and one M-edge, then |Mopt| = 2|M|, which is the worst possible case mentioned above. All of the above approximation algorithms were designed to exclude as many such length-three paths as possible. The ratio of 1.875 [12] was achieved by excluding a constant fraction (relative to the size of M) of such components. It should be noted that in his paper [13], beyond his 5/3 result, Király was also able to remove all such length-three paths (which led to the approximation ratio of 1.5) if the instance has ties on only one side, which is another main result of that paper. As an extension, McDermid [16] finally succeeded in excluding them completely for general instances. A natural extension of this line of research is to attack augmenting paths of length five. In order to break the bound of 1.5, we need to remove those paths at least by a constant fraction, which is apparently a challenging goal. Our Contributions. In this paper, we do not achieve this goal for general instances but do so for instances with one-sided ties as Király did in [13]. We
improve the approximation ratio from 1.5 to 25/17 (< 1.4706). We note that this approximation ratio also holds for the hospitals/residents problem (i.e., the many-one variant) with one-sided ties (see [7]). The basic idea is to use an integer program (IP) and its linear program (LP) relaxation, which is summarized as follows: Note that the Gale-Shapley algorithm (GS) consists of a sequence of proposals from men to women (see [3] for details). Király's algorithm, GSA1, is similar to GS, but is different in that each man goes through his list twice for proposals. In GSA1, each man has one of two possible states, "unpromoted" or "promoted" (these terms are taken from McDermid's paper [16]). Each man is initially unpromoted, and once he has proposed to all of the women on his list, he becomes promoted and starts again by making proposals from the top of the list. When a woman receives proposals from two men with the same preference but different states, she always selects the promoted one. It should be more powerful to use not only two different states, unpromoted and promoted, but more quantitative information for the same purpose. Our new idea is to formulate MAX SMTI as an IP by generalizing the IP formulation for the original stable marriage [17,18], and then to solve its LP relaxation (in polynomial time). We use this optimal solution to define the state of each man. Király conjectures that breaking the 1.5-approximation upper bound of MAX SMTI with one-sided ties implies breaking the 3-approximation upper bound of the minimum vertex cover problem in 3-uniform hypergraphs (Conjecture 3 in [13]), but this latter statement disproves the Unique Games Conjecture (UGC) [14]. This means our result disproves Király's Conjecture 3 under UGC. Related Results. As mentioned above, general MAX SMTI is approximable within 1.5, but cannot be approximated with a ratio smaller than 33/29 (> 1.1379) unless P=NP and smaller than 4/3 (> 1.3333) under UGC [20]. If the length of each man's list is bounded by 2, it is solvable in polynomial time even if women's lists are of arbitrary length, while it is NP-hard even if the length of each preference list is bounded by 3 [8]. The only known approximability result with a ratio smaller than 1.5 is a randomized approximation algorithm that achieves an expected approximation ratio of 10/7 (< 1.4286) for the special case that ties appear on one side only and the length of each tie is at most two [4]. The restriction for MAX SMTI that ties appear in preference lists of one sex only is quite natural in practice. For example, it is reported that in the Scottish Foundation Allocation Scheme (SFAS), a hospitals/residents matching system in Scotland, residents are required to submit strictly ordered preference lists while each hospital's list may contain one tie [7]. Even under this restriction, MAX SMTI remains NP-hard and is not approximable with a ratio smaller than 21/19 (> 1.1052) unless P=NP, and smaller than 5/4 under UGC [5]. For the approximability, Irving and Manlove [7] presented a 5/3-approximation algorithm. Király [13] improved it to 1.5, which is the previous best upper bound as well as the McDermid bound [16] for the general case.
2 Preliminaries
An instance I of MAX SMTI comprises n men, n women, and each person's preference list that may be incomplete and may include ties. If a person p includes a person q (of the opposite sex) in p's preference list, we say that q is acceptable to p. Without loss of generality, we can assume that m is acceptable to w if and only if w is acceptable to m. A matching M is a set of pairs (m, w) such that m is acceptable to w and vice versa, and each person appears at most once in M. If (m, w) ∈ M, we say that m (w) is matched in M, and write M(m) = w and M(w) = m. If a person p does not appear in M, we say that p is single in M. If m strictly prefers w_i to w_j, we write w_i ≻_m w_j. If w_i and w_j are tied in m's list (including the case that w_i = w_j), we write w_i =_m w_j. The statement w_i ⪰_m w_j is true if and only if w_i ≻_m w_j or w_i =_m w_j. We use similar notation for women's preference lists. We say that m and w form a blocking pair for a matching M (or simply, (m, w) blocks M) if the following three conditions are met: (i) M(m) ≠ w but m and w are acceptable to each other, (ii) w ≻_m M(m) or m is single in M, and (iii) m ≻_w M(w) or w is single in M. A matching M is called stable if there is no blocking pair for M. MAX SMTI is the problem of finding a largest stable matching. The approximation ratio of an approximation algorithm T is max{opt(I)/T(I)} over all instances I of size n, where opt(I) and T(I) are the sizes of the optimal and the algorithm's solutions, respectively. The following IP formulation of a MAX SMTI instance I, denoted by IP(I), is a generalization of the one for the original stable marriage problem given in [17,18]. For each (man, woman) pair (m, w), we introduce a variable x_{m,w}.

Maximize    Σ_i Σ_j x_{i,j}

Subject to
    Σ_i x_{i,w} ≤ 1                                            ∀w              (1)
    Σ_j x_{m,j} ≤ 1                                            ∀m              (2)
    Σ_{j ⪰_m w} x_{m,j} + Σ_{i ⪰_w m} x_{i,w} − x_{m,w} ≥ 1    ∀(m, w) ∈ A     (3)
    x_{m,w} = 0                                                ∀(m, w) ∉ A     (4)
    x_{m,w} ∈ {0, 1}                                           ∀(m, w)         (5)
Here, A is the set of mutually acceptable pairs, that is, (m, w) ∈ A if and only if each of m and w includes the other in the preference list. In this formulation, "x_{m,w} = 1" is interpreted as "m and w are matched," and "x_{m,w} = 0" otherwise. So, the objective function is equal to the size of a matching. Note that Constraint (3) ensures that (m, w) is not a blocking pair. When x_{m,w} = 1, then all three terms of the left-hand side are 1 and hence Constraint (3) is satisfied. When x_{m,w} = 0, either the first or the second term of the left-hand side must be 1, which implies that m (respectively w) must be matched with a partner as good as w (respectively m).
Algorithm 1. GSA-LP (Gale-Shapley Algorithm with LP solution)
Input: An SMTI instance I
Output: A matching M
1: Formulate the given instance I as an integer program IP(I)
2: Solve its LP relaxation LP(I) and obtain an optimal solution x* (= {x*_{i,j}})
3: Let M := ∅
4: Set f(m) := 0 and p(m) := 1 for each man m
5: while there exists a man m such that (m is single in M) and (f(m) ≤ 3) do
6:     Let m be an arbitrary such man
7:     if p(m) is larger than the length of m's preference list then
8:         Set f(m) := f(m) + 1 and p(m) := 1
9:     else
10:        Let w be the p(m)-th woman of m's preference list
11:        if m has not proposed to w yet then
12:            Set f(m) := f(m) + x*_{m,w}
13:            Let m propose to w
14:            Set p(m) := 1
15:        else
16:            Let m propose to w
17:            Set p(m) := p(m) + 1
18:        end if
19:    end if
20: end while
21: return M
LP(I) denotes the linear program relaxation of IP(I) in which Constraint (5) is replaced by "0 ≤ x_{m,w} ≤ 1."
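To make LP(I) concrete, the following sketch builds the relaxation with the PuLP library (an assumption of this example, not a tool used in the paper); the tiny instance with men a, b and women x, y, where x ties a and b, is made up for illustration.

```python
import pulp

# toy instance (hypothetical): men's lists are strict, women's may have ties
men = {"a": ["x", "y"], "b": ["x"]}            # preference lists, best first
women = {"x": [["a", "b"]], "y": [["a"]]}      # each inner list is one tie group

A = {(m, w) for m, prefs in men.items() for w in prefs}   # acceptable pairs

def rank(prefs, p):
    # position of p's tie group in a preference list; smaller is better
    for r, group in enumerate(prefs):
        if p in group:
            return r

lp = pulp.LpProblem("MAX_SMTI_LP", pulp.LpMaximize)
# variables only for acceptable pairs, which enforces constraint (4);
# bounds 0..1 give the relaxation of constraint (5)
x = {(m, w): pulp.LpVariable(f"x_{m}_{w}", 0, 1) for (m, w) in A}

lp += pulp.lpSum(x.values())                                        # objective
for m in men:                                                       # constraint (2)
    lp += pulp.lpSum(x[m, w] for w in men[m]) <= 1
for w in women:                                                     # constraint (1)
    lp += pulp.lpSum(x[m, w] for m in men if (m, w) in A) <= 1
for (m, w) in A:                                                    # constraint (3)
    better_for_m = [x[m, j] for j in men[m]
                    if men[m].index(j) <= men[m].index(w)]
    better_for_w = [x[i, w] for i in men if (i, w) in A
                    and rank(women[w], i) <= rank(women[w], m)]
    lp += pulp.lpSum(better_for_m) + pulp.lpSum(better_for_w) - x[m, w] >= 1

lp.solve()
print({k: v.value() for k, v in x.items()})
```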
3 Approximation Algorithm
In the following, we assume that the men’s lists are strict and the women’s lists may contain ties. Algorithm 1 gives the pseudo-code for our algorithm GSA-LP. We use a variable f (m), which assigns a non-negative value to each man m. The value of f (m) is initially set to zero but increases as the algorithm proceeds. We also use another variable p(m), which stores the current position in m’s preference list. When man m proposes to woman w, w accepts this proposal if either (a) w is single in M , (b) m w M (w), or (c) m =w M (w) and f (m) > f (M (w)). Otherwise, w rejects m’s proposal. When w accepts m’s proposal, we let M := M ∪ {(m, w)} for Case (a), and let M := M ∪ {(m, w)} \ {(M (w), w)} for Cases (b) and (c). Here is an intuition of our new algorithm GSA-LP: It basically consists of three rounds of proposal sequences by each man m with the rank-adjusting value f (m). In the first round, f (m) is increased by x∗m,w whenever m sends a proposal to w for the first time (Lines 11–14). The key thing here is that if m is rejected by this new woman w (either immediately or later after once accepted), then he restarts his sequence of proposals from the top of his list (Lines 15–17). Note that in the
restarted sequence of proposals, women he proposes to are not new until w. Up to that woman, f(m) does not change and the restart does not happen. If m has proposed to all of the women in his list and he is still single, then the value of f(m) is increased by 1 and m goes to the second round (Lines 7–8). In the second round, the value of f(m) does not change and m sends a sequence of proposals from the top of his list again. If m is still single after finishing this sequence, then f(m) is increased by 1 and m restarts another sequence of proposals again (the third round). This continues until f(m) becomes greater than 3. Note that not every man goes into the third round, but if a man enters the third round, then it is guaranteed that his second round is completed. Note that just before m first proposes to w in his first round, he has proposed to all of the preceding women with the value f(m) = Σ_{j ≻_m w} x*_{m,j} but was rejected by all of them. Also, when a man m proposes to the woman at the tail of his list for the first time, f(m) ≤ 1. Finally, note that if m is single in M at the termination of GSA-LP, he has proposed to all the women in his list with the value f(m) > 2, and f(m) > 3 holds at the termination of GSA-LP. It is not hard to see that GSA-LP runs in polynomial time. For the correctness, let M be a matching obtained by GSA-LP, and let (m, w) ∈ A \ M. If m has not proposed to w, then M(m) ≻_m w and hence (m, w) is not a blocking pair. If m has proposed to w, then m was rejected by w at some time, and at this moment w must have been matched with a man m′ such that m′ ⪰_w m. Since hereafter w never accepts a proposal from a man inferior to m′, it must be the case that M(w) ⪰_w m′ ⪰_w m. Hence (m, w) is not a blocking pair, so that M is stable.
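For concreteness, the accept/reject rule (a)–(c) used by a woman in GSA-LP can be sketched as follows; the helper predicates prefers_strictly and ties_with are assumptions of this sketch rather than functions defined in the paper.

```python
def accepts(w, m, M, f, prefers_strictly, ties_with):
    """Return True if woman w accepts a proposal from man m.

    M maps each matched woman to her current partner, f maps each man to
    his current f-value; prefers_strictly(w, m1, m2) and ties_with(w, m1, m2)
    encode w's (possibly tied) preference list.
    """
    if w not in M:                                   # (a) w is single
        return True
    cur = M[w]
    if prefers_strictly(w, m, cur):                  # (b) w strictly prefers m
        return True
    if ties_with(w, m, cur) and f[m] > f[cur]:       # (c) tie broken by larger f
        return True
    return False
```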
4 Analysis of the Approximation Ratio

4.1 Overview of the Analysis
Let us fix an instance I. Let Mopt be an optimal solution, namely one of the maximum stable matchings, of I, and M be the stable matching output by GSA-LP. Let G = (U, V, E) be the bipartite graph obtained by superimposing Mopt on M, that is, each vertex in U corresponds to each man, and each vertex in V corresponds to each woman. For simplicity, we do not distinguish between a person and a corresponding vertex (e.g., a vertex of G corresponding to a man m is also denoted m). If m ∈ U and w ∈ V are matched in M, then E includes an edge (m, w), called an M-edge. Similarly, if m and w are matched in Mopt, then E includes an Mopt-edge (m, w). Graph G contains parallel edges (m, w) if (m, w) is a pair in both M and Mopt. Note that each vertex in G has degree at most two, and hence each connected component of G is either a single vertex, an alternating path, or an alternating cycle. In this paper, we refer to a path of length three starting from and ending with Mopt-edges as an augmenting three-path (see Fig. 1(a)). We can prove Proposition 1 by following a similar analysis to Király's GSA1 [13].

Proposition 1. There is no augmenting three-path in G (proof is omitted).
(Figure 1 here. The preference lists indicated in panel (b) are: m_s: w_p; m_p: w_q, w_p; m_q: w_q, w_s; w_p: m_p, m_s; w_q: (m_p m_q); w_s: m_q.)
Fig. 1. Illustrations of (a) an augmenting three-path and (b) an augmenting five-path. Solid lines represent M -edges and dashed lines represent Mopt -edges.
Let S be the set of men and women who are single in M. Also, let us partition M into P, Q, and R using the graph G: Consider a path of length five starting from and ending with Mopt-edges, which we will refer to as an augmenting five-path hereafter. (See Fig. 1(b). The preference structure shown here will be established later by Lemma 3.) Let m_s, w_p, m_p, w_q, m_q, and w_s be the men and the women on this path appearing in this order, that is, both m_s and w_s are in S, and w_p = Mopt(m_s), m_p = M(w_p), w_q = Mopt(m_p), m_q = M(w_q), and w_s = Mopt(m_q). Let P be the set of pairs (m_p, w_p) and Q be the set of pairs (m_q, w_q) on all the augmenting five-paths. Let R = M \ (P ∪ Q). Note that G does not contain a path of length one (i.e., an isolated edge) as a connected component, because if such a path (e.g., an Mopt-edge (m, w)) exists, then (m, w) is a blocking pair for M. Graph G does not contain any augmenting three-path by Proposition 1. An augmenting five-path contains one edge from each of P and Q, and three Mopt-edges. Hence the total number of Mopt-edges contained in augmenting five-paths is exactly (3/2)(|P| + |Q|). For other connected components of G, the ratio of the number of Mopt-edges to the number of M-edges is at most 4/3 (for augmenting seven-paths). We thus have Lemma 1, which is one way of bounding |Mopt| by |P|, |Q|, and |R|.

Lemma 1. |Mopt| ≤ (3/2)(|P| + |Q|) + (4/3)|R|.

Since |M| = |P| + |Q| + |R|, Lemma 1 gives |Mopt| ≤ (3/2)|M| − (1/6)|R|. However, this guarantees only the ratio of 1.5 when |R| = 0, which is exactly the worst-case example for Király's GSA1 [13]. The advantage of our new algorithm is that it allows us to apply another formula to bound |Mopt|, in the following way: Recall that x*_{i,j} is the value of x_{i,j} for the optimal solution x* of LP(I). Note that if x*_{m,w} > 0 for m, w ∈ S, then (m, w) ∈ A by Constraint (4) of LP(I), so (m, w) is a blocking pair for M, a contradiction. Hence Σ_{i,j∈S} x*_{i,j} = 0. Now, let us define the value x*(X) for a subset X ⊆ M as:

x*(X) = Σ_{(m,w)∈X} ( Σ_j x*_{m,j} + Σ_i x*_{i,w} + Σ_{j∈S} x*_{m,j} + Σ_{i∈S} x*_{i,w} ).
It is not hard to see that x*(P) + x*(Q) + x*(R) = 2 Σ_i Σ_j x*_{i,j}, since we already know Σ_{i,j∈S} x*_{i,j} = 0. Hence the optimal value of the objective function
of LP(I) can be written as (x*(P) + x*(Q) + x*(R))/2 and we have that |Mopt| ≤ (x*(P) + x*(Q) + x*(R))/2. We will later prove the following key lemma.

Lemma 2. x*(P) + x*(Q) + x*(R) ≤ (2/5)(7|P| + 7|Q| + 9|R|).

Hence we have that |Mopt| ≤ (1/5)(7|P| + 7|Q| + 9|R|), that is, 5|Mopt| ≤ 7(|P| + |Q|) + 9|R|. By Lemma 1, we have 12|Mopt| ≤ 18(|P| + |Q|) + 16|R|. By adding these two inequalities, we have 17|Mopt| ≤ 25(|P| + |Q| + |R|) = 25|M|. Thus we obtain the following theorem:

Theorem 1. The approximation ratio of GSA-LP is at most 25/17.
4.2 Properties of P, Q, R, and LP Solution
The rest of this section is devoted to the proof of Lemma 2. We first show several basic properties that will be used later. Lemma 4 is especially important since it makes a difference between our new algorithm and Király's.

Lemma 3. Consider an augmenting five-path in graph G. Let m_s, w_p, m_p, w_q, m_q, and w_s be the men and women on this path appearing in this order (note that (m_p, w_p) ∈ P and (m_q, w_q) ∈ Q). Then the following (i) – (v) hold: (i) w_q ≻_{m_q} w_s, (ii) m_p ≻_{w_p} m_s, (iii) w_q ≻_{m_p} w_p, (iv) m_p =_{w_q} m_q, and (v) f(m_p) ≤ 1 and f(m_q) ≤ 1 at the termination of GSA-LP. (Proof is omitted.)

Lemma 4. Σ_{(m,w)∈P} Σ_{j ≻_m w} x*_{m,j} ≤ Σ_{(m,w)∈Q} Σ_{j ≻_m Mopt(m), j∉S} x*_{m,j}.
Proof. Consider an augmenting five-path of Fig. 1(b). Note that both m_p and m_q get their partners in the first round (by Lemma 3(v)). Let f(m_q) be the final f-value of m_q, and f′(m_p) and f′(m_q) be the maximum f-values of m_p and m_q, respectively, when they propose to w_q (note that they may propose to w_q several times). Then it turns out that

Σ_{j ≻_{m_p} w_p} x*_{m_p,j} ≤ f′(m_p) ≤ f′(m_q) ≤ f(m_q) ≤ Σ_{j ≻_{m_q} Mopt(m_q), j∉S} x*_{m_q,j},

which proves the lemma, because (i) The first inequality is the key fact obtained from our new algorithm: Observe that in GSA-LP m_p proposes to the woman who is one position to the left of w_p in his list with f-value Σ_{j ≻_{m_p} w_p} x*_{m_p,j} for the first time. When this woman rejects m_p, m_p restarts its proposal sequence from the top and hence he proposes to w_q (≻_{m_p} w_p) with the same f-value Σ_{j ≻_{m_p} w_p} x*_{m_p,j}. (ii) The second inequality is due to the fact that w_q selected m_q rather than m_p as her partner (note that m_p =_{w_q} m_q by Lemma 3(iv)). (iii) The third inequality is obvious since the value of f(m_q) is monotone non-decreasing. (iv) The final inequality is because m_q proposed neither to Mopt(m_q) (since M(m_q) = w_q ≻_{m_q} Mopt(m_q) by Lemma 3(i)), nor to any woman in S (because once a woman receives a proposal, she is matched at the end).
We need several more equations. The first ones are

Σ_{j ≻_m w, j∈S} x*_{m,j} = 0  and  Σ_{i ≻_w m, i∈S} x*_{i,w} = 0   for (m, w) ∈ M.   (6)

For the left equation, suppose that there is a woman j such that j ∈ S and j ≻_m w. If x*_{m,j} > 0, then (m, j) ∈ A by Constraint (4) of LP(I), and hence (m, j) blocks M, which contradicts the stability of M. The right equation can also be validated in a similar way. The next equation is

Σ_{i =_w m, i∈S} x*_{i,w} = 0   for (m, w) ∈ P ∪ Q.   (7)

If there exists a man i (=_w m) such that i is single and (i, w) ∈ A, then i should have proposed to w with f(i) > 2, contradicting the fact that w selected m, whose f-value is at most 1 by Lemma 3(v). Thus this equation holds.
4.3 Bounding x*(P), x*(Q), and x*(R)
Let us define the following values:

p_m = Σ_{(m,w)∈P} Σ_{j∉S} x*_{m,j},   p_w = Σ_{(m,w)∈P} Σ_{i∉S} x*_{i,w},
q_m = Σ_{(m,w)∈Q} Σ_{j∉S} x*_{m,j},   q_w = Σ_{(m,w)∈Q} Σ_{i∉S} x*_{i,w},
r_m = Σ_{(m,w)∈R} Σ_{j∉S} x*_{m,j},   r_w = Σ_{(m,w)∈R} Σ_{i∉S} x*_{i,w},  and
π = Σ_{(m,w)∈P} Σ_{w ⪰_m j, j∉S} x*_{m,j}.
Then the following two lemmas hold, whose proofs are omitted.

Lemma 5. x*(P) + x*(Q) + x*(R) ≤ 4|P| + 4|Q| + 4|R| − 2(p_m + q_m + r_m).

Lemma 6. x*(P) + x*(Q) + x*(R) ≤ 3|P| + 4|Q| + 3|R| − (π + q_m + q_w).

Recall that our goal is to obtain a bound that looks like x*(P) + x*(Q) + x*(R) ≤ a(|P| + |Q|) + b|R| for a < 3. For this, we first obtain a bound in the form of 2(|P| + |Q|) + 6|R| + 4t from Lemma 5 and then another bound in the form of 3(|P| + |Q|) + 3|R| − t from Lemma 6. Now it turns out that the goal is immediate by calculating the worst value for t. What is nontrivial is how to find such bounds from Lemmas 5 and 6, in which Lemma 4 plays an important role.
4.4 Bounding x*(P) + x*(Q) + x*(R) by Lemma 5
We first define several quantities to simplify later expressions:

α = Σ_{(m,w)∈P} Σ_{j ≻_m w} x*_{m,j},    β = Σ_{(m,w)∈P} Σ_{Mopt(m) ⪰_m j, j∉S} x*_{m,j},
γ = Σ_{(m,w)∈Q} Σ_{Mopt(m) ⪰_m j, j∉S} x*_{m,j},   and   γ′ = Σ_{(m,w)∈Q} Σ_{j ≻_m Mopt(m), j∉S} x*_{m,j}.
Now we have Lemma 7 (whose proof is omitted):

Lemma 7. Σ_{(m,w)∈P} Σ_{i ⪰_w m} x*_{i,w} ≤ β + γ + r_m.
Now, by Constraint (3) of LP(I) and Lemma 7, we have

|P| − α = Σ_{(m,w)∈P} ( 1 − Σ_{j ≻_m w} x*_{m,j} ) ≤ Σ_{(m,w)∈P} Σ_{i ⪰_w m} x*_{i,w} ≤ β + γ + r_m.   (8)

By Lemma 4, we also have

α ≤ γ′ ≤ q_m.   (9)
Then, by Lemma 5, we have that

x*(P) + x*(Q) + x*(R) ≤ 4|P| + 4|Q| + 4|R| − 2(p_m + q_m + r_m)
    ≤ 4|P| + 4|Q| + 4|R| − 2α − 2(q_m + r_m)      (by Equation (6))
    ≤ 4|P| + 4|Q| + 4|R| − 4α − 2r_m              (by Inequality (9))
    ≤ 4|Q| + 4|R| + 4(β + γ + r_m) − 2r_m         (by Inequality (8))
    = 2|P| + 2|Q| + 4|R| + 4β + 4γ + 2r_m         (because |P| = |Q|)
    ≤ 2|P| + 2|Q| + 6|R| + 4β + 4γ                (because r_m ≤ |R|).   (10)
4.5 Bounding x*(P) + x*(Q) + x*(R) by Lemma 6
We can now prove Lemma 8:

Lemma 8. Σ_{(m,w)∈P} Σ_{j ≻_m Mopt(m)} x*_{m,j} + Σ_{(m,w)∈Q} Σ_{i ⪰_w m} x*_{i,w} ≥ |Q|.

Proof. Consider an augmenting five-path such that (m_p, w_p) ∈ P, (m_q, w_q) ∈ Q, and m_p = Mopt(w_q). We have

Σ_{j ≻_{m_p} w_q} x*_{m_p,j} + Σ_{i ⪰_{w_q} m_q} x*_{i,w_q} = ( Σ_{j ⪰_{m_p} w_q} x*_{m_p,j} − x*_{m_p,w_q} ) + Σ_{i ⪰_{w_q} m_p} x*_{i,w_q} ≥ 1,

where the equality is from m_p =_{w_q} m_q (Lemma 3(iv)) and the inequality is from Constraint (3) of LP(I) (note that (m_p, w_q) ∈ A). By summing up the above inequality for all of the augmenting five-paths, we obtain this lemma.
Then we have

π + q_m + q_w = π + γ + γ′ + q_w
    ≥ π + α + q_w + γ                                                                  (by Equation (9))
    = p_m + q_w + γ                                                                    (by Equation (6))
    ≥ ( β + Σ_{(m,w)∈P} Σ_{j ≻_m Mopt(m)} x*_{m,j} ) + Σ_{(m,w)∈Q} Σ_{i ⪰_w m} x*_{i,w} + γ   (by Lemma 3(iii) and Equations (6) and (7))
    ≥ |Q| + β + γ                                                                      (by Lemma 8).

For the second inequality, note that if (m, w) ∈ P, then Mopt(m) ≻_m w by Lemma 3(iii). We combine this fact with Equation (6) to prove that

p_m = β + Σ_{(m,w)∈P} Σ_{j ≻_m Mopt(m), j∉S} x*_{m,j} = β + Σ_{(m,w)∈P} Σ_{j ≻_m Mopt(m)} x*_{m,j}.
Then, by Lemma 6, we have that

x*(P) + x*(Q) + x*(R) ≤ 3|P| + 4|Q| + 3|R| − π − q_m − q_w ≤ 3|P| + 3|Q| + 3|R| − β − γ.   (11)
4.6 Proof of Lemma 2
If β + γ ≤ (|P| + |Q| − 3|R|)/5, we have x*(P) + x*(Q) + x*(R) ≤ 2|P| + 2|Q| + 6|R| + 4β + 4γ ≤ (2/5)(7|P| + 7|Q| + 9|R|) by Inequality (10). If β + γ > (|P| + |Q| − 3|R|)/5, we have x*(P) + x*(Q) + x*(R) ≤ 3|P| + 3|Q| + 3|R| − β − γ ≤ (2/5)(7|P| + 7|Q| + 9|R|) by Inequality (11).
5 Concluding Remarks
An obvious piece of future work is to further improve the upper bound of 25/17. Although details are omitted, we can show that the integrality gap of our IP formulation and LP relaxation is at least 1 + 1/e (> 1.3678). This does not immediately imply a lower bound for GSA-LP, but may be considered as some barometer to show its limit. Therefore, one research direction would be to narrow the gap between 25/17 and 1 + 1/e. Finally, we want to consider the possibilities of applying our method to the general MAX SMTI. Similarly as above, we can show that the integrality gap is at least 1.5 if we use the same IP formulation for the general MAX SMTI. Therefore, generalizing Király's GSA2 [13] (Király's 5/3-approximation algorithm for the general MAX SMTI) or McDermid's algorithm [16] using an optimal LP solution seems difficult unless a fundamentally new idea is introduced.

Acknowledgments. The authors would like to thank the anonymous reviewers for their valuable comments. This work was supported by KAKENHI 22240001 and 20700009.
References
1. Gale, D., Shapley, L.S.: College admissions and the stability of marriage. Amer. Math. Monthly 69, 9–15 (1962)
2. Gale, D., Sotomayor, M.: Some remarks on the stable matching problem. Discrete Applied Mathematics 11, 223–232 (1985)
3. Gusfield, D., Irving, R.W.: The Stable Marriage Problem: Structure and Algorithms. MIT Press, Boston (1989)
4. Halldórsson, M.M., Iwama, K., Miyazaki, S., Yanagisawa, H.: Randomized approximation of the stable marriage problem. Theoretical Computer Science 325(3), 439–465 (2004)
5. Halldórsson, M.M., Iwama, K., Miyazaki, S., Yanagisawa, H.: Improved approximation results for the stable marriage problem. ACM Transactions on Algorithms 3(3), Article No. 30 (2007)
6. Irving, R.W.: Stable marriage and indifference. Discrete Applied Mathematics 48, 261–272 (1994)
7. Irving, R.W., Manlove, D.F.: Approximation algorithms for hard variants of the stable marriage and hospitals/residents problems. Journal of Combinatorial Optimization 16(3), 279–292 (2008)
8. Irving, R.W., Manlove, D.F., O'Malley, G.: Stable marriage with ties and bounded length preference lists. Journal of Discrete Algorithms 7(2), 213–219 (2009)
9. Iwama, K., Manlove, D.F., Miyazaki, S., Morita, Y.: Stable marriage with incomplete lists and ties. In: Wiedermann, J., Van Emde Boas, P., Nielsen, M. (eds.) ICALP 1999. LNCS, vol. 1644, pp. 443–452. Springer, Heidelberg (1999)
10. Iwama, K., Miyazaki, S., Okamoto, K.: A (2 − c log N/N)-approximation algorithm for the stable marriage problem. IEICE Transactions 89-D(8), 2380–2387 (2006)
11. Iwama, K., Miyazaki, S., Yamauchi, N.: A (2 − c/√N)-approximation algorithm for the stable marriage problem. Algorithmica 51(3), 342–356 (2008)
12. Iwama, K., Miyazaki, S., Yamauchi, N.: A 1.875-approximation algorithm for the stable marriage problem. In: Proc. SODA, pp. 288–297 (2007)
13. Király, Z.: Better and simpler approximation algorithms for the stable marriage problem. Algorithmica (2009), doi: 10.1007/s00453-009-9371-7
14. Khot, S., Regev, O.: Vertex cover might be hard to approximate to within 2 − ε. Journal of Computer and System Sciences 74(3), 335–349 (2008)
15. Manlove, D.F., Irving, R.W., Iwama, K., Miyazaki, S., Morita, Y.: Hard variants of stable marriage. Theoretical Computer Science 276(1-2), 261–279 (2002)
16. McDermid, E.J.: A 3/2-approximation algorithm for general stable marriage. In: Albers, S., Marchetti-Spaccamela, A., Matias, Y., Nikoletseas, S., Thomas, W. (eds.) ICALP 2009. LNCS, vol. 5555, pp. 689–700. Springer, Heidelberg (2009)
17. Roth, A.E., Rothblum, U.G., Vate, J.H.V.: Stable matchings, optimal assignments, and linear programming. Mathematics of Operations Research 18(4), 803–828 (1993)
18. Teo, C.-P., Sethuraman, J.: The geometry of fractional stable matchings and its applications. Mathematics of Operations Research 23(4), 874–891 (1998)
19. Teo, C.-P., Sethuraman, J., Tan, W.P.: Gale-Shapley stable marriage problem revisited: strategic issues and applications. In: Cornuéjols, G., Burkard, R.E., Woeginger, G.J. (eds.) IPCO 1999. LNCS, vol. 1610, pp. 429–438. Springer, Heidelberg (1999)
20. Yanagisawa, H.: Approximation Algorithms for Stable Marriage Problems. Ph.D. Thesis, Kyoto University (2007)
Strongly Stable Assignment

Ning Chen¹ and Arpita Ghosh²

¹ School of Physical and Mathematical Sciences, Nanyang Technological University, Singapore
[email protected]
² Yahoo! Research, Santa Clara, CA, USA
[email protected]
Abstract. An instance of the stable assignment problem consists of a bipartite graph with arbitrary node and edge capacities, and arbitrary preference lists (allowing both ties and incomplete lists) over the set of neighbors. An assignment is strongly stable if there is no blocking pair where one member of the pair strictly prefers the other member to some partner in the current assignment, and the other weakly prefers the first to some partner in its current assignment. We give a strongly polynomial time algorithm to determine the existence of a strongly stable assignment, and compute one if it exists. The central component of our algorithm is a generalization of the notion of the critical set in bipartite matchings to the critical subgraph in bipartite assignment; this generalization may be of independent interest.
1 Introduction
The classical stable marriage problem studies a setting with an equal number of men and women, each with a strict preference ranking over all members of the other side. Since the seminal work of Gale and Shapley on stable marriage [5], a number of variants of the stable matching problem have been studied, relaxing or generalizing different assumptions in the original model. One particularly practical generalization is to relax the requirement of strict and complete preferences over all alternatives: this gives rise to the stable marriage problem with ties and incomplete lists (SMTI), where a man can have indifferences, or ties, between women in his preference list, and need not rank all women (and similarly for women). When preference lists have ties, different notions of stability can be defined depending on what qualifies as a blocking pair: a weakly stable matching [7,9,15] is one where there is no pair (i, j) such that both i and j strictly prefer each other to their matched partners; a strongly stable matching [7,14] is one where there is no pair such that one member of the pair strictly prefers the other to its current partner in the matching, and the other member weakly prefers the first to its current partner in the matching. Unlike weakly stable matchings, a strongly stable matching need not always exist. Irving [7] gave a beautiful algorithm to solve the question of deciding whether or not a strongly stable matching exists M. de Berg and U. Meyer (Eds.): ESA 2010, Part II, LNCS 6347, pp. 147–158, 2010. c Springer-Verlag Berlin Heidelberg 2010
and finding one if it does, and Manlove [14] extended the algorithm to the case of ties and incomplete lists. Motivated by online matching marketplaces where buyers and sellers trade multiple items, we investigate the generalization of SMTI to assignment problems, where nodes on both sides of a bipartite graph have multiple units of capacity c(i) ≥ 1 (as opposed to unit capacity in the stable marriage model) and edges in the graph have arbitrary capacities c(i, j). We study the algorithmic question of finding strongly stable assignments— feasible assignments where there is no pair (i, j) such that both i and j weakly prefer allocating at least one additional unit on the edge (i, j), and at least one of i and j strictly prefers to do so. The many-to-many matching problem, a special case of assignment where at most one unit can be allocated between a pair of nodes, i.e., c(i, j) = 1, turns out to be adequate to showcase most of the complexity introduced by the generalization from matching to assignment. For clarity, we therefore state all our results in terms of the many-to-many matching problem1 , and defer the generalization to assignment with arbitrary edge capacities to the full version of the paper. The generalization from one-to-one matching, where nodes have unit capacity, to many-to-many matching with multi-unit node capacities, introduces significant algorithmic complexity to the problem of strong stability. To explain why, it is first necessary to understand the main idea behind the algorithms of Irving [7] and Manlove [14] for the unit capacity case: Men propose to women in decreasing order of preference, so that at each stage a man’s proposals are (all) at the top of his current list and a woman’s proposals are (all) at the bottom of her current list. The algorithm deletes pairs that can never occur in a strongly stable matching — this happens at one of two times, the first being when a woman receives a strictly better proposal (exactly as in the deferred acceptance algorithm). The second is when a woman is over-demanded, that is, there are multiple men in the engagement graph to whom this woman must be matched to avoid a blocking pair — when this happens, no man at this level, i.e., the bottom of this woman’s preference list, can be matched to her in a strongly stable matching, since these other men will form a blocking pair. The algorithms in [7,14] delete all such pairs which can never occur in any strongly stable matching and then look for a strongly stable matching in a final engagement graph based on these modified lists. In the many-to-many matching setting, there are two major complicating differences. The first difference is quite fundamental, and comes from the fact that the notion of a critical set — defined as the unique maximal subset of men with the largest deficiency, which is the difference between the size of a set and its neighborhood in a bipartite graph — does not generalize in the obvious way to many-to-many matching. With unit capacities, the set of over-demanded women turns out to be precisely the neighborhood of the critical set of men. The obvious generalization when nodes have multi-unit capacities is to define the deficiency as the difference between the total capacity of a set and its neighborhood — that is, define δ(S) = i∈S c(i) − j∈N (S) c(j), where N (S) is the set of neighbors of 1
(Footnote 1: The problem of finding strongly stable many-to-many matchings has been studied previously in [13]; see Section 1.1.)
S, and extend the definition of the critical set to be the subset of men maximizing δ(S)2 . However, this obvious extension of the deficiency and the corresponding definition for the critical set does not work, and fails in two ways — first, the neighborhood of this set does not correctly identify the set of over-demanded women (Example 1). Second, it does not possess the property that the size of a maximum many-to-many matching is given by the total capacity of men minus this maximum deficiency (Example 2). We therefore need to appropriately extend the notion of critical set to the multi-unit capacity setting — as it turns out, a subset of men S ⊆ A is no longer an adequate description for the extension of a critical set. Rather, we need to specify a critical subgraph, which is described by a partition of the set of men and their capacities, as well as a subset of women. In Section 2, we develop a definition of the critical subgraph that we show retains both properties of the critical set in unit-capacity matching, and show that the critical subgraph can be computed in polynomial time. We note that the extension of the critical set from bipartite matching to bipartite assignment is of independent interest, and is one of the major contributions of the paper: The critical set [12] plays a central role in algorithms for computing stable outcomes in matching markets, both without and with monetary transfers [7,3,1], by providing a way to identify the set of ‘over-demanded’ women (or items); our generalization of the critical set therefore might be useful in extending these market-clearing algorithms to marketplaces with multi-unit node capacities. The second difference from [7] is that with unit capacity, the edges in an engagement graph are all at the same level for each node (top for men, bottom for women), whereas this is not the case with multi-unit capacities — when c(i) ≥ 1, a man might need to propose to women of different levels in his preference list, and a woman might need to retain proposals from different levels, to meet their respective capacities. This means that not all edges in the engagement graph incident to a node are equally preferred by that node. Therefore, we cannot simply seek a maximum matching or attempt to identify the set of over-demanded women in an engagement graph as in [7], without appropriately processing it to account for the fact that not all edges are equal. In Section 3, we present our algorithm to determine the existence of a strongly stable assignment and compute one, if it exists. All proofs can be found in the full version of the paper [2]. 1.1
Related Work
There has been much work on stable matchings in bipartite graphs focusing on different variants and applications of the original stable marriage problem. As mentioned earlier, our work extends the algorithms of Irving [7] and Manlove [14] for the one-to-one matching problem. Irving et al. [8] gave a strongly stable matching algorithm for the many-to-one matching problem (i.e., nodes on one side of the graph can have multi-unit capacities); the algorithm was improved later by Kavitha et al. [11]. For other related algorithmic problems that arise 2
(Footnote 2: In fact, this is exactly the definition used in [13].)
from the study of stable matchings, see Gusfield and Irving [6] and two recent survey papers by Iwama and Miyazaki [10] and Roth [16]. Economic properties for stable matchings are discussed in [17,19]. The most obvious work related to ours is [13], which studies the problem of finding strongly stable many-to-many matchings, i.e., the special case of our assignment model with unit edge capacities c(i, j) = 1. Unfortunately, however, the algorithm proposed in [13] is incorrect, both in terms of the processing of the engagement graph to account for the fact that the edges incident to a node belong to multiple levels on its preference list, as well as in terms of identifying over-demanded women (that algorithm uses the extension of the critical set based on the difference between total capacities). We give explicit examples showing two different points at which the algorithm in [13] fails in the full version of the paper [2].
2
The Critical Subgraph
This section develops the notion of the critical subgraph, generalizing the critical set from unit-capacity matching to the setting with multi-unit capacities. As in the rest of the paper, for simplicity we restrict ourselves to many-to-many matchings: Given a bipartite graph G = (A, B; E) with node capacities c(k) ≥ 1 for k ∈ A ∪ B, a many-to-many matching3 of G is a subset of edges such that each node k is matched to at most c(k) pairs. Let dG (k) denote the degree of node k in G. Assume without loss of generality that c(k) ≤ dG (k), since a node cannot be matched to more neighbors than its degree in G. Given a subset S of nodes, we use NG (S) (or simply N (S)) denote the set of neighbors of S in G. 2.1
The Critical Set
Given a bipartite graph G = (A, B; E) where all nodes have unit capacity, the critical set is the (unique) maximal subset with the largest deficiency, where the deficiency of S ⊆ A is defined as the difference between the size of S and the size of its neighborhood in B, i.e., δ(S) |S| − |N (S)|. The critical set is closely related to maximum matchings [12,7] — the size of a maximum matching in G is given by |A| − δ(X), where X = arg maxS⊆A {|S| − |N (S)|} is the critical set of A, and δ(X) is its deficiency. In many-to-many matching, nodes can have arbitrary capacities c(·) ≥ 1. We begin with two examples that show that the obvious generalization of the deficiency of a set S ⊆ A — defining it as the difference between the total of nodes in that set and the total capacity of its neighbors, i.e., capacity i∈S c(i) − j∈N (S) c(j), and defining the critical set to be the (unique) subset maximizing this quantity — fails to capture two important properties of the corresponding definition for unit capacity matching: the neighborhood of the critical set does not correctly identify the set of “over-demanded” women, and the deficiency no longer relates to the size of the maximum matching. 3
It is also called simple b-matching and is a well-studied concept [18].
An over-demanded woman j is one for whom there is some maximum matching in which j has an unmatched neighbor with leftover capacity. Specifically, j is over-demanded if there is a maximum matching M where there is an edge (i, j) ∈ E such that (i, j) ∈ / M and the number of matched neighbors of i in M is less than his capacity c(i). The first example illustrates that the neighborhood of the critical set defined this way does not correctly identify the set of overdemanded women when c(·) ≥ 1. Example 1. Consider graph G = (A, B; E) where A = {i1 , i2 , i3 , i4 } with node capacities (2, 1, 1, 1) and B = {j1 , j2 , j3 } with capacities (2, 1, 1). Connect (i1 , j1 ), (i1 , j2 ) and (i2 , j1 ), and connect in {i2 , i3 , i4 } to all nodes in {j2 , j3 }. all nodes The unique subset maximizing i∈S c(i)− j∈N (S) c(j) is A, with neighborhood B. However, j1 is never over-demanded in any maximum matching, since both edges (i1 , j1 ) and (i2 , j1 ) belong to every maximum many-to-many matching (recall that at most one edge, or one unit of capacity, can be assigned between any pair (i, j) in a many-to-many matching). The next example shows that in addition, the size of the maximum matching is not related to the deficiency defined according to the difference ofcapacities, i.e., it is not true that the size of a maximum matching is given by i∈A c(i) − maxS⊆A i∈S c(i) − j∈N (S) c(j) when c(i), c(j) ≥ 1. Example 2. Let (A1 ; B1 ) be a complete bipartite graph with A1 = {i1 , . . . , i10 }, B1 = {j1 , . . . , j10 }, and c(ik ) = 10 for ik ∈ A1 , c(jk ) = 4 for jk ∈ B1 . Let (A2 ; B2 ) be another complete bipartite graph with all unit capacity nodes A2 = } (n is an arbitrary number). G = (A, B) consists {i1 , . . . , in }, B2 = {j1 , . . . , j2n of these two graphs plus an extra node j0 with capacity c(j0 ) = 11 (i.e., A = A1 ∪ A2 and B = B1 ∪ B2 ∪ {j0 }) and edges connecting all nodes in A to j0 . Clearly, any maximum matching of G has size n + 50 (e.g., fully match all nodes in B1 to A1 , all nodes in A2 to B2 , and j0 to all nodes in A1 ). The maximum deficiency of G is given by A1 and its neighbor set B1 ∪{j 0 }, with value c(i)− c(j)− c(j ) = 100 − 40 − 11 = 49. However, 0 i∈A1 j∈B1 i∈A c(i)− 49 = 100 + n − 49 = n + 51, which is not the size of the maximum matching. To appropriately extend the notion of the critical set to multi-unit capacities, we first state the following lemma for the unit capacity case, allowing an alternative view of the critical set. Lemma 1. Let X be the critical set of A and Y = N (X) be its neighbor set. X and Y define a partition of the graph G = (A, B) into two subgraphs G1 = (S, T ) and G2 = (X, Y ), where S = A \ X and T = B \ Y . Then any maximum matching M1 of G1 has size |S|, and any maximum matching M2 of G2 has size |Y |. Further, M1 ∪ M2 gives a maximum matching of G with size |S| + |Y | = |A| − (|X| − |Y |). 2.2
Critical Subgraph
We now extend the notion of critical set to the multi-unit capacity setting: when c(i), c(j) ≥ 1, the critical set X ⊆ A is defined not just by the identities of the
152
N. Chen and A. Ghosh
nodes in X, but also by a vector of associated reduced capacities. In addition, we cannot simply choose the neighborhood of X to define the deficiency (and the set of over-demanded women): we will need a different partition of the nodes in B as well. We define the critical subgraph of a bipartite graph G below. Definition 1 (Critical subgraph). Given a bipartite graph G = (A, B; E) with node capacities c(k) ≥ 1 for k ∈ A ∪ B, for any i ∈ A and S ⊆ B, let dS (i) = |{j ∈ S | (i, j) ∈ E}| denote the degree of i restricted on S. We say S ⊆ B is a perfect subset if there is a maximum matching of G such that every i ∈ A is matched to cS (i) = min{c(i), dS (i)} pairs in S. That is, S is perfect if this maximum matching matches each i to the maximum possible number of neighbors in S. Let S ∗ ⊆ B be the unique (Corollary 1) maximal perfect subset (i.e., S ∗ is not a proper subset of any other perfect set). (Note that if there is no perfect subset, S ∗ = ∅.) Define the critical capacity of A to be x = (x(i))i∈A where x(i) = c(i)−cS∗ (i). Let X = {i ∈ A | x(i) > 0} and Y = B \ S ∗ . We define (X, Y ) to be the critical subgraph of G (i.e., an induced subgraph given by X and Y ), with capacity x(i) for i ∈ X andcapacity c(j)for j ∈ Y . The deficiency of the critical subgraph is defined to be i∈X x(i) − j∈Y c(j). For instance, in Example 1, the maximal perfect set is S ∗ = {j1 }; the critical capacity is x = (1, 0, 1, 1); and the critical subgraph is given by X = {i1 , i3 , i4 } and Y = {j2 , j3 }. In Example 2, the maximal perfect set is S ∗ = B2 ∪ {j0 }; and the critical subgraph is given by X = A1 and Y = B1 , where the critical capacity of each node in X is 9. Note that in both examples, the size of the maximum matching (4 and 50 + n, respectively) is exactly the total capacity of A (5 and 100 + n, respectively), minus the deficiency of the critical subgraph (1 and 50, respectively); this is formalized in Corollary 2. Properties. We first state the following fundamental lemma, which says that to prove a perfect set S, instead of showing a globally maximum matching of G as required by the definition, it suffices to find a locally maximum matching of S. Lemma 2. Given a bipartite graph G = (A, B; E) and a subset of nodes S ⊆ B, if there is a maximum matching M in the subgraph (X, cS ; S), where X = {i ∈ A | cS (i) > 0}, such that every node i ∈ X is matched to cS (i) pairs, then there is a maximum matching of G containing M . Hence, S is a perfect set. This lemma allows us to prove that the maximal perfect set is unique, which implies that the critical subgraph and the critical capacity are uniquely defined as well (Corollary 1). Corollaries 2 and 3 below generalize Lemma 1 to multiunit capacities, showing that the critical subgraph correctly captures the set of over-demanded vertices and the deficiency. Corollary 1. If S1 ⊆ B and S2 ⊆ B are two perfect subsets, then S1 ∪ S2 is a perfect subset as well. Hence, there is a unique maximal perfect subset.
Corollary 2. Let S ⊆ B be the maximal perfect set of graph G = (A, B; E) and x(i) = c(i) − cS (i). Consider two subgraphs G1 = (X1 , cS ; S) where X1 = {i ∈ A | cS (i) > 0}, and G2 = (X2 , x; Y ) where X2 = {i ∈ A| x(i) > 0} and Y = B \ S. Then any maximum matching M 1 of G1 has size i∈X1 cS (i), and any maximum matching M2 of G2 has size j∈Y c(j). Further, M1 ∪ M2 gives a maximum matching of G with size equal to the total capacity of A minus the deficiency of the critical subgraph. ⎛ ⎞ cS (i) + c(j) = c(i) − ⎝ x(i) − c(j)⎠ i∈X1
j∈Y
i∈A
i∈X2
j∈Y
The above expression is similar in appearance to (though not the same as) the characterization of maximum size b-matchings (Theorem 21.4, [18]); however, our focus here is to identify the set of over-demanded women. Corollary 3. Let S ⊆ B be the maximal perfect set and Y = B \ S. For any j ∈ Y , we have dG (j) > c(j). Further, there exists a maximum matching M where there is i ∈ NG (j) such that (i, j) ∈ / M and i is under-assigned, i.e., it has fewer than c(i) neighbors in M . That is, Y is precisely the set of over-demanded nodes. Recall that in the definition of a perfect set S, we only need to find one maximum matching in which every node i ∈ A is matched to cS (i) pairs in S. The following claim says that if this holds for one matching, it holds for every matching — that is, if we require the set S to satisfy this requirement in every maximum matching, we obtain the same collection of perfect sets — so that the two definitions are equivalent. Lemma 3. Given a graph G = (A, B; E), let S ⊆ B be the maximal perfect set. Let cS (i) = min{c(i), dS (i)}. Then for any maximum matching of G, each i ∈ A is matched to exactly cS (i) neighbors in S. Computation. The maximal perfect set and critical subgraph can be computed by the following algorithm. Critical-Subgraph Given G = (A, B; E), with capacities c(k) ≤ dG (k) for all k ∈ A ∪ B 1. Compute an arbitrary maximum many-to-many matching M in graph G = (A, B; E) 2. For each i ∈ A, set x(i) = c(i)−dM (i), where dM (i) is the degree of i ∈ A in M 3. Let X = {i ∈ A | x(i) > 0} and Y = ∅ 4. While there are i0 ∈ X and j0 ∈ B \ Y such that edge (i0 , j0 ) ∈ /M – add Y ← Y ∪ {j0 } – let X ← X ∪ {i} for each edge (i, j0 ) ∈ M matched to j0 by M 5. Return S = B \ Y and (X, Y )
154
N. Chen and A. Ghosh
Note that the algorithm can start with an arbitrary maximum many-to-many matching, and recursively adds nodes into X and Y using essentially what are alternating paths with respect to the matching M (all such paths can be found in linear time using, for instance, breadth first search). Therefore, the running time of the algorithm is equivalent to finding a maximum many-to-many matching, e.g., O(m2 n) by Edmonds-Karp algorithm [4] of finding a maximum flow, where m is the number of edges and n is the number of nodes. We will show that the output of the algorithm is independent of the choice of the maximum matching M , and correctly computes the maximal perfect set, and therefore, critical subgraph. Theorem 1. The subset S = B\Y and (X, Y ) returned by algorithm CriticalSubgraph are the maximal perfect set and critical subgraph, respectively.
3
Algorithm for Strongly Stable Assignment
An instance of the bipartite stable many-to-many matching problem consists of two disjoint sets A (men) and B (women), where each node has a preference ranking over nodes in the other side. We give all definitions for man-nodes, i.e., nodes in A; all definitions for B are symmetric. We denote the preferences of a node i ∈ A via a preference list L(i) over nodes in B. The preference lists can have ties, i.e., i need not have strict preferences over all j ∈ L(i), and can be incomplete, i.e., L(i) need not include all j ∈ B. (We assume that lists L(·) have been processed so that i ∈ L(j) if and only if j ∈ L(i).) We use i and i to denote the preferences of i: if j i j , then i strictly prefers j to j ; if j i j , then i weakly prefers j to j . Each node i only wants to be assigned to nodes on his preference list L(i); he has a capacity c(i), which is the maximum number of pairs that can be feasibly assigned to i. Definition 2 (Strong stability). Given a feasible many-to-many matching M , we say (i, j) ∈ / M is a blocking pair for M if one of the following conditions holds: – both i and j have leftover capacity in M , and belong to each others’ preference lists. – i has leftover capacity in M and there is (i , j) ∈ M such that i j i ; or j has leftover capacity and there is (i, j ) ∈ M such that j i j . – there are (i , j), (i, j ) ∈ M such that either j i j , i j i or j i j , i j i . M is strongly stable if it does not admit a blocking pair. That is, a pair (i, j) blocks M if by matching with each other, at least one of them will be strictly better off and the other will not become worse off. In general, a strongly stable matching need not exist, even with unit capacities (i.e., c(·) = 1). We next give an algorithm for determining the existence of a strongly stable many-to-many matching and computing one (if it exists), based on the critical subgraph described in the previous section. For convenience, we will use many-to-many matching and matching interchangeably throughout this section.
Strongly Stable Assignment
3.1
155
Algorithm
The algorithm Strong-Match starts with men proposing to women at the head of their current lists, where the head of a man’s preference list L(i) is the set of all women tied at the top level in L(i). Each proposal from i to j translates to adding an edge (i, j) to the bipartite engagement graph G on the sets A and B. We say a man i ∈ L(j) is dominated in a woman j’s preference list if |{i | i j i, i ∈ NG (j)}| ≥ c(j), i.e., the number of j’s neighbors in G that she strictly prefers to i exceeds her capacity. Every time a woman receives a proposal, she breaks all engagements with men who are now dominated in her list, i.e., the edge (i, j) is removed from the engagement graph G and i and j are deleted from L(j) and L(i) respectively. Men continue proposing until their degree in the engagement graph G is greater than or equal to their capacity, or there are no women left in their preference list. This part of the algorithm is exactly like the algorithms for finding a strongly stable matching with unit capacity [7,14]. Note, however, that because of the multi-unit capacities, a node need not be indifferent amongst its neighbors in our engagement graph G, i.e., it can have neighbors in G from different levels in its preference list, which never happens in the unit capacity matching version. As the algorithm proceeds, the lists L(·) shrink: men’s neighbors get progressively worsen, and women receive progressively better proposals. Once all men have finished making proposals, Strong-Match processes the engagement graph G to account for the fact that edges incident to a node belong to different levels. We define the sets PG (i) and IG (i) below: PG (i) is the set of “preferred” neighbors of i that i must be matched to in a strongly stable matching in G (if one exists), and IG (i) is the set of “indifferent” neighbors of i that i may or may not be matched to in a strongly stable matching in G. Definition 3 (PG (i), IG (i) and EP,P , EI,I ). Given an engagement graph G = (A, B) produced by Strong-Match, and a node i ∈ A, divide i’s neighbors in G into levels L1 , . . . , Lm according to L(i), where i is indifferent between all nodes in the same level Lk and strictly prefers each Lk to Lk+1 . Let r∗ = r ∗ max{r | k=1 |Lk | ≤ c(i)}. Then PG (i) = {j | (i, j) ∈ G, j ∈ Lk , k = 1, . . . , r }, and IG (i) = {j | (i, j) ∈ G, (i, j) ∈ / PG (i)} (and similarly for j ∈ B). That is, – If i has more neighbors than his capacity c(i) (i.e., dG (i) > c(i)), PG (i) consists of the neighbors in L1 , . . . , Lm−1 (by the rule of the algorithm, i stops proposing when his degree in G is greater than or equal to c(i)). IG (i) consists of neighbors at level Lm . – If i’s degree in G is less than or equal to c(i) (i.e., dG (i) ≤ c(i)), all his neighbors belong to PG (i) (in this case, i will propose to all women in L(i)), and IG (i) = ∅. Let
EP,P = {(i, j) ∈ G | j ∈ PG (i), and i ∈ PG (j)} be the set of edges such that both nodes belong to each others’ PG (·)-groups, and EI,I = {(i, j) ∈ G | j ∈ PG (i), or i ∈ PG (j)} be the set of edges where at least one of i or j is in the PG (·)-group of the other.
156
N. Chen and A. Ghosh
Note that by definition, |PG (i)| ≤ c(i) and |PG (j)| ≤ c(j). Further, PG (·) can be empty (when |L1 | > c(·) or the node has no neighbors in G); if this occurs, all neighbors (if any) of the corresponding node are at the same level (by the algorithm). We divide all edges in G into (P, P), (P, I), (I, P), and (I, I) types, where an edge (i, j) is, for example, a (P, I) type if j ∈ PG (i) and i ∈ IG (j), and so on (note that these sets change through the course of the algorithm, as the engagement graph changes). If a strongly stable matching is to be found using only the edges in the current engagement graph G, it must contain all edges in the subset EI,I defined above (i.e., edges where at least one endpoint strictly prefers, i.e., needs to be matched to, the other), since otherwise such an edge gives a blocking pair for the matching. The algorithm therefore attempts to remove all edges in EI,I , i.e., the (P, P), (P, I), and (I, P) types, from G without exceeding the capacity of any node. By the definition of the sets PG (·), all (P, P) edges can be removed from G without violating any node’s capacity (i.e., every node has adequate capacity for all edges in EP,P since |PG (i)| ≤ c(i) and |PG (j)| ≤ c(j)). Next, we proceed to (P, I) edges: in Step 5(a) of the algorithm, the graph H1 only contains (P, I) edges. For every woman who does not have adequate capacity cH1 (j) = c(j) − |PG (j)| in H1 , for all the (P, I) edges incident to her, we delete all pairs in her bottom level in G (and below) in Step 5(a), since if such a pair occurs in a strongly stable matching, it would be blocked by one of her neighbors in H1 . (The capacity of j in H1 is defined this way to ensure that j can be matched to all her PG (j) neighbors without exhausting her capacity, i.e., pushing her remaining capacity below 0.) Finally, in Step 5(b), the algorithm removes all (I, P) edges to form H2 , since if such an edge cannot be included in a strongly stable matching in G, the corresponding pair blocks that matching. Note that every woman’s remaining capacity is nonnegative after the removal of all these edges in EI,I (Step 5.(a) of the algorithm has already dealt with all nodes j who do not have enough leftover capacity for (P, I) edges). However, a man’s capacity might be smaller than the number of edges removed before forming H2 (in this case, no strongly stable matching exists). The graph H2 contains only (I, I) edges, so that finally all edges belong to the same level for each node. However, note that the remaining capacity of nodes in H2 can be greater than one — thus our problem of identifying over-demanded women is still different from [7,14], where all nodes have unit capacity in addition to having all neighbors at the same level in their preference list. Note that if all men cannot be fully matched in H2 , there is a blocking pair in every maximum matching in H2 : any under-assigned man will form a blocking pair with some neighbor in H2 to whom he is not matched. To continue, we therefore need to identify every woman j who is over-demanded in H2 , and delete all pairs from the bottom level of each such over-demanded woman. By Corollary 3, the set of these over-demanded women is given precisely by the critical subgraph. We therefore use the algorithm Critical-Subgraph to identify the critical subgraph in H2 and delete all such pairs in Step 5(b).
Strongly Stable Assignment
157
Strong-Match 1. Set each woman j ∈ B to be unmarked 2. Initialize the engagement graph G = (A, B; E) where E(G) = ∅ 3. While there is i ∈ A such that dG (i) < c(i) and i has a non-empty list – for each j ∈ B at the head of the list L(i) • remove j from L(i) and add (i, j) to E(G) • if j is fully-engaged (i.e., dG (j) ≥ c(j)), set j to be marked ∗ for each dominated man i on j’s list, delete pair i ↔ j 4. Set b(i) = cG (i), b(j) = cG (j) for i ∈ A, j ∈ B 5. (a) Construct graph H1 from G by removing all edges in EP,P and IG (i) for each i ∈ A. If H1 = ∅ – set the capacity of each j ∈ B in H1 to be cH1 (j) = c(j)−|PG (j)| – if there is j ∈ B such that dH1 (j) > cH1 (j) • for each such j ∈ B with dH1 (j) > cH1 (j) ∗ set j to be marked and let i ∈ NH1 (j) be a neighbor of j in H1 ∗ for each man i with i j i , delete pair i ↔ j • goto Step 3 (b) Construct graph H2 from G by removing all edges in EI,I ; reducing b(i) and b(j) by 1 each for every removed edge (i, j) (remove all nodes with non-positive b(i) and their incident edges from H2 ) – set each node’s capacity in H2 to be cH2 (i) = b(i) and cH2 (j) = b(j) – find the critical subgraph X ⊆ A and Y ⊆ B of H2 – if Y = ∅ • for each woman j ∈ Y ∗ set j to be marked and let i ∈ NH2 (j) be a neighbor of j in H2 ∗ for each man i with i j i , delete pair i ↔ j • goto Step 3 6. Reset b(i) = cG (i), b(j) = cG (j) for i ∈ A, j ∈ B (a) Construct graph G from G by removing all edges in EI,I ; set b(i) ← b(i) − 1 and b(j) ← b(j) − 1 for each removed edge (i, j) ∈ EI,I ; let the capacity of each node in G be b(·) (b) If b(i) ≥ 0 for all i ∈ A – let M be any maximum matching in G , and let M = M ∪ EI,I – for each j ∈ B, if the following conditions hold • if j is marked, it is matched to c(j) pairs in M • if j is unmarked, it is matched to cG (j) pairs in M then return M as a strongly stable matching (c) Else no strongly stable matching exists for the given instance
Theorem 2. Algorithm Strong-Match determines the existence of a strongly stable assignment and computes one (if it does) in strongly polynomial time4 O(m3 n). 4
We note that it might be possible to use the techniques in [11] (as well as faster algorithms for max-flow) to improve the runtime of our algorithm; we do not focus on optimizing the runtime in this paper.
158
N. Chen and A. Ghosh
References 1. Chen, N., Deng, X., Ghosh, A.: Competitive Equilibria in Matching Markets with Budgets. ACM SIGecom Exchanges 9.1 (2010) 2. Chen, N., Ghosh, A.: Strongly Stable Assignment, Full version available on arXiv 3. Demange, G., Gale, D., Sotomayor, M.: Multi-Item Auctions. Journal of Political Economy 94(4), 863–872 (1986) 4. Edmonds, J., Karp, R.: Theoretical Improvements in Algorithmic Efficiency for Network Flow Problems. Journal of the ACM 19(2), 248–264 (1972) 5. Gale, D., Shapley, L.S.: College Admissions and the Stability of Marriage. American Mathematical Monthly 69, 9–15 (1962) 6. Gusfield, D., Irving, R.W.: The Stable Marriage Problem: Structure and Algorithms. MIT Press, Cambridge (1989) 7. Irving, R.W.: Stable Marriage and Indifference. Discrete Applied Mathematics 48, 261–272 (1994) 8. Irving, R.W., Manlove, D., Scott, S.: Strong Stability in the Hospitals/Residents Problem. In: Alt, H., Habib, M. (eds.) STACS 2003. LNCS, vol. 2607, pp. 439–450. Springer, Heidelberg (2003) 9. Iwama, K., Manlove, D., Miyazaki, S., Morita, Y.: Stable Marriage with Incomplete Lists and Ties. In: Wiedermann, J., Van Emde Boas, P., Nielsen, M. (eds.) ICALP 1999. LNCS, vol. 1644, pp. 443–452. Springer, Heidelberg (1999) 10. Iwama, K., Miyazaki, S.: Stable Marriage with Ties and Incomplete Lists. In: Encyclopedia of Algorithms (2008) 11. Kavitha, T., Mehlhorn, K., Michail, D., Paluch, K.E.: Strongly Stable Matchings in Time O(nm) and Extension to the Hospitals-Residents Problem. ACM Transactions on Algorithms 3(2) (2007) 12. Liu, C.L.: Introduction to Combinatorial Mathematics. McGraw-Hill, New York (1968) 13. Malhotra, V.S.: On the Stability of Multiple Partner Stable Marriages with Ties. In: Albers, S., Radzik, T. (eds.) ESA 2004. LNCS, vol. 3221, pp. 508–519. Springer, Heidelberg (2004) 14. Manlove, D.F.: Stable Marriage with Ties and Unacceptable Partners, Technical Report TR-1999-29, University of Glasgow (1999) 15. Manlove, D.F., Irving, R.W., Iwama, K., Miyazaki, S., Morita, Y.: Hard Variants of Stable Marriage. Theoretical Computer Science 276(1-2), 261–279 (2002) 16. Roth, A.E.: Deferred Acceptance Algorithms: History, Theory, Practice, and Open Questions. International Journal of Game Theory, 537–569 (2008) 17. Roth, A.E., Sotomayor, M.: Two-Sided Matching: A Study in Game-Theoretic Modeling and Analysis. Cambridge University Press, Cambridge (1992) 18. Schrijver, A.: Combinatorial Optimization. Springer, Heidelberg (2003) 19. Sotomayor, M.: Three Remarks on the Many-to-Many Stable Matching Problem. Mathematical Social Sciences 38(1), 55–70 (1999)
Data Structures for Storing Small Sets in the Bitprobe Model Jaikumar Radhakrishnan1 , Smit Shah2, , and Saswata Shannigrahi1, 1
2
Tata Institute of Fundamental Research, Mumbai, India {jaikumar,saswata}@tifr.res.in Institute of Technology, Nirma University, Ahmedabad, India {05bce106}@nirmauni.ac.in
Abstract. We study the following set membership problem in the bit probe model: given a set S from a finite universe U , represent it in memory so that membership queries of the form “Is x in S?” can be answered with a small number of bitprobes. We obtain explicit schemes that come close to the information theoretic lower bound of Buhrman et al. [STOC 2000, SICOMP 2002] and improve the results of Radhakrishnan et al. [ESA 2001] when the size of sets and the number of probes is small. We show that any scheme that stores sets of size two from a universe of size m and answers membership queries using two bitprobes requires space Ω(m4/7 ). The previous best lower bound (shown by Buhrman et √ al. using information theoretic arguments) was Ω( m). The same lower bound applies for larger sets using standard padding arguments. This is the first instance where the information theoretic lower bound is found to be not tight for adaptive schemes. We show that any non-adaptive three probe√scheme for storing sets of size two from a universe of size m requires Ω( m) bits of memory. This extends a result of Alon and Feige [SODA 2009] to small sets.
1
Introduction
This paper addresses the following set membership problem in the bit probe model: given a set S from a finite universe U , represent it in memory so that membership queries of the form “Is x in S?” can be answered by reading a few bits. This problem was first studied by Buhrman, Miltersen, Radhakrishnan and Venkatesh [2]. Let (n, m, s, t)-scheme be a solution to this problem that uses s bits of memory and answers queries correctly for all n-element subsets of an m-element universe using t probes. Let sN (n, m, t) denote the minimum space s such that there exists a deterministic (n, m, s, t)-scheme that answers queries with t non-adaptive probes. (We replace the subscript N by A when we wish to emphasize that the probes are allowed to be adaptive.) Using this terminology, the results of Buhrman et al. [2] can be stated as
A part of the work was done when the author was visiting TIFR Mumbai. The author has been partially supported by IBM PhD Fellowship 2009-2010.
M. de Berg and U. Meyer (Eds.): ESA 2010, Part II, LNCS 6347, pp. 159–170, 2010. c Springer-Verlag Berlin Heidelberg 2010
160
J. Radhakrishnan, S. Shah, and S. Shannigrahi
sN (n, m, 2t + 1) = O(ntm2/(t+1) ); m and sA (n, m, t) = Ω(nt( )1/t ). n In this work, we will primarily be concerned with bounds when n and t are small (say constants, or at most O(log m)), and focus on the constant in front of 1t in the exponent of m; in particular, we would like to know if that constant could be 1 (matching the lower bound). The upper bound shown by Buhrman et al. [2] used probabilistic existence arguments. Explicit constructions were studied by Radhakrishnan et al. [3]. √ In particular, they showed that sA (n, m, lg(n + 1) + 1) ≤ (n + lg(n + 1)) m. Note that the dependence on m in this result is in general inferior to the m4/t dependence in the bound of above. Our first result improves both these results when t is much larger than n. Result 1. (a) We give an explicit scheme that shows that sA (n, m, t) ≤ Ctn · m1/(t−n+1) , for t > n ≥ 2, and a constant C independent of n and t. (b) Using a probabilistic argument we obtain a deterministic scheme that shows −t+1 ) that sA (n, m, t) ≤ Cntm1/(t−nt2 , for t > n ≥ 2, and a constant C independent of n and t. The main point to note is that for small n and t somewhat larger than n, the exponent of m in these bounds approaches 1t , as in the lower bound above. The above results use space close to the lower bound when the number of probes is large. Are savings possible with a small number of probes? Alon and Feige [1] studied the “power of two, three and four probes.” They showed the following upper bounds for adaptive schemes. Two probes: For n < log m, sA (n, m, 2) = O( 1 3
2 3
mn log lognm log m
).
Three probes: sA (n, m, 3) = O(n m ). 1 3 Four probes: sA (n, m, 4) = O(n 4 m 4 ). These three upper bounds are remarkable in that they show that the space requirement can be o(m) even with a small constant number of probes (disproving a conjecture of [2]). Even for the simple case of t = 2 and n = 2, the current bounds [2] are not tight: √ m ≤ sA (2, m, 2) ≤ m2/3 . We conjecture that the upper bound is tight. For non-adaptive probes, Buhrman et al. determined that sN (2, m, 2) = m; this, in particular, shows that sA (2, m, 2) = o(sN (2, m, 2)). Our next result shows an improved lower bound even when the queries are allowed to be adaptive. Result 2. sA (2, m, 2) = Ω(m4/7 ). This implies a similar lower bound as long as n = o(m). This is the first lower bound for general schemes (previous bounds apply only to non-adaptive schemes)
Data Structures for Storing Small Sets in the Bitprobe Model
161
that beats the information theoretic lower bound of Buhrman et al. mentioned above. Alon and Feige [1] showed the following lower bound for non-adaptive three probe schemes with n ≥ 16 log m. nm ). sN (n, m, 3) = Ω( log m We obtain the following result for n = 2. √ Result 3. sN (2, m, 3) = Ω( m). √ Note that this result shows that the m dependence on m shows up already for sets of size 2, and extends the result of Alon and Feige above to sets of size less than log m. Furthermore, Result 1 (b) shows that sA (2, m, 3) = O(m0.4 ). Thus, adaptive schemes are more efficient than non-adaptive schemes for n = 2, t = 3. A similar result was observed by Buhrman et al. for n = 2, t = 2. Buhrman et al. [2] showed a non-explicit construction of a non-adaptive t probe scheme with O(ntm4/t ) space. Ta-Shma [4] gave explicit constructions of non-adaptive schemes. One can using the techniques of Buhrman et al. obtain n an explicit non-adaptive (n, m, s, t) scheme with s = 2tm t+n−1 . For n = 2, 3, this is an improvement over the O(ntm4/t ) scheme of Buhrman et al. The details are omitted from this version of the paper. Techniques used Upper bounds: The two upper bounds results (Results 1 (a) and 1 (b)) are shown using different methods. Result 1 (a) is elementary, but it uses adaptiveness in a careful way. It employs schemes for storing sets of various sizes, and distributes the set to be stored among these schemes. Using the first few probes the membership algorithm is led to the scheme where this element may be found. Result 1 (b) uses a probabilistic argument. The query scheme first make t − 1 non-adaptive probes and determines the section of the memory it must finally probe to determine if the given element is in the set. The first t − 1 probes can be explicitly described, it is only the location of the last probe is assigned probabilistically. Lower bounds: The proof of Result 2 uses a graph theoretic formulation of the problem. It was shown by Radhakrishnan et al. [3] that for certain restricted schemes, s(2, m, 2) = Ω(m2/3 ). While not resorting to the artificial restriction of the previous proof, our proof only yields a weaker bound of m4/7 . As√ stated the result beats the routine information-theoretic lower bound of Ω( m) proved earlier, where one shows that the small data structure leads to a short encoding of sets of size 2; since bits, one infers a lower bound on any such encoding must use at least log m 2 the size of data structure. The proof in this paper departs from this information
162
J. Radhakrishnan, S. Shah, and S. Shannigrahi
theoretic framework, in that it does not directly derive encodings of sets from the efficient data structure. The proof in this paper uses the graph theoretic formulation similar to that of Radhakrishnan et al. [3], but needs a more technical and complicated argument to deal with what intuitively appear to be easy cases. Result 3 in contrast applies only to non-adaptive schemes but allows for three probes. As stated above, Buhrman et al. showed that sN (2, m, 2) = m. We extend their argument to three probes. This involves considering cases based on the functions used by the query algorithm. In most cases, we are able to show that the scheme √ contains a (2, m , s, 2)-non-adaptive scheme on a universe of size m = Ω( m); the results of Buhrman et al. then immediately gives us the desired lower bound.
2
Result 1: Adaptive Schemes
In this section, we obtain explicit schemes that come close to the information theoretic lower bound of Buhrman et al. [2] and improve the results of Radhakrishnan et al. [3] when the size of sets and the number of probes is small. We first mention a result of Radhakrishnan et al. that we will use to prove Theorem 1 and Theorem 2. Lemma 1. (Theorem 1 of [3]) There is an explicit √ adaptive (n, m, s, t) scheme with t = lg(n + 1) + 1 and s = (n + lg(n + 1)) m. We now present a proof of Result 1 (a) for the case of n = 2. Theorem 1. For any t > 2, there is an explicit adaptive (2, m, s, t) scheme with 1 s = t2 m t−1 . Proof. We show how a multiprobe scheme can be built up from some schemes that use fewer probes. Claim. Let (1, m2 , s1 , t − 2) and (2, m2 , s2 , t − 1) be two adaptive schemes. Then, there exists a (2, m1 m2 , s, t) adaptive scheme such that s ≤ s1 + s2 + 2m1 . We will justify this claim below. Let us first verify that this yields our theorem by induction. For t = 3, the claim follows from Lemma 1 which tells that there is 1 1 an explicit adaptive (2, m, 4m 2 , 3) scheme; this gives sA (2, m, 3) ≤ 4m 2 . Also, it 1 is easy to see that sA (1, m, t) ≤ tm t . (To each element of the universe assign a distinct t-tuple from fx ∈ [m1/t ]t . The memory will consist of t tables T1 , . . . , Tt , each with m1/t bits, and the query “Is x in S?” will be answered yes iff Tj [fx [j]] = t−2 1 1 for all j.) Now, let m1 = m t−1 and m2 = m t−1 . Then, the claim gives: 1 1 1 1 sA (2, m1 m2 , t) ≤ (t − 2)m t−1 + (t − 1)2 m t−1 + 2m t−1 ≤ t2 m t−1 for any t > 2. It remains to justify the claim. Partition the universe into m1 blocks B1 , B2 , . . ., of size at most m2 each. We have a table T with m1 rows, each with two bits (this accounts for the 2m1 term), and space for one (1, m2 , s1 , t − 2)-scheme N1 and another adaptive (2, m2 , s2 , t − 1)-scheme N2 . To answer the query “Is x in S?” for an element x ∈ Bi , the query algorithm reads the first bit of T [i], and if
Data Structures for Storing Small Sets in the Bitprobe Model
163
it is 1, it invests the remaining t − 1 probes in N2 assuming a (2, m2 , s2 , t − 1)scheme for Bi is implemented there. Otherwise, it reads the second bit and if it is 1, invests the remaining t − 2 probes in the scheme N1 assuming an (1, m2 , s1 , t − 2)-scheme for Bi is implemented there. If the two bits read from T are 00, it answers No. Now, given a set {x, y} the storage is determined as follows. (i) If there are distinct blocks Bi and Bj with one element each, we set T [i] = 10, T [j] = 01 and set the other entries of T to 00. We store the elements of Bi using N2 and the elements of Bj using N1 . (ii) If {x, y} ⊆ Bi for some block Bi , we set T [i] = 10 and set the other entries of T to 00. The elements in Bi are now represented in the scheme N2 . 2 We next generalize this result to larger sets. This will complete the proof of Result 1 (a). Theorem 2. For every n ≥ 2 and t > n, there is an explicit adaptive (n, m, s, t)1 scheme with s = tn m t−n+1 . Proof. We demonstrate a scheme inductively. Lemma 1 shows that there exists 1 an explicit adaptive (n, m, 2nm 2 , n + 1) scheme for all n ≥ 2. (Lemma 1 actually demonstrates a scheme with smaller space and fewer probes, but this scheme is sufficient for our proof.) Moreover, it is proven in Theorem 1 above that there is 1 an explicit adaptive (2, m, t2 m t−1 , t) scheme for every t > 2. We use these two schemes as the base cases for our induction argument (note that 2n ≤ (n + 1)2 1 for all n ≥ 2). We also note that there is an non-adaptive (1, m1 , t1 m1 t1 , t1 ) scheme for every m1 and every t1 ≥ 2. 1 For induction, we assume that an explicit adaptive (i2 , m2 , t2 i2 m2 t2 −i2 +1 , t2 ) scheme exists for every m2 , and every i2 and t2 satisfying 2 ≤ i2 < n and 1 i2 < t2 ≤ t. We also assume that an explicit adaptive (n, m3 , i3 n m3 i2 −n+1 , i3 ) scheme exists for every m3 and every i3 satisfying n < i3 < t. We now demonstrate our storage scheme. We divide the universe of size m t−n 1 into m t−n+1 blocks of size at most m t−n+1 each. The first part of the storage 1 scheme consists of a table T with m t−n+1 entries, each with an entry which is at most n bits long. Each entry of this table corresponds to a block. Assume that there are l blocks each of which contain at least one element inside it (note that l is at most n). We order these blocks by the number of elements inside them; let the ordering be B1 , B2 , . . . , Bl where Bk contains at least as many elements as in Bk+1 , for all 1 ≤ k ≤ l − 1. Note that Bk contains at most n − k + 1 elements. At the entry of T that corresponds to Bk , we store a string of (k − 1) zeroes followed by an 1. If a block does not contain any element, we store n zeroes in the corresponding entry of the table. The second part of the storage scheme consists of (n−1) adaptive schemes and one non-adaptive scheme. We denote them by S1 , S2 , . . . , Sn . For every 1 ≤ j ≤ t−n 1 (n − 1), Sj is an explicit adaptive (n − j + 1, m t−n+1 , (t − j)n−j+1 m t−n+1 , t − j) scheme whose existence is guaranteed by the induction hypothesis. Sn is a nont−n 1 adaptive (1, m t−n+1 , (t − n)m t−n+1 , t − n) scheme. For all 1 ≤ j ≤ l (note that
164
J. Radhakrishnan, S. Shah, and S. Shannigrahi
l is the number of blocks that contain at least one element inside them), Sj is used to store the block Bj . Let S{t,n} be the total space required by the above storage scheme. We note 1 that the first part of the storage scheme takes nm t−n+1 amount of space. Hence, 1 1 1 1 S{t,n} = nm t−n+1 + (t − 1)n m t−n+1 + (t − 2)n−1 m t−n+1 + . . . + (t − n)m t−n+1 . 1 This number is less than tn m t−n+1 for all n ≥ 2 and t > n. The query scheme for an element x in the universe is as follows. The scheme first finds the block x belongs to and then probes the corresponding location in the table. It sequentially probes the binary string stored at that location till it either finds an 1 or finds a string containing n zeroes. If it finds an 1 at the i-th position, it gets directed to the scheme Si . If that scheme returns ”yes”, the query scheme answers ”yes” to the query x. If it finds a string of n zeroes in the table, it answers that the element x is not present in the universe. 2 We use a probabilistic argument to show that there exists a scheme which uses less space than the space in the above theorems. This proves Result 1 (b). Theorem 3. Suppose n and t are integers satisfying n ≥ 2 and t > n. Then, 2t−1
there exists an adaptive (n, m, s, t) scheme with s = 8nt · (2m) t2t−1 −(n−1)(t−1) . This scheme is non-explicit. Proof. We prove the theorem for n = 2. The proof for n > 2 uses similar ideas. (See below for the complete proof.) The storage scheme consists of two parts: the index part and the actual storage part. Let s be an integer (whose value will be determined later) such that t−1 m blocks of size at most (s 2t−1 the universe of size m is divided into s 2t−1 )t−1 each. The index part consists of a (t − 1)-partite complete (t − 1)-uniform hyper t−1 graph H with s 2t−1 vertices in each partition. This hypergraph has s 2t−1 distinct hyperedges. Each block is assigned a unique hyperedge, called its index hyperedge. The vertices of the hyperedges are used to store either 0 or 1. We say that the index hyperedge of a block contains k if the sequentially stored bits at the vertices of the hyperedge represent the integer k. The actual storage part consists of 2t−1 tables T0 , T1 , T2 , . . ., T2t−1 −1 , each t (t−1)2
of size s . Each of these tables is divided into (s ) 2m sub-tables of size m t−1 each. For every 0 ≤ i ≤ 2 − 1, each block of the universe is (s )t−1 2(t−1)2 randomly assigned one sub-table of Ti . We call two blocks to be colliding in Ti if they are assigned the same sub-table in Ti . If there is a block B that contains at most two elements (and others contain none), the storage scheme stores its characteristic vector inside a sub-table in one of the tables Tj and assigns j to B’s index hyperedge. Any other block is assigned a number different from j in its index hyperedge and the storage scheme stores 0 in all locations of the remaining tables. If there are two blocks B1 and B2 that contain one element each, we probabilistically show below that it is possible to find a table Tk and a scheme Sk such that: (i) the index hyperedges of both B1 and B2 contain k, (ii) B1 and B2 do not collide in Tk and their sub-tables
Data Structures for Storing Small Sets in the Bitprobe Model
165
store the corresponding characteristic vectors, (iii) the index hyperedges that are not subsets of the union of the index hyperedges of B1 and B2 contain a number not equal to k, (iv) neither B1 nor B2 collides with at most 2t−1 − 2 other blocks whose index hyperegdes are subsets of the union of B1 ’s and B2 ’s index hyperedges and therefore contain k in their index hyperedges, and (v) all tables other that Tk store 0 in all locations inside them. We call (B1 , B2 ) to be a bad pair if no table Tk satisfies the above conditions. We need to show that no such bad pair exists in our scheme. From the query scheme described below, it will be clear that such a storage scheme correctly stores the elements of the universe. Given an element x in the universe, the query scheme first finds out the block B it belongs to. It then makes t − 1 sequential probes into B ’s index hyperedge. If the index hyperedge contains i, then the query scheme looks at Ti ’s sub-table that contains the characteristic vector of B . The query scheme returns a ”yes” if and only if it finds an 1 at the location of x in the characteristic vector. Let us now show that a scheme Sk (as described above) exists. For any i, the probability that a pair of blocks collide in Ti is at most (s )t 2m(t−1)2 . Therefore,
the probability that even one of 2 · 2t−1 − 3 pairs of blocks collide in Ti is at most 2 · 2t−1 (s )t 2m(t−1)2 . Hence, the probability that (B1 , B2 ) is a bad pair is at most 2t−1 2t (s )t 2m(t−1)2 . It follows that the expected number of bad pairs of blocks is 2t−1 2(t−1) t at most s 2t−1 2 (s )t 2m(t−1)2 . So, there exists an assignment of sub2t−1 2(t−1) t 2 (s )t 2m(t−1)2 tables in which there are at most s 2t−1 bad pairs of blocks. Even if half of all possible pairs of blocks are bad, we get a scheme for storing m elements by deleting at most one block from each bad pair of blocks. 2 If we summarize the above discussion, it follows that s has to satisfy the following inequality. t−1 2(t−1) t s2 2 This is satisfied when s ≥ scheme for
m 2
m t (s ) 2(t−1)2
2t−1
t−1 t−1 s2 ≤ 2
2t−1
32 t2t−1 −t+1 . 2t m
Thus, the total space required for a 2t−1
sized universe is t · (2t−1 )s = 16t · m t2t−1 −t+1 . Hence, there exists 2t−1
an adaptive (2, m, s, t) scheme with s = 16t · (2m) t2t−1 −t+1 . Complete Proof of Theorem 3: We use a storage scheme similar to the one described in the proof for n = 2. The only difference now is that the scheme has to ensure that any n blocks are stored correctly. Each block can now collide with at most nt−1 other blocks. This number used to be 2t−1 in the case n = 2. In summary, we need to find an s that satisfies
166
J. Radhakrishnan, S. Shah, and S. Shannigrahi
t−1 n(t−1)
s2
n·n
t−1
⎡ ⇒2
(t−1)2 (n−1)+(2t−1)2t−1
m t (s ) 2(t−1)2
2t−1 ⎤2t−1
⎢ ⎥ ⎥ ⎢ m ⎢ ⎥ . ⎢ t⎥ ⎢ t 1− (n−1)(t−1) ⎥ t2t−1 ⎣ 2 (s ) ⎦
t−1 t−1 s2 ≤ 2
≤
1 2
(1)
n
If
s ≥
8n 2t
t2t−1 t2t−1 −(n−1)(t−1)
2t−1
· (m) t2t−1 −(n−1)(t−1) ,
(2)
then the left hand side of the equation (1) is 2(t−1)
2
(n−1)+(2t−1)2t−1 t2t−1
(8)
=
1 (t+1)2t−1 −(t−1)2 (n−1)
(2)
.
This is at most 12 whenever (t+1)2t−1 −(t−1)2 (n−1) ≥ 1 (since n ≥ 2 and t > n). Note that in order to satisfy equation (2) it is enough to find an s satisfying 2t−1 s ≥ 16n (m) t2t−1 −(n−1)(t−1) (this follows from the fact n ≥ 2 and t > n). From 2t 2t−1
t−1 this, we get an (n, m = 8nt (m) t2t−1 −(n−1)(t−1) . 2 , s , t) scheme where s = ts 2 2
3
Result 2: Lower Bound for Two-Probe Adaptive Schemes
√ In this section, we show that sA (2, m, 2) = Ω(m4/7 ); this improves the Ω( m) bound which follows from a direct counting argument, but it falls short of the current best upper bound O(m2/3 ), which we conjecture to be tight. Preliminaries. Fix an adaptive (2, m, s, 2)-scheme. Our goal is to show that s = Ω(m4/7 ). A two-probe adaptive query algorithm is specified using three access functions a, b, c : [m] → [s], where the query “Is x in S?” is answered by first probing location a(x), and then probing locations b(x) or c(x) depending on the bit the first probe returned. For adaptive schemes we may assume (with at most a constant factor increase in space) that the answer to the query is the last bit read. Such a scheme can be associated with a bipartite (multi-)graph G = (B := [s], C := [s], E := {b(x), c(x) : x ∈ [m]}). The edge b(x), c(x) will be labeled by the element x. Note that E is naturally partitioned into sets E1 , E2 , . . . , Es , where Ei = {b(x), c(x) : x ∈ [m] and a(x) = i}.
Data Structures for Storing Small Sets in the Bitprobe Model
B
C
w
x
c
y
x
y
z x
z
Bα
z b
y w
x y Cα
z
B
167
C
Fig. 1. (a) The pair {x, y} cannot be stored (left), (b) The pair {x, y } cannot be stored (right)
Assumption: Each Ei is a matching. (We may restrict attention to a sub-universe of size Ω(m), so that this assumption always holds. Details omitted.) We will label edges from the same partition using variants of the same letter. For example, in Figure 1 (a), the eight edges come from four matchings: e.g. the edges labeled x and x both belong to a common matching, Eα , edges labeled y and y both belong to a common matching, say Eβ , etc. The obstruction. The configuration of edges and labels depicted in Figure 1 (a) has a special significance for us, for if it were to exist in our graph, then the underlying scheme cannot store all sets of size two. In particular, it is easy to verify that if this configuration exists, then there is no way to store the set {x, y}. 1 Goal. Show that if s < 10 m4/7 , then there exists an obstruction to store some pair {x, y}. In the rest of the proof we will address this question directly in graph theoretic terms. We will denote the vertices involved in Eα by Bα ⊆ B and Cα ⊆ C. Let Fα = Bα ×Cα , that is, Fα is the set of pairs in the graph obtained by replacing Eα by a complete bipartite graph between its end points. For a pair (b, c) ∈ B × C, let d(b, c) = |{α : (b, c) ∈ Fα }|. Clearly,
d(b, c) =
b,c
|Eα |2 ≥
α∈[s]
m2 , s
(3)
where we used α∈[s] |Eα | = m (and the Cauchy-Schwarz inequality) to justify the last inequality. For α ∈ [s], let Δα = (b,c)∈Fα d(b, c). Then, α∈[s]
Δα =
(b,c)
d(b, c)2 ≥
m4 , s4
where we used (3) to justify the last inequality. Thus, there is an α ∈ [s] such 4 that Δα ≥ m s5 . Fix such an α.
168
J. Radhakrishnan, S. Shah, and S. Shannigrahi
Claim. Δα − m ≤ 2s2 . 4
2 If this claim holds, then we have m s5 − m ≤ 2s , which implies that s ≥ It thus remains to prove the claim.
4 1 7 10 m .
Proof of claim: We will now interpret Δα in another way. Let Bα,β = Bα ∩ Bβ and Cα,β = Cα ∩ Cβ ; let Cα,β be the subset of Cβ matched to Bα,β in the matching Eβ , and similarly let Bα,β be the subset of Bβ matched to Cα,β in the matching Eβ . Then, Δα = |Bα,β | · |Cα,β | = |Bα,β | · |Cα,β |. β∈[s]
β∈[s]
Let Fα,β = Bα,β × Cα,β − Eβ . It follows that |Fα,β | ≥ |Bα,β | · |Cα,β | − |Eβ |, and 2 |F | ≥ Δ − m. Now, suppose Δ − m > 2s . Then, by averaging over α α α,β β∈[s] all pairs (b, c) ∈ [s] × [s] we may fix a pair (b, c) that appears in Fα,β for three different choices, β1 , β2 and β3 , of β. (We may assume that β1 , β2 = α.) The resulting situation is described in Figure 1 (b), where we use x, x to label edges from Eβ1 , y, y for edges from Eβ2 and z, z for edges from Eβ3 . It is easy to verify that this results in an obstruction to storing the pair {x, y }—a contradiction. 2
4
Result 3: Lower Bound for Three-Probe Non-adaptive Schemes
In this section, we show that any non-adaptive three√probe scheme for storing sets of size two from a universe of size m requires Ω( m) bits of memory. This extends a result of Alon and Feige [1] to small sets. The following two lemmas will be used in the proof of Theorem 4 below. Lemma 2. Let G be a bipartite graph with v vertices on both sides. If it does not have a path of length three, then it can have at most 2v − 1 edges. Lemma 3. (Theorem 8(1) of [2]) sN (2, m, 2) = m. We now prove Result 3. Theorem 4. For any m, a non-adaptive (2, m, s, 3) scheme uses s ≥ space.
1 √ 32 m
Proof. We divide the 16 functions from {0, 1}2 to {0, 1} into three classes. 1. OR-type functions: x ∨ y, x ∨ y¯, x¯ ∨ y, x¯ ∨ y¯ 2. XOR-type functions: x ⊕ y, x ⊕ y¯ 3. AND-type functions: x ∧ y, x ∧ y¯, x ¯ ∧ y, x ¯ ∧ y¯, and 0, 1, x, y, x ¯, y¯ (functions that depend on at most one variable can be assumed to have an 1 as the other variable of an AND function)
Data Structures for Storing Small Sets in the Bitprobe Model
169
In a non-adaptive scheme S, each z belonging to the universe is assigned three locations l1 , l2 , l3 in the storage and a boolean function fz : {0, 1}3 → {0, 1} that is used to decode whether z is a member or not. We prove that if the functions are the same for all z in the universe then for this type of (2, m, s , 3) scheme (let 1 us call it type-1 scheme), s ≥ 12 m 2 . Note that there are 256 different functions m mapping {0, 1}3 to {0, 1} and therefore there must be at least 256 elements of 1
m 2 the universe which use the same function. This proves that s is at least 12 ( 256 ) for the scheme S. 1 In order to prove s ≥ 12 m 2 for any type-1 scheme S we prove a lower bound for the the following type of scheme (let us call it type-2 scheme). In this scheme all z in the universe use the same function and the 3 probes to determine the membership of every z are made to 3 distinct storages of size s each. We show 1 1 below that s ≥ 12 m 2 . This proves that s ≥ 12 m 2 in any type-1 scheme because otherwise we can copy the storage of (2, m, s , 3) scheme 3 times to obtain a 1 type-2 scheme that uses less than 12 m 2 space in each of its three storages. We assume to the contrary that there exists a type-2 scheme S that uses less 1 than 12 m 2 space in each of its 3 storages. Let f (l1 , l2 , l3 ) denote the function for decoding any element of the universe where l1 , l2 , l3 are the bits stored at the first, second and third storages respectively. In any such scheme, there exists a 1 set U of at least 2m 2 elements in the universe that probe the same location in the first probe. The function f is defined as follows. If l1 is 0, f is a function f0 (l2 , l3 ) of the last two bits, and it is f1 (l2 , l3 ) otherwise. We show that in both the following cases, our assumption of S using less 1 than 12 m 2 space in each of its 3 storages leads to a contradiction. In each case, we show that there exists a subset of size 2 that cannot be stored using this scheme. We define a bipartite graph G on the last two storages of the scheme as follows: two locations are connected by an edge if and only if they are the last two storage locations of an element of the universe. Note that each element of L has a unique edge in this graph (because the first storage location is common).
Case 1. (one of f0 or f1 is either an OR-type function or an XOR-type function) We first assume that the function is f0 and it is an OR-type function. Let F be a maximal acyclic subgraph of G and UF be the elements whose edges 1 1 appear in F . Then, |UF | ≤ m 2 − 1. Let U ∗ = U \ UF . Then, |U ∗ | ≥ m 2 + 1. ∗ Now, observe that to store any pair {x, y} ⊂ U , the scheme must place an 1 at the location of the first probe, i.e., at l1 . This yields a two-probe scheme for sets of size at most two for the universe U ∗ . However, by Lemma 3, any 1 1 such scheme needs space at least |U ∗ | ≥ m 2 + 1, whereas G has m 2 vertices. This leads to a contradiction. Let us now assume that f0 is an XOR-type function. Let ei1 be an edge of G that is not contained in F . Then, ei1 must form an even length cycle {ei1 = (v1 , v2 ), ei2 = (v2 , v3 ), . . . , ei2u = (v2u , v1 )} which corresponds to the set of elements C = {i1 , i2 , . . . , i2u } respectively. Let us first consider the case when f0 is x ⊕ y. We denote by bi the bit corresponding to a vertex vi . In order to store all elements of C \ {i1 } as non-members, the bits bi ’s has to simultaneously satisfy the equations b2 + b3 = 0 (mod 2), b3 + b4 = 0 (mod
170
J. Radhakrishnan, S. Shah, and S. Shannigrahi
2), . . ., b2u + b1 = 0 (mod 2). Adding the equations gives us b1 + b2 = 0 (mod 2) which forces the element i1 to be stored as a non-member. Let us now consider the case when f0 is x⊕ y¯. In order to store all elements of C (except i1 ) as non-members, the bits bi ’s have to simultaneously satisfy the equations b2 +b3 = 1 (mod 2), b3 +b4 = 1 (mod 2), . . ., b2u +b1 = 1 (mod 2). Since there are odd number of equations, these adds up to b1 + b2 = 1 (mod 2) which forces the element i1 to be stored as a non-member. Hence, we observe that to store any pair {x, y} ⊂ U ∗ , the scheme must place an 1 at the location of the first probe, i.e., at l1 . This leads to a contradiction as before. Case 2. (Both f0 and f1 are AND-type functions) From Lemma 2, it follows that G must have a path of length 3. Let us denote it by {ep , eq , er }, where p, q, r are the corresponding elements of U . Consider the set {p, r}. If f0 is of the form x ∧ y, both the endpoints of both ep and er have to be assigned 1. This makes both the endpoints of eq to be 1, which leads to q being wrongly stored as a member. It is easy to see that this argument works for other 2 AND-type functions and for f1 .
5
Concluding Remarks
We have studied the set membership problem in the bitprobe model with the aim of determining if the exponent of m in these bounds can be made to approach 1t . – Even for small n and t, our lower bounds are not tight. We conjecture that 2 sA (2, m, 2) = Θ(m 3 ). 1 – We have shown that Ω(m 3 ) ≤ sA (2, m, 3) ≤ O(m0.4 ). The upper bound is probabilistic; it can be implemented using limited independence, but it is not fully explicit. We believe that a simple explicit scheme should match our upper bound. – We conjecture that the information theoretic lower bound is not tight for any n ≥ 2 and t ≥ 2; we are able to show this for n = 2, t = 2, but believe this to be true in general.
References 1. Alon, N., Feige, U.: On the power of two, three and four probes. In: Proc. Symposium on Discrete Algorithms 2009, pp. 346–354 (2009) 2. Buhrman, H., Miltersen, P.B., Radhakrishnan, J., Venkatesh, S.: Are bitvectors optimal?. In: Proc. Symposium on Theory of Computing 2000, pp. 449–458 (2000); SIAM J. Computing 31, 1723–1744 (2002) 3. Radhakrishnan, J., Raman, V., Rao, S.S.: Explicit Deterministic Constructions for Membership in the Bitprobe Model. In: Proc. European Symposium on Algorithms 2001, pp. 290–299 (2001) 4. Ta-Shma, A.: Storing Information with Extractors. Information Processing Letters 83, 267–274 (2002)
On Space Efficient Two Dimensional Range Minimum Data Structures Gerth Stølting Brodal1 , Pooya Davoodi1 , and S. Srinivasa Rao2 1
2
MADALGO , Department of Computer Science, Aarhus University, IT Parken, ˚ Abogade 34, DK-8200 ˚ Arhus N, Denmark {gerth,pdavoodi}@cs.au.dk School of Computer Science and Engineering, Seoul National University, S. Korea
[email protected]
Abstract. The two dimensional range minimum query problem is to preprocess a static two dimensional m by n array A of size N = m · n, such that subsequent queries, asking for the position of the minimum element in a rectangular range within A, can be answered efficiently. We study the trade-off between the space and query time of the problem. We show that every algorithm enabled to access A during the query and using O(N/c) bits additional space requires Ω(c) query time, for any c where 1 ≤ c ≤ N . This lower bound holds for any dimension. In particular, for the one dimensional version of the problem, the lower bound is tight up to a constant factor. In two dimensions, we complement the lower bound with an indexing data structure of size O(N/c) bits additional space which can be preprocessed in O(N ) time and achieves O(c log2 c) query time. For c = O(1), this is the first O(1) query time algorithm using optimal O(N ) bits additional space. For the case where queries can not probe A, we give a data structure of size O(N · min{m, log n}) bits with O(1) query time, assuming m ≤ n. This leaves a gap to the lower bound of Ω(N log m) bits for this version of the problem.
1
Introduction
In this paper, we study time-space trade-offs for the two dimensional range minimum query problem (2D-RMQ). This problem has applications in computer graphics, image processing, computational Biology, and databases. The input is a two dimensional m by n array A of N = m · n elements from a totally ordered set. A query asks for the position of the minimum element in a query range q = [i1 · · · i2 ] × [j1 · · · j2 ], where 1 ≤ i1 ≤ i2 ≤ m and 1 ≤ j1 ≤ j2 ≤ n, i.e., RMQ(A, q) = argmin(i,j)∈q A[i, j]. We assume w.l.o.g. that m ≤ n and that all the entries of A are distinct (identical entries of A are ordered lexicographically by their index).
Center for Massive Data Algorithmics, a Center of the Danish National Research Foundation.
M. de Berg and U. Meyer (Eds.): ESA 2010, Part II, LNCS 6347, pp. 171–182, 2010. c Springer-Verlag Berlin Heidelberg 2010
172
G.S. Brodal, P. Davoodi, and S. Srinivasa Rao
Table 1. Results for the 1D-RMQ problem. The term |A| denotes the size of the input array A in bits. Using Cartesian tree [13,14,17,8,6] Yes [2] No [16] Yes [12] Yes [11] No Theorem 1 Theorem 2 Yes Reference
1.1
Using Access Space (bits) Query Time LCA to A Yes Yes O(n log n) + |A| O(1) No Yes O(n log n) + |A| O(1) Yes No 4n + o(n) O(1) No Yes 2n + o(n) + |A| O(1) Yes No 2n + o(n) O(1) Yes O(n/c) + |A| Ω(c) Yes Yes O(n/c) + |A| O(c)
Previous Work
One Dimensional. The 1D-RMQ problem is the special case of the two dimensional version where N = n. It has been studied intensively and has numerous applications (Fischer [11] mentions some of them). Several solutions achieve O(1) query time using additional space O(n log n) bits, by transforming RMQ queries into lowest common ancestor (LCA) queries [1] on the Cartesian tree [18] of A [13,14,17,8,6]. Alstrup et al. [2] solved the problem with the same bounds but without using Cartesian trees. Fischer and Heun [12] presented an O(1) query time solution using 2n + o(n) additional bits which uses a Cartesian tree but makes no use of the LCA structure, and gives a simple solution for the static LCA problem1 . Sadakane [16] gave an O(1) query time algorithm using 4n + o(n) bits space which does not access A during the query. Fischer [11] decreased the space to 2n + o(n) bits by introducing a new data structure named 2d-Min-Heap instead of using the Cartesian tree. Table 1 summarizes these results along with the results of this paper. Two Dimensional. A na¨ıve solution for the 2D-RMQ problem is to perform a brute force search through all the entries of the query q in worst case Θ(N ) time. Preprocessing A can reduce the query time. A na¨ıve preprocessing is to store the answer to all the O(N 2 ) possible queries in a lookup table of size O(N 2 log N ) bits. The query time becomes O(1) with no probe into A. All the published algorithms, on the d > 1 dimensional RMQ problem, perform probes into A during the query. The d-dimensional RMQ problem was first studied by Gabow et al. [13]. They apply the range trees introduced by Bentley [7] to achieve O(logd−1 N ) query time using additional space O(N logd N ) bits and O(N logd−1 N ) preprocessing time. Chazelle and Rosenberg [9] gave an algorithm to compute the range sum in the semigroup model, which can be applied to solve the RMQ problem. Their construction achieves O(1) query time using additional space O(N ·αk (n)2 ·log N ) bits with O(N ·αk (n)2 ) preprocessing 1
Fischer and Heun [12] claim 2n − o(n) bits lower bound for the additional space, however their proof is incorrect which, e.g., follows by Theorem 2.
On Space Efficient Two Dimensional Range Minimum Data Structures
173
Table 2. Results for the 2D-RMQ problem. The contributions of [13,9,4] and Theorem 1 can be generalized to the multidimensional version of the problem. Reference Query time Space (bits) Preprocessing time [13] O(log N ) O(N log2 N ) + |A| O(N log N ) [9] O(1) O(N αk (n)2 log N ) + |A| O(N αk (n)2 ) [3] O(1) O(kN log N ) + |A| O(N log[k+1] N ) [10] Ω(N log m) [4] O(1) O(N log N ) + |A| O(N ) Theorem 1 Ω(c) N/c + |A| Theorem 3 O(1) O(N ) + |A| O(N ) Theorem 4 O(c log2 c) O(N/c) + |A| O(N ) Theorem 5 Ω(N log m) Section 3 O(1) O(N · min{m, log n}) O(N )
time for any fixed value of k, where αk (n) is the k th function of the inverse Ackermann hierarchy. The two dimensional version of the problem was considered by Amir et al. [3]. They presented a class of algorithms using O(N log(k+1) N ) preprocessing time, O(kN log N ) bits additional space and O(1) query time for any constant k > 1, where log(k+1) N is the result of applying the log function k + 1 times on N . Recently Atallah and Yuan [4] gave the first linear time preprocessing algorithm for d-dimensional arrays. Their algorithm answers any query in constant time using O(N log N ) bits additional space. Demaine et al. [10] proved that the number of different 2D-RMQ n by n matrices is Ω(( n4 !)n/4 ), where two 2D-RMQ matrices are considered different only if their range minima are in different locations for some rectangular range. This implies a lower bound Ω(n2 log n) for both the number of preprocessing comparisons and the number of bits required for a data structure capturing the answer to all the queries. This proves the impossibility of achieving a linear upper bound for the 2D-RMQ problem conjectured by Amir et al. [3]. Table 2 summarizes these results along with the results of this paper. 1.2
Our Results
We consider the 2D-RMQ problem in the following two models: 1) indexing model in which the query algorithm has access to the input array A in addition to the data structure constructed by preprocessing A, called an index ; and 2) encoding model in which the query algorithm has no access to A and can only access the data structure constructed by preprocessing A, called an encoding. In the indexing model, we initiate the study of the trade-off between the query time and the additional space for the 2D-RMQ problem. We prove the lower bound trade-off that Ω(c) query time is required if the additional space is N/c bits, for any c where 1 ≤ c ≤ N . The proof is in a non-uniform cell probe model [15] which is more powerful than the indexing model. We complement the lower bound with an upper bound trade-off: using an index of size O(N/c)
174
G.S. Brodal, P. Davoodi, and S. Srinivasa Rao
c 110111111111111111 111111110111111111 · · · 111111111111111011 110111111111111111 111101111111111111 · · · 111111111111111011 q2 Fig. 1. Two arrays from Cn,c , each one has n/c blocks. In this example c = 18. The query q2 has different answers for these arrays.
bits we can achieve O(c log2 c) query time. For the indexing model, this is the first O(N )-bit index which answers queries in O(1) time. In the encoding model, the only earlier result on the 2D-RMQ problem is the information-theoretic lower bound of Demaine et al. [10] who showed a lower bound of Ω(N log n) bits for n by n matrices. We generalize their result to m by n (rectangular) matrices to show a lower bound of Ω(N log m) bits. We also present an encoding structure of size O(N · min{m, log n}) bits with O(1) query time. Note that the upper and lower bounds are not tight for non-constant m = no(1) : the lower bound states that the space requirement per element is Ω(log m) bits, whereas the upper bound requires O(min{m, log n}) bits per element.
2 2.1
Indexing Model Lower Bound
In the indexing model, we prove a lower bound for the query time of the 1D-RMQ problem where the input is a one dimensional array of n elements, and then we show that the bound also holds for the RMQ problem in any dimension. The proof is in the non-uniform cell probe model [15]. In this model, computation is free, and time is counted as the number of cells accessed (probed) by the query algorithm. The algorithm is also allowed to be non-uniform, i.e., for different values of input parameter n, we can have different algorithms. For n and any value of c, where 1 ≤ c ≤ n, we define a set of arrays Cn,c and a set of queries Q. We w.l.o.g. assume that c divides n. We will argue that for any 1D-RMQ algorithm which has access to an index of size n/c bits (in addition to the input array A), there exists an array in Cn,c and a query in Q for which the algorithm is required to perform Ω(c) probes into A. Definition 1. Let n and c be two integers, where 1 ≤ c ≤ n. The set Cn,c contains the arrays A[1 · · · n] such that the elements of A are from the set {0, 1}, and in each block A[(i − 1)c + 1 · · · ic] for all 1 ≤ i ≤ n/c, there is exactly a single zero element (see Figure 1). The number of possible data structures of size n/c bits is 2n/c , and the number of arrays in Cn,c is cn/c . By the pigeonhole principle, for any algorithm G there exists a data structure DG which is shared by at least ( 2c )n/c input arrays in Cn,c . DG Let Cn,c ⊆ Cn,c be the set of these inputs.
On Space Efficient Two Dimensional Range Minimum Data Structures
175
Definition 2. Let qi = [(i − 1)c + 1 · · · ic]. The set Q = {qi | 1 ≤ i ≤ n/c} contains n/c queries, each covering a distinct block of A. For algorithm G and data structure DG , we define a binary decision tree capturing DG the behavior of G on the inputs from Cn,c to answer a query q ∈ Q. Definition 3. Let G be a deterministic algorithm. For each query q ∈ Q, we define a binary decision tree Tq (DG ). Each internal node of Tq (DG ) represents a DG probe into a cell of the input arrays from Cn,c . The left and right edges correspond to the output of the probe: left for zero and right for one. Each leaf is labeled with the answer to q. For each algorithm G, we have defined n/c binary trees depicting the probes of DG to answer the n/c queries in Q. Note that the algorithm into the inputs from Cn,c the answers to all these n/c queries uniquely determine the input. We compose all the n/c binary trees into a single binary tree TQ (DG ) in which every leaf determines a particular input. We first replace each leaf of Tq1 (DG ) with the whole Tq2 (DG ), and then replace each leaf of the obtained tree with Tq3 (DG ), and so on. Every leaf of TQ (DG ) is labeled with the answers to all the n/c queries in Q which were replaced on the path from the root to the leaf. Every DG two input arrays in Cn,c correspond to different leaves of TQ (DG ). Otherwise the answers to all the queries in Q are the same for both the inputs which is a contradiction. Therefore, the number of leaves of TQ (DG ) is at least ( 2c )n/c , the DG minimum number of inputs in Cn,c . We next prune TQ (DG ) as follows: First we remove all nodes not reachable DG . Then we repeatedly replace all nodes of degree one with by any input from Cn,c DG correspond to only reachable leaves, their single child. Since the inputs from Cn,c in the pruned tree, the number of leaves becomes equal to the number of inputs DG which is at least ( 2c )n/c . In the unpruned tree, the result of a repeated from Cn,c probe is known already and one child of the node corresponding to the probe is unreachable. Therefore, on a root to leaf path in the pruned tree, there is no repeated probe. Every path from the root to a leaf has at most n/c left edges (zero probes), since the number of zero elements in each input from Cn,c is n/c. The branches along each of these paths represents a binary sequence of length at most d containing at most n/c zeros where d is the depth of the pruned tree. By padding each of these sequences with further 0s and 1s, we can ensure that each sequence has length exactly d + n/c and contains exactly n/c zeros. The , which becomes an upper number of these binary sequences is at most d+n/c n/c bound for the number of leaves in the pruned tree. Lemma 1. For all n and c, where 1 ≤ c ≤ n, the worst case number of probes required to answer a query in Q over the inputs from Cn,c using a data structure of size n/c bits is Ω(c). Proof. Comparing the lower and upper bounds from the above discussion for the number of leaves of TQ (DG ), we have c n/c d + n c ≤ . n 2 c
By Stirling's formula, we obtain
(n/c) log(c/2) = log (c/2)^{n/c} ≤ log \binom{d + n/c}{n/c} ≤ (n/c) log((d + n/c)e/(n/c)),
which implies c/2 ≤ (d + n/c)e/(n/c), and therefore d ≥ n(1/(2e) − 1/c). For any arbitrary algorithm G, the depth d of TQ(DG) equals the sum of the depths of the n/c binary trees composed into TQ(DG). By the pigeonhole principle, there exists an input x ∈ C^{DG}_{n,c} and an i, where 1 ≤ i ≤ n/c, such that the query qi on x requires at least d/(n/c) = Ω(c) probes into the array A maintaining the input.
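The algebra behind the last step can be spelled out as follows, using the standard estimate \binom{a}{b} ≤ (ae/b)^b (a routine expansion, added here only for completeness):

```latex
\[
\Bigl(\frac{c}{2}\Bigr)^{n/c}
 \le \binom{d+\tfrac{n}{c}}{\tfrac{n}{c}}
 \le \Bigl(\frac{(d+\tfrac{n}{c})\,e}{\tfrac{n}{c}}\Bigr)^{n/c}
\;\Longrightarrow\;
\frac{c}{2} \le \frac{(d+\tfrac{n}{c})\,e}{\tfrac{n}{c}}
\;\Longrightarrow\;
\frac{n}{2e} \le d+\frac{n}{c}
\;\Longrightarrow\;
d \ge n\Bigl(\frac{1}{2e}-\frac{1}{c}\Bigr).
\]
```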
Theorem 1. Any algorithm solving the RMQ problem for an input array of size N (in any dimension), which uses N/c bits additional space, requires Ω(c) query time, for any c, where 1 ≤ c ≤ N.
Proof. Lemma 1 gives the lower bound for the 1D-RMQ problem. The proof for the 2D-RMQ is a simple extension of the proof of Lemma 1. Instead of C_{n,c}, a set C_{m,n,c1,c2} of matrices is utilized. Each matrix is composed of mn/c submatrices [ic1 + 1 · · · (i + 1)c1] × [jc2 + 1 · · · (j + 1)c2] of size c1 by c2, for 0 ≤ i < m/c1 and 0 ≤ j < n/c2, where c = c1 · c2 (assuming w.l.o.g. that c1 divides m, and c2 divides n). Each submatrix has exactly one zero element, and all the others are one. There are N/c queries in Q, each asking for the minimum of one submatrix. As in the proof of Lemma 1, we can argue that there exists a query requiring Ω(c) probes by utilizing the methods of decision trees, composing and pruning them, and bounding the number of leaves. The proof can be generalized straightforwardly to higher dimensional versions of the RMQ problem.
Theorem 2. The 1D-RMQ problem for an input array of size n is solved in O(n) preprocessing time and optimal O(c) query time using O(n/c) additional bits.
Proof. Partition the input array into n/c blocks of size c. Construct a 1D-RMQ encoding structure for the list of n/c block minimums (the minimum elements of the blocks) in O(n/c) bits [16]. The query is decomposed into three subqueries. All the blocks fully spanned by the query form the middle subquery, which can be answered by querying the O(n/c)-bit data structure in O(1) time and then scanning the block containing the answer in O(c) time. The remaining part of the query, which consists of two subqueries each contained in a single block, is answered in O(c) time by scanning the blocks.
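As a rough illustration of the block decomposition used in the proof of Theorem 2 (our own sketch; the O(n/c)-bit encoding structure of [16] over the block minima is replaced here by a plain scan, so only the three-subquery decomposition is shown):

```python
def build_blocks(A, c):
    """Block minima of A for blocks of size c (assumes c divides len(A))."""
    return [min(A[i:i + c]) for i in range(0, len(A), c)]

def rmq_1d(A, block_min, c, lo, hi):
    """Index of a minimum of A[lo..hi] (0-based, inclusive): scan the two
    partial blocks plus the block whose minimum wins among the spanned blocks."""
    first, last = lo // c, hi // c
    best = min(range(lo, min((first + 1) * c, hi + 1)), key=A.__getitem__)  # left partial block
    if last != first:                                                       # right partial block
        cand = min(range(last * c, hi + 1), key=A.__getitem__)
        best = cand if A[cand] < A[best] else best
    if last - first > 1:                                                    # middle: fully spanned blocks
        b = min(range(first + 1, last), key=block_min.__getitem__)
        cand = min(range(b * c, (b + 1) * c), key=A.__getitem__)
        best = cand if A[cand] < A[best] else best
    return best

A = [5, 3, 8, 7, 2, 9, 6, 4]
print(rmq_1d(A, build_blocks(A, 2), 2, 1, 6))   # -> 4 (A[4] = 2)
```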
2.2 Linear Space Optimal Data Structure
Preliminaries. A block is a rectangular range in a matrix. Let B be a block of size m′ by n′. For the block B, the list MinColList[1 · · · n′] contains the minimum element of each column and MinRowList[1 · · · m′] contains the minimum element of each row. Let TopPrefix(B, ℓ) be the set of blocks B[m′/2 − iℓ + 1 · · · m′/2] × [1 · · · n′] and BottomPrefix(B, ℓ) be the set of blocks B[m′ − iℓ + 1 · · · m′] × [1 · · · n′], for 1 ≤ i ≤ m′/(2ℓ) (assuming w.l.o.g. that m′ is even and ℓ divides m′/2). If the rows of B (instead of its columns as above) are divided by the blocks, then top and bottom denote left and right.
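A small illustration of these definitions (our own code, using 0-based Python slicing in place of the 1-based ranges above):

```python
def min_col_list(B):
    """MinColList of a block B (given as a list of rows): minimum of each column."""
    return [min(col) for col in zip(*B)]

def min_row_list(B):
    """MinRowList of a block B: minimum of each row."""
    return [min(row) for row in B]

def top_prefix(B, l):
    """TopPrefix(B, l): growing prefixes of the top half of B, in steps of l rows."""
    m2 = len(B) // 2
    return [B[m2 - i * l: m2] for i in range(1, m2 // l + 1)]

def bottom_prefix(B, l):
    """BottomPrefix(B, l): growing suffixes of B, in steps of l rows."""
    m = len(B)
    return [B[m - i * l:] for i in range(1, (m // 2) // l + 1)]

B = [[4, 9], [2, 7], [8, 1], [6, 5]]
print(min_col_list(B))                        # [2, 1]
print([len(b) for b in top_prefix(B, 1)])     # prefixes of the top half: sizes [1, 2]
```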
Fig. 2. Partitioning the input and building the binary tree structure. The node p is the LCA of the leaves corresponding to bj+1 and bk−1 . The columns c↑ and c↓ , which contain the answers to q2↑ and q2↓ respectively, are found using the Cartesian trees stored in p. The minimum element in each of the columns c↑ and c↓ is found using the Cartesian tree constructed for that column.
Data Structure and Querying. We present an indexing data structure of size O(N ) bits achieving O(1) query time to solve the 2D-RMQ problem. The input matrix of size m by n is partitioned into blocks B = {b1 , . . . , bm/ log m } of size log m by n. According to these blocks, the query q is divided into subqueries q1 , q2 and q3 such that w.l.o.g. q1 is contained in bj and q3 is contained in bk , and q2 spans over bj+1 , . . . , bk−1 vertically, where 1 ≤ j, k ≤ m/ log m (see Figure 2). A binary tree structure is utilized to answer q2 . Since q1 and q3 are range minimum queries in the submatrices bj and bk respectively, they are answered recursively. Lastly, the answers to q1 , q2 and q3 , which are indices into three matrix elements, are used to find the index of the smallest element in q. The binary tree structure has m/ log m leaves, one for each block in B, assuming m/ log m is a power of 2. Each leaf maintains a Cartesian tree for MinColList of its corresponding block. Each internal node having 2k leaf descendants matches with a submatrix M composed of 2k consecutive blocks of B corresponding to the leaf descendants, for 1 ≤ k ≤ m/(2 log m). Note that each of the sets TopPrefix(M, log m) and BottomPrefix(M, log m) contains k blocks, and each block corresponds with a MinColList. The internal node maintains 2k Cartesian trees constructed for these 2k MinColLists. Let M be the submatrix matched with the lowest common ancestor p of the two leaves corresponding to bj+1 and bk−1 . The subquery q2 is composed of the top part q2↑ and the bottom part q2↓ , where q2↑ and q2↓ are two blocks in TopPrefix(M, log m) and BottomPrefix(M, log m) respectively. Two of the Cartesian trees, maintained in p, are constructed for MinColLists of q2↑ and q2↓ . These two Cartesian trees are utilized to find two columns containing the answer to q2↑ and q2↓ . The Cartesian trees constructed for these two columns are utilized to find the answer to q2↑ and q2↓ . Then the answer to q2 is determined by comparing the smallest element in q2↑ and q2↓ . In the second level of the recursion, each block of B is partitioned into blocks of size log m by log n. The recursion continues until the size of each block is log log m
by log log n (i.e. four levels). In the binary tree structures built for all the four recursion levels, we construct the Cartesian trees for the appropriate MinColLists and MinRowLists respectively. In the second and fourth levels of recursion, where the binary tree structure gives two rows containing the minimum elements of q2↑ and q2↓, the Cartesian trees constructed for the rows of the matrix are used to answer q2↑ and q2↓.
We solve the 2D-RMQ problem for a block of size log log m by log log n using the table lookup method given by Atallah and Yuan [4]. Their method preprocesses the block by making at most c′G comparisons, for a constant c′, where G = log log m · log log n, such that any 2D-RMQ can be answered by performing four probes into the block. Each block is represented by a block type, which is a binary sequence of length c′G recording the results of the comparisons. The lookup table has 2^{c′G} rows, one for each possible block type, and G^2 columns, one for each possible query within a block. Each cell of the table contains four indices to address the four probes into the block. The block types of all the blocks of size G in the matrix are stored in another table T. The query within a block is answered by first recognizing the block type using T, and then checking the lookup table to obtain the four indices. Comparing the results of these four probes gives the answer to the query [4].
Theorem 3. The 2D-RMQ problem for an m by n matrix of size N = m · n is solved in O(N) preprocessing time and O(1) query time using O(N) bits additional space.
Proof. The subquery q2 is answered in O(1) time by using a constant query time LCA structure [5], querying the Cartesian trees in constant time [16], and performing O(1) probes into the matrix. The number of recursion levels is four. In the last level, the subqueries contained in blocks of size G are also answered in O(1) time by using the lookup table and performing O(1) probes into the matrix. Therefore the query is answered in O(1) time.
The depth of the binary tree, in the first recursion level, is O(log(m/log m)). Each level of the tree has O(m/log m) Cartesian trees for MinColLists of n elements. Since a Cartesian tree of a list of n elements is stored in O(n) bits [16], the binary tree can be stored in O(n · m/log m · log(m/log m)) = O(N) bits. Since the number of recursion levels is O(1), the binary trees in all the recursion levels are stored in O(N) bits. The space used by the m + n Cartesian trees constructed for the columns and rows is O(N) bits. Since G ≤ c′′ log N for a constant c′′, the size of the lookup table is O(2^{c′c′′ log N} · G^2 · log G) = o(N) bits when c′′ < 1/c′. The size of table T is O(N/G · log(2^{c′G})) = O(N) bits. Hence the total additional space is O(N) bits.
In the binary tree, in the first level of the recursion, each leaf maintains a Cartesian tree constructed for a MinColList of n elements. These m/log m lists are constructed in O(N) time by scanning the whole matrix. Each MinColList in the internal nodes is constructed by comparing the elements of two MinColLists built in the lower level in O(n) time. Therefore constructing these lists, for the whole tree, takes O(n · m/log m · log(m/log m)) = O(N) time. Since a Cartesian tree
can be constructed in linear time [16], the Cartesian trees in all the nodes of the binary tree are constructed in O(N) time. The LCA structure is also constructed in linear time [5]. Therefore the binary tree is built in O(N) time. Since the number of recursion levels is O(1), all the binary trees are built in O(N) time. The lookup table and table T are also constructed in O(N) time [4].
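The construction above repeatedly relies on building Cartesian trees in linear time [16]. For concreteness, the following is the standard stack-based linear-time construction (a generic sketch of the technique, not the succinct representation of [16]):

```python
def cartesian_tree(values):
    """Return parent[i] for the min-Cartesian tree of `values` (root has parent -1),
    built in O(len(values)) time with the classic rightmost-path stack."""
    parent = [-1] * len(values)
    stack = []                           # indices on the rightmost path, values increasing upward
    for i, v in enumerate(values):
        last = -1
        while stack and values[stack[-1]] > v:
            last = stack.pop()           # last popped node becomes the left child of i
        if last != -1:
            parent[last] = i
        if stack:
            parent[i] = stack[-1]        # i becomes the right child of the new stack top
        stack.append(i)
    return parent

print(cartesian_tree([3, 1, 4, 1, 5]))   # -> [1, -1, 3, 1, 3]
```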
2.3 Space-Time Trade-Off Data Structure
We present an indexing data structure of size O(N/c · log c) bits additional space solving the 2D-RMQ problem in O(c log c) query time and O(N) preprocessing time, where 1 ≤ c ≤ N. The input matrix is divided into N/c blocks of size 2^i by c/2^i, for 0 ≤ i ≤ log c, assuming w.l.o.g. that c is a power of 2. Let Mi be the matrix of size N/c containing the minimum elements of the blocks of size 2^i by c/2^i. Let Di be the linear space data structure of Section 2.2 applied to the matrix Mi in O(N/c) bits. Each Di handles a different ratio between the number of rows and the number of columns of the blocks. Note that the matrices Mi are constructed temporarily during the preprocessing and are not maintained in the data structure.
A query q is resolved by answering log c + 1 subqueries. Let qi be the subquery of q spanning the blocks of size 2^i by c/2^i, for 0 ≤ i ≤ log c. The minimum elements of the blocks spanned by qi assemble a query over Mi which has the same answer as qi. Therefore, qi is answered by using Di. Note that whenever the algorithm wants to perform a probe into a cell of Mi, a corresponding block of size c of the input is searched for the minimum (since Mi is not maintained in the data structure). The subqueries qi overlap each other. Altogether, they compose q except for at most c log c elements in each of the four corners of q. We search these corners for the minimum element. Eventually, we compare the minimum elements of all the subqueries to find the answer to q (see Figure 3).
Theorem 4. The 2D-RMQ problem for a matrix of size N is solved in O(N) preprocessing time and O(c log^2 c) query time using O(N/c) bits additional space.
Proof. The number of linear space data structures Di is log c + 1. Each Di requires O(N/c) bits. Therefore, the total additional space is O(log c · N/c) bits. The number of subqueries qi is log c + 1. Each qi is answered by using Di in O(1) query time in addition to the O(1) probes into Mi. Since instead of each probe into Mi, we perform O(c) probes into the input, the query time to answer qi is O(c). The four corners are searched in O(c log c) time for the minimum element. In the end, the minimum elements of the subqueries are compared in O(log c) time to answer q. Consequently, the total query time is O(c log c). Each Di is constructed in O(N/c) time (Section 2.2) after building the matrix Mi. To be able to build all Mi efficiently, we first construct the O(N)-bit data structure of Section 2.2 for the input matrix in O(N) time. Then, each Mi is built in O(N/c) time by querying a block of the input matrix in O(1) time for each element of Mi. Therefore, the total preprocessing time is O(log c · N/c + N) = O(N). Substituting the parameter c by c log c gives the claimed bounds.
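A hedged sketch of the geometry behind this trade-off (all names are ours): it shows how a probe into a cell of Mi is simulated by scanning the corresponding block of size c of the input, and which block rectangle of Mi is fully spanned by a query rectangle. The Di structures and the corner scans are omitted.

```python
def mi_probe(A, c, i, r, s):
    """Value of M_i[r][s]: the minimum of the 2^i-by-(c/2^i) block of the input A
    with block coordinates (r, s). M_i itself is never stored."""
    h, w = 2 ** i, c // (2 ** i)
    rows = A[r * h:(r + 1) * h]
    return min(min(row[s * w:(s + 1) * w]) for row in rows)

def subquery_blocks(c, i, r1, r2, c1, c2):
    """Block-coordinate rectangle of M_i fully spanned by the query
    A[r1..r2] x [c1..c2] (0-based, inclusive); None if no block is fully spanned."""
    h, w = 2 ** i, c // (2 ** i)
    top, bottom = -(-r1 // h), (r2 + 1) // h - 1      # ceil and floor
    left, right = -(-c1 // w), (c2 + 1) // w - 1
    return (top, bottom, left, right) if top <= bottom and left <= right else None

A = [[9, 4, 7, 1], [3, 8, 2, 6], [5, 5, 0, 9], [7, 1, 3, 2]]
c = 4                                              # block shapes 1x4, 2x2, 4x1 for i = 0, 1, 2
print(mi_probe(A, c, 1, 0, 1))                     # min of the 2x2 block (rows 0-1, cols 2-3) -> 1
print(subquery_blocks(c, 1, 0, 3, 1, 3))           # -> (0, 1, 1, 1)
```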
Fig. 3. Right: The grey area depicts the subqueries of q spanning the blocks of size 2^i by c/2^i. Left: The dark area depicts a corner of q which is contained in a block of size c by c and includes at most c log c elements.
3 Encoding Model
Upper Bound. The algorithm described in Section 2.2 can preprocess the m by n input array A of size N = m · n into a data structure of size O(N) bits in O(N) time. But the query algorithm in Section 2.2 is required to perform some probes into the input. Since A is not accessible in the encoding model, we store another 2D array maintaining the rank of all the N elements using O(N log n) bits. Whenever the algorithm wants to perform a probe into A, it does it into the rank matrix. Therefore the problem can be solved in the encoding model using O(N log n) preprocessing time (to sort A) and O(1) query time using O(N log n) bits of space.
Another solution in the encoding model is the following. For each of the n columns of A, we build a 1D-RMQ structure using O(m) bits of space [16], in total using O(mn) = O(N) bits. Furthermore, for each possible pair of rows (i1, i2), i1 ≤ i2, we construct a 1D-RMQ structure for the MinColList of A[i1 · · · i2] × [1 · · · n] using O(n) bits of space; in total using O(m^2 n) = O(Nm) bits. The column j containing the answer to a query q = [i1 · · · i2] × [j1 · · · j2] is found by querying for the range [j1 · · · j2] in the 1D-RMQ structure for the rows given by the pair (i1, i2). The query q is answered by querying for the range [i1 · · · i2] in the 1D-RMQ structure for column j. Since both 1D-RMQ queries take O(1) time, the total query time is O(1). Selecting the more space efficient of the above two solutions gives an encoding structure of size O(N · min{m, log n}) bits with O(1) query time.
Lower Bound. We present a set of Ω(((m/2)!)^{n′}) different 2D arrays, i.e., for every pair of the arrays there exists a 2D-RMQ with different answers. The elements of the arrays are from the set {1, . . . , mn}. Every array of the set has two parts: A′ = A[1 · · · m/2] × [1 · · · n′], and A′′ containing all the anti-diagonals of length m/2 within the block A[m/2 + 1 · · · m] × [n′ + 1 · · · n], where n′ = (n − m/2 + 1)/2, assuming w.l.o.g. that m is even (see Figure 4). These two parts contain the smallest elements of the array, i.e., {1, . . . , mn′}. From this set, the odd numbers are placed in A′ in increasing order from left to right and then top to bottom,
Fig. 4. The elements in the gray area are greater than the elements in the white area. The dotted rectangle denotes the query q which has different answers for A1 and A2 .
i.e. A′[i, j] = 2((i − 1)n′ + j) − 1. The even numbers are placed in A′′ such that the elements of each anti-diagonal are not sorted but are larger than the elements of the anti-diagonals to the right. The total number of arrays constructed by permuting the elements of each anti-diagonal of A′′ is ((m/2)!)^{n′}.
For any two matrices A1 and A2 in the set, there exists an index [i2, j2] in the anti-diagonals of A′′ such that A1[i2, j2] ≠ A2[i2, j2]. Let [i1, j1] be the index of an arbitrary odd number between A1[i2, j2] and A2[i2, j2]. Since the query q = [i1 · · · i2] × [j1 · · · j2] has different answers for A1 and A2, it follows that any two matrices in the set are different (see Figure 4).
Theorem 5. The minimum space to store an encoding data structure for the 2D-RMQ problem is Ω(mn log m) bits, assuming that m ≤ n.
Proof. Since the number of different arrays in the set is ((m/2)!)^{n′}, the space for a data structure encoding these arrays is Ω(log(((m/2)!)^{n′})) = Ω(mn log m) bits.
References 1. Aho, A., Hopcroft, J., Ullman, J.: On finding lowest common ancestors in trees. In: Proc. 5th Annual ACM Symposium on Theory of Computing, pp. 253–265. ACM Press, New York (1973) 2. Alstrup, S., Gavoille, C., Kaplan, H., Rauhe, T.: Nearest common ancestors: a survey and a new distributed algorithm. In: Proc. 14th Annual ACM Symposium on Parallel Algorithms and Architectures, pp. 258–264. ACM, New York (2002) 3. Amir, A., Fischer, J., Lewenstein, M.: Two-dimensional range minimum queries. In: Ma, B., Zhang, K. (eds.) CPM 2007. LNCS, vol. 4580, pp. 286–294. Springer, Heidelberg (2007) 4. Atallah, M.J., Yuan, H.: Data structures for range minimum queries in multidimensional arrays. In: Proc. 20th Annual ACM-SIAM Symposium on Discrete Algorithms, pp. 150–160. SIAM, Philadelphia (2010)
5. Bender, M., Farach-Colton, M.: The LCA problem revisited. In: Gonnet, G.H., Viola, A. (eds.) LATIN 2000. LNCS, vol. 1776, pp. 88–94. Springer, Heidelberg (2000) 6. Bender, M.A., Farach-Colton, M., Pemmasani, G., Skiena, S., Sumazin, P.: Lowest common ancestors in trees and directed acyclic graphs. Journal of Algorithms 57(2), 75–94 (2005) 7. Bentley, J.L.: Decomposable searching problems. Information Processing Letters 8(5), 244–251 (1979) 8. Berkman, O., Galil, Z., Schieber, B., Vishkin, U.: Highly parallelizable problems. In: Proc. 21st Ann. ACM Symposium on Theory of Computing, pp. 309–319. ACM, New York (1989) 9. Chazelle, B., Rosenberg, B.: Computing partial sums in multidimensional arrays. In: Proc. 5th Annual Symposium on Computational Geometry, pp. 131–139. ACM, New York (1989) 10. Demaine, E.D., Landau, G.M., Weimann, O.: On cartesian trees and range minimum queries. In: Albers, S., Marchetti-Spaccamela, A., Matias, Y., Nikoletseas, S., Thomas, W. (eds.) ICALP 2009. LNCS, vol. 5555, pp. 341–353. Springer, Heidelberg (2009) 11. Fischer, J.: Optimal succinctness for range minimum queries. In: L´ opez-Ortiz, A. (ed.) LATIN 2010. LNCS, vol. 6034, pp. 158–169. Springer, Heidelberg (2010) 12. Fischer, J., Heun, V.: A new succinct representation of rmq-information and improvements in the enhanced suffix array. In: Chen, B., Paterson, M., Zhang, G. (eds.) ESCAPE 2007. LNCS, vol. 4614, pp. 459–470. Springer, Heidelberg (2007) 13. Gabow, H.N., Bentley, J.L., Tarjan, R.E.: Scaling and related techniques for geometry problems. In: Proc. 16th Annual ACM Symposium on Theory of Computing, pp. 135–143. ACM, New York (1984) 14. Harel, D., Tarjan, R.E.: Fast algorithms for finding nearest common ancestors. SIAM Journal on Computing 13(2), 338–355 (1984) 15. Miltersen, P.B.: Cell probe complexity - a survey. In: Advances in Data Structures Workshop (FSTTCS) (1999) 16. Sadakane, K.: Succinct data structures for flexible text retrieval systems. Journal of Discrete Algorithms 5(1), 12–22 (2007) 17. Schieber, B., Vishkin, U.: On finding lowest common ancestors: simplification and parallelization. SIAM Journal on Computing 17(6), 1253–1262 (1988) 18. Vuillemin, J.: A unifying look at data structures. Communications of the ACM 23(4), 229–239 (1980)
Pairing Heaps with Costless Meld
Amr Elmasry
Max-Planck Institut für Informatik and University of Copenhagen
[email protected]
Abstract. Improving the structure and analysis in [1], we give a variation of pairing heaps that has amortized zero cost per meld (compared to an O(lg lg n) in [1]) and the same amortized bounds for other operations. More precisely, the new pairing heap requires: no cost per meld, O(1) per find-min and insert, O(lg n) per delete-min, and O(lg lg n) per decrease-key, where n is the size of the priority queue at the time the operation is performed. These bounds are the best known for any self-adjusting heap, and match the lower bound established by Fredman for a family of such priority queues. Moreover, our structure is even simpler than that in [1].
1 Introduction
The pairing heap [5] is a self-adjusting heap that is implemented as a single heap-ordered multiway tree. A primitive operation is the join operation in which two trees are combined by linking the root with the larger key value as the leftmost child of the other. The following operations are defined for the standard implementation of pairing heaps:
– find-min. Return the value at the root of the heap.
– insert. Create a single-node tree and join it with the main tree.
– decrease-key. Decrease the value of the corresponding node. If this node is not the root, cut its subtree and join the two resulting trees.
– meld. Join the two trees representing the two priority queues.
– delete-min. Remove the root of the heap and return its value. The resulting trees are then combined to form a single tree. The joins are performed in two passes. In the first pass, called the pairing pass, the trees are joined in pairs from left to right (pairing these trees from right to left achieves the same amortized bounds). In the second pass, called the right-to-left incremental-pairing pass, the resulting trees are joined in order from right to left, where each tree is joined with the combined tree resulting from the joins of the trees to its right. (Other variants with different delete-min implementation were given in [1,2,3,5].)
The original analysis of pairing heaps [5] established O(lg n) amortized cost for all operations. Around the same time, the skew heap, another self-adjusting heap
Supported by Alexander von Humboldt Fellowship and VELUX Fellowship.
Table 1. Upper bounds on the operations of the pairing heap (first rows) and its variants (last rows). All the time bounds are amortized.

                         insert               delete-min   decrease-key         meld
Fredman et al. [5]       O(lg n)              O(lg n)      O(lg n)              O(lg n)
Iacono [7]               O(1)                 O(lg n)      O(lg n)              zero
Pettie [10]              O(2^{2√(lg lg n)})   O(lg n)      O(2^{2√(lg lg n)})   O(2^{2√(lg lg n)})
Stasko and Vitter [12]   O(1)                 O(lg n)      O(lg n)              O(lg n)
Elmasry [1]              O(1)                 O(lg n)      O(lg lg n)           O(lg lg n)
This paper               O(1)                 O(lg n)      O(lg lg n)           zero
that has O(lg n) amortized cost per operation, was also introduced [11]. Theoretical results concerning pairing heaps and their variants were later obtained through the years. The amortized bounds for the standard implementation were improved by Iacono [7] to: O(1) per insert, and zero cost per meld. Pettie [10] proved amortized costs of O(lg n) per delete-min, and O(2^{2√(lg lg n)}) for the other operations including decrease-key. Some variants were also introduced. Stasko and Vitter [12] suggested a variant that achieves O(1) amortized cost per insert. Elmasry [1] introduced a variant that achieves the following amortized bounds: O(1) per insert, O(lg n) per delete-min, and O(lg lg n) per decrease-key and meld. See Table 1.
Fredman [4] showed that Ω(lg lg n) amortized comparisons, in the decision-tree model, would be necessary per decrease-key operation for a family of priority queues that generalizes pairing heaps. Several experiments were conducted on pairing heaps, either comparing their performance with other priority queues [8,9] or with some variants [2,3,12]. Such experiments illustrate that pairing heaps are practically efficient and superior to other priority queues, including Fibonacci heaps [6].
In this paper, we give another variant of pairing heaps that achieves the best bounds known for any self-adjusting heap. Namely, our amortized bounds are: zero cost per meld, O(1) per find-min and insert, O(lg n) per delete-min, and O(lg lg n) per decrease-key. To achieve these bounds, we apply similar ideas to those in [1], adapted to efficiently support meld. In addition, we avoid using an insertion buffer, which makes the new structure even simpler than that in [1]. To prove our bounds, we use ideas similar to those of Iacono's analysis of the standard implementation [7], and tailor them for our new structure.
We describe the data structure in Section 2, explain our accounting strategies in Section 3, prove the time bounds in Section 4, and conclude the paper with three open questions in Section 5.
2 The Data Structure
Our priority-queue structure is composed of two components: the main tree and the decrease pool. The decrease pool holds at most lg n heap-ordered trees of various sizes, which are the subtrees that have been cut by decrease-key operations since the last call to the consolidate operation (see below). Similar to the
standard implementation of pairing heaps, our trees are heap-ordered multiway trees. We also maintain a pointer to the root of the tree that has the minimum value among the roots of the decrease pool and that of the main tree. Next, we give the detailed implementations for various priority-queue operations.
– find-min. Return the value of the node pointed to by the minimum pointer.
– insert. Create a single-node tree and join it with the main tree. Make the minimum pointer point to the new node if its value is the new minimum.
We use the following procedure in implementing the upcoming operations:
- consolidate. Combine the trees of the decrease pool in one tree by sorting the values of the roots of these trees, and linking the trees in this order such that their roots form a path of nodes in the combined tree (make every root the leftmost child of the root with the next smaller value). Join this combined tree with the main tree.
– decrease-key. Decrease the value of the corresponding node x. Make the minimum pointer point to x if its new value is smaller than the minimum. If x is not a root: Cut x's subtree and the subtree of the leftmost child of x. Glue the subtree of the leftmost child of x in place of x's subtree, and add the rest of x's subtree (excluding the subtree of x's leftmost child that has just been cut) to the decrease pool as a standalone tree. See Figure 1. If the decrease pool has lg n trees, call consolidate.
Fig. 1. A cut performed by the decrease-key operation on node x
– meld. Combine the trees of the priority queue that has the smaller number of nodes in one tree by calling consolidate. Join the resulting tree with the main tree of the larger priority queue, and destroy the smaller priority queue. Make the minimum pointer point to the root of the new main tree if it has the minimum element of the two melded priority queues.
– delete-min. Remove the root pointed to by the minimum pointer. Combine the subtrees of this root in one tree using the standard two-pass implementation of the pairing heaps [5]. Combine the trees of the priority queue in one main tree by calling consolidate. Make the minimum pointer point to the root of the resulting tree.
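The following is a compact executable sketch of the operations just described (our own illustration, not the author's implementation). It represents children as Python lists rather than leftmost-child/right-sibling pointers, scans the O(lg n) roots for find-min instead of maintaining an explicit minimum pointer, and, as a simplification, consolidates at the start of delete-min so that the minimum sits at the root of the main tree.

```python
import math

class Node:
    __slots__ = ("key", "children", "parent")
    def __init__(self, key):
        self.key, self.children, self.parent = key, [], None

def join(a, b):
    """Link the root with the larger key as the leftmost child of the other."""
    if a is None or b is None:
        return a or b
    if b.key < a.key:
        a, b = b, a
    b.parent = a
    a.children.insert(0, b)                  # leftmost child
    return a

class PairingHeapCM:
    """Sketch of the meld-friendly variant: a main tree plus a decrease pool."""
    def __init__(self):
        self.main, self.pool, self.n = None, [], 0

    def find_min(self):
        # the paper keeps an explicit minimum pointer; here we scan the O(lg n) roots
        roots = ([self.main] if self.main else []) + self.pool
        return min(r.key for r in roots)

    def insert(self, key):
        node = Node(key)
        self.main = join(self.main, node)
        self.n += 1
        return node

    def consolidate(self):
        """Sort the pool roots, link them into a path, then join with the main tree."""
        self.pool.sort(key=lambda r: r.key, reverse=True)
        path = None
        for r in self.pool:                  # each root becomes leftmost child of the next smaller one
            path = join(path, r)
        self.pool = []
        self.main = join(self.main, path)

    def decrease_key(self, x, key):
        x.key = key
        if x.parent is None:                 # x is already a root: nothing to cut
            return
        p, i = x.parent, x.parent.children.index(x)
        if x.children:                       # glue x's leftmost child in place of x's subtree
            l = x.children.pop(0)
            l.parent = p
            p.children[i] = l
        else:
            p.children.pop(i)
        x.parent = None
        self.pool.append(x)                  # the rest of x's subtree joins the decrease pool
        if len(self.pool) >= max(1, int(math.log2(self.n))):
            self.consolidate()

    def meld(self, other):
        """`other` is assumed to be the smaller queue; it is consolidated and destroyed."""
        other.consolidate()
        self.main = join(self.main, other.main)
        self.n += other.n

    def delete_min(self):
        self.consolidate()                   # simplification: minimum is now the main root
        root, kids = self.main, self.main.children
        pairs = [join(kids[i], kids[i + 1]) if i + 1 < len(kids) else kids[i]
                 for i in range(0, len(kids), 2)]          # pairing pass
        tree = None
        for t in reversed(pairs):                          # right-to-left incremental pass
            tree = join(tree, t)
        if tree:
            tree.parent = None
        self.main, self.n = tree, self.n - 1
        return root.key

h = PairingHeapCM()
nodes = [h.insert(k) for k in (7, 3, 9, 5)]
h.decrease_key(nodes[2], 1)                  # 9 -> 1
print([h.delete_min() for _ in range(4)])    # [1, 3, 5, 7]
```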
The difference between this structure and the structure in [1] is that here we do not use an insertion buffer. Accordingly, insert and consolidate (called combine-queue in [1]) are slightly modified. In addition, the implementation of meld is slightly different here in order to achieve the claimed cost improvement.
3 Accounting
For the sake of the analysis, we categorize the nodes as being either black or white. A node is black if it remains in the priority queue after performing the sequence of operations under consideration; otherwise it is white.
Theorem 1. Consider a sequence of operations S = o1, o2, . . . performed on a set of priority queues, starting with no elements. Let A = {i | oi is a meld operation}, B = {i | oi is a find-min or an insert operation}, C = {i | oi is a decrease-key operation}, and D = {i | oi is a delete-min operation}. The sequence S is executed on our priority queues in O(|B| + Σ_{i∈C} lg lg ni + Σ_{i∈D} lg ni) time, where ni is the number of white nodes in the priority queue under consideration when operation i is performed.
Let w(x) be the number of white descendants of a node x, including x if it is white. A black node whose descendants are all black is called an inactive node. In other words, x is inactive if and only if w(x) = 0. Otherwise, x is active. To prove Theorem 1, which implies the claimed time bounds, we use a combination of the potential function and the accounting methods [13].
3.1 The Potential Function
Consider the link between a node x and its parent p(x). Let w′(x) be the number of white descendants of p(x) restricted to the subtrees of the right siblings of x, including p(x) if it is white. If x is active, we associate a potential of lg((w(x) + w′(x))/w(x)) to this link. Otherwise, the potential is 0. The potential of the priority queues Φ is the sum of the potentials on their links. More formally,
Φ = Σ_{∀x | w(x)>0} lg((w(x) + w′(x))/w(x)).
Despite the fact that the potential on a link may reach lg n, the sum of potentials on a path from an active node x to any of its descendants telescopes to at most lg w(x). If the path is a left spine (every node, up to a leaf, is the leftmost child), the sum of potentials telescopes to exactly lg w(x).
3.2 Debits
Consider the following two cases:
– a white node is inserted in a priority queue whose main tree has an active root.
– a consolidated priority queue that has an active root is melded with a non-smaller priority queue whose main tree has an active root.
The key idea is that the potential on a link that involves a tree with an inactive root is 0, and accordingly we need to consider only those links involving two trees with active roots. To fulfill the potential requirements, O(Σ_{i∈D} lg ni) units are borrowed from the spared cost for the delete-min operations that will be performed on the white nodes. The following lemma illustrates that these debits are enough to cover the above two cases.
Lemma 1. Consider the set of priority queues at any time during the execution of the sequence of operations S. Let D = {i | oi is a delete-min operation that will be performed on a node currently in the priority queues}. The sum of the potentials on the links formed by insert and meld operations is at most Σ_{i∈D} lg ni, where ni is the number of white nodes in the priority queue under consideration when operation i is performed.
Proof. Let τ be any tree in our priority queues, and let k be the number of white nodes in τ at this point of time. Let Dτ be the set D restricted to the delete-min operations to be performed on nodes of τ, and Pτ be the sum of the potentials on the links of τ constructed by insert and meld operations. We prove by induction the stronger fact that Pτ ≤ Σ_{i=1}^{k} lg i. Since all the white nodes will eventually be deleted, Σ_{i=1}^{k} lg i ≤ Σ_{i∈Dτ} lg ni.
Consider an insert operation, where a white node is joined to τ resulting in a tree τ′. The required potential on the formed link is at most lg(k + 1). By induction, Pτ′ ≤ lg(k + 1) + Σ_{i=1}^{k} lg i = Σ_{i=1}^{k+1} lg i. Consider a meld operation, where two trees τ1 and τ2 with active roots are joined resulting in a tree τ′. Assume that τ1 and τ2 have k1 and k2 white nodes, respectively. The required potential on the formed link is at most lg(k1 + k2). By induction, Pτ′ ≤ lg(k1 + k2) + Σ_{i=1}^{k1} lg i + Σ_{i=1}^{k2} lg i ≤ Σ_{i=1}^{k1+k2} lg i. This follows from the fact that k1! · k2! ≤ (k1 + k2 − 1)!, for any integers k1, k2 ≥ 1.
3.3 Credits
We maintain the following credits in addition to the potential function:
- Decrease credits: O(lg lg n) credits for every node whose value has been decreased after the preceding consolidate, n is the current priority-queue size.
- Queue credits: O(1) credits per priority queue.
- Active-parent credits: O(1) credits for every inactive child of an active node.
- Active-run credits: O(1) credits for every active node that has an inactive right sibling.
The active-run credits that are assigned to any node are strictly less than the active-parent credits assigned to any other node.
4 The Time Bounds
We will use the following proposition, whose proof is in [1]. Proposition 1. If n > 1, then lg n · (lg lg 2n − lg lg n) = O(1).
Next, we analyze the time bounds for our operations. Each operation must maintain the potential function, the credits, and pay for the work it performs.
4.1 find-min
No potential changes or extra credits are required. The actual work performed is O(1). It follows that the amortized (and worst-case) cost of find-min is O(1).
4.2 Insert
If the inserted node is white, extra potential units may be needed. But, as Lemma 1 illustrates, these units are borrowed from the cost of the delete-min operations, and the insert operation need not pay for that. Assume that as a result of the insert operation node x is linked to node y. If y is active and x is not, the active-parent credits need to be increased by O(1). If x is active, and the previous leftmost child of y was inactive, the active-run credits need to be increased by O(1). The first node that is inserted in each priority queue pays O(1) for the queue credits. The decrease credits must be increased by O(lg lg(n + 1) − lg lg n) per tree. Using Proposition 1, since at most lg n trees are in the decrease pool, the extra decrease credits sum up to O(1). The actual work to join an inserted node with the main tree is O(1). It follows that the amortized cost of insert is O(1).
4.3 Consolidate
The trees of the decrease pool are combined by sorting the values in their roots and linking them accordingly in order. This will result in a new path of links. Since the sum of the potential values on a path telescopes, the increase in potential as a result of consolidate is O(lg n). This O(lg n) potential increase is paid by the operation that calls consolidate. If consolidate is called within decrease-key, then the number of nodes on which decrease-key has been called beforehand without consolidate being invoked is lg n, and the increase in potential is paid from the decrease credits that will be released. If consolidate is called within delete-min, the delete-min operation pays the increase in potential. If consolidate is called within meld, again resorting to Lemma 1, the increase in potential is borrowed from the cost of the upcoming delete-min operations. As a result of each join, the active runs and active parents may increase by one, and O(1) credits per join would be needed. These extra credits are as well paid from the released decrease credits. Since the number of trees of the decrease pool is O(lg n), the actual work done in sorting is O(lg n · lg lg n), which is also paid from the decrease credits (O(lg lg n) per tree). It follows that the amortized cost of consolidate is zero.
4.4 decrease-key
Let x be the node to be decreased. Consider the path having the ancestors of x and starting from the root followed by the nodes on the left spine of x’s subtree.
Since we cut the subtree of x and replace it with the subtree of its leftmost child, the nodes of this path remain the same except for x. If all the descendants of x are black, possibly excluding the subtree of its leftmost child, then the potentials on all the links do not change as a result of the cut. Otherwise, all the ancestors of x before the cut are active. In this case, the proof given in [1] applies as well, indicating that the sum of the potentials on all the links does not increase. If x and both its left and right siblings are active while its leftmost child is inactive, then the number of active runs increases by one, and O(1) extra credits would be needed. These extra credits are paid by the decrease-key operation, which also pays O(lg lg n) units to maintain the decrease credits on x. The actual work performed, other than that involved in a possible call to consolidate, is O(1). It follows that the amortized cost of decrease-key is O(lg lg n).
4.5 Meld
Similar to insert, extra potential units may be needed when joining the consolidated tree of one priority queue with the main tree of the other. As Lemma 1 illustrates, these units are paid from the cost of the delete-min operations. The active-parent credits and the active-run credits may need to be increased by O(1). Since the size of the larger priority queue increases, but at most doubles, the decrease credits need to be increased by O(lg lg 2n − lg lg n) per tree. Using Proposition 1, since there are at most lg n trees in the decrease pool, the extra decrease credits sum up to O(1). The actual work performed, other than that involved to call consolidate on the smaller priority queue, is O(1). All these costs are paid from the queue credits released from the destroyed smaller priority queue. It follows that meld pays nothing; everything is taken care of by the other operations.
4.6 delete-min
We think about the two-pass pairing as being performed in steps. At the i-th step, the pair of trees that is the i-th pair from the right among the subtrees of the deleted root are joined, then the resulting tree is joined with the combined tree from the joins of all the previous steps. Each step will then involve three trees and two joins. Let ai be the tree resulting from the joins of the previous steps, and let Ai be the number of white nodes in ai . Let bi and ci be the i-th pair from the right among the subtrees of the deleted root to be joined at the i-th step, and let Bi and Ci respectively be the number of white nodes in their subtrees. It follows that Ai+1 = Ai + Bi + Ci . Let (τ1 τ2 ) denote the tree resulting from the linking of tree τ1 to tree τ2 as its leftmost subtree. We consider the following possibilities, categorized according to the types of the roots of bi and ci and who wins the comparison. See Figure 2.
Fig. 2. A delete-min ((ci bi) ai) step
1. Both roots are inactive: There was no potential on the two links that were cut, and no potential is either required on the new links. The actual cost of this step is paid from the released active-parent credits, as these two roots were children of an active parent and at least one of them is not any more.
2. An active root is linked to an inactive root, and ((· · · · · ·) ai): The potential related to the active root before the operation is enough to cover the potential of the new link with ai. Since the leftmost child of the inactive root before the link is inactive, the active-run credits need to be increased by O(1). Similar to the previous case, these extra credits and the actual cost of the step are paid from the released active-parent credits.
3. (a) Both roots are active: The active-run credits may need to be increased by O(1). The potential on the two links that are cut at the i-th step was
lg((Ai + Bi)/Bi) + lg((Ai + Bi + Ci)/Ci).
We consider the four possibilities:
i. ((ci bi) ai): The potential on the new links is
lg((Bi + Ci)/Ci) + lg((Ai + Bi + Ci)/(Bi + Ci)) = lg((Ai + Bi + Ci)/Ci).
The difference in potential is
lg(Bi/(Ai + Bi)) < lg((Bi + Ci)/Ai).
ii. ((bi ci) ai): The potential on the new links is
lg((Bi + Ci)/Bi) + lg((Ai + Bi + Ci)/(Bi + Ci)) = lg((Ai + Bi + Ci)/Bi).
The difference in potential is
lg(Ci/(Ai + Bi)) < lg((Bi + Ci)/Ai).
iii. (ai (ci bi)): The potential on the new links is
lg((Bi + Ci)/Ci) + lg((Ai + Bi + Ci)/Ai).
The difference in potential is
lg(((Bi + Ci) · Bi)/(Ai · (Ai + Bi))) < 2 lg((Bi + Ci)/Ai).
iv. (ai (bi ci)): The potential on the new links is
lg((Bi + Ci)/Bi) + lg((Ai + Bi + Ci)/Ai).
The difference in potential is
lg(((Bi + Ci) · Ci)/(Ai · (Ai + Bi))) < 2 lg((Bi + Ci)/Ai).
(b) One root is active while the other is inactive, and (ai (· · · · · ·)): The active-run credits may need to be increased by O(1). Since either Bi or Ci equals zero, we use Mi = max{Bi, Ci}. The potential on the cut links is
lg((Ai + Mi)/Mi).
The potential on the new links is
lg((Ai + Mi)/Ai).
The difference in potential is
lg(Mi/Ai) = lg((Bi + Ci)/Ai).
We have just proved that the change in potential resulting from performing a step of type 3(a) or 3(b) is at most 2 lg((Bi + Ci)/Ai) or lg((Bi + Ci)/Ai), respectively. We show in the next two items that the sum of the amortized costs of all steps of type 3(a) and 3(b) performed within a delete-min operation is O(lg n).
(i) If Bi + Ci < Ai/2, then lg((Bi + Ci)/Ai) < −1. Then, for all the above subcases, the change in potential is less than −1. This released potential is used to pay for the possibly-required increase in the active-run credits, in addition to the actual work done at this step.
(ii) If Bi + Ci ≥ Ai/2, we call this step a bad step. For all the above subcases, the change in potential resulting from all bad steps is at most 2 Σ_i lg((Bi + Ci)/Ai) (taking the summation over positive terms only, i.e. Bi + Ci > Ai). Since Ai′ > Bi + Ci when i′ > i, the sum of the changes in potential for all steps telescopes to O(lg n). It remains to account for the actual work done at the bad steps. Since Ai+1 = Ai + Bi + Ci, a bad step results in Ai+1 ≥ (3/2) Ai. Then, the number of bad steps is O(lg n). It follows that the increase in the active-run credits and the actual work done at bad steps is O(lg n) for each delete-min operation.
4. An inactive root is linked to an active root, and ((· · · · · ·) ai): The potential that was related to the active root before the operation is enough to cover the potential of the new link with ai. To cover the actual work done in such a step, consider the two steps that follow it. If those two steps are of the same type as this step, the number of active runs decreases (at least one inactive node is taken out of the way of two active runs) and such released credits are used to pay for all three steps (this is similar to Iacono's triple-white notion in his potential function [7]). Otherwise, one of the subsequent two steps will pay for the current step as well.
From the above case analysis, it follows that the amortized cost of delete-min is O(lg n).
5 Conclusions
We have given a variation of pairing heaps that achieves the optimal amortized bounds, except for decrease-key, which still matches Fredman's lower bound for a family of priority queues that generalizes pairing heaps [4]. However, we note that our variation does not follow the settings of Fredman's lower bound. In contrast to [1,12], we have opted not to use an insertion buffer. However, the same time bounds are still achievable even with lazy insertions.
In [4], Fredman stated that the cost of m pairing-heap operations, including n delete-min operations, is O(m log_{2m/n} n), which implies a constant amortized cost per decrease-key when m = Ω(n^{1+ε}), for any constant ε > 0. This suggests that in such a case performing decrease-key by utilizing a decrease pool and applying sorting would not be a good idea. It may be practically better to avoid sorting once such a situation is detected.
Three important open questions are:
– Is there a self-adjusting heap that has amortized o(lg lg n) decrease-key cost?
– Is it possible that the standard implementation of pairing heaps [5] has the same bounds as those we achieve in this paper?
– Which priority queue performs better in practice?
References 1. Elmasry, A.: Pairing heaps with O(log log n) decrease cost. In: Proceedings of the 20th ACM-SIAM Symposium on Discrete Algorithms, pp. 471–476. SIAM, Philadelphia (2009) 2. Elmasry, A.: Parametrized self-adjusting heaps. Journal of Algorithms 52(2), 103– 119 (2004) 3. Fredman, M.L.: A priority queue transform. In: Vitter, J.S., Zaroliagis, C.D. (eds.) WAE 1999. LNCS, vol. 1668, pp. 243–257. Springer, Heidelberg (1999) 4. Fredman, M.L.: On the efficiency of pairing heaps and related data structures. Journal of the ACM 46(4), 473–501 (1999) 5. Fredman, M.L., Sedgewick, R., Sleator, D.D., Tarjan, R.E.: The pairing heap: a new form of self-adjusting heap. Algorithmica 1(1), 111–129 (1986) 6. Fredman, M.L., Tarjan, R.E.: Fibonacci heaps and their uses in improved network optimization algorithms. Journal of the ACM 34, 596–615 (1987) 7. Iacono, J.: Improved upper bounds for pairing heaps. In: Halld´ orsson, M.M. (ed.) SWAT 2000. LNCS, vol. 1851, pp. 32–45. Springer, Heidelberg (2000) 8. Jones, D.: An empirical comparison of priority-queue and event-set implementations. Communications of the ACM 29(4), 300–311 (1986) 9. Moret, B., Shapiro, H.: An empirical assessment of algorithms for constructing a minimum spanning tree. DIMACS Monographs in Discrete Mathematics and Theoretical Computer Science 15, 99–117 (1994) 10. Pettie, S.: Towards a final analysis of pairing heaps. In: Proceedings of the 46th IEEE Annual Symposium on Foundations of Computer Science, pp. 174–183. IEEE, Los Alamitos (2005) 11. Sleater, D.D., Tarjan, R.E.: Self-adjusting heaps. SIAM Journal on Computing 15(1), 52–69 (1986) 12. Stasko, J., Vitter, J.S.: Pairing heaps: experiments and analysis. Communications of the ACM 30(3), 234–249 (1987) 13. Tarjan, R.E.: Amortized computational complexity. SIAM Journal on Algebraic Discrete Methods 6, 306–318 (1985)
Top-k Ranked Document Search in General Text Databases J. Shane Culpepper1 , Gonzalo Navarro2, , Simon J. Puglisi1 , and Andrew Turpin1 1
School of Computer Science and Information Technology, RMIT Univ., Australia {shane.culpepper,simon.puglisi,andrew.turpin}@rmit.edu.au 2 Department of Computer Science, Univ. of Chile
[email protected]
Abstract. Text search engines return a set of k documents ranked by similarity to a query. Typically, documents and queries are drawn from natural language text, which can readily be partitioned into words, allowing optimizations of data structures and algorithms for ranking. However, in many new search domains (DNA, multimedia, OCR texts, Far East languages) there is often no obvious definition of words and traditional indexing approaches are not so easily adapted, or break down entirely. We present two new algorithms for ranking documents against a query without making any assumptions on the structure of the underlying text. We build on existing theoretical techniques, which we have implemented and compared empirically with new approaches introduced in this paper. Our best approach is significantly faster than existing methods in RAM, and is even three times faster than a state-of-the-art inverted file implementation for English text when word queries are issued.
1 Introduction
Text search is a vital enabling technology in the information age. Web search engines such as Google allow users to find relevant information quickly and easily in a large corpus of text, T . Typically, a user provides a query as a list of words, and the information retrieval (IR) system returns a list of relevant documents from T , ranked by similarity. Most IR systems rely on the inverted index data structure to support efficient relevance ranking [24]. Inverted indexes require the definition of terms in T prior to their construction. In the case of many natural languages, the choice of terms is simply the vocabulary of the language: words. In turn, for the inverted index to operate efficiently, queries must be composed only of terms that are in the index. For many natural languages this is intuitive for users; they can express their information needs as bags of words or phrases. However, in many new search domains the requirement to choose terms prior to indexing is either not easily accomodated, or leads to unacceptable restrictions
Partially funded by Millennium Institute for Cell Dynamics and Biotechnology (ICDB), Grant ICM P05-001-F, Mideplan, Chile.
on queries. For example, several Far East languages are not easily parsed into words, and a user may adopt a different parsing than that used to create the index. Likewise, natural language text derived from OCR or speech-to-text systems may contain "words" that will not form terms in the mind of a user because they contain errors. Other types of text simply do not have a standard definition of a term, such as biological sequences (DNA, protein) and multimedia signals.
With this in mind, in this paper we take the view of a text database (or collection) T as a string of n symbols drawn from an alphabet Σ. T is partitioned into N documents {d1, d2, . . . , dN}. Queries are also strings (or sets of strings) composed of symbols drawn from Σ. Here, the symbols in Σ may be bytes, letters, nucleotides, or even words if we so desire; and the documents may be articles, chromosomes or any other texts in which we need to search. In this setting we consider the following two problems.
Problem 1. A document listing search takes a query q ∈ Σ∗ and a text T ∈ Σ∗ that is partitioned into N documents, {d1, d2, . . . , dN}, and returns a list of the documents in which q appears at least once.
Problem 2. A ranked document search takes a query q ∈ Σ∗, an integer 0 < k ≤ N, and a text T ∈ Σ∗ that is partitioned into N documents {d1, d2, . . . , dN}, and returns the top-k documents ordered by a similarity measure Ŝ(q, di).
By generalizing the problems away from words, we aim to develop indexes that support efficient search in new types of text collections, such as those outlined above, while simultaneously enabling users in traditional domains (like web search) to formulate richer queries, for example containing partial words or markup. In the ranked document search problem, we focus on the specific case where Ŝ(q, di) is the tf×idf measure. tf×idf is the basic building block for a large class of similarity measures used in the most successful IR systems.
This paper contains two contributions towards efficient ranked document search in general texts. (1) We implement, empirically validate and compare existing theoretical proposals for document listing search on general texts, and include a new variant of our own. (2) We propose two novel algorithms for ranked document search using general query patterns. These are evaluated and compared empirically to demonstrate that they perform more efficiently than document listing approaches. In fact, the new ranked document search algorithms are three times faster than a highly tuned inverted file implementation that assumes terms to be English words.
Our approach is to build data structures that allow us to efficiently calculate the frequency of a query pattern in a document (tf) on the fly, unlike traditional inverted indexes that store precomputed tf values for specific query patterns (usually words). Importantly, we are able to derive this tf information in an order which allows rapid identification of the top-k ranked documents. We see this work as an important first step toward practical ranked retrieval for large general-text collections, and an extension of current indexing methods beyond traditional algorithms that assume a lexicon of terms a priori.
2 Basic Concepts
Relevance Ranking. We will focus on the tf×idf measure, where tft,d is the number of times term t appears in document d, and idft is related to the number of documents where t appears.

Suffix Arrays and Self-Indexes. The suffix array A[1..n] of a text collection T of length n is a permutation of (1 . . . n), so that the suffixes of T, starting at the consecutive positions indicated in A, are lexicographically sorted [10]: T[A[i]..n] < T[A[i + 1]..n]. Because of the lexicographic ordering, all the suffixes starting with a given substring t of T form a range A[sp..ep], which can be determined by binary search in O(|t| log n) time. Variants of this basic suffix array are efficient data structures for returning all positions in T where a query pattern q occurs; once sp and ep are located for t = q, it is simple to enumerate the occ = ep − sp + 1 occurrences of q. However, if T is partitioned into documents, then listing the documents that contain q, rather than all occurrences, in less than O(occ) time is not so straightforward; see Section 3.1. Self-indexes [13] offer the same functionality as a suffix array but are heavily compressed. More formally, they can (1) extract any text substring T[i..j], (2) compute sp and ep for a pattern t, and (3) return A[i] for any i. For example, the Alphabet-Friendly FM-index (AF-FMI) [5] occupies nHh(T) + o(n log σ) bits, where σ is the size of the text alphabet, Hh is the h-th order empirical entropy [11] (a lower bound on the space required by any order-h statistical compressor), and h ≤ α log_σ n for any constant 0 < α < 1. It carries out (1) in time O(log^{1+ε} n + (j − i) log σ) for any constant ε > 0, (2) in time O(|t| log σ), and (3) in time O(log^{1+ε} n).

Wavelet Trees. The wavelet tree [8] is a data structure for representing a sequence D[1..n] over an alphabet Σ of size σ. It requires nH0(D) + o(n log σ) + O(σ log n) bits of space, which is asymptotically never larger than the n⌈log σ⌉ bits needed to represent D in plain form (assuming σ = o(n)), and can be significantly smaller if D is compressible. A wavelet tree computes D[i] in time O(log σ), as well as rank_c(D, i), the number of occurrences of symbol c in D[1..i], and select_c(D, j), the position in D of the j-th occurrence of symbol c. An example of a wavelet tree is shown in Fig. 1, and has a structure as follows. At the root, we divide the alphabet Σ into symbols < c and ≥ c, where c is the median of Σ. Then store bitvector B_root[1..n] in the root node, where B_root[i] = 0 if D[i] < c and 1 otherwise. Now the left child of the root will handle sequence D_left, formed by concatenating together all the symbols < c in D[1..n] (respecting the order); and the right child will handle D_right, which has the symbols ≥ c. At the leaves, where all the symbols of the corresponding D_leaf are equal, nothing is stored. It is easy to see that there are ⌈log σ⌉ levels and that n bits are spent per level, for a total of at most n⌈log σ⌉ bits. If, instead, the bitvectors at each level are represented in compressed form [17], the total space of each bitvector B_v becomes nH0(B_v) + o(n), which adds up to the promised nH0(D) + o(n log σ) + O(σ log n) bits for the whole wavelet tree.
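As an illustration of the suffix-array range search described above (our own sketch; a plain suffix array built by sorting stands in for a compressed self-index):

```python
def suffix_array(T):
    """Plain O(n^2 log n) construction, for illustration only."""
    return sorted(range(len(T)), key=lambda i: T[i:])

def sa_range(T, A, q):
    """(sp, ep), 0-based and inclusive, of the suffix-array interval of suffixes
    starting with q; the interval is empty if ep < sp. Two binary searches,
    O(|q| log n) comparisons in total."""
    n = len(A)
    lo, hi = 0, n
    while lo < hi:                              # first suffix whose prefix is >= q
        mid = (lo + hi) // 2
        if T[A[mid]:A[mid] + len(q)] < q:
            lo = mid + 1
        else:
            hi = mid
    sp = lo
    lo, hi = sp, n
    while lo < hi:                              # first suffix that does not start with q
        mid = (lo + hi) // 2
        if T[A[mid]:A[mid] + len(q)] <= q:
            lo = mid + 1
        else:
            hi = mid
    return sp, lo - 1

T = "abracadabra"
A = suffix_array(T)
sp, ep = sa_range(T, A, "abra")
print(sp, ep, [A[i] for i in range(sp, ep + 1)])   # occurrences of "abra" at positions 7 and 0
```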
Fig. 1. D = {6, 2, 5, 6, 2, 3, 1, 8, 5, 1, 5, 5, 1, 4, 3, 7} as a wavelet tree. The top row of each node shows D, the second row the bitvector Bv , and numbers in circles are node numbers for reference in the text. n0 and n1 are the number of 0 and 1 bits respectively in the shaded region of the parent node of the labelled branch. Shaded regions show the parts of the nodes that are accessed when listing documents in the region D[sp = 3..ep = 13]. Only the bitvectors (preprocessed for rank and select) are present in the actual structure, the numbers above each bitvector are included only to aid explanation.
The compressed bitvectors also allow us to obtain B[i], and to compute rank and select, in constant time over the bitvectors, which enables the O(log σ)time corresponding operations on sequence D; in particular D[i], rankc (D, i) and selectc (D, j) all take O(log σ)-time via simple tree traversals (see [13]).
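A minimal sketch of a wavelet tree supporting rank queries via the tree traversal just described (our own code; plain Python lists with prefix sums stand in for compressed bitvectors with O(1) rank, and the alphabet is split at the midpoint rather than exactly as in Fig. 1):

```python
class WaveletTree:
    """Minimal pointer-based wavelet tree over an integer sequence D with
    alphabet [lo, hi]."""
    def __init__(self, D, lo, hi):
        self.lo, self.hi = lo, hi
        if lo == hi or not D:
            self.left = self.right = None
            return
        mid = (lo + hi) // 2                       # symbols <= mid go left, > mid go right
        self.bits = [0 if x <= mid else 1 for x in D]
        self.pref = [0]
        for b in self.bits:
            self.pref.append(self.pref[-1] + b)    # prefix sums emulate O(1) rank1
        self.left = WaveletTree([x for x in D if x <= mid], lo, mid)
        self.right = WaveletTree([x for x in D if x > mid], mid + 1, hi)

    def rank1(self, i):                            # number of 1-bits in bits[0..i-1]
        return self.pref[i]

    def rank(self, c, i):
        """Number of occurrences of symbol c in D[0..i-1] (O(log sigma) traversal)."""
        node = self
        while node.left is not None:
            mid = (node.lo + node.hi) // 2
            if c <= mid:
                i = i - node.rank1(i)              # descend left: count the 0-bits
                node = node.left
            else:
                i = node.rank1(i)                  # descend right: count the 1-bits
                node = node.right
        return i

D = [6, 2, 5, 6, 2, 3, 1, 8, 5, 1, 5, 5, 1, 4, 3, 7]
wt = WaveletTree(D, 1, 8)
print(wt.rank(5, 13) - wt.rank(5, 2))   # tf of document 5 in D[3..13] (1-based) -> 4
```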
3 Previous Work
3.1 Document Listing
The first solution to the document listing problem on general text collections [12] requires optimal O(|q| + docc) time, where docc is the number of documents returned; but O(n log n) bits of space, substantially more than the n log σ bits used by the text. It stores an array D[1..n], aligned to the suffix array A[1..n], so that D[i] gives the document where text position A[i] belongs. Another array, C[1..n], stores in C[i] the last occurrence of D[i] in D[1..i − 1]. Finally, a data structure is built on C to answer the range minimum query RMQC (i, j) = argmini≤r≤j C[r] in constant time [4]. The algorithm first finds A[sp..ep] in time O(|q|) using the suffix tree of T [2]. To retrieve all of the unique values in D[sp..ep], it starts with the interval [s..e] = [sp..ep] and computes i = RMQC (s, e). If C[i] ≥ sp it stops; otherwise it reports D[i] and continues recursively with [s..e] = [sp..i − 1] and [s..e] = [i + 1..ep] (condition C[i] ≥ sp always refers to the original sp value). It can be shown that a different D[i] value is reported at each step.
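A sketch of this listing algorithm on the example sequence of Fig. 1 (our own code; the constant-time RMQ structure is replaced by a linear scan, so only the recursion over C is illustrated):

```python
def doc_listing(D, sp, ep):
    """Muthukrishnan-style listing of the distinct values in D[sp..ep] (0-based,
    inclusive). C[i] is the previous position of the value D[i], or -1."""
    last, C = {}, []
    for i, d in enumerate(D):
        C.append(last.get(d, -1))
        last[d] = i

    out = []
    def report(s, e):
        if s > e:
            return
        i = min(range(s, e + 1), key=lambda r: C[r])   # RMQ_C(s, e), here by scanning
        if C[i] >= sp:
            return            # every value in D[s..e] already occurs earlier in D[sp..ep]
        out.append(D[i])      # first occurrence of D[i] within D[sp..ep]
        report(s, i - 1)
        report(i + 1, e)
    report(sp, ep)
    return out

D = [6, 2, 5, 6, 2, 3, 1, 8, 5, 1, 5, 5, 1, 4, 3, 7]
print(sorted(doc_listing(D, 2, 12)))   # documents in D[3..13] (1-based): [1, 2, 3, 5, 6, 8]
```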
By representing D with a wavelet tree, values C[i] can be calculated on demand, rather than stored explicitly [22]. This reduces the space to | CSA | + n log N + 2n + o(n log N ) bits, where | CSA | is the size of any compressed suffix array and N is the number of documents (Section 2). The CSA is used to find D[sp..ep], and then C[i] = selectD[i] (D, rankD[i] (D, i) − 1) is determined from the wavelet tree of D in O(log N ) time. They use a compact data structure of 2n + o(n) bits [6] for the RMQ queries on C. If, for example, the AFFMI is used as the compressed suffix array then the overall time to report all documents for query q is O(|q| log σ + docc log N ). With this representation, tft,d = rankd (D, ep) − rankd (D, sp − 1). Gagie et al. [7] use the wavelet tree in a way that avoids RMQs on C at all. By traversing down the wavelet tree of D, while setting sp = rankb (Bv , sp − 1) + 1 and ep = rankb (Bv , ep) as we descend to the left (b = 0) or right (b = 1) child of Bv , we reach each possible distinct leaf (document value) present in D[sp, ep] once. To discover each successive unique d value, we first descend to the left child each time the resulting interval [sp , ep ] is not empty, otherwise we descend to the right child. By also trying the right child each time we have gone to the left, all the distinct successive d values in the interval are discovered. We also get tft,d = ep − sp + 1 upon arriving at the leaf of each d. They show that it is possible to get the i-th document in the interval directly in O(log N ) time. This is the approach we build upon to get our new algorithms described in Section 4. Sadakane [20] offers a different space-time tradeoff. He builds a compressed suffix array A, and a parentheses representation of C in order to run RMQ queries on it without accessing C. Furthermore, he stores a bitvector B indicating the points of T where documents start. This emulates D[i] = rank1 (B, A[i]) for n document listing. The overall space is |CSA| + 4n + o(n) + N log N bits. Other |CSA| bits are required in order to compute the tft,d values. If the AF-FMI is used as the implementation of A, the time required is O(|q| log σ + docc log1+ n). Any document listing algorithm obtains docc trivially, and hence idft = log(N/docc). If, however, a search algorithm is used that does not list all documents, idf must be explicitly computed. Sadakane [20] proposes a 2n + o(n) bit data structure built over the suffix array to compute idft for a given t. 3.2
3.2 Top-k Retrieval
In IR it is typical that only the top k ranked documents are required, for some k, as for example in Web search. There has been little theoretical work on solving this "top-k" variant of the document listing problem. Muthukrishnan [12] solves a variant where only the docc′ documents that contain at least f occurrences of q (tfq,d ≥ f) are reported, in time O(|q| + docc′). This requires a general data structure of O(n log n) bits, plus a specific one of O((n/f) log n) bits. This approach does not exactly solve the ranked document search problem. Recently, Hon et al. [9] extended the solution to return the top-k ranked documents in time O(|q| + k log k), while keeping O(n log n) bits of space. They also gave a compressed variant with 2|CSA| + o(n) + N log(n/N) bits and O(|q| + k log^{4+ε} n) query time, but its practicality is not clear.
4 New Algorithms
We introduce two new algorithms for top-k document search, extending Gagie et al.'s proposal for document listing [7]. Gagie et al. introduce their method as a repeated application of the quantile(D[sp..ep], p) function, which returns the p-th number in D[sp..ep] if that subarray were sorted. To get the first unique document number in D[sp..ep], we issue d1 = quantile(D[sp..ep], 1). To find the next value, we issue d2 = quantile(D[sp..ep], 1 + tfq,d1). The j-th unique document will be dj = quantile(D[sp..ep], 1 + Σ_{i=1}^{j−1} tfq,di), with the frequencies computed along the way as tfq,d = rankd(D, ep) − rankd(D, sp − 1). This lists the documents and their tf values in increasing document number order. Our first contribution to improving document listing search algorithms is the observation, not made by Gagie et al., that the tfq,d value can be collected on the way to extracting document number d from the wavelet tree built on D. In the parent node of the leaf corresponding to d, tfq,d is equal to the number of 0-bits (resp. 1-bits) in Bv[sp′..ep′] if d's leaf is a left child (resp. right child). Thus, two wavelet tree rank operations are avoided, an important practical improvement. We now recast Gagie et al.'s algorithm. When listing all distinct documents in D[sp..ep], the algorithm of Gagie et al. can be thought of as a depth-first traversal of the wavelet tree that does not follow paths leading only to document numbers that do not occur in D[sp..ep]. Consider the example tree of Fig. 1, where we list the distinct numbers in D[3..13]. A depth-first traversal begins by following the leftmost path to leaf 8. As we step left to a child, we take note of the number of 0-bits in the range used in its parent node, labelled n0 on each branch. Both n0 and n1 are calculated to determine if there is a document number of interest in the left and right child. As we enter leaf 8, we know that there are n0 = 3 copies of document 1 in D[3..13], and report this as tfq,1 = 3. Next in the depth-first traversal is leaf 9, so we report tfq,2 = 1, the n1 value of its parent node 5. The traversal continues, reporting tfq,3 = 1, and then moves to the right branch of the root to fetch the remainder of the documents to report. Again, this approach produces the document numbers in increasing document number order. These can obviously be post-processed to extract the k documents with the highest tfq,d values by sorting the docc values. A more efficient approach, and our focus next, fetches the document numbers in tf order, so that only the first k need to be processed.
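The hypothetical sketch below (not the paper's implementation) mimics this constrained depth-first traversal. For clarity the wavelet tree is simulated by filtering the subarray on the symbol range [lo..hi], so each leaf receives exactly the elements equal to its document number; a real wavelet tree would instead derive the child intervals with rank operations per node, which is precisely where the observation above saves work.

def list_with_tf(sub, lo, hi):
    # sub: the portion of D[sp..ep] whose values lie in [lo..hi]
    if not sub:
        return []
    if lo == hi:                          # leaf: every element equals lo
        return [(lo, len(sub))]           # (document, tf), tf known from the parent split
    mid = (lo + hi) // 2
    left  = [d for d in sub if d <= mid]  # the n0 "0-bits" of this node
    right = [d for d in sub if d >  mid]  # the n1 "1-bits" of this node
    return list_with_tf(left, lo, mid) + list_with_tf(right, mid + 1, hi)

D = [1, 3, 1, 2, 3, 1, 2]
sp, ep = 2, 6                             # 1-based range D[2..6] = [3, 1, 2, 3, 1]
print(list_with_tf(D[sp - 1:ep], 1, 4))   # -> [(1, 2), (2, 1), (3, 2)]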
4.1 Top-k via Greedy Traversal
The approach used in this method is to prioritize the traversal of the wavelet tree nodes by the size of the range [sp′..ep′] in the node's bitvector. By traversing to nodes with larger ranges in a greedy fashion, we reach the document leaves in tf order, and reach the first k leaves having potentially explored much less of the tree than we would have using a depth-first-style traversal. We maintain a priority queue of (node, range) pairs, initialized with the single pair (root, [sp..ep]). The priority of a pair favors larger ranges, and ties are broken in favor of deeper nodes. At each iteration, we remove the node (v, [sp′..ep′]) with
largest ep′ − sp′. If v is a leaf, then we report the corresponding document and its tf value, ep′ − sp′ + 1. Otherwise, the node is internal; if Bv[sp′..ep′] contains one or more 0-bits (resp. 1-bits) then at least one document to report lies in the left subtree (resp. right subtree), and so we insert the child node with an appropriate range, which will have size n0 (resp. n1), into the queue. Note that we may insert between zero and two new elements into the queue. In the worst case, this approach will explore almost as much of the tree as would be explored during the constrained depth-first traversal of Gagie et al., and so requires O(docc log N) time. This worst case is reached when every node that is a parent of a leaf is in the queue, but only one leaf is required, e.g. when all of the documents in D[sp..ep] have tfq,d = 1.
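A minimal sketch of this greedy traversal follows, again simulating the wavelet tree by explicit filtering (illustrative input, not the authors' code). Python's heapq is a min-heap, so the priority is encoded as (-range_size, -depth): larger ranges come out first, and ties favor deeper nodes as described above.

import heapq

def topk_greedy(sub, lo, hi, k):
    heap, result = [(-len(sub), 0, lo, hi, sub)], []
    while heap and len(result) < k:
        neg_size, neg_depth, lo, hi, sub = heapq.heappop(heap)
        if lo == hi:                               # leaf: report document and its tf
            result.append((lo, -neg_size))
            continue
        mid = (lo + hi) // 2
        left  = [d for d in sub if d <= mid]
        right = [d for d in sub if d >  mid]
        for child_lo, child_hi, child in ((lo, mid, left), (mid + 1, hi, right)):
            if child:                              # only push non-empty ranges
                heapq.heappush(heap, (-len(child), neg_depth - 1, child_lo, child_hi, child))
    return result

D = [1, 3, 1, 2, 3, 1, 2, 1]
print(topk_greedy(D[1:7], 1, 4, 2))   # top-2 documents of D[2..7] with their tf values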
4.2 Top-k via Quantile Probing
We now exploit the fact that in a sorted array X[1..m] of document numbers, if a document d occurs more than m/2 times, then X[m/2] = d. The same argument applies for numbers with frequency > m/4: if they exist, they must occur at positions m/4, 2m/4 or 3m/4 in X. In general we have the following:

Observation 1. On a sorted array X[1..m], if there exists a d ∈ X with frequency larger than m/2^i then there exists at least one j such that X[j·m/2^i] = d.

Of course we cannot afford to fully sort D[sp..ep]. However, we can access the elements of D[sp..ep] as if they were sorted using the aforementioned quantile queries [7] over the wavelet tree of D. That is, we can determine the document d with a given rank r in D[sp..ep] using quantile(D[sp..ep], r) in O(log N) time. In the remainder of this section we refer to D[sp..ep] as X[1..m] with m a power of 2, and assume we can probe X as if it were sorted (with each probe requiring O(log N) time). To derive a top-k listing algorithm, we apply Obs. 1 in rounds. As the algorithm proceeds, we accumulate candidates for the top-k documents in a min-heap of at most k pairs of the form (d, tfq,d), keyed on tfq,d. In round 1, we determine the document d with rank m/2 and its frequency tfq,d. If d does not already have an entry in the heap,1 then we add the pair (d, tfq,d) to the heap, with priority tfq,d. This ends the first round. Note that the item we inserted may in fact have tfq,d ≤ m/2, but at the end of the round, if a document d has tfq,d > m/2, then it is in the heap. We continue, in round 2, to probe the elements X[m/4] and X[3m/4], and their frequencies fX[m/4] and fX[3m/4]. If the heap contains fewer than k items, and does not contain an entry for X[m/4], we insert (X[m/4], fX[m/4]). Otherwise we check the frequency of the minimum item; if it is less than fX[m/4], we extract the minimum and insert (X[m/4], fX[m/4]). We then perform the same check and update with (X[3m/4], fX[3m/4]). In round 2 we need not ask about the element with rank 2m/4 = m/2, as we already probed it in round 1. To avoid reinspecting ranks, during the i-th round we determine the elements with ranks m/2^i, 3m/2^i, 5m/2^i, . . . . The total number
1 We can determine this easily in O(1) time by maintaining a bitvector of size N.
[Fig. 2 shows two log-scale panels, one for wsj and one for protein: Time (msec) on the y-axis against Query Length (3–20) on the x-axis, with one curve per method (Sada, ℓ-gram, VM, WT, Quantile, Greedy).]
Fig. 2. Mean time to find documents for all 200 queries of each length for methods Sada, ℓ-gram, VM and WT, and mean time to report the top k = 10 documents by tfq,d for methods Quantile and Greedy. (Lines interpolated for clarity.)
of elements probed (and hence quantile queries) to list all documents is at most 4m/fmin, where fmin is the k-th highest frequency in the result. Due to Obs. 1, and because we maintain items in a min-heap, at the end of the i-th round the k most frequent documents having tf > m/2^i are guaranteed to be in our heap. Thus, if the heap contains k items at the start of round i + 1, and the smallest element in it has tf ≥ m/2^{i+1}, then no element in the heap can be displaced; we have found the top-k items and can stop.
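For illustration only (not the authors' code), the sketch below simulates the probing rounds, the min-heap of candidates, and the stopping rule. quantile() and the tf computation are stand-ins, realized by sorting and counting the subarray; the wavelet tree of [7] answers both in O(log N) time per query. If the heap is still not full after the odd-rank rounds, a final sweep over all ranks completes the listing.

import heapq
from collections import Counter

def topk_quantile(sub, k):
    X = sorted(sub)                   # stand-in for quantile() access
    freq = Counter(sub)               # stand-in for rank-based tf computation
    m, heap, seen = len(X), [], set()
    step = m // 2
    while True:
        ranks = range(step, m + 1, 2 * step) if step > 0 else range(1, m + 1)
        for r in ranks:
            d = X[r - 1]
            if d in seen:
                continue
            if len(heap) < k:
                heapq.heappush(heap, (freq[d], d)); seen.add(d)
            elif heap[0][0] < freq[d]:
                _, old = heapq.heappushpop(heap, (freq[d], d))
                seen.discard(old); seen.add(d)
        # stop once the current minimum can no longer be displaced in later rounds
        if step == 0 or (len(heap) == k and heap[0][0] >= step // 2):
            break
        step //= 2
    return sorted(heap, reverse=True)

D = [1, 3, 1, 2, 3, 1, 2, 1]
print(topk_quantile(D, 2))            # -> [(4, 1), (2, 2)] on this toy input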
5 Experiments
We evaluated our new algorithms (Greedy from Section 4.1 and Quantile from Section 4.2) on English text and protein collections. We also implemented our improved version of Gagie et al.'s wavelet tree document listing method, labelled WT. We include three baseline methods derived from previous work on the document listing problem. The first two are implementations of Välimäki and Mäkinen [22] and Sadakane [20] as described in Section 3, labelled VM and Sada respectively. The third, ℓ-gram, is a close variant of Puglisi et al.'s inverted index of ℓ-grams [16], used with parameters ℓ = 3 and block size = 4096. Our machine is a 3.0 GHz Intel Xeon with 3 GB RAM and 1024 kB on-die cache.

Experimental Data. We use two data sets. wsj is a 100 MB collection of 36,603 news documents in text format drawn from disk three of the trec data collection (http://trec.nist.gov). protein is a concatenation of 143,244 Human and Mouse protein sequences totalling 60 MB (http://www.ebi.ac.uk/swissprot). For each collection, 200 queries were randomly generated for each character length from 3 to 20, each query appearing at least 5 times in the collection, for a total of 3,600 sample queries per collection. Each query was run 10 times.

Timing Results. Fig. 2 shows the total time for 200 queries of each query length for all methods. The document listing method of Gagie et al. with our optimizations (number 4 on the graphs) is clearly the fastest method for finding all documents and tfq,d values that contain the query q in document number order. The two algorithms which implicitly return documents in decreasing tfq,d
[Fig. 3 shows two box-plot panels, one per collection: Time (msec per Document) from 0.0 to 1.2 on the y-axis, for methods Sada, ℓ-gram, VM, WT, Quantile and Greedy.]
Fig. 3. Time per document listed as in Fig. 2, with 25th and 75th percentiles (boxes), median (solid line), and outliers (whiskers)

Table 1. Peak memory use during search (MB) for the algorithms on wsj and protein
          Sada   ℓ-gram    VM    WT   Quantile   Greedy
wsj        572      122   391   341        341      341
protein    870       77   247   217        217      217
order, Quantile and Greedy, are faster than all other methods. Note these final two methods are only timed to return k = 10 documents, but if the other methods were set the same task, their times would only increase, as they must enumerate all documents prior to choosing the top-k. Moreover, we found that choosing any value of k from 1 to 100 had little effect on the runtime of Greedy and Quantile. Note the anomalous drop in query time for |q| = 5 on protein for all methods except ℓ-gram. This is a result of the low occ and docc for that set of queries, thus requiring less work from the self-index methods. The time taken to identify the range D[sp..ep] is very fast, and about the same for all the query lengths tested. A low occ value means this range is smaller, and so less work for the document listing methods. Method ℓ-gram, however, does not benefit from the small number of occurrences of q, as it has to intersect all inverted lists for the 3-grams that make up q, which may be long even if the resulting list is short. Fig. 3 summarizes the time per document listed, and clearly shows that the top-k methods (Quantile and Greedy) do more work per document listed. However, Fig. 2 demonstrates that this is more than recouped whenever k is small relative to the total number of documents containing q. The average docc is well above 10 for all pattern lengths in the current experimental setup.

Memory Use. Table 1 shows the memory use of the methods on the two data sets. The inverted file approach, ℓ-gram, uses much less memory than the other approaches, but must have the original text available in order to filter out false matches and perform the final tf calculations. It is possible for the wavelet trees in all of the other methods to be compressed, but it is also possible to compress the text that is used (and counted) in the space requirements for method ℓ-gram. The Sada method exhibits a higher than expected memory usage because the protein collection has a high proportion of short documents. The Sada method requires a CSA to be constructed for each document, and in this
case this is undesirable, as the CSA construction algorithm has a high startup overhead that is only recouped as the size of the indexed text increases.
[Fig. 4 shows two panels, for 2-word and 4-word queries: Time per query (msec) from 0 to 15 on the y-axis, for methods Zet, Zet-p, Zet-io, WT, Quantile and Greedy.]
Fig. 4. Time to find word based queries using Zettair and the best of the new methods for 2 and 4 word queries on wsj
Term-based Search. The results up to now demonstrate that the new compressed self-index based methods are capable of performing document listing search on general patterns in memory faster than previous ℓ-gram based inverted file approaches. However, these approaches are not directly comparable to the common word-based queries at which traditional inverted indexes excel. Therefore, we performed additional experiments to test whether these approaches are capable of outperforming a term-based inverted file. To this end we generated 44,693 additional queries aligned on English word boundaries from the wsj collection. Short of implementing an in-memory search engine, it is difficult to choose a baseline inverted file implementation that will efficiently solve the top-k document listing problem. Zettair is a publicly available, open source search engine engineered for efficiency (www.seg.rmit.edu.au/zettair). In addition to the usual bag-of-terms query processing using various ranking formulas, it readily supports phrase queries where the terms in q must occur in order, and also implements the impact ordering scheme of Anh and Moffat [1]. As such, we employed Zettair in three modes. Firstly, Zet used the Okapi BM-25 ranking formula to return the top 20 ranked documents for the bag-of-terms q. Secondly, Zet-p used the "phrase query" mode of Zettair to return the top 20 ranked documents which contained the exact phrase q. Finally, we used the Zet-io mode to perform a bag-of-terms search for q using impact ordered inverted lists and the associated early termination heuristics. Zettair was modified to ensure that all efficiency measurements were done in RAM, just as the self-indexing methods require. Time to load posting lists into memory is not counted in the measurements. Fig. 4 shows the time for searching for two- and four-word patterns. We do not show the times for Sada, VM, and ℓ-gram, as they were significantly slower than the new methods, as expected from Fig. 2. The Greedy and Quantile methods used k = 20. Zet-p has better performance, on average, than Zet, and Zet-io is the most efficient of all word-based inverted indexing methods tested. A direct comparison between the three Zettair modes and the new algorithms is tenuous,
as Zettair implements a complete ranking, whereas the document listing methods simply use only the tf and idf as their "ranking" method. However, Zet-io preorders the inverted lists to maximize efficiency, removing many of the standard calculations performed in Zet and Zet-p. This makes Zet-io comparable with the computational cost of our new methods. The WT approach is surprisingly competitive with the best inverted indexing method, Zet-io. Given the variable efficiency of two-word queries with WT (due to the diverse number of possible document matches for each query), it is difficult to draw definitive conclusions on the relative algorithm performance. However, the Greedy algorithm is clearly more efficient than Zet-io (means 0.91 ms and 0.69 ms; Wilcoxon test, p < 10^{-15}). When the phrase length is increased, the two standard Zettair methods get slower per query, as expected, because they now have to intersect more inverted lists to produce the final ranked result. Interestingly, all other methods get faster, as there are fewer total documents to list on average, and fewer intersections for the impact ordered inverted file. For four-word queries, all of the self-indexing methods are clearly more efficient than the inverted file methods. Adding an idf computation to Greedy and Quantile would not make them less efficient than WT.
6 Discussion
We have implemented document listing algorithms that, to date, had only been theoretical proposals. We have also improved one of the approaches, and introduced two new algorithms for the case where only the top-k documents sorted by tf values are required. For general patterns, approach WT as improved in this paper was the fastest for document listing, whereas our novel Greedy approach was much faster for fetching the top k documents (for k < 100, at least). In the case where the terms comprising documents and queries are fixed as words in the English language, Greedy is capable of processing 4600 queries per second, compared to the best inverted indexing method, Zet-io, which processes only 1400 on average. These results are extremely encouraging: perhaps self-index based structures can compete efficiently with inverted files. In turn, this would remove the restriction that IR system users must express their information needs as terms in the language chosen by the system, rather than in a more intuitive way. Our methods return the top-k documents in tf order, instead of the tf×idf order of most information retrieval systems. However, in the context of general pattern search, q only ever contains one term (the search pattern), and thus the idf does not alter the tf order. If these data structures are to be used for bag-of-strings search, then the idf factor may become important, and can be easily extracted using method WT, which is still faster than Zet-io in our experiments.
References
1. Anh, V., Moffat, A.: Pruned query evaluation using pre-computed impacts. In: Proc. 29th ACM SIGIR, pp. 372–379 (2006)
2. Apostolico, A.: The myriad virtues of subword trees. In: Combinatorial Algorithms on Words. NATO ISI Series, pp. 85–96. Springer, Heidelberg (1985)
3. Baeza-Yates, R., Ribeiro-Neto, B.: Modern Information Retrieval. Addison Wesley, Reading (1999)
4. Bender, M., Farach-Colton, M.: The LCA problem revisited. In: Gonnet, G.H., Viola, A. (eds.) LATIN 2000. LNCS, vol. 1776, pp. 88–94. Springer, Heidelberg (2000)
5. Ferragina, P., Manzini, G., Mäkinen, V., Navarro, G.: Compressed representations of sequences and full-text indexes. ACM TALG 3(2), article 20 (2007)
6. Fischer, J., Heun, V.: A new succinct representation of RMQ-information and improvements in the enhanced suffix array. In: Chen, B., Paterson, M., Zhang, G. (eds.) ESCAPE 2007. LNCS, vol. 4614, pp. 459–470. Springer, Heidelberg (2007)
7. Gagie, T., Puglisi, S., Turpin, A.: Range quantile queries: Another virtue of wavelet trees. In: Karlgren, J., Tarhio, J., Hyyrö, H. (eds.) SPIRE 2009. LNCS, vol. 5721, pp. 1–6. Springer, Heidelberg (2009)
8. Grossi, R., Gupta, A., Vitter, J.: High-order entropy-compressed text indexes. In: Proc. 14th SODA, pp. 841–850 (2003)
9. Hon, W.-K., Shah, R., Vitter, J.S.: Space-efficient framework for top-k string retrieval problems. In: Proc. FOCS, pp. 713–722 (2009)
10. Manber, U., Myers, G.: Suffix arrays: a new method for on-line string searches. SIAM J. Computing 22(5), 935–948 (1993)
11. Manzini, G.: An analysis of the Burrows-Wheeler transform. J. ACM 48(3), 407–430 (2001)
12. Muthukrishnan, S.: Efficient algorithms for document retrieval problems. In: Proc. 13th SODA, pp. 657–666 (2002)
13. Navarro, G., Mäkinen, V.: Compressed full-text indexes. ACM Computing Surveys 39(1), article 2 (2007)
14. Persin, M., Zobel, J., Sacks-Davis, R.: Filtered document retrieval with frequency-sorted indexes. JASIS 47(10), 749–764 (1996)
15. Ponte, J.M., Croft, W.B.: A language modeling approach to information retrieval. In: Proc. 21st ACM SIGIR, pp. 275–281 (1998)
16. Puglisi, S., Smyth, W., Turpin, A.: Inverted files versus suffix arrays for locating patterns in primary memory. In: Crestani, F., Ferragina, P., Sanderson, M. (eds.) SPIRE 2006. LNCS, vol. 4209, pp. 122–133. Springer, Heidelberg (2006)
17. Raman, R., Raman, V., Rao, S.: Succinct indexable dictionaries with applications to encoding k-ary trees and multisets. In: Proc. SODA, pp. 233–242 (2002)
18. Robertson, S.E., Jones, K.S.: Relevance weighting of search terms. JASIST 27, 129–146 (1976)
19. Robertson, S.E., Walker, S., Jones, S., Hancock-Beaulieu, M., Gatford, M.: Okapi at TREC-3. In: Harman, D.K. (ed.) Proc. 3rd TREC (1994)
20. Sadakane, K.: Succinct data structures for flexible text retrieval systems. J. Discrete Algorithms 5(1), 12–22 (2007)
21. Salton, G., Wong, A., Yang, C.S.: A vector space model for automatic indexing. Comm. ACM 18(11), 613–620 (1975)
22. Välimäki, N., Mäkinen, V.: Space-efficient algorithms for document retrieval. In: Ma, B., Zhang, K. (eds.) CPM 2007. LNCS, vol. 4580, pp. 205–215. Springer, Heidelberg (2007)
23. Witten, I., Moffat, A., Bell, T.: Managing Gigabytes, 2nd edn. Morgan Kaufmann, San Francisco (1999)
24. Zobel, J., Moffat, A.: Inverted files for text search engines. ACM Computing Surveys 38(2), 1–56 (2006)
Shortest Paths in Planar Graphs with Real Lengths in O(n log2 n/ log log n) Time

Shay Mozes1,⋆ and Christian Wulff-Nilsen2

1 Department of Computer Science, Brown University, Providence, RI 02912, USA
[email protected]
2 Department of Computer Science, University of Copenhagen, DK-2100 Copenhagen, Denmark
[email protected]
Abstract. Given an n-vertex planar directed graph with real edge lengths and with no negative cycles, we show how to compute single-source shortest path distances in the graph in O(n log2 n/ log log n) time with O(n) space. This improves on a recent O(n log2 n) time bound by Klein et al.
1 Introduction
Computing shortest paths in graphs is one of the most fundamental problems in combinatorial optimization. The Bellman-Ford algorithm and Dijkstra's algorithm are classical algorithms that find distances from a given vertex to all other vertices in the graph. The Bellman-Ford algorithm works for general graphs and runs in O(mn) time, where m and n are the number of edges and vertices of the graph, respectively. Dijkstra's algorithm runs in O(m + n log n) time when implemented with Fibonacci heaps, but it only works for graphs with non-negative edge lengths. We are interested in the single-source shortest path (SSSP) problem for planar directed graphs. There is an optimal O(n) time algorithm for SSSP when all edge lengths are non-negative [4]. For planar graphs with arbitrary real edge lengths and with no negative cycles1, Lipton, Rose, and Tarjan [8] gave an O(n^{3/2}) time algorithm. Henzinger, Klein, Rao, and Subramanian [4] obtained a (not strongly) polynomial bound of Õ(n^{4/3}). Later, Fakcharoenphol and Rao [3] showed how to solve the problem in O(n log^3 n) time and O(n log n) space. Recently, Klein, Mozes, and Weimann [7] presented a linear space O(n log2 n) time recursive algorithm. In this paper, we present a linear space algorithm with O(n log2 n/ log log n) running time. The speed-up comes from a reduction of the recursion depth of the algorithm in [7] from O(log n) to O(log n/ log log n) levels. Each recursive step now becomes more involved. To deal with this, we show a new technique for using the Monge property in graphs that do not necessarily possess that property. Both [3] and [7] showed how to partition a set of distances that are not Monge into subsets, each of which is Monge. Exploiting this property, the
⋆ Supported by NSF grant CCF-0635089.
1 Algorithms for this problem can be used to detect negative cycles.
distances within each subset can be processed efficiently. Here we extend that technique by exhibiting sets of Monge distances whose union is a superset of the distances we are actually interested in. We believe this technique may be useful in solving other problems, not necessarily in the context of the Monge property. From observations in [7], our algorithm can be used to solve bipartite perfect matching, feasible flow, and feasible circulation in planar graphs in O(n log2 n/ log log n) time. Chambers et al. claimed [1] that the algorithm in [7] generalizes to bounded genus graphs. However, the proof (Theorem 3.2 in [1]) is not detailed, and it seems that our new technique is required for its correctness [2]. The resulting running time for fixed genus is also improved to O(n log2 n/ log log n). The organization of the paper is as follows. In Section 2, we give some definitions and review some basic results. In Section 3 we give an overview of the algorithm of Klein et al. In Section 4 we show how to improve the running time. Finally, we make some concluding remarks in Section 5.
2 Preliminaries
In the following, G = (V, E) denotes an n-vertex planar directed graph with real edge lengths and with no negative cycles. For vertices u, v ∈ V, let dG(u, v) ∈ R ∪ {∞} denote the length of a shortest path in G from u to v. We extend this notation to subgraphs of G. We will assume that G is triangulated such that there is a path of finite length between each ordered pair of vertices of G. The new edges added have sufficiently large lengths so that finite shortest path distances in G will not be affected. Given a graph H, let VH and EH denote its vertex set and edge set, respectively. For an edge e ∈ EH, let l(e) denote the length of e (we omit H in the definition but this should not cause any confusion). Let P = u1, . . . , um be a path in H, where |P| = m. For 1 ≤ i ≤ j ≤ m, P[ui, uj] denotes the subpath ui, . . . , uj. If P′ = um, . . . , um′ is another path, we define PP′ = u1, . . . , um−1, um, um+1, . . . , um′. Path P is said to intersect P′ if VP ∩ VP′ ≠ ∅. Define a region R to be the subgraph of G induced by a subset of V. In G, the vertices of VR adjacent to vertices in V \ VR are called boundary vertices (of R) and the set of boundary vertices of R is called the boundary of R. Vertices of VR that are not boundary vertices of R are called interior vertices (of R). The cycle separator theorem of Miller [9] states that, given an m-vertex triangulated plane graph, there is a Jordan curve C intersecting O(√m) vertices and no edges such that between m/3 and 2m/3 vertices are enclosed by C. Furthermore, this Jordan curve can be found in linear time. Let r ∈ (0, n) be a parameter. Fakcharoenphol and Rao [3] showed how to recursively apply the cycle separator theorem so that in O(n log n) time, (a plane embedding of) G is divided into O(n/r) regions with the following properties:
1. Each region contains at most r vertices and O(√r) boundary vertices.
2. No two regions share interior vertices.
3. Each region has a boundary contained in O(1) faces, defined by simple cycles.
We refer to such a division as an r-division of G. For simplicity we assume that the O(1) faces in property 3 contain boundary vertices only. This can always be achieved by adding edges between consecutive boundary vertices on each face. Let R be a region in an r-division. We assume that R is enclosed by one of the cycles C in the boundary of R. This can be achieved by adding a new cycle if needed. C is the external face of R. Let F be one of the O(1) faces defining the boundary of R. If F is not the external face of R then the subgraph of G enclosed by F (including the boundary vertices of R in F) is called a hole of R. For a graph H, a price function is a function p : VH → R. The reduced cost function induced by p is the function wp : EH → R, defined by wp(u, v) = p(u) + l(u, v) − p(v). We say that p is a feasible price function for H if for all e ∈ EH, wp(e) ≥ 0. It is well known that reduced cost functions preserve shortest paths, meaning that we can find shortest paths in H by finding shortest paths in H with edge lengths defined by the reduced cost function wp. Furthermore, given p and the distance in H w.r.t. wp from a u ∈ VH to a v ∈ VH, we can extract the original distance in H from u to v in constant time [7]. Observe that if p is feasible, Dijkstra's algorithm can be applied to find shortest path distances, since then wp(e) ≥ 0 for all e ∈ EH. For any s ∈ VH, the function u → dH(s, u) is an example of a feasible price function (recall that we have assumed that dH(s, u) < ∞ for all u ∈ VH). A matrix M = (Mij) is totally monotone if for every i, i′, j, j′ such that i < i′ and j < j′, Mij ≤ Mij′ implies Mi′j ≤ Mi′j′. Totally monotone matrices were introduced by Aggarwal et al. [10], who gave an algorithm, nicknamed SMAWK, that, given a totally monotone n × m matrix M, finds all row minima of M in just O(n + m) time. A matrix M = (Mij) is convex Monge if for every i, i′, j, j′ such that i < i′ and j < j′, we have Mij + Mi′j′ ≥ Mij′ + Mi′j. It is easy to see that if M is convex Monge then it is totally monotone, and that SMAWK can be used to find the column minima of a convex Monge matrix. The algorithm in [7] uses a generalization of SMAWK to so-called falling staircase matrices, due to Klawe and Kleitman [5]. Klawe and Kleitman's algorithm finds all column minima in O(mα(n) + n) time, where α(n) is the inverse Ackermann function.
3 The Algorithm of Klein et al.
In this section, we give an overview of the algorithm of [7]. Let s be a vertex of G. To find SSSP distances in G with source s, the algorithm finds a cycle separator C with O(√n) boundary vertices that separates G into two subgraphs, G0 and G1. Let r be any of these boundary vertices. The algorithm consists of five stages: Recursion: SSSP distances from r are computed recursively in G0 and G1. Intra-part boundary distances: Distances in Gi between every pair of boundary vertices of Gi are computed in O(n log n) time using the algorithm of [6], for i = 0, 1.
Single-source inter-part boundary distances: A variant of Bellman-Ford is used to compute SSSP distances in G from r to all boundary vertices on C. The algorithm consists of O(√n) iterations. Each iteration runs in O(√n α(n)) time using the algorithm of Klawe and Kleitman [5]. This stage takes O(nα(n)) time. Single-source inter-part distances: Distances from the previous stage are used to modify G such that all edge lengths are non-negative without changing the shortest paths. Dijkstra's algorithm is then used in the modified graph to obtain SSSP distances in G with source r. Total running time for this stage is O(n log n). Rerooting single-source distances: The computed distances from r in G form a feasible price function for G. Dijkstra's algorithm is applied to obtain SSSP distances in G with source s in O(n log n) time. The last four stages of the algorithm in [7] run in a total of O(n log n) time. Since there are O(log n) recursion levels, the total running time is O(n log2 n). We next describe how to improve this time bound.
4 An Improved Algorithm
The main idea is to reduce the number of recursion levels by applying the cycle separator theorem of Miller not once but several times at each level of the recursion. More precisely, for a suitable p, we obtain an n/p-division of G in O(n log n) time. For each region Ri in this n/p-division, we pick an arbitrary boundary vertex ri and recursively compute SSSP distances in Ri with source ri. This is similar to the first stage of the algorithm in [7], except that we recurse on O(p) regions instead of just two. We will show how all these recursively computed distances can be used to compute SSSP distances in G with source s in O(n log n + npα(n)) additional time. This bound is no better than the O(n log n) bound of the original algorithm but does result in fewer recursion levels. Since the size of regions is reduced by a factor of p with each recursive call, the depth of the recursion is only O(log n/ log p). Furthermore, by recursively applying the separator theorem of Miller as done by Fakcharoenphol and Rao [3], the subgraphs at the k-th recursion level define an r-division of G where r = n/p^k. This r-division consists of O(n/r) regions each containing at most r vertices, implying that the total time spent at the k-th recursion level is O((n/r)(r log r + rpα(r))) = O(n log n + npα(n)). Summing over all O(log n/ log p) levels, it follows that the total running time of our algorithm is O((log n/ log p)(n log n + npα(n))). To minimize this expression, we set n log n = npα(n), so p = log n/α(n). This gives the desired O(n log2 n/ log log n) running time. It remains to show how to compute SSSP distances in G with source s in O(n log n + npα(n)) = O(n log n) time, excluding the time for recursive calls. Assume that we are given an n/p-division of G and that for each region R, we are given SSSP distances in R with some boundary vertex of R as source. Note
that the number of regions is O(p) and each region contains at most n/p vertices and O(√(n/p)) boundary vertices. The main technical difficulty arises from the existence of holes. We will first describe a generalization of [7] using multiple regions instead of just two, but assuming that no region has holes. In this case, as in [7], all of the boundary vertices of a region are cyclically ordered on its external face. In Section 4.4 we show how to handle the existence of holes. Without holes, the remaining four steps of the algorithm are very similar to those in the algorithm of Klein et al. We give an overview here and go into greater detail in the subsections below. Each step takes O(n log n) time. Intra-region boundary distances: For each region R, distances in R between each pair of boundary vertices of R are computed. Single-source inter-region boundary distances: Distances in G from an arbitrary boundary vertex r of an arbitrary region to all boundary vertices of all regions are computed. Single-source inter-region distances: Using the distances obtained in the previous stage to obtain a modified graph, distances in G from r to all vertices of G are computed using Dijkstra's algorithm on the modified graph. Rerooting single-source distances: Identical to the final stage of the original algorithm.
4.1 Intra-region Boundary Distances
Let R be a region. Since R has no holes, we can apply the multiple-source shortest path algorithm of [6] to R, since we have a feasible price function from the recursively computed distances in R. The total time for this is O(|VR| log |VR|), which is O(n log n) over all regions.
4.2 Single-Source Inter-region Boundary Distances
Let r be some boundary vertex of some region. We need to find distances in G from r to all boundary vertices of all regions. To do this, we use a variant of Bellman-Ford similar to the one used in stage three of the original algorithm. Let R be the set of O(p) regions, let B ⊆ V be the set of boundary vertices over all regions, and let b = |B| = O(p√(n/p)) = O(√(np)). Note that a vertex in B may belong to several regions. Pseudocode of the algorithm is shown in Figure 1. Notice the similarity with the algorithm in [7] but also an important difference: in [7], each table entry ej[v] is updated only once. Here, it may be updated several times in iteration j, since more than one region may have v as a boundary vertex. For j ≥ 1, the final value of ej[v] will be

    ej[v] = min_{w ∈ Bv} {ej−1[w] + dR(w, v)},        (1)

where Bv is the set of boundary vertices of regions having v as boundary vertex.
1. initialize vector ej[v] for j = 0, . . . , b and v ∈ B
2. ej[v] := ∞ for all v ∈ B and j = 0, . . . , b
3. e0[r] := 0
4. for j = 1, . . . , b
5.    for each region R ∈ R
6.       let C be the cycle defining the boundary of R
7.       ej[v] := min{ej[v], min_{w∈VC}{ej−1[w] + dR(w, v)}} for all v ∈ VC
8. D[v] := eb[v] for all v ∈ B
Fig. 1. Pseudocode for single-source inter-region boundary distances algorithm
To show the correctness of the algorithm, we need the following two lemmas.

Lemma 1. Let P be a simple r-to-v shortest path in G where v ∈ B. Then P can be decomposed into at most b subpaths P = P1 P2 P3 . . ., where the endpoints of each subpath Pi are boundary vertices and Pi is a shortest path in some region of R.

Lemma 2. After iteration j of the algorithm in Figure 1, ej[v] is the length of a shortest path in G from r to v that can be decomposed into at most j subpaths P = P1 P2 P3 . . . Pj, where the endpoints of each subpath Pi are boundary vertices and Pi is a shortest path in a region of R.

Both lemmas are straightforward generalizations of the corresponding lemmas in [7]. They imply that after b iterations, D[v] holds the distance in G from r to v for all v ∈ B. This shows the correctness of our algorithm. Line 7 can be executed in O(|VC|α(|VC|)) time using the technique of [7], with the distances dR(w, v) that have been precomputed in the previous stage for all v, w ∈ VC. It is important to note that the techniques of [7] only apply because we have assumed that all boundary vertices of R are cyclically ordered on its external face. Thus, each iteration of lines 4–7 takes O(bα(n)) time, giving a total running time for this stage of O(b^2 α(n)) = O(npα(n)). Recalling that p = log n/α(n), this bound is O(n log n), as desired.
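For concreteness, the following Python sketch is a direct, unoptimized realization of the pseudocode in Fig. 1 under the assumption that the intra-region boundary distances dR[w][v] have already been precomputed (illustrative encoding, not the paper's code). The inner double loop scans all pairs of boundary vertices of a region; the Monge-based technique of [7] built on [5] replaces that loop with an O(|VC|α(|VC|)) computation per cycle.

INF = float("inf")

def inter_region_boundary_distances(regions, boundary, r):
    # regions: list of (C, dR) where C lists the boundary vertices of a region
    #          and dR[w][v] is the precomputed distance in that region from w to v
    # boundary: the set B of all boundary vertices; r: the source boundary vertex
    b = len(boundary)
    e = {v: INF for v in boundary}
    e[r] = 0
    for _ in range(b):                       # b Bellman-Ford iterations
        e_prev = dict(e)                     # e_{j-1}
        for C, dR in regions:
            for v in C:
                best = min((e_prev[w] + dR[w][v] for w in C), default=INF)
                e[v] = min(e[v], best)
    return e

# Tiny example: two regions sharing boundary vertex "x"
d1 = {"r": {"r": 0, "x": 2}, "x": {"r": 2, "x": 0}}
d2 = {"x": {"x": 0, "y": 3}, "y": {"x": 3, "y": 0}}
print(inter_region_boundary_distances([(["r", "x"], d1), (["x", "y"], d2)],
                                       {"r", "x", "y"}, "r"))   # r:0, x:2, y:5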
4.3 Single-Source Inter-region Distances
In this step we need to compute, for each region R, the distances in G from r to each vertex of R. We apply a nearly identical construction to the one used in the corresponding step of [7]. Let R be a region. Let R′ be the graph obtained from R by adding a new vertex r′ and an edge from r′ to each boundary vertex of R, whose length is set to the distance in G from r to that boundary vertex. Note that dG(r, v) = dR′(r′, v) for all v ∈ VR, so it suffices to find distances in R′ from r′ to each vertex of VR. Let rR be the boundary vertex of R for which distances in R from rR to all vertices of R have been recursively computed. Define a price function φ for R′ as follows. Let BR be the set of boundary vertices of R and let D = max{dR(rR, b) − dG(r, b) | b ∈ BR}. Then, for all v ∈ VR′, φ(v) = dR(rR, v) if v ≠ r′, and φ(v) = D if v = r′.
Lemma 3. Function φ defined above is a feasible price function for R′.

Proof. Let e = (u, v) be an edge of R′. By construction, no edges enter r′, so v ≠ r′. If u ≠ r′ then φ(u) + l(e) − φ(v) = dR(rR, u) + l(u, v) − dR(rR, v) ≥ 0 by the triangle inequality, so assume that u = r′. Then v ∈ BR, so φ(u) + l(e) − φ(v) = D + dG(r, v) − dR(rR, v) ≥ 0 by definition of D. This shows the lemma.

Price function φ can be computed in time linear in the size of R′, and Lemma 3 implies that Dijkstra's algorithm can be applied to compute distances in R′ from r′ to all vertices of VR in O(|VR| log |VR|) time. Over all regions, this is O(n log n), as required. We omit the description of the last stage, where single-source distances are rerooted to source s, since it is identical to the last stage of the original algorithm. We have shown that all stages run in O(n log n) time, and it follows that the total running time of our algorithm is O(n log2 n/ log log n). It remains to deal with holes in regions.
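A small sketch of this construction (hypothetical graph encoding, not the paper's code): it builds R′ by adding the super-source r′, defines φ from the recursively computed distances, runs Dijkstra on the reduced costs wφ, and finally undoes the potentials. It assumes the supplied boundary distances dG(r, ·) and the values dR(rR, ·) are exact shortest-path distances, which is what makes the reduced costs non-negative by Lemma 3; under these assumptions the returned values equal dG(r, v) for every vertex v of the region.

import heapq

def sssp_in_region(R_edges, vertices, boundary_dist_from_r, dist_from_rR):
    # R_edges: dict u -> list of (v, length); vertices: all vertices of R
    # boundary_dist_from_r: d_G(r, b) for each boundary vertex b of R
    # dist_from_rR: d_R(r_R, v) for every v in R (the recursive solution)
    D = max(dist_from_rR[b] - boundary_dist_from_r[b] for b in boundary_dist_from_r)
    phi = {v: dist_from_rR[v] for v in vertices}
    phi["r'"] = D                                       # "r'" is a fresh label for the super-source
    edges = {u: list(adj) for u, adj in R_edges.items()}
    edges["r'"] = list(boundary_dist_from_r.items())
    # Dijkstra with reduced costs w_phi(u,v) = phi(u) + l(u,v) - phi(v) >= 0
    dist = {v: float("inf") for v in phi}
    dist["r'"] = 0
    pq = [(0, "r'")]
    while pq:
        d_u, u = heapq.heappop(pq)
        if d_u > dist[u]:
            continue
        for v, l in edges.get(u, []):
            nd = d_u + phi[u] + l - phi[v]
            if nd < dist[v]:
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    # undo the potentials: d_{R'}(r', v) = dist[v] - phi(r') + phi(v)
    return {v: dist[v] - phi["r'"] + phi[v] for v in vertices}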
4.4 Dealing with Holes
In Sections 4.1 and 4.2, we made the assumption that no region has holes. In this section we remove this restriction. This is the main technical contribution of this paper. As mentioned in Section 2, each region of R has at most a constant number, h, of holes.

Intra-region boundary distances: In Section 4.1 we used the fact that all boundary vertices of each region are on the external face to apply the multiple-source shortest path algorithm of [6]. Consider a region R with h holes. If we apply [6] to R we get distances from boundary vertices on the external face of R to all boundary vertices of R. This does not compute distances from boundary vertices belonging to the holes of R. Consider one of the holes of R. We can apply the algorithm of [6] with this hole considered as the external face to get the distances from the boundary vertices of this hole to all boundary vertices of R. Repeating this for all holes, we get distances in R between all pairs of boundary vertices of R in O(|VR| log |VR| + h|VR| log |VR|) = O(|VR| log |VR|) time. Thus, the time bound in Section 4.1 still holds when regions have holes.

Single-source inter-region boundary distances: It remains to show how to compute single-source inter-region boundary distances when regions have holes. Let C be the external face of region R. Let HR be the directed graph having the boundary vertices of R as vertices and having an edge (u, v) of length dR(u, v) between each pair of vertices u and v. As usual in this context, we say that we relax an edge if it is being considered by the algorithm as the next edge in the shortest path. Line 7 in Figure 1 relaxes all edges in HR having both endpoints on C. We need to relax all edges of HR. In the following, when we say that we relax edges of R, we really refer to the edges of HR.
To relax the edges of R, we consider each pair of cycles (C1, C2), where C1 and C2 are C or a hole, and we relax all edges starting in C1 and ending in C2. This covers all edges we need to relax. Since the number of choices of (C1, C2) is O(h^2) = O(1), it suffices to show that in a single iteration, the time to relax all edges starting in C1 and ending in C2 is O((|VC1| + |VC2|)α(|VC1| + |VC2|)), with O(|VR| log |VR|) preprocessing time. We may assume that C1 ≠ C2, since otherwise we can relax edges as described in Section 4.2. Before going into the details, let us give an intuitive and informal overview of our approach. We transform R in such a way that C1 is the external face of R and C2 is a hole of R. Let P be a simple path from some vertex r1 ∈ VC1 to some vertex r2 ∈ VC2. Let RP be the graph obtained by "cutting along P" (see Figure 2). Note that every shortest path in RP corresponds to a shortest path in R that does not cross P. We will show that relaxing all edges in HR from C1 to C2 with respect to distances in RP can be done efficiently. Unfortunately, relaxing edges w.r.t. RP does not suffice, since shortest paths in R that do cross P are not represented in RP. To overcome this obstacle we will identify two particular paths Pr and Pℓ such that for any u ∈ C1, v ∈ C2 there exists a shortest path in R that does not cross both Pr and Pℓ. Then, relaxing all edges between boundary vertices once in RPr and once in RPℓ suffices to compute shortest path distances in R. More specifically, let T be a shortest path tree in R from r1 to all vertices of C2. The rightmost and leftmost paths in T satisfy the above property (see Figure 3). We proceed with the formal description. In the following, we define graphs, obtained from R, that are required in our algorithm. It is assumed that these graphs are constructed in a preprocessing step; later, we bound the construction time. We transform R in such a way that C1 is the external face of R and C2 is a hole of R. We may assume that there is a shortest path in R between every ordered pair of vertices, say, by adding a pair of oppositely directed edges between each consecutive pair of vertices of Ci in some simple walk of Ci, i = 1, 2 (if an edge already exists, a new edge is not added). The lengths of the new edges are chosen sufficiently large so that shortest paths in R and their lengths do not change. Where appropriate, we regard R as some fixed planar embedding of that region. We say that an edge e = (u, v) with exactly one endpoint on path P emanates right (left) of P if (a) e is directed away from P, and (b) e is to the right (left) of P in the direction of P (see, e.g., [6] for a more precise definition). If e is directed towards P, then we say that e enters P from the right (left) if (v, u) emanates right (left) of P. We extend these definitions to paths and say, e.g., that a path Q emanates right of path P if there is an edge of Q that emanates right of P. For a simple path P from a vertex r1 ∈ VC1 to a vertex r2 ∈ VC2, take a copy RP of R and remove P and all edges incident to P in RP. Let ←E (resp. →E) be the set of edges that either emanate left (resp. right) of P or enter P from the left (resp. right). Add two copies, ←P and →P, of P to RP. Connect path ←P (resp. →P) to the rest of RP by attaching the edges of ←E (resp. →E) to the path; see Figure 2. If (u, v) ∈ ER, where (v, u) ∈ EP, we add (u, v) to both ←P and →P in RP.
Fig. 2. Region RP is obtained from R essentially by cutting open at P the “ring” bounded by C1 and C2
A simple, say counter-clockwise, walk u1, u2, . . . , u|C1|, u|C1|+1 of C1 in R, where u1 = u|C1|+1 = r1, corresponds to a simple path P1 = u′1, . . . , u′|C1|+1 in RP. In the following, we identify u′i with ui for i = 2, . . . , |C1|. The vertex r1 in R corresponds to two vertices in RP, namely u′1 and u′|C1|+1. We will identify both of these vertices with r1. Similarly, a simple, say clockwise, walk of C2 in R from r2 to r2 corresponds to a simple path P2 = v′1, . . . , v′|C2|+1 in RP. We make a similar identification between vertices of C2 and P2. In the following, when we say that we relax all edges in RP starting in vertices of C1 and ending in vertices of C2, we really refer to relaxing edges in HR with respect to the distances between the corresponding vertices of P1 and P2 in RP. More precisely, suppose we are in iteration j. Then relaxing all edges entering a vertex v ∈ VC2 in RP means updating ej[v] := min{ej−1[v], min_{u∈VC1}{ej−1[u] + dRP(u′, v′)}}.
It is implicit in this notation that if u = r1, we relax w.r.t. both u′1 and u′|C1|+1, and if v = r2, we relax w.r.t. both v′1 and v′|C2|+1. The fact that in RP both P1 and P2 belong to the external face implies (see Lemma 4.3 in [7]):

Lemma 4. Relaxing all edges from VC1 to VC2 in RP can be done in O(|VC1| + |VC2|) time in any iteration of Bellman-Ford.

As we have mentioned, relaxing edges between boundary vertices in RP does not suffice, since shortest paths in R that cross P are not represented in RP. Let T be a shortest path tree in R from r1 to all vertices of C2. A rightmost (leftmost) path P in T is a path such that no other path Q in T emanates right (left) of P. Let Pr and Pℓ be the rightmost and leftmost root-to-leaf simple paths in T, respectively; see Figure 3(a). Let vr ∈ C2 and vℓ ∈ C2 denote the leaves of Pr and Pℓ, respectively. In order to state the desired property of Pr and Pℓ, we now define what we mean when we say that path Q = q1, q2, q3, . . . crosses path P. Let out_0 be the smallest index such that q_{out_0} does not belong to P. We recursively define in_i
Fig. 3. (a): The rightmost root-to-leaf simple path Pr and the leftmost root-to-leaf simple path Pℓ in T. (b): In the proof of Lemma 5, if Q first crosses Pr from right to left and then crosses Pℓ from right to left, then there is a u-to-v shortest path in R that does not cross Pℓ.
to be the smallest index greater than out_{i−1} such that q_{in_i} belongs to P, and out_i to be the smallest index greater than in_i such that q_{out_i} does not belong to P. We say that Q crosses P from the right (left) with entry vertex vin and exit vertex vout if (a) vin = q_{in_i} and vout = q_{out_i − 1} for some i > 0, (b) the edge q_{in_i − 1} q_{in_i} enters P from the right (left), and (c) the edge q_{out_i − 1} q_{out_i} emanates left (right) of P.

Lemma 5. For any u ∈ VC1 and any v ∈ VC2, there is a simple shortest path in R from u to v which does not cross both Pr and Pℓ.

Proof. Let Q be a simple u-to-v shortest path in R which is minimal with respect to the total number of times it crosses Pr and Pℓ. If Q does not cross Pr or Pℓ, we are done, so assume it crosses both. Also assume that Q crosses Pr first; the case where Q crosses Pℓ first is symmetric. Let win and wout be the entry and exit vertices of the first crossing, see Figure 3(b). There are two cases:
– Q first crosses Pr from left to right. In this case Q must cross Pℓ at the same vertices. In fact, it must be that all root-to-leaf paths in T coincide until wout and that Q crosses them all. In particular, Q crosses the root-to-v path in T, which we denote by Tv. Since Tv does not cross Pr, the path Q[u, wout]Tv[wout, v] is a shortest u-to-v path in R that does not cross Pr.
– Q first crosses Pr from right to left. Consider the path S = Q[u, wout]Pr[wout, vr]. We claim that Q does not cross S. To see this, assume the contrary and let w′ denote the exit point corresponding to the crossing. Since Q is simple, w′ ∉ Q[u, wout]. So w′ ∈ Pr[wout, vr], but then Q[u, wout]Pr[wout, w′]Q[w′, v] is a shortest path from u to v in R that crosses Pr and Pℓ fewer times than Q. But this contradicts the minimality of Q. Since Q first crosses Pr from right to left and never crosses S, its first crossing with Pℓ must be right-to-left as well, see Figure 3(b). This implies that Q enters all root-to-leaf paths in T before (not strictly before) it enters Pℓ. In particular, Q enters Tv. Let x be the entry vertex. Then Q[u, x]Tv[x, v] is a u-to-v shortest path in R that does not cross Pℓ.
The algorithm: We can now describe our Bellman-Ford algorithm to relax all edges from vertices of C1 to vertices of C2. Pseudocode is shown in Figure 4. Assume that RPℓ and RPr and the distances between pairs of boundary vertices in these graphs have been precomputed. In each iteration j, we relax edges from vertices of VC1 to all v ∈ VC2 in RPℓ and in RPr (lines 9 and 10). Lemma 5 implies that this corresponds to relaxing all edges in R from vertices of VC1 to vertices of VC2. By the results in Section 4.2, this suffices to show the correctness of the algorithm. Lemma 4 shows that lines 9 and 10 can each be implemented to run in O(|VC1| + |VC2|) time. Thus, each iteration of lines 6–10 takes O((|VC1| + |VC2|)α(|VC1| + |VC2|)) time, as desired.
1. initialize vector ej[v] for j = 0, . . . , b and v ∈ B
2. ej[v] := ∞ for all v ∈ B and j = 0, . . . , b
3. e0[r] := 0
4. for j = 1, . . . , b
5.    for each region R ∈ R
6.       for each pair of cycles, C1 and C2, defining the boundary of R
7.          if C1 = C2, relax edges from C1 to C2 as in Section 4.2
8.          else (assume C1 is external and that dRPr and dRPℓ have been precomputed)
9.             ej[v] := min{ej[v], min_{w∈VC1}{ej−1[w] + dRPr(w′, v′)}} for all v ∈ VC2
10.            ej[v] := min{ej[v], min_{w∈VC1}{ej−1[w] + dRPℓ(w′, v′)}} for all v ∈ VC2
11. D[v] := eb[v] for all v ∈ B
Fig. 4. Pseudocode for the Bellman-Ford variant that handles regions with holes
It remains to show that RPr and RPℓ and the distances between boundary vertices in these graphs can be precomputed in O(|VR| log |VR|) time. A shortest path tree T in R with source r1 can be found in O(|VR| log |VR|) time with Dijkstra, using the recursively computed distances in R as a feasible price function φ. Given T, we can find its rightmost path in O(|VR|) time by starting at the root r1: when entering a vertex v using the edge uv, leave that vertex on the edge that comes after vu in counterclockwise order. Computing RPr given Pr also takes O(|VR|) time. We can next apply Klein's algorithm [6] to compute distances between all pairs of boundary vertices in RPr in O(|VR| log |VR|) time (here, we use the non-negative edge lengths in R defined by the reduced cost function induced by φ). We similarly compute Pℓ and the pairwise distances between boundary vertices in RPℓ. We can finally state our result.

Theorem 1. Given a planar directed graph G with real edge lengths and no negative cycles and given a source vertex s, we can find SSSP distances in G with source s in O(n log2 n/ log log n) time and linear space.
5 Concluding Remarks
We gave a linear space algorithm for single-source shortest path distances in a planar directed graph with arbitrary real edge lengths and no negative cycles. The running time is O(n log2 n/ log log n), which improves on the previous
bound by a factor of log log n. As corollaries, bipartite planar perfect matching, feasible flow, and feasible circulation in planar graphs can be solved in O(n log2 n/ log log n) time. The true complexity of the problem remains unsettled as there is a gap between our upper bound and the linear lower bound. Is O(n log n) time achievable?
References
1. Chambers, E.W., Erickson, J., Nayyeri, A.: Homology flows, cohomology cuts. In: Proc. 42nd Ann. ACM Symp. Theory Comput., pp. 273–282 (2009)
2. Erickson, J.: Private communication (2010)
3. Fakcharoenphol, J., Rao, S.: Planar graphs, negative weight edges, shortest paths, and near linear time. J. Comput. Syst. Sci. 72(5), 868–889 (2006)
4. Henzinger, M.R., Klein, P.N., Rao, S., Subramanian, S.: Faster shortest-path algorithms for planar graphs. J. Comput. Syst. Sci. 55(1), 3–23 (1997)
5. Klawe, M.M., Kleitman, D.J.: An almost linear time algorithm for generalized matrix searching. SIAM Journal on Discrete Math. 3(1), 81–97 (1990)
6. Klein, P.N.: Multiple-source shortest paths in planar graphs. In: Proceedings, 16th ACM-SIAM Symposium on Discrete Algorithms, pp. 146–155 (2005)
7. Klein, P.N., Mozes, S., Weimann, O.: Shortest paths in directed planar graphs with negative lengths: a linear-space O(n log2 n)-time algorithm. In: Proc. 19th Ann. ACM-SIAM Symp. Discrete Algorithms, pp. 236–245 (2009)
8. Lipton, R.J., Rose, D.J., Tarjan, R.E.: Generalized nested dissection. SIAM Journal on Numerical Analysis 16, 346–358 (1979)
9. Miller, G.L.: Finding small simple cycle separators for 2-connected planar graphs. J. Comput. Syst. Sci. 32, 265–279 (1986)
10. Aggarwal, A., Klawe, M., Moran, S., Shor, P.W., Wilber, R.: Geometric applications of a matrix searching algorithm. In: SCG 1986: Proceedings of the Second Annual Symposium on Computational Geometry, pp. 285–292 (1986)
When LP Is the Cure for Your Matching Woes: Improved Bounds for Stochastic Matchings (Extended Abstract)

Nikhil Bansal1, Anupam Gupta2, Jian Li3, Julián Mestre4, Viswanath Nagarajan1, and Atri Rudra5

1 IBM T.J. Watson Research Center, Yorktown Heights, NY, USA
2 Computer Science Department, Carnegie Mellon University, Pittsburgh, PA, USA
3 Computer Science Department, University of Maryland, College Park, MD, USA
4 Max-Planck-Institut für Informatik, Saarbrücken, Germany
5 Computer Science & Engg. Dept., University at Buffalo, SUNY, Buffalo, NY, USA
Abstract. Consider a random graph model where each possible edge e is present independently with some probability pe . We are given these numbers pe , and want to build a large/heavy matching in the randomly generated graph. However, the only way we can find out whether an edge is present or not is to query it, and if the edge is indeed present in the graph, we are forced to add it to our matching. Further, each vertex i is allowed to be queried at most ti times. How should we adaptively query the edges to maximize the expected weight of the matching? We consider several matching problems in this general framework (some of which arise in kidney exchanges and online dating, and others arise in modeling online advertisements); we give LP-rounding based constant-factor approximation algorithms for these problems. Our main results are: • We give a 5.75-approximation for weighted stochastic matching on general graphs, and a 5-approximation on bipartite graphs. This answers an open question from [Chen et al. ICALP 09]. • Combining our LP-rounding algorithm with the natural greedy algorithm, we give an improved 3.88-approximation for unweighted stochastic matching on general graphs and 3.51-approximation on bipartite graphs. • We introduce a generalization of the stochastic online matching problem [Feldman et al. FOCS 09] that also models preferenceuncertainty and timeouts of buyers, and give a constant factor approximation algorithm.
1 Introduction
Motivated by applications in kidney exchanges and online dating, Chen et al. [4] proposed the following stochastic matching problem: we want to find a maximum
Supported in part by NSF awards CCF-0448095 and CCF-0729022, and an Alfred P. Sloan Fellowship. Supported by NSF CAREER award CCF-0844796.
matching in a random graph G on n nodes, where each edge (i, j) ∈ [ n2 ] exists with probability pij , independently of the other edges. However, all we are given are the probability values {pij }. To find out whether the random graph G has the edge (i, j) or not, we have to try to add the edge (i, j) to our current matching (assuming that i and j are both unmatched in our current partial matching)—we call this “probing” edge (i, j). As a result of the probe, we also find out if (i, j) exists or not—and if the edge (i, j) indeed exists in the random graph G, it gets irrevocably added to the matching. Such policies make sense, e.g., for dating agencies, where the only way to find out if two people are actually compatible is to send them on a date; moreover, if they do turn out to be compatible, then it makes sense to match them to each other. Finally, to model the fact that there might be a limit on the number of unsuccessful dates a person might be willing to participate in, “timeouts” on vertices are also provided. More precisely, valid policies are allowed, for each vertex i, to only probe at most ti edges incident to i. Similar considerations arise in kidney exchanges, details of which appear in [4]. Chen et al. asked the question: how can we devise probing policies to maximize the expected cardinality (or weight) of the matching? They showed that the greedy algorithm that probes edges in decreasing order of pij (as long as their endpoints had not timed out) was a 4-approximation to the cardinality version of the stochastic matching problem. This greedy algorithm (and other simple greedy schemes) can be seen to be arbitrarily bad in the presence of weights, and they left open the question of obtaining good algorithms to maximize the expected weight of the matching produced. In addition to being a natural generalization, weights can be used as a proxy for revenue generated in matchmaking services. (The unweighted case can be thought of as maximizing the social welfare.) In this paper, we resolve the main open question from Chen et al.: Theorem 1. There is a 5.75-approximation algorithm for the weighted stochastic matching problem. For bipartite graphs, there is a 5-approximation algorithm. Our main idea is to use the knowledge of edge probabilities to solve a linear program where each edge e has a variable 0 ≤ ye ≤ 1 corresponding to the probability that a strategy probes e (over all possible realizations of the graph). This is similar to the approach for stochastic packing problems considered by Dean et al. [6,5]. We then give two different rounding procedures to attain the bounds claimed above. The first algorithm (§2.1) is very simple: it considers edges in a uniformly random order and probes each edge e with probability proportional to ye ; the analysis uses Markov’s inequality and a Chernoff-type bound (Lemma 2). The second algorithm (§2.2) is more nuanced: we use the y-values to define an auxiliary LP that is shown to be integral, and then probe only the edges chosen by this auxiliary LP; the analysis here requires more work and uses certain ideas from the generalized assignment problem [18]. This second rounding algorithm can also be extended to general graphs, but it results in a slightly worse approximation ratio of 7.5. However, this approach has the following two advantages:
• The probing strategy returned by the algorithm is in fact matching-probing [4], where we are given an additional parameter k and edges need to be probed in k rounds, each round being a matching. It is clear that this matching-probing model is more restrictive than the usual edge-probing model (with timeouts min{ti , k}) where one edge is probed at a time; this algorithm obtains a matching-probing strategy that is only a small constant factor worse than the optimal edge-probing strategy. Hence we also obtain the same constant approximation guarantee for weighted stochastic matching in the matching-probing model; previously only a logarithmic approximation in the unweighted case was known [4]. • We can combine this algorithm with the greedy algorithm [4] to obtain improved bounds for unweighted stochastic matching: Theorem 2. There is a 3.88-approximation algorithm for the unweighted stochastic matching problem; this improves to a 3.51-approximation algorithm in bipartite graphs. Apart from solving these open problems and giving improved ratios, our LPbased analysis turns out to be applicable in a wider context: Online Stochastic Matching Revisited. In a bipartite graph (A, B; E) of items i ∈ A and potential buyer types j ∈ B, pij denotes the probability that a buyer of type j will buy item i. A sequence of n buyers are to arrive online, where the type of each buyer is an i.i.d. sample from B according to some pre-specified distribution—when a buyer of type j appears, he can be shown a list L of up to tj as-yet-unsold items, and the buyer buys the first item onthe list according to the given probabilities p·,j . (Note that with probability i∈L (1 − pij ), the buyer leaves without buying anything.) What items should we show buyers when they arrive online, and in which order, to maximize the expected weight of the matching? Theorem 3. There is a 7.92-approximation algorithm for the above online stochastic matching problem. This question is an extension of similar online stochastic matching questions considered earlier in [7]—in that paper, wij , pij ∈ {0, 1} and tj = 1. Our model tries to capture the facts that buyers may have a limited attention span (using the timeouts), they might have uncertainties in their preferences (using edge probabilities), and that they might buy the first item they like rather than scanning the entire list. Other Extensions. The proof in [4] that the greedy algorithm for stochastic matching was a 4-approximation in the unweighted case was based on a somewhat delicate charging scheme involving the decision trees of the algorithm and the optimal solution. We show that the greedy algorithm, which was defined without reference to any LP, admits a simple LP-based analysis and is a 5 approximation. We also consider the model from [4] where one can probe as many as C edges in parallel, as long as these C edges form a matching; the goal is to maximize
the expected weight of the matched edges after k rounds of such probes. We improve on the min{k, C}-approximation offered in [4] (which only works for the unweighted version), and present a constant factor approximation for the weighted cardinality constrained multiple-round stochastic matching. We also extend our analysis to a much more general situation where we try to pack k-hyperedges with random sizes into a d-dimensional knapsack of a given size; this is just the stochastic knapsack problem of [5], but where we consider the √ situation where k d. For this setting of parameters, we improve on the d-approximation of [5] and present a 2k-approximation algorithm. Due to lack of space, the details on these extensions as well as the omitted proofs in this extended abstract can be found in a full version of the paper [1]. Related Work. As mentioned above, perhaps the work most directly related to this work is that on stochastic knapsack problems (Dean et al. [6,5]) and multiarmed bandits (see [9,10] and references therein). Also related is some recent work [2] on budget constrained auctions, which uses similar LP rounding ideas. In recent years stochastic optimization problems have drawn much attention from the theoretical computer science community where stochastic versions of several classical combinatorial optimization problems have been studied. Some general techniques have also been developed [11,19]. See [20] for a survey. The online bipartite matching problem was first studied in the seminal paper by Karp et al. [13] and an optimal 1 − 1/e competitive online algorithm was obtained. Katriel et al. [14] considered the two-stage stochastic min-cost matching problem. In their model, we are given in a first stage probabilistic information about the graph and the cost of the edges is low; in a second stage, the actual graph is revealed but the costs are higher. The original online stochastic matching problem was studied recently by Feldman et al. [7]. They gave a 0.67competitive algorithm, beating the optimal 1 − 1/e-competitiveness known for worst-case models [13,12,16,3,8]. Our model differs from that in having a bound on the number of items each incoming buyer sees, that each edge is only present with some probability, and that the buyer scans the list linearly (until she times out) and buys the first item she likes. Our problem is also related to the Adwords problem [16], which has applications to sponsored search auctions. The problem can be modeled as a bipartite matching problem as follows. We want to assign every vertex (a query word) on one side to a vertex (a bidder) on the other side. Each edge has a weight, and there is a budget on each bidder representing the upper bound on the total weight of edges that may be assigned to it. The objective is to maximize the total revenue. The stochastic version in which query words arrive according to some known probability distribution has also been studied [15]. Preliminaries. For any integer m ≥ 1, define [m] to be the set {1, . . . , m}. For a maximization problem, an α-approximation algorithm is one that computes a solution with expected objective value at least 1/α times the expected value of the optimal solution.
We must clarify here the notion of an optimal solution. In standard worst case analysis we would compare our solution against the optimal offline solution, e.g. the value of the maximum matching, where the offline knows all the edge instantiations in advance (i.e. which edge will appear when probed, and which will not). However, it can be easily verified that due to the presence of timeouts, this adversary is too strong [4]. Hence, for all problems in this paper we consider the setting where even the optimum does not know the exact instantiation of an edge until it is probed. This gives our algorithms a level playing field. The optimum thus corresponds to a “strategy” of probing the edges, which can be chosen from an exponentially large space of potentially adaptive strategies. We note that our algorithms in fact yield non-adaptive strategies for the corresponding problems, that are only constant factor worse than the adaptive optimum. This is similar to previous results on stochastic packing problems: knapsack (Dean et al. [6,5]) and multi-armed bandits (Guha-Munagala [9,10] and references therein).
2 Stochastic Matching
We consider the following stochastic matching problem. The input is an undirected graph G = (V, E) with a weight w_e and a probability value p_e on each edge e ∈ E. In addition, there is an integer value t_v for each vertex v ∈ V (called patience parameter). Initially, each vertex v ∈ V has patience t_v. At each step in the algorithm, any edge e = (u, v) such that u and v have positive remaining patience can be probed. Upon probing edge e, one of the following happens: (1) with probability p_e, vertices u and v get matched and are removed from the graph (along with all adjacent edges), or (2) with probability 1 − p_e, the edge e is removed and the remaining patience numbers of u and v get reduced by 1. An algorithm is an adaptive strategy for probing edges: its performance is measured by the expected weight of matched edges. The unweighted stochastic matching problem is the special case when all edge-weights are uniform.
Consider the following linear program: as usual, for any vertex v ∈ V, ∂(v) denotes the edges incident to v. Variable y_e denotes the probability that edge e = (u, v) gets probed in the adaptive strategy, and x_e = p_e · y_e denotes the probability that u and v get matched in the strategy. (This LP is similar to the LP used for general stochastic packing problems by Dean, Goemans and Vondrák [5].)

maximize ∑_{e∈E} w_e · x_e   (LP1)
subject to
  ∑_{e∈∂(v)} x_e ≤ 1     ∀v ∈ V   (1)
  ∑_{e∈∂(v)} y_e ≤ t_v   ∀v ∈ V   (2)
  x_e = p_e · y_e         ∀e ∈ E   (3)
  0 ≤ y_e ≤ 1             ∀e ∈ E   (4)
It can be shown that the LP above is a valid relaxation for the stochastic matching problem.
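For concreteness, (LP1) can be written down directly with any LP modelling library; the following sketch uses PuLP, with hypothetical inputs (graph, weights, probabilities, patience values) and constraint (3) substituted in directly. It is only an illustration of the LP above, not the authors' implementation.

    from pulp import LpProblem, LpMaximize, LpVariable, lpSum

    def solve_lp1(vertices, edges, w, p, t):
        """edges: list of frozensets {u, v}; w, p: dicts on edges; t: dict on vertices."""
        lp = LpProblem("LP1", LpMaximize)
        y = {e: LpVariable(f"y_{i}", lowBound=0, upBound=1) for i, e in enumerate(edges)}
        x = {e: p[e] * y[e] for e in edges}            # constraint (3), substituted directly
        lp += lpSum(w[e] * x[e] for e in edges)        # objective of (LP1)
        for v in vertices:
            incident = [e for e in edges if v in e]
            lp += lpSum(x[e] for e in incident) <= 1       # constraint (1)
            lp += lpSum(y[e] for e in incident) <= t[v]    # constraint (2)
        lp.solve()
        return {e: y[e].value() for e in edges}

The returned y-values are exactly what the rounding procedures of the next subsections consume.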
2.1 Weighted Stochastic Matching: General Graphs
Our algorithm first solves (LP1) to optimality and uses the optimal solution (x, y) to obtain a non-adaptive strategy achieving expected value Ω(1) · (w · x). Next, we present the algorithm. We note that the optimal solution (x, y) to the above LP gives an upper bound on any adaptive strategy. Let α ≥ 1 be a constant to be set later. The algorithm first fixes a uniformly random permutation π on edges E. It then inspects edges in the order of π, and probes only a subset of the edges. A vertex v ∈ V is said to have timed out if t_v edges incident to v have already been probed (i.e. its remaining patience reduces to 0); and vertex v is said to be matched if it has already been matched to another vertex. An edge (u, v) is called safe at the time it is considered if (A) neither u nor v is matched, and (B) neither u nor v has timed out. The algorithm is the following:
1. Pick a permutation π on edges E uniformly at random.
2. For each edge e in the ordering π, do:
   a. If e is safe then probe it with probability y_e/α, else do not probe it.
In the rest of this section, we prove that this algorithm achieves a 5.75-approximation. We begin with the following property:
Lemma 1. For any edge (u, v) ∈ E, when (u, v) is considered under π,
(a) the probability that vertex u has timed out is at most 1/(2α), and
(b) the probability that vertex u is matched is at most 1/(2α).
Proof: We begin with the proof of part (a). Let random variable U denote the number of probes incident to vertex u by the time edge (u, v) is considered in π. Then

  E[U] = ∑_{e∈∂(u)} Pr[edge e appears before (u, v) in π AND e is probed]
       ≤ ∑_{e∈∂(u)} Pr[edge e appears before (u, v) in π] · y_e/α
       = ∑_{e∈∂(u)} y_e/(2α) ≤ t_u/(2α).

The first inequality above follows from the fact that the probability that edge e is probed (conditioned on π) is at most y_e/α. The second equality follows since π is a u.a.r. permutation on E. The last inequality is by the LP constraint (2). The probability that vertex u has timed out when (u, v) is considered equals Pr[U ≥ t_u] ≤ E[U]/t_u ≤ 1/(2α), by the Markov inequality. This proves part (a). The proof of part (b) is identical (where we consider the event that an edge is matched instead of being probed and replace y_e and t_u by x_e and 1 respectively and use the LP constraint (1)) and is omitted. Now, a vertex u ∈ V is called low-timeout if t_u = 1, else u is called a high-timeout vertex if t_u ≥ 2. We next prove the following bound for high-timeout vertices that is stronger than the one from Lemma 1(a).
Lemma 2. Suppose α ≥ e. For a high-timeout vertex u ∈ V, and any edge f incident to u, the probability that u has timed out when f is considered in π is at most 2/(3α²).
Using this, we can analyze the probability that an edge is safe. (The proof is a case analysis on whether the end-points are low-timeout or high-timeout.)
Lemma 3. For α ≥ e, an edge f = (u, v) is safe with probability at least (1 − 1/α − 4/(3α²)) when f is considered under a random permutation π.
Theorem 1 follows from the definition of the algorithm, the LP formulation and using Lemma 3 (with α = 1 + √5).
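To make the probing strategy of this subsection concrete, here is a minimal Python sketch of one run of the algorithm, assuming the LP solution y (indexed by edges), the probabilities p, weights w, patience values t and the parameter α are already available; the function name simulate_probe and the data layout are ours, not the paper's.

    import random

    def simulate_probe(edges, y, p, w, t, alpha):
        """One run of the Section 2.1 strategy: random edge order,
        probe each safe edge e independently with probability y[e]/alpha."""
        order = list(edges)
        random.shuffle(order)                 # uniformly random permutation pi
        patience = dict(t)                    # remaining probes allowed per vertex
        matched = set()                       # vertices already matched
        value = 0.0
        for e in order:
            u, v = e
            safe = (u not in matched and v not in matched
                    and patience[u] > 0 and patience[v] > 0)
            if safe and random.random() < y[e] / alpha:
                patience[u] -= 1
                patience[v] -= 1
                if random.random() < p[e]:    # the probed edge turns out to exist
                    matched.update((u, v))
                    value += w[e]
        return value

Averaging value over many runs estimates the expected weight that Theorem 1 compares against the LP optimum w · x.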
2.2 Weighted Stochastic Matching: Bipartite Graphs
In this section, we obtain an improved bound for stochastic matching on bipartite graphs, via a different rounding procedure. In fact, the algorithm produces a matching-probing strategy whose expected value is a constant fraction of the optimal value of (LP1) (which was for edge-probing). A similar rounding algorithm also works for non-bipartite graphs, achieving a slightly weaker bound. Furthermore, we show in the next subsection that this LP-rounding algorithm can be combined with the greedy algorithm of [4] to get improved bounds for unweighted stochastic matching.
Algorithm round-color-probe. First, we find an optimal fractional solution (x, y) to (LP1) and round x to identify a set of interesting edges Ê. Then we use edge coloring to partition Ê into a small collection of matchings M_1, . . . , M_h, which are then probed in a random order. If we are only interested in edge-probing strategies, probing the edges in Ê in random order would suffice. We denote this edge-probing strategy by edge-probe. The key difference from the rounding algorithm of the previous subsection is in the choice of Ê, which we describe next.
Computing Ê. Our scheme is based on the rounding procedure of Shmoys and Tardos for the generalized assignment problem [18]. Let q* denote the values of the x-variables in an optimal solution to (LP1). For each vertex u, sort the edges incident on u in non-increasing values of their probabilities: e^u_1, e^u_2, . . . , e^u_{deg(u)}, and write a new LP:

maximize ∑_{e∈E} w_e p_e · z_e   (LP2)
subject to
  ∑_{e∈∂(u)} z_e ≤ t_u                             ∀u ∈ V   (5)
  ∑_{j=1}^{i} z_{e^u_j} ≤ ∑_{j=1}^{i} q*_{e^u_j}   ∀u ∈ V, i = 1, . . . , deg(u)   (6)
  z_e ∈ [0, 1]                                      ∀e ∈ E
Notice that q* is a feasible solution of this new program. Thus, the optimal value of (LP2) is at least that of (LP1). As shown in the next lemma, this new linear program has the nice property of being integral.
Lemma 4. All basic solutions of (LP2) are integral.
Let q̂ be an optimal basic (and therefore integral) solution of (LP2) and let Ê be the set of edges in the support of q̂, i.e., Ê = {e | q̂_e = 1}. Let h = max_{v∈V} deg_Ê(v).
Using König's Theorem [17, Ch. 20], we can decompose Ê into h matchings in polynomial time. Notice that each vertex u ∈ V will be matched in at most t_u of these matchings.
Analysis. We now analyze the performance guarantee. First, we notice that the downside of exchanging LPs is that the "expected number of successful probes" incident on a vertex can be larger than 1. However, the excess can be bounded by the next lemma.
Lemma 5. For any feasible (integral or fractional) solution q of (LP2) we have ∑_{e∈∂(u)} p_e q_e ≤ 1 + p_max ∀u ∈ V, where p_max = max_{e∈E} p_e.
It only remains to bound the probability that a given edge e = (u, v) ∈ Ê is in fact probed by our probing strategy. Consider a random permutation of the h matchings used by the edge coloring. Let π be the edge ordering induced by this permutation where edges within a matching are listed in some arbitrary but fixed order. Let us denote by B(e, π) the set of edges incident on u or v that appear before e in π. It is not hard to see that
  Pr[e was probed] ≥ E_π[∏_{f∈B(e,π)} (1 − p_f)];   (7)

here we assume that ∏_{f∈B(e,π)} (1 − p_f) = 1 when B(e, π) = ∅. Notice that in (7) we only care about the order of edges incident on u and v. Furthermore, the expectation does not range over all possible orderings of these edges, but only those that are consistent with some matching permutation. We call this type of restricted ordering random matching ordering and we denote it by π; similarly, we call an unrestricted ordering random edge ordering and we denote it by σ. Our plan to lower bound the probability of e being probed is to study first the expectation in (7) over random edge orderings and then to show that the expectation can only increase when restricted to range over random matching orderings. The following simple lemma is useful in several places.
Lemma 6. Let r and p_max be positive real values. Consider the problem of minimizing ∏_{i=1}^{t} (1 − p_i) subject to the constraints ∑_{i=1}^{t} p_i ≤ r and 0 ≤ p_i ≤ p_max for i = 1, . . . , t. Denote the minimum value by η(r, p_max). Then,

  η(r, p_max) = (1 − p_max)^{⌊r/p_max⌋} · (1 − (r − p_max ⌊r/p_max⌋)) ≥ (1 − p_max)^{r/p_max}.

Let ∂_Ê(e) be the set of edges in Ê \ {e} incident on either endpoint of e, and let σ be a random edge ordering.
Lemma 7. Let e be an edge in Ê and let p_max = max_{f∈Ê} p_f. Assume that ∑_{f∈∂_Ê(e)} p_f ≤ r. Then,

  E_σ[∏_{f∈B(e,σ)} (1 − p_f)] ≥ ∫_0^1 η(xr, x·p_max) dx.
Corollary 1. Let ρ(r, p_max) = ∫_0^1 η(xr, x·p_max) dx. For any r, p_max > 0, we have
1. ρ(r, p_max) is convex and decreasing in r.
2. ρ(r, p_max) ≥ 1/(r + p_max) · (1 − (1 − p_max)^{1 + r/p_max}) > 1/(r + p_max) · (1 − e^{−r}).
Lemma 8. Let e = (u, v) ∈ Ê. Let π be a random matching ordering and σ be a random edge ordering of the edges adjacent to u and v. Then
  E_π[∏_{f∈B(e,π)} (1 − p_f)] ≥ E_σ[∏_{f∈B(e,σ)} (1 − p_f)].

Everything is in place to derive a bound on the expected weight of the matching found by our algorithm.
Theorem 4. If G is bipartite then there is a 1/ρ(2 + 2p_max, p_max) approximation with ρ as in Corollary 1. The worst ratio is attained at p_max = 1 and is 5.
Proof: Recall that the optimal value of (LP2) is exactly ∑_{e∈Ê} w_e p_e. On the other hand, the expected size of the matching found by the algorithm is

  E[our solution] = ∑_{e∈Ê} w_e p_e Pr[e was probed]
                  ≥ ∑_{e∈Ê} w_e p_e E_π[∏_{f∈B(e,π)} (1 − p_f)]
                  ≥ ∑_{e∈Ê} w_e p_e E_σ[∏_{f∈B(e,σ)} (1 − p_f)]
                  ≥ ρ(2 + 2p_max, p_max) · value(q̂)
where the first inequality follows from (7) and the second from Lemma 8—here π is a random matching ordering and σ is a random edge ordering. The third inequality follows from Lemma 7 and setting r = 2 + 2pmax (using Lemma 5 on endpoints of e). Recall that the value of q is at least the value of q ∗ and this, in turn, is an upper bound on the cost of an optimal probing strategy. In the full version of the paper, we present the final version of round-color3 1 probe which obtains a a slightly weaker bound of k+1 k · 2 · ρ (2+2pmax ,pmax ) for the matching-probing model on general graphs, and edge-probe which is a 3 1 2 · ρ (2+2pmax ,pmax ) -approximation for the edge-probing model on general graphs. 2.3
2.3 Improved Bounds for Unweighted Stochastic Matching
In this subsection, we consider the unweighted stochastic matching problem, and show that our algorithm from §2.2 can be combined with the natural greedy algorithm [4] to obtain a better approximation guarantee than either algorithm can achieve on its own. Basically, our algorithm attains its worst ratio when pmax is large and greedy attains its worst ratio when pmax is small. Therefore, we can combine the two algorithms as follows: We probe edges using the greedy heuristic until the maximum edge probability in the remaining graph is less than a critical value pc , at which point we switch to algorithm edge-probe.
Theorem 5. Suppose we use the greedy rule until all remaining edges have probability less than pc , at which point we switch to an algorithm with approximation ratio γ(pc ). Then the approximation ratio of the overall scheme is α(pc ) = max {4 − pc , γ(pc )}. The proof follows by an induction on the size of the problem instance (and we use existing bounds on the optimum from Chen et al. [4]). The proof of Theorem 2 follows by setting the cut-off point pc = 0.49 for bipartite graphs and pc = 0.12 for general graphs and using the edge-probe algorithm. We remark that the approximation ratio of the algorithm in §2.1 does not depend on pmax , thus we can not combine that algorithm with the greedy algorithm to get a better bound.
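As a quick numerical illustration of how the cut-off points quoted above arise, the following sketch evaluates η (Lemma 6) and ρ (Corollary 1) by simple numerical integration and computes max{4 − p_c, γ(p_c)}. The use of γ_bip(p) = 1/ρ(2+2p, p) and γ_gen(p) = (3/2) · 1/ρ(2+2p, p) reflects our reading of Theorem 4 and §2.2, so the printed values are approximate sanity checks rather than exact constants from the paper.

    import math

    def eta(r, pmax):
        """Minimum of prod(1 - p_i) s.t. sum p_i <= r, 0 <= p_i <= pmax (Lemma 6)."""
        if r <= 0:
            return 1.0
        k = math.floor(r / pmax)
        return (1 - pmax) ** k * (1 - (r - pmax * k))

    def rho(r, pmax, steps=100000):
        """rho(r, pmax) = integral_0^1 eta(x*r, x*pmax) dx, via the midpoint rule."""
        return sum(eta((i + 0.5) / steps * r, (i + 0.5) / steps * pmax)
                   for i in range(steps)) / steps

    def gamma_bip(p):      # bipartite edge-probe ratio (Theorem 4)
        return 1.0 / rho(2 + 2 * p, p)

    def gamma_gen(p):      # general-graph edge-probe ratio (Section 2.2)
        return 1.5 / rho(2 + 2 * p, p)

    print(gamma_bip(1.0))                   # ~5, the worst case in Theorem 4
    print(max(4 - 0.49, gamma_bip(0.49)))   # ~3.51, bipartite bound of Theorem 2
    print(max(4 - 0.12, gamma_gen(0.12)))   # ~3.88, general-graph bound of Theorem 2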
3 Stochastic Online Matching (Revisited)
As mentioned in the introduction, the stochastic online matching problem is best imagined as selling a finite set of goods to buyers that arrive over time. The input to the problem consists of a bipartite graph G = (A, B, A × B), where A is the set of items that the seller has to offer, with exactly one copy of each item, and B is a set of buyer types/profiles. For each buyer type b ∈ B and item a ∈ A, p_ab denotes the probability that a buyer of type b will like item a, and w_ab denotes the revenue obtained if item a is sold to a buyer of type b. Each buyer of type b ∈ B also has a patience parameter t_b ∈ Z_+. There are n buyers arriving online, with e_b ∈ Z denoting the expected number of buyers of type b, with ∑_b e_b = n. Let D denote the induced probability distribution on B by defining Pr_D[b] = e_b/n. All the above information is given as input. The stochastic online model is the following: At each point in time, a buyer arrives, where her type b is an i.i.d. draw from D. The algorithm now shows her up to t_b distinct items one-by-one: the buyer likes each item a ∈ A shown to her independently with probability p_ab. The buyer purchases the first item that she is offered and likes; if she buys item a, the revenue accrued is w_ab. If she does not like any of the items shown, she leaves without buying. The objective is to maximize the expected revenue. We get the stochastic online matching problem of Feldman et al. [7] if we have w_ab = p_ab ∈ {0, 1}, in which case we need only consider t_b = 1. Their focus was on beating the 1 − 1/e-competitiveness known for worst-case models [13,12,16,3,8]; they gave a 0.67-competitive algorithm that works for the unweighted case whp; whereas our results are for the weighted case (with preference-uncertainty and timeouts), but only in expectation. By making copies of buyer types, we may assume that e_b = 1 for all b ∈ B, and D is uniform over B. For a particular run of the algorithm, let B̂ denote the actual set of buyers that arrive during that run. Let Ĝ = (A, B̂, A × B̂), where for each a ∈ A and b̂ ∈ B̂ (and suppose its type is some b ∈ B), the probability associated with edge (a, b̂) is p_ab and its weight is w_ab. Moreover, for each b̂ ∈ B̂ (with type, say, b ∈ B), set its patience parameter to t_b̂ = t_b. We will call this
the instance graph; the algorithm sees the vertices of B̂ in random order, and has to adaptively find a large matching in Ĝ. It now seems reasonable that the algorithm of §2.1 should work here. But the algorithm does not know Ĝ (the actual instantiation of the buyers) up front, it only knows G, and hence some more work is required to obtain an algorithm. Further, as was mentioned in the preliminaries, we use OPT to denote the optimal adaptive strategy (instead of the optimal offline matching in Ĝ as was done in [7]), and compare our algorithm's performance with this OPT.
The Linear Program. For a graph H = (A, C, A × C) with each edge (a, c) having a probability p_ac and weight w_ac, and vertices in C having patience parameters t_c, consider the LP(H):
maximize ∑_{a∈A, c∈C} w_ac · x_ac   (LP3)
subject to
  ∑_{c∈C} x_ac ≤ 1       ∀a ∈ A   (8)
  ∑_{a∈A} x_ac ≤ 1       ∀c ∈ C   (9)
  ∑_{a∈A} y_ac ≤ t_c     ∀c ∈ C   (10)
  x_ac = p_ac · y_ac     ∀a ∈ A, c ∈ C   (11)
  y_ac ∈ [0, 1]          ∀a ∈ A, c ∈ C   (12)
Note that this LP is very similar to the one in §2, but the vertices on the left do not have timeout values. Let LP(H) denote the optimal value of this LP.
The algorithm
1. Before buyers arrive, solve the LP on the expected graph G to get values y*.
2. When any buyer b̂ (of type b) arrives online:
   a. If b̂ is the first buyer of type b, consider the items a ∈ A in u.a.r. order. One by one, offer each unsold item a to b̂ independently with probability y*_ab/α; stop if either t_b offers are made or b̂ purchases any item.
   b. If b̂ is not the first arrival of type b, do not offer any items to b̂.
In the following, we prove that our algorithm achieves a constant approximation to the stochastic online matching problem. The first lemma shows that the expected value obtained by the best online adaptive algorithm is bounded above by E[LP(Ĝ)].
Lemma 9. The optimal value OPT of the given instance is at most E[LP(Ĝ)], where the expectation is over the random draws to create Ĝ.
The proof of the next lemma is similar to the analysis of Theorem 1 for weighted stochastic matching.
Lemma 10. Our expected revenue is at least (1 − 1/e) · (1/α) · (1 − 1/α − 2/(3α²)) · LP(G).
Note that we have shown that E[LP(Ĝ)] is an upper bound on OPT, and that we can get a constant fraction of LP(G). The final lemma relates these two, namely the LP-value of the expected graph G (computed in Step 1) to the expected LP-value of the instantiation Ĝ; the proof uses a simple but subtle duality-based argument.
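A compact Python sketch of steps 1–2 above, assuming the LP values y* for the expected graph have already been computed (for instance with any off-the-shelf LP solver) and are passed in as y_star[(a, b)]; the class name OnlineSeller and the data layout are ours, not the paper's.

    import random

    class OnlineSeller:
        """Sketch of the online policy: offer items only to the first arrival
        of each buyer type, each unsold item independently with prob. y*/alpha."""

        def __init__(self, items, y_star, p, w, t, alpha):
            self.items = list(items)
            self.y_star, self.p, self.w, self.t, self.alpha = y_star, p, w, t, alpha
            self.sold = set()
            self.seen_types = set()

        def serve(self, b):
            """Buyer of type b arrives; returns the revenue collected from her."""
            if b in self.seen_types:
                return 0.0                      # step 2b: ignore repeat arrivals
            self.seen_types.add(b)
            offers = 0
            order = [a for a in self.items if a not in self.sold]
            random.shuffle(order)               # consider unsold items in random order
            for a in order:
                if offers >= self.t[b]:
                    break
                if random.random() < self.y_star[(a, b)] / self.alpha:
                    offers += 1
                    if random.random() < self.p[(a, b)]:   # buyer likes the item
                        self.sold.add(a)
                        return self.w[(a, b)]
            return 0.0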
Lemma 11. LP(G) ≥ E[LP(Ĝ)].
Lemmas 9, 10 and 11, with α = 2/(√3 − 1), prove Theorem 3.
References 1. Bansal, N., Gupta, A., Li, J., Mestre, J., Nagarajan, V., Rudra, A.: When LP is the Cure for Your Matching Woes: Improved Bounds for Stochastic Matchings. arXiv (2010) 2. Bhattacharya, S., Goel, G., Gollapudi, S., Munagala, K.: Budget constrained auctions with heterogeneous items. In: STOC (2009), arxiv:abs/0907.4166 3. Birnbaum, B.E., Mathieu, C.: On-line bipartite matching made simple. SIGACT News 39(1), 80–87 (2008) 4. Chen, N., Immorlica, N., Karlin, A.R., Mahdian, M., Rudra, A.: Approximating matches made in heaven. In: ICALP, Part I, vol. (1), pp. 266–278 (2009) 5. Dean, B.C., Goemans, M.X., Vondr´ ak, J.: Adaptivity and approximation for stochastic packing problems. In: SODA, pp. 395–404 (2005) 6. Dean, B.C., Goemans, M.X., Vondr´ ak, J.: Approximating the stochastic knapsack problem: the benefit of adaptivity. Math. Oper. Res. 33(4), 945–964 (2008), http://dx.doi.org/10.1287/moor.1080.0330 7. Feldman, J., Mehta, A., Mirrokni, V.S., Muthukrishnan, S.: Online stochastic matching: Beating 1 − 1/e. In: FOCS (2009), Arxiv:abs/0905.4100 8. Goel, G., Mehta, A.: Online budgeted matching in random input models with applications to adwords. In: SODA, pp. 982–991 (2008) 9. Guha, S., Munagala, K.: Approximation algorithms for partial-information based stochastic control with markovian rewards. In: FOCS, pp. 483–493 (2007) 10. Guha, S., Munagala, K.: Multi-armed bandits with metric switching costs. In: Albers, S., Marchetti-Spaccamela, A., Matias, Y., Nikoletseas, S., Thomas, W. (eds.) ICALP 2009. LNCS, vol. 5556, pp. 496–507. Springer, Heidelberg (2009) 11. Gupta, A., P´ al, M., Ravi, R., Sinha, A.: Boosted sampling: approximation algorithms for stochastic optimization. In: STOC, pp. 417–426. ACM, New York (2004) 12. Kalyanasundaram, B., Pruhs, K.: Online weighted matching. J. Algorithms 14(3), 478–488 (1993) 13. Karp, R.M., Vazirani, U.V., Vazirani, V.V.: An optimal algorithm for on-line bipartite matching. In: STOC, pp. 352–358 (1990) 14. Katriel, I., Kenyon-Mathieu, C., Upfal, E.: Commitment under uncertainty: Twostage stochastic matching problems. Theoretical Computer Science 408(2-3), 213– 223 (2008) 15. Mahdian, M., Nazerzadeh, H., Saberi, A.: Allocating online advertisement space with unreliable estimates. In: EC. p. 294 (2007) 16. Mehta, A., Saberi, A., Vazirani, U.V., Vazirani, V.V.: Adwords and generalized on-line matching. In: FOCS, pp. 264–273 (2005) 17. Schrijver, A.: Combinatorial Optimization. Springer, Heidelberg (2003) ´ An approximation algorithm for the generalized assign18. Shmoys, D.B., Tardos, E.: ment problem. Math. Program. 62, 461–474 (1993) 19. Shmoys, D.B., Swamy, C.: An approximation scheme for stochastic linear programming and its application to stochastic integer programs. J. ACM 53(6), 1012 (2006) 20. Swamy, C., Shmoys, D.B.: Approximation algorithms for 2-stage stochastic optimization problems. ACM SIGACT News 37(1), 46 (2006)
Feasibility Analysis of Sporadic Real-Time Multiprocessor Task Systems
Vincenzo Bonifaci¹ and Alberto Marchetti-Spaccamela²
¹ Max-Planck-Institut für Informatik, Saarbrücken, Germany
[email protected]
² Sapienza Università di Roma, Rome, Italy
[email protected]
Abstract. We give the first algorithm for testing the feasibility of a system of sporadic real-time tasks on a set of identical processors, solving an open problem in the area of multiprocessor real-time scheduling [S. Baruah and K. Pruhs, Journal of Scheduling, 2009]. We also investigate the related notion of schedulability and a notion that we call online feasibility. Finally, we show that discrete-time schedules are as powerful as continuous-time schedules, which answers another open question in the above mentioned survey.
1 Introduction
As embedded microprocessors become more and more common, so does the need to design systems that are guaranteed to meet deadlines in applications that are safety critical, where missing a deadline might have severe consequences. In such a real-time system, several tasks may need to be executed on a multiprocessor platform and a scheduling policy needs to decide which tasks should be active in which intervals, so as to guarantee that all deadlines are met. The sporadic task model is a model of recurrent processes in hard real-time systems that has received great attention in the last years (see for example [1,5] and references therein). A sporadic task τi = (Ci , Di , Pi ) is characterized by a worst-case execution time Ci , a relative deadline Di , and a minimum interarrival separation Pi . Such a sporadic task generates a potentially infinite sequence of jobs: each job arrives at an unpredictable time, after the minimum separation Pi from the last job of the same task has elapsed; it has an execution requirement less than or equal to Ci and a deadline that occurs Di time units after its arrival time. A sporadic task system T is a collection of such sporadic tasks. Since the actual interarrival times can vary, there are infinitely many job sequences that can be generated by T . We are interested in designing algorithms that tell us when a given sporadic task system can be feasibly scheduled on a set of m ≥ 1 identical processors, where we allow any job to be interrupted and resumed later on another processor at no penalty. The problem can be formulated in several ways: – Feasibility: is it possible to feasibly schedule on m processors any job sequence that can be generated by T ?
– Online feasibility: is there an online algorithm that can feasibly schedule on m processors any job sequence that can be generated by T ? – Schedulability: does the given online algorithm Alg feasibly schedule on m processors any job sequence that can be generated by T ? Previous work. Most of the previous work in the context of sporadic real-time feasibility testing has focused on the case of a single processor [4]. The seminal paper by Liu and Layland [13] gave a best possible fixed priority algorithm for the case where deadlines equal periods (a fixed priority algorithm initially orders the tasks and then – at each time instant – schedules the available job with highest priority). It is also known that the Earliest Deadline First (EDF) algorithm, that schedules at any time the job with the earliest absolute deadline, is optimal in the sense that for any sequence of jobs it produces a valid schedule whenever a valid schedule exists [7]. Because EDF is an online algorithm, this implies that the three questions of feasibility, of online feasibility and of schedulability with respect to EDF are equivalent. It was known for some time that EDF-schedulability could be tested in exponential time and more precisely that the problem is in coNP [6]. The above results triggered a significant research effort within the scheduling community and many results have been proposed for specific algorithms and/or special cases; nonetheless, we remark that the feasibility problem for a single processor remained open for a long time and that only recently it has been proved coNP-complete [9]. The case of multiple processors is far from being as well understood as the single processor case. For starters, EDF is no longer optimal – it is not hard to construct feasible task systems for which EDF fails, as soon as m ≥ 2. Another important difference with the single processor case is that here clairvoyance does help the scheduling algorithm: there exists a task system that is feasible, but for which no online algorithm can produce a feasible schedule on every job sequence [10]. Thus, the notions of feasibility and on-line feasibility are distinct. On the positive side there are many results for special cases of the problem; however we remark that no optimal scheduling algorithm is known, and no test – of whatsoever complexity – is known that correctly decides the feasibility or the online feasibility of a task system. This holds also for constrained-deadline systems, in which deadlines do not exceed periods. The question of designing such a test has been listed as one of the main algorithmic problems in real-time scheduling [5]. Regarding schedulability, many schedulability tests are known for specific algorithms (see [1] and references therein), but, to the best of our knowledge, the only general test available is a test that requires exponential space [2]. Our results. We study the three above problems in the context of constraineddeadline multiprocessor systems and we provide new results for each of them. First, for the feasibility problem, we give the first correct test, thus solving [5, Open Problem 3] for constrained-deadline systems. The test has high complexity, but it has the interesting consequence that a job sequence that witnesses
the infeasibility of a task system T has without loss of generality length at most doubly exponential in the bitsize of T . Then we give the first correct test for the online feasibility problem. The test has exponential time complexity and is constructive: if a system is deemed online feasible, then an optimal online algorithm can be constructed (in the same time bound). Moreover, this optimal algorithm is without loss of generality memoryless: its decisions depend only on the current (finite) state and not on the entire history up to the decision point (see Section 2 for a formal definition). These results suggest that the two problems of feasibility and online feasibility might have different complexity. For the schedulability problem, we provide a general schedulability test showing that the schedulability of a system by any memoryless algorithm can be tested in polynomial space. This improves the result of Baker and Cirinei [2] (that provided an exponential space test for essentially the same class of algorithms). We finally consider the issue of discrete time schedules versus continuous time schedules. The above results are derived with the assumption that the time line is divided into indivisible time slots and preemptions can occur only at integral points, that is, the schedule has to be discrete. In a continuous schedule, time is not divided into discrete quanta and preemptions may occur at any time instant. We show that in a sporadic task system a discrete schedule exists whenever a continuous schedule does, thus showing that the discrete time assumption is without loss of generality. Such equivalence is known for periodic task systems (i.e. task system in which each job of a task is released exactly after the period Pi of the task has elapsed); however, the reduction does not extend to the sporadic case and the problem is cited among the important open problems in real-time scheduling [5, Open Problem 5]. All our results can be extended to the arbitrary-deadline case, at the expense of increasing some of the complexity bounds. In this extended abstract we restrict to the constrained-deadline case to simplify the exposition. Our main conceptual contribution is to show how the feasibility problem, the online feasibility problem and the schedulability problem can be cast as the problem of deciding the winner in certain two-player games of infinite duration played on a finite graph. We then use tools from the theory of games to decide who has a winning strategy. In particular, in the case of the feasibility problem we have a game of imperfect information where one of the players does not see the moves of the opponent, a so-called blindfold game [15]. This can be reformulated as a one-player (i.e., solitaire) game on an exponentially larger graph and then solved via a reachability algorithm. However, a technical complication is that in our model a job sequence and a schedule can both have infinite length, which when the system is feasible makes the construction of a feasible schedule challenging. We solve this complication by an application of K¨ onig’s Infinity Lemma from graph theory [8]. This is the technical ingredient that, roughly speaking, allows us to reduce the infinite job sequences with infinite length to
finite sequences and ultimately to obtain the equivalence between continuous and discrete schedules. The power of our new approach is its generality: it can be applied to all three problems and – surprisingly – it yields proofs that are not technically too complicated. We hope that this approach might be useful to answer similar questions for other real-time scheduling problems. Organization. The remainder of the paper is structured as follows. In Section 2 we formally define the model and set up some common notation. In Section 3 we describe and analyze our algorithms for feasibility and schedulability analysis. The equivalence between continuous and discrete schedules is treated in Section 4, and we finish with some concluding remarks in Section 5.
2 Definitions
Let N = {0, 1, 2, . . .} and [n] = {1, 2, . . . , n}. Given a set X, with \binom{X}{k} we denote the set of all k-subsets of X. Consider a task system T with n tasks, and m processors; without loss of generality, m ≤ n. Each task i is described by three parameters: a worst-case execution time C_i, a relative deadline D_i, and a minimum interarrival time P_i. We assume these parameters to be positive integers and that D_i ≤ P_i for all i. Let C := ×_{i=1}^n ([C_i] ∪ {0}), D := ×_{i=1}^n ([D_i] ∪ {0}), P := ×_{i=1}^n ([P_i] ∪ {0}), 0 := (0)_{i=1}^n. A job sequence is a function σ : N → C. The interpretation is that σ(t) = (σ_i(t))_{i=1}^n iff, for each i with σ_i(t) > 0, a new job from task i is released at time t with execution time σ_i(t), and no new job from task i is released if σ_i(t) = 0. A legal job sequence has the additional property that for any distinct t, t′ ∈ N and any i, if σ_i(t) > 0 and σ_i(t′) > 0, then |t − t′| ≥ P_i. A job sequence is finite if σ(t′) = 0 for all t′ greater or equal to some t ∈ N; in this case, we say that the sequence has length t. Let S := ∪_{k=0}^m \binom{[n]}{k}. A schedule is a function S : N → S; we interpret S(t) as the set of those k tasks (0 ≤ k ≤ m) that are being processed from time t to time t + 1.¹ We allow that S(t) contains a task i even when there is no pending job from i at time t; in that case there is no effect (this is formalized below). A backlog configuration is an element of B := C × D × P. At time t, a backlog configuration (c_i, d_i, p_i)_{i=1}^n ∈ B ² will denote the following:
– c_i ∈ [C_i] ∪ {0} is the remaining execution time of the unique pending job from task i, if any; if there is no pending job from task i, then c_i = 0;
– d_i ∈ [D_i] ∪ {0} is the remaining time to deadline of the unique pending job from task i, if any; if there is no pending job from task i, or the deadline has already passed, then d_i = 0;
¹ Since D_i ≤ P_i, there can be at most one pending job from task i. In the arbitrary-deadline case, this can be generalized by considering O(D_i/P_i) jobs.
² For notational convenience, here we have reordered the variables so as to have n-tuples of triples, instead of triples of n-tuples.
– p_i ∈ [P_i] ∪ {0} is the minimum remaining time to the next activation of task i, that is, the minimum p_i such that a new job from task i could be legally released at time t + p_i.
A configuration (c_i, d_i, p_i)_{i=1}^n ∈ B is a failure configuration if for some task i, c_i > 0 and d_i = 0.
Remark 1. The set B is finite, and its size is 2^{O(s)}, where s is the input size of T (number of bits in its binary encoding).
Given a legal job sequence σ and a schedule S, we define in the natural way an infinite sequence of backlog configurations ⟨σ, S⟩ := b_0 b_1 . . .. The initial configuration is b_0 := (0, 0, 0)_{i=1}^n, and given a backlog configuration b_t = (c_i, d_i, p_i)_{i=1}^n, its successor configuration b_{t+1} = (c′_i, d′_i, p′_i)_{i=1}^n is obtained as follows:
– if σ_i(t) > 0, then c′_i = σ_i(t) − x_i, where x_i is 1 if i ∈ S(t), and 0 otherwise; moreover, d′_i = D_i and p′_i = P_i;
– if σ_i(t) = 0, then c′_i = max(c_i − x_i, 0), where x_i is defined as above; moreover, d′_i = max(d_i − 1, 0) and p′_i = max(p_i − 1, 0).
We can now define a schedule S to be feasible for σ if no failure configuration appears in ⟨σ, S⟩. Finally, a task system T is feasible when every legal job sequence admits a feasible schedule. Stated otherwise, a task system is not feasible when there is a legal job sequence for which no schedule is feasible. We call such a job sequence a witness of infeasibility.
A deterministic online algorithm Alg is a sequence of functions: Alg_t : C^{t+1} → S,
t = 0, 1, 2, . . .
By applying an algorithm Alg to a job sequence σ, one obtains the schedule S defined by S(t) = Algt (σ(0), . . . , σ(t)). Then Alg feasibly schedules σ whenever S does. A memoryless algorithm is a single function Malg : B × C → S; it is a special case of an online algorithm in which the scheduling decisions at time t are based only on the current backlog configuration and on the tasks that have been activated at time t. Finally, a task system T is online feasible if there is a deterministic online algorithm Alg such that every legal job sequence from T is feasibly scheduled by Alg. We then say that Alg is optimal for T , and that T is schedulable by Alg. Online feasibility implies feasibility, but the converse fails: there is a task system that is feasible, but that does not admit any optimal online algorithm [10].
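The successor-configuration rule above is mechanical enough that a direct transcription may help; the following Python sketch advances one backlog configuration by one time step, given the released jobs σ(t) and the scheduled set S(t). Variable names mirror the definitions; the function names step and is_failure are ours.

    def step(config, released, scheduled, D, P):
        """One time step of the backlog-configuration dynamics.
        config: list of (c_i, d_i, p_i); released: sigma(t); scheduled: set S(t)."""
        nxt = []
        for i, (c, d, p) in enumerate(config):
            x = 1 if i in scheduled else 0
            if released[i] > 0:                       # a new job of task i arrives
                nxt.append((released[i] - x, D[i], P[i]))
            else:
                nxt.append((max(c - x, 0), max(d - 1, 0), max(p - 1, 0)))
        return nxt

    def is_failure(config):
        """A failure configuration has a pending job whose deadline has passed."""
        return any(c > 0 and d == 0 for (c, d, p) in config)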
3 Algorithms for Feasibility and Schedulability Analysis
3.1 Feasibility
We first model the process of scheduling a task system as a game between two players over infinitely many rounds. At round t = 0, 1, 2, . . ., the first player (the “adversary”) selects a certain set of tasks to be activated. Then the second player
(acting as the scheduler) selects a set of tasks to be processed, and so on. The game is won by the first player if a failure configuration is eventually reached. In order to capture the definition of feasibility correctly, the game must proceed so that the adversary has no information at all on the moves of the scheduler; in other words, the job sequence must be constructed obliviously from the schedule. This is because if the task system is infeasible, then a single witness job sequence must fail all possible schedules simultaneously. Models of such games, where the first player has no information on the moves of the opponent, have been studied in the literature under the name of blindfold games [15]. One approach to solving these games is to construct a larger one-player game, in which each state encodes all positions that are compatible with at least one sequence of moves for the second player. Given a task system T, we build a bipartite graph G+(T) = (V_1, V_2, A). Nodes in V_1 (V_2) will correspond to decision points for the adversary (scheduler). A node in V_1 or V_2 will encode mainly two kinds of information: (1) the counters that determine time to deadlines and next earliest arrival dates; and (2) the set of all plausible remaining execution times of the scheduler. Let B_+ := D × P × 2^C. Each of V_1 and V_2 is a copy of B_+, so each node of V_1 is identified by a distinct element from B_+, and similarly for V_2. We now specify the arcs of G+(T). Consider an arbitrary node v_1 ∈ V_1 and let ((d_i, p_i)_{i=1}^n, Q) be its identifier, where Q ∈ 2^C. Its successors in G+(T) are all nodes v_2 = ((d′_i, p′_i)_{i=1}^n, Q′) ∈ V_2 for which there is a tuple (k_i)_{i=1}^n ∈ C such that:
1. p_i = 0 for all i ∈ supp(k), where supp(k) = {i : k_i > 0} (this ensures that each task in k can be activated);
2. p′_i = P_i, and d′_i = D_i for all i ∈ supp(k) (activated jobs cannot be reactivated before P_i time units);
3. p′_i = p_i and d′_i = d_i for all i ∉ supp(k) (counters of other tasks are not affected);
4. each (c′_i)_{i=1}^n ∈ Q′ is obtained from some (c_i)_{i=1}^n ∈ Q in the following way: c′_i = k_i for all i ∈ supp(k), and c′_i = c_i for all i ∉ supp(k) (in every possible scheduler state, the remaining execution time of each activated job is set to the one prescribed by k);
5. Q′ contains all (c′_i)_{i=1}^n that satisfy Condition 4.
Now consider an arbitrary node v_2 ∈ V_2, say v_2 = ((d_i, p_i)_{i=1}^n, Q). The only successor of v_2 will be the unique node v′_1 = ((d′_i, p′_i)_{i=1}^n, Q′) ∈ V_1 such that:
1. d′_i = max(d_i − 1, 0), p′_i = max(p_i − 1, 0) for all i ∈ [n] (this models a "clock-tick");
2. for each (c′_i)_{i=1}^n ∈ Q′, there are an element (c_i)_{i=1}^n ∈ Q and some S ∈ S such that c′_i = max(c_i − 1, 0) for all i ∈ S and c′_i = c_i for all i ∉ S (each new possible state of the scheduler is obtained from some old state after the processing of at most m tasks);
3. for each (c′_i)_{i=1}^n ∈ Q′, one has, for all i, c′_i = 0 whenever d′_i = 0 (this ensures that the resulting scheduler state is valid);
4. Q′ contains all (c′_i)_{i=1}^n that satisfy Condition 2 and Condition 3.
That is, the only successor to v_2 is obtained by applying all possible decisions by the scheduler and then taking Q′ to be the set of all possible (valid) resulting scheduler states. Notice that because we only keep the valid states (Condition 3), the set Q′ might be empty. In this case we say that the node v′_1 is a failure state; it corresponds to some deadline having been violated. Also notice that any legal job sequence σ induces an alternating walk in the bipartite graph G+(T) whose (2t + 1)-th arc corresponds to σ(t). Finally, the initial state is the node v_0 ∈ V_1 for which d_i = p_i = 0 for all i, and for which the only possible scheduler state is 0. Note that, given two nodes of G+(T), it is easy to check their adjacency, in time polynomial in |B_+|.
Definition 1. For a legal job sequence σ, the set of possible valid scheduler states at time t is the set of all (c_i)_{i=1}^n ∈ C for which there exists a schedule S such that (i) ⟨σ, S⟩ = b_0 b_1 b_2 . . . with no configuration b_0, b_1, . . . , b_t being a failure configuration, and (ii) the first component of b_t is (c_i)_{i=1}^n. We denote this set by valid(σ, t).
Lemma 1. Let t ≥ 0 and let ((d_i, p_i)_{i=1}^n, Q) ∈ V_1 be the node reached by following for 2t steps the walk induced by σ in the graph G+(T). Then Q = valid(σ, t).
Proof (sketch). By induction on t. When t = 0 the claim is true because the only possible scheduler state is the 0 state. For larger t it follows from how we defined
the successor relation in G+(T) (see in particular the definition of Q′).
Lemma 2. Task system T is infeasible if and only if, in the graph G+(T), some failure state is reachable from the initial state.
Proof (sketch). If there is a path from the initial state to some failure state, by Lemma 1 we obtain a legal job sequence σ that witnesses that for some t, valid(σ, t) = ∅, that is, there is no valid scheduler state for σ at time t; so there cannot be any feasible schedule for σ. Conversely, if no failure state is reachable from the initial state, for any legal job sequence σ one has valid(σ, t) ≠ ∅ for all t by Lemma 1. This immediately implies that no finite job sequence can be a witness of infeasibility. We also need to exclude witnesses of infinite length. To do this, we apply König's Infinity Lemma [8, Lemma 8.1.2]. Consider the infinite walk induced by σ in G+(T) and the corresponding infinite sequence of nonempty sets of possible valid scheduler states Q_0, Q_1, . . ., where Q_t := valid(σ, t). Each scheduler state q ∈ Q_t (t ≥ 1) has been derived from some scheduler state q′ ∈ Q_{t−1} and so q and q′ can be thought of as neighbors in an infinite graph on the disjoint union of Q_0, Q_1, . . .. Then König's Lemma implies that there is a sequence q_0 q_1 . . . (with q_t ∈ Q_t) such that for all t ≥ 1, q_t is a neighbor of q_{t−1}. This sequence defines a feasible schedule for σ.
Theorem 1. The feasibility problem for a sporadic constrained-deadline task system T can be solved in time 2^{2^{O(s)}}, where s is the input size of T. Moreover, if T is infeasible, there is a witness job sequence of length at most 2^{2^{O(s)}}.
Algorithm 1. Algorithm for the feasibility problem
  for all failure states v_f ∈ V_1 do
    if Reach(v_0, v_f, 2|B_+|) then
      return infeasible
    end if
  end for
  return feasible
Algorithm 2. Reach(x, y, k)
  if k = 0 then
    return true if x = y, false if x ≠ y
  end if
  if k = 1 then
    return true if (x, y) ∈ A, false otherwise
  end if
  for all z ∈ V_1 ∪ V_2 do
    if Reach(x, z, ⌈k/2⌉) and Reach(z, y, ⌊k/2⌋) then
      return true
    end if
  end for
  return false
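For readers who prefer an executable form, here is a Python rendering of Reach (Savitch-style midpoint recursion); nodes and adjacent stand for a node enumeration and an adjacency test for G+(T), which a real implementation would compute on the fly as described above, so this is only a sketch of the control structure.

    def reach(x, y, k, nodes, adjacent):
        """Is there a path of length at most k from x to y?
        nodes: iterable of all graph nodes; adjacent(u, v): arc test."""
        if k == 0:
            return x == y
        if k == 1:
            # also accept the trivial length-0 path when x == y
            return x == y or adjacent(x, y)
        half_up = (k + 1) // 2          # ceil(k/2)
        half_down = k // 2              # floor(k/2)
        return any(reach(x, z, half_up, nodes, adjacent) and
                   reach(z, y, half_down, nodes, adjacent)
                   for z in nodes)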
Proof. The graph has 2|B_+| = 2^{2^{O(s)}} nodes, so the first part follows from Lemma 2 and the existence of linear-time algorithms for the reachability problem. The second part follows similarly from the fact that the witness sequence σ can be defined by taking σ(t) as the set of task activations corresponding to the (2t+1)-th arc on the path from the initial state to the reachable failure state.
We can in fact improve exponentially the amount of memory needed for the computation. The idea is to compute the state graph as needed, instead of storing it explicitly (Algorithm 1). We enumerate all failure nodes; for each failure node v_f, we check whether there exists a path from v_0 to v_f in G+(T) by calling the subroutine Reach (Algorithm 2). This subroutine checks recursively whether there is a path from x to y of length at most k by trying all possible midpoints z. Some readers might recognize that Reach is nothing but Savitch's reachability algorithm [16]. This yields the following improvement.
Theorem 2. The feasibility problem for a sporadic constrained-deadline task system T can be solved in space 2^{O(s)}, where s is the input size of T.
Proof. Any activation of Algorithm 2 needs O(log |B_+|) = 2^{O(s)} space, and the depth of the recursion is at most O(log |B_+|) = 2^{O(s)}.
3.2 Online Feasibility
An issue with the notion of feasibility as studied in the previous section is that, when the task system turns out to be feasible, one is still left clueless as to how
the system should be scheduled. The definition of online feasibility (see Section 2) addresses this issue. It could be argued from a system design point of view that one should focus on the notion of online feasibility, rather than on the notion of feasibility. In this section we discuss an algorithm for testing online feasibility. The idea is again to interpret the process as a game between the environment and the scheduler, with the difference that now the adversary can observe the current state of the scheduler (the remaining execution times). In other words, the game is no longer a blindfold game but a perfect-information game. We construct a graph G(T ) = (V1 , V2 , A) where V1 = B and V2 = B × C. The nodes in V1 are decision points for the adversary (with different outgoing arcs corresponding to different tasks being activated) and the nodes in V2 are decision points for the scheduler (different outgoing arcs corresponding to different sets of tasks being scheduled). There is an arc (v1 , v2 ) ∈ A if v2 = (v1 , k) for some tuple k = (ki )ni=1 ∈ C of jobs that can legally be released when the backlog configuration is v1 ; notice the crucial fact that whether some tuple k can legally be released can be decided on the basis of the backlog configuration v1 alone. There is an arc (v2 , v1 ) if v2 = (v1 , k) and v1 is a backlog configuration that can be obtained by v1 after scheduling some subset of tasks; again this depends only on v1 and k. In the interest of space we omit the formal description of the adjacency relation. The game is now played with the adversary starting first in state b0 = (0, 0, 0)ni=1 . The two players take turns alternately and move from state to state by picking an outgoing arc from each state. The adversary wins if it can reach a state in V1 corresponding to a failure configuration. The scheduler wins if it can prolong play indefinitely while never incurring in a failure configuration. Lemma 3. The first player has a winning strategy in the above game on G(T ) if and only if T is not online feasible. Moreover, if T is online feasible, then it admits an optimal memoryless deterministic online algorithm. Proof (sketch). If the first player has a winning strategy s, then for any online algorithm Alg, the walk in G(T ) obtained when player 1 plays according to s and player 2 plays according to Alg, ends up in a failure configuration. But then the job sequence corresponding to this walk in the graph (given by the odd-numbered arcs in the walk) defines a legal job sequence that is not feasibly scheduled by Alg. If, on the other hand, the first player does not have a winning strategy, from the theory of two-player perfect-information games it is known (see for example [11,14]) that the second player has a winning strategy and that this can be assumed to be, without loss of generality, a deterministic strategy that depends only on the current state in V2 (a so-called memoryless, or positional, strategy). Hence, for each node in V2 it is possible to remove all but one outgoing arc so that in the remaining graph no failure configuration is reachable from b0 . The set of remaining arcs that leave V2 implicitly defines a function from V2 = B × C to S, that is, a memoryless online algorithm, which feasibly schedules every legal job sequence of T .
Theorem 3. The online feasibility problem for a sporadic constrained-deadline task system $T$ can be solved in time $2^{O(s)}$, where $s$ is the input size of $T$. If $T$ is online feasible, an optimal memoryless deterministic online algorithm for $T$ can be constructed within the same time bound.

Proof. We first construct $G(T)$ in time polynomial in $|B \times (B \times C)| = 2^{O(s)}$. We then apply the following inductive algorithm to compute the set of nodes $W \subseteq V_1 \cup V_2$ from which player 1 can force a win; its correctness has been proved before (see for example [11, Proposition 2.18]). Define $W_i$ as the set of nodes from which player 1 can force a win in at most $i$ moves, so that $W = \bigcup_{i \ge 0} W_i$. The set $W_0$ is simply the set of all failure configurations. The set $W_{i+1}$ is computed from $W_i$ as follows:

$$
W_{i+1} = W_i \cup \{v_1 \in V_1 : (v_1, w) \in A \text{ for some } w \in W_i\} \cup \{v_2 \in V_2 : w \in W_i \text{ for all } (v_2, w) \in A\}.
$$

At any iteration either $W_{i+1} = W_i$ (and then $W = W_i$) or $W_{i+1} \setminus W_i$ contains at least one node. Since there are $2^{O(s)}$ nodes, this means that $W = W_k$ for some $k = 2^{O(s)}$. Because every iteration can be carried out in time $2^{O(s)}$, the set $W$ can be computed within time $(2^{O(s)})^2 = 2^{O(s)}$. By Lemma 3, $T$ is online feasible if and only if $b_0 \notin W$.

The second part of the claim follows from the second part of Lemma 3 and from the fact that a memoryless winning strategy for player 2 (that is, an optimal memoryless scheduler) can be obtained by selecting, for each node $v_2 \in V_2 \setminus W$, any outgoing arc whose endpoint is not in $W$.
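The computation of $W$ in the proof is the standard attractor construction for reachability games. The following Python sketch illustrates it on an explicitly given graph, together with the extraction of a memoryless scheduler; the representation of nodes, arcs and failure configurations as plain Python collections is an assumption of the sketch, not part of the paper's notation.

```python
def winning_set(v1_nodes, v2_nodes, arcs, failures):
    """Attractor computation: nodes from which the adversary (player 1)
    can force a failure configuration. arcs maps each node to its successors."""
    W = set(failures)                      # W_0: all failure configurations
    changed = True
    while changed:                         # iterate until W_{i+1} = W_i
        changed = False
        for v in v1_nodes:
            # adversary node: winning if SOME successor is already winning
            if v not in W and any(w in W for w in arcs[v]):
                W.add(v); changed = True
        for v in v2_nodes:
            # scheduler node: lost if ALL successors are winning for player 1
            if v not in W and all(w in W for w in arcs[v]):
                W.add(v); changed = True
    return W

def memoryless_scheduler(v2_nodes, arcs, W):
    """For each scheduler node outside W, fix one arc that avoids W;
    this defines a memoryless online algorithm as in Theorem 3."""
    return {v: next(w for w in arcs[v] if w not in W)
            for v in v2_nodes if v not in W}
```

The naive fixed-point loop above runs in time proportional to the number of nodes times the number of arcs, consistent with the $(2^{O(s)})^2 = 2^{O(s)}$ bound used in the proof.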
3.3 Schedulability
In the case of the schedulability problem, we observe that the construction of Section 3.1 can be applied in a simplified form, because for every node of the graph there is now at most one possible valid scheduler state, which can be determined by querying the scheduling algorithm. This implies that the size of the graph reduces to $2|B| = 2^{O(s)}$. By applying the same approach as in Section 3.1, we obtain the following.

Theorem 4. The schedulability problem for a sporadic constrained-deadline task system $T$ can be solved in time $2^{O(s^2)}$ and space $O(s^2)$, where $s$ is the input size of $T$.

Proof (sketch). Any activation of Algorithm 2 needs $O(\log |B|) = O(s)$ space, and the depth of the recursion is at most $O(\log |B|) = O(s)$, so in total a space of $O(s^2)$ is enough. The running time is given by the recurrence $T(k) = 2^{O(s)} \cdot 2 \cdot T(k/2) + O(1)$, which yields $T(k) = 2^{O(s \log k)}$ and finally $T(2|B|) = 2^{O(s^2)}$.
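Algorithm 2 is not reproduced in this excerpt; judging from the recurrence and the space bound, it performs a Savitch-style divide-and-conquer reachability test. The sketch below illustrates that recursion under this assumption only; `nodes` and the callback `successors` are hypothetical stand-ins for the implicitly represented graph.

```python
def reachable(u, v, k, nodes, successors):
    """Savitch-style test: is there a walk of length at most k from u to v?

    Only O(log k) stack frames are live at any time, each of constant size,
    which is the source of the polynomial space bound in Theorem 4.
    successors(x) is an assumed callback returning the nodes adjacent to x.
    """
    if k <= 1:
        return u == v or (k == 1 and v in successors(u))
    half = (k + 1) // 2
    # guess a midpoint w of the walk and recurse on both halves
    return any(reachable(u, w, half, nodes, successors) and
               reachable(w, v, k - half, nodes, successors)
               for w in nodes)
```

Under this reading, schedulability amounts to checking that no failure configuration is reachable from the initial configuration within $2|B|$ steps.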
4 Continuous versus Discrete Schedules
In this section we show that, under our assumption of integer arrival times for the jobs, the feasibility of a sporadic task system does not depend on whether one is considering discrete or continuous schedules.
Let $J$ be the (possibly infinite) set of jobs generated by a job sequence $\sigma$. In this section we do not need to keep track of which tasks generate the jobs, so it will be convenient to use a somewhat different notation. Let $r_j$, $c_j$, $d_j$ denote respectively the release date, execution time and absolute deadline of a job $j$; so job $j$ has to receive $c_j$ units of processing in the interval $[r_j, d_j]$. A continuous schedule for $J$ on $m$ processors is a function $w : J \times \mathbb{N} \to \mathbb{R}_+$ such that:

1. $w(j, t) \le 1$ for all $j \in J$ and $t \in \mathbb{N}$;
2. $\sum_{j \in J} w(j, t) \le m$ for all $t \in \mathbb{N}$.

Quantity $w(j, t)$ is to be interpreted as the total amount of processing dedicated to job $j$ during the interval $[t, t+1]$. Thus, the first condition forbids the parallel execution of a job on more than one processor; the second condition limits the total volume processed in the interval by the $m$ processors. The continuous schedule $w$ is feasible for $\sigma$ if it additionally satisfies

3. $\sum_{r_j \le t < d_j} w(j, t) \ge c_j$ for all $j \in J$,

that is, every job receives its required amount of processing between its release date and its deadline.
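As a concrete reading of conditions 1–3, the following sketch checks them for a finite set of jobs over a finite time horizon; the representation of $w$ as a nested mapping, the function name, and the assumption that the horizon covers all deadlines are illustrative choices, not part of the paper.

```python
def is_feasible_continuous_schedule(w, jobs, m, horizon):
    """Check conditions 1-3 of a continuous schedule over a finite horizon.

    w[j][t] -- processing assigned to job j during the interval [t, t+1]
    jobs    -- dict mapping job id j to (r_j, c_j, d_j), with d_j <= horizon
    m       -- number of identical processors
    """
    for t in range(horizon):
        if any(w[j][t] > 1 for j in jobs):        # condition 1: no parallelism
            return False
        if sum(w[j][t] for j in jobs) > m:        # condition 2: machine capacity
            return False
    for j, (r, c, d) in jobs.items():             # condition 3: demand met in window
        if sum(w[j][t] for t in range(r, d)) < c:
            return False
    return True
```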
5 Conclusion
We have given upper bounds on the complexity of testing the feasibility, the online feasibility, and the schedulability of a sporadic task system on a set of identical processors. It is known that these three problems are at least coNP-hard [9]; however, no sharper hardness result is known. A natural question is to characterize more precisely the complexity of these problems, either by improving on the algorithms given here, or by showing that these problems are hard for some complexity class above coNP.

Acknowledgments. We thank Sanjoy K. Baruah, Nicole Megow and Sebastian Stiller for useful discussions.
References

1. Baker, T.P., Baruah, S.K.: Schedulability analysis of multiprocessor sporadic task systems. In: Son, S.H., Lee, I., Leung, J.Y.T. (eds.) Handbook of Real-Time and Embedded Systems, ch. 3. CRC Press, Boca Raton (2007)
2. Baker, T.P., Cirinei, M.: Brute-force determination of multiprocessor schedulability for sets of sporadic hard-deadline tasks. In: Tovar, E., Tsigas, P., Fouchal, H. (eds.) OPODIS 2007. LNCS, vol. 4878, pp. 62–75. Springer, Heidelberg (2007)
3. Baruah, S.K., Cohen, N.K., Plaxton, C.G., Varvel, D.A.: Proportionate progress: A notion of fairness in resource allocation. Algorithmica 15(6), 600–625 (1996), http://www.springerlink.com/content/xy72c891q4pn6e0b/
4. Baruah, S.K., Goossens, J.: Scheduling real-time tasks: Algorithms and complexity. In: Leung, J.Y.T. (ed.) Handbook of Scheduling: Algorithms, Models, and Performance Analysis, ch. 28. CRC Press, Boca Raton (2003)
5. Baruah, S.K., Pruhs, K.: Open problems in real-time scheduling. Journal of Scheduling (2009), doi:10.1007/s10951-009-0137-5
6. Baruah, S.K., Rosier, L.E., Howell, R.R.: Algorithms and complexity concerning the preemptive scheduling of periodic, real-time tasks on one processor. Real-Time Systems 2(4), 301–324 (1990)
7. Dertouzos, M.L.: Control robotics: The procedural control of physical processes. In: Proc. IFIP Congress, pp. 807–813 (1974)
8. Diestel, R.: Graph Theory, 3rd edn. Springer, Heidelberg (2005), http://diestel-graph-theory.com/Contents3.pdf
9. Eisenbrand, F., Rothvoß, T.: EDF-schedulability of synchronous periodic task systems is coNP-hard. In: Proc. 21st Symp. on Discrete Algorithms, pp. 1029–1034 (2010), http://www.siam.org/proceedings/soda/2010/SODA10_083_eisenbrandf.pdf
10. Fisher, N., Goossens, J., Baruah, S.K.: Optimal online multiprocessor scheduling of sporadic real-time tasks is impossible. Tech. Rep. 09-009, University of North Carolina at Chapel Hill, Department of Computer Science, Chapel Hill, NC (2009)
11. Grädel, E., Thomas, W., Wilke, T. (eds.): Automata, Logics, and Infinite Games. LNCS, vol. 2500. Springer, Heidelberg (2002)
12. Horn, W.A.: Some simple scheduling algorithms. Naval Research Logistics Quarterly 21, 177–185 (1974)
13. Liu, C.L., Layland, J.W.: Scheduling algorithms for multiprogramming in a hard-real-time environment. Journal of the ACM 20(1), 46–61 (1973)
14. McNaughton, R.: Infinite games played on finite graphs. Annals of Pure and Applied Logic 65(2), 149–184 (1993)
15. Reif, J.H.: The complexity of two-player games of incomplete information. Journal of Computer and System Sciences 29(2), 274–301 (1984)
16. Savitch, W.J.: Relationships between nondeterministic and deterministic tape complexities. Journal of Computer and System Sciences 4(2), 177–192 (1970)
Author Index
Abraham, Ittai II-87
Adler, Isolde I-97
Agarwal, Pankaj K. I-487
Ajwani, Deepak II-75
Anshelevich, Elliot I-158
Arya, Sunil I-374
Azar, Yossi II-51
Bae, Sang Won I-500
Bahmani, Bahman I-170
Bansal, Nikhil II-218
Bartal, Yair II-87
Bar-Yehuda, Reuven I-255
Bast, Hannah I-290
Belazzougui, Djamal I-427
Bendich, Paul I-1
Berenbrink, Petra I-134
Bhawalkar, Kshipra II-17
Boldi, Paolo I-427
Bonifaci, Vincenzo II-230
Brodal, Gerth Stølting II-171
Buchbinder, Niv II-51
Buchin, Kevin I-110, I-463, II-63
Buchin, Maike I-463, II-63
Büsing, Christina I-350
Carlsson, Erik I-290
Chan, Sze-Hang I-23
Chao, Kun-Mao I-415
Chechik, Shiri I-84
Chen, Kuan-Yu I-415
Chen, Ning II-147
Chrobak, Marek I-195
Cicalese, Ferdinando I-207
Colin de Verdière, Éric II-100
Cormode, Graham I-231
Crescenzi, Pierluigi I-302
Culpepper, J. Shane II-194
Cygan, Marek I-72
Czumaj, Artur I-410
da Fonseca, Guilherme D. I-374
Das, Abhimanyu I-219
Davoodi, Pooya II-171
Dickerson, Matthew T. I-362
Dorn, Frederic I-97
Edelsbrunner, Herbert I-1
Eigenwillig, Arno I-290
Eisenbrand, Friedrich I-11
Elkin, Michael I-48
Elmasry, Amr II-183
Elsässer, Robert I-134
Eppstein, David I-362
Feldman, Jon I-182
Ferragina, Paolo II-1
Fomin, Fedor V. I-97
Friedler, Sorelle A. I-386
Fujita, Ryo II-123
Fujiwara, Hiroshi I-439
Gairing, Martin II-17
Galli, Laura I-338
Gaspers, Serge I-267
Geisberger, Robert I-290
Ghosh, Arpita II-147
Gibson, Matt I-243
Giesen, Joachim I-524
Goodrich, Michael T. I-362
Grandoni, Fabrizio I-536
Grossi, Roberto I-302
Gupta, Anupam II-218
Gutin, Gregory I-326
Hajiaghayi, MohammadTaghi I-314
Halperin, Dan I-398
Harks, Tobias II-29
Harrelson, Chris I-290
Hellweg, Frank I-60
Hemmer, Michael I-398
Henzinger, Monika I-182
Hermelin, Danny I-255
Hoefer, Martin I-158, II-29
Iersel, Leo van I-326
Imbrenda, Claudio I-302
Iwama, Kazuo II-135
Jacobs, Tobias I-439
Jaggi, Martin I-524
Jain, Kamal II-51
Jansson, Jesper I-573
Kamiński, Marcin I-122
Kang, Ross J. II-112
Kaplan, Haim I-475
Kapralov, Michael I-170
Kärkkäinen, Juha I-451
Katz, Matthew J. I-475
Kempe, David I-219
Kesavan, Karthikeyan I-11
Khandekar, Rohit I-314
Klimm, Max II-29
Kobayashi, Yusuke II-123
Korman, Matias I-500
Kortsarz, Guy I-314
Korula, Nitish I-182
Kowalik, Lukasz I-72
Kreveld, Marc van I-463
Krysta, Piotr II-39
Lampis, Michael I-549
Lam, Tak-Wah I-23
Langberg, Michael I-84
Lanzi, Leonardo I-302
Laue, Sören I-524
Lee, Lap-Kei I-23
Li, Jian II-218
Löffler, Maarten I-463
Lorenz, Ulf I-512
Makino, Kazuhisa I-195, II-123
Marchetti-Spaccamela, Alberto II-230
Marino, Andrea I-302
Martin, Alexander I-512
Mattikalli, Raju S. I-11
Maue, Jens I-350
Mestre, Julián II-218
Mirrokni, Vahab S. I-182
Mitzenmacher, Michael I-231
Miyazaki, Shuichi II-135
Mnich, Matthias I-267, I-326, II-112
Morgenstern, Gila I-475
Morozov, Dmitriy I-1
Mount, David M. I-374, I-386
Mozes, Shay II-206
Mucha, Marcin I-72
Müller, Tobias II-112
Nagarajan, Viswanath II-218
Navarro, Gonzalo II-194
Neiman, Ofer II-87
Niemeier, Martin I-11
Nordsieck, Arnold W. I-11
Okamoto, Yoshio I-500
Osipov, Vitaly I-278
Pagh, Rasmus I-427
Patel, Amit I-1
Patel, Viresh I-561
Paulusma, Daniël I-122
Peleg, David I-84
Phillips, Jeff M. I-487
Pilipczuk, Marcin I-72
Pirwani, Imran A. I-243
Pountourakis, Emmanouil I-146
Puglisi, Simon J. I-451, II-194
Radhakrishnan, Jaikumar II-159
Rao, S. Srinivasa II-171
Rawitz, Dror I-255
Raychev, Veselin I-290
Roditty, Liam I-84
Roughgarden, Tim II-17
Rudra, Atri II-218
Sanders, Peter I-278
Sankowski, Piotr I-72
Sauerwald, Thomas I-134
Sau, Ignasi I-97
Schmidt, Melanie I-60
Schulman, Leonard J. II-87
Schulz, André I-110, II-63
Setter, Ophir I-398
Shah, Smit II-159
Shannigrahi, Saswata II-159
Sharir, Micha I-475
Silveira, Rodrigo I. I-463
Sitchinava, Nodari II-75
Skopalik, Alexander II-29
Skutella, Martin I-11, I-36
Sohler, Christian I-60
Solomon, Shay I-48
Stein, Cliff I-182
Stiller, Sebastian I-338
Sung, Wing-Kin I-573
Thaler, Justin I-231
Thilikos, Dimitrios M. I-97, I-122
Turpin, Andrew II-194
Vaccaro, Ugo I-207
Ventre, Carmine II-39
Verschae, José I-11, I-36
Vidali, Angelina I-146
Viger, Fabien I-290
Vigna, Sebastiano I-427
Wenk, Carola I-463
Wiese, Andreas I-11
Wiratma, Lionov I-463
Woeginger, Gerhard J. I-195
Wolf, Jan I-512
Wulff-Nilsen, Christian II-206
Xu, Haifeng I-195
Yanagisawa, Hiroki II-135
Yeo, Anders I-326
Yu, Hai I-487
Zeh, Norbert II-75
Zenklusen, Rico I-536