Springer Tracts in Advanced Robotics Volume 51 Editors: Bruno Siciliano · Oussama Khatib · Frans Groen
Dezhen Song
Sharing a Vision Systems and Algorithms for Collaboratively-Teleoperated Robotic Cameras
Professor Bruno Siciliano, Dipartimento di Informatica e Sistemistica, Università di Napoli Federico II, Via Claudio 21, 80125 Napoli, Italy, E-mail: [email protected]
Professor Oussama Khatib, Robotics Laboratory, Department of Computer Science, Stanford University, Stanford, CA 94305-9010, USA, E-mail: [email protected]
Professor Frans Groen, Department of Computer Science, Universiteit van Amsterdam, Kruislaan 403, 1098 SJ Amsterdam, The Netherlands, E-mail: [email protected]
Author: Dezhen Song, 311C HRBB, Department of Computer Science, TAMU 3112, Texas A&M University, College Station, TX 77843, USA, E-mail: [email protected]
ISBN 978-3-540-88064-6
e-ISBN 978-3-540-88065-3
DOI 10.1007/978-3-540-88065-3 Springer Tracts in Advanced Robotics
ISSN 1610-7438
Library of Congress Control Number: 2008935492
© 2009 Springer-Verlag Berlin Heidelberg
This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilm or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer. Violations are liable for prosecution under the German Copyright Law. The use of general descriptive names, registered names, trademarks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. Typeset & Cover Design: Scientific Publishing Services Pvt. Ltd., Chennai, India. Printed in acid-free paper 543210 springer.com
Editorial Advisory Board
Herman Bruyninckx, KU Leuven, Belgium Raja Chatila, LAAS, France Henrik Christensen, Georgia Institute of Technology, USA Peter Corke, CSIRO, Australia Paolo Dario, Scuola Superiore Sant’Anna Pisa, Italy Rüdiger Dillmann, Universität Karlsruhe, Germany Ken Goldberg, UC Berkeley, USA John Hollerbach, University of Utah, USA Makoto Kaneko, Osaka University, Japan Lydia Kavraki, Rice University, USA Sukhan Lee, Sungkyunkwan University, Korea Tim Salcudean, University of British Columbia, Canada Sebastian Thrun, Stanford University, USA Yangsheng Xu, Chinese University of Hong Kong, PRC Shin’ichi Yuta, Tsukuba University, Japan
STAR (Springer Tracts in Advanced Robotics) has been promoted under the auspices of EURON (European Robotics Research Network).
To my parents and to Ye
Foreword
By the dawn of the new millennium, robotics has undergone a major transformation in scope and dimensions. This expansion has been brought about by the maturity of the field and the advances in its related technologies. From a largely dominant industrial focus, robotics has been rapidly expanding into the challenges of the human world. The new generation of robots is expected to safely and dependably co-habit with humans in homes, workplaces, and communities, providing support in services, entertainment, education, healthcare, manufacturing, and assistance.
Beyond its impact on physical robots, the body of knowledge robotics has produced is revealing a much wider range of applications reaching across diverse research areas and scientific disciplines, such as biomechanics, haptics, neurosciences, virtual simulation, animation, surgery, and sensor networks, among others. In return, the challenges of the new emerging areas are proving an abundant source of stimulation and insights for the field of robotics. It is indeed at the intersection of disciplines that the most striking advances happen.
The goal of the series of Springer Tracts in Advanced Robotics (STAR) is to bring, in a timely fashion, the latest advances and developments in robotics on the basis of their significance and quality. It is our hope that the wider dissemination of research developments will stimulate more exchanges and collaborations among the research community and contribute to further advancement of this rapidly growing field.
The monograph written by Dezhen Song focuses on a robotic camera simultaneously controlled by multiple online users via the Internet. A challenging match between collaboratively tele-operated robotic cameras and the needs of natural environment observation is sought, which greatly extends the domain of online robots in both application and technology development directions, including building construction site monitoring, public space surveillance, and distance education. New solutions are proposed which demonstrate the enormous potential of Internet-based infrastructures for immediate success in the market.
This book is the outcome of the author's doctoral work and the research conducted in the early stage of his academic career. Effectively organized in three parts after an introduction to the subject matter, the volume constitutes a very fine addition to the STAR series!
Naples, Italy June 2008
Bruno Siciliano STAR Editor
Preface
The work presented in this book summarizes my thesis work and the research conducted early in my academic career (2000-2007). In 1996, I was fascinated by the tremendous potential of the Internet and co-founded an Internet-based video surveillance company. The simple integration of the communication infrastructure and an array of visual sensors was an immediate success in the market. While my partners were drawn deeper into the excitement of entrepreneurship, I became more and more interested in integrating human users with sensors via the Internet. I realized that the Internet is not only a vast network of wires and routers but also a vast social network that never existed before. A deep understanding of the topic would require a serious academic approach, which inspired me to pursue an academic career. In 2000, I was fortunate enough to be admitted into the graduate program at the University of California, Berkeley. What was more exciting was being able to work with Prof. Ken Goldberg, who pioneered research in Internet-based tele-operation. Ken's research interest at the time was to investigate how to allow a group of online users to collaboratively control a single robot, which was a great fit for my personal interests. Ken's group had attempted strategies such as averaging. This method proved to be viable and effective in noise reduction when controlling a 4 degree-of-freedom (DOF) industrial arm. However, the simple aggregation strategy does not work when a robotic camera replaces the industrial robot. This quickly became my Ph.D. thesis topic, and the rest of my time at Berkeley became the most enjoyable time of my life. In 2004, I was again fortunate that the Computer Science Department of Texas A&M University offered me an assistant professor position. I was excited about the opportunity to continue my research and to work with well-known robotics researchers such as Prof. Richard Volz and Prof. Nancy Amato. The support from the department and the university allowed me to quickly establish my own research group. The taste of academic freedom was just unbelievable. As a young researcher, I was hungry and eager to prove myself. We extended the research from laboratory settings to challenges in real-world applications such as building
construction site monitoring, public space surveillance, distance education, and natural environment observation. Finding the match between collaboratively tele-operated robotic cameras and the needs of natural environment observation was the most exciting moment of the past few years. For many years, researchers in the online robots field have struggled to find new applications other than health care, education, and surveillance. The new match greatly extends the domain of online robots in both application and technology development directions. On the one hand, we provide new solutions to address the primary challenges in natural environment observation. On the other hand, the challenging natural environment presents us with many new research problems to explore. This book summarizes our recent development and hopefully provides insights for researchers in similar domains.
Acknowledgement
There is absolutely no way that I could have accomplished this work all by myself. This has been a joint adventure with my collaborators over the past seven years. First of all, I would like to thank Prof. Ken Goldberg. Ken is the best thesis advisor and the greatest mentor. Ken's inspirational thinking, unique scientific/artistic style in technical writing and presentation, and tremendous support essentially converted me from a naive graduate student into an independent academic researcher. I also would like to thank my thesis committee members, Prof. A. Frank van der Stappen, Prof. Ilan Adler, Prof. Satish Rao, and Prof. Andrew Lim, for their great input and help in improving my thesis. I would like to specially thank Frank. Frank is not just a thesis committee member but another thesis advisor that I was fortunate to work with. With a keen sense of the geometric aspects of the collaborative camera control problem, Frank's input guided me to investigate the collaborative control problem from this new perspective, which yielded fruitful results. I am grateful to Prof. Richard Volz and Prof. Nancy Amato for their help early in my career. They are great mentors and always remind me of things that a new faculty member tends to forget. I appreciate the great opportunity and support provided by Prof. Valerie Taylor and the Department of Computer Science at Texas A&M University. I am grateful to Prof. Ricardo Gutierrez-Osuna, Prof. Wei Zhao, and colleagues in the department for their help and input. With projects stretching over seven years, I am sure that I may forget to include some important names; I want to apologize for this ahead of time. For the Tele-Actor project, I would like to thank E. Paulos and D. Pescovitz for valuable input on initial experiments; J. Donath and her students at the MIT Media Lab; E. Paulos, C. Myers, and M. Fogarty for helmet design; the other students who have participated in the project: A. Levandoski, J. McGonigal, W. Zheng, A. Ho, M. McKelvin, I. Song, B. Chen, R. Aust, M. Metz, M. Faldu, V. Colburn, Y. Khor, J. Himmelstein, J. Wang, J. Shih, K. Gopal Gopalakrishnan, F. Hsu,
J. McGonigal, and M. Last; and research colleagues R. Bajcsy, J. Canny, P. Wright, G. Niemeyer, A. Pashkevich, R. Luo, R. Siegwart, S. Montgomery, B. Laurel, V. Lumelsky, N. Johnson, R. Arkin, L. Leifer, P. Pirjanian, D. Greenbaum, K. Pister, C. Cox, D. Plautz, and T. Shlain. For the ShareCam/Co-Opticon/CONE projects, I would like to thank J. Yi, Y. Xu, N. Qin, C. Kim, H. Wang, S. Har-Peled, V. Koltun, and A. Pashkevich for their contributions. Thanks to H. Lee and J. Liu for their insightful discussions. Thanks also to J. Schiff, T. Schmidt, A. Dahl, and other students in the Automation Sciences Lab at UC Berkeley. Special thanks are given to B. Full, E. Brewer, and C. Newmark for providing camera installation sites, and to Q. Hu and Z. Goodwin for implementing part of the system. For the ACONE project, we are grateful to J. Fitzpatrick and R. Rohrbaugh of the Cornell Ornithology Lab, and to J. Liu, for providing input on the system design and for their technical and logistical help in field experiments. Thanks to George Lee and Junku Yuh of NSF for their support. Thanks to H. Lee, B. Green, and H. Wang for their contributions to the Networked Robot Lab at Texas A&M University. Thanks to Bryce Lee and Jeff Tang from the UC Berkeley Automation Sciences Lab. Thanks to Richard Crosset, Billy Culbreath, and the U.S. Fish and Wildlife Service. Thanks to Robert Johnson and the Arkansas Electric Cooperatives Corp. Thanks to Patsy Arnett and the Brinkley Convention Center for their support, and to Mary Harlin and her family for providing space for our wireless antenna in Arkansas. For the Cone-Welder project, we thank J. Rappole from the Smithsonian Institution, S. Glasscock and T. Blankenship from the Welder Wildlife Foundation, and K. Goldberg, B. Lee, Y. Zhang, Y. Xu, C. Kim, and H. Wang for their contributions, insightful input, and feedback. This work was supported in part by the National Science Foundation under Grants IIS-0534848 and IIS-0643298, in part by Panasonic Research, in part by the Microsoft Corporation, in part by the Intel Corporation, in part by the University of California (UC) Berkeley's Center for Information Technology Research in the Interest of Society (CITRIS), and in part by Texas A&M University startup funds.
College Station, TX, USA
Dezhen Song June 2008
Contents

1 Introduction  1
1.1 Tele-operation  1
1.2 Networked Telerobots  2
1.3 Web Cameras  4
1.4 Collaborative Telerobot  5
1.4.1 What Is a Collaborative Telerobot?  5
1.4.2 History of Collaborative Telerobots  6
1.4.3 Characteristics of CT Systems  8
1.5 Organization of the Book  9

Part I: Systems

2 The Co-Opticon System: Interface, System Architecture, and Implementation of a Collaboratively Controlled Robotic Webcam  13
2.1 Introduction  13
2.2 The Co-Opticon Interface  14
2.3 Hardware  14
2.4 Software  15
2.5 Frame Selection Models  17
2.5.1 Memoryless Frame Selection Model  17
2.5.2 Temporal Frame Selection Model  17
2.5.3 Experiments  19
2.5.4 Field Tests  20
2.6 Conclusions  21

3 The Tele-Actor System: Collaborative Teleoperation Using Networked Spatial Dynamic Voting  23
3.1 Introduction  23
3.2 System Architecture  25
3.3 SDV User Interface  25
3.4 Hardware and Software  29
3.4.1 Version 3.0 (July 18, 2001)  29
3.4.2 Version 9.0 (July 25, 2002)  30
3.5 Problem Definition and Algorithms  33
3.5.1 Problem Definition  33
3.5.2 Ensemble Consensus Region  34
3.5.3 Collaboration Metric  34
3.6 Online Field Tests  36
3.7 Conclusions  36
3.8 Closure  37

Part II: Algorithms

4 Exact Frame Selection Algorithms for Agile Satellites  41
4.1 Introduction  41
4.2 Related Work  43
4.3 Problem Definition  44
4.3.1 Inputs and Assumptions  44
4.3.2 Reward Metric  48
4.3.3 Properties of the CRR Reward Metric  50
4.3.4 Comparison with “Similarity Metrics”  51
4.4 Algorithms  53
4.4.1 Base Vertices and Plateau Vertices  53
4.4.2 Algorithms for Discrete Resolutions  56
4.4.3 Algorithms for Continuous Resolution  57
4.5 Results  66
4.6 Conclusions and Future Work  69

5 Approximate and Distributed Algorithms for a Collaboratively Controlled Robotic Webcam  71
5.1 Introduction  71
5.2 Problem Definition  72
5.3 Algorithms  74
5.3.1 Algorithm I: Exhaustive Lattice Search  74
5.3.2 Algorithm II: BnB Implementation  81
5.4 Experiments  84
5.4.1 Numerical Experiments  84
5.4.2 Field Tests  86
5.5 Conclusions  87

6 An Approximation Algorithm for the Least Overlapping p-Frame Problem with Non-Partial Coverage for Networked Robotic Cameras  89
6.1 Introduction  89
6.2 Related Work  90
6.3 Problem Definition  91
6.3.1 Input and Output  91
6.3.2 Nomenclature  91
6.3.3 Assumptions  92
6.3.4 Satisfaction Metric  92
6.3.5 Problem Formulation  93
6.4 Algorithm  94
6.4.1 Construction of Lattice  94
6.4.2 Virtual Non-Overlapping Condition  95
6.4.3 Approximation Solution Bound  96
6.4.4 Lattice-Based Algorithm  97
6.5 Experimental Results  99
6.6 Conclusion and Future Work  102

7 Unsupervised Scoring for Scalable Internet-Based Collaborative Teleoperation  103
7.1 Introduction  103
7.2 Related Work  105
7.3 Problem Definition  106
7.3.1 Inputs and Assumptions  106
7.3.2 Unsupervised Scoring Metric  108
7.4 Distributed Algorithm  109
7.5 The “Tele-Twister” Application  110
7.6 Conclusion and Future Work  112
7.7 Closure  113

Part III: Deployment

8 Projection Invariants for Pan-Tilt-Zoom Robotic Cameras  117
8.1 Introduction  117
8.2 Related Work  118
8.3 Problem Definition  120
8.3.1 Assumptions  120
8.3.2 Nomenclature  120
8.3.3 Perspective Projection and Re-projection for a PTZ Camera  121
8.3.4 Definition of Projection Invariants  122
8.4 Projection Invariants  122
8.4.1 Projection Invariants and Re-projection  122
8.4.2 Spherical Wrapping  124
8.4.3 Spherical Re-Projection (SRP)  125
8.4.4 Projection Invariants for SRP  126
8.5 Application: Image Alignment Problem  130
8.5.1 Problem Description and Existing Methods  131
8.5.2 Projection Invariant-Based Image Alignment Algorithm  132
8.5.3 Experiments and Results  133
8.5.4 Speed Test  134
8.5.5 Application in Panorama Construction  135
8.6 Conclusion and Future Work  137

9 Calibration Algorithms for Panorama-Based Camera Control  139
9.1 Introduction  139
9.2 Related Work  140
9.3 Assumptions and Nomenclature  142
9.4 Calibration Scheme  143
9.4.1 Problem Definition  143
9.4.2 Calibration Technique  145
9.4.3 Calibration Accuracy Analysis  148
9.5 Experiments  149
9.6 Conclusions and Future Work  151

10 On-Demand Sharing of a High-Resolution Panorama Video from Networked Robotic Cameras  153
10.1 Introduction  153
10.2 Related Work  155
10.3 System Architecture  156
10.3.1 Evolving Panorama  157
10.3.2 Understanding User Requests  158
10.4 Data Representation and Algorithms  159
10.4.1 Patch-Based Evolving Panorama Video Representation  159
10.4.2 Frame Insertion Algorithm  159
10.4.3 User Query Algorithm  160
10.5 Experiments and Results  161
10.6 Conclusion and Future Work  163

11 Conclusions and Future Work  165
11.1 Contributions  165
11.1.1 Challenges Identified in CT Systems  165
11.1.2 Formulation of CTRC Problems and Metrics  166
11.1.3 Algorithms  166
11.1.4 System Development and Experiments  167
11.2 Future Work  169
11.2.1 Big Picture  169
11.2.2 Extensions of Frame Selection Problems  169
11.2.3 Another Viewpoint on Future Work  170

References  173
Index  185
1 Introduction
A Collaboratively Teleoperated Robotic Camera (CTRC) refers to a robotic camera simultaneously controlled by multiple online users via the Internet. The applications of a CTRC system are wide-ranging, including distance learning, nature observation, security surveillance, emergency response, and traffic monitoring. A CTRC has its roots in both robotics and computer vision. As illustrated in Figure 1.1, a CTRC is first a telerobot because it allows the remote control of a robotic device. Since its communication usually relies on computer networks, it belongs to the category of networked telerobots. The collaborative element of a CTRC allows multiple users to share control of the single robotic camera simultaneously, which classifies it as a collaborative telerobot. Since it transmits a live video stream via the network and uses web browsers as clients, it is also a special kind of web camera.
Fig. 1.1. The research domain of the Collaboratively Teleoperated Robotic Camera (CTRC) system
1.1 Tele-operation
A “telerobot” is a remotely controlled machine equipped with sensors such as cameras and the means to move through and interact with a remote physical environment. NASA’s Mars Sojourner is a well-known example. The Sojourner telerobot, like almost all telerobots to date, is controlled by a single human
operator. A robotic camera can perform pan-tilt-zoom operations and can be viewed as a robot with three degrees of freedom (DOF). Hence, a tele-operated robotic camera is a special type of telerobot. Since Nikola Tesla demonstrated the first radio-controlled boat in New York City in 1898 [186], teleoperation has a history of more than a century. Goertz demonstrated one of the first bilateral simulators in the 1950s at the Argonne National Laboratory [64]. Remotely operated mechanisms have long been desired for use in inhospitable environments such as radiation sites, undersea [14], and space exploration [17]. At General Electric, Mosher [133] developed a complex two-arm teleoperator with video cameras. Prosthetic hands were also applied to teleoperation [188]. More recently, teleoperation has been considered for medical diagnosis [16], manufacturing [63], and micromanipulation [158]. See Sheridan [161] for an excellent review of the extensive literature on teleoperation and telerobotics.
1.2 Networked Telerobots
A traditional telerobot often relies on dedicated one-to-one communication channels and specially designed hardware. It is very difficult for a normal user to access such a robot because of the need for special equipment and communication mechanisms. Since the mid-1990s, the evolving Internet has changed the paradigm of telerobot research. As a subclass of telerobots, networked telerobots, controllable over networks such as the Internet, are an active research area. In addition to the challenges associated with time delay, supervisory control, and stability, online robots must be designed to be operated by non-specialists through intuitive user interfaces and to be accessible 24 hours a day. Our robotic cameras are controlled over the Internet, so we can view them as networked telerobots. The Mercury Project was the first Internet robot [70, 69]; it went online in August 1994. Working independently, a team led by K. Taylor and J. Trevelyan at the University of Western Australia demonstrated a remotely controlled six-axis telerobot in September 1994 [44, 184]. There are now dozens of Internet robots online, a book from MIT Press [71], and an IEEE Technical Committee on Internet and Online Robots. See [92, 156, 102, 101, 108, 132, 82, 140, 121] for examples of recent projects. As of 2006, several hundred networked telerobots have been developed and put online for public use. Many papers have been published describing these systems, and a book on this subject by Goldberg and Siegwart is available from MIT Press [71]. Updated information about new research and an archive/survey of networked telerobots is available on the website of the IEEE Technical Committee on Networked Robots (Google “networked robots”), which fosters research in both networked telerobots and networked robots. Like other networked telerobots, networked robotic cameras have the following properties:
• The physical world is affected by a device that is locally controlled by a network “server,” which communicates with remote human users through web browsers such as Internet Explorer or Mozilla, generally referred to as “clients.” As of 2006, the standard protocol for network browsers is HTTP, a stateless transmission protocol.
• Most networked telerobots are continuously accessible (online), 24 hours a day, 7 days a week.
• Since hundreds of millions of people now have access to the Internet, mechanisms are needed to handle client authentication and contention.
• Input and output for networked telerobots is usually achieved with the standard computer screen, mouse, and keyboard.
• Clients may be inexperienced or malicious, so online tutorials and safeguards are generally required.
Networked telerobots are a special case of “supervisory control” telerobots, as proposed by Sheridan and his colleagues [161]. Figure 1.2 summarizes the spectrum of teleoperation control modes. Under supervisory control, a local computer plays an active role in closing the feedback loop. Most networked telerobots are type (c) supervisory control systems. Networked telerobots provide a new medium for people to interact with remote environments. A networked robot can provide more interactivity than a normal videoconferencing system. The physical robot not only represents the remote person but also transmits multi-modal feedback to the person, which is often referred to as “telepresence” in the literature [141]. Paulos and Canny’s Personal ROving Presence (PRoP) robot [145] and Jouppi and Thomas’ Surrogate robot [141] are recent representative works.
Fig. 1.2. A spectrum of teleoperation control modes adapted from Sheridan’s text [161]. We label them (a)-(e) in order of increasing robot autonomy. At the far left would be a mechanical linkage where the human directly operates the robot from another room through sliding mechanical bars, and on the far right would be a system where the human role is limited to observation/monitoring. In (c)-(e), the dashed lines indicate that communication may be intermittent.
Networked telerobots have great potential for education and training. In fact, one of the earliest networked telerobot systems [185] originated from the idea of a remote laboratory. Networked telerobots provide the general public, who may have little to no knowledge of robots, with opportunities to understand, learn about, and operate robots, which were previously expensive scientific equipment limited to universities and large corporate laboratories. Built on networked telerobots, online remote laboratories [107, 40] greatly improve distance learning by providing an interactive experience. For example, teleoperated telescopes help students understand astronomy [47]. Teleoperated microscopes [149] help students observe micro-organisms. The Tele-Actor project [73] allows a group of students to remotely control a human tele-actor to visit environments that are normally not accessible to them, such as cleanrooms in semiconductor manufacturing facilities and DNA analysis laboratories.
1.3 Web Cameras
Since a group of thirsty researchers at Cambridge University put the first web camera online to observe a coffee pot in 1991, web cameras have become an active research area. Research on webcams, or Internet cameras, focuses on two perspectives: system architectures and applications. Desmet, Verkest, Mignolet et al. [45, 97, 196] designed webcams using reconfigurable hardware and embedded software. They implemented a secure VPN (Virtual Private Network) with 3DES encryption and an Internet camera server (including JPEG compression). Brooks and McKee [21] implemented an automated camera that is placed during teleoperation using the Visual Acts theory and architecture to provide operators with task-relevant information in a timely manner. The applications of webcams are not limited to surveillance [104] or teleconferencing [109, 119, 146]. Schmid, Maule, and Roth [159] used a controllable webcam to perform all the tests for industrial robots given by ISO 9283, “Performance criteria and related test methods”. Pollak and Hutter [149] installed a Philips webcam on an Olympus BX60 light microscope to record movies of investigated samples. Zhang, Navab, and Liou [207] used webcams to create an interactive sales model for web customers. Remote camera systems have a long history in applications such as nature observation. In the 1950s, Gysel and Davis [79] built an early video-camera-based remote wildlife observation system to study rodents. Biologists use remote photography systems to observe nest predation, feeding behavior, species presence, and population parameters [112, 93, 43, 157, 175, 115]. Commercial remote camera systems such as Trailmaster [112] and DeerCam have been developed since 1986 and have been widely used in wildlife observation. The Internet enables webcam systems that allow the general public to access remote nature cameras. Thousands of webcams have been installed around the world, for example to observe elephants [1], tigers [187], bugs [199], birds/squirrels [138, 99], cranes [136], and swans [179]. Many other examples can be found at [200]. However, most of these cameras are non-robotic and only function as passive recording devices.
Due to their flexible coverage and resolution, robotic PTZ cameras have been used in many applications such as distance learning [198], improving vehicle driving safety [36], tracking human movement [204], building construction monitoring [169], nature observation [167], and traffic monitoring [128]. Matsuyama and his colleagues [128] point out that the dynamic integration of visual perception and camera action is an important research problem for PTZ cameras. They develop a new architecture that coordinates multiple cameras to track multiple moving objects. Sinha and Pollefeys [165] develop automatic calibration algorithms for a pan-tilt-zoom camera with a focus on automatic zoom calibration. They first determine intrinsic camera parameters at the lowest zoom and then increase camera zoom settings to obtain radial distortion parameters. Although calibration and content analysis are very important problems for web cameras, they have been well studied by robotics and computer vision researchers. Here we will focus on system, algorithm, and deployment issues for the collaboratively teleoperated robotic camera, where multiple users share control of the robotic camera.
1.4 Collaborative Telerobot
1.4.1 What Is a Collaborative Telerobot?
A traditional telerobot is often controlled in a one-to-one master-slave mode and usually does not allow one-to-many or many-to-many collaboration. By “collaborative” we mean a system where a number of participants simultaneously share control. For example, if a robotic camera is installed at a popular location such as Times Square in New York City, it is easy to imagine that many users may want to control the robot simultaneously. We define a “collaborative telerobot” as a telerobot simultaneously controlled by many participants, where input from each participant is combined to generate a single control stream. In our setting, it is possible to control more than one camera using a single control stream because all cameras are coordinated in action, and hence the multiple cameras can be viewed as a single sophisticated robot. Collaborative Telerobotics (CT) is a novel approach to teleimmersion and teleworking. With CT, participants collaborate rather than compete for access to valuable resources such as historical and scientific sites. Collaboration is a crucial ingredient for education and teamwork. A scalable infrastructure for CT, compatible with the Internet, would allow large groups of students or researchers to simultaneously participate in remote experiences. For example, CT can allow groups of disadvantaged students to collaboratively steer a telerobot through a working steel mill in Japan or the Presidential Inauguration, and allow groups of researchers to collaboratively steer a telerobot around a newly active volcano or a fresh archaeological site. Figure 1.3 illustrates a non-remote collaborative control architecture. Before we consider the details of these problems, a preliminary question is: can a group of many participants simultaneously driving the motion of a single resource achieve anything resembling coherent control?
Fig. 1.3. Examples of Collaborative Control System
1.4.2 History of Collaborative Telerobots
Anecdotal evidence with Cinematrix [55, 31], a commercial audience interaction system, suggests that collaborative motion control is not only possible but fairly robust to deviations in individual behavior. The inventors of Cinematrix, Loren and Rachel Carpenter, performed a series of experiments in large theaters in the 1990s. Each audience member is given a plastic paddle, colored red on one side and green on the other. By rotating his or her paddle each player simultaneously provides input. Overhead cameras detect which color is being presented by each participant in real time. The camera output is used to drive a live display projected onto the front screen of the theater. The average level of red or green conveyed by the group provides an aggregate audience signal that is re-computed several times a second. The theater is divided down the central aisle and a cursor is projected on the screen. Participants on the right control the horizontal motion of the cursor, participants on the left control the vertical motion. A large circle is displayed on the screen and the audience is requested to move the shared cursor to trace
a trajectory around the circle. Since each player only controls one small component of the average signal, and the participants are a heterogeneous group with different personalities, one might conjecture that the shared cursor motion would resemble random Brownian motion. But in repeated experiments, groups of participants were quickly able to adapt their individual paddle signals to achieve coherent control of the shared cursor. Groups were not only able to track given trajectories, but also to play competitive games such as Pong and Pac-Man, and even to collaboratively control an airplane flight simulator! Audiences ranged from 5000 graphics professionals at Siggraph 1991 to groups of unruly high school students in Pittsburgh, PA.
Tanie, Matsuhira, Chong, et al. [35] proposed the following taxonomy for teleoperation systems: Single Operator Single Robot (SOSR), Single Operator Multiple Robot (SOMR), Multiple Operator Multiple Robot (MOMR), and Multiple Operator Single Robot (MOSR). A collaborative teleoperation system is a Multiple Operator Single Robot (MOSR) system. Most online robots are SOSR, where control is limited to one operator at a time. Tanie et al. analyzed an MOMR system where each operator controls one robot arm and the robot arms have overlapping workspaces. They show that predictive displays and scaled rate control are effective in reducing completion times for pick-and-place tasks that require cooperation from multiple arms.
A number of SOSR systems have been designed to facilitate remote interaction. Paulos and Canny’s Personal Roving Presence (PRoP) telerobots, built on blimp or wheeled platforms, were designed to facilitate remote social interaction with a single remote operator [143, 144]. Fong, Thorpe, and colleagues study SOSR systems where collaboration occurs between a single operator and a mobile robot that is treated as a peer to the human and modeled as a noisy information source [56]. Related models of SOSR “cobots” are analyzed in [6, 20, 56, 122, 164]. In an MOMR project by Fukuda, Liu, Xi, and colleagues [49], two remote human operators collaborate to achieve a shared goal such as maintaining a given force on an object held at one end by a mobile robot and at the other end by a multi-jointed robot. The operators, distant from the robots and from each other, each control a different robot via force-feedback devices connected to the Internet. The authors show both theoretically and experimentally that event-based control allows the system to maintain stable synchronization between operators despite variable time lag on the Internet. MOMR models are also relevant to online collaborative games such as Quake, where players remotely control individual avatars in a shared environment [118].
In SOMR systems, one teleoperator or process controls multiple robots. This bears some relation to cooperative (behavior-based) robots, where groups of autonomous robots interact to solve an objective [9]. Recent results are reported in [28, 46, 154, 148]. One precedent of an online MOSR system is described by McDonald, Cannon, and colleagues [130], where several users assist in waste cleanup using Point-and-Direct (PAD) commands [29]. Users point to cleanup locations in a shared image and a robot excavates each location in turn. In this Internet-based MOSR system, collaboration is serial but pipelined, with overlapping plan
and execution phases. The authors demonstrate that such collaboration improves overall execution time but do not address conflict resolution between users. In [67], Goldberg and Chen analyze a formal model of collaborative control, and in [68] they describe an Internet-based MOSR system that averaged multiple human inputs to simultaneously control a single industrial robot arm.
1.4.3 Characteristics of CT Systems
Although our research can be applied to a wider class of CT systems, we focus on web-based collaboratively teleoperated robotic camera systems in particular. Web-based CTRC systems use the Internet as their medium and usually do not require users to install specialized software, which makes them widely accessible. Similar to other CT systems, a CTRC system usually has the following characteristics:
• Sharing valuable resources and providing live access to remote environments: As a special type of teleoperation system, a CT system allows multiple people to share a single resource simultaneously. People are very interested in valuable resources like robots: they want to learn more about robots and play more with them, but such equipment is too expensive for a normal user to afford. On the other hand, many advanced robots are not fully utilized in research labs and universities. Web-based CT systems can provide public access to this valuable telepresence experience, which has proven to be very useful for education. For example, Colton et al. [127] designed an online heat exchanger at MIT, which allows a class of students to do experiments online. For a CTRC, this becomes even more attractive because it provides an active observation channel into the scene of interest. A CTRC can bring a large audience to places such as a battlefield, a clean room, or a volcano, which are otherwise either inaccessible or require special training and equipment. Online users can obtain firsthand and unfiltered information through CTRC systems.
• Collaboration improves reliability: A CT system combines multiple user requests to control a single shared device. This makes CT systems less sensitive to individual errors and malicious behavior. The results from Cinematrix [31] confirm that collaborative control can be surprisingly robust in practice. What is more interesting is that group diversity may actually improve performance [67]. Similar effects have been observed in very different contexts [103]. The telerobot or camera in a CT system is not necessarily controlled only by human inputs. The collaborative framework also allows a combination of human inputs and in-situ sensory inputs, such as requests from motion sensors. The collaboration between human users and system automation can greatly improve reliability and provide appropriate automation levels for different applications.
• User interaction: CT system interfaces usually provide functionality that allows people to see others’ decisions and to be involved in others’ decision processes. Users can interact with each other using bulletin boards, chat rooms, and voice/video
conferencing systems. The exchange of ideas helps them gain a better understanding of what is happening in the system. This feature can be useful for training and educational purposes. For example, novice users can observe experienced users’ behavior and learn from it. This also fits a central idea of education: teamwork is a key element in education at all levels [155, 41, 39].
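To make the idea of combining many participants' inputs into a single control stream (Section 1.4.2) concrete, the sketch below averages several operators' desired joint vectors into one command, in the spirit of the MOSR averaging approach cited above. It is a minimal illustration under the assumption that each input is a 4-DOF joint vector; the type and function names are hypothetical and not taken from the cited systems.

```cpp
#include <array>
#include <cstddef>
#include <cstdio>
#include <vector>

// Hypothetical type: one operator's desired joint configuration (4 DOF).
using JointCommand = std::array<double, 4>;

// Combine every participant's input into a single control stream by averaging.
JointCommand combine_by_averaging(const std::vector<JointCommand>& inputs) {
  JointCommand out{0.0, 0.0, 0.0, 0.0};
  if (inputs.empty()) return out;
  for (const JointCommand& in : inputs)
    for (std::size_t j = 0; j < out.size(); ++j) out[j] += in[j];
  for (double& v : out) v /= static_cast<double>(inputs.size());
  return out;  // the one command actually sent to the robot
}

int main() {
  std::vector<JointCommand> inputs;
  inputs.push_back({10, 0, 5, 90});
  inputs.push_back({20, -4, 5, 80});
  inputs.push_back({12, 2, 8, 85});
  JointCommand cmd = combine_by_averaging(inputs);
  std::printf("%.1f %.1f %.1f %.1f\n", cmd[0], cmd[1], cmd[2], cmd[3]);
  return 0;
}
```

As the preface and later chapters note, averaging is a reasonable aggregation rule for a position-controlled arm, but it breaks down when the shared device is a camera frame, which motivates the satisfaction-based frame selection models of Part II.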
1.5 Organization of the Book
In this book, we summarize our research on CTRCs from three different aspects: systems, algorithms, and deployment.
• Part I: Systems. A collaboratively teleoperated robotic camera usually serves multiple online users concurrently. The system needs to handle concurrent user requests and to provide a collaborative working space for all users. A traditional single-user webcam often employs button-based input and directly displays the video as the output. However, this does not work for a collaboratively teleoperated robotic camera, because such a design cannot provide feedback about other users' inputs or about how the video output relates to all requests. A new interface, a new architecture, and a new system design are required. In this part, we present our system development for two collaboratively teleoperated camera systems: a fixed pan-tilt-zoom camera and a camera mounted on a person's helmet. This part also helps readers understand the algorithmic challenges associated with a CT system before entering Part II, which is the core of this book.
• Part II: Algorithms. How to coordinate competing user requests to generate a single meaningful control stream for the robotic camera is the core algorithmic problem in a CTRC system. Four variations of CTRC algorithms are discussed in this part: 1) exact algorithms for applications that prefer accuracy to speed, such as satellite imaging; 2) approximate algorithms for remote observation cameras when speed is preferred; 3) the p-frame selection algorithm, which addresses the frame allocation problem when more than one camera is available; and 4) a voting mechanism that addresses the problem where user intentions cannot be represented by rectangular geometric inputs.
• Part III: Deployment. Deploying a CTRC system is nontrivial. To establish the context of the camera workspace, a CTRC system usually constructs a panoramic image as the collaborative work space for all users. As new images arrive, the panoramic interface updates as a video. User inputs are displayed and interact with each other on the panoramic video. Construction, update, calibration, and transmission of the motion panorama are important research problems in deployment. Part III discusses these deployment problems in detail in different application contexts such as nature observation, construction monitoring, and surveillance.
2 The Co-Opticon System: Interface, System Architecture, and Implementation of a Collaboratively Controlled Robotic Webcam⋆
Beginning with this chapter, we introduce two CTRC systems. The first is the Co-Opticon, where the shared robotic camera is a pan-tilt-zoom camera with a fixed base.
2.1 Introduction
Robotic webcameras with pan, tilt, and zoom controls are now commercially available and are being installed in dozens of locations1 around the world. In these systems, the camera parameters can be remotely adjusted by viewers via the Internet to observe details in the scene. Current control methods restrict control to one user at a time; users have to wait in a queue for their turn to operate the camera. In this chapter we describe the Co-Opticon, a new system that eliminates the queue and allows many users to share control of the robotic camera simultaneously. As illustrated in Figure 2.1, the Co-Opticon system includes the camera and two servers that communicate with users via the Internet. Streaming video is captured at the camera server and streamed back to the remote users using a Java interface. User responses are collected at the Co-Opticon server and used to compute optimal camera positions, which are sent to the camera server to control the camera. The Co-Opticon’s Java-based interface includes two image windows, one fixed for user input and the other a live streaming video image. The interface collects requested camera frames (specified as desired rectangles) from n users, computes a single camera frame based on all inputs, and moves the camera accordingly. Below we describe system details and two frame selection models based on user “satisfaction”.
Fig. 2.1. The Co-Opticon System Architecture
In independent work, Kimber, Liu, Foote, et al. describe a multi-user robotic camera in [109, 119]. Their application is designed for videoconferencing. They use multiple cameras in the system: panoramic cameras and a pan-tilt-zoom camera. Panoramic cameras generate a dynamic panoramic view of the
⋆ This chapter was presented in part at the 2003 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Las Vegas, NV [166].
1 See: http://www.x-zone.canon.co.jp/WebView-E/index.htm
conference site. Users control the pan-tilt-zoom camera by drawing on the panoramic view. The system is well suited to a videoconferencing environment, where the illumination is consistently good, so the image quality of the panoramic view can be guaranteed. We believe multiple-camera systems are useful but not necessary for scenic sites where dynamic panoramic information is not required. The panoramic image can be generated by the same pan-tilt-zoom camera, resulting in a lower bandwidth requirement.
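To make the notion of frame "satisfaction" concrete before describing the interface, the sketch below scores a candidate frame against the users' requested rectangles with a simple intersection-over-maximum ratio and sums the scores. It assumes the size parameter z scales a 4:3 rectangle; this is an illustrative stand-in only, and the exact memoryless and temporal models used by the Co-Opticon are defined in Section 2.5.

```cpp
#include <algorithm>
#include <cstdio>
#include <vector>

// Illustrative only: a simple intersection-over-maximum score for
// axis-aligned frames; the exact GIOM metric is defined in Section 2.5.
struct Frame {                 // [x, y, z]: center (x, y), size z (assumed > 0)
  double x, y, z;
  double width()  const { return 4.0 * z; }   // assumes z scales a 4:3 frame
  double height() const { return 3.0 * z; }
  double area()   const { return width() * height(); }
};

double intersection_area(const Frame& a, const Frame& b) {
  double w = std::min(a.x + a.width() / 2, b.x + b.width() / 2) -
             std::max(a.x - a.width() / 2, b.x - b.width() / 2);
  double h = std::min(a.y + a.height() / 2, b.y + b.height() / 2) -
             std::max(a.y - a.height() / 2, b.y - b.height() / 2);
  return (w > 0 && h > 0) ? w * h : 0.0;
}

// One user's satisfaction: overlap divided by the larger of the two areas,
// so it equals 1 only when the candidate matches the request exactly.
double satisfaction(const Frame& candidate, const Frame& request) {
  return intersection_area(candidate, request) /
         std::max(candidate.area(), request.area());
}

double total_satisfaction(const Frame& candidate,
                          const std::vector<Frame>& requests) {
  double s = 0.0;
  for (const Frame& r : requests) s += satisfaction(candidate, r);
  return s;
}

int main() {
  std::vector<Frame> requests = {{0.0, 0.0, 1.0}, {2.0, 1.0, 2.0}};
  Frame candidate{1.0, 0.5, 1.5};
  std::printf("total satisfaction = %.3f\n",
              total_satisfaction(candidate, requests));
  return 0;
}
```

Sweeping the candidate over a discretized set of pan, tilt, and zoom values and keeping the maximizer of this total score is essentially the brute-force strategy that the exact and approximate frame selection algorithms of Part II improve upon.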
2.2 The Co-Opticon Interface
The Co-Opticon interface facilitates interaction and collaboration among remote users. Users register online to participate by selecting a characteristic color and submitting their email address to the Co-Opticon server, which stores this information in our database and immediately sends back a password via email. The server also maintains a tutorial and an FAQ section to familiarize new users with how the system works. The Co-Opticon interface contains two windows: the video window shows the current camera view, and Figure 2.2 illustrates the panoramic window and the Co-Opticon user interface. The interface also facilitates chat between users. Each user can type in a short sentence, which is displayed underneath his or her requested frame in the panoramic image. A clock-like timer is located at the bottom right of the interface, indicating the time before the next camera movement (typically 5-10 seconds).
Fig. 2.2. This figure illustrates the Co-Opticon’s Java-based user interface, which currently runs on most Windows-based PCs. Users view two windows. One (not shown) displays a live video stream as captured by the robotic camera. The second window, illustrated here, contains the user interface. The panoramic image is a fixed photo of the camera’s reachable range of view. The snapshot above shows 6 active users listed in the scrollable window at the left. Each user requests a camera frame by positioning a dashed rectangle over the panoramic image. Based on these requests, the algorithm computes an optimal camera frame (shown with a solid rectangle) and servos the camera accordingly to display the resulting live video stream. The horizontal bars indicate levels of user satisfaction as described below.
2.3 Hardware
The Co-Opticon server is an AMD K7 950 MHz PC with 1.2 GB SDRAM connected to a 100 Mbps T3 line. The camera server is an AMD K7 950 MHz PC with
640 MB SDRAM connected to a 100 Mbps T3 line at the remote site. It has a video-capture card, which captures video at 320 × 240 resolution. It also serves as the video server, running InetCam2 software to broadcast video. We used the Canon controllable camera, model VC-C3. A comparable camera is available from Sony. The Canon camera has motorized pan, tilt, and zoom with a 10x power zoom lens. It has PAL, composite, and S-video output with a resolution of 450 horizontal lines. It can communicate with a PC via an RS232C link at 14,400 bps. Its pan, tilt, and zoom speed is 76 degrees per second at maximum and 0.5 degrees per second at minimum. It has an accuracy of 0.5 degrees and a 380,000-pixel CCD array.
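Since the camera moves only once per command cycle (typically every 5-10 seconds, see Section 2.2), the pan/tilt speed figures above bound how far it can travel between cycles. The helper below is an illustrative sketch, not part of the actual system, that clamps a commanded pan-tilt change to what the camera can cover in a given time budget, using the 76 degrees-per-second maximum quoted above.

```cpp
#include <algorithm>
#include <cstdio>

// Illustrative only: limit a requested pan/tilt change to what the camera
// can cover within one command cycle, given its maximum axis speed.
struct PanTilt { double pan_deg, tilt_deg; };

PanTilt clamp_motion(const PanTilt& current, const PanTilt& target,
                     double max_speed_deg_per_s, double cycle_s) {
  const double budget = max_speed_deg_per_s * cycle_s;  // reachable angle change
  auto limit = [budget](double from, double to) {
    double delta = to - from;
    double clamped = std::max(-budget, std::min(budget, delta));
    return from + clamped;
  };
  return PanTilt{limit(current.pan_deg, target.pan_deg),
                 limit(current.tilt_deg, target.tilt_deg)};
}

int main() {
  PanTilt cur{0.0, 0.0}, tgt{200.0, 30.0};
  PanTilt next = clamp_motion(cur, tgt, 76.0, 1.0);  // one-second budget
  std::printf("pan %.1f tilt %.1f\n", next.pan_deg, next.tilt_deg);
  return 0;
}
```

With the VC-C3's quoted 76 degrees per second and a 5-10 second cycle, essentially any target within the camera's reachable range can be attained within one cycle, which is why the system can treat each cycle's optimal frame as directly achievable.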
2.4 Software

As illustrated in Figure 2.3, the custom software includes: (1) the Co-Opticon server, (2) the camera control and video capturing package at the video server, and (3) the client-side Co-Opticon Java applet. The Co-Opticon server runs Mandrake Linux 9.0 and the Apache web server 1.3.26. All modules are written in GNU C++ and optimized for running speed.
Fig. 2.3. The Co-Opticon system software diagram: the Co-Opticon (ShareCam) web server comprises the login CGI, registration, user database, console/log, communication, and Apache modules built around a core process with shared memory segments; it communicates with the video server (camera control, calibration, panoramic image generation, and the InetCam server driving the Canon VC-C3 camera over RS232C) via TCP/IP, while clients run the ShareCam and InetCam applets over HTTP and TCP/IP
The Co-Opticon server package consists of the core process, Apache modules, a communication process, the user database, a registration module, a console/log module, and the login CGI script. The customized Apache module handles communication between web clients and the server via HTTP: it accepts the requested frame from a client and sends him/her the requested frames of the other users every second. It can be viewed as a CGI script, but with much higher scalability. The communication module connects to the video server via a socket link to send camera control commands. The console/log module allows us to monitor and record system status in real time.

The overall design emphasizes data sharing among all processes. Collaborative control requires that all clients be able to see each other's information in real time. This is achieved by sharing memory segments among all server processes; the shared memory segment managed by the core process is therefore the key data structure.

Clients download two applets: the Co-Opticon applet and the InetCam applet. The Co-Opticon applet is custom software, shown in Figure 2.2; part of the frame selection computation is done at the client side and is implemented in this applet. The Co-Opticon applet is written in Java 1.1.8 to ensure compatibility with most browsers. The InetCam applet is third-party software that functions as a video terminal.

The video server package includes camera control, the InetCam server, calibration, and panoramic image generation. The camera control module, written in Microsoft Visual C++, is the primary module. It accepts camera control
commands from the Co-Opticon server and translates them into the RS232C protocol; it is built on packages provided by Lawrence Berkeley National Laboratory (http://www-itg.lbl.gov/mbone/devserv/).
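The shared-memory design mentioned above is not detailed further in this chapter. Purely as an illustration of the idea, the following is a minimal sketch (not the actual Co-Opticon source) of how a fixed-size table of requested frames could be placed in a POSIX shared memory segment so that every server process sees the same data; the segment name, record layout, and capacity are assumptions, and a real implementation would also need a semaphore or mutex for synchronization.

// Minimal sketch (not the actual Co-Opticon source): a fixed-size table of
// requested frames kept in a POSIX shared memory segment so that all server
// processes can read and update it. Names and sizes are illustrative only.
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <cstdio>

struct RequestedFrame {            // one user's requested frame [x, y, z]
    int   user_id;
    float x, y, z;                 // pan/tilt center and zoom level
    long  timestamp;               // last update time (seconds)
};

struct SharedTable {               // the shared "key data structure"
    int            num_users;
    RequestedFrame frames[64];     // assumed capacity of 64 concurrent users
};

SharedTable* attach_shared_table(bool create) {
    // Each process maps the same named segment into its own address space.
    int flags = create ? (O_CREAT | O_RDWR) : O_RDWR;
    int fd = shm_open("/coopticon_requests", flags, 0666);   // assumed name
    if (fd < 0) { perror("shm_open"); return nullptr; }
    if (create && ftruncate(fd, sizeof(SharedTable)) != 0) { close(fd); return nullptr; }
    void* p = mmap(nullptr, sizeof(SharedTable),
                   PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    close(fd);
    return (p == MAP_FAILED) ? nullptr : static_cast<SharedTable*>(p);
}

int main() {
    SharedTable* table = attach_shared_table(true);
    if (!table) return 1;
    // A worker process would write the latest request from its client here,
    // and read everyone else's requests to echo them back every second.
    table->frames[0] = RequestedFrame{1, 0.2f, -0.1f, 0.5f, 0};
    if (table->num_users < 1) table->num_users = 1;
    std::printf("users registered: %d\n", table->num_users);
    return 0;
}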
2.5 Frame Selection Models

In this section we present two frame selection models. We begin with a review of definitions and notation; more details can be found in the next chapter. We consider two models for the optimal camera frame: the first is memoryless, based only on the current set of frame requests; the second is a temporal model based on the history of frame requests with exponentially decaying weights.

2.5.1 Memoryless Frame Selection Model
In the Co-Opticon system, c is the vector of camera parameters that users can control. Let c define a camera frame [x, y, z], where x, y specify the center point of the frame, corresponding to pan and tilt, and z specifies the size of the frame, corresponding to zoom level. Thus c defines a rectangular camera frame (the camera has a fixed aspect ratio of 4:3). User i requests a desired frame r_i. Given requests from n users, the system computes a single global frame c* that will best satisfy the set of requests. We define a Generalized Intersection Over Maximum (GIOM) metric for user "satisfaction" s_i(r_i, c) based on how the user's requested frame r_i compares with a candidate camera frame c. Each of the n users submits a request. Let

    s(c) = \sum_{i=1}^{n} s_i(r_i, c).                                    (2.1)

In the memoryless frame selection model, we want to find c*, the value of c that maximizes s(c) based only on the current set of requests: max_c s(c). In each motion cycle, we servo the camera to this frame.
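The exact algorithms that solve this optimization are deferred to Part II. Purely as an illustration of the memoryless model (2.1), the sketch below evaluates a discrete grid of candidate frames and keeps the one with the highest total satisfaction; the satisfaction function shown is a simple coverage-ratio stand-in, not the GIOM metric defined later, and the grid ranges are assumptions.

// Illustrative sketch of the memoryless model (2.1): exhaustively evaluate a
// grid of candidate frames c = [x, y, z] and return the one maximizing the
// summed satisfaction. The satisfaction used here is a simple coverage ratio
// stand-in; the book's actual metric and exact algorithms appear in Part II.
#include <vector>
#include <algorithm>
#include <cstdio>

struct Frame { double x, y, z; };            // center (x, y) and zoom z
struct Request { double x, y, w, h; };       // requested rectangle

// Fraction of the requested rectangle covered by the candidate frame,
// assuming a 4:3 camera frame of width 4z and height 3z (an assumption).
double satisfaction(const Request& r, const Frame& c) {
    double cw = 4.0 * c.z, ch = 3.0 * c.z;
    double ix = std::max(0.0, std::min(r.x + r.w / 2, c.x + cw / 2) -
                              std::max(r.x - r.w / 2, c.x - cw / 2));
    double iy = std::max(0.0, std::min(r.y + r.h / 2, c.y + ch / 2) -
                              std::max(r.y - r.h / 2, c.y - ch / 2));
    return (ix * iy) / (r.w * r.h);
}

Frame selectFrameMemoryless(const std::vector<Request>& reqs) {
    Frame best{0, 0, 1};
    double bestScore = -1.0;
    for (double z = 1.0; z <= 10.0; z += 0.5)            // zoom grid
        for (double x = 0.0; x <= 100.0; x += 1.0)       // pan grid
            for (double y = 0.0; y <= 75.0; y += 1.0) {  // tilt grid
                Frame c{x, y, z};
                double s = 0.0;
                for (const Request& r : reqs) s += satisfaction(r, c);
                if (s > bestScore) { bestScore = s; best = c; }
            }
    return best;
}

int main() {
    std::vector<Request> reqs = {{30, 20, 10, 8}, {35, 25, 6, 6}, {70, 50, 20, 15}};
    Frame c = selectFrameMemoryless(reqs);
    std::printf("optimal frame: x=%.1f y=%.1f z=%.1f\n", c.x, c.y, c.z);
    return 0;
}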
2.5.2 Temporal Frame Selection Model
An alternative frame selection model is based on the history of user frame requests over multiple motion cycles. We extend (2.1) using a weighted sum of user satisfaction. In this case the total satisfaction is a function of time t:

    s(c, t) = \sum_{i=1}^{n} \alpha_i(t) \, s_i(r_i(t), c(t)),            (2.2)

where the weight \alpha_i(t) for user i is a function of the user's previous "dissatisfaction" level u_i(t) = 1 - s_i(r_i(t), c(t)). One candidate form for the weights is

    \alpha_i(t) = \sum_{k=0}^{t-1} \frac{u_i(k)}{2^{t-1-k}},

which yields the recursive formulation

    \alpha_i(t) = u_i(t-1) + \alpha_i(t-1)/2.

If user i is not satisfied by the camera frame computed during the current motion cycle, his or her weight \alpha_i(t) will increase over future motion cycles, eventually dominating the weights of other users so that the desired frame request is satisfied. In this sense fairness is guaranteed over time. These frame optimization problems can be solved with the algorithms in Part II.
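As a small illustration of the recursive weight update, the sketch below maintains per-user weights alpha_i across motion cycles; the frame selection step itself is abstracted away (any maximizer of the weighted sum (2.2) could be plugged in), and the example satisfaction values are hypothetical.

// Illustrative sketch of the temporal weights: after every motion cycle each
// user's weight is updated as alpha_i(t) = u_i(t-1) + alpha_i(t-1)/2, where
// u_i = 1 - s_i is the user's dissatisfaction with the chosen frame.
#include <vector>
#include <cstdio>

struct TemporalWeights {
    std::vector<double> alpha;                     // one weight per user
    explicit TemporalWeights(int n) : alpha(n, 0.0) {}

    // satisfaction[i] is s_i(r_i(t), c(t)) in [0, 1] for the frame just shown.
    void update(const std::vector<double>& satisfaction) {
        for (size_t i = 0; i < alpha.size(); ++i) {
            double u = 1.0 - satisfaction[i];      // dissatisfaction
            alpha[i] = u + alpha[i] / 2.0;         // recursive formulation
        }
    }
};

int main() {
    TemporalWeights w(3);
    // User 3 is never satisfied; its weight grows and eventually dominates.
    for (int t = 0; t < 5; ++t) {
        w.update({1.0, 0.8, 0.0});
        std::printf("t=%d  alpha = %.3f %.3f %.3f\n",
                    t + 1, w.alpha[0], w.alpha[1], w.alpha[2]);
    }
    return 0;
}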
Fig. 2.4. Examples using the Memoryless Frame Selection model defined by (2.1). Four different sets of requested frames (dashed) and the corresponding optimal frames (solid) are displayed in (a)-(d). Note that the resulting frame is very different from what would be determined by simple averaging, and that some requests never get satisfied.
Fig. 2.5. Examples with the Temporal Frame Selection Model defined by (2.2), over four consecutive motion cycles: (a) t = 0, (b) t = 1, (c) t = 2, (d) t = 3. The set of requested frames is held constant, but the weights evolve so that the camera frame changes to facilitate "fairness".
Figure 2.4 shows four examples with the memoryless frame selection model. Note that the optimal frame grows in image (b) after a large requested frame is added. In Figure 2.4(c), two more frames are requested; since they cannot compete with the central group of requested frames, the optimal frame remains unchanged. Figure 2.4(d) shows a case in which all but two requested frames are disjoint; the algorithm selects a frame that covers the two overlapping frames. Figure 2.4 also illustrates that some users can be starved indefinitely.

Figure 2.5 shows four examples with the temporal frame selection model, where frame selection is based on user satisfaction over multiple motion cycles. A sequence of 4 motion cycles is illustrated with the same set of requested frames. Note that with this model, the camera frame changes to balance overall user satisfaction over time.
2.5.3 Experiments
The Co-Opticon system went online in June 2002, with the camera installed in our Alpha Lab from June 8, 2002 to February 2003, as shown in the previous figures. An illustration of the total requested frames is shown in Figure 2.6: Figure 2.6(a) displays all 4822 requested frames for the duration of the experiment. We are interested in how user interest is distributed in the panorama.
Fig. 2.6. Data from June 8, 2002 to February 6, 2003: (a) all 4822 requested frames; (b) interest density distribution in grayscale
To compute the interest distribution, let g(x, y) be the interest for point (x, y) in grayscale, i.e. 0 ≤ g(x, y) ≤ 255, let r_j, 1 ≤ j ≤ 4822, be the j-th requested frame, and define an indicator variable

    I(x, y, j) = 1 if (x, y) ∈ r_j, and 0 otherwise.

Let h(x, y) = \sum_{j=1}^{4822} I(x, y, j) be the number of requested frames covering point (x, y), and let h_max = max_{(x,y)} h(x, y). Since a darker point indicates more interest, we set

    g(x, y) = 255 \left(1 - \frac{h(x, y)}{h_{max}}\right).
We compute g(x, y) for each point in the panorama to generate Figure 2.6(b). As shown in the figure, the most popular region is the center of the camera workspace, looking at the Adept robot arm in our lab, where one of our colleagues was often performing robot calibration tests.
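A short sketch of the grayscale computation described above is given below, assuming the panorama dimensions and frame coordinates are available in pixels; it is an illustration only, not the logging code used in the experiment.

// Illustrative sketch of the interest-density image: count how many requested
// frames cover each panorama pixel, then map the counts to grayscale so that
// darker (smaller g) means more interest, as in g = 255 * (1 - h / h_max).
#include <vector>
#include <algorithm>
#include <cstdio>

struct Rect { int x0, y0, x1, y1; };     // requested frame in panorama pixels

std::vector<std::vector<int>> interestImage(const std::vector<Rect>& frames,
                                            int width, int height) {
    std::vector<std::vector<int>> h(height, std::vector<int>(width, 0));
    for (const Rect& r : frames)                       // accumulate coverage
        for (int y = std::max(0, r.y0); y <= std::min(height - 1, r.y1); ++y)
            for (int x = std::max(0, r.x0); x <= std::min(width - 1, r.x1); ++x)
                ++h[y][x];
    int hmax = 1;
    for (const auto& row : h)
        hmax = std::max(hmax, *std::max_element(row.begin(), row.end()));
    std::vector<std::vector<int>> g(height, std::vector<int>(width, 255));
    for (int y = 0; y < height; ++y)
        for (int x = 0; x < width; ++x)
            g[y][x] = 255 - (255 * h[y][x]) / hmax;    // darker = more interest
    return g;
}

int main() {
    std::vector<Rect> frames = {{10, 10, 60, 50}, {30, 20, 80, 70}, {35, 25, 55, 45}};
    auto g = interestImage(frames, 100, 80);
    std::printf("g at a popular pixel: %d, at an empty pixel: %d\n", g[30][40], g[75][5]);
    return 0;
}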
2.5.4 Field Tests
We have tested our system using three types of pan-tilt-zoom cameras, as illustrated in Figure 2.7. Table 2.1 lists the specifications of the three cameras. Among these parameters, the pan range, tilt range, and lens horizontal field of view (HFOV) determine the overall coverage of the panorama. The image resolution, CCD sensor size, and focal length are used to establish the coordinate projection model between the image coordinate system and the composite panorama coordinate system.
Fig. 2.7. The pan-tilt-zoom cameras tested with our system: (a) Canon VCC3, (b) Canon VCC4, (c) Panasonic HCM 280

Table 2.1. A comparison of technical specifications of the 3 pan-tilt-zoom cameras tested in our system

Camera    Pan              Tilt            Zoom   Focal length    HFOV
VCC3      −90° ∼ +90°      −30° ∼ +25°     10x    4.2 ∼ 42 mm     4° ∼ 46°
VCC4      −100° ∼ +100°    −30° ∼ +90°     16x    4 ∼ 64 mm       3° ∼ 47.5°
HCM 280   −175° ∼ +175°    0° ∼ −120°      21x    3.8 ∼ 79.8 mm   2.6° ∼ 51°
Table 2.2. A summary of field tests

#  Application              Duration      Location
1  In-lab test              06/02-02/03   Alpha Lab, UC Berkeley
2  Construction monitoring  06/03-06/05   Stanley Hall, UC Berkeley
3  Public surveillance      09/04         Sproul Plaza, UC Berkeley
4  Construction monitoring  10/04-08/05   CITRIS II building, UC Berkeley
5  Natural observation      08/05         Central Park, College Station, TX
6  Marine life observation  11/05-01/06   Richardson Bay Audubon Sanctuary, CA
7  Distance learning        05/06         PolyPEDAL Lab, UC Berkeley
8  Bird watching            04/07-        Sutro Forest, CA
Our system has been deployed at 8 different sites for a variety of applications, including building construction monitoring, public surveillance, distance learning, and natural observation. Table 2.2 summarizes the deployment history; due to space limits, we cannot detail each deployment. The system design has proven robust in the field tests: we have recorded over 300,000 users in these deployments, and the robotic cameras have executed more than 52 million commands.
2.6 Conclusions

This chapter describes the Co-Opticon, a MOSR CTRC system that allows a group of Internet users to simultaneously share control of a pan, tilt, and zoom
camera. We described the Co-Opticon interface, system architecture, and experiments with two frame selection models. We also summarized the field experiment history from June 2002 to September 2007. A collaboratively teleoperated robotic camera is not limited to a pan-tilt-zoom camera; it can be any camera mounted on a mobile platform. If this mobile platform is a human, it becomes the Tele-Actor system described in the next chapter.
3 The Tele-Actor System: Collaborative Teleoperation Using Networked Spatial Dynamic Voting⋆
When a camera is mounted on top of a mobile robotic platform, the system is no longer a mere observation system: more interactivity can be provided by the new architecture. When the mobile robotic platform is actually a skilled human Tele-Actor, the integration of remote intelligence with collective decision-making from online users becomes a challenging problem. The movement of the camera and the effective interaction between the Tele-Actor and the remote environment demand a new system architecture, interface, and hardware/software design.
3.1 Introduction

Consider the following scenario: an instructor wants to take a class of students to visit a research lab, semiconductor plant, or archaeological site. Due to safety, security, and liability concerns, it is often impossible to arrange such a class visit, and showing a pre-recorded video provides neither the excitement nor the group dynamics of the live experience. In this chapter we describe a system that allows groups to collectively visit remote sites using client-server networks. Such "collaborative teleoperation" systems may be used for applications in education, journalism, and entertainment. Remote-controlled machines and teleoperated robots have a long history [161]. Networks such as the Internet provide low-cost and widely available interfaces that make such resources accessible to the public. In almost all existing teleoperation systems, a single human remotely controls a single machine. We consider systems where a group of humans shares control of a single machine. In a taxonomy proposed by Tanie et al. [35], these are Multiple Operator Single Robot (MOSR) systems, in contrast to conventional Single Operator Single Robot (SOSR) systems.
This chapter was presented in part at the 2002 IEEE International Conference on Robotics and Automation (ICRA), Washington DC [72] and in part in the Proceedings of IEEE [73].
Fig. 3.1. The Spatial Dynamic Voting (SDV) interface as viewed by each user. In the remote environment, the Tele-Actor takes images with a digital camera which are transmitted over the network and displayed to all participants with a relevant question. With a mouseclick, each user places a color-coded marker (a “votel” or voting element) on the image. Users view the position of all votels and can change their votel positions based on the group’s response. Votel positions are then processed to identify a “consensus region” in the voting image that is sent back to the Tele-Actor. In this manner, the group collaborates to guide the actions of the Tele-Actor.
In MOSR systems, inputs from many participants are combined to generate a single control stream. There can be benefits to collaboration: teamwork is a key element in education at all levels [39, 41, 155], and the group may be more reliable than a single (possibly malicious) participant [67]. As an alternative to a mobile robot, which can present problems in terms of mobility, dexterity, and power consumption, we propose the "Tele-Actor", a skilled human with cameras and microphones connected to a wireless digital network, who moves through the remote environment based on live feedback from online users. We have implemented several versions of the system. Figure 3.1 shows a view of the "Spatial Dynamic Voting" (SDV) interface implemented for Internet browsers. Users are represented by "votels": square colored markers that are positioned by each user with a mouse click. This chapter presents the system architecture, the interface, and collaboration metrics.
3.2 System Architecture

The Tele-Actor system architecture is illustrated in Figure 3.2. As the Tele-Actor moves through the environment, camera images are sent to the Tele-Actor server for distribution to users, who respond from their Internet browsers. User voting responses are collected at the Tele-Actor server, which updates Java applets for all users and for the Tele-Actor in the field. The Tele-Actor carries a laptop which communicates with the Internet using the 2.4 GHz 802.11b wireless protocol. A camera-person with a second camera provides third-person perspectives as needed. Using this architecture, the users, the Tele-Actor server, the local director, the camera-person, and the Tele-Actor communicate via the Internet.
Fig. 3.2. System architecture. Participants on the Internet view and vote on a series of voting images. The human "Tele-Actor," with a head-mounted wireless audio/video link, moves through the remote environment in response; the Tele-Actor, camera-person, and local director connect to the Tele-Actor server through an 802.11b wireless basestation and the Internet. The "Local Director" facilitates interaction by posting textual queries.
3.3 SDV User Interface

We have developed a new graphical interface to facilitate interaction and collaboration among remote users. Figure 3.1 illustrates the "Spatial Dynamic Voting" (SDV) interface that is displayed in the browser of every active voter. Users register online to participate by selecting a votel color and submitting their email address to the Tele-Actor server, which stores this information in our database and sends back a password via email. The server also maintains a tutorial and an FAQ section to familiarize new users with how the system works. Using the SDV interface, voters participate in a series of voting images, each lasting 30-60 seconds. Each voting image is a single image with a textual question. In the example from Figure 3.1, the Tele-Actor is visiting a biology lab. Voters click on their screens to position their votels.
Fig. 3.3. The human Tele-Actor transmits images from the remote environment using the helmet-mounted video camera and responds to user votes. Helmet design by E. Paulos, C. Myers, and M. Fogarty.
Using the HTTP protocol, these positions are sent back to the Tele-Actor server and appear in an updated voting image sent to all voters every 3-5 seconds. In this way voters can change their votes. When the voting cycle is completed, SDV analysis algorithms analyze the voting pattern to determine a consensus command that is sent to the Tele-Actor. The SDV interface differs from multiple-choice polling because it allows spatially and temporally continuous inputs.

To facilitate user training and asynchronous testing, the Tele-Actor system has two modes. In the offline mode, voting images are drawn from a pre-stored library. In the online mode, voting images are captured live by the Tele-Actor. Both modes have potential for collaborative education, testing, and training; in this chapter we focus on the online mode. Figures 3.4, 3.5, 3.6, and 3.7 illustrate four types of SDV queries and their associated branching structures. In each case, the position of the majority of votels decides the outcome.

We tried including a live broadcast video stream but found that, due to bandwidth limitations, the resolution and frame rate were unacceptable for low-latency applications. Standard video broadcasting software requires about 20 seconds of buffered video data for compression, which introduces unacceptable delays for live visits. We hope this can be reduced in the future with faster networks such as Internet2.
Fig. 3.4. Navigation Query ("Which way should we go?"). Participants indicate where they want the Tele-Actor to go.
Fig. 3.5. Point Query ("Which knob should we turn?"). Participants point out a region of interest in the voting image.
Fig. 3.6. Opinion Query ("Should we open this chamber?"). The votel position can be anywhere between extreme values to indicate degree of opinion.
Fig. 3.7. Multiple-choice Query ("Which acid is used for wet etching?"). A variation on the Point Query with a small number of explicit options.
Fig. 3.8. Local director software ("Who should we talk to?"). The local director selects a voting image and proposes a voting question.
3.4 Hardware and Software

3.4.1 Version 3.0 (July 18, 2001)
The Tele-Actor web server is an AMD K7 950 MHz PC with 1.2 GB memory connected to a 100 Mbps T3 line. The local basestation is a Dell Pentium III 600 MHz laptop with 64 MB memory connected to a 10 Mbps T1 line at the remote site. It has a USB video card, which captures video at 320 × 240 resolution. We used the Swann MicroCam wireless video camera, model ALM-2452 (http://www.swann.com.au). It measures 18 × 34 × 20 mm and weighs 20 grams, with a 9-volt battery as its power supply. It has a 2.4 GHz analog RF output at 10 mW and transmits line-of-sight up to 300 feet with a resolution of 380 horizontal lines.

Custom software includes:
1. the client-side SDV browser interface based on DHTML (a screen shot is shown in Figure 3.9),
2. the local basestation image selection interface (a screen shot is shown in Figure 3.8), and
3. the Tele-Actor server.
Fig. 3.9. DHTML-based SDV interface in Version 3.0 ("Which way should we go?")
During online mode, the local basestation, running Microsoft Windows 98, uses a custom C++ application to capture images with textual questions and transmit them to the Tele-Actor server for distribution. During both online and offline modes, the Tele-Actor server uses custom C and C++ applications to maintain the database and communicate with the local basestation and with all active voters. The Tele-Actor server runs Redhat Linux 7.1 and the Apache web server 1.3.20. The Resin 2.0.1 Apache plug-in (http://www.caucho.com) and Sun JDK 1.3.1 with the MySQL 3.23.36 database provide Java server pages to handle user registration and data logging. Custom software built on the graphics development toolkit GD 2.0.1 generates voting images overlaid with the current votel positions.
3.4.2 Version 9.0 (July 25, 2002)
Many improvements were made during the year after the launch of the Tele-Actor system. Some of the changes are:

1. Scenario design: We found that it is necessary to have some scripted actions to meet the requirements of education. We usually have a few small scenarios prepared before the live event.
Fig. 3.10. Java-based SDV interface in Version 9.0 ("Where is the biggest city?"). Live audio and video feedback is available during live events; a TechTV report plays during offline mode.
Voting results drive the event from one scenario to the next, following a tree-like structure. Scenarios should be educational and interesting.

2. New Java-based interface: We encountered a number of compatibility problems when using DHTML as the main programming tool to build the SDV interface, and DHTML also has very limited functionality. We therefore switched to a Java-based interface starting with Version 4.0. The Java-based interface allows us to animate the votel movement and employ more flexible voting intervals. A screen shot of the new interface is shown in Figure 3.10.

3. Improved voting image quality: We found that the image quality provided by the analog wireless camera is not satisfactory when illumination conditions are poor, which is a common problem for most indoor environments. We used a Sony camcorder with night-vision functionality to solve the problem. As shown in Figure 3.2, we also used a dedicated camera-person to shoot the video. The Sony camcorder has a local video display, which lets the camera-person judge the image quality immediately. The image-capturing task was shifted from the local director to the camera-person. A shutter-like device was added to the system to allow the camera-person to capture the right moment, and the camera-person also has a small LCD display mounted on his elbow that shows the current voting information to facilitate view planning.

4. Facilitating voting question formulation: The local director now focuses only on posting voting questions. A new Java applet supported by a C-based CGI script was developed to replace the software shown in Figure 3.8. Candidate questions are pre-stored in the database to reduce the time delay caused by formulating voting questions. A screen shot of the local director applet is shown in Figure 3.11.
Fig. 3.11. Java applet for the local director ("Who should we meet?")
5. Video/audio broadcasting: Starting with Version 8.0, we provide remote users with streaming video/audio feedback. As shown in Figure 3.2, a new video server was added to the system, and Figure 3.10 shows a small video window added to the top-right corner of the screen. We compared several video streaming technologies and selected Microsoft Media Encoder as our broadcasting software.

6. New wireless technology: In Version 3.0, we used an analog 2.4 GHz wireless camera to transmit images from the event field to the local basestation. We soon learned that an analog 2.4 GHz camera is vulnerable to interference and has very limited image quality; moreover, the distance between the local basestation and the event field is limited by the range of the analog signal. We then switched to 802.11b wireless Ethernet, which is usually connected to the Internet: we digitize the image locally and send it to the local basestation using TCP/IP. With this approach the local basestation can be located anywhere on the Internet, and camera selection becomes more flexible because the camera and the wireless communication are decoupled.

7. New hardware design: The primary Tele-Actor carries a 600 MHz Sony PictureBook laptop with 128 MB memory connected to an 11 Mbps 802.11b wireless LAN at the remote site; it has a USB video card, which captures video at 320 × 240 resolution. The camera-person has a 750 MHz Pentium III Sony VAIO laptop with 256 MB memory and a similar USB video capture device. The laptops direct their video displays to hand-mounted TVs to provide updates on voting patterns. Figure 3.2 shows that the primary Tele-Actor has a Canon camera mounted on her helmet. Figure 3.12 shows that the camera-person has a Sony camcorder with night-vision capability, which provides a very high quality image and video stream. Both are equipped with a shutter-like device that allows them to capture key moments in the live event.
Fig. 3.12. Hardware configuration for the camera-person: a TV monitor, a shutter device, and a Tele-Actor backpack containing a laptop, a PCMCIA USB hub, a USB wireless (802.11b) adapter, and a USB video capture card. The hardware configuration of the Tele-Actor is similar but has a helmet-mounted camera.
3.5 Problem Definition and Algorithms

Users express responses by clicking on the voting image to spatially indicate a preferred object or direction in the field of view. As an alternative to semantic analysis of the voting image, we consider votels as spatial distributions and identify preferred "consensus" regions in the image. We then use these regions to define two metrics for individual and group performance in terms of leadership and collaboration.
3.5.1 Problem Definition
Voter Interest Functions

Consider the k-th voting image. The server receives a response from user i in the form of an (x, y) mouseclick on image k at time t. We define the corresponding votel v_ik(t) = [x_ik(t), y_ik(t)].

Consensus Regions

Votels are usually clustered around some regions in a voting image. We refer to these regions as consensus regions: S_k = {C_1k, C_2k, ..., C_mk}. Since there are n voters, m ≤ n. Given V_k = {v_ik(T)}, i = 1, ..., n, we can compute the consensus regions S_k.
Fig. 3.13. Evolution of the voting image as votels arrive
One approach is to use existing methods: cluster analysis [24, 98, 190] and convex hull generation. After the votels are classified into groups, we can compute the convex hull of each group with 3 or more votels and treat each convex polygon as a consensus region. In the rest of the section, we analyze voting patterns in terms of goals and collaboration based on known consensus regions {C_1k, C_2k, ..., C_mk}.
3.5.2 Ensemble Consensus Region
Given S_k and V_k, the ensemble consensus region is the region with the most votels. Let

    I_k(i, j) = 1 if [x_ik(T), y_ik(T)] ∈ C_jk, and 0 otherwise.

The count

    n_kj = \sum_{i=1}^{n} I_k(i, j)

is the number of votels inside consensus region j of voting image k. Breaking ties arbitrarily, let C_k*, the ensemble consensus region, be any C_jk with maximum n_kj. A consensus region can be projected onto a line in the voting image plane to obtain a consensus interval. Table 3.1 summarizes the votel analysis for the votels shown in Figure 3.14, where the consensus regions are projected onto the x axis to obtain three consensus intervals. Consensus interval 3, with the most votels, is the ensemble consensus interval.

Fig. 3.14. Voting image of an industrial robot arm with 27 votels ("Which motor actuates vertically?")

Table 3.1. SDV analysis of the voting image from Figure 3.14. Intervals and widths are in pixels.

C_jk     Interval     Width   #Votes   D_kj
1        [52, 94]     42      8        2.26
2        [139, 180]   51      5        1.16
3        [236, 288]   52      14       3.19
Overall  -            145     27       2.21
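As an illustration of the counting step, the sketch below tallies votels per (already computed) consensus interval and picks the ensemble interval, in the spirit of the analysis in Table 3.1; the interval boundaries and votel positions used here are hypothetical inputs.

// Illustrative sketch: given votel x-positions and consensus intervals
// (projected consensus regions), count n_kj per interval and select the
// ensemble consensus interval as the one with the most votels.
#include <vector>
#include <cstdio>

struct Interval { double lo, hi; };

int main() {
    // Hypothetical data in the spirit of Table 3.1 (units: pixels).
    std::vector<Interval> regions = {{52, 94}, {139, 180}, {236, 288}};
    std::vector<double> votelX = {60, 70, 75, 90, 150, 160, 240, 250, 260, 270, 280};

    std::vector<int> count(regions.size(), 0);          // n_kj
    for (double x : votelX)
        for (size_t j = 0; j < regions.size(); ++j)
            if (x >= regions[j].lo && x <= regions[j].hi) { ++count[j]; break; }

    size_t best = 0;                                    // ensemble region C_k*
    for (size_t j = 1; j < regions.size(); ++j)
        if (count[j] > count[best]) best = j;           // ties broken arbitrarily

    for (size_t j = 0; j < regions.size(); ++j)
        std::printf("interval %zu: [%.0f, %.0f]  votes = %d\n",
                    j + 1, regions[j].lo, regions[j].hi, count[j]);
    std::printf("ensemble consensus interval: %zu\n", best + 1);
    return 0;
}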
3.5.3 Collaboration Metric
To what degree are voters collaborating? We define a measure of collaboration based on the density of votels in each consensus region. For consensus region j in voting image k, define the votel density ratio as

    D_{kj} = \frac{d_{kj}}{d_k} = \frac{n_{kj}/a_{kj}}{N_k/A} = \frac{n_{kj}}{N_k} \cdot \frac{A}{a_{kj}},

where d_kj is the votel density (votes per unit area) for consensus region j, d_k is the overall average votel density for voting image k, n_kj is the number of votels in consensus region j, a_kj is the area (or width) of consensus region j, N_k is the total number of votes, and A is the area (or width) of the voting image. This metric is proportional to the ratio n_kj/N_k and inversely proportional to the area of the consensus region. The metric is high when many votes are concentrated in a small consensus region and low when votes are spread uniformly among multiple consensus regions. We can also compute an overall collaboration level for voting image k:

    D_k = \frac{\sum_j n_{kj}}{N_k} \cdot \frac{A}{\sum_j a_{kj}},

which reduces to A / \sum_j a_{kj} when every votel falls inside some consensus region, and which measures how focused the votels are.
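The density ratios in Tables 3.1 and 3.2 can be reproduced directly from this definition; the sketch below computes D_kj and the overall level D_k from the per-region counts and interval widths, with the voting-image width A assumed to be 320 pixels (the capture resolution mentioned earlier).

// Illustrative sketch: votel density ratio D_kj = (n_kj / N_k) * (A / a_kj)
// and overall collaboration level D_k = (sum n_kj / N_k) * (A / sum a_kj).
#include <vector>
#include <cstdio>

int main() {
    // Data from Table 3.1; A is assumed to be the voting image width (320 px).
    const double A = 320.0;
    std::vector<int>    votes = {8, 5, 14};     // n_kj
    std::vector<double> width = {42, 51, 52};   // a_kj (interval widths)
    const int Nk = 27;                          // total number of votes

    double sumVotes = 0, sumWidth = 0;
    for (size_t j = 0; j < votes.size(); ++j) {
        double Dkj = (static_cast<double>(votes[j]) / Nk) * (A / width[j]);
        std::printf("region %zu: D_kj = %.2f\n", j + 1, Dkj);
        sumVotes += votes[j];
        sumWidth += width[j];
    }
    double Dk = (sumVotes / Nk) * (A / sumWidth);   // overall collaboration
    std::printf("overall D_k = %.2f\n", Dk);
    return 0;
}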
Table 3.2. SDV analysis for another voting image

C_jk     Interval     Width   #Votes   D_kj
1        [44, 84]     40      10       2.35
2        [141, 168]   27      6        3.32
3        [223, 283]   60      16       2.51
Overall  -            127     32       2.37
Table 3.2 gives results for another voting image. The collaboration measure for each consensus region is given in the last column of Tables 3.1 and 3.2. In Table 3.2, the data suggest that users are collaborating in a focused manner to vote for consensus interval 2, even though it has fewer votes than consensus interval 3.
3.6 Online Field Tests

We performed a half-hour field test on November 8, 2001, with 25 7th-grade students from Dolores Huerta Middle School, who used the Tele-Actor to visit the UC Berkeley Microlab and learn how microchips are made. The Microlab is located on the fourth floor of Cory Hall at UC Berkeley. Microchip fabrication requires a clean-room environment, and hazardous materials are used in the fabrication process, so it is usually difficult to arrange a student field trip to such an environment. Our Tele-Actor, who is well trained and aware of the safety and security issues, was directed by the 7th-grade students to explore the lab.

A second field test was conducted on July 25, 2002, with 26 9th-grade students visiting a biotechnology lab at the Lawrence Berkeley National Laboratory. This was part of the Robot-Clone-Human high-school curriculum project involving UC Berkeley's Alpha Lab, the Interactive University Project, and the San Francisco Unified School District. This project is developing a teaching module geared toward high school biology students to learn what biotechnology is, how robots work, and how robots are used in biotech.

The third field test was conducted on October 23, 2002, with 23 students from a UC Berkeley Mechanical Engineering graduate course, who used the Tele-Actor interface to visit the UC Berkeley Microlab to learn details about the wafer manufacturing process.
3.7 Conclusions

This chapter describes a CTRC system that allows groups of participants to collaboratively explore remote environments. We propose two innovations: the SDV, a networked interface for collecting spatial inputs from many simultaneous users, and the "Tele-Actor," a skilled human with cameras and microphones who navigates and performs actions in the remote environment based on
this input. We presented the system architecture, the interface, experiments, and a metric that assesses the collaboration level. Collaborative teleoperation systems will benefit from advances in broadband Internet, local wireless digital standards (802.11x), video teleconferencing standards, and gigahertz processing capabilities at both client and server. We will report efficient algorithms for consensus identification and "scoring" to motivate user interaction in Part II. To learn more about our system, please visit: www.tele-actor.net.
3.8 Closure

We have now seen the system elements of two different CTRC systems. It is not difficult to identify their common structure: both use collective decision-making to control a single robotic device. However, the underlying problems have not yet been addressed: 1) how to combine the user inputs into a sensible single control stream for the single robotic camera, and 2) how to ensure that users have an incentive to participate in the collective decision-making process, and how to relate that incentive to the quality of the group decision. In other words, how can a collective decision be formed computationally and efficiently? This is the core of Part II.
4 Exact Frame Selection Algorithms for Agile Satellites⋆
A core problem in any collaborative teleoperation system is how to combine competing requests into a single control stream computationally and efficiently. We note that a group decision-making process contains both collaborative and competitive elements, so we need a computational framework that can encapsulate both. Looking at group decision processes in human society, we find two important types: those with centralized coordination and those without. If we treat the two types as the ends of a spectrum, most group decision processes fall somewhere in between.

The first route we explore is the centralized approach. We assume that there is a centralized decision-maker who can take all inputs from the group and make the final decision. An unbiased decision-maker should optimize the overall utility, and hence the collective decision process becomes an optimization process. This allows us to adopt an optimization framework in algorithm design. When the telerobot is a robotic pan-tilt-zoom camera, user requests and camera output can be represented by geometric objects such as rectangles; the CTRC control problem therefore becomes a geometric resource allocation problem. Due to the complexity of the problem, an exact algorithm cannot run in real time (a fraction of a second) for a large number of requests. However, there are applications where real-time computation is not required. For example, a camera on an agile satellite usually plans its coverage hours, if not days, ahead of its imaging time. The exact solution is favorable in this application because it maximizes the potential utility of the image frame.
4.1 Introduction

A traditional imaging satellite like SPOT 5 can only passively scan a narrow land belt directly underneath its orbit, like a 1D scanner.
This chapter was presented in part at the 2004 IEEE International Conference on Robotics and Automation (ICRA), New Orleans, LA [172] and in part in the IEEE Transactions on Automation Science and Engineering [174].
Fig. 4.1. The Satellite Frame Selection (SFS) problem: each time window defines the satellite's possible field of view, which is described by the map in (a). Client requests for images are shown as dashed rectangles. Given a set of requests, the objective is to compute the satellite frame that optimizes the coverage-resolution metric. The solution in this case is illustrated with a solid rectangle, which yields the satellite image in (b).
The time between two consecutive visits over the same spot is around 16 ∼ 44 days, depending on the orbit parameters. The long inter-visit duration makes it difficult to respond to time-critical tasks. However, recent developments in multi-axis satellite platforms allow a new generation of commercial satellites to perform pitch and roll operations, which allow their imaging sensors to access a large region. This capability not only allows the satellite to selectively shoot regions of interest but also dramatically reduces the inter-visit duration. For example, the French PLEIADES satellite can
visit the same location in less than one day. Such satellites are usually called agile satellites. Agile satellites produce Near Real Time (NRT) images, which makes them more suitable for time-critical applications like weather prediction, disaster response, search and rescue, surveillance, and defense. Modern remote sensing technology also allows the satellite to shoot images at different resolutions, trading off the region of coverage against the resolution. Together, these technologies enable commercial satellites to shoot a selected region at a desired resolution within a large field of view. In other words, whereas a traditional satellite is more like a fixed 1D scanner, a multi-axis satellite with variable resolution is more like a pan-tilt-zoom camera. For ease of understanding, we will use "camera" to refer to the imaging sensors and "pan, tilt, and zoom" to describe agile satellite motions.

Although more powerful, the combination of a multi-axis platform and variable resolution also poses a new challenge in satellite planning and scheduling. Unlike the typical job shop scheduling problem, the planning method for the new satellites must involve both spatial and temporal dimensions. In this chapter, we discretize the time dimension and develop a frame selection algorithm for a single time slice: at any given time, the camera's field of view is restricted to a zone on the Earth's surface. During each time window, a number of client requests for images are pending, and only one image can be captured. We consider the problem of automatically selecting pan, tilt, and zoom parameters to capture images that maximize reward.

The Satellite Frame Selection problem is illustrated in Figure 4.1. The camera image frame is a rectangle with a fixed aspect ratio. The input is a set of n iso-oriented rectangular regions requested by users. We propose a reward metric based on how closely a requested viewing zone compares with a candidate satellite image frame: the metric is proportional to the intersection of the candidate frame and the requested viewing zone and to the ratio of the resolutions of the candidate and the request. Satellite Frame Selection is a non-linear optimization problem; we exploit the structure of the problem to develop polynomial-time algorithms that compute the exact solution for cameras with discrete and continuous resolution levels.
4.2 Related Work

Satellite Frame Selection is related to problems in job scheduling, facility location, spatial databases, videoconferencing, and teleoperation. The Satellite Space Mission problem (SM) [80] is to select and schedule a set of jobs on a satellite. Each candidate job has a fixed duration, an available time window, and a weight. The goal is to select a feasible sequence of jobs that maximizes the sum of weights. This combinatorial optimization problem is known to be NP-hard. Recent research [19, 60, 84, 194] on the SM problem and its variations focuses on developing exact and approximate methods using numerical techniques such as column generation, Tabu search, and genetic algorithms.
Lemaitre et al. [113] study a related problem for the Earth Observing Satellite (EOS), which has a three-axis robotic camera that can be steered during each time window. Given a set of requested zones, they consider the problem of finding a trajectory for the camera that will maximally cover the requested zones (they do not consider variations in zoom/resolution). Their coverage problem is analogous to planning optimal routes for lawn mowers and vacuum cleaners [61]. Researchers have proposed greedy algorithms, dynamic programming algorithms, and methods based on constraint programming and local search. In our model, the time window is shorter and the objective is to servo the camera to a single optimal position with an optimal zoom/resolution setting.

The structure of the SFS problem is related to the planar p-center problem, which Megiddo and Supowit [131] showed to be NP-complete. Given a set of point demand centers in the plane, the goal is to optimally locate p service centers that minimize the worst-case travel distance between client and server. Using a geometric approach, Eppstein [50] found an algorithm for the planar 2-center problem that runs in O(n log^2 n) time. Halperin et al. [81] gave an algorithm for the 2-center problem with m obstacles that runs in randomized expected time O(m log^2(mn) + mn log^2 n log(mn)).

The SFS problem is also related to "box aggregation" querying in spatial database research [205]. The spatial objects could be points, intervals, or rectangles. Aggregation over points is a special case of the orthogonal range search queries from computational geometry; Agarwal and Erickson [2] provide a review of geometric range searching and related topics. Grossi and Italiano [77, 78] proposed the cross-tree data structure, a generalized version of a balanced tree, to speed up range search queries in high-dimensional space. The continuity of the solution space of our problem makes it impossible to simply evaluate a fixed set of candidate frames through queries.
4.3 Problem Definition

In this section we formalize the SFS problem using a reward metric based on satellite image resolution.

4.3.1 Inputs and Assumptions
Scheduling for different satellites

As illustrated in Figure 4.2, the camera on a typical satellite orbits the Earth at a speed of more than 7 km per second. An agile satellite platform usually has a reflection mirror that moves under pitch and roll control, which directs a rectangular region of the Earth into the camera's field of view. The roll motion acts like the pan motion of a normal pan-tilt-zoom camera, and the pitch motion acts like the tilt motion. The platform and sensor technologies determine the variables available when selecting zones for satellite imaging.
Fig. 4.2. Agile satellite with its accessible region and frame definition (orbit, roll and pitch motions, Earth surface, swath width, and scan length)

Table 4.1. Scheduling methods for different satellites

Type         Roll  Pitch  Resolution  Planning         Representative Satellites
Fig. 4.3(a)  ×     ×      Fixed       ×                SPOT5
Fig. 4.3(b)  √     ×      Fixed       Trajectory only  RADARSAT-2
Fig. 4.3(c)  √     √      Fixed       Frame Selection  AEOS, PLEIADES, IKONOS
Fig. 4.3(d)  √     √      Variable    Frame Selection  TerraSAR-X
Figure 4.3 illustrates the different capabilities of four types of satellites, which are further summarized in Table 4.1. Our frame selection algorithm targets the agile satellites of Figures 4.3(c) and 4.3(d).

Definition of a satellite frame

Figures 4.3(c) and 4.3(d) also show that an optimal set of satellite frames should consist of k > 1 frames. For example, PLEIADES can take k = 20 disjoint frames when it flies over a 1000 km × 1000 km accessible region. Finding an optimal k-frame solution could be very difficult. However, if we can solve the special case k = 1, we can attack the k-frame problem using dynamic programming or other heuristic methods. In this chapter, we focus on the special case k = 1 for a given time window T. Since most satellites cannot perform yaw rotation, a satellite frame always has a pair of edges parallel to the orbit; we call such candidate frames "iso-oriented" frames. We define a satellite frame to be a rectangle c = [x, y, z, w, l], where [x, y] ∈ R_a specifies the center point of the frame with respect to the accessible region R_a and z specifies the resolution of the frame. The pair x, y determines the pitch and roll angles. Setting z = 10 meters means that a pixel in the image covers an area of 10 × 10 square meters; a bigger z-value means lower image resolution but bigger coverage. The attainable resolution set is Z, so z ∈ Z.
Fig. 4.3. Scheduling for different satellites: (a) a satellite without pitch and roll capability can only scan a narrow land belt underneath its orbit; (b) a satellite with roll motion can scan along a predefined trajectory; (c) a satellite with pitch and roll capability can shoot regions of interest; and (d) a satellite with pitch, roll, and variable resolution can shoot regions of interest at different resolutions (low resolution yields larger coverage)
As shown in Figure 4.2, the swath width w and the scan length l of the frame determine the size of the frame. They are functions of the resolution z for a given time window T, w = w(z) and l = l(z), and depend on the type of satellite sensor:

• Push broom sensor: the 1D camera scans parallel to the orbit. In this case, the swath width is w(z) = αz + β, where α and β are constant factors. For a 1D camera with fixed swath width, α = 0. For variable swath width, β = 0, because w(z) = 0 if z = 0: if the camera could resolve infinitely small detail, the swath width would have to be infinitely small, since the camera has only a finite number of pixels. The scan length is l(z) = η_z T, where the scanning speed η_z is also a linear function of z, meaning we can scan faster when the required image resolution is low. If the scanning speed cannot be adjusted, then l(z) = ηT is constant for the fixed T.

• Whisk broom sensor: the 1D camera scans perpendicular to the orbit. In this case, we can simply swap the w(z) and l(z) functions defined for push broom sensors.

• Frame camera: this is a 2D non-scanning camera like a regular camera; an example is the TK-350 camera on the Russian KOSMOS. Define α1, β1, α2, and β2 to be constants. In this case, w(z) = α1 z for variable resolution or w(z) = β1
for fixed resolution, which is the same as the w(z) for a push broom sensor; l(z) = α2 z or l(z) = β2 has the same form with different constants.

The definitions of l(z) and w(z) show that we have to balance the area of coverage against the resolution, because the satellite can only process a limited amount of information in a fixed time window. We can now see that w(z) = α1 z and l(z) = α2 z, the frame camera with variable resolution, is the generic formulation; the other cases are degenerate special cases with at least one side of the frame constant. If we can solve the problem for the frame camera with variable resolution, the other cases can also be solved with appropriate simplifications. In the rest of the chapter, we focus on this generic formulation. The camera frame is then a rectangle with a fixed aspect ratio (α2 : α1); we use α2 : α1 = 4 : 3 in the rest of the chapter. The camera frame c = [x, y, z, w, l] is thus reduced to a triple c = [x, y, z]. For example, if a frame has α2 = 1333 and α1 = 1000, then the area of the frame is 1000 · 1333 · z^2.

Definition of user requested viewing zones

For a given time window, we consider n requested viewing zones. The i-th request, 1 ≤ i ≤ n, is a rectangle r_i = [x_i, y_i, w_i, l_i, z_i, u_i], where [x_i, y_i] ∈ R_a specifies the center point with respect to the accessible region, w_i and l_i are the width and length of the requested rectangle, z_i is the desired resolution, and u_i is the utility of the request, which describes how much the client is willing to pay for the requested viewing zone; it is also the maximum reward associated with the request. We assume that all requested viewing zones (not necessarily with a 4:3 aspect ratio) have a pair of edges parallel to the orbit; they are therefore iso-oriented with the satellite frame.

Effect of image distortion

Given a set of n requested viewing zones, the objective is to compute a single frame c* that yields the maximum total reward. The solution space is Φ = R_a × Z = {[x, y, z] | [x, y] ∈ R_a, z ∈ Z}. We consider two cases: 1) Z is a finite discrete set, and 2) Z is a continuous set. However, the set Z may not be uniform across R_a due to image distortion. As shown in Figure 4.2, the distance between the satellite and the ground is shortest when the imaging spot is directly underneath the satellite, which yields the best resolution, called the nadir resolution. When the spot is off nadir, the resolution decreases; for example, the QuickBird 2 satellite has a nadir resolution of 2.5 m and a 30° off-nadir resolution of 2.9 m. This effect actually shrinks the resolution set in the solution space, and since we are developing search algorithms for the optimal frame, a smaller solution space reduces the search time. Our algorithms search for the satellite frame with maximum reward, which is defined in the following subsection.
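For concreteness, the problem data can be represented with a pair of small structures. The sketch below mirrors the notation above (c = [x, y, z], r_i = [x_i, y_i, w_i, l_i, z_i, u_i]) and derives the frame size from the resolution for the generic frame-camera case w(z) = α1 z, l(z) = α2 z; the numeric constants are illustrative assumptions, not values from the book.

// Illustrative data structures for the SFS problem, following the notation
// above: a candidate frame c = [x, y, z] (generic frame camera, so its size
// is w(z) = alpha1 * z and l(z) = alpha2 * z) and a requested viewing zone
// r_i = [x_i, y_i, w_i, l_i, z_i, u_i]. Constants are illustrative only.
#include <cstdio>

const double kAlpha1 = 1000.0;   // assumed: frame width factor, w(z) = alpha1 * z
const double kAlpha2 = 1333.0;   // assumed: frame length factor, l(z) = alpha2 * z

struct CandidateFrame {
    double x, y;       // center in the accessible region R_a
    double z;          // resolution (meters per pixel); larger z = coarser
    double width()  const { return kAlpha1 * z; }   // w(z)
    double length() const { return kAlpha2 * z; }   // l(z)
};

struct RequestedZone {
    double x, y;       // center
    double w, l;       // width and length
    double z;          // desired resolution
    double u;          // utility (maximum reward the client will pay)
};

int main() {
    CandidateFrame c{0.0, 0.0, 10.0};          // z = 10 m per pixel
    RequestedZone  r{2000.0, 1000.0, 5000.0, 8000.0, 8.0, 100.0};
    std::printf("frame covers %.0f m x %.0f m; request wants %.0f m x %.0f m at %.0f m\n",
                c.width(), c.length(), r.w, r.l, r.z);
    return 0;
}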
4.3.2 Reward Metric
We need to evaluate a candidate satellite frame with respect to the user requests. We define a reward metric based on two sensible assumptions: for a given candidate satellite frame, the reward must depend on how much it overlaps the user requests and on how good its resolution is. The former gives a coverage discount on the reward and the latter a resolution discount.

Coverage discount

Recall that r_i is the i-th requested viewing zone and that the corresponding client has utility u_i for this region. Define s_i as the reward from the i-th request, and let c = [x, y, z] be a candidate camera frame. If r_i is fully covered by c, i.e., r_i ⊆ c, and the desired resolution is obtained, i.e., z_i ≥ z, then s_i = u_i. If the resolution requirement is satisfied but the coverage is partial, then the reward is discounted by a coverage ratio:

    s_i = u_i \frac{Area(r_i ∩ c)}{Area(r_i)}.
Resolution discount

Define the resolution ratio γ_i = z_i / z. If γ_i < 1 (the resolution requirement is not satisfied), then the reward should be discounted by a resolution discount factor f_d(γ_i). Hence,

    s_i = u_i \frac{Area(r_i ∩ c)}{Area(r_i)} f_d(γ_i).                     (4.1)

As illustrated in Figure 4.4(a), the resolution discount function f_d(γ_i) has to satisfy the following conditions:

    0 ≤ f_d(γ_i) < 1 when γ_i < 1,
    f_d(γ_i) = 1 when γ_i ≥ 1,
    \frac{d f_d(γ_i)}{d γ_i} ≥ 0.                                           (4.2)

It is an increasing function of γ_i because an image has more value as resolution increases. We call this reward metric the Coverage-Resolution Ratio (CRR). This general definition of the CRR metric covers a large family of metrics.

Sample CRR metrics

In Section 4.3.1, we mentioned that we plan to solve the SFS problem for two cases: 1) Z is a finite discrete set, and 2) Z is a continuous set. For case 1), the resolution function f_d does not need to be continuous; it could be a table of values for different resolution levels. Table 4.2 gives a sample f_d based on pricing data from the RADARSAT-1 satellite on 12/18/03 [91].
Table 4.2. Sample f_d for discrete z and z_i = 8 m

z       Price per km²   f_d
8 m     $1.200          1.000
25 m    $0.275          0.229
30 m    $0.133          0.111
50 m    $0.033          0.028
100 m   $0.012          0.010
For case 2), we assume f_d is a continuous function. In the rest of the chapter, we use the f_d in (4.3) to walk through the algorithm development; readers can adapt the algorithm to different f_d functions as long as f_d is composed of elementary functions:

    f_d(γ_i) = min{γ_i^b, 1}.                                               (4.3)

The CRR now becomes

    s_i(c) = u_i \frac{Area(r_i ∩ c)}{Area(r_i)} \min\left\{\left(\frac{z_i}{z}\right)^b, 1\right\}.   (4.4)

The exponential discount factor b determines how resolution affects reward. Figure 4.4(b) shows two cases: b = 1 and b = ∞. The case b = ∞ corresponds to a scenario in which users will not accept any image with a resolution lower than requested. We use b = 1 as the default setting for numerical examples in the rest of the chapter. Given n requests, the total reward is the sum of the individual rewards,

    s(c) = \sum_{i=1}^{n} s_i(c).                                           (4.5)
Our objective is to find c* = arg max_c s(c), the frame that maximizes the total reward.

Fig. 4.4. The resolution discount function f_d(γ_i): (a) the general form; (b) the two cases b = 1 and b = ∞
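Evaluating the CRR reward (4.4)-(4.5) for a given candidate frame is straightforward; the sketch below does so for the continuous-resolution case with f_d(γ) = min{γ^b, 1}. The aspect-ratio constants and example data are assumptions, and this is only an evaluation routine, not the exact optimization algorithm developed later in the chapter.

// Illustrative evaluation of the CRR reward (4.4)-(4.5) for one candidate
// frame: s_i = u_i * Area(r_i ∩ c)/Area(r_i) * min{(z_i/z)^b, 1}, summed over
// all requests. Frame size follows w(z) = alpha1*z, l(z) = alpha2*z (assumed).
#include <vector>
#include <algorithm>
#include <cmath>
#include <cstdio>

struct Frame   { double x, y, z; };                       // candidate c
struct Request { double x, y, w, l, z, u; };              // r_i

static double overlap(double c1, double len1, double c2, double len2) {
    // 1-D overlap of two intervals given by center and length.
    return std::max(0.0, std::min(c1 + len1 / 2, c2 + len2 / 2) -
                         std::max(c1 - len1 / 2, c2 - len2 / 2));
}

double totalReward(const std::vector<Request>& reqs, const Frame& c,
                   double alpha1 = 1000.0, double alpha2 = 1333.0, double b = 1.0) {
    double wz = alpha1 * c.z, lz = alpha2 * c.z;          // w(z), l(z)
    double s = 0.0;
    for (const Request& r : reqs) {
        double inter = overlap(c.x, lz, r.x, r.l) * overlap(c.y, wz, r.y, r.w);
        double coverage = inter / (r.w * r.l);            // coverage discount
        double fd = std::min(std::pow(r.z / c.z, b), 1.0);// resolution discount
        s += r.u * coverage * fd;                         // s_i, eq. (4.4)
    }
    return s;                                             // s(c), eq. (4.5)
}

int main() {
    std::vector<Request> reqs = {
        {0, 0, 4000, 6000, 8, 100}, {1500, 500, 2000, 2500, 20, 50}};
    Frame c{500, 200, 10};
    std::printf("total CRR reward s(c) = %.2f\n", totalReward(reqs, c));
    return 0;
}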
4.3.3 Properties of the CRR Reward Metric
In this section, we explore the properties of the CRR metric, which will help us develop algorithms later. The CRR metric s is non-smooth and piecewise linear in both x and y. For convenience we write s(x, y, z) instead of s(c) with c = [x, y, z].

Separability

According to (4.1), the choice of [x, y] does not affect the value of the resolution discount function f_d. This property allows us to reduce the problem from one high-dimensional problem to multiple low-dimensional problems.

Nonsmoothness

Recall that Area(r_i) = w_i l_i. To study the properties of the reward function, we first treat z as a constant, so that Area(c ∩ r_i) = p_i(x, y) is a function of (x, y). The objective function defined by (4.1) then becomes a function of the center point of the candidate frame,

    s(x, y) = \sum_{i=1}^{n} ω_i p_i(x, y),                                 (4.6)
where

    ω_i = \frac{u_i f_d(γ_i)}{w_i l_i}                                      (4.7)

is a constant for each user. Since p_i(x, y) is the area of the intersection of the i-th requested viewing zone and the candidate frame (x, y, z), its maximum value is min(Area(c), Area(r_i)). This property means that the shape of user i's satisfaction function is plateau-like. With z fixed, the objective function s_i(c) becomes a function s_i(x, y) of the center point of the candidate frame. Figure 4.5 shows the shape of s_i(x, y) given z < z_i and a candidate frame c smaller than r_i. Note that s_i is non-differentiable with respect to x and y, so we cannot use derivative-based approaches to solve this problem.

Piecewise linearity in x and y

Since all requested viewing zones and the candidate frame are iso-oriented rectangles, the shape of any intersection between them is also a rectangle with its edges parallel to either the x axis or the y axis. Thus the term p_i(x, y) in (4.6) is either 0 or the area of the rectangle formed by the intersection of r_i and c = [x, y, z]. This yields a nice property: p_i(x, y) is piecewise linear with respect to x if we fix y, and piecewise linear with respect to y if we fix x. Since the total reward function s(x, y) is a linear combination of p_i(x, y), i = 1, ..., n, it has the same property. Figure 4.6 shows an example with two requested viewing zones.
Fig. 4.5. The CRR reward function s_i(x, y) for a given candidate frame: (a) the i-th requested viewing zone r_i; (b) top view of s_i(x, y); (c) 3D shape of s_i(x, y); (d) front view of s_i(x, y); (e) side view of s_i(x, y). Assuming z_i > z, l(z) ≤ l_i, and w(z) ≤ w_i, we can move the candidate frame (gray rectangle in (a)) around r_i to observe how s_i(x, y) changes. The function is plateau-like with a maximum height of u_i Area(c ∩ r_i)/Area(r_i). It consists of 5 planar surfaces and 4 quadratic surfaces at the corners.
4.3.4 Comparison with "Similarity Metrics"
Symmetric Difference (SD) and Intersection Over Union (IOU) are standard "similarity metrics" used in pattern recognition as a measure of how similar two shapes are [25, 23, 195]. In our case, for a requested viewing zone r_i and a candidate frame c, the SD metric would be

    SD = \frac{Area(r_i ∪ c) − Area(r_i ∩ c)}{Area(r_i ∪ c)},

and the intersection-over-union metric would be

    IOU = \frac{Area(r_i ∩ c)}{Area(r_i ∪ c)} = 1 − SD.
Fig. 4.6. The combined reward function for two users: (a) top view of s_1 and s_2; (b) the piecewise linear objective function s(y) at x = x̃_4; (c) 3D view of the objective functions. The ordered sets {ỹ_k} and {x̃_k}, k = 1, ..., 8, correspond to the horizontal and vertical edges of the plateaus. Note that ỹ_4 and ỹ_5 overlap in this case.
Compared with IOU, our Coverage-Resolution Ratio (CRR) metric has similar properties:

• IOU and CRR attain their minimum value of 0 if and only if c ∩ r_i = ∅,
• both attain their maximum value if and only if c = r_i,
• both are proportional to the area of c ∩ r_i, and
• both depend—albeit differently—on the sizes of c and r_i.

The key differences between CRR and these metrics are:
• the SD and IOU metrics are not piecewise linear in x or y,
• it is hard to extend SD or IOU to arbitrarily-shaped requested viewing zones because they become non-normalized in such cases, and
• the SD and IOU metrics do not take resolution into account.
4.4 Algorithms

In this section, we start with the definition of two geometric events, "base vertices" and "plateau vertices", and show how these geometric events lead to new algorithms for two versions of the Satellite Frame Selection problem. Subsection 4.4.2 describes an algorithm for the case where the resolution is restricted to a set of discrete values; in Subsection 4.4.3 we allow the resolution to vary continuously.
4.4.1 Base Vertices and Plateau Vertices
Base vertex

As illustrated in Figure 4.7(b), we define a base vertex as an intersection of two extended edges of the original rectangular requested viewing zones. As shown in Figure 4.5(a), the extended edges for r_i are:

• Vertical:

    x̂_{2i−1} = x_i − 0.5 l_i,   x̂_{2i} = x_i + 0.5 l_i                      (4.8)

• Horizontal:

    ŷ_{2i−1} = y_i − 0.5 w_i,   ŷ_{2i} = y_i + 0.5 w_i                      (4.9)

For n requested viewing zones, there are O(n^2) base vertices. Since the definition of a base vertex does not depend on resolution, each base vertex can be represented by a two-dimensional vector in the (x, y) plane.
• Vertical:
x̃_{4i−3} = x̂_{2i−1} − .5l(z), x̃_{4i−2} = x̂_{2i−1} + .5l(z), x̃_{4i−1} = x̂_{2i} − .5l(z), x̃_{4i} = x̂_{2i} + .5l(z).   (4.10)
• Horizontal:
ỹ_{4i−3} = ŷ_{2i−1} − .5w(z), ỹ_{4i−2} = ŷ_{2i−1} + .5w(z), ỹ_{4i−1} = ŷ_{2i} − .5w(z), ỹ_{4i} = ŷ_{2i} + .5w(z).   (4.11)
For a fixed resolution z and n requested viewing zones, there are n plateaus. Computing the plateau boundaries for each requested viewing zone, we obtain a set of vertical boundaries X̃ = {x̃_{4i−3}} ∪ {x̃_{4i−2}} ∪ {x̃_{4i−1}} ∪ {x̃_{4i}}, i = 1, ..., n, and a set of horizontal boundaries Ỹ = {ỹ_{4i−3}} ∪ {ỹ_{4i−2}} ∪ {ỹ_{4i−1}} ∪ {ỹ_{4i}}, i = 1, ..., n. We define a plateau vertex as an intersection between any two boundaries, including intersections of facet boundaries induced by a single plateau as well as by two distinct plateaus. Unlike the definition of a base vertex, a plateau vertex refers to an intersection between the real boundaries of the plateaus rather than the extended boundaries. Since all plateaus are iso-oriented, one of the boundaries is horizontal and the other is vertical. A plateau vertex can be represented by a three-dimensional vector (x̃, ỹ, z) with x̃ ∈ X̃ and ỹ ∈ Ỹ. For n requested viewing zones and m fixed resolutions, there are O(mn²) plateau vertices. Figure 4.7(a) shows an example of plateau vertices for two requested viewing zones.

Relationship between base vertex and plateau vertex

Equations (4.8)–(4.11) and Figure 4.7 show the relationship between plateau vertices and base vertices, which is also described by Lemma 1.

Lemma 1 (Base Vertex/Plateau Vertex Relation). A candidate frame that is a plateau vertex must have one corner coinciding with a base vertex.

Although we can find a corresponding base vertex for each plateau vertex, they are not equivalent, for three reasons.

• The notion of plateau vertex is only applicable to cases where the resolution of the candidate frame is discrete and finite; the notion of base vertex applies to cases where the resolution is either discrete or continuous.
• Not all base vertices have corresponding plateau vertices. For example, the top left base vertex in Figure 4.7(b) has no corresponding plateau vertex. In fact, a base vertex that does not overlap with any real edge of a requested viewing zone does not have a corresponding plateau vertex.
• For m discrete resolutions, a base vertex has O(m) corresponding plateau vertices. A base vertex has at most four corresponding plateau vertices for a fixed resolution. The position of the base vertex is invariant to resolution by definition.
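The constructions in (4.8)–(4.11) reduce to a few additions per request. The sketch below (with illustrative data; it is not the chapter's Java implementation) enumerates the extended edges, the O(n²) base vertices, and the plateau facet boundaries for one fixed resolution z, assuming l(z) = 4z and w(z) = 3z.

```cpp
#include <cstdio>
#include <vector>

// Requested viewing zone r_i = [x_i, y_i, w_i, l_i, z_i, u_i]; only the geometry is needed here.
struct Request { double x, y, w, l; };

int main() {
    std::vector<Request> req = {{10, 8, 6, 8}, {16, 12, 4, 10}};   // illustrative requests
    double z = 1.5, lz = 4 * z, wz = 3 * z;                        // candidate frame size l(z), w(z)

    std::vector<double> xhat, yhat;   // extended edges, (4.8) and (4.9)
    std::vector<double> xtil, ytil;   // plateau facet boundaries, (4.10) and (4.11)
    for (const Request& r : req) {
        double xl = r.x - 0.5 * r.l, xr = r.x + 0.5 * r.l;   // left/right extended edges
        double yb = r.y - 0.5 * r.w, yt = r.y + 0.5 * r.w;   // bottom/top extended edges
        xhat.push_back(xl); xhat.push_back(xr);
        yhat.push_back(yb); yhat.push_back(yt);
        // Each extended edge spawns two plateau boundaries at distance l(z)/2 or w(z)/2.
        xtil.push_back(xl - 0.5 * lz); xtil.push_back(xl + 0.5 * lz);
        xtil.push_back(xr - 0.5 * lz); xtil.push_back(xr + 0.5 * lz);
        ytil.push_back(yb - 0.5 * wz); ytil.push_back(yb + 0.5 * wz);
        ytil.push_back(yt - 0.5 * wz); ytil.push_back(yt + 0.5 * wz);
    }

    // Base vertices: every pairing of a vertical extended edge with a horizontal one, O(n^2) of them.
    int count = 0;
    for (double bx : xhat)
        for (double by : yhat) { std::printf("base vertex (%.2f, %.2f)\n", bx, by); ++count; }
    std::printf("%d base vertices, %zu vertical and %zu horizontal plateau boundaries\n",
                count, xtil.size(), ytil.size());
    return 0;
}
```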
Fig. 4.7. Illustration of the relationship between "plateau vertices" and "base vertices", with a geometric interpretation for two requested viewing zones and a candidate frame with fixed resolution. A "plateau vertex" is a candidate frame that has one of its corners coinciding with a "base vertex".
Optimality conditions

Plateau vertices and base vertices help us find the optimal solution of the optimization problem defined in (4.5) for the discrete resolution case and the continuous resolution case, respectively.

Lemma 2 (Base Vertex Optimality Condition). At least one optimal frame has one corner coinciding with a base vertex.

Proof. Let c∗ = [x∗, y∗, z∗] be an optimal solution. If we fix z = z∗, we get s(x, y) as a summation of plateaus. As discussed earlier, for a fixed z and x, the objective function s(y) is piecewise linear, so the optimum must be at a vertex y = ỹ such that s(x∗, ỹ, z∗) = s(x∗, y∗, z∗). We also know that the line y = ỹ in the (x, y) plane is one of the horizontal facet boundaries of the plateaus. Similarly, we can find another optimal frame [x̃, ỹ, z∗], where the line x = x̃ is one of the vertical facet boundaries of the plateaus. Therefore, the optimal frame [x̃, ỹ, z∗] is centered at a plateau vertex (x̃, ỹ) for the fixed resolution z = z∗. Applying Lemma 1, we know that the optimal frame [x̃, ỹ, z∗] must have one corner at one of the base vertices.

Using the Base Vertex Optimality Condition (BVOC), we can restrict attention to frames that have one corner coinciding with a base vertex, thereby reducing the dimensionality of the problem. The BVOC holds whether the resolution variable is discrete or continuous. However, it is more convenient to use plateau vertices when the resolution variable is discrete. The BVOC can be transformed into the following Plateau Vertex Optimality Condition.

Lemma 3 (Plateau Vertex Optimality Condition). When the resolution variable is discrete, at least one optimal frame coincides with a plateau vertex.
Proof. From the proof of Lemma 2, we know that we can find an equivalent optimal solution [x̃, ỹ, z∗] from a given optimal solution c∗ = [x∗, y∗, z∗]. We also know that (x̃, ỹ) is an intersection of two facet boundaries. For the discrete resolution case, z∗ has to be one of the discrete resolutions in the solution space. Then the point [x̃, ỹ, z∗] is one of the plateau vertices.

4.4.2 Algorithms for Discrete Resolutions
A satellite camera may have a discrete set of m resolution levels. The algorithms in this subsection find the best position and resolution parameters for this case.

Brute force approach

Based on Lemma 3, we can solve the optimization problem by simply checking all combinations of resolution levels and corresponding plateau vertices. We evaluate the objective function for each of the O(n²) plateau vertices and repeat this for each of the m resolution levels. It takes O(n) time to evaluate a candidate frame c. Therefore, the brute force algorithm runs in O(n³m) time.

Efficient traversal of plateau vertices

For n requested viewing zones, we have 4n horizontal plateau facet boundaries {ỹ1, ỹ2, ..., ỹ_{4n}} and 4n vertical plateau facet boundaries {x̃1, x̃2, ..., x̃_{4n}}. The Plateau Vertex Traversal (PVT) Algorithm is summarized below. It reduces the computational complexity from O(n³m) to O(n²m). In step (iii) of the PVT algorithm, we traverse the vertical facet boundaries of the plateaus one by one. Figure 4.8 illustrates how it works using the example of two requested viewing zones. For each vertical boundary, we find the maximum. Using Lemma 2, we know that this procedure will find an optimal solution. It remains to show how much time is required to solve the resulting problem of finding

max_y s(x, y, z)

for given x and z. This special optimization problem can be solved in O(n) time with a sorted sequence {ỹ1, ỹ2, ..., ỹ_{4n}}. The objective function is a summation of n plateaus, as shown in Figure 4.6. For fixed x and z, this piecewise linear function only changes slope at {ỹi}, i = 1, ..., 4n. For each vertex ỹi, we know how much the slope will change after crossing the vertex. We can find the maximum objective value by walking over all ordered vertices {ỹi} from one side to the other along the line x = x̃i. This process only takes O(n) time. Therefore, step (iii) of the algorithm takes O(n²) time for each resolution level, proving the following theorem.

Theorem 1. We can solve the Satellite Frame Selection problem in time O(n²m) for n users and m resolution levels.
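The O(n) walk can be implemented with slope-change events. In the sketch below, a simplified stand-alone version rather than the original code, each request whose horizontal extent overlaps the line x = x̃i contributes four events in y (its vertical overlap with the frame grows, saturates, and shrinks); walking the sorted events accumulates the piecewise linear objective, whose maximum must occur at an event ordinate. The Request type, the data, and the explicit sort are assumptions for the example; the PVT algorithm itself keeps these lists pre-sorted.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

struct Request { double x, y, w, l, z, u; };   // r_i = [x_i, y_i, w_i, l_i, z_i, u_i]

// Maximize s(x, y, z) over y for fixed x and z; lz, wz are the frame length l(z) and width w(z).
static double maxOverY(const std::vector<Request>& req, double x, double z,
                       double lz, double wz, double b, double* bestY) {
    struct Event { double y, dslope; };
    std::vector<Event> ev;
    for (const Request& r : req) {
        // Horizontal overlap between frame and r_i is constant in y for fixed x.
        double hx = std::max(0.0, std::min(x + lz / 2, r.x + r.l / 2) -
                                  std::max(x - lz / 2, r.x - r.l / 2));
        if (hx <= 0) continue;
        double k = r.u * hx * std::min(std::pow(r.z / z, b), 1.0) / (r.w * r.l);
        double ybot = r.y - r.w / 2, ytop = r.y + r.w / 2;
        double y1 = ybot - wz / 2;                           // overlap starts growing (slope +k)
        double y2 = std::min(ybot + wz / 2, ytop - wz / 2);  // overlap saturates (slope 0)
        double y3 = std::max(ybot + wz / 2, ytop - wz / 2);  // overlap starts shrinking (slope -k)
        double y4 = ytop + wz / 2;                           // overlap reaches zero again
        ev.push_back({y1, +k}); ev.push_back({y2, -k});
        ev.push_back({y3, -k}); ev.push_back({y4, +k});
    }
    std::sort(ev.begin(), ev.end(),
              [](const Event& a, const Event& c) { return a.y < c.y; });

    double best = 0, val = 0, slope = 0, prevY = ev.empty() ? 0 : ev.front().y;
    *bestY = prevY;
    for (const Event& e : ev) {
        val += slope * (e.y - prevY);              // advance the piecewise linear function
        if (val > best) { best = val; *bestY = e.y; }
        slope += e.dslope;
        prevY = e.y;
    }
    return best;
}

int main() {
    std::vector<Request> req = {{10, 8, 6, 8, 2, 1}, {13, 11, 4, 10, 1, 1}};   // illustrative
    double z = 1.5, y;
    double s = maxOverY(req, /*x=*/11.0, z, 4 * z, 3 * z, /*b=*/1.0, &y);
    std::printf("max_y s(11, y, %.1f) = %.3f at y = %.2f\n", z, s, y);
    return 0;
}
```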
Algorithm 1. Plateau Vertex Traversal (PVT) Algorithm
input: requests ri, i = 1, ..., n
output: optimal solution c∗ = (x∗, y∗, z∗)
s∗ = 0;  O(1)
Sort {ŷ_{2i−1}}, i = 1, ..., n;  O(n log n)
Sort {ŷ_{2i}}, i = 1, ..., n;  O(n log n)
for each resolution level z do  Ω(m)
    Compute X̃;  O(n)
    Get {ỹ_{4i−3}}, {ỹ_{4i−2}} from {ŷ_{2i−1}};  O(n)
    Get {ỹ_{4i−1}}, {ỹ_{4i}} from {ŷ_{2i}};  O(n)
    Merge the 4 ordered sequences {ỹ_{4i−3}}, {ỹ_{4i−2}}, {ỹ_{4i−1}}, and {ỹ_{4i}}, i = 1, ..., n, to get the sorted Ỹ;  O(n)
    /* Solve 1D problems */  Ω(n)
    for x = x̃i, i = 1, ..., 4n do
        s = max_y s(x̃i, y, z);  O(n)
        if s > s∗ then s∗ = s, x∗ = x̃i, y∗ = y, z∗ = z;  O(1)
Output s∗ as the optimal objective function value and (x∗, y∗, z∗) as the optimal frame;  O(1)
Fig. 4.8. An illustration of the PVT algorithm using the example in Figure 4.6. Figure (a) shows how we sweep along the x axis to dissect the 2D optimization problem into O(n) 1D optimization problems. Figure (b) shows how we solve the 1D optimization problem by traversing the ordered vertices for x = x̃4.
4.4.3 Algorithms for Continuous Resolution
If a satellite camera has a variable and continuous resolution, there are infinitely many "plateau vertices". Instead, we use the base vertex optimality condition to reduce the 3D optimization problem to O(n²) 1D optimization problems with respect to the variable z. We then show that each 1D optimization problem can be dissected into O(n) piecewise polynomial segments, on each of which an optimum can be found in O(n) time. Using incremental computation and a diagonal sweep, we show how to improve the running time to O(n³).
Base Vertex Algorithm (BV)

For n requested viewing zones, there are O(n²) base vertices. The base vertex optimality condition (Lemma 2) allows us to find the optimal frame by checking the candidate frames that have one of their corners coinciding with one of the base vertices. This means that we can reduce the original 3D optimization problem in (4.5) to O(n²) 1D optimization problems. Define pi(z) = Area(ri ∩ c) and ai = Area(ri) = wi li; the 1D optimization problem is to find

max_z s(z) = Σ_{i=1}^{n} ui (pi(z)/ai) min((zi/z)^b, 1)   (4.12)
subject to the constraint that a corner of the candidate frame c = [x, y, z] coincides with a base vertex.

To study the 1D maximization problem in (4.12), consider a base vertex. For simplicity, we assume that the base vertex is at the origin. Moreover, we assume that the base vertex coincides with the lower left corner of the candidate frame. (The base vertex in Figure 4.9 is the intersection of the extensions of the left edge of r2 and the bottom edge of r5.) Placements in which one of the other three corners of the candidate frame coincides with the base vertex are handled in a similar fashion. We may be able to eliminate some of the placements beforehand, but that reduces the computation by only a constant factor. Now, we gradually increase z and observe the value of s(z); Figure 4.10 shows the function for the example in Figure 4.9.

a. Critical z Values and Intersection Topologies. The function s(z) is a piecewise smooth function (see Figure 4.10), so derivative-based approaches cannot be used directly. We refer to a maximal z-interval on which s(z) is smooth as a segment. We consider four questions that form the basis for our algorithms.

1. Can we give a geometric characterization of the endpoints of the segments?
2. How many segments are there?
3. What is the closed-form description of s(z) within a single segment, and how complex is the computation of the maximum of s(z) on that segment?
4. How different are the closed-form descriptions of s(z) on two adjacent segments?
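Before answering these questions, the shape of s(z) in Figure 4.10 can be reproduced by direct evaluation. The sketch below samples s(z) for a frame whose lower left corner sits at the base vertex (taken as the origin), with l(z) = 4z, w(z) = 3z and b = 1 as in Figure 4.9; the zone coordinates are illustrative, not those of the figure. The exact algorithms developed next avoid such sampling by exploiting the closed form of s(z) on each segment.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

// Requested viewing zone in the coordinate frame of the base vertex (corner coordinates, not centers).
struct Zone { double x1, y1, x2, y2, zi, ui; };

// s(z) for a candidate frame [0, 4z] x [0, 3z] anchored at the origin, with b = 1.
static double s_of_z(const std::vector<Zone>& zones, double z) {
    double s = 0;
    for (const Zone& r : zones) {
        double px = std::max(0.0, std::min(4 * z, r.x2) - std::max(0.0, r.x1));
        double py = std::max(0.0, std::min(3 * z, r.y2) - std::max(0.0, r.y1));
        double area = (r.x2 - r.x1) * (r.y2 - r.y1);            // Area(r_i)
        s += r.ui * (px * py / area) * std::min(r.zi / z, 1.0);
    }
    return s;
}

int main() {
    // Illustrative zones; each carries its requested resolution z_i and utility u_i.
    std::vector<Zone> zones = {{5, 10, 45, 40, 30, 1},
                               {-20, 5, 20, 35, 60, 1},
                               {80, 90, 160, 150, 80, 1}};
    for (double z = 5; z <= 160; z += 5)
        std::printf("z = %5.1f  s(z) = %.3f\n", z, s_of_z(zones, z));
    return 0;
}
```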
Fig. 4.9. An example of the 1D optimization problem with respect to z. In this example, we assume l(z) = 4z, w(z) = 3z, b = 1, and ui = 1 for i = 1, ..., n.
Fig. 4.10. Reward function for the example in Figure 4.9 as a function of image resolution
The first three questions lead to an O(n⁴) algorithm; the fourth question results in an improvement to O(n³ log n). We start with question 1).

Definition 1. A critical z value is a z value at which s(z) changes its closed-form representation, caused either by a change of intersection topology or by crossing one of the zi, i = 1, ..., n, as the candidate frame changes its size.

Let Zc(xv, yv) be the set of critical z values for base vertex (xv, yv). From (4.4), we see that the non-smoothness comes from the non-smoothness of either min((zi/z)^b, 1) or pi(z). The critical z values of the former type form a subset Zc′(xv, yv), those of the latter type a subset Zc′′(xv, yv). The former type is easy to deal with because it occurs at z = zi, i = 1, ..., n. Therefore, Zc′(xv, yv) = {zi | i = 1, ..., n}, so |Zc′(xv, yv)| = n. Note that Zc′(xv, yv) is the same for all base vertices (xv, yv), so we write Zc′(xv, yv) = Zc′.

Obtaining Zc′′(xv, yv) is less straightforward. Depending upon the intersection topology, the intersection area pi(z) of a rectangle ri with an expanding candidate frame c is one of the following 4 types: it is of type 0 if pi(z) equals zero, of type 1 if pi(z) equals a positive constant qi0, of type 2 if pi(z) is described by a first-degree polynomial qi1 z + qi0, and of type 3 if pi(z) is described by a second-degree polynomial qi2 z² + qi1 z + qi0, where qi0, qi1, and qi2 are coefficients. We are interested in how the type changes as z gradually increases from 0+ to +∞. To further simplify this problem, we consider "fundamental rectangles" from three classes.

• Class (o): a rectangle that does not intersect Quadrant I.
• Class (a): a rectangle that is fully contained in Quadrant I and does not intersect the extended diagonal of the candidate frame.
• Class (b): a rectangle that is fully contained in Quadrant I and whose diagonal overlaps the extended diagonal of the candidate frame.

Figure 4.11 gives examples for these three classes of fundamental rectangles.
Fig. 4.11. Examples of "fundamental rectangles". In this figure, r1 and r2 are class (a) rectangles, r3 is a class (b) rectangle, and r4 is a class (o) rectangle.
Fig. 4.12. Change of the pi(z) type for the three classes of requested viewing zones as z gradually increases from 0+ to +∞
As shown in Figure 4.12, as z increases,

• the pi(z) for a class (o) rectangle always remains type 0,
• the pi(z) for a class (a) rectangle starts from type 0, changes to type 2 when its intersection with the expanding candidate frame begins, and then changes to type 1 when it becomes fully contained, and
• the pi(z) for a class (b) rectangle can start either from type 3 or from type 0, depending on whether the bottom left corner of the rectangle coincides with the origin or not. It also changes to type 1 once it becomes fully contained.

The transitions correspond to critical z values. We can ignore class (o) fundamental rectangles because they do not contribute to our objective function. A requested viewing zone that is a fundamental rectangle from class (a) or (b) generates at most two critical z values. Many requested viewing zones, though, will not be fundamental rectangles. We resolve this by decomposing those requests.

b. Requested viewing zone decomposition. A requested viewing zone that is not a fundamental rectangle intersects at least one of the following: the positive x-axis, the positive y-axis, and the extended diagonal of the expanding candidate frame.
Fig. 4.13. Examples of four requested viewing zone decomposition cases
We treat the different intersection patterns and show that in each case the requested viewing zone can be decomposed into at most four fundamental rectangles (see also Figure 4.13).

• As illustrated in Figure 4.13a, if the requested viewing zone intersects only the diagonal, then it can be decomposed into two class (a) rectangles and one class (b) rectangle.
• As illustrated in Figure 4.13b, if the requested viewing zone intersects the diagonal and exactly one positive coordinate axis, then it can be decomposed into two class (a) rectangles, one class (b) rectangle, and one class (o) rectangle.
• As illustrated in Figure 4.13c, if the requested viewing zone intersects the diagonal and both positive coordinate axes, then it can be decomposed into one class (a) rectangle, one class (b) rectangle, and two class (o) rectangles.
• As illustrated in Figure 4.13d, if the requested viewing zone intersects only one positive coordinate axis, then it can be decomposed into a class (a) rectangle and a class (o) rectangle.

As we can see from Figure 4.13, a decomposed requested viewing zone can yield at most three fundamental rectangles that are either class (a) or class (b). Every fundamental rectangle inherits the zi value of the original request. In summary, the n requested viewing zones can be classified and/or decomposed into O(n) fundamental rectangles that are either class (a) or class (b). Since each rectangle in class (a) or (b) generates at most two critical z values, we find that |Zc′′(xv, yv)| = O(n). Combining this with the bound on the size of Zc′(xv, yv) yields |Zc(xv, yv)| = O(n). Since the critical z values partition the z-axis into O(n) segments, on each of which s(z) is a smooth function, the following lemma holds.

Lemma 4. For each base vertex, the z-axis can be partitioned into O(n) segments, on each of which s(z) is smooth.
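For intuition, a slightly coarser breakpoint set than Zc′′(xv, yv) can be collected without the decomposition: for each request, the expanding frame's right edge crosses the request's (clipped) left and right edges at two z values, its top edge crosses the bottom and top edges at two more, and the resolution term changes form at zi. The sketch below, with hypothetical inputs and l(z) = 4z, w(z) = 3z, gathers this superset, which still yields only O(n) candidate breakpoints between which s(z) keeps one closed form; the algorithms in this chapter use the tighter, decomposition-based set.

```cpp
#include <algorithm>
#include <cstdio>
#include <vector>

struct Zone { double x1, y1, x2, y2, zi; };   // corners relative to the base vertex, plus z_i

int main() {
    std::vector<Zone> zones = {{5, 10, 45, 40, 30}, {-20, 5, 20, 35, 60}};   // illustrative
    std::vector<double> zc;   // candidate critical z values (a superset of Z_c)
    for (const Zone& r : zones) {
        // l(z) = 4z sweeps the vertical extent, w(z) = 3z the horizontal one.
        zc.push_back(std::max(0.0, r.x1) / 4);   // frame's right edge reaches the zone's left edge
        zc.push_back(r.x2 / 4);                  // ... and its right edge
        zc.push_back(std::max(0.0, r.y1) / 3);   // frame's top edge reaches the zone's bottom edge
        zc.push_back(r.y2 / 3);                  // ... and its top edge
        zc.push_back(r.zi);                      // resolution term min((z_i/z)^b, 1) changes form
    }
    std::sort(zc.begin(), zc.end());
    zc.erase(std::unique(zc.begin(), zc.end()), zc.end());
    for (double z : zc) std::printf("candidate critical z = %.3f\n", z);
    std::printf("%zu candidate breakpoints for this base vertex\n", zc.size());
    return 0;
}
```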
Lemma 4 answers our question 2) from the previous section. c. Optimization Problem on a Segment. With the knowledge of question 1) and 2), we are ready to attack question 3): derive a closed-form representation of s(z) on a segment and solve the constrained optimization problem. We have the following lemma. (The order of the resulting polynomial depends on the resolution discount factor b), Lemma 5. For each segment, s(z) is a polynomial function with 6 coefficients g0 , g1 , g2 , g3 , g4 , and g5 , s(z) = g0 z −b + g1 z −b+1 + g2 z −b+2 + g3 + g4 z + g5 z 2 .
(4.13)
Proof. For a base vertex (xv , yv ), let us assume the segment is defined by [z ′ , z ′′ ), where z ′ , z ′′ ∈ Zc (xv , yv ) are two adjacent critical z values. The n requested viewing zones have been classified and decomposed into k = O(n) class (a) or (b) rectangles. We denote those rectangles as r˜i , i = 1, ..., k. Let us define set S ′ = {i|zi ≤ z ′ } and set S ′′ = {i|zi ≥ z ′′ }. From the definition of critical z value, we know that zi ∈ / (z ′ , z ′′ ) for i = 1, ...n so that S ′ ∪ S ′′ = {1, ..., k} and ′ ′′ S ∩ S = ∅. Therefore, (4.12) becomes, ui (pi (z)/ai )(zi /z)b (4.14) ui pi (z)/ai + s(z) = i∈S ′
i∈S ′′
We also define Sj be the set of rectangles with type j intersection areas when z ∈ [z ′ , z ′′ ), for j = 1, 2, 3 respectively. Recall that ai = wi li is a constant; we have ui qi0 /ai ui pi (z)/ai = i∈S ′′ ∩S1
i∈S ′′
+
ui (qi1 z + qi0 )/ai
ui (qi2 z 2 + qi1 z + qi0 )/ai
i∈S ′′ ∩S2
+
i∈S ′′ ∩S3
We can perform a similar transform for the second term of (4.14). (ui pi (z)/ai )(zi /z)b i∈S ′
= z −b
i∈S ′ ∩S
+z
−b
+z
−b
ui zib qi0 /ai 1
ui zib (qi1 z + qi0 )/ai
ui zib (qi2 z 2 + qi1 z + qi0 )/ai
i∈S ′ ∩S2
i∈S ′ ∩S3
Combining them, we get (4.13).
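The proof translates directly into an accumulation pass. In the sketch below (illustrative types, not the original implementation), each class (a) or (b) rectangle on a segment is described by its pi(z) coefficients qi0, qi1, qi2 and a flag indicating whether it belongs to S′ or S′′; the routine returns g0, ..., g5 of (4.13).

```cpp
#include <array>
#include <cmath>
#include <cstdio>
#include <vector>

// One fundamental (class (a) or (b)) rectangle on a fixed segment [z', z''):
// p_i(z) = q2*z^2 + q1*z + q0 on that segment (types 0/1/2/3 just zero out unused q's).
struct SegmentTerm {
    double u, a, zi;   // utility, Area(r_i), requested resolution
    double q0, q1, q2; // coefficients of p_i(z) on this segment
    bool coarse;       // true if z_i <= z' (i in S'), false if z_i >= z'' (i in S'')
};

// Accumulate g0..g5 of s(z) = g0 z^-b + g1 z^(-b+1) + g2 z^(-b+2) + g3 + g4 z + g5 z^2.
static std::array<double, 6> coefficients(const std::vector<SegmentTerm>& terms, double b) {
    std::array<double, 6> g = {0, 0, 0, 0, 0, 0};
    for (const SegmentTerm& t : terms) {
        double c = t.u / t.a;
        if (t.coarse) {                 // i in S': the (z_i/z)^b factor is active
            double zb = std::pow(t.zi, b);
            g[0] += c * zb * t.q0;      // z^-b term
            g[1] += c * zb * t.q1;      // z^(-b+1) term
            g[2] += c * zb * t.q2;      // z^(-b+2) term
        } else {                        // i in S'': the min(...) factor equals 1
            g[3] += c * t.q0;           // constant term
            g[4] += c * t.q1;           // z term
            g[5] += c * t.q2;           // z^2 term
        }
    }
    return g;
}

int main() {
    // Two illustrative terms on one segment: a type-2 rectangle in S'' and a type-3 rectangle in S'.
    std::vector<SegmentTerm> terms = {
        {1.0, 48.0, 30.0, -10.0, 8.0, 0.0, false},
        {1.0, 96.0, 60.0, 20.0, -7.0, 12.0, true}};
    std::array<double, 6> g = coefficients(terms, /*b=*/1.0);
    std::printf("g = [%.3f %.3f %.3f %.3f %.3f %.3f]\n", g[0], g[1], g[2], g[3], g[4], g[5]);
    return 0;
}
```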
Algorithm 2. Base Vertex (BV) Algorithm
input: requests ri, i = 1, ..., n
output: optimal solution c∗ = (x∗, y∗, z∗)
s∗ = 0;  O(1)
for each base vertex (xv, yv) do  Ω(n²)
    Compute the members of Zc(xv, yv);  O(n)
    for each segment do  Ω(n)
        Compute the polynomial coefficients;  O(n)
        Find the maximum s of the polynomial;  O(1)
        if s > s∗ then update s∗ = s and the optimal frame (x∗, y∗, z∗) accordingly;  O(1)
Output s∗ as the optimal objective function value and (x∗, y∗, z∗) as the optimal frame;  O(1)
The proof of Lemma 5 shows that (4.12) can be converted into (4.13) in O(n) time. The maximum of (4.13) can be found in constant time because the cost does not depend on n. Combining Lemma 4 and Lemma 5 yields the Base Vertex Algorithm.

Theorem 2. The Base Vertex (BV) algorithm solves the problem in O(n⁴) time.

Lemma 5 is only applicable to the CRR metric defined in (4.4). For general CRR metrics that consist of continuous elementary functions, (4.13) remains a continuous elementary function, so the complexity of computing the optimum does not increase.

Base Vertex with Incremental Computing (BV-IC)

The inner loop in the BV algorithm takes O(n²) time, which is the product of two factors: O(n) segments and O(n) time to compute the polynomial coefficients. One observation is that we do not need to re-compute the coefficients entirely if we solve the O(n) sub-problems in an ordered manner. Comparing the polynomial coefficients of two adjacent segments, we find that the difference is caused by the critical z value that separates the two segments, and that critical z value belongs to a single rectangle. Therefore, we only need to perform a constant-time coefficient update on one polynomial to obtain the next one. To exploit this coherence we must sort the elements of Zc(xv, yv) in the inner loop so that the segments can be considered in order; this takes O(n log n) time. We replace the inner loop of BV with the following subroutine. The BV-IC algorithm improves the running time.

Theorem 3. The Base Vertex with Incremental Computing (BV-IC) algorithm solves the problem in O(n³ log n) time.
Algorithm 3. Base Vertex with Incremental Computing (BV-IC), inner loop replacing that of BV
begin
    Sort the members of Zc(xv, yv);  O(n log n)
    Compute the coefficients of the first polynomial;  O(n)
    for each subsequent segment do  O(n)
        Update the polynomial coefficients;  O(1)
        Find the maximum s of the polynomial;  O(1)
        if s > s∗ then update s∗ = s and the optimal frame (x∗, y∗, z∗) accordingly;  O(1)
end
Base Vertex with Incremental Computing and Diagonal Sweeping (BV-IC-DS)

In the outer loop of the BV-IC algorithm, sorting Zc(xv, yv) for each base vertex is the dominating factor. The question is: is it necessary to sort the critical z values repeatedly for each base vertex? Recall that Zc(xv, yv) is the union of a set Zc′ and a set Zc′′(xv, yv). Each critical z value in Zc′′(xv, yv) uniquely defines the position of the upper right corner of the candidate frame on its extended diagonal, which is called a critical point in Figure 4.14(a). Each critical point corresponds to the moment when the expanding candidate frame starts intersecting some requested viewing zone or the moment when the intersection between the candidate frame and some requested viewing zone stops growing. This gives a geometric interpretation of the critical z values. Figure 4.14(a) shows a case with two requested viewing zones and five critical z values. Let Ze′′(xv, yv) be the set of z values corresponding to the intersections between the extended diagonal and the extended edges, as illustrated in Figure 4.14(b); Ze′′(xv, yv) also depends on the base vertex (xv, yv). As shown in Figures 4.14(a) and 4.14(b), Zc′′(xv, yv) ⊆ Ze′′(xv, yv). If we have a sorted sequence Ze′′(xv, yv), we can get a sorted sequence Zc′′(xv, yv) by checking whether each point in Ze′′(xv, yv) belongs to Zc′′(xv, yv). This takes O(n) time because there are O(n) points in Ze′′(xv, yv).

Figure 4.14(c) illustrates a nice property of the sorted sequence of points in Ze′′(xv, yv). In the figure, we have an ordered sequence of intersection points on the extended diagonal that starts from the origin O. We number the point closest to the origin as point 1 and the second closest as point 2. As we gradually move the extended diagonal downward and observe what happens to the sorted sequence, we find that the order of the sorted sequence does not change until the diagonal line hits an intersection between two extended edges, which is a base vertex by definition. We call this base vertex the adjacent base vertex of the base vertex at the origin.
Fig. 4.14. (a) Zc′′(xv, yv) for a case with two requested viewing zones, (b) Ze′′(xv, yv) is the set of intersection points between the extended diagonal of the candidate frame and the extended edges, (c) the two intersection points switch order only at the base vertex formed by the intersection of the two extended edges that generate them, and (d) sorting the base vertices in this order reduces the sorting cost in the algorithm
Point 1 and point 2 switch their order at the adjacent base vertex (i.e., the gray rectangle in Figure 4.14(c)). This phenomenon shows that if we have a sorted sequence of the intersection points at a base vertex, we can get the sorted sequence at an "adjacent base vertex" in constant time. This result can reduce the sorting cost from O(n log n) to O(n) if we handle the base vertices in diagonal order: imagine a sweep line that has the same slope as the extended diagonal and an intercept at +∞; we decrease the intercept and stop at each base vertex. As shown in Figure 4.14(d), we solve the subproblem for a base vertex when the sweep line stops at it. This yields the following BV-IC-DS algorithm.

Theorem 4. The Base Vertex with Incremental Computing and Diagonal Sweeping (BV-IC-DS) approach solves the problem in O(n³) time.
Algorithm 4. BV-IC with Diagonal Sweeping (BV-IC-DS) Algorithm
input: requests ri, i = 1, ..., n
output: optimal solution c∗ = (x∗, y∗, z∗)
s∗ = 0;  O(1)
Sort Zc′;  O(n log n)
Sort the base vertices in sweeping order;  O(n² log n)
Sort Ze′′(xv, yv) for the first base vertex;  O(n log n)
for each base vertex (xv, yv) do  Ω(n²)
    Update the ordered set Ze′′(xv, yv);  O(1)
    Get the members of Zc′′(xv, yv);  O(n)
    Merge Zc′ and Zc′′(xv, yv) to get Zc(xv, yv);  O(n)
    Compute the coefficients of the first polynomial;  O(n)
    for each subsequent segment do  O(n)
        Update the polynomial coefficients;  O(1)
        Find the maximum s of the polynomial;  O(1)
        if s > s∗ then update s∗ = s and the optimal frame (x∗, y∗, z∗) accordingly;  O(1)
Output s∗ as the optimal objective function value and (x∗, y∗, z∗) as the optimal frame;  O(1)
4.5 Results

We have implemented all algorithms. The discrete resolution algorithms were implemented on a PC with a 950 MHz AMD Athlon CPU and 1 GB RAM; the machine runs Red Hat Linux 7.1, and the algorithms are programmed in Java. The algorithms for variable and continuous resolution were implemented using Microsoft Visual C++ on a PC laptop with a 1.6 GHz Pentium-M and 512 MB RAM.

Figure 4.15 shows the results for four different sets of inputs. As we can see from Figure 4.15(a) and (b), the optimal frame does not necessarily have one corner coinciding with a corner of a requested viewing zone. However, one corner of the optimal frame does coincide with one of the base vertices. Figure 4.15(b) has three requested viewing zones exactly the same as those in (a) plus one more large requested viewing zone; it is interesting to see how the optimal frame changes after the large requested viewing zone joins the system. Figure 4.15(c) shows that if all input rectangles lie far away from each other, the algorithm functions as a chooser and selects the input rectangle with the highest utility value as the output. Figure 4.15(d) shows that a large requested viewing zone does not necessarily yield a large optimal frame. It is worth mentioning that the results depend on the utility ui, which functions as a weight on each requested viewing zone; these samples only illustrate cases where the utility is the same across all requested viewing zones.

Figure 4.16 shows a speed comparison for the two algorithms presented in Subsection 4.4.2 for a fixed resolution level (m = 1). It confirms what the theoretical analysis predicts: the plateau vertex traversal algorithm clearly outperforms the brute-force approach. We use random inputs for testing.
Fig. 4.15. Examples of computed optimal frames (shown in grey). We set b = 1 and ui = 1 for all requests and use the PVT algorithm from Subsection 4.4.2. There are 10 different resolution levels, and we set l(z) = 4z and w(z) = 3z. The optimal reward s is given in the caption of each subfigure.
Fig. 4.16. Speed comparison for the two algorithms from Subsection 4.4.2. Curve B refers to the brute-force algorithm; curve V refers to the PVT algorithm. Each data point is based on the average of 10 runs with random inputs.
The random inputs are generated in two steps. First, we generate four random points, which are uniformly distributed in Ra. The four points represent locations of interest, which are referred to as seeds. For each seed, we use a random number to generate a radius of interest.
Fig. 4.17. Relationship between the optimal frame size and the choice of the b value in the Coverage-Resolution Ratio metric from Subsection 4.3.2. The algorithm and its settings are the same as for the samples in Figure 4.15.
Then we generate the requested viewing zones. To generate a requested viewing zone, we create six random numbers. One of them is used to determine which seed the request will be associated with. Two of them are used to generate the location of the center point of the request, which is located within the corresponding radius of the associated seed. The remaining three random numbers are used to generate the width, length, and resolution of the request.
Seconds
40
4
BV BVC
O (n )
BV-IC VC-IC
O (n logn )
BV-IC-DS VC-IC-DS
O (n )
3 3
30 20 10 0
20
30
40
50
n
60
70
80
Fig. 4.18. Computation speed comparison for random inputs
In the test, we set the utility to 1 across all inputs. Each data point in Figure 4.16 is the average of ten runs. Figure 4.17 shows the relationship between the optimal frame size and the choice of the b value in the Coverage-Resolution Ratio. This demonstrates the tradeoff: a smaller b leads to a larger output frame. As b → 0+, the optimal frame becomes the smallest frame that contains all requested viewing zones: Area(c ∩ ri) = Area(ri) ∀i. Figure 4.18 illustrates the speed difference between the BV, BV-IC, and BV-IC-DS algorithms. Each data point in Figure 4.18 is an average of 10 trials with different random inputs, where the same random inputs are used to test all three algorithms. The timing results are consistent with the theoretical analysis.
4.6 Conclusions and Future Work

To automate satellite camera control, this chapter introduces the Satellite Frame Selection problem: find the satellite camera frame parameters that maximize reward during each time window. We formalize the SFS problem based on a new reward metric that incorporates both image resolution and coverage. For a set of n client requests and a satellite with m discrete resolution levels, we give an SFS algorithm that computes optimal frame parameters in O(n²m) time. For satellites with continuously variable resolution (m = ∞), we give an SFS algorithm that computes optimal frame parameters in O(n³) time. We have implemented all algorithms and compared computation speeds on randomized input sets.

The exact algorithms in this chapter are well suited to offline applications. However, they are not fast enough to control robotic web cameras, where users demand fast feedback: the computation time should be kept within a second if possible, and accuracy is less important than speed. This means the approximation algorithms in the next chapter are preferred for such applications.
5 Approximate and Distributed Algorithms for a Collaboratively Controlled Robotic Webcam⋆
5.1 Introduction

The recent development of low-power networked robotic cameras provides low-cost interactive access to remote sites. Robotic pan-tilt-zoom cameras can cover a large region without using excessive communication bandwidth. With applications in natural environment observation or surveillance, a single robotic camera is often concurrently controlled by many online users and networked in-situ sensors such as motion detectors. Since there are multiple simultaneous requests in a dynamic environment, an optimal camera frame needs to be computed quickly to address the resource contention problem and hence to achieve the best observation or surveillance results. In the last chapter, this was proposed as a Single Frame Selection (SFS) problem in which requests are rectangular regions. However, requests are not necessarily rectangular in many applications: the shapes of requests are usually determined by factors such as the shapes of objects in the scene and the coverage of in-situ sensors. On the other hand, the existing algorithms give exact optimal solutions and are not scalable due to their high complexity. A new class of fast, approximate algorithms is preferable for this generalized SFS problem, which favors speed over accuracy.

As illustrated in Figure 5.1, the input of the generalized SFS problem is a set of n requests. The output of the problem is a camera frame that maximizes the satisfaction of all requests, which can be intuitively understood as the best tradeoff between the coverage and the detail/resolution of the camera frame. We present a lattice-based approximation algorithm: given n requests and an approximation bound ǫ, we analyze the tradeoff between solution quality and the required computation time and prove that the algorithm runs in O(n/ǫ³) time. We also develop a BnB-like approach, which can reduce the constant factor of the algorithm by more than 70%. We have implemented the algorithm, and the

⋆ This chapter was presented in part at the 2003 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Las Vegas, NV [170], and in part in the IEEE Transactions on Robotics [168].
Fig. 5.1. An illustration of the frame selection problem. The panorama describes the camera’s full workspace (reachable field of view). Each user/sensor positions a request as a dashed closed region in the panorama. Based on these requests, the algorithm computes an optimal camera frame (shown with a rectangle) and moves the camera accordingly. In this case, the wildlife-observing camera is pointed at Richardson Bay Audubon Center and Sanctuary, which is located inside San Francisco Bay.
speed testing results conform to our design and analysis. We have successfully tested the algorithm in a variety of situations such as construction monitoring, the surveillance of public space, and natural environment observation.
5.2 Problem Definition

In this section, we formulate the frame selection problem as an optimization problem: finding the camera frame that maximizes total request satisfaction.

Nomenclature
Z = [z̲, z̄]: the set of feasible values of image resolution / camera zoom range
(x, y): the center position of a camera frame
z: camera frame size, z ∈ Z
c: camera frame, c = [x, y, z]
c∗: optimal camera frame, c∗ = [x∗, y∗, z∗]
n: the number of requests
zi: image resolution of the i-th request, zi ∈ Z, i = 1, ..., n
ri: request i, ri = [Ti, zi], where Ti is an arbitrary closed region whose area can be computed in constant time, i = 1, ..., n
si: the satisfaction function of request i
s: overall satisfaction
ǫ: approximation bound
Area(·): the function that computes the area of a closed region
w, h, g: camera pan, tilt, and zoom ranges

Let c be a vector of control parameters for the pan-tilt-zoom camera with a fixed base. Frame c = [x, y, z], where x, y specify the center point of the camera frame, which corresponds to the pan and tilt, and z specifies the size of the rectangular camera frame. For a camera with a fixed aspect ratio of kx : ky, we define z such that the four corners of frame c = [x, y, z] are located at

(x ± kx z/2, y ± ky z/2).   (5.1)

c uniquely defines a rectangular camera frame because the camera has a fixed aspect ratio. It is worth mentioning that z uniquely determines camera zoom and image resolution and is sometimes referred to as image resolution or camera zoom. A smaller z means a smaller coverage of the camera frame and actually corresponds to a higher zoom. Since camera CCD sensors have a fixed number of pixels, a smaller z also corresponds to higher resolution. Let Z = [z̲, z̄] be the range of camera frame size. Since the camera cannot have infinite resolution or infinite coverage,

z̲ > 0 and z̄ < ∞.   (5.2)

For frame c = [x, y, z] with an aspect ratio of 4:3, the width of the frame is 4z, the height of the frame is 3z, and the area of the frame is 12z². To facilitate our analysis, the aspect ratio of 4:3 is used as the default aspect ratio. Our algorithm can be easily adapted to cameras with different aspect ratios. Define w and h to be the camera pan and tilt ranges, respectively. Let Θ = {(x, y) : x ∈ [0, w], y ∈ [0, h]} be the set of all reachable (x, y) pairs. Set C = Θ × Z = {[x, y, z] | [x, y] ∈ Θ, z ∈ Z} as the feasible region of the problem. Request i is defined as ri = [Ti, zi], where Ti specifies the closed requested region and zi ∈ Z specifies the desired image resolution using the same units as z in c. The only requirement for Ti is that its area of coverage can be computed in constant time. Given n requests, the system computes an optimal frame c∗ that will best satisfy the set of requests. As described in [173], request "satisfaction" is a Coverage-Resolution Ratio (CRR) function. It is based on how a requested region compares with a candidate camera frame. The metric is a scalar si ∈ [0, 1], the level of "satisfaction" that request i receives. Request i gets no satisfaction if the candidate frame does not intersect ri: si = 0 when c ∩ ri = ∅.
In this chapter we abuse the set intersection operator ∩ to denote the intersection between the coverage of c and the coverage of ri. Request i is perfectly satisfied, si = 1, when Ti is located inside c (full coverage) and zi ≥ z, because this means the resolution of the camera frame is equal to or better than the requested zi. When there is a partial overlap,

si(ri, c) = (Area(ri ∩ c)/Area(ri)) min(zi/z, 1).   (5.3)

If z is bigger, the candidate frame will be bigger. A sufficiently large z can define a candidate frame that covers all requests: Area(ri ∩ c)/Area(ri) = 1 for i = 1, ..., n. However, request satisfaction is not necessarily high, because the request wants to see the camera frame at a desired resolution. The term min(zi/z, 1) characterizes this desire: it reaches its maximum of 1 if the candidate frame resolution is better than the requested resolution, i.e., z ≤ zi. We do not consider the case of over-satisfaction in this version. More accurately speaking, the notation ri in (5.3) should be Ti; we abuse ri to make the equation easier to associate with the request. For n total requests, let the total request satisfaction be

s(c) = Σ_{i=1}^{n} si(ri, c).   (5.4)
We want to find c∗, the value of c that maximizes s(c). Since c = [x, y, z], we now have a maximization problem:

arg max_c s(c)  subject to c ∈ C.   (5.5)
We next present two lattice-based approximation algorithms to solve it.
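Evaluating (5.4) for a single candidate frame is the O(n) inner step of both algorithms in the next section. The sketch below, a minimal stand-alone version rather than the system's implementation, clips each polygonal request against the rectangular frame with the Sutherland-Hodgman algorithm and applies (5.3); it assumes convex requests (which covers the triangular inputs used in the experiments), an aspect ratio of 4:3, and illustrative sample data.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

struct Pt { double x, y; };
struct Frame { double x, y, z; };   // c = [x, y, z], aspect ratio 4:3

// Shoelace formula for the area of a simple polygon.
static double polyArea(const std::vector<Pt>& p) {
    double a = 0;
    for (size_t i = 0; i < p.size(); ++i) {
        const Pt& u = p[i]; const Pt& v = p[(i + 1) % p.size()];
        a += u.x * v.y - v.x * u.y;
    }
    return std::fabs(a) / 2;
}

// One Sutherland-Hodgman step: keep(pt) tests the half-plane, inter(u,v) gives the crossing point.
template <class Keep, class Inter>
static std::vector<Pt> clip(const std::vector<Pt>& poly, Keep keep, Inter inter) {
    std::vector<Pt> out;
    for (size_t i = 0; i < poly.size(); ++i) {
        Pt u = poly[i], v = poly[(i + 1) % poly.size()];
        if (keep(u)) { out.push_back(u); if (!keep(v)) out.push_back(inter(u, v)); }
        else if (keep(v)) out.push_back(inter(u, v));
    }
    return out;
}

// Satisfaction (5.3)-(5.4) of n convex polygonal requests for candidate frame c.
static double satisfaction(const std::vector<std::vector<Pt>>& T,
                           const std::vector<double>& zi, const Frame& c) {
    double x1 = c.x - 2 * c.z, x2 = c.x + 2 * c.z;      // frame is 4z wide
    double y1 = c.y - 1.5 * c.z, y2 = c.y + 1.5 * c.z;  // and 3z tall
    double s = 0;
    for (size_t i = 0; i < T.size(); ++i) {
        std::vector<Pt> p = T[i];
        p = clip(p, [&](Pt q) { return q.x >= x1; },
                 [&](Pt u, Pt v) { double t = (x1 - u.x) / (v.x - u.x); return Pt{x1, u.y + t * (v.y - u.y)}; });
        if (!p.empty()) p = clip(p, [&](Pt q) { return q.x <= x2; },
                 [&](Pt u, Pt v) { double t = (x2 - u.x) / (v.x - u.x); return Pt{x2, u.y + t * (v.y - u.y)}; });
        if (!p.empty()) p = clip(p, [&](Pt q) { return q.y >= y1; },
                 [&](Pt u, Pt v) { double t = (y1 - u.y) / (v.y - u.y); return Pt{u.x + t * (v.x - u.x), y1}; });
        if (!p.empty()) p = clip(p, [&](Pt q) { return q.y <= y2; },
                 [&](Pt u, Pt v) { double t = (y2 - u.y) / (v.y - u.y); return Pt{u.x + t * (v.x - u.x), y2}; });
        if (p.size() >= 3)
            s += (polyArea(p) / polyArea(T[i])) * std::min(zi[i] / c.z, 1.0);
    }
    return s;
}

int main() {
    std::vector<std::vector<Pt>> T = {{{1, 1}, {7, 2}, {4, 6}}, {{5, 5}, {9, 5}, {7, 9}}}; // triangles
    std::vector<double> zi = {2.0, 1.5};
    std::printf("s(c) = %.3f\n", satisfaction(T, zi, Frame{5, 4, 2}));
    return 0;
}
```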
5.3 Algorithms

We begin with a lattice-based approximation algorithm and derive formal approximation bounds that characterize the tradeoff between speed and accuracy. We then present a BnB-like approach to reduce the constant factor of the lattice-based algorithm.

5.3.1 Algorithm I: Exhaustive Lattice Search
Since the camera needs to respond to fast-moving objects, a fast approximate solution is more desirable than a slow exact solution. We propose an algorithm that searches a regular lattice for an approximate solution c̃. Define the lattice as the set of points with coordinates

L = {(pd, qd, r dz) | pd ∈ [0, w], qd ∈ [0, h], r dz ∈ [z̲, z̄ + 2dz], p, q, r ∈ ℕ},   (5.6)
where d is the spacing of the pan and tilt samples, dz is the spacing of the zoom, and p, q, r are nonnegative integers. To find c̃, we evaluate all (wh/d²)(g/dz) candidate points, where g = z̄ − z̲. According to (5.4), it takes O(n) computing time to determine the satisfaction for a single candidate frame c. The total amount of computation is

O((wh/d²)(g/dz) n).   (5.7)
How good is the approximate solution in comparison to the optimal solution? Specifically, what is the tradeoff between solution quality and computation speed? Let c∗ be an optimal solution. Let ǫ characterize the comparative ratio of the objective values for the two solutions:

s(c̃)/s(c∗) = 1 − ǫ.   (5.8)
Since (5.5) defines a maximization problem, s(c∗) is always greater than or equal to s(c̃), so 0 ≤ ǫ < 1. As ǫ → 0, s(c̃) → s(c∗). We will establish theorems that give an upper bound ǫu such that ǫ ≤ ǫu(d, dz) for given d and dz. This characterizes the tradeoff between solution quality and computation speed. We first prove lemmas based on two observations.

• As illustrated in Figure 5.2, consider a set of requests ri, each with zoom level zi. Now consider two candidate frames for the camera, ca and cb, with za, zb such that for all i, ri ⊂ ca ⊂ cb¹ and zi < za < zb. This case provides a lower bound, where s(cb)/s(ca) = za/zb. In general, some requests ri will be included in cb but outside ca, which will only increase the ratio.
Fig. 5.2. Example illustrating the lower bound on solution quality

¹ We use set operators ⊂ and ∩ to represent the coverage relationship between a camera frame and a requested region.
• Now consider the smallest frame on the lattice that contains an optimal frame. Its size is a function of the size z∗ of the optimal frame, d, and dz, as derived in Lemma 2.

We now prove these formally in the general case to obtain a bound on solution quality.

Lemma 1. For two candidate frames ca = [xa, ya, za] and cb = [xb, yb, zb], if ca is within cb, then s(cb)/s(ca) ≥ za/zb.
ai = Area(ri ). pai = Area(ca ∩ ri ). pbi = Area(cb ∩ ri ), then pbi ≥ pai . Ia = {i|ri ∩ ca = ∅} be the set of requests that intersect with frame ca . Ib = {i|ri ∩cb = ∅} be the set of requests that intersect with frame cb . Ia ⊆ Ib . I ′ be the set of requests that intersect with frame ca and are bigger than ca , I ′ = {i|i ∈ Ia and zi ≥ za }. • I ′ ′ be the set of requests that intersect with frame ca and are bigger than cb , I ′ ′ = {i|i ∈ Ia and zi ≥ zb }. I ′ ′ ⊆ I ′ ⊆ Ia because zb ≥ za . We can classify the proof into two cases, s(cb ) • s(cb ) ≥ s(ca ): here we have s(c ≥ 1 ≥ zzab . The lemma holds. a) • s(cb ) < s(ca ): this is nontrivial. Let us focus on this case in the rest of the proof.
We have, s(ca ) =
n
(pai /ai ) min(zi /za , 1),
i=1
and because Ia ⊆ Ib , s(cb ) =
(pbi /ai ) min(zi /zb , 1)
(pbi /ai ) min(zi /zb , 1).
i∈ Ib
≥
i∈ Ia
Therefore, s(cb )/s(ca ) (pbi /ai ) min(zi /zb , 1) ≥ i∈ Ia (p ai /ai ) min(zi /za , 1) i∈ Ia (pai /ai ) min(zi /zb , 1) ≥ i∈ Ia (pai /ai ) min(zi /za , 1) i∈ Ia ′′ (pai /ai )(zi /zb ) i∈ I ′′ (pai /ai ) + i∈ Ia −I . = (p /a ) + i i∈ Ia −I ′ (pai /ai )(zi /za ) i∈ I ′ ai
(5.9)
5.3 Algorithms
77
′
We also know that a generic function f (x) = x+a x+b′ is a increasing function of x if b′ ≥ a′ ≥ 0 and x ≥ 0. We know that 1 > s(cb )/s(ca ). If we simultaneously decrease the nominator and the denominator of (5.9) by a nonnegative quantity (p /ai ), then the following holds, ′′ ai i∈I s(cb )/s(ca ) ≥ Define Sl =
i∈Ia −I ′′ (pai /ai )(zi /zb )
i∈I ′ −I ′′ (pai /ai ) +
i∈I ′ −I ′′ (pai /ai )(zi /za ).
i∈Ia −I ′ (pai /ai )(zi /za )
.
(5.10)
We know that
zi /za ≥ 1, ∀i ∈ I ′ − I ′′ . Then,
(pai /ai ) ≤ Sl .
i∈I ′ −I ′′
If we use Sl to replace
i∈I ′ −I ′′ (pai /ai )
s(cb )/s(ca ) ≥
in the denominator of (5.10), we have
i∈Ia −I ′′ (pai /ai )(zi /zb )
Sl + i∈Ia −I ′ (pai /ai )(zi /za ) ′′ (pai /ai )(zi /zb ) = i∈Ia −I i∈Ia −I ′′ (pai /ai )(zi /za ) (1/zb ) i∈Ia −I ′′ (pai /ai )zi = (1/za ) i∈Ia −I ′′ (pai /ai )zi =
za 1/zb = . 1/za zb
Now, we are ready to find the smallest frame on the lattice that contains the optimal frame. Lemma 2. Recall that d is the spacing of the lattice and dz is the spacing for zoom levels. For any frame c = [x, y, z] ∈ C, there exists c′ = [x′ , y ′ , z ′ ] ∈ L such that c′ is the smallest frame on the lattice that ensures c is within c′ , which implies, |x − x′ | ≤ d/2 and |y − y ′ | ≤ d/2, 3z + d ⌉dz . z′ ≤ ⌈ 3dz
(5.11) (5.12)
If we choose d = 3dz , then z ′ ≤ z + 2dz .
(5.13)
Proof. The center (point B in Figure 5.3) of the given frame c must have four neighboring lattice points. Without loss of generality, let us assume the nearest lattice point of the center is the top right lattice point, which is point O in Figure 5.3. Other cases can be proven by symmetry.
78
5 Approximation Algorithms for Single Frame Selection
y
A B E
α β
O
C
F
x c
cˆ
Fig. 5.3. The relationship between frame c and the smallest frame cˆ on the lattice that encloses it. In the figure, α = ∠AOB, β = ∠AOC, and the frame c is centered at point B.
Since frame c′ is the smallest frame on the lattice that ensures c is within c′ , (x , y ′ ) has to be the closest neighboring lattice point of (x, y) on the Θ plane, which implies that (5.11) has to be true. Recall that d is the spacing of the lattice. To ensure the point O is the nearest lattice point, (5.11) states that the point B must satisfy following constraints, ′
|OB| sin α ≤ d/2, and |OB| cos α ≤ d/2.
(5.14)
Let us define frame cˆ = [x′ , y ′ , zˆ] to be the smallest frame containing frame c such that (x′ , y ′ ) ∈ Θ and zˆ ∈ R+ . In other words, frame cˆ is located at the lattice point (x, y), but with continuous zoom zˆ. It is not difficult to find the relationship between c′ and cˆ: z ′ = ⌈ˆ z /dz ⌉dz .
(5.15)
Since point F at (xF , yF ) is the bottom-left corner of frame cˆ and point E at (xE , yE ) is the bottom-left corner of frame c, the condition that frame c is located inside frame cˆ is equivalent to following conditions, xF ≤ xE and yF ≤ yE .
(5.16)
Since the frames are iso-oriented rectangles and have the same aspect ratio, their diagonal lines have to be parallel to each other: BE OF . Therefore, when 0 ≤ α ≤ β, BE is always above OF , and if xF = xE , then yF ≤ yE . The boundary conditions for cˆ can be simplified:
• case 1: xF = xE, if 0 ≤ α ≤ β, and
• case 2: yF = yE, if β ≤ α ≤ π/2.

Figure 5.3 describes case 1. We draw a vertical line at point B, which intersects the x axis at point A and OF at point C. Since xF = xE, we know EF ∥ AC. Therefore, we have

|CF| = |BE| and |OC| = |OF| − |CF| = |OF| − |BE|.   (5.17)

Also, since AC ⊥ OA, we have

|OC| cos β = |OB| cos α ⇒ (|OF| − |BE|) cos β = |OB| cos α.

According to (5.14), (|OF| − |BE|) cos β ≤ d/2. The aspect ratio of the frame is 4:3, so cos β = 4/5, and hence |OF| ≤ |BE| + 5d/8. Similarly, we can get |OF| ≤ |BE| + 5d/6 from case 2. Combining the two cases, we know |OF| ≤ |BE| + 5d/6. Since |OF| = 5ẑ/2 and |BE| = 5z/2, ẑ ≤ z + d/3. Plugging this into (5.15),

z′ ≤ ⌈(3z + d)/(3dz)⌉ dz.

If we choose d = 3dz, (5.12) can be simplified as

z′ ≤ ⌈z/dz⌉ dz + dz ≤ z + 2dz.   (5.18)
Remark 1. It is worth mentioning that the result of Lemma 2 depends on the camera aspect ratio. A closer look shows that the constant 3 in the proof comes from min(kx, ky) for a camera with an aspect ratio of 4:3 (kx = 4, ky = 3). Therefore, the choice of lattice spacing in the x–y plane, d = min(kx, ky)dz = 3dz in (5.18), also depends on the camera aspect ratio. It is not difficult to extend the result to cameras with different aspect ratios because doing so only changes the constant factors in the analysis.
Theorem 1. Recall that z̲ is the smallest allowable z value and d = 3dz. The approximation factor ǫ of the deterministic lattice-based algorithm is bounded from above by a constant ǫu, 0 ≤ ǫ ≤ ǫu, where

ǫu = 2dz/(z̲ + 2dz).
Proof. Recall that
• c∗ = [x∗, y∗, z∗] is the optimal frame,
• c′ = [x′, y′, z′] is the closest lattice point with the smallest zoom level that ensures c∗ is within c′, and
• c̃ = [x̃, ỹ, z̃] is the lattice point found by the approximation algorithm.

It is worth mentioning that we do not know c∗ and c′. We do know the geometric enclosure relationship between c∗ and c′. We also know that c′ is just one of the points in lattice L. Note that c̃ is the solution to arg max_{c∈L} s(c). Since c′ ∈ L ⊂ C, we know that s(c′) ≤ s(c̃). Therefore, s(c′) ≤ s(c̃) ≤ s(c∗). Applying Lemma 1,

1 − ǫ = s(c̃)/s(c∗) ≥ s(c′)/s(c∗) ≥ z∗/z′.

Applying (5.13) of Lemma 2, we have z′ ≤ z∗ + 2dz. Using this result, we have

1 − ǫ ≥ z∗/(z∗ + 2dz).

On the other hand, we know z∗ ≥ z̲, so

1 − ǫ ≥ z̲/(z̲ + 2dz)  ↔  ǫ ≤ 2dz/(z̲ + 2dz).

Theorem 1 says that the approximation bound 2dz/(z̲ + 2dz) is a monotonically increasing function of dz. It characterizes the tradeoff between accuracy and computation speed for the lattice-based approximation algorithm in Algorithm 1. The relationship between solution quality and computation speed is summarized by Theorem 2.
Theorem 2. We can solve the frame selection problem in O(n/ǫ³) time for a given approximation bound ǫ.

Proof. Since d = 3dz and dz = (1/2)(ǫ/(1 − ǫ)) z̲, we need to evaluate all (wh/d²)(g/dz) = (8whg/9)((1 − ǫ)/ǫ)³/z̲³ lattice points. According to (5.4), each point takes O(n) time. Removing the constants and noting that (1 − ǫ)³ → 1 as ǫ → 0, the computation time approaches O(n/ǫ³).
Algorithm 1. Exhaustive Lattice Search
input: requests ri, i = 1, ..., n, approximation bound ǫ
output: approximate solution c̃
compute the appropriate lattice spacing: choosing d = 3dz, according to Theorem 1, we set
    ǫ = 2dz/(z̲ + 2dz) ⇒ dz = (1/2)(ǫ/(1 − ǫ)) z̲.   (5.19)
    /* This is the maximum dz that ensures the objective function value is at least (1 − ǫ)s(c∗). */
s(c̃) = 0;  O(1)
for each lattice point c = (x, y, z) do
    Compute the objective function value s(c);  O(n)
    if s(c) > s(c̃) then
        s(c̃) = s(c);  O(1)
        c̃ = (x, y, z);  O(1)
Report c̃ and s(c̃);  O(1)
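A direct rendering of Algorithm 1 follows; it uses a rectangular-request stand-in for the satisfaction function, and the camera ranges, requests, and ǫ are illustrative constants rather than values from the system.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

struct Frame { double x, y, z; };
struct Req { double x1, y1, x2, y2, zi; };   // rectangular request (illustrative stand-in for T_i)

// Stand-in for (5.3)-(5.4) with rectangular requests and a 4:3 frame.
static double satisfaction(const std::vector<Req>& req, const Frame& c) {
    double cx1 = c.x - 2 * c.z, cx2 = c.x + 2 * c.z, cy1 = c.y - 1.5 * c.z, cy2 = c.y + 1.5 * c.z;
    double s = 0;
    for (const Req& r : req) {
        double ox = std::max(0.0, std::min(cx2, r.x2) - std::max(cx1, r.x1));
        double oy = std::max(0.0, std::min(cy2, r.y2) - std::max(cy1, r.y1));
        s += (ox * oy) / ((r.x2 - r.x1) * (r.y2 - r.y1)) * std::min(r.zi / c.z, 1.0);
    }
    return s;
}

int main() {
    // Illustrative pan/tilt/zoom ranges, approximation bound, and requests.
    double w = 40, h = 30, zmin = 1, zmax = 5, eps = 0.1;
    std::vector<Req> req = {{5, 5, 20, 15, 2}, {22, 12, 38, 28, 3}, {10, 18, 30, 28, 1.5}};

    // Lattice spacing from (5.19): d = 3*dz, dz = (1/2)(eps/(1-eps))*zmin.
    double dz = 0.5 * eps / (1 - eps) * zmin, d = 3 * dz;

    Frame best{0, 0, zmin};
    double sbest = 0;
    for (double z = zmin; z <= zmax + 2 * dz; z += dz)   // zoom levels
        for (double x = 0; x <= w; x += d)               // pan samples
            for (double y = 0; y <= h; y += d) {         // tilt samples
                double s = satisfaction(req, Frame{x, y, z});
                if (s > sbest) { sbest = s; best = Frame{x, y, z}; }
            }
    std::printf("approximate optimum c~ = [%.2f, %.2f, %.2f], s = %.3f\n",
                best.x, best.y, best.z, sbest);
    return 0;
}
```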
5.3.2 Algorithm II: BnB Implementation
In the lattice-based approximation algorithm, we evaluate the objective function at each lattice point. However, we may not need to check them all. The proof of Theorem 1 implies the following corollary.

Corollary 1. Given that a frame ĉ is currently the best known solution, if a candidate frame c = [x, y, z] does not satisfy the condition

s(c) ≥ s(ĉ)(z̲/z),   (5.20)

then the candidate frame does not contain any optimal frame.
82
5 Approximation Algorithms for Single Frame Selection
y
Fig. 5.4. An illustration of the solution space formed by frames contained in the given frame c′ . The constants kx and ky are determined by the camera aspect ratio. For an aspect ratio of 4:3, kx = 4 and ky = 3.
Recall that kx : ky is the camera aspect ratio in (5.1). As illustrated in Figure 5.4(a), if a frame (x, y, z) is contained in c′, it has to satisfy the following set of conditions:

x − kx z/2 ≥ x′ − kx z′/2,  x + kx z/2 ≤ x′ + kx z′/2,
y − ky z/2 ≥ y′ − ky z′/2,  y + ky z/2 ≤ y′ + ky z′/2,
(x, y, z) ∈ Φ.   (5.21)

Therefore, Φc′ = {(x, y, z) | (x, y, z) satisfies (5.21)}. Recall that the solution space Φ is a 3D rectangle. Figure 5.4(b) illustrates that the shape of Φc′ is a pyramid within the 3D rectangle with its apex located at c′. The volume of the pyramid is determined by the height z′ of the candidate frame. A larger z′ means a larger candidate frame, which leads to a bigger cut in Φ if the candidate frame does not satisfy Corollary 1. This suggests that we should evaluate the lattice points in descending order relative to the z-axis.

Figure 5.5 illustrates how to perform the BnB-like search using a 3 × 3 × 3 lattice. We divide the lattice points into layers with respect to their z values. The search starts with the top-most layer and follows a descending order in z. In this lattice, we set d = 3dz, which will be used as the default setting in the rest of the section. In each layer, we evaluate the objective function at each lattice point in lexicographic order (i.e., the numbered sequence in layer 1 of Figure 5.5). After the evaluation, we test whether the point satisfies the condition in Corollary 1. If so, we refer to this point as a survived node; otherwise, it is a deleted node. If a node is deleted, it also causes some nodes in the next layer to be deleted because of the shape of the pyramid. We refer to those nodes in the next layer as the child nodes of the deleted node.
Fig. 5.5. An illustration of the BnB-like approach
Since we choose d = kdz, where k = min(kx, ky) (so k = 3 for most cameras, which have an aspect ratio of 4:3), we have the following lemma.

Lemma 3. If a lattice point (x, y, z) is deleted and is not a boundary node, then its 9 child nodes in the next layer (frames with zoom z − dz), (x − kdz, y − kdz), (x − kdz, y), (x − kdz, y + kdz), (x, y), (x, y + kdz), (x, y − kdz), (x + kdz, y − kdz), (x + kdz, y), (x + kdz, y + kdz), should also be deleted.

Lemma 3 can be proven by checking that all 9 child nodes are located inside the frame (x, y, z) and that their union covers no more area than the frame (x, y, z) does. Figure 5.5 also illustrates this relationship: the central node in layer 2 is deleted and causes all 9 of its children to be deleted. If the deleted node is a boundary node, the number of children is less than 9. If one node is found to be non-viable while a neighboring node is viable, the child nodes they share are still deleted, which leads to great computation savings. Lemma 3 unveils an iterative scheme that we can use to cut the solution space. Recall that we need to evaluate the lattice points in descending order in z, following a lexicographic order in the x-y plane. Recall that ĉ is the currently best-known solution. Initially, we set ĉ to be an arbitrary feasible frame and every node in the lattice to be a survived node. Combining the information above, we can reduce the computational effort required; the resulting BnB-like approach is presented in Algorithm 2. In the worst-case scenario, this approach does not improve the complexity bound. For example, if all requested viewing zones are identical to the accessible region, the approach is not able to cut any computation. Since such worst cases are rare, the approach has its value. We will show the numerical test results in the next section.
Algorithm 2. BnB-like Approach
input : requests ri, i = 1, ..., n, and approximation bound ǫ
output: approximate solution c̃
    compute d and dz following Algorithm 1;                              O(1)
    s(c̃) = 0;                                                            O(1)
    for each lattice point c = (x, y, z) do
        if the node was deleted then
            if not the lowest layer then delete its child nodes in the lower layer
        else
            compute the objective function value s(c);                   O(n)
            if Equation (5.20) holds then
                if s(c) > s(c̃) then
                    s(c̃) = s(c); c̃ = (x, y, z);                           O(1)
            else
                if not the lowest layer then delete its child nodes in the lower layer
    Report c̃ and s(c̃);                                                   O(1)
5.4 Experiments

We have implemented the algorithms in Sections 5.3.1 and 5.3.2. The algorithms are implemented in C++, compatible with both Microsoft Visual C++ and GNU C++. Both numerical experiments and field applications have been used to test performance. In the numerical experiments, we test the algorithm speed under different parameter settings, such as the number of requests n and the approximation bound ǫ. The extensive field tests were conducted over three years across a variety of applications. We first report the results of the numerical experiments.

5.4.1 Numerical Experiments
The testing computer in the numerical experiments is a PC laptop with a 1.6 GHz Intel Centrino CPU and 512 MB RAM; the operating system is Windows XP. Figure 5.1 shows a sample result of the algorithm for an example with 7 requests. During the speed test, triangular random inputs are used. The random inputs are generated in two steps. First, we generate four random points, uniformly distributed in the reachable field of view of the robotic camera. The four points represent locations of interest and are referred to as seeds. For each seed, we use a random number to generate a radius of interest. Then, we generate the requested regions in the second step. We generate a requested region using eight random numbers: one is used to determine which seed the request is associated with; six are used to generate the location of the request (two
random numbers per vertex for a triangular request), which is located within the corresponding radius of interest of the associated seed; and the remaining random number is used to generate the resolution of the request. To measure the effectiveness of the BnB-like approach in Section 5.3.2, we define the performance index

    γ = (computation time using BnB) / (computation time using exhaustive lattice search).    (5.22)

Equation (5.22) shows that a smaller γ is more desirable because it means that less computation time is needed. Each data point in Figure 5.6 is an average over 5 runs using different sets of random requests. The results in Figure 5.6 can be classified into two cases according to the ǫ values. If ǫ ≤ 0.05, the curves in Figure 5.6 exhibit two trends. The first trend is that γ decreases as the number of requests n increases. The second trend is that γ decreases as ǫ decreases. Both trends are very desirable, because they mean that the BnB-like approach becomes more efficient exactly when more computation is needed.
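For reference, the two-step random-input generation described at the beginning of this subsection could be sketched as follows. The data structures, the radius range, and the square sampling region around each seed are assumptions made for illustration; this is not the original test harness.

    #include <array>
    #include <random>
    #include <vector>

    struct Seed       { double x, y, radius; };
    struct TriRequest { std::array<double, 6> xy; double resolution; };

    std::vector<TriRequest> makeRequests(int n, double w, double h,
                                         double zMin, double zMax,
                                         std::mt19937& rng) {
        std::uniform_real_distribution<double> ux(0, w), uy(0, h);
        std::uniform_real_distribution<double> ur(0.05 * w, 0.2 * w);  // assumed radius range
        std::uniform_real_distribution<double> uz(zMin, zMax);
        std::uniform_real_distribution<double> uo(-1.0, 1.0);

        std::vector<Seed> seeds(4);                      // step 1: four seeds of interest
        for (auto& s : seeds) s = {ux(rng), uy(rng), ur(rng)};

        std::vector<TriRequest> reqs(n);                 // step 2: n triangular requests
        std::uniform_int_distribution<int> pick(0, 3);
        for (auto& r : reqs) {
            const Seed& s = seeds[pick(rng)];            // 1 number: associated seed
            for (int v = 0; v < 3; ++v) {                // 6 numbers: three vertices near the seed
                r.xy[2 * v]     = s.x + uo(rng) * s.radius;
                r.xy[2 * v + 1] = s.y + uo(rng) * s.radius;
            }
            r.resolution = uz(rng);                      // 1 number: desired resolution
        }
        return reqs;
    }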
Fig. 5.6. Efficiency of the BnB-like approach. Smaller γ is more desirable. Recall that n is the number of requests.
If ǫ > 0.05 and n = 5, the overhead cost of the BnB-like approach dominates the computation and hence γ decreases as ǫ increases. When n gets large, the point at which γ changes its trend from increasing to decreasing does not show up until ǫ is very large. However, it is very rare for us to set ǫ > 0.1, because the algorithm is already fast enough for our applications, as illustrated later; therefore, this trend is not of practical interest. Nevertheless, the BnB-like approach can cut the constant factor of the algorithm by more than 70% when ǫ < 0.05, which speeds up the computation by more than three times. It is also worth mentioning that the effectiveness of the BnB-like approach also depends on z according to (5.19). If z = 0, the BnB-like approach fails to
cut the computation time. Fortunately, z = 0 would mean that the camera has infinite resolution, which cannot happen according to (5.2). We have also compared the approximation algorithm with the exact algorithm in [173]. Since the exact algorithm takes O(n³) computation time and only accepts rectangular requests, we have modified our algorithm accordingly to ensure a fair comparison. Figure 5.7 illustrates the speed comparison between the approximation algorithm and the exact algorithm. The implementation is the basic lattice-based algorithm in Section 5.3.1. We set w = h = 500, z = 40, z̄ = 80, and k = 3. The results conform to our analysis: the computation time of our approximation algorithm is linear in the number of requests, and the slope of the line is determined by the approximation bound ǫ.

5.4.2 Field Tests
Our algorithms have also been continuously tested in the field since September 2002. Applications of the algorithm include construction monitoring, public space surveillance, and natural environment observation. Table 2.2 in Chapter 2 summarizes the 8 testing sites and the corresponding durations and applications. The algorithm runs on a server with dual 2.5 GHz Intel Xeon CPUs and 2 GB RAM; the operating system is Mandrake Linux.
Fig. 5.7. The speed comparison between the basic lattice-based approximation algorithm and the exact algorithm in [173]
According to our records (as of 2006/02/22), we have received more than 301,340 requests. For the durations when the system is not idle, the experimental data show that requests follow an interesting 95-5 distribution pattern for human inputs: only 5% of the requests occupy 95% of the system time, while 95% of the users compete for the remaining 5% of the time. Our conjecture is that users tend to log on to the system when there are activities going on, such as a political rally, crane operations at a building construction site, or a moving wild animal. This irregular traffic pattern favors our fast approximation algorithm because the system can satisfy a majority of users in a timely manner. The camera servo time is around 1 second; if it takes more than 1 second to compute an optimal frame, the resulting delay makes it difficult for the system to retain users and track dynamic events. Since accuracy in optimality is not a big concern for web cameras, we set ǫ = 0.1 to favor speed. The ability to choose the tradeoff between speed and accuracy is another advantage of the approximation algorithm. Even when the number of requests is small (95% of the time, with 5% of the users), the exact algorithm still has difficulty matching this computation speed because of its overhead in constructing complex data structures. Although the number of concurrent requests from human users is rarely more than 50, the number of requests generated from sensory inputs (e.g., requests from motion detection) can easily exceed 100. The exact algorithm is not able to handle such a volume of inputs. During the tests, the algorithm successfully combines real-time inputs from online users, pre-programmed commands, and sensory inputs, and drives the robotic camera automatically and efficiently. The cameras used in our systems include the Panasonic HCM 280, Canon VCC3, and Canon VCC4. After analyzing the data from multiple deployments, we have an interesting finding: the average user satisfaction level is inversely proportional to the number of concurrent activities in the site. In other words, users tend to spread their requests evenly across activities. Our latest research goal is to incorporate the algorithm into sensor/human-driven natural environment observation. Please visit us at http://www.c-o-n-e.org for details.
5.5 Conclusions

We present new algorithms for the frame selection problem: controlling a single robotic pan-tilt-zoom camera based on n simultaneous requests. With approximation bound ǫ, the algorithm runs in O(n/ǫ³) time. We also introduce a BnB-like approach that can reduce the constant factor of the algorithm by more than 70%. The algorithms have been implemented and tested in both numerical experiments and extensive field applications. The results of the numerical experiments conform to our design and analysis. The field tests at different sites have demonstrated that the algorithm successfully addresses the problem of effective collaborative camera control in a variety of applications.
Up to this point, we have dealt with the single-camera control problem. Recall from the description at the beginning of Chapter 4 that this is a centralized control system. Under such a setting, it is possible to view multiple robotic cameras as one more sophisticated robot, and hence to coordinate multiple camera frames simultaneously under a similar MOSR CTRC structure. We name this the p-frame problem and address it in the next chapter.
6 An Approximation Algorithm for the Least Overlapping p-Frame Problem with Non-Partial Coverage for Networked Robotic Cameras⋆
A set of p co-centered cameras can be viewed as a single sophisticated robot when there exists a centralized coordinator. Obviously, p cameras can provide better concurrent coverage than the single camera of the previous chapters. However, allocating p camera frames to n competing requests is nontrivial. In this chapter, we present our initial algorithmic development for the p-frame selection problem.
6.1 Introduction

Networked robotic pan-tilt-zoom cameras have found many applications such as natural environment observation, surveillance, and distance learning. Consider a group of p networked robotic pan-tilt-zoom cameras installed for public surveillance in a popular location such as Times Square in New York City. There are n different concurrent requests initiated by a variety of sources such as networked chemical sensors, online user requests, and scheduled events. Fig. 6.1 illustrates the p-frame problem: how to identify the optimal p frames that best satisfy the n different polygonal requests. We assume that the p frames have the least overlap (to be formally defined later) in coverage between the frames, and that a request is satisfied only if it is fully covered by one of the p frames. Under these assumptions, we propose a Resolution Ratio with Non-Partial Coverage (RRNPC) metric to quantify the satisfaction level for a given request with respect to a set of p candidate frames. Hence the p-frame problem is to find the optimal set of (up to p) frames that maximizes the overall satisfaction. Building on the results of the last chapter, we propose a lattice-based approximation algorithm. The algorithm builds on an induction-like approach that finds the relationship between the solution to the (p − 1)-frame problem and the solution to the p-frame problem. For a given
This chapter was presented in part at the 2008 IEEE International Conference on Robotics and Automation (ICRA), Pasadena, CA [203].
[Figure: requested regions and optimal frames.]
Fig. 6.1. An illustration of the least overlapping 3-frame problem
approximation bound ǫ, the algorithm runs in O(n/ǫ³ + p²/ǫ⁶) time. We have implemented the algorithm in Java, and the experimental results are consistent with our complexity analysis. The rest of the chapter is organized as follows. In Section 6.2, we present the related work. Section 6.3 formulates the least overlapping p-frame problem with non-partial coverage as an optimization problem. The approximation algorithm is presented in Section 6.4. Experimental results of the algorithm are presented in Section 6.5 before we conclude the chapter in Section 6.6.
6.2 Related Work

The p-frame problem is structurally similar to the p-center facility location problem, which has been proven to be NP-complete [131]. Given n request points on a plane, the task is to optimally allocate p points as service centers to minimize the maximum distance (the min-max version) between any request point and its corresponding service center. In [50], an O(n log² n) algorithm for a 2-center problem is proposed. As an extension, replacing service points by orthogonal boxes, Arkin et al. [8] propose a (1 + ǫ)-approximation algorithm that runs in O(n min(lg n, 1/ǫ) + (lg n)/ǫ²) for the 2-box covering problem. Alt et al. [5] propose a (1 + ǫ)-approximation algorithm that runs in O(n^{O(m)}), where ǫ = O(1/m), for the multiple disk covering problem. The requests in these problems are all points rather than polygonal regions as in our p-frame problem, and the objective of the p-frame problem is to maximize the satisfaction, which is not a distance metric. The p-frame problem also relates to the art gallery problem [142]. The art gallery problem is to minimize the number of security guards needed to guard an art gallery, which is usually represented by a polygon with n vertices. Each guard has a certain range of vision. The location of the guard can be represented by a point, while the reachable region of the guard can be represented by various geometric shapes. Agarwal et al. [3] consider a variation of the art gallery problem where the terrain is not planar and there are only two guards with minimal heights. They propose an exact algorithm that runs in O(n² log⁴ n)
time. In [51], Eppstein et al. propose the sculpture garden problem, where each guard has only a limited angle of visibility. They prove that the upper bound is n − 2 and the lower bound is n/2 for the number of guards needed. More results on the art gallery problem can be found in [162]. Unlike the art gallery problem, the p-frame problem does not need to cover all requests. However, the selection has to be made by maximizing the level of satisfaction of the covered requests.
6.3 Problem Definition

In this section, we formulate the p-frame problem. We begin with the definition of the inputs and outputs. Assumptions are then presented. We establish the request satisfaction metric so that we can formulate the problem as a geometric optimization problem.

6.3.1 Input and Output
The input of the problem is a set of n requests R = {ri | i = 1, 2, ..., n}. Each request is defined as ri = [Ti, zi], where Ti denotes the polygonal requested region and zi ∈ Z specifies the desired resolution level, which lies in the range Z = [z, z̄]. The only requirement on Ti is that its coverage area can be computed in constant time. A solution to the p-frame problem is a set of p camera frames. Given a fixed aspect ratio (e.g., 4:3), a camera frame can be defined as c = [x, y, z], where the pair (x, y) denotes the center point of the rectangular frame and z ∈ Z specifies the resolution level of the camera frame. The width and height of the camera frame are 4z and 3z, respectively, so the coverage area of the frame is 12z². The four corners of the frame are located at (x ± 4z/2, y ± 3z/2). Given that w and h are the camera pan and tilt ranges, respectively, C = [0, w] × [0, h] × Z defines the set of all candidate frames, and C^p is the solution space for the p-frame problem. We define any candidate solution to the p-frame problem as C^p = (c1, c2, ..., cp) ∈ C^p, where ci, i = 1, 2, ..., p, indicates the i-th camera frame in the solution. In the rest of the chapter, we use the superscript ∗ to indicate the optimal solution. The objective of the p-frame problem is to find the optimal solution C^{p∗} = (c1∗, c2∗, ..., cp∗) ∈ C^p that best satisfies the requests.

6.3.2 Nomenclature
We clarify the use of set operators such as “∩”, “⊆ ” and “∈” to represent the relationship between frames, frame sets, and requests in the rest of the chapter. • When two operands are frames or requests (e.g., ri ∈ R, cu , cv ∈ C), the set operators represent the 2-D regional relationship between them. For example, ri ⊆ cu represents that the region of ri is fully contained in that of frame cu while cu ∩ cv represents the overlapping region of frames cu and cv .
• When the operands are a frame (e.g., ci ∈ C) and a frame set (e.g., C^k ∈ C^k, k < p), we treat the frame as an element of the frame set. For example, ci ∈ C^k represents that ci is an element frame of the frame set C^k.
• When the operands are two frame sets, we use the usual set operators. For example, {c1} ⊂ C^p means that the frame set {c1} is a subset of C^p. The frame set {c1, c2} = {c1} ∪ {c2} is different from c1 ∪ c2: the former is the frame set consisting of two element frames, while the latter is the union area of the two frames.

6.3.3 Assumptions
We assume that the p frames are either taken by p cameras that share the same workspace or taken by the same camera. Therefore, if a location can be covered by one frame, the other frames can cover that location, too. We assume that the solution C^{p∗} to the p-frame problem satisfies the following condition.

Definition 1 (Least Overlapping Condition (LOC)). ∀ri, i = 1, ..., n, ∀cu ∈ C^{p∗}, ∀cv ∈ C^{p∗}, and cu ≠ cv,

    ri ⊈ cu ∩ cv.                                                     (6.1)
The LOC means that the overlap between frames is so small that no request can be fully covered by more than one frame simultaneously. The LOC forces the overall coverage of a p-frame set, ∪_{j=1}^{p} cj, to be close to the maximum. This is meaningful in applications where the cameras need to search for unexpected events while best satisfying the n existing requests, because the ability to search is usually proportional to the union of the overall coverage. Therefore, the LOC can increase the capability of searching for unexpected events. The extreme case of the LOC is that there is no overlap between camera frames.

Definition 6.1 (Non-Overlapping Condition (NOC)). Given a p-frame set C^p = (c1, c2, ..., cp) ∈ C^p (p ≥ 2), C^p satisfies the NOC if ∀u = 1, 2, ..., p, ∀v = 1, 2, ..., p, u ≠ v, cu ∩ cv = ∅.

It is not difficult to see that the NOC is a sufficient condition for the LOC. The NOC yields the maximum union coverage and is a favorable solution for applications where searching ability is important.

6.3.4 Satisfaction Metric
To measure how well a p-frame set satisfies the requests, we need to define a satisfaction metric. We extend the Coverage-Resolution Ratio (CRR) metric in [173] and propose a new Resolution Ratio with Non-Partial Coverage (RRNPC).
Definition 6.2 (RRNPC metric). Given a request ri = [Ti, zi] and a camera frame c = [x, y, z], the satisfaction of request ri with respect to c is computed as

    s(c, ri) = I(c, ri) · min(zi/z, 1),                               (6.2)

where I(c, ri) is an indicator function that describes the non-partial coverage condition,

    I(c, ri) = 1 if ri ⊆ c, and 0 otherwise.                          (6.3)

Equation (6.3) indicates that we do not accept partial coverage of a request: only the requests completely contained in a camera frame contribute to the overall satisfaction. From (6.2) and (6.3), the satisfaction of the i-th request is a scalar si ∈ [0, 1]. Based on (6.2), the satisfaction of ri with respect to a candidate least overlapping p-frame set C^p = (c1, c2, ..., cp) ∈ C^p is

    s(C^p, ri) = Σ_{u=1}^{p} I(cu, ri) · min(zi/zu, 1),               (6.4)

where zi and zu indicate the resolution values of ri and of the u-th camera frame in C^p, respectively. The LOC implies that, although (6.4) is in the form of a summation, at most one frame contains the region of request ri, and thus the non-negative s(C^p, ri) has a maximum value of 1. Therefore, the RRNPC is a standardized metric that takes both the region coverage and the resolution level into account. To simplify the notation, we use s(c) = Σ_{i=1}^{n} s(c, ri) to represent the overall satisfaction of a single frame c. We also use s(C^k) = Σ_{j=1, cj∈C^k}^{k} s(cj) to represent the overall satisfaction of a partial candidate k-frame set C^k, k < p.

6.3.5 Problem Formulation
Based on the assumptions and the RRNPC metric definition above, the overall satisfaction of a p-frame set C^p = {c1, c2, ..., cp} ∈ C^p over n requests is the sum of the satisfactions of the individual requests ri, i = 1, 2, ..., n,

    s(C^p) = Σ_{i=1}^{n} Σ_{u=1}^{p} I(cu, ri) · min(zi/zu, 1).       (6.5)

Equation (6.5) shows that the satisfaction of any candidate C^p can be computed in O(pn) time. Now we can formulate the least overlapping p-frame problem as a maximization problem,

    C^{p∗} = arg max_{C^p ∈ C^p} s(C^p).                              (6.6)
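A minimal sketch of evaluating the RRNPC objective in (6.2)–(6.5) is given below. It is illustrative only (the book's implementation is in Java); it exploits the fact that a polygon is fully contained in an axis-aligned rectangular frame exactly when all of its vertices are.

    #include <algorithm>
    #include <vector>

    struct Point   { double x, y; };
    struct Request { std::vector<Point> vertices; double z; };   // region Ti and resolution zi
    struct Frame   { double x, y, z; };                          // 4:3 frame: width 4z, height 3z

    // ri ⊆ c: every vertex of the polygon lies inside the rectangular frame.
    bool contains(const Frame& c, const Request& ri) {
        for (const Point& p : ri.vertices) {
            if (p.x < c.x - 2.0 * c.z || p.x > c.x + 2.0 * c.z ||
                p.y < c.y - 1.5 * c.z || p.y > c.y + 1.5 * c.z)
                return false;
        }
        return true;
    }

    double satisfaction(const Frame& c, const Request& ri) {     // Eqs. (6.2)-(6.3)
        if (!contains(c, ri)) return 0.0;                        // I(c, ri) = 0
        return std::min(ri.z / c.z, 1.0);                        // min(zi/z, 1)
    }

    double totalSatisfaction(const std::vector<Frame>& Cp,       // Eq. (6.5), O(pn)
                             const std::vector<Request>& R) {
        double s = 0.0;
        for (const Request& ri : R)
            for (const Frame& cu : Cp)
                s += satisfaction(cu, ri);
        return s;
    }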
6.4 Algorithm

Solving the optimization problem in (6.6) is nontrivial: enumerating all possible combinations of candidate solutions by brute force can easily take up to O(n^p) time. In this section, we present a lattice-based approximation algorithm, beginning with the construction of the lattice. To maintain the LOC in the lattice framework, we introduce the Virtual Non-Overlapping Condition (VNOC). Based on the VNOC, we analyze the structure of the approximate solution and derive the approximation bound with respect to the optimal solution that satisfies the NOC. To summarize, the lattice-based induction-like algorithm is presented at the end of the section.

6.4.1 Construction of Lattice
We construct a regular 3-D lattice, inherited from the last chapter, to discretize the solution space C^p. Let the 2-D point set V = {(αd, βd) | αd ∈ [0, w], βd ∈ [0, h], α, β ∈ N} discretize the 2-D reachable region and represent all candidate center points of rectangular frames, where d is the spacing of the pan and tilt samples. Let the 1-D point set Z = {γdz | γdz ∈ [z, z̄ + 2dz], γ ∈ N} discretize the feasible resolution range and represent all candidate resolution values for the camera, where dz is the spacing of the zoom. Therefore, we can construct the lattice as a set of 3-D points, L = V × Z. Each point c = (αd, βd, γdz) ∈ L represents the setting of a candidate camera frame. There are in total (wh/d²)(g/dz) = |L| candidate points/frames in L, where g = z̄ − z. We set dz = d/3 for cameras with an aspect ratio of 4:3, following the last chapter. What is new is that the lattice spacing, d and dz, also depends on the size of the requested regions. For any request ri ∈ R, let [xi^min, xi^max] × [yi^min, yi^max] represent the smallest rectangle that encloses the region Ti. Let us define

    λ = min(x1^max − x1^min, ..., xn^max − xn^min),
    μ = min(y1^max − y1^min, ..., yn^max − yn^min).                   (6.7)

We choose d such that

    d < min(3λ/10, μ/3).                                              (6.8)
This input-sensitive lattice setting helps us establish the LOC on the lattice, as will be discussed in Section 6.4.2. From here on, we use the symbol ˜ to denote lattice-based notation. For example, C̃^p denotes a p-frame set on the lattice L.

Definition 6.3. For any camera frame c ∈ C, c̃′ = min c̃ s.t. c̃ ∈ L and c ⊆ c̃. Hence c̃′ is the smallest frame on the lattice that fully encloses c.
We use the symbol ′ to denote the corresponding smallest frame(s) on the lattice in the rest of the chapter. From the results of the last chapter, we know that for any camera frame c = [x, y, z] and its corresponding c̃′ = [x̃′, ỹ′, z̃′], given that their coverage regions are [x_min, x_max] × [y_min, y_max] and [x̃′_min, x̃′_max] × [ỹ′_min, ỹ′_max], respectively, we have

    x_min − x̃′_min ≤ 5d/3,  x̃′_max − x_max ≤ 5d/3,
    y_min − ỹ′_min ≤ 3d/2,  ỹ′_max − y_max ≤ 3d/2.                    (6.9)

6.4.2 Virtual Non-Overlapping Condition
The NOC defined in Definition 6.1 guarantees the LOC. However, due to the limited lattice spacing, it is very difficult for candidate frames on the lattice to follow the NOC. Actually, it is unnecessary (though sufficient) to follow the NOC in order to satisfy the LOC. It is possible to allow a minimum overlap, controlled by the lattice spacing, while still guaranteeing that the LOC is satisfied, which yields the Virtual Non-Overlapping Condition (VNOC).

Definition 6.4 (Virtual Non-Overlapping Condition (VNOC)). Given any j-frame set C^j = (c1, c2, ..., cj) ∈ C^j, j = 2, 3, ..., p, C^j satisfies the VNOC if, for any two distinct frames cu, cv ∈ C^j with coverage regions [xu^min, xu^max] × [yu^min, yu^max] and [xv^min, xv^max] × [yv^min, yv^max], respectively,

    min(xu^max − xv^min, xv^max − xu^min) ≤ 10d/3  or  min(yu^max − yv^min, yv^max − yu^min) ≤ 3d.

Corollary 1. Given any two frames c1, c2 ∈ C, if {c1, c2} satisfies the VNOC, then {c1, c2} also satisfies the LOC.

Proof. Define the coverage regions of c1 and c2 to be [x1^min, x1^max] × [y1^min, y1^max] and [x2^min, x2^max] × [y2^min, y2^max], respectively. According to the definition of the VNOC and the upper bound on d defined in (6.8), we have

    min(x1^max − x2^min, x2^max − x1^min) < λ,                        (6.10)
    or min(y1^max − y2^min, y2^max − y1^min) < μ.                     (6.11)

Eqs. (6.10) and (6.11) indicate that the size of the overlapping region c1 ∩ c2, on either the x-axis or the y-axis, is less than the size of the smallest request. This guarantees that no requested region is fully contained in the overlapping region. Therefore, the LOC is satisfied.

Lemma 1. Given any two frames c1, c2 ∈ C such that {c1, c2} satisfies the VNOC, then

    s({c1, c2}) = s(c1) + s(c2).                                      (6.12)

Proof. From Corollary 1, {c1, c2} satisfies the LOC. From the definition of the LOC and the RRNPC satisfaction metric defined in (6.2), the conclusion follows.
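A pairwise VNOC test following Definition 6.4 can be sketched as follows (illustrative C++ with assumed names; a 4:3 frame c = (x, y, z) covers [x − 2z, x + 2z] × [y − 1.5z, y + 1.5z]).

    #include <algorithm>

    struct Frame { double x, y, z; };

    bool pairSatisfiesVNOC(const Frame& cu, const Frame& cv, double d) {
        double xuMin = cu.x - 2 * cu.z,   xuMax = cu.x + 2 * cu.z;
        double xvMin = cv.x - 2 * cv.z,   xvMax = cv.x + 2 * cv.z;
        double yuMin = cu.y - 1.5 * cu.z, yuMax = cu.y + 1.5 * cu.z;
        double yvMin = cv.y - 1.5 * cv.z, yvMax = cv.y + 1.5 * cv.z;
        bool xCond = std::min(xuMax - xvMin, xvMax - xuMin) <= 10.0 * d / 3.0;
        bool yCond = std::min(yuMax - yvMin, yvMax - yuMin) <= 3.0 * d;
        return xCond || yCond;   // overlap too small to fully contain any request
    }

Because the VNOC is a pairwise condition, a j-frame set can be tested by applying this predicate to every pair of its frames.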
6.4.3 Approximation Solution Bound
The construction of the lattice allows us to search for the best p frames on the lattice, which yields an approximate solution. Furthermore, the VNOC and Lemma 1 assist us in deriving the approximation bound.

Lemma 2. For any two frames c1, c2 ∈ C, if {c1, c2} satisfies the NOC, then {c̃′1, c̃′2} satisfies the VNOC.

Proof. Define the coverage regions of c1 and c2 to be [x1^min, x1^max] × [y1^min, y1^max] and [x2^min, x2^max] × [y2^min, y2^max], respectively. Let the coverage regions of c̃′1 and c̃′2 be [x̃′1^min, x̃′1^max] × [ỹ′1^min, ỹ′1^max] and [x̃′2^min, x̃′2^max] × [ỹ′2^min, ỹ′2^max], respectively. {c1, c2} satisfies the NOC, which indicates

    min(x1^max − x2^min, x2^max − x1^min) ≤ 0,  or  min(y1^max − y2^min, y2^max − y1^min) ≤ 0.

Substituting the inequalities above into (6.9), we have

    min(x̃′1^max − x̃′2^min, x̃′2^max − x̃′1^min) ≤ 10d/3,  or  min(ỹ′1^max − ỹ′2^min, ỹ′2^max − ỹ′1^min) ≤ 3d,

which satisfies the VNOC.

Given the optimal solution C^{p∗} = (c1∗, c2∗, ..., cp∗) for the optimization problem defined in (6.6) that satisfies the NOC, there is a solution on the lattice, C̃′^{p∗} = (c̃′1∗, c̃′2∗, ..., c̃′p∗), whose element frames are the corresponding smallest frames on the lattice that contain those of C^{p∗}. Lemma 2 implies that C̃′^{p∗} exists and satisfies the VNOC. However, how good is this solution in comparison to the optimal solution? We define the approximation bound ǫ, which characterizes the ratio of the approximate solution to the optimal solution,

    s(C̃′^{p∗})/s(C^{p∗}) ≥ 1 − ǫ.                                     (6.13)
Based on Lemma 1 and Theorem 1 in the last chapter, we have

    s(C̃′^{p∗})/s(C^{p∗}) ≥ 1 − 2dz/(z + 2dz).                         (6.14)

Let C̃^{p∗} denote the optimal p-frame set on the lattice. Since C̃′^{p∗} is one of the p-frame sets on the lattice, we have

    s(C̃^{p∗})/s(C^{p∗}) ≥ s(C̃′^{p∗})/s(C^{p∗}) ≥ 1 − 2dz/(z + 2dz).   (6.15)

Equation (6.15) implies that we can use C̃^{p∗} as the approximate solution to the optimal solution. Let the approximation bound be

    ǫ = 2dz/(z + 2dz).                                                (6.16)
Solving (6.16) and combining the upper bound on d in (6.8), we have

    d = 3dz = min((3/2)(ǫ/(1 − ǫ))z, min(3λ/10, μ/3)).                (6.17)

Equation (6.17) indicates that when ǫ → 0,

    d = 3dz = (3/2)(ǫ/(1 − ǫ))z.                                      (6.18)

Eqs. (6.16) and (6.18) imply that we can control the quality of the approximate solution by tuning the lattice spacing d. On the other hand, based on the lattice structure and the definition of the approximation bound, we know that the number of candidate points/frames on the lattice is

    |L| = O(1/ǫ³).                                                    (6.19)
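As a quick numerical check (not from the original text), take z = 5 and ǫ = 0.25, the values used in the experiments of Section 6.5:

\[
d = 3d_z = \min\!\left(\frac{3}{2}\cdot\frac{0.25}{1-0.25}\cdot 5,\ \frac{3\lambda}{10},\ \frac{\mu}{3}\right) = \min\!\left(2.5,\ \frac{3\lambda}{10},\ \frac{\mu}{3}\right),
\]

so unless the smallest requested region is narrower than about 8.3 units (λ < 25/3) or shorter than 7.5 units (μ < 7.5), the ǫ-driven term fixes the spacing at d = 2.5.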
Lattice-Based Algorithm
With the approximation bound established, the remaining task is to search C˜ p∗ on L. We design an induction-like approach that builds on the relationship between the solution to the (p − 1)-frame problem and the solution to the p-frame problem. The key elements that establish the connection are Conditional Optimal Solution (COS) and Conditional Optimal Residual Solution (CORS). ˜j (˜ c) = Definition 6.5 (Conditional Optimal Solution). ∀˜ c ∈ L, the COS, U j∗ j∗ ˜ ˜ {C |˜ c ∈ C }, is defined as the optimal j-frame set, j = 1, 2, ..., p, for the j˜j (˜ c) satisfies the frame problem that must include c˜ in the solution set. Also, U VNOC. Therefore, we can obtain the optimal solution, C˜ p∗ , on the lattice by searching c˜ over L and its corresponding COS, ˜p (˜ C˜ p∗ = U c∗ ),
(6.20)
where c̃∗ = arg max_{c̃∈L} s(Ũ_p(c̃)).

Definition 6.6 (Conditional Optimal Residual Solution). Given any COS Ũ_{j+1}(c̃), j = 0, 1, ..., p − 1, we define the j-frame CORS with respect to c̃ as Q̃_j(c̃) = Ũ_{j+1}(c̃) − {c̃}.

Corollary 2. Q̃_j(c̃) is the optimal j-frame set that satisfies
• c̃ ∉ Q̃_j(c̃),
• {c̃} ∪ Q̃_j(c̃) satisfies the VNOC.

What is interesting is that the CORS allows us to establish the relationship between Q̃_j and Q̃_{j−1}.
Lemma 3.

    Q̃_j(c̃_u) = Q̃_{j−1}(c̃∗) ∪ {c̃∗},                                   (6.21)

where c̃∗ = arg max_{c̃∈L} s(Q̃_{j−1}(c̃) ∪ {c̃}), subject to the constraint that {c̃_u, c̃} ∪ Q̃_{j−1}(c̃) satisfies the VNOC.

Proof. We prove the lemma by contradiction. Notice that the right-hand side of (6.21) returns one of the j-frame sets that satisfy the two conditions in Corollary 2, while the left-hand side is defined to be the optimal j-frame set that satisfies the same two conditions. Therefore, if we assume that (6.21) does not hold, the only possibility is

    s(Q̃_j(c̃_u)) > s(Q̃_{j−1}(c̃∗) ∪ {c̃∗}).                             (6.22)

Take an arbitrary frame c̃_v ∈ Q̃_j(c̃_u) out of Q̃_j(c̃_u); the result is Q̃_j(c̃_u) − {c̃_v}, and according to Lemma 1 we have

    s(Q̃_j(c̃_u) − {c̃_v}) = s(Q̃_j(c̃_u)) − s(c̃_v).                      (6.23)

Take c̃_v out of Q̃_{j−1}(c̃_v) ∪ {c̃_v}; the result is Q̃_{j−1}(c̃_v) and

    s(Q̃_{j−1}(c̃_v)) = s(Q̃_{j−1}(c̃_v) ∪ {c̃_v}) − s(c̃_v).              (6.24)

Based on (6.22) and the fact that s(Q̃_{j−1}(c̃∗) ∪ {c̃∗}) ≥ s(Q̃_{j−1}(c̃_v) ∪ {c̃_v}), we have

    s(Q̃_j(c̃_u)) > s(Q̃_{j−1}(c̃_v) ∪ {c̃_v}).                           (6.25)

Removing c̃_v from the frame sets on both sides and combining with (6.23) and (6.24), respectively, we have

    s(Q̃_j(c̃_u) − {c̃_v}) > s(Q̃_{j−1}(c̃_v)).                           (6.26)

The frame set on the right-hand side of (6.26), Q̃_{j−1}(c̃_v), is defined to be the optimal (j − 1)-frame set that satisfies the two conditions in Corollary 2, while the frame set on the left-hand side, Q̃_j(c̃_u) − {c̃_v}, is only one of the (j − 1)-frame sets that satisfy the two conditions. This is a contradiction.

It is worth mentioning that it takes O(p) time to check whether {c̃, c̃_u} ∪ Q̃_j(c̃) satisfies the VNOC, because {c̃} ∪ Q̃_j(c̃) = Ũ_{j+1}(c̃) satisfies the VNOC as defined in Definition 6.5, and thus we only need to check whether {c̃_u} ∪ Ũ_{j+1}(c̃) satisfies the VNOC, which takes O(p) time.

Equation (6.20) implies that we can obtain the approximate solution C̃^{p∗} from Ũ_p. Definition 6.6 indicates that we can obtain Ũ_p from Q̃_{p−1}. Now Lemma 3 implies that we can construct Q̃_j from Q̃_{j−1}, j = 1, 2, ..., p − 1. Considering the fact that Q̃_0 = ∅, this allows us to establish the algorithm using an induction-like approach. Algorithm 1 shows the complete lattice-based algorithm. Considering
any candidate frame c̃ ∈ L, we pre-calculate the satisfaction values for all |L| candidate frames and store them in a lookup table to avoid redundant calculation. Given any candidate frame c̃_u ∈ L as the input, the lookup function l returns the satisfaction value of c̃_u, l(c̃_u) = s(c̃_u). We implement the lookup function using an array, l[u] = s(c̃_u).

Algorithm 1. Lattice-based Algorithm
begin
    for j ← 1 to |L| do                                                  O(1/ǫ³)
        l[j] = s(c̃_j);                                                   O(n)
        Q̃_0(c̃_j) = ∅;                                                    O(1)
        s(Q̃_0(c̃_j)) = 0;                                                 O(1)
    end
    for k ← 1 to p do                                                    O(p)
        C̃^{k∗} = ∅;                                                      O(1)
        s(C̃^{k∗}) = 0;                                                   O(1)
        for u ← 1 to |L| do                         (update C̃^{k∗})      O(1/ǫ³)
            if s(C̃^{k∗}) < s(Q̃_{k−1}(c̃_u)) + l[u] then
                C̃^{k∗} = Q̃_{k−1}(c̃_u) ∪ {c̃_u};                           O(1)
                s(C̃^{k∗}) = s(Q̃_{k−1}(c̃_u)) + l[u];                      O(1)
            end
        end
        for u ← 1 to |L| do                         (update Q̃_k(c̃_u))    O(1/ǫ³)
            Q̃_k(c̃_u) = Q̃_{k−1}(c̃_u) ∪ ∅;                                 O(1)
            s(Q̃_k(c̃_u)) = s(Q̃_{k−1}(c̃_u));                               O(1)
            for v ← 1 to |L| do                                          O(1/ǫ³)
                if s(Q̃_k(c̃_u)) < s(Q̃_{k−1}(c̃_v)) + l[v] AND
                   {c̃_u, c̃_v} ∪ Q̃_{k−1}(c̃_v) satisfies the VNOC          O(p)
                then
                    Q̃_k(c̃_u) = Q̃_{k−1}(c̃_v) ∪ {c̃_v};                     O(1)
                    s(Q̃_k(c̃_u)) = s(Q̃_{k−1}(c̃_v)) + l[v];                O(1)
                end
            end
        end
    end
    return C̃^{p∗};
end

From the pseudocode of Algorithm 1, it is not difficult to see that
Theorem 1. Algorithm 1 runs in O(n/ǫ³ + p²/ǫ⁶) time.
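To make the induction concrete, the following is a hypothetical C++ skeleton of the main loops of Algorithm 1 (the book's implementation is in Java; the data layout, function names, and the vnocOK helper are assumptions made for illustration).

    #include <functional>
    #include <vector>

    using FrameSet = std::vector<int>;   // lattice-point indices of the chosen frames

    // l[u] = precomputed s(c_u); vnocOK(u, S) returns true iff {c_u} ∪ S satisfies the VNOC.
    FrameSet solvePFrame(int L, int p, const std::vector<double>& l,
                         const std::function<bool(int, const FrameSet&)>& vnocOK) {
        std::vector<FrameSet> Q(L);             // Q[u] = CORS Q_{k-1}(c_u)
        std::vector<double> sQ(L, 0.0);
        FrameSet best;                          // current C^{k*}
        double sBest = 0.0;

        for (int k = 1; k <= p; ++k) {
            best.clear(); sBest = 0.0;          // pick C^{k*} from the (k-1)-frame CORS
            for (int u = 0; u < L; ++u) {
                if (sQ[u] + l[u] > sBest) {
                    best = Q[u]; best.push_back(u);
                    sBest = sQ[u] + l[u];
                }
            }
            std::vector<FrameSet> Qnext = Q;    // Q_k(c_u) starts as Q_{k-1}(c_u)
            std::vector<double> sQnext = sQ;
            for (int u = 0; u < L; ++u) {
                for (int v = 0; v < L; ++v) {
                    FrameSet cand = Q[v]; cand.push_back(v);   // Q_{k-1}(c_v) ∪ {c_v}
                    if (sQ[v] + l[v] > sQnext[u] && vnocOK(u, cand)) {
                        Qnext[u] = cand;
                        sQnext[u] = sQ[v] + l[v];
                    }
                }
            }
            Q.swap(Qnext); sQ.swap(sQnext);
        }
        return best;                            // approximate C^{p*} on the lattice
    }

The nested u–v loops perform |L|² = O(1/ǫ⁶) iterations with an O(p) VNOC check each, repeated p times, which is where the O(p²/ǫ⁶) term in Theorem 1 comes from; the satisfaction precomputation contributes the O(n/ǫ³) term.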
6.5 Experimental Results

We have implemented the algorithm in Java. The computer used is a desktop with an Intel Core 2 Duo 2.13 GHz CPU and 2 GB RAM; the operating system is Windows XP.
Fig. 6.2. The computation time vs. the number of requests n, (p = 2, ǫ = 0.25)
In the experiments, we test the algorithm speed under different parameter settings, including the number of requests n, the number of camera frames p, and the approximation bound ǫ. Both triangular and rectangular inputs are randomly generated. First, sd points in V are uniformly generated across the reachable field of view. These points indicate the locations of interest and are referred to as seeds. Each seed is associated with a random radius of interest. To generate a request, we randomly assign it to one seed. For a triangular request, three 2-D points are randomly generated within the radius of the corresponding seed as the vertices of the triangle. For a rectangular request, a 2-D point is randomly generated as the center of the rectangular region within the radius of the corresponding seed, and then two random numbers are generated as the width and height of the request. Finally, the resolution value of the request is uniformly randomly generated across the resolution range [z, z̄]. Across the experiments, we set w = h = 100, z = 5, z̄ = 20, and sd = 4. For each parameter setting, 50 trials were carried out to average the performance. Fig. 6.2 illustrates the relationship between n and the computation time while the other parameters are fixed. The computation time is O(n) in the figure.
Fig. 6.3. The computation time vs. the number of frames p, (n = 100, ǫ = 0.25)
Fig. 6.4. The computation time vs. the approximation bound ǫ, (n = 100, p = 2)
(a) p = 1, s = 4.21;  (b) p = 2, s = 6.32;  (c) p = 3, s = 8.11;  (d) p = 4, s = 9.07
Fig. 6.5. Sample outputs when p increases for a fixed input set n = 10
Fig. 6.3 illustrates the relationship between p and the computation time while the other parameters are fixed. Fig. 6.4 illustrates the relationship between ǫ and the computation time when the other parameters are fixed. The results in Figs. 6.2, 6.3, and 6.4 are consistent with our analysis. Fig. 6.5 shows how the output of the algorithm for a fixed set of inputs (n = 10) changes as p increases from 1 to 4. It shows that our algorithm reasonably allocates camera frames in each case.
6.6 Conclusion and Future Work

We formulated the least overlapping p-frame problem with non-partial coverage as an optimization problem. A lattice-based approximation algorithm was proposed for solving the problem. Given n requests and p camera frames, the algorithm runs in O(n/ǫ³ + p²/ǫ⁶) time with approximation bound ǫ. We have implemented the algorithm and tested it on random inputs. The experimental results are consistent with our theoretical analysis. There is a rich set of extension problems that we can explore in the future. We will explore new geometric data structures to improve the complexity results. The existing implementation is based on Java, which is not fast enough for field applications; we will re-implement the algorithm in C++ with new data structures for better performance. We will also develop algorithms for different variations of the problem, such as allowing camera frames to overlap with each other, and will develop new standardized satisfaction metrics, perform complexity analysis, and present exact algorithms for those variations.

We have shown three different versions of frame selection problems for robotic pan-tilt-zoom cameras. The algorithms are all based on two assumptions: 1) there exists a centralized coordinator, and 2) inputs and outputs can be represented as geometric objects. However, neither of the assumptions holds for the tele-actor system in Chapter 3. The tele-actor has the freedom to perform high-level tasks instead of low-level motion commands. The decision space has infinite degrees of freedom and cannot be explicitly characterized as a simple optimization problem.
7 Unsupervised Scoring for Scalable Internet-Based Collaborative Teleoperation⋆
As mentioned in Chapter 3, the SDV voting interface is used to explore group consensus to guide the actions of the human tele-actor. While voting is an effective way to identify a group consensus, the mechanism itself cannot encourage user participation in the collaborative decision process. There is a strong need for the system to provide incentives for active participation, because the quality of the group decision depends on the participation rate. This chapter develops a scoring mechanism to encourage participation and studies whether there is a positive correlation between the scoring mechanism and the group decision quality.
7.1 Introduction

In Tele-Actor field tests with students ranging from middle school to high school, we discovered that students would often become passive, simply watching the video without participating. Instructors asked for a mechanism to quantify student engagement for grading purposes. We realized we could address both issues by introducing a scoring metric that would continually assess students and provide a competitive element to the interactions. This chapter describes an unsupervised scoring metric and algorithms for rapidly computing it. The metric is "unsupervised" in the sense that it does not rely on a human expert to continuously evaluate performance. Instead, performance is based on "leadership": how quickly users anticipate the decision of the majority. As illustrated in Figure 7.1, the unsupervised scoring metric is based on clustering of user inputs. For n users, computing the scores takes O(n) time. This chapter presents the problem formulation, distributed algorithms, and experimental results.
This chapter was presented in part at the 2004 IEEE International Conference on Robotics and Automation (ICRA), New Orleans, LA [74].
Fig. 7.1. Above, (a) illustrates a sample view from our scalable user interface ("Which item should we examine?"), where spatial inputs from users are represented with colored square markers, in this case indicating points of interest in a live video image. The unsupervised scoring metric assigns a scalar value to each input. Below, (b) illustrates the aggregate (consensus) density distribution that is used to automatically compute scores based on vote arrival time.
7.2 Related Work

Outside of robotics, the notion of MOSR is related to a very broad range of group activities including computer gaming, education, social psychology, voting, etc. Although there currently are no standard "best practices" for the design of scoring systems in the computer games industry, some tentative efforts have recently been made to identify the key features of an effective scoring and ranking model. GamaSutra, the flagship professional resource for digital game designers from around the world, has proposed four characteristics of a successful scoring system. It should be reproducible (consistent), comprehensive (incorporating all significant aspects of game play), sufficiently detailed (so that players understand how it is calculated), and well-balanced (allowing players to stay competitive even when playing from a disadvantage) [111]. Kriemeier suggests that a failure to address each of these four principles will negatively affect user participation and motivation. Proposed customizations to further increase player motivation in a system with these four standard features include introducing non-zero-sum (cooperative) scoring elements and a score-decay algorithm that penalizes players for a lower rate of participation in the game. The Tele-Actor unsupervised scoring model aims to incorporate these features, and to investigate their value and effectiveness from a user perspective. Two different areas in educational psychology investigate the usefulness of games in learning. The first addresses the element of "fun" in games as a powerful motivational tool [65, 88]. In particular, cooperative play models are thought to provide particularly powerful evidence of games as tools of engagement; as Sutton-Smith notes [177], children and young adults are so motivated to be accepted in such play that they make sacrifices of egocentricity for membership in the group, a claim that may be tested by the unsupervised scoring model. Csikszentmihalyi's theory of "flow" [42], a confidence-building state in which a participant becomes absorbed in and highly effective at a particular activity, argues for the importance of feedback as the primary criterion for achieving flow. According to Csikszentmihalyi, flow creates a sense of motivation that is intrinsic to the activity, rather than relying on extrinsic rewards (prizes, grades, acclaim), and intrinsic motivation is ultimately more important for long-term success in any activity. For example, Gee [62] argues that students today are used to experiencing flow and achieving a sense of mastery in their at-home game play, but not in school. Many educators are considering adopting more game-like scenarios for learning as a way to incorporate students' proclivity for structured play. Group decision-making is sometimes modeled as an n-person (multi-player) cooperative game [197, 11]. We can view the unsupervised scoring system as a non-zero-sum multi-player game in which users compete and cooperate to increase their individual leadership scores. The open-endedness of the n-person, non-zero-sum scenario makes its outcome the most theoretically challenging to predict in game theory. The data collected on user behavior in the form of leadership scores may be useful in evaluating a variety of predictive models
Fig. 7.2. The Spatial Dynamic Voting (SDV) interface as viewed in the browser by each online voter. Low framerate live video (approx 1 fps) is displayed in the left window. Users vote on spatial preferences using their mouse to position a small marker (votel) in the middle (voting) window, over either a prestored or live image. Users view the position of all votels and can change their votel positions based on group dynamics. As described below, votel positions are processed to assign scores to each user. The list of active voters, ranked by score is displayed in the right window. The lower right window displays a plot of voter score, overall average, and scores for the top three voters.
for n-person games and the associated likelihood and structures of cooperative behavior in real life [12, 126, 160]. A related question is: how do players learn game strategies and adapt their game-play behavior over time? The study of adaptive learning models in games attempts to develop a theory of "game cognition" that explains why people do not always discover or follow optimal strategies in game play [125]. Erev and Roth [52], for instance, argue for the importance of the presence or absence of reinforcement and feedback within a repetitive game structure as the most important factor in predicting player action. By providing feedback in the form of a leadership score and simultaneously tracking changes in scores over time, we can quantify how well scores correlate with group performance.
7.3 Problem Definition

Figure 7.2 illustrates the user interface. In this section, we propose an unsupervised scoring metric based on the spatial distributions of votes.

7.3.1 Inputs and Assumptions
Consider the kth voting image. The server receives a response from user i in the form of an (x, y) mouseclick on image k at time t. We define the corresponding votel: vik (t) = [xik (t), yik (t)].
Fig. 7.3. An example of voter interest functions, the corresponding majority interest function, and an illustration of consensus region generation for the voting image in Figure 7.2
Each votel represents a user's response ("vote") to the voting image. We model such responses with a voter interest function, a density function based on the bivariate normal distribution: fik(x, y) ∼ N(vik(t), Σik(t)), where vik(t) is the mean vector and Σik(t) is a 2 × 2 covariance matrix, such that

    ∫∫_σ fik(x, y) dx dy = 1,

where σ is the area of the voting image. Since σ is a bounded 2D region, the voter interest function is a truncated bivariate normal density function with mean at vik(t), as illustrated in Figure 7.3(a).

Majority Interest Function

When voting on image k ends at stopping time T, the last votel received from each of the n active voters determines Vk, a set of n votels. We define the ensemble interest function for voting image k as the normalized sum of these voter interest functions,

    fk(x, y) = (1/n) Σ_{i=1}^{n} fik(x, y).

Consensus Regions

As illustrated in Figure 7.3(c), we can extract spatial regions by cutting the ensemble interest function with a horizontal plane at a height proportional to the overall volume. Let zk be the cutting threshold. The value of zk satisfies the condition that the ratio between the partial volume of the ensemble interest function above the horizontal plane and the total volume of the ensemble interest function is a constant r. We use a value of r = 0.10 (10% of the volume lies above the plane). The cutting plane defines an iso-density contour in the ensemble interest function that defines a set of one or more closed subsets of the voting image,

    Sk = {(x, y) | fk(x, y) ≥ zk}.
As illustrated in Figure 7.3(c), we refer to these subsets as consensus regions: Sk = {C1k, C2k, ..., Clk}. Since there are n voters, the number of consensus regions is l ≤ n.

Majority Consensus Region

Given Sk and Vk, the majority consensus region is the region with the most votels (breaking ties arbitrarily). Let

    Ik(i, j) = 1 if [xik(T), yik(T)] ∈ Cjk, and 0 otherwise.

The count

    nkj = Σ_{i=1}^{n} Ik(i, j)

is the number of votels inside consensus region j of voting image k. Breaking ties arbitrarily, let Ck∗, the majority consensus region, be any Cjk with maximum nkj.

7.3.2 Unsupervised Scoring Metric
We measure individual performance in terms of "leadership". By definition, a "leader" anticipates the choices of the group. In our context, a leader is an individual who votes early in a position consistent with the majority consensus region. Define Is,i as an outcome index for voter i and voting image s:

    Is,i = 1 if [xi,s(Ts), yi,s(Ts)] ∈ Cs∗, and 0 otherwise.

Define ts,i as the duration of time that voter i's votel stays in the majority interest region for the sth voting image, and let Ts be the total voting time for voting image s. The term ((Ts − ts,i)/Ts) · Is,i then characterizes how well the voter anticipated the majority consensus region for voting image s. To smooth out rapid changes in user scores, we pass this term through the following low-pass filter to obtain a stabilized "leadership score":

    Lk+1,i = (1 − α)Lk,i + α ((Tk − tk,i)/Tk) · Ik,i,

where the initial value is L0,i = 0 for each voter i. The filter factor α is set to 0.1 in our experiments to allow smooth fluctuation in user scores. Note that this scoring metric depends only on the spatio-temporal pattern of votes and does not require a human expert.
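A one-line sketch of this update (illustrative, with assumed parameter names) is:

    // Ik and tk are I_{k,i} and t_{k,i} as defined above; Tk is the total voting
    // time for image k; alpha = 0.1 is the low-pass filter factor.
    double updateLeadership(double Lprev, bool Ik, double tk, double Tk,
                            double alpha = 0.1) {
        double term = Ik ? (Tk - tk) / Tk : 0.0;      // ((T_k - t_{k,i})/T_k) * I_{k,i}
        return (1.0 - alpha) * Lprev + alpha * term;  // L_{k+1,i}
    }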
7.4 Distributed Algorithm

To compute the unsupervised score for each user, we start by maintaining the ensemble interest function on a grid. We partition the voting image into 160 × 160 regular cells. We discretize each voter interest function fik(x, y) into a 2D array with respect to the same lattice resolution. Depending on the variance of the Gaussian function and the accuracy threshold, each such 2D array contains only a constant number of non-zero entries. Therefore, to compute the ensemble interest function for n votes, we add those n 2D arrays into the cells; this operation takes O(n) time. Figure 7.3(b) shows the shape of the ensemble interest function for the voter interest function in Figure 7.3(a). As illustrated in Figure 7.3(c), we can extract spatial regions by cutting the ensemble interest function with a horizontal cutting plane. The next step is to compute the height zk of the cutting plane for the given volume ratio r. Define r(z), z > 0, to be the ratio between the partial volume of the ensemble interest function above the cutting plane at height z and its total volume. As z increases, the horizontal plane rises, so r(z) is a monotonically decreasing function of z. A binary search can find zk in O(log(1/ε)) steps with error ε. Each step requires computing a partial volume, which takes at most O(m) time for m cells in the grid. Therefore, the complexity of computing the threshold is O(m log(1/ε)); if we treat ε as a constant, it is O(m). The second step is to threshold the ensemble function and find connected components, each of which forms a consensus region. The connected-components algorithm makes two passes through the 2D array, processing a row of cells at a time from left to right. During the first pass, each cell is assigned the minimum label of its neighbors or, if none exists, a new label. If neighboring cells have different labels, those labels are considered equivalent and entered into an equivalence table. The second pass uses the equivalence table to assign the minimum equivalent label to each non-zero entry. Since this step also takes O(m) time, the total computation time for computing the consensus regions of a given ensemble interest function is O(m). To determine the majority consensus region, we need to count the number of votels inside each consensus region. For this purpose, each cell maintains a votel count that is updated during votel insertions. In a single pass, we can sum the number of votels in the cells belonging to each consensus region; this step also takes O(m) time. Adding the computation times for the ensemble interest function, the consensus regions, the majority consensus region, and the leadership scores, the total computation time is O(n + m). The algorithm has been implemented and runs on the client side in Java. We tested the algorithm using a 750 MHz PC laptop with 256 MB RAM. Figure 7.4 illustrates the linear scalability of the algorithm in the number of voters. The Tele-Actor server is primarily responsible for distributing voting data to all users. Each client sends the user's voting data and receives updated voting data from the Tele-Actor server every 1.0 seconds. For every server update, at most n new voter interest functions need to be inserted.
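The two-pass labeling described above can be sketched as follows; this is an illustrative C++ version with a union-find equivalence table, not the original Java client code.

    #include <algorithm>
    #include <vector>

    // mask[r][c] is true when the ensemble interest function exceeds the threshold z_k.
    // Returns a label image where cells of the same consensus region share a label (> 0).
    static int findRoot(std::vector<int>& parent, int a) {
        while (parent[a] != a) { parent[a] = parent[parent[a]]; a = parent[a]; }
        return a;
    }

    std::vector<std::vector<int>> labelRegions(const std::vector<std::vector<bool>>& mask) {
        int rows = (int)mask.size(), cols = rows ? (int)mask[0].size() : 0;
        std::vector<std::vector<int>> label(rows, std::vector<int>(cols, 0));
        std::vector<int> parent(1, 0);                   // equivalence table (union-find)

        for (int r = 0; r < rows; ++r) {                 // pass 1: provisional labels
            for (int c = 0; c < cols; ++c) {
                if (!mask[r][c]) continue;
                int up = (r > 0) ? label[r - 1][c] : 0;
                int left = (c > 0) ? label[r][c - 1] : 0;
                if (!up && !left) {                      // no labeled neighbor: new label
                    parent.push_back((int)parent.size());
                    label[r][c] = (int)parent.size() - 1;
                } else {
                    int l = (up && left) ? std::min(up, left) : std::max(up, left);
                    label[r][c] = l;
                    if (up && left && up != left)        // record equivalence
                        parent[findRoot(parent, std::max(up, left))] = findRoot(parent, l);
                }
            }
        }
        for (int r = 0; r < rows; ++r)                   // pass 2: resolve equivalences
            for (int c = 0; c < cols; ++c)
                if (label[r][c]) label[r][c] = findRoot(parent, label[r][c]);
        return label;
    }

One additional pass over the labeled cells, summing the per-cell votel counts by label, then identifies the majority consensus region.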
Fig. 7.4. Processing time for computing the unsupervised scoring metric as a function of the number of voters based on trials using random voter positions
Using the updated interest functions, the consensus regions and the majority consensus region are recomputed. When a voting cycle ends, each voter computes only that voter's new leadership score, thus distributing the scoring calculations among the clients.
7.5 The "Tele-Twister" Application

To understand how the unsupervised scoring metric works with groups of participants and to test software reliability, we developed a collaborative teleoperation system based on a popular party game. The result is a sequence of multi-player non-zero-sum games embedded inside a sequence of two-player zero-sum games. Twister was the first board game where human bodies are the board pieces. In this classic game, human players interact over a large horizontal playing board. Players sequentially place their hands and feet on colored circular targets chosen randomly (e.g., left foot: GREEN). The challenge for players is to maintain placement of hands and feet in increasingly difficult combinations without falling over. In our version, Tele-Twister, there are two human players called "twisters". One twister is dressed in red, the other in blue, as illustrated in Figure 7.5. Remote participants ("voters") download the Java applet and are assigned to one of two teams, red or blue. In Tele-Twister, random target selection is replaced by the teams, who view the game status using the low framerate video and vote using the interface to collectively decide on the targets, competing to make their opponent fall over first. A typical voting pattern is illustrated in Figure 7.5(a). Votel positions for all online voters are updated every second and displayed in each voter's browser. Consensus regions are computed and updated continuously by each browser. Voters are free to change their votel positions at any time during the voting cycle, but when it ends, the majority cluster determines the next move for the human twisters.
Fig. 7.5. Tele-Twister: a collaborative teleoperation system where online voters direct the movements of human players
Tele-Twister is thus strategic: the red team chooses targets that are easy for the red twister and difficult for the blue twister, and vice versa for the blue team. Tele-Twister encourages active collaboration within teams and competition between teams. In this context, the unsupervised scoring metric rewards active participation and collaboration (voters who do not vote or are outside the majority region at the end of a voting cycle receive a zero score). Figure 7.5(b) shows the board layout, with 16 circles: red, blue, yellow, and green. Since August 12, 2003, we have conducted public "field tests" on Fridays from 12 to 1 pm Pacific Time. These field tests attract 10–25 online participants, who vote in alternating 30-second voting periods until one twister falls, ending the round. Typically, each round continues for 10–20 voting periods, so we conduct 4–5 rounds during each field test. Figures 7.5(c) and (d) are two snapshots from a typical round. Figure 7.6 shows scoring data from the field test on Friday, September 26, 2003. All players begin the field test with a score of zero, so the average score per team climbs during the initial voting cycles: the blue team wins the first round after approximately 23 voting cycles. How do user scores correlate with task performance (winning the round)? Note that in the four subsequent rounds, the team with the highest average score consistently wins the round: the red team wins rounds 2, 3, and 5, and the blue team wins round 4. During round 4, members of the red team had difficulty agreeing on the appropriate next move, reducing the scores of many red voters and, in turn, the average red team score.
Fig. 7.6. Plot of unsupervised scoring metric from the Sept 26 2003 field test with two teams of 21 online voters, for five rounds of the Tele-Twister game. The figure plots average score for members of each team during sequential voting cycles. Vertical bars indicate the end of a round and which team wins. A solid vertical line indicates that the blue team won that round and a dashed vertical line indicates that the red team won.
The lack of consensus (i.e., "split votes") resulted in a loss during that round. A team has higher average scores when it collaborates, reaching consensus faster. This does not always correlate with success: it can lead to short-term snap decisions that may appear strong but are strategically weak. Figure 7.7 plots individual voter scores from the same field test. Note that the score of voter 13 is consistently higher. In this case, voter 13 is a member of our lab who has played the game during many previous rounds and has developed skill at picking the next moves. Other players follow his moves, resulting in a high score.
7.6 Conclusion and Future Work
This chapter describes an unsupervised scoring metric for collaborative teleoperation that encourages active participation and collaboration, and a distributed algorithm for automatically computing it. To understand how the scoring metric works with groups of participants and to test software reliability, we developed a collaborative teleoperation system based on a sequence of multi-player non-zero-sum games embedded inside a sequence of two-player zero-sum games.
Fig. 7.7. From the same Sept 26 2003 field test, plot of unsupervised scoring metric for seven individual voters from the Blue team
Initial results suggest that the metric encourages active participation and correlates reasonably with task performance.
7.7 Closure
In this part, we have introduced four different algorithmic developments for CTRC systems. We have presented frame selection algorithms and the unsupervised scoring mechanism to combine user inputs into a single control stream. If we view the collaborative control problem as a special case of collective decision making, we immediately find a rich set of research problems. Popular decision-making methods such as optimization, auctions, game theory, and voting have many variations, and those variations might have different impacts on the teleoperation of the robotic device.
We have also touched on an important issue in collaborative teleoperation: user incentive. In an SOSR teleoperation system, user incentive is usually not a problem because robot performance directly relates to the quality of the master’s commands; feedback on the success of an operation is direct. However, this is not the case in an MOSR system. Since the system has to combine user inputs to determine the robot action, there is no direct link between an individual user’s input and the collective system output; the missing feeling of “being in control” breaks down the link. How to design a feedback mechanism that makes individual user incentives compatible with group decision quality is an interesting and challenging problem in itself.
8 Projection Invariants for Pan-Tilt-Zoom Robotic Cameras⋆
Successfully constructing and deploying a CTRC system requires us to address the research problems that arise in the deployment process. Part III classifies these problems into three categories,
• fast processing of the data from the robotic camera,
• automatic calibration of the robotic camera, and
• on-demand transmission of the video data.
Each of these problems is addressed in detail in a chapter of Part III. This chapter focuses on the first problem. The output of a CTRC system is video frames taken at various pan-tilt-zoom settings. How to quickly analyze the data and establish the correspondence among frames from different camera configurations is an important data-processing problem, and this chapter aims to address it.
8.1 Introduction
A low-cost Pan-Tilt-Zoom (PTZ) camera can provide coverage of a large region at variable resolutions. The flexibility of the camera finds many applications such as homeland security, nature observation, construction monitoring, distance learning, and the defense industry. Our group is developing remote robotic observatories for natural environments, where PTZ cameras serve as a low-cost, low-bandwidth, and energy-efficient solution. Equipped with a high optical zoom lens and pan-tilt mechanisms, a PTZ camera can track a moving animal at a distance without intrusion.
As a common problem in all PTZ camera applications, we have to process a large number of images taken at different PTZ settings to understand what is happening in the scene. Finding pixel correspondence across different PTZ
This chapter was presented in part at the 2006 IEEE International Conference on Robotics and Automation (ICRA), Orlando, FL [151].
settings can be computationally expensive. Since pixel correspondence is a fundamental problem for many computer vision tasks such as image alignment, motion tracking, video encoding, and 3D reconstruction, there is a strong need to accelerate the computation, given that PTZ cameras are often deployed for time-critical applications. More specifically, we know that the difficulty of finding pixel correspondence is due to imprecise pan and tilt positions caused by the limited accuracy of camera potentiometers. As a common approach, all images have to be projected into a common planar space to address the pixel correspondence problem. This requires accurate pan and tilt settings, and feature matching is applied to estimate those parameters. However, the imprecise camera parameters and the sheer number of images make it difficult to respond to dynamic events in the scene.
We know that the computation speed of finding corresponding pixels can be greatly improved if we can find subsets of pixels that maintain fixed relative positions. These subsets of pixels are referred to as projection invariants because of their shape-preserving property. Note that images from a PTZ camera can be treated as sharing the same optical center, and a spherical coordinate system is a natural coordinate system for organizing images with the same optical center. Therefore, we focus our research on the search for projection invariants in spherical coordinate systems. We formally prove that projection invariants can be constructed on the spherical surface. The shape-preserving property of projection invariants transforms the image re-projection process into a rigid-body translation and rotation of projection invariants. As a fundamental problem in machine vision, image alignment is used to demonstrate how projection invariants can accelerate computation. Experimental results from a comparison study show that the projection invariant-based image alignment algorithm outperforms the best existing image alignment method by at least an order of magnitude.
The rest of the chapter is organized as follows. We begin with a literature survey of projection invariants and image alignment in the next section. In Section 8.3, we formally define projection invariants. In Section 8.4, we analyze why projection invariants do not exist for planar image re-projection. We then take the analysis into the spherical coordinate system and find projection invariants. To demonstrate the power of projection invariants, we apply them to an image alignment problem in Section 8.5.
8.2 Related Work
Our development of projection invariants aims to assist fast image content analysis for PTZ cameras. Due to their flexible coverage and resolution, PTZ cameras have been used in many applications such as distance learning [198], improving vehicle driving safety [36], tracking human movement [204], building construction monitoring [169], nature observation [167], and traffic monitoring [128].
Most research explores new technologies and applications based on new pan-tilt-zoom capabilities. Matsuyama and his colleagues [128] point out that the dynamic integration of visual perception and camera action is an important research problem for PTZ cameras. They develop a new architecture that coordinates multiple cameras to track multiple moving objects. Sinha and Pollefeys [165] develop automatic calibration algorithms for a pan-tilt-zoom camera with a focus on automatic zoom calibration. They first determine intrinsic camera parameters at the lowest zoom and then increase camera zoom settings to obtain radial distortion parameters. Our previous work [166, 170, 173] develops a series of collaborative control algorithms that allow multiple users to share control of a single PTZ camera.
The existing works have advanced our understanding of PTZ cameras through the process of addressing individual problems. For example, all of the work above agrees that spherical coordinates are a better choice for organizing image data than the planar image coordinate system. However, this understanding is largely sporadic and fragmented. We believe it is necessary to extend and bridge these results into a set of methodologies that systematically address problems unique to PTZ cameras. For example, if the spherical coordinate system is the better coordinate system, we need to understand image re-projection under the spherical coordinate system. New data structures, models, and algorithms are needed. Our development of projection invariants is one step toward this target.
The development of projection invariants for PTZ cameras is inspired by invariant descriptors for 3D object recognition in pattern recognition [59]. For example, the Euclidean distance is invariant to shift and rotation. The Fourier Descriptor is invariant to affine transformation and can be used to recognize objects from multiple viewpoints [7]. The purpose of invariant descriptors in pattern recognition is to find object properties invariant to perspective, lighting conditions, and lens parameters for object identification. This differs from our problem because we are looking for the shape-preserving property instead of an arbitrary object property.
As an application, projection invariants are used to address the image alignment problem. Current image alignment techniques can be classified into three categories: direct methods [38, 106, 163, 180, 181, 182], frequency domain registration [32, 153], and feature-based image registration [83, 189, 209, 212, 92, 22, 116, 34, 105, 206]. The direct method directly compares intensity values of pixels from the overlapping images and is sensitive to lighting conditions, while feature-based alignment works on a sparse set of feature points, is less sensitive to lighting conditions, and needs less computation. Frequency domain registration works well for translation but has problems with rotation. Recent research on improving the speed and accuracy of image alignment focuses on the feature-based method, which extracts features such as the Harris corner point [83, 189, 212], Moravec’s interest point [92], the SUSAN corner point [34], the vanishing point [206], and the Scale Invariant Feature Transform (SIFT) [27]. Torr and Zisserman [189] outline the feature-based method: First, features are
extracted automatically. An initial set of matches is computed based on proximity and similarity of their intensity neighborhoods. These initial matches are then placed into a robust estimation algorithm such as Least Median of Squares (LMedS) [209] or Random Sample Consensus (RANSAC) [206] to choose the solution with the largest number of inliers. Numerical minimization techniques such as the Levenberg-Marquardt algorithm are then applied to refine the estimation result from RANSAC. Since a SIFT feature point is invariant to projections, the combination of SIFT and RANSAC in [27] has been one of the most successful image alignment methods. To demonstrate the power of projection invariants, we develop a new image alignment algorithm based on our projection invariants. A SIFT- and RANSAC-based algorithm is used for speed comparison in this chapter.
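To make the robust-estimation step concrete, the sketch below shows the generic RANSAC loop described above; it is an illustration only, not the code used in our experiments. The `fit_model` and `residual` callables are placeholders for, e.g., a minimal homography fit and a re-projection error.

```python
import numpy as np

def ransac(matches, fit_model, residual, sample_size, threshold,
           iterations=500, rng=None):
    """Generic RANSAC loop (illustrative sketch, not the PRA implementation).

    matches     : list of correspondences (e.g. matched feature pairs).
    fit_model   : callable mapping a minimal sample to a model hypothesis.
    residual    : callable mapping (model, match) to a scalar error.
    sample_size : size of a minimal sample (e.g. 4 matches for a homography).
    threshold   : inlier threshold on the residual.
    """
    rng = np.random.default_rng() if rng is None else rng
    matches = list(matches)
    best_model, best_inliers = None, []
    for _ in range(iterations):
        idx = rng.choice(len(matches), size=sample_size, replace=False)
        model = fit_model([matches[i] for i in idx])
        if model is None:                         # degenerate minimal sample
            continue
        inliers = [m for m in matches if residual(model, m) < threshold]
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = model, inliers
    # A final refinement on all inliers (e.g. Levenberg-Marquardt) would follow here.
    return best_model, best_inliers
```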
8.3 Problem Definition
8.3.1 Assumptions
We assume that all images are taken from the same PTZ camera, which is installed on a rigid base. No translational motion of the camera is allowed and mechanical vibrations are negligible. Since there are only PTZ motions, all images can be treated as sharing the same optical center. Camera potentiometer readings give an estimate of the camera pan/tilt position. These readings are inherently approximate, with error (i.e., ±1.0°), and cannot be directly used to assist re-projection, which requires a much higher angular resolution. For example, accurately re-projecting images from a Panasonic HCM 280 camera requires an angular resolution of < 0.0041° at zoom = 21x and a resolution of 640 × 480. We assume that the camera intrinsic parameters, including lens distortion, skew factor, and CCD sensor size, are pre-calibrated and known. The camera knows its zoom position (focal length) accurately based on pre-calibration. We also assume that the camera has a maximum Horizontal Field Of View (HFOV) less than or equal to 45 degrees. Most PTZ cameras satisfy this assumption.
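To see why potentiometer readings are far too coarse, consider the order of magnitude involved: under an assumed horizontal field of view of roughly 2.6° at 21x zoom (a hypothetical value used here only for illustration), one pixel of a 640-pixel-wide image subtends about 2.6°/640 ≈ 0.004°, which matches the order of the quoted requirement, while the potentiometer error of ±1.0° is several hundred times larger.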
8.3.2 Nomenclature
We use notation in the format of {·} to refer to a coordinate system in this chapter. We use left superscripts to indicate the coordinate system of a point/set. Let us define,
• O as the camera optical center.
• {W} as a 3D fixed Cartesian coordinate system with its origin at the camera optical center O. We refer to it as the world coordinate system. A point in {W} is denoted as ^W Q = ^W[x y z]^T.
• {C} as a 3D Cartesian coordinate system with its origin at O, its Z axis overlapping with the optical axis, its X–Y plane parallel to the CCD sensor plane, and its X axis parallel to the horizontal direction of the image. In this chapter we refer to it as the camera coordinate system. A point in {C} is denoted as ^C Q = ^C[x y z]^T. Note that {C} changes as the camera changes its PTZ settings.
• {C_A} and {C_B} as the camera coordinate systems for images A and B, respectively.
• {I} as the 2D image plane for image I. The origin of {I} is the center of the image. We refer to it as the image coordinate system. A point in I is denoted as ^I q = [u v 1]^T. In the rest of the chapter, we use the Q notation to indicate a 3D Cartesian point and q to represent a 2D coordinate.
• {A} and {B} as the 2D image planes for images A and B, respectively. They follow the same definition as {I} and are used during the image alignment analysis.
• ^A q = [^A u, ^A v, 1]^T as a point in {A} and ^B q = [^B u, ^B v, 1]^T as its corresponding position in {B}.
• f as the camera focal length.
• (p_A, t_A) and (p_B, t_B) as the pan and tilt settings for images A and B, respectively.
• functions s(·) and c(·) as sin(·) and cos(·), respectively.
8.3.3
Perspective Projection and Re-projection for a PTZ Camera
Image acquisition in a perspective PTZ camera is a process that maps the 3D world onto a 2D image plane, which can be described by the perspective projection model [183]. Therefore, a point in {W} is converted to a point in {I} by
$$ {}^{I}q = {}^{I}_{C}K \; {}^{C}_{W}R \; {}^{W}Q, \qquad (8.1) $$
where the rotation matrix {}^{C}_{W}R maps a point from {W} to {C} and is determined by the pan and tilt settings, which are the camera extrinsic parameters. The intrinsic camera parameter matrix {}^{I}_{C}K projects points from {C} to {I}; it is a function of the focal length f and is determined by the zoom level according to our assumptions. According to Equation (8.1), 2D image points in two overlapping images A and B can be mapped to each other using a 3 × 3 matrix M [85, 182, 183] as
$$ {}^{A}q = {}^{A}_{C_A}K \; {}^{C_A}_{C_B}R \; {}^{B}_{C_B}K^{-1} \; {}^{B}q = M \, {}^{B}q, \qquad (8.2) $$
where ^A q and ^B q are corresponding points in {A} and {B}, respectively, and the rotation matrix {}^{C_A}_{C_B}R characterizes the relationship between the camera coordinate systems {C_A} and {C_B} of images A and B. Since (8.2) simply projects pixels in B to {A}, the process is referred to as the re-projection process and M as the re-projection matrix [87]. Matrices {}^{A}_{C_A}K and {}^{B}_{C_B}K are functions of the focal lengths, which are known according to our assumptions. Rotation matrix {}^{C_A}_{C_B}R is uniquely defined by the camera pan and tilt values. Hence M is a function of the camera pan and tilt settings for images A and B,
$$ {}^{A}q = M(p_A, t_A, p_B, t_B)\, {}^{B}q. \qquad (8.3) $$
If images A and B share the same focal length, then {}^{A}_{C_A}K = {}^{B}_{C_B}K = K and (8.2) can be simplified as
$$ {}^{A}q = K \; {}^{C_A}_{C_B}R \; K^{-1} \; {}^{B}q = M \, {}^{B}q, \qquad (8.4) $$
where M is just the similarity transformation of the rotation matrix {}^{C_A}_{C_B}R. Hence |det(M)| = 1. With this knowledge of re-projection, we are ready to introduce projection invariants.
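As a concrete illustration of (8.2)–(8.4), the sketch below builds M = K R K⁻¹ for two frames that share the same focal length and re-projects a homogeneous pixel from image B into image A. The simplified intrinsic matrix and the pan/tilt rotation order are assumptions made for the example; the chapter only requires that M be the similarity transformation of the relative rotation.

```python
import numpy as np

def rot_x(t):   # rotation about the X axis (tilt), assumed convention
    c, s = np.cos(t), np.sin(t)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(p):   # rotation about the Y axis (pan), assumed convention
    c, s = np.cos(p), np.sin(p)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def reprojection_matrix(f, pan_a, tilt_a, pan_b, tilt_b):
    """M such that A_q ~ M @ B_q for two frames sharing focal length f (pixels)."""
    K = np.diag([f, f, 1.0])                 # simplified pinhole intrinsics (assumption)
    R_wa = rot_y(pan_a) @ rot_x(tilt_a)      # {C_A} -> {W} under the assumed order
    R_wb = rot_y(pan_b) @ rot_x(tilt_b)      # {C_B} -> {W}
    R_ab = R_wa.T @ R_wb                     # relative rotation {C_B} -> {C_A}
    return K @ R_ab @ np.linalg.inv(K)

M = reprojection_matrix(f=800.0, pan_a=0.0, tilt_a=0.0,
                        pan_b=np.radians(5.0), tilt_b=0.0)
q_b = np.array([100.0, 50.0, 1.0])           # a pixel of image B in homogeneous form
q_a = M @ q_b
q_a = q_a / q_a[2]                           # back to inhomogeneous pixel coordinates
print(abs(np.linalg.det(M)))                 # ~1, as noted after (8.4)
```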
8.3.4
Definition of Projection Invariants
The intuition behind projection invariants is the shape-preserving property. In other words, a projection invariant is a subset of pixels that maintain fixed relative positions with respect to each other under re-projection. Define ^A C ⊂ A as a patch of pixels located in the overlapping region of images A and B. Therefore, it has a corresponding position ^B C ⊂ B in image B.
Definition 1 (Projection Invariant Definition). ∀ ^A q_1, ^A q_2 ∈ ^A C and their corresponding positions ^B q_1, ^B q_2 ∈ ^B C, define Δ^A q = ^A q_1 − ^A q_2 and Δ^B q = ^B q_1 − ^B q_2. ^A C and ^B C are a pair of projection invariants if and only if the following shape-preserving condition is satisfied,
$$ |\Delta^{A} q| = |\Delta^{B} q|, \qquad (8.5) $$
where |·| is the L2-norm. Our objective is to find/construct projection invariants under re-projection, either in planar coordinate systems such as under M, or in its equivalent in other coordinate systems.
8.4 Projection Invariants
In this section, we first analyze the relationship between the re-projection matrix M and projection invariants. We find that projection invariants do NOT exist in planar image coordinate systems. Hence we search for nonlinear coordinate systems into which to re-project images. We then prove that projection invariants exist and can be constructed in spherical coordinate systems. We now begin with the analysis of the relationship between the re-projection matrix M and projection invariants.
8.4.1 Projection Invariants and Re-projection
Plugging (8.2) into (8.5), we get
$$ |M \Delta^{B} q| = |\Delta^{B} q|. \qquad (8.6) $$
The re-projection matrix M can be expanded as
$$ M = \begin{bmatrix} m_{11} & m_{12} & m_{13} \\ m_{21} & m_{22} & m_{23} \\ m_{31} & m_{32} & m_{33} \end{bmatrix}. $$
We have the following theorem,
Theorem 1 (Projection Invariant Condition). The shape-preserving condition in (8.5) is met if and only if the re-projection matrix M satisfies
$$ \begin{bmatrix} m_{11} & m_{12} \\ m_{21} & m_{22} \end{bmatrix} = R_{2\times 2} \quad \text{and} \quad m_{31} = m_{32} = 0 \qquad (8.7) $$
over the projection invariant, where R_{2×2} is a 2 × 2 rotation matrix.

Proof. (if): Plug (8.7) into (8.6); then (8.5) holds. This is trivial.
(only if): According to our nomenclature, we know that ^A q_1 = [^A u_1, ^A v_1, 1]^T and ^A q_2 = [^A u_2, ^A v_2, 1]^T. Define Δ^A u = ^A u_1 − ^A u_2 and Δ^A v = ^A v_1 − ^A v_2. Then Δ^A q = [Δ^A u, Δ^A v, 0]^T. Similarly, Δ^B q = [Δ^B u, Δ^B v, 0]^T. From (8.2), we know
$$ \begin{bmatrix} \Delta^{A} u \\ \Delta^{A} v \\ 0 \end{bmatrix} = \begin{bmatrix} m_{11} & m_{12} & m_{13} \\ m_{21} & m_{22} & m_{23} \\ m_{31} & m_{32} & m_{33} \end{bmatrix} \begin{bmatrix} \Delta^{B} u \\ \Delta^{B} v \\ 0 \end{bmatrix}. $$
Taking a close look at the third row of the equation above, we know that
$$ m_{31} \Delta^{B} u + m_{32} \Delta^{B} v = 0. \qquad (8.8) $$
Since Δ^B u and Δ^B v can take arbitrary values, m_{31} = m_{32} = 0 has to hold in order to satisfy (8.8). Therefore, the left hand side of (8.6) is
$$ |M \Delta^{B} q| = \sqrt{(m_{11}\Delta^{B} u + m_{12}\Delta^{B} v)^2 + (m_{21}\Delta^{B} u + m_{22}\Delta^{B} v)^2}. \qquad (8.9) $$
The right hand side of (8.6) is
$$ |\Delta^{B} q| = \sqrt{(\Delta^{B} u)^2 + (\Delta^{B} v)^2}. \qquad (8.10) $$
Plugging (8.9) and (8.10) into (8.6), we have
$$ m_{11}^2 + m_{21}^2 = 1; \qquad (8.11a) $$
$$ m_{12}^2 + m_{22}^2 = 1; \qquad (8.11b) $$
$$ m_{11} m_{12} + m_{21} m_{22} = 0. \qquad (8.11c) $$
Hence R_{2×2} is a 2 × 2 rotation matrix.
Remark 8.1. Theorem 1 intuitively tells us that re-projection can be viewed as a rotation of the projection invariant if the projection invariant exists. Unfortunately, the condition in Theorem 1 cannot always be satisfied by the re-projection matrix M. From (8.2), we know that M is determined by the camera parameters, so there is no guarantee that the condition in Theorem 1 is satisfied. In fact, it is not difficult to come up with counterexamples. However, the theorem does provide insight into directions that lead to the discovery of projection invariants.
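The failure of the condition is easy to check numerically. The snippet below (using the same simplified intrinsics assumed in the earlier sketch, which are not the book's calibrated values) forms M = K R K⁻¹ for a pure 10° pan and inspects its third row: m31 is nonzero, so condition (8.7) cannot hold for planar re-projection in general.

```python
import numpy as np

f = 800.0                                    # assumed focal length in pixels
p = np.radians(10.0)                         # a pure 10-degree pan between frames
K = np.diag([f, f, 1.0])                     # simplified intrinsics (illustrative)
R = np.array([[np.cos(p), 0.0, np.sin(p)],   # rotation about the Y (pan) axis
              [0.0,       1.0, 0.0       ],
              [-np.sin(p), 0.0, np.cos(p)]])
M = K @ R @ np.linalg.inv(K)
print(M[2, 0], M[2, 1])                      # m31 != 0, so condition (8.7) fails
```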
8.4.2
Spherical Wrapping
Theorem 1 reveals that the deformation of the projected image in the re-projection process cannot be sensitive to camera parameters if projection invariants exist. The re-projection M defined by (8.2) projects one planar image into another planar image space, so it is not surprising that the amount of deformation of the projected image is very sensitive to the relative positions of those two planes, which are determined by the camera parameters. A natural idea is therefore to try a different coordinate system: if we wrap the image around a spherical surface, the re-projection between two spherical coordinate systems should introduce less deformation.
Fig. 8.1. An illustration of spherical wrapping and coordinate systems: q in the image coordinate system, q̃ in the local spherical coordinate system, and ^C Q the same point as q̃ but in the camera coordinate system
The chosen sphere is centered at the camera optical center and has the focal length f as its radius. Recall that I is the image captured by the camera. As illustrated in Figure 8.1, the projection generates a wrapped image Ĩ in a local spherical coordinate system {Ĩ}. Recall that ^I q = (u, v, 1)^T is a point in I. Define q̃ = (p, t)^T as the corresponding point in Ĩ, where (p, t) is the angular coordinate of the point. The spherical wrapping that projects q to q̃ is
$$ p = \arctan\!\left(\frac{u}{f}\right), \qquad (8.12a) $$
$$ t = -\arctan\!\left(\frac{v}{\sqrt{u^2 + f^2}}\right). \qquad (8.12b) $$
Each point in Ĩ is defined using local pan and tilt spherical coordinates with units in radians. A spherical coordinate system {Ĩ} usually consists of three elements: radius, pan, and tilt. Although images taken at different zoom levels have different radii, it is not difficult to scale them onto the same spherical surface because {Ĩ} is represented in angular coordinates instead of pixel coordinates. Therefore, we can treat f as the same and obtain a 2D representation. Also, q̃ = (0, 0)^T overlaps with q = (0, 0, 1)^T. Note that {Ĩ} has its origin centered at each image and is different from the global spherical coordinate system defined by the real camera pan and tilt settings. In fact, the p and t in q̃ depend only on the corresponding pixel coordinates in I. We use ˜ above I to indicate that Ĩ is image I's spherical wrapping, and we follow this convention in the rest of the chapter. The spherical wrapping can be conducted without knowledge of the camera pan and tilt settings. This is an important feature that will be reiterated later.
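A direct transcription of (8.12) is shown below; it is only a sketch (the lookup-table implementation used later for speed is not shown), and it assumes the focal length is expressed in pixels and that (u, v) are measured from the image center.

```python
import numpy as np

def spherical_wrap(u, v, f):
    """Wrap an image point (u, v), measured from the image center, onto the
    local spherical coordinate system of Equation (8.12).

    f is the focal length in pixels. Returns (p, t) in radians.
    """
    p = np.arctan2(u, f)                  # equivalent to arctan(u / f)
    t = -np.arctan2(v, np.hypot(u, f))    # -arctan(v / sqrt(u^2 + f^2))
    return p, t

# Quick check: the image center maps to the origin of the wrapped image.
assert spherical_wrap(0.0, 0.0, 800.0) == (0.0, 0.0)
```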
8.4.3 Spherical Re-Projection (SRP)
Now the new re-projection can be performed between two local spherical coordinate systems; in the rest of the chapter we refer to it as Spherical Re-Projection (SRP) to distinguish it from planar re-projection. Define Q = ^C Q = [x, y, z]^T as q̃ expressed in {C}, as illustrated in Figure 8.1. Recall that cos(θ) and sin(θ) are denoted as c(θ) and s(θ), respectively. The relationship between {Ĩ} and {C} can be described by the function P and its inverse P^{-1},
$$ \tilde{q} = \begin{bmatrix} p \\ t \end{bmatrix} = \begin{bmatrix} \arctan(x/z) \\ -\arctan\!\big(y/\sqrt{x^2 + z^2}\big) \end{bmatrix} = P(Q), \qquad (8.13) $$
$$ Q = \begin{bmatrix} x \\ y \\ z \end{bmatrix} = \begin{bmatrix} f\,c(t)s(p) \\ -f\,s(t) \\ f\,c(t)c(p) \end{bmatrix} = P^{-1}(\tilde{q}). \qquad (8.14) $$
Let Ã and B̃ be the resulting images from the spherical wrapping of image A and image B, respectively. Without loss of generality, we select image Ã as the reference image. We shift image B̃ around Ã. To align the two images, we need to re-project B̃ into Ã's space,
$$ {}^{A}\tilde{q} = P\big({}^{C_A}_{C_B}R \; {}^{B}Q\big) = P\big({}^{C_A}_{C_B}R \; P^{-1}({}^{B}\tilde{q})\big) = F\big({}^{C_A}_{C_B}R, {}^{B}\tilde{q}\big), \qquad (8.15) $$
where F is the SRP function, and ^A q̃ = (^A p, ^A t)^T and ^B q̃ = (^B p, ^B t)^T are the positions of the corresponding point in the wrapped images Ã and B̃, respectively.
We are interested in comparing the re-projection on the spherical surface with the original planar re-projection. The test image is a square image with a resolution of 640 × 640. It is projected to another camera configuration that shares the 30° tilt value and has a 30° pan difference. Figure 8.2 suggests that the deformation on the spherical surface is significantly less than that in the original planar image space. Since the absolute distortion is an increasing function of image size, we conjecture that if we sample a very small square region on
Fig. 8.2. Comparison of image deformation caused by the re-projection operation (a) in the original planar image space and (b) on the spherical surface. Note that the unit in (a) is pixel and the unit in (b) is radian.
the spherical surface, then the deformation of each square should be negligible after the spherical wrapping. If so, it possesses the property of a projection invariant.
8.4.4 Projection Invariants for SRP
Before we prove the conjecture, let us define a square-shaped cell in image Ã as
$$ {}^{A}C = \{ ({}^{A}p, {}^{A}t) \mid {}^{A}p \in [{}^{A}p_o \pm p_c], \; {}^{A}t \in [{}^{A}t_o \pm t_c] \}, \qquad (8.16) $$
where ^A q̃_o = (^A p_o, ^A t_o) is the cell center coordinate and (p_c, t_c) is the maximum cell span in the pan and tilt directions. We define ^B C as ^A C's projection in image B̃, with its center at ^B q̃_o = (^B p_o, ^B t_o).
We need to adapt the Projection Invariant Condition in Theorem 1, which is constructed on planar re-projection, to the nonlinear SRP function F. Define Δ^B q̃ = ^B q̃ − ^B q̃_o and Δ^A q̃ = ^A q̃ − ^A q̃_o. Equation (8.5) now becomes
$$ |\Delta^{A}\tilde{q}| = |\Delta^{B}\tilde{q}|. \qquad (8.17) $$
We have the following corollary,

Corollary 1 (SRP Projection Invariant Condition). ^A C and ^B C are a pair of projection invariants if and only if the following condition is satisfied,
$$ \Delta^{A}\tilde{q} \approx R_{2\times 2}\, \Delta^{B}\tilde{q}, \qquad (8.18) $$
where the 2 × 2 rotation matrix R_{2×2} is not a function of Δ^A q̃ or Δ^B q̃.
The proof of Corollary 1 is trivial. We can treat R_{2×2} as the linearized approximation of F at the center point of the cell, because each cell is small and the linearized approximation is accurate enough. Then it follows the proof of Theorem 1. Note that we use ‘≈’ in (8.18) instead of ‘=’. This is acceptable because an image is a discretized representation of the real environment and any distortion that is less than half a pixel is negligible. In other words, Equation (8.18) tells us that
the linearized nonlinear function F over the cell can be approximated by the same rotation matrix over the entire cell if the cell is a projection invariant.
Now we are ready to prove the conjecture about SRP projection invariants.

Theorem 2. If the corresponding spherical cells ^A C and ^B C are small, p_c ≤ 5° and t_c ≤ 5°, and the camera has a vertical field of view ≤ 34°, then ^A C and ^B C are projection invariant under SRP.

Proof. Recall that functions s(·) and c(·) denote sin(·) and cos(·), respectively. From vector calculus, we know that
$$ \nabla Q = \begin{bmatrix} dx \\ dy \\ dz \end{bmatrix} = \begin{bmatrix} f\,c(t)c(p) & -f\,s(t)s(p) & c(t)s(p) \\ 0 & -f\,c(t) & -s(t) \\ -f\,c(t)s(p) & -f\,s(t)c(p) & c(t)c(p) \end{bmatrix} \begin{bmatrix} dp \\ dt \\ df \end{bmatrix}. \qquad (8.19) $$
Define [Δx, Δy, Δz]^T as the small displacement in {C} and [Δp, Δt, Δf]^T as the corresponding change in {Ĩ}. Since p_c ≤ 5° and t_c ≤ 5°, Δp < p_c/2 = 2.5° and Δt < t_c/2 = 2.5°. Hence we have
$$ \begin{bmatrix} \Delta x \\ \Delta y \\ \Delta z \end{bmatrix} = f \begin{bmatrix} c(t)c(p) & -s(t)s(p) & r_{13} \\ 0 & -c(t) & -r_{23} \\ -c(t)s(p) & -s(t)c(p) & r_{33} \end{bmatrix} \begin{bmatrix} \Delta p \\ \Delta t \\ \Delta f \end{bmatrix}, \qquad (8.20) $$
where r_{13} = c(t)s(p)/f, r_{23} = s(t)/f, and r_{33} = c(t)c(p)/f correspond to the last column of the Jacobian matrix in (8.19). Since we treat {Ĩ} as part of a sphere, the radius f remains constant, therefore Δf = 0. To move the negative sign out of the second row of the matrix in (8.20), we introduce the coefficient matrix
$$ H = \begin{bmatrix} f & 0 & 0 \\ 0 & -f & 0 \\ 0 & 0 & f \end{bmatrix}. $$
Then (8.20) can be rewritten as
$$ \begin{bmatrix} \Delta x \\ \Delta y \\ \Delta z \end{bmatrix} = H \begin{bmatrix} c(t)c(p) & -s(t)s(p) & r_{13} \\ 0 & c(t) & r_{23} \\ -c(t)s(p) & -s(t)c(p) & r_{33} \end{bmatrix} \begin{bmatrix} \Delta p \\ \Delta t \\ 0 \end{bmatrix}. \qquad (8.21) $$
Recall that t is the tilt position with respect to the image center inside an image, and recall that the camera has a maximum vertical field of view of 34°. To ensure the existence of an overlapping region between the two images, the tilt overlap has to be larger than the tilt range of a cell, t_c; hence the maximum value of t is 34°/2 − t_c = 12° for t_c = 5°. Since cos(12°) = 0.995, we have 0.995 ≤ c(t) ≤ 1. If the camera has a resolution of 640 × 480, then the pixel cell width is around (5/34) · 480 = 68 pixels for p_c = t_c = 5°. If we approximate c(t) ≈ 1, the maximum distortion (1 − 0.995) × 68 is less than half a pixel. If t_c < 5°, then the pixel cell width also decreases. It is not difficult to show that (1 − cos(34°/2 − t_c)) · (t_c/34) · 480 < 0.5 for 0 < t_c < 5° because it is an increasing function of t_c over this range. Since the distortion is very small, we drop c(t) in the first column,
$$ \begin{bmatrix} \Delta x \\ \Delta y \\ \Delta z \end{bmatrix} \approx H \begin{bmatrix} c(p) & s(p)s(-t) & r_{13} \\ 0 & c(-t) & r_{23} \\ -s(p) & c(p)s(-t) & r_{33} \end{bmatrix} \begin{bmatrix} \Delta p \\ \Delta t \\ 0 \end{bmatrix}. \qquad (8.22) $$
Since Δf = 0, we know that [r_{13}, r_{23}, r_{33}]^T can take arbitrary values without affecting the equality in (8.22). Let us choose r_{13} = s(p)c(−t), r_{23} = −s(−t), and r_{33} = c(p)c(−t). Then we have
$$ \begin{bmatrix} c(p) & s(p)s(-t) & s(p)c(-t) \\ 0 & c(-t) & -s(-t) \\ -s(p) & c(p)s(-t) & c(p)c(-t) \end{bmatrix} = R_Y(p) R_X(-t), \qquad (8.23) $$
where R_Y and R_X are rotation matrices about the Y axis and X axis, respectively. Define ΔQ = [Δx, Δy, Δz]^T and Δq̃ = [Δp, Δt, 0]^T. Now (8.22) is
$$ \Delta Q \approx H R_Y(p) R_X(-t)\, \Delta\tilde{q}. \qquad (8.24) $$
Hence, we have
$$ \Delta^{A}Q \approx H R_Y({}^{A}p_o) R_X(-{}^{A}t_o) \begin{bmatrix} \Delta^{A}\tilde{q} \\ 0 \end{bmatrix}, \qquad (8.25) $$
and
$$ \Delta^{B}Q \approx H R_Y({}^{B}p_o) R_X(-{}^{B}t_o) \begin{bmatrix} \Delta^{B}\tilde{q} \\ 0 \end{bmatrix}. \qquad (8.26) $$
Since Δ^A Q = {}^{C_A}_{C_B}R\, Δ^B Q, we get
$$ \begin{bmatrix} \Delta^{A}\tilde{q} \\ 0 \end{bmatrix} \approx R_X({}^{A}t_o) R_Y(-{}^{A}p_o) H^{-1} \cdot {}^{C_A}_{C_B}R\, H R_Y({}^{B}p_o) R_X(-{}^{B}t_o) \begin{bmatrix} \Delta^{B}\tilde{q} \\ 0 \end{bmatrix}. \qquad (8.27) $$
Since H and H^{-1} are diagonal matrices, we have H^{-1}\,{}^{C_A}_{C_B}R\,H = {}^{C_A}_{C_B}R. Equation (8.27) becomes
$$ \begin{bmatrix} \Delta^{A}\tilde{q} \\ 0 \end{bmatrix} \approx R_{\Delta} \begin{bmatrix} \Delta^{B}\tilde{q} \\ 0 \end{bmatrix}, \qquad (8.28) $$
where
$$ R_{\Delta} = R_X({}^{A}t_o) R_Y(-{}^{A}p_o)\, {}^{C_A}_{C_B}R\, R_Y({}^{B}p_o) R_X(-{}^{B}t_o) \qquad (8.29) $$
is a rotation matrix because the multiplication of rotation matrices yields a rotation matrix. On the other hand, the last row has to satisfy 0 = 0 no matter what value Δ^B q̃ takes. This means R_Δ has to be in the following format,
$$ R_{\Delta} = \begin{bmatrix} R_{2\times 2} & 0_{2\times 1} \\ 0_{1\times 2} & 1 \end{bmatrix}. $$
Hence it satisfies Corollary 1, and ^A C and ^B C are projection invariant under SRP.
Remark 8.2. Theorem 2 also tells us how to construct projection invariants under SRP and which cameras are applicable. Most PTZ cameras have vertical fields of view less than 34°. When operated at high zoom, camera vertical fields of view are even smaller. For example, a Panasonic HCM 280 camera has a 2.8° vertical field of view at zoom = 21x. Even for a camera that has a vertical field of view larger than 34°, we can still construct projection invariants by sampling cells that are within the 34° range.
Theorem 2 suggests that each cell can be treated as a rigid object in SRP, which can lead to significant computation savings. The next question is how to compute the rotation matrix R_{2×2}, which can be characterized by a single rotation angle θ. We have the following corollary,
Corollary 2. Recall that (p_A, t_A) and (p_B, t_B) are the pan and tilt settings for images A and B, respectively. The rotation angle θ of the rotation matrix R_{2×2} can be approximated by
$$ \theta \approx \arccos\Big( c({}^{A}p_o) c({}^{B}p_o) c(p_B - p_A) + s({}^{A}p_o) s({}^{B}p_o)\,\alpha + s(p_B - p_A) s({}^{A}p_o) c({}^{B}p_o) c(t_A) - s(p_B - p_A) c({}^{A}p_o) s({}^{B}p_o) c(t_B) \Big), \qquad (8.30) $$
where α is a function of (p_A, t_A) and (p_B, t_B) only and can be pre-computed,
$$ \alpha = c(t_A) c(t_B) c(p_B - p_A) + s(t_A) s(t_B). \qquad (8.31) $$
(α is the dot product of the Z axes of {C_A} and {C_B} in the world coordinate system.)
Proof. Let us use the following vectors,
• [Δ^A q̃^T, 0]^T = [1/f, 0, 0]^T,
• [Δ^B q̃^T, 0]^T = [1/f, 0, 0]^T,
• ^{C_A}X_0^A = H R_Y(^A p_o) R_X(−^A t_o) [Δ^A q̃^T, 0]^T, and
• ^{C_B}X_0^B = H R_Y(^B p_o) R_X(−^B t_o) [Δ^B q̃^T, 0]^T.
It is clear that ^{C_A}X_0^A and ^{C_B}X_0^B are unit vectors. By defining ^W X_0^A and ^W X_0^B as their corresponding coordinates in {W}, we know that
$$ c(\theta) = \langle {}^{W}X_0^A, {}^{W}X_0^B \rangle, \qquad (8.32) $$
from the definition of the vector inner product. From the coordinate transform relationship, we know
$$ {}^{W}X_0^A = {}^{W}_{C_A}R\; {}^{C_A}X_0^A = R_Y(p_A) R_X(t_A) H R_Y({}^{A}p_o) R_X(-{}^{A}t_o) [1/f, 0, 0]^T = \begin{bmatrix} c(p_A)c({}^{A}p_o) - s(p_A)c(t_A)s({}^{A}p_o) \\ s(t_A)s({}^{A}p_o) \\ -s(p_A)c({}^{A}p_o) - c(p_A)c(t_A)s({}^{A}p_o) \end{bmatrix}. $$
Similarly, we can compute ^W X_0^B. Inserting them into Equation (8.32), we get (8.30).
Remark 8.3. It is worth mentioning that if two images share similar pan positions (i.e. |pA − pB | ≤ 5◦ ), then (8.30) becomes
$$ \theta \approx \arccos\Big( c({}^{A}p_o) c({}^{B}p_o) + s({}^{A}p_o) s({}^{B}p_o) c(t_B - t_A) \Big). \qquad (8.33) $$
Recall that a standard camera has a maximum vertical field of view of 34°. To guarantee the overlap between the two frames, the maximum value of t_B − t_A has to be less than 17°. Therefore, cos(17°) = 0.956 ≤ c(t_B − t_A) ≤ 1 and c(t_B − t_A) can be approximated by 1. Hence, we have θ ≈ ^B p_o − ^A p_o for this special case, which can further speed up the computation.
At first glance, (8.30) in Corollary 2 is very complex. It tells us that θ depends on ^A p_o, ^B p_o, p_B − p_A, t_A, and t_B. Since we choose the position for ^A C, we know its center position ^A p_o in Ã. According to (8.15), ^B C's center position ^B p_o is uniquely defined by ^A p_o, p_B − p_A, t_A, and t_B. Since ^A p_o is usually known in image Ã, θ depends only on p_B − p_A, t_A, and t_B.
Therefore, the position and the orientation of ^B C in image B̃ are uniquely defined by p_B − p_A, t_A, and t_B, which define the spatial relationship between the two intersecting images. The shape of ^B C remains a square with the same side length as that of ^A C in image Ã. This desirable shape-preserving property has many potential applications such as image alignment, panorama generation, real-time tracking of moving objects, and video encoding, where pixel correspondence dominates the computation. Since image alignment is a fundamental problem in computer vision, below we use it as a sample application to show how projection invariants can be used to significantly accelerate the computation for PTZ cameras.
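The rotation angle of Corollary 2 is cheap to evaluate. The sketch below simply transcribes (8.30)–(8.31) and the simplified form (8.33); angles are in radians, the variable names (e.g. `po_a` for ^A p_o) are illustrative, and the clipping is only a numerical guard against rounding, not part of the formula.

```python
import numpy as np

def cell_rotation_angle(po_a, po_b, p_a, t_a, p_b, t_b):
    """Rotation angle theta of R_{2x2} from Equation (8.30); angles in radians."""
    c, s = np.cos, np.sin
    dp = p_b - p_a
    alpha = c(t_a) * c(t_b) * c(dp) + s(t_a) * s(t_b)          # Equation (8.31)
    arg = (c(po_a) * c(po_b) * c(dp)
           + s(po_a) * s(po_b) * alpha
           + s(dp) * s(po_a) * c(po_b) * c(t_a)
           - s(dp) * c(po_a) * s(po_b) * c(t_b))
    return np.arccos(np.clip(arg, -1.0, 1.0))                  # clip: rounding guard

def cell_rotation_angle_similar_pan(po_a, po_b, t_a, t_b):
    """Simplified form (8.33), valid when |p_A - p_B| is small."""
    arg = (np.cos(po_a) * np.cos(po_b)
           + np.sin(po_a) * np.sin(po_b) * np.cos(t_b - t_a))
    return np.arccos(np.clip(arg, -1.0, 1.0))
```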
8.5 Application: Image Alignment Problem
We first analyze the image alignment problem and introduce existing approaches that address this problem, with emphasis on SIFT and RANSAC methods. We
then introduce how projection invariants can be used to accelerate the computation. We present a projection invariant-based image alignment algorithm for PTZ cameras, with experiments. We now begin with the problem definition of the image alignment problem.
8.5.1 Problem Description and Existing Methods
A planar image alignment problem is to align two images by estimating the M that minimizes the pixel/feature differences in the overlapping part of the two images. Among existing error metrics for pixel/feature differences, the Sum of Squared Differences (SSD) is one of the most popular [181, 183],
$$ \mathrm{SSD} = \sum_{i \in A \cap B} \big( \mathrm{Feature}_B({}^{B}q_i) - \mathrm{Feature}_A({}^{A}q_i) \big)^2, $$
where the set A ∩ B is the overlapping pixel set between image A and image B, ^A q_i and ^B q_i are the ith overlapping pixel from image A and image B, respectively, and Feature_A(·) and Feature_B(·) are feature values for images A and B, respectively. Feature values can take different forms: they can be pixel intensity values if direct methods are used, or probability measures if a posterior probability distribution is used to represent the feature. On the other hand, the error measure is not necessarily limited to SSD; the analysis can easily be adapted to other non-negative difference metrics.
According to (8.3), M can be determined by the camera pan and tilt settings. When we align image B with respect to A, the pan and tilt settings (p_A, t_A) for image A are usually known as the reference. Therefore, M can be determined by the two unknown variables (p_B, t_B),
$$ {}^{A}q = M(p_B, t_B)\, {}^{B}q. \qquad (8.34) $$
Therefore, the image alignment problem for PTZ cameras is to solve the following optimization problem,
$$ \arg\min_{(p_B, t_B)} \sum_{i \in A \cap B} \big( \mathrm{Feature}_B(M(p_B, t_B)\,{}^{B}q_i) - \mathrm{Feature}_A({}^{A}q_i) \big)^2. \qquad (8.35) $$
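A deliberately naive evaluation of the objective in (8.35) would look like the sketch below; the feature functions and the re-projection routine are placeholders for whatever representation is chosen, and it is exactly this per-candidate, per-pixel cost that the projection invariant-based algorithm of Section 8.5.2 avoids.

```python
import numpy as np

def ssd_objective(candidates, pixels_b, feature_a, feature_b, reproject):
    """Naive grid search over candidate (p_B, t_B) pairs for Equation (8.35).

    candidates : iterable of (p_B, t_B) pairs to evaluate (k of them).
    pixels_b   : list of homogeneous pixels of image B lying in the overlap (m of them).
    feature_a  : callable mapping a pixel of image A to its feature value.
    feature_b  : callable mapping a pixel of image B to its feature value.
    reproject  : callable (p_B, t_B, pixels_b) -> corresponding pixels in image A,
                 i.e. an application of M(p_B, t_B) from Equation (8.34).
    """
    best, best_cost = None, np.inf
    for p_b, t_b in candidates:
        pixels_a = reproject(p_b, t_b, pixels_b)              # O(m) re-projections
        cost = sum((feature_b(qb) - feature_a(qa)) ** 2
                   for qa, qb in zip(pixels_a, pixels_b))     # SSD over the overlap
        if cost < best_cost:
            best, best_cost = (p_b, t_b), cost
    return best, best_cost                                    # O(km) work overall
```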
There are two unavoidable problems if we solve the optimization problem in (8.35) by directly evaluating candidate (p_B, t_B) pairs. The first problem is speed. Define m = |A ∩ B| as the number of feature pixels in A ∩ B and let k be the number of candidate (p_B, t_B) pairs; the search can easily take O(km) re-projection operations. Since the re-projection computation involves extensive floating point operations, the dominating factor km is usually very large for high resolution images. The second problem, which is more of a concern, is alignment accuracy. Since M is very sensitive to (p_B, t_B), a minor error in (p_B, t_B) would significantly change the shape of the feature pixel set {Feature_B(M(p_B, t_B) ^B q_i) : i ∈ A ∩ B}, which leads to inaccurate alignment.
Among all of the recently proposed methods, one of the most effective ways to address this accuracy problem is to introduce a feature transformation that
is not sensitive to affine transformation (the re-projection process is an affine transformation). As one of the most popular feature transformation methods, Lowe's SIFT [120] is designed to be scaling and rotation invariant and fits the requirement. Combining SIFT with RANSAC [206] to choose the solution with the largest number of inliers, Brown and Lowe [27] have well addressed the accuracy problem in image alignment. However, computing SIFT is expensive in time, because SIFT feature points have to be evaluated at different scaling levels and orientations. The long computation time limits its usage in time-critical applications.
Projection invariants can be used to improve image alignment efficiency. Due to their shape-preserving property, we know that there are no scaling differences or nonlinear distortions among corresponding projection invariants. Therefore, we do not need to use sophisticated feature transformations in the image alignment algorithm. Instead, we can use a simple feature transformation such as the Zero-Crossing Edge Detector (ZCED) [75] to reduce computation cost. Furthermore, image alignment can be reduced to the problem of finding matching projection invariant pairs, which allows us to speed up the computation. Building on this intuition, we can design a Projection Invariant-based Image Alignment Algorithm (PIIAA).
8.5.2 Projection Invariant-Based Image Alignment Algorithm
As illustrated in Figure 8.3, our algorithm is based on a set of small square-shaped cells evenly scattered in the overlapping region. Defined in (8.16), each cell is a projection invariant that satisfies the condition specified by Theorem 2. Define k_c as the number of cells, which is between 25 and 36 in most cases. Define ^A C_j ⊂ Ã, 1 ≤ j ≤ k_c, as the jth cell. From the potentiometer reading and its error range, we know that the matching region of ^B C_j ⊂ B̃ will be found within the region ε(^A C_j) ⊂ Ã, which is the gray region in Figure 8.3,
$$ \epsilon({}^{A}C_j) = \{ (p, t) \in \tilde{A} \mid p \in [{}^{A}p_{oj} \pm (p_c + .5\, p_{\max})], \; t \in [{}^{A}t_{oj} \pm (t_c + .5\, t_{\max})] \}, \qquad (8.36) $$
˜ B A B
˜ A
Cj
p˜oj ǫ(A Cj )
Fig. 8.3. An illustration of Projection Invariant-Based Image Alignment Algorithm. ˜ and image B’s ˜ barrel-like shape is due to spherical wrapping. Image A
8.5 Application: Image Alignment Problem
133
captured from a Canon VCC3 camera that has a 45◦ horizontal field of view, an image size of 640 × 480-pixels, and ±1.5◦ potentiometer error, ǫ(A Cj ) is ±20 ˜ Based on Corollary 2, we also know that the inverse pixels shifting range in A. rotation by −θ around cell center B qoj defines the orientation of B Cj , B
Cj = Rc (−θ)A Cj .
Therefore, we transfer the optimization problem in (8.35) to min
(pB ,tB )
kc
2 FeatureB (Rc (−θ)A Cj ) − FeatureA (A Cj ) ,
(8.37)
j=1
subject to, B
Cj ⊂ ǫ(A Cj ).
(8.38)
Since B Cj is considered as a solid square with only rotation and shifting, computing the solution becomes less costly. Each candidate solution will determine orientation and location of kc cells. Since the relative positions between cells are rigid and known, the search for a solution is to simultaneously shift all kc rotated cells in A˜ and find the optimal solution with the pre-computed B Cj ’s. Because kc is a relatively small number (i.e. 25 ∼ 36) and each cell is very small (i.e. 10 × 10 pixels), the computation is very fast. Define (δp, δt) as B Cj shifting variable such that δp ∈ [±0.5pmax] and δt ∈ [±0.5tmax ] to satisfy (8.38). Because of the image resolution limit, there are only a constant number of (δp, δt) pairs. Another benefit is that feature detection and spherical wrapping do not need to be computed for the entire image. Only pixels in the selected cells and their neighboring search regions need to be computed. Define n as the number of total pixels in images A and B. We summary the analysis above as the Projection Invariant-based Image Alignment Algorithm in Algorithm 1. Since the property projection invariant of allows to avoid complex feature extractions, plus the fact that we can further limit the ZCED to the minimum number pixels, the PIIAA is actual a constant time algorithm if we do not consider image I/O time. This is very desirable in dealing with high resolution images. 8.5.3
Experiments and Results
We have implemented the algorithm and tested in a series of experiments. The computer we used for testing is a 3.2Ghz Desktop PC with 2GB RAM and a 120GB hard disk. The C++ based source code is complied in Microsoft Visual Studio 2003.net under Windows XP Professional Edition. As shown in Table 8.1 and Figure 8.4, images from 4 different cameras are used in experiments. Cameras VCC3, VCC4, and HCM 280 are PTZ cameras. Camera SD 630 is a regular digital camera mounted on a tripod, which provides high resolution images for comparing algorithms.
134
8 Projection Invariants for Pan-Tilt-Zoom Robotic Cameras
Algorithm 1. Projection Invariant-based Image Alignment Algorithm input : Image A, Image B, Image A’s pan and tilt setting (pA , tA ) output: Image B’s pan and tilt setting (pB , tB ) ˜ Computing lookup table for spherical wrapping A˜ and B; A Select evenly scattered Cj , j = 1, ..., kc in the overlapping region; for each j, 0 ≤ j ≤ kc , do Compute B Cj using initial readings from potentiometer; Compute FeatureB (B Cj ) using ZCED; Compute FeatureA (ǫ(A Cj )) using ZCED;
O(1) O(1) O(1)
for each (δp, δt), do for each j, 0 ≤ j ≤ kc , do Compute cell orientation θ; Rotate FeatureB (B Cj ) by −θ; Compute SSD for the cell j;
O(1) O(1) O(1)
Report the sum of SSD across all cells;
O(1) O(1)
O(1)
Report (δp, δt) with the minimum SSD; Add (δp, δt) to initial potentiometer reading to get (pB , tB );
O(1) O(1)
Table 8.1. A comparison of technical specifications of cameras tested in our experiments. VCC3, VCC4, and SD 630 are from Canon. HCM 280 is from Panasonic.

Camera    | pan            | tilt          | zoom | focal length
VCC3      | −90° ∼ +90°    | −30° ∼ +25°   | 10x  | 4.2 ∼ 42mm
VCC4      | −100° ∼ +100°  | −30° ∼ +90°   | 16x  | 4 ∼ 64mm
HCM 280   | −175° ∼ +175°  | 0° ∼ −120°    | 21x  | 3.8 ∼ 79.8mm
SD 630    | N/A            | N/A           | 3x   | 5.8 ∼ 17.4mm

Fig. 8.4. Cameras tested in the experiments: (a) VCC3, (b) VCC4, (c) HCM 280, (d) SD 630
8.5.4
Speed Test
We first compare the speed of our algorithm with the fastest method that is currently available [27]. It is a combination of SIFT and RANSAC with k-d tree support. We have used open source SIFT code¹ and k-d tree code² and implemented RANSAC according to [87]. Since this algorithm is used to construct a panorama by aligning multiple image frames, it is referred to as the Panorama Recognition Algorithm (PRA) in [27]. To ensure a fair comparison, we only compare image alignment time. Additional components in panorama construction such as image I/O, bundle adjustment, and blending/rendering are not counted in the time comparison.
We first investigate how well each algorithm scales as image resolution increases. Images used in the test are taken by a Canon SD 630 camera with a maximum resolution of 2816 × 2112. Table 8.2 shows how much time each algorithm takes under different image resolutions. The input is a pair of overlapping images. The two algorithms are fed with the same input pair during the experiment. Both PRA and PIIAA are initialized with the same initial conditions (i.e., using the same inaccurate pan and tilt potentiometer readings as their initial solutions). At each resolution level, we use 10 independent image pairs taken on the Texas A&M University campus. With each image pair as a trial, the time in the table is an average of 10 trials. Since the variance from trial to trial is small, it is not presented here. The factor column in the table indicates the speed improvement of PIIAA over PRA. It is clear that PIIAA is significantly faster than PRA for PTZ cameras. It is also desirable to see that the factors get bigger as image resolution increases. Projection invariants clearly speed up the computation.

¹ http://vision.ucla.edu/~vedaldi/code/siftpp/siftpp.html
² http://ilab.usc.edu/toolkit/home.shtml

Table 8.2. A comparison of algorithm speed versus image resolution

Resolution    | PRA Time (millisec.) | PIIAA Time (millisec.) | Factor
176 × 132     | 230.8                | 12.5                   | 18.5x
352 × 264     | 1209.3               | 43.7                   | 27.7x
704 × 528     | 5359.4               | 82.8                   | 64.7x
1408 × 1056   | 24401.5              | 215.6                  | 113.2x
2816 × 2112   | 113196.9             | 731.4                  | 154.8x
8.5.5 Application in Panorama Construction
We have also applied PIIAA within panorama construction algorithms to construct panoramas. Constructing a panorama requires performing a large number of image alignments at various camera PTZ settings. With applications ranging from natural observation to building construction documentation, our algorithm has been tested at three different sites, as illustrated in Figure 8.5.
Figure 8.5(a) illustrates the application of our algorithm to documenting building construction. A Canon VCC4 PTZ camera has been installed at the new CITRIS II building construction site at UC Berkeley. This $120 million construction project will add an additional 145,580 square feet of research and teaching space to the Berkeley campus at the end of 2007. The camera installation was requested by the construction contractors. Since PIIAA runs very fast, we are able to steer the Canon VCC4 PTZ camera to patrol the entire construction site and generate panoramas on the fly. It takes 9.7 seconds to construct each panorama, which
Fig. 8.5. Snapshots of motion panorama videos created using PIIAA: (a) construction documentation of the CITRIS II building at UC Berkeley; (b) pilot test of natural observation at Central Park, College Station, TX; (c) natural observation at the Richardson Bay Audubon Sanctuary, San Francisco Bay
consists of 21 images. Since the camera motion takes time, PIIAA is as fast as the camera can be tele-operated. We select some of the collected panoramas to create a time-lapse motion panorama documenting the building construction progress. The resulting motion panorama contains 103 individual panoramas from Feb 10, 2005 to June 2, 2005. Some panoramas are not selected for the final motion panorama because of bad weather and the lack of construction progress during holidays. At a resolution of 2600 × 900 pixels, Figure 8.5(a) is a snapshot of the motion panorama.
We also apply our algorithm to natural observation. As a pilot test, Figure 8.5(b) illustrates a snapshot of a motion panorama generated during bird
watching. Experiments were conducted from Aug 24, 2005 to Aug 31, 2005. We have collected 2186 frames and the original panorama has a resolution of 4000 × 1000 with a 240◦ horizontal field of view and 60◦ vertical field of view. The camera used is a Panasonic HCM 280 networked pan-tilt-zoom camera. Under the same setup, Figure 8.5(c) illustrates a panorama generated by PIIAA at an installation site for natural observation.
8.6 Conclusion and Future Work
In this chapter, we propose projection invariants for PTZ cameras. We formally define a projection invariant as a set of pixels that maintain fixed relative positions. We have analyzed, derived, and proved that projection invariants can be constructed under spherical coordinate systems. To demonstrate the power of projection invariants, we present a projection invariant-based image alignment algorithm, which outperforms the best available algorithm by at least an order of magnitude. Projection invariants can be used for many applications such as fast motion tracking, camera calibration, and image compression. As one important application, we are developing algorithms to increase the speed and efficiency of encoding motion panoramas based on projection invariants. New results will be reported in future publications.
9 Calibration Algorithms for Panorama-Based Camera Control⋆
We have seen that a panorama that describes the full coverage of a robotic camera is a great user interface to control the camera. The panorama provides context for users to directly point and click on the panoramic interface to command the camera to move to where they want the camera to observe. Moreover, a panorama also serves as the collaborative workspace that allows the display and the sharing of different user requests in CTRC systems. We need to guarantee the accurate correspondence between the pixel coordinates of the panorama and the camera configurations. However, the potentiometers in robotic cameras cannot provide feedback on camera configurations with sufficient accuracy. One example is the Panasonic HCM 280 camera with built-in streaming server, 22x zoom motorized optical lens, 350◦ pan range, and 90◦ tilt range. An error of 0.5◦ in camera tilt position can cause a 41.67% error in coverage when a Panasonic HCM 280 camera operates at its highest zoom. This presents a new calibration problem.
9.1 Introduction
In a conventional pan-tilt-zoom camera control system, users click buttons on the control interface, which is slow and inefficient, especially when there is a long time delay caused by communication. This is due to the lack of context in the camera control: users have to do spatial reasoning to envision the camera coverage, which is usually difficult and prone to errors. All our CTRC systems utilize a panorama-based control interface, which performs better under such scenarios.
The panoramic control interface is based on a 2D image describing the reachable regions of the PTZ camera. Users control the camera by drawing a rectangle (with a fixed aspect ratio) on the panoramic image. The center position of the rectangle controls the pan and tilt and the size of the rectangle controls the
This chapter was presented in part in Transactions of Belarusian Engineering Academy [66].
zoom level. This interface has become an advanced feature of some commercially available webcams. As shown on Canon's website [30], 20% of its clients have a panoramic control interface. Moreover, the panorama-based control interface can serve as a collaborative workspace to allow user interactions in a CTRC system.
As illustrated in Figure 9.1, the panorama calibration problem is how to ensure the precise correspondence between the desired frame and the actual camera frame. Since users draw their inputs in a panoramic window, correspondence can be distorted for a number of reasons, such as changes in the camera location, in the camera internal parameters, or in the calibration of the pan-tilt-zoom drive. Additional distortion may be introduced by the panoramic image itself, which is created by stitching a set of smaller images.
Fig. 9.1. Operators’ desired camera coverage and the actual camera coverage cannot perfectly match with each other when there is a calibration error. The calibration error causes difficulties when the tele-operated robotic camera is used to track moving animals over a distance.
We note that intrinsic camera parameters such as CCD sensor size, pixel size, and skew factor do not change over time. We assume that pan-tilt-zoom potentiometer readings are approximate or have limited accuracy. We approach the calibration problem in two steps. First, we model the calibration problem using a rigid-body transformation and present a feature point-based calibration. Then we analyze the calibration accuracy by analyzing the variance of the calibration results. Experiments show that our algorithm can effectively reduce the calibration error by 73.7%.
9.2 Related Work
Camera calibration is used to determine accurate intrinsic parameters (CCD sensor size, skew factor, focal length, and lens distortion) and/or extrinsic parameters (position and orientation of a camera). Our calibration problem can be
viewed as a variation of a camera calibration problem with a non-rigid scene and a focus on the pan, tilt, and zoom mechanisms for interface correspondence.
The fundamental work on camera calibration is to calibrate a still camera using still calibration objects, which is known as photogrammetric calibration. This is based on a parameterized camera imaging model such as the pinhole perspective projection model [208]. The unique characteristics of the imaging model distinguish camera calibration from robot calibration. Imprecision in camera parameters causes discrepancy between image coordinate systems and the world coordinate system. Sometimes, the discrepancy is caused by the fact that the pinhole model, which is itself an approximate model, cannot accurately model the imaging process [76]. The calibration process can be viewed as model fitting or parameter identification.
Photogrammetric calibration observes a 3D calibration object with a known geometry, usually consisting of several mutually orthogonal planes [26, 53, 54, 117]. This approach is very efficient, but requires carefully designed calibration objects located on 2 or 3 orthogonal planes [208]. Since it is not convenient to accurately set up calibration objects involving 2 or 3 orthogonal planes, a different approach, known as self-calibration, does not use any calibration object but relies on the rigidity of a scene to calibrate intrinsic camera parameters. It takes a series of images while moving a camera in a static scene; the assumed rigidity of the scene is used to compute camera parameters [33, 37, 100, 114, 129, 135, 147, 176, 192, 193]. Self-calibration reduces the complexity of setting up the calibration process by assuming perfect motion accuracy, which can be viewed as the inverse of the problem we are facing, because in our problem the camera motion is inaccurate and the scene is non-rigid.
Sinha and Pollefeys [165] develop automatic calibration algorithms for a pan-tilt-zoom camera with a focus on automatic zoom calibration. Similar to our approach, their method does not require a structured scene or calibration object. They first determine intrinsic camera parameters at the lowest zoom and then increase camera zoom settings to obtain radial distortion parameters. During the calibration, they focus on intrinsic parameter calibration and assume camera pan-tilt is accurate and repeatable. They obtain extrinsic parameters by matching captured images to a pre-constructed panorama. The accuracy of the extrinsic parameters depends on the panorama quality, which is sensitive to the number of frames and lighting variations across frames. Our work does not concern the correspondence between the scene and the image coordinates, which is the focus of a typical camera calibration problem; it concerns the correspondence between the camera control commands generated on the interface and the actual coverage of the camera.
It is also possible to view our calibration problem as a special case of hand-eye calibration in robotics, which searches for the transformation between a robot end-effector and a camera mounted on the robot arm. Since robot manipulators work in a structured environment, predefined calibration feature points are readily available for such calibration. Research on hand-eye calibration focuses on numerical methods. Algebraically, the problem leads to a quadratic
homogeneous equation with respect to an unknown pose matrix. Early solutions exploit the properties of the homogeneous matrix by decoupling the orientational and positional components [202]. Further solutions [10, 89] employ quaternions and more sophisticated matrix techniques, such as iterative routines with a singular-value decomposition at each iteration, that do not lead to excessive computation. A detailed comparison of these techniques can be found in [48, 124, 211]. More recent methods employ nested recursive linear Least Squares routines and concentrate on robustness issues [110, 123, 134].
9.3 Assumptions and Nomenclature
We assume that all images are taken from a camera with a fixed base, which only performs pan, tilt, and zoom movements. Due to cost and space limitations, the camera's angular potentiometer usually has limited accuracy and may deteriorate over time. Hence these extrinsic parameter readings are inherently approximate and need to be calibrated. We assume that the rate of the error change is slow and hence periodic calibration can compensate for it. We assume that the intrinsic parameters, including pixel size, skew factor, and CCD sensor size, are pre-calibrated and known.
One important assumption of the system is that the images taken at different pan, tilt, and zoom configurations share the same optical center. Since most objects observed are relatively far away from the camera (usually more than 20 meters), the small displacement (usually less than 2 cm) between the mechanical rotation center and the camera optical center is negligible.
Salient and fixed points in the environment are identified as calibration feature points. Figure 9.2 shows some sample feature points, including the center of a clock, a corner of the bookshelf, a power outlet on the wall, and a nail in the wall. For the jth feature point, we can pan and tilt the camera to center a frame j on it. Hence, we also refer to it as the jth frame. Let us define the following variables,
• X*_j = (p*_j, t*_j): the true center position of the jth frame on the panorama. It remains unknown during the entire calibration process.
• X̂_j = (p̂_j, t̂_j): the nominal center position of the jth frame, read from the camera potentiometers. It is referred to as the nominal frame center position because of the errors in the camera potentiometers. Users send their requests according to nominal positions because the actual position X*_j is unknown.
• X_j = (p_j, t_j): the measured center position of the jth frame. It is the corresponding measurement value of the true center position. The measurement process can be done using human readings of known ground truth points in the scene.
• e_j = X_j − X*_j: the measurement error of the jth frame.
9.4 Calibration Scheme
143
8
2 5
1 6
3
10 4
7
9
Fig. 9.2. Feature point-based pan-tilt-zoom calibration for a robotic camera where features are identified by remote human operators. The measured positions of the frame center are marked as “+”, with their nominal positions centered at “◦”. The arrows indicate error vectors.
The subsequent calibration is a two-step operation. We first identify calibration feature points and collect their readings. Secondly, a calibration model is selected before its parameters are estimated using the collected feature points.
9.4 Calibration Scheme 9.4.1
Problem Definition
The panorama in Figure 9.3(a) describes pan and tilt range of a Canon VCC3 camera. Assume the panorama has no non-linear distortion and coordinate system XOY perfectly matches the camera position, a camera frame can be represented as a rectangle, φ∗ = [p∗ , t∗ , z ∗ ]T = [(X ∗ )T , z ∗ ]T , where z ∗ corresponds to “Horizontal Field Of View” in degrees that quantitatively defines zoom value due to the fixed aspect ratio of the camera. For example, a Canon VCC3 camera has variable HFOV from 4.9◦ to 46.9◦ . Users can draw a rectangle ˆ T , zˆ]T on the panorama to control the camera. If the precise φˆ = [ˆ p, tˆ, zˆ]T = [X correspondence exists, then φˆ = φ∗ after the camera finishes its travel. Unfortunately, such precise correspondence is hard to reach. During the panorama construction, the merging of mosaics and the trimming of the image may introduce translational and rotational errors. On the other hand, camera mechanisms may lose calibration over time. Therefore, as shown in Figure 9.3(b), ˆO ˆ Yˆ , which yields calibration erthe coordinate system used in control may be X rors. The relation between φˆ and φ∗ can represented as the following function, ˆ φ∗ = f (φ). We are interested in finding a pre-compensation transformation function h(·) for the nominal frame φˆ such that precise correspondence can be reached after the compensation, ˆ = φ. ˆ φ∗ = f (h(φ)) The ideal h(·) = f −1 (·). Therefore, the problem becomes how to estimate the inverse of the function f (·) based on the dissimilarity of nominal frames and
144
9 Calibration Algorithms for Panorama-Based Camera Control
25° Tilt
t*
-25° -90°
p*
Pan
z*=HFOV
90°
(a) Panorama-based Camera Control Interface
X*
O* Y*
Oˆ
φˆ
Xˆ
Yˆ
φ*
(b) Calibration Error Illustration Fig. 9.3. A graphic illustration of panorama based camera control and a possible misalignment using the panoramic image of Automation Science and Engineering Lab, UC Berkeley
the corresponding measured camera frames. Furthermore, the “frame to frame” dissimilarity can be divided into two lower order metrics including the “centerto-center” dissimilarity and “size-to-size” dissimilarity. We consider a set of m frames. Since we do not know actual frames {φ∗j }, we use measurement values {φj = [XjT , zj ]} instead. Therefore, we have following problem definition, Definition 1 (Panorama Calibration Problem). For given m nominal ˆ T , zˆj ], 1 ≤ j ≤ m} and their corresponding measured frames frames {φˆj = [X j T {φj = [Xj , zj ], 1 ≤ j ≤ m}, find the transformation function h, which minimizes the dissimilarity of the rectangle centers, min h
ˆ j − Xj 2 , X
(9.1)
j
and the dissimilarity of the sizes, min h
j
|ˆ zj − zj |2 .
(9.2)
9.4 Calibration Scheme
9.4.2
145
Calibration Technique
Let us assume that the “center-to-center” mapping can be approximated by a superposition of three geometrical affine transformations (translation, rotation, and scaling) (This transformation is sometimes referred to as similarity transformation in projective geometry.), which is defined by the following expression, ˆ j + t, j = 1, ..., m Xj = μRX
(9.3)
where R is an orthogonal 2 × 2 rotation matrix, t is a 2 × 1 translation vector, and μ is a scaling factor (scalar). Then the model parameters can be identified using the following theorem. ˆ j , Xj )} be a set of corresponding center points for the nomTheorem 1. Let {(X inal and measured camera frames respectively. Then the least square fitting of the model in (9.3) yields the following parameters, T
m
j=1
R = VU , μ = m
ˆ¯ ¯ T RX X j j , ˆ ˆ T ¯ ¯ X X
j=1
j
j
m m 1 μ ˆ Xj , Xj − R m j=1 m j=1
t= where
m
m
ˆ ¯j = X ˆj , ¯ j = Xj − 1 ˆj − 1 X X Xj , X m j=1 m j=1 and the orthogonal matrices V and U are computed via the following SVDdecomposition, m ˆ ¯j X ¯ T = USVT . X (9.4) j j=1
Proof (Sketchy). To estimate the unknown model parameters (R, t, μ), let us apply the least square technique with an objective function, min F (R, t, μ) =
R,t,μ
m
ˆ j − t)T (Xj − μRX ˆ j − t) (Xj − μRX
(9.5)
j=1
with a non-linear orthogonality constraint RT R = I. Differentiating F with respect to t and solving the system of equations ∂F/∂t = 0 yields the following, t=
m m µ ˆ 1 Xj . Xj − R m j=1 m j=1
(9.6)
146
9 Calibration Algorithms for Panorama-Based Camera Control
Plug (9.6) into (9.5), we get, F (R, μ) =
m
ˆ ˆ¯ ). ¯ j − μRX ¯ j )T (X ¯ j − μRX (X j
j=1
Further differentiate F with respect to μ and solve the equation ∂F/∂µ = 0, we get the scaling factor, m ¯ ¯T ˆ j=1 Xj RXj µ = m , ˆ ˆTX ¯ ¯ X j=1
j
j
which leads to a subsequent simplification of the objective function, F (R) =
m
¯ jT X ¯j − X
m j=1
j=1
ˆ ˆ ¯j ¯ jT X X
m −1
ˆ¯ ¯ jT RX X j
2
,
j=1
where only the second term depends on the unknown orthogonal matrix R. Hence the initial minimum least square formulation in (9.5) can be replaced by, max F˜ (R) = R
m j=1
ˆ ¯ T RX ¯ j = tr R X j
m j=1
ˆ¯ X ¯T X j j ,
(9.7)
where function tr(·) computes the trace of the matrix. To take into account the orthogonality constraint RT R = I, let us apply the SVD-technique [90], which allows factoring the second term of the inside-trace product as USV where U and V are 2 × 2 orthogonal matrixes and S is a 2 × 2 non-negative diagonal matrix, i.e. S = diag(σ1 , σ2 ); σ1 ≥ σ2 ≥ 0. After this factorization, (9.7) can be rewritten as, F˜ (R) = tr(RUSVT ) = tr(VT RUS) =
2
αkk σk ,
k=1
where αkk are the diagonal elements of the matrix VT RU, which is also an orthogonal matrix. For this reason, by the values of αkk are upper-bounded 1, so the largest value of F˜ (R) is σk , which is achieved when VT RU = I. Therefore, the optimal rotation matrix is R = VUT and the theorem is proven. Since the model in (9.3) deals with 2-dimensional transformations, the SVDdecomposition in (9.4) can be avoided by a scalar parameterization of the rotation matrix. Recall t is the 2 × 1 translations vector. Let us define t = [tp , tt ], where tp and tt are the translation along pan and tilt directions, respectively. In this case, the expression in (9.3) is converted into, pj = µˆ pj cos ϕ − µtˆj sin ϕ + tp , tj = µˆ pj sin ϕ + µtˆj cos ϕ + tt ,
(9.8)
where ϕ is the rotation angle. Then the model parameters can be identified using the following corollary.
9.4 Calibration Scheme
147
Corollary 1. For the scalar parameterization of the model in (9.8), the leastsquare fitting parameters are expressed as, Ccr cos ϕ + Scr sin ϕ , Crr m m m µ 1 tp = pˆj − sin ϕ pj − cos ϕ tˆj , m j=1 m j=1 j=1
ϕ = atan2(Scr , Ccr ), µ =
tt =
m m m 1 µ sin ϕ tj − pˆj + cos ϕ tˆj , m j=1 m j=1 j=1
Scr =
m m ¯j ; Ccr = p¯j pˆ¯j + t¯j tˆ¯j ¯j − p¯j tˆ t¯j pˆ
(9.9)
where
j=1
j=1
Crr
m m ˆ 2 2 t¯j pˆ ¯j + = j=1
j=1
m m 1 1 p¯j = pj − pk ; t¯j = tj − tk m m k=1
k=1
m m 1 ˆj = tˆj − 1 pˆ ¯j = pˆj − pˆk ; t¯ tˆk . m m k=1
k=1
To obtain these expressions, the least square objective in (9.5) should be presented as a function of four scalar variables F (ϕ, tp , tt , µ) and differentiated with respect to them. The related system of nonlinear equation is solved by subsequent substitutions using the technique in the theorem proof. As follows from (9.9), at least 2 feature points are required to identify the model parameters. The second part of the calibration problem is the “size-to-size” mapping, which is based on the linear relations zj zj = ηˆ between the sizes of the nominal camera frames and the corresponding measured camera frames zj and zˆj respectively. Its least-square solution is trivial, m ˆj j=1 zj z η = m 2 . ˆj j=1 z This yields the second scaling factor η, which is used for camera zoom control. It should also be noted that the inverses of both models in (9.3) and (9.8) can be computed by a simple swap of the data sets of the nominal frames and the corresponding measured frames.
148
9 Calibration Algorithms for Panorama-Based Camera Control
9.4.3
Calibration Accuracy Analysis
The measurement processes that used to identify {Xj } and the nominal posiˆ j } from angular potentiometers introduce errors into the calibration. tions {X Furthermore, how many feature points are needed to guarantee the accuracy of the calibration? How does the distribution of feature points on the panorama affect the result? To answer those problems, error variance analysis is necessary. Let us augment the affine model in (9.3), ˆ j + eˆj ) + t; j = 1, ..., m Xj + ej = μR(X
(9.10)
where eˆj and ej are the measurement errors, which are assumed to be independent and identically distributed (iid ). Then influence of these errors on the calibration accuracy is defined by the following theorem. Theorem 2. Assume that errors eˆj and ej are iid random variables with zero 2 2 2 + mean and variances σ ˆ 2 and σm respectively for each coordinate, and σΣ = σm 2 T µˆ σ , then the covariance of the model parameter errors ∆ = (δϕ, δµ, δt) is expressed as, ⎤−1 ⎡ 2 µ A11 0 01×2 2 ⎣ (9.11) cov(∆) = σΣ 0 A11 01×2 ⎦ 02×1 02×1 mI2×2 m ˆ ˆ ¯j . ¯TX where A11 = j=1 X j Proof. Let us linearize the model in (9.10) in the neighborhood of the exact solution, ˆ ˆ ¯ j δϕ + RX ¯ j δµ + δt = ej − µRˆ ej ; j = 1, ..., m µR+ X where, without loss of generality, the point coordinates are defined relative to their means, and R+ is the Jacobian of the rotation matrix R with respect to ϕ, which is computed by increasing ϕ by π/2. Applying the least square technique to this system yields, m m m
T ˆ T R+ e , ˆ ¯ ¯ jT R+ ej , X X e ∆ = Ω −1 µ j j j j=1
j=1
(9.12)
j=1
where ej = ej − µRˆ ej , ⎡ 2 ⎤ µ A11 µA12 µA13 Ω = ⎣ µA12 A11 A23 ⎦ µAT13 AT23 mI2×2 m ˆ T ˆ¯ T T ˆ ˆ ¯ j , A13 = m X ¯ T R+T , A23 = m X ¯ R⊥ X where A12 = j=1 X j j j=1 j R , and j=1 R⊥ is a rotation matrix for the angle π/2. Detailed analysis of Ω shows that all its non-diagonal elements are equal to zero, because they include the sums
9.5 Experiments
149
m ˆ ¯ of the centered variables (i.e. j=1 Xj = 0 ) or the dot products of the orˆ ˆ T ¯ j = 0). So, the system 9.12 can be split into the ¯ R⊥ X thogonal vectors (i.e. X j separate subsystems for each unknown variable δϕ, δµ, and δt. Then, computing covariance for each subsystem, keeping in mind that ej are also independent and 2 + µ2 σ ˆ 2 )I2×2 , leads to (9.11). var(ej ) = (σm Corollary 2. For the scalar presentation in (9.8), the variance for its parameters can be expressed as, var(δϕ) =
2 σΣ 2 Ω −1 ; var(δµ) = σΣ Ωs−1 ; µ2 s
var(δtp ) = var(δtt ) = where Ωs =
m
j=1
pˆ ¯2j +
2 σΣ m
m ˆ2 ¯ j=1 tj .
The Corollary 2 can be derived from (9.11) directly using the formula for the dot product of vectors. It should be mentioned that var(δϕ) and var(δµ) are inversely proportional to the parameter Ωs , which may be interpreted as “the moment of inertia for the test points relative to their center of gravity”. It matches the intuitive desire to spread the test points in order to get accurate values for the rotation angle and the scaling factor. On the other hand, var(tp ) and var(tt ) do not depend on the test point distribution in the panorama. Remark 1. It is worth mentioning that the calibration accuracy, which is char2 acterized by cov(∆) in (9.11), depends on the value of σΣ . Recall that σΣ = 2 + µˆ σ 2 . Since σ ˆ 2 is the variance of camera potentiometer readings and is not σm 2 , which is the variance controllable, the calibration accuracy really depends on σm of measurement error. This also implies that to improve the calibration accuracy, it is crucial to reduce the variance of measurement error. Remark 2. It is also worth mentioning that although the result in Theorem 2 is derived using the similarity transformation model in (9.3), the conclusion that 2 , the variance of the measurethe resulting calibration accuracy is linear to, σm ment error actually can be derived from more generic homographic transformations in projective geometry [86].
9.5 Experiments The developed technique has been applied to the calibration of the web-camera installed in the Automation Science and Engineering Lab (University of California, Berkeley). The panoramic image covers a wide viewing area with all lab facilities and has been prepared before the actual camera location was decided. The discrepancy between the requested frame and camera frame led to inconvenience of the user who must request viewing of a desired object by pointing
150
9 Calibration Algorithms for Panorama-Based Camera Control
essentially different area on the fixed image (Fig. 9.2). Because there is no noticeable size difference between the requested frame and the actual camera frame, the calibration was limited by correcting of “center-to-center” mapping for the requested/actual camera frames. To obtain the initial data, 10 static, easily recognizable objects were selected, which are to remain unmoved after the fixed panorama has been created. This collection is presented in Fig. 9.2, where arrows show the displacement. The point coordinates are given in Table 9.1. Table 9.1. Experiment data Point# pˆj tˆj (pix) (pix) #1 416 59 #2 72 40 #3 92 88 #4 166 123 #5 204 60 #6 244 84 #7 272 124 #8 390 40 #9 441 141 #10 513 111
pj (pix) 416.3 88.0 103.4 179.8 217.6 253.3 278.3 408.1 437.2 505.9
tj (pix) 31.3 22.7 65.0 104.1 40.4 60.6 103.5 22.3 120.1 85.3
Before the calibration, the mean square error was 25.0 pixels (see Fig. 9.2 where, for comparison, the fixed panorama size is 567 × 168 pixels). Several calibration configurations were then applied which differ only in the transformations included in the model 9.3. Corresponding results are presented in Table 9.2, where the translation, rotation and scaling are denoted as Tr, Rt, and Sc respectively. As shown, the simplest model (Tr) reduces the navigation error by 64.3%, and the model (Tr, Rt) also yields similar improvements. Another reduced model (Tr, Sc) and the full model (Tr, Rt, Sc) give relatively better result, reducing the error by 73.7%. To compute the covariance and confidence intervals for the identified model parameters, 50 experiments were conducted to estimate the accuracy of the input data. A user pointed the same object; the mouse coordinates were recorded and processed using the robust median-based technique. The measurement noise can be approximately modeled by a Gaussian process with standard deviation values of: σr ≈ 1.4pix. Their substitution into equations in Corollary 2 yields: σϕ = 0.25deg, σμ = 0.0041, and σx = σy = 0.61pix. This corresponds to the following half-width for the confidence intervals: ∆ϕ = 0.48deg, ∆μ = 0.0081, and ∆x = ∆y = 1.2pix. The results in Table 9.2 show that the error is higher than expected for such measurement noise. It indicates that some other nonlinearity exists, which is not described by the equation 9.3. This could be caused by the nonlinear distortion
9.6 Conclusions and Future Work
151
Table 9.2. Calibration results for different models Model
Model parameters tp tt ϕ μ (pix) (pix) (deg) None 0 0 0 1.00 Tr 7.76 -22.07 0.00 1.00 Tr, Rt 7.24 -20.37 -0.35 1.00 Tr, Sc 18.85 -18.64 0.00 0.96 Tr, Tr, Sc 18.35 -17.00 -0.35 0.96
Error ǫ (pix) 25.0 8.80 8.76 6.56 6.50
in panorama construction, where the lens distortion was not properly corrected. However, the achieved calibration accuracy seems satisfactory for this particular application. The calibration results have also been verified through real-life experiments using the Sharecam system [171].
9.6 Conclusions and Future Work The Pan-Tilt-Zoom cameras with panorama control interface provide users with an intuitive interface. However, in practice, the live camera image may not fully match the requested frame. The developed technique reduces this discrepancy by applying “center-to-center” and “size-to-size” mapping. The paper proposes a new technique for online feature extraction from a single image using a human-in-the-loop architecture. Using this model, the calibration accuracy can be estimated using new expressions developed for computing the transformation parameters and their covariance. The developed technique has been implemented and validated by on-line experiments with the ShareCam system installed in Automation Science and Engineering Laboratory, UC Berkeley where it has yielded essential reduction of the navigation error (about 75%). The proposed calibration scheme is based on human-inputs, which are prone to errors. In the future, we will develop automatic calibration scheme that builds on image matching. We notice that the panorama is not necessary static. We are developing online incremental calibration scheme that can be embedded into panorama construction process.
10 On-Demand Sharing of a High-Resolution Panorama Video from Networked Robotic Cameras⋆
An important problem in a CTRC system is the organization and the transmission of the image data. Unlike a regular camera system where frames are usually compressed into a single MPEG stream, frames from a pan-tilt-zoom camera should be indexed spatiotemporally to allow efficient archive and retrieval. The transmission of the video data should not be limited to current live video. The request of the user should be considered in the transmission process. This chapter presents an on-demand sharing of the high resolution panorama video to address this problem.
10.1 Introduction Consider a high-resolution pan-tilt-zoom camera installed in a deep forest. Connected to the Internet through a long-range wireless network communication, the robotic camera allows scientists and/or the general public to continuously observe nature remotely. Equipped with robotic pan-tilt actuation mechanisms and a high-zoom lens, the camera can cover a large region with very high spatial resolution and allow for observation at a distance without disturbing animals. For example, a Panasonic HCM 280A pan-tilt-zoom camera has a 22x motorized optical zoom. The camera can reach a spatial resolution of 500 megapixel per steradian at the highest zoom level. Since the camera has a 350 pan range and a 120 tilt range, the full coverage of the viewable region is more than 3 gigapixels if represented as a panorama. As the camera patrols the viewable region, the giga-pixel panorama is also continuously updated. As illustrated in Fig. 10.1, there are many concurrent scientists and other online users who want to access the camera (or cameras if multiple cameras are installed for better coverage). Transmitting the full-sized ever-changing giga-pixel panorama video to every user is unnecessary and expensive in the bandwidth ⋆
This chapter was presented in part at the 2007 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), San Diego, CA [150].
D. Song: Sharing a Vision: Systems and Algorithms, STAR 51, pp. 153–163. c Springer-Verlag Berlin Heidelberg 2009 springerlink.com
154
10 On-Demand Panorama Video Streaming
Fig. 10.1. Evolving panorama video system diagram. The left hand side illustrates the server side. The right hand side is a user at the client side. The grid at server represents a patch-based high-resolution panorama video system that allows multiple users to query different parts of the video concurrently. I’s and B’s indicate the I-frame and the B-frame used in MPEG-2 compression. A user sends a spatiotemporal request to server side and to retrieve the part of his/her interests in the panorama.
requirement. Each user may want to observe a different sub-region and time window of the panorama video. For example, an ornithologist is often interested in bird video data when the camera is aimed at the top of the forest in the morning. Since that both camera coverage and user requests have spatiotemporal constraints, how to efficiently organize the frames captured by the camera and satisfy various and concurrent user requests becomes a challenging problem. An analogy to this problem is the Google Earth (http://earth.google.com) system where each user requests a view of different regions of the planet Earth. While the image-sharing aspect of Google Earth is similar to our system, the primary difference is that the satellite image database of the google Earth system is relatively static and user requests do not involve the time dimension whereas our system has to be run in near real time and satisfies spatiotemporal user requests. In this chapter we present systems and algorithms that allow on-demand sharing of a high-resolution panorama video. It is the first panorama video system that is designed to efficiently deal with multiple different spatiotemporal requests. We propose a patch-based approach in a spherical coordinate system to organize data captured by cameras at the server end. Built on an existing videostreaming protocol, the patch-based approach allows efficient on-demand transmission of the request regions. We present a system architecture, user interface, data representation, and encoding/decoding algorithms followed by experiments. We begin with the related work.
10.2 Related Work
155
10.2 Related Work Our system builds on the existing work of networked tele-operation[71] and panorama video systems [18]. Existing work on MOSR and MOMR systems provides strategies to efficiently coordinate the control of the shared robot. Users are usually forced to share the same feedback from the robot. However, users may not be interested in the same event at the same time even when they access the system at the same time. This becomes more obvious when the shared robot is a robotic camera. Time and space of interests may vary for different online users. This chapter is aimed to address this new problem. Our work is directly related to panorama video systems because a panorama is a natural data representation to visualize the data from pan-tilt-zoom cameras. Because of its capability to visualize a large region with high resolution, a panorama video system finds many applications such as videoconferencing[57], distance learning[58], remote environment visualization[18], and natural environment observation[167]. The related work can be classified into two categories: panorama video generation and panorama video delivery. There are many methods to generate a panorama video. A panorama can be generated using a single fixed camera with a wide-angle lens or parabolic mirrors[13, 137, 139, 201]. However, due to the fact that it can not distribute pixels evenly in the space and the resolution limitation imposed by CCD sensors, it cannot generate high-quality video. A panorama video can also be generated by aligning videos from multiple cameras[58, 178]. Although the design can provide complete coverage with live video streams, those system require simultaneous transmission of multiple video streams and the bandwidth requirement is very high. Panorama video can also be built from registering a pre-recorded sequence of video frames [4, 94, 191, 210] captured by a single rotating camera. However, only portions of the panorama contain live video data at any moment. Our system fits into this category as well. Argarwala et al.’s panoramic video texture (PVT)[4] and Rav-Acha et al.’s dynamosaics [152] are representative work in this category that constructs pseudo-live panorama video out of a single video sequence by alternating time-space correspondence. Bartoli et al.[15] develop motion panoramas that extract moving objects first and then overlay the motion part on top of a static background panorama. We summarize the existing panoramic video systems in Table 10.1. In existing systems, a panorama video is always transmitted and fully reconstructed at the user end because panorama resolution is not a concern. However, when the resolution of the panorama is very high, on-demand transmission is necessary. Our development is the first system that tackles this problem. Transmitting a panorama video is non-trivial. For a low resolution panorama video system, we can encode the whole panorama video and send it to clients. However it consumes too much bandwidth when the resolution of the panorama increases. Furthermore, it cannot deal with random spatiotemporal accesses. Irani et al. [95, 96] propose mosaic-based compression. A static panorama background is first constructed out of the video sequence and then each video frame is compressed using the static panorama background as a reference. Furthermore,
156
10 On-Demand Panorama Video Streaming
Table 10.1. A comparison of existing panoramic video systems BandSystem Camera Video Output Sample Systems width Wide angle Single Low quality live Low [13, 137, 139, 201] lens / mirrors fixed stream Multiple Multiple camera High Live panoramic video [58, 178] fixed panorama video Pseudo-live Single panorama video by Panoramic High [4] pan changing video video texture temporal display Pseudo-live Single panorama video by Dynamosaics High [152] pan changing space-time volume Static panorama Motion background overlaid Single Low [15, 94] panorama with live moving objects trajectory PTZ Our system Low Partial live panorama This chapter cameras
it detects and indexes the motion objects and provides content-based video indexing. Although they do not deal with on-demand transmission, their work inspires our work. Ng et al.[139] propose to partition the panorama into six vertical slices spatially and compress each sliced video sequence separately using MPEG2. When a user requests for the video of a part of the panorama video, only sliced video sequences that intersect with the user’s requested area are transmitted. This method is among the first to consider on-demand queries. However, its efficiency of encoding decreases as the camera tilt range increases. Also it repeats the data from previous frame when there is no live coverage; it is not efficient or a faithful representation of the remote environment. Our work advances the idea of partitioning panorama into 2-D patches and significantly reduces computation time and bandwidth by only encoding/transmitting updated patches.
10.3 System Architecture Fig. 10.2 illustrates our system architecture. Any user with Internet access can access our system. Users log on to our system and send their requests to a streaming server. The streaming server directly connects to the camera. The system is not limited to a single camera. The development can be easily extended to a multiple-camera system to increase concurrent coverage as long as their image frames can be projected into the same spherical panorama. Here we use a single camera to illustrate the idea. Note that the camera cannot provide concurrent coverage of the entire viewable region due to its limited field of view
10.3 System Architecture
157
Fig. 10.2. System architecture and user interface
and the limited number of pixels in its CCD sensor. The motion of the camera can be controlled by preprogrammed patrolling sequence, sensor inputs, or the optimization of frames from competing user inputs [170, 173]. The user interface consists of two parts: a static background panorama that covers the user requested region and a video segment superimposed on top of the background panorama if there are video data collected for the requested time duration. Therefore, depending on user requests and camera configurations, the streaming server may transmit different contents to a user such as a pre-stored video segment, a high-resolution static image with the time stamp closest to the request time window, or a live video from the camera. On the other hand, users might use low-power devices such as PDAs or cell phones, which do not have the computation power to perform expensive image alignment and panorama construction computation. The streaming server should perform as much computation in generating and delivering panorama video as possible. The streaming server employs our evolving panorama to accomplish the task. 10.3.1
Evolving Panorama
An evolving panorama is the data representation we design to deal with spatiotemporal camera frame inputs and user requests. The evolving panorama is not a panorama but a collection of individual frames with timestamped registration parameters. The registration parameters allow the frame to be registered as part of a virtual spherical panorama. A panorama is usually constructed by projecting frames taken at different camera configurations into a common coordinate system, which is referred to as a composite panorama coordinate system. We choose a spherical coordinate system as the composite panorama coordinate system due to its relative small distortion if compared to a planar panorama composite coordinate system and large tilt coverage if compared to a cylindrical panorama composite coordinate system. In
158
10 On-Demand Panorama Video Streaming
[151], we have shown that image alignment on the same spherical surface can be performed very efficiently because there exist projection invariants to allow the quick computation of registration parameters. Using a pre-calibrated camera, a point q = (u, v)T in a newly-arrived video frame F is projected to the point q˜ = (˜ u, v˜)T in F˜ in the spherical coordinate system. The spherical coordinate system is centered at the lens optical center and has its radius equal to focal length f of the lens. The spherical pre-projection that projects q to q˜ is defined in (8.12). Each point (˜ u, v˜)T in F˜ is defined using local pan and tilt spherical coordinates with units of radians. This is a local spherical coordinate because it forces the camera’s optical axis to overlap with vector (˜ u = 0, v˜ = 0). The next step is to re-project the local spherical coordinate to a global spherical coordinate to obtain image registration parameters using image alignment. The concept of an evolving panorama builds on the fact that the panorama is continuously updated by the incoming camera frames. In fact, we do not store and build the whole panorama in order to avoid expensive computation. Different clients might have different spatiotemporal requests. It is important to understand the relationship between the evolving panorama and user requests. 10.3.2
Understanding User Requests
For a giga-pixel panorama video, it is impractical to transmit the entire video sequence due to bandwidth limitations. The screen resolution of the display device also limits the resolution of the video. Additionally, a user might not be interested in the entire viewable region. As illustrated in Fig. 10.3, a typical user request can be viewed as a 3D rectangular query box in space and time. Define ri as the ith request, ri = [u, v, w, h, ts , te ], (10.1) where (u, v) defines the center position of the requested rectangle on the panorama, w and h are width and height of the rectangle, and time interval [ts , te ] defines the time window of the request. Fig. 10.3 only illustrates a single user request. At any time k, there may be many different concurrent requests.
Fig. 10.3. The relationship between the evolving panorama and a user request. The striped regions indicate how the evolving panorama updates as camera frames arrive. The shaded box indicates the part of the data the user queries.
10.4 Data Representation and Algorithms
159
Addressing the need of different and concurrent requests is the requirement for our system. With the concept of the evolving panorama and user requests, we are ready to introduce the data representation and algorithms for the system.
10.4 Data Representation and Algorithms We propose a patch-based panorama video data representation. This data representation allows us to partition the image space and allows partial update and partial retrieval. Built on the data representation, we then present a frame insertion algorithm and a user query algorithm. To illustrate the idea, we build our algorithms based on the MPEG-2 streaming protocol, which is the most popular protocol that can be decoded by a majority of client devices. However, the design can be easily extended to more recent protocols such as the MPEG-4 family for better compression and performance. 10.4.1
Patch-Based Evolving Panorama Video Representation
We partition the panorama video into patches and encode each patch individually using MPEG-2 algorithms. The grid in Fig. 10.1 shows a snapshot of the patch-based panorama at a given time. Only a subset of patches contain live video data because cameras cannot provide full coverage of the entire viewable region at a high-zoom setting. The panorama snapshot is a mixture of live patches and static patches. Let us define the jth patch as pj , j = 1, ..., N for a total of N patches. Each patch contains a set of video data pj = {pjk |k = 1, ..., ∞} across the time dimension. Define Fk as the camera coverage in the viewable region at time k. If pj intersects with Fk , pjk contains live video data at time k. Otherwise, pjk is empty and does not need to be stored. To summarize this, the whole patch-based evolving panorama video Pt at time t is a collection of live patches pjk s, Pt = {pjk |j = 1, ..., N, k = 1, ..., t, pjk ∩ Fk = ∅}. 10.4.2
(10.2)
Frame Insertion Algorithm
When a new video frame Ft arrives at time t, we need to update Pt−1 to get Pt , Pt = Pt−1 ∪ {pjt |j ∈ {1, ..., N }, pjt ∩ Ft = ∅}.
(10.3)
Implementing (10.3) on the streaming server is nontrivial. As illustrated in Fig. 10.1, for raw video frame Ft , its extrinsic camera parameters are first estimated by aligning with previous frames. The alignment process is performed on the spherical surface coordinate system. Next, we project the frame Ft onto the composite panorama spherical coordinate system. For each patch pj intersecting with Ft , we encode it individually. We use an MPEG-2 encoder for patch encoding in our implementation. As with any MPEG-2 encoders, the size boundary
160
10 On-Demand Panorama Video Streaming
Algorithm 1. Frame Insertion Algorithm input : Ft output: Updated evolving panorama video wrap Ft onto the spherical surface; estimate Ft ’s registration parameters by aligning it with previous frames; project Ft onto the sphere panorama surface; for each pj and pj ∩ Ft = ∅ do insert pjt into pj ’s GOP buffer; for each pj , j = 1, ..., N do if pj ’s GOP buffer is full then encode patch video segment; store patch video segment start position and time data into lookup table; reset GOP buffer for incoming data;
for the number of frames inside one group of pictures (GOP) is predefined. Each GOP contains one I frame and the rest of the frames are either P frames or B frames. The size of the GOP should not be too large for quick random temporal video retrieval. Each patch holds its own GOP buffer. If the patch pj intersects the current frame Ft , the updated patch data are inserted into patch video sequence Pj ’s GOP buffer. Whenever the GOP buffer reaches its size limit, we encode it using the standard MPEG-2. Since only a partial area of the panorama contains live video data at a certain time range and the number of the frames inside the GOP is predefined, the patch video data pjk inside one patch video segment are not necessarily continuous in the time dimension. We summarize the patch-based evolving panorama video encoding algorithm in Algorithm 1. 10.4.3
User Query Algorithm
At time t, the system receives the ith user request ri = [u, v, w, h, ts , te ]. To satisfy the request, we need to send the following data to the user at time t, ri ∩ Pt = {pjk |j ∈ {1, ..., N }, k ∈ [ts , te ],
pjk ∩ ri = ∅, pjk = ∅}.
(10.4)
We implement this query as follows: for each pj we keep track of its start position and the timestamp of I frames in a lookup table, which is used for random spatiotemporal video access. After receiving ri , the streaming server first locates the nearest I frame with respect to ts and te . If the streaming server identifies there is no live data in patch pj in the requested time range, no additional video data is transmitted for patch pj . This procedure can be summarized as the following Algorithm 2. The decoding procedure at the client side is the standard MPEG-2 decoding. It is worth mentioning that the output of the system is not always a video segment. As illustrated in Fig. 10.3, a user-requested region does not overlap with camera coverage at time k + 1. It is possible that a user request might not intersect with any camera frames for the entire query time window [ts , te ]. For
10.5 Experiments and Results
161
Algorithm 2. User Query Algorithm input : ri output: ri ∩ P in MPEG-2 format Identify patch set S = {pj |j ∈ {1, ..., N }, pj ∩ ri = ∅}; for each pj ∈ S do find the nearest I frame pjb earlier or equal to ts ; find the nearest I frame pjc later or equal to te ; transmit the patch segments between pjb and pjc ;
this situation, this algorithm will output an I-frame that is closest to [ts , te ]. Therefore, it sends a static image closest to the request. If the user request happens to be overlapped with current live camera coverage, the user receives live video. This algorithm allows three types of outputs: a pre-stored video, a live video, and a static image.
10.5 Experiments and Results We test our algorithms using a Dell Dimension DX with a 3.2Ghz Pentium dualcore processor and 2GB RAM. The video camera is a Panasonic HCM 280a. It has a 2.8 − 51 horizontal field of view. We have implemented our algorithms using Visual C++ in Microsoft Visual Studio 2003.NET and adopted the MPEG2 encoder and decoder source code developed by the MPEG Software Simulation Group. We have conducted experiments using the data from field tests. As illustrated in Figfigresults, we have deployed our camera in two testing fields including a construction site at UC Berkeley and a pond in Central Park, College Station, Texas. We have collected data at both sites. For the construction site, data cover a duration from Feb. 10, 2005 to Jun. 2, 2005. The camera has been controlled by both online users and a pre-programmed patrolling sequence. Data collected in the park cover the experiment duration of Aug. 24, 2005 to Aug. 31, 2005. The construction site provides an urban environment setting while tests in the park provide a natural environment setting. The data for each trial consist of 609 image frames captured at a resolution of 640 × 480. For a frame rate of 25 frames per second, the data represent 24 seconds of recording by the HCM 280a. The overall raw RGB data file size is 536 megabytes for the 24-bit color depth used in the experiment. The constructed panorama has an overall resolution of 2742 × 909 after cropping the uneven edges. The panorama size is much smaller than what the camera can provide (i.e. giga-pixel level). Since our tests involve speed tests, a large image file will involve an excessive mixture of RAM and disk operations, which could bias the speed test results. Using a smaller data set can minimize disk-seeking operations and reveal the real difference in computation speed. In the first test, we are interested in testing how much storage savings we can gain from the design and how much computation time is needed to achieve the gain. During all the tests, we set the MPEG-2 quantization level to 50 without
162
10 On-Demand Panorama Video Streaming Table 10.2. Storage and computation speed versus different patch sizes Patch size #Patches File size (kb) Speed 1 96 × 96 2 128 × 96 3 256 × 192 4 320 × 240 5 480 × 320 6 2742 × 909
290 220 55 36 18 1
8044 8191 8871 9965 11099 22163
6.9x 6.4x 5.0x 3.8x 3.1x 1x
a rate limit. Therefore, we can compare the size of the video file data at the same video quality at different patch size settings. The last row in Table 10.2 actually encodes the entire panorama video at once without using patches, which is used as the benchmarking case. In this case, we update and generate a full panorama for each arriving camera frame. Then the full panorama is added into the GOP for encoding (same as [139]). The file size in Table 10.2 is displayed in units of kilobytes. Smaller file size means less storage and is preferable. It is interesting to see that patch-based approach has significant savings in storage. This is expected because our system does not encode the un-updated part of the panorama as opposed to the benchmarking case which repeatedly encodes the un-updated regions. The speed column compares the computation speed under the various patch size settings with the benchmarking case. As shown in the Table 10.2, encoding the entire panorama in the benchmarking case takes more time than that of the patch-based approach. The computation speed gets faster as the patch size reduces. This can be explained by two reasons 1) less data: we do not repeatedly encode the un-updated region and 2) smaller problem space: the block matching problem space is much smaller for a smaller patch size in the MPEG-2 encoding. In the second test, we are interested in studying how much bandwidth is needed for a normal user query. We assume that user has a screen resolution of 800 × 600. Therefore, the request follows the same size. We know that the bandwidth requirement depends on how many patches the request intersects with. We study two cases including the best-case scenario and the worst-case scenario. The best-case scenario refers to the case that the request intersects Table 10.3. Bandwidth for a user query versus different patch sizes Patch size Worst case (kbps) Best case (kbps) 1 96 × 96 2 128 × 96 3 256 × 192 4 320 × 240 5 480 × 320 6 2742 × 909
739.7 794.3 1344.1 1476.3 1849.8 7387.7
582.5 608.1 860.2 830.4 822.1 7387.7
10.6 Conclusion and Future Work
163
with the least number of patches. The worst-case scenario is the opposite. Again, the last row of the table is the bencharking case. Table 10.3 summarizes the test results. As expected, a smaller patch size is preferred because it requires less bandwidth.
10.6 Conclusion and Future Work In this chapter, we present a patch-based panorama video encoding/decoding system that allows multiple online users to share access to pan-tilt-zoom cameras with various spatiotemporal requests. We present the evolving panorama as the data representation. We also present algorithms for efficient frame insertion operations and user query operations. We have implemented the system and conducted field tests. The experiments have shown that our system can significantly reduce the storage needs and bandwidth requirements of online users. Research on the on-demand sharing of the data is just at the beginning stage. We notice that MPEG2 protocol is not an efficient compression protocol and it is not designed specifically for robotic pan-tilt-zoom cameras. Since we know the camera motion information in the panorama construction, the compression computation should be able to use the information to reduce the computation speed and increase its efficiency. Network bandwidth is always a problem for cameras deployed in applications such as natural observation. Low power and on-demand data compression are favorable features in lots of applications. New protocols, new standards, and new algorithms are needed to address those problems.
11 Conclusions and Future Work
11.1 Contributions 11.1.1
Challenges Identified in CT Systems
A collaboratively teleoperated robotic camera system allows many users to simultaneously share control of a pan-tilt-zoom camera or a camera mounted on a human tele-actor. In CTRC, the challenges are how to design effective systems and how to compute consensus commands for the shared camera. We have studied from three aspects: systems, algorithms, and deployment issues. • Part I systems A CTRC system needs to take care the concurrent user requests and to provide a collaborative working space for all users. Unlike a traditional singleuser webcam, which often employs button-based input and directly displays the video as the output, the new CTRC interface, architecture, and system design have been presented. A collaborative working space is created in the format of either a motion panorama (Chapter 2) or a spatial dynamic voting interface (Chapter 3). Depending on the systems, time-stamped user inputs take the forms such as rectangular objects and votels to characterize the user requests. • Part II algorithms This part concerns how to coordinate the competing user quests to generate a meaningful single control stream to control the robotic camera from an algorithmic perspective. Four different variations of the algorithms are discussed in this part: 1) exact algorithms that address the need from applications which prefer accuracy to speed such as satellite imaging; 2) approximate algorithms for remote observation cameras when speed is preferred; 3) the p-frame selection algorithm that address the frame allocation problem when there is more than one camera available; and 4) voting mechanism that addresses the problem where the user intention cannot be represented by geometric inputs. • Part III deployment To deploy a CTRC system is nontrivial, panorama construction, update, calibration, and transmission of the panoramic video are important research D. Song: Sharing a Vision: Systems and Algorithms, STAR 51, pp. 165–171. c Springer-Verlag Berlin Heidelberg 2009 springerlink.com
166
11 Conclusions and Future Work
problems in deployment. Initial development regarding those problems have been presented. 11.1.2
Formulation of CTRC Problems and Metrics
To compute consensus commands, we gave formulated the problem as an optimization problem: maximize total user satisfaction levels/reward by choosing the optimal control command. The satisfaction level/reward is defined as a metric function of users’ inputs and the current control command. User inputs for the Co-Opticon project are iso-oriented and congruent planar rectangles. Output, which is used to control the shared device, is a rectangle ensuring that total satisfaction levels are maximized. User inputs for the TeleActor project are planar points. Output of the Tele-Actor project is a closed region describing the most interesting region in the voting image. These two problems for CTRC systems can be seen as instances of a class of problems described by: given n requests from a parameterized family of objects and an objective function, choose an optimal set of k representatives from a (possibly different) family of objects, where k < n. The objective function depends on definition of satisfaction/reward metric function. For the Co-Opticon system and its extension in the satellite frame selection, the metric function depends on how the camera frame resembles the user requests with respect to location, shape, and resolution. The traditional similarity metrics such as Intersection Over Union (IOU) and Symmetric Difference (SD) in pattern recognition literature only measure location difference and shape difference but can not measure resolution difference. We propose Coverage-Resolution (CRR) metric, which captures the location difference, the shape difference, and the resolution difference. The CRR metrics are a not a single metric but a parameterized family of metrics. In addition to that, the CR metrics possess a piecewise linearity property, which actually speeds up the algorithms in comparison to the SD or IOU metric. In case that the shape is coupling with the resolution, i.e. a large frame means lower resolution, the CR metric can be reduced to a simple special format: Intersection Over Maximum (IOM) metric. 11.1.3
Algorithms
We have applied knowledge of computational geometry and optimization theory to address the two instances of the CT problems: the Frame Selection problem in the Co-Opticon system and the decision/scoring problem in the Tele-Actor system. As illustrated in table 11.1, for the Frame Selection Problem in Co-Option system, since the speed is a critical issue, I developed both exact and approximation algorithms, the best of which runs in O((n + 1/ǫ3 ) log2 n) for n requests and approximation bound ǫ. For the Satellite Frame Selection problem, I developed an exact algorithm that runs in O(n3 ). For the decision/scoring problem in the Tele-Actor system, I developed Gaussian-based clustering approach and develop linear time approximation algorithms.
11.1 Contributions
167
Table 11.1. Algorithms developed for CT problems. Recall that n is number of requests. No. System Type 1 Co-Opticon Centralized 2 Co-Opticon 3 Satellite 4 Co-Opticon 5 Co-Opticon 6
11.1.4
Tele-Actor
Zoom m levels
Solution Exact
Complexity O(mn2 ) Server: O(mn) Distributed m levels Exact Client: O(n) Centralized Continuous Exact O(n3 ) Centralized Continuous Approximation O((n + 1/ǫ3 ) log2 n) Sever: O(n) Distributed Continuous Approximation Client O(1/ǫ3 ) Server: O(n) Distributed Approximation Client O(n)
System Development and Experiments
We have implemented both systems and they have been extensively field tested with students and online users. We use cutting edge technologies such as high resolution networked cameras, broadband videoconferencing, and wireless networking to enhance collaborative teleoperation and its applications in education, security, entertainment, and journalism. System Development We have used C/C++, java, PERL, javascript, SQL to code the Tele-Actor system and the Co-Opticon system. The total amount of coding is more than 200K lines. About 60% of the code is written in C/C++, 30% of the code is written in Java. The system development involves all main stream OSs including Mac, Linux, and Windows. The scope of the coding covers knowledge of Video Streaming, Operating System, Network Communications, Computer Architecture, and Software Engineering. The compiler used includes GNU C++, Microsoft Visual C++, J2SE, and Active Perl. I have also spent a lot time on improving system scalability, reliability, and security when optimizing my development. Experiments The Co-Opticon systems was pre-launched inside the Alpha Lab, UC Berkeley in fall of 2002. After 6 months of testing, it was deployed at Evans Hall in Berkeley campus in summer of 2003. The system has been running 24 hours a day and 7 days a week since then. The system has never crashed. The Tele-Actor system was launched in summer of 2001. Since then, the TeleActor visited many places including schools, semi-conductor manufacturing fabrication facilities, biology laboratories, the grand opening of a new building in Berkeley campus, San Francisco Exploratorium, Pasadena Art Center, and the Fifth Annual Webby Awards event. The primary users of the system are online users and students. The students who experienced with the system range from
168
11 Conclusions and Future Work
7th grade high school students to graduate students from Berkeley. We learned important lessons from those experiments, such as network traffic jamming, network security problems, interference of wireless communication, administrative problems, and coordination of teammates from different backgrounds. Internet Videostreaming for CTRC In both CTRC systems that we have developed, we need to deliver the representation of the remote environment in video format to online users. We have tested mainstream software packages including Quick time, RealVideo Codec, Microsoft Media Encoder, Cuseeme, and Microsoft Netmeeting. One important experience we have learned from the experiments is that the state of art videostreaming technology cannot satisfy the requirements for CTRC systems or vision-based telerobotics in general. In teleoperation, real time video is used as primary feedback information, which imposes different Quality of Service requirements in comparison to videostreaming for media contents. To deliver video over the Internet, we need to consider video compression standards and network transmission protocols. There is a tradeoff between the frame rate and the resolution for a given bandwidth. There is also a tradeoff between compression ratio and computation time for a given CPU. The computation time includes both CPU time and data-buffering time. Video compression is a very computationally intensive task. A long computation period will introduce latency and significantly impair the tele-operation performance. We can use hardware to cut down the CPU time but not the databuffering time. There are many standards and protocols available but are just variations of MJPEG, MPEG2, MPEG4, and H.26x family. We compare those standards in the following Table 11.2. From teleoperation point of view, the buffering time determines the latency and the frame rate determines the responsiveness of the system. An ideal video stream should have both high frame rate and low buffering time. But if both cannot be achieved at the same time, low latency is preferred. From Table 11.2, H.263+ outperforms other competitors. However, since H.263+ has to use random ports on UDP to transmit video signals, which is blocked by most of today’s firewalls, which affects the scope of deployment. Another interesting observation is that all of those standards try to rebuild a true and complete representation of the field of view. However, it might not be necessary for a teleoperation task. Sometimes, a high level abstraction is Table 11.2. A comparison of existing videostreaming standards for a given resolution of 320 × 240 Standards Buffering Time Framerate MJPEG Negligible Low MPEG2 Noticeable Moderate MPEG4 Long (8∼10secs) Highest H.263+ <300 msecs High
11.2 Future Work
169
sufficient. For example, when a mobile robot is avoiding an moving obstacle, all the robot needs to know is the speed and bounding box of the moving object instead of knowledge that whether this object is human or other robots. We might want to control the level of details in video perception and transmission. This actually imposes a interesting problem: we need a new videostreaming standard that serves for teleoperation.
11.2 Future Work 11.2.1
Big Picture
The research on CTRC or collaborative teleoperation in general is still at its infancy. The CTRC can be viewed as an application of collaborative decision making process for a tele-robot over the Internet. As discussed in previous chapters, we have tested voting and optimization as the collaborative decision making strategies using two kinds of shared devices. However, there are other group decision strategies available, i.e. auction-based schemes, that may worth trying. Game theory based approach can also be applied here. Some collaboration methods can be viewed as cooperative game while others can be viewed as non-cooperative game. There also may be a hybrid of both as we discussed in Chapter 7. Further research on how to select an optimal collaborative teleoperation strategy and how to classify a family of robotic device that share the same collaboration strategy is necessary. The research can not be done without extensive experimenting with different robotic devices and different group-making strategies. 11.2.2
Extensions of Frame Selection Problems
4-D 1-Frame Problem In the Chapter 5 and Chapter 4, we assume that the camera frames are isooriented rectangles. However, this may not be adequate if the camera can rotate along its optical center axis. For example, it becomes more and more common for modern imaging satellites to perform such rotations. As illustrated in figure 11.1, this introduce a 4th degree of freedom in the camera control in addition to the pan, tilt, and zoom motions. This problem has not be addressed in the thesis. It is unclear how to get the exact optimal solution at this moment. Our initial study show that we may be able to give a bounded approximation solution by modifying our algorithms in Chapter 5. 3D 1-Frame Problem with Dwelling Time and Camera Travel Time One frequently asked problem is that some requested frames may never get satisfied if the request is located in a non-popular region. We touch briefly on the problem in Chapter 2 by using memory-based approaches. However, a better approach is to modify our metric to plan the camera frame with respect to frame shape, location, and the dwelling time.
170
11 Conclusions and Future Work
Requested frames
Optimal frame Fig. 11.1. The Optimal Frame is not necessarily an iso-oriented rectangle
p−frame/p−camera Problem Another natural extension of the problem is what if we can take more frames. These frames can be either taken from the same camera from the same view point or simultaneously taken from multiple cameras. We name the former as p−frame problem and the later as p−camera problem. These problems is analogy to the P −center problem in facility location problem. The initial conjecture is that p−frame problem is NP-complete. We need to work on effective approximation algorithms and bounds. The difficulty of the p−camera problem also depends on if the cameras are homogenous or not and if the camera views overlap or not. If the p cameras have no overlap in viewable region, this problem can be reduced to p single frame selection problems. To attack p−frame and p−camera problems, we may also need to generalize our metrics. Chapter 6 has touch a special case of the p− frame problem with least overlapping condition. More general versions of the p− frame problem remain unsolved. 11.2.3
11.2.3 Another Viewpoint on Future Work
A CTRC system has both a human side and a machine side. On the human side, we need to design systems that motivate people to collaborate in order to improve group decision quality; incentive compatibility between individual participants and group performance is an important issue in system design. On the machine side, we need to develop innovative strategies that make good use of the machine to improve reward, productivity, and usage. Our Tele-Actor system addresses the human-side problem, and our Co-Opticon system addresses the machine-side problem. We are also interested in combining the two in a new CT system: how can we design a system that is incentive compatible for human operators and at the same time optimizes robot utilization? We are developing a new system, built on the Co-Opticon system, that will try to apply such strategies.
There is a rich and exciting set of problems in the CTRC field. From a school bus to a rover on Mars, we share robotic resources all the time. In robotics research we spend a great deal of effort making robotic devices function properly, but we do not pay enough attention to how to share control of those devices efficiently. In our work on CTRC systems we have felt a clear call for efficiency and for human incentive compatibility; these needs are unlikely to be unique to us, and they deserve more attention in future robotics research.
Index

p-frame algorithms 89
p-frame/p-camera Problem 170
3D 1-Frame Problem with Dwelling Time 169
4-D 1-Frame Problem 169
Agile satellite 43
Approximation bound
  Definition 75, 79
Base vertex 53
Base Vertex (BV) Algorithm 63
Base Vertex Optimality Condition (BVOC) 55
Base Vertex with Incremental Computing (BV-IC) 63
Base Vertex with Incremental Computing and Diagonal Sweeping (BV-IC-DS) 64
BnB-like Approach Algorithm 84
Calibration feature points 142
camera-person 25
Canon VCC3 Camera 15
Cinematrix 8
Co-Opticon
  Algorithms 71
  Interface 14
  Software 16
  System design 13
Collaboration Metric 34
Collaboratively Teleoperated Robotic Camera 1
Collaborative teleoperation 165
Collaborative Telerobot 5
Consensus Regions 33, 107
cooperative game 105
cooperative play models 105
Coverage-Resolution Ratio (CRR) metric 48
  Nonsmoothness 50
  Piecewise linearity 50
  Sample metrics 48
  Separability 50
Coverage discount 48
Critical z Values 58
CT 5
CTRC 1, 8
Definition of Approximation Bound 75
DHTML 29
Diagonal sweeping 65
Ensemble Consensus Region 34
Evolving Panorama 157
Exhaustive Lattice Search Algorithm 80
Exponential discount factor 49
Frame camera 46
Frame Insertion Algorithm 159
fundamental rectangles 59
game cognition 106
Image distortion 47
Intersection Over Maximum (IOM) metric 166
Intersection Over Union (IOU) 51
Intersection Topologies 58
Lattice 74
Least Overlapping Condition (LOC) 92
Local Director 25
Majority Consensus Region 108
Majority Interest Function 107
Memoryless Frame Selection Model 18
MOMR 7
MOSR 7, 23
Multi-axis satellite 41
Networked Telerobot 2
Networked Telerobots 3
Non-Overlapping Condition (NOC) 92
Non-Overlapping Condition (VNOC) 95
Patch-based Evolving Panorama Video Representation 159
Plateau vertex 53
Plateau Vertex Optimality Condition (PVOC) 55
Plateau Vertex Traversal (PVT) Algorithm 56
Projection Invariant-based Image Alignment Algorithm (PIIAA) 133
projection invariants 122
Projection Invariants for SRP 126
Push broom sensor 46
Re-projection 121
Requested viewing zone decomposition 60
Requested viewing zones 47
Resolution discount 48
Resolution Ratio with Non-Partial Coverage (RRNPC) 92
Reward Metric 48
Satellite frame definition 45
Satellite frame selection 43
similarity metrics 51
SOMR 7
SOSR 7
Space Mission problem 43
Spatial Dynamic Voting (SDV) interface 24, 25
spectrum of teleoperation 3
Spherical Re-Projection (SRP) 125
Spherical Wrapping 124
supervisory control 3
Symmetric Difference (SD) 51
Tele-Actor 24
  System architecture 25
Tele-Twister 110
Telerobot 1
Temporal Frame Selection Model 18
Unsupervised scoring metric 33
User Query Algorithm 160
votel 24, 33
Voter Interest Functions 103, 108
Whisk broom sensor 46